Updates from: 06/27/2022 01:06:11
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Relyingparty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/relyingparty.md
Previously updated : 11/09/2021 Last updated : 06/26/2022
The **SubjectNamingInfo** element contains the following attribute:
| Attribute | Required | Description |
| --------- | -------- | ----------- |
-| ClaimType | Yes | A reference to an output claim's **PartnerClaimType**. The output claims must be defined in the relying party policy **OutputClaims** collection. |
+| ClaimType | Yes | A reference to an output claim's **PartnerClaimType**. The output claims must be defined in the relying party policy **OutputClaims** collection with a **PartnerClaimType**. For example, `<OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub" />`, or `<OutputClaim ClaimTypeReferenceId="signInName" PartnerClaimType="signInName" />`. |
| Format | No | Used for SAML Relying parties to set the **NameId format** returned in the SAML Assertion. |

The following example shows how to define an OpenID Connect relying party. The subject name info is configured as the `objectId`:
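A minimal sketch of what such a relying party definition can look like (the user journey and technical profile names here are illustrative placeholders, not taken from the article):

```xml
<RelyingParty>
  <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
  <TechnicalProfile Id="PolicyProfile">
    <DisplayName>PolicyProfile</DisplayName>
    <Protocol Name="OpenIdConnect" />
    <OutputClaims>
      <!-- The objectId claim is returned as "sub"; SubjectNamingInfo references that PartnerClaimType -->
      <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub" />
      <OutputClaim ClaimTypeReferenceId="displayName" />
    </OutputClaims>
    <SubjectNamingInfo ClaimType="sub" />
  </TechnicalProfile>
</RelyingParty>
```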
active-directory Groups Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-troubleshooting.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Troubleshoot and resolve groups issues
+This article contains troubleshooting information for groups in Azure Active Directory (Azure AD), part of Microsoft Entra.
## Troubleshooting group creation issues
active-directory Licensing Directory Independence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-directory-independence.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Understand how multiple Azure Active Directory tenant organizations interact
-In Azure Active Directory (Azure AD), each Azure AD organization is fully independent: a peer that is logically independent from the other Azure AD organizations that you manage. This independence between organizations includes resource independence, administrative independence, and synchronization independence. There is no parent-child relationship between organizations.
+In Azure Active Directory (Azure AD), part of Microsoft Entra, each Azure AD organization is fully independent: a peer that is logically independent from the other Azure AD organizations that you manage. This independence between organizations includes resource independence, administrative independence, and synchronization independence. There is no parent-child relationship between organizations.
## Resource independence
active-directory Licensing Group Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-group-advanced.md
Previously updated : 09/22/2021 Last updated : 06/24/2022
# Scenarios, limitations, and known issues using groups to manage licensing in Azure Active Directory
-Use the following information and examples to gain a more advanced understanding of Azure Active Directory (Azure AD) group-based licensing.
+Use the following information and examples to gain a more advanced understanding of group-based licensing in Azure Active Directory (Azure AD), part of Microsoft Entra.
## Usage location
active-directory Licensing Groups Assign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-assign.md
Previously updated : 05/26/2022 Last updated : 06/24/2022
# Assign licenses to users by group membership in Azure Active Directory
-This article walks you through assigning product licenses to a group of users and verifying that they're licensed correctly in Azure Active Directory (Azure AD).
+This article walks you through assigning product licenses to a group of users and verifying that they're licensed correctly in Azure Active Directory (Azure AD), part of Microsoft Entra.
In this example, the Azure AD organization contains a security group called **HR Department**. This group includes all members of the human resources department (around 1,000 users). You want to assign Office 365 Enterprise E3 licenses to the entire department. The Yammer Enterprise service that's included in the product must be temporarily disabled until the department is ready to start using it. You also want to deploy Enterprise Mobility + Security licenses to the same group of users.
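As a hedged illustration of that scenario (not from the article), the Microsoft Graph `assignLicense` action on a group can apply a SKU with an individual service plan disabled; the group ID and GUIDs below are placeholders you'd look up in your own tenant:

```azurecli-interactive
# Assign Office 365 E3 to the HR Department group with the Yammer Enterprise plan disabled.
# Placeholders: look up real skuId/servicePlanId values via GET /subscribedSkus.
az rest --method post \
  --url 'https://graph.microsoft.com/v1.0/groups/<hr-department-group-id>/assignLicense' \
  --headers 'Content-Type=application/json' \
  --body '{
    "addLicenses": [
      { "skuId": "<office-365-e3-sku-id>", "disabledPlans": [ "<yammer-enterprise-service-plan-id>" ] }
    ],
    "removeLicenses": []
  }'
```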
active-directory Licensing Groups Change Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-change-licenses.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Change license assignments for a user or group in Azure Active Directory
-This article describes how to move users and groups between service license plans in Azure Active Directory (Azure AD). The goal Azure AD's approach is to ensure that there's no loss of service or data during the license change. Users should switch between services seamlessly. The license plan assignment steps in this article describe changing a user or group on Office 365 E1 to Office 365 E3, but the steps apply to all license plans. When you update license assignments for a user or group, the license assignment removals and new assignments are made simultaneously so that users do not lose access to their services during license changes or see license conflicts between plans.
+This article describes how to move users and groups between service license plans in Azure Active Directory (Azure AD), part of Microsoft Entra. The goal of Azure AD's approach is to ensure that there's no loss of service or data during the license change. Users should switch between services seamlessly. The license plan assignment steps in this article describe changing a user or group on Office 365 E1 to Office 365 E3, but the steps apply to all license plans. When you update license assignments for a user or group, the license assignment removals and new assignments are made simultaneously so that users do not lose access to their services during license changes or see license conflicts between plans.
## Before you begin
active-directory Licensing Groups Migrate Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-migrate-users.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# How to migrate users with individual licenses to groups for licensing
-You may have existing licenses deployed to users in the organizations via direct assignment; that is, using PowerShell scripts or other tools to assign individual user licenses. Before you begin using group-based licensing to manage licenses in your organization, you can use this migration plan to seamlessly replace existing solutions with group-based licensing.
+In Azure Active Directory (Azure AD), part of Microsoft Entra, you can have licenses deployed to users in your tenant organizations by direct assignment, using PowerShell scripts or other tools to assign individual user licenses. Before you begin using group-based licensing to manage licenses in your organization, you can use this migration plan to seamlessly replace existing solutions with group-based licensing.
The most important thing to keep in mind is that you should avoid a situation where migrating to group-based licensing will result in users temporarily losing their currently assigned licenses. Any process that may result in removal of licenses should be avoided to remove the risk of users losing access to services and their data.
active-directory Licensing Groups Resolve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-resolve-problems.md
Previously updated : 04/21/2021 Last updated : 06/24/2022
# Identify and resolve license assignment problems for a group in Azure Active Directory
-Group-based licensing in Azure Active Directory (Azure AD) introduces the concept of users in a licensing error state. In this article, we explain the reasons why users might end up in this state.
+Group-based licensing in Azure Active Directory (Azure AD), part of Microsoft Entra, introduces the concept of users in a licensing error state. In this article, we explain the reasons why users might end up in this state.
When you assign licenses directly to individual users, without using group-based licensing, the assignment operation might fail for reasons that are related to business logic. For example, there might be an insufficient number of licenses or a conflict between two service plans that can't be assigned at the same time. The problem is immediately reported back to you.
active-directory Licensing Ps Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-ps-examples.md
# PowerShell and Graph examples for group-based licensing in Azure AD
-Full functionality for group-based licensing is available through the [Azure portal](https://portal.azure.com), and currently there are some useful tasks that can be performed using the existing [MSOnline PowerShell
+Full functionality for group-based licensing in Azure Active Directory (Azure AD), part of Microsoft Entra, is available through the [Azure portal](https://portal.azure.com), and currently there are some useful tasks that can be performed using the existing [MSOnline PowerShell
cmdlets](/powershell/module/msonline) and Microsoft Graph. This document provides examples of what is possible.

> [!NOTE]
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
# Product names and service plan identifiers for licensing
-When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/LicensesMenuBlade/Products) or the [Microsoft 365 admin center](https://admin.microsoft.com), you see product names that look something like *Office 365 E3*. When you use PowerShell v1.0 cmdlets, the same product is identified using a specific but less friendly name: *ENTERPRISEPACK*. When using PowerShell v2.0 cmdlets or [Microsoft Graph](/graph/api/resources/subscribedsku), the same product is identified using a GUID value: *6fd2c87f-b296-42f0-b197-1e91e994b900*. The following table lists the most commonly used Microsoft online service products and provides their various ID values. These tables are for reference purposes and are accurate only as of the date when this article was last updated. Microsoft does not plan to update them for newly added services periodically.
+When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/LicensesMenuBlade/Products) or the [Microsoft 365 admin center](https://admin.microsoft.com), you see product names that look something like *Office 365 E3*. When you use PowerShell v1.0 cmdlets, the same product is identified using a specific but less friendly name: *ENTERPRISEPACK*. When using PowerShell v2.0 cmdlets or [Microsoft Graph](/graph/api/resources/subscribedsku), the same product is identified using a GUID value: *6fd2c87f-b296-42f0-b197-1e91e994b900*. The following table lists the most commonly used Microsoft online service products and provides their various ID values. These tables are for reference purposes in Azure Active Directory (Azure AD), part of Microsoft Entra, and are accurate only as of the date when this article was last updated. Microsoft doesn't plan to periodically update them for newly added services.
- **Product name**: Used in management portals
- **String ID**: Used by PowerShell v1.0 cmdlets when performing operations on licenses or by the **skuPartNumber** property of the **subscribedSku** Microsoft Graph API
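For instance, here's a minimal sketch (assuming the MSOnline module is installed) of listing the string IDs in your own tenant with the v1.0 cmdlets; the GUID values come back from the Microsoft Graph `subscribedSkus` resource instead:

```powershell
# Sign in with the MSOnline (PowerShell v1.0) module
Connect-MsolService

# AccountSkuId contains the string ID, for example contoso:ENTERPRISEPACK
Get-MsolAccountSku | Select-Object AccountSkuId, ActiveUnits, ConsumedUnits
```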
active-directory Linkedin Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/linkedin-integration.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Integrate LinkedIn account connections in Azure Active Directory
-You can allow users in your organization to access their LinkedIn connections within some Microsoft apps. No data is shared until users consent to connect their accounts. You can integrate your organization in the Azure Active Directory (Azure AD) [admin center](https://aad.portal.azure.com).
+You can allow users in your organization to access their LinkedIn connections within some Microsoft apps. No data is shared until users consent to connect their accounts. You can integrate your organization in the [admin center](https://aad.portal.azure.com) for Azure Active Directory (Azure AD), part of Microsoft Entra.
> [!IMPORTANT]
> The LinkedIn account connections setting is currently being rolled out to Azure AD organizations. When it is rolled out to your organization, it is enabled by default.
active-directory Users Revoke Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-revoke-access.md
Previously updated : 03/29/2021 Last updated : 06/24/2022
To mitigate the risks, you must understand how tokens work. There are many kinds
Access tokens and refresh tokens are frequently used with thick client applications, and also used in browser-based applications such as single page apps.
-- When users authenticate to Azure AD, authorization policies are evaluated to determine if the user can be granted access to a specific resource.
+- When users authenticate to Azure Active Directory (Azure AD), part of Microsoft Entra, authorization policies are evaluated to determine if the user can be granted access to a specific resource.
- If authorized, Azure AD issues an access token and a refresh token for the resource.
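As a hedged aside (not shown in this excerpt), the revocation the article builds toward can be sketched with the AzureAD PowerShell module; the object ID is a placeholder:

```powershell
# Assumes the AzureAD module is installed and you're signed in with sufficient privileges
Connect-AzureAD

# Invalidate the user's refresh tokens; access tokens that were already issued remain valid until they expire
Revoke-AzureADUserAllRefreshToken -ObjectId "<user-object-id>"
```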
active-directory Users Search Enhanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-search-enhanced.md
Previously updated : 06/15/2022 Last updated : 06/24/2022
# User management enhancements in Azure Active Directory
-This article describes how to use the user management enhancements in the Azure Active Directory (Azure AD) portal. The **All users** page and user profile pages have been updated to provide more information and make it easier to find users.
+This article describes how to use the user management enhancements in the admin center for Azure Active Directory (Azure AD), part of Microsoft Entra. The **All users** page and user profile pages have been updated to provide more information and make it easier to find users.
Enhancements include:
-- Infinite scroll so you no longer have to select ‘Load more’ to view more users
+- Preloaded scrolling so that you no longer have to select ‘Load more’ to view more users
- More user properties can be added as columns including city, country, employee ID, employee type, and external user state
- More user properties can be filtered on including custom security attributes, on-premises extension attributes, and manager
- More ways to customize your view, like using drag-and-drop to reorder columns
active-directory Users Sharing Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-sharing-accounts.md
Previously updated : 12/03/2020 Last updated : 06/24/2022
## Overview
-Sometimes organizations need to use a single username and password for multiple people, which typically happens in two cases:
+In Azure Active Directory (Azure AD), part of Microsoft Entra, sometimes organizations need to use a single username and password for multiple people, which often happens in the following cases:
* When accessing applications that require a unique sign in and password for each user, whether on-premises apps or consumer cloud services (for example, corporate social media accounts).
-* When creating multi-user environments. You might have a single, local account that has elevated privileges and is used to do core setup, administration, and recovery activities. For example, the local "global administrator" account for Microsoft 365 or the root account in Salesforce.
+* When creating multi-user environments. You might have a single, local account that has elevated privileges and is used to do core setup, administration, and recovery activities. For example, the local Global Administrator account for Microsoft 365 or the root account in Salesforce.
Traditionally, these accounts are shared by distributing the credentials (username and password) to the right individuals or storing them in a shared location where multiple trusted agents can access them.
active-directory Active Directory Groups Membership Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-membership-azure-portal.md
Previously updated : 10/19/2018 Last updated : 6/22/2022
# Add or remove a group from another group using Azure Active Directory
-This article helps you to add and remove a group from another group using Azure Active Directory.
+This article helps you to add and remove a group from another group using Azure Active Directory. When a group is added to another group, it creates a nested group.
>[!Note]
>If you're trying to delete the parent group, see [How to update or delete a group and its members](active-directory-groups-delete-group.md).

## Add a group to another group
-You can add an existing Security group to another existing Security group (also known as nested groups), creating a member group (subgroup) and a parent group. The member group inherits the attributes and properties of the parent group, saving you configuration time.
+You can add an existing Security group to another existing Security group (also known as nested groups), which creates a member group (subgroup) and a parent group. The member group inherits the attributes and properties of the parent group, saving you configuration time.
>[!Important]
->We don't currently support:<ul><li>Adding groups to a group synced with on-premises Active Directory.</li><li>Adding Security groups to Microsoft 365 groups.</li><li>Adding Microsoft 365 groups to Security groups or other Microsoft 365 groups.</li><li>Assigning apps to nested groups.</li><li>Applying licenses to nested groups.</li><li>Adding distribution groups in nesting scenarios.</li><li>Adding security groups as members of mail-enabled security groups</li><li> Adding groups as members of a role-assignable group.</li></ul>
+>We don't currently support:<br>
+>- Adding groups to a group synced with on-premises Active Directory.<br>
+>- Adding Security groups to Microsoft 365 groups.<br>
+>- Adding Microsoft 365 groups to Security groups or other Microsoft 365 groups.<br>
+>- Assigning apps to nested groups.<br>
+>- Applying licenses to nested groups.<br>
+>- Adding distribution groups in nesting scenarios.<br>
+>- Adding security groups as members of mail-enabled security groups.
+ ### To add a group as a member of another group
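The steps in the article use the Azure portal; as a hedged sketch (not from the article), the same nesting can be done with the AzureAD PowerShell module, assuming both groups are security groups that support nesting and the display names below are placeholders:

```powershell
# Sign in with the AzureAD module
Connect-AzureAD

# Resolve the parent and member (child) security groups by display name
$parentGroup = Get-AzureADGroup -SearchString "Parent Security Group" | Select-Object -First 1
$memberGroup = Get-AzureADGroup -SearchString "Member Security Group" | Select-Object -First 1

# Add the child group as a member of the parent group, creating a nested group
Add-AzureADGroupMember -ObjectId $parentGroup.ObjectId -RefObjectId $memberGroup.ObjectId
```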
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
na Previously updated : 03/22/2022 Last updated : 06/22/2022
This article describes how to create one or more access reviews for group member
For more information, see [License requirements](access-reviews-overview.md#license-requirements).
-If you are reviewing access to an application, then before creating the review, see the article on how to [prepare for an access review of users' access to an application](access-reviews-application-preparation.md) to ensure the application is integrated with Azure AD.
+If you're reviewing access to an application, then before creating the review, see the article on how to [prepare for an access review of users' access to an application](access-reviews-application-preparation.md) to ensure the application is integrated with Azure AD.
## Create a single-stage access review
If you are reviewing access to an application, then before creating the review,
> [!NOTE]
> If you selected **All Microsoft 365 groups with guest users**, your only option is to review **Guest users only**.
-1. Or if you are conducting group membership review, you can create access reviews for only the inactive users in the group (preview). In the *Users scope* section, check the box next to **Inactive users (on tenant level)**. If you check the box, the scope of the review will focus on inactive users only, those who have not signed in either interactively or non-interactively to the tenant. Then, specify **Days inactive** with a number of days inactive up to 730 days (two years). Users in the group inactive for the specified number of days will be the only users in the review.
+1. After you select the scope of the review, you can determine how nested group membership is reviewed (Preview). On the **Nested groups** setting, select:
+ - **Review all users assignments, including assignment from nested group membership** if you want to include indirect members in your review. Deny decisions won't be applied to indirect users.
+ - Or, **Review only direct assignments, including direct users and unexpanded nested groups** if you want to only review direct members and groups. Indirect members and groups won't be included in the review and decisions are applied to direct users and groups only. For more information about access reviews of nested group memberships, see [Review access of a nested group (preview)](manage-access-review.md#review-access-of-nested-group-membership-preview).
+1. If you scoped the review to **All users and groups** and chose **Review only direct assignments, including direct users and unexpanded nested groups**, when you select a reviewer, your selection options are limited:
+ - If you select **Managers of users** as the reviewer, a fallback reviewer must be selected to review the groups with access to the nested group.
+ - If you select **Users review their own access** as the reviewer, the nested groups won't be included in the review. To have the groups reviewed, you must select a different reviewer and not a self-review.
+1. Or if you are conducting group membership review, you can create access reviews for only the inactive users in the group (preview). In the *Users scope* section, check the box next to **Inactive users (on tenant level)**. If you check the box, the scope of the review will focus on inactive users only, those who haven't signed in either interactively or non-interactively to the tenant. Then, specify **Days inactive** with a number of days inactive up to 730 days (two years). Users in the group inactive for the specified number of days will be the only users in the review.
1. Select **Next: Reviews**.

### Next: Reviews
-1. You can create a single-stage or multi-stage review (preview). For a single stage review continue here. To create a multi-stage access review (preview), follow the steps in [Create a multi-stage access review (preview)](#create-a-multi-stage-access-review-preview)
+1. You can create a single-stage or multi-stage review (preview). For a single stage review, continue here. To create a multi-stage access review (preview), follow the steps in [Create a multi-stage access review (preview)](#create-a-multi-stage-access-review-preview).
1. In the **Specify reviewers** section, in the **Select reviewers** box, select either one or more people to make decisions in the access reviews. You can choose from:
A multi-stage review allows the administrator to define two or three sets of rev
> [!WARNING]
> Data of users included in multi-stage access reviews are a part of the audit record at the start of the review. Administrators may delete the data at any time by deleting the multi-stage access review series. For general information about GDPR and protecting user data, see the [GDPR section of the Microsoft Trust Center](https://www.microsoft.com/trust-center/privacy/gdpr-overview) and the [GDPR section of the Service Trust portal](https://servicetrust.microsoft.com/ViewPage/GDPRGetStarted).
-1. After you have selected the resource and scope of your review, move on to the **Reviews** tab.
+1. After you've selected the resource and scope of your review, move on to the **Reviews** tab.
-1. Click the checkbox next to **(Preview) Multi-stage review**.
+1. Select the checkbox next to **(Preview) Multi-stage review**.
1. Under **First stage review**, select the reviewers from the dropdown menu next to **Select reviewers**.
A multi-stage review allows the administrator to define two or three sets of rev
1. Add the duration for the second stage.
-1. By default, you will see two stages when you create a multi-stage review. However, you can add up to three stages. If you want to add a third stage, click **+ Add a stage** and complete the required fields.
+1. By default, you'll see two stages when you create a multi-stage review. However, you can add up to three stages. If you want to add a third stage, select **+ Add a stage** and complete the required fields.
-1. You can decide to allow 2nd and 3rd stage reviewers to the see decisions made in the previous stage(s).If you want to allow them to see the decisions made prior, click the box next to **Show previous stage(s) decisions to later stage reviewers** under **Reveal review results**. Leave the box unchecked to disable this setting if youΓÇÖd like your reviewers to review independently.
+1. You can decide to allow 2nd and 3rd stage reviewers to see the decisions made in the previous stage(s). If you want to allow them to see the decisions made prior, select the box next to **Show previous stage(s) decisions to later stage reviewers** under **Reveal review results**. Leave the box unchecked to disable this setting if you’d like your reviewers to review independently.
![Screenshot that shows duration and show previous stages setting enabled for multi-stage review.](./media/create-access-review/reveal-multi-stage-results-and-duration.png)

1. The duration of each recurrence will be set to the sum of the duration day(s) you specified in each stage.
-1. Specify the **Review recurrence**, the **Start date**, and **End date** for the review. The recurrence type must be at least as long as the total duration of the recurrence (i.e., the max duration for a weekly review recurrence is 7 days).
+1. Specify the **Review recurrence**, the **Start date**, and **End date** for the review. The recurrence type must be at least as long as the total duration of the recurrence (for example, the max duration for a weekly review recurrence is seven days).
1. To specify which reviewees will continue from stage to stage, select one or multiple of the following options next to **Specify reviewees to go to next stage** : ![Screenshot that shows specify reviewees setting and options for multi-stage review.](./media/create-access-review/next-stage-reviewees-setting.png)
Use the following instructions to create an access review on a team with shared
1. Select **+ New access review**.
-1. Select **Teams + Groups** and then click **Select teams + groups** to set the **Review scope**. B2B direct connect users and teams are not included in reviews of **All Microsoft 365 groups with guest users**.
+1. Select **Teams + Groups** and then click **Select teams + groups** to set the **Review scope**. B2B direct connect users and teams aren't included in reviews of **All Microsoft 365 groups with guest users**.
1. Select a Team that has shared channels shared with 1 or more B2B direct connect users or Teams.
Use the following instructions to create an access review on a team with shared
> - If you set **Select reviewers** to **Users review their own access** or **Managers of users**, B2B direct connect users and Teams won't be able to review their own access in your tenant. The owner of the Team under review will get an email that asks the owner to review the B2B direct connect user and Teams.
> - If you select **Managers of users**, a selected fallback reviewer will review any user without a manager in the home tenant. This includes B2B direct connect users and Teams without a manager.
-1. Go on to the **Settings** tab and configure additional settings. Then go to the **Review and Create** tab to start your access review. For more detailed information about creating a review and configuration settings, see our [Create a single-stage access review](#create-a-single-stage-access-review).
+1. Go on to the **Settings** tab and configure extra settings. Then go to the **Review and Create** tab to start your access review. For more detailed information about creating a review and configuration settings, see our [Create a single-stage access review](#create-a-single-stage-access-review).
## Allow group owners to create and manage access reviews of their groups (preview)
active-directory Manage Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-access-review.md
na Previously updated : 08/20/2021 Last updated : 04/29/2022
When reviewing guest user access to Microsoft 365 groups, you can either create
You can then decide whether to ask each guest to review their own access or to ask one or more users to review every guest's access. These scenarios are covered in the following sections.+
+### Review access of nested group membership (Preview)
+For some scenarios, access to resources such as security groups, enterprise applications, and privileged roles can be granted through a security group assigned access to the resource. To learn more, go to [Add or remove a group from another group](../fundamentals/active-directory-groups-membership-azure-portal.md).
+
+Administrators can perform an access review of members of nested groups. When the administrator creates the review, they can choose whether their reviewers can make decisions on indirect members or only on direct members. An example of an indirect user is a user that has access to a security group that has access to another security group, application or role.
+
+![Diagram showing example of nested group membership.](media/manage-access-review/nested-group-membership-access-review.png)
+
+If the administrator decides to only allow reviews on direct members, reviewers can approve and deny access for nested groups or role-assignable groups as an entity. If denied, the nested group or role-assignable group will lose access to the resource.
+
+1. To create an access review of a nested group, go to [Create an access review of groups or applications](create-access-review.md#scope) and follow the guidance on nested groups.
+
+2. To review access of a nested group, go to [Review access for nested group memberships (preview)](perform-access-review.md#review-access-for-nested-group-memberships-preview).
### Ask guests to review their own membership in a group
active-directory Perform Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/perform-access-review.md
na Previously updated : 2/18/2022 Last updated : 6/22/2022
# Review access to groups and applications in Azure AD access reviews
-Azure Active Directory (Azure AD) simplifies how enterprises manage access to groups and applications in Azure AD and other Microsoft Online Services with a feature called Azure AD access reviews. This article will go over how a designated reviewer performs an access review for members of a group or users with access to an application. If you would like to review access to an access package read [Review access of an access package in Azure AD entitlement management](entitlement-management-access-reviews-review-access.md)
+Azure Active Directory (Azure AD) simplifies how enterprises manage access to groups and applications in Azure AD and other Microsoft Online Services with a feature called Azure AD access reviews. This article will go over how a designated reviewer performs an access review for members of a group or users with access to an application. If you would like to review access to an access package, read [Review access of an access package in Azure AD entitlement management](entitlement-management-access-reviews-review-access.md).
## Perform access review using My Access

You can review access to groups and applications via My Access, an end-user friendly portal for granting, approving, and reviewing access needs.
You can review access to groups and applications via My Access, an end-user frie
![Example email from Microsoft to review access to a group](./media/perform-access-review/access-review-email-preview.png)
-1. Click the **Start review** link to open the access review.git pu
+1. Select the **Start review** link to open the access review.
### Navigate directly to My Access
You can also view your pending access reviews by using your browser to open My A
## Review access for one or more users
-After you open My Access under Groups and Apps you can see:
+After you open My Access under Groups and Apps, you can see:
- **Name** The name of the access review.
-- **Due** The due date for the review. After this date denied users could be removed from the group or app being reviewed.
+- **Due** The due date for the review. After this date, denied users could be removed from the group or app being reviewed.
- **Resource** The name of the resource under review.
- **Progress** The number of users reviewed over the total number of users part of this access review.
-Click on the name of an access review to get started.
+Select the name of an access review to get started.
![Pending access reviews list for apps and groups](./media/perform-access-review/access-reviews-list-preview.png)
-Once that it opens, you will see the list of users in scope for the access review.
+Once it opens, you'll see the list of users in scope for the access review.
> [!NOTE]
> If the request is to review your own access, the page will look different. For more information, see [Review access for yourself to groups or applications](review-your-access.md).
There are two ways that you can approve or deny access:
1. Select one or more users by clicking the circle next to their names.
1. Select **Approve** or **Deny** on the bar above.
- - If you are unsure if a user should continue to have access or not, you can click **Don't know**. The user gets to keep their access and your choice is recorded in the audit logs. It is important that you keep in mind that any information you provide will be available to other reviewers. They can read your comments and take them into account when they review the request.
+ - If you're unsure if a user should continue to have access or not, you can select **Don't know**. The user gets to keep their access and your choice is recorded in the audit logs. It's important that you keep in mind that any information you provide will be available to other reviewers. They can read your comments and take them into account when they review the request.
![Open access review listing the users who need review](./media/perform-access-review/user-list-preview.png)
-1. The administrator of the access review may require that you supply a reason in the **Reason** box for your decision. Even when a reason is not required. You can still provide a reason for your decision and the information that you include will be available to other approvers for review.
+1. The administrator of the access review may require that you supply a reason in the **Reason** box for your decision. Even when a reason isn't required, you can still provide a reason for your decision, and the information that you include will be available to other approvers for review.
-1. Click **Submit**.
+1. Select **Submit**.
- You can change your response at any time until the access review has ended. If you want to change your response, select the row and update the response. For example, you can approve a previously denied user or deny a previously approved user.

> [!IMPORTANT]
There are two ways that you can approve or deny access:
To make access reviews easier and faster for you, we also provide recommendations that you can accept with a single click. The recommendations are generated based on the user's sign-in activity.
-1. Select one or more users and then Click **Accept recommendations**.
+1. Select one or more users and then select **Accept recommendations**.
![Open access review listing showing the Accept recommendations button](./media/perform-access-review/accept-recommendations-preview.png)

1. Or to accept recommendations for all unreviewed users, make sure that no users are selected and click on the **Accept recommendations** button on the top bar.
-1. Click **Submit** to accept the recommendations.
+1. Select **Submit** to accept the recommendations.
> [!NOTE]
To make access reviews easier and faster for you, we also provide recommendation
If multi-stage access reviews have been enabled by the administrator, there will be 2 or 3 total stages of review. Each stage of review will have a specified reviewer.
-You will review access either manually or accept the recommendations based on sign-in activity for the stage you are assigned as the reviewer.
+You'll review access either manually or accept the recommendations based on sign-in activity for the stage you're assigned as the reviewer.
-If you are the 2nd stage or 3rd stage reviewer, you will also see the decisions made by the reviewers in the prior stage(s) if the administrator enabled this setting when creating the access review. The decision made by a 2nd or 3rd stage reviewer will overwrite the previous stage. So, the decision the 2nd stage reviewer makes will overwrite the first stage, and the 3rd stage reviewer's decision will overwrite the second stage.
+If you're the 2nd stage or 3rd stage reviewer, you'll also see the decisions made by the reviewers in the prior stage(s) if the administrator enabled this setting when creating the access review. The decision made by a 2nd or 3rd stage reviewer will overwrite the previous stage. So, the decision the 2nd stage reviewer makes will overwrite the first stage, and the 3rd stage reviewer's decision will overwrite the second stage.
![Select user to show the multi-stage access review results](./media/perform-access-review/multi-stage-access-review.png)
Approve or deny access as outlined in [Review access for one or more users](#rev
To review access of B2B direct connect users, use the following instructions:
-1. As the reviewer, you should receive an email that requests you to review access for the team or group. Click the link in the email, or navigate directly to https://myaccess.microsoft.com/.
+1. As the reviewer, you should receive an email that requests you to review access for the team or group. Select the link in the email, or navigate directly to https://myaccess.microsoft.com/.
1. Follow the instructions in [Review access for one or more users](#review-access-for-one-or-more-users) to make decisions to approve or deny the users access to the Teams.

> [!NOTE]
> Unlike internal users and B2B Collaboration users, B2B direct connect users and Teams **don't** have recommendations based on last sign-in activity to make decisions when you perform the review.
-If a Team you review has shared channels, all B2B direct connect users and teams that access those shared channels are part of the review. This includes B2B collaboration users and internal users. When a B2B direct connect user or team is denied access in an access review, the user will lose access to every shared channel in the Team. To learn more about B2B direct connect users, read [B2B direct connect](../external-identities/b2b-direct-connect-overview.md).
+If a Team you review has shared channels, all B2B direct connect users and teams that access those shared channels are part of the review. B2B collaboration users and internal users are included in this review. When a B2B direct connect user or team is denied access in an access review, the user will lose access to every shared channel in the Team. To learn more about B2B direct connect users, read [B2B direct connect](../external-identities/b2b-direct-connect-overview.md).
+
+## Review access for nested group memberships (preview)
+To review access of nested group members:
+
+1. Follow the link in the notification email or go directly to https://myaccess.microsoft.com/ to complete the review.
+
+1. If the review creator chooses to include groups in the review, you’ll see them listed in the review as either a user or a group within the resource.
+
+Resources include:
+- security groups
+- applications
+- Azure roles
+- Azure AD roles
+
+> [!Note]
+> M365 groups and access packages don't support nested groups, so you can't review access for these resource types in a nested group scenario.
## If no action is taken on access review
-When the access review is setup, the administrator has the option to use advanced settings to determine what will happen in the event a reviewer doesn't respond to an access review request.
+When the access review is set up, the administrator can use advanced settings to determine what will happen in the event a reviewer doesn't respond to an access review request.
-The administrator can set up the review so that if reviewers do not respond at the end of the review period, all unreviewed users can have an automatic decision made on their access. This includes the loss of access to the group or application under review.
+The administrator can set up the review so that if reviewers don't respond at the end of the review period, all unreviewed users can have an automatic decision made on their access. This decision can include the loss of access to the group or application under review.
## Next steps
active-directory How To View Managed Identity Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-activity.md
description: Step-by-step instructions for viewing the activities made to manage
documentationcenter: '' -+ editor: ''- na Previously updated : 01/11/2022 Last updated : 06/24/2022
active-directory How To View Managed Identity Service Principal Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-cli.md
description: Step-by-step instructions for viewing the service principal of a ma
documentationcenter: '' -+ editor: ''
az ad sp list --display-name <Azure resource name>
## Next steps
-For more information on managing Azure AD service principals using Azure CLI, see [az ad sp](/cli/azure/ad/sp).
+For more information on managing Azure AD service principals, see [Azure CLI ad sp](/cli/azure/ad/sp).
active-directory How To View Managed Identity Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-portal.md
na Previously updated : 02/23/2022 Last updated : 06/24/2022
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
description: An overview of the managed identities for Azure resources.
documentationcenter: -+ editor: ms.assetid: 0232041d-b8f5-4bd2-8d11-27999ad69370
ms.devlang: Previously updated : 01/25/2022 Last updated : 06/24/2022
# What are managed identities for Azure resources?
-A common challenge for developers is the management of secrets, credentials, certificates, keys etc used to secure communication between services. Managed identities eliminate the need for developers to manage these credentials.
+A common challenge for developers is the management of secrets, credentials, certificates, and keys used to secure communication between services. Managed identities eliminate the need for developers to manage these credentials.
While developers can securely store the secrets in [Azure Key Vault](../../key-vault/general/overview.md), services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Azure Active Directory for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications can use managed identities to obtain Azure AD tokens without having to manage any credentials.
The following video shows how you can use managed identities:</br>
> [!VIDEO https://docs.microsoft.com/Shows/On-NET/Using-Azure-Managed-identities/player?format=ny]

Here are some of the benefits of using managed identities:

- You don't need to manage credentials. Credentials aren’t even accessible to you.
- You can use managed identities to authenticate to any resource that supports [Azure AD authentication](../authentication/overview-authentication.md), including your own applications.
-- Managed identities can be used without any additional cost.
+- Managed identities can be used at no extra cost.
> [!NOTE]
> Managed identities for Azure resources is the new name for the service formerly known as Managed Service Identity (MSI).
The following table shows the differences between the two types of managed ident
## How can I use managed identities for Azure resources?
-For using Managed identities, you have should do the following:
+You can use managed identities by following the steps below:
+ 1. Create a managed identity in Azure. You can choose between system-assigned managed identity or user-assigned managed identity.
-2. In case of user-assigned managed identity, assign the managed identity to the "source" Azure Resource, such as an Azure Logic App or an Azure Web App.
+2. When working with a user-assigned managed identity, assign the managed identity to the "source" Azure Resource, such as an Azure Logic App or an Azure Web App.
3. Authorize the managed identity to have access to the "target" service.
-4. Use the managed identity to perform access. For this, you can use the Azure SDK with the Azure.Identity library. Some "source" resources offer connectors that know how to use Managed identities for the connections. In that case you simply use the identity as a feature of that "source" resource.
+4. Use the managed identity to access a resource. In this step, you can use the Azure SDK with the Azure.Identity library. Some "source" resources offer connectors that know how to use Managed identities for the connections. In that case, you use the identity as a feature of that "source" resource.
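For example, here's a minimal C# sketch of step 4 (an assumption for illustration, not from the article) that uses `DefaultAzureCredential` from the Azure.Identity library to read a secret from a Key Vault the managed identity has been granted access to; the vault name and secret name are placeholders:

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// DefaultAzureCredential picks up the managed identity when the code runs on the Azure resource
var credential = new DefaultAzureCredential();
var client = new SecretClient(new Uri("https://<your-key-vault-name>.vault.azure.net/"), credential);

// Read a secret without any stored credentials
KeyVaultSecret secret = client.GetSecret("<your-secret-name>");
Console.WriteLine($"Retrieved secret '{secret.Name}'.");
```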
## What Azure services support the feature?<a name="which-azure-services-support-managed-identity"></a>
active-directory Qs Configure Powershell Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vmss.md
description: Step-by-step instructions for configuring a system and user-assigne
documentationcenter: -+ editor: na Previously updated : 01/11/2022 Last updated : 06/24/2022
If your virtual machine scale set has multiple user-assigned managed identities,
```azurepowershell-interactive
Update-AzVmss -ResourceGroupName myResourceGroup -Name myVmss -IdentityType UserAssigned -IdentityID "<USER ASSIGNED IDENTITY NAME>"
```
-If your virtual machine scale set does not have a system-assigned managed identity and you want to remove all user-assigned managed identities from it, use the following command:
+If your virtual machine scale set doesn't have a system-assigned managed identity and you want to remove all user-assigned managed identities from it, use the following command:
```azurepowershell-interactive
Update-AzVmss -ResourceGroupName myResourceGroup -Name myVmss -IdentityType None
```
active-directory Tutorial Linux Vm Access Storage Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-sas.md
description: Tutorial showing how to use a Linux VM system-assigned managed iden
documentationcenter: '' -+ na Previously updated : 02/17/2022 Last updated : 06/24/2022
Response:
## Next steps
-In this tutorial, you learned how to use a Linux VM system-assigned managed identity to access Azure Storage using a SAS credential. To learn more about Azure Storage SAS see:
+In this tutorial, you learned how to use a Linux VM system-assigned managed identity to access Azure Storage using a SAS credential. To learn more about Azure Storage SAS, see:
> [!div class="nextstepaction"]
> [Using shared access signatures (SAS)](../../storage/common/storage-sas-overview.md)
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
Title: Use managed identities from a virtual machine to access Cosmos DB description: Learn how to use managed identities with Windows VMs using the Azure portal, CLI, PowerShell, Azure Resource Manager template -+ Previously updated : 01/11/2022 Last updated : 06/24/2022 ms.tool: azure-cli, azure-powershell
In this article, we set up a virtual machine to use managed identities to connec
## Create a resource group
-Create a resource group called **mi-test**. We will use this resource group for all resources used in this tutorial.
+Create a resource group called **mi-test**. We'll use this resource group for all resources used in this tutorial.
- [Create a resource group using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups)
- [Create a resource group using the CLI](../../azure-resource-manager/management/manage-resource-groups-cli.md#create-resource-groups)
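For reference, a minimal CLI sketch of this step (the location is an assumption, not specified in the excerpt):

```azurecli-interactive
# Create the resource group used throughout the tutorial
az group create --name mi-test --location eastus
```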
New-AzVm `
# [Azure CLI](#tab/azure-cli)
-Create a VM using [az vm create](/cli/azure/vm/#az-vm-create). The following example creates a VM named *myVM* with a system-assigned managed identity, as requested by the `--assign-identity` parameter. The `--admin-username` and `--admin-password` parameters specify the administrative user name and password account for virtual machine sign-in. Update these values as appropriate for your environment:
+Create a VM using [Azure CLI vm create command](/cli/azure/vm/#az-vm-create). The following example creates a VM named *myVM* with a system-assigned managed identity, as requested by the `--assign-identity` parameter. The `--admin-username` and `--admin-password` parameters specify the administrative user name and password account for virtual machine sign-in. Update these values as appropriate for your environment:
```azurecli-interactive
az vm create --resource-group myResourceGroup --name myVM --image win2016datacenter --generate-ssh-keys --assign-identity --admin-username azureuser --admin-password myPassword12
```
The steps below show you how to create a virtual machine with a user-assigned ma
# [Portal](#tab/azure-portal)
-Today, the Azure portal does not support assigning a user-assigned managed identity during the creation of a VM. You should create a virtual machine and then assign a user assigned managed identity to it.
+Today, the Azure portal doesn't support assigning a user-assigned managed identity during the creation of a VM. You should create a virtual machine and then assign a user-assigned managed identity to it.
[Configure managed identities for Azure resources on a VM using the Azure portal](qs-configure-portal-windows-vm.md#user-assigned-managed-identity)
Under the resources element, add the following entry to assign a user-assigned m
## Create a Cosmos DB account
-Now that we have a VM with either a user-assigned managed identity or a system-assigned managed identity we need a Cosmos DB account available where you have administrative rights. If you need to create a Cosmos DB account for this tutorial the [Cosmos DB quickstart](../..//cosmos-db/sql/create-cosmosdb-resources-portal.md) provides detailed steps on how to do that.
+Now that we have a VM with either a user-assigned managed identity or a system-assigned managed identity, we need a Cosmos DB account available where you have administrative rights. If you need to create a Cosmos DB account for this tutorial, the [Cosmos DB quickstart](../..//cosmos-db/sql/create-cosmosdb-resources-portal.md) provides detailed steps on how to do that.
>[!NOTE]
> Managed identities may be used to access any Azure resource that supports Azure Active Directory authentication. This tutorial assumes that your Cosmos DB account will be configured as shown below.
New-AzCosmosDBSqlRoleAssignment -AccountName $accountName `
When the role assignment step completes, you should see results similar to the ones shown below.

# [Azure CLI](#tab/azure-cli)
Getting access to Cosmos using managed identities may be achieved using the Azur
The ManagedIdentityCredential class attempts to authenticate using a managed identity assigned to the deployment environment. The [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme) class goes through different authentication options in order. The second authentication option that DefaultAzureCredential attempts is Managed identities.
-In the example shown below you create a database, a container, an item in the container, and read back the newly created item using the virtual machine's system assigned managed identity. If you want to use a user-assigned managed identity, you need to specify the user-assigned managed identity by specifying the managed identity's client ID.
+In the example shown below, you create a database, a container, an item in the container, and read back the newly created item using the virtual machine's system assigned managed identity. If you want to use a user-assigned managed identity, you need to specify the user-assigned managed identity by specifying the managed identity's client ID.
```csharp
string userAssignedClientId = "<your managed identity client Id>";
```
active-directory Pim Create Azure Ad Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md
Previously updated : 10/07/2021 Last updated : 6/2/2022
The need for access to privileged Azure resource and Azure AD roles by employees
## Prerequisites

To create access reviews for Azure resources, you must be assigned to the [Owner](../../role-based-access-control/built-in-roles.md#owner) or the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role for the Azure resources. To create access reviews for Azure AD roles, you must be assigned to the [Global Administrator](../roles/permissions-reference.md#global-administrator) or the [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
The need for access to privileged Azure resource and Azure AD roles by employees
3. For **Azure AD roles**, select **Azure AD roles** under **Privileged Identity Management**. For **Azure resources**, select **Azure resources** under **Privileged Identity Management**.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png" alt-text="Select Identity Governance in Azure Portal screenshot." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png" alt-text="Screenshot of select Identity Governance button in Azure portal." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png":::
4. For **Azure AD roles**, select **Azure AD roles** again under **Manage**. For **Azure resources**, select the subscription you want to manage.

5. Under Manage, select **Access reviews**, and then select **New** to create a new access review.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/access-reviews.png" alt-text="Azure AD roles - Access reviews list showing the status of all reviews screenshot.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/access-reviews.png" alt-text="Screenshot of access reviews list showing the status of all reviews.":::
6. Name the access review. Optionally, give the review a description. The name and description are shown to the reviewers.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/name-description.png" alt-text="Create an access review - Review name and description screenshot.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/name-description.png" alt-text="Screenshot of review name and description.":::
7. Set the **Start date**. By default, an access review occurs once, starts the same time it's created, and it ends in one month. You can change the start and end dates to have an access review start in the future and last however many days you want.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/start-end-dates.png" alt-text="Start date, frequency, duration, end, number of times, and end date screenshot.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/start-end-dates.png" alt-text="Screenshot of Start date, frequency, duration, end, number of times, and end date fields.":::
8. To make the access review recurring, change the **Frequency** setting from **One time** to **Weekly**, **Monthly**, **Quarterly**, **Annually**, or **Semi-annually**. Use the **Duration** slider or text box to define how many days each review of the recurring series will be open for input from reviewers. For example, the maximum duration that you can set for a monthly review is 27 days, to avoid overlapping reviews.
The need for access to privileged Azure resource and Azure AD roles by employees
10. In the **Users Scope** section, select the scope of the review. For **Azure AD roles**, the first scope option is Users and Groups. Directly assigned users and [role-assignable groups](../roles/groups-concept.md) will be included in this selection. For **Azure resource roles**, the first scope will be Users. Groups assigned to Azure resource roles are expanded to display transitive user assignments in the review with this selection. You may also select **Service Principals** to review the machine accounts with direct access to either the Azure resource or Azure AD role.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/users.png" alt-text="Users scope to review role membership of screenshot.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/users.png" alt-text="Screenshot of Users scope to review role membership section.":::
11. Or, you can create access reviews only for inactive users (preview). In the *Users scope* section, set the **Inactive users (on tenant level) only** toggle to **true**. If the toggle is set to *true*, the scope of the review will focus on inactive users only. Then, specify **Days inactive** with a number of days inactive up to 730 days (two years). Users inactive for the specified number of days will be the only users in the review.
The need for access to privileged Azure resource and Azure AD roles by employees
> [!NOTE]
> Selecting more than one role will create multiple access reviews. For example, selecting five roles will create five separate access reviews.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/review-role-membership.png" alt-text="Review role memberships screenshot.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/review-role-membership.png" alt-text="Screenshot of review role memberships option.":::
13. In **assignment type**, scope the review by how the principal was assigned to the role. Choose **eligible assignments only** to review eligible assignments (regardless of activation status when the review is created) or **active assignments only** to review active assignments. Choose **all active and eligible assignments** to review all assignments regardless of type.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/assignment-type-select.png" alt-text="Reviewers list of assignment types screenshot.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/assignment-type-select.png" alt-text="Screenshot of reviewers list of assignment types.":::
14. In the **Reviewers** section, select one or more people to review all the users. Or you can select to have the members review their own access.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/reviewers.png" alt-text="Reviewers list of selected users or members (self)":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/reviewers.png" alt-text="Screenshot of reviewers list of selected users or members (self) button.":::
- **Selected users** - Use this option to designate a specific user to complete the review. This option is available regardless of the scope of the review, and the selected reviewers can review users, groups and service principals.
- - **Members (self)** - Use this option to have the users review their own role assignments. This option is only available if the review is scoped to **Users and Groups** or **Users**. For **Azure AD roles**, role-assignable groups will not be a part of the review when this option is selected.
- - **Manager** ΓÇô Use this option to have the userΓÇÖs manager review their role assignment. This option is only available if the review is scoped to **Users and Groups** or **Users**. Upon selecting Manager, you will also have the option to specify a fallback reviewer. Fallback reviewers are asked to review a user when the user has no manager specified in the directory. For **Azure AD roles**, role-assignable groups will be reviewed by the fallback reviewer if one is selected.
+ - **Members (self)** - Use this option to have the users review their own role assignments. This option is only available if the review is scoped to **Users and Groups** or **Users**. For **Azure AD roles**, role-assignable groups won't be a part of the review when this option is selected.
+ - **Manager** - Use this option to have the user's manager review their role assignment. This option is only available if the review is scoped to **Users and Groups** or **Users**. Upon selecting Manager, you can also specify a fallback reviewer. Fallback reviewers are asked to review a user when the user has no manager specified in the directory. For **Azure AD roles**, role-assignable groups will be reviewed by the fallback reviewer if one is selected.
### Upon completion settings

1. To specify what happens after a review completes, expand the **Upon completion settings** section.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/upon-completion-settings.png" alt-text="Upon completion settings to auto apply and should review not respond screenshot.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/upon-completion-settings.png" alt-text="Screenshot of Upon completion settings section to auto apply and should reviewer not respond.":::
2. If you want to automatically remove access for users that were denied, set **Auto apply results to resource** to **Enable**. If you want to manually apply the results when the review completes, set the switch to **Disable**.
-3. Use the **If reviewer don't respond** list to specify what happens for users that are not reviewed by the reviewer within the review period. This setting does not impact users who were reviewed by the reviewers.
+3. Use the **If reviewer don't respond** list to specify what happens for users that aren't reviewed by the reviewer within the review period. This setting doesn't impact users who were reviewed by the reviewers.
- **No change** - Leave user's access unchanged
- **Remove access** - Remove user's access
- **Approve access** - Approve user's access
- **Take recommendations** - Take the system's recommendation on denying or approving the user's continued access
-4. Use the **Action to apply on denied guest users** list to specify what happens for guest users that are denied. This setting is not editable for Azure AD and Azure resource role reviews at this time; guest users, like all users, will always lose access to the resource if denied.
+4. Use the **Action to apply on denied guest users** list to specify what happens for guest users that are denied. This setting isn't editable for Azure AD and Azure resource role reviews at this time; guest users, like all users, will always lose access to the resource if denied.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/action-to-apply-on-denied-guest-users.png" alt-text="Upon completion settings - Action to apply on denied guest users screenshot.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/action-to-apply-on-denied-guest-users.png" alt-text="Screenshot of Action to apply on denied guest users selected.":::
-5. You can send notifications to additional users or groups to receive review completion updates. This feature allows for stakeholders other than the review creator to be updated on the progress of the review. To use this feature, select **Select User(s) or Group(s)** and add an additional user or group upon you want to receive the status of completion.
+5. You can send notifications to other users or groups to receive review completion updates. This feature allows stakeholders other than the review creator to be updated on the progress of the review. To use this feature, select **Select User(s) or Group(s)** and add the users or groups that you want to receive the completion status.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/upon-completion-settings-additional-receivers.png" alt-text="Upon completion settings - Add additional users to receive notifications screenshot.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/upon-completion-settings-additional-receivers.png" alt-text="Screenshot of Add additional users to receive notifications selected.":::
### Advanced settings
-1. To specify additional settings, expand the **Advanced settings** section.
+1. To specify extra settings, expand the **Advanced settings** section.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/advanced-settings.png" alt-text="Advanced settings for show recommendations, require reason on approval, mail notifications, and reminders screenshot.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/advanced-settings.png" alt-text="Screenshot of Advanced settings for show recommendations, require reason on approval, mail notifications, and reminders option.":::
1. Set **Show recommendations** to **Enable** to show the reviewers the system recommendations based on the user's access information. Recommendations are based on a 30-day interval period: users who have signed in during the past 30 days are recommended access, while users who haven't are recommended denial of access. These sign-ins are irrespective of whether they were interactive. The last sign-in of the user is also displayed along with the recommendation.
The need for access to privileged Azure resource and Azure AD roles by employees
1. Set **Mail notifications** to **Enable** to have Azure AD send email notifications to reviewers when an access review starts, and to administrators when a review completes.
-1. Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to reviewers who have not completed their review.
-1. The content of the email sent to reviewers is auto-generated based on the review details, such as review name, resource name, due date, etc. If you need a way to communicate additional information such as additional instructions or contact information, you can specify these details in the **Additional content for reviewer email** which will be included in the invitation and reminder emails sent to assigned reviewers. The highlighted section below is where this information will be displayed.
+1. Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to reviewers who haven't completed their review.
+1. The content of the email sent to reviewers is auto-generated based on the review details, such as review name, resource name, and due date. If you need a way to communicate additional information, such as instructions or contact information, you can specify these details in the **Additional content for reviewer email** setting, which is included in the invitation and reminder emails sent to assigned reviewers. The highlighted section below is where this information is displayed.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/email-info.png" alt-text="Content of the email sent to reviewers with highlights":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/email-info.png" alt-text="Screenshot of the content of the email sent to reviewers with highlights.":::
## Manage the access review

You can track the progress as the reviewers complete their reviews on the **Overview** page of the access review. No access rights are changed in the directory until the review is completed. Below is a screenshot showing the overview page for **Azure resources** and **Azure AD roles** access reviews. If this is a one-time review, then after the access review period is over or the administrator stops the access review, follow the steps in [Complete an access review of Azure resource and Azure AD roles](pim-complete-azure-ad-roles-and-resource-roles-review.md) to see and apply the results.
-To manage a series of access reviews, navigate to the access review, and you will find upcoming occurrences in Scheduled reviews, and edit the end date or add/remove reviewers accordingly.
+To manage a series of access reviews, navigate to the access review, and you'll find upcoming occurrences in Scheduled reviews, and edit the end date or add/remove reviewers accordingly.
Based on your selections in **Upon completion settings**, auto-apply will be executed after the review's end date or when you manually stop the review. The status of the review will change from **Completed** through intermediate states such as **Applying** and finally to the state **Applied**. You should expect to see denied users, if any, being removed from roles in a few minutes.

## Impact of groups assigned to Azure AD roles and Azure resource roles in access reviews
-ΓÇó For **Azure AD roles**, role-assignable groups can be assigned to the role using [role-assignable groups](../roles/groups-concept.md). When a review is created on an Azure AD role with role-assignable groups assigned, the group name shows up in the review without expanding the group membership. The reviewer can approve or deny access of the entire group to the role. Denied groups will lose their assignment to the role when review results are applied.
+- For **Azure AD roles**, role-assignable groups can be assigned to the role using [role-assignable groups](../roles/groups-concept.md). When a review is created on an Azure AD role with role-assignable groups assigned, by default, the group name shows up in the review without expanding the group membership. The reviewer can approve or deny access of the entire group to the role. Denied groups will lose their assignment to the role when review results are applied.
-ΓÇó For **Azure resource roles**, any security group can be assigned to the role. When a review is created on an Azure resource role with a security group assigned, the users assigned to that security group will be fully expanded and shown to the reviewer of the role. When a reviewer denies a user that was assigned to the role via the security group, the user will not be removed from the group, and therefore the apply of the deny result will be unsuccessful.
+- For **Azure resource roles**, any security group can be assigned to the role. When a review is created on an Azure resource role with a security group assigned, by default, the users assigned to that security group will be fully expanded and shown to the reviewer of the role. When a reviewer denies a user that was assigned to the role via the security group, the user won't be removed from the group, and therefore applying the deny result will be unsuccessful.
> [!NOTE]
-> It is possible for a security group to have other groups assigned to it. In this case, only the users assigned directly to the security group assigned to the role will appear in the review of the role.
+> It's possible for a security group to have other groups assigned to it. In this case, only the users assigned directly to the security group assigned to the role will appear in the review of the role.
+
+These default behaviors change if the administrator specifies settings for access reviews of nested groups.
## Update the access review

After one or more access reviews have been started, you may want to modify or update the settings of your existing access reviews. Here are some common scenarios that you might want to consider:

-- **Adding and removing reviewers** - When updating access reviews, you may choose to add a fallback reviewer in addition to the primary reviewer. Primary reviewers may be removed when updating an access review. However, fallback reviewers are not removable by design.
+- **Adding and removing reviewers** - When updating access reviews, you may choose to add a fallback reviewer in addition to the primary reviewer. Primary reviewers may be removed when updating an access review. However, fallback reviewers aren't removable by design.
> [!Note]
> Fallback reviewers can only be added when reviewer type is manager. Primary reviewers can be added when reviewer type is selected user.

-- **Reminding the reviewers** - When updating access reviews, you may choose to enable the reminder option under Advanced Settings. Once enabled, users will receive an email notification at the midpoint of the review period, regardless of whether they have completed the review or not.
+- **Reminding the reviewers** - When updating access reviews, you may choose to enable the reminder option under Advanced Settings. Once enabled, users will receive an email notification at the midpoint of the review period, regardless of whether they've completed the review or not.
:::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/reminder-setting.png" alt-text="Screenshot of the reminder option under access reviews settings.":::
active-directory Credential Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/credential-design.md
Title: How to customize your Azure Active Directory Verifiable Credentials (preview)
+ Title: How to customize your Microsoft Entra Verified ID (preview)
description: This article shows you how to create your own custom verifiable credential
Previously updated : 06/08/2022 Last updated : 06/22/2022 # Customer intent: As a developer I am looking for information on how to enable my users to control their own information
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-Verifiable credentials are made up of two components, the rules and display definitions. The rules definition determines what the user needs to provide before they receive a verifiable credential. The display definition controls the branding of the credential and styling of the claims. In this guide, we will explain how to modify both files to meet the requirements of your organization.
+Verifiable credentials are made up of two components: the rules definition and the display definition. The rules definition determines what the user needs to provide before they receive a verifiable credential. The display definition controls the branding of the credential and styling of the claims. In this guide, we'll explain how to modify both files to meet the requirements of your organization.
> [!IMPORTANT]
-> Azure Active Directory Verifiable Credentials is currently in public preview.
+> Microsoft Entra Verified ID is currently in public preview.
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Verifiable credentials are made up of two components, the rules and display defi
The rules definition is a simple JSON document that describes important properties of verifiable credentials. In particular, it describes how claims are used to populate your verifiable credential.
-There are currently three input types that are available to configure in the rules definition. These types are used by the verifiable credential issuing service to insert claims into a verifiable credential and attest to that information with your DID. The following are the four types with explanations.
+There are currently four input types that are available to configure in the rules definition. These types are used by the verifiable credential issuing service to insert claims into a verifiable credential and attest to that information with your DID. The following are the four types with explanations.
- ID Token
- ID Token Hint
- Verifiable credentials via a verifiable presentation
- Self-Attested Claims
-**ID token:** When this option is configured, you will need to provide an Open ID Connect configuration URI and include the claims that should be included in the VC. The user will be prompted to 'Sign in' on the Authenticator app to meet this requirement and add the associated claims from their account.
+**ID token:** When this option is configured, you'll need to provide an Open ID Connect configuration URI and include the claims that should be included in the VC. The user will be prompted to 'Sign in' on the Authenticator app to meet this requirement and add the associated claims from their account.
-**ID token hint:** The sample App and Tutorial use the ID Token Hint. When this option is configured, the relying party app will need to provide claims that should be included in the VC in the Request Service API issuance request. Where the relying party app gets the claims from is up to the app, but it can come from the current login session, from backend CRM systems or even from self asserted user input.
+**ID token hint:** The sample App and Tutorial use the ID Token Hint. When this option is configured, the relying party app will need to provide claims that should be included in the VC in the Request Service API issuance request. Where the relying party app gets the claims from is up to the app, but it can come from the current sign-in session, from backend CRM systems or even from self asserted user input.
**Verifiable credentials:** The end result of an issuance flow is to produce a Verifiable Credential, but you may also ask the user to present a Verifiable Credential in order to issue one. The rules definition is able to take specific claims from the presented Verifiable Credential and include those claims in the newly issued Verifiable Credential from your organization.
There are currently three input types that are available to configure in the rul
![detailed view of verifiable credential card](media/credential-design/issuance-doc.png)
-**Static claims:** Additionally we are able to declare a static claim in the rules definition, however this input doesn't come from the user. The Issuer defines a static claim in the rules definition and would look like any other claim in the Verifiable Credential. Add a credentialSubject after vc.type and declare the attribute and the claim.
+**Static claims:** Additionally, we can declare a static claim in the rules definition; however, this input doesn't come from the user. The Issuer defines a static claim in the rules definition, and it would look like any other claim in the Verifiable Credential. Add a credentialSubject after vc.type and declare the attribute and the claim, as illustrated in the sketch at the end of this section.
```json "vc": {
There are currently three input types that are available to configure in the rul
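Building on the static-claims guidance above, a minimal, hypothetical sketch (the credential type and claim name below are illustrative only, not taken from the article) could look like the following:

```json
"vc": {
  "type": [ "MyCustomCredential" ],
  "credentialSubject": {
    "staticClaimName": "static claim value"
  }
}
```
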
## Input type: ID token
-To get ID Token as input, the rules definition needs to configure the well-known endpoint of the OIDC compatible Identity system. In that system you need to register an application with the correct information from [Issuer service communication examples](issuer-openid.md). Additionally, the client_id needs to be put in the rules definition, as well as a scope parameter needs to be filled in with the correct scopes. For example, Azure Active Directory needs the email scope if you want to return an email claim in the ID token.
+To get ID Token as input, the rules definition needs to configure the well-known endpoint of the OIDC compatible Identity system. In that system you need to register an application with the correct information from [Issuer service communication examples](issuer-openid.md). Additionally, the client_id needs to be put in the rules definition, and a scope parameter needs to be filled in with the correct scopes. For example, Azure Active Directory needs the email scope if you want to return an email claim in the ID token.
```json {
To get ID Token as input, the rules definition needs to configure the well-known
} ```
-Please see [idToken attestation](rules-and-display-definitions-model.md#idtokenattestation-type) for reference of properties.
+See [idToken attestation](rules-and-display-definitions-model.md#idtokenattestation-type) for reference of properties.
## Input type: ID token hint
See [idTokenHint attestation](rules-and-display-definitions-model.md#idtokenhint
### vc.type: Choose credential type(s)
-All verifiable credentials must declare their "type" in their rules definition. The type of a credential distinguishes your verifiable credentials from credentials issued by other organizations and ensures interoperability between issuers and verifiers. To indicate a credential type, you must provide one or more credential types that the credential satisfies. Each type is represented by a unique string - often a URI will be used to ensure global uniqueness. The URI doesn't need to be addressable; it is treated as a string.
+All verifiable credentials must declare their "type" in their rules definition. The type of a credential distinguishes your verifiable credentials from credentials issued by other organizations and ensures interoperability between issuers and verifiers. To indicate a credential type, you must provide one or more credential types that the credential satisfies. Each type is represented by a unique string - often a URI will be used to ensure global uniqueness. The URI doesn't need to be addressable; it's treated as a string.
As an example, a diploma credential issued by Contoso University might declare the following types:
As an example, a diploma credential issued by Contoso University might declare t
| `https://schemas.ed.gov/universityDiploma2020` | Declares that diplomas issued by Contoso University contain attributes defined by the United States department of education. |
| `https://schemas.contoso.edu/diploma2020` | Declares that diplomas issued by Contoso University contain attributes defined by Contoso University. |
-By declaring all three types, Contoso University's diplomas can be used to satisfy different requests from verifiers. A bank can request a set of `EducationCredential`s from a user, and the diploma can be used to satisfy the request. But the Contoso University Alumni Association can request a credential of type `https://schemas.contoso.edu/diploma2020`, and the diploma will also satisfy the request.
+Declaring three types allows Contoso to issue diplomas that satisfy different requests from verifiers. A bank can request a set of `EducationCredential`s from a user, and the diploma can be used to satisfy the request. But the Contoso University Alumni Association can request a credential of type `https://schemas.contoso.edu/diploma2020`, and the diploma will also satisfy the request.
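As an illustration, a sketch of how those three types could be declared together in the diploma's rules definition (the structure is assumed from the `vc.type` description above):

```json
"vc": {
  "type": [
    "EducationCredential",
    "https://schemas.ed.gov/universityDiploma2020",
    "https://schemas.contoso.edu/diploma2020"
  ]
}
```
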
To ensure interoperability of your credentials, it's recommended that you work closely with related organizations to define credential types, schemas, and URIs for use in your industry. Many industry bodies provide guidance on the structure of official documents that can be repurposed for defining the contents of verifiable credentials. You should also work closely with the verifiers of your credentials to understand how they intend to request and consume your verifiable credentials.
Verifiable credentials issued to users are displayed as cards in Microsoft Authe
![issuance documentation](media/credential-design/detailed-view.png)
-Cards also contain customizable fields that you can use to let users know the purpose of the card, the attributes it contains, and more.
+Cards also contain customizable fields. You can use these fields to let users know the purpose of the card, the attributes it contains, and more.
## Create a credential display definition
See [Display definition model](rules-and-display-definitions-model.md#displaymod
Now you have a better understanding of verifiable credential design and how you can create your own to meet your needs. - [Issuer service communication examples](issuer-openid.md)-- Reference for [Rules and Display definitions](rules-and-display-definitions-model.md)
+- Reference for [Rules and Display definitions](rules-and-display-definitions-model.md)
active-directory How To Create A Free Developer Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-create-a-free-developer-account.md
# Customer intent: As a developer I am looking to create a developer Azure Active Directory account so I can participate in the Preview with a P2 license.
-# How to create a free Azure Active Directory developer tenant
+# Microsoft Entra Verified ID developer information
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)] > [!IMPORTANT]
-> Azure Active Directory Verifiable Credentials is currently in public preview.
+> Microsoft Entra Verified ID is currently in public preview.
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). > [!NOTE]
-> While in Preview a P2 license is required.
+> The requirement of an Azure AD P2 license was removed in early May 2022. The Azure AD Free tier is now supported.
-There are two easy ways to create a free Azure Active Directory with a P2 trial license so you can install the Verifiable Credential Issuer service and you can test creating and validating Verifiable Credentials:
+## Creating an Azure AD tenant for development
+
+There are two easy ways to create a free Azure Active Directory tenant so you can onboard the Verifiable Credential service and test issuing and verifying Verifiable Credentials:
- [Join](https://aka.ms/o365devprogram) the free Microsoft 365 Developer Program and get a free sandbox, tools, and other resources like an Azure Active Directory tenant with P2 licenses and configured users, groups, mailboxes, and so on.
- Create a new [tenant](../develop/quickstart-create-new-tenant.md) and activate a [free trial](https://azure.microsoft.com/trial/get-started-active-directory/) of Azure AD Premium P1 or P2 in your new tenant.
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
Title: Link your Domain to your Decentralized Identifier (DID) (preview) - Azure Active Directory Verifiable Credentials
+ Title: Link your Domain to your Decentralized Identifier (DID) (preview) - Microsoft Entra Verified ID
description: Learn how to DNS Bind? documentationCenter: ''
Previously updated : 06/02/2022 Last updated : 06/22/2022 #Customer intent: Why are we doing this?
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)] > [!IMPORTANT]
-> Azure Active Directory Verifiable Credentials is currently in public preview.
+> Microsoft Entra Verified ID is currently in public preview.
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
We follow the [Well-Known DID configuration](https://identity.foundation/.well-k
] ```
-2. The verifiable credential service in Azure AD generates a compliant well-known configuration resource that you can host on your domain. The configuration file includes a self-issued verifiable credential of credentialType 'DomainLinkageCredential' signed with your DID that has an origin of your domain. Here is an example of the config doc that is stored at the root domain URL.
+2. The verifiable credential service in Azure AD generates a compliant well-known configuration resource that you can host on your domain. The configuration file includes a self-issued verifiable credential of credentialType 'DomainLinkageCredential' signed with your DID that has an origin of your domain. Here's an example of the config doc that is stored at the root domain URL.
```json
It is of high importance that you link your DID to a domain recognizable to the
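For reference, the well-known configuration resource described above has roughly this shape per the Well-Known DID Configuration specification; the value below is a placeholder for the signed DomainLinkageCredential JWT, not a real example:

```json
{
  "@context": "https://identity.foundation/.well-known/did-configuration/v1",
  "linked_dids": [
    "<compact JWT containing the DomainLinkageCredential signed with your DID>"
  ]
}
```
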
## How do you update the linked domain on your DID?

1. Navigate to the Verifiable Credentials | Getting Started page.
-1. On the left side of the page select **Domain**.
+1. On the left side of the page, select **Domain**.
1. In the Domain box, enter your new domain name.
1. Select **Publish**.
If the trust system is ION, once the domain changes are published to ION, the do
Congratulations, you now have bootstrapped the web of trust with your DID!
+## Linked domain made easy for developers
+
+The easiest way for a developer to get a domain to use for linked domain is to use Azure Storage's static website feature. You can't control what the domain name will be, other than that it will contain your storage account name as part of its hostname.
+
+Follow these steps to quickly set up a domain to use for Linked Domain:
+
+1. Create an **Azure Storage account**. During storage account creation, choose StorageV2 (general-purpose v2 account) and Locally redundant storage (LRS).
+1. Go to that Storage Account and select **Static website** in the left hand menu and enable static website. If you can't see the **Static website** menu item, you didn't create a **V2** storage account.
+1. Copy the primary endpoint name that appears after saving. This value is your domain name. It looks something like `https://<your-storageaccountname>.z6.web.core.windows.net/`.
+
+When it comes time to upload the `did-configuration.json` file, take the following steps:
+
+1. Go to that Storage Account and select **Containers** in the left hand menu. Then select the container named `$web`.
+1. Select **Upload** and select the folder icon to find your file.
+1. Before uploading, open the **Advanced** section and specify `.well-known` in the **Upload to folder** textbox.
+1. Upload the file.
+
+You now have your file publicly available at a URL that looks something like `https://<your-storageaccountname>.z6.web.core.windows.net/.well-known/did-configuration.json`.
+ ## Next steps -- [How to customize your Azure Active Directory Verifiable Credentials](credential-design.md)
+- [How to customize your Microsoft Entra Verified ID](credential-design.md)
active-directory How To Use Quickstart Idtoken https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-idtoken.md
+
+ Title: How to create verifiable credentials for idTokens
+description: Learn how to use the QuickStart to create custom credentials for idTokens
+documentationCenter: ''
+++++ Last updated : 06/22/2022++
+#Customer intent: As an administrator, I am looking for information to help me disable
++
+# How to create verifiable credentials for idTokens
++
+> [!IMPORTANT]
+> Microsoft Entra Verified ID is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) using the [idTokens attestation](rules-and-display-definitions-model.md#idtokenattestation-type) will produce an issuance flow where the user will be required to do an interactive sign-in to an OIDC identity provider in the Authenticator. Claims in the id_token the identity provider returns can be used to populate the issued verifiable credential. The claims mapping section in the rules definition specifies which claims are used.
+
+## Create a Custom credential with the idTokens attestation type
+
+When you select + Add credential in the portal, you get the option to launch two Quickstarts. Select [x] Custom credential and select Next.
+
+![Screenshot of VC quickstart](media/how-to-use-quickstart/quickstart-startscreen.png)
+
+In the next screen, you enter JSON for the Display and the Rules definitions and give the credential a type name. Select Create to create the credential.
+
+![screenshot of create new credential section with JSON sample](media/how-to-use-quickstart/quickstart-create-new.png)
+
+## Sample JSON Display definitions
+
+The Display JSON definition is largely the same regardless of attestation type. You just have to adjust the labels depending on what claims your VC has. The expected JSON for the Display definitions is the inner content of the displays collection. The JSON is a collection, so if you want to support multiple locales, you add multiple entries with a comma as separator.
+
+```json
+{
+ "locale": "en-US",
+ "card": {
+ "title": "Verified Credential Expert",
+ "issuedBy": "Microsoft",
+ "backgroundColor": "#000000",
+ "textColor": "#ffffff",
+ "logo": {
+ "uri": "https://didcustomerplayground.blob.core.windows.net/public/VerifiedCredentialExpert_icon.png",
+ "description": "Verified Credential Expert Logo"
+ },
+ "description": "Use your verified credential to prove to anyone that you know all about verifiable credentials."
+ },
+ "consent": {
+ "title": "Do you want to get your Verified Credential?",
+ "instructions": "Sign in with your account to get your card."
+ },
+ "claims": [
+ {
+ "claim": "vc.credentialSubject.userName",
+ "label": "User name",
+ "type": "String"
+ },
+ {
+ "claim": "vc.credentialSubject.displayName",
+ "label": "Display name",
+ "type": "String"
+ },
+ {
+ "claim": "vc.credentialSubject.firstName",
+ "label": "First name",
+ "type": "String"
+ },
+ {
+ "claim": "vc.credentialSubject.lastName",
+ "label": "Last name",
+ "type": "String"
+ }
+ ]
+}
+```
+
+## Sample JSON Rules definitions
+
+The JSON attestation definition should contain the **idTokens** name, the [OIDC configuration details](rules-and-display-definitions-model.md#idtokenattestation-type), and the claims mapping section. The expected JSON for the Rules definitions is the inner content of the rules attribute, which starts with the attestation attribute. The claims mapping in the example below requires that you configure the token as explained in the section [Claims in id_token from Identity Provider](#claims-in-id_token-from-identity-provider).
+
+```json
+{
+ "attestations": {
+ "idTokens": [
+ {
+ "clientId": "8d5b446e-22b2-4e01-bb2e-9070f6b20c90",
+ "configuration": "https://didplayground.b2clogin.com/didplayground.onmicrosoft.com/B2C_1_sisu/v2.0/.well-known/openid-configuration",
+ "redirectUri": "vcclient://openid",
+ "scope": "openid profile email",
+ "mapping": [
+ {
+ "outputClaim": "userName",
+ "required": true,
+ "inputClaim": "$.upn",
+ "indexed": false
+ },
+ {
+ "outputClaim": "displayName",
+ "required": true,
+ "inputClaim": "$.name",
+ "indexed": false
+ },
+ {
+ "outputClaim": "firstName",
+ "required": true,
+ "inputClaim": "$.given_name",
+ "indexed": false
+ },
+ {
+ "outputClaim": "lastName",
+ "required": true,
+ "inputClaim": "$.family_name",
+ "indexed": true
+ }
+ ],
+ "required": false
+ }
+ ]
+ }
+}
+```
+
+## Application Registration
+
+The **clientId** attribute is the AppId of a registered application in the OIDC identity provider. For **Azure Active Directory**, you create the application via these steps.
+
+1. Navigate to [Azure Active Directory in portal.azure.com](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps).
+1. Select **App registrations**, select **+New registration**, and give the app a name.
+1. Keep the selection **Accounts in this directory only** if you want only accounts in your tenant to be able to sign in.
+1. In **Redirect URI (optional)**, select **Public client/native (mobile & desktop)** and enter value **vcclient://openid**
+
+If you want to be able to test what claims are in the token, do the following:
+1. Select **Authentication** in the left-hand menu.
+1. **+Add platform**
+1. **Web**
+1. Enter **https://jwt.ms** as **Redirect URI** and select **ID Tokens (used for implicit and hybrid flows)**
+1. Select **Configure**.
+
+Once you finish testing your id_token, you should consider removing **https://jwt.ms** and the support for **implicit and hybrid flows**.
+
+For **Azure Active Directory**, if you've enabled support for redirecting to jwt.ms, you can test your app registration and confirm that you get an id_token by running the following in the browser:
+
+```http
+https://login.microsoftonline.com/<your-tenantId>/oauth2/v2.0/authorize?client_id=<your-appId>&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid%20profile&response_type=id_token&prompt=login
+```
+
+Replace `<your-tenantId>` and `<your-appId>` with your values. Note that you need to have **profile** as part of the **scope** in order to get the extra claims.
+
+For **Azure Active Directory B2C**, the app registration process is the same, but B2C has built-in support in the portal for testing your B2C policies via the **Run user flow** functionality.
+
+## Claims in id_token from Identity Provider
+
+Claims must exist in the id_token returned by the identity provider so that they can successfully populate your VC.
+If the claims don't exist, there will be no value in the issued VC. Most OIDC identity providers don't issue a claim in an id_token if the claim has a null value in the user's profile. Make sure you include the claim in the id_token definition and that the user has a value for the claim in the user profile.
+
+For **Azure Active Directory**, see documentation [Provide optional claims to your app](../../active-directory/develop/active-directory-optional-claims.md) on how to configure what claims to include in your token. The configuration is per application, so the configuration you make should be for the app with AppId specified in the **clientId** in the rules definition.
+
+To match the above Display & Rules definitions, your application manifest should have its **optionalClaims** configured as shown below.
+
+```json
+"optionalClaims": {
+ "idToken": [
+ {
+ "name": "upn",
+ "source": null,
+ "essential": false,
+ "additionalProperties": []
+ },
+ {
+ "name": "family_name",
+ "source": null,
+ "essential": false,
+ "additionalProperties": []
+ },
+ {
+ "name": "given_name",
+ "source": null,
+ "essential": false,
+ "additionalProperties": []
+ },
+ {
+ "name": "preferred_username",
+ "source": null,
+ "essential": false,
+ "additionalProperties": []
+ }
+ ],
+ "accessToken": [],
+ "saml2Token": []
+},
+```
+
+For **Azure Active Directory B2C**, configuring other claims in your id_token depends on if your B2C policy is a **User Flow** or a **Custom Policy**. For documentation on User Flows, see [Set up a sign-up and sign-in flow in Azure Active Directory B2C](../../active-directory-b2c/add-sign-up-and-sign-in-policy.md?pivots=b2c-user-flow) and for Custom Policy, see documentation [Provide optional claims to your app](../../active-directory-b2c/configure-tokens.md?pivots=b2c-custom-policy#provide-optional-claims-to-your-app).
+
+For other identity providers, see the relevant documentation.
+
+## Configure the samples to issue and verify your Custom credential
+
+To configure your sample code to issue and verify using custom credentials, you need:
+
+- Your tenant's issuer DID
+- The credential type
+- The manifest url to your credential.
+
+The easiest way to find this information for a Custom Credential is to go to your credential in the portal, select **Issue credential** and switch to Custom issue.
+
+![Screenshot of QuickStart issue credential screen.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
+
+After switching to custom issue, you have access to a textbox with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuer's DID is the authority value.
+
+![Screenshot of Quickstart custom issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
+
+## Next steps
+
+- Reference for [Rules and Display definitions model](rules-and-display-definitions-model.md)
active-directory How To Use Quickstart Selfissued https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-selfissued.md
+
+ Title: How to create verifiable credentials for self-asserted claims
+description: Learn how to use the QuickStart to create custom credentials for self-issued
+documentationCenter: ''
+++++ Last updated : 06/22/2022++
+#Customer intent: As a verifiable credentials Administrator, I want to create a verifiable credential for self-asserted claims scenario
++
+# How to create verifiable credentials for self-asserted claims
++
+> [!IMPORTANT]
+> Microsoft Entra Verified ID is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) using the [selfIssued attestation](rules-and-display-definitions-model.md#selfissuedattestation-type) will produce an issuance flow where the user will be required to manually enter values for the claims in the Authenticator.
+
+## Create a Custom credential with the selfIssued attestation type
+
+When you select + Add credential in the portal, you get the option to launch two QuickStarts. Select [x] Custom credential and select Next.
+
+![Screenshot of VC quickstart](media/how-to-use-quickstart/quickstart-startscreen.png)
+
+In the next screen, you enter JSON for the Display and the Rules definitions and give the credential a type name. Select Create to create the credential.
+
+![screenshot of create new credential section with JSON sample](media/how-to-use-quickstart/quickstart-create-new.png)
+
+## Sample JSON Display definitions
+
+The Display JSON definition is very much the same regardless of attestation type. You just have to adjust the labels depending on what claims your VC has. The expected JSON for the Display definitions is the inner content of the displays collection. The JSON is a collection, so if you want to support multiple locales, you add multiple entries with a comma as separator.
+
+```json
+{
+ "locale": "en-US",
+ "card": {
+ "title": "Verified Credential Expert",
+ "issuedBy": "Microsoft",
+ "backgroundColor": "#000000",
+ "textColor": "#ffffff",
+ "logo": {
+ "uri": "https://didcustomerplayground.blob.core.windows.net/public/VerifiedCredentialExpert_icon.png",
+ "description": "Verified Credential Expert Logo"
+ },
+ "description": "Use your verified credential to prove to anyone that you know all about verifiable credentials."
+ },
+ "consent": {
+ "title": "Do you want to get your Verified Credential?",
+ "instructions": "Sign in with your account to get your card."
+ },
+ "claims": [
+ {
+ "claim": "vc.credentialSubject.displayName",
+ "label": "Name",
+ "type": "String"
+ },
+ {
+ "claim": "vc.credentialSubject.companyName",
+ "label": "Company",
+ "type": "String"
+ }
+ ]
+}
+```
+
+## Sample JSON Rules definitions
+
+The JSON attestation definition should contain the **selfIssued** name and the claims mapping section. Since the claims are selfIssued, the value will be the same for the **outputClaim** and the **inputClaim**. The expected JSON for the Rules definitions is the inner content of the rules attribute, which starts with the attestation attribute.
+
+```json
+{
+ "attestations": {
+ "selfIssued": {
+ "mapping": [
+ {
+ "outputClaim": "displayName",
+ "required": true,
+ "inputClaim": "displayName",
+ "indexed": false
+ },
+ {
+ "outputClaim": "companyName",
+ "required": true,
+ "inputClaim": "companyName",
+ "indexed": false
+ }
+ ],
+ "required": false
+ }
+ }
+}
+```
+
+## Claims input during issuance
+
+During issuance, the Microsoft Authenticator will prompt the user to enter values for the specified claims. There's no validation of user input.
+
+![selfIssued claims input](media/how-to-use-quickstart-selfissued/selfIssued-claims-input.png)
+
+## Configure the samples to issue and verify your Custom credential
+
+To configure your sample code to issue and verify using custom credentials, you need:
+
+- Your tenant's issuer DID
+- The credential type
+- The manifest url to your credential.
+
+The easiest way to find this information for a Custom Credential is to go to your credential in the portal, select **Issue credential** and switch to Custom issue.
+
+![Screenshot of QuickStart issue credential screen.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
+
+After switching to custom issue, you have access to a textbox with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuer's DID is the authority value.
+
+![Screenshot of Quickstart custom issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
+
+## Next steps
+
+- Reference for [Rules and Display definitions model](rules-and-display-definitions-model.md)
active-directory How To Use Quickstart Verifiedemployee https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-verifiedemployee.md
+
+ Title: Tutorial - Issue a Verifiable Credential for directory based claims
+description: In this tutorial, you learn how to issue verifiable credentials, from directory based claims, by using a sample app.
++++++ Last updated : 06/22/2022
+# Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
++++
+# Issue a Verifiable Credential for directory based claims
++
+In this guide, you'll create a credential where the claims come from a user profile in the directory of the Azure AD tenant. With directory based claims you can create Verifiable Credentials of type VerifiedEmployee, if the users in the directory are employees.
+
+In this article, you learn how to:
+
+> [!div class="checklist"]
+>
+> - Create a user in the directory
+> - Set up the user for Microsoft Authenticator
+> - Create a Verified employee credential
+> - Configure the samples to issue and verify your VerifiedEmployee credential
++
+## Prerequisites
+
+- [Set up a tenant for Azure AD Verifiable Credentials](verifiable-credentials-configure-tenant.md).
+- Complete the tutorial for [issuance](verifiable-credentials-configure-issuer.md) and [verification](verifiable-credentials-configure-verifier.md) of verifiable credentials.
+- A mobile phone with Microsoft Authenticator that can be used as the test user account.
+
+## Create a user in the directory
+
+If you already have a test user, you can skip this section. If you want to create a test user, follow the steps below:
+
+1. As a **User Admin**, navigate to Azure Active Directory in the [Azure portal](https://portal.azure.com/#view/Microsoft_AAD_IAM/UsersManagementMenuBlade/~/MsGraphUsers).
+1. Select **Users** and **+ New user**, then keep selection on [x] Create user
+1. Fill in **User name**, **Name**, **First name** and **Last name**.
+1. Check **[x] Show Password** and copy the temporary password somewhere, like Notepad, and then select the **Create** button.
+1. Find the new user, select the user to **view profile**, and select **Edit**. Update the following attributes, and then select **Save**:
+ - Job Title
+ - Email (in the Contact Info section. Doesn't have to be an existing email address)
+ - Photo (select JPG/PNG file with low, thumbnail like, resolution)
+1. Open a new, private browser window, navigate to a page like [https://myapps.microsoft.com/](https://myapps.microsoft.com/), and sign in with your new user. The user name would be something like meganb@yourtenant.onmicrosoft.com. You'll be prompted to change your password.
+
+## Set up the user for Microsoft Authenticator
+
+Your test user needs to have Microsoft Authenticator setup for the account. To enable Authenticator on the test user account, follow these steps:
+
+1. On your mobile test device, open Microsoft Authenticator, go to the Authenticator tab at the bottom, and tap the **+** sign to **Add account**. Select **Work or school account**.
+1. At the prompt, select **Sign in**. Don't select "Scan QR code".
+1. Sign in with the test user's credentials in the Azure AD tenant.
+1. Authenticator will launch [https://aka.ms/mfasetup](https://aka.ms/mfasetup) in the browser on your mobile device. You need to sign in again with your test user's credentials.
+1. In **Set up your account in the app**, select **Pair your account to the app by clicking this link**. The Microsoft Authenticator app opens, and you see your test user as an added account.
+
+If [https://aka.ms/mfasetup](https://aka.ms/mfasetup) launches without prompting you to sign in, it means you've already set up Authenticator for another user on this device. When already configured with a user, Authenticator signs you in automatically. Sign out the currently signed-in user in the browser, and then repeat the steps above. If you zoom in on the page, you'll find the **Sign out** button at the top-right corner.
+
+## Create a Verified employee credential
+
+When you select + Add credential in the portal, you get the option to launch two Quickstarts. Select **Verified employee** and select Next.
+
+![Quickstart start screen](media/how-to-use-quickstart-verifiedemployee/verifiable-credentials-configure-verifiedemployee-quickstart.png)
+
+In the next screen, you enter some of the Display definitions, like the logo URL, text, and background color. Since the credential is a managed credential with directory based claims, rules definitions are predefined. You don't need to enter rule definition details. The credential type will be **VerifiedEmployee** and the claims from the user's profile are pre-set. Select Create to create the credential.
+
+![Card styling](media/how-to-use-quickstart-verifiedemployee/verifiable-credentials-configure-verifiedemployee-styling.png)
+
+## Claims schema for Verified employee credential
+
+All of the claims in the Verified employee credential come from attributes in the [user's profile](/graph/api/resources/user) in Azure AD for the issuing tenant. All claims, except photo, come from the Microsoft Graph query [https://graph.microsoft.com/v1.0/me](/graph/api/user-get). The photo claim comes from the value returned from the Microsoft Graph query [https://graph.microsoft.com/v1.0/me/photo/$value](/graph/api/profilephoto-get).
+
+| Claim | Directory attribute | Value |
+||||
+| `revocationId` | `userPrincipalName`| The UPN of the user is added as a claim named `revocationId` and gets indexed.|
+| `displayName` | `displayName` | The displayName of the user |
+| `givenName` | `givenName` | First name of the user |
+| `surname` | `surname` | Last name of the user |
+| `jobTitle` | `jobTitle` | The user's job title. This attribute doesn't have a value by default in the user's profile. If the user's profile has no value specified, there's no `jobTitle` claim in the issued VC. |
+| `preferredLanguage` | `preferredLanguage` | Should follow [ISO 639-1](https://en.wikipedia.org/wiki/ISO_639-1) and contain a value like `en-us`. There's no default value specified. If there's no value, no claim is included in the issued VC. |
+| `mail` | `mail` | The user's email address. The `mail` value isn't the same as the UPN. It's also an attribute that doesn't have a value by default. |
+| `photo` | `photo` | The uploaded photo for the user. The image type (JPEG, PNG, etc.) depends on the uploaded image type. When presenting the photo claim to a verifier, the photo claim is in the UrlEncode(Base64Encode(photo)) format. To use the photo, the verifier application has to Base64Decode(UrlDecode(photo)), as shown in the sketch after this table. |
+
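+The following is a minimal sketch of that decoding step, assuming a Node.js/TypeScript verifier backend. The helper name `decodePhotoClaim` is illustrative and isn't part of any SDK.
+
+```typescript
+// The photo claim arrives as UrlEncode(Base64Encode(photo)); reverse both steps
+// to recover the raw image bytes (JPEG, PNG, etc.).
+function decodePhotoClaim(photoClaim: string): Buffer {
+  const base64 = decodeURIComponent(photoClaim); // UrlDecode
+  return Buffer.from(base64, "base64");          // Base64Decode
+}
+```
+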
+See full Azure AD user profile [properties reference](/graph/api/resources/user).
+
+If attribute values change in the user's Azure AD profile, the VC isn't automatically reissued. You must reissue it manually. The issuance process is the same as when working with the samples.
+
+## Configure the samples to issue and verify your VerifiedEmployee credential
+
+Verifiable Credentials for directory-based claims can be issued and verified just like any other credentials you create. All you need is your tenant's issuer DID, the credential type, and the manifest URL for your credential. The easiest way to find these values for a Managed Credential is to view the credential in the portal, select **Issue credential**, and switch to **Custom issue**. These steps bring up a textbox with a skeleton JSON payload for the Request Service API.
+
+![Custom issue](media/how-to-use-quickstart-verifiedemployee/verifiable-credentials-configure-verifiedemployee-custom-issue.png)
+
+In this screen, you have values that you can copy and paste into your sample deployment's configuration files. The issuer's DID is the authority value. A sketch showing where these values fit in an issuance request follows the list below.
+
+- **authority** - Issuer's DID
+- **type** - the credential type is always `VerifiedEmployee` when looking at a verified employee credential
+- **manifest** - the credential manifest URL
+
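+As a rough sketch of where these values fit, the snippet below places them in an issuance request payload object. All values are placeholders, and the full Request Service API payload contains more fields (for example, registration and callback settings); use the skeleton JSON payload from the portal as the authoritative shape.
+
+```typescript
+// Placeholder values copied from the Custom issue view in the portal.
+const issuanceRequestSketch = {
+  authority: "did:web:example.com",          // your tenant's issuer DID
+  type: "VerifiedEmployee",                  // the credential type
+  manifest: "https://example.com/manifest",  // the credential manifest URL from the portal
+  // ...remaining Request Service API fields are omitted in this sketch
+};
+```
+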
+The configuration file depends on the sample in use.
+
+- **Dotnet** - [appsettings.json](https://github.com/Azure-Samples/active-directory-verifiable-credentials-dotnet/blob/main/1-asp-net-core-api-idtokenhint/appsettings.json)
+- **node** - [config.json](https://github.com/Azure-Samples/active-directory-verifiable-credentials-node/blob/main/1-node-api-idtokenhint/config.json)
+- **python** - [config.json](https://github.com/Azure-Samples/active-directory-verifiable-credentials-python/blob/main/1-python-api-idtokenhint/config.json)
+- **Java** - values are set as environment variables in run.cmd, or in [docker-run.sh](https://github.com/Azure-Samples/active-directory-verifiable-credentials-jav/docker-run.sh) when using Docker.
+
+## Next steps
+
+Learn [how to customize your verifiable credentials](credential-design.md).
active-directory How To Use Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart.md
Title: How to create credentials using the QuickStart
-description: Learn how to use the QuickStart to create custom credentials
+ Title: How to create verifiable credentials for ID token hint
+description: Learn how to use the QuickStart to create a custom verifiable credential for ID token hint
documentationCenter: ''
Last updated 06/16/2022
-#Customer intent: As an administrator, I am looking for information to help me disable
+#Customer intent: As a verifiable credentials Administrator, I want to create a verifiable credential for the ID token hint scenario
-# How to create credentials using the Quickstart
+# How to create verifiable credentials for ID token hint
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)] > [!IMPORTANT]
-> Azure Active Directory Verifiable Credentials is currently in public preview.
+> Microsoft Entra Verified ID is currently in public preview.
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Prerequisites
-To use the Azure Active Directory Verifiable Credentials QuickStart, you only need to complete verifiable credentials onboarding.
+To use the Microsoft Entra Verified ID QuickStart, you only need to complete verifiable credentials onboarding.
## What is the QuickStart?
After switching to custom issue, you have access to a textbox with a JSON payloa
## Next steps -- Reference for [Rules and Display definitions model](rules-and-display-definitions-model.md)
+- Reference for [Rules and Display definitions model](rules-and-display-definitions-model.md)
+- Reference for creating a credential using the [idToken attestation](idtoken-reference.md)
active-directory Verifiable Credentials Standards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-standards.md
Previously updated : 06/16/2021 Last updated : 06/22/2022 # Customer intent: As a developer I am looking for information around the open standards supported by Microsoft Entra verified ID
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-This page outlines currently supported open standards for Microsoft Entra Verified ID. The full document outlining how to build an implementation that interoperates with Microsoft is [JWT VC Presentation Profile](https://identity.foundation/jwt-vc-presentation-profile/).
+Microsoft is actively collaborating with members of the Decentralized Identity Foundation (DIF), the W3C Credentials Community Group, and the wider identity community. We've worked with these groups to identify and develop critical standards, and have implemented the open standards in our services. This page outlines currently supported open standards for Microsoft Entra Verified ID.
## Standard bodies
Entra Verified ID supports the following Key Types for the JWS signature verific
|secp256k1|ES256K| |Ed25519|EdDSA|
+## Interoperability
+
+Microsoft is collaborating with organizations that are members of the Decentralized Identity Foundation (DIF), the W3C Credentials Community Group, and the wider identity community. Our collaboration efforts aim to build a Verifiable Credentials interoperability profile to support standards-based issuance, revocation, presentation, and wallet portability.
+
+Today, we have a working JWT VC presentation profile that supports interoperable presentation of Verifiable Credentials between wallets and verifiers/relying parties. Join us in the DIF Claims and Credentials working group: [aka.ms/vcinterop](https://aka.ms/vcinterop)
+ ## Next steps -- [Get started with verifiable credentials](verifiable-credentials-configure-tenant.md)
+- [Get started with verifiable credentials](verifiable-credentials-configure-tenant.md)
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Title: What's new for Azure Active Directory Verifiable Credentials (preview)
-description: Recent updates for Azure Active Directory Verifiable Credentials
+ Title: What's new for Microsoft Entra Verified ID (preview)
+description: Recent updates for Microsoft Entra Verified ID
Previously updated : 05/10/2022 Last updated : 06/24/2022
-#Customer intent: As an Azure AD Verifiable Credentials issuer, verifier or developer, I want to know what's new in the product so that I can make full use of the functionality as it becomes available.
+#Customer intent: As an Microsoft Entra Verified ID issuer, verifier or developer, I want to know what's new in the product so that I can make full use of the functionality as it becomes available.
-# What's new in Azure Active Directory Verifiable Credentials (preview)
+# What's new in Microsoft Entra Verified ID (preview)
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-This article lists the latest features, improvements, and changes in the Azure Active Directory (Azure AD) Verifiable Credentials service.
+This article lists the latest features, improvements, and changes in the Microsoft Entra Verified ID service.
+
+## June 2022
+
+In June, we introduced a set of new preview features:
+- Web as a new, default trust system that users can choose when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verifiable-credentials) a tenant. Web means your tenant uses [did:web](https://w3c-ccg.github.io/did-method-web/) as its trust system. ION is still available.
+- [Quickstarts](how-to-use-quickstart.md) as a new way to create Managed Credentials. Managed Credentials no longer use Azure Storage to store the Display & Rules JSON definitions. You need to migrate your Azure Storage-based credentials to become Managed Credentials, and we'll provide instructions shortly.
+- Managed Credential [Quickstart for Verifiable Credentials of type VerifiedEmployee](how-to-use-quickstart-verifiedemployee.md) with directory based claims from your tenant.
+- Updated documentation that describes the different ways to use the [Quickstarts](how-to-use-quickstart.md) and a [Rules and Display definition model](rules-and-display-definitions-model.md).
## May 2022
We are expanding our service to all Azure AD customers! Verifiable credentials a
Starting next month, we are rolling out exciting changes to the subscription requirements for the Verifiable Credentials service. Administrators must perform a small configuration change before **May 4, 2022** to avoid service disruptions. Follow [these steps](verifiable-credentials-faq.md?#updating-the-vc-service-configuration) to apply the required configuration changes. >[!IMPORTANT]
-> If changes are not applied before **May 4, 2022**, you will experience errors on issuance and presentation for your application or service using the Azure AD Verifiable Credentials Service. [Update service configuration instructions](verifiable-credentials-faq.md?#updating-the-vc-service-configuration).
+> If changes are not applied before **May 4, 2022**, you will experience errors on issuance and presentation for your application or service using the Microsoft Entra Verified ID Service. [Update service configuration instructions](verifiable-credentials-faq.md?#updating-the-vc-service-configuration).
## March 2022 -- Azure AD Verifiable Credentials customers can now change the [domain linked](how-to-dnsbind.md) to their DID easily from the Azure portal.
+- Microsoft Entra Verified ID customers can now change the [domain linked](how-to-dnsbind.md) to their DID easily from the Azure portal.
- We made updates to Microsoft Authenticator that change the interaction between the Issuer of a verifiable credential and the user presenting the verifiable credential. This update forces all Verifiable Credentials to be reissued in Microsoft Authenticator for iOS. [More information](whats-new.md?#microsoft-authenticator-did-generation-update) ## February 2022
-We are rolling out some breaking changes to our service. These updates require Azure AD Verifiable Credentials service reconfiguration. End-users need to have their verifiable credentials reissued.
+We are rolling out some breaking changes to our service. These updates require Microsoft Entra Verified ID service reconfiguration. End-users need to have their verifiable credentials reissued.
-- The Azure AD Verifiable Credentials service can now store and handle data processing in the Azure European region. [More information](whats-new.md?#azure-ad-verifiable-credentials-available-in-europe)-- Azure AD Verifiable Credentials customers can take advantage of enhancements to credential revocation. These changes add a higher degree of privacy through the implementation of the [W3C Status List 2021](https://w3c-ccg.github.io/vc-status-list-2021/) standard. [More information](whats-new.md?#credential-revocation-with-enhanced-privacy)
+- The Microsoft Entra Verified ID service can now store and handle data processing in the Azure European region.
+- Microsoft Entra Verified ID customers can take advantage of enhancements to credential revocation. These changes add a higher degree of privacy through the implementation of the [W3C Status List 2021](https://w3c-ccg.github.io/vc-status-list-2021/) standard. [More information](whats-new.md?#credential-revocation-with-enhanced-privacy)
- We made updates to Microsoft Authenticator that change the interaction between the Issuer of a verifiable credential and the user presenting the verifiable credential. This update forces all Verifiable Credentials to be reissued in Microsoft Authenticator for Android. [More information](whats-new.md?#microsoft-authenticator-did-generation-update) >[!IMPORTANT] > All Azure AD Verifiable Credential customers receiving a banner notice in the Azure portal need to go through a service reconfiguration before March 31st 2022. On March 31st 2022 tenants that have not been reconfigured will lose access to any previous configuration. Administrators will have to set up a new instance of the Azure AD Verifiable Credential service. Learn more about how to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service).
-### Azure AD Verifiable Credentials available in Europe
+### Microsoft Entra Verified ID available in Europe
-Since the beginning of the Azure AD Verifiable Credentials service public preview, the service has only been available in our Azure North America region. Now, the service is also available in our Azure Europe region.
+Since the beginning of the Microsoft Entra Verified ID service public preview, the service has only been available in our Azure North America region. Now, the service is also available in our Azure Europe region.
- New customers with Azure AD European tenants now have their Verifiable Credentials data located and processed in our Azure Europe region.-- Customers with Azure AD tenants setup in Europe who start using the Azure AD Verifiable Credentials service after February 15, 2022, have their data automatically processed in Europe. There's no need to take any further actions.-- Customers with Azure AD tenants setup in Europe that started using the Azure AD Verifiable Credentials service before February 15, 2022, are required to reconfigure the service on their tenants before March 31, 2022.
+- Customers with Azure AD tenants setup in Europe who start using the Microsoft Entra Verified ID service after February 15, 2022, have their data automatically processed in Europe. There's no need to take any further actions.
+- Customers with Azure AD tenants setup in Europe that started using the Microsoft Entra Verified ID service before February 15, 2022, are required to reconfigure the service on their tenants before March 31, 2022.
Take the following steps to configure the Verifiable Credentials service in Europe:
Take the following steps to configure the Verifiable Credentials service in Euro
#### Are there any changes to the way that we use the Request API as a result of this move?
-Applications that use the Azure Active Directory Verifiable Credentials service must use the Request API endpoint that corresponds to their Azure AD tenant's region.
+Applications that use the Microsoft Entra Verified ID service must use the Request API endpoint that corresponds to their Azure AD tenant's region.
| Tenant region | Request API endpoint POST | ||-|
To confirm which endpoint you should use, we recommend checking your Azure AD te
The Azure AD Verifiable Credential service supports the [W3C Status List 2021](https://w3c-ccg.github.io/vc-status-list-2021/) standard. Each Issuer tenant now has an [Identity Hub](https://identity.foundation/identity-hub/spec/) endpoint used by verifiers to check on the status of a credential using a privacy-respecting mechanism. The identity hub endpoint for the tenant is also published in the DID document. This feature replaces the current status endpoint. To uptake this feature follow the next steps:+ 1. [Check if your tenant has the Hub endpoint](verifiable-credentials-faq.md#how-can-i-check-if-my-tenant-has-the-new-hub-endpoint). 1. If so, go to the next step. 1. If not, [reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service) in your tenant and go to the next step.
Sample contract file:
3. You have to issue new verifiable credentials using your new configuration. All verifiable credentials previously issued continue to exist. Your previous DID remains resolvable however, they use the previous status endpoint implementation. >[!IMPORTANT]
-> You have to reconfigure your Azure AD Verifiable Credential service instance to create your new Identity hub endpoint. You have until March 31st 2022, to schedule and manage the reconfiguration of your deployment. On March 31st, 2022 deployments that have not been reconfigured will lose access to any previous Azure AD Verifiable Credentials service configuration. Administrators will need to set up a new service instance.
+> You have to reconfigure your Azure AD Verifiable Credential service instance to create your new Identity hub endpoint. You have until March 31st 2022, to schedule and manage the reconfiguration of your deployment. On March 31st, 2022 deployments that have not been reconfigured will lose access to any previous Microsoft Entra Verified ID service configuration. Administrators will need to set up a new service instance.
### Microsoft Authenticator DID Generation Update
We are making protocol updates in Microsoft Authenticator to support Single Long
## December 2021 - We added [Postman collections](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/Postman) to our samples as a quick start to start using the Request Service REST API.-- New sample added that demonstrates the integration of [Azure AD Verifiable Credentials with Azure AD B2C](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/B2C).-- Sample for setting up the Azure AD Verifiable Credentials services using [PowerShell and an ARM template](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/ARM).-- Sample Verifiable Credential configuration files to show sample cards for [IDToken](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/CredentialFiles/IDToken), [IDTokenHit](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/CredentialFiles/IDTokenHint) and [Self-attested](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/CredentialFiles/IDTokenHint) claims.
+- New sample added that demonstrates the integration of [Microsoft Entra Verified ID with Azure AD B2C](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/B2C).
+- Sample for setting up the Microsoft Entra Verified ID services using [PowerShell and an ARM template](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/ARM).
+- Sample Verifiable Credential configuration files to show sample cards for [ID Token](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/CredentialFiles/IDToken), [IDTokenHint](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/CredentialFiles/IDTokenHint) and [Self-attested](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/CredentialFiles/IDTokenHint) claims.
## November 2021
Callback types enforcing rules so that URL endpoints for callbacks are reachable
## October 2021
-You can now use [Request Service REST API](get-started-request-api.md) to build applications that can issue and verify credentials from any programming language. This new REST API provides an improved abstraction layer and integration to the Azure AD Verifiable Credentials Service.
+You can now use [Request Service REST API](get-started-request-api.md) to build applications that can issue and verify credentials from any programming language. This new REST API provides an improved abstraction layer and integration to the Microsoft Entra Verified ID Service.
It's a good idea to start using the API soon, because the NodeJS SDK will be deprecated in the following months. Documentation and samples now use the Request Service REST API. For more information, see [Request Service REST API (preview)](get-started-request-api.md).
api-management Self Hosted Gateway Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-migration-guide.md
Customer must use the new Configuration API v2 by changing their deployment scri
> * DNS hostname must be resolvable to IP addresses and the corresponding IP addresses must be reachable. > This might require additional configuration in case you are using a private DNS, internal VNET, or other infrastructural requirements.
-### Meet minimal security requirements
+### Security
+
+#### Available TLS cipher suites
+
+At launch, self-hosted gateway v2.0 only used a subset of the cipher suites that v1.x was using. As of v2.0.4, we have brought back all the cipher suites that v1.x supported.
+
+You can learn more about the used cipher suites in [this article](self-hosted-gateway-overview.md#available-cipher-suites) or use v2.1.1 to [control what cipher suites to use](self-hosted-gateway-overview.md#managing-cipher-suites).
+
+#### Meet minimal security requirements
During startup, the self-hosted gateway will prepare the CA certificates that will be used. This requires the gateway container to run with at least user ID 1001, and it can't use a read-only file system.
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
This article explains how the self-hosted gateway feature of Azure API Managemen
The self-hosted gateway feature expands API Management support for hybrid and multi-cloud environments and enables organizations to efficiently and securely manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
-With the self-hosted gateway, customers have the flexibility to deploy a containerized version of the API Management gateway component to the same environments where they host their APIs. All self-hosted gateways are managed from the API Management service they are federated with, thus providing customers with the visibility and unified management experience across all internal and external APIs. Placing the gateways close to the APIs allows customers to optimize API traffic flows and address security and compliance requirements.
+With the self-hosted gateway, customers have the flexibility to deploy a containerized version of the API Management gateway component to the same environments where they host their APIs. All self-hosted gateways are managed from the API Management service they're federated with, thus providing customers with the visibility and unified management experience across all internal and external APIs. Placing the gateways close to the APIs allows customers to optimize API traffic flows and address security and compliance requirements.
Each API Management service is composed of the following key components:
We provide a variety of container images for self-hosted gateways to meet your n
| Tag convention | Recommendation | Example | Rolling tag | Recommended for production | | - | -- | - | - | - |
-| `{major}.{minor}.{patch}` | Use this tag to always to run the same version of the gateway |`2.0.0` | ❌ | ✔️ |
+| `{major}.{minor}.{patch}` | Use this tag to always run the same version of the gateway |`2.0.0` | ❌ | ✔️ |
| `v{major}` | Use this tag to always run a major version of the gateway with every new feature and patch. |`v2` | ✔️ | ❌ | | `v{major}-preview` | Use this tag if you always want to run our latest preview container image. | `v2-preview` | ✔️ | ❌ | | `latest` | Use this tag if you want to evaluate the self-hosted gateway. | `latest` | ✔️ | ❌ |
You can find a full list of available tags [here](https://mcr.microsoft.com/prod
#### Use of tags in our official deployment options
-Our deployment options in the Azure portal use the `v2` tag which allows customers to use the most recent version of the self-hosted gateway v2 container image with all feature updates and patches.
+Our deployment options in the Azure portal use the `v2` tag that allows customers to use the most recent version of the self-hosted gateway v2 container image with all feature updates and patches.
> [!NOTE] > We provide the command and YAML snippets as a reference. Feel free to use a more specific tag if you wish.
-When installing with our Helm chart, image tagging is optimized for you. The Helm chart's application version pins the gateway to a given version and does not rely on `latest`.
+When installing with our Helm chart, image tagging is optimized for you. The Helm chart's application version pins the gateway to a given version and doesn't rely on `latest`.
Learn more on how to [install an API Management self-hosted gateway on Kubernetes with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md).
Example - `v2` tag was released with `2.0.0` container image, but when `2.1.0` w
Self-hosted gateways require outbound TCP/IP connectivity to Azure on port 443. Each self-hosted gateway must be associated with a single API Management service and is configured via its management plane. A self-hosted gateway uses connectivity to Azure for: - Reporting its status by sending heartbeat messages every minute-- Regularly checking for (every 10 seconds) and applying configuration updates whenever they are available
+- Regularly checking for (every 10 seconds) and applying configuration updates whenever they're available
- Sending metrics to Azure Monitor, if configured to do so - Sending events to Application Insights, if set to do so
The self-hosted gateway v2 requires the following:
* The public IP address of the API Management instance in its primary location * The hostname of the instance's configuration endpoint: `<apim-service-name>.configuration.azure-api.net`
-Additionally, customers that use API inspector or quotas in their policies have to ensure that the following additional dependencies are accessible:
+Additionally, customers that use API inspector or quotas in their policies have to ensure that the following dependencies are accessible:
* The hostname of the instance's associated blob storage account: `<blob-storage-account-name>.blob.core.windows.net` * The hostname of the instance's associated table storage account: `<table-storage-account-name>.table.core.windows.net`
The self-hosted gateway is designed to "fail static" and can survive temporary l
When configuration backup is turned off and connectivity to Azure is interrupted: - Running self-hosted gateways will continue to function using an in-memory copy of the configuration-- Stopped self-hosted gateways will not be able to start
+- Stopped self-hosted gateways won't be able to start
When configuration backup is turned on and connectivity to Azure is interrupted:
When configuration backup is turned on and connectivity to Azure is interrupted:
When connectivity is restored, each self-hosted gateway affected by the outage will automatically reconnect with its associated API Management service and download all configuration updates that occurred while the gateway was "offline".
+## Security
+
+### Transport Layer Security (TLS)
+
+> [!IMPORTANT]
+> This overview is only applicable to the self-hosted gateway v1 & v2.
+
+#### Supported protocols
+
+The self-hosted gateway provides support for TLS v1.2 by default.
+
+Customers using custom domains can enable TLS v1.0 and/or v1.1 [in the control plane](/rest/api/apimanagement/current-ga/gateway-hostname-configuration/create-or-update).
+
+#### Available cipher suites
+
+> [!IMPORTANT]
+> This overview is only applicable to the self-hosted gateway v2.
+
+The self-hosted gateway uses the following cipher suites for both client and server connections:
+
+- `TLS_AES_256_GCM_SHA384`
+- `TLS_CHACHA20_POLY1305_SHA256`
+- `TLS_AES_128_GCM_SHA256`
+- `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`
+- `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`
+- `TLS_DHE_RSA_WITH_AES_256_GCM_SHA384`
+- `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256`
+- `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256`
+- `TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256`
+- `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`
+- `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`
+- `TLS_DHE_RSA_WITH_AES_128_GCM_SHA256`
+- `TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384`
+- `TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384`
+- `TLS_DHE_RSA_WITH_AES_256_CBC_SHA256`
+- `TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256`
+- `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256`
+- `TLS_DHE_RSA_WITH_AES_128_CBC_SHA256`
+- `TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA`
+- `TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA`
+- `TLS_DHE_RSA_WITH_AES_256_CBC_SHA`
+- `TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA`
+- `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`
+- `TLS_DHE_RSA_WITH_AES_128_CBC_SHA`
+- `TLS_RSA_WITH_AES_256_GCM_SHA384`
+- `TLS_RSA_WITH_AES_128_GCM_SHA256`
+- `TLS_RSA_WITH_AES_256_CBC_SHA256`
+- `TLS_RSA_WITH_AES_128_CBC_SHA256`
+- `TLS_RSA_WITH_AES_256_CBC_SHA`
+- `TLS_RSA_WITH_AES_128_CBC_SHA`
+
+#### Managing cipher suites
+
+As of v2.1.1, you can manage the ciphers that are used through the configuration:
+
+- `net.server.tls.ciphers.allowed-suites` allows you to define a comma-separated list of ciphers to use for the TLS connection between the API client and the self-hosted gateway.
+- `net.client.tls.ciphers.allowed-suites` allows you to define a comma-separated list of ciphers to use for the TLS connection between the self-hosted gateway and the backend.
+ ## Next steps - Learn more about [API Management in a Hybrid and Multi-Cloud World](https://aka.ms/hybrid-and-multi-cloud-api-management)
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
azure-cognitive-service-layout:
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - key={FORM_RECOGNIZER_KEY}
+ - apiKey={FORM_RECOGNIZER_KEY}
ports: - "5000" networks:
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - key={FORM_RECOGNIZER_KEY}
+ - apiKey={FORM_RECOGNIZER_KEY}
- AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 ports: - "5000:5050"
- ocrvnet azure-cognitive-service-read: container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2
+ image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-04-12
environment: - EULA=accept - billing={COMPUTER_VISION_ENDPOINT_URI}
- - key={COMPUTER_VISION_KEY}
+ - apiKey={COMPUTER_VISION_KEY}
networks: - ocrvnet
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - key={FORM_RECOGNIZER_KEY}
+ - apiKey={FORM_RECOGNIZER_KEY}
- AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 ports: - "5000:5050"
- ocrvnet azure-cognitive-service-read: container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2
+ image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-04-12
environment: - EULA=accept - billing={COMPUTER_VISION_ENDPOINT_URI}
- - key={COMPUTER_VISION_KEY}
+ - apiKey={COMPUTER_VISION_KEY}
networks: - ocrvnet
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - key={FORM_RECOGNIZER_KEY}
+ - apiKey={FORM_RECOGNIZER_KEY}
- AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000 ports: - "5000:5050"
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - key={FORM_RECOGNIZER_KEY}
+ - apiKey={FORM_RECOGNIZER_KEY}
networks: - ocrvnet
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - key={FORM_RECOGNIZER_KEY}
+ - apiKey={FORM_RECOGNIZER_KEY}
- AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 ports: - "5000:5050"
- ocrvnet azure-cognitive-service-read: container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2
+ image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-04-12
environment: - EULA=accept
- - billing={COMPUTER_VISION_ENDPOINT_URI}
- - key={COMPUTER_VISION_KEY}
+ - billing={COMPUTER_VISION_ENDPOINT_URI}
+ - apiKey={COMPUTER_VISION_KEY}
networks: - ocrvnet
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
keywords: document processing
<!-- markdownlint-disable MD034 --> # Train a custom model using the Sample Labeling tool
+>[!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
+> * The v3.0 Studio supports any model trained with v2.1 labeled data.
+> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+> * *See* our [**REST API**](quickstarts/try-v3-rest-api.md) or [**C#**](quickstarts/try-v3-csharp-sdk.md), [**Java**](quickstarts/try-v3-java-sdk.md), [**JavaScript**](quickstarts/try-v3-javascript-sdk.md), or [Python](quickstarts/try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.
+ In this article, you'll use the Form Recognizer REST API with the Sample Labeling tool to train a custom model with manually labeled data. > [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
azure-fluid-relay Container Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/container-recovery.md
++
+description: Learn how to recover container data
+ Title: Recovering Fluid data
+ Last updated : 06/22/2022++++
+# Recovering container data
+
+In this scenario, we'll be exploring data recovery. We consider data to be corrupted when the container reaches an invalid state where it can't process further user actions. The outcome of the corrupted state is that the container is unexpectedly closed. Often it's a transient state, and upon reopening, the container may behave as expected. In a situation where a container fails to load even after multiple retries, we offer APIs and flows you can use to recover your data, as described below.
+
+## How Fluid Framework and Azure Fluid Relay save state
+
+Fluid framework periodically saves state, called the summary, without any explicit backup action initiated by the user. This workflow occurs every one (1) minute if there's no user activity, or sooner if there are more than 1,000 pending ops present. Each pending op roughly translates to an individual user action (select, text input, etc.) that wasn't summarized yet.
+
+## Azure client APIs
+
+We've added the following methods to `AzureClient` that enable developers to recover data from corrupted containers.
+
+[`getContainerVersions(ID, options)`](https://fluidframework.com/docs/apis/azure-client/azureclient/#azure-client-azureclient-getcontainerversions-Method)
+
+`getContainerVersions` allows developers to view the previously generated versions of the container.
+
+[`copyContainer(ID, containerSchema)`](https://fluidframework.com/docs/apis/azure-client/azureclient/#azure-client-azureclient-copycontainer-Method)
+
+`copyContainer` allows developers to generate a new detached container from a specific version of another container.
+
+## Example recovery flow
+
+```typescript
+
+async function recoverDoc(
+ client: AzureClient,
+ orgContainerId: string,
+ containerSchema: ContainerSchema,
+): Promise<string> {
+ /* Collect doc versions */
+ let versions: AzureContainerVersion[] = [];
+ try {
+ versions = await client.getContainerVersions(orgContainerId);
+ } catch (e) {
+ return Promise.reject(new Error("Unable to get container versions."));
+ }
+
+ for (const version of versions) {
+ /* Versions are returned in chronological order.
+ Attempt to copy doc from next available version */
+ try {
+ const { container: newContainer } = await client.copyContainer(
+ orgContainerId,
+ containerSchema,
+ version,
+ );
+ return await newContainer.attach();
+ } catch (e) {
+ // Error. Keep going.
+ }
+ }
+
+ return Promise.reject(new Error("Could not recreate document"));
+}
+
+```
+
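+As a usage sketch, inside an `async` function you could call `recoverDoc` and then load the recovered container with `AzureClient.getContainer`. The variable `corruptedContainerId` is a placeholder for the ID of the container that fails to load.
+
+```typescript
+// Attempt recovery, then open the newly created copy of the container.
+const newContainerId = await recoverDoc(client, corruptedContainerId, containerSchema);
+const { container } = await client.getContainer(newContainerId, containerSchema);
+```
+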
+## Key observations
+
+### We're creating a new Container
+
+We aren't recovering (rolling back) the existing container. `copyContainer` gives us a new instance, with the data copied from the original container. In this process, the old container isn't deleted.
+
+### New Container is detached
+
+ The new container is initially in the `detached` state. We can continue working with the detached container, or immediately attach it. After calling `attach`, we get back a unique container ID representing the newly created instance.
cognitive-services Batch Anomaly Detection Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/batch-anomaly-detection-powerbi.md
In this tutorial, you'll learn how to:
## Prerequisites * An [Azure subscription](https://azure.microsoft.com/free/cognitive-services) * [Microsoft Power BI Desktop](https://powerbi.microsoft.com/get-started/), available for free.
-* An excel file (.xlsx) containing time series data points. The example data for this quickstart can be found on [GitHub](https://go.microsoft.com/fwlink/?linkid=2090962)
+* An Excel file (.xlsx) containing time series data points. The example data for this quickstart can be found on [GitHub](https://github.com/Azure-Samples/AnomalyDetector/blob/master/sampledata/example-data.xlsx)
* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector" title="Create an Anomaly Detector resource" target="_blank">create an Anomaly Detector resource </a> in the Azure portal to get your key and endpoint. * You will need the key and endpoint from the resource you create to connect your application to the Anomaly Detector API. You'll do this later in the quickstart.
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
This table lists required and optional headers for text-to-speech requests:
| `X-Microsoft-OutputFormat` | Specifies the audio output format. For a complete list of accepted values, see [Audio outputs](#audio-outputs). | Required | | `User-Agent` | The application name. The provided value must be fewer than 255 characters. | Required |
-### Audio outputs
-
-This is a list of supported audio formats that are sent in each request as the `X-Microsoft-OutputFormat` header. Each format incorporates a bit rate and encoding type. The Speech service supports 24-kHz, 16-kHz, and 8-kHz audio outputs.
-
-```output
-raw-16khz-16bit-mono-pcm riff-16khz-16bit-mono-pcm
-raw-24khz-16bit-mono-pcm riff-24khz-16bit-mono-pcm
-raw-48khz-16bit-mono-pcm riff-48khz-16bit-mono-pcm
-raw-8khz-8bit-mono-mulaw riff-8khz-8bit-mono-mulaw
-raw-8khz-8bit-mono-alaw riff-8khz-8bit-mono-alaw
-audio-16khz-32kbitrate-mono-mp3 audio-16khz-64kbitrate-mono-mp3
-audio-16khz-128kbitrate-mono-mp3 audio-24khz-48kbitrate-mono-mp3
-audio-24khz-96kbitrate-mono-mp3 audio-24khz-160kbitrate-mono-mp3
-audio-48khz-96kbitrate-mono-mp3 audio-48khz-192kbitrate-mono-mp3
-raw-16khz-16bit-mono-truesilk raw-24khz-16bit-mono-truesilk
-webm-16khz-16bit-mono-opus webm-24khz-16bit-mono-opus
-ogg-16khz-16bit-mono-opus ogg-24khz-16bit-mono-opus
-ogg-48khz-16bit-mono-opus
-```
-
-> [!NOTE]
-> If your selected voice and output format have different bit rates, the audio is resampled as necessary. You can decode the `ogg-24khz-16bit-mono-opus` format by using the [Opus codec](https://opus-codec.org/downloads/).
- ### Request body If you're using a custom neural voice, the body of a request can be sent as plain text (ASCII or UTF-8). Otherwise, the body of each `POST` request is sent as [SSML](speech-synthesis-markup.md). SSML allows you to choose the voice and language of the synthesized speech that the text-to-speech feature returns. For a complete list of supported voices, see [Language and voice support for the Speech service](language-support.md#text-to-speech).
The HTTP status code for each response indicates success or common errors:
If the HTTP status is `200 OK`, the body of the response contains an audio file in the requested format. This file can be played as it's transferred, saved to a buffer, or saved to a file.
+## Audio outputs
+
+This is a list of supported audio formats that are sent in each request as the `X-Microsoft-OutputFormat` header. Each format incorporates a bit rate and encoding type. The Speech service supports 48-kHz, 24-kHz, 16-kHz, and 8-kHz audio outputs. Prebuilt neural voices are created from samples that use a 24-kHz sample rate. All voices can upsample or downsample to other sample rates when synthesizing.
+
+|Streaming |Non-Streaming |
+|-|-|
+|audio-16khz-16bit-32kbps-mono-opus|riff-8khz-8bit-mono-alaw |
+|audio-16khz-32kbitrate-mono-mp3 |riff-8khz-8bit-mono-mulaw|
+|audio-16khz-64kbitrate-mono-mp3 |riff-8khz-16bit-mono-pcm |
+|audio-16khz-128kbitrate-mono-mp3 |riff-24khz-16bit-mono-pcm|
+|audio-24khz-16bit-24kbps-mono-opus|riff-48khz-16bit-mono-pcm|
+|audio-24khz-16bit-48kbps-mono-opus| |
+|audio-24khz-48kbitrate-mono-mp3 | |
+|audio-24khz-96kbitrate-mono-mp3 | |
+|audio-24khz-160kbitrate-mono-mp3 | |
+|audio-48khz-96kbitrate-mono-mp3 | |
+|audio-48khz-192kbitrate-mono-mp3 | |
+|ogg-16khz-16bit-mono-opus | |
+|ogg-24khz-16bit-mono-opus | |
+|ogg-48khz-16bit-mono-opus | |
+|raw-8khz-8bit-mono-alaw | |
+|raw-8khz-8bit-mono-mulaw | |
+|raw-8khz-16bit-mono-pcm | |
+|raw-16khz-16bit-mono-pcm | |
+|raw-16khz-16bit-mono-truesilk | |
+|raw-24khz-16bit-mono-pcm | |
+|raw-24khz-16bit-mono-truesilk | |
+|raw-48khz-16bit-mono-pcm | |
+|webm-16khz-16bit-mono-opus | |
+|webm-24khz-16bit-24kbps-mono-opus | |
+|webm-24khz-16bit-mono-opus | |
+
+> [!NOTE]
+> en-US-AriaNeural, en-US-JennyNeural, and zh-CN-XiaoxiaoNeural are available in public preview with 48-kHz output. Other voices support 24-kHz output upsampled to 48 kHz.
+
+> [!NOTE]
+> If your selected voice and output format have different bit rates, the audio is resampled as necessary. You can decode the `ogg-24khz-16bit-mono-opus` format by using the [Opus codec](https://opus-codec.org/downloads/).
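+As a hedged sketch of how the header is used, the following Node.js 18+/TypeScript snippet (run inside an `async` function) requests MP3 output. The region (`eastus`), the `SPEECH_KEY` environment variable name, and the voice are placeholders.
+
+```typescript
+// Request synthesized speech as 24-kHz, 96-kbps mono MP3.
+const ssml = `<speak version='1.0' xml:lang='en-US'>
+  <voice name='en-US-JennyNeural'>Hello, world.</voice>
+</speak>`;
+
+const response = await fetch("https://eastus.tts.speech.microsoft.com/cognitiveservices/v1", {
+  method: "POST",
+  headers: {
+    "Ocp-Apim-Subscription-Key": process.env.SPEECH_KEY ?? "",
+    "Content-Type": "application/ssml+xml",
+    "X-Microsoft-OutputFormat": "audio-24khz-96kbitrate-mono-mp3",
+    "User-Agent": "tts-sample-app",
+  },
+  body: ssml,
+});
+const audio = Buffer.from(await response.arrayBuffer()); // bytes in the requested format
+```
+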
+ ## Next steps - [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
cognitive-services Conversation Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/conversation-summarization.md
There's another feature in Azure Cognitive Service for Language named [document
## Submitting data
+> [!NOTE]
+> * To use conversation summarization, you must [submit an online request and have it approved](https://aka.ms/applyforconversationsummarization/).
+> * Conversation summarization is only available through Language resources in the following regions:
+> * North Europe
+> * East US
+> * UK South
+> * Conversation summarization is only available using:
+> * REST API
+> * Python
You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request and receiving the results. For information on the size and number of requests you can send per minute and second, see the data limits below. When you use this feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
description: This article lists the security alerts visible in Microsoft Defende
Previously updated : 06/21/2022 Last updated : 06/22/2022 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| **High volume of operations in a key vault**<br>(KV_OperationVolumeAnomaly) | An anomalous number of key vault operations were performed by a user, service principal, and/or a specific key vault. This anomalous activity pattern may be legitimate, but it could be an indication that a threat actor has gained access to the key vault and the secrets contained within it. We recommend further investigations. | Credential Access | Medium | | **Suspicious policy change and secret query in a key vault**<br>(KV_PutGetAnomaly) | A user or service principal has performed an anomalous Vault Put policy change operation followed by one or more Secret Get operations. This pattern is not normally performed by the specified user or service principal. This may be legitimate activity, but it could be an indication that a threat actor has updated the key vault policy to access previously inaccessible secrets. We recommend further investigations. | Credential Access | Medium | | **Suspicious secret listing and query in a key vault**<br>(KV_ListGetAnomaly) | A user or service principal has performed an anomalous Secret List operation followed by one or more Secret Get operations. This pattern is not normally performed by the specified user or service principal and is typically associated with secret dumping. This may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault and is trying to discover secrets that can be used to move laterally through your network and/or gain access to sensitive resources. We recommend further investigations. | Credential Access | Medium |
+| **Unusual access denied - User accessing high volume of key vaults denied**<br>(KV_DeniedAccountVolumeAnomaly) | A user or service principal has attempted access to anomalously high volume of key vaults in the last 24 hours. This anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access of key vault and the secrets contained within it. We recommend further investigations. | Discovery | Low |
+| **Unusual access denied - Unusual user accessing key vault denied**<br>(KV_UserAccessDeniedAnomaly) | A key vault access was attempted by a user that does not normally access it, this anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access of key vault and the secrets contained within it. | Initial Access, Discovery | Low |
| **Unusual application accessed a key vault**<br>(KV_AppAnomaly) | A key vault has been accessed by a service principal that does not normally access it. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault in an attempt to access the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
-| **Unusual operation pattern in a key vault**<br>KV_OperationPatternAnomaly) | An anomalous pattern of key vault operations was performed by a user, service principal, and/or a specific key vault. This anomalous activity pattern may be legitimate, but it could be an indication that a threat actor has gained access to the key vault and the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
+| **Unusual operation pattern in a key vault**<br>(KV_OperationPatternAnomaly) | An anomalous pattern of key vault operations was performed by a user, service principal, and/or a specific key vault. This anomalous activity pattern may be legitimate, but it could be an indication that a threat actor has gained access to the key vault and the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
| **Unusual user accessed a key vault**<br>(KV_UserAnomaly) | A key vault has been accessed by a user that does not normally access it. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault in an attempt to access the secrets contained within it. We recommend further investigations. | Credential Access | Medium | | **Unusual user-application pair accessed a key vault**<br>(KV_UserAppAnomaly) | A key vault has been accessed by a user-service principal pair that does not normally access it. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault in an attempt to access the secrets contained within it. We recommend further investigations. | Credential Access | Medium | | **User accessed high volume of key vaults**<br>(KV_AccountVolumeAnomaly) | A user or service principal has accessed an anomalously high volume of key vaults. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to multiple key vaults in an attempt to access the secrets contained within them. We recommend further investigations. | Credential Access | Medium |
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 06/23/2022 Last updated : 06/26/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in June include:
+- [General availability (GA) for Microsoft Defender for Azure Cosmos DB](#general-availability-ga-for-microsoft-defender-for-azure-cosmos-db)
+- [General availability (GA) of Defender for SQL on machines for AWS and GCP environments](#general-availability-ga-of-defender-for-sql-on-machines-for-aws-and-gcp-environments)
- [Drive implementation of security recommendations to enhance your security posture](#drive-implementation-of-security-recommendations-to-enhance-your-security-posture) - [Filter security alerts by IP address](#filter-security-alerts-by-ip-address)-- [General availability (GA) of Defender for SQL on machines for AWS and GCP environments](#general-availability-ga-of-defender-for-sql-on-machines-for-aws-and-gcp-environments) - [Alerts by resource group](#alerts-by-resource-group)-- [General availability (GA) for Microsoft Defender for Azure Cosmos DB](#general-availability-ga-for-microsoft-defender-for-azure-cosmos-db) - [Auto-provisioning of Microsoft Defender for Endpoint unified solution](#auto-provisioning-of-microsoft-defender-for-endpoint-unified-solution) - [Deprecating the "API App should only be accessible over HTTPS" policy](#deprecating-the-api-app-should-only-be-accessible-over-https-policy)
+- [New Key Vault alerts](#new-key-vault-alerts)
-### Drive implementation of security recommendations to enhance your security posture
+### General availability (GA) for Microsoft Defender for Azure Cosmos DB
-Today's increasing threats to organizations stretch the limits of security personnel to protect their expanding workloads. Security teams are challenged to implement the protections defined in their security policies.
+Microsoft Defender for Azure Cosmos DB is now generally available (GA) and supports SQL (core) API account types.
-Now with the governance experience, security teams can assign remediation of security recommendations to the resource owners and require a remediation schedule. They can have full transparency into the progress of the remediation and get notified when tasks are overdue.
+This new release to GA is a part of the Microsoft Defender for Cloud database protection suite, which includes different types of SQL databases, and MariaDB. Microsoft Defender for Azure Cosmos DB is an Azure native layer of security that detects attempts to exploit databases in your Azure Cosmos DB accounts.
-Learn more about the governance experience in [Driving your organization to remediate security issues with recommendation governance](governance-rules.md).
+By enabling this plan, you'll be alerted to potential SQL injections, known bad actors, suspicious access patterns, and potential explorations of your database through compromised identities, or malicious insiders.
-### Filter security alerts by IP address
+When potentially malicious activities are detected, security alerts are generated. These alerts provide details of suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations.
-In many cases of attacks, you want to track alerts based on the IP address of the entity involved in the attack. Up until now, the IP appeared only in the "Related Entities" section in the single alert blade. Now, you can filter the alerts in the security alerts blade to see the alerts related to the IP address, and you can search for a specific IP address.
+Microsoft Defender for Azure Cosmos DB continuously analyzes the telemetry stream generated by the Azure Cosmos DB services and crosses them with Microsoft Threat Intelligence and behavioral models to detect any suspicious activity. Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB account data and doesn't have any effect on your database's performance.
+Learn more about [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md).
+
+With the addition of support for Azure Cosmos DB, Defender for Cloud now provides one of the most comprehensive workload protection offerings for cloud-based databases. Security teams and database owners can now have a centralized experience to manage their database security of their environments.
+
+Learn how to [enable protections](enable-enhanced-security.md) for your databases.
### General availability (GA) of Defender for SQL on machines for AWS and GCP environments
Using the multicloud onboarding experience, you can enable and enforce databases
Learn how to protect and connect your [AWS environment](quickstart-onboard-aws.md) and your [GCP organization](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
+### Drive implementation of security recommendations to enhance your security posture
+
+Today's increasing threats to organizations stretch the limits of security personnel to protect their expanding workloads. Security teams are challenged to implement the protections defined in their security policies.
+
+Now with the governance experience, security teams can assign remediation of security recommendations to the resource owners and require a remediation schedule. They can have full transparency into the progress of the remediation and get notified when tasks are overdue.
+
+Learn more about the governance experience in [Driving your organization to remediate security issues with recommendation governance](governance-rules.md).
+
+### Filter security alerts by IP address
+
+In many cases of attacks, you want to track alerts based on the IP address of the entity involved in the attack. Up until now, the IP appeared only in the "Related Entities" section in the single alert blade. Now, you can filter the alerts in the security alerts blade to see the alerts related to the IP address, and you can search for a specific IP address.
++ ### Alerts by resource group The ability to filter, sort and group by resource group has been added to the Security alerts page.
You can now also group your alerts by resource group to view all of your alerts
:::image type="content" source="media/release-notes/group-by-resource.png" alt-text="Screenshot that shows how to view your alerts when they're grouped by resource group." lightbox="media/release-notes/group-by-resource.png":::
-### General availability (GA) for Microsoft Defender for Azure Cosmos DB
-
-Microsoft Defender for Azure Cosmos DB is now generally available (GA) and supports SQL (core) API account types.
-
-This new release to GA is a part of the Microsoft Defender for Cloud database protection suite, which includes different types of SQL databases, and MariaDB. Microsoft Defender for Azure Cosmos DB is an Azure native layer of security that detects attempts to exploit databases in your Azure Cosmos DB accounts.
-
-By enabling this plan, you'll be alerted to potential SQL injections, known bad actors, suspicious access patterns, and potential explorations of your database through compromised identities, or malicious insiders.
-
-When potentially malicious activities are detected, security alerts are generated. These alerts provide details of suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations.
-
-Microsoft Defender for Azure Cosmos DB continuously analyzes the telemetry stream generated by the Azure Cosmos DB services and crosses them with Microsoft Threat Intelligence and behavioral models to detect any suspicious activity. Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB account data and doesn't have any effect on your database's performance.
-
-Learn more about [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md).
-
-With the addition of support for Azure Cosmos DB, Defender for Cloud now provides one of the most comprehensive workload protection offerings for cloud-based databases. Security teams and database owners can now have a centralized experience to manage their database security of their environments.
-
-Learn how to [enable protections](enable-enhanced-security.md) for your databases.
- ### Auto-provisioning of Microsoft Defender for Endpoint unified solution Until now, the integration with Microsoft Defender for Endpoint (MDE) included automatic installation of the new [MDE unified solution](/microsoft-365/security/defender-endpoint/configure-server-endpoints?view=o365-worldwide#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution&preserve-view=true) for machines (Azure subscriptions and multicloud connectors) with Defender for Servers Plan 1 enabled, and for multicloud connectors with Defender for Servers Plan 2 enabled. Plan 2 for Azure subscriptions enabled the unified solution for Linux machines and Windows 2019 and 2022 servers only. Windows Server 2012 R2 and 2016 machines used the MDE legacy solution, which depends on the Log Analytics agent.
The policy `API App should only be accessible over HTTPS` has been deprecated. T
To learn more about policy definitions for Azure App Service, see [Azure Policy built-in definitions for Azure App Service](../azure-app-configuration/policy-reference.md).
+### New Key Vault alerts
+
+To expand the threat protections provided by Microsoft Defender for Key Vault, we've added two new alerts.
+
+These alerts inform you when an access denied anomaly is detected for any of your key vaults.
+
+| Alert (alert type) | Description | MITRE tactics | Severity |
+|--|--|--|--|
+| **Unusual access denied - User accessing high volume of key vaults denied**<br>(KV_DeniedAccountVolumeAnomaly) | A user or service principal has attempted access to an anomalously high volume of key vaults in the last 24 hours. This anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access to a key vault and the secrets contained within it. We recommend further investigation. | Discovery | Low |
+| **Unusual access denied - Unusual user accessing key vault denied**<br>(KV_UserAccessDeniedAnomaly) | A key vault access was attempted by a user that does not normally access it. This anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access to a key vault and the secrets contained within it. | Initial Access, Discovery | Low |
+ ## May 2022 Updates in May include:
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
For example, in an environment running MODBUS, you might want to generate an ale
Use custom, condition-based alert triggering and messaging to help pinpoint specific network activity and effectively update your security, IT, and operational teams. Contact [ms-horizon-support@microsoft.com](mailto:ms-horizon-support@microsoft.com) for details about working with the Open Development Environment (ODE) SDK and creating protocol plugins.
-## Extend Defender for IoT to enterprise networks
+## Protect enterprise networks
-Microsoft Defender for IoT can protect IoT and OT devices, whether they're connected to IT, OT, or dedicated IoT networks.
+<a name="enterprise"></a>Microsoft Defender for IoT can protect IoT and OT devices, whether they're connected to IT, OT, or dedicated IoT networks.
Enterprise IoT network protection extends agentless features beyond operational environments, providing coverage for all IoT devices in your environment. For example, an enterprise IoT environment may include printers, cameras, and purpose-built, proprietary devices.
event-grid Event Schema Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-blob-storage.md
These events are triggered if you enable a hierarchical namespace on the storage
> [!NOTE] > For **Azure Data Lake Storage Gen2**, if you want to ensure that the **Microsoft.Storage.BlobCreated** event is triggered only when a Block Blob is completely committed, filter the event for the `FlushWithClose` REST API call. This API call triggers the **Microsoft.Storage.BlobCreated** event only after data is fully committed to a Block Blob. To learn how to create a filter, see [Filter events for Event Grid](./how-to-filter-events.md).
+### List of the events for SFTP APIs
+
+These events are triggered if you enable a hierarchical namespace on the storage account, and clients use SFTP APIs. For more information about SFTP support for Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) in Azure Blob Storage](../storage/blobs/secure-file-transfer-protocol-support.md).
+
+|Event name|Description|
+|-|--|
+|**Microsoft.Storage.BlobCreated** |Triggered when a blob is created or overwritten. <br>Specifically, this event is triggered when clients use the `put` operation, which corresponds to the `SftpCreate` and `SftpCommit` APIs. An empty blob is created when the file is opened and the uploaded contents are committed when the file is closed.|
+|**Microsoft.Storage.BlobDeleted** |Triggered when a blob is deleted. <br>Specifically, this event is also triggered when clients call the `rm` operation, which corresponds to the `SftpRemove` API.|
+|**Microsoft.Storage.BlobRenamed**|Triggered when a blob is renamed. <br>Specifically, this event is triggered when clients use the `rename` operation on files, which corresponds to the `SftpRename` API.|
+|**Microsoft.Storage.DirectoryCreated**|Triggered when a directory is created. <br>Specifically, this event is triggered when clients use the `mkdir` operation, which corresponds to the `SftpMakeDir` API.|
+|**Microsoft.Storage.DirectoryRenamed**|Triggered when a directory is renamed. <br>Specifically, this event is triggered when clients use the `rename` operation on a directory, which corresponds to the `SftpRename` API.|
+|**Microsoft.Storage.DirectoryDeleted**|Triggered when a directory is deleted. <br>Specifically, this event is triggered when clients use the `rmdir` operation, which corresponds to the `SftpRemoveDir` API.|
+ ### List of policy-related events These events are triggered when the actions defined by a policy are performed.
If the blob storage account has a hierarchical namespace, the data looks similar
}] ```
+### Microsoft.Storage.BlobCreated event (SFTP)
+
+If the blob storage account uses SFTP to create or overwrite a blob, then the data looks similar to the previous example with the exception of these changes:
+
+* The `dataVersion` key is set to a value of `3`.
+
+* The `data.api` key is set to the string `SftpCreate` or `SftpCommit`.
+
+* The `clientRequestId` key is not included.
+
+* The `contentType` key is set to `application/octet-stream`.
+
+* The `contentOffset` key is included in the data set.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+
+> [!NOTE]
+> SFTP uploads generate two events: one `SftpCreate` for the initial empty blob created when the file is opened, and one `SftpCommit` when the file contents are written. A minimal handling sketch follows the JSON example below.
+
+```json
+[{
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/new-file.txt",
+ "eventType": "Microsoft.Storage.BlobCreated",
+ "eventTime": "2022-04-25T19:13:00.1522383Z",
+ "id": "831e1650-001e-001b-66ab-eeb76e069631",
+ "data": {
+ "api": "SftpCommit",
+ "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
+ "eTag": "\"0x8D4BCC2E4835CD0\"",
+ "contentType": "application/octet-stream",
+ "contentLength": 0,
+ "contentOffset": 0,
+ "blobType": "BlockBlob",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/new-file.txt",
+ "sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
+ "storageDiagnostics": {
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ }
+ },
+ "dataVersion": "3",
+ "metadataVersion": "1"
+}]
+```
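+
+To make the note above concrete, here's a minimal, hypothetical Python sketch (not an official sample) of an event consumer that ignores the initial empty `SftpCreate` event and reacts only to `SftpCommit`, when the file contents have been committed. The function name and the trimmed sample payload are illustrative only; the field names come from the example event above.
+
+```python
+def handle_blob_created(event: dict) -> None:
+    """Process a Microsoft.Storage.BlobCreated event delivered by Event Grid."""
+    if event.get("eventType") != "Microsoft.Storage.BlobCreated":
+        return
+    data = event.get("data", {})
+    # SFTP uploads emit two events: SftpCreate (empty blob) and SftpCommit (contents written).
+    if data.get("api") == "SftpCreate":
+        return  # skip the initial empty blob
+    print(f"Upload committed by {data.get('identity')}: {data.get('url')}")
+
+# Hypothetical usage with a trimmed version of the sample payload above.
+sample_event = {
+    "eventType": "Microsoft.Storage.BlobCreated",
+    "data": {
+        "api": "SftpCommit",
+        "identity": "localuser",
+        "url": "https://my-storage-account.blob.core.windows.net/testcontainer/new-file.txt",
+    },
+}
+handle_blob_created(sample_event)
+```
+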
+ ### Microsoft.Storage.BlobDeleted event ```json
If the blob storage account has a hierarchical namespace, the data looks similar
}] ```
+### Microsoft.Storage.BlobDeleted event (SFTP)
+
+If the blob storage account uses SFTP to delete a blob, then the data looks similar to the previous example with the exception of these changes:
+
+* The `dataVersion` key is set to a value of `2`.
+
+* The `data.api` key is set to the string `SftpRemove`.
+
+* The `clientRequestId` key is not included.
+
+* The `contentType` key is set to `application/octet-stream`.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+
+```json
+[{
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/new-file.txt",
+ "eventType": "Microsoft.Storage.BlobDeleted",
+ "eventTime": "2022-04-25T19:13:00.1522383Z",
+ "id": "831e1650-001e-001b-66ab-eeb76e069631",
+ "data": {
+ "api": "SftpRemove",
+ "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
+ "contentType": "text/plain",
+ "blobType": "BlockBlob",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/new-file.txt",
+ "sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
+ "storageDiagnostics": {
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ }
+ },
+ "dataVersion": "2",
+ "metadataVersion": "1"
+}]
+```
+ ### Microsoft.Storage.BlobTierChanged event ```json
If the blob storage account has a hierarchical namespace, the data looks similar
}] ```
+### Microsoft.Storage.BlobRenamed event (SFTP)
+
+If the blob storage account uses SFTP to rename a blob, then the data looks similar to the previous example with the exception of these changes:
+
+* The `data.api` key is set to the string `SftpRename`.
+
+* The `clientRequestId` key is not included.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+
+```json
+[{
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/my-renamed-file.txt",
+ "eventType": "Microsoft.Storage.BlobRenamed",
+ "eventTime": "2022-04-25T19:13:00.1522383Z",
+ "id": "831e1650-001e-001b-66ab-eeb76e069631",
+ "data": {
+ "api": "SftpRename",
+ "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
+ "destinationUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-renamed-file.txt",
+ "sourceUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-original-file.txt",
+ "sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
+ "storageDiagnostics": {
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ }
+ },
+ "dataVersion": "1",
+ "metadataVersion": "1"
+}]
+```
+ ### Microsoft.Storage.DirectoryCreated event ```json
If the blob storage account has a hierarchical namespace, the data looks similar
}] ```
+### Microsoft.Storage.DirectoryCreated event (SFTP)
+
+If the blob storage account uses SFTP to create a directory, then the data looks similar to the previous example with the exception of these changes:
+
+* The `dataVersion` key is set to a value of `2`.
+
+* The `data.api` key is set to the string `SftpMakeDir`.
+
+* The `clientRequestId` key is not included.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+
+```json
+[{
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/my-new-directory",
+ "eventType": "Microsoft.Storage.DirectoryCreated",
+ "eventTime": "2022-04-25T19:13:00.1522383Z",
+ "id": "831e1650-001e-001b-66ab-eeb76e069631",
+ "data": {
+ "api": "SftpMakeDir",
+ "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/my-new-directory",
+ "sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
+ "storageDiagnostics": {
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ }
+ },
+ "dataVersion": "2",
+ "metadataVersion": "1"
+}]
+```
+ ### Microsoft.Storage.DirectoryRenamed event ```json
If the blob storage account has a hierarchical namespace, the data looks similar
}] ```
+### Microsoft.Storage.DirectoryRenamed event (SFTP)
+
+If the blob storage account uses SFTP to rename a directory, then the data looks similar to the previous example with the exception of these changes:
+
+* The `data.api` key is set to the string `SftpRename`.
+
+* The `clientRequestId` key is not included.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+
+```json
+[{
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/my-renamed-directory",
+ "eventType": "Microsoft.Storage.DirectoryRenamed",
+ "eventTime": "2022-04-25T19:13:00.1522383Z",
+ "id": "831e1650-001e-001b-66ab-eeb76e069631",
+ "data": {
+ "api": "SftpRename",
+ "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
+ "destinationUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-renamed-directory",
+ "sourceUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-original-directory",
+ "sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
+ "storageDiagnostics": {
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ }
+ },
+ "dataVersion": "1",
+ "metadataVersion": "1"
+}]
+```
+ ### Microsoft.Storage.DirectoryDeleted event ```json
If the blob storage account has a hierarchical namespace, the data looks similar
}] ```
+### Microsoft.Storage.DirectoryDeleted event (SFTP)
+
+If the blob storage account uses SFTP to delete a directory, then the data looks similar to the previous example with the exception of these changes:
+
+* The `data.api` key is set to the string `SftpRemoveDir`.
+
+* The `clientRequestId` key is not included.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+
+```json
+[{
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/directory-to-delete",
+ "eventType": "Microsoft.Storage.DirectoryDeleted",
+ "eventTime": "2022-04-25T19:13:00.1522383Z",
+ "id": "831e1650-001e-001b-66ab-eeb76e069631",
+ "data": {
+ "api": "SftpRemoveDir",
+ "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/directory-to-delete",
+ "recursive": "false",
+ "sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
+ "storageDiagnostics": {
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ }
+ },
+ "dataVersion": "1",
+ "metadataVersion": "1"
+}]
+```
+ ### Microsoft.Storage.BlobInventoryPolicyCompleted event ```json
The data object has the following properties:
| `url` | string | The path to the blob. <br>If the client uses a Blob REST API, then the url has this structure: `<storage-account-name>.blob.core.windows.net\<container-name>\<file-name>`. <br>If the client uses a Data Lake Storage REST API, then the url has this structure: `<storage-account-name>.dfs.core.windows.net/<file-system-name>/<file-name>`. | | `recursive` | string | `True` to run the operation on all child directories; otherwise `False`. <br>Appears only for events triggered on blob storage accounts that have a hierarchical namespace. | | `sequencer` | string | An opaque string value representing the logical sequence of events for any particular blob name. Users can use standard string comparison to understand the relative sequence of two events on the same blob name. |
+| `identity` | string | A string value representing the identity associated with the event. For SFTP, this is the local user name.|
| `storageDiagnostics` | object | Diagnostic data occasionally included by the Azure Storage service. When present, should be ignored by event consumers. | ## Tutorials and how-tos
iot-edge How To Deploy Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-blob.md
A deployment manifest is a JSON document that describes which modules to deploy,
} ```
+ > [!NOTE]
+ > If your container target is unnamed or null within `storageContainersForUpload`, a default name will be assigned to the target. If you want to stop uploading to a container, remove it completely from `storageContainersForUpload`. For more information, see the `deviceToCloudUploadProperties` section of [Store data at the edge with Azure Blob Storage on IoT Edge](how-to-store-data-blob.md?view=iotedge-2020-11&preserve-view=true#devicetoclouduploadproperties).
+ For information on configuring deviceToCloudUploadProperties and deviceAutoDeleteProperties after your module has been deployed, see [Edit the Module Twin](https://github.com/Microsoft/vscode-azure-iot-toolkit/wiki/Edit-Module-Twin). For more information about desired properties, see [Define or update desired properties](module-composition.md#define-or-update-desired-properties). 6. Select **Add**.
load-balancer Load Balancer Floating Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-floating-ip.md
Some application scenarios prefer or require the same port to be used by multipl
If you want to reuse the backend port across multiple rules, you must enable Floating IP in the rule definition.
-When Floating IP is enabled, Azure changes the IP address mapping to the Frontend IP address of the Load Balancer frontend instead of backend instance's IP.
-
-Without Floating IP, Azure exposes the VM instances' IP. Enabling Floating IP changes the IP address mapping to the Frontend IP of the load Balancer to allow for more flexibility. Learn more [here](load-balancer-multivip-overview.md).
+When Floating IP is enabled, Azure changes the IP address mapping to the frontend IP address of the load balancer instead of the backend instance's IP, which allows for more flexibility. Without Floating IP, Azure exposes the VM instances' IP addresses. Learn more [here](load-balancer-multivip-overview.md).
Floating IP can be configured on a Load Balancer rule via the Azure portal, REST API, CLI, PowerShell, or other client. In addition to the rule configuration, you must also configure your virtual machine's Guest OS in order to use Floating IP. ## Floating IP Guest OS configuration
-For each VM in the backend pool, run the following commands at a Windows Command Prompt.
+
+In order to function, the Guest OS for the virtual machine needs to be configured to receive all traffic bound for the frontend IP and port of the load balancer. Accomplishing this requires you to:
+* add a loopback network interface
+* configure the loopback with the frontend IP address of the load balancer
+* ensure the system can send/receive packets on interfaces that don't have that IP address assigned to them (on Windows, this requires setting interfaces to use the "weak host" model; on Linux, this model is normally used by default)
+The host firewall also needs to be open to receiving traffic on the frontend IP port.
+
+> [!NOTE]
+> The examples below all use IPv4; to use IPv6, substitute "ipv6" for "ipv4". Also note that Floating IP for IPv6 does not work for Internal Load Balancers.
+
+### Windows Server
+
+<details>
+ <summary>Expand</summary>
+
+For each VM in the backend pool, run the following commands at a Windows Command Prompt on the server.
To get the list of interface names you have on your VM, type this command: ```console
-netsh interface show interface
+netsh interface ipv4 show interface
```
-For the VM NIC (Azure managed), type this command:
+For the VM NIC (Azure managed), type this command:
```console netsh interface ipv4 set interface "interfacename" weakhostreceive=enabled ```
+(replace **interfacename** with the name of this interface)
+
+For each loopback interface you added, repeat the commands below.
+
+```console
+netsh interface ipv4 add addr "loopbackinterface" floatingip floatingipnetmask
+netsh interface ipv4 set interface "loopbackinterface" weakhostreceive=enabled weakhostsend=enabled
+```
+(replace **loopbackinterface** with the name of this loopback interface and **floatingip** and **floatingipnetmask** with the appropriate values that correspond to the load balancer frontend IP)
-(replace interfacename with the name of this interface)
+Finally, if a firewall is being used on the guest host, ensure a rule is set up so that traffic can reach the VM on the appropriate ports.
-For each loopback interface you added, repeat these commands:
+A full example configuration is below (assuming a load balancer frontend IP configuration of 1.2.3.4 and a load balancing rule for port 80):
```console
-netsh interface ipv4 set interface "interfacename" weakhostreceive=enabled
+netsh int ipv4 set int "Ethernet" weakhostreceive=enabled
+netsh int ipv4 add addr "Loopback Pseudo-Interface 1" 1.2.3.4 255.255.255.0
+netsh int ipv4 set int "Loopback Pseudo-Interface 1" weakhostreceive=enabled weakhostsend=enabled
+netsh advfirewall firewall add rule name="http" protocol=TCP localport=80 dir=in action=allow enable=yes
```
+</details>
-(replace interfacename with the name of this loopback interface)
+### Ubuntu
+
+<details>
+ <summary>Expand</summary>
+
+For each VM in the backend pool, run the following commands via an SSH session.
+
+To get the list of interface names you have on your VM, type this command:
+
+```console
+ip addr
+```
+For each loopback interface, repeat this command, which assigns the floating IP to the loopback alias:
```console
-netsh interface ipv4 set interface "interfacename" weakhostsend=enabled
+sudo ip addr add floatingip/floatingipnetmask dev lo:0
```
+(replace **floatingip** and **floatingipnetmask** with the appropriate values that correspond to the load balancer frontend IP)
+
+Finally, if a firewall is being used on the guest host, ensure a rule is set up so that traffic can reach the VM on the appropriate ports.
-(replace **interfacename** with the name of this loopback interface)
+A full example configuration is below (assuming a load balancer frontend IP configuration of 1.2.3.4 and a load balancing rule for port 80). This example also assumes the use of [UFW (Uncomplicated Firewall)](https://www.wikipedia.org/wiki/Uncomplicated_Firewall) in Ubuntu.
-> [!IMPORTANT]
-> The configuration of the loopback interfaces is performed within the guest OS. This configuration is not performed or managed by Azure. Without this configuration, the rules will not function.
+```console
+sudo ip addr add 1.2.3.4/24 dev lo:0
+sudo ufw allow 80/tcp
+```
+</details>
## <a name = "limitations"></a>Limitations -- Floating IP is not currently supported on secondary IP configurations for Load Balancing scenarios. This does not apply to Public load balancers with dual-stack configurations or to architectures that utilize a NAT Gateway for outbound connectivity.
+- Floating IP isn't currently supported on secondary IP configurations for Load Balancing scenarios. This doesn't apply to Public load balancers with dual-stack configurations or to architectures that utilize a NAT Gateway for outbound connectivity.
## Next steps
netsh interface ipv4 set interface "interfacename" weakhostsend=enabled
- Learn more about [Azure Load Balancer](load-balancer-overview.md). - Learn about [Health Probes](load-balancer-custom-probe-overview.md). - Learn about [Standard Load Balancer Diagnostics](load-balancer-standard-diagnostics.md).-- Learn more about [Network Security Groups](../virtual-network/network-security-groups-overview.md).
+- Learn more about [Network Security Groups](../virtual-network/network-security-groups-overview.md).
machine-learning How To Add Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-add-users.md
Previously updated : 11/05/2021 Last updated : 06/24/2022 # Add users to your data labeling project
To add a guest user, your organization's external collaboration settings must be
:::image type="content" source="media/how-to-add-users/menu-active-directory.png" alt-text="Select Azure Active Directory from the menu."::: 1. On the left, select **Users**.
-1. At the top, select **New guest user**.
+1. At the top, select **New user**.
+1. Select **Invite external user**.
1. Fill in the name and email address for the user. 1. Add a message for the new user. 1. At the bottom of the page, select **Invite**.
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Title: Deploy MLflow models to online endpoint (preview)
+ Title: Deploy MLflow models to online endpoint
description: Learn to deploy your MLflow model as a web service that's automatically managed by Azure.
ms.devlang: azurecli
-# Deploy MLflow models to online endpoints (preview)
+# Deploy MLflow models to online endpoints
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
ms.devlang: azurecli
> * [v1](./v1/how-to-deploy-mlflow-models.md) > * [v2 (current version)](how-to-deploy-mlflow-models-online-endpoints.md)
-In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to an [online endpoint](concept-endpoints.md) (preview) for real-time inference. When you deploy your MLflow model to an online endpoint, it's a no-code-deployment so you don't have to provide a scoring script or an environment.
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to an [online endpoint](concept-endpoints.md) for real-time inference. When you deploy your MLflow model to an online endpoint, it's a no-code-deployment so you don't have to provide a scoring script or an environment.
You only provide the typical MLflow model folder contents:
For no-code-deployment, Azure Machine Learning
[!INCLUDE [clone repo & set defaults](../../includes/machine-learning-cli-prepare.md)]
-In this code snippets used in this article, the `ENDPOINT_NAME` environment variable contains the name of the endpoint to create and use. To set this, use the following command from the CLI. Replace `<YOUR_ENDPOINT_NAME>` with the name of your endpoint:
+In the code snippets used in this article, the `ENDPOINT_NAME` environment variable contains the name of the endpoint to create and use. To set it, use the following command from the CLI. Replace `<YOUR_ENDPOINT_NAME>` with the name of your endpoint:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="set_endpoint_name":::
This example shows how you can deploy an MLflow model to an online endpoint usin
# [Endpoints page](#tab/endpoint)
- 1. From the __Endpoints__ page, Select **+Create (preview)**.
+ 1. From the __Endpoints__ page, Select **+Create**.
:::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" alt-text="Screenshot showing create option on the Endpoints UI page.":::
This example shows how you can deploy an MLflow model to an online endpoint usin
# [Models page](#tab/models)
- 1. Select the MLflow model, and then select __Deploy__. When prompted, select __Deploy to real-time endpoint (preview)__.
+ 1. Select the MLflow model, and then select __Deploy__. When prompted, select __Deploy to real-time endpoint__.
:::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/deploy-from-models-ui.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/deploy-from-models-ui.png" alt-text="Screenshot showing how to deploy model from Models UI":::
This section helps you understand how to deploy models to an online endpoint onc
To learn more, review these articles: -- [Deploy models with REST (preview)](how-to-deploy-with-rest.md)-- [Create and use online endpoints (preview) in the studio](how-to-use-managed-online-endpoint-studio.md)-- [Safe rollout for online endpoints (preview)](how-to-safely-rollout-managed-endpoints.md)
+- [Deploy models with REST](how-to-deploy-with-rest.md)
+- [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)-- [Use batch endpoints (preview) for batch scoring](how-to-use-batch-endpoint.md)-- [View costs for an Azure Machine Learning managed online endpoint (preview)](how-to-view-online-endpoints-costs.md)-- [Access Azure resources with an online endpoint and managed identity (preview)](how-to-access-resources-from-endpoints-managed-identities.md)
+- [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md)
+- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
+- [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
- [Troubleshoot online endpoint deployment](how-to-troubleshoot-managed-online-endpoints.md)
machine-learning How To Manage Models Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models-mlflow.md
+
+ Title: Manage model registries in Azure Machine Learning with MLflow
+
+description: Explains how to use MLflow for managing models in Azure Machine Learning
+++++ Last updated : 06/08/2022++++
+# Manage model registries in Azure Machine Learning with MLflow
+
+Azure Machine Learning supports MLflow for model management. This represents a convenient way to support the entire model lifecycle for users familiar with the MLflow client. This article describes the different capabilities and how they compare with other options.
+
+## Support matrix for managing models with MLflow
+
+The MLflow client exposes several methods to retrieve and manage models. The following table shows which of those methods are currently supported in MLflow when connected to Azure ML. It also compares them with other model management capabilities in Azure ML.
+
+| Feature | MLflow | Azure ML with MLflow | Azure ML CLIv2 | Azure ML Studio |
+| :- | :-: | :-: | :-: | :-: |
+| Registering models in MLflow format | **&check;** | **&check;** | **&check;** | **&check;** |
+| Registering models not in MLflow format | | | **&check;** | **&check;** |
+| Registering models from runs outputs/artifacts | **&check;** | **&check;**<sup>1</sup> | **&check;**<sup>2</sup> | **&check;** |
+| Listing registered models | **&check;** | **&check;** | **&check;** | **&check;** |
+| Retrieving details of registered model's versions | **&check;** | **&check;** | **&check;** | **&check;** |
+| Editing registered model's versions description | **&check;** | **&check;** | **&check;** | **&check;** |
+| Editing registered model's versions tags | **&check;** | **&check;** | **&check;** | **&check;** |
+| Renaming registered models | **&check;** | <sup>3</sup> | <sup>3</sup> | <sup>3</sup> |
+| Deleting a registered model (container) | **&check;** | <sup>3</sup> | <sup>3</sup> | <sup>3</sup> |
+| Deleting a registered model's version | **&check;** | **&check;** | **&check;** | **&check;** |
+| Manage MLflow model stages | **&check;** | **&check;** | | |
+| Search registered models by name | **&check;** | **&check;** | **&check;** | **&check;**<sup>4</sup> |
+| Search registered models using string comparators `LIKE` and `ILIKE` | **&check;** | | | **&check;**<sup>4</sup> |
+| Search registered models by tag | | | | **&check;**<sup>4</sup> |
+
+> [!NOTE]
+> - <sup>1</sup> Use URIs with format `runs:/<run-id>/<path>`.
+> - <sup>2</sup> Use URIs with format `azureml://jobs/<job-id>/outputs/artifacts/<path>`.
+> - <sup>3</sup> Registered models are immutable objects in Azure ML.
+> - <sup>4</sup> Use search box in Azure ML Studio. Partial match supported.
+
+## Registering new models in the registry
+
+### Creating models from an existing run
+
+If you have an MLflow model logged inside of a run and you want to register it in a registry, you can do that by using the run ID and the path where the model was logged. See [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md) to learn how to query this information if you don't have it.
+
+```python
+mlflow.register_model(f"runs:/{run_id}/{artifact_path}", model_name)
+```
+
+### Creating models from assets
+
+If you have a folder with an MLModel MLflow model, then you can register it directly. There's no need for the model to always be in the context of a run. To do that, you can use the URI schema `file://path/to/model` to register MLflow models stored in the local file system. Let's create a simple model using `Scikit-Learn` and save it in MLflow format in local storage:
+
+```python
+from sklearn import linear_model
+
+reg = linear_model.LinearRegression()
+reg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
+
+mlflow.sklearn.save_model(reg, "./regressor")
+```
+
+> [!TIP]
+> The method `save_model` works in the same way as `log_model`. While the latter requires an MLflow run to be active so the model can be logged there, the former uses the local file system to stage the model's artifacts.
+
+You can now register the model from the local path:
+
+```python
+import os
+
+model_local_path = os.path.abspath("./regressor")
+mlflow.register_model(f"file://{model_local_path}", "local-model-test")
+```
+
+> [!NOTE]
+> Notice how the model URI schema `file:/` requires absolute paths.
+
+## Querying models
+
+### Querying all the models in the registry
+
+You can query all the registered models in the registry using the MLflow client with the method `list_registered_models`. The MLflow client is required to do all these operations.
+
+```python
+import mlflow
+
+client = mlflow.tracking.MlflowClient()
+```
+
+The following sample prints the names of all registered models:
+
+```python
+for model in client.list_registered_models():
+ print(f"{model.name}")
+```
+
+### Getting specific versions of the model
+
+The command above retrieves the model object, which contains all the model versions. However, if you want to get the latest registered version of a given model, you can use `get_registered_model`:
+
+```python
+client.get_registered_model(model_name)
+```
+
+If you need a specific version of the model, you can specify it:
+
+```python
+client.get_model_version(model_name, version=2)
+```
+
+## Model stages
+
+MLflow supports model stages to manage a model's lifecycle. A model version can transition from one stage to another. Stages are assigned to a model's version (instead of to the model), which means that a given model can have multiple versions in different stages.
+
+> [!IMPORTANT]
+> Stages can only be accessed using the MLflow SDK. They don't show up in the [Azure ML Studio portal](https://ml.azure.com) and can't be retrieved using the Azure ML SDK, Azure ML CLI, or Azure ML REST API. Creating a deployment from a given model's stage is not supported at the moment.
+
+### Querying model stages
+
+You can use the MLflow client to check all the possible stages a model version can be in:
+
+```python
+client.get_model_version_stages(model_name, version="latest")
+```
+
+You can see which model version is in each stage by getting the model from the registry. The following example gets the model version currently in the `Staging` stage.
+
+> [!WARNING]
+> Stage names are case sensitive.
+
+```python
+client.get_latest_versions(model_name, stages=["Staging"])
+```
+
+> [!NOTE]
+> Multiple versions can be in the same stage at the same time in MLflow; however, this method returns the latest version (highest version number) among them.
+
+### Transitioning models
+
+Transitioning a model's version to a particular stage can be done using the MLflow client.
+
+```python
+client.transition_model_version_stage(model_name, version=3, stage="Staging")
+```
+
+By default, if there's an existing model version in that particular stage, it remains there. It isn't replaced, because multiple model versions can be in the same stage at the same time. Alternatively, you can indicate `archive_existing_versions=True` to tell MLflow to move the existing model version to the `Archived` stage.
+
+```python
+client.transition_model_version_stage(
+ model_name, version=3, stage="Staging", archive_existing_versions=True
+)
+```
+
+### Loading models from stages
+
+You can load a model in a particular stage directly from Python using the `load_model` function and the following URI format. Notice that for this method to succeed, you need to have all the libraries and dependencies already installed in the environment you're working in.
+
+```python
+model = mlflow.pyfunc.load_model(f"models:/{model_name}/Staging")
+```
+
+## Editing and deleting models
+
+Editing registered models is supported in both MLflow and Azure ML; however, there are some important differences between them to note:
+
+> [!WARNING]
+> Renaming models is not supported in Azure Machine Learning as model objects are immutable.
+
+### Editing models
+
+You can edit a model's description and tags using MLflow:
+
+```python
+client.update_model_version(model_name, version=1, description="My classifier description")
+```
+
+To edit tags, use the methods `set_model_version_tag` and `delete_model_version_tag`:
+
+```python
+client.set_model_version_tag(model_name, version="1", key="type", value="classification")
+```
+
+Removing a tag:
+
+```python
+client.delete_model_version_tag(model_name, version="1", key="type")
+```
+
+### Deleting a model's version
+
+You can delete any model version in the registry using the MLflow client, as demonstrated in the following example:
+
+```python
+client.delete_model_version(model_name, version="2")
+```
+
+> [!NOTE]
+> Azure Machine Learning doesn't support deleting the entire model container. To achieve the same thing, you will need to delete all the model versions from a given model.
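+
+As a hedged illustration of the note above, the following sketch removes a registered model by deleting each of its versions with the MLflow client. It uses the client's `search_model_versions` method (not shown elsewhere in this article) to enumerate the versions; the model name is a placeholder.
+
+```python
+import mlflow
+
+client = mlflow.tracking.MlflowClient()
+model_name = "local-model-test"  # placeholder; use your registered model's name
+
+# Azure ML has no single "delete the model container" operation,
+# so each registered version is deleted individually.
+for mv in client.search_model_versions(f"name='{model_name}'"):
+    client.delete_model_version(mv.name, version=mv.version)
+```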
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
+
+ Title: Manage experiments and runs with MLflow
+
+description: Explains how to use MLflow for managing experiments and runs in Azure ML
+++++ Last updated : 06/08/2022++++
+# Manage experiments and runs with MLflow
+
+Experiments and runs in Azure Machine Learning can be queried using the MLflow client. This removes the need for any Azure ML-specific SDKs to manage anything that happens inside of a training job, allowing you to remove dependencies and creating a more seamless transition between local runs and the cloud.
+
+> [!NOTE]
+> The Azure Machine Learning Python SDK v2 (preview) does not provide native logging or tracking capabilities. This applies not just to logging but also to querying the metrics logged. Instead, we recommend using MLflow to manage experiments and runs. This article explains how to use MLflow to manage experiments and runs in Azure ML.
+
+The MLflow client allows you to:
+
+* Create, delete and search for experiments in a workspace
+* Start, stop, cancel and query runs for experiments.
+* Track and retrieve metrics, parameters, artifacts and models from runs.
+
+In this article, you'll learn how to manage experiments and runs in your workspace using Azure ML and MLflow SDK in Python.
+
+## Using MLflow SDK in Azure ML
+
+Use MLflow to query and manage all the experiments in Azure Machine Learning. The MLflow SDK has capabilities to query everything that happens inside of a training job in Azure Machine Learning.
+
+### Prerequisites
+
+* Install the `azureml-mlflow` plug-in.
+* If you're running on compute not hosted in Azure ML, configure MLflow to point to the Azure ML MLflow tracking URI. You can follow the instructions at [Track runs from your local machine](how-to-use-mlflow-cli-runs.md#track-runs-from-your-local-machine); a minimal configuration sketch follows this list.
+
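+For example, a minimal sketch of pointing MLflow at an Azure ML workspace from local compute is shown below. The tracking URI is a placeholder that you'd replace with the value obtained as described in the linked article.
+
+```python
+import mlflow
+
+# Placeholder value; retrieve your workspace's MLflow tracking URI as described
+# in "Track runs from your local machine".
+azureml_tracking_uri = "azureml://<region>.api.azureml.ms/mlflow/v1.0/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>"
+mlflow.set_tracking_uri(azureml_tracking_uri)
+
+print(mlflow.get_tracking_uri())
+```
+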
+### Support matrix for querying runs and experiments
+
+The MLflow client exposes several methods to retrieve runs, including options to control what is returned and how. Use the following table to learn about which of those methods are currently supported in MLflow when connected to Azure Machine Learning:
+
+| Feature | Supported by MLflow | Supported by Azure ML |
+| :- | :-: | :-: |
+| Ordering runs by run fields (like `start_time`, `end_time`, etc) | **&check;** | **&check;** |
+| Ordering runs by attributes | **&check;** | <sup>1</sup> |
+| Ordering runs by metrics | **&check;** | <sup>1</sup> |
+| Ordering runs by parameters | **&check;** | <sup>1</sup> |
+| Ordering runs by tags | **&check;** | <sup>1</sup> |
+| Filtering runs by run fields (like `start_time`, `end_time`, etc) | | <sup>1</sup> |
+| Filtering runs by attributes | **&check;** | <sup>1</sup> |
+| Filtering runs by metrics | **&check;** | **&check;** |
+| Filtering runs by metrics with special characters (escaped) | **&check;** | |
+| Filtering runs by parameters | **&check;** | **&check;** |
+| Filtering runs by tags | **&check;** | **&check;** |
+| Filtering runs with numeric comparators (metrics) including `=`, `!=`, `>`, `>=`, `<`, and `<=` | **&check;** | **&check;** |
+| Filtering runs with string comparators (params, tags, and attributes): `=` and `!=` | **&check;** | **&check;**<sup>2</sup> |
+| Filtering runs with string comparators (params, tags, and attributes): `LIKE`/`ILIKE` | **&check;** | |
+| Filtering runs with comparators `AND` | **&check;** | **&check;** |
+| Filtering runs with comparators `OR` | **&check;** | |
+
+> [!NOTE]
+> - <sup>1</sup> Check the section [Getting runs inside an experiment](#getting-runs-inside-an-experiment) for instructions and examples on how to achieve the same functionality in Azure ML.
+> - <sup>2</sup> `!=` for tags not supported
+
+## Getting all the experiments
+
+You can get all the active experiments in the workspace using MLflow:
+
+ ```python
+ experiments = mlflow.list_experiments()
+ for exp in experiments:
+ print(exp.name)
+ ```
+
+If you want to retrieve archived experiments too, then include the option `ViewType.ALL` in the `view_type` argument. The following sample shows how:
+
+ ```python
+ from mlflow.entities import ViewType
+
+ experiments = mlflow.list_experiments(view_type=ViewType.ALL)
+ for exp in experiments:
+ print(exp.name)
+ ```
+
+## Getting a specific experiment
+
+Details about a specific experiment can be retrieved using the `get_experiment_by_name` method:
+
+ ```python
+ exp = mlflow.get_experiment_by_name(experiment_name)
+ print(exp)
+ ```
+
+## Getting runs inside an experiment
+
+MLflow allows searching runs inside of any experiment, including multiple experiments at the same time. By default, MLflow returns the data in Pandas `DataFrame` format, which makes it handy when doing further processing or analysis of the runs. Returned data includes columns with:
+
+- Basic information about the run.
+- Parameters with column's name `params.<parameter-name>`.
+- Metrics (last logged value of each) with column's name `metrics.<metric-name>`.
+
+### Getting all the runs from an experiment
+
+By experiment name:
+
+ ```python
+ mlflow.search_runs(experiment_names=[ "my_experiment" ])
+ ```
+By experiment id:
+
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ])
+ ```
+
+> [!TIP]
+> Notice that `experiment_ids` supports providing an array of experiments, so you can search runs across multiple experiments if required. This may be useful in case you want to compare runs of the same model when it is being logged in different experiments (by different people, different project iterations, etc). You can also use `search_all_experiments=True` if you want to search across all the experiments in the workspace.
+
+Another important point to notice is that when returning runs, all metrics and parameters are also returned for them. However, for metrics containing multiple values (for instance, a loss curve, or a PR curve), only the last value of the metric is returned. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method.
+
+### Ordering runs
+
+By default, runs are ordered descending by `start_time`, which is the time the run was queued in Azure ML. However, you can change this default by using the parameter `order_by`.
+
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ], order_by=["start_time DESC"])
+ ```
+
+Use the argument `max_results` from `search_runs` to limit the number of runs returned. For instance, the following example returns the last run of the experiment:
+
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ], max_results=1, order_by=["start_time DESC"])
+ ```
+
+> [!WARNING]
+> Using `order_by` with expressions containing `metrics.*` in the parameter `order_by` is not supported at the moment. Please use the `sort_values` method from Pandas as shown in the next example.
+
+You can also order by metrics to know which run generated the best results:
+
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ]).sort_values("metrics.accuracy", ascending=False)
+ ```
+
+### Filtering runs
+
+You can also look for a run with a specific combination of hyperparameters using the parameter `filter_string`. Use `params` to access a run's parameters and `metrics` to access metrics logged in the run:
+
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string="params.num_boost_round='100'")
+ ```
+
+### Filter runs by status
+
+You can also filter runs by status. This is useful for finding runs that are running, completed, canceled, or failed. In MLflow, `status` is an `attribute`, so we can access this value using the expression `attributes.status`. The following table shows the possible values:
+
+| Azure ML Job status | MLFlow's `attributes.status` | Meaning |
+| :-: | :-: | :- |
+| Not started | `SCHEDULED` | The job/run was just registered in Azure ML but it hasn't been processed yet. |
+| Queued | `SCHEDULED` | The job/run is scheduled for running, but it hasn't started yet. |
+| Preparing | `SCHEDULED` | The job/run hasn't started yet, but a compute has been allocated for the execution and it is in a building state. |
+| Running | `RUNNING` | The job/run is currently under active execution. |
+| Completed | `FINISHED` | The job/run has completed without errors. |
+| Failed | `FAILED` | The job/run has completed with errors. |
+| Canceled | `KILLED` | The job/run has been canceled or killed by the user/system. |
+
+> [!WARNING]
+> Expressions containing `attributes.status` in the parameter `filter_string` are not supported at the moment. Please use Pandas filtering expressions as shown in the next example.
+
+The following example shows all the runs that have been completed:
+
+ ```python
+ runs = mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ])
+ runs[runs.status == "FINISHED"]
+ ```
+
+## Accessing runs details
+
+By default, MLflow returns runs as a Pandas `DataFrame`. If needed, you can get Python objects instead, which may be useful for getting details about them, by specifying the `output_format` parameter:
+
+ ```python
+ runs = mlflow.search_runs(
+ experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string="params.num_boost_round='100'",
+ output_format="list",
+ )
+ ```
+Details can then be accessed from the `info` member. The following sample shows how to get the `run_id`:
+
+ ```python
+ last_run = runs[-1]
+ print("Last run ID:", last_run.info.run_id)
+ ```
+
+### Getting params and metrics from a run
+
+When runs are returned using `output_format="list"`, you can easily access parameters using the key `data`:
+
+ ```python
+ last_run.data.params
+ ```
+
+In the same way, you can query metrics:
+
+ ```python
+ last_run.data.metrics
+ ```
+For metrics that contain multiple values (for instance, a loss curve, or a PR curve), only the last logged value of the metric is returned. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method. This method requires you to use the `MlflowClient`:
+
+ ```python
+ client = mlflow.tracking.MlflowClient()
+ client.get_metric_history("1234-5678-90AB-CDEFG", "log_loss")
+ ```
+
+### Getting artifacts from a run
+
+Any artifact logged by a run can be queried with MLflow. Artifacts can't be accessed using the run object itself; the MLflow client should be used instead:
+
+ ```python
+ client = mlflow.tracking.MlflowClient()
+ client.list_artifacts("1234-5678-90AB-CDEFG")
+ ```
+
+The method above will list all the artifacts logged in the run, but they will remain stored in the artifacts store (Azure ML storage). To download any of them, use the method `download_artifacts`:
+
+ ```python
+ file_path = client.download_artifacts("1234-5678-90AB-CDEFG", path="feature_importance_weight.png")
+ ```
+
+### Getting models from a run
+
+Models can also be logged in the run and then retrieved directly from it. To retrieve a model, you need to know the artifact path where it is stored. The method `list_artifacts` can be used to find artifacts that represent a model, since MLflow models are always folders. You can download a model by indicating the path where the model is stored using the `download_artifacts` method:
+
+ ```python
+ artifact_path="classifier"
+ model_local_path = client.download_artifacts("1234-5678-90AB-CDEFG", path=artifact_path)
+ ```
+
+You can then load the model back from the downloaded artifacts using the typical function `load_model`:
+
+ ```python
+ model = mlflow.xgboost.load_model(model_local_path)
+ ```
+> [!NOTE]
+> In the example above, we are assuming the model was created using `xgboost`. Change it to the flavor that applies to your case.
+
+MLflow also allows you to do both operations at once, downloading and loading the model in a single instruction. MLflow will download the model to a temporary folder and load it from there. This can be done using the `load_model` method, which uses a URI format to indicate where the model has to be retrieved from. In the case of loading a model from a run, the URI structure is as follows:
+
+ ```python
+ model = mlflow.xgboost.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}")
+ ```
+
+## Getting child (nested) runs information
+
+MLflow supports the concept of child (nested) runs. They are useful when you need to spin off training routines that must be tracked independently from the main training process. Hyperparameter tuning is a typical example. You can query all the child runs of a specific run using the tag `mlflow.parentRunId`, which contains the run ID of the parent run.
+
+```python
+hyperopt_run = mlflow.last_active_run()
+child_runs = mlflow.search_runs(
+ filter_string=f"tags.mlflow.parentRunId='{hyperopt_run.info.run_id}'"
+)
+```
+
+## Example notebooks
+
+The [MLflow with Azure ML notebooks](https://github.com/Azure/azureml-examples/tree/master/notebooks/using-mlflow) demonstrate and expand upon concepts presented in this article.
+
+ * [Training and tracking a classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments using MLflow, log models and combine multiple flavors into pipelines.
+ * [Manage experiments and runs with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/run-history/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters and artifacts from Azure ML using MLflow.
+
+## Next steps
+
+* [Manage your models with MLflow](how-to-manage-models.md).
+* [Deploy models with MLflow](how-to-deploy-mlflow-models.md).
mariadb Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concept-reserved-pricing.md
Title: Prepay for compute with reserved capacity - Azure Database for MariaDB description: Prepay for Azure Database for MariaDB compute resources with reserved capacity+ - Previously updated : 05/20/2020 Last updated : 06/24/2022 # Prepay for Azure Database for MariaDB compute resources with reserved capacity
You can buy Azure Database for MariaDB reserved capacity in the [Azure portal](h
For details on how enterprise customers and Pay-As-You-Go customers are charged for reservation purchases, see [understand Azure reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) and [understand Azure reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reserved-instance-usage.md). - ## Determine the right server size before purchase The size of the reservation should be based on the total amount of compute used by the existing or soon-to-be-deployed database instances within a specific region that use the same performance tier and hardware generation.</br> For example, let's suppose that you are running one general purpose, Gen5 – 32 vCore MariaDB database, and two memory optimized, Gen5 – 16 vCore MariaDB databases. Further, let's suppose that you plan to deploy within the next month an additional general purpose, Gen5 – 32 vCore database server, and one memory optimized, Gen5 – 16 vCore database server. Let's suppose that you know that you will need these resources for at least 1 year. In this case, you should purchase a 64 (2x32) vCore, 1-year reservation for single database general purpose - Gen5 and a 48 (2x16 + 16) vCore, 1-year reservation for single database memory optimized - Gen5
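A rough sketch of that sizing arithmetic, using only the numbers from the example above:

```python
# vCores per tier, taken from the sizing example above
general_purpose_vcores = 32 + 32          # one existing + one planned Gen5 32 vCore server
memory_optimized_vcores = (2 * 16) + 16   # two existing + one planned Gen5 16 vCore servers

print(f"General purpose Gen5 reservation: {general_purpose_vcores} vCores")    # 64
print(f"Memory optimized Gen5 reservation: {memory_optimized_vcores} vCores")  # 48
```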
For example, let's suppose that you are running one general purpose, Gen5 ΓÇô 32
3. Select **Add** and then in the Purchase reservations pane, select **Azure Database for MariaDB** to purchase a new reservation for your MariaDB databases. 4. Fill-in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of your Azure Database for MariaDB servers that get the discount depend on the scope and quantity selected. - ![Overview of reserved pricing](media/concepts-reserved-pricing/mariadb-reserved-price.png) - The following table describes required fields. | Field | Description |
You can cancel, exchange, or refund reservations with certain limitations. For m
## vCore size flexibility
-vCore size flexibility helps you scale up or down within a performance tier and region, without losing the reserved capacity benefit.
+vCore size flexibility helps you scale up or down within a performance tier and region, without losing the reserved capacity benefit.
## Need help? Contact us
To learn more about Azure Reservations, see the following articles:
* [Understand Azure Reservations discount](../cost-management-billing/reservations/understand-reservation-charges.md) * [Understand reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reservation-charges-mariadb.md) * [Understand reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
-* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
+* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
mariadb Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-audit-logs.md
Title: Audit logs - Azure Database for MariaDB description: Describes the audit logs available in Azure Database for MariaDB, and the available parameters for enabling logging levels.+ - Previously updated : 6/24/2020 Last updated : 06/24/2022 # Audit Logs in Azure Database for MariaDB
Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, yo
| where Category == 'MySqlAuditLogs'
| project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s, ip_s, sql_text_s
| order by TimeGenerated asc nulls last
- ```
+ ```
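
If you'd rather run this kind of query programmatically, the following is a minimal sketch using the `azure-monitor-query` and `azure-identity` packages; the workspace ID is a placeholder, and it assumes the audit logs land in the `AzureDiagnostics` table of your Log Analytics workspace.

```python
# Minimal sketch: query MariaDB audit logs in a Log Analytics workspace from Python.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
AzureDiagnostics
| where Category == 'MySqlAuditLogs'
| project TimeGenerated, LogicalServerName_s, event_class_s, user_s, ip_s, sql_text_s
| order by TimeGenerated asc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```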
## Next steps -- [How to configure audit logs in the Azure portal](howto-configure-audit-logs-portal.md)
+- [How to configure audit logs in the Azure portal](howto-configure-audit-logs-portal.md)
mariadb Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-azure-advisor-recommendations.md
Title: Azure Advisor for MariaDB description: Learn about Azure Advisor recommendations for MariaDB.+ - Previously updated : 04/12/2021 Last updated : 06/24/2022 # Azure Advisor for MariaDB+ Learn about how Azure Advisor is applied to Azure Database for MariaDB and get answers to common questions. ## What is Azure Advisor for MariaDB?
-The Azure Advisor system uses telemetry to issue performance and reliability recommendations for your MariaDB database.
+
+The Azure Advisor system uses telemetry to issue performance and reliability recommendations for your MariaDB database.
Some recommendations are common to multiple product offerings, while other recommendations are based on product-specific optimizations. ## Where can I view my recommendations?+ Recommendations are available from the **Overview** navigation sidebar in the Azure portal. A preview will appear as a banner notification, and details can be viewed in the **Notifications** section located just below the resource usage graphs. :::image type="content" source="./media/concepts-azure-advisor-recommendations/advisor-example.png" alt-text="Screenshot of the Azure portal showing an Azure Advisor recommendation."::: ## Recommendation types+ Azure Database for MariaDB prioritize the following types of recommendations: * **Performance**: To improve the speed of your MariaDB server. This includes CPU usage, memory pressure, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../advisor/advisor-performance-recommendations.md). * **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limit and connection limit recommendations. For more information, see [Advisor Reliability recommendations](../advisor/advisor-high-availability-recommendations.md). * **Cost**: To optimize and reduce your overall Azure spending. This includes server right-sizing recommendations. For more information, see [Advisor Cost recommendations](../advisor/advisor-cost-recommendations.md). ## Understanding your recommendations+ * **Daily schedule**: For Azure MariaDB databases, we check server telemetry and issue recommendations on a daily schedule. If you make a change to your server configuration, existing recommendations will remain visible until we re-examine telemetry on the following day. * **Performance history**: Some of our recommendations are based on performance history. These recommendations will only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (e.g. high CPU activity or high connection volume) over a sustained time period. If you provision a new server or change to a new vCore configuration, these recommendations will be paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, this also means that performance history-based recommendations may not be identified immediately. ## Next steps+ For more information, see [Azure Advisor Overview](../advisor/advisor-overview.md).
mariadb Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-backup.md
Title: Backup and restore - Azure Database for MariaDB description: Learn about automatic backups and restoring your Azure Database for MariaDB server.+ - Previously updated : 8/13/2020 Last updated : 06/24/2022 # Backup and restore in Azure Database for MariaDB
Azure Database for MariaDB automatically creates server backups and stores them
Azure Database for MariaDB takes backups of the data files and the transaction log. These backups allow you to restore a server to any point-in-time within your configured backup retention period. The default backup retention period is seven days. You can [optionally configure it](howto-restore-server-portal.md#set-backup-configuration) up to 35 days. All backups are encrypted using AES 256-bit encryption.
-These backup files are not user-exposed and cannot be exported. These backups can only be used for restore operations in Azure Database for MariaDB. You can use [mysqldump](howto-migrate-dump-restore.md) to copy a database.
+These backup files aren't user-exposed and can't be exported. These backups can only be used for restore operations in Azure Database for MariaDB. You can use [mysqldump](howto-migrate-dump-restore.md) to copy a database.
The backup type and frequency depend on the backend storage for the servers.
Transaction log backups occur every five minutes.
#### General purpose storage servers with up to 4-TB storage
-The General purpose storage is the backend storage supporting [General Purpose](concepts-pricing-tiers.md) and [Memory Optimized tier](concepts-pricing-tiers.md) server. For servers with general purpose storage up to 4 TB, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes. The backups on general purpose storage up to 4-TB storage are not snapshot-based and consumes IO bandwidth at the time of backup. For large databases (> 1 TB) on 4-TB storage, we recommend you consider
+General purpose storage is the backend storage supporting [General Purpose](concepts-pricing-tiers.md) and [Memory Optimized tier](concepts-pricing-tiers.md) servers. For servers with general purpose storage up to 4 TB, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes. The backups on general purpose storage up to 4-TB storage aren't snapshot-based and consume IO bandwidth at the time of backup. For large databases (> 1 TB) on 4-TB storage, we recommend you consider
- Provisioning more IOPs to account for backup IOs OR-- Alternatively, migrate to general purpose storage that supports up to 16-TB storage if the underlying storage infrastructure is available in your preferred [Azure regions](./concepts-pricing-tiers.md#storage). There is no additional cost for general purpose storage that supports up to 16-TB storage. For assistance with migration to 16-TB storage, please open a support ticket from Azure portal.
+- Alternatively, migrate to general purpose storage that supports up to 16-TB storage if the underlying storage infrastructure is available in your preferred [Azure regions](./concepts-pricing-tiers.md#storage). There's no additional cost for general purpose storage that supports up to 16-TB storage. For assistance with migration to 16-TB storage, open a support ticket from the Azure portal.
#### General purpose storage servers with up to 16-TB storage
-In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support general purpose storage up to 16-TB storage. In other words, storage up to 16-TB storage is the default general purpose storage for all the [regions](concepts-pricing-tiers.md#storage) where it is supported. Backups on these 16-TB storage servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only.
+In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support general purpose storage up to 16-TB storage. In other words, storage up to 16-TB storage is the default general purpose storage for all the [regions](concepts-pricing-tiers.md#storage) where it's supported. Backups on these 16-TB storage servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only.
-Differential snapshot backups occur at least once a day. Differential snapshot backups do not occur on a fixed schedule. Differential snapshot backups occur every 24 hours unless the transaction log (binlog in MariaDB) exceeds 50 GB since the last differential backup. In a day, a maximum of six differential snapshots are allowed.
+Differential snapshot backups occur at least once a day. Differential snapshot backups don't occur on a fixed schedule. Differential snapshot backups occur every 24 hours unless the transaction log (binlog in MariaDB) exceeds 50 GB since the last differential backup. In a day, a maximum of six differential snapshots are allowed.
Transaction log backups occur every five minutes.
-
### Backup retention
-Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is 7 days. You can set the retention period during server creation or later by updating the backup configuration using [Azure portal](howto-restore-server-portal.md#set-backup-configuration) or [Azure CLI](howto-restore-server-cli.md#set-backup-configuration).
+Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is seven days. You can set the retention period during server creation or later by updating the backup configuration using [Azure portal](howto-restore-server-portal.md#set-backup-configuration) or [Azure CLI](howto-restore-server-cli.md#set-backup-configuration).
-The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example, if the backup retention period is set to 7 days, the recovery window is considered last 7 days. In this scenario, all the backups required to restore the server in last 7 days are retained. With a backup retention window of seven days:
+The backup retention period governs how far back in time a point-in-time restore can go, since it's based on the backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example, if the backup retention period is set to seven days, the recovery window is considered the last seven days. In this scenario, all the backups required to restore the server in the last seven days are retained. With a backup retention window of seven days:
-- Servers with up to 4-TB storage will retain up to 2 full database backups, all the differential backups, and transaction log backups performed since the earliest full database backup.-- Servers with up to 16-TB storage will retain the full database snapshot, all the differential snapshots and transaction log backups in last 8 days.
+- Servers with up to 4-TB storage will retain up to two full database backups, all the differential backups, and transaction log backups performed since the earliest full database backup.
+- Servers with up to 16-TB storage will retain the full database snapshot, all the differential snapshots, and transaction log backups in the last eight days.
#### Long-term retention of backups
-Long-term retention of backups beyond 35 days is currently not natively supported by the service yet. You have a option to use mysqldump to take backups and store them for long-term retention. Our support team has blogged a [step by step article](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/automate-backups-of-your-azure-database-for-mysql-server-to/ba-p/1791157) to share how you can achieve it.
+
+Long-term retention of backups beyond 35 days isn't natively supported by the service yet. You have the option to use mysqldump to take backups and store them for long-term retention. Our support team has published a [step-by-step article](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/automate-backups-of-your-azure-database-for-mysql-server-to/ba-p/1791157) that shows how you can achieve it.
### Backup redundancy options
-Azure Database for MariaDB provides the flexibility to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. When the backups are stored in geo-redundant backup storage, they are not only stored within the region in which your server is hosted, but are also replicated to a [paired data center](../availability-zones/cross-region-replication-azure.md). This provides better protection and ability to restore your server in a different region in the event of a disaster. The Basic tier only offers locally redundant backup storage.
+Azure Database for MariaDB provides the flexibility to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. When the backups are stored in geo-redundant backup storage, they're stored not only within the region in which your server is hosted, but are also replicated to a [paired data center](../availability-zones/cross-region-replication-azure.md). This provides better protection and the ability to restore your server in a different region in the event of a disaster. The Basic tier only offers locally redundant backup storage.
#### Moving from locally redundant to geo-redundant backup storage
-Configuring locally redundant or geo-redundant storage for backup is only allowed during server create. Once the server is provisioned, you cannot change the backup storage redundancy option. In order to move your backup storage from locally redundant storage to geo-redundant storage, creating a new server and migrating the data using [dump and restore](howto-migrate-dump-restore.md) is the only supported option.
+
+Configuring locally redundant or geo-redundant storage for backup is only allowed during server creation. Once the server is provisioned, you can't change the backup storage redundancy option. To move your backup storage from locally redundant storage to geo-redundant storage, the only supported option is to create a new server and migrate the data using [dump and restore](howto-migrate-dump-restore.md).
### Backup storage cost
-Azure Database for MariaDB provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month. For example, if you have provisioned a server with 250 GB of storage, you have 250 GB of additional storage available for server backups at no additional charge. Storage consumed for backups more than 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/mariadb/).
+Azure Database for MariaDB provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month. For example, if you've provisioned a server with 250 GB of storage, you have 250 GB of additional storage available for server backups at no additional charge. Storage consumed for backups more than 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/mariadb/).
-You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor available via the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.
+You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor available via the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.
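
As a rough illustration of the charging rule described above (not an official cost calculator), only the backup storage that exceeds the provisioned server storage is billed; the numbers below are placeholders.

```python
# Only backup storage above the provisioned server storage is billed (per GB per month).
# With geo-redundant backup storage, the "Backup Storage used" metric is roughly double
# what it would be with locally redundant storage.
provisioned_storage_gb = 250     # storage provisioned for the server
backup_storage_used_gb = 310     # value of the "Backup Storage used" metric

billable_backup_gb = max(0, backup_storage_used_gb - provisioned_storage_gb)
print(f"Billable backup storage: {billable_backup_gb} GB")  # 60 GB in this example
```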
The primary means of controlling the backup storage cost is by setting the appropriate backup retention period and choosing the right backup redundancy options to meet your desired recovery goals. You can select a retention period from a range of 7 to 35 days. General Purpose and Memory Optimized servers can choose to have geo-redundant storage for backups.
There are two types of restore available:
- **Point-in-time restore** is available with either backup redundancy option and creates a new server in the same region as your original server utilizing the combination of full and transaction log backups. - **Geo-restore** is available only if you configured your server for geo-redundant storage and it allows you to restore your server to a different region utilizing the most recent backup taken.
-The estimated time of recovery depends on several factors including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time is usually less than 12 hours.
+The estimated time of recovery depends on several factors including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time is less than 12 hours.
> [!IMPORTANT] > Deleted servers can be restored only within **five days** of deletion, after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, refer to the [documented steps](howto-restore-dropped-server.md). To protect server resources post-deployment from accidental deletion or unexpected changes, administrators can use [management locks](../azure-resource-manager/management/lock-resources.md). ### Point-in-time restore
-Independent of your backup redundancy option, you can perform a restore to any point in time within your backup retention period. A new server is created in the same Azure region as the original server. It is created with the original server's configuration for the pricing tier, compute generation, number of vCores, storage size, backup retention period, and backup redundancy option.
+Independent of your backup redundancy option, you can perform a restore to any point in time within your backup retention period. A new server is created in the same Azure region as the original server. It's created with the original server's configuration for the pricing tier, compute generation, number of vCores, storage size, backup retention period, and backup redundancy option.
Point-in-time restore is useful in multiple scenarios. For example, when a user accidentally deletes data, drops an important table or database, or if an application accidentally overwrites good data with bad data due to an application defect.
You may need to wait for the next transaction log backup to be taken before you
### Geo-restore
-You can restore a server to another Azure region where the service is available if you have configured your server for geo-redundant backups. Servers that support up to 4 TB of storage can be restored to the geo-paired region, or to any region that supports up to 16 TB of storage. For servers that support up to 16 TB of storage, geo-backups can be restored in any region that support 16 TB servers as well. Review [Azure Database for MariaDB pricing tiers](concepts-pricing-tiers.md) for the list of supported regions.
+You can restore a server to another Azure region where the service is available if you've configured your server for geo-redundant backups. Servers that support up to 4 TB of storage can be restored to the geo-paired region, or to any region that supports up to 16 TB of storage. For servers that support up to 16 TB of storage, geo-backups can be restored in any region that supports 16-TB servers as well. Review [Azure Database for MariaDB pricing tiers](concepts-pricing-tiers.md) for the list of supported regions.
-Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. Geo-restore utilizes the most recent backup of the server. There is a delay between when a backup is taken and when it is replicated to different region. This delay can be up to an hour, so, if a disaster occurs, there can be up to one hour data loss.
+Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. Geo-restore utilizes the most recent backup of the server. There's a delay between when a backup is taken and when it's replicated to a different region. This delay can be up to an hour, so if a disaster occurs, there can be up to one hour of data loss.
> [!IMPORTANT] >If a geo-restore is performed for a newly created server, the initial backup synchronization may take more than 24 hours depending on the data size, because the initial full snapshot backup takes much longer to copy. Subsequent snapshot backups are incremental copies, so restores are faster after 24 hours of server creation. If you're evaluating geo-restores to define your RTO, we recommend that you wait and evaluate geo-restore **only after 24 hours** of server creation for better estimates.
-During geo-restore, the server configurations that can be changed include compute generation, vCore, backup retention period, and backup redundancy options. Changing pricing tier (Basic, General Purpose, or Memory Optimized) or storage size during geo-restore is not supported.
+During geo-restore, the server configurations that can be changed include compute generation, vCore, backup retention period, and backup redundancy options. Changing pricing tier (Basic, General Purpose, or Memory Optimized) or storage size during geo-restore isn't supported.
-The estimated time of recovery depends on several factors including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time is usually less than 12 hours.
+The estimated time of recovery depends on several factors including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time is less than 12 hours.
### Perform post-restore tasks After a restore from either recovery mechanism, you should perform the following tasks to get your users and applications back up and running: - If the new server is meant to replace the original server, redirect clients and client applications to the new server-- Ensure appropriate VNet rules are in place for users to connect. These rules are not copied over from the original server.
+- Ensure appropriate VNet rules are in place for users to connect. These rules aren't copied over from the original server.
- Ensure appropriate logins and database level permissions are in place - Configure alerts, as appropriate
mariadb Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-business-continuity.md
Title: Business continuity - Azure Database for MariaDB description: Learn about business continuity (point-in-time restore, data center outage, geo-restore) when using Azure Database for MariaDB service.+ - Previously updated : 7/7/2020 Last updated : 06/24/2022 # Overview of business continuity with Azure Database for MariaDB
The following table compares RTO and RPO in a **typical workload** scenario:
| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h | | Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min*| RTO - Minutes* <br/>RPO < 5 min*|
- \* RTO and RPO **can be much higher** in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload.
+\* RTO and RPO **can be much higher** in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload.
## Recover a server after a user or application error
The geo-restore feature restores the server using geo-redundant backups. The bac
## Cross-region read replicas
-You can use cross region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using MySQL's binary log replication technology. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md).
+You can use cross-region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using MySQL's binary log replication technology. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md).
## FAQ ### Where does Azure Database for MariaDB store customer data?
-By default, Azure Database for MariaDB doesn't move or store customer data out of the region it is deployed in. However, customers can optionally chose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
+By default, Azure Database for MariaDB doesn't move or store customer data out of the region it is deployed in. However, customers can optionally choose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create a [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
## Next steps
mariadb Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-certificate-rotation.md
Title: Certificate rotation for Azure Database for MariaDB description: Learn about the upcoming changes of root certificate changes that will affect Azure Database for MariaDB+ - Previously updated : 01/18/2021 Last updated : 06/24/2022 # Understanding the changes in the Root CA change for Azure Database for MariaDB
Azure database for MariaDB users can only use the predefined certificate to conn
As per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for MariaDB used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your MariaDB servers.
-The new certificate is rolled out and in effect starting February 15, 2021 (02/15/2021).
+The new certificate is rolled out and in effect starting February 15, 2021 (02/15/2021).
## What change was performed on February 15, 2021 (02/15/2021)?
There is no change required on client side. if you followed our previous recomme
## Why was BaltimoreCyberTrustRoot certificate not replaced to DigiCertGlobalRootG2 during this change on February 15, 2021?
-We evaluated the customer readiness for this change and realized many customers were looking for additional lead time to manage this change. In the interest of providing more lead time to customers for readiness, we have decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year providing sufficient lead time to the customers and end users.
+We evaluated customer readiness for this change and realized that many customers were looking for additional lead time to manage it. In the interest of readiness, we have decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year, which provides sufficient lead time to customers and end users.
-Our recommendations to users is, use the aforementioned steps to create a combined certificate and connect to your server but do not remove BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
+Our recommendation to users is to use the steps mentioned earlier to create a combined certificate and connect to your server, but don't remove the BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
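
As a minimal illustration of connecting with the combined certificate (not the official sample), a Python client using PyMySQL can point its trust store at the combined PEM file; the server name, credentials, database, and file path below are placeholders.

```python
# Minimal sketch: connect over TLS using a combined CA bundle that contains both
# the BaltimoreCyberTrustRoot and DigiCertGlobalRootG2 certificates (placeholders throughout).
import pymysql

conn = pymysql.connect(
    host="yourserver.mariadb.database.azure.com",
    user="myadmin@yourserver",
    password="<password>",
    database="mydb",
    ssl={"ca": "/path/to/combined-ca-certificates.pem"},  # combined root CA bundle
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())
conn.close()
```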
## What if we removed the BaltimoreCyberTrustRoot certificate?
Since this update is a client-side change, if the client used to read data from
If you're using [Data-in replication](concepts-data-in-replication.md) to connect to Azure Database for MySQL, there are two things to consider: -- If the data-replication is from a virtual machine (on-prem or Azure virtual machine) to Azure Database for MySQL, you need to check if SSL is being used to create the replica. Run **SHOW SLAVE STATUS** and check the following setting.
+- If the data replication is from a virtual machine (on-premises or an Azure virtual machine) to Azure Database for MariaDB, you need to check whether SSL is being used to create the replica. Run **SHOW SLAVE STATUS** and check the following setting.
```azurecli-interactive Master_SSL_Allowed : Yes
mariadb Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-compatibility.md
Title: Drivers and tools compatibility - Azure Database for MariaDB description: This article describes the MariaDB drivers and management tools that are compatible with Azure Database for MariaDB. + - Previously updated : 3/18/2020 Last updated : 06/24/2022 # MariaDB drivers and management tools compatible with Azure Database for MariaDB
This article describes the drivers and management tools that are compatible with
## MariaDB Drivers
-Azure Database for MariaDB uses the community edition of MariaDB server. Therefore, it is compatible with a wide variety of programming languages and drivers. The MariaDB API and protocol are compatible with those used by MySQL. This means that connectors that work with MySQL should also work with MariaDB.
+Azure Database for MariaDB uses the community edition of MariaDB server. Therefore, it's compatible with a wide variety of programming languages and drivers. The MariaDB API and protocol are compatible with those used by MySQL. This means that connectors that work with MySQL should also work with MariaDB.
The goal is to support the three most recent versions MariaDB drivers, and efforts with authors from the open source community to constantly improve the functionality and usability of MariaDB drivers continue. A list of drivers that have been tested and found to be compatible with Azure Database for MariaDB 10.2 is provided in the following table: **Driver** | **Links** | **Compatible Versions** | **Incompatible Versions** | **Notes** |||| PHP | https://secure.php.net/downloads.php | 5.5, 5.6, 7.x | 5.3 | For PHP 7.0 connection with SSL MySQLi, add MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT in the connection string. <br> ```mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT);```<br> PDO set: ```PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT``` option to false.
-.NET | [MySqlConnector on GitHub](https://github.com/mysql-net/MySqlConnector) <br> [Installation package from Nuget](https://www.nuget.org/packages/MySqlConnector/) | 0.27 and after | 0.26.5 and before |
+.NET | [MySqlConnector on GitHub](https://github.com/mysql-net/MySqlConnector) <br> [Installation package from NuGet](https://www.nuget.org/packages/MySqlConnector/) | 0.27 and after | 0.26.5 and before |
MySQL Connector/NET | [MySQL Connector/NET](https://github.com/mysql/mysql-connector-net) | 8.0, 7.0, 6.10 | | An encoding bug may cause connections to fail on some non-UTF8 Windows systems. Node.js | [MySQLjs on GitHub](https://github.com/mysqljs/mysql/) <br> Installation package from NPM:<br> Run `npm install mysql` from NPM | 2.15 | 2.14.1 and before GO | https://github.com/go-sql-driver/mysql/releases | 1.3, 1.4 | 1.2 and before | Use `allowNativePasswords=true` in the connection string for version 1.3. Version 1.4 contains a fix and `allowNativePasswords=true` is no longer required.
SSL Connection | X | X | X
SQL Query Auto Completion | X | X | Import and Export Data | X | X | X Export to Multiple Formats | X | X | X
-Backup and Restore | | X |
+Back up and Restore | | X |
Display Server Parameters | X | X | X Display Client Connections | X | X | X ## Next steps -- [Troubleshoot connection issues to Azure Database for MariaDB](howto-troubleshoot-common-connection-issues.md)
+- [Troubleshoot connection issues to Azure Database for MariaDB](howto-troubleshoot-common-connection-issues.md)
mariadb Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-connectivity-architecture.md
Previously updated : 2/11/2021 Last updated : 06/24/2022 # Connectivity architecture in Azure Database for MariaDB
-This article explains the Azure Database for MariaDB connectivity architecture as well as how the traffic is directed to your Azure Database for MariaDB instance from clients both within and outside Azure.
+
+This article explains the Azure Database for MariaDB connectivity architecture and how the traffic is directed to your Azure Database for MariaDB instance from clients both within and outside Azure.
## Connectivity architecture
Connection to your Azure Database for MariaDB is established through a gateway t
![Overview of the connectivity architecture](./media/concepts-connectivity-architecture/connectivity-architecture-overview-proxy.png) -
-As client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 3306. Inside the database cluster, traffic is forwarded to appropriate Azure Database for MariaDB. Therefore, in order to connect to your server, such as from corporate networks, it is necessary to open up the **client-side firewall to allow outbound traffic to be able to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region.
+As the client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 3306. Inside the database cluster, traffic is forwarded to the appropriate Azure Database for MariaDB server. Therefore, in order to connect to your server, such as from corporate networks, it's necessary to open up the **client-side firewall to allow outbound traffic to be able to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region.
## Azure Database for MariaDB gateway IP addresses
-The gateway service is hosted on group of stateless compute nodes sitting behind an IP address, which your client would reach first when trying to connect to an Azure Database for MariaDB server.
+The gateway service is hosted on a group of stateless compute nodes sitting behind an IP address, which your client reaches first when trying to connect to an Azure Database for MariaDB server.
-As part of ongoing service maintenance, we will periodically refresh compute hardware hosting the gateways to ensure we provide the most secure and performant experience. When the gateway hardware is refreshed, a new ring of the compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for MariaDB servers and it will have a different IP address from older gateway rings in the same region to differentiate the traffic. Once the new ring is fully functional, the older gateway hardware serving existing servers are planned for decommissioning. Before decommissioning a gateway hardware, customers running their servers and connecting to older gateway rings will be notified via email and in the Azure portal, three months in advance before decommissioning. The decommissioning of gateways can impact the connectivity to your servers if
+As part of ongoing service maintenance, we'll periodically refresh the compute hardware hosting the gateways to ensure we provide the most secure and performant experience. When the gateway hardware is refreshed, a new ring of compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for MariaDB servers, and it has a different IP address from older gateway rings in the same region to differentiate the traffic. Once the new ring is fully functional, the older gateway hardware serving existing servers is planned for decommissioning. Before gateway hardware is decommissioned, customers running their servers and connecting to older gateway rings are notified via email and in the Azure portal three months in advance. The decommissioning of gateways can impact the connectivity to your servers if
* You hard code the gateway IP addresses in the connection string of your application. This is **not recommended**. You should use the fully qualified domain name (FQDN) of your server, in the format `<servername>.mariadb.database.azure.com`, in the connection string for your application.
-* You do not update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings.
+* You don't update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings.
The following table lists the gateway IP addresses of the Azure Database for MariaDB gateway for all data regions. The most up-to-date information about the gateway IP addresses for each region is maintained in the table below. In the table below, the columns represent the following:
-* **Gateway IP addresses:** This column lists the current IP addresses of the gateways hosted on the latest generation of hardware. If you are provisioning a new server, we recommend that you open the client-side firewall to allow outbound traffic for the IP addresses listed in this column.
-* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is being decommissioned right now. If you are provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound rule for the firewall for these IP addresses as we have not decommissioned it yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, you are expected to proactively add the new IP addresses listed in Gateway IP addresses column to the outbound firewall rule as soon as you receive the notification for decommissioning. This will ensure when your server is migrated to latest gateway hardware, there is no interruptions in connectivity to your server.
-* **Gateway IP addresses (decommissioned):** This columns lists the IP addresses of the gateway rings, which are decommissioned and are no longer in operations. You can safely remove these IP addresses from your outbound firewall rule.
-
+* **Gateway IP addresses:** This column lists the current IP addresses of the gateways hosted on the latest generation of hardware. If you're provisioning a new server, we recommend that you open the client-side firewall to allow outbound traffic for the IP addresses listed in this column.
+* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is being decommissioned right now. If you're provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound rule for the firewall for these IP addresses as we haven't decommissioned them yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, you're expected to proactively add the new IP addresses listed in the Gateway IP addresses column to the outbound firewall rule as soon as you receive the notification for decommissioning. This will ensure that when your server is migrated to the latest gateway hardware, there are no interruptions in connectivity to your server.
+* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings that are decommissioned and no longer in operation. You can safely remove these IP addresses from your outbound firewall rule.
| **Region name** | **Gateway IP addresses** |**Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** | |:-|:-|:-|:|
The following table lists the gateway IP addresses of the Azure Database for Mar
| West Europe |13.69.105.208, 104.40.169.187 | 40.68.37.158 | 191.237.232.75 | | West US |13.86.216.212, 13.86.217.212 |104.42.238.205 | 23.99.34.75| | West US 2 | 13.66.226.202 | | |
-||||
## Connection redirection
-Azure Database for MariaDB supports an additional connection policy, **redirection**, that helps to reduce network latency between client applications and MariaDB servers. With this feature, after the initial TCP session is established to the Azure Database for MariaDB server, the server returns the backend address of the node hosting the MariaDB server to the client. Thereafter, all subsequent packets flow directly to the server, bypassing the gateway. As packets flow directly to the server, latency and throughput have improved performance.
+Azure Database for MariaDB supports another connection policy, **redirection**, that helps to reduce network latency between client applications and MariaDB servers. With this feature, after the initial TCP session is established to the Azure Database for MariaDB server, the server returns the backend address of the node hosting the MariaDB server to the client. Thereafter, all subsequent packets flow directly to the server, bypassing the gateway. Because packets flow directly to the server, latency is reduced and throughput is improved.
This feature is supported in Azure Database for MariaDB servers with engine versions 10.2 and 10.3.
Support for redirection is available in the PHP [mysqlnd_azure](https://github.c
## Frequently asked questions ### What you need to know about this planned maintenance?
-This is a DNS change only which makes it transparent to clients. While the IP address for FQDN is changed in the DNS server, the local DNS cache will be refreshed within 5 minutes, and it is automatically done by the operating systems. After the local DNS refresh, all the new connections will connect to the new IP address, all existing connections will remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address will roughly take three to four weeks before getting decommissioned; therefore, it should have no effect on the client applications.
+
+This is a DNS change only, which makes it transparent to clients. While the IP address for the FQDN is changed in the DNS server, the local DNS cache is refreshed within 5 minutes, and this is done automatically by the operating system. After the local DNS refresh, all new connections connect to the new IP address, and all existing connections remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address takes roughly three to four weeks before getting decommissioned; therefore, it should have no effect on the client applications.
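
As an illustrative check of what your client currently resolves (the server name is a placeholder and the IP addresses shown are the West Europe entries from the table above), you can compare the resolved address against the two columns:

```python
# Quick check (illustrative only): see which gateway IP the client's DNS currently
# resolves for the server FQDN. Substitute your server name and your region's addresses.
import socket

server_fqdn = "yourserver.mariadb.database.azure.com"
current_gateway_ips = {"13.69.105.208", "104.40.169.187"}  # "Gateway IP addresses" column
decommissioning_ips = {"40.68.37.158"}                     # "decommissioning" column

resolved_ip = socket.gethostbyname(server_fqdn)
if resolved_ip in decommissioning_ips:
    print(f"{resolved_ip}: still resolving to a gateway that is being decommissioned")
elif resolved_ip in current_gateway_ips:
    print(f"{resolved_ip}: resolving to a current gateway")
else:
    print(f"{resolved_ip}: not in either list; check the region table")
```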
### What are we decommissioning?
-Only Gateway nodes will be decommissioned. When users connect to their servers, the first stop of the connection is to gateway node, before connection is forwarded to server. We are decommissioning old gateway rings (not tenant rings where the server is running) refer to the [connectivity architecture](#connectivity-architecture) for more clarification.
+
+Only Gateway nodes will be decommissioned. When users connect to their servers, the first stop of the connection is the gateway node, before the connection is forwarded to the server. We're decommissioning old gateway rings (not the tenant rings where the server is running); refer to the [connectivity architecture](#connectivity-architecture) for more clarification.
### How can you validate if your connections are going to old gateway nodes or new gateway nodes?+ Ping your server's FQDN, for example ``ping xxx.mariadb.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the document above, it means your connection is going through the old gateway. Contrarily, if the returned Ip address is one of the IPs listed under Gateway IP addresses, it means your connection is going through the new gateway. You may also test by [PSPing](/sysinternals/downloads/psping) or TCPPing the database server from your client application with port 3306 and ensure that return IP address isn't one of the decommissioning IP addresses ### How do I know when the maintenance is over and will I get another notification when old IP addresses are decommissioned?
-You will receive an email to inform you when we will start the maintenance work. The maintenance can take up to one month depending on the number of servers we need to migrate in al regions. Please prepare your client to connect to the database server using the FQDN or using the new IP address from the table above.
-### What do I do if my client applications are still connecting to old gateway server ?
+You'll receive an email to inform you when we'll start the maintenance work. The maintenance can take up to one month depending on the number of servers we need to migrate in all regions. Prepare your client to connect to the database server using the FQDN or using the new IP address from the table above.
+
+### What do I do if my client applications are still connecting to old gateway server?
+ This indicates that your applications connect to the server using a static IP address instead of the FQDN. Review connection strings, connection pooling settings, AKS settings, or even the source code. ### Is there any impact for my application connections?
-This maintenance is just a DNS change, so it is transparent to the client. Once the DNS cache is refreshed in the client (automatically done by operation system), all the new connection will connect to the new IP address and all the existing connection will still working fine until the old IP address fully get decommissioned, which usually several weeks later. And the retry logic is not required for this case, but it is good to see the application have retry logic configured. Please either use FQDN to connect to the database server or enable list the new 'Gateway IP addresses' in your application connection string.
-This maintenance operation will not drop the existing connections. It only makes the new connection requests go to new gateway ring.
+
+This maintenance is just a DNS change, so it's transparent to the client. Once the DNS cache is refreshed in the client (done automatically by the operating system), all new connections connect to the new IP address, and all existing connections keep working until the old IP address is fully decommissioned, which is usually several weeks later. Retry logic isn't required for this case, but it's good practice to have retry logic configured in the application. Either use the FQDN to connect to the database server, or update your application connection string to use the new 'Gateway IP addresses'.
+This maintenance operation won't drop the existing connections. It only makes the new connection requests go to the new gateway ring.
### Can I request for a specific time window for the maintenance?
-As the migration should be transparent and no impact to customer's connectivity, we expect there will be no issue for majority of users. Review your application proactively and ensure that you either use FQDN to connect to the database server or enable list the new 'Gateway IP addresses' in your application connection string.
-### I am using private link, will my connections get affected?
-No, this is a gateway hardware decommission and have no relation to private link or private IP addresses, it will only affect public IP addresses mentioned under the decommissioning IP addresses.
+As the migration should be transparent and have no impact on customers' connectivity, we expect there will be no issue for most users. Review your application proactively and ensure that you either use the FQDN to connect to the database server or update your application connection string to use the new 'Gateway IP addresses'.
+
+### I'm using private link, will my connections get affected?
+No, this is a gateway hardware decommission and has no relation to Private Link or private IP addresses; it will only affect the public IP addresses mentioned under the decommissioning IP addresses.
## Next steps * [Create and manage Azure Database for MariaDB firewall rules using the Azure portal](./howto-manage-firewall-portal.md)
-* [Create and manage Azure Database for MariaDB firewall rules using Azure CLI](./howto-manage-firewall-cli.md)
+* [Create and manage Azure Database for MariaDB firewall rules using Azure CLI](./howto-manage-firewall-cli.md)
mariadb Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-connectivity.md
Title: Transient connectivity errors - Azure Database for MariaDB description: Learn how to handle transient connectivity errors for Azure Database for MariaDB. keywords: mysql connection,connection string,connectivity issues,transient error,connection error+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Handling of transient connectivity errors for Azure Database for MariaDB
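
The body of that article isn't reproduced in this change log, but the usual pattern it covers is retrying a failed connection a few times with a short delay. The sketch below is illustrative only; the server name, credentials, retry count, and delay are placeholders.

```python
# Illustrative retry-with-backoff sketch for transient connection errors (placeholders throughout).
import time

import pymysql


def connect_with_retry(retries=5, delay_seconds=5):
    for attempt in range(1, retries + 1):
        try:
            return pymysql.connect(
                host="yourserver.mariadb.database.azure.com",
                user="myadmin@yourserver",
                password="<password>",
                database="mydb",
                connect_timeout=10,
            )
        except pymysql.err.OperationalError as exc:
            if attempt == retries:
                raise  # give up after the last attempt
            print(f"Transient connection error ({exc}); retrying in {delay_seconds}s")
            time.sleep(delay_seconds)


conn = connect_with_retry()
```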
mariadb Concepts Data Access Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-data-access-security-private-link.md
Title: Private Link - Azure Database for MariaDB description: Learn how Private link works for Azure Database for MariaDB.+ - Previously updated : 03/10/2020 Last updated : 06/24/2022 # Private Link for Azure Database for MariaDB
Private endpoints are required to enable Private Link. This can be done using th
### Approval Process
-Once the network admin creates the private endpoint (PE), the admin can manage the private endpoint Connection (PEC) to Azure Database for MariaDB. This separation of duties between the network admin and the DBA is helpful for management of the Azure Database for MariaDB connectivity.
+Once the network admin creates the private endpoint (PE), the admin can manage the private endpoint connection (PEC) to Azure Database for MariaDB. This separation of duties between the network admin and the DBA is helpful for managing Azure Database for MariaDB connectivity.
* Navigate to the Azure Database for MariaDB server resource in the Azure portal. * Select the private endpoint connections in the left pane
Once the network admin creates the private endpoint (PE), the admin can manage t
## Use cases of Private Link for Azure Database for MariaDB - Clients can connect to the private endpoint from the same VNet, [peered VNet](../virtual-network/virtual-network-peering-overview.md) in same region or across regions, or via [VNet-to-VNet connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) across regions. Additionally, clients can connect from on-premises using ExpressRoute, private peering, or VPN tunneling. Below is a simplified diagram showing the common use cases. ![select the private endpoint overview](media/concepts-data-access-and-security-private-link/show-private-link-overview.png) ### Connecting from an Azure VM in Peered Virtual Network (VNet)+ Configure [VNet peering](../virtual-network/tutorial-connect-virtual-networks-powershell.md) to establish connectivity to the Azure Database for MariaDB from an Azure VM in a peered VNet. ### Connecting from an Azure VM in VNet-to-VNet environment+ Configure [VNet-to-VNet VPN gateway connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to a Azure Database for MariaDB from an Azure VM in a different region or subscription. ### Connecting from an on-premises environment over VPN+ To establish connectivity from an on-premises environment to the Azure Database for MariaDB, choose and implement one of the options: * [Point-to-Site connection](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md)
The following situations and outcomes are possible when you use Private Link in
## Deny public access for Azure Database for MariaDB
-If you want to rely completely only on private endpoints for accessing their Azure Database for MariaDB, you can disable setting all public endpoints ([firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server.
+If you want to rely solely on private endpoints for accessing your Azure Database for MariaDB, you can disable all public endpoints ([firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server.
When this setting is set to *YES*, only connections via private endpoints are allowed to your Azure Database for MariaDB. When this setting is set to *NO*, clients can connect to your Azure Database for MariaDB based on your firewall or VNet service endpoint settings. Additionally, once the value of the Private network access is set, customers cannot add and/or update existing 'Firewall rules' and 'VNet service endpoint rules'.
mariadb Concepts Data Access Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-data-access-security-vnet.md
Title: VNet service endpoints - Azure Database for MariaDB description: 'Describes how VNet service endpoints work for your Azure Database for MariaDB server.'+ - Previously updated : 7/17/2020 Last updated : 06/24/2022 # Use Virtual Network service endpoints and rules for Azure Database for MariaDB
A virtual network rule tells your Azure Database for MariaDB server to accept co
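For illustration only (the source article defers to the linked how-to guides), a virtual network rule might be created from the Azure CLI along the following lines; all names are placeholders, and the subnet is assumed to need the **Microsoft.Sql** service endpoint enabled as shown.

```bash
# Enable the Microsoft.Sql service endpoint on the subnet (placeholder names).
az network vnet subnet update \
  --resource-group myresourcegroup \
  --vnet-name myvnet \
  --name mysubnet \
  --service-endpoints Microsoft.Sql

# Create the virtual network rule on the MariaDB server for that subnet.
az mariadb server vnet-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name myvnetrule \
  --vnet-name myvnet \
  --subnet mysubnet
```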
--- <a name="anch-benefits-of-a-vnet-rule-68b"></a> ## Benefits of a virtual network rule
You can salvage the IP option by obtaining a *static* IP address for your VM. Fo
However, the static IP approach can become difficult to manage, and it is costly when done at scale. Virtual network rules are easier to establish and to manage. - <a name="anch-details-about-vnet-rules-38q"></a> ## Details about virtual network rules
Merely setting a VNet firewall rule does not help secure the server to the VNet.
You can set the **IgnoreMissingServiceEndpoint** flag by using the Azure CLI or portal. ## Related articles+ - [Azure virtual networks][vm-virtual-network-overview] - [Azure virtual network service endpoints][vm-virtual-network-service-endpoints-overview-649d] ## Next steps+ For articles on creating VNet rules, see: - [Create and manage Azure Database for MariaDB VNet rules using the Azure portal](howto-manage-vnet-portal.md)
-
+ <!-- - [Create and manage Azure Database for MariaDB VNet rules using Azure CLI](howto-manage-vnet-using-cli.md) -->
For articles on creating VNet rules, see:
[expressroute-indexmd-744v]: ../expressroute/index.yml
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
+[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
mariadb Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-data-in-replication.md
Title: Data-in replication - Azure Database for MariaDB description: Learn about using data-in replication to synchronize from an external server into the Azure Database for MariaDB service.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Replicate data into Azure Database for MariaDB
Last updated 3/18/2020
Data-in Replication allows you to synchronize data from a MariaDB server running on-premises, in virtual machines, or database services hosted by other cloud providers into the Azure Database for MariaDB service. Data-in Replication is based on the binary log (binlog) file position-based replication native to MariaDB. To learn more about binlog replication, see the [binlog replication overview](https://mariadb.com/kb/en/library/replication-overview/). ## When to use Data-in Replication+ The main scenarios to consider using Data-in Replication are: - **Hybrid Data Synchronization:** With Data-in Replication, you can keep data synchronized between your on-premises servers and Azure Database for MariaDB. This synchronization is useful for creating hybrid applications. This method is appealing when you have an existing local database server, but want to move the data to a region closer to end users.
The main scenarios to consider using Data-in Replication are:
## Limitations and considerations ### Data not replicated+ The [*mysql system database*](https://mariadb.com/kb/en/library/the-mysql-database-tables/) on the source server is not replicated. Changes to accounts and permissions on the source server are not replicated. If you create an account on the source server and this account needs to access the replica server, then manually create the same account on the replica server side. To understand what tables are contained in the system database, see the [MariaDB documentation](https://mariadb.com/kb/en/library/the-mysql-database-tables/). ### Requirements+ - The source server version must be at least MariaDB version 10.2. - The source and replica server versions must be the same. For example, both must be MariaDB version 10.2. - Each table must have a primary key.
The [*mysql system database*](https://mariadb.com/kb/en/library/the-mysql-databa
- Ensure the source server has a **public IP address**, that its DNS is publicly accessible, or that it has a fully qualified domain name (FQDN). ### Other+ - Data-in replication is only supported in General Purpose and Memory Optimized pricing tiers. ## Next steps+ - Learn how to [set up data-in replication](howto-data-in-replication.md).
mariadb Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-firewall-rules.md
Title: Firewall rules - Azure Database for MariaDB description: Learn about using firewall rules to enable connections to your Azure Database for MariaDB server.+ - Previously updated : 7/17/2020 Last updated : 06/24/2022 # Azure Database for MariaDB server firewall rules+ Firewalls prevent all access to your database server until you specify which computers have permission. The firewall grants access to the server based on the originating IP address of each request. To configure a firewall, create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level.
To configure a firewall, create firewall rules that specify ranges of acceptable
**Firewall rules:** These rules enable clients to access your entire Azure Database for MariaDB server, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal or Azure CLI commands. To create server-level firewall rules, you must be the subscription owner or a subscription contributor. ## Firewall overview+ All database access to your Azure Database for MariaDB server is by default blocked by the firewall. To begin using your server from another computer, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify which IP address ranges from the Internet to allow. Access to the Azure portal website itself is not impacted by the firewall rules. Connection attempts from the Internet and Azure must first pass through the firewall before they can reach your Azure Database for MariaDB database, as shown in the following diagram:
Connection attempts from the Internet and Azure must first pass through the fire
![Example flow of how the firewall works](./media/concepts-firewall-rules/1-firewall-concept.png) ## Connecting from the Internet+ Server-level firewall rules apply to all databases on the Azure Database for MariaDB server. If the IP address of the request is within one of the ranges specified in the server-level firewall rules, then the connection is granted.
If the IP address of the request is within one of the ranges specified in the se
If the IP address of the request is outside the ranges specified in any of the database-level or server-level firewall rules, then the connection request fails. ## Connecting from Azure
-It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints).
+
+It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints).
If a fixed outgoing IP address isn't available for your Azure service, you can consider enabling connections from all Azure datacenter IP addresses. This setting can be enabled from the Azure portal by setting the **Allow access to Azure services** option to **ON** from the **Connection security** pane and hitting **Save**. From the Azure CLI, a firewall rule setting with starting and ending address equal to 0.0.0.0 does the equivalent. If the connection attempt is not allowed, the request does not reach the Azure Database for MariaDB server. > [!IMPORTANT] > The **Allow access to Azure services** option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
+>
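As a hedged sketch of the CLI equivalence described above (not taken from the source article; the resource group, server, and rule names are placeholders), a 0.0.0.0 rule might look like this:

```bash
# A firewall rule with start and end address 0.0.0.0 is the CLI equivalent of
# "Allow access to Azure services" (placeholder resource group, server, and rule names).
az mariadb server firewall-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name AllowAllAzureIPs \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0
```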
![Configure Allow access to Azure services in the portal](./media/concepts-firewall-rules/allow-azure-services.png) ### Connecting from a VNet
-To connect securely to your Azure Database for MariaDB server from a VNet, consider using [VNet service endpoints](./concepts-data-access-security-vnet.md).
+
+To connect securely to your Azure Database for MariaDB server from a VNet, consider using [VNet service endpoints](./concepts-data-access-security-vnet.md).
## Programmatically managing firewall rules
-In addition to the Azure portal, firewall rules can be managed programmatically by using the Azure CLI.
+
+In addition to the Azure portal, firewall rules can be managed programmatically by using the Azure CLI.
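For example, a sketch like the following (placeholder names and a documentation-range IP address, not from the source article) creates a rule for a single client address and then lists the configured rules:

```bash
# Allow one client IP address (203.0.113.10 is a placeholder) and review the rules.
az mariadb server firewall-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name AllowMyClientIP \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10

az mariadb server firewall-rule list \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --output table
```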
See also [Create and manage Azure Database for MariaDB firewall rules using Azure CLI](./howto-manage-firewall-cli.md). ## Troubleshooting firewall issues+ Consider the following points when access to the Microsoft Azure Database for MariaDB server service does not behave as expected: * **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for MariaDB Server firewall configuration to take effect.
Consider the following points when access to the Microsoft Azure Database for Ma
* Get static IP addressing instead for your client computers, and then add the IP addresses as firewall rules.
-* **Server's IP appears to be public:** Connections to the Azure Database for MariaDB server are routed through a publicly accessible Azure gateway. However, the actual server IP is protected by the firewall. For more information, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
+* **Server's IP appears to be public:** Connections to the Azure Database for MariaDB server are routed through a publicly accessible Azure gateway. However, the actual server IP is protected by the firewall. For more information, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
* **Cannot connect from Azure resource with allowed IP:** Check whether the **Microsoft.Sql** service endpoint is enabled for the subnet you are connecting from. If **Microsoft.Sql** is enabled, it indicates that you only want to use [VNet service endpoint rules](concepts-data-access-security-vnet.md) on that subnet.
Consider the following points when access to the Microsoft Azure Database for Ma
* **Firewall rule is not available for IPv6 format:** The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, it will show the validation error. ## Next steps+ - [Create and manage Azure Database for MariaDB firewall rules using the Azure portal](./howto-manage-firewall-portal.md) - [Create and manage Azure Database for MariaDB firewall rules using Azure CLI](./howto-manage-firewall-cli.md) - [VNet service endpoints in Azure Database for MariaDB](./concepts-data-access-security-vnet.md)
mariadb Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-high-availability.md
Title: High availability - Azure Database for MariaDB description: This article provides information on high availability in Azure Database for MariaDB+ - Previously updated : 7/7/2020 Last updated : 06/24/2022 # High availability in Azure Database for MariaDB+ The Azure Database for MariaDB service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/MariaDB) uptime. Azure Database for MariaDB provides high availability during planned events such as user-initiated scale compute operations, and also when unplanned events such as underlying hardware, software, or network failures occur. Azure Database for MariaDB can quickly recover from most critical circumstances, ensuring virtually no application downtime when using this service.
-Azure Database for MariaDB is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
+Azure Database for MariaDB is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
## Components in Azure Database for MariaDB
Azure Database for MariaDB is suitable for running mission critical databases th
| <b>Gateway | The Gateway acts as a database proxy and routes all client connections to the database server. | ## Planned downtime mitigation
-Azure Database for MariaDB is architected to provide high availability during planned downtime operations.
+
+Azure Database for MariaDB is architected to provide high availability during planned downtime operations.
![view of Elastic Scaling in Azure MariaDB](./media/concepts-high-availability/elastic-scaling-mariadb-server.png)
Here are some planned maintenance scenarios:
| <b>New Software Deployment (Azure) | New features rollout or bug fixes automatically happen as part of service's planned maintenance. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).| | <b>Minor version upgrades | Azure Database for MariaDB automatically patches database servers to the minor version determined by Azure. It happens as part of service's planned maintenance. This would incur a short downtime in terms of seconds, and the database server is automatically restarted with the new minor version. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).| - ## Unplanned downtime mitigation
-Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. MariaDB engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for MariaDB mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
-
+Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. MariaDB engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for MariaDB mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
![view of High Availability in Azure MariaDB](./media/concepts-high-availability/availability-mariadb-server.png) ### Unplanned downtime: failure scenarios and service recovery+ Here are some failure scenarios and how Azure Database for MariaDB automatically recovers: | **Scenario** | **Automatic recovery** |
Here are some failure scenarios that require user action to recover:
| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](concepts-backup.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [mysqldump](howto-migrate-dump-restore.md), and then use [restore](howto-migrate-dump-restore.md#restore-your-mariadb-database) to restore those tables into your database. | - ## Summary Azure Database for MariaDB provides fast restart capability of database servers, redundant storage, and efficient routing from the Gateway. For additional data protection, you can configure backups to be geo-replicated, and also deploy one or more read replicas in other regions. With inherent high availability capabilities, Azure Database for MariaDB protects your databases from most common outages, and offers an industry leading, finance-backed [99.99% of uptime SLA](https://azure.microsoft.com/support/legal/sla/MariaDB). All these availability and reliability capabilities enable Azure to be the ideal platform to run your mission-critical applications. ## Next steps+ - Learn about [Azure regions](../availability-zones/az-overview.md) - Learn about [handling transient connectivity errors](concepts-connectivity.md)-- Learn how to [replicate your data with read replicas](howto-read-replicas-portal.md)
+- Learn how to [replicate your data with read replicas](howto-read-replicas-portal.md)
mariadb Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-limits.md
Title: Limitations - Azure Database for MariaDB description: This article describes limitations in Azure Database for MariaDB, such as number of connection and storage engine options.+ - Previously updated : 10/2/2020 Last updated : 06/24/2022 # Limitations in Azure Database for MariaDB+ The following sections describe capacity, storage engine support, privilege support, data manipulation statement support, and functional limits in the database service. ## Server parameters
The following sections describe capacity, storage engine support, privilege supp
Azure Database for MariaDB supports tuning the values of server parameters. The minimum and maximum values of some parameters (for example, `max_connections`, `join_buffer_size`, `query_cache_size`) are determined by the pricing tier and vCores of the server. Refer to [server parameters](./concepts-server-parameters.md) for more information about these limits.
-Upon initial deployment, an Azure for MariaDB server includes systems tables for time zone information, but these tables are not populated. The time zone tables can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench. Refer to the [Azure portal](howto-server-parameters.md#working-with-the-time-zone-parameter) or [Azure CLI](howto-configure-server-parameters-cli.md#working-with-the-time-zone-parameter) articles for how to call the stored procedure and set the global or session-level time zones.
+Upon initial deployment, an Azure for MariaDB server includes systems tables for time zone information, but these tables aren't populated. The time zone tables can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench. Refer to the [Azure portal](howto-server-parameters.md#working-with-the-time-zone-parameter) or [Azure CLI](howto-configure-server-parameters-cli.md#working-with-the-time-zone-parameter) articles for how to call the stored procedure and set the global or session-level time zones.
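The following is only a minimal sketch of that stored procedure call from the `mysql` command line; the host and user names are placeholders that aren't part of the source article, and you may need to add SSL options if SSL is enforced on the server.

```bash
# Populate the time zone tables once per server (placeholder host and user names).
mysql -h mydemoserver.mariadb.database.azure.com -u myadmin@mydemoserver -p \
  -e "CALL mysql.az_load_timezone();"

# After the tables are loaded, a named time zone can be used at the session level.
mysql -h mydemoserver.mariadb.database.azure.com -u myadmin@mydemoserver -p \
  -e "SET time_zone = 'US/Pacific'; SELECT @@time_zone;"
```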
-Password plugins such as "validate_password" and "caching_sha2_password" are not supported by the service.
+Password plugins such as "validate_password" and "caching_sha2_password" aren't supported by the service.
## Storage engine support ### Supported+ - [InnoDB](https://mariadb.com/kb/en/library/xtradb-and-innodb/) - [MEMORY](https://mariadb.com/kb/en/library/memory-storage-engine/) ### Unsupported+ - [MyISAM](https://mariadb.com/kb/en/library/myisam-storage-engine/) - [BLACKHOLE](https://mariadb.com/kb/en/library/blackhole/) - [ARCHIVE](https://mariadb.com/kb/en/library/archive/) ## Privileges & data manipulation support
-Many server parameters and settings can inadvertently degrade server performance or negate ACID properties of the MariaDB server. To maintain the service integrity and SLA at a product level, this service does not expose multiple roles.
+Many server parameters and settings can inadvertently degrade server performance or negate ACID properties of the MariaDB server. To maintain the service integrity and SLA at a product level, this service doesn't expose multiple roles.
-The MariaDB service does not allow direct access to the underlying file system. Some data manipulation commands are not supported.
+The MariaDB service doesn't allow direct access to the underlying file system. Some data manipulation commands aren't supported.
## Privilege support
The following are unsupported:
- DBA role: Restricted. Alternatively, you can use the administrator user (created during new server creation), which allows you to perform most DDL and DML statements. - SUPER privilege: Similarly, [SUPER privilege](https://mariadb.com/kb/en/library/grant/#global-privileges) is also restricted. - DEFINER: Requires super privileges to create and is restricted. If importing data using a backup, remove the `CREATE DEFINER` commands manually or by using the `--skip-definer` command when performing a mysqldump.-- System databases: The [mysql system database](https://mariadb.com/kb/en/the-mysql-database-tables/) is read-only and used to support various PaaS functionality. You cannot make changes to the `mysql` system database.
+- System databases: The [mysql system database](https://mariadb.com/kb/en/the-mysql-database-tables/) is read-only and used to support various PaaS functionalities. You can't make changes to the `mysql` system database.
- `SELECT ... INTO OUTFILE`: Not supported in the service. ### Supported+ - `LOAD DATA INFILE` is supported, but the `[LOCAL]` parameter must be specified and directed to a UNC path (Azure storage mounted through SMB). ## Functional limitations ### Scale operations+ - Dynamic scaling to and from the Basic pricing tiers is currently not supported.-- Decreasing server storage size is not supported.
+- Decreasing server storage size isn't supported.
### Server version upgrades+ - Automated migration between major database engine versions is currently not supported. ### Point-in-time-restore-- When using the PITR feature, the new server is created with the same configurations as the server it is based on.-- Restoring a deleted server is not supported.+
+- When using the PITR feature, the new server is created with the same configurations as the server it's based on.
+- Restoring a deleted server isn't supported.
### Subscription management+ - Dynamically moving pre-created servers across subscription and resource group is currently not supported. ### VNet service endpoints+ - Support for VNet service endpoints is only for General Purpose and Memory Optimized servers. ### Storage size+ - Please refer to [pricing tiers](concepts-pricing-tiers.md) for the storage size limits per pricing tier. ## Current known issues+ - MariaDB server instance displays the incorrect server version after connection is established. To get the correct server instance engine version, use the `select version();` command. ## Next steps+ - [What's available in each service tier](concepts-pricing-tiers.md) - [Supported MariaDB database versions](concepts-supported-versions.md)
mariadb Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-monitoring.md
Title: Monitoring - Azure Database for MariaDB description: This article describes the metrics for monitoring and alerting for Azure Database for MariaDB, including CPU, storage, and connection statistics.+ - Previously updated : 10/21/2020 Last updated : 06/24/2022 # Monitoring in Azure Database for MariaDB+ Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for MariaDB provides various metrics that give insight into the behavior of your server. ## Metrics+ All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. Other tasks include setting up automated actions, performing advanced analytics, and archiving history. For more information, see the [Azure Metrics Overview](../azure-monitor/data-platform.md). For step by step guidance, see [How to set up alerts](howto-alert-metric.md). ### List of metrics+ These metrics are available for Azure Database for MariaDB: |Metric|Metric Display Name|Unit|Description|
Learn more about how to set up notifications in the [planned maintenance notific
- For more information on how to access and export metrics using the Azure portal, REST API, or CLI, see the [Azure Metrics Overview](../azure-monitor/data-platform.md). - See [How to set up alerts](howto-alert-metric.md) for guidance on creating an alert on a metric.-- Learn more about [planned maintenance notifications](./concepts-planned-maintenance-notification.md) in Azure Database for MariaDB.
+- Learn more about [planned maintenance notifications](./concepts-planned-maintenance-notification.md) in Azure Database for MariaDB.
mariadb Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-planned-maintenance-notification.md
Title: Planned maintenance notification - Azure Database for MariaDB description: This article describes the Planned maintenance notification feature in Azure Database for MariaDB+ - Previously updated : 10/21/2020 Last updated : 06/24/2022 # Planned maintenance notification in Azure Database for MariaDB
You can utilize the planned maintenance notifications feature to receive alerts
We will make every attempt to provide at least 72 hours' notice through **planned maintenance notifications** for all events. However, in cases of critical or security patches, notifications might be sent closer to the event or be omitted.
-You can either check the planned maintenance notification on Azure portal or configure alerts to receive notification.
+You can either check the planned maintenance notification on Azure portal or configure alerts to receive notification.
### Check planned maintenance notification from Azure portal 1. In the [Azure portal](https://portal.azure.com), select **Service Health**. 2. Select **Planned Maintenance** tab
-3. Select **Subscription**, **Region, and **Service** for which you want to check the planned maintenance notification.
-
+3. Select **Subscription**, **Region**, and **Service** for which you want to check the planned maintenance notification.
+ ### To receive planned maintenance notification 1. In the [portal](https://portal.azure.com), select **Service Health**.
No, all the Azure regions are patched during the deployment wise window timings.
A transient error, also known as a transient fault, is an error that will resolve itself. [Transient errors](./concepts-connectivity.md#transient-errors) can occur during maintenance. Most of these events are automatically mitigated by the system in less than 60 seconds. Transient errors should be handled using [retry logic](./concepts-connectivity.md#handling-transient-errors). - ## Next steps - For any questions or suggestions you might have about working with Azure Database for MariaDB, send an email to the Azure Database for MariaDB Team at AskAzureDBforMariaDB@service.microsoft.com - See [How to set up alerts](howto-alert-metric.md) for guidance on creating an alert on a metric. - [Troubleshoot connection issues to Azure Database for MariaDB](howto-troubleshoot-common-connection-issues.md)-- [Handle transient errors and connect efficiently to Azure Database for MariaDB](concepts-connectivity.md)
+- [Handle transient errors and connect efficiently to Azure Database for MariaDB](concepts-connectivity.md)
mariadb Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-pricing-tiers.md
Title: Pricing tiers - Azure Database for MariaDB description: Learn about the various pricing tiers for Azure Database for MariaDB including compute generations, storage types, storage size, vCores, memory, and backup retention periods.+ - Previously updated : 10/14/2020 Last updated : 06/24/2022 # Azure Database for MariaDB pricing tiers
Azure Database for MariaDB provides up to 100% of your provisioned server storag
## Scale resources
-After you create your server, you can independently change the vCores, the pricing tier (except to and from Basic), the amount of storage, and the backup retention period. You can't change the backup storage type after a server is created. The number of vCores can be scaled up or down. The backup retention period can be scaled up or down from 7 to 35 days. The storage size can only be increased. Scaling of the resources can be done either through the portal or Azure CLI.
+After you create your server, you can independently change the vCores, the pricing tier (except to and from Basic), the amount of storage, and the backup retention period. You can't change the backup storage type after a server is created. The number of vCores can be scaled up or down. The backup retention period can be scaled up or down from 7 to 35 days. The storage size can only be increased. Scaling of the resources can be done either through the portal or Azure CLI.
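As a non-authoritative sketch of such a scale operation from the Azure CLI (placeholder names; the General Purpose `GP_Gen5_4` SKU and the megabyte-based `--storage-size` value are assumptions chosen only for illustration):

```bash
# Scale to 4 vCores General Purpose and grow storage to 100 GB (102400 MB);
# storage can only be increased (placeholder resource group and server names).
az mariadb server update \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --sku-name GP_Gen5_4 \
  --storage-size 102400
```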
When you change the number of vCores, or the pricing tier, a copy of the original server is created with the new compute allocation. After the new server is up and running, connections are switched over to the new server. During the moment when the system switches over to the new server, no new connections can be established, and all uncommitted transactions are rolled back. This window varies, but in most cases, is less than a minute.
Scaling storage and changing the backup retention period are true online operati
For the most up-to-date pricing information, see the service [pricing page](https://azure.microsoft.com/pricing/details/mariadb/). To see the cost for the configuration you want, the [Azure portal](https://portal.azure.com/#create/Microsoft.MariaDBServer) shows the monthly cost on the **Pricing tier** tab based on the options you select. If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Add items**, expand the **Databases** category, and choose **Azure Database for MariaDB** to customize the options. ## Next steps+ - Learn about the [service limitations](concepts-limits.md). - Learn how to [create a MariaDB server in the Azure portal](quickstart-create-mariadb-server-database-using-azure-portal.md).
mariadb Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-query-performance-insight.md
Title: Query Performance Insight - Azure Database for MariaDB description: This article describes the Query Performance Insight feature in Azure Database for MariaDB+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Query Performance Insight in Azure Database for MariaDB
Last updated 3/18/2020
Query Performance Insight helps you to quickly identify what your longest running queries are, how they change over time, and what waits are affecting them. - ## Common scenarios ### Long running queries - Identifying longest running queries in the past X hours - Identifying top N queries that are waiting on resources
-
+ ### Wait statistics - Understanding wait nature for a query
In the portal page of your Azure Database for MariaDB server, select **Query Per
The **Long running queries** tab shows the top 5 queries by average duration per execution, aggregated in 15-minute intervals. You can view more queries by selecting from the **Number of Queries** drop down. The chart colors may change for a specific Query ID when you do this.
-You can click and drag in the chart to narrow down to a specific time window. Alternatively, use the zoom in and out icons to view a smaller or larger time period respectively.
+You can select and drag in the chart to narrow down to a specific time window. Alternatively, use the zoom in and out icons to view a smaller or larger time period respectively.
![Query Performance Insight long running queries](./media/concepts-query-performance-insight/query-performance-insight-landing-page.png)
-### Wait statistics
+### Wait statistics
> [!NOTE] > Wait statistics are meant for troubleshooting query performance issues. We recommend turning them on only for troubleshooting purposes. <br>If you receive the error message in the Azure portal "*The issue encountered for 'Microsoft.DBforMariaDB'; cannot fulfill the request. If this issue continues or is unexpected, please contact support with this information.*" while viewing wait statistics, use a smaller time period.
Queries displayed in the wait statistics view are grouped by the queries that ex
![Query Performance Insight waits statistics](./media/concepts-query-performance-insight/query-performance-insight-wait-statistics.png)
-## Limitations
+## Limitations
* Query performance insight is not supported for version 10.3 ## Next steps -- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for MariaDB.
+- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for MariaDB.
mariadb Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-query-store.md
-
+ Title: Query Store - Azure Database for MariaDB description: Learn about the Query Store feature in Azure Database for MariaDB to help you track performance over time. + - Previously updated : 01/15/2021 Last updated : 06/24/2022 # Monitor Azure Database for MariaDB performance with Query Store
SELECT * FROM mysql.query_store_wait_stats;
## Finding wait queries > [!NOTE]
-> Wait statistics should not be enabled during peak workload hours or be turned on indefinitely for sensitive workloads. <br>For workloads running with high CPU utilization or on servers configured with lower vCores, use caution when enabling wait statistics. It should not be turned on indefinitely.
+> Wait statistics should not be enabled during peak workload hours or be turned on indefinitely for sensitive workloads. <br>For workloads running with high CPU utilization or on servers configured with lower vCores, use caution when enabling wait statistics. It should not be turned on indefinitely.
Wait event types combine different wait events into buckets by similarity. Query Store provides the wait event type, specific wait event name, and the query in question. Being able to correlate this wait information with the query runtime statistics means you can gain a deeper understanding of what contributes to query performance characteristics.
mariadb Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-read-replicas.md
Title: Read replicas - Azure Database for MariaDB description: 'Learn about read replicas in Azure Database for MariaDB: choosing regions, creating replicas, connecting to replicas, monitoring replication, and stopping replication.'+ - Previously updated : 01/18/2021 Last updated : 06/24/2022
Australia East, Australia Southeast, Brazil South, Canada Central, Canada East,
In addition to the universal replica regions, you can create a read replica in the Azure paired region of your source server. If you don't know your region's pair, you can learn more from the [Azure Paired Regions article](../availability-zones/cross-region-replication-azure.md).
-If you are using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.
+If you are using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.
-However, there are limitations to consider:
+However, there are limitations to consider:
* Regional availability: Azure Database for MariaDB is available in France Central, UAE North, and Germany Central. However, their paired regions are not available.
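For illustration only (not in the source article), creating a cross-region read replica might look like the following sketch; all names and the target region are placeholders, and the availability of the `--location` parameter on `az mariadb server replica create` is an assumption about your CLI version.

```bash
# Create a read replica of mydemoserver in another region, for example its paired
# region (placeholder names; --location support is an assumption).
az mariadb server replica create \
  --name mydemoserver-replica \
  --resource-group myresourcegroup \
  --source-server mydemoserver \
  --location westus
```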
mariadb Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-security.md
Title: Security - Azure Database for MariaDB description: An overview of the security features in Azure Database for MariaDB.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Security in Azure Database for MariaDB
There are multiple layers of security that are available to protect the data on
## Information protection and encryption ### In-transit+ Azure Database for MariaDB secures your data by encrypting data in-transit with Transport Layer Security. Encryption (SSL/TLS) is enforced by default. ### At-rest
-The Azure Database for MariaDB service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, are encrypted on disk, with the exception of temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. Storage encryption is always on and can't be disabled.
+The Azure Database for MariaDB service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, are encrypted on disk, with the exception of temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. Storage encryption is always on and can't be disabled.
## Network security
-Connections to an Azure Database for MariaDB server are first routed through a regional gateway. The gateway has a publicly accessible IP, while the server IP addresses are protected. For more information about the gateway, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
-A newly created Azure Database for MariaDB server has a firewall that blocks all external connections. Though they reach the gateway, they are not allowed to connect to the server.
+Connections to an Azure Database for MariaDB server are first routed through a regional gateway. The gateway has a publicly accessible IP, while the server IP addresses are protected. For more information about the gateway, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
+
+A newly created Azure Database for MariaDB server has a firewall that blocks all external connections. Though they reach the gateway, they are not allowed to connect to the server.
### IP firewall rules+ IP firewall rules grant access to servers based on the originating IP address of each request. See the [firewall rules overview](concepts-firewall-rules.md) for more information. ### Virtual network firewall rules
-Virtual network service endpoints extend your virtual network connectivity over the Azure backbone. Using virtual network rules you can enable your Azure Database for MariaDB server to allow connections from selected subnets in a virtual network. For more information, see the [virtual network service endpoint overview](concepts-data-access-security-vnet.md).
+Virtual network service endpoints extend your virtual network connectivity over the Azure backbone. Using virtual network rules you can enable your Azure Database for MariaDB server to allow connections from selected subnets in a virtual network. For more information, see the [virtual network service endpoint overview](concepts-data-access-security-vnet.md).
## Access management While creating the Azure Database for MariaDB server, you provide credentials for an administrator user. This administrator can be used to create additional MariaDB users. - ## Threat protection You can opt in to [Advanced Threat Protection](../security-center/defender-for-databases-introduction.md) which detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit servers.
-[Audit logging](concepts-audit-logs.md) is available to track activity in your databases.
-
+[Audit logging](concepts-audit-logs.md) is available to track activity in your databases.
## Next steps-- Enable firewall rules for [IPs](concepts-firewall-rules.md) or [virtual networks](concepts-data-access-security-vnet.md)+
+- Enable firewall rules for [IPs](concepts-firewall-rules.md) or [virtual networks](concepts-data-access-security-vnet.md)
mariadb Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-server-logs.md
Title: Slow query logs - Azure Database for MariaDB description: Describes the logs available in Azure Database for MariaDB, and the available parameters for enabling different logging levels.+ - Previously updated : 11/6/2020 Last updated : 06/24/2022 # Slow query logs in Azure Database for MariaDB+ In Azure Database for MariaDB, the slow query log is available to users. Access to the transaction log is not supported. The slow query log can be used to identify performance bottlenecks for troubleshooting. For more information about the slow query log, see the MariaDB documentation for [slow query log](https://mariadb.com/kb/en/library/slow-query-log-overview/).
-When [Query Store](concepts-query-store.md) is enabled on your server, you may see the queries like "`CALL mysql.az_procedure_collect_wait_stats (900, 30);`" logged in your slow query logs. This behavior is expected as the Query Store feature collects statistics about your queries.
+When [Query Store](concepts-query-store.md) is enabled on your server, you may see queries like "`CALL mysql.az_procedure_collect_wait_stats (900, 30);`" logged in your slow query logs. This behavior is expected as the Query Store feature collects statistics about your queries.
## Configure slow query logging
-By default the slow query log is disabled. To enable it, set `slow_query_log` to ON. This can be enabled using the Azure portal or Azure CLI.
+
+By default the slow query log is disabled. To enable it, set `slow_query_log` to ON. This can be enabled using the Azure portal or Azure CLI.
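A rough CLI sketch (placeholder names, not from the source article) might look like this; `long_query_time` is the standard slow-query threshold parameter and is assumed here to be configurable on your server:

```bash
# Turn on the slow query log (placeholder resource group and server names).
az mariadb server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name slow_query_log \
  --value ON

# Log statements that run longer than 10 seconds (long_query_time is an assumption).
az mariadb server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name long_query_time \
  --value 10
```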
Other parameters you can adjust include:
Other parameters you can adjust include:
- **log_slow_admin_statements**: if ON includes administrative statements like ALTER_TABLE and ANALYZE_TABLE in the statements written to the slow_query_log. - **log_queries_not_using_indexes**: determines whether queries that do not use indexes are logged to the slow_query_log - **log_throttle_queries_not_using_indexes**: This parameter limits the number of non-index queries that can be written to the slow query log. This parameter takes effect when log_queries_not_using_indexes is set to ON.-- **log_output**: if "File", allows the slow query log to be written to both the local server storage and to Azure Monitor Diagnostic Logs. If "None", the slow query log will only be written to Azure Monitor Diagnostics Logs.
+- **log_output**: if "File", allows the slow query log to be written to both the local server storage and to Azure Monitor Diagnostic Logs. If "None", the slow query log will only be written to Azure Monitor Diagnostics Logs.
> [!IMPORTANT] > If your tables are not indexed, setting the `log_queries_not_using_indexes` and `log_throttle_queries_not_using_indexes` parameters to ON may affect MariaDB performance since all queries running against these non-indexed tables will be written to the slow query log.<br><br>
-> If you plan on logging slow queries for an extended period of time, it is recommended to set `log_output` to "None". If set to "File", these logs are written to the local server storage and can affect MariaDB performance.
+> If you plan on logging slow queries for an extended period of time, it is recommended to set `log_output` to "None". If set to "File", these logs are written to the local server storage and can affect MariaDB performance.
See the MariaDB [slow query log documentation](https://mariadb.com/kb/en/library/slow-query-log-overview/) for full descriptions of the slow query log parameters. ## Access slow query logs+ There are two options for accessing slow query logs in Azure Database for MariaDB: local server storage or Azure Monitor Diagnostic Logs. This is set using the `log_output` parameter.
-For local server storage, you can list and download slow query logs using the Azure portal or the Azure CLI. In the Azure portal, navigate to your server in the Azure portal. Under the **Monitoring** heading, select the **Server Logs** page. For more information on Azure CLI, see [Configure and access server logs using Azure CLI](howto-configure-server-logs-cli.md).
+For local server storage, you can list and download slow query logs using the Azure portal or the Azure CLI. In the Azure portal, navigate to your server in the Azure portal. Under the **Monitoring** heading, select the **Server Logs** page. For more information on Azure CLI, see [Configure and access server logs using Azure CLI](howto-configure-server-logs-cli.md).
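As a hedged sketch only (placeholder names; the `az mariadb server-logs` command group is assumed to be available in your CLI version):

```bash
# List the slow query log files stored on the server, then download one of them
# (placeholder names; <log-file-name> comes from the list output).
az mariadb server-logs list \
  --resource-group myresourcegroup \
  --server-name mydemoserver

az mariadb server-logs download \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name <log-file-name>
```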
Azure Monitor Diagnostic Logs allows you to pipe slow query logs to Azure Monitor Logs (Log Analytics), Azure Storage, or Event Hubs. See [below](concepts-server-logs.md#diagnostic-logs) for more information. ## Local server storage log retention
-When logging to the server's local storage, logs are available for up to seven days from their creation. If the total size of the available logs exceeds 7 GB, then the oldest files are deleted until space is available. The 7 GB storage limit for the server logs is available free of cost and cannot be extended.
+
+When logging to the server's local storage, logs are available for up to seven days from their creation. If the total size of the available logs exceeds 7 GB, then the oldest files are deleted until space is available. The 7 GB storage limit for the server logs is available free of cost and cannot be extended.
Logs are rotated every 24 hours or 7 GB, whichever comes first.
Logs are rotated every 24 hours or 7 GB, whichever comes first.
> The above log retention does not apply to logs that are piped using Azure Monitor Diagnostic Logs. You can change the retention period for the data sinks being emitted to (ex. Azure Storage). ## Diagnostic logs+ Azure Database for MariaDB is integrated with Azure Monitor Diagnostic Logs. Once you have enabled slow query logs on your MariaDB server, you can choose to have them emitted to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about how to enable diagnostic logs, see the how to section of the [diagnostic logs documentation](../azure-monitor/essentials/platform-logs-overview.md). The following table describes what's in each log. Depending on the output method, the fields included and the order in which they appear may vary.
Once your slow query logs are piped to Azure Monitor Logs through Diagnostic Log
| where Category == 'MySqlSlowLogs' | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s | where query_time_d > 10
- ```
-
+ ```
+ ## Next Steps+ - [How to configure slow query logs from the Azure portal](howto-configure-server-logs-portal.md) - [How to configure slow query logs from the Azure CLI](howto-configure-server-logs-cli.md)
mariadb Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-server-parameters.md
Title: Server parameters - Azure Database for MariaDB description: This topic provides guidelines for configuring server parameters in Azure Database for MariaDB.+ - Previously updated : 6/25/2020 Last updated : 06/24/2022 # Server parameters in Azure Database for MariaDB This article provides considerations and guidelines for configuring server parameters in Azure Database for MariaDB.
-## What are server parameters?
+## What are server parameters?
The MariaDB engine provides many different server variables/parameters that can be used to configure and tune engine behavior. Some parameters can be set dynamically during runtime while others are "static", requiring a server restart in order to apply.
Review the [MariaDB documentation](https://mariadb.com/kb/en/server-system-varia
### query_cache_size
-The query cache is enabled by default in MariaDB with the `have_query_cache` parameter.
+The query cache is enabled by default in MariaDB with the `have_query_cache` parameter.
Review the [MariaDB documentation](https://mariadb.com/kb/en/server-system-variables/#query_cache_size) to learn more about this parameter.
mariadb Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-servers.md
Title: Servers - Azure Database for MariaDB description: This topic provides considerations and guidelines for working with Azure Database for MariaDB servers.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Server concepts in Azure Database for MariaDB+ This article provides considerations and guidelines for working with Azure Database for MariaDB servers. ## What is an Azure Database for MariaDB server?
The following elements help ensure safe access to your database.
| **SSL** | The service supports enforcing SSL connections between your applications and your database server. See [Configure SSL connectivity in your application to securely connect to Azure Database for MariaDB](./howto-configure-ssl.md). | ## Stop/Start an Azure Database for MariaDB (Preview)+ Azure Database for MariaDB gives you the ability to **Stop** the server when not in use and **Start** the server when you resume activity. This is essentially done to save costs on the database servers and only pay for the resource when in use. This becomes even more important for dev-test workloads and when you are only using the server for part of the day. When you stop the server, all active connections will be dropped. Later, when you want to bring the server back online, you can either use the [Azure portal](../mysql/how-to-stop-start-server.md) or [CLI](../mysql/how-to-stop-start-server.md). When the server is in the **Stopped** state, the server's compute is not billed. However, storage continues to be billed as the server's storage remains to ensure that data files are available when the server is started again.
When the server is in the **Stopped** state, the server's compute is not billed.
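As an illustrative sketch only (placeholder names; the `az mariadb server stop` and `az mariadb server start` commands are assumed to be available for this preview feature in your CLI version):

```bash
# Stop the server while it is idle, then start it again when needed
# (placeholder resource group and server names).
az mariadb server stop --resource-group myresourcegroup --name mydemoserver
az mariadb server start --resource-group myresourcegroup --name mydemoserver
```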
While the server is stopped, no management operations can be performed on the server. To change any configuration settings on the server, you will need to [start the server](../mysql/how-to-stop-start-server.md). ### Limitations of Stop/start operation+ - Not supported with read replica configurations (both source and replicas). ## How do I manage a server?+ You can manage Azure Database for MariaDB servers by using the Azure portal or the Azure CLI. ## Next steps+ - For an overview of the service, see [Azure Database for MariaDB Overview](./overview.md) - For information about specific resource quotas and limitations based on your **service tier**, see [Service tiers](./concepts-pricing-tiers.md)
mariadb Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-ssl-connection-security.md
Title: SSL/TLS connectivity - Azure Database for MariaDB description: Information for configuring Azure Database for MariaDB and associated applications to properly use SSL connections+ - Previously updated : 07/09/2020 Last updated : 06/24/2022 # SSL/TLS connectivity in Azure Database for MariaDB+ Azure Database for MariaDB supports connecting your database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and your application. >[!NOTE]
Azure Database for MariaDB supports connecting your database server to client ap
> SSL root certificate is set to expire starting February 15, 2021 (02/15/2021). Please update your application to use the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). To learn more, see [planned certificate updates](concepts-certificate-rotation.md) ## Default settings+ By default, the database service should be configured to require SSL connections when connecting to MariaDB. We recommend that you avoid disabling the SSL option whenever possible. When provisioning a new Azure Database for MariaDB server through the Azure portal and CLI, enforcement of SSL connections is enabled by default.
-In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. Currently customers can **only use** the predefined certificate to connect to an Azure Database for MariaDB server which is located at https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem.
+In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. Currently, customers can **only use** the predefined certificate, available at https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem, to connect to an Azure Database for MariaDB server.
Similarly, the following links point to the certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
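For illustration (the connection details are placeholders and not part of the source article), downloading the predefined certificate referenced above and connecting with the `mysql` client over TLS might look like this:

```bash
# Download the predefined CA certificate and connect over TLS
# (placeholder host and user names).
wget https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
mysql -h mydemoserver.mariadb.database.azure.com \
  -u myadmin@mydemoserver -p \
  --ssl-ca=BaltimoreCyberTrustRoot.crt.pem
```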
Azure Database for MariaDB provides the ability to enforce the TLS version for t
| TLS1_1 | TLS 1.1, TLS 1.2 and higher | | TLS1_2 | TLS version 1.2 and higher | - For example, setting the value of Minimum TLS setting version to TLS 1.0 means your server will allow connections from clients using TLS 1.0, 1.1, and 1.2+. Alternatively, setting this to 1.2 means that you only allow connections from clients using TLS 1.2+ and all connections with TLS 1.0 and TLS 1.1 will be rejected. > [!Note]
As part of the SSL/TLS communication, the cipher suites are validated and only s
* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 ## Next steps+ - Learn more about [server firewall rules](concepts-firewall-rules.md) - Learn how to [configure SSL](howto-configure-ssl.md) - Learn how to [configure TLS](howto-tls-configurations.md)
mariadb Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-supported-versions.md
Title: Supported versions - Azure Database for MariaDB description: Learn which versions of the MariaDB server are supported in the Azure Database for MariaDB service.+ - Previously updated : 7/20/2020 Last updated : 06/24/2022 # Supported Azure Database for MariaDB server versions
Patch version: 10.3.23
Refer to the [MariaDB documentation](https://mariadb.com/kb/en/mariadb-10323-release-notes/) to learn more about improvements and fixes in this version. ## Managing updates and upgrades
-The service automatically manages upgrades for patch updates. For example, 10.2.21 to 10.2.23.
+
+The service automatically manages upgrades for patch updates. For example, 10.2.21 to 10.2.23.
Currently, minor and major version upgrades aren't supported. For example, upgrading from MariaDB 10.2 to MariaDB 10.3 isn't supported. If you'd like to upgrade from 10.2 to 10.3, use [dump and restore](./howto-migrate-dump-restore.md) to move your databases to a server that was created with the new engine version.
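As a minimal sketch of that dump-and-restore path (server names, credentials, and the database name are placeholders; the linked article documents the supported procedure in full):

```bash
# Dump the database from the existing MariaDB 10.2 server
mysqldump -h mydemoserver102.mariadb.database.azure.com -u myadmin@mydemoserver102 -p mydatabase > mydatabase_backup.sql

# Restore it into a database that already exists on a server created with the 10.3 engine
mysql -h mydemoserver103.mariadb.database.azure.com -u myadmin@mydemoserver103 -p mydatabase < mydatabase_backup.sql
```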
mariadb Connect Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/connect-workbench.md
Title: 'Quickstart: Connect MySQL Workbench - Azure Database for MariaDB' description: This quickstart provides the steps to use MySQL Workbench to connect to and query data from Azure Database for MariaDB.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Quickstart: Azure Database for MariaDB: Use MySQL Workbench to connect and query data
-This quickstart demonstrates how to connect to an Azure Database for MariaDB instance by using MySQL Workbench.
+This quickstart demonstrates how to connect to an Azure Database for MariaDB instance by using MySQL Workbench.
## Prerequisites
Get the connection information that's required to connect to the Azure Database
To connect to an Azure Database for MariaDB server by using MySQL Workbench:
-1. Open MySQL Workbench on your computer.
+1. Open MySQL Workbench on your computer.
2. In the **Setup New Connection** dialog box, on the **Parameters** tab, enter the following information:
To connect to an Azure Database for MariaDB server by using MySQL Workbench:
![Set up a new connection](./media/connect-workbench/2-setup-new-connection.png)
-3. To check that all parameters are configured correctly, select **Test Connection**.
+3. To check that all parameters are configured correctly, select **Test Connection**.
-4. Select **OK** to save the connection.
+4. Select **OK** to save the connection.
5. Under **MySQL Connections**, select the tile that corresponds to your server. Wait for the connection to be established. A new SQL tab opens with a blank editor where you can type your queries.
-
+ > [!NOTE] > By default, SSL connection security is required and is enforced on your Azure Database for MariaDB server. Although typically no additional configuration for SSL certificates is required for MySQL Workbench to connect to your server, we recommend binding the SSL CA certificate with MySQL Workbench. If you need to disable SSL, on the server overview page in the Azure portal, select **Connection security** from the menu. For **Enforce SSL connection**, select **Disabled**.
To connect to an Azure Database for MariaDB server by using MySQL Workbench:
1. Copy and paste the following sample SQL code into the page of a blank SQL tab to illustrate some sample data. This code creates an empty database named **quickstartdb**. Then, it creates a sample table named **inventory**. The code inserts some rows, and then reads the rows. It changes the data with an update statement, and then reads the rows again. Finally, the code deletes a row, and then reads the rows again.
-
+ ```sql -- Create a database -- DROP DATABASE IF EXISTS quickstartdb; CREATE DATABASE quickstartdb; USE quickstartdb;
-
+ -- Create a table and insert rows DROP TABLE IF EXISTS inventory; CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER); INSERT INTO inventory (name, quantity) VALUES ('banana', 150); INSERT INTO inventory (name, quantity) VALUES ('orange', 154); INSERT INTO inventory (name, quantity) VALUES ('apple', 100);
-
+ -- Read SELECT * FROM inventory;
-
+ -- Update UPDATE inventory SET quantity = 200 WHERE id = 1; SELECT * FROM inventory;
-
+ -- Delete DELETE FROM inventory WHERE id = 2; SELECT * FROM inventory; ``` The screenshot shows an example of the SQL code in MySQL Workbench and the output after it runs:
-
+ ![Select the MySQL Workbench SQL tab to run sample SQL code](media/connect-workbench/3-workbench-sql-tab.png) 2. To run the sample SQL code, on the **SQL File** tab, select the lightning bolt icon on the toolbar. 3. Note the three tabbed results in the **Result Grid** section in the middle of the page.
-4. Note the **Output** list at the bottom of the page. The status of each command is shown.
+4. Note the **Output** list at the bottom of the page. The status of each command is shown.
In this quickstart, you connected to Azure Database for MariaDB by using MySQL Workbench, and you queried data by using the SQL language. <!-- ## Next steps+ > [!div class="nextstepaction"] > [Migrate your database using Export and Import](./concepts-migrate-import-export.md) -->
mariadb Howto Alert Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-alert-metric.md
Title: Configure metric alerts - Azure portal - Azure Database for MariaDB description: This article describes how to configure and access metric alerts for Azure Database for MariaDB from the Azure portal.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Use the Azure portal to set up alerts on metrics for Azure Database for MariaDB
You can configure and get information about alert rules using:
* [Azure Monitor REST API](/rest/api/monitor/metricalerts) ## Create an alert rule on a metric+ 1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for MariaDB server you want to monitor. 2. Under the **Monitoring** section of the sidebar, select **Alerts** as shown:
You can configure and get information about alert rules using:
5. Within the **Condition** section, select **Add condition**. 6. Select a metric from the list of signals to be alerted on. In this example, select "Storage percent".
-
+ ![Select metric](./media/howto-alert-metric/6-configure-signal-logic.png) 7. Configure the alert logic including the **Condition** (ex. "Greater than"), **Threshold** (ex. 85 percent), **Time Aggregation**, **Period** of time the metric rule must be satisfied before the alert triggers (ex. "Over the last 30 minutes"), and **Frequency**.
-
+ Select **Done** when complete. ![Select metric 2](./media/howto-alert-metric/7-set-threshold-time.png)
You can configure and get information about alert rules using:
9. Fill out the "Add action group" form with a name, short name, subscription, and resource group. 10. Configure an **Email/SMS/Push/Voice** action type.
-
+ Choose "Email Azure Resource Manager Role" to select subscription Owners, Contributors, and Readers to receive notifications.
-
+ Optionally, provide a valid URI in the **Webhook** field if you want it called when the alert fires. Select **OK** when completed.
You can configure and get information about alert rules using:
11. Specify an Alert rule name, Description, and Severity.
- ![Action group 2](./media/howto-alert-metric/11-name-description-severity.png)
+ ![Action group 2](./media/howto-alert-metric/11-name-description-severity.png)
12. Select **Create alert rule** to create the alert. Within a few minutes, the alert is active and triggers as previously described. ## Manage your alerts+ Once you have created an alert, you can select it and do the following actions: * View a graph showing the metric threshold and the actual values from the previous day relevant to this alert. * **Edit** or **Delete** the alert rule. * **Disable** or **Enable** the alert, if you want to temporarily stop or resume receiving notifications. - ## Next steps+ * Learn more about [configuring webhooks in alerts](../azure-monitor/alerts/alerts-webhooks.md).
-* Get an [overview of metrics collection](../azure-monitor/data-platform.md) to make sure your service is available and responsive.
+* Get an [overview of metrics collection](../azure-monitor/data-platform.md) to make sure your service is available and responsive.
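For readers who prefer scripting over the portal steps above, a hedged Azure CLI sketch of an equivalent alert rule follows (the metric name `storage_percent`, the 85 percent threshold, the 30-minute window, and all resource names are assumptions for illustration):

```azurecli-interactive
# Create a metric alert that fires when storage usage stays above 85 percent over a 30-minute window
az monitor metrics alert create \
  --name storage-percent-alert \
  --resource-group myresourcegroup \
  --scopes $(az mariadb server show --resource-group myresourcegroup --name mydemoserver --query id -o tsv) \
  --condition "avg storage_percent > 85" \
  --window-size 30m \
  --evaluation-frequency 5m \
  --action myactiongroup \
  --description "Storage percent is above 85"
```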
mariadb Howto Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-auto-grow-storage-cli.md
Title: Auto grow storage - Azure CLI - Azure Database for MariaDB description: This article describes how you can enable auto grow storage using the Azure CLI in Azure Database for MariaDB.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Auto-grow Azure Database for MariaDB storage using the Azure CLI+ This article describes how you can configure an Azure Database for MariaDB server's storage to grow without impacting the workload. A server that is [reaching the storage limit](concepts-pricing-tiers.md#reaching-the-storage-limit) is set to read-only. If storage auto grow is enabled, then for servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified [here](concepts-pricing-tiers.md#storage) apply.
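The hunk above omits the command itself; as a hedged sketch (the `--auto-grow` flag and server names are assumptions), enabling auto grow on an existing server typically looks like this:

```azurecli-interactive
# Enable storage auto grow on an existing Azure Database for MariaDB server
az mariadb server update --resource-group myresourcegroup --name mydemoserver --auto-grow Enabled
```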
mariadb Howto Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-auto-grow-storage-portal.md
Title: Auto grow storage - Azure portal - Azure Database for MariaDB description: This article describes how you can enable auto grow storage for Azure Database for MariaDB using Azure portal+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Auto grow storage in Azure Database for MariaDB using the Azure portal+ This article describes how you can configure an Azure Database for MariaDB server's storage to grow without impacting the workload. When a server reaches the allocated storage limit, the server is marked as read-only. However, if you enable storage auto grow, the server storage increases to accommodate the growing data. For servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified [here](concepts-pricing-tiers.md#storage) apply. ## Prerequisites+ To complete this how-to guide, you need: - An [Azure Database for MariaDB server](./quickstart-create-mariadb-server-database-using-azure-portal.md)
-## Enable storage auto grow
+## Enable storage auto grow
Follow these steps to set MariaDB server storage auto grow: 1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MariaDB server.
-2. On the MariaDB server page, under **Settings** heading, click **Pricing tier** to open the pricing tier page.
+2. On the MariaDB server page, under the **Settings** heading, select **Pricing tier** to open the pricing tier page.
3. In the Auto-growth section, select **Yes** to enable storage auto grow. ![Azure Database for MariaDB - Settings_Pricing_tier - Auto-growth](./media/howto-auto-grow-storage-portal/3-auto-grow.png)
-4. Click **OK** to save the changes.
+4. Select **OK** to save the changes.
5. A notification will confirm that auto grow was successfully enabled.
mariadb Howto Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-auto-grow-storage-powershell.md
Title: Auto grow storage - Azure PowerShell - Azure Database for MariaDB description: This article describes how you can enable auto grow storage using PowerShell in Azure Database for MariaDB.+ - Previously updated : 5/26/2020 Last updated : 06/24/2022 # Auto grow storage in Azure Database for MariaDB server using PowerShell
mariadb Howto Configure Audit Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-audit-logs-cli.md
Title: Access audit logs - Azure CLI - Azure Database for MariaDB description: This article describes how to configure and access the audit logs in Azure Database for MariaDB from the Azure CLI.+ - Previously updated : 05/06/2022 Last updated : 06/24/2022 - devx-track-azurecli - kr2b-contr-experiment
To complete this guide:
>[!IMPORTANT] > It is recommended to only log the event types and users required for your auditing purposes to ensure your server's performance is not heavily impacted.
-Enable and configure audit logging using the following steps:
+Enable and configure audit logging using the following steps:
1. Turn on audit logs by setting the **audit_log_enabled** parameter to "ON".
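A hedged sketch of that step, reusing the `az mariadb server configuration set` command that appears elsewhere in this digest (the parameter name `audit_log_enabled` and the server names are assumptions):

```azurecli-interactive
# Turn on audit logging at the server level
az mariadb server configuration set --resource-group myresourcegroup --server mydemoserver --name audit_log_enabled --value ON
```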
mariadb Howto Configure Audit Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-audit-logs-portal.md
Title: Access audit logs - Azure portal - Azure Database for MariaDB description: This article describes how to configure and access the audit logs in Azure Database for MariaDB from the Azure portal.+ - Previously updated : 6/24/2020 Last updated : 06/24/2022 # Configure and access audit logs in the Azure portal
Enable and configure audit logging.
1. Add any MariaDB users to be excluded from logging by updating the **audit_log_exclude_users** parameter. Specify users by providing their MariaDB user name. ![Audit log exclude users](./media/howto-configure-audit-logs-portal/audit-log-exclude-users.png)
-1. Once you have changed the parameters, you can click **Save**. Or you can **Discard** your changes.
+1. Once you have changed the parameters, you can select **Save**. Or you can **Discard** your changes.
![Save](./media/howto-configure-audit-logs-portal/save-parameters.png) ## Set up diagnostic logs 1. Under the **Monitoring** section in the sidebar, select **Diagnostic settings**.
-1. Click on "+ Add diagnostic setting"
+1. Select "+ Add diagnostic setting".
![Add diagnostic setting](./media/howto-configure-audit-logs-portal/add-diagnostic-setting.png) 1. Provide a diagnostic setting name.
Enable and configure audit logging.
1. Select "MySqlAuditLogs" as the log type. ![Configure diagnostic setting](./media/howto-configure-audit-logs-portal/configure-diagnostic-setting.png)
-1. Once you've configured the data sinks to pipe the audit logs to, you can click **Save**.
+1. Once you've configured the data sinks to pipe the audit logs to, you can select **Save**.
![Save diagnostic setting](./media/howto-configure-audit-logs-portal/save-diagnostic-setting.png) 1. Access the audit logs by exploring them in the data sinks you configured. It may take up to 10 minutes for the logs to appear.
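If you'd rather script the same diagnostic setting, a hedged Azure CLI sketch follows (the workspace ID is a placeholder; the log category `MySqlAuditLogs` is taken from the step above):

```azurecli-interactive
# Route MySqlAuditLogs to a Log Analytics workspace
az monitor diagnostic-settings create \
  --name audit-logs-setting \
  --resource $(az mariadb server show --resource-group myresourcegroup --name mydemoserver --query id -o tsv) \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"category":"MySqlAuditLogs","enabled":true}]'
```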
Enable and configure audit logging.
## Next steps - Learn more about [audit logs](concepts-audit-logs.md) in Azure Database for MariaDB-- Learn how to configure audit logs in the [Azure CLI](howto-configure-audit-logs-cli.md)
+- Learn how to configure audit logs in the [Azure CLI](howto-configure-audit-logs-cli.md)
mariadb Howto Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-privatelink-cli.md
Title: Private Link - Azure CLI - Azure Database for MariaDB description: Learn how to configure private link for Azure Database for MariaDB from Azure CLI+ - Previously updated : 01/09/2020 Last updated : 06/24/2022 # Create and manage Private Link for Azure Database for MariaDB using CLI
A Private Endpoint is the fundamental building block for private link in Azure.
## Prerequisites -- You need an [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-cli.md).
+- You need an [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-cli.md).
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
az group create --name myResourceGroup --location westeurope
``` ## Create a Virtual Network+ Create a Virtual Network with [az network vnet create](/cli/azure/network/vnet). This example creates a default Virtual Network named *myVirtualNetwork* with one subnet named *mySubnet*: ```azurecli-interactive az network vnet create \
- --name myVirtualNetwork \
- --resource-group myResourceGroup \
- --subnet-name mySubnet
+--name myVirtualNetwork \
+--resource-group myResourceGroup \
+--subnet-name mySubnet
``` ## Disable subnet private endpoint policies + Azure deploys resources to a subnet within a virtual network, so you need to create or update the subnet to disable private endpoint [network policies](../private-link/disable-private-endpoint-network-policy.md). Update a subnet configuration named *mySubnet* with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update): ```azurecli-interactive az network vnet subnet update \
- --name mySubnet \
- --resource-group myResourceGroup \
- --vnet-name myVirtualNetwork \
- --disable-private-endpoint-network-policies true
+--name mySubnet \
+--resource-group myResourceGroup \
+--vnet-name myVirtualNetwork \
+--disable-private-endpoint-network-policies true
``` ## Create the VM + Create a VM with az vm create. When prompted, provide a password to be used as the sign-in credentials for the VM. This example creates a VM named *myVm*: ```azurecli-interactive az vm create \
az vm create \
--name myVm \ --image Win2019Datacenter ```
- Note the public IP address of the VM. You will use this address to connect to the VM from the internet in the next step.
+Note the public IP address of the VM. You will use this address to connect to the VM from the internet in the next step.
## Create an Azure Database for MariaDB server
-Create a Azure Database for MariaDB with the az mariadb server create command. Remember that the name of your MariaDB Server must be unique across Azure, so replace the placeholder value in brackets with your own unique value:
+
+Create an Azure Database for MariaDB server with the az mariadb server create command. Remember that the name of your MariaDB Server must be unique across Azure, so replace the placeholder value in brackets with your own unique value:
```azurecli-interactive # Create a server in the resource group + az mariadb server create \ --name mydemoserver \ --resource-group myResourcegroup \
az mariadb server create \
> - Make sure that both subscriptions have the **Microsoft.DBforMariaDB** resource provider registered. For more information, refer to [resource-manager-registration][resource-manager-portal]
-Create a private endpoint for the MariaDB server in your Virtual Network:
+
+Create a private endpoint for the MariaDB server in your Virtual Network:
```azurecli-interactive az network private-endpoint create \
az network private-endpoint create \
--private-connection-resource-id $(az resource show -g myResourcegroup -n mydemoserver --resource-type "Microsoft.DBforMariaDB/servers" --query "id" -o tsv) \ --group-id mariadbServer \ --connection-name myConnection
- ```
-
+```
## Configure the Private DNS Zone
-Create a Private DNS Zone for MariDB server domain and create an association link with the Virtual Network.
+
+Create a Private DNS Zone for the MariaDB server domain and create an association link with the Virtual Network.
```azurecli-interactive az network private-dns zone create --resource-group myResourceGroup \
az network private-dns link vnet create --resource-group myResourceGroup \
--zone-name "privatelink.mariadb.database.azure.com"\ --name MyDNSLink \ --virtual-network myVirtualNetwork \
- --registration-enabled false
+ --registration-enabled false
#Query for the network interface ID + networkInterfaceId=$(az network private-endpoint show --name myPrivateEndpoint --resource-group myResourceGroup --query 'networkInterfaces[0].id' -o tsv) az resource show --ids $networkInterfaceId --api-version 2019-04-01 -o json # Copy the content for privateIPAddress and FQDN matching the Azure database for MariaDB name #Create DNS records + az network private-dns record-set a create --name mydemoserver --zone-name privatelink.mariadb.database.azure.com --resource-group myResourceGroup az network private-dns record-set a add-record --record-set-name mydemoserver --zone-name privatelink.mariadb.database.azure.com --resource-group myResourceGroup -a <Private IP Address> ```
Connect to the VM *myVm* from the internet as follows:
1. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**.
-1. Once the VM desktop appears, minimize it to go back to your local desktop.
+1. Once the VM desktop appears, minimize it to go back to your local desktop.
## Access the MariaDB server privately from the VM 1. In the Remote Desktop of *myVM*, open PowerShell.
-2. Enter ΓÇ»`nslookup mydemoserver.privatelink.mariadb.database.azure.com`.
+2. Enter `nslookup mydemoserver.privatelink.mariadb.database.azure.com`.
You'll receive a message similar to this: ```azurepowershell
Connect to the VM *myVm* from the internet as follows:
| Hostname | Select *mydemoserver.privatelink.mariadb.database.azure.com* | | Username | Enter username as *username@servername* which is provided during the MariaDB server creation. | | Password | Enter a password provided during the MariaDB server creation. |
- ||
5. Select **Test Connection** or **OK**.
Connect to the VM *myVm* from the internet as follows:
8. Close the remote desktop connection to myVm. ## Clean up resources
-When no longer needed, you can use az group delete to remove the resource group and all the resources it has:
+
+When no longer needed, you can use az group delete to remove the resource group and all the resources it contains:
```azurecli-interactive az group delete --name myResourceGroup --yes ``` ## Next steps+ Learn more about [What is Azure private endpoint](../private-link/private-endpoint-overview.md) <!-- Link references, to text, Within this same GitHub repo. -->
mariadb Howto Configure Privatelink Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-privatelink-portal.md
Title: Private Link - Azure portal - Azure Database for MariaDB description: Learn how to configure private link for Azure Database for MariaDB from Azure portal+ - Previously updated : 01/09/2020 Last updated : 06/24/2022 # Create and manage Private Link for Azure Database for MariaDB using Portal
If you don't have an Azure subscription, create a [free account](https://azure.m
> The private link feature is only available for Azure Database for MariaDB servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers. ## Sign in to Azure+ Sign in to the [Azure portal](https://portal.azure.com). ## Create an Azure VM
Sign in to the [Azure portal](https://portal.azure.com).
In this section, you will create a virtual network and the subnet to host the VM that is used to access your Private Link resource (a MariaDB server in Azure). ### Create the virtual network+ In this section, you will create a Virtual Network and the subnet to host the VM that is used to access your Private Link resource.
In this section, you will create a Virtual Network and the subnet to host the VM
| Public inbound ports | Leave the default **None**. | | **SAVE MONEY** | | | Already have a Windows license? | Leave the default **No**. |
- |||
1. Select **Next: Disks**.
In this section, you will create a Virtual Network and the subnet to host the VM
| Public IP | Leave the default **(new) myVm-ip**. | | Public inbound ports | Select **Allow selected ports**. | | Select inbound ports | Select **HTTP** and **RDP**.|
- |||
- 1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
In this section, you will create a Virtual Network and the subnet to host the VM
## Create an Azure Database for MariaDB
-In this section, you will create an Azure Database for MariaDB server in Azure.
+In this section, you will create an Azure Database for MariaDB server in Azure.
1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **Azure Database for MariaDB**.
In this section, you will create an Azure Database for MariaDB server in Azure.
| Location | Select an Azure region where you want to want your MariaDB Server to reside. | |Version | Select the database version of the MariaDB server that is required.| | Compute + Storage| Select the pricing tier that is needed for the server based on the workload. |
- |||
7. Select **OK**. 8. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. 9. When you see the Validation passed message, select **Create**.
-10. When you see the Validation passed message, select Create.
+10. When you see the Validation passed message, select Create.
> [!NOTE] > In some cases, the Azure Database for MariaDB and the VNet-subnet are in different subscriptions. In these cases, you must ensure the following configurations:
In this section, you will create an Azure Database for MariaDB server in Azure.
## Create a private endpoint
-In this section, you will create a private endpoint to the MariaDB server to it.
+In this section, you will create a private endpoint for the MariaDB server.
1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Networking** > **Private Link**. 2. In **Private Link Center - Overview**, on the option to **Build a private connection to a service**, select **Start**.
If your virtual network and Azure database for MariaDB account are in different
|**PRIVATE DNS INTEGRATION**|| |Integrate with private DNS zone |Select **Yes**. | |Private DNS Zone |Select *(New)privatelink.mariadb.database.azure.com* |
- |||
> [!Note] > Use the predefined private DNS zone for your service or provide your preferred DNS zone name. Refer to the [Azure services DNS zone configuration](../private-link/private-endpoint-dns.md) for details. 1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-2. When you see the **Validation passed** message, select **Create**.
+2. When you see the **Validation passed** message, select **Create**.
![Private Link created](media/concepts-data-access-and-security-private-link/show-mariadb-private-link.png)
If your virtual network and Azure database for MariaDB account are in different
## Connect to a VM using Remote Desktop (RDP) -
-After you've created **myVm**, connect to it from the internet as follows:
+After you've created **myVm**, connect to it from the internet as follows:
1. In the portal's search bar, enter *myVm*.
After you've created **myVm**, connect to it from the internet as follows:
1. In the Remote Desktop of *myVM*, open PowerShell.
-2. EnterΓÇ»`nslookup mydemomserver.privatelink.mariadb.database.azure.com`.
+2. Enter `nslookup mydemoserver.privatelink.mariadb.database.azure.com`.
You'll receive a message similar to this: ```azurepowershell
After you've created **myVm**, connect to it from the internet as follows:
3. Test the private link connection for the MariaDB server using any available client. The following example uses [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-installing-windows.html). - 4. In **New connection**, enter or select this information:
After you've created **myVm**, connect to it from the internet as follows:
| User name | Enter username as username@servername which is provided during the MariaDB server creation. | |Password |Enter a password provided during the MariaDB server creation. | |SSL|Select **Required**.|
- ||
5. Select **Test Connection** or **OK**.
After you've created **myVm**, connect to it from the internet as follows:
7. Close the remote desktop connection to myVm. ## Clean up resources+ When you're done using the private endpoint, MariaDB server, and the VM, delete the resource group and all of the resources it contains: 1. Enter *myResourceGroup* in the **Search** box at the top of the portal and select *myResourceGroup* from the search results.
When you're done using the private endpoint, MariaDB server, and the VM, delete
In this how-to, you created a VM on a virtual network, an Azure Database for MariaDB, and a private endpoint for private access. You connected to one VM from the internet and securely communicated to the MariaDB server using Private Link. To learn more about private endpoints, see [What is Azure private endpoint](../private-link/private-endpoint-overview.md). <!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
+[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
mariadb Howto Configure Server Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-logs-cli.md
Title: Access slow query logs - Azure CLI - Azure Database for MariaDB description: This article describes how to access the slow logs in Azure Database for MariaDB by using the Azure CLI command-line utility.+ - ms.devlang: azurecli Previously updated : 4/13/2020 Last updated : 06/24/2022 # Configure and access Azure Database for MariaDB slow query logs by using Azure CLI You can download the Azure Database for MariaDB slow query logs by using Azure CLI, the Azure command-line utility. ## Prerequisites+ To step through this how-to guide, you need: - An [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-cli.md) - The [Azure CLI](/cli/azure/install-azure-cli) or Azure Cloud Shell in the browser ## Configure logging+ You can configure the server to access the MySQL slow query log by taking the following steps: 1. Turn on slow query logging by setting the **slow\_query\_log** parameter to ON. 2. Select where to output the logs using **log\_output**. To send logs to both local storage and Azure Monitor Diagnostic Logs, select **File**. To send logs only to Azure Monitor Logs, select **None**
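A hedged sketch of those two steps with `az mariadb server configuration set` (server names are placeholders; the article's own examples rely on the same command):

```azurecli-interactive
# Turn on slow query logging
az mariadb server configuration set --resource-group myresourcegroup --server mydemoserver --name slow_query_log --value ON

# Write logs to file so they flow to local storage and Azure Monitor Diagnostic Logs
az mariadb server configuration set --resource-group myresourcegroup --server mydemoserver --name log_output --value FILE
```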
az mariadb server configuration list --resource-group myresourcegroup --server m
``` ## List logs for Azure Database for MariaDB server+ If **log_output** is configured to "File", you can access logs directly from the server's local storage. To list the available slow query log files for your server, run the [az mariadb server-logs list](/cli/azure/mariadb/server-logs#az-mariadb-server-logs-list) command. You can list the log files for server **mydemoserver.mariadb.database.azure.com** under the resource group **myresourcegroup**. Then direct the list of log files to a text file called **log\_files\_list.txt**.
You can list the log files for server **mydemoserver.mariadb.database.azure.com*
az mariadb server-logs list --resource-group myresourcegroup --server mydemoserver > log_files_list.txt ``` ## Download logs from the server+ If **log_output** is configured to "File", you can download individual log files from your server with the [az mariadb server-logs download](/cli/azure/mariadb/server-logs#az-mariadb-server-logs-download) command. Use the following example to download the specific log file for the server **mydemoserver.mariadb.database.azure.com** under the resource group **myresourcegroup** to your local environment.
az mariadb server-logs download --name mysql-slow-mydemoserver-2018110800.log --
``` ## Next steps+ - Learn about [slow query logs in Azure Database for MariaDB](concepts-server-logs.md).
mariadb Howto Configure Server Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-logs-portal.md
Title: Access slow query logs - Azure portal - Azure Database for MariaDB description: This article describes how to configure and access the slow query logs in Azure Database for MariaDB from the Azure portal.+ - Previously updated : 4/13/2020 Last updated : 06/24/2022 # Configure and access Azure Database for MariaDB slow query logs from the Azure portal
Last updated 4/13/2020
You can configure, list, and download the [Azure Database for MariaDB slow query logs](concepts-server-logs.md) from the Azure portal. ## Prerequisites+ The steps in this article require that you have an [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-portal.md). ## Configure logging
-Configure access to the slow query log.
+
+Configure access to the slow query log.
1. Sign in to the [Azure portal](https://portal.azure.com/).
Configure access to the slow query log.
3. Under the **Monitoring** section in the sidebar, select **Server logs**. ![Screenshot of Server logs options](./media/howto-configure-server-logs-portal/1-select-server-logs-configure.png)
-4. To see the server parameters, select **Click here to enable logs and configure log parameters**.
+4. To see the server parameters, select the **Click here to enable logs and configure log parameters** link.
5. Turn **slow_query_log** to **ON**.
-6. Select where to output the logs to using **log_output**. To send logs to both local storage and Azure Monitor Diagnostic Logs, select **File**.
+6. Select where to output the logs to using **log_output**. To send logs to both local storage and Azure Monitor Diagnostic Logs, select **File**.
-7. Change any other parameters needed.
+7. Change any other parameters needed.
-8. Select **Save**.
+8. Select **Save**.
:::image type="content" source="./media/howto-configure-server-logs-portal/3-save-discard.png" alt-text="Screenshot of slow query log parameters and save."::: From the **Server Parameters** page, you can return to the list of logs by closing the page. ## View list and download logs
-After logging begins, you can view a list of available slow query logs, and download individual log files.
+
+After logging begins, you can view a list of available slow query logs, and download individual log files.
1. Open the Azure portal.
After logging begins, you can view a list of available slow query logs, and down
1. Access the slow query logs by exploring them in the data sinks you configured. It can take up to 10 minutes for the logs to appear. ## Next steps+ - See [Access slow query logs in CLI](howto-configure-server-logs-cli.md) to learn how to download slow query logs programmatically. - Learn more about [slow query logs](concepts-server-logs.md) in Azure Database for MariaDB. - For more information about the parameter definitions and logging, see the MariaDB documentation on [logs](https://mariadb.com/kb/en/library/slow-query-log-overview/).
mariadb Howto Configure Server Parameters Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-parameters-cli.md
Title: Configure server parameters - Azure CLI - Azure Database for MariaDB description: This article describes how to configure the service parameters in Azure Database for MariaDB using the Azure CLI command line utility.+ - ms.devlang: azurecli Previously updated : 10/1/2020 Last updated : 06/24/2022 # Configure server parameters in Azure Database for MariaDB using the Azure CLI+ You can list, show, and update configuration parameters for an Azure Database for MariaDB server by using Azure CLI, the Azure command-line utility. A subset of engine configurations is exposed at the server level and can be modified. >[!Note] > To update server parameters globally at the server level, use the [Azure CLI](./howto-configure-server-parameters-cli.md), [PowerShell](./howto-configure-server-parameters-using-powershell.md), or the [Azure portal](./howto-server-parameters.md). ## Prerequisites+ To step through this how-to guide, you need: - [An Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-cli.md) - [Azure CLI](/cli/azure/install-azure-cli) command-line utility or use the Azure Cloud Shell in the browser. ## List server configuration parameters for Azure Database for MariaDB server+ To list all modifiable parameters in a server and their values, run the [az mariadb server configuration list](/cli/azure/mariadb/server/configuration#az-mariadb-server-configuration-list) command. You can list the server configuration parameters for the server **mydemoserver.mariadb.database.azure.com** under resource group **myresourcegroup**.
az mariadb server configuration list --resource-group myresourcegroup --server m
For the definition of each of the listed parameters, see the MariaDB reference section on [Server System Variables](https://mariadb.com/kb/en/library/server-system-variables/). ## Show server configuration parameter details+ To show details about a particular configuration parameter for a server, run the [az mariadb server configuration show](/cli/azure/mariadb/server/configuration#az-mariadb-server-configuration-show) command. This example shows details of the **slow\_query\_log** server configuration parameter for server **mydemoserver.mariadb.database.azure.com** under resource group **myresourcegroup.**
az mariadb server configuration show --name slow_query_log --resource-group myre
``` ## Modify a server configuration parameter value
-You can also modify the value of a certain server configuration parameter, which updates the underlying configuration value for the MariaDB server engine. To update the configuration, use the [az mariadb server configuration set](/cli/azure/mariadb/server/configuration#az-mariadb-server-configuration-set) command.
+
+You can also modify the value of a certain server configuration parameter, which updates the underlying configuration value for the MariaDB server engine. To update the configuration, use the [az mariadb server configuration set](/cli/azure/mariadb/server/configuration#az-mariadb-server-configuration-set) command.
To update the **slow\_query\_log** server configuration parameter of server **mydemoserver.mariadb.database.azure.com** under resource group **myresourcegroup.** ```azurecli-interactive
If you want to reset the value of a configuration parameter, omit the optional `
az mariadb server configuration set --name slow_query_log --resource-group myresourcegroup --server mydemoserver ```
-This code resets the **slow\_query\_log** configuration to the default value **OFF**.
+This code resets the **slow\_query\_log** configuration to the default value **OFF**.
## Setting parameters not listed
-If the server parameter you want to update is not listed in the Azure portal, you can optionally set the parameter at the connection level using `init_connect`. This sets the server parameters for each client connecting to the server.
+
+If the server parameter you want to update is not listed in the Azure portal, you can optionally set the parameter at the connection level using `init_connect`. This sets the server parameters for each client connecting to the server.
Update the **init\_connect** server configuration parameter of server **mydemoserver.mariadb.database.azure.com** under resource group **myresourcegroup** to set values such as character set. ```azurecli-interactive
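A hedged sketch of that update (the character-set value is an illustrative assumption):

```azurecli-interactive
# Apply a character set to every new client connection via init_connect
az mariadb server configuration set --resource-group myresourcegroup --server mydemoserver --name init_connect --value "SET character_set_names='UTF8'"
```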
az mariadb server configuration set --name time_zone --resource-group myresource
### Setting the session level time zone
-The session level time zone can be set by running the `SET time_zone` command from a tool like the MariaDB command line or MariaDB Workbench. The example below sets the time zone to the **US/Pacific** time zone.
+The session level time zone can be set by running the `SET time_zone` command from a tool like the MariaDB command line or MySQL Workbench. The example below sets the time zone to the **US/Pacific** time zone.
```sql SET time_zone = 'US/Pacific';
mariadb Howto Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-parameters-using-powershell.md
Title: Configure Azure Database for MariaDB - Azure PowerShell description: This article describes how to configure the service parameters in Azure Database for MariaDB using PowerShell.+ - ms.devlang: azurepowershell Previously updated : 05/06/2022 Last updated : 06/24/2022 - devx-track-azurepowershell - kr2b-contr-experiment
mariadb Howto Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-ssl.md
Title: Configure SSL - Azure Database for MariaDB description: Instructions for how to properly configure Azure Database for MariaDB and associated applications to correctly use SSL connections+ - Previously updated : 07/08/2020 Last updated : 06/24/2022 ms.devlang: csharp, golang, java, php, python, ruby # Configure SSL connectivity in your application to securely connect to Azure Database for MariaDB+ Azure Database for MariaDB supports connecting your Azure Database for MariaDB server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and your application. ## Obtain SSL certificate - Download the certificate needed to communicate over SSL with your Azure Database for MariaDB server from [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and save the certificate file to your local drive (this tutorial uses c:\ssl for example). **For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to BaltimoreCyberTrustRoot.crt.pem.
See the following links for certificates for servers in sovereign clouds: [Azure
## Bind SSL ### Connecting to server using MySQL Workbench over SSL
-Configure MySQL Workbench to connect securely over SSL.
-1. From the Setup New Connection dialogue, navigate to the **SSL** tab.
+Configure MySQL Workbench to connect securely over SSL.
+
+1. From the Setup New Connection dialogue, navigate to the **SSL** tab.
1. Update the **Use SSL** field to "Require".
-1. In the **SSL CA File:** field, enter the file location of the **BaltimoreCyberTrustRoot.crt.pem**.
-
+1. In the **SSL CA File:** field, enter the file location of the **BaltimoreCyberTrustRoot.crt.pem**.
+ ![Save SSL configuration](./media/howto-configure-ssl/mysql-workbench-ssl.png) For existing connections, you can bind SSL by right-clicking on the connection icon and choose edit. Then navigate to the **SSL** tab and bind the cert file. ### Connecting to server using the MySQL CLI over SSL
-Another way to bind the SSL certificate is to use the MySQL command-line interface by executing the following commands.
+
+Another way to bind the SSL certificate is to use the MySQL command-line interface by executing the following commands.
```bash mysql.exe -h mydemoserver.mariadb.database.azure.com -u Username@mydemoserver -p --ssl-mode=REQUIRED --ssl-ca=c:\ssl\BaltimoreCyberTrustRoot.crt.pem
mysql.exe -h mydemoserver.mariadb.database.azure.com -u Username@mydemoserver -p
> [!NOTE] > When using the MySQL command-line interface on Windows, you may receive an error `SSL connection error: Certificate signature check failed`. If this occurs, replace the `--ssl-mode=REQUIRED --ssl-ca={filepath}` parameters with `--ssl`.
-## Enforcing SSL connections in Azure
+## Enforcing SSL connections in Azure
### Using the Azure portal
-Using the Azure portal, visit your Azure Database for MariaDB server, and then click **Connection security**. Use the toggle button to enable or disable the **Enforce SSL connection** setting, and then click **Save**. Microsoft recommends to always enable the **Enforce SSL connection** setting for enhanced security.
+
+Using the Azure portal, visit your Azure Database for MariaDB server, and then select **Connection security**. Use the toggle button to enable or disable the **Enforce SSL connection** setting, and then select **Save**. Microsoft recommends always enabling the **Enforce SSL connection** setting for enhanced security.
![enable-ssl for MariaDB server](./media/howto-configure-ssl/enable-ssl.png) ### Using Azure CLI+ You can enable or disable the **ssl-enforcement** parameter by using Enabled or Disabled values respectively in Azure CLI. ```azurecli-interactive az mariadb server update --resource-group myresource --name mydemoserver --ssl-enforcement Enabled ``` ## Verify the SSL connection+ Execute the mysql **status** command to verify that you have connected to your MariaDB server using SSL: ```sql status ```
-Confirm the connection is encrypted by reviewing the output, which should show: **SSL: Cipher in use is AES256-SHA**
+Confirm the connection is encrypted by reviewing the output, which should show: **SSL: Cipher in use is AES256-SHA**
## Sample code+ To establish a secure connection to Azure Database for MariaDB over SSL from your application, refer to the following code samples: ### PHP+ ```php $conn = mysqli_init(); mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/BaltimoreCyberTrustRoot.crt.pem", NULL, NULL) ;
die('Failed to connect to MySQL: '.mysqli_connect_error());
} ``` ### Python (MySQLConnector Python)+ ```python try: conn = mysql.connector.connect(user='myadmin@mydemoserver',
except mysql.connector.Error as err:
print(err) ``` ### Python (PyMySQL)+ ```python conn = pymysql.connect(user='myadmin@mydemoserver', password='yourpassword',
conn = pymysql.connect(user='myadmin@mydemoserver',
``` ### Ruby+ ```ruby client = Mysql2::Client.new( :host => 'mydemoserver.mariadb.database.azure.com',
client = Mysql2::Client.new(
) ``` #### Ruby on Rails+ ```ruby default: &default adapter: mysql2
default: &default
``` ### Golang+ ```go rootCertPool := x509.NewCertPool() pem, _ := ioutil.ReadFile("/var/www/html/BaltimoreCyberTrustRoot.crt.pem")
connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true&
db, _ := sql.Open("mysql", connectionString) ``` ### Java (JDBC)+ ```java # generate truststore and keystore in code+ String importCert = " -import "+ " -alias mysqlServerCACert "+ " -file " + ssl_ca +
sun.security.tools.keytool.Main.main(importCert.trim().split("\\s+"));
sun.security.tools.keytool.Main.main(genKey.trim().split("\\s+")); # use the generated keystore and truststore + System.setProperty("javax.net.ssl.keyStore","path_to_keystore_file"); System.setProperty("javax.net.ssl.keyStorePassword","password"); System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
properties.setProperty("password", 'yourpassword');
conn = DriverManager.getConnection(url, properties); ``` ### Java (MariaDB)+ ```java # generate truststore and keystore in code+ String importCert = " -import "+ " -alias mysqlServerCACert "+ " -file " + ssl_ca +
sun.security.tools.keytool.Main.main(importCert.trim().split("\\s+"));
sun.security.tools.keytool.Main.main(genKey.trim().split("\\s+")); # use the generated keystore and truststore + System.setProperty("javax.net.ssl.keyStore","path_to_keystore_file"); System.setProperty("javax.net.ssl.keyStorePassword","password"); System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
conn = DriverManager.getConnection(url, properties);
``` ### .NET (MySqlConnector)+ ```csharp var builder = new MySqlConnectionStringBuilder {
using (var connection = new MySqlConnection(builder.ConnectionString))
} ``` - ## Next steps+ To learn about certificate expiry and rotation, refer to the [certificate rotation documentation](concepts-certificate-rotation.md)
mariadb Howto Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-connection-string-powershell.md
Title: Generate a connection string with PowerShell - Azure Database for MariaDB description: This article provides an Azure PowerShell example to generate a connection string for connecting to Azure Database for MariaDB.+ - Previously updated : 8/5/2020 Last updated : 06/24/2022 # How to generate an Azure Database for MariaDB connection string with PowerShell
mariadb Howto Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-connection-string.md
Title: Connection strings - Azure Database for MariaDB description: This document lists the currently supported connection strings for applications to connect with Azure Database for MariaDB, including ADO.NET (C#), JDBC, Node.js, ODBC, PHP, Python, and Ruby.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # How to connect applications to Azure Database for MariaDB+ This topic lists the connection string types that are supported by Azure Database for MariaDB, together with templates and examples. You might have different parameters and settings in your connection string. - To obtain the certificate, see [How to configure SSL](./howto-configure-ssl.md).
This topic lists the connection string types that are supported by Azure Databas
- {your_user}@{servername} = the user ID format required for correct authentication. If you use only the user ID, authentication will fail.
Server= "mydemoserver.mariadb.database.azure.com"; Port=3306; Database= "wpdb";
``` ## JDBC+ ```java String url ="jdbc:mariadb://{your_host}:3306/{your_database}?useSSL=true&trustServerCertificate=true"; myDbConn = DriverManager.getConnection(url, "{username@servername}", {your_password}); ``` ## Node.js+ ```javascript var conn = mysql.createConnection({host: "{your_host}", user: "{your_username}", password: {your_password}, database: {your_database}, port: 3306, ssl:{ca:fs.readFileSync({ca-cert filename})}}); ``` ## ODBC+ ```cpp DRIVER={MARIADB ODBC 3.0 Driver}; Server="{your_host}"; Port=3306; Database={your_database}; Uid="{username@servername}"; Pwd={your_password}; sslca={ca-cert filename}; sslverify=1; ``` ## PHP+ ```php $con=mysqli_init(); mysqli_ssl_set($con, NULL, NULL, {ca-cert filename}, NULL, NULL); mysqli_real_connect($con, "{your_host}", "{username@servername}", {your_password}, {your_database}, 3306); ``` ## Python+ ```python cnx = mysql.connector.connect(user="{username@servername}", password={your_password}, host="{your_host}", port=3306, database={your_database}, ssl_ca={ca-cert filename}, ssl_verify_cert=true) ``` ## Ruby+ ```ruby client = Mysql2::Client.new(username: "{username@servername}", password: {your_password}, database: {your_database}, host: "{your_host}", port: 3306, sslca:{ca-cert filename}, sslverify:false, sslcipher:'AES256-SHA') ``` ## Get the connection string details from the Azure portal
-In the [Azure portal](https://portal.azure.com), go to your Azure Database for MariaDB server, and then click **Connection strings** to get the string list for your instance:
+
+In the [Azure portal](https://portal.azure.com), go to your Azure Database for MariaDB server, and then select **Connection strings** to get the string list for your instance:
![The Connection strings pane in the Azure portal](./media/howto-connection-strings/connection-strings-on-portal.png) The string provides details such as the driver, server, and other database connection parameters. Modify these examples to use your own parameters, such as database name, password, and so on. You can then use this string to connect to the server from your code and applications. <!-- ## Next steps+ - For more information about connection libraries, see [Concepts - Connection libraries](./concepts-connection-libraries.md). - -->
mariadb Howto Create Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-create-manage-server-portal.md
Title: Manage server - Azure portal - Azure Database for MariaDB description: Learn how to manage an Azure Database for MariaDB server from the Azure portal.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Manage an Azure Database for MariaDB server using the Azure portal+ This article shows you how to manage your Azure Database for MariaDB servers. Management tasks include compute and storage scaling, admin password reset, and viewing server details. ## Sign in+ Sign in to the [Azure portal](https://portal.azure.com). ## Create a server+ Visit the [quickstart](quickstart-create-mariadb-server-database-using-azure-portal.md) to learn how to create and get started with an Azure Database for MariaDB server. ## Scale compute and storage
After server creation you can scale between the General Purpose and Memory Optim
### Scale between General Purpose and Memory Optimized tiers
-You can scale from General Purpose to Memory Optimized and vice-versa. Changing to and from the Basic tier after server creation is not supported.
+You can scale from General Purpose to Memory Optimized and vice-versa. Changing to and from the Basic tier after server creation is not supported.
1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
-2. Select **General Purpose** or **Memory Optimized**, depending on what you are scaling to.
+2. Select **General Purpose** or **Memory Optimized**, depending on what you are scaling to.
![Screenshot shows the Azure portal with Pricing tier selected and a value of Memory Optimized selected.](./media/howto-create-manage-server-portal/change-pricing-tier.png)
You can scale from General Purpose to Memory Optimized and vice-versa. Changing
4. Select **OK** to save changes. - ### Scale vCores up or down 1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
You can scale from General Purpose to Memory Optimized and vice-versa. Changing
3. Select **OK** to save changes. - ### Scale storage up 1. Select your server in the Azure portal. Select **Pricing tier**, located in the **Settings** section.
You can scale from General Purpose to Memory Optimized and vice-versa. Changing
3. Select **OK** to save changes. - ## Update admin password+ You can change the administrator role's password using the Azure portal. 1. Select your server in the Azure portal. In the **Overview** window select **Reset password**.
You can change the administrator role's password using the Azure portal.
3. Select **OK** to save the new password. - ## Delete a server
-You can delete your server if you no longer need it.
+You can delete your server if you no longer need it.
1. Select your server in the Azure portal. In the **Overview** window select **Delete**.
You can delete your server if you no longer need it.
3. Select **Delete**. - ## Next steps+ - Learn about [backups and server restore](howto-restore-server-portal.md) - Learn about [tuning and monitoring options in Azure Database for MariaDB](concepts-monitoring.md)
mariadb Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-create-users.md
Title: Create users - Azure Database for MariaDB description: This article describes how you can create new user accounts to interact with an Azure Database for MariaDB server.+ - Previously updated : 01/18/2021 Last updated : 06/24/2022 # Create users in Azure Database for MariaDB
After the Azure Database for MariaDB server is created, you can use the first se
2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as MySQL Workbench, mysql.exe, HeidiSQL, or others. If you're unsure of how to connect, see [Use MySQL Workbench to connect and query data](./connect-workbench.md)
-3. Edit and run the following SQL code. Replace your new user name for the placeholder value `new_master_user`. This syntax grants the listed privileges on all the database schemas (*.*) to the user name (new_master_user in this example).
+3. Edit and run the following SQL code. Replace the placeholder value `new_master_user` with your new user name. This syntax grants the listed privileges on all the database schemas (*.*) to the user name (new_master_user in this example).
```sql CREATE USER 'new_master_user'@'%' IDENTIFIED BY 'StrongPassword!';
-
- GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER ON *.* TO 'new_master_user'@'%' WITH GRANT OPTION;
-
+
+ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER ON *.* TO 'new_master_user'@'%' WITH GRANT OPTION;
+ FLUSH PRIVILEGES; ```
After the Azure Database for MariaDB server is created, you can use the first se
```sql USE sys;
-
+ SHOW GRANTS FOR 'new_master_user'@'%'; ``` ## Create database users 1. Get the connection information and admin user name.
- To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
+ To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as MySQL Workbench, mysql.exe, HeidiSQL, or others. If you are unsure of how to connect, see [Use MySQL Workbench to connect and query data](./connect-workbench.md) 3. Edit and run the following SQL code. Replace the placeholder value `db_user` with your intended new user name, and placeholder value `testdb` with your own database name.
- This sql code syntax creates a new database named testdb for example purposes. Then it creates a new user in the Azure Database for MariaDB service, and grants all privileges to the new database schema (testdb.\*) for that user.
+ This SQL code creates a new database named testdb for example purposes. Then it creates a new user in the Azure Database for MariaDB service, and grants all privileges on the new database schema (testdb.\*) to that user.
```sql CREATE DATABASE testdb;
-
+ CREATE USER 'db_user'@'%' IDENTIFIED BY 'StrongPassword!';
-
+ GRANT ALL PRIVILEGES ON testdb . * TO 'db_user'@'%';
-
+ FLUSH PRIVILEGES; ```
After the Azure Database for MariaDB server is created, you can use the first se
```sql USE testdb;
-
+ SHOW GRANTS FOR 'db_user'@'%'; ```
All Azure Database for MySQL servers are created with a user called "azure_super
## Next steps Open the firewall for the IP addresses of the new users' machines to enable them to connect:
-[Create and manage Azure Database for MariaDB firewall rules by using the Azure portal](howto-manage-firewall-portal.md)
+[Create and manage Azure Database for MariaDB firewall rules by using the Azure portal](howto-manage-firewall-portal.md)
<!--or [Azure CLI](howto-manage-firewall-using-cli.md).-->
mariadb Howto Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-data-in-replication.md
Title: Configure data-in Replication - Azure Database for MariaDB description: This article describes how to set up Data-in Replication in Azure Database for MariaDB.+ - Previously updated : 01/18/2021 Last updated : 06/24/2022 # Configure Data-in Replication in Azure Database for MariaDB
Review the [limitations and requirements](concepts-data-in-replication.md#limita
User accounts aren't replicated from the source server to the replica server. To provide user access to the replica server, you must manually create all accounts and corresponding privileges on the newly created Azure Database for MariaDB server.
-3. Add the source server's IP address to the replica's firewall rules.
+3. Add the source server's IP address to the replica's firewall rules.
Update firewall rules using the [Azure portal](howto-manage-firewall-portal.md) or [Azure CLI](howto-manage-firewall-cli.md).
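   For example, a minimal Azure CLI sketch of this step, where the replica name, resource group, and source IP address are placeholders for your own values:

   ```azurecli-interactive
   # Allow the source server's public IP address (placeholder shown) through the replica's firewall
   az mariadb server firewall-rule create \
       --resource-group myresourcegroup \
       --server-name mydemoreplica \
       --name AllowSourceServer \
       --start-ip-address 203.0.113.10 \
       --end-ip-address 203.0.113.10
   ```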
Review the [limitations and requirements](concepts-data-in-replication.md#limita
The following steps prepare and configure the MariaDB server hosted on-premises, in a VM, or in a cloud database service for Data-in Replication. The MariaDB server is the source in Data-in Replication.
-1. Review the [primary server requirements](concepts-data-in-replication.md#requirements) before proceeding.
+1. Review the [primary server requirements](concepts-data-in-replication.md#requirements) before proceeding.
-2. Ensure the source server allows both inbound and outbound traffic on port 3306 and that the source server has a **public IP address**, the DNS is publicly accessible, or has a fully qualified domain name (FQDN).
+2. Ensure the source server allows both inbound and outbound traffic on port 3306 and that the source server has a **public IP address**, the DNS is publicly accessible, or has a fully qualified domain name (FQDN).
Test connectivity to the source server by attempting to connect from a tool such as the MySQL command line hosted on another machine or from the [Azure Cloud Shell](../cloud-shell/overview.md) available in the Azure portal.
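   For example, a quick connectivity check from another machine might look like the following sketch; the host name and user name are placeholders for your own values:

   ```bash
   # Placeholder host and account: verify the source server accepts connections on port 3306
   mysql -h source-server.example.com -P 3306 -u syncuser -p -e "SELECT 1;"
   ```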
The following steps prepare and configure the MariaDB server hosted on-premises,
8. Get the GTID position (optional, needed for replication with GTID). Run the function [`BINLOG_GTID_POS`](https://mariadb.com/kb/en/library/binlog_gtid_pos/) to get the GTID position for the corresponding binlog file name and offset.
-
+ ```sql select BINLOG_GTID_POS('<binlog file name>', <binlog offset>); ```
The following steps prepare and configure the MariaDB server hosted on-premises,
- master_gtid_pos: GTID position from running `select BINLOG_GTID_POS('<binlog file name>', <binlog offset>);` - master_ssl_ca: CA certificate's context. If you're not using SSL, pass in an empty string.* - *We recommend passing in the master_ssl_ca parameter as a variable. For more information, see the following examples. **Examples**
mariadb Howto Deny Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-deny-public-network-access.md
Title: Deny Public Network Access - Azure portal - Azure Database for MariaDB description: Learn how to configure Deny Public Network Access using Azure portal for your Azure Database for MariaDB + - Previously updated : 03/10/2020 Last updated : 06/24/2022 # Deny Public Network Access in Azure Database for MariaDB using Azure portal
Follow these steps to set MariaDB server Deny Public Network Access:
1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MariaDB server.
-1. On the MariaDB server page, under **Settings**, click **Connection security** to open the connection security configuration page.
+1. On the MariaDB server page, under **Settings**, select **Connection security** to open the connection security configuration page.
1. In Deny Public Network Access, select **Yes** to enable deny public access for your MariaDB server. ![Azure Database for MariaDB Deny network access](./media/howto-deny-public-network-access/deny-public-network-access.PNG)
-1. Click **Save** to save the changes.
+1. Select **Save** to save the changes.
1. A notification will confirm that connection security setting was successfully enabled.
Follow these steps to set MariaDB server Deny Public Network Access:
## Next steps
-Learn about [how to create alerts on metrics](howto-alert-metric.md).
+Learn about [how to create alerts on metrics](howto-alert-metric.md).
mariadb Howto Manage Firewall Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-firewall-cli.md
Title: Manage firewall rules - Azure CLI - Azure Database for MariaDB description: This article describes how to create and manage Azure Database for MariaDB firewall rules using Azure CLI command-line.+ - ms.devlang: azurecli Previously updated : 3/18/2020 Last updated : 06/24/2022 # Create and manage Azure Database for MariaDB firewall rules by using the Azure CLI+ Server-level firewall rules can be used to manage access to an Azure Database for MariaDB Server from a specific IP address or a range of IP addresses. Using convenient Azure CLI commands, you can create, update, delete, list, and show firewall rules to manage your server. For an overview of Azure Database for MariaDB firewalls, see [Azure Database for MariaDB server firewall rules](./concepts-firewall-rules.md). Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure CLI](howto-manage-vnet-cli.md). ## Prerequisites+ * [Install Azure CLI](/cli/azure/install-azure-cli). * An [Azure Database for MariaDB server and database](quickstart-create-mariadb-server-database-using-azure-cli.md). ## Firewall rule commands:+ The **az mariadb server firewall-rule** command is used from the Azure CLI to create, delete, list, show, and update firewall rules. Commands:
Commands:
- **update**: Update an Azure MariaDB server firewall rule. ## Sign in to Azure and list your Azure Database for MariaDB Servers+ Securely connect Azure CLI with your Azure account by using the **az login** command. 1. From the command-line, run the following command:
Securely connect Azure CLI with your Azure account by using the **az login** com
``` ## List firewall rules on Azure Database for MariaDB Server + Using the server name and the resource group name, list the existing server firewall rules on the server. Use the [az mariadb server firewall list](/cli/azure/mariadb/server/firewall-rule#az-mariadb-server-firewall-rule-list) command. Notice that the server name attribute is specified in the **--server** switch and not in the **--name** switch. ```azurecli-interactive az mariadb server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver
The output lists the rules, if any, in JSON format (by default). You can use the
az mariadb server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver --output table ``` ## Create a firewall rule on Azure Database for MariaDB Server+ Using the Azure MariaDB server name and the resource group name, create a new firewall rule on the server. Use the [az mariadb server firewall create](/cli/azure/mariadb/server/firewall-rule#az-mariadb-server-firewall-rule-create) command. Provide a name for the rule, as well as the start IP and end IP (to provide access to a range of IP addresses) for the rule. ```azurecli-interactive az mariadb server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1 --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.15
az mariadb server firewall-rule create --resource-group myresourcegroup --server
> [!IMPORTANT] > This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
+>
Upon success, each create command output lists the details of the firewall rule you have created, in JSON format (by default). If there is a failure, the output shows error message text instead. ## Update a firewall rule on Azure Database for MariaDB server + Using the Azure MariaDB server name and the resource group name, update an existing firewall rule on the server. Use the [az mariadb server firewall update](/cli/azure/mariadb/server/firewall-rule#az-mariadb-server-firewall-rule-update) command. Provide the name of the existing firewall rule as input, as well as the start IP and end IP attributes to update. ```azurecli-interactive az mariadb server firewall-rule update --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1 --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.1
Upon success, the command output lists the details of the firewall rule you have
> If the firewall rule does not exist, the rule is created by the update command. ## Show firewall rule details on Azure Database for MariaDB Server+ Using the Azure MariaDB server name and the resource group name, show the existing firewall rule details from the server. Use the [az mariadb server firewall show](/cli/azure/mariadb/server/firewall-rule#az-mariadb-server-firewall-rule-show) command. Provide the name of the existing firewall rule as input. ```azurecli-interactive az mariadb server firewall-rule show --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1
az mariadb server firewall-rule show --resource-group myresourcegroup --server-n
Upon success, the command output lists the details of the firewall rule you have specified, in JSON format (by default). If there is a failure, the output shows error message text instead. ## Delete a firewall rule on Azure Database for MariaDB Server+ Using the Azure MariaDB server name and the resource group name, remove an existing firewall rule from the server. Use the [az mariadb server firewall delete](/cli/azure/mariadb/server/firewall-rule#az-mariadb-server-firewall-rule-delete) command. Provide the name of the existing firewall rule. ```azurecli-interactive az mariadb server firewall-rule delete --resource-group myresourcegroup --server-name mydemoserver --name FirewallRule1
az mariadb server firewall-rule delete --resource-group myresourcegroup --server
Upon success, there is no output. Upon failure, error message text displays. ## Next steps+ - Understand more about [Azure Database for MariaDB Server firewall rules](./concepts-firewall-rules.md). - [Create and manage Azure Database for MariaDB firewall rules using the Azure portal](./howto-manage-firewall-portal.md).-- Further secure access to your server by [creating and managing Virtual Network service endpoints and rules using the Azure CLI](howto-manage-vnet-cli.md).
+- Further secure access to your server by [creating and managing Virtual Network service endpoints and rules using the Azure CLI](howto-manage-vnet-cli.md).
mariadb Howto Manage Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-firewall-portal.md
Title: Manage firewall rules - Azure portal - Azure Database for MariaDB description: Create and manage Azure Database for MariaDB firewall rules using the Azure portal+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Create and manage Azure Database for MariaDB firewall rules by using the Azure portal+ Server-level firewall rules can be used to manage access to an Azure Database for MariaDB Server from a specified IP address or a range of IP addresses. Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure portal](howto-manage-vnet-portal.md). ## Create a server-level firewall rule in the Azure portal
-1. On the MariaDB server page, under Settings heading, click **Connection Security** to open the Connection Security page for the Azure Database for MariaDB.
+1. On the MariaDB server page, under the Settings heading, select **Connection Security** to open the Connection Security page for Azure Database for MariaDB.
![Azure portal - click Connection security](./media/howto-manage-firewall-portal/1-connection-security.png)
-2. Click **Add My IP** on the toolbar. This automatically creates a firewall rule with the public IP address of your computer, as perceived by the Azure system.
+2. Select **Add My IP** on the toolbar. This automatically creates a firewall rule with the public IP address of your computer, as perceived by the Azure system.
![Azure portal - click Add My IP](./media/howto-manage-firewall-portal/2-add-my-ip.png)
Virtual Network (VNet) rules can also be used to secure access to your server. L
![Azure portal - firewall rules](./media/howto-manage-firewall-portal/4-specify-addresses.png)
-5. Click **Save** on the toolbar to save this server-level firewall rule. Wait for the confirmation that the update to the firewall rules is successful.
+5. Select **Save** on the toolbar to save this server-level firewall rule. Wait for the confirmation that the update to the firewall rules is successful.
![Azure portal - click Save](./media/howto-manage-firewall-portal/5-save-firewall-rule.png) ## Connecting from Azure
-To allow applications from Azure to connect to your Azure Database for MariaDB server, Azure connections must be enabled. For example, to host an Azure Web Apps application, or an application that runs in an Azure VM, or to connect from an Azure Data Factory data management gateway. The resources do not need to be in the same Virtual Network (VNet) or Resource Group for the firewall rule to enable those connections. When an application from Azure attempts to connect to your database server, the firewall verifies that Azure connections are allowed. There are a couple of methods to enable these types of connections. A firewall setting with starting and ending address equal to 0.0.0.0 indicates these connections are allowed. Alternatively, you can set the **Allow access to Azure services** option to **ON** in the portal from the **Connection security** pane and click **Save**. If the connection attempt is not allowed, the request does not reach the Azure Database for MariaDB server.
+
+To allow applications from Azure to connect to your Azure Database for MariaDB server, Azure connections must be enabled. For example, Azure connections are needed to host an Azure Web Apps application, to run an application in an Azure VM, or to connect from an Azure Data Factory data management gateway. The resources do not need to be in the same Virtual Network (VNet) or Resource Group for the firewall rule to enable those connections. When an application from Azure attempts to connect to your database server, the firewall verifies that Azure connections are allowed. There are a couple of methods to enable these types of connections. A firewall setting with starting and ending address equal to 0.0.0.0 indicates these connections are allowed. Alternatively, you can set the **Allow access to Azure services** option to **ON** in the portal from the **Connection security** pane and select **Save**. If the connection attempt is not allowed, the request does not reach the Azure Database for MariaDB server.
> [!IMPORTANT] > This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
+>
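If you prefer scripting this setting, the equivalent 0.0.0.0 rule can be created with the Azure CLI; a sketch, assuming a server named mydemoserver in the resource group myresourcegroup (the rule name is an example):

```azurecli-interactive
# A start and end address of 0.0.0.0 allows connections from Azure services
az mariadb server firewall-rule create \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name AllowAllAzureIps \
    --start-ip-address 0.0.0.0 \
    --end-ip-address 0.0.0.0
```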
## Manage existing firewall rules in the Azure portal+ Repeat the steps to manage the firewall rules.
-* To add the current computer, click **+ Add My IP**. Click **Save** to save the changes.
-* To add additional IP addresses, type in the **RULE NAME**, **START IP**, and **END IP**. Click **Save** to save the changes.
-* To modify an existing rule, click any of the fields in the rule, and then modify. Click **Save** to save the changes.
-* To delete an existing rule, click the ellipsis […], and then click **Delete**. Click **Save** to save the changes.
+* To add the current computer, select **+ Add My IP**. Select **Save** to save the changes.
+* To add additional IP addresses, type in the **RULE NAME**, **START IP**, and **END IP**. Select **Save** to save the changes.
+* To modify an existing rule, select any of the fields in the rule, and then modify. Select **Save** to save the changes.
+* To delete an existing rule, select the ellipsis […], and then select **Delete**. Select **Save** to save the changes.
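These operations can also be scripted; a minimal Azure CLI sketch, assuming a server named mydemoserver in the resource group myresourcegroup and an example rule named OfficeIp:

```azurecli-interactive
# List existing rules, add a rule for a single client address, then remove it
az mariadb server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver --output table
az mariadb server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name OfficeIp --start-ip-address 198.51.100.20 --end-ip-address 198.51.100.20
az mariadb server firewall-rule delete --resource-group myresourcegroup --server-name mydemoserver --name OfficeIp
```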
## Next steps+
+- Similarly, you can script to [Create and manage Azure Database for MariaDB firewall rules using Azure CLI](howto-manage-firewall-cli.md).
+ - Further secure access to your server by [creating and managing Virtual Network service endpoints and rules using the Azure portal](howto-manage-vnet-portal.md).
mariadb Howto Manage Vnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-vnet-cli.md
Title: Manage VNet endpoints - Azure CLI - Azure Database for MariaDB description: This article describes how to create and manage Azure Database for MariaDB VNet service endpoints and rules using Azure CLI command line.+ - ms.devlang: azurecli Previously updated : 01/26/2022 Last updated : 06/24/2022 # Create and manage Azure Database for MariaDB VNet service endpoints using Azure CLI
mariadb Howto Manage Vnet Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-vnet-portal.md
Title: Manage VNet endpoints - Azure portal - Azure Database for MariaDB description: Create and manage Azure Database for MariaDB VNet service endpoints and rules using the Azure portal+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Create and manage Azure Database for MariaDB VNet service endpoints and VNet rules by using the Azure portal
Virtual Network (VNet) services endpoints and rules extend the private address s
## Create a VNet rule and enable service endpoints
-1. On the MariaDB server page, under the Settings heading, click **Connection Security** to open the Connection Security pane for Azure Database for MariaDB.
+1. On the MariaDB server page, under the Settings heading, select **Connection Security** to open the Connection Security pane for Azure Database for MariaDB.
2. Ensure that the Allow access to Azure services control is set to **OFF**. > [!Important] > If you set it to ON, your Azure MariaDB Database server accepts communication from any subnet. Leaving the control set to ON might be excessive access from a security point of view. The Microsoft Azure Virtual Network service endpoint feature, in coordination with the virtual network rule feature of Azure Database for MariaDB, together can reduce your security surface area.
-3. Next, click on **+ Adding existing virtual network**. If you do not have an existing VNet you can click **+ Create new virtual network** to create one. See [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md)
+3. Next, select **+ Adding existing virtual network**. If you do not have an existing VNet, you can select **+ Create new virtual network** to create one. See [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md)
![Azure portal - click Connection security](./media/howto-manage-vnet-portal/1-connection-security.png)
-4. Enter a VNet rule name, select the subscription, Virtual network and Subnet name and then click **Enable**. This automatically enables VNet service endpoints on the subnet using the **Microsoft.SQL** service tag.
+4. Enter a VNet rule name, select the subscription, Virtual network and Subnet name and then select **Enable**. This automatically enables VNet service endpoints on the subnet using the **Microsoft.SQL** service tag.
![Azure portal - configure VNet](./media/howto-manage-vnet-portal/2-configure-vnet.png) The account must have the necessary permissions to create a virtual network and service endpoint. Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network.
-
+ To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles, by default and can be modified by creating custom roles.
-
+ Learn more about [built-in roles](../role-based-access-control/built-in-roles.md) and assigning specific permissions to [custom roles](../role-based-access-control/custom-roles.md).
-
+ VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both the subscriptions have the **Microsoft.Sql** resource provider registered. For more information refer [resource-manager-registration][resource-manager-portal] > [!IMPORTANT] > It is highly recommended to read this article about service endpoint configurations and considerations before configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet services endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for MariaDB, PostgreSQL, and MySQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it configures service endpoint traffic for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL, Azure Database for MariaDB, and Azure Database for MySQL servers on the subnet.
- >
+ >
-5. Once enabled, click **OK** and you will see that VNet service endpoints are enabled along with a VNet rule.
+5. Once enabled, select **OK** and you will see that VNet service endpoints are enabled along with a VNet rule.
![VNet service endpoints enabled and VNet rule created](./media/howto-manage-vnet-portal/3-vnet-service-endpoints-enabled-vnet-rule-created.png) ## Next steps+ - Learn more about [configuring SSL on Azure Database for MariaDB](howto-configure-ssl.md) - Similarly, you can script to [Enable VNet service endpoints and create a VNET rule for Azure Database for MariaDB using Azure CLI](howto-manage-vnet-cli.md). <!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
+[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
mariadb Howto Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-migrate-dump-restore.md
Title: Migrate with dump and restore - Azure Database for MariaDB description: This article explains two common ways to back up and restore databases in your Azure database for MariaDB by using tools such as mysqldump, MySQL Workbench, and phpMyAdmin.+ - Previously updated : 2/27/2020 Last updated : 06/24/2022 # Migrate your MariaDB database to an Azure database for MariaDB by using dump and restore
This article explains two common ways to back up and restore databases in your A
- Dump and restore using phpMyAdmin ## Prerequisites+ Before you begin migrating your database, do the following: - Create an [Azure Database for MariaDB server - Azure portal](quickstart-create-mariadb-server-database-using-azure-portal.md). - Install the [mysqldump](https://mariadb.com/kb/en/library/mysqldump/) command-line utility. - Download and install [MySQL Workbench](https://dev.mysql.com/downloads/workbench/) or another third-party MySQL tool for running dump and restore commands. ## Use common tools
-Use common utilities and tools such as MySQL Workbench or mysqldump to remotely connect and restore data into your Azure database for MariaDB. Use these tools on your client machine with an internet connection to connect to the Azure database for MariaDB. Use an SSL-encrypted connection as a best security practice. For more information, see [Configure SSL connectivity in Azure Database for MariaDB](concepts-ssl-connection-security.md). You don't need to move the dump files to any special cloud location when you migrate data to your Azure database for MariaDB.
+
+Use common utilities and tools such as MySQL Workbench or mysqldump to remotely connect and restore data into your Azure database for MariaDB. Use these tools on your client machine with an internet connection to connect to the Azure database for MariaDB. Use an SSL-encrypted connection as a best security practice. For more information, see [Configure SSL connectivity in Azure Database for MariaDB](concepts-ssl-connection-security.md). You don't need to move the dump files to any special cloud location when you migrate data to your Azure database for MariaDB.
## Common uses for dump and restore
-You can use MySQL utilities such as mysqldump and mysqlpump to dump and load databases into an Azure database for MariaDB server in several common scenarios.
+
+You can use MySQL utilities such as mysqldump and mysqlpump to dump and load databases into an Azure database for MariaDB server in several common scenarios.
- Use database dumps when you're migrating an entire database. This recommendation holds when you're moving a large amount of data, or when you want to minimize service interruption for live sites or applications. - Make sure that all tables in the database use the InnoDB storage engine when you're loading data into your Azure database for MariaDB. Azure Database for MariaDB supports only the InnoDB storage engine, and no other storage engines. If your tables are configured with other storage engines, convert them into the InnoDB engine format before you migrate them to your Azure database for MariaDB.
-
- For example, if you have a WordPress app or a web app that uses MyISAM tables, first convert those tables by migrating them into InnoDB format before you restore them to your Azure database for MariaDB. Use the clause `ENGINE=InnoDB` to set the engine to use for creating a new table, and then transfer the data into the compatible table before you restore it.
+
+ For example, if you have a WordPress app or a web app that uses MyISAM tables, first convert those tables by migrating them into InnoDB format before you restore them to your Azure database for MariaDB. Use the clause `ENGINE=InnoDB` to set the engine to use for creating a new table, and then transfer the data into the compatible table before you restore it.
```sql INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns
You can use MySQL utilities such as mysqldump and mysqlpump to dump and load dat
- To avoid any compatibility issues when you're dumping databases, ensure that you're using the same version of MariaDB on the source and destination systems. For example, if your existing MariaDB server is version 10.2, you should migrate to your Azure database for MariaDB that's configured to run version 10.2. The `mysql_upgrade` command doesn't function in an Azure Database for MariaDB server, and it isn't supported. If you need to upgrade across MariaDB versions, first dump or export your earlier-version database into a later version of MariaDB in your own environment. You can then run `mysql_upgrade` before you try migrating into your Azure database for MariaDB. ## Performance considerations+ To optimize performance when you're dumping large databases, keep in mind the following considerations: - Use the `exclude-triggers` option in mysqldump. Exclude triggers from dump files to avoid having the trigger commands fire during the data restore. - Use the `single-transaction` option to set the transaction isolation mode to REPEATABLE READ and send a START TRANSACTION SQL statement to the server before dumping data. Dumping many tables within a single transaction causes some extra storage to be consumed during the restore. The `single-transaction` option and the `lock-tables` option are mutually exclusive. This is because LOCK TABLES causes any pending transactions to be committed implicitly. To dump large tables, combine the `single-transaction` option with the `quick` option.
To optimize performance when you're dumping large databases, keep in mind the fo
- Use partitioned tables when appropriate. - Load data in parallel. Avoid too much parallelism, which could cause you to hit a resource limit, and monitor resources by using the metrics available in the Azure portal. - Use the `defer-table-indexes` option in mysqlpump when you're dumping databases, so that index creation happens after table data is loaded.-- Copy the backup files to an Azure blob store and perform the restore from there. This approach should be a lot faster than performing the restore across the internet.
+- Copy the backup files to an Azure blob store and perform the restore from there. This approach should be a lot faster than performing the restore across the internet.
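A combined invocation that applies several of the options above might look like the following sketch; `--skip-triggers` is the mysqldump flag for excluding triggers, and the user, database, and file names are placeholders:

```bash
# Consistent snapshot, row-by-row retrieval, and no triggers in the dump file
mysqldump --single-transaction --quick --skip-triggers -u root -p testdb > testdb_backup.sql
```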
## Create a backup file
-To back up an existing MariaDB database on the local on-premises server or in a virtual machine, run the following command by using mysqldump:
+To back up an existing MariaDB database on the local on-premises server or in a virtual machine, run the following command by using mysqldump:
```bash $ mysqldump --opt -u <uname> -p<pass> <dbname> > <backupfile.sql>
The parameters to provide are:
- *\<pass>*: The password for your database (note that there is no space between -p and the password) - *\<dbname>*: The name of your database - *\<backupfile.sql>*: The file name for your database backup -- *\<--opt>*: The mysqldump option
+- *\<--opt>*: The mysqldump option
-For example, to back up a database named *testdb* on your MariaDB server with the user name *testuser* and with no password to a file testdb_backup.sql, use the following command. The command backs up the `testdb` database into a file called `testdb_backup.sql`, which contains all the SQL statements needed to re-create the database.
+For example, to back up a database named *testdb* on your MariaDB server with the user name *root* to a file named testdb_backup.sql, use the following command. The command backs up the `testdb` database into a file called `testdb_backup.sql`, which contains all the SQL statements needed to re-create the database.
```bash $ mysqldump -u root -p testdb > testdb_backup.sql ```
-To select specific tables to back up in your database, list the table names, separated by spaces. For example, to back up only table1 and table2 tables from the *testdb*, follow this example:
+To select specific tables to back up in your database, list the table names, separated by spaces. For example, to back up only the table1 and table2 tables from *testdb*, follow this example:
```bash $ mysqldump -u root -p testdb table1 table2 > testdb_tables_backup.sql ```
-To back up more than one database at once, use the --database switch and list the database names, separated by spaces.
+To back up more than one database at once, use the --databases switch and list the database names, separated by spaces.
```bash $ mysqldump -u root -p --databases testdb1 testdb3 testdb5 > testdb135_backup.sql ``` ## Create a database on the target server+ Create an empty database on the target Azure Database for MariaDB server where you want to migrate the data. Use a tool such as MySQL Workbench to create the database. The database can have the same name as the database that contains the dumped data, or you can create a database with a different name. To get connected, locate the connection information on the **Overview** pane of your Azure database for MariaDB.
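If you prefer the command line to MySQL Workbench for this step, a minimal sketch (server, admin user, and database names are placeholders):

```bash
# Connect to the target server and create an empty database to restore into
mysql -h mydemoserver.mariadb.database.azure.com -u myadmin@mydemoserver -p -e "CREATE DATABASE testdb;"
```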
In MySQL Workbench, add the connection information.
![Screenshot of the MySQL Connections pane in MySQL Workbench.](./media/howto-migrate-dump-restore/2_setup-new-connection.png) ## Restore your MariaDB database+ After you've created the target database, you can use the mysql command or MySQL Workbench to restore the data into the newly created database from the dump file. ```bash
The importing process is similar to the exporting process. Do the following:
1. Select the **Go** button to export the backup, execute the SQL commands, and re-create your database. ## Next steps+ - [Connect applications to your Azure database for MariaDB](./howto-connection-string.md).
mariadb Howto Move Regions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-move-regions-portal.md
Title: Move Azure regions - Azure portal - Azure Database for MariaDB description: Move an Azure Database for MariaDB server from one Azure region to another using a read replica and the Azure portal.+ - Previously updated : 06/29/2020 Last updated : 06/24/2022 #Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region.
Last updated 06/29/2020
There are various scenarios for moving an existing Azure Database for MariaDB server from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning.
-You can use an Azure Database for MariaDB [cross-region read replica](concepts-read-replicas.md#cross-region-replication) to complete the move to another region. To do so, first create a read replica in the target region. Next, stop replication to the read replica server to make it a standalone server that accepts both read and write traffic.
+You can use an Azure Database for MariaDB [cross-region read replica](concepts-read-replicas.md#cross-region-replication) to complete the move to another region. To do so, first create a read replica in the target region. Next, stop replication to the read replica server to make it a standalone server that accepts both read and write traffic.
> [!NOTE]
-> This article focuses on moving your server to a different region. If you want to move your server to a different resource group or subscription, refer to the [move](../azure-resource-manager/management/move-resource-group-and-subscription.md) article.
+> This article focuses on moving your server to a different region. If you want to move your server to a different resource group or subscription, refer to the [move](../azure-resource-manager/management/move-resource-group-and-subscription.md) article.
## Prerequisites
Stopping replication to the replica server, causes it to become a standalone ser
1. Select **Replication** from the menu, under **SETTINGS**. 1. Select the replica server. 1. Select **Stop replication**.
-1. Confirm you want to stop replication by clicking **OK**.
+1. Confirm you want to stop replication by selecting **OK**.
## Clean up source server
You may want to delete the source Azure Database for MariaDB server. To do so, u
## Next steps
-In this tutorial, you moved an Azure Database for MariaDB server from one region to another by using the Azure portal and then cleaned up the unneeded source resources.
+In this tutorial, you moved an Azure Database for MariaDB server from one region to another by using the Azure portal and then cleaned up the unneeded source resources.
- Learn more about [read replicas](concepts-read-replicas.md) - Learn more about [managing read replicas in the Azure portal](howto-read-replicas-portal.md)-- Learn more about [business continuity](concepts-business-continuity.md) options
+- Learn more about [business continuity](concepts-business-continuity.md) options
mariadb Howto Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-read-replicas-cli.md
Title: Manage read replicas - Azure CLI, REST API - Azure Database for MariaDB description: This article describes how to set up and manage read replicas in Azure Database for MariaDB using the Azure CLI and REST API.+ - Previously updated : 6/10/2020 Last updated : 06/24/2022 # How to create and manage read replicas in Azure Database for MariaDB using the Azure CLI and REST API
In this article, you will learn how to create and manage read replicas in the Azure Database for MariaDB service using the Azure CLI and REST API. ## Azure CLI+ You can create and manage read replicas using the Azure CLI. ### Prerequisites - [Install Azure CLI 2.0](/cli/azure/install-azure-cli)-- An [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-portal.md) that will be used as the source server.
+- An [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-portal.md) that will be used as the source server.
> [!IMPORTANT] > The read replica feature is only available for Azure Database for MariaDB servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
The `az mariadb server replica create` command requires the following parameters
| name | mydemoreplicaserver | The name of the new replica server that is created. | | source-server | mydemoserver | The name or ID of the existing source server to replicate from. |
-To create a cross region read replica, use the `--location` parameter.
+To create a cross-region read replica, use the `--location` parameter.
The CLI example below creates the replica in West US.
az mariadb server replica create --name mydemoreplicaserver --source-server myde
``` > [!NOTE]
-> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
+> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
> [!NOTE] > Read replicas are created with the same server configuration as the master. The replica server configuration can be changed after it has been created. It is recommended that the replica server's configuration be kept at values equal to or greater than the source's to ensure the replica is able to keep up with the master. ### List replicas for a source server
-To view all replicas for a given source server, run the following command:
+To view all replicas for a given source server, run the following command:
```azurecli-interactive az mariadb server replica list --server-name mydemoserver --resource-group myresourcegroup
az mariadb server delete --resource-group myresourcegroup --name mydemoserver
``` ## REST API+ You can create and manage read replicas using the [Azure REST API](/rest/api/azure/). ### Create a read replica+ You can create a read replica by using the [create API](/rest/api/mariadb/servers/create): ```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
``` > [!NOTE]
-> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
+> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
If you haven't set the `azure.replication_support` parameter to **REPLICA** on a General Purpose or Memory Optimized source server and restarted the server, you receive an error. Complete those two steps before you create a replica. A replica is created by using the same compute and storage settings as the master. After a replica is created, several settings can be changed independently from the source server: compute generation, vCores, storage, and back-up retention period. The pricing tier can also be changed independently, except to or from the Basic tier. - > [!IMPORTANT] > Before a source server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the master. ### List replicas+ You can view the list of replicas of a source server using the [replica list API](/rest/api/mariadb/replicas/listbyserver): ```http
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
``` ### Stop replication to a replica server+ You can stop replication between a source server and a read replica by using the [update API](/rest/api/mariadb/servers/update). After you stop replication between a source server and a read replica, it can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups
``` ### Delete a source or replica server+ To delete a source or replica server, you use the [delete API](/rest/api/mariadb/servers/delete): When you delete a source server, replication to all read replicas is stopped. The read replicas become standalone servers that now support both reads and writes.
When you delete a source server, replication to all read replicas is stopped. Th
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMariaDB/servers/{serverName}?api-version=2017-12-01 ``` - ## Next steps -- Learn more about [read replicas](concepts-read-replicas.md)
+- Learn more about [read replicas](concepts-read-replicas.md)
mariadb Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-read-replicas-portal.md
Title: Manage read replicas - Azure portal - Azure Database for MariaDB description: This article describes how to set up and manage read replicas in Azure Database for MariaDB using the portal+ - Previously updated : 6/10/2020 Last updated : 06/24/2022 # How to create and manage read replicas in Azure Database for MariaDB using the Azure portal
Once the replica server has been created, it can be viewed from the **Replicatio
To stop replication between a source and a replica server from the Azure portal, use the following steps:
-1. In the Azure portal, select your source Azure Database for MariaDB server.
+1. In the Azure portal, select your source Azure Database for MariaDB server.
2. Select **Replication** from the menu, under **SETTINGS**.
To stop replication between a source and a replica server from the Azure portal,
![Azure Database for MariaDB - Stop replication](./media/howto-read-replica-portal/stop-replication.png)
-5. Confirm you want to stop replication by clicking **OK**.
+5. Confirm you want to stop replication by selecting **OK**.
![Azure Database for MariaDB - Stop replication confirm](./media/howto-read-replica-portal/stop-replication-confirm.png)
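If you prefer scripting, the same operation is available through the Azure CLI; a sketch, assuming a replica named mydemoreplicaserver in the resource group myresourcegroup:

```azurecli-interactive
# Stop replication; the replica becomes a standalone read-write server and can't be made a replica again
az mariadb server replica stop --resource-group myresourcegroup --name mydemoreplicaserver
```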
To delete a read replica server from the Azure portal, use the following steps:
![Azure Database for MariaDB - Delete replica](./media/howto-read-replica-portal/delete-replica.png)
-5. Type the name of the replica and click **Delete** to confirm deletion of the replica.
+5. Type the name of the replica and select **Delete** to confirm deletion of the replica.
![Azure Database for MariaDB - Delete replica confirm](./media/howto-read-replica-portal/delete-replica-confirm.png)
To delete a source server from the Azure portal, use the following steps:
![Azure Database for MariaDB - Delete master](./media/howto-read-replica-portal/delete-master-overview.png)
-3. Type the name of the source server and click **Delete** to confirm deletion of the source server.
+3. Type the name of the source server and select **Delete** to confirm deletion of the source server.
![Azure Database for MariaDB - Delete master confirm](./media/howto-read-replica-portal/delete-master-confirm.png)
mariadb Howto Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-read-replicas-powershell.md
Title: Manage Azure Database for MariaDB read replicas description: Learn how to set up and manage read replicas in Azure Database for MariaDB using PowerShell in the General Purpose or Memory Optimized pricing tiers.+ - Previously updated : 05/06/2022 Last updated : 06/24/2022 - devx-track-azurepowershell - kr2b-contr-experiment
mariadb Howto Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-redirection.md
Title: Connect with redirection - Azure Database for MariaDB description: This article describes how you can configure you application to connect to Azure Database for MariaDB with redirection.+ - Previously updated : 6/8/2020 Last updated : 06/24/2022 # Connect to Azure Database for MariaDB with redirection
Last updated 6/8/2020
This topic explains how to connect an application to your Azure Database for MariaDB server with redirection mode. Redirection aims to reduce network latency between client applications and MariaDB servers by allowing applications to connect directly to backend server nodes. ## Before you begin
-Sign in to the [Azure portal](https://portal.azure.com). Create an Azure Database for MariaDB server with engine version 10.2 or 10.3.
+
+Sign in to the [Azure portal](https://portal.azure.com). Create an Azure Database for MariaDB server with engine version 10.2 or 10.3.
For details, refer to how to create an Azure Database for MariaDB server using the [Azure portal](quickstart-create-mariadb-server-database-using-azure-portal.md) or [Azure CLI](quickstart-create-mariadb-server-database-using-azure-cli.md).
On your Azure Database for MariaDB server, configure the `redirect_enabled` para
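For example, one way to set the parameter is with the Azure CLI; a sketch, assuming a server named mydemoserver in the resource group myresourcegroup (confirm the parameter values supported by your server):

```azurecli-interactive
# Turn on the redirection server parameter
az mariadb server configuration set --resource-group myresourcegroup --server-name mydemoserver --name redirect_enabled --value ON
```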
## PHP
-Support for redirection in PHP applications is available through the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft.
+Support for redirection in PHP applications is available through the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft.
The mysqlnd_azure extension is available to add to PHP applications through PECL and it is highly recommended to install and configure the extension through the officially published [PECL package](https://pecl.php.net/package/mysqlnd_azure).
The mysqlnd_azure extension is available to add to PHP applications through PECL
The redirection behavior is determined by the value of `mysqlnd_azure.enableRedirect`. The table below outlines the behavior of redirection based on the value of this parameter beginning in **version 1.1.0+**.
-If you are using an older version of the mysqlnd_azure extension (version 1.0.0-1.0.3), the redirection behavior is determined by the value of `mysqlnd_azure.enabled`. The valid values are `off` (acts similarly as the behavior outlined in the table below) and `on` (acts like `preferred` in the table below).
+If you are using an older version of the mysqlnd_azure extension (version 1.0.0-1.0.3), the redirection behavior is determined by the value of `mysqlnd_azure.enabled`. The valid values are `off` (acts similarly to the behavior outlined in the table below) and `on` (acts like `preferred` in the table below).
|**mysqlnd_azure.enableRedirect value**| **Behavior**| |-|-|
The subsequent sections of the document will outline how to install the `mysqlnd
### Ubuntu Linux #### Prerequisites + - PHP versions 7.2.15+ and 7.3.2+ - PHP PEAR - php-mysql
The subsequent sections of the document will outline how to install the `mysqlnd
php -i | grep "extension_dir" ```
-3. Change directories to the returned folder and ensure `mysqlnd_azure.so` is located in this folder.
+3. Change directories to the returned folder and ensure `mysqlnd_azure.so` is located in this folder.
-4. Locate the folder for .ini files by running the below:
+4. Locate the folder for .ini files by running the below:
```bash php -i | grep "dir for additional .ini files" ```
-5. Change directories to this returned folder.
+5. Change directories to this returned folder.
6. Create a new .ini file for `mysqlnd_azure`. Make sure its name sorts alphabetically after that of `mysqlnd`, since the modules are loaded according to the name order of the ini files. For example, if the `mysqlnd` .ini is named `10-mysqlnd.ini`, name the `mysqlnd_azure` .ini `20-mysqlnd-azure.ini`.
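   A sketch of creating that file from the shell follows; the extension and setting names mirror the Windows example later in this article and are assumptions for your Linux setup:

   ```bash
   # Create 20-mysqlnd-azure.ini so it loads after the mysqlnd module's ini file
   sudo tee 20-mysqlnd-azure.ini > /dev/null <<'EOF'
   extension=mysqlnd_azure
   mysqlnd_azure.enableRedirect = on
   EOF
   ```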
The subsequent sections of the document will outline how to install the `mysqlnd
### Windows #### Prerequisites + - PHP versions 7.2.15+ and 7.3.2+ - php-mysql - Azure Database for MariaDB server
The subsequent sections of the document will outline how to install the `mysqlnd
php -i | find "extension_dir" ```
-5. Copy the `php_mysqlnd_azure.dll` file into the directory returned in step 4.
+5. Copy the `php_mysqlnd_azure.dll` file into the directory returned in step 4.
6. Locate the PHP folder containing the `php.ini` file using the following command:
The subsequent sections of the document will outline how to install the `mysqlnd
php -i | find "Loaded Configuration File" ```
-7. Modify the `php.ini` file and add the following extra lines to enable redirection.
+7. Modify the `php.ini` file and add the following extra lines to enable redirection.
Under the Dynamic Extensions section: ```cmd extension=mysqlnd_azure ```
-
+ Under the Module Settings section: ```cmd [mysqlnd_azure]
The subsequent sections of the document will outline how to install the `mysqlnd
### Confirm redirection
-You can also confirm redirection is configured with the below sample PHP code. Create a PHP file called `mysqlConnect.php` and paste the below code. Update the server name, username, and password with your own.
-
- ```php
+You can also confirm that redirection is configured by using the following sample PHP code. Create a PHP file called `mysqlConnect.php`, paste in the following code, and update the server name, username, and password with your own.
+
+```php
<?php $host = '<yourservername>.mariadb.database.azure.com'; $username = '<yourusername>@<yourservername>';
$db_name = 'testdb';
$db->close(); } ?>
- ```
+```
## Next steps+ For more information about connection strings, see [Connection Strings](howto-connection-string.md).
mariadb Howto Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-cli.md
Title: Restart server - Azure CLI - Azure Database for MariaDB description: This article describes how you can restart an Azure Database for MariaDB server using the Azure CLI.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Restart Azure Database for MariaDB server using the Azure CLI+ This topic describes how you can restart an Azure Database for MariaDB server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation. The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
The time required to complete a restart depends on the MariaDB recovery process.
To complete this how-to guide: - You need an [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-cli.md).
-
+ [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)] - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. - ## Restart the server Restart the server with the following command:
mariadb Howto Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-portal.md
Title: Restart server - Azure portal - Azure Database for MariaDB description: This article describes how you can restart an Azure Database for MariaDB server using the Azure Portal.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Restart Azure Database for MariaDB server using Azure portal+ This topic describes how you can restart an Azure Database for MariaDB server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation. The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
The server restart will be blocked if the service is busy. For example, the serv
The time required to complete a restart depends on the MariaDB recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. ## Prerequisites+ To complete this how-to guide, you need: - An [Azure Database for MariaDB server](./quickstart-create-mariadb-server-database-using-azure-portal.md)
The following steps restart the MariaDB server:
1. In the Azure portal, select your Azure Database for MariaDB server.
-2. In the toolbar of the server's **Overview** page, click **Restart**.
+2. In the toolbar of the server's **Overview** page, select **Restart**.
![Azure Database for MariaDB - Overview - Restart button](./media/howto-restart-server-portal/2-server.png)
-3. Click **Yes** to confirm restarting the server.
+3. Select **Yes** to confirm restarting the server.
![Azure Database for MariaDB - Restart confirm](./media/howto-restart-server-portal/3-restart-confirm.png)
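The restart can also be scripted with the Azure CLI, as described in the CLI version of this article; a sketch with placeholder names:

```azurecli-interactive
# Restart the server; this causes a short outage while the operation completes
az mariadb server restart --resource-group myresourcegroup --name mydemoserver
```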
The following steps restart the MariaDB server:
## Next steps
-[Quickstart: Create Azure Database for MariaDB server using Azure portal](./quickstart-create-mariadb-server-database-using-azure-portal.md)
+[Quickstart: Create Azure Database for MariaDB server using Azure portal](./quickstart-create-mariadb-server-database-using-azure-portal.md)
mariadb Howto Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-powershell.md
Title: Restart Azure Database for MariaDB server - Azure PowerShell description: Learn how you can restart an Azure Database for MariaDB server using PowerShell. The time required for a restart depends on the MariaDB recovery process.+ - Previously updated : 05/06/2022 Last updated : 06/24/2022 - devx-track-azurepowershell - kr2b-contr-experiment
Restart-AzMariaDbServer -Name mydemoserver -ResourceGroupName myresourcegroup
## Next steps > [!div class="nextstepaction"]
-> [Create an Azure Database for MariaDB server using PowerShell](quickstart-create-mariadb-server-database-using-azure-powershell.md)
+> [Create an Azure Database for MariaDB server using PowerShell](quickstart-create-mariadb-server-database-using-azure-powershell.md)
mariadb Howto Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-dropped-server.md
Title: Restore a deleted Azure Database for MariaDB server description: This article describes how to restore a deleted server in Azure Database for MariaDB using the Azure portal.+ - Previously updated : 4/12/2021 Last updated : 06/24/2022 # Restore a deleted Azure Database for MariaDB server
-When a server is deleted, the database server backup can be retained up to five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. The following recommended steps can be followed to recover a deleted MariaDB server resource within 5 days from the time of server deletion. The recommended steps will work only if the backup for the server is still available and not deleted from the system.
+When a server is deleted, the database server backup can be retained for up to five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. Follow the recommended steps below to recover a deleted MariaDB server resource within five days of the server deletion. The recommended steps work only if the backup for the server is still available and has not been deleted from the system.
## Pre-requisites+ To restore a deleted Azure Database for MariaDB server, you need following: - Azure Subscription name hosting the original server - Location where the server was created ## Steps to restore
-1. Go to the [Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_ActivityLog/ActivityLogBlade) from Monitor blade in Azure portal.
+1. Go to the [Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_ActivityLog/ActivityLogBlade) from the Monitor blade in the Azure portal.
-2. In Activity Log, click on **Add filter** as shown and set following filters for the
+2. In the Activity Log, select **Add filter** as shown, and set the following filters:
- **Subscription** = Your Subscription hosting the deleted server - **Resource Type** = Azure Database for MariaDB servers (Microsoft.DBForMariaDB/servers)
- - **Operation** = Delete MariaDB Server (Microsoft.DBForMariaDB/servers/delete)
-
+ - **Operation** = Delete MariaDB Server (Microsoft.DBForMariaDB/servers/delete)
+ [![Activity log filtered for delete MariaDB server operation](./media/howto-restore-dropped-server/activity-log.png)](./media/howto-restore-dropped-server/activity-log.png#lightbox)
-
- 3. Double Click on the Delete MariaDB Server event and click on the JSON tab and note the "resourceId" and "submissionTimestamp" attributes in JSON output. The resourceId is in the following format: /subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/TargetResourceGroup/providers/Microsoft.DBForMariaDB/servers/deletedserver.
-
- 4. Go to [Create Server REST API Page](/rest/api/mariadb/servers/create) and click on "Try It" tab highlighted in green and login in with your Azure account.
-
- 5. Provide the resourceGroupName, serverName (deleted server name), subscriptionId, derived from resourceId attribute captured in Step 3, while api-version is pre-populated as shown in image.
-
+
+3. Select the Delete MariaDB Server event, select the JSON tab, and note the "resourceId" and "submissionTimestamp" attributes in the JSON output. The resourceId is in the following format: /subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/TargetResourceGroup/providers/Microsoft.DBForMariaDB/servers/deletedserver.
+
+4. Go to the [Create Server REST API page](/rest/api/mariadb/servers/create), select the "Try It" tab highlighted in green, and sign in with your Azure account.
+
+5. Provide the resourceGroupName, serverName (the deleted server name), and subscriptionId derived from the resourceId attribute captured in step 3. The api-version field is pre-populated, as shown in the image.
+ [![Create server using REST API](./media/howto-restore-dropped-server/create-server-from-rest-api.png)](./media/howto-restore-dropped-server/create-server-from-rest-api.png#lightbox)
-
- 6. Scroll below on Request Body section and paste the following:
-
+
+6. Scroll down to the Request Body section and paste the following:
+ ```json { "location": "Dropped Server Location",
To restore a deleted Azure Database for MariaDB server, you need following:
* "submissionTimestamp", and "resourceId" with the values captured in Step 3. * For "restorePointInTime", specify a value of "submissionTimestamp" minus **15 minutes** to ensure the command does not error out.
-8. If you see Response Code 201 or 202, the restore request is successfully submitted.
+8. If you see Response Code 201 or 202, the restore request is successfully submitted.
9. The server creation can take time depending on the database size and compute resources provisioned on the original server. The restore status can be monitored from Activity log by filtering for - **Subscription** = Your Subscription
To restore a deleted Azure Database for MariaDB server, you need following:
- **Operation** = Update MariaDB Server Create ## Next steps+ - If you are trying to restore a server within five days, and still receive an error after accurately following the steps discussed earlier, open a support incident for assistance. If you are trying to restore a deleted server after five days, an error is expected since the backup file cannot be found. Do not open a support ticket in this scenario. The support team cannot provide any assistance if the backup is deleted from the system. - To prevent accidental deletion of servers, we highly recommend using [Resource Locks](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/preventing-the-disaster-of-accidental-deletion-for-your-mysql/ba-p/825222).
mariadb Howto Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-server-cli.md
Title: Backup and restore - Azure CLI - Azure Database for MariaDB description: Learn how to backup and restore a server in Azure Database for MariaDB by using the Azure CLI.+ - ms.devlang: azurecli Previously updated : 3/27/2020 Last updated : 06/24/2022 # How to back up and restore a server in Azure Database for MariaDB using the Azure CLI
The `az mariadb server restore` command requires the following parameters:
When you restore a server to an earlier point in time, a new server is created. The original server and its databases from the specified point in time are copied to the new server.
-The location and pricing tier values for the restored server remain the same as the original server.
+The location and pricing tier values for the restored server remain the same as the original server.
After the restore process finishes, locate the new server and verify that the data is restored as expected. The new server has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page.
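For orientation, a point-in-time restore call generally takes the following shape; the server names, resource group, and UTC timestamp below are placeholders:

```azurecli-interactive
# Restore mydemoserver to a new server named mydemoserver-restored,
# using a restore point specified in UTC (placeholder values).
az mariadb server restore \
    --resource-group myresourcegroup \
    --name mydemoserver-restored \
    --restore-point-in-time "2022-06-24T13:10:00Z" \
    --source-server mydemoserver
```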
The new server created during a restore does not have the VNet service endpoints
## Geo restore
-If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region that Azure Database for MariaDB is available.
+If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region where Azure Database for MariaDB is available.
To create a server using a geo redundant backup, use the Azure CLI `az mariadb server georestore` command.
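A sketch of that geo restore call, with placeholder server names, resource group, and target location:

```azurecli-interactive
# Create a new server in another region from the geo-redundant backup
# of mydemoserver (placeholder values).
az mariadb server georestore \
    --resource-group myresourcegroup \
    --name mydemoserver-georestored \
    --source-server mydemoserver \
    --location westus2
```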
mariadb Howto Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-server-portal.md
Title: Backup and restore - Azure portal - Azure Database for MariaDB description: This article describes how to restore a server in Azure Database for MariaDB using the Azure portal.+ - Previously updated : 6/30/2020 Last updated : 06/24/2022 # How to backup and restore a server in Azure Database for MariaDB using the Azure portal ## Backup happens automatically+ Azure Database for MariaDB servers are backed up periodically to enable Restore features. Using this feature you may restore the server and all its databases to an earlier point-in-time, on a new server. ## Prerequisites+ To complete this how-to guide, you need: - An [Azure Database for MariaDB server and database](quickstart-create-mariadb-server-database-using-azure-portal.md)
The backup retention period can be changed on a server through the following ste
In the screenshot below it has been increased to 35 days. ![Backup retention period increased](./media/howto-restore-server-portal/3-increase-backup-days.png)
-4. Click **OK** to confirm the change.
+4. Select **OK** to confirm the change.
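If you prefer scripting this change, the retention period can typically also be updated with the Azure CLI; the sketch below assumes the `--backup-retention` parameter of `az mariadb server update` and placeholder names:

```azurecli-interactive
# Increase the backup retention period to 35 days (placeholder names;
# assumes --backup-retention is available on az mariadb server update).
az mariadb server update \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --backup-retention 35
```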
-The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the following section.
+The backup retention period governs how far back in time a point-in-time restore can go, since it's based on the backups available. Point-in-time restore is described further in the following section.
## Point-in-time restore+ Azure Database for MariaDB allows you to restore the server back to a point-in-time and into a new copy of the server. You can use this new server to recover your data, or have your client applications point to this new server. For example, if a table was accidentally dropped at noon today, you could restore to the time just before noon and retrieve the missing table and data from that new copy of the server. Point-in-time restore is at the server level, not at the database level. The following steps restore the sample server to a point-in-time:
-1. In the Azure portal, select your Azure Database for MariaDB server.
+1. In the Azure portal, select your Azure Database for MariaDB server.
2. In the toolbar of the server's **Overview** page, select **Restore**.
The following steps restore the sample server to a point-in-time:
- **Restore point**: Select the point-in-time you want to restore to. - **Target server**: Provide a name for the new server. - **Location**: You cannot select the region. By default it is the same as the source server.
- - **Pricing tier**: You cannot change these parameters when doing a point-in-time restore. It is same as the source server.
+ - **Pricing tier**: You cannot change these parameters when doing a point-in-time restore. It is the same as the source server.
-4. Click **OK** to restore the server to restore to a point-in-time.
+4. Select **OK** to restore the server to the selected point-in-time.
5. Once the restore finishes, locate the new server that is created to verify the data was restored as expected.
The new server created during a restore does not have the VNet service endpoints
## Geo restore
-If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region that Azure Database for MariaDB is available.
+If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region where Azure Database for MariaDB is available.
1. Select the **Create a resource** button (+) in the upper-left corner of the portal. Select **Databases** > **Azure Database for MariaDB**. :::image type="content" source="./media/howto-restore-server-portal/2_navigate-to-mariadb.png" alt-text="Navigate to Azure Database for MariaDB.":::
-
-2. Provide the subscription, resource group, and name of the new server.
+
+2. Provide the subscription, resource group, and name of the new server.
3. Select **Backup** as the **Data source**. This action loads a dropdown that provides a list of servers that have geo redundant backups enabled.
-
+ :::image type="content" source="./media/howto-restore-server-portal/3-geo-restore.png" alt-text="Select data source.":::
-
+ > [!NOTE] > When a server is first created it may not be immediately available for geo restore. It may take a few hours for the necessary metadata to be populated. > 4. Select the **Backup** dropdown.
-
+ :::image type="content" source="./media/howto-restore-server-portal/4-geo-restore-backup.png" alt-text="Select backup dropdown."::: 5. Select the source server to restore from.
-
+ :::image type="content" source="./media/howto-restore-server-portal/5-select-backup.png" alt-text="Select backup.":::
-6. The server will default to values for number of **vCores**, **Backup Retention Period**, **Backup Redundancy Option**, **Engine version**, and **Admin credentials**. Select **Continue**.
-
+6. The server will default to values for number of **vCores**, **Backup Retention Period**, **Backup Redundancy Option**, **Engine version**, and **Admin credentials**. Select **Continue**.
+ :::image type="content" source="./media/howto-restore-server-portal/6-accept-backup.png" alt-text="Continue with backup."::: 7. Fill out the rest of the form with your preferences. You can select any **Location**. After selecting the location, you can select **Configure server** to update the **Compute Generation** (if available in the region you have chosen), number of **vCores**, **Backup Retention Period**, and **Backup Redundancy Option**. Changing **Pricing Tier** (Basic, General Purpose, or Memory Optimized) or **Storage** size during restore is not supported.
- :::image type="content" source="./media/howto-restore-server-portal/7-create.png" alt-text="Fill form.":::
+ :::image type="content" source="./media/howto-restore-server-portal/7-create.png" alt-text="Fill form.":::
-8. Select **Review + create** to review your selections.
+8. Select **Review + create** to review your selections.
9. Select **Create** to provision the server. This operation may take a few minutes.
The new server created by geo restore has the same server admin login name and p
The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored. ## Next steps+ - Learn more about the service's [backups](concepts-backup.md) - Learn about [replicas](concepts-read-replicas.md)-- Learn more about [business continuity](concepts-business-continuity.md) options
+- Learn more about [business continuity](concepts-business-continuity.md) options
mariadb Howto Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-server-powershell.md
Title: Backup and restore - Azure PowerShell - Azure Database for MariaDB description: Learn how to backup and restore a server in Azure Database for MariaDB by using Azure PowerShell.+ - ms.devlang: azurepowershell Previously updated : 05/26/2020 Last updated : 06/24/2022 # How to back up and restore an Azure Database for MariaDB server using PowerShell
original server are restored.
## Next steps > [!div class="nextstepaction"]
-> [How to generate an Azure Database for MariaDB connection string with PowerShell](howto-connection-string-powershell.md)
+> [How to generate an Azure Database for MariaDB connection string with PowerShell](howto-connection-string-powershell.md)
mariadb Howto Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-server-parameters.md
Title: Configure server parameters - Azure portal - Azure Database for MariaDB description: This article describes how to configure MariaDB server parameters in Azure Database for MariaDB using the Azure portal.+ - Previously updated : 10/1/2020 Last updated : 06/24/2022 # Configure server parameters in Azure Database for MariaDB using the Azure portal
Azure Database for MariaDB supports configuration of some server parameters. Thi
## Configure server parameters 1. Sign in to the Azure portal, then locate your Azure Database for MariaDB server.
-2. Under the **SETTINGS** section, click **Server parameters** to open the server parameters page for the Azure Database for MariaDB server.
+2. Under the **SETTINGS** section, select **Server parameters** to open the server parameters page for the Azure Database for MariaDB server.
![Azure portal server parameters page](./media/howto-server-parameters/azure-portal-server-parameters.png) 3. Locate any settings you need to adjust. Review the **Description** column to understand the purpose and allowed values. ![Enumerate drop down](./media/howto-server-parameters/3-toggle_parameter.png)
-4. Click **Save** to save your changes.
+4. Select **Save** to save your changes.
![Save or Discard changes](./media/howto-server-parameters/4-save_parameters.png) 5. If you have saved new values for the parameters, you can always revert everything back to the default values by selecting **Reset all to default**. ![Reset all to default](./media/howto-server-parameters/5-reset_parameters.png) ## Setting parameters not listed
-If the server parameter you want to update is not listed in the Azure portal, you can optionally set the parameter at the connection level using `init_connect`. This sets the server parameters for each client connecting to the server.
+If the server parameter you want to update is not listed in the Azure portal, you can optionally set the parameter at the connection level using `init_connect`. This sets the server parameters for each client connecting to the server.
-1. Under the **SETTINGS** section, click **Server parameters** to open the server parameters page for the Azure Database for MariaDB server.
+1. Under the **SETTINGS** section, select **Server parameters** to open the server parameters page for the Azure Database for MariaDB server.
2. Search for `init_connect` 3. Add the server parameters in the format: `SET parameter_name=YOUR_DESIRED_VALUE` in the value column. For example, you can change the character set of your server by setting `init_connect` to `SET character_set_client=utf8;SET character_set_database=utf8mb4;SET character_set_connection=latin1;SET character_set_results=latin1;`
-4. Click **Save** to save your changes.
+4. Select **Save** to save your changes.
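The same parameter value can also be set from the command line with `az mariadb server configuration set`; a minimal sketch with placeholder names and an example value:

```azurecli-interactive
# Set the init_connect server parameter (placeholder names and example value).
az mariadb server configuration set \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name init_connect \
    --value "SET character_set_client=utf8;"
```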
## Working with the time zone parameter
mariadb Howto Tls Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-tls-configurations.md
Title: TLS configuration - Azure portal - Azure Database for MariaDB description: Learn how to set TLS configuration using Azure portal for your Azure Database for MariaDB + - Previously updated : 06/02/2020 Last updated : 06/24/2022 # Configuring TLS settings in Azure Database for MariaDB using Azure portal
Follow these steps to set MariaDB server minimum TLS version:
1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MariaDB server.
-1. On the MariaDB server page, under **Settings**, click **Connection security** to open the connection security configuration page.
+1. On the MariaDB server page, under **Settings**, select **Connection security** to open the connection security configuration page.
1. In **Minimum TLS version**, select **1.2** to deny connections with TLS version less than TLS 1.2 for your MariaDB server. ![Azure Database for MariaDB TLS configuration](./media/howto-tls-configurations/tls-configurations.png)
-1. Click **Save** to save the changes.
+1. Select **Save** to save the changes.
1. A notification will confirm that connection security setting was successfully enabled.
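If you manage servers from scripts, the minimum TLS version can typically be set with the Azure CLI as well; the sketch below assumes the `--minimal-tls-version` parameter of `az mariadb server update` and placeholder names:

```azurecli-interactive
# Require TLS 1.2 or later for client connections (placeholder names;
# assumes --minimal-tls-version is available on az mariadb server update).
az mariadb server update \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --minimal-tls-version TLS1_2
```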
Follow these steps to set MariaDB server minimum TLS version:
## Next steps
-Learn about [how to create alerts on metrics](howto-alert-metric.md)
+Learn about [how to create alerts on metrics](howto-alert-metric.md)
mariadb Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-troubleshoot-common-connection-issues.md
Title: Troubleshoot connection issues - Azure Database for MariaDB description: Learn how to troubleshoot connection issues to Azure Database for MariaDB, including transient errors requiring retries, firewall issues, and outages.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Troubleshoot connection issues to Azure Database for MariaDB
mariadb Howto Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-troubleshoot-query-performance.md
Title: Troubleshoot query performance - Azure Database for MariaDB description: Learn how to use EXPLAIN to troubleshoot query performance in Azure Database for MariaDB.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # How to use EXPLAIN to profile query performance in Azure Database for MariaDB+ **EXPLAIN** is a handy tool to optimize queries. The EXPLAIN statement can be used to get information about how SQL statements are executed. The following output shows an example of the execution of an EXPLAIN statement. ```sql
possible_keys: id
``` The new EXPLAIN shows that MariaDB now uses an index to limit the number of rows to 1, which in turn dramatically shortened the search time.
- 
+ ## Covering index+ A covering index includes all of a query's columns in the index itself, to reduce value retrieval from the data tables. Here's an illustration using the following **GROUP BY** statement.
- 
+ ```sql mysql> EXPLAIN SELECT MAX(c1), c2 FROM tb1 WHERE c2 LIKE '%100' GROUP BY c1\G *************************** 1. row ***************************
possible_keys: NULL
``` As can be seen from the output, MariaDB does not use any indexes because no proper indexes are available. It also shows *Using temporary; Using file sort*, which means MariaDB creates a temporary table to satisfy the **GROUP BY** clause.
- 
+ Creating an index on column **c2** alone makes no difference, and MariaDB still needs to create a temporary table: ```sql 
possible_keys: covered
Extra: Using where; Using index ```
-As the above EXPLAIN shows, MariaDB now uses the covered index and avoid creating a temporary table.
+As the above EXPLAIN shows, MariaDB now uses the covered index and avoids creating a temporary table.
## Combined index+ A combined index consists of values from multiple columns and can be considered an array of rows that are sorted by concatenating values of the indexed columns. This method can be useful in a **GROUP BY** statement.
possible_keys: NULL
``` The EXPLAIN now shows that MariaDB is able to use the combined index to avoid additional sorting since the index is already sorted.
- 
+ ## Conclusion
- 
Using EXPLAIN and different types of indexes can increase performance significantly. Having an index on the table does not necessarily mean MariaDB would be able to use it for your queries. Always validate your assumptions using EXPLAIN and optimize your queries using indexes. ## Next steps-- To find peer answers to your most concerned questions or post a new question/answer, visit [Microsoft Q&A question page](/answers/topics/azure-database-mariadb.html) or [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mariadb).+
+- To find peer answers to your questions, or to post a new question or answer, visit the [Microsoft Q&A question page](/answers/topics/azure-database-mariadb.html) or [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mariadb).
mariadb Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/overview.md
Title: Overview - Azure Database for MariaDB description: Learn about the Azure Database for MariaDB service, a relational database service in the Microsoft cloud based on the MariaDB community edition.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # What is Azure Database for MariaDB?
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Title: Built-in policy definitions for Azure Database for MariaDB description: Lists Azure Policy built-in policy definitions for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 05/11/2022+ - Last updated : 06/24/2022 # Azure Policy built-in definitions for Azure Database for MariaDB
mariadb Quickstart Create Mariadb Server Database Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-arm-template.md
Title: 'Quickstart: Create an Azure DB for MariaDB - ARM template' description: In this Quickstart article, learn how to create an Azure Database for MariaDB server by using an Azure Resource Manager template.+ Previously updated : 05/14/2020 - Last updated : 06/24/2022 # Quickstart: Use an ARM template to create an Azure Database for MariaDB server
mariadb Quickstart Create Mariadb Server Database Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-bicep.md
Title: 'Quickstart: Create an Azure DB for MariaDB - Bicep'
description: In this Quickstart article, learn how to create an Azure Database for MariaDB server using Bicep. Previously updated : 04/28/2022- + Last updated : 06/24/2022 # Quickstart: Use Bicep to create an Azure Database for MariaDB server
mariadb Quickstart Create Mariadb Server Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-using-azure-cli.md
Title: 'Quickstart: Create a server - Azure CLI - Azure Database for MariaDB' description: This quickstart describes how to use the Azure CLI to create an Azure Database for MariaDB server in an Azure resource group.+ - ms.devlang: azurecli Previously updated : 3/18/2020 Last updated : 06/24/2022
To connect to the server by using the mysql command-line tool:
## Clean up resources
-If you don't need the resources that you used in this quickstart for another quickstart or tutorial, you can delete them by running the following command:
+If you don't need the resources that you used in this quickstart for another quickstart or tutorial, you can delete them by running the following command:
```azurecli-interactive az group delete --name myresourcegroup
mariadb Quickstart Create Mariadb Server Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-using-azure-portal.md
Title: 'Quickstart: Create a server - Azure portal - Azure Database for MariaDB'
description: This article shows you how to use the Azure portal to quickly create a sample Azure Database for MariaDB server in about five minutes. Previously updated : 3/19/2020 Last updated : 06/24/2022
# Quickstart: Create an Azure Database for MariaDB server by using the Azure portal
-Azure Database for MariaDB is a managed service you can use to run, manage, and scale highly available MariaDB databases in the cloud. This quickstart shows you how to create an Azure Database for MariaDB server in about five minutes by using the Azure portal.
+Azure Database for MariaDB is a managed service you can use to run, manage, and scale highly available MariaDB databases in the cloud. This quickstart shows you how to create an Azure Database for MariaDB server in about five minutes by using the Azure portal.
If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
To create an Azure Database for MariaDB server:
Location | *the region closest to your users*| Choose the location that is closest to your users or to your other Azure applications. Version | *the latest version*| The latest version (unless you have specific requirements to use a different version). Pricing tier | See description. | The compute, storage, and backup configurations for your new server. Select **Pricing tier** > **General Purpose**. Keep the default values for the following settings:<br><ul><li>**Compute Generation** (Gen 5)</li><li>**vCore** (4 vCores)</li><li>**Storage** (100 GB)</li><li>**Backup Retention Period** (7 days)</li></ul><br>To enable your server backups in geo-redundant storage, for **Backup Redundancy Options**, select **Geographically Redundant**. <br><br>To save this pricing tier selection, select **OK**. The next screenshot captures these selections.
-
+ > [!NOTE] > Consider using the Basic pricing tier if light compute and I/O are adequate for your workload. Note that servers created in the Basic pricing tier cannot later be scaled to General Purpose or Memory Optimized. See the [pricing page](https://azure.microsoft.com/pricing/details/mariadb/) for more information.
By default, the following databases are created under your server: **information
## <a name="configure-firewall-rule"></a>Configure a server-level firewall rule
-The Azure Database for MariaDB service creates a firewall at the server level. The firewall prevents external applications and tools from connecting to the server or to any databases on the server unless a firewall rule is created to open the firewall for specific IP addresses.
+The Azure Database for MariaDB service creates a firewall at the server level. The firewall prevents external applications and tools from connecting to the server or to any databases on the server unless a firewall rule is created to open the firewall for specific IP addresses.
To create a server-level firewall rule:
First, we'll use the [mysql](https://dev.mysql.com/doc/refman/5.7/en/mysql.html)
Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 65505 Server version: 5.6.39.0 MariaDB Server
-
+ Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
-
+ Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
-
+ mysql> ```
-
+ > [!TIP] > If the firewall isn't configured to allow the IP address of Azure Cloud Shell, the following error occurs: >
First, we'll use the [mysql](https://dev.mysql.com/doc/refman/5.7/en/mysql.html)
```sql CREATE DATABASE quickstartdb; ```
- The command might take a few minutes to finish.
+ The command might take a few minutes to finish.
- You can create one or more databases on an Azure Database for MariaDB server. You can create a single database per server to utilize all resources, or you can create multiple databases to share the resources. There's no limit on the number of databases that you can create, but multiple databases share the same server resources.
+ You can create one or more databases on an Azure Database for MariaDB server. You can create a single database per server to utilize all resources, or you can create multiple databases to share the resources. There's no limit on the number of databases that you can create, but multiple databases share the same server resources.
6. To list the databases, at the `mysql>` prompt, enter the following command:
To connect to the server by using MySQL Workbench:
Username | *server admin login name* | The server admin sign-in information that you used to create the Azure Database for MariaDB server. Our example user name is **myadmin\@mydemoserver**. If you don't remember the user name, complete the steps earlier in this article to get the connection information. The format is *username\@servername*. Password | *your password* | To save the password, select **Store in Vault**. |
-4. To check that all parameters are configured correctly, select **Test Connection**. Then, select **OK** to save the connection.
+4. To check that all parameters are configured correctly, select **Test Connection**. Then, select **OK** to save the connection.
> [!NOTE] > SSL is enforced by default on your server. It requires additional configuration to connect successfully. For more information, see [Configure SSL connectivity in your application to securely connect to Azure Database for MariaDB](./howto-configure-ssl.md). To disable SSL for this quickstart, on the server overview page in the Azure portal, select **Connection security** in the menu. For **Enforce SSL connection**, select **Disabled**.
mariadb Quickstart Create Mariadb Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-using-azure-powershell.md
Title: 'Quickstart: Create a server - Azure PowerShell - Azure Database for Mari
description: This quickstart describes how to use PowerShell to create an Azure Database for MariaDB server in an Azure resource group. Previously updated : 05/26/2020 Last updated : 06/24/2022 ms.devlang: azurepowershell
For additional commands, see [MySQL 5.7 Reference Manual - Chapter 4.5.1](https:
| Username | myadmin@mydemoserver | The server admin login you previously noted | | Password | ************* | Use the admin account password you configured earlier |
-1. To test if the parameters are configured correctly, click the **Test Connection** button.
+1. To test if the parameters are configured correctly, select the **Test Connection** button.
1. Select the connection to connect to the server.
mariadb Reference Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/reference-stored-procedures.md
Title: Management stored procedures - Azure Database for MariaDB description: Learn which stored procedures in Azure Database for MariaDB are useful to help you configure data-in replication, set the timezone, and kill queries.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Azure Database for MariaDB management stored procedures
-Stored procedures are available on Azure Database for MariaDB servers to help manage your MariaDB server. This includes managing your server's connections, queries, and setting up Data-in Replication.
+Stored procedures are available on Azure Database for MariaDB servers to help manage your MariaDB server. This includes managing your server's connections and queries, and setting up Data-in Replication.
## Data-in Replication stored procedures
The following stored procedures are available in Azure Database for MariaDB to m
|*mysql.az_load_timezone*|N/A|N/A|Loads time zone tables to allow the `time_zone` parameter to be set to named values (ex. "US/Pacific").| ## Next steps+ - Learn how to set up [Data-in Replication](howto-data-in-replication.md)-- Learn how to use the [time zone tables](howto-server-parameters.md#working-with-the-time-zone-parameter)
+- Learn how to use the [time zone tables](howto-server-parameters.md#working-with-the-time-zone-parameter)
mariadb Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/sample-scripts-azure-cli.md
Title: Azure CLI samples - Azure Database for MariaDB | Microsoft Docs description: This article lists the Azure CLI code samples available for interacting with Azure Database for MariaDB.+ - ms.devlang: azurecli Previously updated : 01/11/2022 Last updated : 06/24/2022 Keywords: azure cli samples, azure cli code samples, azure cli script samples # Azure CLI samples for Azure Database for MariaDB
mariadb Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-change-server-configuration.md
Title: CLI script - Change server parameters - Azure Database for MariaDB description: This sample CLI script lists all available server configurations and updates of an Azure Database for MariaDB.+ - ms.devlang: azurecli
mariadb Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-create-server-and-firewall-rule.md
Title: CLI script - Create server - Azure Database for MariaDB description: This sample CLI script creates an Azure Database for MariaDB server and configures a server-level firewall rule.+ - ms.devlang: azurecli
mariadb Sample Create Server With Vnet Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-create-server-with-vnet-rule.md
Title: CLI script - Create server with vNet rule - Azure Database for MariaDB description: This sample CLI script creates an Azure Database for MariaDB server with a service endpoint on a virtual network and configures a vNet rule.+ - ms.devlang: azurecli
mariadb Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-point-in-time-restore.md
Title: CLI script - Restore server - Azure Database for MariaDB description: This sample Azure CLI script shows how to restore an Azure Database for MariaDB server and its databases to a previous point in time.+ - ms.devlang: azurecli
mariadb Sample Scale Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-scale-server.md
Title: CLI script - Scale server - Azure Database for MariaDB description: This sample CLI script scales Azure Database for MariaDB server to a different performance level after querying the metrics.+ - ms.devlang: azurecli
mariadb Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-server-logs.md
Title: CLI script - Download slow query logs - Azure Database for MariaDB description: This sample Azure CLI script shows how to enable and download the slow query logs of an Azure Database for MariaDB server.+ - ms.devlang: azurecli
mariadb Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MariaDB description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 06/16/2022 Last updated : 06/24/2022 + - # Azure Policy Regulatory Compliance controls for Azure Database for MariaDB
mariadb Select Right Deployment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/select-right-deployment-type.md
Title: Selecting the right deployment type - Azure Database for MariaDB description: This article describes what factors to consider before you deploy Azure Database for MariaDB as either infrastructure as a service (IaaS) or platform as a service (PaaS).+ - Previously updated : 3/18/2020 Last updated : 06/24/2022 # Choose the right MariaDB Server option in Azure
mariadb Tutorial Design Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/tutorial-design-database-cli.md
Title: 'Tutorial: Design an Azure Database for MariaDB - Azure CLI' description: This tutorial explains how to create and manage Azure Database for MariaDB server and database using Azure CLI from the command line.+ - ms.devlang: azurecli Previously updated : 3/18/2020 Last updated : 06/24/2022
If you don't have an Azure subscription, create a [free Azure account](https://a
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
If you have multiple subscriptions, choose the appropriate subscription in which the resource exists or is billed for. Select a specific subscription ID under your account using [az account set](/cli/azure/account#az-account-set) command. ```azurecli-interactive
az account set --subscription 00000000-0000-0000-0000-000000000000
``` ## Create a resource group+ Create an [Azure resource group](../azure-resource-manager/management/overview.md) with [az group create](/cli/azure/group#az-group-create) command. A resource group is a logical container into which Azure resources are deployed and managed as a group. The following example creates a resource group named `myresourcegroup` in the `westus` location.
az group create --name myresourcegroup --location westus
``` ## Create an Azure Database for MariaDB server+ Create an Azure Database for MariaDB server with the `az mariadb server create` command. A server can manage multiple databases. Typically, a separate database is used for each project or for each user. The following example creates an Azure Database for MariaDB server located in `westus` in the resource group `myresourcegroup` with name `mydemoserver`. The server has an administrator log in named `myadmin`. It is a General Purpose, Gen 5 server with 2 vCores. Substitute the `<server_admin_password>` with your own value.
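A sketch of that create call, using the placeholder values described above; substitute your own password:

```azurecli-interactive
# Create a General Purpose, Gen 5, 2 vCore MariaDB server (placeholder values).
az mariadb server create \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --location westus \
    --admin-user myadmin \
    --admin-password <server_admin_password> \
    --sku-name GP_Gen5_2
```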
Please see the [pricing tiers](./concepts-pricing-tiers.md) documentation to und
> [!IMPORTANT] > The server admin login and password that you specify here are required to log in to the server and its databases later in this quickstart. Remember or record this information for later use. - ## Configure firewall rule+ Create an Azure Database for MariaDB server-level firewall rule with the `az mariadb server firewall-rule create` command. A server-level firewall rule allows an external application, such as **mysql** command-line tool or MySQL Workbench to connect to your server through the Azure MariaDB service firewall. The following example creates a firewall rule called `AllowMyIP` that allows connections from a specific IP address, 192.168.0.1. Substitute in the IP address or range of IP addresses that correspond to where you'll be connecting from.
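A sketch of that firewall rule call, using the placeholder values described above:

```azurecli-interactive
# Allow connections from a single IP address (placeholder values).
az mariadb server firewall-rule create \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name AllowMyIP \
    --start-ip-address 192.168.0.1 \
    --end-ip-address 192.168.0.1
```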
The result is in JSON format. Make a note of the **fullyQualifiedDomainName** an
"location": "westus", "name": "mydemoserver", "resourceGroup": "myresourcegroup",
- "sku": {
+"sku": {
"capacity": 2, "family": "Gen5", "name": "GP_Gen5_2",
The result is in JSON format. Make a note of the **fullyQualifiedDomainName** an
``` ## Connect to the server using mysql+ Use the [mysql command-line tool](https://dev.mysql.com/doc/refman/5.7/en/mysql.html) to establish a connection to your Azure Database for MariaDB server. In this example, the command is: ```cmd mysql -h mydemoserver.database.windows.net -u myadmin@mydemoserver -p ``` ## Create a blank database+ Once you're connected to the server, create a blank database. ```sql mysql> CREATE DATABASE mysampledb;
mysql> USE mysampledb;
``` ## Create tables in the database+ Now that you know how to connect to the Azure Database for MariaDB database, complete some basic tasks. First, create a table and load it with some data. Let's create a table that stores inventory information.
CREATE TABLE inventory (
``` ## Load data into the tables+ Now that you have a table, insert some data into it. At the open command prompt window, run the following query to insert some rows of data. ```sql INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
Now you have two rows of sample data into the table you created earlier. ## Query and update the data in the tables+ Execute the following query to retrieve information from the database table. ```sql SELECT * FROM inventory;
SELECT * FROM inventory;
``` ## Restore a database to a previous point in time+ Imagine you have accidentally deleted this table. This is something you cannot easily recover from. Azure Database for MariaDB allows you to go back to any point in time within the last 35 days and restore that point in time to a new server. You can use this new server to recover your deleted data. The following steps restore the sample server to a point before the table was added. For the restore, you need the following information:
Restoring a server to a point-in-time creates a new server, copied as the origin
The command is synchronous, and will return after the server is restored. Once the restore finishes, locate the new server that was created. Verify the data was restored as expected. ## Next steps+ In this tutorial you learned to: > [!div class="checklist"] > * Create an Azure Database for MariaDB server
In this tutorial you learned to:
> * Load sample data > * Query data > * Update data
-> * Restore data
+> * Restore data
mariadb Tutorial Design Database Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/tutorial-design-database-using-portal.md
Title: 'Tutorial: Design an Azure Database for MariaDB - Azure portal' description: This tutorial explains how to create and manage an Azure Database for MariaDB server and database by using the Azure portal.+ - Previously updated : 3/18/2020 Last updated : 06/24/2022
You create an Azure Database for MariaDB server with a defined set of [compute a
Location | *the region closest to your users*| Select the location that is closest to your users or to your other Azure applications. Version | *the latest version*| The latest version (unless you have specific requirements for using a different version). Pricing tier | See description. | The compute, storage, and backup configurations for your new server. Select **Pricing tier** > **General Purpose**. Keep the default values for the following settings:<br><ul><li>**Compute Generation** (Gen 5)</li><li>**vCore** (4 vCores)</li><li>**Storage** (100 GB)</li><li>**Backup Retention Period** (7 days)</li></ul><br>To enable your server backups in geo-redundant storage, for **Backup Redundancy Options**, select **Geographically Redundant**. <br><br>To save this pricing tier selection, select **OK**. The next screenshot captures these selections.
-
+ ![Pricing tier](./media/tutorial-design-database-using-portal/3-pricing-tier.png) > [!TIP] > With **auto-growth** enabled your server increases storage when you are approaching the allocated limit, without impacting your workload.
-4. Click **Review + create**. You can click on the **Notifications** button on the toolbar to monitor the deployment process. Deployment can take up to 20 minutes.
+4. Select **Review + create**. You can select the **Notifications** button on the toolbar to monitor the deployment process. Deployment can take up to 20 minutes.
## Configure the firewall
In our example, the server name is **mydemoserver.mariadb.database.azure.com** a
## Connect to the server by using mysql
-Use the [mysql command-line tool](https://dev.mysql.com/doc/refman/5.7/en/mysql.html) to establish a connection to your Azure Database for MariaDB server. You can run the mysql command-line tool from Azure Cloud Shell in the browser or from your computer by using the mysql tools installed locally. To open Azure Cloud Shell, select the **Try It** button on a code block in this article or go to the Azure portal and click the **>_** icon in the top right toolbar.
+Use the [mysql command-line tool](https://dev.mysql.com/doc/refman/5.7/en/mysql.html) to establish a connection to your Azure Database for MariaDB server. You can run the mysql command-line tool from Azure Cloud Shell in the browser or from your computer by using the mysql tools installed locally. To open Azure Cloud Shell, select the **Try It** button on a code block in this article or go to the Azure portal and select the **>_** icon in the top right toolbar.
Enter the command to connect:
Imagine that you accidentally deleted an important database table and can't reco
2. On the **Restore** page, enter or select the following information: ![Restore form](./media/tutorial-design-database-using-portal/2-restore-form.png)
-
+ - **Restore point**: Select a point in time that you want to restore to, in the timeframe listed. Make sure you convert your local time zone to UTC. - **Restore to new server**: Enter a new server name to restore to. - **Location**: The region is same as the source server and can't be changed. - **Pricing tier**: The pricing tier is the same as the source server and can't be changed.
-
-3. Select **OK** to restore the server to a point in time [restore to a point in time](./howto-restore-server-portal.md) before the table was deleted. Restoring a server creates a new copy of the server at the point in time that you selected.
+
+3. Select **OK** to [restore the server to a point in time](./howto-restore-server-portal.md) before the table was deleted. Restoring a server creates a new copy of the server at the point in time that you selected.
## Next steps+ In this tutorial, you used the Azure portal to learn how to:
mariadb Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/tutorial-design-database-using-powershell.md
Title: 'Tutorial: Design a server - Azure PowerShell - Azure Database for MariaDB' description: This tutorial explains how to create and manage Azure Database for MariaDB server and database using PowerShell.+ - ms.devlang: azurepowershell Previously updated : 05/26/2020 Last updated : 06/24/2022
original server are restored.
## Next steps > [!div class="nextstepaction"]
-> [How to back up and restore an Azure Database for MariaDB server using PowerShell](howto-restore-server-powershell.md)
+> [How to back up and restore an Azure Database for MariaDB server using PowerShell](howto-restore-server-powershell.md)
postgresql Howto Ingest Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-ingest-azure-stream-analytics.md
+
+ Title: Real-time data ingestion with Azure Stream Analytics - Hyperscale (Citus) - Azure DB for PostgreSQL
+description: How to transform and ingest streaming data
+++++ Last updated : 06/23/2022++
+# How to ingest data using Azure Stream Analytics
+
+[Azure Stream
+Analytics](https://azure.microsoft.com/services/stream-analytics/#features)
+(ASA) is a real-time analytics and event-processing engine that is designed to
+process high volumes of fast streaming data from devices, sensors, and web
+sites. It's also available on the Azure IoT Edge runtime, enabling data
+processing on IoT devices.
++
+Hyperscale (Citus) shines at real-time workloads such as
+[IoT](howto-build-scalable-apps-model-high-throughput.md). For these workloads,
+Azure Stream Analytics (ASA) can act as a no-code, performant and scalable
+alternative to pre-process and stream data from Event Hubs, IoT Hub and Azure
+Blob Storage into Hyperscale (Citus).
+
+## Steps to set up ASA with Hyperscale (Citus)
+
+> [!NOTE]
+>
+> This article uses [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md)
+> as an example datasource, but the technique is applicable to any other source
+> supported by ASA. Also, the demonstration data shown below comes from the
+> [Azure IoT Device Telemetry
+> Simulator](https://github.com/Azure-Samples/Iot-Telemetry-Simulator). This
+> article doesn't cover setting up the simulator.
+
+1. Open the **Azure portal** and select **Create a resource** in the upper left-hand corner.
+1. Select **Analytics** > **Stream Analytics job** from the results list.
+1. Fill out the Stream Analytics job page with the following information:
+ * **Job name** - Name to identify your Stream Analytics job.
+ * **Subscription** - Select the Azure subscription that you want to use for this job.
+ * **Resource group** - Select the same resource group as your IoT Hub.
+ * **Location** - Select geographic location where you can host your Stream Analytics job. Use the location that's closest to your users for better performance and to reduce the data transfer cost.
+ * **Streaming units** - Streaming units represent the computing resources that are required to execute a job.
+ * **Hosting environment** - **Cloud** allows you to deploy to Azure Cloud, and **Edge** allows you to deploy to an IoT Edge device.
+1. Select **Create**. You should see a **Deployment in progress...** notification displayed in the top right of your browser window.
+
+ :::image type="content" source="../media/howto-hyperscale-ingestion/azure-stream-analytics-02-create.png" alt-text="Create Azure Stream Analytics form." border="true":::
+
+1. Configure job input.
+
+ :::image type="content" source="../media/howto-hyperscale-ingestion/azure-stream-analytics-03-input.png" alt-text="Configure job input in Azure Stream Analytics." border="true":::
+
+ 1. Once the resource deployment is complete, navigate to your Stream Analytics
+ job. Select **Inputs** > **Add Stream input** > **IoT Hub**.
+
+ 1. Fill out the IoT Hub page with the following values:
+ * **Input alias** - Name to identify the job's input.
+ * **Subscription** - Select the Azure subscription that has the IOT Hub account you created.
+ * **IoT Hub** - Select the name of the IoT Hub you have already created.
+ * Leave other options as default values
+ 1. Select **Save** to save the settings.
+ 1. Once the input stream is added, you can also verify/download the dataset flowing in.
+ Below is the data for a sample event in our use case:
+
+ ```json
+ {
+ "deviceId": "sim000001",
+ "time": "2022-04-25T13:49:11.6892185Z",
+ "counter": 1,
+ "EventProcessedUtcTime": "2022-04-25T13:49:41.4791613Z",
+ "PartitionId": 3,
+ "EventEnqueuedUtcTime": "2022-04-25T13:49:12.1820000Z",
+ "IoTHub": {
+ "MessageId": null,
+ "CorrelationId": "990407b8-4332-4cb6-a8f4-d47f304397d8",
+ "ConnectionDeviceId": "sim000001",
+ "ConnectionDeviceGenerationId": "637842405470327268",
+ "EnqueuedTime": "2022-04-25T13:49:11.7060000Z"
+ }
+ }
+ ```
+
+1. Configure Job Output.
+
+ :::image type="content" source="../media/howto-hyperscale-ingestion/azure-stream-analytics-04-output.png" alt-text="Configure job output in Azure Stream Analytics." border="true":::
+
+ 1. Navigate to the Stream Analytics job that you created earlier.
+ 1. Select **Outputs** > **Add** > **Azure PostgreSQL**.
+ 1. Fill out the **Azure PostgreSQL** page with the following values:
+ * **Output alias** - Name to identify the job's output.
+ * Select **"Provide PostgreSQL database settings manually"** and enter the DB server connection details like server FQDN, database, table name, username, and password.
+ * For our example dataset, we chose the table name `device_data`.
+ 1. Select **Save** to save the settings.
+
+ > [!NOTE]
+ > The **Test Connection** feature for Hyperscale (Citus) is currently not
+ > supported and might throw an error, even when the connection works fine.
+
+1. Define transformation query.
+
+ :::image type="content" source="../media/howto-hyperscale-ingestion/azure-stream-analytics-05-transformation-query.png" alt-text="Transformation query in Azure Stream Analytics." border="true":::
+
+ 1. Navigate to the Stream Analytics job that you created earlier.
+ 1. For this tutorial, we'll be ingesting only the alternate events from IoT Hub into Hyperscale (Citus) to reduce the overall data size.
+
+ ```sql
+ select
+ counter,
+ iothub.connectiondeviceid,
+ iothub.correlationid,
+ iothub.connectiondevicegenerationid,
+ iothub.enqueuedtime
+ from
+ [src-iot-hub]
+ where counter%2 = 0;
+ ```
+
+ 1. Select **Save Query**
+
+ > [!NOTE]
+ > We are using the query to not only sample the data, but also extract the
+ > desired attributes from the data stream. The custom query option with
+ > stream analytics is helpful in pre-processing/transforming the data
+ > before it gets ingested into the DB.
+
+1. Start the Stream Analytics job and verify output.
+
+ 1. Return to the job overview page and select Start.
+ 1. Under **Start job**, select **Now**, for the Job output start time field. Then, select **Start** to start your job.
+ 1. After a few minutes, you can query the Hyperscale (Citus) database to verify that the data loaded. The job takes some time to start the first time, but once triggered it continues to run as data arrives.
+
+ ```
+ citus=> SELECT * FROM public.device_data LIMIT 10;
+
+ counter | connectiondeviceid | correlationid | connectiondevicegenerationid | enqueuedtime
+ ---------+--------------------+--------------------------------------+------------------------------+------------------------------
+ 2 | sim000001 | 7745c600-5663-44bc-a70b-3e249f6fc302 | 637842405470327268 | 2022-05-25T18:24:03.4600000Z
+ 4 | sim000001 | 389abfde-5bec-445c-a387-18c0ed7af227 | 637842405470327268 | 2022-05-25T18:24:05.4600000Z
+ 6 | sim000001 | 3932ce3a-4616-470d-967f-903c45f71d0f | 637842405470327268 | 2022-05-25T18:24:07.4600000Z
+ 8 | sim000001 | 4bd8ecb0-7ee1-4238-b034-4e03cb50f11a | 637842405470327268 | 2022-05-25T18:24:09.4600000Z
+ 10 | sim000001 | 26cebc68-934e-4e26-80db-e07ade3775c0 | 637842405470327268 | 2022-05-25T18:24:11.4600000Z
+ 12 | sim000001 | 067af85c-a01c-4da0-b208-e4d31a24a9db | 637842405470327268 | 2022-05-25T18:24:13.4600000Z
+ 14 | sim000001 | 740e5002-4bb9-4547-8796-9d130f73532d | 637842405470327268 | 2022-05-25T18:24:15.4600000Z
+ 16 | sim000001 | 343ed04f-0cc0-4189-b04a-68e300637f0e | 637842405470327268 | 2022-05-25T18:24:17.4610000Z
+ 18 | sim000001 | 54157941-2405-407d-9da6-f142fc8825bb | 637842405470327268 | 2022-05-25T18:24:19.4610000Z
+ 20 | sim000001 | 219488e5-c48a-4f04-93f6-12c11ed00a30 | 637842405470327268 | 2022-05-25T18:24:21.4610000Z
+ (10 rows)
+ ```
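+
+The Stream Analytics output writes to the `device_data` table referenced in the output configuration, so that table needs to exist in the Hyperscale (Citus) database before the job starts writing. The following is only a minimal sketch that matches the columns selected by the transformation query; the data types and the optional distribution column are assumptions you should adapt to your workload.
+
+```sql
+-- Minimal sketch of the destination table for this example.
+-- Column names match the transformation query output; the data types are assumptions.
+CREATE TABLE public.device_data (
+    counter bigint,
+    connectiondeviceid text,
+    correlationid text,
+    connectiondevicegenerationid text,
+    enqueuedtime text
+);
+
+-- Optional: distribute the table across Hyperscale (Citus) worker nodes.
+-- The distribution column shown here is only an example.
+SELECT create_distributed_table('device_data', 'connectiondeviceid');
+```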
+
+## Next steps
+
+Learn how to create a [real-time
+dashboard](tutorial-design-database-realtime.md) with Hyperscale (Citus).
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
The integration runtime (IR) is the compute infrastructure that Microsoft Purvie
A self-hosted integration runtime (SHIR) can be used to scan data source in an on-premises network or a virtual network. The installation of a self-hosted integration runtime needs an on-premises machine or a virtual machine inside a private network.
-This article describes how to create and manage a self-hosted integration runtime.
+This article covers setting up a self-hosted integration runtime, as well as troubleshooting and managing it.
++
+|Topic | Section|
+|-|-|
+|Set up a new self-hosted integration runtime|[Machine requirements](#prerequisites)|
+||[Source-specific machine requirements are listed under prerequisites in each source article](azure-purview-connector-overview.md)|
+||[Set up guide](#setting-up-a-self-hosted-integration-runtime)|
+|Networking|[Networking requirements](#networking-requirements)|
+||[Proxy servers](#proxy-server-considerations)|
+||[Private endpoints](catalog-private-link.md)|
+||[Troubleshoot proxy and firewall](#possible-symptoms-for-issues-related-to-the-firewall-and-proxy-server)|
+||[Troubleshoot connectivity](troubleshoot-connections.md)|
+|Management|[General](#manage-a-self-hosted-integration-runtime)|
> [!NOTE] > The Microsoft Purview Integration Runtime cannot be shared with an Azure Synapse Analytics or Azure Data Factory Integration Runtime on the same machine. It needs to be installed on a separate machine.
Installation of the self-hosted integration runtime on a domain controller isn't
- Self-hosted integration runtime requires a 64-bit Operating System with .NET Framework 4.7.2 or above. See [.NET Framework System Requirements](/dotnet/framework/get-started/system-requirements) for details. -- Ensure Visual C++ Redistributable for Visual Studio 2015 or higher is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://docs.microsoft.com/cpp/windows/latest-supported-vc-redist#visual-studio-2015-2017-2019-and-2022).
+- Ensure Visual C++ Redistributable for Visual Studio 2015 or higher is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](/cpp/windows/latest-supported-vc-redist#visual-studio-2015-2017-2019-and-2022).
- The recommended minimum configuration for the self-hosted integration runtime machine is a 2-GHz processor with 4 cores, 8 GB of RAM, and 80 GB of available hard drive space. For the details of system requirements, see [Download](https://www.microsoft.com/download/details.aspx?id=39717). - If the host machine hibernates, the self-hosted integration runtime doesn't respond to data requests. Configure an appropriate power plan on the computer before you install the self-hosted integration runtime. If the machine is configured to hibernate, the self-hosted integration runtime installer prompts with a message.
You can edit a self-hosted integration runtime by navigating to **Integration ru
You can delete a self-hosted integration runtime by navigating to **Integration runtimes** in the Management center, selecting the IR and then selecting **Delete**. Once an IR is deleted, any ongoing scans relying on it will fail.
-## Service account for Self-hosted integration runtime
+### Notification area icons and notifications
-The default sign in service account of self-hosted integration runtime is **NT SERVICE\DIAHostService**. You can see it in **Services -> Integration Runtime Service -> Properties -> Log on**.
+If you move your cursor over the icon or message in the notification area, you can see details about the state of the self-hosted integration runtime.
-Make sure the account has the permission of Log on as a service. Otherwise self-hosted integration runtime can't start successfully. You can check the permission in **Local Security Policy -> Security Settings -> Local Policies -> User Rights Assignment -> Log on as a service**
+### Service account for Self-hosted integration runtime
+The default sign-in service account of self-hosted integration runtime is **NT SERVICE\DIAHostService**. You can see it in **Services -> Integration Runtime Service -> Properties -> Log on**.
-## Notification area icons and notifications
+Make sure the account has the **Log on as a service** permission. Otherwise, the self-hosted integration runtime can't start successfully. You can check the permission in **Local Security Policy -> Security Settings -> Local Policies -> User Rights Assignment -> Log on as a service**.
-If you move your cursor over the icon or message in the notification area, you can see details about the state of the self-hosted integration runtime.
## Networking requirements
sentinel Windows Security Event Id Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/windows-security-event-id-reference.md
When ingesting security events from Windows devices using the [Windows Security
The **Common** event set may contain some types of events that aren't so common. This is because the main point of the **Common** set is to reduce the volume of events to a more manageable level, while still maintaining full audit trail capability. -- **Minimal** - A small set of events that might indicate potential threats. This set does not contain a full audit trail. It covers only events that might indicate a successful breach, and other important events that have very low rates of occurrence. For example, it contains successful and failed user logons (event IDs 4624, 4625), but it doesn't contain sign-out information (4634) which, while important for auditing, is not meaningful for breach detection and has relatively high volume. Most of the data volume of this set is consists of sign-in events and process creation events (event ID 4688).
+- **Minimal** - A small set of events that might indicate potential threats. This set does not contain a full audit trail. It covers only events that might indicate a successful breach, and other important events that have very low rates of occurrence. For example, it contains successful and failed user logons (event IDs 4624, 4625), but it doesn't contain sign-out information (4634) which, while important for auditing, is not meaningful for breach detection and has relatively high volume. Most of the data volume of this set consists of sign-in events and process creation events (event ID 4688).
- **Custom** - A set of events determined by you, the user, and defined in a data collection rule using XPath queries. [Learn more about data collection rules](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries).
storage Storage Blob Event Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-event-overview.md
This table shows how this feature is supported in your account and the impact on
| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) | | Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-<sup>1</sup> Data Lake Storage Gen2 and the Network File System (NFS) 3.0 protocol both require a storage account with a hierarchical namespace enabled.
- <sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled. ## Next steps
stream-analytics Service Bus Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/service-bus-managed-identity.md
Previously updated : 05/04/2022 Last updated : 06/25/2022
Now that your managed identity is configured, you're ready to add the Service
1. Go to your Stream Analytics job and navigate to the **Outputs** page under **Job Topology**.
-1. Select **Add > Service Bus queue or Service Bus topic**. In the output properties window, search and select your Cosmos DB account and select **Managed Identity: System assigned** from the *Authentication mode* drop-down menu.
+1. Select **Add > Service Bus queue or Service Bus topic**. In the output properties window, search and select your Service Bus account and select **Managed Identity: System assigned** from the *Authentication mode* drop-down menu.
1. Fill out the rest of the properties and select **Save**.
synapse-analytics How To Set Up Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-set-up-access-control.md
Title: How to set up access control for your Azure Synapse workspace
-description: This article will teach you how to control access to an Azure Synapse workspace using Azure roles, Synapse roles, SQL permissions, and Git permissions.
-
+ Title: Access control in Synapse workspace how to
+description: Learn how to control access to Azure Synapse workspaces using Azure roles, Synapse roles, SQL permissions, and Git permissions.
+ Previously updated : 3/07/2022 Last updated : 5/23/2022 -+
-# How to set up access control for your Azure Synapse workspace
+# How to set up access control for your Azure Synapse workspace
-This article will teach you how to control access to a Microsoft Azure Synapse workspace using Azure roles, Azure Synapse roles, SQL permissions, and Git permissions.
+This article teaches you how to control access to a Microsoft Azure Synapse workspace. We'll use a combination of Azure roles, Azure Synapse roles, SQL permissions, and Git permissions to achieve this.
-In this guide, you'll set up a workspace and configure a basic access control system suitable for many Azure Synapse projects. It then describes more advanced options for finer-grained control should you need it.
+In this guide, you'll set up a workspace and configure a basic access control system. You can use this information in many types of Synapse projects. You'll also find advanced options for finer-grained control should you need it.
-Azure Synapse access control can be simplified by using security groups that are aligned with the roles and personas in your organization. You only need to add and remove users from security groups to manage access.
+Synapse access control can be simplified by aligning the roles and personas in your organization with security groups. You can then manage access simply by adding users to and removing them from the security groups.
-Before you start this walkthrough, read the [Azure Synapse access control overview](./synapse-workspace-access-control-overview.md) to familiarize yourself with the access control mechanisms used by Azure Synapse Analytics.
+Before you begin this walkthrough, read the [Azure Synapse access control overview](./synapse-workspace-access-control-overview.md) to familiarize yourself with access control mechanisms used by Synapse Analytics.
## Access control mechanisms > [!NOTE]
-> The approach taken in this guide is to create several security groups and then assign roles to these groups. After the groups are set up, you only need to manage membership within the security groups to control access to the workspace.
+> The approach in this guide is to create security groups. When you assign roles to these security groups, you only need to manage memberships within those groups to control access to workspaces.
-To secure an Azure Synapse workspace, you'll follow a pattern of configuring the following items:
+To secure a Synapse workspace, you'll configure the following items:
- **Security Groups**, to group users with similar access requirements. - **Azure roles**, to control who can create and manage SQL pools, Apache Spark pools and Integration runtimes, and access ADLS Gen2 storage.-- **Synapse roles**, to control access to published code artifacts, use of Apache Spark compute resources and Integration runtimes -- **SQL permissions**, to control administrative and data plane access to SQL pools. -- **Git permissions**, to control who can access code artifacts in source control if you configure Git-support for the workspace
-
-## Steps to secure an Azure Synapse workspace
+- **Synapse roles**, to control access to published code artifacts, use of Apache Spark compute resources and integration runtimes.
+- **SQL permissions**, to control administrative and data plane access to SQL pools.
+- **Git permissions**, to control who can access code artifacts in source control if you configure Git-support for workspaces.
-This document uses standard names to simplify the instructions. Replace them with names of your choice.
+## Steps to secure a Synapse workspace
+
+This document uses standard names to simplify instructions. Replace them with names of your choice.
|Setting | Standard name | Description | | : | :-- | :- |
This document uses standard names to simplify the instructions. Replace them wit
## STEP 1: Set up security groups
->[!Note]
->During the preview, it was recommended to create security groups mapped to the Azure Synapse **Synapse SQL Administrator** and **Synapse Apache Spark Administrator** roles. With the introduction of new finer-grained Synapse RBAC roles and scopes, it is now recommended that you use these new capabilities to control access to your workspace. These new roles and scopes provide more configuration flexibility and recognize that developers often use a mix of SQL and Spark in creating analytics applications and may need to be granted access to specific resources rather than the entire workspace. [Learn more](./synapse-workspace-synapse-rbac.md) about Synapse RBAC.
+>[!Note]
+>During the preview, you were encouraged to create security groups and to map them to Azure Synapse **Synapse SQL Administrator** and **Synapse Apache Spark Administrator** roles. With the introduction of new finer-grained Synapse RBAC roles and scopes, you are now encouraged to use newer options to control access to your workspace. They give you greater configuration flexibility and they acknowledge that developers often use a mix of SQL and Spark to create analytics applications. So developers may need access to individual resources rather than an entire workspace. [Learn more](./synapse-workspace-synapse-rbac.md) about Synapse RBAC.
Create the following security groups for your workspace: -- **`workspace1_SynapseAdministrators`**, for users who need complete control over the workspace. Add yourself to this security group, at least initially.-- **`workspace1_SynapseContributors`**, for developers who need to develop, debug, and publish code to the service.
+- **`workspace1_SynapseAdministrators`**, for users who need complete control over a workspace. Add yourself to this security group, at least initially.
+- **`workspace1_SynapseContributors`**, for developers who need to develop, debug, and publish code to a service.
- **`workspace1_SynapseComputeOperators`**, for users who need to manage and monitor Apache Spark pools and Integration runtimes.-- **`workspace1_SynapseCredentialUsers`**, for users who need to debug and run orchestration pipelines using the workspace MSI (managed service identity) credential and cancel pipeline runs.
+- **`workspace1_SynapseCredentialUsers`**, for users who need to debug and run orchestration pipelines using workspace MSI (managed service identity) credentials and cancel pipeline runs.
You'll assign Synapse roles to these groups at the workspace scope shortly.
-Also create this security group:
-- **`workspace1_SQLAdmins`**, group for users who need SQL Active Directory Admin authority within SQL pools in the workspace.
+Also create this security group:
+- **`workspace1_SQLAdmins`**, a group for users who need SQL Active Directory Admin authority within SQL pools in the workspace.
-The `workspace1_SQLAdmins` group will be used when you configure SQL permissions in SQL pools as you create them.
+You'll use the `workspace1_SQLAdmins` group to configure SQL permissions when you create SQL pools.
-For a basic setup, these five groups are sufficient. Later, you can add security groups to handle users who need more specialized access or to give users access only to specific resources.
+These five groups are sufficient for a basic setup. Later, you can add security groups to handle users who need more specialized access or restrict access to individual resources only.
> [!NOTE] >- Learn how to create a security group in [Create a basic group and add members using Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md). >- Learn how to add a security group from another security group in [Add or remove a group from another group using Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-membership-azure-portal.md). >[!Tip]
->Individual Synapse users can use Azure Active Directory in the Azure portal to view their group memberships to determine which roles they've been granted.
+>Individual Synapse users can use Azure Active Directory in the Azure portal to view their group memberships. This allows them to determine which roles they've been granted.
## STEP 2: Prepare your ADLS Gen2 storage account
-An Azure Synapse workspace uses a default storage container for:
- - Storing the backing data files for Spark tables
+Synapse workspaces use default storage containers for:
+ - Storage of backing data files for Spark tables
- Execution logs for Spark jobs
- - Managing libraries that you choose to install
+ - Management of libraries that you choose to install
Identify the following information about your storage: - The ADLS Gen2 account to use for your workspace. This document calls it `storage1`. `storage1` is considered the "primary" storage account for your workspace.-- The container inside `workspace1` that your Synapse workspace will use by default. This document calls it `container1`.
-
+- The container inside `storage1` that your Synapse workspace will use by default. This document calls it `container1`.
+ - Select **Access control (IAM)**. - Select **Add** > **Add role assignment** to open the Add role assignment page.
Identify the following information about your storage:
![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
-## STEP 3: Create and configure your Azure Synapse Workspace
+## STEP 3: Create and configure your Synapse workspace
-In the Azure portal, create an Azure Synapse workspace:
+In Azure portal, create a Synapse workspace:
- Select your subscription -- Select or create a resource group for which you have the Azure **Owner** role.
+- Select or create a resource group for which you have an Azure **Owner** role.
- Name the workspace `workspace1`
In the Azure portal, create an Azure Synapse workspace:
- Open WS1 in Synapse Studio -- Navigate to **Manage** > **Access Control** and assign Synapse roles at *workspace scope* to the security groups as follows:
- - Assign the **Synapse Administrator** role to `workspace1_SynapseAdministrators`
- - Assign the **Synapse Contributor** role to `workspace1_SynapseContributors`
+- In Synapse Studio, navigate to **Manage** > **Access Control**. In **workspace scope**, assign Synapse roles to security groups as follows:
+ - Assign the **Synapse Administrator** role to `workspace1_SynapseAdministrators`
+ - Assign the **Synapse Contributor** role to `workspace1_SynapseContributors`
- Assign the **Synapse Compute Operator** role to `workspace1_SynapseComputeOperators` ## STEP 4: Grant the workspace MSI access to the default storage container
-To run pipelines and perform system tasks, Azure Synapse requires that the workspace managed service identity (MSI) needs access to `container1` in the default ADLS Gen2 account. For more information, see [Azure Synapse workspace managed identity](../../data-factory/data-factory-service-identity.md?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics).
+To run pipelines and perform system tasks, Azure Synapse requires the workspace managed service identity (MSI) to have access to `container1` in the default ADLS Gen2 account. For more information, see [Azure Synapse workspace managed identity](../../data-factory/data-factory-service-identity.md?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics).
-- Open the Azure portal-- Locate the storage account, `storage1`, and then `container1`
+- Open Azure portal
+- Locate the storage account, `storage1`, and then `container1`.
- Select **Access control (IAM)**.-- Select **Add** > **Add role assignment** to open the Add role assignment page.
+- To open the **Add role assignment** page, select **Add** > **Add role assignment**.
- Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). | Setting | Value |
To run pipelines and perform system tasks, Azure Synapse requires that the works
![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
-## STEP 5: Grant Synapse administrators the Azure Contributor role on the workspace
+## STEP 5: Grant Synapse administrators an Azure Contributor role for the workspace
+
+To create SQL pools, Apache Spark pools, and Integration runtimes, users need at least the Azure Contributor role on the workspace. The Contributor role also allows users to manage these resources, including pausing and scaling them. To use the Azure portal or Synapse Studio to create SQL pools, Apache Spark pools, and Integration runtimes, you need the Contributor role at the resource group level.
-To create SQL pools, Apache Spark pools and integration runtimes, users must have at least Azure Contributor role at the workspace. The contributor role also allows these users to manage the resources, including pausing and scaling. If you're using Azure portal or Synapse Studio to create SQL pools, Apache Spark pools and integration runtimes, then you need Azure Contributor role at the resource group level.
-- Open the Azure portal
+- Open Azure portal
- Locate the workspace, `workspace1` - Select **Access control (IAM)**.-- Select **Add** > **Add role assignment** to open the Add role assignment page.
+- To open the **Add role assignment** page, select **Add** > **Add role assignment**.
- Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
+ | Setting | Value | | | | | Role | Contributor |
To create SQL pools, Apache Spark pools and integration runtimes, users must hav
![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
-## STEP 6: Assign SQL Active Directory Admin role
+## STEP 6: Assign an SQL Active Directory Admin role
-The workspace creator is automatically set up as the SQL Active Directory Admin for the workspace. Only a single user or group can be granted this role. In this step, you assign the SQL Active Directory Admin on the workspace to the `workspace1_SQLAdmins` security group. Assigning this role gives this group highly privileged admin access to all SQL pools and databases in the workspace.
+The *workspace creator* is automatically assigned as *SQL Active Directory Admin* for the workspace. Only a single user or a group can be granted this role. In this step, you assign the SQL Active Directory Admin for the workspace to the `workspace1_SQLAdmins` security group. This gives the group highly privileged admin access to all SQL pools and databases in the workspace.
-The Azure Active Directory admin account controls access to dedicated SQL pools, while Synapse RBAC roles are used to control access to serverless pools. Configure Synapse RBAC roles via Synapse Studio, for more information, see [How to manage Synapse RBAC role assignments in Synapse Studio](../security/how-to-manage-synapse-rbac-role-assignments.md).
--- Open the Azure portal
+- Open Azure portal
- Navigate to `workspace1`-- Under **Settings**, select **SQL Active Directory admin**
+- Under **Settings**, select **Azure Active Directory**
- Select **Set admin** and choose **`workspace1_SQLAdmins`** >[!Note]
->Step 6 is optional. You might choose to grant the `workspace1_SQLAdmins` group a less privileged role. To assign `db_owner` or other SQL roles, you must run scripts on each SQL database.
+>Step 6 is optional. You might choose to grant the `workspace1_SQLAdmins` group a less privileged role. To assign `db_owner` or other SQL roles, you must run scripts on each SQL database.
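+
+If you take that less privileged approach, a minimal per-database sketch might look like the following. It grants a role to the `workspace1_SQLAdmins` group in one dedicated SQL pool database; `db_owner` is only an example role, and you'd run the script in each database the group should manage.
+
+```sql
+-- Sketch: grant a database role to the workspace1_SQLAdmins group
+-- (run in each target dedicated SQL pool database; the role shown is an example).
+CREATE USER [workspace1_SQLAdmins] FROM EXTERNAL PROVIDER;
+EXEC sp_addrolemember 'db_owner', 'workspace1_SQLAdmins';
+```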
## STEP 7: Grant access to SQL pools
-By default, all users assigned the Synapse Administrator role are also assigned the SQL `db_owner` role on the serverless SQL pools in the workspace.
-
-Access to SQL pools for other users is controlled using SQL permissions. Assigning SQL permissions requires that SQL scripts are run on each SQL database after creation. There are three cases that require you run these scripts:
-1. Granting other users access to the serverless SQL pool, 'Built-in', and its databases
-2. Granting any user access to dedicated SQL pool databases
+By default, users assigned the Synapse Administrator role are also given the SQL `db_owner` role on the serverless SQL pools in the workspace.
-Example SQL scripts are included below.
+Access to SQL pools for other users is controlled by SQL permissions. Assigning SQL permissions requires SQL scripts to be run on each SQL database after it's created. The following cases require you to run these scripts:
+1. To grant users access to the serverless SQL pool, 'Built-in', and its databases.
+1. To grant users access to dedicated SQL pool databases. Example SQL scripts are included later in this article.
-To grant access to a dedicated SQL pool database, the scripts can be run by the workspace creator or any member of the `workspace1_SynapseAdministrators` group.
+To grant access to a dedicated SQL pool database, scripts can be run by the workspace creator or any member of the `workspace1_SynapseAdministrators` group.
-To grant access to the serverless SQL pool, 'Built-in', the scripts can be run by any member of the `workspace1_SQLAdmins` group or the `workspace1_SynapseAdministrators` group.
+To grant access to the serverless SQL pool, 'Built-in', scripts can be run by any member of the `workspace1_SQLAdmins` group or the `workspace1_SynapseAdministrators` group.
> [!TIP]
-> The steps below need to be run for **each** SQL pool to grant user access to all SQL databases except in section [Workspace-scoped permission](#workspace-scoped-permission) where you can assign a user a sysadmin role at the workspace level.
+>You can grant access to all SQL databases by taking the following steps for **each** SQL pool. The section [Configure workspace-scoped permissions](#configure-workspace-scoped-permissions) is an exception to the rule; it allows you to assign a user a sysadmin role at the workspace level.
### STEP 7.1: Serverless SQL pool, Built-in
-In this section, there are script examples showing how to give a user permission to access a particular database or to all databases in the serverless SQL pool, `Built-in`.
+You can use the script examples in this section to give users permission to access an individual database or all databases in the serverless SQL pool, `Built-in`.
> [!NOTE]
-> In the script examples, replace *alias* with the alias of the user or group being granted access, and *domain* with the company domain you are using.
+> In the script examples, replace *alias* with the alias of the user or group being granted access. Replace *domain* with the company domain you are using.
-#### Database-scoped permission
+#### Configure Database-scoped permissions
-To grant access to a user to a **single** serverless SQL database, follow the steps in this example:
+You can grant users access to a **single** serverless SQL database with the steps outlined in this example:
1. Create a login. Change to the `master` database context.
To grant access to a user to a **single** serverless SQL database, follow the st
ALTER ROLE db_owner ADD member alias; -- Type USER name from step 2 ```
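+
+As a hedged sketch, the full database-scoped sequence typically looks like the following. The login, user, and database names are placeholders, and `db_owner` is only one possible role.
+
+```sql
+-- Run in the master database context: create a login for the Azure AD user or group.
+CREATE LOGIN [alias@domain.com] FROM EXTERNAL PROVIDER;
+
+-- Switch to the target serverless SQL database (for example, via the "Connect to" dropdown),
+-- then create a user from that login.
+CREATE USER alias FROM LOGIN [alias@domain.com];
+
+-- Add the user to a database role.
+ALTER ROLE db_owner ADD MEMBER alias;
+```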
-#### Workspace-scoped permission
+#### Configure Workspace-scoped permissions
-To grant full access to **all** serverless SQL pools in the workspace, in the `master` database, use the script in this example:
+You can grant full access to **all** serverless SQL pools in the workspace. Run the script in this example in the `master` database:
```sql CREATE LOGIN [alias@domain.com] FROM EXTERNAL PROVIDER; ALTER SERVER ROLE sysadmin ADD MEMBER [alias@domain.com]; ```
-### STEP 7.2: Dedicated SQL pools
+### STEP 7.2: Configure dedicated SQL pools
-To grant access to a **single** dedicated SQL pool database, follow these steps in the Azure Synapse SQL script editor:
+You can grant access to a **single** dedicated SQL pool database. Use these steps in the Azure Synapse SQL script editor:
-1. Create the user in the database by running the following command on the target database, selected using the *Connect to* dropdown:
+1. Create a user in the database by running the following commands. Select the target database in the *Connect to* dropdown:
```sql --Create user in the database
To grant access to a **single** dedicated SQL pool database, follow these steps
``` > [!IMPORTANT]
-> The **db_datareader** and **db_datawriter** database roles can work for read/write permissions if granting **db_owner** permission is not desired.
-> However, for a Spark user to read and write directly from Spark into or from a SQL pool, **db_owner** permission is required.
+> **db_datareader** and **db_datawriter** database roles can provide read/write permission when you do not want to give **db_owner** permissions.
+> However, **db_owner** permission is necessary for Spark users to read and write directly from Spark into or from an SQL pool.
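+
+As a hedged sketch, the dedicated SQL pool steps typically combine a contained database user with a role grant, as shown below. The account name is a placeholder, and the roles reflect the guidance in the note above.
+
+```sql
+-- Run in the target dedicated SQL pool database:
+-- create a contained user for the Azure AD user or group.
+CREATE USER [alias@domain.com] FROM EXTERNAL PROVIDER;
+
+-- Grant read/write access when full ownership isn't needed...
+EXEC sp_addrolemember 'db_datareader', 'alias@domain.com';
+EXEC sp_addrolemember 'db_datawriter', 'alias@domain.com';
+
+-- ...or grant db_owner, which is required when Spark reads from and writes to the pool directly.
+-- EXEC sp_addrolemember 'db_owner', 'alias@domain.com';
+```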
-After creating the users, run queries to validate that the serverless SQL pool can query the storage account.
+After you create the users, you can run queries to confirm that the serverless SQL pool can query the storage account.
## STEP 8: Add users to security groups
-The initial configuration for your access control system is complete.
+The initial configuration for your access control system is now complete.
-To manage access, you can add and remove users to the security groups you've set up. Although you can manually assign users to Azure Synapse roles, if you do, it won't configure their permissions consistently. Instead, only add or remove users to the security groups.
+To manage access, you can now add users to and remove them from the security groups you've set up. Although you can manually assign users to Azure Synapse roles, doing so configures permissions inconsistently. Instead, only add or remove users from your security groups.
## STEP 9: Network security
Your workspace is now fully configured and secured.
This guide has focused on setting up a basic access control system. You can support more advanced scenarios by creating additional security groups and assigning these groups more granular roles at more specific scopes. Consider the following cases:
-**Enable Git-support** for the workspace for more advanced development scenarios including CI/CD. While in Git mode, Git permissions and Synapse RBAC will determine whether a user can commit changes to their working branch. Publishing to the service only takes place from the collaboration branch. Consider creating a security group for developers who need to develop and debug updates in a working branch but don't need to publish changes to the live service.
+**Enable Git-support** for the workspace for more advanced development scenarios including CI/CD. While in Git mode, Git permissions and Synapse RBAC will determine whether a user can commit changes to their working branch. Publishing to the service only takes place from the collaboration branch. Consider creating a security group for developers who need to develop and debug updates in a working branch but don't need to publish changes to the live service.
-**Restrict developer access** to specific resources. Create additional finer-grained security groups for developers who need access only to specific resources. Assign these groups appropriate Azure Synapse roles that are scoped to specific Spark pools, Integration runtimes, or credentials.
+**Restrict developer access** to specific resources. Create additional finer-grained security groups for developers who need access only to specific resources. Assign these groups appropriate Azure Synapse roles that are scoped to specific Spark pools, Integration runtimes, or credentials.
**Restrict operators from accessing code artifacts**. Create security groups for operators who need to monitor operational status of Synapse compute resources and view logs but who don't need access to code or to publish updates to the service. Assign these groups the Compute Operator role scoped to specific Spark pools and Integration runtimes.
synapse-analytics How To Pause Resume Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/how-to-pause-resume-pipelines.md
Synapse Pipelines allow for the automation of pause and resume, but you can exec
- An existing [Azure Synapse workspace](../get-started-create-workspace.md) - At least one [dedicated SQL pool](../get-started-analyze-sql-pool.md)-- Your workspace must be assigned the Azure contributor role to the affected Dedicated SQL Pool(s). See [Grant Synapse administrators the Azure Contributor role on the workspace](../security/how-to-set-up-access-control.md#step-5-grant-synapse-administrators-the-azure-contributor-role-on-the-workspace).
+- Your workspace must be assigned the Azure contributor role to the affected Dedicated SQL Pool(s). See [Grant Synapse administrators the Azure Contributor role on the workspace](../security/how-to-set-up-access-control.md#step-5-grant-synapse-administrators-an-azure-contributor-role-for-the-workspace).
## Step 1: Create a pipeline in Synapse Studio. 1. Navigate to your workspace and open Synapse Studio.
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
Title: Overview of VM Applications in the Azure Compute Gallery (preview) description: Learn more about VM application packages in an Azure Compute Gallery.-+ Last updated 05/18/2022-+
virtual-network Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-address-prefix.md
Resource|Scenario|Steps|
|Virtual machine scale sets | You can use a public IP address prefix to generate instance-level IPs in a virtual machine scale set, though individual public IP resources won't be created. | Use a [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-with-public-ip-prefix) with instructions to use this prefix for public IP configuration as part of the scale set creation. (Note that the zonal properties of the prefix will be passed to the instance IPs, though they will not show in the output; see [Networking for Virtual Machine Scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine) for more information.) | | Standard load balancers | A public IP address prefix can be used to scale a load balancer by [using all IPs in the range for outbound connections](../../load-balancer/outbound-rules.md#scale). | To associate a prefix to your load balancer: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. When creating the load balancer, select the IP prefix as associated with the frontend of your load balancer. | | NAT Gateway | A public IP prefix can be used to scale a NAT gateway by using the public IPs in the prefix for outbound connections. | To associate a prefix to your NAT Gateway: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. When creating the NAT Gateway, select the IP prefix as the Outbound IP. (Note that a NAT Gateway can have no more than 16 IPs in total, so a public IP prefix of /28 length is the maximum size that can be used.) |
-| VPN Gateway (AZ SKU) or Application Gateway v2 | You can use a public IP from a prefix for your zone-redundant VPN or Application gateway v2. | To associate an IP from a prefix to your gateway: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When you deploy the [VPN Gateway](../../vpn-gateway/tutorial-create-gateway-portal.md) or [Application Gateway](../../application-gateway/quick-create-portal.md#create-an-application-gateway), be sure to select the IP you previously gave from the prefix.|
## Limitations
virtual-wan Connect Virtual Network Gateway Vwan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/connect-virtual-network-gateway-vwan.md
description: Learn how to connect an Azure VPN gateway (virtual network gateway)
Previously updated : 06/22/2022 Last updated : 06/24/2022
Virtual Network (for virtual network gateway)
## <a name="vnetgw"></a>1. Configure VPN Gateway virtual network gateway
-Create a **VPN Gateway** virtual network gateway in active-active mode for your virtual network. When you create the gateway, you can either use existing public IP addresses for the two instances of the gateway, or you can create new public IPs. You'll use these public IPs when setting up the Virtual WAN sites. For more information about active-active VPN gateways and configuration steps, see [Configure active-active VPN gateways](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md#aagateway).
+In this section, you create a VPN Gateway virtual network gateway in active-active mode for your virtual network. When you create the gateway, you can either use existing public IP addresses for the two instances of the gateway, or you can create new public IPs. You'll use these public IPs when setting up the Virtual WAN sites.
-The following sections show example settings for your gateway.
+1. Create a **VPN Gateway** virtual network gateway in active-active mode for your virtual network. For more information about active-active VPN gateways and configuration steps, see [Configure active-active VPN gateways](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md#aagateway).
-### <a name="active-active"></a>Active-active mode setting
+1. The following sections show example settings for your virtual network gateway.
-On the Virtual network gateway **Configuration** page, make sure **active-active** mode is enabled.
+ * **Active-active mode setting** - On the virtual network gateway **Configuration** page, make sure **active-active** mode is enabled.
+ :::image type="content" source="./media/connect-virtual-network-gateway-vwan/active.png" alt-text="Screenshot showing a virtual network gateway with active-active mode enabled." lightbox="./media/connect-virtual-network-gateway-vwan/active.png":::
-### <a name="BGP"></a>BGP setting
+ * **BGP setting** - On the virtual network gateway **Configuration** page, you can (optionally) select **Configure BGP ASN**. If you configure BGP, change the ASN from the default value shown in the portal. For this configuration, the BGP ASN can't be 65515. 65515 will be used by Azure Virtual WAN.
-On the virtual network gateway **Configuration** page, you can (optionally) select **Configure BGP ASN**. If you configure BGP, change the ASN from the default value shown in the portal. For this configuration, the BGP ASN can't be 65515. 65515 will be used by Azure Virtual WAN.
+ :::image type="content" source="./media/connect-virtual-network-gateway-vwan/bgp.png" alt-text="Screenshot shows a virtual network gateway Configuration page with Configure BGP ASN selected." lightbox="./media/connect-virtual-network-gateway-vwan/bgp.png":::
+ * **Public IP addresses** - Once the gateway is created, go to the **Properties** page. The properties and configuration settings will be similar to the following example. Notice the two public IP addresses that are used for the gateway.
-### <a name="pip"></a>Public IP addresses
-
-Once the gateway is created, go to the **Properties** page. The properties and configuration settings will be similar to the following example. Notice the two public IP addresses that are used for the gateway.
-
+ :::image type="content" source="./media/connect-virtual-network-gateway-vwan/public-ip.png" alt-text="Screenshot shows a virtual network gateway Properties page with properties selected." lightbox="./media/connect-virtual-network-gateway-vwan/public-ip.png":::
## <a name="vwansite"></a>2. Create Virtual WAN VPN sites
-To create Virtual WAN VPN sites, navigate to your virtual WAN and, under **Connectivity**, select **VPN sites**. In this section, you'll create two Virtual WAN VPN sites that correspond to the virtual network gateways you created in the previous section.
+In this section, you'll create two Virtual WAN VPN sites that correspond to the virtual network gateways you created in the previous section.
+
+1. On your **Virtual WAN** page, go to **VPN sites**.
+1. On the **VPN sites** page, select **+Create site**.
+1. On the **Create VPN Site** page, on the **Basics** tab, complete the following fields:
-1. Select **+Create site**.
-1. On the **Create VPN sites** page, type the following values:
+ * **Region**: The same region as the Azure VPN Gateway virtual network gateway.
+ * **Name**: Example: Site1
+ * **Device vendor**: The name of the VPN device vendor (for example: Citrix, Cisco, Barracuda). Adding the device vendor can help the Azure Team better understand your environment in order to add additional optimization possibilities in the future, or to help you troubleshoot.
+ * **Private address space**: Enter a value, or leave blank when BGP is enabled.
+1. Select **Next: Links>** to advance to the **Links** page.
+1. On the **Links** page, complete the following fields:
- * **Region** - The same region as the Azure VPN Gateway virtual network gateway.
- * **Device vendor** - Enter the device vendor (any name).
- * **Private address space** - Enter a value, or leave blank when BGP is enabled.
- * **Border Gateway Protocol** - Set to **Enable** if the Azure VPN Gateway virtual network gateway has BGP enabled.
-1. Under **Links**, enter the following values:
+ * **Link Name**: A name you want to provide for the physical link at the VPN Site. Example: Link1.
+   * **Link speed**: The speed of the VPN device at the branch location, in Mbps. For example, 50 means the VPN device at the branch site has a speed of 50 Mbps.
+ * **Link provider name**: The name of the physical link at the VPN Site. Example: ATT, Verizon.
+   * **Link IP Address**: Enter the IP address. For this configuration, it's the same as the first public IP address shown under the (VPN Gateway) virtual network gateway properties.
+   * **BGP Address** and **ASN**: These must be the same as one of the BGP peer IP addresses and the ASN from the VPN Gateway virtual network gateway that you configured in [Step 1](#vnetgw).
- * **Provider Name** - Enter a Link name and a Provider name (any name).
- * **Speed** - Speed (any number).
- * **IP Address** - Enter IP address (same as the first public IP address shown under the (VPN Gateway) virtual network gateway properties).
- * **BGP Address** and **ASN** - BGP address and ASN. These must be the same as one of the BGP peer IP addresses, and ASN from the VPN Gateway virtual network gateway that you configured in [Step 1](#vnetgw).
-1. Review and select **Confirm** to create the site.
+1. Once you have finished filling out the fields, select **Review + create** to verify. Select **Create** to create the site.
1. Repeat the previous steps to create the second site to match with the second instance of the VPN Gateway virtual network gateway. You'll keep the same settings, except using second public IP address and second BGP peer IP address from VPN Gateway configuration. 1. You now have two sites successfully provisioned. ## <a name="connect-sites"></a>3. Connect sites to the virtual hub
-Next, connect both sites to your virtual hub.
+Next, connect both sites to your virtual hub using the following steps. For more information about connecting sites, see [Connect VPN sites to a virtual hub](virtual-wan-site-to-site-portal.md#connectsites).
1. On your Virtual WAN page, go to **Hubs**. 1. On the **Hubs** page, click the hub that you created.
-1. On the page for the hub that you created, in the left pane, click **VPN (Site to site)**.
+1. On the page for the hub that you created, in the left pane, select **VPN (Site to site)**.
1. On the **VPN (Site to site)** page, you should see your sites. If you don't, you may need to click the **Hub association:x** bubble to clear the filters and view your site.
-1. Select the checkbox next to the name of each site that you want to connect (don't click the site name directly), then click **Connect VPN sites**.
-
-1. On the **Connect sites** page, configure the settings.
+1. Select the checkbox next to the name of both sites (don't click the site name directly), then click **Connect VPN sites**.
+1. On the **Connect sites** page, configure the settings. Make sure to note the **Pre-shared key** value that you use. It will be used again later in the exercise when you create your connections.
1. At the bottom of the page, select **Connect**. It takes a short while for the hub to update with the site settings.
-For more information, see [Connect the VPN sites to a virtual hub](virtual-wan-site-to-site-portal.md#connectsites).
- ## <a name="downloadconfig"></a>4. Download the VPN configuration files In this section, you download the VPN configuration file for the sites that you created in the previous section.
In this section, you create two Azure VPN Gateway local network gateways. The co
In this section, you create a connection between the VPN Gateway local network gateways and virtual network gateway. For steps on how to create a VPN Gateway connection, see [Configure a connection](../vpn-gateway/tutorial-site-to-site-portal.md#CreateConnection).
-1. In the portal, navigate to your virtual network gateway and click **Connections**. At the top of the Connections page, click **+Add** to open the **Add connection** page.
+1. In the portal, go to your virtual network gateway and select **Connections**. At the top of the Connections page, select **+Add** to open the **Add connection** page.
1. On the **Add connection** page, configure the following values for your connection: * **Name:** Name your connection. * **Connection type:** Select **Site-to-site(IPSec)** * **Virtual network gateway:** The value is fixed because you're connecting from this gateway. * **Local network gateway:** This connection will connect the virtual network gateway to the local network gateway. Choose one of the local network gateways that you created earlier.
- * **Shared Key:** Enter a shared key.
+ * **Shared Key:** Enter the shared key from earlier.
* **IKE Protocol:** Choose the IKE protocol.
-1. Click **OK** to create your connection.
+1. Select **OK** to create your connection.
1. You can view the connection in the **Connections** page of the virtual network gateway. 1. Repeat the preceding steps to create a second connection. For the second connection, select the other local network gateway that you created.
-1. If the connections are over BGP, after you've created your connections, navigate to a connection and select **Configuration**. On the **Configuration** page, for **BGP**, select **Enabled**. Then, click **Save**. Repeat for the second connection.
+1. If the connections are over BGP, after you've created your connections, go to a connection and select **Configuration**. On the **Configuration** page, for **BGP**, select **Enabled**. Then, select **Save**.
+1. Repeat for the second connection.
## <a name="test"></a>7. Test connections
You can test the connectivity by creating two virtual machines, one on the side
* **Hubs** - Select the hub you want to associate with this connection. * **Subscription** - Verify the subscription. * **Virtual network** - Select the virtual network you want to connect to this hub. The virtual network can't have an already existing virtual network gateway.
-1. Click **OK** to create the virtual network connection.
+1. Select **OK** to create the virtual network connection.
1. Connectivity is now set between the VMs. You should be able to ping one VM from the other, unless there are any firewalls or other policies blocking the communication. ## Next steps