Updates from: 07/14/2021 03:05:39
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Security Audit Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/security-audit-events.md
View account sign-in events from the past seven days for the account named user t
AADDomainServicesAccountLogon | where TimeGenerated >= ago(7d) | where "user" == tolower(extract("Logon Account:\t(.+[0-9A-Za-z])",1,tostring(ResultDescription)))
-| where "0xc000006a" == tolower(extract("Error Code:\t(.+[0-9A-Za-z])",1,tostring(ResultDescription)))
+| where "0xc000006a" == tolower(extract("Error Code:\t(.+[0-9A-Fa-f])",1,tostring(ResultDescription)))
``` ### Sample query 5
View account sign-in events from the past seven days for the account named user t
AADDomainServicesAccountLogon | where TimeGenerated >= ago(7d) | where "user" == tolower(extract("Logon Account:\t(.+[0-9A-Za-z])",1,tostring(ResultDescription)))
-| where "0xc0000234" == tolower(extract("Error Code:\t(.+[0-9A-Za-z])",1,tostring(ResultDescription)))
+| where "0xc0000234" == tolower(extract("Error Code:\t(.+[0-9A-Fa-f])",1,tostring(ResultDescription)))
``` ### Sample query 6
View the number of account sign-in events from the past seven days for all sign-i
```Kusto AADDomainServicesAccountLogon | where TimeGenerated >= ago(7d)
-| where "0xc0000234" == tolower(extract("Error Code:\t(.+[0-9A-Za-z])",1,tostring(ResultDescription)))
+| where "0xc0000234" == tolower(extract("Error Code:\t(.+[0-9A-Fa-f])",1,tostring(ResultDescription)))
| summarize count() ```
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
Previously updated : 05/11/2021 Last updated : 07/13/2021
The key benefits of enabling automatic user provisioning are:
### Licensing
-Azure AD provides self-service integration of any application using templates provided in the application gallery menu. For a full list of license requirements, see [Azure AD licensing page](https://azure.microsoft.com/pricing/details/active-directory/).
+Azure AD provides self-service integration of any application using templates provided in the application gallery menu. For a full list of license requirements, see [Azure AD pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
#### Application licensing
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
Previously updated : 06/07/2021 Last updated : 07/13/2021
This capability of HR-driven IT provisioning offers the following significant bu
### Licensing
-To configure the cloud HR app to Azure AD user provisioning integration, you require a valid [Azure AD Premium license](https://azure.microsoft.com/pricing/details/active-directory/) and a license for the cloud HR app, such as Workday or SuccessFactors.
+To configure the cloud HR app to Azure AD user provisioning integration, you require a valid [Azure AD Premium license](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) and a license for the cloud HR app, such as Workday or SuccessFactors.
You also need a valid Azure AD Premium P1 or higher subscription license for every user that will be sourced from the cloud HR app and provisioned to either Active Directory or Azure AD. Any improper number of licenses owned in the cloud HR app might lead to errors during user provisioning.
active-directory Tutorial Ecma Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/tutorial-ecma-sql-connector.md
Previously updated : 07/01/2021 Last updated : 07/13/2021
The Generic SQL Connector is a DSN file to connect to the SQL server. First we n
>Alternatively, you can force the agent registration to complete by restarting the provisioning agent on your server. Navigate to your server > search for services in the Windows search bar > identify the Azure AD Connect Provisioning Agent Service > right-click the service and restart it. ![Restart an agent](.\media\on-premises-ecma-configure\configure-8.png)
-5. After 10 minutes, under the **Admin credentials** section, enter the following URL, replacing "connectorName" portion with the name of the connector on the ECMA Host.
+5. After 10 minutes, under the **Admin credentials** section, enter the following URL, replacing the "connectorName" portion with the name of the connector on the ECMA Host. You can also replace localhost with the host name.
|Property|Value| |--|--|
- |Tenant URL|https://localhost:8585/ecma2host_SQL/scim|
+ |Tenant URL|https://localhost:8585/ecma2host_connectorName/scim|
6. Enter the secret token value that you defined when creating the connector. 7. Click Test Connection and wait one minute.
active-directory Active Directory App Proxy Protect Ndes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/active-directory-app-proxy-protect-ndes.md
If you're new to Azure AD Application Proxy and want to learn more, see [Remote
Azure AD Application Proxy is built on Azure. It gives you a massive amount of network bandwidth and server infrastructure for better protection against distributed denial-of-service (DDOS) attacks and superb availability. Furthermore, there's no need to open external firewall ports to your on-premises network and no DMZ server is required. All traffic originates outbound. For a complete list of outbound ports, see [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](./application-proxy-add-on-premises-application.md#prepare-your-on-premises-environment).
-> Azure AD Application Proxy is a feature that is available only if you are using the Premium or Basic editions of Azure Active Directory. For more information, see [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+> Azure AD Application Proxy is a feature that is available only if you are using the Premium or Basic editions of Azure Active Directory. For more information, see [Azure Active Directory pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
> If you have Enterprise Mobility Suite (EMS) licenses, you are eligible to use this solution. > The Azure AD Application Proxy connector only installs on Windows Server 2012 R2 or later. This is also a requirement of the NDES server.
active-directory Application Proxy Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-deployment-plan.md
For detailed information on the topic, see [KCD for single sign-on](application-
* **Application publishing and administration** require the *Application Administrator* role. Application Administrators can manage all applications in the directory including registrations, SSO settings, user and group assignments and licensing, Application Proxy settings, and consent. It doesn't grant the ability to manage Conditional Access. The *Cloud Application Administrator* role has all the abilities of the Application Administrator, except that it does not allow management of Application Proxy settings.
-* **Licensing**: Application Proxy is available through an Azure AD Premium subscription. Refer to the [Azure Active Directory Pricing page](https://azure.microsoft.com/pricing/details/active-directory/) for a full list of licensing options and features.
+* **Licensing**: Application Proxy is available through an Azure AD Premium subscription. Refer to the [Azure Active Directory Pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) for a full list of licensing options and features.
### Application Discovery
active-directory Application Proxy Integrate With Sharepoint Server Saml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-integrate-with-sharepoint-server-saml.md
This process requires two Enterprise Applications. One is a SharePoint on-premis
To complete this configuration, you need the following resources: - A SharePoint 2013 farm or newer. The SharePoint farm must be [integrated with Azure AD](../saas-apps/sharepoint-on-premises-tutorial.md).
+ - An Azure AD tenant with a plan that includes Application Proxy. Learn more about [Azure AD plans and pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
 - A [custom, verified domain](../fundamentals/add-custom-domain.md) in the Azure AD tenant. The verified domain must match the SharePoint URL suffix. - An SSL certificate is required. See the details in [custom domain publishing](./application-proxy-configure-custom-domain.md). - On-premises Active Directory users must be synchronized with Azure AD Connect, and must be configured to [sign in to Azure](../hybrid/plan-connect-user-signin.md).
active-directory Application Proxy Integrate With Sharepoint Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-integrate-with-sharepoint-server.md
This step-by-step guide explains how to integrate an on-premises SharePoint farm
To perform the configuration, you need the following resources: - A SharePoint 2013 farm or newer.-- An Azure AD tenant with a plan that includes Application Proxy. Learn more about [Azure AD plans and pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+- An Azure AD tenant with a plan that includes Application Proxy. Learn more about [Azure AD plans and pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
- A [custom, verified domain](../fundamentals/add-custom-domain.md) in the Azure AD tenant. - On-premises Active Directory synchronized with Azure AD Connect, through which users can [sign in to Azure](../hybrid/plan-connect-user-signin.md). - An Application Proxy connector installed and running on a machine within the corporate domain.
active-directory What Is Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/what-is-application-proxy.md
The remote access solution offered by Application Proxy and Azure AD support sev
* **Conditional Access**. Richer policy controls can be applied before connections to your network are established. With Conditional Access, you can define restrictions on the traffic that you allow to hit your backend application. You create policies that restrict sign-ins based on location, strength of authentication, and user risk profile. As Conditional Access evolves, more controls are being added to provide additional security such as integration with Microsoft Cloud App Security (MCAS). MCAS integration enables you to configure an on-premises application for [real-time monitoring](./application-proxy-integrate-with-microsoft-cloud-application-security.md) by leveraging Conditional Access to monitor and control sessions in real-time based on Conditional Access policies. * **Traffic termination**. All traffic to the backend application is terminated at the Application Proxy service in the cloud while the session is re-established with the backend server. This connection strategy means that your backend servers are not exposed to direct HTTP traffic. They are better protected against targeted DoS (denial-of-service) attacks because your firewall isn't under attack. * **All access is outbound**. The Application Proxy connectors only use outbound connections to the Application Proxy service in the cloud over ports 80 and 443. With no inbound connections, there's no need to open firewall ports for incoming connections or components in the DMZ. All connections are outbound and over a secure channel.
-* **Security Analytics and Machine Learning (ML) based intelligence**. Because it's part of Azure Active Directory, Application Proxy can leverage [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) (requires [Premium P2 licensing](https://azure.microsoft.com/pricing/details/active-directory/)). Azure AD Identity Protection combines machine-learning security intelligence with data feeds from Microsoft's [Digital Crimes Unit](https://news.microsoft.com/stories/cybercrime/https://docsupdatetracker.net/index.html) and [Microsoft Security Response Center](https://www.microsoft.com/msrc) to proactively identify compromised accounts. Identity Protection offers real-time protection from high-risk sign-ins. It takes into consideration factors like accesses from infected devices, through anonymizing networks, or from atypical and unlikely locations to increase the risk profile of a session. This risk profile is used for real-time protection. Many of these reports and events are already available through an API for integration with your SIEM systems.
+* **Security Analytics and Machine Learning (ML) based intelligence**. Because it's part of Azure Active Directory, Application Proxy can leverage [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) (requires [Premium P2 licensing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing)). Azure AD Identity Protection combines machine-learning security intelligence with data feeds from Microsoft's [Digital Crimes Unit](https://news.microsoft.com/stories/cybercrime/https://docsupdatetracker.net/index.html) and [Microsoft Security Response Center](https://www.microsoft.com/msrc) to proactively identify compromised accounts. Identity Protection offers real-time protection from high-risk sign-ins. It takes into consideration factors like accesses from infected devices, through anonymizing networks, or from atypical and unlikely locations to increase the risk profile of a session. This risk profile is used for real-time protection. Many of these reports and events are already available through an API for integration with your SIEM systems.
* **Remote access as a service**. You don't have to worry about maintaining and patching on-premises servers to enable remote access. Application Proxy is an internet scale service that Microsoft owns, so you always get the latest security patches and upgrades. Unpatched software still accounts for a large number of attacks. According to the Department of Homeland Security, as many as [85 percent of targeted attacks are preventable](https://www.us-cert.gov/ncas/alerts/TA15-119A). With this service model, you don't have to carry the heavy burden of managing your edge servers anymore and scramble to patch them as needed.
active-directory Concept Mfa Licensing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-mfa-licensing.md
Previously updated : 06/24/2021 Last updated : 07/13/2021
To protect user accounts in your organization, multi-factor authentication should be used. This feature is especially important for accounts that have privileged access to resources. Basic multi-factor authentication features are available to Microsoft 365 and Azure Active Directory (Azure AD) global administrators for no extra cost. If you want to upgrade the features for your admins or extend multi-factor authentication to the rest of your users, you can purchase Azure AD Multi-Factor Authentication in several ways. > [!IMPORTANT]
-> This article details the different ways that Azure AD Multi-Factor Authentication can be licensed and used. For specific details about pricing and billing, see the [Azure AD Multi-Factor Authentication pricing page](https://azure.microsoft.com/pricing/details/multi-factor-authentication/).
+> This article details the different ways that Azure AD Multi-Factor Authentication can be licensed and used. For specific details about pricing and billing, see the [Azure AD pricing page](https://www.microsoft.com/en-us/security/business/identity-access-management/azure-ad-pricing).
## Available versions of Azure AD Multi-Factor Authentication
-Azure AD Multi-Factor Authentication can be used, and licensed, in a few different ways depending on your organization's needs. You may already be entitled to use Azure AD Multi-Factor Authentication depending on the Azure AD, EMS, or Microsoft 365 license you currently have. The following table details the different ways to get Azure AD Multi-Factor Authentication and some of the features and use cases for each.
+Azure AD Multi-Factor Authentication can be used, and licensed, in a few different ways depending on your organization's needs. You may already be entitled to use Azure AD Multi-Factor Authentication depending on the Azure AD, EMS, or Microsoft 365 license you currently have. For example, the first 50,000 monthly active users in Azure AD External Identities can use MFA and other Premium P1 or P2 features for free. For more information, see [Azure Active Directory External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/).
+
+The following table details the different ways to get Azure AD Multi-Factor Authentication and some of the features and use cases for each.
| If you're a user of | Capabilities and use cases | | | |
Azure AD Multi-Factor Authentication can be used, and licensed, in a few differe
| [Azure AD Premium P1](../fundamentals/active-directory-get-started-premium.md) | You can use [Azure AD Conditional Access](../conditional-access/howto-conditional-access-policy-all-users-mfa.md) to prompt users for multi-factor authentication during certain scenarios or events to fit your business requirements. | | [Azure AD Premium P2](../fundamentals/active-directory-get-started-premium.md) | Provides the strongest security position and improved user experience. Adds [risk-based Conditional Access](../conditional-access/howto-conditional-access-policy-risk.md) to the Azure AD Premium P1 features that adapts to user's patterns and minimizes multi-factor authentication prompts. | | [All Microsoft 365 plans](https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans) | Azure AD Multi-Factor Authentication can be enabled for all users using [security defaults](../fundamentals/concept-fundamentals-security-defaults.md). Management of Azure AD Multi-Factor Authentication is through the Microsoft 365 portal. For an improved user experience, upgrade to Azure AD Premium P1 or P2 and use Conditional Access. For more information, see [secure Microsoft 365 resources with multi-factor authentication](/microsoft-365/admin/security-and-compliance/set-up-multi-factor-authentication). MFA can also be [enabled on a per-user basis](howto-mfa-userstates.md). |
-| [Azure AD free](../verifiable-credentials/how-to-create-a-free-developer-account.md) | You can use [security defaults](../fundamentals/concept-fundamentals-security-defaults.md) to prompt users for multi-factor authentication as needed. You don't have granular control of enabled users or scenarios, but it does provide that additional security step.<br /> Even when security defaults aren't used to enable multi-factor authentication for everyone, users assigned the *Azure AD Global Administrator* role can be configured to use multi-factor authentication. This feature of the free tier makes sure the critical administrator accounts are protected by multi-factor authentication. |
+| [Office 365 free](https://www.microsoft.com/microsoft-365/enterprise/compare-office-365-plans)<br>[Azure AD free](../verifiable-credentials/how-to-create-a-free-developer-account.md) | You can use [security defaults](../fundamentals/concept-fundamentals-security-defaults.md) to prompt users for multi-factor authentication as needed. You don't have granular control of enabled users or scenarios, but it does provide that additional security step.<br /> Even when security defaults aren't used to enable multi-factor authentication for everyone, users assigned the *Azure AD Global Administrator* role can be configured to use multi-factor authentication. This feature of the free tier makes sure the critical administrator accounts are protected by multi-factor authentication. |
## Feature comparison of versions The following table provides a list of the features that are available in the various versions of Azure AD Multi-Factor Authentication. Plan out your needs for securing user authentication, then determine which approach meets those requirements. For example, although Azure AD Free provides security defaults that provide Azure AD Multi-Factor Authentication, only the mobile authenticator app can be used for the authentication prompt, not a phone call or SMS. This approach may be a limitation if you can't ensure the mobile authentication app is installed on a user's personal device.
-| Feature | Azure AD Free - Security defaults | Azure AD Free - Azure AD Global Administrators | Microsoft 365 apps | Azure AD Premium P1 or P2 |
+| Feature | Azure AD Free - Security defaults | Azure AD Free - Azure AD Global Administrators | Office 365 | Azure AD Premium P1 or P2 |
| --- |:---:|:---:|:---:|:---:|
| Protect Azure AD tenant admin accounts with MFA | ● | ● (*Azure AD Global Administrator* accounts only) | ● | ● |
| Mobile app as a second factor | ● | ● | ● | ● |
The following table provides a list of the features that are available in the va
## Purchase and enable Azure AD Multi-Factor Authentication
-To use Azure AD Multi-Factor Authentication, register for or purchase an eligible Azure AD tier. Azure AD comes in four editions: Free, Microsoft 365 apps, Premium P1, and Premium P2.
+To use Azure AD Multi-Factor Authentication, register for or purchase an eligible Azure AD tier. Azure AD comes in four editions: Free, Office 365, Premium P1, and Premium P2.
The Free edition is included with an Azure subscription. See the [section below](#azure-ad-free-tier) for information on how to use security defaults or protect accounts with the *Azure AD Global Administrator* role.
After you have purchased the required Azure AD tier, [plan and deploy Azure AD M
### Azure AD Free tier
-All users in an Azure AD Free tenant can use Azure AD Multi-Factor Authentication through the use of security defaults. The mobile authentication app is the only method that can be used for Azure AD Multi-Factor Authentication when using Azure AD Free security defaults.
+All users in an Azure AD Free tenant can use Azure AD Multi-Factor Authentication by using security defaults. The mobile authentication app is the only method that can be used for Azure AD Multi-Factor Authentication when using Azure AD Free security defaults.
* [Learn more about Azure AD security defaults](../fundamentals/concept-fundamentals-security-defaults.md) * [Enable security defaults for users in Azure AD Free](../fundamentals/concept-fundamentals-security-defaults.md#enabling-security-defaults)
-If you don't want to enable Azure AD Multi-Factor Authentication for all users, you can instead choose to only protect user accounts with the *Azure AD Global Administrator* role. This approach provides additional authentication prompts for critical administrator accounts. You enable Azure AD Multi-Factor Authentication in one of the following ways, depending on the type of account you use:
+If you don't want to enable Azure AD Multi-Factor Authentication for all users, you can instead choose to only protect user accounts with the *Azure AD Global Administrator* role. This approach provides more authentication prompts for critical administrator accounts. You enable Azure AD Multi-Factor Authentication in one of the following ways, depending on the type of account you use:
* If you use a Microsoft Account, [register for multi-factor authentication](https://support.microsoft.com/help/12408/microsoft-account-about-two-step-verification). * If you aren't using a Microsoft Account, [turn on multi-factor authentication for a user or group in Azure AD](howto-mfa-userstates.md). ## Next steps
-* For more information on costs, see [Azure AD Multi-Factor Authentication pricing](https://azure.microsoft.com/pricing/details/multi-factor-authentication/).
+* For more information on costs, see [Azure AD pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
* [What is Conditional Access](../conditional-access/overview.md)
active-directory Concept Password Ban Bad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-password-ban-bad.md
Previously updated : 07/16/2020 Last updated : 07/13/2021
When a user attempts to reset a password to something that would be banned, the
> [!NOTE] > On-premises AD DS users that aren't synchronized to Azure AD also benefit from Azure AD Password Protection based on existing licensing for synchronized users.
-Additional licensing information, including costs, can be found on the [Azure Active Directory pricing site](https://azure.microsoft.com/pricing/details/active-directory/).
+Additional licensing information, including costs, can be found on the [Azure Active Directory pricing site](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
## Next steps
active-directory Concept Resilient Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-resilient-controls.md
Previously updated : 06/08/2020 Last updated : 07/13/2021
Incorporate the following access controls in your existing Conditional Access po
5. If you are protecting VPN access using Azure AD MFA NPS extension, consider federating your VPN solution as a [SAML app](../manage-apps/view-applications-portal.md) and determine the app category as recommended below. >[!NOTE]
-> Risk-based policies require [Azure AD Premium P2](https://azure.microsoft.com/pricing/details/active-directory/) licenses.
+> Risk-based policies require [Azure AD Premium P2](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) licenses.
The following example describes policies you must create to provide resilient access control for users to access their apps and resources. In this example, you will require a security group **AppUsers** with the target users you want to give access to, a group named **CoreAdmins** with the core administrators, and a group named **EmergencyAccess** with the emergency access accounts. This example policy set grants selected users in **AppUsers** access to selected apps if they are connecting from a trusted device or provide strong authentication, for example MFA. It excludes emergency accounts and core administrators.
active-directory Howto Authentication Methods Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-methods-activity.md
Previously updated : 03/16/2021 Last updated : 07/13/2021
The following roles have the required permissions:
- Security Administrator - Global Administrator
- An Azure AD Premium P1 or P2 license is required to access usage and insights. Azure AD Multi-Factor Authentication and self-service password reset (SSPR) licensing information can be found on the [Azure Active Directory pricing site](https://azure.microsoft.com/pricing/details/active-directory/).
+ An Azure AD Premium P1 or P2 license is required to access usage and insights. Azure AD Multi-Factor Authentication and self-service password reset (SSPR) licensing information can be found on the [Azure Active Directory pricing site](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
## How it works
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Azure AD lets you choose which authentication methods can be used during the sig
To enable the authentication method for passwordless phone sign-in, complete the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) with a *global administrator* account.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an *authentication policy administrator* account.
1. Search for and select *Azure Active Directory*, then browse to **Security** > **Authentication methods** > **Policies**. 1. Under **Microsoft Authenticator**, choose the following options: 1. **Enable** - Yes or No
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-sspr-deployment.md
Previously updated : 01/31/2020 Last updated : 07/13/2021
Azure Active Directory is licensed per-user meaning each user requires an approp
To compare editions and features and enable group or user-based licensing, see [Licensing requirements for Azure AD self-service password reset](./concept-sspr-licensing.md).
-For more information about pricing, see [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+For more information about pricing, see [Azure Active Directory pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
### Prerequisites
active-directory Plan Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/plan-conditional-access.md
The benefits of deploying Conditional Access are:
See [Conditional Access license requirements](overview.md).
-If additional features are required, you might also need related licenses. For more information, see [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+If additional features are required, you might also need related licenses. For more information, see [Azure Active Directory pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
### Prerequisites
active-directory Active Directory Configurable Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
For examples, read [examples of how to configure token lifetimes](configure-toke
## License requirements
-Using this feature requires an Azure AD Premium P1 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://azure.microsoft.com/pricing/details/active-directory/).
+Using this feature requires an Azure AD Premium P1 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
Customers with [Microsoft 365 Business licenses](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-business-service-description) also have access to Conditional Access features.
active-directory Developer Guide Conditional Access Authentication Context https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/developer-guide-conditional-access-authentication-context.md
The following are the prerequisites and the steps if you want to use Conditional
**First**, your app should be integrated with the Microsoft Identity Platform using the [OpenID Connect](v2-protocols-oidc.md)/[OAuth 2.0](v2-oauth2-auth-code-flow.md) protocols for authentication and authorization. We recommend you use [Microsoft identity platform authentication libraries](reference-v2-libraries.md) to integrate and secure your application with Azure Active Directory. [Microsoft identity platform documentation](index.yml) is a good place to start learning how to integrate your apps with the Microsoft Identity Platform. Conditional Access Auth Context feature support is built on top of protocol extensions provided by the industry-standard [OpenID Connect](v2-protocols-oidc.md) protocol. Developers use a [Conditional Access Auth Context reference](/graph/api/resources/authenticationcontextclassreference) **value** with the [Claims Request](claims-challenge.md) parameter to give apps a way to trigger and satisfy policy.
-**Second**, [Conditional Access](../conditional-access/overview.md) requires Azure AD Premium P1 licensing. More information about licensing can be found on the [Azure AD pricing page](https://azure.microsoft.com/pricing/details/active-directory/).
+**Second**, [Conditional Access](../conditional-access/overview.md) requires Azure AD Premium P1 licensing. More information about licensing can be found on the [Azure AD pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
**Third**, today it is only available to applications that sign in users. Applications that authenticate as themselves are not supported. Use the [Authentication flows and application scenarios guide](authentication-flows-app-scenarios.md) to learn about the supported authentication app types and flows in the Microsoft Identity Platform.
active-directory Msal Net Migration Confidential Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-migration-confidential-client.md
Title: Migrating confidential client applications to MSAL.NET
+ Title: Migrate confidential client applications to MSAL.NET
-description: Learn how to migrate a confidential client application from Azure AD Authentication Library for .NET (ADAL.NET) to Microsoft Authentication Library for .NET (MSAL.NET).
+description: Learn how to migrate a confidential client application from Azure Active Directory Authentication Library for .NET to Microsoft Authentication Library for .NET.
Last updated 06/08/2021
-#Customer intent: As an application developer, I want to migrate my confidential client app from ADAL.NET to MSAL.NET.
+#Customer intent: As an application developer, I want to migrate my confidential client app from ADAL.NET to MSAL.NET.
-# How to migrate confidential client applications from ADAL.NET to MSAL.NET
+# Migrate confidential client applications from ADAL.NET to MSAL.NET
-Confidential client applications are web apps, web APIs, and daemon applications (calling another service on their own behalf). For details see [Authentication flows and application scenarios](authentication-flows-app-scenarios.md). If your app is based on ASP.NET Core, use [Microsoft.Identity.Web](microsoft-identity-web.md)
+This article describes how to migrate a confidential client application from Azure Active Directory Authentication Library for .NET (ADAL.NET) to Microsoft Authentication Library for .NET (MSAL.NET). Confidential client applications are web apps, web APIs, and daemon applications that call another service on their own behalf. For more information about confidential applications, see [Authentication flows and application scenarios](authentication-flows-app-scenarios.md). If your app is based on ASP.NET Core, use [Microsoft.Identity.Web](microsoft-identity-web.md).
-The migration process consists of three steps:
+For app registrations:
-1. Inventory - identify the code in your apps that uses ADAL.NET.
-2. Install the MSAL.NET NuGet package.
-3. Update the code depending on your scenario.
+- You don't need to create a new app registration. (You keep the same client ID.)
+- You don't need to change the preauthorizations (admin-consented API permissions).
-For app registrations, if your application isn't dual stacked (AAD and MSA being two apps):
+## Migration steps
-- You don't need to create a new app registration (you keep the same ClientID)-- You don't need to change the pre-authorizations.
+1. Find the code by using ADAL.NET in your app.
-## Step 1 - Find the code using ADAL.NET in your app
+ The code that uses ADAL in a confidential client application instantiates `AuthenticationContext` and calls either `AcquireTokenByAuthorizationCode` or one override of `AcquireTokenAsync` with the following parameters:
-The code using ADAL in confidential client application instantiates an `AuthenticationContext` and calls either `AcquireTokenByAuthorizationCode` or one override of `AcquireTokenAsync` with the following parameters:
+ - A `resourceId` string. This variable is the app ID URI of the web API that you want to call.
+ - An instance of `IClientAssertionCertificate` or `ClientAssertion`. This instance provides the client credentials for your app to prove the identity of your app.
-- A `resourceId` string. This variable is the **App ID URI** of the web API that you want to call.-- An instance of `IClientAssertionCertificate` or `ClientAssertion` instance. This instance provides the client credentials for your app (proving the identity of your app).
+1. After you've identified that you have apps that are using ADAL.NET, install the MSAL.NET NuGet package [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) and update your project library references. For more information, see [Install a NuGet package](https://www.bing.com/search?q=install+nuget+package).
-## Step 2 - Install the MSAL.NET NuGet package
+1. Update the code according to the confidential client scenario. Some steps are common and apply across all the confidential client scenarios. Other steps are unique to each scenario.
-Once you've identified that you have apps that are using ADAL.NET, install the MSAL.NET NuGet package: [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) and update your project library references.
-For more information on how to install a NuGet package, see [install a NuGet package](https://www.bing.com/search?q=install+nuget+package).
+ The confidential client scenarios are:
-## Step 3 - Update the code
+ - [Daemon scenarios](/azure/active-directory/develop/msal-net-migration-confidential-client?tabs=daemon#migrate-daemon-scenarios) supported by web apps, web APIs, and daemon console applications.
+ - [Web API calling downstream web APIs](/azure/active-directory/develop/msal-net-migration-confidential-client?tabs=obo#migrate-on-behalf-of-calls-obo-in-web-apis) supported by web APIs calling downstream web APIs on behalf of the user.
+ - [Web app calling web APIs](/azure/active-directory/develop/msal-net-migration-confidential-client?tabs=authcode#migrate-acquiretokenbyauthorizationcodeasync-in-web-apps) supported by web apps that sign in users and call a downstream web API.
-Updating code depends on the confidential client scenario. Some steps are common and apply across all the confidential client scenarios. There are also steps that are unique to each scenario.
-
-The confidential client scenarios are as listed below:
--- [Daemon scenarios](/azure/active-directory/develop/msal-net-migration-confidential-client?tabs=daemon#migrate-daemon-scenarios) supported by web apps, web APIs, and daemon console applications.-- [Web api calling downstream web apis](/azure/active-directory/develop/msal-net-migration-confidential-client?tabs=obo#migrate-on-behalf-of-calls-obo-in-web-apis) supported by web APIs calling downstream web APIs on behalf of the user.-- [Web app calling web apis](/azure/active-directory/develop/msal-net-migration-confidential-client?tabs=authcode#migrate-acquiretokenbyauthorizationcodeasync-in-web-apps) supported by Web apps that sign in users and call a downstream web API.-
-You may have provided a wrapper around ADAL.NET to handle certificates and caching. This article uses the same approach to illustrate the migration from ADAL.NET to MSAL.NET process. However, this code is only for demonstration purposes. Don't copy/paste these wrappers or integrate them in your code as they are.
+You might have provided a wrapper around ADAL.NET to handle certificates and caching. This article uses the same approach to illustrate the process of migrating from ADAL.NET to MSAL.NET. However, this code is only for demonstration purposes. Don't copy/paste these wrappers or integrate them in your code as they are.
## [Daemon](#tab/daemon) ### Migrate daemon apps
-Daemon scenarios use the OAuth2.0 [client credential flow](v2-oauth2-client-creds-grant-flow.md). They're also called service to service calls. Your app acquires a token on its own behalf, not on behalf of a user.
+Daemon scenarios use the OAuth2.0 [client credential flow](v2-oauth2-client-creds-grant-flow.md). They're also called service-to-service calls. Your app acquires a token on its own behalf, not on behalf of a user.
-#### Find if your code uses daemon scenarios
+#### Find out if your code uses daemon scenarios
The ADAL code for your app uses daemon scenarios if it contains a call to `AuthenticationContext.AcquireTokenAsync` with the following parameters: -- A resource (App ID URI) as a first parameter.-- A `IClientAssertionCertificate` or `ClientAssertion` as the second parameter.
+- A resource (app ID URI) as a first parameter
+- `IClientAssertionCertificate` or `ClientAssertion` as the second parameter
-It doesn't have a parameter of type `UserAssertion`. If it does, then your app is a web API, and it's using [on behalf of flow](/azure/active-directory/develop/msal-net-migration-confidential-client?#migrate-on-behalf-of-calls-obo-in-web-apis) scenario.
+`AuthenticationContext.AcquireTokenAsync` doesn't have a parameter of type `UserAssertion`. If it does, then your app is a web API, and it's using the [web API calling downstream web APIs](/azure/active-directory/develop/msal-net-migration-confidential-client?#migrate-on-behalf-of-calls-obo-in-web-apis) scenario.
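As a rough illustration of the pattern to search for, a minimal ADAL.NET daemon call might look like the following sketch. The `ClientId`, `authority`, `resourceId`, and `certificate` names are reused from the abridged wrapper fragments elsewhere in this article and are assumptions, not exact code from it.

```csharp
// Hypothetical sketch of the ADAL.NET daemon pattern to look for.
// Requires the Microsoft.IdentityModel.Clients.ActiveDirectory (ADAL.NET) package.
// ClientId, authority, resourceId, and certificate are assumed values.
var authContext = new AuthenticationContext(authority);
var clientCertificate = new ClientAssertionCertificate(ClientId, certificate);
AuthenticationResult result = await authContext.AcquireTokenAsync(resourceId, clientCertificate);
```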
#### Update the code of daemon scenarios [!INCLUDE [Common steps](includes/msal-net-adoption-steps-confidential-clients.md)]
-In this case, we replace the call to `AuthenticationContext.AcquireTokenAsync` by a call to `IConfidentialClientApplication.AcquireTokenForClient`.
-
-##### Sample daemon code
+In this case, we replace the call to `AuthenticationContext.AcquireTokenAsync` with a call to `IConfidentialClientApplication.AcquireTokenForClient`.
-The following table compares the ADAL.NET and MSAL.NET code for daemon scenarios.
+Here's a comparison of ADAL.NET and MSAL.NET code for daemon scenarios:
:::row::: :::column span="":::
public partial class AuthWrapper
const string ClientId = "Guid (AppID)"; const string authority = "https://login.microsoftonline.com/{tenant}";
- // App ID Uri of web API to call
+ // App ID URI of web API to call
const string resourceId = "https://target-api.domain.com"; X509Certificate2 certificate = LoadCertificate();
public partial class AuthWrapper
const string ClientId = "Guid (Application ID)"; const string authority = "https://login.microsoftonline.com/{tenant}";
- // App ID Uri of web API to call
+ // App ID URI of web API to call
const string resourceId = "https://target-api.domain.com"; X509Certificate2 certificate = LoadCertificate();
public partial class AuthWrapper
:::column-end::: :::row-end:::
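The code fragments in the comparison above are abridged in this digest. As a hedged sketch, reusing the `ClientId`, `authority`, `resourceId`, and `certificate` names from the fragments above (assumed values), the MSAL.NET side of a daemon acquisition might look like this:

```csharp
// Minimal MSAL.NET daemon sketch. Requires the Microsoft.Identity.Client NuGet package.
// ClientId, authority, resourceId, and certificate are assumed values.
IConfidentialClientApplication app = ConfidentialClientApplicationBuilder.Create(ClientId)
    .WithCertificate(certificate)
    .WithAuthority(authority)
    .Build();

// Client credentials flow: the scope is the target API's App ID URI plus "/.default".
AuthenticationResult result = await app.AcquireTokenForClient(new[] { $"{resourceId}/.default" })
    .ExecuteAsync();
```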
-#### Token caching
+#### Benefit from token caching
To benefit from the in-memory cache, the instance of `IConfidentialClientApplication` needs to be kept in a member variable. If you re-create the confidential client application each time you request a token, you won't benefit from the token cache.
-You'll need to serialize the AppTokenCache if you choose not to use the default in-memory app token cache. Similarly, If you want to implement a distributed token cache, you'll need to serialize the AppTokenCache. For details see [token cache for a web app or web API (confidential client application)](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization#token-cache-for-a-web-app-or-web-api-confidential-client-application) and this sample [active-directory-dotnet-v1-to-v2/ConfidentialClientTokenCache](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache).
+You'll need to serialize `AppTokenCache` if you choose not to use the default in-memory app token cache. Similarly, If you want to implement a distributed token cache, you'll need to serialize `AppTokenCache`. For details, see [Token cache for a web app or web API (confidential client application)](msal-net-token-cache-serialization.md?tabs=aspnet) and the sample [active-directory-dotnet-v1-to-v2/ConfidentialClientTokenCache](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache).
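A minimal sketch of keeping a single application instance so the in-memory app token cache is reused; the `AuthWrapper`, `ClientId`, `authority`, and `certificate` names are assumptions carried over from the abridged fragments above.

```csharp
// Sketch: reuse one IConfidentialClientApplication instance so its in-memory
// app token cache is shared across token requests.
public partial class AuthWrapper
{
    private IConfidentialClientApplication _app;

    private IConfidentialClientApplication GetOrCreateApp() =>
        _app ??= ConfidentialClientApplicationBuilder.Create(ClientId)
            .WithCertificate(certificate)
            .WithAuthority(authority)
            .Build();
}
```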
-[Learn more about daemon scenario](scenario-daemon-overview.md) and how it's implemented with MSAL.NET or Microsoft.Identity.Web in new applications.
+[Learn more about the daemon scenario](scenario-daemon-overview.md) and how it's implemented with MSAL.NET or Microsoft.Identity.Web in new applications.
-## [Web api calling downstream web apis](#tab/obo)
+## [Web API calling downstream web APIs](#tab/obo)
-### Migrate web api calling downstream web apis
+### Migrate a web API that calls downstream web APIs
-Web apis calling downstream web apis use the OAuth2.0 [On-Behalf-Of](v2-oauth2-on-behalf-of-flow.md)(OBO) flow. The code of the web API will use the token retrieved from the HTTP authorized header and validate it. This token will be exchanged against a token to call the downstream web API. This token is used as a `UserAssertion` in both ADAL.NET and MSAL.NET.
+Web APIs that call downstream web APIs use the OAuth2.0 [on-behalf-of (OBO)](v2-oauth2-on-behalf-of-flow.md) flow. The code of the web API uses the token retrieved from the HTTP authorized header and validates it. This token is exchanged against a token to call the downstream web API. This token is used as a `UserAssertion` instance in both ADAL.NET and MSAL.NET.
-#### Find if your code uses OBO
+#### Find out if your code uses OBO
The ADAL code for your app uses OBO if it contains a call to `AuthenticationContext.AcquireTokenAsync` with the following parameters: -- A resource (App ID URI) as a first parameter-- A `IClientAssertionCertificate` or `ClientAssertion` as the second parameter.-- A parameter of type `UserAssertion`.
+- A resource (app ID URI) as a first parameter
+- `IClientAssertionCertificate` or `ClientAssertion` as the second parameter
+- A parameter of type `UserAssertion`
-#### Update the code using OBO
+#### Update the code by using OBO
[!INCLUDE [Common steps](includes/msal-net-adoption-steps-confidential-clients.md)]
-In this case, we replace the call to `AuthenticationContext.AcquireTokenAsync` by a call to `IConfidentialClientApplication.AcquireTokenOnBehalfOf`.
+In this case, we replace the call to `AuthenticationContext.AcquireTokenAsync` with a call to `IConfidentialClientApplication.AcquireTokenOnBehalfOf`.
+
+Here's a comparison of sample OBO code for ADAL.NET and MSAL.NET:
-##### Sample OBO code
:::row::: :::column span=""::: ADAL
public partial class AuthWrapper
:::column-end::: :::row-end:::
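The OBO comparison above is also abridged in this digest. As a hedged sketch, the MSAL.NET call might look like the following; the `incomingAccessToken` variable (the token received in the web API's Authorization header) and the downstream scope shown are assumptions.

```csharp
// Hypothetical on-behalf-of sketch: exchange the token received by the web API
// for a token to the downstream API. incomingAccessToken and the scope are assumed.
var userAssertion = new UserAssertion(incomingAccessToken);
AuthenticationResult result = await app.AcquireTokenOnBehalfOf(
        new[] { "https://target-api.domain.com/.default" },
        userAssertion)
    .ExecuteAsync();
```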
-#### Token caching
+#### Benefit from token caching
-For token caching in OBOs, you need to use a distributed token cache. For details see [token cache for a web app or web API (confidential client application)](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization#token-cache-for-a-web-app-or-web-api-confidential-client-application) and [read through sample code](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache)
+For token caching in OBOs, you need to use a distributed token cache. For details, see [Token cache for a web app or web API (confidential client application)](msal-net-token-cache-serialization.md?tabs=aspnet) and read through [sample code](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache).
```CSharp
-IMsalTokenCacheProvider msalTokenCacheProvider = CreateTokenCache(cacheImplementation)
-msalTokenCacheProvider.Initialize(app.UserTokenCache);
+app.UseInMemoryTokenCaches(); // or a distributed token cache.
```
-Refer to [code samples](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/blob/master/ConfidentialClientTokenCache/Program.cs) for an example of implementation of `CreateTokenCache`.
-
-[Learn more about web APIs calling downstream web API](scenario-web-api-call-api-overview.md) and how they're implemented with MSAL.NET or Microsoft.Identity.Web in new applications.
+[Learn more about web APIs calling downstream web APIs](scenario-web-api-call-api-overview.md) and how they're implemented with MSAL.NET or Microsoft.Identity.Web in new applications.
-## [Web app calling web apis.](#tab/authcode)
+## [Web app calling web APIs](#tab/authcode)
-### Migrate web apps calling web apis
+### Migrate a web app that calls web APIs
-If your app uses ASP.NET Core, Microsoft strongly recommends you update to Microsoft.Identity.Web which processes everything for you. See [Microsoft identity web GA](https://github.com/AzureAD/microsoft-identity-web/wiki/1.0.0) for a quick presentation, and [https://aka.ms/ms-id-web/webapp](https://aka.ms/ms-id-web/webapp) for details about how to use it in a web app.
+If your app uses ASP.NET Core, we strongly recommend that you update to Microsoft.Identity.Web, which processes everything for you. For a quick presentation, see the [Microsoft.Identity.Web announcement of general availability](https://github.com/AzureAD/microsoft-identity-web/wiki/1.0.0). For details about how to use it in a web app, see [Why use Microsoft.Identity.Web in web apps?](https://aka.ms/ms-id-web/webapp).
-Web apps that sign in users and call web APIs on behalf of the user use the OAuth2.0 [authorization code flow](v2-oauth2-auth-code-flow.md). Typically:
+Web apps that sign in users and call web APIs on behalf of users use the OAuth2.0 [authorization code flow](v2-oauth2-auth-code-flow.md). Typically:
-1. The web app signs-in a user by executing a first leg of the auth code flow. It does this by going to Azure AD's authorize endpoint. The users signs-in, and performs multiple factor authentications if needed. As an outcome of this operation, the app receives the **authorization code**. So far ADAL/MSAL aren't involved.
-2. The app will, then, execute the second leg of the authorization code flow. It uses the authorization code to get an access token, an ID Token, and a refresh token. Your application needs to provide the redirectUri, which is the URI at which Azure AD will provide the security tokens. Once received, the web app will typically call ADAL/MSAL `AcquireTokenByAuthorizationCode` to redeem the code, and get a token that will be stored in the token cache.
-3. The app will then use ADAL or MSAL to call `AcquireTokenSilent` to acquire tokens used to call the web APIs it needs to call. This is done from the web app controllers.
+1. The web app signs in a user by executing a first leg of the authorization code flow. It does this by going to the authorize endpoint in Azure Active Directory (Azure AD). The user signs in and performs multifactor authentications if needed. As an outcome of this operation, the app receives the authorization code. So far, ADAL and MSAL aren't involved.
+2. The app executes the second leg of the authorization code flow. It uses the authorization code to get an access token, an ID token, and a refresh token. Your application needs to provide the `redirectUri` value, which is the URI at which Azure AD will provide the security tokens. After the app receives the authorization code at that URI, it typically calls `AcquireTokenByAuthorizationCode` for ADAL or MSAL to redeem the code and to get a token that will be stored in the token cache.
+3. The app uses ADAL or MSAL to call `AcquireTokenSilent` so that it can get tokens for calling the necessary web APIs. This is done from the web app controllers.
-#### Find if your code uses the auth code flow
+#### Find out if your code uses the auth code flow
The ADAL code for your app uses auth code flow if it contains a call to `AuthenticationContext.AcquireTokenByAuthorizationCodeAsync`.
-#### Update the code using auth code flow
+#### Update the code by using the authorization code flow
[!INCLUDE [Common steps](includes/msal-net-adoption-steps-confidential-clients.md)]
-In this case, we replace the call to `AuthenticationContext.AcquireTokenAsync` by a call to `IConfidentialClientApplication.AcquireTokenByAuthorizationCode`.
+In this case, we replace the call to `AuthenticationContext.AcquireTokenAsync` with a call to `IConfidentialClientApplication.AcquireTokenByAuthorizationCode`.
-##### Sample auth code flow code
+Here's a comparison of sample authorization code flows for ADAL.NET and MSAL.NET:
:::row::: :::column span="":::
public partial class AuthWrapper
:::column-end::: :::row-end:::
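The comparison above is abridged in this digest. As a hedged sketch, the MSAL.NET redemption call might look like this; the `code` variable (the authorization code posted back to the app's redirect URI) and the scope shown are assumptions.

```csharp
// Hypothetical sketch: redeem the authorization code for tokens.
// "code" and the requested scope are assumed values.
AuthenticationResult result = await app.AcquireTokenByAuthorizationCode(
        new[] { "https://target-api.domain.com/.default" },
        code)
    .ExecuteAsync();
```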
-Calling `AcquireTokenByAuthorizationCode` adds a token to the token cache. To acquire extra token(s) for other resources or tenants, use `AcquireTokenSilent` in your controllers.
+Calling `AcquireTokenByAuthorizationCode` adds a token to the token cache. To acquire extra tokens for other resources or tenants, use `AcquireTokenSilent` in your controllers.
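A minimal sketch of that silent acquisition from a controller; the `accountIdentifier` lookup and the scope shown are assumptions.

```csharp
// Hypothetical sketch: get a cached (or silently refreshed) token in a controller.
// accountIdentifier and the scope are assumed values.
IAccount account = await app.GetAccountAsync(accountIdentifier);
AuthenticationResult result = await app.AcquireTokenSilent(
        new[] { "https://target-api.domain.com/.default" },
        account)
    .ExecuteAsync();
```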
-#### Token caching
+#### Benefit from token caching
-Since your web app uses `AcquireTokenByAuthorizationCode`, your app needs to use a distributed token cache for token caching. For details see [token cache for a web app or web API (confidential client application)](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization#token-cache-for-a-web-app-or-web-api-confidential-client-application) and [read through sample code](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache)
+Because your web app uses `AcquireTokenByAuthorizationCode`, your app needs to use a distributed token cache for token caching. For details, see [Token cache for a web app or web API](msal-net-token-cache-serialization.md?tabs=aspnet) and read through [sample code](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache).
```CSharp
-IMsalTokenCacheProvider msalTokenCacheProvider = CreateTokenCache(cacheImplementation)
-msalTokenCacheProvider.Initialize(app.UserTokenCache);
+app.UseInMemoryTokenCaches(); // or a distributed token cache.
```
-Refer to [code samples](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/blob/master/ConfidentialClientTokenCache/Program.cs) for an example of implementation of `CreateTokenCache`.
[Learn more about web apps calling web APIs](scenario-web-app-call-api-overview.md) and how they're implemented with MSAL.NET or Microsoft.Identity.Web in new applications.
Refer to [code samples](https://github.com/Azure-Samples/active-directory-dotnet
## MSAL benefits
-Some of the key features that come with MSAL.NET are resilience, security, performance, and scalability. These are described below.
-
-### Resilience
-
-Using MSAL.NET ensures your app is resilient. This is achieved through the following:
+Key benefits of MSAL.NET for your app include:
-- AAD Cached Credential Service(CCS) benefits. CCS operates as an AAD backup.-- Proactive renewal of tokens if the API you call enables long lived tokens through [continuous access evaluation](app-resilience-continuous-access-evaluation.md).
+- **Resilience**. MSAL.NET helps make your app resilient through the following:
-### Security
+ - Azure AD Cached Credential Service (CCS) benefits. CCS operates as an Azure AD backup.
+ - Proactive renewal of tokens if the API that you call enables long-lived tokens through [continuous access evaluation](app-resilience-continuous-access-evaluation.md).
-You can also acquire Proof of Possession (PoP) tokens if the web API that you want to call requires it. For details see [Proof Of Possession (PoP) tokens in MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Proof-Of-Possession-(PoP)-tokens)
+- **Security**. You can acquire Proof of Possession (PoP) tokens if the web API that you want to call requires it. For details, see [Proof Of Possession tokens in MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Proof-Of-Possession-(PoP)-tokens).
-### Performance and scalability
-
-If you don't need to share your cache with ADAL.NET, disable the legacy cache compatibility when creating the confidential client application (`.WithLegacyCacheCompatibility(false)`). This increases the performance significantly.
+- **Performance and scalability**. If you don't need to share your cache with ADAL.NET, disable the legacy cache compatibility when you're creating the confidential client application (`.WithLegacyCacheCompatibility(false)`). This increases the performance significantly.
-```csharp
-app = ConfidentialClientApplicationBuilder.Create(ClientId)
- .WithCertificate(certificate)
- .WithAuthority(authority)
- .WithLegacyCacheCompatibility(false)
- .Build();
-```
+ ```csharp
+ app = ConfidentialClientApplicationBuilder.Create(ClientId)
+ .WithCertificate(certificate)
+ .WithAuthority(authority)
+ .WithLegacyCacheCompatibility(false)
+ .Build();
+ ```
## Troubleshooting
-This troubleshooting guide makes two assumptions:
--- It assumes that your ADAL.NET code was working.-- It assumes that you migrated to MSAL keeping the same ClientID.
+The following troubleshooting information makes two assumptions:
-### AADSTS700027 exception
+- Your ADAL.NET code was working.
+- You migrated to MSAL by keeping the same client ID.
-If you get an exception with the following message:
+If you get an exception with either of the following messages:
> `AADSTS700027: Client assertion contains an invalid signature. [Reason - The key was not found.]`
-You can troubleshoot the exception using the steps below:
--- Confirm that you're using the latest version of MSAL.NET,-- Confirm that the authority host set when building the confidential client application and the authority host you used with ADAL are similar. In particular, is it the same [cloud](msal-national-cloud.md)? (Azure Government, Azure China 21Vianet, Azure Germany).-
-### AADSTS700030 exception
-
-If you get an exception with the following message:
- > `AADSTS90002: Tenant 'cf61953b-e41a-46b3-b500-663d279ea744' not found. This may happen if there are no active` > `subscriptions for the tenant. Check to make sure you have the correct tenant ID. Check with your subscription` > `administrator.`
-You can troubleshoot the exception using the steps below:
+You can troubleshoot the exception by using these steps:
-- Confirm that you're using the latest version of MSAL.NET,-- Confirm that the authority host set when building the confidential client application and the authority host you used with ADAL are similar. In particular, is it the same [cloud](msal-national-cloud.md)? (Azure Government, Azure China 21Vianet, Azure Germany).
+1. Confirm that you're using the latest version of MSAL.NET.
+1. Confirm that the authority host that you set when building the confidential client application and the authority host that you used with ADAL are similar. In particular, is it the same [cloud](msal-national-cloud.md) (Azure Government, Azure China 21Vianet, or Azure Germany)?
## Next steps
-Learn more about the [Differences between ADAL.NET and MSAL.NET apps](msal-net-differences-adal-net.md)
+Learn more about the [differences between ADAL.NET and MSAL.NET apps](msal-net-differences-adal-net.md).
+Learn more about [token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md).
active-directory V2 Conditional Access Dev Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-conditional-access-dev-guide.md
For developers building apps for Azure AD, this article shows how you can use Co
Knowledge of [single](quickstart-register-app.md) and [multi-tenant](howto-convert-app-to-be-multi-tenant.md) apps and [common authentication patterns](./authentication-vs-authorization.md) is assumed. > [!NOTE]
-> Using this feature requires an Azure AD Premium P1 license. To find the right license for your requirements, see [Comparing generally available features of the Free, Basic, and Premium editions](https://azure.microsoft.com/pricing/details/active-directory/).
+> Using this feature requires an Azure AD Premium P1 license. To find the right license for your requirements, see [Comparing generally available features of the Free, Basic, and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
> Customers with [Microsoft 365 Business licenses](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-business-service-description) also have access to Conditional Access features. ## How does Conditional Access impact an app?
active-directory Compare With B2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/compare-with-b2c.md
Previously updated : 03/02/2021 Last updated : 07/13/2021
The following table gives a detailed comparison of the scenarios you can enable
| **Single sign-on (SSO)** | SSO to all Azure AD-connected apps is supported. For example, you can provide access to Microsoft 365 or on-premises apps, and to other SaaS apps such as Salesforce or Workday. | SSO to customer owned apps within the Azure AD B2C tenants is supported. SSO to Microsoft 365 or to other Microsoft SaaS apps isn't supported. | | **Security policy and compliance** | Managed by the host/inviting organization (for example, with [Conditional Access policies](conditional-access.md)). | Managed by the organization via Conditional Access and Identity Protection. | | **Branding** | Host/inviting organization's brand is used. | Fully customizable branding per application or organization. |
-| **Billing model** | [External Identities pricing](https://azure.microsoft.com/en-us/pricing/details/active-directory/external-identities/) based on monthly active users (MAU). <br>(See also: [B2B setup details](external-identities-pricing.md)) | [External Identities pricing](https://azure.microsoft.com/en-us/pricing/details/active-directory/external-identities/) based on monthly active users (MAU). <br>(See also: [B2C setup details](../../active-directory-b2c/billing.md)) |
+| **Billing model** | [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) based on monthly active users (MAU). <br>(See also: [B2B setup details](external-identities-pricing.md)) | [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) based on monthly active users (MAU). <br>(See also: [B2C setup details](../../active-directory-b2c/billing.md)) |
| **More information** | [Blog post](https://blogs.technet.microsoft.com/enterprisemobility/2017/02/01/azure-ad-b2b-new-updates-make-cross-business-collab-easy/), [Documentation](what-is-b2b.md) | [Product page](https://azure.microsoft.com/services/active-directory-b2c/), [Documentation](../../active-directory-b2c/index.yml) | Secure and manage customers and partners beyond your organizational boundaries with Azure AD External Identities.
active-directory Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/conditional-access.md
For more information, see the following articles on Azure AD B2B collaboration:
- [What is Azure AD B2B collaboration?](./what-is-b2b.md) - [Identity Protection and B2B users](../identity-protection/concept-identity-protection-b2b.md)-- [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/)
+- [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/)
- [Frequently Asked Questions (FAQs)](./faq.yml)
active-directory External Identities Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/external-identities-pricing.md
Previously updated : 05/24/2021 Last updated : 07/13/2021
Azure Active Directory (Azure AD) External Identities pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This billing model applies to both Azure AD guest user collaboration (B2B) and [Azure AD B2C tenants](../../active-directory-b2c/billing.md). MAU billing helps you reduce costs by offering a free tier and flexible, predictable pricing. In this article, learn about MAU billing and linking your Azure AD tenants to a subscription. > [!IMPORTANT]
-> This article does not contain pricing details. For the latest information about usage billing and pricing, see [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+> This article does not contain pricing details. For the latest information about usage billing and pricing, see [Azure Active Directory pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
## What do I need to do?
To take advantage of MAU billing, your Azure AD tenant must be linked to an Azur
In your Azure AD tenant, guest user collaboration usage is billed based on the count of unique guest users with authentication activity within a calendar month. This model replaces the 1:5 ratio billing model, which allowed up to five guest users for each Azure AD Premium license in your tenant. When your tenant is linked to a subscription and you use External Identities features to collaborate with guest users, you'll be automatically billed using the MAU-based billing model.
-The pricing tier that applies to your guest users is based on the highest pricing tier assigned to your Azure AD tenant. For more information, see [Azure Active Directory External Identities Pricing](https://azure.microsoft.com/en-us/pricing/details/active-directory/external-identities/).
+The pricing tier that applies to your guest users is based on the highest pricing tier assigned to your Azure AD tenant. For more information, see [Azure Active Directory External Identities Pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/).
## Link your Azure AD tenant to a subscription
After you complete these steps, your Azure subscription is billed based on your
## Next steps
-For the latest pricing information, see [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+For the latest pricing information, see [Azure Active Directory pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
active-directory Use Dynamic Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/use-dynamic-groups.md
Previously updated : 02/28/2020 Last updated : 07/13/2021
## What are dynamic groups? Dynamic configuration of security group membership for Azure Active Directory (Azure AD) is available in [the Azure portal](https://portal.azure.com). Administrators can set rules to populate groups that are created in Azure AD based on user attributes (such as userType, department, or country/region). Members can be automatically added to or removed from a security group based on their attributes. These groups can provide access to applications or cloud resources (SharePoint sites, documents) and to assign licenses to members. Read more about dynamic groups in [Dedicated groups in Azure Active Directory](../fundamentals/active-directory-groups-create-azure-portal.md).
-The appropriate [Azure AD Premium P1 or P2 licensing](https://azure.microsoft.com/pricing/details/active-directory/) is required to create and use dynamic groups. Learn more in the article [Create attribute-based rules for dynamic group membership in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md).
+The appropriate [Azure AD Premium P1 or P2 licensing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) is required to create and use dynamic groups. Learn more in the article [Create attribute-based rules for dynamic group membership in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md).
## Creating an "all users" dynamic group You can create a group containing all users within a tenant using a membership rule. When users are added or removed from the tenant in the future, the group's membership is adjusted automatically.
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/what-is-b2b.md
Previously updated : 07/09/2021 Last updated : 07/13/2021
# What is guest user access in Azure Active Directory B2B?
-Azure Active Directory (Azure AD) business-to-business (B2B) collaboration is a feature within External Identities that lets you invite guest users to collaborate with your organization. With B2B collaboration, you can securely share your company's applications and services with guest users from any other organization, while maintaining control over your own corporate data. Work safely and securely with external partners, large or small, even if they don't have Azure AD or an IT department. A simple invitation and redemption process lets partners use their own credentials to access your company's resources. Developers can use Azure AD business-to-business APIs to customize the invitation process or write applications like self-service sign-up portals. For licensing and pricing information related to guest users, refer to [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+Azure Active Directory (Azure AD) business-to-business (B2B) collaboration is a feature within External Identities that lets you invite guest users to collaborate with your organization. With B2B collaboration, you can securely share your company's applications and services with guest users from any other organization, while maintaining control over your own corporate data. Work safely and securely with external partners, large or small, even if they don't have Azure AD or an IT department. A simple invitation and redemption process lets partners use their own credentials to access your company's resources. Developers can use Azure AD business-to-business APIs to customize the invitation process or write applications like self-service sign-up portals. For licensing and pricing information related to guest users, refer to [Azure Active Directory External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/).
> [!IMPORTANT] >
You can also use [API connectors](api-connectors-overview.md) to integrate your
- [External Identities pricing](external-identities-pricing.md) - [Add B2B collaboration guest users in the portal](add-users-administrator.md)-- [Understand the invitation redemption process](redemption-experience.md)
+- [Understand the invitation redemption process](redemption-experience.md)
active-directory 1 Secure Access Posture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/1-secure-access-posture.md
The goals of IT-governed and delegated access differ.
Whichever you enact for your organization and scenarios you'll need to:
-* **Control access to applications, data, and content**. This can be accomplished through a variety of methods, depending on your versions of [Azure AD](https://azure.microsoft.com/pricing/details/active-directory/) and [Microsoft 365](https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans).
+* **Control access to applications, data, and content**. This can be accomplished through a variety of methods, depending on your versions of [Azure AD](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) and [Microsoft 365](https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans).
* **Reduce the attack surface**. [Privileged identity management](../privileged-identity-management/pim-configure.md), [data loss prevention (DLP),](/exchange/security-and-compliance/data-loss-prevention/data-loss-prevention) and [encryption capabilities](/exchange/security-and-compliance/data-loss-prevention/data-loss-prevention) reduce the attack surface.
active-directory Active Directory Deployment Checklist P2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-deployment-checklist-p2.md
Many of the recommendations in this guide can be implemented with Azure AD Free
Additional information about licensing can be found on the following pages:
-* [Azure AD licensing](https://azure.microsoft.com/pricing/details/active-directory/)
+* [Azure AD licensing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing)
* [Microsoft 365 Enterprise](https://www.microsoft.com/en-us/licensing/product-licensing/microsoft-365-enterprise) * [Enterprise Mobility + Security](https://www.microsoft.com/en-us/licensing/product-licensing/enterprise-mobility-security) * [Azure AD External Identities pricing](../external-identities/external-identities-pricing.md)
Phase 4 sees administrators enforcing least privilege principles for administrat
## Next steps
-[Azure AD licensing and pricing details](https://azure.microsoft.com/pricing/details/active-directory/)
+[Azure AD licensing and pricing details](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing)
[Identity and device access configurations](/microsoft-365/enterprise/microsoft-365-policies-configurations)
active-directory Active Directory Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-deployment-plans.md
Title: Deployment plans - Azure Active Directory | Microsoft Docs
-description: End-to-end guidance about how to deploy many Azure Active Directory capabilities.
+description: Guidance about how to deploy many Azure Active Directory capabilities.
# Azure Active Directory deployment plans
-Looking for end-to-end guidance on deploying Azure Active Directory (Azure AD) capabilities? Azure AD deployment plans walk you through the business value, planning considerations, and operational procedures needed to successfully deploy common Azure AD capabilities.
+Looking for complete guidance on deploying Azure Active Directory (Azure AD) capabilities? Azure AD deployment plans walk you through the business value, planning considerations, and operational procedures needed to successfully deploy common Azure AD capabilities.
From any of the plan pages, use your browser's Print to PDF capability to create an up-to-date offline version of the documentation.
From any of the plan pages, use your browser's Print to PDF capability to create
| Capability | Description| | -| -|
-| [Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)| Azure AD Multi-Factor Authentication (MFA) is Microsoft's two-step verification solution. Using admin-approved authentication methods, Azure AD MFA helps safeguard access to your data and applications while meeting the demand for a simple sign-in process. Watch this video on [How to configure and enforce multi-factor authentication in your tenant](https://www.youtube.com/watch?v=qNndxl7gqVM)|
+| [Azure AD multifactor authentication](../authentication/howto-mfa-getstarted.md)| Azure AD Multi-Factor Authentication (MFA) is Microsoft's two-step verification solution. Using admin-approved authentication methods, Azure AD MFA helps safeguard access to your data and applications while meeting the demand for a simple sign-in process. Watch this video on [How to configure and enforce multi-factor authentication in your tenant](https://www.youtube.com/watch?v=qNndxl7gqVM)|
| [Conditional Access](../conditional-access/plan-conditional-access.md)| With Conditional Access, you can implement automated access control decisions for who can access your cloud apps, based on conditions. | | [Self-service password reset](../authentication/howto-sspr-deployment.md)| Self-service password reset helps your users reset their passwords without administrator intervention, when and where they need to. |
-| [Passwordless](../authentication/howto-authentication-passwordless-deployment.md) | Implement passwordless authentication using the the Microsoft Authenticator app or FIDO2 Security keys in your organization |
+| [Passwordless](../authentication/howto-authentication-passwordless-deployment.md) | Implement passwordless authentication using the Microsoft Authenticator app or FIDO2 Security keys in your organization |
## Deploy application and device management | Capability | Description| | -| - |
-| [Single sign-on](../manage-apps/plan-sso-deployment.md)| Single sign-on helps your users access the apps and resources they need to do business while signing in only once. After they've signed in, they can go from Microsoft Office to SalesForce to Box to internal applications without being required to enter credentials a second time. |
+| [Single sign-on](../manage-apps/plan-sso-deployment.md)| Single sign-on helps your users access the apps and resources they need to do business while signing in only once. After they've signed in, they can go from Microsoft Office to SalesForce to Box to internal applications without being required to enter credentials a second time. |
| [My Apps](../manage-apps/my-apps-deployment-plan.md)| Offer your users a simple hub to discover and access all their applications. Enable them to be more productive with self-service capabilities, like requesting access to apps and groups, or managing access to resources on behalf of others. | | [Devices](../devices/plan-device-deployment.md) | This article helps you evaluate the methods to integrate your device with Azure AD, choose the implementation plan, and provides key links to supported device management tools. |
From any of the plan pages, use your browser's Print to PDF capability to create
| Capability | Description| | -| -|
-| [ADFS to Password Hash Sync](../hybrid/plan-migrate-adfs-password-hash-sync.md)| With Password Hash Synchronization, hashes of user passwords are synchronized from on-premises Active Directory to Azure AD, letting Azure AD authenticate users with no interaction with the on-premises Active Directory |
-| [ADFS to Pass Through Authentication](../hybrid/plan-migrate-adfs-pass-through-authentication.md)| Azure AD Pass-through Authentication helps your users sign in to both on-premises and cloud-based applications using the same passwords. This feature provides users with a better experience - one less password to remember - and reduces IT helpdesk costs because users are less likely to forget how to sign in. When people sign in using Azure AD, this feature validates users' passwords directly against your on-premises Active Directory. |
+| [AD FS to cloud user authentication](../hybrid/migrate-from-federation-to-cloud-authentication.md)| Learn to migrate your user authentication from federation to cloud authentication with either pass-through authentication or password hash sync. |
| [Azure AD Application Proxy](../app-proxy/application-proxy-deployment-plan.md) |Employees today want to be productive at any place, at any time, and from any device. They need to access SaaS apps in the cloud and corporate apps on-premises. Azure AD Application proxy enables this robust access without costly and complex virtual private networks (VPNs) or demilitarized zones (DMZs). |
-| [Seamless SSO](../hybrid/how-to-connect-sso-quick-start.md)| Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO) automatically signs users in when they are on their corporate devices connected to your corporate network. With this feature, users won't need to type in their passwords to sign in to Azure AD and usually won't need to enter their usernames. This feature provides authorized users with easy access to your cloud-based applications without needing any additional on-premises components. |
+| [Seamless SSO](../hybrid/how-to-connect-sso-quick-start.md)| Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO) automatically signs users in when they are on their corporate devices connected to your corporate network. With this feature, users won't need to type in their passwords to sign in to Azure AD and usually won't need to enter their usernames. This feature provides authorized users with easy access to your cloud-based applications without needing any extra on-premises components. |
## Deploy user provisioning
Roles might include the following
|End-user|A representative group of users for which the capability will be implemented. Often previews the changes in a pilot program. |IT Support Manager|IT support organization representative who can provide input on the supportability of this change from a helpdesk perspective. |Identity Architect or Azure Global Administrator|Identity management team representative in charge of defining how this change is aligned with the core identity management infrastructure in your organization.|
-|Application Business Owner |The overall business owner of the affected application(s), which may include managing access.  May also provide input on the user experience and usefulness of this change from an end-user's perspective.
-|Security Owner|A representative from the security team that can sign off that the plan will meet the security requirements of your organization.|
+|Application Business Owner |The overall business owner of the affected application(s), which may include managing access. May also provide input on the user experience and usefulness of this change from an end user's perspective.
+|Security Owner|A representative from the security team that can sign off that the plan will meet the security requirements of your organization.|
|Compliance Manager|The person within your organization responsible for ensuring compliance with corporate, industry, or governmental requirements.| **Levels of involvement might include:**
Roles might include the following
- **I**nformed of project plan and outcome - ## Best practices for a pilot
-A pilot allows you to test with a small group before turning a capability on for everyone. Ensure that as part of your testing, each use case within your organization is thoroughly tested. It's best to target a specific group of pilot users before rolling this out to your organization as a whole.
+A pilot allows you to test with a small group before turning on a capability for everyone. Ensure that as part of your testing, each use case within your organization is thoroughly tested. It's best to target a specific group of pilot users before rolling this deployment out to your organization as a whole.
-In your first wave, target IT, usability, and other appropriate users who can test and provide feedback. This feedback should be used to further develop the communications and instructions you send to your users, and to give insights into the types of issues your support staff may see.
+In your first wave, target IT, usability, and other appropriate users who can test and provide feedback. Use this feedback to further develop the communications and instructions you send to your users, and to give insights into the types of issues your support staff may see.
Widening the rollout to larger groups of users should be carried out by increasing the scope of the group(s) targeted. This can be done through [dynamic group membership](../enterprise-users/groups-dynamic-membership.md), or by manually adding users to the targeted group(s).
active-directory Active Directory Ops Guide Auth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-ops-guide-auth.md
Managing Azure Active Directory requires the continuous execution of key operati
| Triage and investigate users flagged for risk and vulnerability reports from Azure AD Identity Protection | InfoSec Operations Team | > [!NOTE]
-> Azure AD Identity Protection requires an Azure AD Premium P2 license. To find the right license for your requirements, see [Comparing generally available features of the Azure AD Free and Azure AD Premium editions](https://azure.microsoft.com/pricing/details/active-directory/).
+> Azure AD Identity Protection requires an Azure AD Premium P2 license. To find the right license for your requirements, see [Comparing generally available features of the Azure AD Free and Azure AD Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
As you review your list, you may find you need to either assign an owner for tasks that are missing an owner or adjust ownership for tasks with owners that aren't aligned with the recommendations above.
active-directory Active Directory Whatis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-whatis.md
Microsoft Online business services, such as Microsoft 365 or Microsoft Azure, re
To enhance your Azure AD implementation, you can also add paid capabilities by upgrading to Azure Active Directory Premium P1 or Premium P2 licenses. Azure AD paid licenses are built on top of your existing free directory, providing self-service, enhanced monitoring, security reporting, and secure access for your mobile users. >[!Note]
->For the pricing options of these licenses, see [Azure Active Directory Pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+>For the pricing options of these licenses, see [Azure Active Directory Pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
> >Azure Active Directory Premium P1 and Premium P2 are not currently supported in China. For more information about Azure AD pricing, contact the [Azure Active Directory Forum](https://azure.microsoft.com/support/community/?product=active-directory).
active-directory Azure Active Directory B2c Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/azure-active-directory-b2c-deployment-plans.md
To help organizations understand the business requirements and respect complianc
## Plan an Azure AD B2C deployment
-This phase includes the following capabilities.
+This phase includes the following capabilities:
| Capability | Description | |:|:|
Define clear expectations and follow up plans to meet key milestones:
## Implement an Azure AD B2C deployment
-This phase includes the following capabilities.
+This phase includes the following capabilities:
| Capability | Description | |:-|:--|
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
If your organization is a previous user of per-user based Azure AD Multi-Factor
### Conditional Access
-You can use Conditional Access to configure policies similar to security defaults, but with more granularity including user exclusions, which are not available in security defaults. If you're using Conditional Access and have Conditional Access policies enabled in your environment, security defaults won't be available to you. If you have a license that provides Conditional Access but don't have any Conditional Access policies enabled in your environment, you are welcome to use security defaults until you enable Conditional Access policies. More information about Azure AD licensing can be found on the [Azure AD pricing page](https://azure.microsoft.com/pricing/details/active-directory/).
+You can use Conditional Access to configure policies similar to security defaults, but with more granularity including user exclusions, which are not available in security defaults. If you're using Conditional Access and have Conditional Access policies enabled in your environment, security defaults won't be available to you. If you have a license that provides Conditional Access but don't have any Conditional Access policies enabled in your environment, you are welcome to use security defaults until you enable Conditional Access policies. More information about Azure AD licensing can be found on the [Azure AD pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
![Warning message that you can have security defaults or Conditional Access not both](./media/concept-fundamentals-security-defaults/security-defaults-conditional-access.png)
active-directory Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/customize-branding.md
Your custom branding won't immediately appear when your users go to sites such a
- **Language.** The language is automatically set as your default and can't be changed.
- - **Sign-in page background image.** Select a .png or .jpg image file to appear as the background for your sign-in pages. The image will be anchored to the center of the browser, and will scale to the size of the viewable space. You can't select an image larger than 1920x1080 pixels in size or that has a file size more than 300 KB.
+ - **Sign-in page background image.** Select a .png or .jpg image file to appear as the background for your sign-in pages. The image will be anchored to the center of the browser, and will scale to the size of the viewable space. You can't select an image larger than 1920x1080 pixels in size or that has a file size more than 300,000 bytes.
It's recommended to use images without a strong subject focus, e.g., an opaque white box appears in the center of the screen, and could cover any part of the image depending on the dimensions of the viewable space.
active-directory License Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/license-users-groups.md
There are several license plans available for the Azure AD service, including:
- Azure AD Premium P2
-For specific information about each license plan and the associated licensing details, see [What license do I need?](https://azure.microsoft.com/pricing/details/active-directory/). To sign up for Azure AD premium license plans see [here](./active-directory-get-started-premium.md).
+For specific information about each license plan and the associated licensing details, see [What license do I need?](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). To sign up for Azure AD premium license plans see [here](./active-directory-get-started-premium.md).
Not all Microsoft services are available in all locations. Before a license can be assigned to a group, you must specify the **Usage location** for all members. You can set this value in the **Azure Active Directory &gt; Users &gt; Profile &gt; Settings** area in Azure AD. Any user whose usage location is not specified inherits the location of the Azure AD organization.
active-directory Protect M365 From On Premises Attacks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/protect-m365-from-on-premises-attacks.md
authentication decisions. For more information, see the
* If you're using a version of Azure AD that doesn't include Conditional Access, ensure that you're using the [Azure AD security defaults](../fundamentals/concept-fundamentals-security-defaults.md).
- For more information about Azure AD feature licensing, see the [Azure AD pricing guide](https://azure.microsoft.com/pricing/details/active-directory/).
+ For more information about Azure AD feature licensing, see the [Azure AD pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
## Monitor
active-directory Deploy Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/deploy-access-reviews.md
The following videos may be useful as you learn about Access Reviews:
You need a valid Azure AD Premium (P2) license for each person, other than Global Administrators or User Administrators, who will create or perform Access Reviews. For more information, see [Access Reviews license requirements](access-reviews-overview.md).
-You may also need other Identity Governance features, such as [Entitlement Lifecycle Management](entitlement-management-overview.md) or Privileged Identity Managements. In that case, you might also need related licenses. For more information, see [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+You may also need other Identity Governance features, such as [Entitlement Lifecycle Management](entitlement-management-overview.md) or Privileged Identity Management. In that case, you might also need related licenses. For more information, see [Azure Active Directory pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
## Plan the Access Reviews deployment project
Go to [Use Azure AD access reviews to manage users excluded from Conditional Acc
### Review guest users' group memberships
-Go to [Manage guest access with Azure AD access reviews](./manage-guest-access-with-access-reviews.md) to learn how to review guest users' access to group memeberships.
+Go to [Manage guest access with Azure AD access reviews](./manage-guest-access-with-access-reviews.md) to learn how to review guest users' access to group memberships.
### Review access to on-premises groups
Learn about the below related technologies.
* [What is Azure AD Entitlement Management?](entitlement-management-overview.md)
-* [What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md)
+* [What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md)
active-directory Choose Ad Authn https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/choose-ad-authn.md
Details on decision questions:
5. Azure AD Identity Protection requires Password Hash Sync regardless of which sign-in method you choose, to provide the *Users with leaked credentials* report. Organizations can fail over to Password Hash Sync if their primary sign-in method fails and it was configured before the failure event. > [!NOTE]
-> Azure AD Identity Protection require [Azure AD Premium P2](https://azure.microsoft.com/pricing/details/active-directory/) licenses.
+> Azure AD Identity Protection requires [Azure AD Premium P2](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) licenses.
## Detailed considerations
Details on decision questions:
Organizations that require multi-factor authentication with password hash synchronization must use Azure AD Multi-Factor Authentication or [Conditional Access custom controls](../../active-directory/conditional-access/controls.md#custom-controls-preview). Those organizations can't use third-party or on-premises multifactor authentication methods that rely on federation. > [!NOTE]
-> Azure AD Conditional Access require [Azure AD Premium P1](https://azure.microsoft.com/pricing/details/active-directory/) licenses.
+> Azure AD Conditional Access requires [Azure AD Premium P1](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) licenses.
* **Business continuity**. Using password hash synchronization with cloud authentication is highly available as a cloud service that scales to all Microsoft datacenters. To make sure password hash synchronization does not go down for extended periods, deploy a second Azure AD Connect server in staging mode in a standby configuration.
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
+
+ Title: Migrate from federation to cloud authentication in Azure Active Directory
+description: Learn how to move your hybrid identity environment from federation to cloud authentication.
+ Last updated : 07/08/2021
+# Migrate from federation to cloud authentication
+
+In this article, you learn how to deploy cloud user authentication with either Azure Active Directory [Password hash synchronization (PHS)](whatis-phs.md) or [Pass-through authentication (PTA)](how-to-connect-pta.md). While we present the use case for moving from [Active Directory Federation Services (AD FS)](whatis-fed.md) to cloud authentication methods, the guidance also applies substantially to other on-premises systems.
+
+Before you continue, we suggest that you review our guide on [choosing the right authentication method](choose-ad-authn.md) and compare methods most suitable for your organization.
+
+We recommend using PHS for cloud authentication.
+
+## Staged rollout
+
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE3inQJ]
+
+Staged rollout is a great way to selectively test groups of users with cloud authentication capabilities like Azure AD Multi-Factor Authentication (MFA), Conditional Access, Identity Protection for leaked credentials, Identity Governance, and others, before cutting over your domains.
+
+Refer to the staged rollout implementation plan to understand the [supported](how-to-connect-staged-rollout.md#supported-scenarios) and [unsupported scenarios](how-to-connect-staged-rollout.md#unsupported-scenarios). We recommend using staged rollout to test before cutting over domains.
+
+To learn how to configure staged rollout, see the [interactive guide for migrating to cloud authentication by using staged rollout in Azure AD](https://mslearn.cloudguides.com/guides/Test%20migration%20to%20cloud%20authentication%20using%20staged%20rollout%20in%20Azure%20AD).
+
+## Migration process flow
+
+![Process flow for migrating to cloud auth](media/deploy-cloud-user-authentication/process-flow-migration.png)
+
+## Prerequisites
+
+Before you begin your migration, ensure that you meet these prerequisites:
+
+### Required roles
+
+For staged rollout, you need to be a global administrator on your tenant.
+
+To enable seamless SSO on a specific Windows Active Directory Forest, you need to be a domain administrator.
+
+### Step up Azure AD Connect server
+
+Install [Azure Active Directory Connect](https://www.microsoft.com/download/details.aspx?id=47594) (Azure AD Connect) or [upgrade to the latest version](how-to-upgrade-previous-version.md). Stepping up the Azure AD Connect server reduces the time to migrate from AD FS to cloud authentication methods from potentially hours to minutes.
+
+### Document current federation settings
+
+To find your current federation settings, run the [Get-MsolDomainFederationSettings](/windows-server/identity/ad-fs/operations/ad-fs-prompt-login) cmdlet.
+
+Verify any settings that might have been customized for your federation design and deployment documentation. Specifically, look for customizations in **PreferredAuthenticationProtocol**, **SupportsMfa**, and **PromptLoginBehavior**.
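+
+The following PowerShell sketch shows one way to capture those values for your records. It assumes the MSOnline module is installed and uses `contoso.com` and the output path as placeholders.
+
+```powershell
+# Sign in with the MSOnline module and snapshot the current federation settings
+Connect-MsolService
+Get-MsolDomainFederationSettings -DomainName contoso.com |
+    Select-Object PreferredAuthenticationProtocol, SupportsMfa, PromptLoginBehavior |
+    Export-Clixml "C:\temp\contoso-federation-settings.xml"   # keep a copy for rollback planning
+```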
+
+### Back up federation settings
+
+Although this deployment changes no other relying parties in your AD FS farm, you can back up your settings:
+
+ - Use Microsoft [AD FS Rapid Restore Tool](/windows-server/identity/ad-fs/operations/ad-fs-rapid-restore-tool) to restore an existing farm or create a new farm.
+
+- Export the Microsoft 365 Identity Platform relying party trust and any associated custom claim rules you added using the following PowerShell example:
+
+ ```powershell
+
+ (Get-AdfsRelyingPartyTrust -Name "Microsoft Office 365 Identity Platform") | Export-CliXML "C:\temp\O365-RelyingPartyTrust.xml"
+
+ ```
+
+## Plan the project
+
+When technology projects fail, it's typically because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md#include-the-right-stakeholders) and that stakeholder roles in the project are well understood.
+
+### Plan communications
+
+After migrating to cloud authentication, the user sign-in experience for accessing Microsoft 365 and other resources that are authenticated through Azure AD changes. Users who are outside the network see only the Azure AD sign-in page.
+
+Proactively communicate with your users how their experience will change, when it will change, and how to gain support if they experience issues.
+
+### Plan the maintenance window
+
+After the domain conversion, Azure AD might continue to send some legacy authentication requests from Exchange Online to your AD FS servers for up to four hours. The delay is because the Exchange Online cache for [legacy application authentication](../fundamentals/concept-fundamentals-block-legacy-authentication.md) can take up to four hours to become aware of the cutover from federation to cloud authentication.
+
+During this four-hour window, users might be prompted for credentials repeatedly when they reauthenticate to applications that use legacy authentication. Although the user can still successfully authenticate against AD FS, Azure AD no longer accepts the user's issued token because that federation trust is now removed.
+
+Existing legacy clients (Exchange ActiveSync, Outlook 2010/2013) aren't affected because Exchange Online keeps a cache of their credentials for a set period of time. The cache is used to silently reauthenticate the user, so the user doesn't have to return to AD FS. Credentials stored on the device for these clients are used to silently reauthenticate after the cache is cleared. Users aren't expected to receive any password prompts as a result of the domain conversion process.
+
+Modern authentication clients (Office 2016 and Office 2013, iOS, and Android apps) use a valid refresh token to obtain new access tokens for continued access to resources instead of returning to AD FS. These clients are immune to any password prompts resulting from the domain conversion process. The clients will continue to function without extra configuration.
+
+### Plan for rollback
+
+> [!TIP]
+> Consider planning cutover of domains during off-business hours in case of rollback requirements.
+
+To plan for rollback, use the [documented current federation settings](#document-current-federation-settings) and check the [federation design and deployment documentation](/windows-server/identity/ad-fs/deployment/windows-server-2012-r2-ad-fs-deployment-guide).
+
+The rollback process should include converting managed domains to federated domains by using the [Convert-MSOLDomainToFederated](/powershell/module/msonline/convert-msoldomaintofederated) cmdlet and, if necessary, configuring extra claims rules.
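+
+As a hedged sketch only (the domain name and AD FS server name are placeholders, the AD FS farm is assumed to still be in place, and the commands are assumed to run from a machine with the MSOnline module and connectivity to the AD FS server), the conversion back to federation could look like this:
+
+```powershell
+# Point the MSOnline cmdlets at the AD FS server, then convert the domain back to federated
+Set-MsolADFSContext -Computer adfs01.contoso.com
+Convert-MsolDomainToFederated -DomainName contoso.com -SupportMultipleDomain
+```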
+
+## Migration considerations
+
+Here are key migration considerations.
+
+### Plan for customizations settings
+
+The onload.js file cannot be duplicated in Azure AD. If your AD FS instance is heavily customized and relies on specific customization settings in the onload.js file, verify if Azure AD can meet your current customization requirements and plan accordingly. Communicate these upcoming changes to your users.
+
+#### Sign-in experience
+
+You cannot customize the Azure AD sign-in experience. No matter how your users signed in earlier, they need a fully qualified name, such as a user principal name (UPN) or email address, to sign in to Azure AD.
+
+#### Organization branding
+
+You can [customize the Azure AD sign-in page](../fundamentals/customize-branding.md). Expect some visual changes from AD FS on sign-in pages after the conversion.
+
+>[!NOTE]
+>Organization branding is not available in free Azure AD licenses unless you have a Microsoft 365 license.
+
+### Plan for conditional access policies
+
+Evaluate whether you're currently using Conditional Access for authentication, or whether you use access control policies in AD FS.
+
+Consider replacing AD FS access control policies with the equivalent Azure AD [Conditional Access policies](../conditional-access/overview.md) and [Exchange Online Client Access Rules](/exchange/clients-and-mobile-in-exchange-online/client-access-rules/client-access-rules). You can use either Azure AD or on-premises groups for conditional access.
+
+**Disable Legacy Authentication** - Due to the increased risk associated with legacy authentication protocols, create a [Conditional Access policy to block legacy authentication](../conditional-access/howto-conditional-access-policy-block-legacy.md).
+
+### Plan support for MFA
+
+Each federated domain in Azure AD has a SupportsMFA flag.
+
+**If the SupportsMFA flag is set to True**, Azure AD redirects users to perform MFA on AD FS or other federation providers. For example, if a user is accessing an application for which a Conditional Access policy that requires MFA has been configured, the user is redirected to AD FS. Adding Azure AD MFA as an authentication method in AD FS enables Azure AD MFA to be invoked once your configurations are complete.
+
+**If the SupportsMFA flag is set to False**, you're likely not using Azure MFA; you're probably using claims rules on AD FS relying parties to trigger MFA.
+
+You can check the status of your **SupportsMFA** flag with the following Windows PowerShell cmdlet:
+```powershell
+ Get-MsolDomainFederationSettings -DomainName yourdomain.com
+ ```
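+
+If your migration plan calls for Azure AD, rather than the federated provider, to handle MFA before you cut over, a hypothetical sketch of updating the flag could look like the following (verify the timing against your own MFA design first; `yourdomain.com` is a placeholder):
+
+```powershell
+# Stop redirecting MFA requests to the federated identity provider for this domain
+Set-MsolDomainFederationSettings -DomainName yourdomain.com -SupportsMfa $false
+```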
+
+>[!NOTE]
+>Microsoft MFA Server is nearing the end of support life, and if you're using it you must move to Azure AD MFA.
+For more information, see **[Migrate from Microsoft MFA Server to Azure Multi-factor Authentication documentation](../authentication/how-to-migrate-mfa-server-to-azure-mfa.md)**.
+>If you plan to use Azure AD MFA, we recommend that you use **[combined registration for self-service password reset (SSPR) and Multi-Factor Authentication](../authentication/concept-registration-mfa-sspr-combined.md)** to have your users register their authentication methods once.
+
+## Plan for implementation
+
+This section includes pre-work to complete before you switch your sign-in method and convert the domains.
+
+### Create necessary groups for staged rollout
+
+*If you're not using staged rollout, skip this step.*
+
+Create groups for staged rollout. You will also need to create groups for conditional access policies if you decide to add them.
+
+We recommend you use a group mastered in Azure AD, also known as a cloud-only group. You can use Azure AD security groups or Microsoft 365 Groups for both moving users to MFA and for conditional access policies. For more information, see [creating an Azure AD security group](../fundamentals/active-directory-groups-create-azure-portal.md), and this [overview of Microsoft 365 Groups for administrators](/microsoft-365/admin/create-groups/office-365-groups).
+
+The members in a group are automatically enabled for staged rollout. Nested and dynamic groups are not supported for staged rollout.
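+
+As an illustrative sketch (the group name and mail nickname here are just examples), you could create a cloud-only security group for the pilot with the AzureAD PowerShell module:
+
+```powershell
+# Create a cloud-only security group whose members will be enabled for staged rollout
+New-AzureADGroup -DisplayName "Staged Rollout - Cloud Auth Pilot" `
+    -MailEnabled $false -SecurityEnabled $true -MailNickName "StagedRolloutCloudAuthPilot"
+```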
+
+### Pre-work for SSO
+
+The version of SSO that you use is dependent on your device OS and join state.
+
+- **For Windows 10, Windows Server 2016 and later versions**, we recommend using SSO via [Primary Refresh Token (PRT)](../devices/concept-primary-refresh-token.md) with [Azure AD joined devices](../devices/concept-azure-ad-join.md), [hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md) and [Azure AD registered devices](../devices/concept-azure-ad-register.md).
+
+- **For Windows 7 and 8.1 devices**, we recommend using [seamless SSO](how-to-connect-sso.md) with domain-joined devices to register the computer in Azure AD. You don't have to sync these accounts like you do for Windows 10 devices. However, you must complete this [pre-work for seamless SSO using PowerShell](how-to-connect-staged-rollout.md#pre-work-for-seamless-sso).
+
+### Pre-work for PHS and PTA
+
+Depending on the choice of sign in method, complete the [pre-work for PHS](how-to-connect-staged-rollout.md#pre-work-for-password-hash-sync) or [for PTA](how-to-connect-staged-rollout.md#pre-work-for-pass-through-authentication).
+
+## Implement your solution
+
+Finally, you switch the sign-in method to PHS or PTA as planned, and you convert the domains from federation to cloud authentication.
+
+### Using staged rollout
+
+If you're using staged rollout, follow the steps in the links below:
+
+1. [Enable staged rollout of a specific feature on your tenant.](how-to-connect-staged-rollout.md#enable-staged-rollout)
+
+2. Once testing is complete, [convert domains from federated to managed](#convert-domains-from-federated-to-managed).
+
+### Without using staged rollout
+
+You have two options for enabling this change:
+
+- **Option A:** Switch using Azure AD Connect.
+
+ *Available if you initially configured your AD FS or ping-federated environment by using Azure AD Connect*.
+
+- **Option B:** Switch using Azure AD Connect and PowerShell
+
+ *Available if you didn't initially configure your federated domains by using Azure AD Connect or if you're using third-party federation services*.
+
+To choose one of these options, you must know what your current settings are.
+
+#### Verify current Azure AD Connect settings
+
+Sign in to the [Azure AD portal](https://aad.portal.azure.com/), select **Azure AD Connect**, and verify the **USER SIGN-IN** settings as shown in this diagram:
+
+![Verify current Azure AD Connect settings](media/deploy-cloud-user-authentication/current-user-settings-on-azure-ad-portal.png)
++
+**To verify how federation was configured:**
+
+1. On your Azure AD Connect server, open **Azure AD Connect** and select **Configure**.
+
+2. Under **Additional Tasks > Manage Federation**, select **View federation configuration**.
+
+ ![View manage federation](media/deploy-cloud-user-authentication/manage-federation.png)
+
+ If the AD FS configuration appears in this section, you can safely assume that AD FS was originally configured by using Azure AD Connect. See the following image as an example:
+
+ ![View AD FS configuration](media/deploy-cloud-user-authentication/federation-configuration.png)
+
+ If AD FS isn't listed in the current settings, you must manually convert your domains from federated identity to managed identity by using PowerShell.
+
+#### Option A
+
+**Switch from federation to the new sign-in method by using Azure AD Connect**
+
+1. On your Azure AD Connect server, open **Azure AD Connect** and select **Configure**.
+
+2. On the **Additional tasks** page, select **Change user sign-in**, and then select **Next**.
+
+ ![View Additional tasks](media/deploy-cloud-user-authentication/additional-tasks.png)
+
+3. On the **Connect to Azure AD** page, enter your Global Administrator account credentials.
+
+4. On the **User sign-in** page:
+
+ - If you select the **Pass-through authentication** option button, check **Enable single sign-on**, and then select **Next**.
+
+ - If you select the **Password hash synchronization** option button, make sure to select the **Do not convert user accounts** check box. The option is deprecated. Check **Enable single sign-on**, and then select **Next**.
+
+ ![Check enable single sign-on on User sign-in page](media/deploy-cloud-user-authentication/user-sign-in.png)
+
+5. On the **Enable single sign-on** page, enter the credentials of a Domain Administrator account, and then select **Next**.
+
+ ![Enable single sign-on page](media/deploy-cloud-user-authentication/enable-single-sign-on.png)
+
+ Domain Administrator account credentials are required to enable seamless SSO. The process completes the following actions, which require these elevated permissions:
+ - A computer account named AZUREADSSO (which represents Azure AD) is created in your on-premises Active Directory instance.
+ - The computer account's Kerberos decryption key is securely shared with Azure AD.
+ - Two Kerberos service principal names (SPNs) are created to represent two URLs that are used during Azure AD sign-in.
+
+ The domain administrator credentials are not stored in Azure AD Connect or Azure AD, and they're discarded when the process successfully finishes. They are used only to turn on this feature.
+
+6. On the **Ready to configure** page, make sure that the **Start the synchronization process when configuration completes** check box is selected. Then, select **Configure**.
+
+ ![Ready to configure page](media/deploy-cloud-user-authentication/ready-to-configure.png)
+
+ > [!IMPORTANT]
+ > At this point, all your federated domains will change to managed authentication. Your selected User sign in method is the new method of authentication.
+
+1. In the Azure AD portal, select **Azure Active Directory**, and then select **Azure AD Connect**.
+
+2. Verify these settings:
+
+ - **Federation** is set to **Disabled**.
+ - **Seamless single sign-on** is set to **Enabled**.
+ - **Password Hash Sync** is set to **Enabled**.
+
+ ![ Reverify current user settings](media/deploy-cloud-user-authentication/reverify-settings.png)
+
+3. If you're switching to PTA, follow the next steps.
+
+##### Deploy more authentication agents for PTA
+
+>[!NOTE]
+> PTA requires deploying lightweight agents on the Azure AD Connect server and on your on-premises computer that's running Windows Server. To reduce latency, install the agents as close as possible to your Active Directory domain controllers.
+
+For most customers, two or three authentication agents are sufficient to provide high availability and the required capacity. A tenant can have a maximum of 12 agents registered. The first agent is always installed on the Azure AD Connect server itself. To learn about agent limitations and agent deployment options, see [Azure AD pass-through authentication: Current limitations](how-to-connect-pta-current-limitations.md).
+
+1. Select **Pass-through authentication**.
+2. On the **Pass-through authentication** page, select the **Download** button.
+3. On the **Download agent** page, select **Accept terms and download**.
+
+   Additional authentication agents start to download. Install the secondary authentication agent on a domain-joined server.
+
+4. Run the authentication agent installation. During installation, you must enter the credentials of a Global Administrator account.
+
+ ![ Microsoft Azure AD Connect Authentication Agent](media/deploy-cloud-user-authentication/install-azure-ad-connect-installation-agent.png)
+
+5. When the authentication agent is installed, you can return to the PTA health page to check the status of the additional agents.
+
+#### Option B
+
+**Switch from federation to the new sign-in method by using Azure AD Connect and PowerShell**
+
+*Available if you didn't initially configure your federated domains by using Azure AD Connect or if you're using third-party federation services.*
+
+On your Azure AD Connect server, follow steps 1-5 in [Option A](#option-a). Notice that on the **User sign-in** page, the **Do not configure** option is preselected.
+
+![ See Do not Configure option on the user sign-in page](media/deploy-cloud-user-authentication/do-not-configure-on-user-sign-in-page.png)
+
+1. In the Azure AD portal, select **Azure Active Directory**, and then select **Azure AD Connect**.
+
+2. Verify these settings:
+
+ - **Federation** is set to **Enabled**.
+ - **Seamless single sign-on** is set to **Disabled**.
+ - **Password Hash Sync** is set to **Enabled**.
+
+ ![ Verify current user settings on the Azure portal](media/deploy-cloud-user-authentication/verify-current-user-settings-on-azure-ad-portal.png)
+
+**If you're using PTA only**, follow these steps to install additional PTA agent servers.
+
+1. In the Azure AD portal, select **Azure Active Directory**, and then select **Azure AD Connect**.
+
+2. Select **Pass-through authentication**. Verify that the status is **Active**.
+
+ ![ Pass-through authentication settings](media/deploy-cloud-user-authentication/pass-through-authentication-settings.png)
+
+   If the authentication agent isn't active, complete these [troubleshooting steps](tshoot-connect-pass-through-authentication.md) before you continue with the domain conversion process in the next step. You risk causing an authentication outage if you convert your domains before you validate that your PTA agents are successfully installed and that their status is **Active** in the Azure portal.
+
+3. [Deploy more authentication agents](#deploy-more-authentication-agents-for-pta).
+
+### Convert domains from federated to managed
+
+**At this point, federated authentication is still active and operational for your domains**. To continue with the deployment, you must convert each domain from federated identity to managed identity.
+
+>[!IMPORTANT]
+> You don't have to convert all domains at the same time. You might choose to start with a test domain on your production tenant or start with your domain that has the lowest number of users.
+
+**Complete the conversion by using the Azure AD PowerShell module:**
+
+1. In PowerShell, sign in to Azure AD by using a Global Administrator account. (A consolidated sketch of steps 1 through 4 appears after this list.)
+
+2. To convert the first domain, run the following command:
+ ```powershell
+ Set-MsolDomainAuthentication -Authentication Managed -DomainName <domain name>
+ ```
+   For reference, see [Set-MsolDomainAuthentication](/powershell/module/msonline/set-msoldomainauthentication).
+
+3. In the Azure AD portal, select **Azure Active Directory > Azure AD Connect**.
+
+4. Verify that the domain has been converted to managed by running the following command:
+ ```powershell
+ Get-MsolDomain -DomainName <domain name>
+ ```
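+
+The individual steps above can also be combined into one short script. The following is a minimal sketch, assuming the MSOnline PowerShell module is installed and using `contoso.com` as a placeholder for your own domain name:
+
+```powershell
+# Sign in to Azure AD with a Global Administrator account.
+Connect-MsolService
+
+# Convert the domain from federated to managed authentication.
+Set-MsolDomainAuthentication -Authentication Managed -DomainName contoso.com
+
+# Verify that the Authentication property now reports Managed.
+Get-MsolDomain -DomainName contoso.com | Select-Object Name, Authentication
+```
+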
+## Complete your migration
+
+Complete the following tasks to verify the sign-in method and to finish the conversion process.
+
+### Test the new sign-in method
+
+When your tenant used federated identity, users were redirected from the Azure AD sign-in page to your AD FS environment. Now that the tenant is configured to use the new sign-in method instead of federated authentication, users aren't redirected to AD FS.
+
+**Instead, users sign in directly on the Azure AD sign-in page.**
+
+Where required, follow the steps in [Validate sign-in with PHS/PTA and seamless SSO](how-to-connect-staged-rollout.md#validation).
+
+### Remove a user from staged rollout
+
+If you used staged rollout, you should remember to turn off the staged rollout features once you have finished cutting over.
+
+**To disable the staged rollout feature, slide the control back to Off.**
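+
+If you prefer PowerShell, staged rollout is managed through feature rollout policies. The following is a minimal sketch, assuming the AzureADPreview module is installed; the policy ID shown is a placeholder that you look up with the first two commands:
+
+```powershell
+# Sign in with a Global Administrator account.
+Connect-AzureAD
+
+# List the staged rollout (feature rollout) policies and note the Id values.
+Get-AzureADMSFeatureRolloutPolicy
+
+# Turn off a staged rollout policy after cutover is complete (the Id is a placeholder).
+Set-AzureADMSFeatureRolloutPolicy -Id '11111111-1111-1111-1111-111111111111' -IsEnabled $false
+```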
+
+### Sync UserPrincipalName updates
+
+Historically, updates to the **UserPrincipalName** attribute that come through the sync service from the on-premises environment are blocked unless both of these conditions are true:
+
+ - The user is in a managed (non-federated) identity domain.
+ - The user hasn't been assigned a license.
+
+To learn how to verify or turn on this feature, see [Sync userPrincipalName updates](how-to-connect-syncservice-features.md).
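+
+As a quick check, you can inspect and enable the feature with the MSOnline module. A minimal sketch, assuming the module is installed (note that, per the linked article, the feature can't be turned off again once it's enabled):
+
+```powershell
+# Sign in with a Global Administrator account.
+Connect-MsolService
+
+# Check whether the feature is already enabled.
+Get-MsolDirSyncFeatures -Feature SynchronizeUpnForManagedUsers
+
+# Allow UPN updates to flow to managed (non-federated), licensed users.
+Set-MsolDirSyncFeature -Feature SynchronizeUpnForManagedUsers -Enable $true
+```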
+
+## Manage your implementation
+
+### Roll over the seamless SSO Kerberos decryption key
+
+We recommend that you roll over the Kerberos decryption key at least every 30 days to align with the way that Active Directory domain members submit password changes. There is no associated device attached to the AZUREADSSO computer account object, so you must perform the rollover manually.
+
+See FAQ [How do I roll over the Kerberos decryption key of the AZUREADSSO computer account?](how-to-connect-sso.md).
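+
+You perform the rollover on the Azure AD Connect server with the AzureADSSO PowerShell module that ships with Azure AD Connect. The following is a minimal sketch; the module path assumes a default installation, and the full procedure is described in the FAQ linked above:
+
+```powershell
+# Run on the server where Azure AD Connect is installed (default install path assumed).
+Import-Module 'C:\Program Files\Microsoft Azure Active Directory Connect\AzureADSSO.psd1'
+
+# Sign in with a Global Administrator account when prompted.
+New-AzureADSSOAuthenticationContext
+
+# Supply Domain Administrator credentials (DOMAIN\sAMAccountName format) and roll the key.
+$domainCred = Get-Credential
+Update-AzureADSSOForest -OnPremCredentials $domainCred
+```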
+
+### Monitoring and logging
+
+Monitor the servers that run the authentication agents to maintain the solution availability. In addition to general server performance counters, the authentication agents expose performance objects that can help you understand authentication statistics and errors.
+
+Authentication agents log operations to the Windows event logs that are located under **Applications and Services Logs**. You can also turn on logging for troubleshooting.
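+
+To skim these logs from PowerShell, something like the following can serve as a starting point. The exact log name can vary between agent versions, so this sketch discovers it with a wildcard rather than assuming it:
+
+```powershell
+# Find the event logs registered by the pass-through authentication agent.
+$agentLogs = Get-WinEvent -ListLog '*AzureADConnect*AuthenticationAgent*' -ErrorAction SilentlyContinue
+
+# Show the 20 most recent events from each matching log.
+foreach ($log in $agentLogs) {
+    Get-WinEvent -LogName $log.LogName -MaxEvents 20 |
+        Select-Object TimeCreated, LevelDisplayName, Id, Message
+}
+```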
+
+To confirm the various actions performed on staged rollout, you can review the [audit events for PHS, PTA, or seamless SSO](how-to-connect-staged-rollout.md#auditing).
+
+### Troubleshoot
+
+Your support team should understand how to troubleshoot any authentication issues that arise either during or after the change from federation to managed authentication. Use the following troubleshooting documentation to help your support team become familiar with the common troubleshooting steps and the actions that can help isolate and resolve issues.
+
+- [Azure AD PHS](tshoot-connect-password-hash-synchronization.md)
+- [Azure AD PTA](tshoot-connect-pass-through-authentication.md)
+- [Azure AD seamless SSO](tshoot-connect-sso.md)
+
+## Decommission AD FS infrastructure
+
+### Migrate app authentication from AD FS to Azure AD
+
+Migration requires assessing how the application is configured on-premises, and then mapping that configuration to Azure AD.
+
+If you plan to keep using AD FS with on-premises and SaaS applications that use the SAML, WS-Fed, or OAuth protocols, you'll use both AD FS and Azure AD after you convert the domains for user authentication. In this case, you can protect your on-premises applications and resources with Secure Hybrid Access (SHA) through [Azure AD Application Proxy](../manage-apps/what-is-application-proxy.md) or one of [Azure AD partner integrations](../manage-apps/secure-hybrid-access.md). Using Application Proxy or one of our partners can provide secure remote access to your on-premises applications. Users benefit by easily connecting to their applications from any device after a [single sign-on](../manage-apps/add-application-portal-setup-sso.md).
+
+You can move SaaS applications that are currently federated with AD FS to Azure AD. Reconfigure them to authenticate with Azure AD either via a built-in connector from the [Azure App gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps), or by [registering the application in Azure AD](../develop/quickstart-register-app.md).
+
+For more information, see:
+
+- [Moving application authentication from Active Directory Federation Services to Azure Active Directory](/manage-apps/migrate-adfs-apps-to-azure)
+- [AD FS to Azure AD application migration playbook for developers](/samples/azure-samples/ms-identity-dotnet-adfs-to-aad)
+
+### Remove relying party trust
+
+If you have Azure AD Connect Health, you can [monitor usage](how-to-connect-health-adfs.md) from the Azure portal. If the usage shows no new authentication requests and you've validated that all users and clients are successfully authenticating via Azure AD, it's safe to remove the Microsoft 365 relying party trust.
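+
+When you're ready, you can remove the trust on the primary AD FS server with the AD FS PowerShell module. A minimal sketch, assuming the default Microsoft 365 relying party trust name (confirm the exact name in your farm first):
+
+```powershell
+# List relying party trusts and confirm the exact name used in your farm.
+Get-AdfsRelyingPartyTrust | Select-Object Name, Enabled
+
+# Keep a backup of the trust and its claim rules before removing it.
+Get-AdfsRelyingPartyTrust -Name 'Microsoft Office 365 Identity Platform' |
+    Export-Clixml 'C:\temp\O365-RelyingPartyTrust.xml'
+
+# Remove the Microsoft 365 relying party trust.
+Remove-AdfsRelyingPartyTrust -TargetName 'Microsoft Office 365 Identity Platform'
+```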
+
+If you don't use AD FS for other purposes (that is, for other relying party trusts), you can decommission AD FS at this point.
+
+## Next steps
+
+- [Learn about migrating applications](../manage-apps/migration-resources.md)
+- [Deploy other identity features](../fundamentals/active-directory-deployment-plans.md)
active-directory Plan Connect Design Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/plan-connect-design-concepts.md
# Azure AD Connect: Design concepts
-The purpose of this document is to describe areas that must be thought through during the implementation design of Azure AD Connect. This document is a deep dive on certain areas and these concepts are briefly described in other documents as well.
+The purpose of this document is to describe areas that must be considered while configuring Azure AD Connect. This document is a deep dive on certain areas and these concepts are briefly described in other documents as well.
## sourceAnchor The sourceAnchor attribute is defined as *an attribute immutable during the lifetime of an object*. It uniquely identifies an object as being the same object on-premises and in Azure AD. The attribute is also called **immutableId** and the two names are used interchangeably.
Read [Add your custom domain name to Azure Active Directory](../fundamentals/add
Azure AD Connect detects if you are running in a non-routable domain environment and appropriately warns you against going ahead with express settings. If you are operating in a non-routable domain, then it is likely that the UPNs of the users have non-routable suffixes too. For example, if you are running under contoso.local, Azure AD Connect suggests that you use custom settings rather than express settings. Using custom settings, you are able to specify the attribute that should be used as the UPN to sign in to Azure after the users are synced to Azure AD. ## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Plan Migrate Adfs Pass Through Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/plan-migrate-adfs-pass-through-authentication.md
- Title: 'Azure AD Connect: Migrate from federation to PTA for Azure AD'
-description: This article has information about moving your hybrid identity environment from federation to pass-through authentication.
------- Previously updated : 05/29/2020-----
-# Migrate from federation to pass-through authentication for Azure Active Directory
-
-This article describes how to move your organization domains from Active Directory Federation Services (AD FS) to pass-through authentication.
-
-> [!NOTE]
-> Changing your authentication method requires planning, testing, and potentially downtime. [Staged rollout](how-to-connect-staged-rollout.md) provides an alternative way to test and gradually migrate from federation to cloud authentication using pass-through authentication.
->
-> If you plan on using staged rollout, you should remember to turn off the staged rollout features once you have finished cutting over. For more information see [Migrate to cloud authentication using staged rollout](how-to-connect-staged-rollout.md)
--
-## Prerequisites for migrating to pass-through authentication
-
-The following prerequisites are required to migrate from using AD FS to using pass-through authentication.
-
-### Update Azure AD Connect
-
-To successfully complete the steps it takes to migrate to using pass-through authentication, you must have [Azure Active Directory Connect](https://www.microsoft.com/download/details.aspx?id=47594) (Azure AD Connect) 1.1.819.0 or a later version. In Azure AD Connect 1.1.819.0, the way sign-in conversion is performed changes significantly. The overall time to migrate from AD FS to cloud authentication in this version is reduced from potentially hours to minutes.
-
-> [!IMPORTANT]
-> You might read in outdated documentation, tools, and blogs that user conversion is required when you convert domains from federated identity to managed identity. *Converting users* is no longer required. Microsoft is working to update documentation and tools to reflect this change.
-
-To update Azure AD Connect, complete the steps in [Azure AD Connect: Upgrade to the latest version](./how-to-upgrade-previous-version.md).
-
-### Plan authentication agent number and placement
-
-Pass-through authentication requires deploying lightweight agents on the Azure AD Connect server and on your on-premises computer that's running Windows Server. To reduce latency, install the agents as close as possible to your Active Directory domain controllers.
-
-For most customers, two or three authentication agents are sufficient to provide high availability and the required capacity. A tenant can have a maximum of 12 agents registered. The first agent is always installed on the Azure AD Connect server itself. To learn about agent limitations and agent deployment options, see [Azure AD pass-through authentication: Current limitations](./how-to-connect-pta-current-limitations.md).
-
-### Plan the migration method
-
-You can choose from two methods to migrate from federated identity management to pass-through authentication and seamless single sign-on (SSO). The method you use depends on how your AD FS instance was originally configured.
-
-* **Azure AD Connect**. If you originally configured AD FS by using Azure AD Connect, you *must* change to pass-through authentication by using the Azure AD Connect wizard.
-
-  Azure AD Connect automatically runs the **Set-MsolDomainAuthentication** cmdlet when you change the user sign-in method. Azure AD Connect automatically unfederates all the verified federated domains in your Azure AD tenant.
-
- > [!NOTE]
- > Currently, if you originally used Azure AD Connect to configure AD FS, you can't avoid unfederating all domains in your tenant when you change the user sign-in to pass-through authentication.
-
-* **Azure AD Connect with PowerShell**. You can use this method only if you didn't originally configure AD FS by using Azure AD Connect. For this option, you still must change the user sign-in method via the Azure AD Connect wizard. The core difference with this option is that the wizard doesn't automatically run the **Set-MsolDomainAuthentication** cmdlet. With this option, you have full control over which domains are converted and in which order.
-
-To understand which method you should use, complete the steps in the following sections.
-
-#### Verify current user sign-in settings
-
-1. Sign in to the [Azure AD portal](https://aad.portal.azure.com/) by using a Global Administrator account.
-2. In the **User sign-in** section, verify the following settings:
- * **Federation** is set to **Enabled**.
- * **Seamless single sign-on** is set to **Disabled**.
- * **Pass-through authentication** is set to **Disabled**.
-
- ![Screenshot of the settings in the Azure AD Connect User sign-in section](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image1.png)
-
-#### Verify how federation was configured
-
-1. On your Azure AD Connect server, open Azure AD Connect. Select **Configure**.
-2. On the **Additional tasks** page, select **View current configuration**, and then select **Next**.<br />
-
- ![Screenshot of the View current configuration option on the Additional tasks page](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image2.png)<br />
-3. Under **Additional Tasks > Manage Federation**, scroll to **Active Directory Federation Services (AD FS)**.<br />
-
- * If the AD FS configuration appears in this section, you can safely assume that AD FS was originally configured by using Azure AD Connect. You can convert your domains from federated identity to managed identity by using the Azure AD Connect **Change user sign-in** option. For more information about the process, see the section **Option A: Configure pass-through authentication by using Azure AD Connect**.
- * If AD FS isn't listed in the current settings, you must manually convert your domains from federated identity to managed identity by using PowerShell. For more information about this process, see the section **Option B: Switch from federation to pass-through authentication by using Azure AD Connect and PowerShell**.
-
-### Document current federation settings
-
-To find your current federation settings, run the **Get-MsolDomainFederationSettings** cmdlet:
-
-``` PowerShell
-Get-MsolDomainFederationSettings -DomainName YourDomain.extension | fl *
-```
-
-Example:
-
-``` PowerShell
-Get-MsolDomainFederationSettings -DomainName Contoso.com | fl *
-```
-
-Verify any settings that might have been customized for your federation design and deployment documentation. Specifically, look for customizations in **PreferredAuthenticationProtocol**, **SupportsMfa**, and **PromptLoginBehavior**.
-
-For more information, see these articles:
-
-* [AD FS prompt=login parameter support](/windows-server/identity/ad-fs/operations/ad-fs-prompt-login)
-* [Set-MsolDomainAuthentication](/powershell/module/msonline/set-msoldomainauthentication)
-
-> [!NOTE]
-> If **SupportsMfa** is set to **True**, you're using an on-premises multi-factor authentication solution to inject a second-factor challenge into the user authentication flow. This setup no longer works for Azure AD authentication scenarios.
->
-> Instead, use the Azure AD Multi-Factor Authentication cloud-based service to perform the same function. Carefully evaluate your multi-factor authentication requirements before you continue. Before you convert your domains, make sure that you understand how to use Azure AD Multi-Factor Authentication, the licensing implications, and the user registration process.
-
-#### Back up federation settings
-
-Although no changes are made to other relying parties in your AD FS farm during the processes described in this article, we recommend that you have a current valid backup of your AD FS farm that you can restore from. You can create a current valid backup by using the free Microsoft [AD FS Rapid Restore Tool](/windows-server/identity/ad-fs/operations/ad-fs-rapid-restore-tool). You can use the tool to back up AD FS, and to restore an existing farm or create a new farm.
-
-If you choose not to use the AD FS Rapid Restore Tool, at a minimum, you should export the Microsoft 365 Identity Platform relying party trust and any associated custom claim rules you added. You can export the relying party trust and associated claim rules by using the following PowerShell example:
-
-``` PowerShell
-(Get-AdfsRelyingPartyTrust -Name "Microsoft Office 365 Identity Platform") | Export-CliXML "C:\temp\O365-RelyingPartyTrust.xml"
-```
-
-## Deployment considerations and using AD FS
-
-This section describes deployment considerations and details about using AD FS.
-
-### Current AD FS use
-
-Before you convert from federated identity to managed identity, look closely at how you currently use AD FS for Azure AD, Microsoft 365, and other applications (relying party trusts). Specifically, consider the scenarios that are described in the following table:
-
-| If | Then |
-|-|-|
-| You plan to keep using AD FS with other applications (other than Azure AD and Microsoft 365). | After you convert your domains, you'll use both AD FS and Azure AD. Consider the user experience. In some scenarios, users might be required to authenticate twice: once to Azure AD (where a user gets SSO access to other applications, like Microsoft 365), and again for any applications that are still bound to AD FS as a relying party trust. |
-| Your AD FS instance is heavily customized and relies on specific customization settings in the onload.js file (for example, if you changed the sign-in experience so that users use only a **SamAccountName** format for their username instead of a User Principal Name (UPN), or your organization has heavily branded the sign-in experience). The onload.js file can't be duplicated in Azure AD. | Before you continue, you must verify that Azure AD can meet your current customization requirements. For more information and for guidance, see the sections on AD FS branding and AD FS customization.|
-| You use AD FS to block earlier versions of authentication clients.| Consider replacing AD FS controls that block earlier versions of authentication clients by using a combination of [Conditional Access controls](../conditional-access/concept-conditional-access-conditions.md) and [Exchange Online Client Access Rules](/exchange/clients-and-mobile-in-exchange-online/client-access-rules/client-access-rules). |
-| You require users to perform multi-factor authentication against an on-premises multi-factor authentication server solution when users authenticate to AD FS.| In a managed identity domain, you can't inject a multi-factor authentication challenge via the on-premises multi-factor authentication solution into the authentication flow. However, you can use the Azure AD Multi-Factor Authentication service for multi-factor authentication after the domain is converted.<br /><br /> If your users don't currently use Azure AD Multi-Factor Authentication, a onetime user registration step is required. You must prepare for and communicate the planned registration to your users. |
-| You currently use access control policies (AuthZ rules) in AD FS to control access to Microsoft 365.| Consider replacing the policies with the equivalent Azure AD [Conditional Access policies](../conditional-access/overview.md) and [Exchange Online Client Access Rules](/exchange/clients-and-mobile-in-exchange-online/client-access-rules/client-access-rules).|
-
-### Common AD FS customizations
-
-This section describes common AD FS customizations.
-
-#### InsideCorporateNetwork claim
-
-AD FS issues the **InsideCorporateNetwork** claim if the user who is authenticating is inside the corporate network. This claim can then be passed on to Azure AD. The claim is used to bypass multi-factor authentication based on the user's network location. To learn how to determine whether this functionality currently is available in AD FS, see [Trusted IPs for federated users](../authentication/howto-mfa-adfs.md).
-
-The **InsideCorporateNetwork** claim isn't available after your domains are converted to pass-through authentication. You can use [named locations in Azure AD](../conditional-access/location-condition.md) to replace this functionality.
-
-After you configure named locations, you must update all Conditional Access policies that were configured to either include or exclude the network **All trusted locations** or **MFA Trusted IPs** values to reflect the new named locations.
-
-For more information about the **Location** condition in Conditional Access, see [Active Directory Conditional Access locations](../conditional-access/location-condition.md).
-
-#### Hybrid Azure AD-joined devices
-
-When you join a device to Azure AD, you can create Conditional Access rules that enforce that devices meet your access standards for security and compliance. Also, users can sign in to a device by using an organizational work or school account instead of a personal account. When you use hybrid Azure AD-joined devices, you can join your Active Directory domain-joined devices to Azure AD. Your federated environment might have been set up to use this feature.
-
-To ensure that hybrid join continues to work for any devices that are joined to the domain after your domains are converted to pass-through authentication, for Windows 10 clients, you must use Azure AD Connect to sync Active Directory computer accounts to Azure AD.
-
-For Windows 8 and Windows 7 computer accounts, hybrid join uses seamless SSO to register the computer in Azure AD. You don't have to sync Windows 8 and Windows 7 computer accounts like you do for Windows 10 devices. However, you must deploy an updated workplacejoin.exe file (via an .msi file) to Windows 8 and Windows 7 clients so they can register themselves by using seamless SSO. [Download the .msi file](https://www.microsoft.com/download/details.aspx?id=53554).
-
-For more information, see [Configure hybrid Azure AD-joined devices](../devices/hybrid-azuread-join-plan.md).
-
-#### Branding
-
-If your organization [customized your AD FS sign-in pages](/windows-server/identity/ad-fs/operations/ad-fs-user-sign-in-customization) to display information that's more pertinent to the organization, consider making similar [customizations to the Azure AD sign-in page](../fundamentals/customize-branding.md).
-
-Although similar customizations are available, some visual changes on sign-in pages should be expected after the conversion. You might want to provide information about expected changes in your communications to users.
-
-> [!NOTE]
-> Organization branding is available only if you purchase the Premium or Basic license for Azure Active Directory or if you have a Microsoft 365 license.
-
-## Plan for smart lockout
-
-Azure AD smart lockout protects against brute-force password attacks. Smart lockout prevents an on-premises Active Directory account from being locked out when pass-through authentication is being used and an account lockout group policy is set in Active Directory.
-
-For more information, see [Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md).
-
-## Plan deployment and support
-
-Complete the tasks that are described in this section to help you plan for deployment and support.
-
-### Plan the maintenance window
-
-Although the domain conversion process is relatively quick, Azure AD might continue to send some authentication requests to your AD FS servers for up to four hours after the domain conversion is finished. During this four-hour window, and depending on various service side caches, Azure AD might not accept these authentications. Users might receive an error. The user can still successfully authenticate against AD FS, but Azure AD no longer accepts the user's issued token because that federation trust is now removed.
-
-Only users who access the services via a web browser during this post-conversion window before the service side cache is cleared are affected. Legacy clients (Exchange ActiveSync, Outlook 2010/2013) aren't expected to be affected because Exchange Online keeps a cache of their credentials for a set period of time. The cache is used to silently reauthenticate the user. The user doesn't have to return to AD FS. Credentials stored on the device for these clients are used to silently reauthenticate themselves after this cache is cleared. Users aren't expected to receive any password prompts as a result of the domain conversion process.
-
-Modern authentication clients (Office 2016 and Office 2013, iOS, and Android apps) use a valid refresh token to obtain new access tokens for continued access to resources instead of returning to AD FS. These clients are immune to any password prompts resulting from the domain conversion process. The clients will continue to function without additional configuration.
-
-> [!IMPORTANT]
-> Don't shut down your AD FS environment or remove the Microsoft 365 relying party trust until you have verified that all users can successfully authenticate by using cloud authentication.
-
-### Plan for rollback
-
-If you encounter a major issue that you can't resolve quickly, you might decide to roll back the solution to federation. It's important to plan what to do if your deployment doesn't roll out as intended. If conversion of the domain or users fails during deployment, or if you need to roll back to federation, you must understand how to mitigate any outage and reduce the effect on your users.
-
-#### To roll back
-
-To plan for rollback, check the federation design and deployment documentation for your specific deployment details. The process should include these tasks:
-
-* Converting managed domains to federated domains by using the **Convert-MSOLDomainToFederated** cmdlet.
-* If necessary, configuring additional claims rules.
-
-### Plan communications
-
-An important part of planning deployment and support is ensuring that your users are proactively informed about upcoming changes. Users should know in advance what they might experience and what is required of them.
-
-After both pass-through authentication and seamless SSO are deployed, the user sign-in experience for accessing Microsoft 365 and other resources that are authenticated through Azure AD changes. Users who are outside the network see only the Azure AD sign-in page. These users aren't redirected to the forms-based page that's presented by external-facing web application proxy servers.
-
-Include the following elements in your communication strategy:
-
-* Notify users about upcoming and released functionality by using:
- * Email and other internal communication channels.
- * Visuals, such as posters.
- * Executive, live, or other communications.
-* Determine who will customize the communications and who will send the communications, and when.
-
-## Implement your solution
-
-You planned your solution. Now, you can now implement it. Implementation involves the following components:
-
-* Preparing for seamless SSO.
-* Changing the sign-in method to pass-through authentication and enabling seamless SSO.
-
-### Step 1: Prepare for seamless SSO
-
-For your devices to use seamless SSO, you must add an Azure AD URL to users' intranet zone settings by using a group policy in Active Directory.
-
-By default, web browsers automatically calculate the correct zone, either internet or intranet, from a URL. For example, **http:\/\/contoso/** maps to the intranet zone and **http:\/\/intranet.contoso.com** maps to the internet zone (because the URL contains a period). Browsers send Kerberos tickets to a cloud endpoint, like the Azure AD URL, only if you explicitly add the URL to the browser's intranet zone.
-
-Complete the steps to [roll out](./how-to-connect-sso-quick-start.md) the required changes to your devices.
-
-> [!IMPORTANT]
-> Making this change doesn't modify the way your users sign in to Azure AD. However, it's important that you apply this configuration to all your devices before you proceed. Users who sign in on devices that haven't received this configuration simply are required to enter a username and password to sign in to Azure AD.
-
-### Step 2: Change the sign-in method to pass-through authentication and enable seamless SSO
-
-You have two options for changing the sign-in method to pass-through authentication and enabling seamless SSO.
-
-#### Option A: Configure pass-through authentication by using Azure AD Connect
-
-Use this method if you initially configured your AD FS environment by using Azure AD Connect. You can't use this method if you *didn't* originally configure your AD FS environment by using Azure AD Connect.
-
-> [!IMPORTANT]
-> After you complete the following steps, all your domains are converted from federated identity to managed identity. For more information, review [Plan the migration method](#plan-the-migration-method).
-
-First, change the sign-in method:
-
-1. On the Azure AD Connect server, open the Azure AD Connect wizard.
-2. Select **Change user sign-in**, and then select **Next**.
-3. On the **Connect to Azure AD** page, enter the username and password of a Global Administrator account.
-4. On the **User sign-in** page, select the **Pass-through authentication** button, select **Enable single sign-on**, and then select **Next**.
-5. On the **Enable single sign-on** page, enter the credentials of a Domain Administrator account, and then select **Next**.
-
- > [!NOTE]
- > Domain Administrator account credentials are required to enable seamless SSO. The process completes the following actions, which require these elevated permissions. The Domain Administrator account credentials aren't stored in Azure AD Connect or in Azure AD. The Domain Administrator account credentials are used only to turn on the feature. The credentials are discarded when the process successfully finishes.
- >
- > 1. A computer account named AZUREADSSOACC (which represents Azure AD) is created in your on-premises Active Directory instance.
- > 2. The computer account's Kerberos decryption key is securely shared with Azure AD.
- > 3. Two Kerberos service principal names (SPNs) are created to represent two URLs that are used during Azure AD sign-in.
-
-6. On the **Ready to configure** page, make sure that the **Start the synchronization process when configuration completes** check box is selected. Then, select **Configure**.<br />
-
- ![Screenshot of the Ready to configure page](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image8.png)<br />
-7. In the Azure AD portal, select **Azure Active Directory**, and then select **Azure AD Connect**.
-8. Verify these settings:
- * **Federation** is set to **Disabled**.
- * **Seamless single sign-on** is set to **Enabled**.
- * **Pass-through authentication** is set to **Enabled**.<br />
-
- ![Screenshot that shows the settings in the User sign-in section](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image9.png)<br />
-
-Next, deploy additional authentication agents:
-
-1. In the Azure portal, go to **Azure Active Directory** > **Azure AD Connect**, and then select **Pass-through authentication**.
-2. On the **Pass-through authentication** page, select the **Download** button.
-3. On the **Download agent** page, select **Accept terms and download**.
-
- Additional authentication agents start to download. Install the secondary authentication agent on a domain-joined server.
-
- > [!NOTE]
- > The first agent is always installed on the Azure AD Connect server itself as part of the configuration changes made in the **User sign-in** section of the Azure AD Connect tool. Install any additional authentication agents on a separate server. We recommend that you have two or three additional authentication agents available.
-
-4. Run the authentication agent installation. During installation, you must enter the credentials of a Global Administrator account.
-
- ![Screenshot that shows the Install button you use to run the Microsoft Azure AD Connect Authentication Agent Package.](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image11.png)
-
- ![Screenshot that shows the Microsoft sign-in page.](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image12.png)
-
-5. When the authentication agent is installed, you can return to the pass-through authentication agent health page to check the status of the additional agents.
-
-Skip to [Testing and next steps](#testing-and-next-steps).
-
-> [!IMPORTANT]
-> Skip the section **Option B: Switch from federation to pass-through authentication by using Azure AD Connect and PowerShell**. The steps in that section don't apply if you chose Option A to change the sign-in method to pass-through authentication and enable seamless SSO.
-
-#### Option B: Switch from federation to pass-through authentication by using Azure AD Connect and PowerShell
-
-Use this option if you didn't initially configure your federated domains by using Azure AD Connect.
-
-First, enable pass-through authentication:
-
-1. On the Azure AD Connect Server, open the Azure AD Connect wizard.
-2. Select **Change user sign-in**, and then select **Next**.
-3. On the **Connect to Azure AD** page, enter the username and password of a Global Administrator account.
-4. On the **User sign-in** page, select the **Pass-through authentication** button. Select **Enable single sign-on**, and then select **Next**.
-5. On the **Enable single sign-on** page, enter the credentials of a Domain Administrator account, and then select **Next**.
-
- > [!NOTE]
- > Domain Administrator account credentials are required to enable seamless SSO. The process completes the following actions, which require these elevated permissions. The Domain Administrator account credentials aren't stored in Azure AD Connect or in Azure AD. The Domain Administrator account credentials are used only to turn on the feature. The credentials are discarded when the process successfully finishes.
- >
- > 1. A computer account named AZUREADSSOACC (which represents Azure AD) is created in your on-premises Active Directory instance.
- > 2. The computer account's Kerberos decryption key is securely shared with Azure AD.
- > 3. Two Kerberos service principal names (SPNs) are created to represent two URLs that are used during Azure AD sign-in.
-
-6. On the **Ready to configure** page, make sure that the **Start the synchronization process when configuration completes** check box is selected. Then, select **Configure**.<br />
-
-    ![Screenshot that shows the Ready to configure page and the Configure button](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image18.png)<br />
- The following steps occur when you select **Configure**:
-
- 1. The first pass-through authentication agent is installed.
- 2. The pass-through feature is enabled.
- 3. Seamless SSO is enabled.
-
-7. Verify these settings:
- * **Federation** is set to **Enabled**.
- * **Seamless single sign-on** is set to **Enabled**.
- * **Pass-through authentication** is set to **Enabled**.
-
- ![Screenshot that shows the settings to verify in the User sign-in section.](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image19.png)
-8. Select **Pass-through authentication** and verify that the status is **Active**.<br />
-
-   If the authentication agent isn't active, complete some [troubleshooting steps](./tshoot-connect-pass-through-authentication.md) before you continue with the domain conversion process in the next step. You risk causing an authentication outage if you convert your domains before you validate that your pass-through authentication agents are successfully installed and that their status is **Active** in the Azure portal.
-
-Next, deploy additional authentication agents:
-
-1. In the Azure portal, go to **Azure Active Directory** > **Azure AD Connect**, and then select **Pass-through authentication**.
-2. On the **Pass-through authentication** page, select the **Download** button.
-3. On the **Download agent** page, select **Accept terms and download**.
-
- The authentication agent starts to download. Install the secondary authentication agent on a domain-joined server.
-
- > [!NOTE]
- > The first agent is always installed on the Azure AD Connect server itself as part of the configuration changes made in the **User sign-in** section of the Azure AD Connect tool. Install any additional authentication agents on a separate server. We recommend that you have two or three additional authentication agents available.
-
-4. Run the authentication agent installation. During the installation, you must enter the credentials of a Global Administrator account.<br />
-
- ![Screenshot that shows the Install button on the Microsoft Azure AD Connect Authentication Agent Package page](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image23.png)<br />
- ![Screenshot that shows the sign-in page](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image24.png)<br />
-5. After the authentication agent is installed, you can return to the pass-through authentication agent health page to check the status of the additional agents.
-
-At this point, federated authentication is still active and operational for your domains. To continue with the deployment, you must convert each domain from federated identity to managed identity so that pass-through authentication starts serving authentication requests for the domain.
-
-You don't have to convert all domains at the same time. You might choose to start with a test domain on your production tenant or start with your domain that has the lowest number of users.
-
-Complete the conversion by using the Azure AD PowerShell module:
-
-1. In PowerShell, sign in to Azure AD by using a Global Administrator account.
-2. To convert the first domain, run the following command:
-
- ``` PowerShell
- Set-MsolDomainAuthentication -Authentication Managed -DomainName <domain name>
- ```
-
-3. In the Azure AD portal, select **Azure Active Directory** > **Azure AD Connect**.
-4. After you convert all your federated domains, verify these settings:
- * **Federation** is set to **Disabled**.
- * **Seamless single sign-on** is set to **Enabled**.
- * **Pass-through authentication** is set to **Enabled**.<br />
-
- ![Screenshot that shows the settings in the User sign-in section in the Azure AD portal.](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image26.png)<br />
-
-## Testing and next steps
-
-Complete the following tasks to verify pass-through authentication and to finish the conversion process.
-
-### Test pass-through authentication
-
-When your tenant used federated identity, users were redirected from the Azure AD sign-in page to your AD FS environment. Now that the tenant is configured to use pass-through authentication instead of federated authentication, users aren't redirected to AD FS. Instead, users sign in directly on the Azure AD sign-in page.
-
-To test pass-through authentication:
-
-1. Open Internet Explorer in InPrivate mode so that seamless SSO doesn't sign you in automatically.
-2. Go to the Office 365 sign-in page ([https://portal.office.com](https://portal.office.com/)).
-3. Enter a user UPN, and then select **Next**. Make sure that you enter the UPN of a hybrid user who was synced from your on-premises Active Directory instance, and who previously used federated authentication. A page on which you enter the username and password appears:
-
- ![Screenshot that shows the sign-in page in which you enter a username](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image27.png)
-
- ![Screenshot that shows the sign-in page in which you enter a password](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image28.png)
-
-4. After you enter the password and select **Sign in**, you're redirected to the Office 365 portal.
-
- ![Screenshot that shows the Office 365 portal](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image29.png)
-
-### Test seamless SSO
-
-To test seamless SSO:
-
-1. Sign in to a domain-joined machine that is connected to the corporate network.
-2. In Internet Explorer or Chrome, go to one of the following URLs (replace "contoso" with your domain):
-
- * https:\/\/myapps.microsoft.com/contoso.com
- * https:\/\/myapps.microsoft.com/contoso.onmicrosoft.com
-
- The user is briefly redirected to the Azure AD sign-in page, which shows the message "Trying to sign you in." The user isn't prompted for a username or password.<br />
-
- ![Screenshot that shows the Azure AD sign-in page and message](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image30.png)<br />
-3. The user is redirected and is successfully signed in to the access panel:
-
- > [!NOTE]
-    > Seamless SSO works on Microsoft 365 services that support domain hint (for example, myapps.microsoft.com/contoso.com). Currently, the Microsoft 365 portal (portal.office.com) doesn't support domain hints. Users are required to enter a UPN. After a UPN is entered, seamless SSO retrieves the Kerberos ticket on behalf of the user. The user is signed in without entering a password.
-
- > [!TIP]
- > Consider deploying [Azure AD hybrid join on Windows 10](../devices/overview.md) for an improved SSO experience.
-
-### Remove the relying party trust
-
-After you validate that all users and clients are successfully authenticating via Azure AD, it's safe to remove the Microsoft 365 relying party trust.
-
-If you don't use AD FS for other purposes (that is, for other relying party trusts), it's safe to decommission AD FS at this point.
-
-### Rollback
-
-If you discover a major issue and can't resolve it quickly, you might choose to roll back the solution to federation.
-
-Consult the federation design and deployment documentation for your specific deployment details. The process should involve these tasks:
-
-* Convert managed domains to federated authentication by using the **Convert-MSOLDomainToFederated** cmdlet.
-* If necessary, configure additional claims rules.
-
-### Sync userPrincipalName updates
-
-Historically, updates to the **UserPrincipalName** attribute, which uses the sync service from the on-premises environment, are blocked unless both of these conditions are true:
-
-* The user is in a managed (non-federated) identity domain.
-* The user hasn't been assigned a license.
-
-To learn how to verify or turn on this feature, see [Sync userPrincipalName updates](./how-to-connect-syncservice-features.md).
-
-## Roll over the seamless SSO Kerberos decryption key
-
-It's important to frequently roll over the Kerberos decryption key of the AZUREADSSOACC computer account (which represents Azure AD). The AZUREADSSOACC computer account is created in your on-premises Active Directory forest. We highly recommend that you roll over the Kerberos decryption key at least every 30 days to align with the way that Active Directory domain members submit password changes. There's no associated device attached to the AZUREADSSOACC computer account object, so you must perform the rollover manually.
-
-Initiate the rollover of the seamless SSO Kerberos decryption key on the on-premises server that's running Azure AD Connect.
-
-For more information, see [How do I roll over the Kerberos decryption key of the AZUREADSSOACC computer account?](./how-to-connect-sso-faq.yml).
-
-## Monitoring and logging
-
-Monitor the servers that run the authentication agents to maintain the solution availability. In addition to general server performance counters, the authentication agents expose performance objects that can help you understand authentication statistics and errors.
-
-Authentication agents log operations to the Windows event logs that are located under Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin.
-
-You can also turn on logging for troubleshooting.
-
-For more information, see [Troubleshoot Azure Active Directory pass-through authentication](./tshoot-connect-pass-through-authentication.md).
-
-## Next steps
-
-* Learn about [Azure AD Connect design concepts](plan-connect-design-concepts.md).
-* Choose the [right authentication](./choose-ad-authn.md).
-* Learn about [supported topologies](plan-connect-design-concepts.md).
active-directory Plan Migrate Adfs Password Hash Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/plan-migrate-adfs-password-hash-sync.md
- Title: 'Azure AD Connect: Migrate from federation to PHS for Azure AD | Microsoft Docs'
-description: This article has information about moving your hybrid identity environment from federation to password hash synchronization.
------- Previously updated : 05/29/2020-----
-# Migrate from federation to password hash synchronization for Azure Active Directory
-
-This article describes how to move your organization domains from Active Directory Federation Services (AD FS) to password hash synchronization.
-
-> [!NOTE]
-> Changing your authentication method requires planning, testing, and potentially downtime. [Staged rollout](how-to-connect-staged-rollout.md) provides an alternative way to test and gradually migrate from federation to cloud authentication using password hash synchronization.
->
-> If you plan on using staged rollout, you should remember to turn off the staged rollout features once you have finished cutting over. For more information see [Migrate to cloud authentication using staged rollout](how-to-connect-staged-rollout.md)
--
-## Prerequisites for migrating to password hash synchronization
-
-The following prerequisites are required to migrate from using AD FS to using password hash synchronization.
--
-### Update Azure AD Connect
-
-At a minimum, to successfully perform the steps to migrate to password hash synchronization, you should have [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) 1.1.819.0. This version contains significant changes to the way sign-in conversion is performed and reduces the overall time to migrate from federation to cloud authentication from potentially hours to minutes.
--
-> [!IMPORTANT]
-> You might read in outdated documentation, tools, and blogs that user conversion is required when you convert domains from federated identity to managed identity. *Converting users* is no longer required. Microsoft is working to update documentation and tools to reflect this change.
-
-To update Azure AD Connect, complete the steps in [Azure AD Connect: Upgrade to the latest version](./how-to-upgrade-previous-version.md).
-
-### Password hash synchronization required permissions
-
-You can configure Azure AD Connect by using express settings or a custom installation. If you used the custom installation option, the [required permissions](./reference-connect-accounts-permissions.md) for password hash synchronization might not be in place.
-
-The Azure AD Connect Active Directory Domain Services (AD DS) service account requires the following permissions to synchronize password hashes:
-
-* Replicate Directory Changes
-* Replicate Directory Changes All
-
-Now is a good time to verify that these permissions are in place for all domains in the forest.
-
-### Plan the migration method
-
-You can choose from two methods to migrate from federated identity management to password hash synchronization and seamless single sign-on (SSO). The method you use depends on how your AD FS instance was originally configured.
-
-* **Azure AD Connect**. If you originally configured AD FS by using Azure AD Connect, you *must* change to password hash synchronization by using the Azure AD Connect wizard.
-
-  Azure AD Connect automatically runs the **Set-MsolDomainAuthentication** cmdlet when you change the user sign-in method. Azure AD Connect automatically unfederates all the verified federated domains in your Azure AD tenant.
-
- > [!NOTE]
- > Currently, if you originally used Azure AD Connect to configure AD FS, you can't avoid unfederating all domains in your tenant when you change the user sign-in to password hash synchronization.
-
-* **Azure AD Connect with PowerShell**. You can use this method only if you didn't originally configure AD FS by using Azure AD Connect. For this option, you still must change the user sign-in method via the Azure AD Connect wizard. The core difference with this option is that the wizard doesn't automatically run the **Set-MsolDomainAuthentication** cmdlet. With this option, you have full control over which domains are converted and in which order.
-
-To understand which method you should use, complete the steps in the following sections.
-
-#### Verify current user sign-in settings
-
-To verify your current user sign-in settings:
-
-1. Sign in to the [Azure AD portal](https://aad.portal.azure.com/) by using a Global Administrator account.
-2. In the **User sign-in** section, verify the following settings:
- * **Federation** is set to **Enabled**.
- * **Seamless single sign-on** is set to **Disabled**.
- * **Pass-through authentication** is set to **Disabled**.
-
- ![Screenshot of the settings in the Azure AD Connect User sign-in section](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image1.png)
-
-#### Verify the Azure AD Connect configuration
-
-1. On your Azure AD Connect server, open Azure AD Connect. Select **Configure**.
-2. On the **Additional tasks** page, select **View current configuration**, and then select **Next**.<br />
-
- ![Screenshot of the View current configuration option selected on the Additional tasks page](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image2.png)<br />
-3. On the **Review Your Solution** page, note the **Password hash synchronization** status.<br />
-
- * If **Password hash synchronization** is set to **Disabled**, complete the steps in this article to enable it.
- * If **Password hash synchronization** is set to **Enabled**, you can skip the section **Step 1: Enable password hash synchronization** in this article.
-4. On the **Review your solution** page, scroll to **Active Directory Federation Services (AD FS)**.<br />
-
-   * If the AD FS configuration appears in this section, you can safely assume that AD FS was originally configured by using Azure AD Connect. You can convert your domains from federated identity to managed identity by using the Azure AD Connect **Change user sign-in** option. The process is detailed in the section **Option A: Switch from federation to password hash synchronization by using Azure AD Connect**.
- * If AD FS isn't listed in the current settings, you must manually convert your domains from federated identity to managed identity by using PowerShell. For more information about this process, see the section **Option B: Switch from federation to password hash synchronization by using Azure AD Connect and PowerShell**.
-
-### Document current federation settings
-
-To find your current federation settings, run the **Get-MsolDomainFederationSettings** cmdlet:
-
-``` PowerShell
-Get-MsolDomainFederationSettings -DomainName YourDomain.extension | fl *
-```
-
-Example:
-
-``` PowerShell
-Get-MsolDomainFederationSettings -DomainName Contoso.com | fl *
-```
-
-Verify any settings that might have been customized for your federation design and deployment documentation. Specifically, look for customizations in **PreferredAuthenticationProtocol**, **SupportsMfa**, and **PromptLoginBehavior**.
-
-For more information, see these articles:
-
-* [AD FS prompt=login parameter support](/windows-server/identity/ad-fs/operations/ad-fs-prompt-login)
-* [Set-MsolDomainAuthentication](/powershell/module/msonline/set-msoldomainauthentication)
-
-> [!NOTE]
-> If **SupportsMfa** is set to **True**, you're using an on-premises multi-factor authentication solution to inject a second-factor challenge into the user authentication flow. This setup no longer works for Azure AD authentication scenarios after converting this domain from federated to managed authentication. After you disable federation, you sever the relationship to your on-premises federation and this includes on-premises MFA adapters.
->
-> Instead, use the Azure AD Multi-Factor Authentication cloud-based service to perform the same function. Carefully evaluate your multi-factor authentication requirements before you continue. Before you convert your domains, make sure that you understand how to use Azure AD Multi-Factor Authentication, the licensing implications, and the user registration process.
-
-#### Back up federation settings
-
-Although no changes are made to other relying parties in your AD FS farm during the processes described in this article, we recommend that you have a current valid backup of your AD FS farm that you can restore from. You can create a current valid backup by using the free Microsoft [AD FS Rapid Restore Tool](/windows-server/identity/ad-fs/operations/ad-fs-rapid-restore-tool). You can use the tool to back up AD FS, and to restore an existing farm or create a new farm.
-
-If you choose not to use the AD FS Rapid Restore Tool, at a minimum, you should export the Microsoft 365 Identity Platform relying party trust and any associated custom claim rules you added. You can export the relying party trust and associated claim rules by using the following PowerShell example:
-
-``` PowerShell
-(Get-AdfsRelyingPartyTrust -Name "Microsoft Office 365 Identity Platform") | Export-CliXML "C:\temp\O365-RelyingPartyTrust.xml"
-```
-
-## Deployment considerations and using AD FS
-
-This section describes deployment considerations and details about using AD FS.
-
-### Current AD FS use
-
-Before you convert from federated identity to managed identity, look closely at how you currently use AD FS for Azure AD, Microsoft 365, and other applications (relying party trusts). Specifically, consider the scenarios that are described in the following table:
-
-| If | Then |
-|-|-|
-| You plan to keep using AD FS with other applications (other than Azure AD and Microsoft 365). | After you convert your domains, you'll use both AD FS and Azure AD. Consider the user experience. In some scenarios, users might be required to authenticate twice: once to Azure AD (where a user gets SSO access to other applications, like Microsoft 365), and again for any applications that are still bound to AD FS as a relying party trust. |
-| Your AD FS instance is heavily customized and relies on specific customization settings in the onload.js file (for example, if you changed the sign-in experience so that users use only a **SamAccountName** format for their username instead of a User Principal Name (UPN), or your organization has heavily branded the sign-in experience). The onload.js file can't be duplicated in Azure AD. | Before you continue, you must verify that Azure AD can meet your current customization requirements. For more information and for guidance, see the sections on AD FS branding and AD FS customization.|
-| You use AD FS to block earlier versions of authentication clients.| Consider replacing AD FS controls that block earlier versions of authentication clients by using a combination of [Conditional Access controls](../conditional-access/concept-conditional-access-conditions.md) and [Exchange Online Client Access Rules](/exchange/clients-and-mobile-in-exchange-online/client-access-rules/client-access-rules). |
-| You require users to perform multi-factor authentication against an on-premises multi-factor authentication server solution when users authenticate to AD FS.| In a managed identity domain, you can't inject a multi-factor authentication challenge via the on-premises multi-factor authentication solution into the authentication flow. However, you can use the Azure AD Multi-Factor Authentication service for multi-factor authentication after the domain is converted.<br /><br /> If your users don't currently use Azure AD Multi-Factor Authentication, a onetime user registration step is required. You must prepare for and communicate the planned registration to your users. |
-| You currently use access control policies (AuthZ rules) in AD FS to control access to Microsoft 365.| Consider replacing the policies with the equivalent Azure AD [Conditional Access policies](../conditional-access/overview.md) and [Exchange Online Client Access Rules](/exchange/clients-and-mobile-in-exchange-online/client-access-rules/client-access-rules).|
-
-### Common AD FS customizations
-
-This section describes common AD FS customizations.
-
-#### InsideCorporateNetwork claim
-
-AD FS issues the **InsideCorporateNetwork** claim if the user who is authenticating is inside the corporate network. This claim can then be passed on to Azure AD. The claim is used to bypass multi-factor authentication based on the user's network location. To learn how to determine whether this functionality currently is enabled in AD FS, see [Trusted IPs for federated users](../authentication/howto-mfa-adfs.md).
-
-The **InsideCorporateNetwork** claim isn't available after your domains are converted to password hash synchronization. You can use [named locations in Azure AD](../conditional-access/location-condition.md) to replace this functionality.
-
-After you configure named locations, you must update all Conditional Access policies that were configured to either include or exclude the network **All trusted locations** or **MFA Trusted IPs** values to reflect the new named locations.
-
-For more information about the **Location** condition in Conditional Access, see [Active Directory Conditional Access locations](../conditional-access/location-condition.md).
-
-#### Hybrid Azure AD-joined devices
-
-When you join a device to Azure AD, you can create Conditional Access rules that enforce that devices meet your access standards for security and compliance. Also, users can sign in to a device by using an organizational work or school account instead of a personal account. When you use hybrid Azure AD-joined devices, you can join your Active Directory domain-joined devices to Azure AD. Your federated environment might have been set up to use this feature.
-
-To ensure that hybrid join continues to work for Windows 10 devices that are joined to the domain after your domains are converted to password hash synchronization, you must use Azure AD Connect device options to sync Active Directory computer accounts to Azure AD.
-
-For Windows 8 and Windows 7 computer accounts, hybrid join uses seamless SSO to register the computer in Azure AD. You don't have to sync Windows 8 and Windows 7 computer accounts like you do for Windows 10 devices. However, you must deploy an updated workplacejoin.exe file (via an .msi file) to Windows 8 and Windows 7 clients so they can register themselves by using seamless SSO. [Download the .msi file](https://www.microsoft.com/download/details.aspx?id=53554).
-
-For more information, see [Configure hybrid Azure AD-joined devices](../devices/hybrid-azuread-join-plan.md).
-
-#### Branding
-
-If your organization [customized your AD FS sign-in pages](/windows-server/identity/ad-fs/operations/ad-fs-user-sign-in-customization) to display information that's more pertinent to the organization, consider making similar [customizations to the Azure AD sign-in page](../fundamentals/customize-branding.md).
-
-Although similar customizations are available, some visual changes on sign-in pages should be expected after the conversion. You might want to provide information about expected changes in your communications to users.
-
-> [!NOTE]
-> Organization branding is available only if you purchase the Premium or Basic license for Azure Active Directory or if you have a Microsoft 365 license.
-
-## Plan deployment and support
-
-Complete the tasks that are described in this section to help you plan for deployment and support.
-
-### Plan the maintenance window
-
-Although the domain conversion process is relatively quick, Azure AD might continue to send some authentication requests to your AD FS servers for up to four hours after the domain conversion is finished. During this four-hour window, and depending on various service side caches, Azure AD might not accept these authentications. Users might receive an error. The user can still successfully authenticate against AD FS, but Azure AD no longer accepts the user's issued token because that federation trust is now removed.
-
-Only users who access the services via a web browser during this post-conversion window before the service side cache is cleared are affected. Legacy clients (Exchange ActiveSync, Outlook 2010/2013) aren't expected to be affected because Exchange Online keeps a cache of their credentials for a set period of time. The cache is used to silently reauthenticate the user. The user doesn't have to return to AD FS. Credentials stored on the device for these clients are used to silently reauthenticate themselves after this cache is cleared. Users aren't expected to receive any password prompts as a result of the domain conversion process.
-
-Modern authentication clients (Office 2016 and Office 2013, iOS, and Android apps) use a valid refresh token to obtain new access tokens for continued access to resources instead of returning to AD FS. These clients are immune to any password prompts resulting from the domain conversion process. The clients will continue to function without additional configuration.
-
-> [!IMPORTANT]
-> Don't shut down your AD FS environment or remove the Microsoft 365 relying party trust until you have verified that all users can successfully authenticate by using cloud authentication.
-
-### Plan for rollback
-
-If you encounter a major issue that you can't resolve quickly, you might decide to roll back the solution to federation. It's important to plan what to do if your deployment doesn't roll out as intended. If conversion of the domain or users fails during deployment, or if you need to roll back to federation, you must understand how to mitigate any outage and reduce the effect on your users.
-
-#### To roll back
-
-To plan for rollback, check the federation design and deployment documentation for your specific deployment details. The process should include these tasks:
-
-* Converting managed domains to federated domains by using the **Convert-MSOLDomainToFederated** cmdlet (an example follows this list).
-* If necessary, configuring additional claims rules.
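-
-For example, assuming you've connected with **Connect-MsolService** and, if needed, pointed at your AD FS server with **Set-MsolADFSContext**, a single domain (contoso.com is a placeholder) can be switched back to federation with:
-
-``` PowerShell
-Convert-MsolDomainToFederated -DomainName contoso.com
-```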
-
-### Plan communications
-
-An important part of planning deployment and support is ensuring that your users are proactively informed about upcoming changes. Users should know in advance what they might experience and what is required of them.
-
-After both password hash synchronization and seamless SSO are deployed, the user sign-in experience for accessing Microsoft 365 and other resources that are authenticated through Azure AD changes. Users who are outside the network see only the Azure AD sign-in page. These users aren't redirected to the forms-based page that's presented by external-facing web application proxy servers.
-
-Include the following elements in your communication strategy:
-
-* Notify users about upcoming and released functionality by using:
- * Email and other internal communication channels.
- * Visuals, such as posters.
- * Executive, live, or other communications.
-* Determine who will customize the communications and who will send the communications, and when.
-
-## Implement your solution
-
-You planned your solution. Now you can implement it. Implementation involves the following components:
-
-* Enabling password hash synchronization.
-* Preparing for seamless SSO.
-* Changing the sign-in method to password hash synchronization and enabling seamless SSO.
-
-### Step 1: Enable password hash synchronization
-
-The first step to implement this solution is to enable password hash synchronization by using the Azure AD Connect wizard. Password hash synchronization is an optional feature that you can enable in environments that use federation. There's no effect on the authentication flow. In this case, Azure AD Connect will start syncing password hashes without affecting users who sign in by using federation.
-
-For this reason, we recommend that you complete this step as a preparation task well before you change your domain's sign-in method. Then, you'll have ample time to verify that password hash synchronization works correctly.
-
-To enable password hash synchronization:
-
-1. On the Azure AD Connect server, open the Azure AD Connect wizard, and then select **Configure**.
-2. Select **Customize synchronization options**, and then select **Next**.
-3. On the **Connect to Azure AD** page, enter the username and password of a Global Administrator account.
-4. On the **Connect your directories** page, select **Next**.
-5. On the **Domain and OU filtering** page, select **Next**.
-6. On the **Optional features** page, select **Password synchronization**, and then select **Next**.
-
- ![Screenshot of the Password synchronization option selected on the Optional features page](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image6.png)<br />
-7. Select **Next** on the remaining pages. On the last page, select **Configure**.
-8. Azure AD Connect starts to sync password hashes on the next synchronization.
-
-After password hash synchronization is enabled, the password hashes for all users in the Azure AD Connect synchronization scope are rehashed and written to Azure AD. Depending on the number of users, this operation might take minutes or several hours.
-
-For planning purposes, you should estimate that approximately 20,000 users are processed in 1 hour.
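-For example, at that rate an environment with about 60,000 users would need roughly three hours for the initial sync of password hashes to finish.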
-
-To verify that password hash synchronization works correctly, complete the **Troubleshooting** task in the Azure AD Connect wizard:
-
-1. Open a new Windows PowerShell session on your Azure AD Connect server by using the Run as Administrator option.
-2. Run `Set-ExecutionPolicy RemoteSigned` or `Set-ExecutionPolicy Unrestricted`.
-3. Start the Azure AD Connect wizard.
-4. Go to the **Additional tasks** page, select **Troubleshoot**, and then select **Next**.
-5. On the **Troubleshooting** page, select **Launch** to start the troubleshooting menu in PowerShell.
-6. On the main menu, select **Troubleshoot password hash synchronization**.
-7. On the submenu, select **Password hash synchronization does not work at all**.
-
-For troubleshooting issues, see [Troubleshoot password hash synchronization with Azure AD Connect sync](./tshoot-connect-password-hash-synchronization.md).
-
-### Step 2: Prepare for seamless SSO
-
-For your devices to use seamless SSO, you must add an Azure AD URL to users' intranet zone settings by using a group policy in Active Directory.
-
-By default, web browsers automatically calculate the correct zone, either internet or intranet, from a URL. For example, **http:\/\/contoso/** maps to the intranet zone and **http:\/\/intranet.contoso.com** maps to the internet zone (because the URL contains a period). Browsers send Kerberos tickets to a cloud endpoint, like the Azure AD URL, only if you explicitly add the URL to the browser's intranet zone.
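-The Azure AD URL that seamless SSO uses is https:\/\/autologon.microsoftazuread-sso.com, so that's the URL you add to the intranet zone through group policy (the rollout steps linked below cover the exact policy settings).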
-
-Complete the steps to [roll out](./how-to-connect-sso-quick-start.md) the required changes to your devices.
-
-> [!IMPORTANT]
-> Making this change doesn't modify the way your users sign in to Azure AD. However, it's important that you apply this configuration to all your devices before you proceed. Users who sign in on devices that haven't received this configuration are simply required to enter a username and password to sign in to Azure AD.
-
-### Step 3: Change the sign-in method to password hash synchronization and enable seamless SSO
-
-You have two options for changing the sign-in method to password hash synchronization and enabling seamless SSO.
-
-#### Option A: Switch from federation to password hash synchronization by using Azure AD Connect
-
-Use this method if you initially configured your AD FS environment by using Azure AD Connect. You can't use this method if you *didn't* originally configure your AD FS environment by using Azure AD Connect.
-
-First, change the sign-in method:
-
-1. On the Azure AD Connect server, open the Azure AD Connect wizard.
-2. Select **Change user sign-in**, and then select **Next**.
-
- ![Screenshot of the Change user sign-in option on the Additional tasks page](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image7.png)<br />
-3. On the **Connect to Azure AD** page, enter the username and password of a Global Administrator account.
-4. On the **User sign-in** page, select the **Password hash synchronization** button. Make sure to select the **Do not convert user accounts** check box. The option is deprecated. Select **Enable single sign-on**, and then select **Next**.
-
- ![Screenshot of the Enable single sign-on page](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image8.png)<br />
-
- > [!NOTE]
- > Starting with Azure AD Connect version 1.1.880.0, the **Seamless single sign-on** check box is selected by default.
-
- > [!IMPORTANT]
- > You can safely ignore the warnings that indicate that user conversion and full password hash synchronization are required steps for converting from federation to cloud authentication. Note that these steps aren't required anymore. If you still see these warnings, make sure that you're running the latest version of Azure AD Connect and that you're using the latest version of this guide. For more information, see the section [Update Azure AD Connect](#update-azure-ad-connect).
-
-5. On the **Enable single sign-on** page, enter the credentials of a Domain Administrator account, and then select **Next**.
-
- ![Screenshot of the Enable single sign-on page where you can enter the Domain Administrator account credentials.](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image9.png)<br />
-
- > [!NOTE]
- > Domain Administrator account credentials are required to enable seamless SSO. The process completes the following actions, which require these elevated permissions. The Domain Administrator account credentials aren't stored in Azure AD Connect or in Azure AD. The Domain Administrator account credentials are used only to turn on the feature. The credentials are discarded when the process successfully finishes.
- >
- > 1. A computer account named AZUREADSSOACC (which represents Azure AD) is created in your on-premises Active Directory instance.
- > 2. The computer account's Kerberos decryption key is securely shared with Azure AD.
- > 3. Two Kerberos service principal names (SPNs) are created to represent two URLs that are used during Azure AD sign-in.
-
-6. On the **Ready to configure** page, make sure that the **Start the synchronization process when configuration completes** check box is selected. Then, select **Configure**.
-
- ![Screenshot of the Ready to configure page](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image10.png)<br />
-
- > [!IMPORTANT]
- > At this point, all your federated domains will change to managed authentication. Password hash synchronization is the new method of authentication.
-
-7. In the Azure AD portal, select **Azure Active Directory** > **Azure AD Connect**.
-8. Verify these settings:
- * **Federation** is set to **Disabled**.
- * **Seamless single sign-on** is set to **Enabled**.
- * **Password Sync** is set to **Enabled**.<br />
-
- ![Screenshot that shows the settings in the User sign-in section of the Azure AD portal.](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image11.png)<br />
-
-Skip to [Testing and next steps](#testing-and-next-steps).
-
- > [!IMPORTANT]
- > Skip the section **Option B: Switch from federation to password hash synchronization by using Azure AD Connect and PowerShell**. The steps in that section don't apply if you chose Option A to change the sign-in method to password hash synchronization and enable seamless SSO.
-
-#### Option B: Switch from federation to password hash synchronization by using Azure AD Connect and PowerShell
-
-Use this option if you didn't initially configure your federated domains by using Azure AD Connect. During this process, you enable seamless SSO and switch your domains from federated to managed.
-
-1. On the Azure AD Connect server, open the Azure AD Connect wizard.
-2. Select **Change user sign-in**, and then select **Next**.
-3. On the **Connect to Azure AD** page, enter the username and password for a Global Administrator account.
-4. On the **User sign-in** page, select the **Password hash synchronization** button. Select **Enable single sign-on**, and then select **Next**.
-
- Before you enable password hash synchronization:
- ![Screenshot that shows the Do not configure option on the User sign-in page](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image12.png)<br />
-
- After you enable password hash synchronization:
- ![Screenshot that shows new options on the User sign-in page](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image13.png)<br />
-
- > [!NOTE]
- > Starting with Azure AD Connect version 1.1.880.0, the **Seamless single sign-on** check box is selected by default.
-
-5. On the **Enable single sign-on** page, enter the credentials for a Domain Administrator account, and then select **Next**.
-
- > [!NOTE]
- > Domain Administrator account credentials are required to enable seamless SSO. The process completes the following actions, which require these elevated permissions. The Domain Administrator account credentials aren't stored in Azure AD Connect or in Azure AD. The Domain Administrator account credentials are used only to turn on the feature. The credentials are discarded when the process successfully finishes.
- >
- > 1. A computer account named AZUREADSSOACC (which represents Azure AD) is created in your on-premises Active Directory instance.
- > 2. The computer account's Kerberos decryption key is securely shared with Azure AD.
- > 3. Two Kerberos service principal names (SPNs) are created to represent two URLs that are used during Azure AD sign-in.
-
-6. On the **Ready to configure** page, make sure that the **Start the synchronization process when configuration completes** check box is selected. Then, select **Configure**.
-
- ![Screenshot that shows the Configure button on the Ready to configure page](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image15.png)<br />
- When you select the **Configure** button, seamless SSO is configured as indicated in the preceding step. Password hash synchronization configuration isn't modified because it was enabled earlier.
-
- > [!IMPORTANT]
- > No changes are made to the way users sign in at this time.
-
-7. In the Azure AD portal, verify these settings:
- * **Federation** is set to **Enabled**.
- * **Seamless single sign-on** is set to **Enabled**.
- * **Password Sync** is set to **Enabled**.
-
- ![Screenshot that shows the settings in the User sign-in section](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image16.png)
-
-#### Convert domains from federated to managed
-
-At this point, federation is still enabled and operational for your domains. To continue with the deployment, each domain needs to be converted from federated to managed to force user authentication via password hash synchronization.
-
-> [!IMPORTANT]
-> You don't have to convert all domains at the same time. You might choose to start with a test domain on your production tenant or start with your domain that has the lowest number of users.
-
-Complete the conversion by using the Azure AD PowerShell module:
-
-1. In PowerShell, sign in to Azure AD by using a Global Administrator account.
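-
-   For example, assuming the MSOnline module is installed, you can connect by running:
-
-   ``` PowerShell
-   Connect-MsolService
-   ```
-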
-2. To convert the first domain, run the following command:
-
- ``` PowerShell
- Set-MsolDomainAuthentication -Authentication Managed -DomainName <domain name>
- ```
-
-3. In the Azure AD portal, select **Azure Active Directory** > **Azure AD Connect**.
-4. Verify that the domain has been converted to managed by running the following command:
-
- ``` PowerShell
- Get-MsolDomain -DomainName <domain name>
- ```
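-
-   The **Authentication** property in the output should now show **Managed**. To check just that property, you can run:
-
-   ``` PowerShell
-   (Get-MsolDomain -DomainName <domain name>).Authentication
-   ```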
-
-## Testing and next steps
-
-Complete the following tasks to verify password hash synchronization and to finish the conversion process.
-
-### Test authentication by using password hash synchronization
-
-When your tenant used federated identity, users were redirected from the Azure AD sign-in page to your AD FS environment. Now that the tenant is configured to use password hash synchronization instead of federated authentication, users aren't redirected to AD FS. Instead, users sign in directly on the Azure AD sign-in page.
-
-To test password hash synchronization:
-
-1. Open Internet Explorer in InPrivate mode so that seamless SSO doesn't sign you in automatically.
-2. Go to the Office 365 sign-in page ([https://portal.office.com](https://portal.office.com/)).
-3. Enter a user UPN, and then select **Next**. Make sure that you enter the UPN of a hybrid user who was synced from your on-premises Active Directory instance, and who previously used federated authentication. A page on which you enter the username and password appears:
-
- ![Screenshot that shows the sign-in page in which you enter a username](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image18.png)
-
- ![Screenshot that shows the sign-in page in which you enter a password](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image19.png)
-
-4. After you enter the password and select **Sign in**, you're redirected to the Office 365 portal.
-
- ![Screenshot that shows the Office 365 portal](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image20.png)
--
-### Test seamless SSO
-
-1. Sign in to a domain-joined machine that is connected to the corporate network.
-2. In Internet Explorer or Chrome, go to one of the following URLs (replace "contoso" with your domain):
-
- * https:\/\/myapps.microsoft.com/contoso.com
- * https:\/\/myapps.microsoft.com/contoso.onmicrosoft.com
-
- The user is briefly redirected to the Azure AD sign-in page, which shows the message "Trying to sign you in." The user isn't prompted for a username or password.<br />
-
- ![Screenshot that shows the Azure AD sign-in page and message](media/plan-migrate-adfs-password-hash-sync/migrating-adfs-to-phs_image21.png)<br />
-3. The user is redirected and is successfully signed in to the access panel:
-
- > [!NOTE]
-   > Seamless SSO works on Microsoft 365 services that support domain hint (for example, myapps.microsoft.com/contoso.com). Currently, the Microsoft 365 portal (portal.office.com) doesn't support domain hints. Users are required to enter a UPN. After a UPN is entered, seamless SSO retrieves the Kerberos ticket on behalf of the user. The user is signed in without entering a password.
-
- > [!TIP]
- > Consider deploying [Azure AD hybrid join on Windows 10](../devices/overview.md) for an improved SSO experience.
-
-### Remove the relying party trust
-
-After you validate that all users and clients are successfully authenticating via Azure AD, it's safe to remove the Microsoft 365 relying party trust.
-
-If you don't use AD FS for other purposes (that is, for other relying party trusts), it's safe to decommission AD FS at this point.
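-
-For example, when you're ready, the trust can be removed on the AD FS server with a command like the following (verify the exact trust name in your farm first; it matches the trust you exported earlier):
-
-``` PowerShell
-Remove-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform"
-```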
-
-### Rollback
-
-If you discover a major issue and can't resolve it quickly, you might choose to roll back the solution to federation.
-
-Consult the federation design and deployment documentation for your specific deployment details. The process should involve these tasks:
-
-* Convert managed domains to federated authentication by using the **Convert-MSOLDomainToFederated** cmdlet.
-* If necessary, configure additional claims rules.
-
-### Sync userPrincipalName updates
-
-Historically, updates to the **UserPrincipalName** attribute that come through the sync service from the on-premises environment are blocked unless both of these conditions are true:
-
-* The user is in a managed (non-federated) identity domain.
-* The user hasn't been assigned a license.
-
-To learn how to verify or turn on this feature, see [Sync userPrincipalName updates](./how-to-connect-syncservice-features.md).
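-
-As a quick check on the Azure AD Connect server, you can read (and, if needed, enable) the feature by using the ADSync company feature cmdlets. Cmdlet and parameter names can vary slightly by Azure AD Connect version, so confirm them against the linked article:
-
-``` PowerShell
-Get-ADSyncAADCompanyFeature
-Set-ADSyncAADCompanyFeature -SynchronizeUpnForManagedUsers $true
-```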
-
-### Troubleshooting
-
-Your support team should understand how to troubleshoot any authentication issues that arise either during or after the change from federation to managed authentication. Use the following troubleshooting documentation to help your support team familiarize themselves with the common troubleshooting steps and appropriate actions that can help isolate and resolve the issue.
-
-[Troubleshoot Azure Active Directory password hash synchronization](./tshoot-connect-password-hash-synchronization.md)
-
-[Troubleshoot Azure Active Directory Seamless Single Sign-On](./tshoot-connect-sso.md)
-
-## Roll over the seamless SSO Kerberos decryption key
-
-It's important to frequently roll over the Kerberos decryption key of the AZUREADSSOACC computer account (which represents Azure AD). The AZUREADSSOACC computer account is created in your on-premises Active Directory forest. We highly recommend that you roll over the Kerberos decryption key at least every 30 days to align with the way that Active Directory domain members submit password changes. There's no associated device attached to the AZUREADSSOACC computer account object, so you must perform the rollover manually.
-
-Initiate the rollover of the seamless SSO Kerberos decryption key on the on-premises server that's running Azure AD Connect.
-
-For more information, see [How do I roll over the Kerberos decryption key of the AZUREADSSOACC computer account?](./how-to-connect-sso-faq.yml).
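-
-In outline, the rollover uses the AzureADSSO module that ships with Azure AD Connect. The path and prompts below follow the FAQ linked above and assume a default installation; confirm them against your installed version:
-
-``` PowerShell
-Import-Module "C:\Program Files\Microsoft Azure Active Directory Connect\AzureADSSO.psd1"
-New-AzureADSSOAuthenticationContext   # sign in with a Global Administrator account
-$creds = Get-Credential               # enter Domain Administrator credentials
-Update-AzureADSSOForest -OnPremCredentials $creds
-```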
-
-## Next steps
-
-* Learn about [Azure AD Connect design concepts](plan-connect-design-concepts.md).
-* Choose the [right authentication](./choose-ad-authn.md).
-* Learn about [supported topologies](plan-connect-design-concepts.md).
active-directory End User Experiences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/end-user-experiences.md
Which method(s) you choose to deploy in your organization is at your discretion.
## Azure AD My Apps
-My Apps at <https://myapps.microsoft.com> is a web-based portal that allows an end user with an organizational account in Azure Active Directory to view and launch applications to which they have been granted access by the Azure AD administrator. If you are an end user with [Azure Active Directory Premium](https://azure.microsoft.com/pricing/details/active-directory/), you can also utilize self-service group management capabilities through My Apps.
+My Apps at <https://myapps.microsoft.com> is a web-based portal that allows an end user with an organizational account in Azure Active Directory to view and launch applications to which they have been granted access by the Azure AD administrator. If you are an end user with [Azure Active Directory Premium](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing), you can also utilize self-service group management capabilities through My Apps.
By default, all applications are listed together on a single page. But you can use collections to group together related applications and present them on a separate tab, making them easier to find. For example, you can use collections to create logical groupings of applications for specific job roles, tasks, projects, and so on. For information, see [Create collections on the My Apps portal](access-panel-collections.md).
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
Integrating F5 BIG-IP with Azure AD for SHA has the following prerequisites:
- An Azure AD [free subscription](/windows/client-management/mdm/register-your-free-azure-active-directory-subscription#:~:text=%20Register%20your%20free%20Azure%20Active%20Directory%20subscription,will%20take%20you%20to%20the%20Azure...%20More%20) provides the minimum core requirements for implementing SHA with password-less authentication
- - A [Premium subscription](https://azure.microsoft.com/pricing/details/active-directory/) provides all additional value adds outlined in the preface, including [Conditional Access](../conditional-access/overview.md), [MFA](../authentication/concept-mfa-howitworks.md), and [Identity Protection](../identity-protection/overview-identity-protection.md)
+ - A [Premium subscription](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) provides all additional value adds outlined in the preface, including [Conditional Access](../conditional-access/overview.md), [MFA](../authentication/concept-mfa-howitworks.md), and [Identity Protection](../identity-protection/overview-identity-protection.md)
No previous experience or F5 BIG-IP knowledge is necessary to implement SHA, but we do recommend familiarizing yourself with F5 BIG-IP terminology. F5's rich [knowledge base](https://www.f5.com/services/resources/glossary) is also a good place to start building BIG-IP knowledge.
active-directory Howto Saml Token Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/howto-saml-token-encryption.md
# How to: Configure Azure AD SAML token encryption > [!NOTE]
-> Token encryption is an Azure Active Directory (Azure AD) premium feature. To learn more about Azure AD editions, features, and pricing, see [Azure AD pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+> Token encryption is an Azure Active Directory (Azure AD) premium feature. To learn more about Azure AD editions, features, and pricing, see [Azure AD pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
SAML token encryption enables the use of encrypted SAML assertions with an application that supports it. When configured for an application, Azure AD will encrypt the SAML assertions it emits for that application using the public key obtained from a certificate stored in Azure AD. The application must use the matching private key to decrypt the token before it can be used as evidence of authentication for the signed in user.
active-directory Migrate Adfs Apps To Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-adfs-apps-to-azure.md
Both AD FS and Azure AD provide token encryption: the ability to encrypt the SA
For information about Azure AD SAML token encryption and how to configure it, see [How to: Configure Azure AD SAML token encryption](howto-saml-token-encryption.md). > [!NOTE]
-> Token encryption is an Azure Active Directory (Azure AD) premium feature. To learn more about Azure AD editions, features, and pricing, see [Azure AD pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+> Token encryption is an Azure Active Directory (Azure AD) premium feature. To learn more about Azure AD editions, features, and pricing, see [Azure AD pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
### Apps and configurations that can be moved today
active-directory Plan An Application Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/plan-an-application-integration.md
The following articles discuss the different ways applications integrate with Az
## Capabilities for apps not listed in the Azure AD gallery
-You can add any application that already exists in your organization, or any third-party application from a vendor who is not already part of the Azure AD gallery. Depending on your [license agreement](https://azure.microsoft.com/pricing/details/active-directory/), the following capabilities are available:
+You can add any application that already exists in your organization, or any third-party application from a vendor who is not already part of the Azure AD gallery. Depending on your [license agreement](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing), the following capabilities are available:
* Self-service integration of any application that supports [Security Assertion Markup Language (SAML) 2.0](https://wikipedia.org/wiki/SAML_2.0) identity providers (SP-initiated or IdP-initiated) * Self-service integration of any web application that has an HTML-based sign-in page using [password-based SSO](sso-options.md#password-based-sso)
active-directory Plan Sso Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/plan-sso-deployment.md
The Azure Marketplace has over 3000 applications with pre-integrated SSO connect
## Licensing -- **Azure AD licensing** - SSO for pre-integrated SaaS applications is free. However, the number of objects in your directory and the features you wish to deploy may require additional licenses. For a full list of license requirements, see [Azure Active Directory Pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+- **Azure AD licensing** - SSO for pre-integrated SaaS applications is free. However, the number of objects in your directory and the features you wish to deploy may require additional licenses. For a full list of license requirements, see [Azure Active Directory Pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
- **Application licensing** - You'll need the appropriate licenses for your SaaS applications to meet your business needs. Work with the application owner to determine whether the users assigned to the application have the appropriate licenses for their roles within the application. If Azure AD manages the automatic provisioning based on roles, the roles assigned in Azure AD must align with the number of licenses owned within the application. Improper number of licenses owned in the application may lead to errors during the provisioning/updating of a user. ## Plan your SSO team
active-directory Qs Configure Template Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md
ms.devlang: na
na Previously updated : 12/15/2020 Last updated : 07/13/2021
To enable system-assigned managed identity on a VM, your account needs the [Virt
### Assign a role the VM's system-assigned managed identity
-After you have enabled system-assigned managed identity on your VM, you may want to grant it a role such as **Reader** access to the resource group in which it was created.
+After you enable a system-assigned managed identity on your VM, you may want to grant it a role such as **Reader** access to the resource group in which it was created. You can find detailed information to help you with this step in the [Assign Azure roles using Azure Resource Manager templates](../../role-based-access-control/role-assignments-template.md) article.
-To assign a role to your VM's system-assigned identity, your account needs the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role assignment.
-
-1. Whether you sign in to Azure locally or via the Azure portal, use an account that is associated with the Azure subscription that contains the VM.
-
-2. Load the template into an [editor](#azure-resource-manager-templates) and add the following information to give your VM **Reader** access to the resource group in which it was created. Your template structure may vary depending on the editor and the deployment model you choose.
-
- Under the `parameters` section add the following:
-
- ```json
- "builtInRoleType": {
- "type": "string",
- "defaultValue": "Reader"
- },
- "rbacGuid": {
- "type": "string"
- }
- ```
-
- Under the `variables` section add the following:
-
- ```json
- "Reader": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')]"
- ```
-
- Under the `resources` section add the following:
-
- ```json
- {
- "apiVersion": "2017-09-01",
- "type": "Microsoft.Authorization/roleAssignments",
- "name": "[parameters('rbacGuid')]",
- "properties": {
- "roleDefinitionId": "[variables(parameters('builtInRoleType'))]",
- "principalId": "[reference(variables('vmResourceId'), '2017-12-01', 'Full').identity.principalId]",
- "scope": "[resourceGroup().id]"
- },
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
- ]
- }
- ```
### Disable a system-assigned managed identity from an Azure VM
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
description: List of services that support managed identities for Azure resource
Previously updated : 06/28/2021 Last updated : 07/13/2021
Refer to the following document to reconfigure a managed identity if you have mo
| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet | | | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-| User assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
+| System assigned | Preview | Preview | Not available | Preview |
+| User assigned | Preview | Preview | Not available | Preview |
Refer to the following documents to use managed identity with [Azure Automation](../../automation/automation-intro.md):
Managed identity type | All Generally Available<br>Global Azure Regions | Azure
| User assigned | ![Available][check] | Not available | Not available | Not available |
-> [!Note]
-> Microsoft Power BI also [supports managed identities](../../stream-analytics/powerbi-output-managed-identity.md).
+> [!NOTE]
+> You can use managed identities to authenticate an [Azure Stream Analytics job to Power BI](../../stream-analytics/powerbi-output-managed-identity.md).
[check]: media/services-support-managed-identities/check.png "Available"
active-directory Concept Activity Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md
You can route Azure AD audit logs and sign-in logs to your Azure storage account
To use this feature, you need: * An Azure subscription. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
-* Azure AD Free, Basic, Premium 1, or Premium 2 [license](https://azure.microsoft.com/pricing/details/active-directory/), to access the Azure AD audit logs in the Azure portal.
+* Azure AD Free, Basic, Premium 1, or Premium 2 [license](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing), to access the Azure AD audit logs in the Azure portal.
* An Azure AD tenant. * A user who's a **global administrator** or **security administrator** for the Azure AD tenant.
-* Azure AD Premium 1, or Premium 2 [license](https://azure.microsoft.com/pricing/details/active-directory/), to access the Azure AD sign-in logs in the Azure portal.
+* Azure AD Premium 1, or Premium 2 [license](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing), to access the Azure AD sign-in logs in the Azure portal.
Depending on where you want to route the audit log data, you need either of the following:
active-directory Overview Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/overview-monitoring.md
Currently, you can route the logs to:
You'll need an Azure AD premium license to access the Azure AD sign in logs.
-For detailed feature and licensing information in the [Azure Active Directory pricing guide](https://azure.microsoft.com/pricing/details/active-directory/).
+For detailed feature and licensing information, see the [Azure Active Directory pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
To deploy Azure AD monitoring and reporting you'll need a user who is a global administrator or security administrator for the Azure AD tenant.
active-directory Overview Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/overview-reports.md
The [audit logs report](concept-audit-logs.md) provides you with records of syst
#### What Azure AD license do you need to access the audit logs report?
-The audit logs report is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. A deatiled feature comparison as per [different types of licenses](../fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) can be seen on the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory/). For more details, see [Azure Active Directory features and capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad).
+The audit logs report is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. A detailed feature comparison of the [different types of licenses](../fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) can be seen on the [Azure Active Directory pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). For more details, see [Azure Active Directory features and capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad).
### Sign-ins report
active-directory Plan Monitoring And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/plan-monitoring-and-reporting.md
With Azure AD monitoring, you can route logs to:
You'll need an Azure AD premium license to access the Azure AD sign in logs.
-For detailed feature and licensing information in the [Azure Active Directory pricing guide](https://azure.microsoft.com/pricing/details/active-directory/).
+For detailed feature and licensing information, see the [Azure Active Directory pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
To deploy Azure AD monitoring and reporting you'll need a user who is a global administrator or security administrator for the Azure AD tenant.
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/administrative-units.md
A central administrator could:
## License requirements
-Using administrative units requires an Azure AD Premium P1 license for each administrative unit administrator, and Azure AD Free licenses for administrative unit members. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://azure.microsoft.com/pricing/details/active-directory/).
+Using administrative units requires an Azure AD Premium P1 license for each administrative unit administrator, and Azure AD Free licenses for administrative unit members. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
## Manage administrative units
active-directory Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-overview.md
A scope is the restriction of permitted actions to a particular Azure AD resourc
## License requirements
-Using built-in roles in Azure AD is free, while custom roles requires an Azure AD Premium P1 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://azure.microsoft.com/pricing/details/active-directory/).
+Using built-in roles in Azure AD is free, while custom roles require an Azure AD Premium P1 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
## Next steps
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-concept.md
The following are known issues with role-assignable groups:
## License requirements
-Using this feature requires an Azure AD Premium P1 license. To also use Privileged Identity Management for just-in-time role activation, requires an Azure AD Premium P2 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://azure.microsoft.com/pricing/details/active-directory/).
+Using this feature requires an Azure AD Premium P1 license. Using Privileged Identity Management for just-in-time role activation also requires an Azure AD Premium P2 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
## Next steps
active-directory My Staff Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/my-staff-configure.md
To complete this article, you need the following resources and privileges:
* You need *Global Administrator* privileges in your Azure AD tenant to enable SMS-based authentication. * Each user who's enabled in the text message authentication method policy must be licensed, even if they don't use it. Each enabled user must have one of the following Azure AD or Microsoft 365 licenses:
- * [Azure AD Premium P1 or P2](https://azure.microsoft.com/pricing/details/active-directory/)
+ * [Azure AD Premium P1 or P2](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing)
* [Microsoft 365 (M365) F1 or F3](https://www.microsoft.com/licensing/news/m365-firstline-workers) * [Enterprise Mobility + Security (EMS) E3 or E5](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/compare-plans-and-pricing) or [Microsoft 365 (M365) E3 or E5](https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans)
active-directory Github Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-provisioning-tutorial.md
Title: 'Tutorial: User provisioning for GitHub - Azure AD' description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to GitHub. -+ Last updated 10/21/2020-+ # Tutorial: Configure GitHub for automatic user provisioning
For more information on how to read the Azure AD provisioning logs, see [Reporti
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Iprova Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/iprova-provisioning-tutorial.md
Title: 'Tutorial: Configure iProva for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to iProva. -
-writer: zchia
+
+writer: twimmers
Last updated 10/29/2019-+ # Tutorial: Configure iProva for automatic user provisioning
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Jiramicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md
Use your Microsoft Azure Active Directory account with Atlassian JIRA server to
To configure Azure AD integration with JIRA SAML SSO by Microsoft, you need the following items: - An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).-- JIRA Core and Software 6.4 to 8.16.1 or JIRA Service Desk 3.0 to 4.16.1 should installed and configured on Windows 64-bit version
+- JIRA Core and Software 6.4 to 8.17.1 or JIRA Service Desk 3.0 to 4.16.1 should be installed and configured on a Windows 64-bit version
- JIRA server is HTTPS enabled - Note the supported versions for JIRA Plugin are mentioned in the section below. - JIRA server is reachable on the Internet particularly to the Azure AD login page for authentication and should be able to receive the token from Azure AD
To get started, you need the following items:
## Supported versions of JIRA
-* JIRA Core and Software: 6.4 to 8.16.1
+* JIRA Core and Software: 6.4 to 8.17.1
* JIRA Service Desk 3.0 to 4.16.1 * JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](jira52microsoft-tutorial.md)
active-directory Enable Your Tenant Verifiable Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/enable-your-tenant-verifiable-credentials.md
Before you can successfully complete this tutorial, you must first:
- Complete the steps in the [Get started](get-started-verifiable-credentials.md) tutorial. - Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Have Azure AD with a P2 [license](https://azure.microsoft.com/pricing/details/active-directory/). If you don't have one, follow the steps in [Create a free developer account](how-to-create-a-free-developer-account.md).
+- Have Azure AD with a P2 [license](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). If you don't have one, follow the steps in [Create a free developer account](how-to-create-a-free-developer-account.md).
- Have an instance of [Azure Key Vault](../../key-vault/general/overview.md) where you have rights to create keys and secrets. ## Azure Active Directory
aks Quickstart Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/quickstart-event-grid.md
+
+ Title: Subscribe to Azure Kubernetes Service events with Azure Event Grid (Preview)
+description: Use Azure Event Grid to subscribe to Azure Kubernetes Service events
+++ Last updated : 07/12/2021+++
+# Quickstart: Subscribe to Azure Kubernetes Service (AKS) events with Azure Event Grid (Preview)
+
+Azure Event Grid is a fully managed event routing service that provides uniform event consumption using a publish-subscribe model.
+
+In this quickstart, you'll create an AKS cluster and subscribe to AKS events.
++
+## Prerequisites
+
+* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+* [Azure CLI installed](/cli/azure/install-azure-cli).
+
+### Register the `EventgridPreview` preview feature
+
+To use the feature, you must also enable the `EventgridPreview` feature flag on your subscription.
+
+Register the `EventgridPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "EventgridPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EventgridPreview')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
++
+## Create an AKS cluster
+
+Create an AKS cluster using the [az aks create][az-aks-create] command. The following example creates a resource group *MyResourceGroup* and a cluster named *MyAKS* with one node in the *MyResourceGroup* resource group:
+
+```azurecli
+az group create --name MyResourceGroup --location eastus
+az aks create -g MyResourceGroup -n MyAKS --location eastus --node-count 1 --generate-ssh-keys
+```
+
+## Subscribe to AKS events
+
+Create a namespace and event hub using [az eventhubs namespace create][az-eventhubs-namespace-create] and [az eventhubs eventhub create][az-eventhubs-eventhub-create]. The following example creates a namespace *MyNamespace* and an event hub *MyEventGridHub* in *MyNamespace*, both in the *MyResourceGroup* resource group.
+
+```azurecli
+az eventhubs namespace create --location eastus --name MyNamespace -g MyResourceGroup
+az eventhubs eventhub create --name MyEventGridHub --namespace-name MyNamespace -g MyResourceGroup
+```
+
+> [!NOTE]
+> The *name* of your namespace must be unique.
+
+Subscribe to the AKS events using [az eventgrid event-subscription create][az-eventgrid-event-subscription-create]:
+
+```azurecli
+SOURCE_RESOURCE_ID=$(az aks show -g MyResourceGroup -n MyAKS --query id --output tsv)
+ENDPOINT=$(az eventhubs eventhub show -g MyResourceGroup -n MyEventGridHub --namespace-name MyNamespace --query id --output tsv)
+az eventgrid event-subscription create --name MyEventGridSubscription \
+--source-resource-id $SOURCE_RESOURCE_ID \
+--endpoint-type eventhub \
+--endpoint $ENDPOINT
+```
+
+Verify your subscription to AKS events using `az eventgrid event-subscription list`:
+
+```azurecli
+az eventgrid event-subscription list --source-resource-id $SOURCE_RESOURCE_ID
+```
+
+The following example output shows you're subscribed to events from the *MyAKS* cluster and those events are delivered to the *MyEventGridHub* event hub:
+
+```output
+$ az eventgrid event-subscription list --source-resource-id $SOURCE_RESOURCE_ID
+[
+ {
+ "deadLetterDestination": null,
+ "deadLetterWithResourceIdentity": null,
+ "deliveryWithResourceIdentity": null,
+ "destination": {
+ "deliveryAttributeMappings": null,
+ "endpointType": "EventHub",
+ "resourceId": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNamespace/eventhubs/MyEventGridHub"
+ },
+ "eventDeliverySchema": "EventGridSchema",
+ "expirationTimeUtc": null,
+ "filter": {
+ "advancedFilters": null,
+ "enableAdvancedFilteringOnArrays": null,
+ "includedEventTypes": [
+ "Microsoft.ContainerService.NewKubernetesVersionAvailable"
+ ],
+ "isSubjectCaseSensitive": null,
+ "subjectBeginsWith": "",
+ "subjectEndsWith": ""
+ },
+ "id": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/MyAKS/providers/Microsoft.EventGrid/eventSubscriptions/MyEventGridSubscription",
+ "labels": null,
+ "name": "MyEventGridSubscription",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "MyResourceGroup",
+ "retryPolicy": {
+ "eventTimeToLiveInMinutes": 1440,
+ "maxDeliveryAttempts": 30
+ },
+ "systemData": null,
+ "topic": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/microsoft.containerservice/managedclusters/MyAKS",
+ "type": "Microsoft.EventGrid/eventSubscriptions"
+ }
+]
+```
+
+When AKS events occur, you'll see those events appear in your event hub. For example, when the list of available Kubernetes versions for your clusters changes, you'll see a `Microsoft.ContainerService.NewKubernetesVersionAvailable` event. For more information on the events AKS emits, see [Azure Kubernetes Service (AKS) as an Event Grid source][aks-events].
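To confirm which event types your subscription will deliver (for example, that `Microsoft.ContainerService.NewKubernetesVersionAvailable` is included), you can inspect the subscription's filter; this optional check reuses the `$SOURCE_RESOURCE_ID` variable defined earlier:

```azurecli
# Show only the event types included in the subscription's filter.
az eventgrid event-subscription show --name MyEventGridSubscription \
--source-resource-id $SOURCE_RESOURCE_ID \
--query filter.includedEventTypes
```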
+
+## Delete the cluster and subscriptions
+
+Use the [az group delete][az-group-delete] command to remove the resource group, the AKS cluster, namespace, and event hub, and all related resources.
+
+```azurecli-interactive
+az group delete --name MyResourceGroup --yes --no-wait
+```
+
+> [!NOTE]
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
+>
+> If you used a managed identity, the identity is managed by the platform and does not require removal.
+
+## Next steps
+
+In this quickstart, you deployed a Kubernetes cluster and then subscribed to AKS events in Azure Event Hubs.
+
+To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
+
+> [!div class="nextstepaction"]
+> [AKS tutorial][aks-tutorial]
+
+[aks-events]: ../event-grid/event-schema-aks.md
+[aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-eventhubs-namespace-create]: /cli/azure/eventhubs/namespace?view=azure-cli-latest&preserve-view=true#az-eventhubs-namespace-create
+[az-eventhubs-eventhub-create]: /cli/azure/eventhubs/eventhub?view=azure-cli-latest&preserve-view=true#az-eventhubs-eventhub-create
+[az-eventgrid-event-subscription-create]: /cli/azure/eventgrid/event-subscription?view=azure-cli-latest&preserve-view=true#az-eventgrid-event-subscription-create
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-feature-list]: /cli/azure/feature#az_feature_list
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-group-delete]: /cli/azure/group#az_group_delete
+[sp-delete]: kubernetes-service-principal.md#additional-considerations
api-management How To Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-event-grid.md
+
+ Title: Send events from Azure API Management to Event Grid
+description: In this quickstart, you enable Event Grid events for your Azure API Management instance, then send events to a sample application.
++++ Last updated : 07/12/2021+++
+# Send events from API Management to Event Grid (Preview)
+
+API Management integrates with Azure [Event Grid](../event-grid/overview.md) so that you can send event notifications to other services and trigger downstream processes. Event Grid is a fully managed event routing service that uses a publish-subscribe model. Event Grid has built-in support for Azure services like [Azure Functions](../azure-functions/functions-overview.md) and [Azure Logic Apps](../logic-apps/logic-apps-overview.md), and can deliver event alerts to non-Azure services using webhooks.
+
+For example, using integration with Event Grid, you can build an application that updates a database, creates a billing account, and sends an email notification each time a user is added to your API Management instance.
+
+In this article, you subscribe to Event Grid events in your API Management instance, trigger events, and send the events to an endpoint that processes the data. To keep it simple, you send events to a sample web app that collects and displays the messages:
++
+- If you don't already have an API Management service, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
+- Enable a [system-assigned managed identity](api-management-howto-use-managed-service-identity.md#create-a-system-assigned-managed-identity) in your API Management instance.
+- Create a [resource group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) if you don't have one in which to deploy the sample endpoint.
+
+## Create an event endpoint
+
+In this section, you use a Resource Manager template to deploy a pre-built sample web application to Azure App Service. Later, you subscribe to your API Management instance's Event Grid events and specify this app as the endpoint to which the events are sent.
+
+To deploy the sample app, you can use the Azure CLI, Azure PowerShell, or the Azure portal. The following example uses the [az deployment group create](/cli/azure/deployment/group#az_deployment_group_create) command in the Azure CLI.
+
+* Set `RESOURCE_GROUP_NAME` to the name of an existing resource group
+* Set `SITE_NAME` to a unique name for your web app
+
+ The site name must be unique within Azure because it forms part of the fully qualified domain name (FQDN) of the web app. In a later section, you navigate to the app's FQDN in a web browser to view the events.
+
+```azurecli-interactive
+RESOURCE_GROUP_NAME=<your-resource-group-name>
+SITE_NAME=<your-site-name>
+
+az deployment group create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --template-uri "https://raw.githubusercontent.com/Azure-Samples/azure-event-grid-viewer/master/azuredeploy.json" \
+ --parameters siteName=$SITE_NAME hostingPlanName=$SITE_NAME-plan
+```
+
+Once the deployment has succeeded (it might take a few minutes), open a browser and navigate to your web app to make sure it's running:
+
+`https://<your-site-name>.azurewebsites.net`
+
+You should see the sample app rendered with no event messages displayed.
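If you prefer to look up the URL instead of constructing it, you can retrieve the app's default host name with the CLI; this assumes the template created the site with the name in `$SITE_NAME`:

```azurecli-interactive
# Returns something like <your-site-name>.azurewebsites.net
az webapp show --resource-group $RESOURCE_GROUP_NAME --name $SITE_NAME --query defaultHostName --output tsv
```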
++
+## Subscribe to API Management events
+
+In Event Grid, you subscribe to a *topic* to tell it which events you want to track, and where to send them. Here, you create a subscription to events in your API Management instance.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. Select **Events (preview) > + Event Subscription**.
+1. On the **Basic** tab:
+ * Enter a descriptive **Name** for the event subscription.
+ * In **Event Types**, select one or more API Management event types to send to Event Grid. For the example in this article, select at least **Microsoft.APIManagement.ProductCreated**
+ * In **Endpoint Details**, select the **Web Hook** endpoint type, click **Select an endpoint**, and enter your web app URL followed by `api/updates`. Example: `https://myapp.azurewebsites.net/api/updates`.
+ * Select **Confirm selection**.
+1. Leave the settings on the remaining tabs at their default values, and then select **Create**.
+
+ :::image type="content" source="media/how-to-event-grid/create-event-subscription.png" alt-text="Create an event subscription in Azure portal":::
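If you prefer to script the subscription instead of using the portal, the following Azure CLI sketch creates a roughly equivalent webhook subscription; the resource names are placeholders, and the flags may need adjusting for your environment:

```azurecli
# Get the resource ID of your API Management instance (placeholder names).
APIM_ID=$(az apim show --name <apim-name> --resource-group <resource-group> --query id --output tsv)

# Create a webhook subscription filtered to the ProductCreated event.
az eventgrid event-subscription create --name product-created-subscription \
  --source-resource-id $APIM_ID \
  --endpoint-type webhook \
  --endpoint https://<your-site-name>.azurewebsites.net/api/updates \
  --included-event-types Microsoft.APIManagement.ProductCreated
```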
+
+## Trigger and view events
+
+Now that the sample app is up and running and you've subscribed to your API Management instance with Event Grid, you're ready to generate events.
+
+As an example, [create a product](api-management-howto-add-products.md) in your API Management instance. If your event subscription includes the **Microsoft.APIManagement.ProductCreated** event, creating the product triggers an event that is pushed to your web app endpoint.
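If you'd rather create the product from the command line, a sketch like the following should work with recent CLI versions that include the `az apim product` commands (all names are placeholders):

```azurecli
# Create a sample product; its creation raises a ProductCreated event.
az apim product create --resource-group <resource-group> --service-name <apim-name> \
  --product-id sample-product --product-name "Sample product" \
  --description "Product created to trigger a sample Event Grid event"
```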
+
+Navigate to your Event Grid Viewer web app, and you should see the `ProductCreated` event. Select the button next to the event to show the details.
++
+## Event Grid event schema
+
+API Management event data includes the `resourceUri`, which identifies the API Management resource that triggered the event. For details about the API Management event message schema, see the Event Grid documentation:
+
+[Azure Event Grid event schema for API Management](../event-grid/event-schema-api-management.md)
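To list every event type that the API Management topic can publish, you can query Event Grid's registered topic types; the topic type name used here is an assumption and might differ in your environment:

```azurecli
# Enumerate the event types published by the API Management topic type.
az eventgrid topic-type list-event-types --name Microsoft.ApiManagement.Service --output table
```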
+
+## Next steps
+
+* [Choose between Azure messaging services - Event Grid, Event Hubs, and Service Bus](../event-grid/compare-messaging-services.md)
+* Learn more about [subscribing to events](../event-grid/subscribe-through-portal.md).
+
app-service Tutorial Nodejs Mongodb App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-nodejs-mongodb-app.md
Title: 'Tutorial: Node.js app with MongoDB'
-description: Learn how to get a Node.js app working in Azure, with connection to a MongoDB database in Azure (Cosmos DB). MEAN.js is used in the tutorial.
+description: Learn how to get a Node.js app working in Azure, with connection to a MongoDB database in Azure (Cosmos DB). Sails.js and Angular 12 are used in the tutorial.
ms.assetid: 0b4d7d0e-e984-49a1-a57a-3c0caa955f0e ms.devlang: nodejs Previously updated : 06/16/2020 Last updated : 07/13/2021 zone_pivot_groups: app-service-platform-windows-linux
zone_pivot_groups: app-service-platform-windows-linux
::: zone pivot="platform-windows"
-[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service. This tutorial shows how to create a Node.js app in App Service on Windows and connect it to a MongoDB database. When you're done, you'll have a MEAN application (MongoDB, Express, AngularJS, and Node.js) running in [Azure App Service](overview.md). For simplicity, the sample application uses the [MEAN.js web framework](https://meanjs.org/).
+[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service. This tutorial shows how to create a Node.js app in App Service on Windows and connect it to a MongoDB database. When you're done, you'll have a MEAN application (MongoDB, Express, AngularJS, and Node.js) running in [Azure App Service](overview.md). The sample application uses a combination of [Sails.js](https://sailsjs.com/) and [Angular 12](https://angular.io/).
::: zone-end ::: zone pivot="platform-linux"
-[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a Node.js app in App Service on Linux, connect it locally to a MongoDB database, then deploy it to a database in Azure Cosmos DB's API for MongoDB. When you're done, you'll have a MEAN application (MongoDB, Express, AngularJS, and Node.js) running in App Service on Linux. For simplicity, the sample application uses the [MEAN.js web framework](https://meanjs.org/).
+[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a Node.js app in App Service on Linux, connect it locally to a MongoDB database, then deploy it to a database in Azure Cosmos DB's API for MongoDB. When you're done, you'll have a MEAN application (MongoDB, Express, AngularJS, and Node.js) running in App Service on Linux. The sample application uses a combination of [Sails.js](https://sailsjs.com/) and [Angular 12](https://angular.io/).
::: zone-end
-![MEAN.js app running in Azure App Service](./media/tutorial-nodejs-mongodb-app/meanjs-in-azure.png)
+![MEAN app running in Azure App Service](./media/tutorial-nodejs-mongodb-app/run-in-azure.png)
What you'll learn:
To complete this tutorial:
- [Install Git](https://git-scm.com/) - [Install Node.js and NPM](https://nodejs.org/)-- [Install Bower](https://bower.io/) (required by [MEAN.js](https://meanjs.org/docs/0.5.x/#getting-started))-- [Install Gulp.js](https://gulpjs.com/) (required by [MEAN.js](https://meanjs.org/docs/0.5.x/#getting-started))-- [Install and run MongoDB Community Edition](https://docs.mongodb.com/manual/administration/install-community/) [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
-## Test local MongoDB
-
-Open the terminal window and `cd` to the `bin` directory of your MongoDB installation. You can use this terminal window to run all the commands in this tutorial.
-
-Run `mongo` in the terminal to connect to your local MongoDB server.
-
-```bash
-mongo
-```
-
-If your connection is successful, then your MongoDB database is already running. If not, make sure that your local MongoDB database is started by following the steps at [Install MongoDB Community Edition](https://docs.mongodb.com/manual/administration/install-community/). Often, MongoDB is installed, but you still need to start it by running `mongod`.
-
-When you're done testing your MongoDB database, type `Ctrl+C` in the terminal.
- ## Create local Node.js app In this step, you set up the local Node.js project.
In the terminal window, `cd` to a working directory.
Run the following command to clone the sample repository. ```bash
-git clone https://github.com/Azure-Samples/meanjs.git
+git clone https://github.com/Azure-Samples/mean-todoapp.git
```
-This sample repository contains a copy of the [MEAN.js repository](https://github.com/meanjs/mean). It is modified to run on App Service (for more information, see the MEAN.js repository [README file](https://github.com/Azure-Samples/meanjs/blob/master/README.md)).
+> [!NOTE]
+> For information on how the sample app is created, see [https://github.com/Azure-Samples/mean-todoapp](https://github.com/Azure-Samples/mean-todoapp).
### Run the application Run the following commands to install the required packages and start the application. ```bash
-cd meanjs
+cd mean-todoapp
npm install
-npm start
+node app.js --alter
```
-Ignore the config.domain warning. When the app is fully loaded, you see something similar to the following message:
+When the app is fully loaded, you see something similar to the following message:
<pre>
-MEAN.JS - Development Environment
-
-Environment: development
-Server: http://0.0.0.0:3000
-Database: mongodb://localhost/mean-dev
-App version: 0.5.0
-MEAN.JS version: 0.5.0
-</pre>
+debug: -
+debug: :: Fri Jul 09 2021 13:10:34 GMT+0200 (Central European Summer Time)
-Navigate to `http://localhost:3000` in a browser. Click **Sign Up** in the top menu and create a test user.
+debug: Environment : development
+debug: Port : 1337
+debug: -
+</pre>
-The MEAN.js sample application stores user data in the database. If you are successful at creating a user and signing in, then your app is writing data to the local MongoDB database.
+Navigate to `http://localhost:1337` in a browser. Add a few todo items.
-![MEAN.js connects successfully to MongoDB](./media/tutorial-nodejs-mongodb-app/mongodb-connect-success.png)
+The MEAN sample application stores user data in the database. By default, it uses a disk-based development database. If you can create and see todo items, then your app is reading and writing data.
-Select **Admin > Manage Articles** to add some articles.
+![MEAN app loaded successfully](./media/tutorial-nodejs-mongodb-app/run-locally.png)
To stop Node.js at any time, press `Ctrl+C` in the terminal.
To stop Node.js at any time, press `Ctrl+C` in the terminal.
In this step, you create a MongoDB database in Azure. When your app is deployed to Azure, it uses this cloud database.
-For MongoDB, this tutorial uses [Azure Cosmos DB](/azure/documentdb/). Cosmos DB supports MongoDB client connections.
+For MongoDB, this tutorial uses [Azure Cosmos DB](/azure/cosmos-db/). Cosmos DB supports MongoDB client connections.
### Create a resource group
When the Cosmos DB account is created, the Azure CLI shows information similar t
<pre> {
- "consistencyPolicy":
- {
+ "apiProperties": {
+ "serverVersion": "3.6"
+ },
+ "backupPolicy": {
+ "periodicModeProperties": {
+ "backupIntervalInMinutes": 240,
+ "backupRetentionIntervalInHours": 8,
+ "backupStorageRedundancy": "Geo"
+ },
+ "type": "Periodic"
+ },
+ "capabilities": [
+ {
+ "name": "EnableMongo"
+ }
+ ],
+ "connectorOffer": null,
+ "consistencyPolicy": {
"defaultConsistencyLevel": "Session", "maxIntervalInSeconds": 5, "maxStalenessPrefix": 100 },
+ "cors": [],
"databaseAccountOfferType": "Standard",
+ "defaultIdentity": "FirstPartyIdentity",
+ "disableKeyBasedMetadataWriteAccess": false,
"documentEndpoint": "https://&lt;cosmosdb-name&gt;.documents.azure.com:443/",
- "failoverPolicies":
... &lt; Output truncated for readability &gt; }
When the Cosmos DB account is created, the Azure CLI shows information similar t
## Connect app to production MongoDB
-In this step, you connect your MEAN.js sample application to the Cosmos DB database you just created, using a MongoDB connection string.
+In this step, you connect your sample application to the Cosmos DB database you just created, using a MongoDB connection string.
### Retrieve the database key
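You can retrieve the primary key for the Cosmos DB account with a command along the following lines (a sketch; replace the placeholder with your Cosmos DB account name):

```azurecli-interactive
# Retrieve the primary master key for the Cosmos DB account.
az cosmosdb keys list --name <cosmosdb-name> --resource-group myResourceGroup --query primaryMasterKey
```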
The Azure CLI shows information similar to the following example:
Copy the value of `primaryMasterKey`. You need this information in the next step. <a name="devconfig"></a>
-### Configure the connection string in your Node.js application
+### Configure the connection string in your sample application
-In your local MEAN.js repository, in the _config/env/_ folder, create a file named _local-production.js_. _.gitignore_ is already configured to keep this file out of the repository.
-
-Copy the following code into it. Be sure to replace the two *\<cosmosdb-name>* placeholders with your Cosmos DB database name, and replace the *\<primary-master-key>* placeholder with the key you copied in the previous step.
+In your local repository, in _config/datastores.js_, replace the existing content with the following code and save your changes.
```javascript
-module.exports = {
- db: {
- uri: 'mongodb://<cosmosdb-name>:<primary-master-key>@<cosmosdb-name>.documents.azure.com:10250/mean?ssl=true&sslverifycertificate=false'
- }
+module.exports.datastores = {
+ default: {
+ adapter: 'sails-mongo',
+ url: process.env.MONGODB_URI,
+ ssl: true,
+ },
}; ```
-The `ssl=true` option is required because [Cosmos DB requires TLS/SSL](../cosmos-db/connect-mongodb-account.md#connection-string-requirements).
-
-Save your changes.
+The `ssl: true` option is required because [Cosmos DB requires TLS/SSL](../cosmos-db/connect-mongodb-account.md#connection-string-requirements). `url` is set to an environment variable, which you will set next.
-### Test the application in production mode
-
-In a local terminal window, run the following command to minify and bundle scripts for the production environment. This process generates the files needed by the production environment.
+In the terminal, set the `MONGODB_URI` environment variable. Be sure to replace the two \<cosmosdb-name> placeholders with your Cosmos DB database name, and replace the \<cosmosdb-key> placeholder with the key you copied in the previous step.
```bash
-gulp prod
+export MONGODB_URI=mongodb://<cosmosdb-name>:<cosmosdb-key>@<cosmosdb-name>.documents.azure.com:10250/todoapp
```
-In a local terminal window, run the following command to use the connection string you configured in _config/env/local-production.js_. Ignore the certificate error and the config.domain warning.
-
-```bash
-# Bash
-NODE_ENV=production node server.js
-
-# Windows PowerShell
-$env:NODE_ENV = "production"
-node server.js
-```
+> [!NOTE]
+> This connection string follows the format defined in the [Sails.js documentation](https://sailsjs.com/documentation/reference/configuration/sails-config-datastores#?the-connection-url).
-`NODE_ENV=production` sets the environment variable that tells Node.js to run in the production environment. `node server.js` starts the Node.js server with `server.js` in your repository root. This is how your Node.js application is loaded in Azure.
+### Test the application with MongoDB
-When the app is loaded, check to make sure that it's running in the production environment:
+In a local terminal window, run `node app.js --alter` again.
-<pre>
-MEAN.JS
-
-Environment: production
-Server: http://0.0.0.0:8443
-Database: mongodb://&lt;cosmosdb-name&gt;:&lt;primary-master-key&gt;@&lt;cosmosdb-name&gt;.documents.azure.com:10250/mean?ssl=true&sslverifycertificate=false
-App version: 0.5.0
-MEAN.JS version: 0.5.0
-</pre>
+```bash
+node app.js --alter
+```
-Navigate to `http://localhost:8443` in a browser. Click **Sign Up** in the top menu and create a test user. If you are successful creating a user and signing in, then your app is writing data to the Cosmos DB database in Azure.
+Navigate to `http://localhost:1337` again. If you can create and see todo items, then your app is reading and writing data using the Cosmos DB database in Azure.
In the terminal, stop Node.js by typing `Ctrl+C`.
In this step, you deploy your MongoDB-connected Node.js application to Azure App
::: zone pivot="platform-windows"
+In the Cloud Shell, create an App Service plan with the [`az appservice plan create`](/cli/azure/appservice/plan) command.
+
+The following example creates an App Service plan named `myAppServicePlan` in the **B1** pricing tier:
+
+```azurecli-interactive
+az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku B1
+```
+
+When the App Service plan has been created, the Azure CLI shows information similar to the following example:
+
+<pre>
+{
+ "freeOfferExpirationTime": null,
+ "geoRegion": "UK West",
+ "hostingEnvironmentProfile": null,
+ "hyperV": false,
+ "id": "/subscriptions/0000-0000/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myAppServicePlan",
+ "isSpot": false,
+ "isXenon": false,
+ "kind": "app",
+ "location": "ukwest",
+ "maximumElasticWorkerCount": 1,
+ "maximumNumberOfWorkers": 0,
+ &lt; JSON data removed for brevity. &gt;
+}
+</pre>
::: zone-end ::: zone pivot="platform-linux"
+In the Cloud Shell, create an App Service plan with the [`az appservice plan create`](/cli/azure/appservice/plan) command.
+
+<!-- [!INCLUDE [app-service-plan](app-service-plan.md)] -->
+
+The following example creates an App Service plan named `myAppServicePlan` in the **B1** pricing tier:
+
+```azurecli-interactive
+az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku B1 --is-linux
+```
+
+When the App Service plan has been created, the Azure CLI shows information similar to the following example:
+
+<pre>
+{
+ "freeOfferExpirationTime": null,
+ "geoRegion": "West Europe",
+ "hostingEnvironmentProfile": null,
+ "id": "/subscriptions/0000-0000/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myAppServicePlan",
+ "kind": "linux",
+ "location": "West Europe",
+ "maximumNumberOfWorkers": 1,
+ "name": "myAppServicePlan",
+ &lt; JSON data removed for brevity. &gt;
+ "targetWorkerSizeId": 0,
+ "type": "Microsoft.Web/serverfarms",
+ "workerTierName": null
+}
+</pre>
::: zone-end
In this step, you deploy your MongoDB-connected Node.js application to Azure App
### Configure an environment variable
-By default, the MEAN.js project keeps _config/env/local-production.js_ out of the Git repository. So for your Azure app, you use app settings to define your MongoDB connection string.
+Remember that the sample application is already configured to use the `MONGODB_URI` environment variable in `config/datastores.js`. In App Service, you inject this variable by using an [app setting](configure-common.md#configure-app-settings).
To set app settings, use the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) command in the Cloud Shell.
-The following example configures a `MONGODB_URI` app setting in your Azure app. Replace the *\<app-name>*, *\<cosmosdb-name>*, and *\<primary-master-key>* placeholders.
+The following example configures a `MONGODB_URI` app setting in your Azure app. Replace the *\<app-name>*, *\<cosmosdb-name>*, and *\<cosmosdb-key>* placeholders.
```azurecli-interactive
-az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings MONGODB_URI="mongodb://<cosmosdb-name>:<primary-master-key>@<cosmosdb-name>.documents.azure.com:10250/mean?ssl=true"
+az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings MONGODB_URI='mongodb://<cosmosdb-name>:<cosmosdb-key>@<cosmosdb-name>.documents.azure.com:10250/todoapp' DEPLOYMENT_BRANCH='main'
```
-In Node.js code, you [access this app setting](configure-language-nodejs.md#access-environment-variables) with `process.env.MONGODB_URI`, just like you would access any environment variable.
-
-In your local MEAN.js repository, open _config/env/production.js_ (not _config/env/local-production.js_), which has production-environment specific configuration. The default MEAN.js app is already configured to use the `MONGODB_URI` environment variable that you created.
-
-```javascript
-db: {
- uri: ... || process.env.MONGODB_URI || ...,
- ...
-},
-```
+> [!NOTE]
+> `DEPLOYMENT_BRANCH` is a special app setting that tells the deployment engine which Git branch you're deploying to in App Service.
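To confirm that both app settings were stored on the app, you can list them back; this is an optional check that uses the same placeholder for the app name:

```azurecli-interactive
# List only the two settings configured above.
az webapp config appsettings list --name <app-name> --resource-group myResourceGroup \
  --query "[?name=='MONGODB_URI' || name=='DEPLOYMENT_BRANCH']"
```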
### Push to Azure from Git [!INCLUDE [app-service-plan-no-h](../../includes/app-service-web-git-push-to-azure-no-h.md)] + <pre>
-Counting objects: 5, done.
-Delta compression using up to 4 threads.
-Compressing objects: 100% (5/5), done.
-Writing objects: 100% (5/5), 489 bytes | 0 bytes/s, done.
-Total 5 (delta 3), reused 0 (delta 0)
-remote: Updating branch 'master'.
+Enumerating objects: 5, done.
+Counting objects: 100% (5/5), done.
+Delta compression using up to 8 threads
+Compressing objects: 100% (3/3), done.
+Writing objects: 100% (3/3), 318 bytes | 318.00 KiB/s, done.
+Total 3 (delta 2), reused 0 (delta 0), pack-reused 0
+remote: Updating branch 'main'.
remote: Updating submodules.
-remote: Preparing deployment for commit id '6c7c716eee'.
-remote: Running custom deployment command...
+remote: Preparing deployment for commit id '4eb0ca7190'.
+remote: Generating deployment script.
remote: Running deployment command... remote: Handling node.js deployment.
+remote: Creating app_offline.htm
+remote: KuduSync.NET from: 'D:\home\site\repository' to: 'D:\home\site\wwwroot'
+remote: Copying file: 'package.json'
+remote: Deleting app_offline.htm
+remote: Looking for app.js/server.js under site root.
+remote: Using start-up script app.js
+remote: Generated web.config.
. . . remote: Deployment successful. To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
- * [new branch]      master -> master
+ * [new branch]      main -> main
</pre>
-You may notice that the deployment process runs [Gulp](https://gulpjs.com/) after `npm install`. App Service does not run Gulp or Grunt tasks during deployment, so this sample repository has two additional files in its root directory to enable it:
+> [!TIP]
+> During Git deployment, the deployment engine runs `npm install --production` as part of its build automation.
+>
+> - As defined in `package.json`, the `postinstall` script is picked up by `npm install` and runs `ng build` to generate the production files for Angular and deploy them to the [assets](https://sailsjs.com/documentation/concepts/assets) folder.
+> - `scripts` in `package.json` can use tools that are installed in `node_modules/.bin`. Since `npm install` has installed `node_modules/.bin/ng` too, you can use it to deploy your Angular client files. This npm behavior is exactly the same in Azure App Service.
+> Packages under `devDependencies` in `package.json` are not installed. Any package you need in the production environment needs to be moved under `dependencies`.
+>
+> If your app needs to bypass the default automation and run custom automation, see [Run Grunt/Bower/Gulp](configure-language-nodejs.md#run-gruntbowergulp).
-- _.deployment_ - This file tells App Service to run `bash deploy.sh` as the custom deployment script.-- _deploy.sh_ - The custom deployment script. If you review the file, you will see that it runs `gulp prod` after `npm install` and `bower install`.
-You can use this approach to add any step to your Git-based deployment. If you restart your Azure app at any point, App Service doesn't rerun these automation tasks. For more information, see [Run Grunt/Bower/Gulp](configure-language-nodejs.md#run-gruntbowergulp).
+
+<pre>
+Enumerating objects: 5, done.
+Counting objects: 100% (5/5), done.
+Delta compression using up to 8 threads
+Compressing objects: 100% (3/3), done.
+Writing objects: 100% (3/3), 347 bytes | 347.00 KiB/s, done.
+Total 3 (delta 2), reused 0 (delta 0), pack-reused 0
+remote: Deploy Async
+remote: Updating branch 'main'.
+remote: Updating submodules.
+remote: Preparing deployment for commit id 'f776be774a'.
+remote: Repository path is /home/site/repository
+remote: Running oryx build...
+remote: Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx
+remote: You can report issues at https://github.com/Microsoft/Oryx/issues
+remote:
+remote: Oryx Version: 0.2.20210420.1, Commit: 85c6e9278aae3980b86cb1d520aaad532c814ed7, ReleaseTagName: 20210420.1
+remote:
+remote: Build Operation ID: |qwejn9R4StI=.5e8a3529_
+remote: Repository Commit : f776be774a3ea8abc48e5ee2b5132c037a636f73
+.
+.
+.
+remote: Deployment successful.
+remote: Deployment Logs : 'https://&lt;app-name&gt;.scm.azurewebsites.net/newui/jsonviewer?view_url=/api/deployments/a6fcf811136739f145e0de3be82ff195bca7a68b/log'
+To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
+ 4f7e3ac..a6fcf81 main -> main
+</pre>
+
+> [!TIP]
+> During Git deployment, the deployment engine runs `npm install` as part of its build automation.
+>
+> - As defined in `package.json`, the `postinstall` script is picked up by `npm install` and runs `ng build` to generate the production files for Angular and deploy them to the [assets](https://sailsjs.com/documentation/concepts/assets) folder.
+> - `scripts` in `package.json` can use tools that are installed in `node_modules/.bin`. Since `npm install` has installed `node_modules/.bin/ng` too, you can use it to deploy your Angular client files. This npm behavior is exactly the same in Azure App Service.
+> When build automation is complete, the whole completed repository is copied into the `/home/site/wwwroot` folder, out of which your app is hosted.
+>
+> If your app needs to bypass the default automation and run custom automation, see [Run Grunt/Bower/Gulp](configure-language-nodejs.md#run-gruntbowergulp).
+ ### Browse to the Azure app Browse to the deployed app using your web browser. ```bash
-http://<app-name>.azurewebsites.net
+https://<app-name>.azurewebsites.net
```
-Click **Sign Up** in the top menu and create a dummy user.
-
-If you are successful and the app automatically signs in to the created user, then your MEAN.js app in Azure has connectivity to the MongoDB (Cosmos DB) database.
+If you can create and see todo items in the browser, then your sample app in Azure has connectivity to the MongoDB (Cosmos DB) database.
-![MEAN.js app running in Azure App Service](./media/tutorial-nodejs-mongodb-app/meanjs-in-azure.png)
-
-Select **Admin > Manage Articles** to add some articles.
+![MEAN app running in Azure App Service](./media/tutorial-nodejs-mongodb-app/run-in-azure.png)
**Congratulations!** You're running a data-driven Node.js app in Azure App Service. ## Update data model and redeploy
-In this step, you change the `article` data model and publish your change to Azure.
-
-### Update the data model
-
-In your local MEAN.js repository, open _modules/articles/server/models/article.server.model.js_.
-
-In `ArticleSchema`, add a `String` type called `comment`. When you're done, your schema code should look like this:
-
-```javascript
-const ArticleSchema = new Schema({
- ...,
- user: {
- type: Schema.ObjectId,
- ref: 'User'
- },
- comment: {
- type: String,
- default: '',
- trim: true
- }
-});
-```
-
-### Update the articles code
-
-Update the rest of your `articles` code to use `comment`.
+In this step, you change the `Todo` data model and publish your change to Azure.
-There are five files you need to modify: the server controller and the four client views.
+### Update the server-side model
-Open _modules/articles/server/controllers/articles.server.controller.js_.
+In Sails.js, changing the server-side model and API code is as simple as changing the data model, because [Sails.js already defines the common routes](https://sailsjs.com/documentation/concepts/blueprints/blueprint-routes#?restful-routes) for a model by default.
-In the `update` function, add an assignment for `article.comment`. The following code shows the completed `update` function:
+In your local repository, open _api/models/Todo.js_ and add a `done` attribute. When you're done, your schema code should look like this:
```javascript
-exports.update = function (req, res) {
- let article = req.article;
+module.exports = {
- article.title = req.body.title;
- article.content = req.body.content;
- article.comment = req.body.comment;
+ attributes: {
+ value: {type: 'string'},
+ done: {type: 'boolean', defaultsTo: false}
+ },
- ...
}; ```
-Open _modules/articles/client/views/view-article.client.view.html_.
+### Update the client code
-Just above the closing `</section>` tag, add the following line to display `comment` along with the rest of the article data:
+There are three files you need to modify: the client model, the HTML template, and the component file.
-```html
-<p class="lead" ng-bind="vm.article.comment"></p>
-```
-
-Open _modules/articles/client/views/list-articles.client.view.html_.
+Open _client/src/app/todo.ts_ and add a `done` property. When you're done, your model should look like this:
-Just above the closing `</a>` tag, add the following line to display `comment` along with the rest of the article data:
-
-```html
-<p class="list-group-item-text" ng-bind="article.comment"></p>
-```
-
-Open _modules/articles/client/views/admin/list-articles.client.view.html_.
-
-Inside the `<div class="list-group">` element and just above the closing `</a>` tag, add the following line to display `comment` along with the rest of the article data:
-
-```html
-<p class="list-group-item-text" data-ng-bind="article.comment"></p>
+```typescript
+export class Todo {
+ id!: String;
+ value!: String;
+ done!: Boolean;
+}
```
-Open _modules/articles/client/views/admin/form-article.client.view.html_.
-
-Find the `<div class="form-group">` element that contains the submit button, which looks like this:
+Open _client/src/app/app.component.html_. Just above the only `<span>` element, add the following code to add a checkbox at the beginning of each todo item:
```html
-<div class="form-group">
- <button type="submit" class="btn btn-default">{{vm.article._id ? 'Update' : 'Create'}}</button>
-</div>
+<input class="form-check-input me-2" type="checkbox" [checked]="todo.done" (click)="toggleDone(todo.id, i)" [disabled]="isProcessing">
```
-Just above this tag, add another `<div class="form-group">` element that lets people edit the `comment` field. Your new element should look like this:
-
-```html
-<div class="form-group">
- <label class="control-label" for="comment">Comment</label>
- <textarea name="comment" data-ng-model="vm.article.comment" id="comment" class="form-control" cols="30" rows="10" placeholder="Comment"></textarea>
-</div>
+Open _client/src/app/app.component.ts_. Just above the last closing curly brace (`}`), insert the following method. It's called by the template code above when the checkbox is clicked and updates the server-side data.
+
+```typescript
+toggleDone(id:any, i:any) {
+ console.log("Toggled checkbox for " + id);
+ this.isProcessing = true;
+ this.Todos[i].done = !this.Todos[i].done;
+ this.restService.updateTodo(id, this.Todos[i])
+ .subscribe((res) => {
+ console.log('Data updated successfully!');
+ this.isProcessing = false;
+ }, (err) => {
+ console.log(err);
+ this.Todos[i].done = !this.Todos[i].done;
+ });
+}
``` ### Test your changes locally
-Save all your changes.
-
-In the local terminal window, test your changes in production mode again.
+In the local terminal window, compile the updated Angular client code with the build script defined in `package.json`.
```bash
-# Bash
-gulp prod
-NODE_ENV=production node server.js
-
-# Windows PowerShell
-gulp prod
-$env:NODE_ENV = "production"
-node server.js
+npm run build
```
-Navigate to `http://localhost:8443` in a browser and make sure that you're signed in.
+Test your changes with `node app.js --alter` again. Since you changed your server-side model, the `--alter` flag lets `Sails.js` alter the data structure in your Cosmos DB database.
-Select **Admin > Manage Articles**, then add an article by selecting the **+** button.
+```bash
+node app.js --alter
+```
-You see the new `Comment` textbox now.
+Navigate to `http://localhost:1337`. You should now see a checkbox in front of each todo item. When you select or clear a checkbox, the Cosmos DB database in Azure is updated to indicate that the todo item is done.
-![Added comment field to Articles](./media/tutorial-nodejs-mongodb-app/added-comment-field.png)
+![Added Done data and UI](./media/tutorial-nodejs-mongodb-app/added-done.png)
In the terminal, stop Node.js by typing `Ctrl+C`.
In the terminal, stop Node.js by typing `Ctrl+C`.
In the local terminal window, commit your changes in Git, then push the code changes to Azure. ```bash
-git commit -am "added article comment"
-git push azure master
+git commit -am "added done field"
+git push azure main
``` Once the `git push` is complete, navigate to your Azure app and try out the new functionality.
-![Model and database changes published to Azure](media/tutorial-nodejs-mongodb-app/added-comment-field-published.png)
+![Model and database changes published to Azure](media/tutorial-nodejs-mongodb-app/added-done-published.png)
If you added any todo items earlier, you can still see them. Existing data in your Cosmos DB is not lost. Also, your updates to the data schema leave your existing data intact.
application-gateway Application Gateway Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-metrics.md
Application Gateway provides several built-in timing metrics related to the re
Average time that it takes for a request to be received, processed and its response to be sent.
- This is the interval from the time when Application Gateway receives the first byte of the HTTP request to the time when the last response byte has been sent to the client. This includes the processing time taken by Application Gateway, the *Backend last byte response time*, time taken by Application Gateway to send all the response and the *Client RTT*.
+ This is the interval from the time when Application Gateway receives the first byte of the HTTP request to the time when the last response byte has been sent to the client. This includes the processing time taken by Application Gateway, the *Backend last byte response time*, and the time taken by Application Gateway to send the entire response.
- **Client RTT**
automation Automation Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-security-overview.md
description: This article provides an overview of Azure Automation account authe
keywords: automation security, secure automation; automation authentication Previously updated : 04/29/2021 Last updated : 06/28/2021
All tasks that you create against resources using Azure Resource Manager and the
## Managed identities (preview)
-A managed identity from Azure Active Directory (Azure AD) allows your runbook to easily access other Azure AD-protected resources. The identity is managed by the Azure platform and does not require you to provision or rotate any secrets. For more information about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+A managed identity from Azure Active Directory (Azure AD) allows your runbook to easily access other Azure AD-protected resources. The identity is managed by the Azure platform and doesn't require you to provision or rotate any secrets. For more information about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
Here are some of the benefits of using managed identities: -- You can use managed identities to authenticate to any Azure service that supports Azure AD authentication. They can be used for cloud as well as hybrid jobs. Hybrid jobs can use managed identities when run on a Hybrid Runbook Worker that's running on an Azure or non-Azure VM.
+- Using a managed identity instead of the Automation Run As account makes management simpler. You don't have to renew the certificate used by a Run As account.
- Managed identities can be used without any additional cost.
An Automation account can be granted two types of identities:
- A user-assigned identity is a standalone Azure resource that can be assigned to your app. An app can have multiple user-assigned identities.
->[!NOTE]
-> User assigned identities are not supported yet.
+> [!NOTE]
+> User-assigned identities are supported for cloud jobs only. To learn more about the different managed identities, see [Managed identity types](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
For details on using managed identities, see [Enable managed identity for Azure Automation (preview)](enable-managed-identity-for-automation.md).
When you create an Azure Classic Run As account, it performs the following tasks
## Service principal for Run As account
-The service principal for a Run As account does not have permissions to read Azure AD by default. If you want to add permissions to read or manage Azure AD, you must grant the permissions on the service principal under **API permissions**. To learn more, see [Add permissions to access your web API](../active-directory/develop/quickstart-configure-app-access-web-apis.md#add-permissions-to-access-your-web-api).
+The service principal for a Run As account doesn't have permissions to read Azure AD by default. If you want to add permissions to read or manage Azure AD, you must grant the permissions on the service principal under **API permissions**. To learn more, see [Add permissions to access your web API](../active-directory/develop/quickstart-configure-app-access-web-apis.md#add-permissions-to-access-your-web-api).
## <a name="permissions"></a>Run As account permissions
To verify that the situation producing the error message has been remedied:
1. From the Azure Active Directory pane in the Azure portal, select **Users and groups**. 2. Select **All users**. 3. Choose your name, then select **Profile**.
-4. Ensure that the value of the **User type** attribute under your user's profile is not set to **Guest**.
+4. Ensure that the value of the **User type** attribute under your user's profile isn't set to **Guest**.
## Role-based access control
If you have strict security controls for permission assignment in resource group
## Runbook authentication with Hybrid Runbook Worker
-Runbooks running on a Hybrid Runbook Worker in your datacenter or against computing services in other cloud environments like AWS, cannot use the same method that is typically used for runbooks authenticating to Azure resources. This is because those resources are running outside of Azure and therefore, requires their own security credentials defined in Automation to authenticate to resources that they access locally. For more information about runbook authentication with runbook workers, see [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md).
+Runbooks running on a Hybrid Runbook Worker in your datacenter, or against computing services in other cloud environments like AWS, can't use the same method that is typically used for runbooks authenticating to Azure resources. Because those resources run outside of Azure, they require their own security credentials defined in Automation to authenticate to the resources that they access locally. For more information about runbook authentication with runbook workers, see [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md).
For runbooks that use Hybrid Runbook Workers on Azure VMs, you can use [runbook authentication with managed identities](automation-hrw-run-runbooks.md#runbook-auth-managed-identities) instead of Run As accounts to authenticate to your Azure resources.
automation Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/managed-identity.md
Title: Troubleshoot Azure Automation managed identity issues (preview)
description: This article tells how to troubleshoot and resolve issues when using a managed identity with an Automation account. Previously updated : 04/28/2021 Last updated : 06/28/2021
This article discusses solutions to problems that you might encounter when you use a managed identity with your Automation account. For general information about using managed identity with Automation accounts, see [Azure Automation account authentication overview](../automation-security-overview.md#managed-identities-preview).
+## Scenario: Fail to get MSI token for account
+
+### Issue
+
+When working with a user-assigned managed identity in your Automation account, you receive an error similar to: `Failed to get MSI token for account a123456b-1234-12a3-123a-aa123456aa0b`.
+
+### Cause
+
+Using a user-assigned managed identity before enabling a system-assigned managed identity for your Automation account.
+
+### Resolution
+
+Enable a system-assigned managed identity for your Automation account. Then use the user-assigned managed identity.
+ ## Scenario: Attempt to use managed identity with Automation account fails ### Issue
azure-arc Backup Restore Postgresql Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/backup-restore-postgresql-hyperscale.md
Last updated 06/02/2021
-# Back up and restore Azure Arc enabled PostgreSQL Hyperscale server groups
+# Back up and restore Azure Arc-enabled PostgreSQL Hyperscale server groups
[!INCLUDE [azure-arc-common-prerequisites](../../../includes/azure-arc-common-prerequisites.md)] [!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
-When you back up or restore your Azure Arc enabled PostgreSQL Hyperscale server group, the entire set of databases on all the PostgreSQL nodes of your server group is backed-up and/or restored.
+When you back up or restore your Azure Arc-enabled PostgreSQL Hyperscale server group, the entire set of databases on all the PostgreSQL nodes of your server group is backed-up and/or restored.
## Take a manual full backup
Where:
- __server-name__ indicates a server group - __no-wait__ indicates that the command line will not wait for the backup to complete for you to be able to continue to use this command-line window
-This command will coordinate a distributed full backup across all the nodes that constitute your Azure Arc enabled PostgreSQL Hyperscale server group. In other words, it will backup all data in your Coordinator and Worker nodes.
+This command will coordinate a distributed full backup across all the nodes that constitute your Azure Arc-enabled PostgreSQL Hyperscale server group. In other words, it will back up all data in your Coordinator and Worker nodes.
For example:
azdata arc postgres backup restore -sn <target server group name> [-ssn <source
Where: - __backup-id__ is the ID of the backup shown in the list backup command shown above.
-This will coordinate a distributed full restore across all the nodes that constitute your Azure Arc enabled PostgreSQL Hyperscale server group. In other words, it will restore all data in your Coordinator and Worker nodes.
+This will coordinate a distributed full restore across all the nodes that constitute your Azure Arc-enabled PostgreSQL Hyperscale server group. In other words, it will restore all data in your Coordinator and Worker nodes.
#### Examples:
azure-arc Change Postgresql Port https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/change-postgresql-port.md
Title: Change the PostgreSQL port
-description: Change the port on which the Azure Arc enabled PostgreSQL Hyperscale server group is listening.
+description: Change the port on which the Azure Arc-enabled PostgreSQL Hyperscale server group is listening.
azure-arc Concepts Distributed Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/concepts-distributed-postgres-hyperscale.md
Title: Concepts for distributing data and scaling out with Arc enabled PostgreSQL Hyperscale server group-+ description: Concepts for distributing data with Arc enabled PostgreSQL Hyperscale server group
# Concepts for distributing data with Arc enabled PostgreSQL Hyperscale server group
-This article explains key concepts that are important to benefit the most from Azure Arc enabled PostgreSQL Hyperscale.
-The articles linked below point to the concepts explained for Azure Database for PostgreSQL Hyperscale (Citus). It is the same technology as Azure Arc enabled PostgreSQL Hyperscale so the same concepts and perspectives apply.
+This article explains key concepts that are important to benefit the most from Azure Arc-enabled PostgreSQL Hyperscale.
+The articles linked below point to the concepts explained for Azure Database for PostgreSQL Hyperscale (Citus). It is the same technology as Azure Arc-enabled PostgreSQL Hyperscale so the same concepts and perspectives apply.
**What is the difference between them?** - _Azure Database for PostgreSQL Hyperscale (Citus)_ This is the hyperscale form factor of the Postgres database engine available as database as a service in Azure (PaaS). It is powered by the Citus extension that enables the Hyperscale experience. In this form factor, the service runs in the Microsoft datacenters and is operated by Microsoft. -- _Azure Arc enabled PostgreSQL Hyperscale_
+- _Azure Arc-enabled PostgreSQL Hyperscale_
-This is the hyperscale form factor of the Postgres database engine offered available with Azure Arc enabled Data Service. In this form factor, our customers provide the infrastructure that host the systems and operate them.
+This is the hyperscale form factor of the Postgres database engine available with Azure Arc-enabled Data Services. In this form factor, our customers provide the infrastructure that hosts the systems and operate them.
-The key concepts around Azure Arc enabled PostgreSQL Hyperscale are summarized below:
+The key concepts around Azure Arc-enabled PostgreSQL Hyperscale are summarized below:
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## Nodes and tables
-It is important to know about a following concepts to benefit the most from Azure Arc enabled Postgres Hyperscale:
-- Specialized Postgres nodes in Azure Arc enabled PostgreSQL Hyperscale: coordinator and workers
+It is important to know about the following concepts to benefit the most from Azure Arc-enabled Postgres Hyperscale:
+- Specialized Postgres nodes in Azure Arc-enabled PostgreSQL Hyperscale: coordinator and workers
- Types of tables: distributed tables, reference tables and local tables - Shards
See more information at [Nodes and tables in Azure Database for PostgreSQL ΓÇô H
## Determine the application type Clearly identifying the type of application you are building is important. Why?
-Because running efficient queries on a Azure Arc enabled PostgreSQL Hyperscale server group requires that tables be properly distributed across servers.
-The recommended distribution varies by the type of application and its query patterns. There are broadly two kinds of applications that work well on Azure Arc enabled Postgres Hyperscale:
+Because running efficient queries on a Azure Arc-enabled PostgreSQL Hyperscale server group requires that tables be properly distributed across servers.
+The recommended distribution varies by the type of application and its query patterns. There are broadly two kinds of applications that work well on Azure Arc-enabled Postgres Hyperscale:
- Multi-Tenant Applications - Real-Time Applications
See details at [Determining application type](../../postgresql/concepts-hypersca
## Choose a distribution column Why choose a distributed column?
-This is one of the most important modeling decisions you'll make. Azure Arc enabled PostgreSQL Hyperscale stores rows in shards based on the value of the rows' distribution column. The correct choice groups related data together on the same physical nodes, which makes queries fast and adds support for all SQL features.
+This is one of the most important modeling decisions you'll make. Azure Arc-enabled PostgreSQL Hyperscale stores rows in shards based on the value of the rows' distribution column. The correct choice groups related data together on the same physical nodes, which makes queries fast and adds support for all SQL features.
An incorrect choice makes the system run slowly and won't support all SQL features across nodes. This article gives distribution column tips for the two most common hyperscale scenarios. See details at [Choose distribution columns](../../postgresql/concepts-hyperscale-choose-distribution-column.md).
See details at [Table colocation](../../postgresql/concepts-hyperscale-colocatio
## Next steps-- [Read about creating Azure Arc enabled PostgreSQL Hyperscale](create-postgresql-hyperscale-server-group.md)-- [Read about scaling out Azure Arc enabled PostgreSQL Hyperscale server groups created in your Arc Data Controller](scale-out-in-postgresql-hyperscale-server-group.md)-- [Read about Azure Arc enabled Data Services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services)
+- [Read about creating Azure Arc-enabled PostgreSQL Hyperscale](create-postgresql-hyperscale-server-group.md)
+- [Read about scaling out Azure Arc-enabled PostgreSQL Hyperscale server groups created in your Arc Data Controller](scale-out-in-postgresql-hyperscale-server-group.md)
+- [Read about Azure Arc-enabled Data Services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services)
- [Read about Azure Arc](https://aka.ms/azurearc)
azure-arc Configure Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/configure-managed-instance.md
Title: Configure Azure Arc enabled SQL managed instance
-description: Configure Azure Arc enabled SQL managed instance
+ Title: Configure Azure Arc-enabled SQL managed instance
+description: Configure Azure Arc-enabled SQL managed instance
Previously updated : 09/22/2020 Last updated : 07/13/2021
-# Configure Azure Arc enabled SQL managed instance
+# Configure Azure Arc-enabled SQL managed instance
-This article explains how to configure Azure Arc enabled SQL managed instance.
+This article explains how to configure Azure Arc-enabled SQL managed instance.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## Configure resources
-### Configure using [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]
+### Configure using CLI
-You can edit the configuration of Azure Arc enabled SQL Managed Instances with the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]. Run the following command to see configuration options.
+You can edit the configuration of Azure Arc-enabled SQL Managed Instances with the CLI. Run the following command to see configuration options.
-```
-azdata arc sql mi edit --help
+```azurecli
+az sql mi-arc edit --help
``` The following example sets the CPU core and memory requests and limits.
-```
-azdata arc sql mi edit --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n <NAME_OF_SQL_MI>
+```azurecli
+az sql mi-arc edit --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n <NAME_OF_SQL_MI>
``` To view the changes made to the SQL managed instance, you can use the following commands to view the configuration yaml file:
-```
-azdata arc sql mi show -n <NAME_OF_SQL_MI>
+```azurecli
+az sql mi-arc show -n <NAME_OF_SQL_MI>
``` ## Configure Server options
-You can configure server configuration settings for Azure Arc enabled SQL managed instance after creation time. This article describes how to configure settings like enabling or disabling mssql Agent, enable specific trace flags for troubleshooting scenarios.
+You can configure server configuration settings for Azure Arc-enabled SQL managed instance after creation. This article describes how to configure settings such as enabling or disabling the SQL agent, or enabling specific trace flags for troubleshooting scenarios.
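As a hedged sketch, these settings are changed with the same `edit` command shown earlier; the exact flag names depend on the CLI version you have installed, so list them first:
```azurecli
# Flag names vary by CLI version; confirm what your version supports
az sql mi-arc edit --help

# Example, assuming your CLI version exposes an --agent-enabled flag for the SQL agent
az sql mi-arc edit -n <NAME_OF_SQL_MI> --agent-enabled true
```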
To change any of these settings, follow these steps:
azure-arc Configure Security Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/configure-security-postgres-hyperscale.md
Title: Configure security for your Azure Arc enabled PostgreSQL Hyperscale server group
-description: Configure security for your Azure Arc enabled PostgreSQL Hyperscale server group
+ Title: Configure security for your Azure Arc-enabled PostgreSQL Hyperscale server group
+description: Configure security for your Azure Arc-enabled PostgreSQL Hyperscale server group
Last updated 06/02/2021
-# Configure security for your Azure Arc enabled PostgreSQL Hyperscale server group
+# Configure security for your Azure Arc-enabled PostgreSQL Hyperscale server group
This document describes various aspects related to security of your server group: - Encryption at rest
This document describes various aspects related to security of your server group
You can implement encryption at rest by encrypting the disks on which you store your databases and/or by using database functions to encrypt the data you insert or update. ### Hardware: Linux host volume encryption
-Implement system data encryption to secure any data that resides on the disks used by your Azure Arc enabled Data Services setup. You can read more about this topic:
+Implement system data encryption to secure any data that resides on the disks used by your Azure Arc-enabled Data Services setup. You can read more about this topic:
- [Data encryption at rest](https://wiki.archlinux.org/index.php/Data-at-rest_encryption) on Linux in general -- Disk encryption with LUKS `cryptsetup` encrypt command (Linux)(https://www.cyberciti.biz/security/howto-linux-hard-disk-encryption-with-luks-cryptsetup-command/) specifically Since Azure Arc enabled Data Services runs on the physical infrastructure that you provide, you are in charge of securing the infrastructure.
+- [Disk encryption with LUKS `cryptsetup` encrypt command (Linux)](https://www.cyberciti.biz/security/howto-linux-hard-disk-encryption-with-luks-cryptsetup-command/) specifically. Since Azure Arc-enabled Data Services runs on the physical infrastructure that you provide, you are in charge of securing the infrastructure.
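As a rough, distribution-agnostic sketch of host volume encryption with LUKS (the device path and mount point are placeholders, not Arc requirements):
```console
# Placeholder device and mount point: encrypt a data volume before exposing it to Kubernetes
sudo cryptsetup luksFormat /dev/sdb1
sudo cryptsetup open /dev/sdb1 encrypted_data
sudo mkfs.ext4 /dev/mapper/encrypted_data
sudo mount /dev/mapper/encrypted_data /mnt/encrypted-data
```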
### Software: Use the PostgreSQL `pgcrypto` extension in your server group
-In addition of encrypting the disks used to host your Azure Arc setup, you can configure your Azure Arc enabled PostgreSQL Hyperscale server group to expose mechanisms that your applications can use to encrypt data in your database(s). The `pgcrypto` extension is part of the `contrib` extensions of Postgres and is available in your Azure Arc enabled PostgreSQL Hyperscale server group. You find details about the `pgcrypto` extension [here](https://www.postgresql.org/docs/current/pgcrypto.html).
+In addition to encrypting the disks used to host your Azure Arc setup, you can configure your Azure Arc-enabled PostgreSQL Hyperscale server group to expose mechanisms that your applications can use to encrypt data in your database(s). The `pgcrypto` extension is part of the `contrib` extensions of Postgres and is available in your Azure Arc-enabled PostgreSQL Hyperscale server group. You can find details about the `pgcrypto` extension [here](https://www.postgresql.org/docs/current/pgcrypto.html).
In summary, with the following commands you enable the extension, create it, and use it:
When I connect with my application and I pass a password, it will look up in the
(1 row) ```
-This small example demonstrates that you can encrypt data at rest (store encrypted data) in Azure Arc enabled PostgreSQL Hyperscale using the Postgres `pgcrypto` extension and your applications can use functions offered by `pgcrypto` to manipulate this encrypted data.
+This small example demonstrates that you can encrypt data at rest (store encrypted data) in Azure Arc-enabled PostgreSQL Hyperscale using the Postgres `pgcrypto` extension and your applications can use functions offered by `pgcrypto` to manipulate this encrypted data.
## User management ### General perspectives You can use the standard Postgres way to create users or roles. However, if you do so, these artifacts will only be available on the coordinator node. During preview, these users/roles will not yet be able to access data that is distributed outside the Coordinator node and on the Worker nodes of your server group. The reason is that in preview, the user definition is not replicated to the Worker nodes. ### Change the password of the _postgres_ administrative user
-Azure Arc enabled PostgreSQL Hyperscale comes with the standard Postgres administrative user _postgres_ for which you set the password when you create your server group.
+Azure Arc-enabled PostgreSQL Hyperscale comes with the standard Postgres administrative user _postgres_ for which you set the password when you create your server group.
The general format of the command to change its password is: ```console azdata arc postgres server edit --name <server group name> --admin-password
azure-arc Configure Server Parameters Postgresql Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/configure-server-parameters-postgresql-hyperscale.md
Title: Configure Postgres engine server parameters for your PostgreSQL Hyperscale server group on Azure Arc-+ description: Configure Postgres engine server parameters for your PostgreSQL Hyperscale server group on Azure Arc
Last updated 06/02/2021
-# Set the database engine settings for Azure Arc enabled PostgreSQL Hyperscale
+# Set the database engine settings for Azure Arc-enabled PostgreSQL Hyperscale
This document describes the steps to set the database engine settings of your PostgreSQL Hyperscale server group to custom (non-default) values. For details about what database engine parameters can be set and what their default value is, refer to the PostgreSQL documentation [here](https://www.postgresql.org/docs/current/runtime-config.html).
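As a hedged example (confirm the exact parameter name with `azdata arc postgres server edit --help` for your CLI version), an engine setting such as `shared_buffers` can be applied like this:
```console
# Assumes the --engine-settings parameter is available in your CLI version; adjust the value to your workload
azdata arc postgres server edit --name postgres01 --engine-settings "shared_buffers=256MB"
```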
azure-arc Connect Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/connect-managed-instance.md
Title: Connect to Azure Arc enabled SQL Managed Instance
-description: Connect to Azure Arc enabled SQL Managed Instance
+ Title: Connect to Azure Arc-enabled SQL Managed Instance
+description: Connect to Azure Arc-enabled SQL Managed Instance
Previously updated : 09/22/2020 Last updated : 07/13/2021
-# Connect to Azure Arc enabled SQL Managed Instance
+# Connect to Azure Arc-enabled SQL Managed Instance
-This article explains how you can connect to your Azure Arc enabled SQL Managed Instance.
+This article explains how you can connect to your Azure Arc-enabled SQL Managed Instance.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
-## View Azure Arc enabled SQL Managed Instances
+## View Azure Arc-enabled SQL Managed Instances
-To view the Azure Arc enabled SQL Managed Instance and the external endpoints use the following command:
+To view the Azure Arc-enabled SQL Managed Instance and the external endpoints use the following command:
-```console
-azdata arc sql mi list
+```azurecli
+az sql mi-arc list
``` Output should look like this:
azure-arc Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/connectivity.md
Title: Connectivity modes and requirements
-description: Explains Azure Arc enabled data services connectivity options for from your environment to Azure
+description: Explains Azure Arc-enabled data services connectivity options from your environment to Azure
Previously updated : 09/22/2020 Last updated : 07/13/2021
## Connectivity modes
-There are multiple options for the degree of connectivity from your Azure Arc enabled data services environment to Azure. As your requirements vary based on business policy, government regulation, or the availability of network connectivity to Azure, you can choose from the following connectivity modes.
+There are multiple options for the degree of connectivity from your Azure Arc-enabled data services environment to Azure. As your requirements vary based on business policy, government regulation, or the availability of network connectivity to Azure, you can choose from the following connectivity modes.
-Azure Arc enabled data services provides you the option to connect to Azure in two different *connectivity modes*:
+Azure Arc-enabled data services provides you the option to connect to Azure in two different *connectivity modes*:
- Directly connected - Indirectly connected
-The connectivity mode provides you the flexibility to choose how much data is sent to Azure and how users interact with the Arc Data Controller. Depending on the connectivity mode that is chosen, some functionality of Azure Arc enabled data services may or may not be available.
+The connectivity mode provides you the flexibility to choose how much data is sent to Azure and how users interact with the Arc Data Controller. Depending on the connectivity mode that is chosen, some functionality of Azure Arc-enabled data services may or may not be available.
-Importantly, if the Azure Arc enabled data services are directly connected to Azure, then users can use [Azure Resource Manager APIs](/rest/api/resources/), the Azure CLI, and the Azure portal to operate the Azure Arc data services. The experience in directly connected mode is much like how you would use any other Azure service with provisioning/de-provisioning, scaling, configuring, and so on all in the Azure portal. If the Azure Arc enabled data services are indirectly connected to Azure, then the Azure portal is a read-only view. You can see the inventory of SQL managed instances and Postgres Hyperscale instances that you have deployed and the details about them, but you cannot take action on them in the Azure portal. In the indirectly connected mode, all actions must be taken locally using Azure Data Studio, the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)], or Kubernetes native tools like kubectl.
+Importantly, if the Azure Arc-enabled data services are directly connected to Azure, then users can use [Azure Resource Manager APIs](/rest/api/resources/), the Azure CLI, and the Azure portal to operate the Azure Arc data services. The experience in directly connected mode is much like how you would use any other Azure service with provisioning/de-provisioning, scaling, configuring, and so on all in the Azure portal. If the Azure Arc-enabled data services are indirectly connected to Azure, then the Azure portal is a read-only view. You can see the inventory of SQL managed instances and Postgres Hyperscale instances that you have deployed and the details about them, but you cannot take action on them in the Azure portal. In the indirectly connected mode, all actions must be taken locally using Azure Data Studio, the appropriate CLI, or Kubernetes native tools like kubectl.
Additionally, Azure Active Directory and Azure Role-Based Access Control can be used in the directly connected mode only because there is a dependency on a continuous and direct connection to Azure to provide this functionality.
Some Azure-attached services are only available when they can be directly reache
|**Feature**|**Indirectly connected**|**Directly connected**| |||| |**Automatic high availability**|Supported|Supported|
-|**Self-service provisioning**|Supported<br/>Creation can be done through Azure Data Studio, [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)], or Kubernetes native tools (helm, kubectl, oc, etc.), or using Azure Arc enabled Kubernetes GitOps provisioning.|Supported<br/>In addition to the indirectly connected mode creation options, you can also create through the Azure portal, Azure Resource Manager APIs, the Azure CLI, or ARM templates. **Pending availability of directly connected mode**
+|**Self-service provisioning**|Supported<br/>Creation can be done through Azure Data Studio, the appropriate CLI, or Kubernetes native tools (helm, kubectl, oc, etc.), or using Azure Arc-enabled Kubernetes GitOps provisioning.|Supported<br/>In addition to the indirectly connected mode creation options, you can also create through the Azure portal, Azure Resource Manager APIs, the Azure CLI, or ARM templates. **Pending availability of directly connected mode**
|**Elastic scalability**|Supported|Supported<br/>**Pending availability of directly connected mode**| |**Billing**|Supported<br/>Billing data is periodically exported out and sent to Azure.|Supported<br/>Billing data is automatically and continuously sent to Azure and reflected in near real time. **Pending availability of directly connected mode**| |**Inventory management**|Supported<br/>Inventory data is periodically exported out and sent to Azure.<br/><br/>Use client tools like Azure Data Studio, Azure Data CLI, or `kubectl` to view and manage inventory locally.|Supported<br/>Inventory data is automatically and continuously sent to Azure and reflected in near real time. As such, you can manage inventory directly from the Azure portal. **Pending availability of directly connected mode**|
Some Azure-attached services are only available when they can be directly reache
||||||| |**Container images**|Microsoft Container Registry -> Customer|Required|No|Indirect or direct|Container images are the method for distributing the software. In an environment which can connect to the Microsoft Container Registry (MCR) over the Internet, the container images can be pulled directly from MCR. In the event that the deployment environment doesn't have direct connectivity, you can pull the images from MCR and push them to a private container registry in the deployment environment. At creation time, you can configure the creation process to pull from the private container registry instead of MCR. This also applies to automated updates.| |**Resource inventory**|Customer environment -> Azure|Required|No|Indirect or direct|An inventory of data controllers, database instances (PostgreSQL and SQL) is kept in Azure for billing purposes and also for purposes of creating an inventory of all data controllers and database instances in one place which is especially useful if you have more than one environment with Azure Arc data services. As instances are provisioned, deprovisioned, scaled out/in, scaled up/down the inventory is updated in Azure.|
-|**Billing telemetry data**|Customer environment -> Azure|Required|No|Indirect or direct|Utilization of database instances must be sent to Azure for billing purposes. There is no cost for Azure Arc enabled data services during the preview period.|
+|**Billing telemetry data**|Customer environment -> Azure|Required|No|Indirect or direct|Utilization of database instances must be sent to Azure for billing purposes. There is no cost for Azure Arc-enabled data services during the preview period.|
|**Monitoring data and logs**|Customer environment -> Azure|Optional|Maybe depending on data volume (see [Azure Monitor pricing](https://azure.microsoft.com/en-us/pricing/details/monitor/))|Indirect or direct|You may want to send the locally collected monitoring data and logs to Azure Monitor for aggregating data across multiple environments into one place and also to use Azure Monitor services like alerts, using the data in Azure Machine Learning, etc.| |**Azure Role-based Access Control (Azure RBAC)**|Customer environment -> Azure -> Customer Environment|Optional|No|Direct only|If you want to use Azure RBAC, then connectivity must be established with Azure at all times. If you don't want to use Azure RBAC then local Kubernetes RBAC can be used. **Pending availability of directly connected mode**| |**Azure Active Directory (AD)**|Customer environment -> Azure -> Customer environment|Optional|Maybe, but you may already be paying for Azure AD|Direct only|If you want to use Azure AD for authentication, then connectivity must be established with Azure at all times. If you don't want to use Azure AD for authentication, you can use Active Directory Federation Services (ADFS) over Active Directory. **Pending availability of directly connected mode**| |**Backup and restore**|Customer environment -> Customer environment|Required|No|Direct or indirect|The backup and restore service can be configured to point to local storage classes. | |**Azure backup - long term retention**| Customer environment -> Azure | Optional| Yes for Azure storage | Direct only |You may want to send backups that are taken locally to Azure Backup for long-term, off-site retention of backups and bring them back to the local environment for restore. **Pending availability of directly connected mode**| |**Azure Defender security services**|Customer environment -> Azure -> Customer environment|Optional|Yes|Direct only|**Pending availability of directly connected mode**|
-|**Provisioning and configuration changes from Azure portal**|Customer environment -> Azure -> Customer environment|Optional|No|Direct only|Provisioning and configuration changes can be done locally using Azure Data Studio or the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]. In directly connected mode, you will also be able to provision and make configuration changes from the Azure portal. **Pending availability of directly connected mode**|
+|**Provisioning and configuration changes from Azure portal**|Customer environment -> Azure -> Customer environment|Optional|No|Direct only|Provisioning and configuration changes can be done locally using Azure Data Studio or the appropriate CLI. In directly connected mode, you will also be able to provision and make configuration changes from the Azure portal. **Pending availability of directly connected mode**|
## Details on internet addresses, ports, encryption, and proxy server support
The following sections provide details for these connections.
### Microsoft Container Registry (MCR)
-The Microsoft Container Registry hosts the Azure Arc enabled data services container images. You can pull these images from MCR and push them to a private container registry and configure the data controller deployment process to pull the container images from that private container registry.
+The Microsoft Container Registry hosts the Azure Arc-enabled data services container images. You can pull these images from MCR and push them to a private container registry and configure the data controller deployment process to pull the container images from that private container registry.
#### Connection source
Yes
None ### Azure Resource Manager APIs
-Azure Data Studio, [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] and Azure CLI connect to the Azure Resource Manager APIs to send and retrieve data to and from Azure for some features.
+Azure Data Studio and the Azure CLI connect to the Azure Resource Manager APIs to send and retrieve data to and from Azure for some features.
#### Connection source
-A computer running Azure Data Studio, [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)], or Azure CLI that is connecting to Azure.
+A computer running Azure Data Studio or the Azure CLI that is connecting to Azure.
#### Connection target
Azure Active Directory
### Azure monitor APIs
-Azure Data Studio, [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] and Azure CLI connect to the Azure Resource Manager APIs to send and retrieve data to and from Azure for some features.
+Azure Data Studio and the Azure CLI connect to the Azure Monitor APIs to send and retrieve data to and from Azure for some features.
#### Connection source
-A computer running [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] or Azure CLI that is uploading monitoring metrics or logs to Azure Monitor.
+A computer running Azure CLI that is uploading monitoring metrics or logs to Azure Monitor.
#### Connection target
Yes
Azure Active Directory > [!NOTE]
-> For now, all browser HTTPS/443 connections to the Grafana and Kibana dashboards and from the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] to the data controller API are SSL encrypted using self-signed certificates. A feature will be available in the future that will allow you to provide your own certificates for encryption of these SSL connections.
+> For now, all browser HTTPS/443 connections to the Grafana and Kibana dashboards and to the data controller API are SSL encrypted using self-signed certificates. A feature will be available in the future that will allow you to provide your own certificates for encryption of these SSL connections.
-Connectivity from Azure Data Studio and [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] to the Kubernetes API server uses the Kubernetes authentication and encryption that you have established. Each user that is using Azure Data Studio and the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] must have an authenticated connection to the Kubernetes API to perform many of the actions related to Azure Arc enabled data services.
+Connectivity from Azure Data Studio to the Kubernetes API server uses the Kubernetes authentication and encryption that you have established. Each user that is using Azure Data Studio or the CLI must have an authenticated connection to the Kubernetes API to perform many of the actions related to Azure Arc-enabled data services.
azure-arc Create Custom Configuration Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-custom-configuration-template.md
+
+ Title: Create custom configuration templates
+description: Create custom configuration templates
++++++ Last updated : 07/13/2021++
+# Create custom configuration templates
+
+This article explains how to create a custom configuration template for Azure Arc-enabled data controller.
++
+One of the required parameters during deployment of a data controller, whether in direct or indirect mode, is the `--profile-name` parameter. Currently, the list of available built-in profiles can be found by running the following command:
+
+```azurecli
+azdata arc dc config list
+```
+These profiles are template JSON files that have various settings for the Azure Arc-enabled data controller, such as Docker registry and repository settings, storage classes for data and logs, storage sizes for data and logs, security, service type, and so on, and they can be customized to your environment.
+
+## Create custom.json file
+
+Run `azdata arc dc config init` to initialize a control.json file with pre-defined settings based on your Kubernetes distribution.
+For instance, a template control.json file for a Kubernetes cluster based on upstream kubeadm can be created as follows:
+
+```azurecli
+azdata arc dc config init --source azure-arc-kubeadm --path custom
+```
+The created control.json file can be edited in any editor such as Visual Studio Code to customize the settings appropriate for your environment.
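+If you prefer a scripted change over a text editor, individual values can also be updated with the `config replace` command; the JSON path below is an assumption based on the template structure, so verify it against your generated file:
+
+```azurecli
+# Assumed JSON path; check your generated control.json for the actual structure
+azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=<your storage class>"
+```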
+
+## Use custom control.json file to deploy Azure Arc-enabled data controller using azdata CLI
+
+Once the template file is updated, the file can be applied during Azure Arc-enabled data controller creation as follows:
+
+```azurecli
+azdata arc dc create --path ./custom --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+
+#Example:
+#azdata arc dc create --path ./custom --namespace arc --name arc --subscription <subscription ID> --resource-group my-resource-group --location eastus --connectivity-mode indirect
+```
+
+## Use custom control.json file for deploying Azure Arc data controller using Azure portal
+
+From the Azure Arc data controller create screen, select "Configure custom template" under Custom template. This will invoke a blade to provide custom settings. In this blade, you can either type in the values for the various settings, or upload a pre-configured control.json file directly.
+
+After ensuring the values are correct, click Apply to proceed with the Azure Arc data controller deployment.
+
+## Next steps
+
+[Deploy data controller - direct connect mode (prerequisites)](create-data-controller-direct-prerequisites.md)
+
+[Create Azure Arc data controller (CLI)](create-data-controller-direct-cli.md)
azure-arc Create Data Controller Direct Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-direct-azure-portal.md
+
+ Title: Deploy Azure Arc data controller from Azure portal| Direct connect mode
+description: Explains how to deploy the data controller in direct connect mode from Azure portal.
++++++ Last updated : 07/13/2021+++
+# Create Azure Arc data controller from Azure portal - Direct connectivity mode
++
+This article describes how to deploy the Azure Arc data controller in direct connect mode during the current preview of this feature.
+
+## Complete prerequisites
+
+Before you begin, verify that you have completed the prerequisites in [Deploy data controller - direct connect mode - prerequisites](create-data-controller-direct-prerequisites.md).
+
+## Deploy Azure Arc data controller
+
+The Azure Arc data controller create flow can be launched from the Azure portal in one of the following ways:
+
+- From the search bar in Azure portal, search for "Azure Arc data controllers", and select "+ Create"
+- From the Overview page of your Azure Arc-enabled Kubernetes cluster,
+ - Select "Extensions (preview)" under Settings.
+ - Select "Add" from the Extensions overview page and then select "Azure Arc data controller"
+ - Select Create from the Azure Arc data controller marketplace gallery
+
+Either of these actions should bring you to the Azure Arc data controller prerequisites page of the create flow.
+
+- Ensure the Azure Arc-enabled Kubernetes cluster (Direct connectivity mode) option is selected. Select "Next : Data controller details"
+- In the **Data controller details** page:
+ - Select the Azure Subscription and Resource group where the Azure Arc data controller will be projected to.
+ - Enter a **name** for the Data controller
+ - Select a pre-created **Custom location** or select "Create new" to create a new custom location. If you choose to create a new custom location, enter a name for the new custom location, select the Azure Arc-enabled Kubernetes cluster from the dropdown, and then enter a namespace to be associated with the new custom location, and finally select Create in the Create new custom location window. Learn more about [custom locations](../kubernetes/conceptual-custom-locations.md)
+  - **Kubernetes configuration** - Select a Kubernetes configuration template that best matches your Kubernetes distribution from the dropdown. If you choose to use your own settings or have a custom profile you want to use, select the Custom template option from the dropdown. In the blade that opens on the right side, enter the details for Docker credentials, repository information, Image tag, Image pull policy, infrastructure type, storage settings for data, logs and their sizes, Service type, and ports for controller and management proxy. Select Apply when all the required information is provided. You can also choose to upload your own template file by selecting the "Upload a template (JSON)" option from the top of the blade. If you use custom settings and would like to download a copy of those settings, use the "Download this template (JSON)" option to do so. Learn more about [custom configuration profiles](create-custom-configuration-template.md).
+ - Select the appropriate **Service Type** for your environment
+ - **Administrator account** - Enter the credentials for the Data controller login and password
+ - **Service Principal** - Enter the Client Id, Tenant ID and the Client Secret information for the Service principal account to be used.
+ - Select the "Next: Additional settings" button to proceed forward after all the required information is provided.
+- In the **Additional Settings** page:
+ - If you choose to upload your logs to Azure Log Analytics automatically, enter the Log Analytics workspace ID and the Log analytics shared access key
+ - If you choose to NOT upload your logs to Azure Log Analytics automatically, uncheck the "Enable logs upload" checkbox.
+  - Select "Next: Tags" to proceed.
+- In the **Tags** page, enter the Names and Values for your tags and select "Next: Review + Create".
+- In the **Review + Create** page, view the summary of your deployment. Ensure all the settings look correct and select "Create" to start the deployment of Azure Arc data controller.
+
+## Monitor the creation from Azure portal
+
+Selecting the "Create" button from the previous step should launch the Azure deployment overview page which shows the progress of the deployment of Azure Arc data controller.
+
+## Monitor the creation from your Kubernetes cluster
+
+The progress of Azure Arc data controller deployment can be monitored as follows:
+
+- Check if the CRDs are created by running ```kubectl get crd ``` from your cluster
+- Check if the namespace is created by running ```kubectl get ns``` from your cluster
+- Check if the custom location is created by running ```az customlocation list --resource-group <resourcegroup> -o table```
+- Check the status of pod deployment by running ```kubectl get pods -n <namespace>```
+
+## Next steps
+
+[Create an Azure Arc-enabled SQL managed instance](create-sql-managed-instance.md)
+
+[Create an Azure Arc-enabled PostgreSQL Hyperscale server group](create-postgresql-hyperscale-server-group.md)
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-direct-cli.md
+
+ Title: Create Azure Arc data controller | Direct connect mode
+description: Explains how to create the data controller in direct connect mode.
++++++ Last updated : 07/13/2021+++
+# Create Azure Arc data controller in Direct connectivity mode using CLI
+
+This article describes how to create the Azure Arc data controller in **direct** connectivity mode using CLI, during the current preview of this feature.
++
+## Complete prerequisites
+
+Before you begin, verify that you have completed the prerequisites in [Deploy data controller - direct connect mode - prerequisites](create-data-controller-direct-prerequisites.md).
+
+Creating an Azure Arc data controller in **direct** connectivity mode involves the following steps:
+
+1. Create an Azure Arc-enabled data services extension.
+1. Create a custom location.
+1. Create the data controller.
+
+> [!NOTE]
+> Currently, the data controller creation step can only be performed from the portal. For details, see [Release notes](release-notes.md).
+
+## Create an Azure Arc-enabled data services extension
+
+Use the k8s-extension CLI to create a data services extension.
+
+### Set environment variables
+
+Set the following environment variables, which will then be used in the next step.
+
+#### Linux
+
+```bash
+# where you want the connected cluster resource to be created in Azure
+export subscription=<Your subscription ID>
+export resourceGroup=<Your resource group>
+export resourceName=<name of your connected kubernetes cluster>
+export location=<Azure location>
+```
+
+#### Windows PowerShell
+``` PowerShell
+# where you want the connected cluster resource to be created in Azure
+$ENV:subscription="<Your subscription ID>"
+$ENV:resourceGroup="<Your resource group>"
+$ENV:resourceName="<name of your connected kubernetes cluster>"
+$ENV:location="<Azure location>"
+```
+
+### Create the Arc data services extension
+
+#### Linux
+
+```bash
+# Set the extension name; it is not set in the environment variable step above (the value mirrors the PowerShell example)
+export ADSExtensionName="ads-extension"
+
+az k8s-extension create -c ${resourceName} -g ${resourceGroup} --name ${ADSExtensionName} --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace arc --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper
+
+az k8s-extension show -g ${resourceGroup} -c ${resourceName} --name ${ADSExtensionName} --cluster-type connectedclusters
+```
+
+#### Windows PowerShell
+```PowerShell
+$ENV:ADSExtensionName="ads-extension"
+
+az k8s-extension create -c "$ENV:resourceName" -g "$ENV:resourceGroup" --name "$ENV:ADSExtensionName" --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace arc --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper
+
+az k8s-extension show -g "$ENV:resourceGroup" -c "$ENV:resourceName" --name "$ENV:ADSExtensionName" --cluster-type connectedclusters
+```
+
+#### Deploy Azure Arc data services extension using private container registry and credentials
+
+Use the below command if you are deploying from your private repository:
+
+```azurecli
+az k8s-extension create -c "<connected cluster name>" -g "<resource group>" --name "<extension name>" --cluster-type connectedClusters --extension-type microsoft.arcdataservices --scope cluster --release-namespace "<namespace>" --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper --config imageCredentials.registry=<registry info> --config imageCredentials.username=<username> --config systemDefaultValues.image=<registry/repo/arc-bootstrapper:<imagetag>> --config-protected imageCredentials.password=$ENV:DOCKER_PASSWORD --debug
+```
+
+ For example
+```azurecli
+az k8s-extension create -c "my-connected-cluster" -g "my-resource-group" --name "arc-data-services" --cluster-type connectedClusters --extension-type microsoft.arcdataservices --scope cluster --release-namespace "arc" --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper --config imageCredentials.registry=mcr.microsoft.com --config imageCredentials.username=arcuser --config systemDefaultValues.image=mcr.microsoft.com/arcdata/arc-bootstrapper:latest --config-protected imageCredentials.password=$ENV:DOCKER_PASSWORD --debug
+```
++
+> [!NOTE]
+> The Arc data services extension install can take a couple of minutes to finish.
+
+### Verify the Arc data services extension is created
+
+You can verify that the Arc-enabled data services extension is created either from the portal or by connecting directly to the Arc-enabled Kubernetes cluster.
+
+#### Azure portal
+1. Log in to the Azure portal and browse to the resource group where the connected Kubernetes cluster resource is located.
+1. Select the Arc-enabled Kubernetes cluster (Type = "Kubernetes - Azure Arc") where the extension was deployed.
+1. In the navigation on the left side, under **Settings**, select "Extensions (preview)".
+1. You should see the extension that was just created earlier in an "Installed" state.
++
+#### kubectl CLI
+
+1. Connect to your Kubernetes cluster via a Terminal window.
+1. Run the following command and ensure that (1) the namespace mentioned above is created and (2) the `bootstrapper` pod is in a 'running' state before proceeding to the next step.
+
+``` console
+kubectl get pods -n <name of namespace used in the json template file above>
+```
+
+For example, the following gets the pods from `arc` namespace.
+
+```console
+#Example:
+kubectl get pods -n arc
+```
+
+## Create a custom location using custom location CLI extension
+
+A custom location is an Azure resource that is equivalent to a namespace in a Kubernetes cluster. Custom locations are used as a target to deploy resources to or from Azure. Learn more about custom locations in the [Custom locations on top of Azure Arc-enabled Kubernetes documentation](../kubernetes/conceptual-custom-locations.md).
+
+### Set environment variables
+
+#### Linux
+
+```bash
+export clName=mycustomlocation
+export clNamespace=arc
+export hostClusterId=$(az connectedk8s show -g ${resourceGroup} -n ${resourceName} --query id -o tsv)
+export extensionId=$(az k8s-extension show -g ${resourceGroup} -c ${resourceName} --cluster-type connectedClusters --name ${ADSExtensionName} --query id -o tsv)
+
+az customlocation create -g ${resourceGroup} -n ${clName} --namespace ${clNamespace} \
+ --host-resource-id ${hostClusterId} \
+ --cluster-extension-ids ${extensionId} --location eastus
+```
+
+#### Windows PowerShell
+```PowerShell
+$ENV:clName="mycustomlocation"
+$ENV:clNamespace="arc"
+$ENV:hostClusterId = az connectedk8s show -g "$ENV:resourceGroup" -n "$ENV:resourceName" --query id -o tsv
+$ENV:extensionId = az k8s-extension show -g "$ENV:resourceGroup" -c "$ENV:resourceName" --cluster-type connectedClusters --name "$ENV:ADSExtensionName" --query id -o tsv
+
+az customlocation create -g "$ENV:resourceGroup" -n "$ENV:clName" --namespace "$ENV:clNamespace" --host-resource-id "$ENV:hostClusterId" --cluster-extension-ids "$ENV:extensionId"
+```
+
+## Validate the custom location is created
+
+From the terminal, run the below command to list the custom locations, and validate that the **Provisioning State** shows Succeeded:
+
+```azurecli
+az customlocation list -o table
+```
+
+## Create the Azure Arc data controller
+
+After the extension and custom location are created, proceed to Azure portal to deploy the Azure Arc data controller.
+
+1. Log into the Azure portal.
+1. Search for "Azure Arc data controller" in the Azure Marketplace and initiate the Create flow.
+1. In the **Prerequisites** section, ensure that the Azure Arc-enabled Kubernetes cluster (direct mode) is selected and proceed to the next step.
+1. In the **Data controller details** section, choose a subscription and resource group.
+1. Enter a name for the data controller.
+1. Choose a configuration profile based on the Kubernetes distribution provider you are deploying to.
+1. Choose the Custom Location that you created in the previous step.
+1. Provide details for the data controller administrator login and password.
+1. Provide details for ClientId, TenantId, and Client Secret for the Service Principal that would be used to create the Azure objects. See [Upload metrics](upload-metrics-and-logs-to-azure-monitor.md) for detailed instructions on creating a Service Principal account and the roles that need to be granted for the account.
+1. Click **Next**, review the summary page for all the details and click on **Create**.
+
+## Monitor the creation
+
+When the Azure portal deployment status shows the deployment was successful, you can check the status of the Arc data controller deployment on the cluster as follows:
+
+```console
+kubectl get datacontrollers -n arc
+```
+
+## Next steps
+
+[Create an Azure Arc-enabled PostgreSQL Hyperscale server group](create-postgresql-hyperscale-server-group.md)
+
+[Create an Azure SQL managed instance on Azure Arc](create-sql-managed-instance.md)
azure-arc Create Data Controller Direct Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-direct-prerequisites.md
+
+ Title: Prerequisites | Direct connect mode
+description: Prerequisites to deploy the data controller in direct connect mode.
++++++ Last updated : 03/31/2021+++
+# Prerequisites to deploy the data controller in direct connectivity mode
+
+This article describes how to prepare to deploy a data controller for Azure Arc-enabled data services in direct connect mode. Deploying Azure Arc data controller requires additional understanding and concepts as described in [Plan to deploy Azure Arc-enabled data services](plan-azure-arc-data-services.md).
++
+At a high level, the prerequisites for creating Azure Arc data controller in **direct** connectivity mode include:
+
+1. Connect Kubernetes cluster to Azure using Azure Arc-enabled Kubernetes
+2. Create the service principal and configure roles for metrics
+3. Create Azure Arc-enabled data services data controller. This step involves creating
+ - Azure Arc data services extension
+ - custom location
+ - Azure Arc data controller
+
+## 1. Connect Kubernetes cluster to Azure using Azure Arc-enabled Kubernetes
+
+Connecting your Kubernetes cluster to Azure can be done by using the ```az``` CLI with the following extensions, as well as Helm.
+
+#### Install tools
+
+- Helm version 3.3+ ([install](https://helm.sh/docs/intro/install/))
+- Install or upgrade to the latest version of Azure CLI ([install](/sql/azdata/install/deploy-install-azdata))
+
+#### Add extensions for Azure CLI
+
+Install the latest versions of the following az extensions:
+- ```k8s-extension```
+- ```connectedk8s```
+- ```k8s-configuration```
+- `customlocation`
+
+Run the following commands to install the az CLI extensions:
+
+```azurecli
+az extension add --name k8s-extension
+az extension add --name connectedk8s
+az extension add --name k8s-configuration
+az extension add --name customlocation
+```
+
+If you've previously installed the ```k8s-extension```, ```connectedk8s```, ```k8s-configuration```, `customlocation` extensions, update to the latest version using the following command:
+
+```azurecli
+az extension update --name k8s-extension
+az extension update --name connectedk8s
+az extension update --name k8s-configuration
+az extension update --name customlocation
+```
+#### Connect your cluster to Azure
+
+To complete this task, follow the steps in [Connect an existing Kubernetes cluster to Azure arc](../kubernetes/quickstart-connect-cluster.md).
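+At its core (the full prerequisites and options are covered in the linked quickstart), the connection is established with a command along these lines:
+
+```azurecli
+# Uses the resource group and cluster name values chosen for this walkthrough
+az connectedk8s connect --name <name of your connected kubernetes cluster> --resource-group <Your resource group>
+```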
+
+After you connect your cluster to Azure, continue to create a Service Principal.
+
+## 2. Create service principal and configure roles for metrics
+
+Follow the steps detailed in the [Upload metrics](upload-metrics-and-logs-to-azure-monitor.md) article to create a Service Principal and grant the roles as described in the article.
+
+The SPN ClientID, TenantID, and Client Secret information will be required when you [deploy Azure Arc data controller](create-data-controller-direct-azure-portal.md).
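+As a hedged sketch of what that article covers (the exact role assignments to grant are listed there; the role shown below is an assumption commonly used for metrics upload):
+
+```azurecli
+# Create the service principal; note the appId (Client ID), password (Client Secret), and tenant values in the output
+az ad sp create-for-rbac --name <service principal name>
+
+# Role name is an assumption; confirm the required roles in the linked article
+az role assignment create --assignee <appId from the previous output> --role "Monitoring Metrics Publisher" --scope /subscriptions/<Your subscription ID>
+```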
+
+## 3. Create Azure Arc data services
+
+After you have completed these prerequisites, you can [Deploy Azure Arc data controller | Direct connect mode](create-data-controller-direct-azure-portal.md).
++
azure-arc Create Data Controller Indirect Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-indirect-azure-data-studio.md
+
+ Title: Create data controller in Azure Data Studio
+description: Create data controller in Azure Data Studio
++++++ Last updated : 07/13/2021+++
+# Create data controller in Azure Data Studio
+
+You can create a data controller using Azure Data Studio through the deployment wizard and notebooks.
++
+## Prerequisites
+
+- You need access to a Kubernetes cluster and have your kubeconfig file configured to point to the Kubernetes cluster you want to deploy to.
+- You need to [install the client tools](install-client-tools.md), including **Azure Data Studio**, the Azure Data Studio extension called **Azure Arc**, and the Azure CLI with the `arcdata` extension.
+- You need to log in to Azure in Azure Data Studio. To do this, type CTRL/Command + SHIFT + P to open the command palette and type **Azure**. Choose **Azure: Sign in**. In the panel that comes up, click the + icon in the top right to add an Azure account.
+
+## Use the Deployment Wizard to create Azure Arc data controller
+
+Follow these steps to create an Azure Arc data controller using the Deployment wizard.
+
+1. In Azure Data Studio, click on the Connections tab on the left navigation.
+1. Click on the **...** button at the top of the Connections panel and choose **New Deployment...**
+1. In the new Deployment wizard, choose **Azure Arc Data Controller**, and then click the **Select** button at the bottom.
+1. Ensure the prerequisite tools are available and meet the required versions. **Click Next**.
+1. Use the default kubeconfig file or select another one. Click **Next**.
+1. Choose a Kubernetes cluster context. Click **Next**.
+1. Choose a deployment configuration profile depending on your target Kubernetes cluster. **Click Next**.
+1. Choose the desired subscription and resource group.
+1. Select an Azure location.
+
+ The Azure location selected here is the location in Azure where the *metadata* about the data controller and the database instances that it manages will be stored. The data controller and database instances will be actually created in your Kubernetes cluster wherever that may be.
+
+ Once done, click **Next**.
+
+1. Enter a name for the data controller and for the namespace that the data controller will be created in.
+
+ The data controller and namespace name will be used to create a custom resource in the Kubernetes cluster so they must conform to [Kubernetes naming conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names).
+
+   If the namespace already exists, it will be used as long as it does not already contain other Kubernetes objects such as pods. If the namespace does not exist, an attempt will be made to create it. Creating a namespace in a Kubernetes cluster requires Kubernetes cluster administrator privileges. If you don't have Kubernetes cluster administrator privileges, ask your Kubernetes cluster administrator to perform the first few steps in the [Create a data controller using Kubernetes-native tools](./create-data-controller-using-kubernetes-native-tools.md) article, which are required to be performed by a Kubernetes administrator, before you complete this wizard.
++
+1. Select the storage class where the data controller will be deployed.
+1. Enter a username and password and confirm the password for the data controller administrator user account. Click **Next**.
+
+1. Review the deployment configuration.
+1. Click **Deploy** to deploy the desired configuration, or click **Script to Notebook** to review the deployment instructions or make any changes necessary, such as storage class names or service types. If you scripted to a notebook, click **Run All** at the top of the notebook.
+
+## Monitoring the creation status
+
+Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands:
+
+> [!NOTE]
+> The example commands below assume that you created a data controller and Kubernetes namespace with the name 'arc'. If you used a different namespace/data controller name, you can replace 'arc' with your name.
+
+```console
+kubectl get datacontroller/arc --namespace arc
+```
+
+```console
+kubectl get pods --namespace arc
+```
+
+You can also check on the creation status of any particular pod by running a command like below. This is especially useful for troubleshooting any issues.
+
+```console
+kubectl describe po/<pod name> --namespace arc
+
+#Example:
+#kubectl describe po/control-2g7bl --namespace arc
+```
+
+## Troubleshooting creation problems
+
+If you encounter any troubles with creation, please see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Create Data Controller Indirect Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-indirect-azure-portal.md
+
+ Title: Create an Azure Arc data controller in indirect mode from Azure portal
+description: Create an Azure Arc data controller in indirect mode from Azure portal
++++++ Last updated : 07/13/2021+++
+# Create Azure Arc data controller from Azure portal - Indirect connectivity mode
++
+## Introduction
+
+You can use the Azure portal to create an Azure Arc data controller, in indirect connectivity mode.
+
+Many of the creation experiences for Azure Arc start in the Azure portal even though the resource to be created or managed is outside of Azure infrastructure. The user experience pattern in these cases, especially when there is no direct connectivity between Azure and your environment, is to use the Azure portal to generate a script which can then be downloaded and executed in your environment to establish a secure connection back to Azure. For example, Azure Arc-enabled servers follow this pattern to [create Arc-enabled servers](../servers/onboard-portal.md).
+
+When you use the indirect connect mode of Azure Arc-enabled data services, you can use the Azure portal to generate a notebook for you that can then be downloaded and run in Azure Data Studio against your Kubernetes cluster.
+
+When you use direct connect mode, you can provision the data controller directly from the Azure portal. You can read more about [connectivity modes](connectivity.md).
+
+## Use the Azure portal to create an Azure Arc data controller
+
+Follow the steps below to create an Azure Arc data controller using the Azure portal and Azure Data Studio.
+
+1. First, log in to the [Azure portal marketplace](https://ms.portal.azure.com/#blade/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home/searchQuery/azure%20arc%20data%20controller). The marketplace search results will be filtered to show you the 'Azure Arc data controller'.
+1. If the first step did not apply the search criteria, enter 'Azure Arc data controller' in the search box and then select it in the search results.
+1. Select the Azure Data Controller tile from the marketplace.
+1. Click on the **Create** button.
+1. Select the indirect connectivity mode. Learn more about [Connectivity modes and requirements](./connectivity.md).
+1. Review the requirements to create an Azure Arc data controller and install any missing prerequisite software such as Azure Data Studio and kubectl.
+1. Click on the **Next: Data controller details** button.
+1. Choose a subscription, resource group and Azure location just like you would for any other resource that you would create in the Azure portal. In this case the Azure location that you select will be where the metadata about the resource will be stored. The resource itself will be created on whatever infrastructure you choose. It doesn't need to be on Azure infrastructure.
+1. Enter a name for your data controller.
+
+1. Click the **Open in Azure Data Studio** button.
+1. On the next screen, you will see a summary of your selections and a notebook that is generated. You can click the **Open link in Azure Data Studio** button to open the generated notebook in Azure Data Studio.
+1. Open the notebook in Azure Data Studio and click the **Run All** button at the top.
+1. Follow the prompts and instructions in the notebook to complete the data controller creation.
+
+## Monitoring the creation status
+
+Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands:
+
+> [!NOTE]
+> The example commands below assume that you created a data controller and Kubernetes namespace with the name 'arc'. If you used a different namespace/data controller name, you can replace 'arc' with your name.
+
+```console
+kubectl get datacontroller/arc --namespace arc
+```
+
+```console
+kubectl get pods --namespace arc
+```
+
+You can also check on the creation status of any particular pod by running a command like below. This is especially useful for troubleshooting any issues.
+
+```console
+kubectl describe po/<pod name> --namespace arc
+
+#Example:
+#kubectl describe po/control-2g7bl --namespace arc
+```
+
+## Troubleshooting creation problems
+
+If you encounter any troubles with creation, please see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Create Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-indirect-cli.md
+
+ Title: Create data controller using CLI
+description: Create an Azure Arc data controller, on a typical multi-node Kubernetes cluster which you already have created, using the CLI.
++++++ Last updated : 07/13/2021+++
+# Create Azure Arc data controller using the CLI
++
+## Prerequisites
+
+Review the topic [Create the Azure Arc data controller](create-data-controller.md) for overview information.
+
+To create the Azure Arc data controller using the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)], you will need to have the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] installed.
+
+ [Install the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]](install-client-tools.md)
+
+Regardless of which target platform you choose, you will need to set the following environment variables for the data controller administrator user prior to creating the data controller. You can provide these credentials to other people that need to have administrator access to the data controller as needed.
+
+**AZDATA_USERNAME** - A username of your choice for the data controller administrator user. Example: `arcadmin`
+
+**AZDATA_PASSWORD** - A password of your choice for the data controller administrator user. The password must be at least eight characters long and contain characters from three of the following four sets: uppercase letters, lowercase letters, numbers, and symbols.
+
+### Linux or macOS
+
+```console
+export AZDATA_USERNAME="<your username of choice>"
+export AZDATA_PASSWORD="<your password of choice>"
+```
+
+### Windows PowerShell
+
+```console
+$ENV:AZDATA_USERNAME="<your username of choice>"
+$ENV:AZDATA_PASSWORD="<your password of choice>"
+```
+
+You will need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the creation of the Azure Arc data controller. How you connect to a Kubernetes cluster or service varies. See the documentation for the Kubernetes distribution or service that you are using on how to connect to the Kubernetes API server.
+
+You can check to see that you have a current Kubernetes connection and confirm your current context with the following commands.
+
+```console
+kubectl get namespace
+kubectl config current-context
+```
+
+## Create the Azure Arc data controller
+
+> [!NOTE]
+> You can use a different value for the `--namespace` parameter of the `azdata arc dc create` command in the examples below, but be sure to use that namespace name for the `--namespace` parameter in all other commands below.
+
+- [Create Azure Arc data controller using the CLI](#create-azure-arc-data-controller-using-the-cli)
+ - [Prerequisites](#prerequisites)
+ - [Linux or macOS](#linux-or-macos)
+ - [Windows PowerShell](#windows-powershell)
+ - [Create the Azure Arc data controller](#create-the-azure-arc-data-controller)
+ - [Create on Azure Kubernetes Service (AKS)](#create-on-azure-kubernetes-service-aks)
+ - [Create on AKS engine on Azure Stack Hub](#create-on-aks-engine-on-azure-stack-hub)
+ - [Create on AKS on Azure Stack HCI](#create-on-aks-on-azure-stack-hci)
+ - [Create on Azure Red Hat OpenShift (ARO)](#create-on-azure-red-hat-openshift-aro)
+ - [Create custom deployment profile](#create-custom-deployment-profile)
+ - [Create data controller](#create-data-controller)
+ - [Create on Red Hat OpenShift Container Platform (OCP)](#create-on-red-hat-openshift-container-platform-ocp)
+ - [Determine storage class](#determine-storage-class)
+ - [Create custom deployment profile](#create-custom-deployment-profile-1)
+ - [Set storage class](#set-storage-class)
+ - [Set LoadBalancer (optional)](#set-loadbalancer-optional)
+ - [Create data controller](#create-data-controller-1)
+ - [Create on open source, upstream Kubernetes (kubeadm)](#create-on-open-source-upstream-kubernetes-kubeadm)
+ - [Create on AWS Elastic Kubernetes Service (EKS)](#create-on-aws-elastic-kubernetes-service-eks)
+ - [Create on Google Cloud Kubernetes Engine Service (GKE)](#create-on-google-cloud-kubernetes-engine-service-gke)
+ - [Monitoring the creation status](#monitoring-the-creation-status)
+ - [Troubleshooting creation problems](#troubleshooting-creation-problems)
+
+### Create on Azure Kubernetes Service (AKS)
+
+By default, the AKS deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class will only work if you have VMs that were deployed using VM images that have premium disks.
+
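+If you are not sure which storage classes exist in your cluster, you can list them first. This is an optional check and is not required by the steps below.
+
+```console
+kubectl get storageclass
+```
+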
+If you are going to use `managed-premium` as your storage class, then you can run the following command to create the data controller. Substitute the placeholders in the command with your resource group name, subscription ID, and Azure location.
+
+```console
+azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+
+#Example:
+#azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+```
+
+If you are not sure what storage class to use, you should use the `default` storage class, which is supported regardless of which VM type you are using. It just won't provide the fastest performance.
+
+If you want to use the `default` storage class, then you can run this command:
+
+```console
+azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+
+#Example:
+#azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+```
+
+Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
+
+### Create on AKS engine on Azure Stack Hub
+
+By default, the deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class will only work if you have worker VMs that were deployed using VM images that have premium disks on Azure Stack Hub.
+
+You can run the following command to create the data controller using the managed-premium storage class:
+
+```console
+azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+
+#Example:
+#azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+```
+
+If you are not sure what storage class to use, you should use the `default` storage class, which is supported regardless of which VM type you are using. In Azure Stack Hub, premium disks and standard disks are backed by the same storage infrastructure. Therefore, they are expected to provide the same general performance, but with different IOPS limits.
+
+If you want to use the `default` storage class, then you can run this command.
+
+```console
+azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+
+#Example:
+#azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+```
+
+Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
+
+### Create on AKS on Azure Stack HCI
+
+By default, the deployment profile uses a storage class named `default` and the service type `LoadBalancer`.
+
+You can run the following command to create the data controller using the `default` storage class and service type `LoadBalancer`.
+
+```console
+azdata arc dc create --profile-name azure-arc-aks-hci --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+
+#Example:
+#azdata arc dc create --profile-name azure-arc-aks-hci --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+```
+
+Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
+
+### Create on Azure Red Hat OpenShift (ARO)
+
+#### Create custom deployment profile
+
+Use the profile `azure-arc-azure-openshift` for Azure Red Hat OpenShift.
+
+```console
+azdata arc dc config init --source azure-arc-azure-openshift --path ./custom
+```
+
+#### Create data controller
+
+You can run the following command to create the data controller:
+
+```console
+azdata arc dc create --profile-name azure-arc-azure-openshift --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+
+#Example
+#azdata arc dc create --profile-name azure-arc-azure-openshift --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+```
+
+Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
+
+### Create on Red Hat OpenShift Container Platform (OCP)
+
+> [!NOTE]
+> If you are using Red Hat OpenShift Container Platform on Azure, it is recommended to use the latest available version.
+
+#### Determine storage class
+
+You will also need to determine which storage class to use by running the following command.
+
+```console
+kubectl get storageclass
+```
+
+#### Create custom deployment profile
+
+Create a new custom deployment profile file based on the `azure-arc-openshift` deployment profile by running the following command. This command creates a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory.
+
+Use the profile `azure-arc-openshift` for OpenShift Container Platform.
+
+```console
+azdata arc dc config init --source azure-arc-openshift --path ./custom
+```
+
+#### Set storage class
+
+Now, set the desired storage class by replacing `<storageclassname>` in the command below with the name of the storage class that you want to use, as determined by running the `kubectl get storageclass` command above.
+
+```console
+azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=<storageclassname>"
+azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=<storageclassname>"
+
+#Example:
+#azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=mystorageclass"
+#azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=mystorageclass"
+```
+
+#### Set LoadBalancer (optional)
+
+By default, the `azure-arc-openshift` deployment profile uses `NodePort` as the service type. If you are using an OpenShift cluster that is integrated with a load balancer, you can change the configuration to use the `LoadBalancer` service type using the following command:
+
+```console
+azdata arc dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer"
+```
+
+#### Create data controller
+
+Now you are ready to create the data controller using the following command.
+
+> [!NOTE]
+> The `--path` parameter should point to the _directory_ containing the control.json file, not to the control.json file itself.
+
+```console
+azdata arc dc create --path ./custom --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+
+#Example:
+#azdata arc dc create --path ./custom --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+```
+
+Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
+
+### Create on open source, upstream Kubernetes (kubeadm)
+
+By default, the kubeadm deployment profile uses a storage class called `local-storage` and service type `NodePort`. If this is acceptable, you can skip the instructions below that set the desired storage class and service type, and immediately run the `azdata arc dc create` command below.
+
+If you want to customize your deployment profile to specify a specific storage class and/or service type, start by creating a new custom deployment profile file based on the kubeadm deployment profile by running the following command. This command will create a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory.
+
+```console
+azdata arc dc config init --source azure-arc-kubeadm --path ./custom
+```
+
+You can look up the available storage classes by running the following command.
+
+```console
+kubectl get storageclass
+```
+
+Now, set the desired storage class by replacing `<storageclassname>` in the command below with the name of the storage class that you want to use, as determined by running the `kubectl get storageclass` command above.
+
+```console
+azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=<storageclassname>"
+azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=<storageclassname>"
+
+#Example:
+#azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=mystorageclass"
+#azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=mystorageclass"
+```
+
+By default, the kubeadm deployment profile uses `NodePort` as the service type. If you are using a Kubernetes cluster that is integrated with a load balancer, you can change the configuration using the following command.
+
+```console
+azdata arc dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer"
+```
+
+Now you are ready to create the data controller using the following command.
+
+```console
+azdata arc dc create --path ./custom --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+
+#Example:
+#azdata arc dc create --path ./custom --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+```
+
+Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
+
+### Create on AWS Elastic Kubernetes Service (EKS)
+
+By default, the EKS storage class is `gp2` and the service type is `LoadBalancer`.
+
+Run the following command to create the data controller using the provided EKS deployment profile.
+
+```console
+azdata arc dc create --profile-name azure-arc-eks --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+
+#Example:
+#azdata arc dc create --profile-name azure-arc-eks --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+```
+
+Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
+
+### Create on Google Cloud Kubernetes Engine Service (GKE)
+
+By default, the GKE storage class is `standard` and the service type is `LoadBalancer`.
+
+Run the following command to create the data controller using the provided GKE deployment profile.
+
+```console
+azdata arc dc create --profile-name azure-arc-gke --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+
+#Example:
+#azdata arc dc create --profile-name azure-arc-gke --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+```
+
+Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
+
+## Monitoring the creation status
+
+Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands:
+
+> [!NOTE]
+> The example commands below assume that you created a data controller and Kubernetes namespace with the name `arc`. If you used a different namespace/data controller name, you can replace `arc` with the name you used.
+
+```console
+kubectl get datacontroller/arc --namespace arc
+```
+
+```console
+kubectl get pods --namespace arc
+```
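+
+If you prefer to watch the pod status update continuously instead of re-running the command, one option (not required) is the `--watch` flag. Press Ctrl+C to stop watching.
+
+```console
+kubectl get pods --namespace arc --watch
+```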
+
+You can also check on the creation status of any particular pod by running a command like the one below. This is especially useful for troubleshooting any issues.
+
+```console
+kubectl describe po/<pod name> --namespace arc
+
+#Example:
+#kubectl describe po/control-2g7bl --namespace arc
+```
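+
+You can also review the logs of a particular pod. This is only an illustrative sketch; substitute the name of the pod you are investigating.
+
+```console
+kubectl logs <pod name> --namespace arc --all-containers
+
+#Example:
+#kubectl logs control-2g7bl --namespace arc --all-containers
+```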
+
+## Troubleshooting creation problems
+
+If you encounter any issues during creation, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
Previously updated : 06/02/2021 Last updated : 07/13/2021
To create the Azure Arc data controller using Kubernetes tools, you will need to
### Cleanup from past installations
-If you installed Azure Arc data controller in the past, on the same cluster and deleted the Azure Arc data controller using `azdata arc dc delete` command, there may be some cluster level objects that would still need to be deleted. Run the following commands to delete Azure Arc data controller cluster level objects:
+If you installed the Azure Arc data controller in the past on the same cluster and deleted the Azure Arc data controller, there may be some cluster-level objects that still need to be deleted. Run the following commands to delete the Azure Arc data controller cluster-level objects:
```console # Cleanup azure arc data service artifacts
-kubectl delete crd datacontrollers.arcdata.microsoft.com
-kubectl delete crd sqlmanagedinstances.sql.arcdata.microsoft.com
-kubectl delete crd postgresqls.arcdata.microsoft.com
+kubectl delete crd datacontrollers.arcdata.microsoft.com
+kubectl delete crd postgresqls.arcdata.microsoft.com
+kubectl delete crd sqlmanagedinstances.sql.arcdata.microsoft.com
+kubectl delete crd sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com
+kubectl delete crd dags.sql.arcdata.microsoft.com
+kubectl delete crd exporttasks.tasks.arcdata.microsoft.com
+kubectl delete crd monitors.arcdata.microsoft.com
+
+kubectl delete clusterrole arc:cr-arc-metricsdc-reader
+kubectl delete clusterrolebinding arc:crb-arc-metricsdc-reader
+
+kubectl delete apiservice v1beta1.arcdata.microsoft.com
+kubectl delete apiservice v1beta1.sql.arcdata.microsoft.com
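+
+# Optional, illustrative check (not part of the original cleanup list):
+# list any remaining Arc data CRDs; expect no arcdata entries after cleanup (Linux/macOS).
+kubectl get crd | grep arcdata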
``` ## Overview
Creating the Azure Arc data controller has the following high level steps:
3. Create the bootstrapper service including the replica set, service account, role, and role binding. 4. Create a secret for the data controller administrator username and password. 5. Create the data controller.
-
+ ## Create the custom resource definitions Run the following command to create the custom resource definitions. **[Requires Kubernetes Cluster Administrator Permissions]**
Run a command similar to the following to create a new, dedicated namespace in w
kubectl create namespace arc ```
-If other people will be using this namespace that are not cluster administrators, we recommend creating a namespace admin role and granting that role to those users through a role binding. The namespace admin should have full permissions on the namespace. More information will be provided later on how to provide more granular role-based access to users.
+If other people who are not cluster administrators will be using this namespace, we recommend creating a namespace admin role and granting that role to those users through a role binding. The namespace admin should have full permissions on the namespace. More granular roles and example role bindings can be found on the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/rbac).
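+
+As a minimal sketch, you could grant a user full access to the namespace by binding the built-in `admin` ClusterRole within the namespace. The binding name and the user `jane@contoso.com` below are placeholders, not values from this article.
+
+```console
+kubectl create rolebinding arc-namespace-admin --clusterrole=admin --user=jane@contoso.com --namespace=arc
+```
+
+The built-in `admin` ClusterRole grants broad read/write access within a single namespace; for more granular permissions, use the roles and bindings linked above.
+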
## Create the bootstrapper service
Edit the following as needed:
**RECOMMENDED TO REVIEW AND POSSIBLY CHANGE DEFAULTS** - **storage..className**: the storage class to use for the data controller data and log files. If you are unsure of the available storage classes in your Kubernetes cluster, you can run the following command: `kubectl get storageclass`. The default is `default` which assumes there is a storage class that exists and is named `default` not that there is a storage class that _is_ the default. Note: There are two className settings to be set to the desired storage class - one for data and one for logs. - **serviceType**: Change the service type to `NodePort` if you are not using a LoadBalancer. Note: There are two serviceType settings that need to be changed.-- On Azure Red Hat OpenShift or Red Hat OpenShift container platform, you must apply the security context constraint before you create the data controller. Follow the instructions at [Apply a security context constraint for Azure Arc enabled data services on OpenShift](how-to-apply-security-context-constraint.md).-- **Security** For Azure Red Hat OpenShift or Red Hat OpenShift container platform, replace the `security:` settings with the following values in the data controller yaml file.
+- On Azure Red Hat OpenShift or Red Hat OpenShift Container Platform, you must apply the security context constraint before you create the data controller. Follow the instructions at [Apply a security context constraint for Azure Arc-enabled data services on OpenShift](how-to-apply-security-context-constraint.md).
+- **Security** For Azure Red Hat OpenShift or Red Hat OpenShift Container Platform, replace the `security:` settings with the following values in the data controller yaml file.
```yml security:
Edit the following as needed:
- **displayName**: Set this to the same value as the name attribute at the top of the file. - **registry**: The Microsoft Container Registry is the default. If you are pulling the images from the Microsoft Container Registry and [pushing them to a private container registry](offline-deployment.md), enter the IP address or DNS name of your registry here. - **dockerRegistry**: The image pull secret to use to pull the images from a private container registry if required.-- **repository**: The default repository on the Microsoft Container Registry is `arcdata`. If you are using a private container registry, enter the path the folder/repository containing the Azure Arc enabled data services container images.
+- **repository**: The default repository on the Microsoft Container Registry is `arcdata`. If you are using a private container registry, enter the path to the folder/repository containing the Azure Arc-enabled data services container images.
- **imageTag**: the current latest version tag is defaulted in the template, but you can change it if you want to use an older version. The following example shows a completed data controller yaml file. Update the example for your environment, based on your requirements, and the information above.
kind: ServiceAccount
metadata: name: sa-mssql-controller
-apiVersion: arcdata.microsoft.com/v1alpha1
+apiVersion: arcdata.microsoft.com/v1beta1
kind: datacontroller metadata: generation: 1
- name: arc
+ name: arc-dc
spec: credentials: controllerAdmin: controller-login-secret
spec:
imageTag: latest registry: mcr.microsoft.com repository: arcdata
+ infrastructure: other #Must be a value in the array [alibaba, aws, azure, gcp, onpremises, other]
security: allowDumps: true allowNodeMetricsCollection: true
spec:
resourceGroup: <your resource group> subscription: <your subscription GUID> controller:
- displayName: arc
+ displayName: arc-dc
enableBilling: "True" logs.rotation.days: "7" logs.rotation.size: "5000"
kubectl describe pod/<pod name> --namespace arc
#kubectl describe pod/control-2g7bl --namespace arc ```
-Azure Arc extension for Azure Data Studio provides a notebook to walk you through the experience of how to set up Azure Arc enabled Kubernetes and configure it to monitor a git repository that contains a sample SQL Managed Instance yaml file. When everything is connected, a new SQL Managed Instance will be deployed to your Kubernetes cluster.
+The Azure Arc extension for Azure Data Studio provides a notebook that walks you through setting up Azure Arc-enabled Kubernetes and configuring it to monitor a git repository that contains a sample SQL Managed Instance yaml file. When everything is connected, a new SQL Managed Instance will be deployed to your Kubernetes cluster.
-See the **Deploy a SQL Managed Instance using Azure Arc enabled Kubernetes and Flux** notebook in the Azure Arc extension for Azure Data Studio.
+See the **Deploy a SQL Managed Instance using Azure Arc-enabled Kubernetes and Flux** notebook in the Azure Arc extension for Azure Data Studio.
## Troubleshooting creation problems
azure-arc Create Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller.md
Previously updated : 05/05/2021 Last updated : 07/13/2021
## Overview of creating the Azure Arc data controller
-Azure Arc enabled data services can be created on multiple different types of Kubernetes clusters and managed Kubernetes services using multiple different approaches.
+Azure Arc-enabled data services can be created on multiple different types of Kubernetes clusters and managed Kubernetes services using multiple different approaches.
Currently, the supported list of Kubernetes services and distributions are the following:
Regardless of the option you choose, during the creation process you will need t
- **Data controller username** - Any username for the data controller administrator user. - **Data controller password** - A password for the data controller administrator user. - **Name of your Kubernetes namespace** - the name of the Kubernetes namespace that you want to create the data controller in.-- **Connectivity mode** - Connectivity mode determines the degree of connectivity from your Azure Arc enabled data services environment to Azure. Preview currently only supports indirectly connected and directly connected modes. For information, see [connectivity mode](./connectivity.md).
+- **Connectivity mode** - Connectivity mode determines the degree of connectivity from your Azure Arc-enabled data services environment to Azure. Preview currently only supports indirectly connected and directly connected modes. For information, see [connectivity mode](./connectivity.md).
- **Azure subscription ID** - The Azure subscription GUID for where you want the data controller resource in Azure to be created. - **Azure resource group name** - The name of the resource group where you want the data controller resource in Azure to be created. - **Azure location** - The Azure location where the data controller resource metadata will be stored in Azure. For a list of available regions, see [Azure global infrastructure / Products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc). The metadata and billing information about the Azure resources managed by the data controller that you are deploying will be stored only in the location in Azure that you specify as the location parameter. If you are deploying in the directly connected mode, the location parameter for the data controller will be the same as the location of the custom location resource that you target.
There are multiple options for creating the Azure Arc data controller:
> **Just want to try things out?** > Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM! > -- [Create a data controller with [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]](create-data-controller-using-azdata.md)-- [Create a data controller with Azure Data Studio](create-data-controller-azure-data-studio.md)-- [Create a data controller from the Azure portal via a Jupyter notebook in Azure Data Studio](create-data-controller-resource-in-azure-portal.md)-- [Create a data controller with Kubernetes tools such as kubectl or oc](create-data-controller-using-kubernetes-native-tools.md)
+- [Create a data controller in indirect connected mode with CLI](create-data-controller-indirect-cli.md)
+- [Create a data controller in indirect connected mode with Azure Data Studio](create-data-controller-indirect-azure-data-studio.md)
+- [Create a data controller in indirect connected mode from the Azure portal via a Jupyter notebook in Azure Data Studio](create-data-controller-indirect-azure-portal.md)
+- [Create a data controller in indirect connected mode with Kubernetes tools such as kubectl or oc](create-data-controller-using-kubernetes-native-tools.md)
+- [Create a data controller in direct connected mode](create-data-controller-direct-prerequisites.md)
- [Create a data controller with Azure Arc Jumpstart for an accelerated experience of a test deployment](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/)
azure-arc Create Postgresql Hyperscale Server Group Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-data-studio.md
Title: Create Azure Arc enabled PostgreSQL Hyperscale using Azure Data Studio
-description: Create Azure Arc enabled PostgreSQL Hyperscale using Azure Data Studio
+ Title: Create Azure Arc-enabled PostgreSQL Hyperscale using Azure Data Studio
+description: Create Azure Arc-enabled PostgreSQL Hyperscale using Azure Data Studio
Last updated 06/02/2021
-# Create Azure Arc enabled PostgreSQL Hyperscale using Azure Data Studio
+# Create Azure Arc-enabled PostgreSQL Hyperscale using Azure Data Studio
-This document walks you through the steps for using Azure Data Studio to provision Azure Arc enabled PostgreSQL Hyperscale server groups.
+This document walks you through the steps for using Azure Data Studio to provision Azure Arc-enabled PostgreSQL Hyperscale server groups.
[!INCLUDE [azure-arc-common-prerequisites](../../../includes/azure-arc-common-prerequisites.md)]
_**Server-group-name** is the name of the server group you will deploy during th
For more details on SCCs in OpenShift, please refer to the [OpenShift documentation](https://docs.openshift.com/container-platform/4.2/authentication/managing-security-context-constraints.html). You may now implement the next step.
-## Create an Azure Arc enabled PostgreSQL Hyperscale server group
+## Create an Azure Arc-enabled PostgreSQL Hyperscale server group
1. Launch Azure Data Studio 1. On the Connections tab, Click on the three dots on the top left and choose "New Deployment"
You may now implement the next step.
- Select the number of worker nodes to provision 1. Click the **Deploy** button
-This starts the creation of the Azure Arc enabled PostgreSQL Hyperscale server group on the data controller.
+This starts the creation of the Azure Arc-enabled PostgreSQL Hyperscale server group on the data controller.
In a few minutes, your creation should successfully complete.
While indicating 1 worker works, we do not recommend you use it. This deployment
- **the storage classes** you want your server group to use. It is important you set the storage class right at the time you deploy a server group as this cannot be changed after you deploy. If you were to change the storage class after deployment, you would need to extract the data, delete your server group, create a new server group, and import the data. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used. - to set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd` followed by the name of the storage class. - to set the storage class for the logs, indicate the parameter `--storage-class-logs` or `-scl` followed by the name of the storage class.
- - to set the storage class for the backups: in this Preview of the Azure Arc enabled PostgreSQL Hyperscale there are two ways to set storage classes depending on what types of backup/restore operations you want to do. We are working on simplifying this experience. You will either indicate a storage class or a volume claim mount. A volume claim mount is a pair of an existing persistent volume claim (in the same namespace) and volume type (and optional metadata depending on the volume type) separated by colon. The persistent volume will be mounted in each pod for the PostgreSQL server group.
+ - to set the storage class for the backups: in this preview of Azure Arc-enabled PostgreSQL Hyperscale, there are two ways to set storage classes depending on what types of backup/restore operations you want to do. We are working on simplifying this experience. You will either indicate a storage class or a volume claim mount. A volume claim mount is a pair of an existing persistent volume claim (in the same namespace) and volume type (and optional metadata depending on the volume type) separated by a colon. The persistent volume will be mounted in each pod for the PostgreSQL server group.
- if you plan to do only full database restores, set the parameter `--storage-class-backups` or `-scb` followed by the name of the storage class. - if you plan to do both full database restores and point in time restores, set the parameter `--volume-claim-mounts` or `-vcm` followed by the name of a volume claim and a volume type.
While indicating 1 worker works, we do not recommend you use it. This deployment
* [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)* * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
- > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale.
+ > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md) - [Storage configuration and Kubernetes storage concepts](storage-configuration.md)
azure-arc Create Postgresql Hyperscale Server Group Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-portal.md
Title: Create an Azure Arc enabled PostgreSQL Hyperscale server group from the Azure portal
-description: Create an Azure Arc enabled PostgreSQL Hyperscale server group from the Azure portal
+ Title: Create an Azure Arc-enabled PostgreSQL Hyperscale server group from the Azure portal
+description: Create an Azure Arc-enabled PostgreSQL Hyperscale server group from the Azure portal
Last updated 06/02/2021
-# Create an Azure Arc enabled PostgreSQL Hyperscale server group from the Azure portal
+# Create an Azure Arc-enabled PostgreSQL Hyperscale server group from the Azure portal
This document describes the steps to create a PostgreSQL Hyperscale server group on Azure Arc from the Azure portal.
This document describes the steps to create a PostgreSQL Hyperscale server group
## Getting started If you are already familiar with the topics below, you may skip this paragraph. There are important topics you may want to read before you proceed with creation:-- [Overview of Azure Arc enabled data services](overview.md)
+- [Overview of Azure Arc-enabled data services](overview.md)
- [Connectivity modes and requirements](connectivity.md) - [Storage configuration and Kubernetes storage concepts](storage-configuration.md) - [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
If you prefer to try out things without provisioning a full environment yourself
## Deploy an Arc data controller configured to use the Direct connectivity mode
-Requirement: before you deploy an Azure Arc enabled PostgreSQL Hyperscale server group that you operate from the Azure portal you must first deploy an Azure Arc data controller configured to use the *Direct* connectivity mode.
+Requirement: before you deploy an Azure Arc-enabled PostgreSQL Hyperscale server group that you operate from the Azure portal, you must first deploy an Azure Arc data controller configured to use the *Direct* connectivity mode.
To deploy an Arc data controller, complete the instructions in these articles:
-1. [Deploy data controller - direct connect mode (prerequisites)](deploy-data-controller-direct-mode-prerequisites.md)
-1. [Deploy Azure Arc data controller | Direct connect mode](deploy-data-controller-direct-mode.md)
+1. [Deploy data controller - direct connect mode (prerequisites)](create-data-controller-direct-prerequisites.md)
+1. [Deploy Azure Arc data controller in Direct connect mode from Azure portal](create-data-controller-direct-azure-portal.md)
## Preliminary and temporary step for OpenShift users only
For more details on SCCs in OpenShift, refer to the [OpenShift documentation](ht
Proceed to the next step.
-## Deploy an Azure Arc enabled PostgreSQL Hyperscale server group from the Azure portal
+## Deploy an Azure Arc-enabled PostgreSQL Hyperscale server group from the Azure portal
-To deploy and operate an Azure Arc enabled Postgres Hyperscale server group from the Azure portal you must deploy it to an Arc data controller configured to use the *Direct* connectivity mode.
+To deploy and operate an Azure Arc-enabled Postgres Hyperscale server group from the Azure portal, you must deploy it to an Arc data controller configured to use the *Direct* connectivity mode.
> [!IMPORTANT]
-> You can not operate an Azure Arc enabled PostgreSQL Hyperscale server group from the Azure portal if you deployed it to an Azure Arc data controller configured to use the *Indirect* connectivity mode.
+> You cannot operate an Azure Arc-enabled PostgreSQL Hyperscale server group from the Azure portal if you deployed it to an Azure Arc data controller configured to use the *Indirect* connectivity mode.
-After you deployed an Arc data controller enabled for Direct connectivity mode, you may chose one the following 3 options to deploy a Azure Arc enabled Postgres Hyperscale server group:
+After you have deployed an Arc data controller enabled for Direct connectivity mode, you may choose one of the following three options to deploy an Azure Arc-enabled Postgres Hyperscale server group:
### Option 1: Deploy from the Azure Marketplace 1. Open a browser to the following URL [https://portal.azure.com](https://portal.azure.com)
After you deployed an Arc data controller enabled for Direct connectivity mode,
### Option 2: Deploy from the Azure Database for PostgreSQL deployment option page 1. Open a browser to the following URL https://ms.portal.azure.com/#create/Microsoft.PostgreSQLServer.
-2. Click the tile at the bottom right. It is titled: Azure Arc enabled PostgreSQL Hyperscale (Preview).
+2. Click the tile at the bottom right. It is titled: Azure Arc-enabled PostgreSQL Hyperscale (Preview).
3. Fill in the form like you would for any other Azure resource. ### Option 3: Deploy from the Azure Arc center
While indicating 1 worker works, we do not recommend you use it. This deployment
- **the storage classes** you want your server group to use. It is important you set the storage class right at the time you deploy a server group as this cannot be changed after you deploy. If you were to change the storage class after deployment, you would need to extract the data, delete your server group, create a new server group, and import the data. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used. - to set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd` followed by the name of the storage class. - to set the storage class for the logs, indicate the parameter `--storage-class-logs` or `-scl` followed by the name of the storage class.
- - to set the storage class for the backups: in this Preview of the Azure Arc enabled PostgreSQL Hyperscale there are two ways to set storage classes depending on what types of backup/restore operations you want to do. We are working on simplifying this experience. You will either indicate a storage class or a volume claim mount. A volume claim mount is a pair of an existing persistent volume claim (in the same namespace) and volume type (and optional metadata depending on the volume type) separated by colon. The persistent volume will be mounted in each pod for the PostgreSQL server group.
+ - to set the storage class for the backups: in this preview of Azure Arc-enabled PostgreSQL Hyperscale, there are two ways to set storage classes depending on what types of backup/restore operations you want to do. We are working on simplifying this experience. You will either indicate a storage class or a volume claim mount. A volume claim mount is a pair of an existing persistent volume claim (in the same namespace) and volume type (and optional metadata depending on the volume type) separated by a colon. The persistent volume will be mounted in each pod for the PostgreSQL server group.
- if you plan to do only full database restores, set the parameter `--storage-class-backups` or `-scb` followed by the name of the storage class. - if you plan to do both full database restores and point in time restores, set the parameter `--volume-claim-mounts` or `-vcm` followed by the name of a volume claim and a volume type. ## Next steps -- Connect to your Azure Arc enabled PostgreSQL Hyperscale: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md)
+- Connect to your Azure Arc-enabled PostgreSQL Hyperscale: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md)
- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from better performances potentially: * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md) * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
While indicating 1 worker works, we do not recommend you use it. This deployment
* [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)* * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
- > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale.
+ > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
-- [Scale out your Azure Arc enabled for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md)
+- [Scale out your Azure Arc-enabled PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md)
- [Storage configuration and Kubernetes storage concepts](storage-configuration.md) - [Expanding Persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims) - [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
azure-arc Create Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group.md
Title: Create an Azure Arc enabled PostgreSQL Hyperscale server group from CLI
-description: Create an Azure Arc enabled PostgreSQL Hyperscale server group from CLI
+ Title: Create an Azure Arc-enabled PostgreSQL Hyperscale server group from CLI
+description: Create an Azure Arc-enabled PostgreSQL Hyperscale server group from CLI
Last updated 06/02/2021
-# Create an Azure Arc enabled PostgreSQL Hyperscale server group
+# Create an Azure Arc-enabled PostgreSQL Hyperscale server group
This document describes the steps to create a PostgreSQL Hyperscale server group on Azure Arc.
This document describes the steps to create a PostgreSQL Hyperscale server group
## Getting started If you are already familiar with the topics below, you may skip this paragraph. There are important topics you may want to read before you proceed with creation:-- [Overview of Azure Arc enabled data services](overview.md)
+- [Overview of Azure Arc-enabled data services](overview.md)
- [Connectivity modes and requirements](connectivity.md) - [Storage configuration and Kubernetes storage concepts](storage-configuration.md) - [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
oc adm policy add-scc-to-user arc-data-scc -z <server-group-name> -n <namespace
For more details on SCCs in OpenShift, please refer to the [OpenShift documentation](https://docs.openshift.com/container-platform/4.2/authentication/managing-security-context-constraints.html). You may now implement the next step.
-## Create an Azure Arc enabled PostgreSQL Hyperscale server group
+## Create an Azure Arc-enabled PostgreSQL Hyperscale server group
-To create an Azure Arc enabled PostgreSQL Hyperscale server group on your Arc data controller, you will use the command `azdata arc postgres server create` to which you will pass several parameters.
+To create an Azure Arc-enabled PostgreSQL Hyperscale server group on your Arc data controller, you will use the command `azdata arc postgres server create` to which you will pass several parameters.
For details about all the parameters you can set at the creation time, review the output of the command: ```console
While using -w 1 works, we do not recommend you use it. This deployment will not
- **the storage classes** you want your server group to use. It is important you set the storage class right at the time you deploy a server group as this cannot be changed after you deploy. If you were to change the storage class after deployment, you would need to extract the data, delete your server group, create a new server group, and import the data. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used. - to set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd` followed by the name of the storage class. - to set the storage class for the logs, indicate the parameter `--storage-class-logs` or `-scl` followed by the name of the storage class.
- - to set the storage class for the backups: in this Preview of the Azure Arc enabled PostgreSQL Hyperscale there are two ways to set storage classes depending on what types of backup/restore operations you want to do. We are working on simplifying this experience. You will either indicate a storage class or a volume claim mount. A volume claim mount is a pair of an existing persistent volume claim (in the same namespace) and volume type (and optional metadata depending on the volume type) separated by colon. The persistent volume will be mounted in each pod for the PostgreSQL server group.
+ - to set the storage class for the backups: in this preview of Azure Arc-enabled PostgreSQL Hyperscale, there are two ways to set storage classes depending on what types of backup/restore operations you want to do. We are working on simplifying this experience. You will either indicate a storage class or a volume claim mount. A volume claim mount is a pair of an existing persistent volume claim (in the same namespace) and volume type (and optional metadata depending on the volume type) separated by a colon. The persistent volume will be mounted in each pod for the PostgreSQL server group.
- if you plan to do only full database restores, set the parameter `--storage-class-backups` or `-scb` followed by the name of the storage class. - if you plan to do both full database restores and point in time restores, set the parameter `--volume-claim-mounts` or `-vcm` followed by the name of a volume claim and a volume type.
Name State Workers
postgres01 Ready 2 ```
-## Get the endpoints to connect to your Azure Arc enabled PostgreSQL Hyperscale server groups
+## Get the endpoints to connect to your Azure Arc-enabled PostgreSQL Hyperscale server groups
To view the endpoints for a PostgreSQL server group, run the following command:
For example:
] ```
-You can use the PostgreSQL Instance endpoint to connect to the PostgreSQL Hyperscale server group from your favorite tool: [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio), [pgcli](https://www.pgcli.com/) psql, pgAdmin, etc. When you do so, you connect to the coordinator node/instance which takes care of routing the query to the appropriate worker nodes/instances if you have created distributed tables. For more details, read the [concepts of Azure Arc enabled PostgreSQL Hyperscale](concepts-distributed-postgres-hyperscale.md).
+You can use the PostgreSQL Instance endpoint to connect to the PostgreSQL Hyperscale server group from your favorite tool: [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio), [pgcli](https://www.pgcli.com/), psql, pgAdmin, etc. When you do so, you connect to the coordinator node/instance, which takes care of routing the query to the appropriate worker nodes/instances if you have created distributed tables. For more details, read the [concepts of Azure Arc-enabled PostgreSQL Hyperscale](concepts-distributed-postgres-hyperscale.md).
## Special note about Azure virtual machine deployments
psql postgresql://postgres:<EnterYourPassword>@10.0.0.4:30655
## Next steps -- Connect to your Azure Arc enabled PostgreSQL Hyperscale: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md)
+- Connect to your Azure Arc-enabled PostgreSQL Hyperscale: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md)
- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from better performances potentially: * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md) * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
psql postgresql://postgres:<EnterYourPassword>@10.0.0.4:30655
* [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)* * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
- > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale.
+ > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
-- [Scale out your Azure Arc enabled for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md)
+- [Scale out your Azure Arc-enabled PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md)
- [Storage configuration and Kubernetes storage concepts](storage-configuration.md) - [Expanding Persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims) - [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
azure-arc Create Sql Managed Instance Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-sql-managed-instance-azure-data-studio.md
Previously updated : 09/22/2020 Last updated : 07/13/2021
This document walks you through the steps for installing Azure SQL Managed Insta
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
-## Log in to the Azure Arc data controller
-
-Before you can create an instance, log in to the Azure Arc data controller if you are not already logged in.
-
-```console
-azdata login
-```
-
-You will then be prompted for the namespace where the data controller is created, the username and password to log in to the controller.
-
-> If you need to validate the namespace, you can run ```kubectl get pods -A``` to get a list of all the namespaces on the cluster.
-
-```console
-Username: arcadmin
-Password:
-Namespace: arc
-Logged in successfully to `https://10.0.0.4:30080` in namespace `arc`. Setting active context to `arc`
-```
- ## Create Azure SQL Managed Instance on Azure Arc - Launch Azure Data Studio - On the Connections tab, Click on the three dots on the top left and choose "New Deployment" - From the deployment options, select **Azure SQL Managed Instance - Azure Arc** > [!NOTE]
- > You may be prompted to install the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] here if it is not currently installed.
+ > You may be prompted to install the appropriate CLI here if it is not currently installed.
- Accept the Privacy and license terms and click **Select** at the bottom -- - In the Deploy Azure SQL Managed Instance - Azure Arc blade, enter the following information: - Enter a name for the SQL Server instance - Enter and confirm a password for the SQL Server instance
Logged in successfully to `https://10.0.0.4:30080` in namespace `arc`. Setting a
## Connect to Azure SQL Managed Instance - Azure Arc from Azure Data Studio -- Log in to the Azure Arc data controller, by providing the namespace, username and password for the data controller:
-```console
-azdata login
-```
- - View all the Azure SQL Managed Instances provisioned, using the following commands:
-```console
+```azurecli
azdata arc sql mi list ```
sqlinstance1 1/1 25.51.65.109:1433 Ready
- Optionally, select/Add New Server Group as appropriate - Select **Connect** to connect to the Azure SQL Managed Instance - Azure Arc --- ## Next Steps Now try to [monitor your SQL instance](monitor-grafana-kibana.md)
azure-arc Create Sql Managed Instance Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-sql-managed-instance-using-kubernetes-native-tools.md
If you encounter any troubles with creation, please see the [troubleshooting gui
## Next steps
-[Connect to Azure Arc enabled SQL Managed Instance](connect-managed-instance.md)
+[Connect to Azure Arc-enabled SQL Managed Instance](connect-managed-instance.md)
azure-arc Create Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-sql-managed-instance.md
Previously updated : 09/22/2020 Last updated : 07/13/2021
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
-## Login to the Azure Arc data controller
-
-Before you can create an instance, log in to the Azure Arc data controller if you are not already logged in.
-
-```console
-azdata login
-```
-
-You will then be prompted for the username, password, and the system namespace.
-
-```console
-Username: arcadmin
-Password:
-Namespace: arc
-Logged in successfully to `https://10.0.0.4:30080` in namespace `arc`. Setting active context to `arc`
-```
- ## Create an Azure SQL Managed Instance To view available create options forSQL Managed Instance, use the following command:
Name Replicas ServerEndpoint State
sqldemo 1/1 10.240.0.4:32023 Ready ```
-If you are using AKS or `kubeadm` or OpenShift etc., you can copy the external IP and port number from here and connect to it using your favorite tool for connecting to a SQL Sever/Azure SQL instance such as Azure Data Studio or SQL Server Management Studio. However, if you are using the quickstart VM, see the [Connect to Azure Arc enabled SQL Managed Instance](connect-managed-instance.md) article for special instructions.
+If you are using AKS, `kubeadm`, OpenShift, etc., you can copy the external IP and port number from here and connect to it using your favorite tool for connecting to a SQL Server/Azure SQL instance, such as Azure Data Studio or SQL Server Management Studio. However, if you are using the quickstart VM, see the [Connect to Azure Arc-enabled SQL Managed Instance](connect-managed-instance.md) article for special instructions.
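+
+As an illustrative sketch only, using the example endpoint shown in the output above and the login you chose during creation (both are placeholders here), a connection with the `sqlcmd` command-line tool could look like this:
+
+```console
+sqlcmd -S 10.240.0.4,32023 -U <username> -P <password>
+```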
## Next steps-- [Connect to Azure Arc enabled SQL Managed Instance](connect-managed-instance.md)
+- [Connect to Azure Arc-enabled SQL Managed Instance](connect-managed-instance.md)
- [Register your instance with Azure and upload metrics and logs about your instance](upload-metrics-and-logs-to-azure-monitor.md) - [Deploy Azure SQL managed instance using Azure Data Studio](create-sql-managed-instance-azure-data-studio.md)
azure-arc Delete Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/delete-azure-resources.md
See [Manage Azure resources by using the Azure portal](../../azure-resource-mana
In indirect connect mode, deleting an instance from Kubernetes will not remove it from Azure and deleting an instance from Azure will not remove it from Kubernetes. For indirect connect mode, deleting a resource is a two step process and this will be improved in the future. Kubernetes will be the source of truth and the portal will be updated to reflect it.
-In some cases, you may need to manually delete Azure Arc enabled data services resources in Azure. You can delete these resources using any of the following options.
+In some cases, you may need to manually delete Azure Arc-enabled data services resources in Azure. You can delete these resources using any of the following options.
- [Delete resources from Azure](#delete-resources-from-azure) - [Delete an entire resource group](#delete-an-entire-resource-group)
In some cases, you may need to manually delete Azure Arc enabled data services r
## Delete an entire resource group
-If you have been using a specific and dedicated resource group for Azure Arc enabled data services and you want to delete *everything* inside of the resource group you can delete the resource group which will delete everything inside of it.
+If you have been using a specific and dedicated resource group for Azure Arc-enabled data services and you want to delete *everything* inside of the resource group you can delete the resource group which will delete everything inside of it.
You can delete a resource group in the Azure portal by doing the following: -- Browse to the resource group in the Azure portal where the Azure Arc enabled data services resources have been created.
+- Browse to the resource group in the Azure portal where the Azure Arc-enabled data services resources have been created.
- Click the **Delete resource group** button. - Confirm the deletion by entering the resource group name and click **Delete**. ## Delete specific resources in the resource group
-You can delete specific Azure Arc enabled data services resources in a resource group in the Azure portal by doing the following:
+You can delete specific Azure Arc-enabled data services resources in a resource group in the Azure portal by doing the following:
-- Browse to the resource group in the Azure portal where the Azure Arc enabled data services resources have been created.
+- Browse to the resource group in the Azure portal where the Azure Arc-enabled data services resources have been created.
- Select all the resources to be deleted. - Click on the Delete button. - Confirm the deletion by typing 'yes' and click **Delete**. ## Delete resources using the Azure CLI
-You can delete specific Azure Arc enabled data services resources using the Azure CLI.
+You can delete specific Azure Arc-enabled data services resources using the Azure CLI.
### Delete SQL managed instance resources using the Azure CLI
azure-arc Delete Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/delete-managed-instance.md
Title: Delete Azure Arc enabled SQL Managed Instance
-description: Delete Azure Arc enabled SQL Managed Instance
+ Title: Delete Azure Arc-enabled SQL Managed Instance
+description: Delete Azure Arc-enabled SQL Managed Instance
Previously updated : 09/22/2020 Last updated : 07/13/2021
-# Delete Azure Arc enabled SQL Managed Instance
-This article describes how you can delete an Azure Arc enabled SQL Managed Instance.
+# Delete Azure Arc-enabled SQL Managed Instance
+This article describes how you can delete an Azure Arc-enabled SQL Managed Instance.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
-## View Existing Azure Arc enabled SQL Managed Instances
+## View Existing Azure Arc-enabled SQL Managed Instances
To view SQL Managed Instances, run the following command:
-```console
-azdata arc sql mi list
+```azurecli
+az sql mi-arc list
``` Output should look something like this:
Name Replicas ServerEndpoint State
demo-mi 1/1 10.240.0.4:32023 Ready ```
-## Delete a Azure Arc enabled SQL Managed Instance
+## Delete an Azure Arc-enabled SQL Managed Instance
To delete a SQL Managed Instance, run the following command:
-```console
-azdata arc sql mi delete -n <NAME_OF_INSTANCE>
+```azurecli
+az sql mi-arc delete -n <NAME_OF_INSTANCE>
``` Output should look something like this: ```console
-# azdata arc sql mi delete -n demo-mi
+# az sql mi-arc delete -n demo-mi
Deleted demo-mi from namespace arc ```
persistentvolumeclaim "logs-demo-mi-0" deleted
## Next steps
-Learn more about [Features and Capabilities of Azure Arc enabled SQL Managed Instance](managed-instance-features.md)
+Learn more about [Features and Capabilities of Azure Arc-enabled SQL Managed Instance](managed-instance-features.md)
[Start by creating a Data Controller](create-data-controller.md)
-Already created a Data Controller? [Create an Azure Arc enabled SQL Managed Instance](create-sql-managed-instance.md)
+Already created a Data Controller? [Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
azure-arc Delete Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/delete-postgresql-hyperscale-server-group.md
Title: Delete an Azure Arc enabled PostgreSQL Hyperscale server group
-description: Delete an Azure Arc enabled Postgres Hyperscale server group
+ Title: Delete an Azure Arc-enabled PostgreSQL Hyperscale server group
+description: Delete an Azure Arc-enabled Postgres Hyperscale server group
Last updated 09/22/2020
-# Delete an Azure Arc enabled PostgreSQL Hyperscale server group
+# Delete an Azure Arc-enabled PostgreSQL Hyperscale server group
This document describes the steps to delete a server group from your Azure Arc setup.
persistentvolumeclaim "data-postgres01-0" deleted
> ``` ## Next step
-Create [Azure Arc enabled PostgreSQL Hyperscale](create-postgresql-hyperscale-server-group.md)
+Create [Azure Arc-enabled PostgreSQL Hyperscale](create-postgresql-hyperscale-server-group.md)
azure-arc Deploy Data Controller Direct Mode Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/deploy-data-controller-direct-mode-prerequisites.md
- Title: Prerequisites | Direct connect mode
-description: Prerequisites to deploy the data controller in direct connect mode.
------ Previously updated : 03/31/2021---
-# Deploy data controller - direct connect mode (prerequisites)
-
-This article describes how to prepare to deploy a data controller for Azure Arc enabled data services in direct connect mode.
--
-At a high level summary, the prerequisites include:
-
-1. Install tools
-1. Add extensions
-1. Create the service principal and configure roles for metrics
-1. Connect Kubernetes cluster to Azure using Azure Arc enabled Kubernetes
-
-After you have completed these prerequisites, you can [Deploy Azure Arc data controller | Direct connect mode](deploy-data-controller-direct-mode.md).
-
-The remaining sections of this article identify the prerequisites.
-
-## Install tools
--- Helm version 3.3+ ([install](https://helm.sh/docs/intro/install/))-- Azure CLI ([install](/sql/azdata/install/deploy-install-azdata))-
-## Add extensions for Azure CLI
-
-Additionally, the following az extensions are also required:
-- Azure CLI `k8s-extension` extension (0.2.0)-- Azure CLI `customlocation` (0.1.0)-
-Sample `az` and its CLI extensions would be:
-
-```console
-$ az version
-{
- "azure-cli": "2.19.1",
- "azure-cli-core": "2.19.1",
- "azure-cli-telemetry": "1.0.6",
- "extensions": {
- "connectedk8s": "1.1.0",
- "customlocation": "0.1.0",
- "k8s-configuration": "1.0.0",
- "k8s-extension": "0.2.0"
- }
-}
-```
-
-## Create service principal and configure roles for metrics
-
-Follow the steps detailed in the [Upload metrics](upload-metrics-and-logs-to-azure-monitor.md) article and create a Service Principal and grant the roles as described the article.
-
-The SPN ClientID, TenantID, and Client Secret information will be required when you [deploy Azure Arc data controller](deploy-data-controller-direct-mode.md).
-
-## Connect Kubernetes cluster to Azure using Azure Arc enabled Kubernetes
-
-To complete this task, follow the steps in [Connect an existing Kubernetes cluster to Azure arc](../kubernetes/quickstart-connect-cluster.md).
-
-## Next steps
-
-After you have completed these prerequisites, you can [Deploy Azure Arc data controller | Direct connect mode](deploy-data-controller-direct-mode.md).
azure-arc Get Connection Endpoints And Connection Strings Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/get-connection-endpoints-and-connection-strings-postgres-hyperscale.md
Title: Get connection endpoints and form the connection strings for your Arc enabled PostgreSQL Hyperscale server group-+ description: Get connection endpoints and form connection strings for your Arc enabled PostgreSQL Hyperscale server group
azure-arc How To Apply Security Context Constraint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/how-to-apply-security-context-constraint.md
Previously updated : 01/15/2021 Last updated : 07/13/2021
-# Apply a security context constraint for Azure Arc enabled data services on OpenShift
+# Apply a security context constraint for Azure Arc-enabled data services on OpenShift
-This article describes how to apply a security context constraint for Azure Arc enabled data services.
+This article describes how to apply a security context constraint for Azure Arc-enabled data services.
## Applicability
It applies to deployments on Azure Red Hat OpenShift or Red Hat OpenShift Contai
## Next steps - [Create the Azure Arc data controller](create-data-controller.md)-- [Create data controller in Azure Data Studio](create-data-controller-azure-data-studio.md)-- [Create Azure Arc data controller using the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]](create-data-controller-using-azdata.md)
+- [Create data controller in Azure Data Studio](create-data-controller-indirect-azure-data-studio.md)
+- [Create Azure Arc data controller with CLI](create-data-controller-indirect-cli.md)
azure-arc Install Arcdata Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/install-arcdata-extension.md
+
+ Title: Install `arcdata` extension
+description: Install the `arcdata` extension for Azure (az) cli
++++++ Last updated : 07/13/2021+++
+# Install `arcdata` Azure CLI extension
+
+> [!IMPORTANT]
+> If you are updating to a new monthly release, please be sure to also update to the latest version of Azure CLI and the Azure CLI extension.
++
+## Install latest Azure CLI
+
+To get the latest Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
++
+## Add the `arcdata` extension
+
+To add the extension, run the following command:
+
+```azurecli
+az extension add --name arcdata
+```
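If an earlier monthly release of the extension is already installed, it can be brought up to date in place; this is a minimal sketch using the standard Azure CLI extension management commands:

```azurecli
# Update an existing arcdata extension to the latest published version
az extension update --name arcdata

# Confirm the installed extensions and their versions
az extension list --output table
```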
+
+[Learn more about Azure CLI extensions](/cli/azure/azure-cli-extensions-overview).
+
+## Next steps
+
+[Create the Azure Arc data controller](create-data-controller.md)
azure-arc Install Client Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/install-client-tools.md
Previously updated : 09/22/2020 Last updated : 07/13/2021
-# Install client tools for deploying and managing Azure Arc enabled data services
+# Install client tools for deploying and managing Azure Arc-enabled data services
> [!IMPORTANT]
-> If you are updating to a new monthly release, please be sure to also update to the latest version of Azure Data Studio, the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] tool and Azure Arc extensions for Azure Data Studio.
+> If you are updating to a new monthly release, please be sure to also update to the latest version of Azure Data Studio, the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] tool, the Azure CLI and Azure Arc extensions for Azure Data Studio.
+
+> [!IMPORTANT]
+> The Arc-enabled data services command groups in the Azure Data CLI (azdata) are deprecated and will be removed in the next release. Please move to using the `arcdata` extension for the Azure CLI instead.
This document walks you through the steps for installing the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)], Azure Data Studio, Azure CLI (az), and the Kubernetes CLI tool (kubectl) on your client machine. [!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
-## Tools for creating and managing Azure Arc enabled data services
+## Tools for creating and managing Azure Arc-enabled data services
-The following table lists common tools required for creating and managing Azure Arc enabled data services, and how to install those tools:
+The following table lists common tools required for creating and managing Azure Arc-enabled data services, and how to install those tools:
| Tool | Required | Description | Installation | |||||
-| [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] | Yes | Command-line tool for installing and managing a big data cluster. [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] also includes a command line utility to connect to and query Azure SQL and SQL Server instances and Postgres servers using the commands `azdata sql query` (run a single query from the command line), `azdata sql shell` (an interactive shell), `azdata postgres query` and `azdata postgres shell`. | [Install](/sql/azdata/install/deploy-install-azdata?toc=/azure/azure-arc/data/toc.json&bc=/azure/azure-arc/data/breadcrumb/toc.json) |
-| Azure Data Studio | Yes | Rich experience tool for connecting to and querying a variety of databases including Azure SQL, SQL Server, PostrgreSQL, and MySQL. Extensions to Azure Data Studio provide an administration experience for Azure Arc enabled data services. | [Install](/sql/azure-data-studio/download-azure-data-studio) |
+| [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] | Yes | Command-line tool for installing and managing a SQL Server Big Data Cluster and Azure Arc-enabled data services. [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] also includes a command line utility to connect to and query Azure SQL and SQL Server instances and Postgres servers using the commands `azdata sql query` (run a single query from the command line), `azdata sql shell` (an interactive shell), `azdata postgres query` and `azdata postgres shell`. | [Install](/sql/azdata/install/deploy-install-azdata?toc=/azure/azure-arc/data/toc.json&bc=/azure/azure-arc/data/breadcrumb/toc.json) |
+| Azure CLI (az)<sup>1</sup> | Yes | Modern command-line interface for managing Azure services. Used with AKS deployments and to upload Azure Arc-enabled data services inventory and billing data to Azure. ([More info](/cli/azure/)). | [Install](/cli/azure/install-azure-cli) |
+| Azure CLI extension for Arc-enabled data services | Yes | Command-line tool for managing Arc-enabled data services as an extension to the Azure CLI (az). | [Install](install-arcdata-extension.md). |
+| Azure Data Studio | Yes | Rich experience tool for connecting to and querying a variety of databases including Azure SQL, SQL Server, PostgreSQL, and MySQL. Extensions to Azure Data Studio provide an administration experience for Azure Arc-enabled data services. | [Install](/sql/azure-data-studio/download-azure-data-studio) |
| [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] extension for Azure Data Studio | Yes | Extension for Azure Data Studio that will install [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] if you don't already have it.| Install from extensions gallery in Azure Data Studio.|
-| Azure Arc extension for Azure Data Studio | Yes | Extension for Azure Data Studio that provides a management experience for Azure Arc enabled data services. There is a dependency on the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] extension for Azure Data Studio. | Install from extensions gallery in Azure Data Studio.|
+| Azure Arc extension for Azure Data Studio | Yes | Extension for Azure Data Studio that provides a management experience for Azure Arc-enabled data services. There is a dependency on the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] extension for Azure Data Studio. | Install from extensions gallery in Azure Data Studio.|
| PostgreSQL extension in Azure Data Studio | No | PostgreSQL extension for Azure Data Studio that provides management capabilities for PostgreSQL. | <!--{need link} [Install](../azure-data-studio/data-virtualization-extension.md) --> Install from extensions gallery in Azure Data Studio.|
-| Azure CLI (az)<sup>1</sup> | Yes | Modern command-line interface for managing Azure services. Used with AKS deployments and to upload Azure Arc enabled data services inventory and billing data to Azure. ([More info](/cli/azure/)). | [Install](/cli/azure/install-azure-cli) |
| Kubernetes CLI (kubectl)<sup>2</sup> | Yes | Command-line tool for managing the Kubernetes cluster ([More info](https://kubernetes.io/docs/tasks/tools/install-kubectl/)). | [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-powershell-from-psgallery) \| [Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-using-native-package-management) | | curl <sup>3</sup> | Required for some sample scripts. | Command-line tool for transferring data with URLs. | [Windows](https://curl.haxx.se/windows/) \| Linux: install curl package | | oc | Required for Red Hat OpenShift and Azure Redhat OpenShift deployments. |`oc` is the Open Shift command line interface (CLI). | [Installing the CLI](https://docs.openshift.com/container-platform/4.4/cli_reference/openshift_cli/getting-started-cli.html#installing-the-cli)
-<sup>1</sup> You must be using Azure CLI version 2.0.4 or later. Run `az --version` to find the version if needed.
+<sup>1</sup> You must be using Azure CLI version 2.26.0 or later. Run `az --version` to find the version if needed.
-<sup>2</sup> You must use `kubectl` version 1.13 or later. Also, the version of `kubectl` should be plus or minus one minor version of your Kubernetes cluster. If you want to install a specific version on `kubectl` client, see [Install `kubectl` binary via curl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl) (on Windows 10, use cmd.exe and not Windows PowerShell to run curl).
+<sup>2</sup> You must use `kubectl` version 1.19 or later. Also, the version of `kubectl` should be plus or minus one minor version of your Kubernetes cluster. If you want to install a specific version on `kubectl` client, see [Install `kubectl` binary via curl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl) (on Windows 10, use cmd.exe and not Windows PowerShell to run curl).
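One quick way to compare the client and cluster versions (a sketch; it assumes `kubectl` is already configured against the cluster you will deploy to):

```console
# The reported client and server versions should be within one minor version of each other
kubectl version --short
```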
<sup>3</sup> If you are using PowerShell, curl is an alias to the Invoke-WebRequest cmdlet. ## Next steps
-[Create the Azure Arc data controller](create-data-controller.md)
+[Create the Azure Arc data controller](create-data-controller.md)
azure-arc Limitations Postgresql Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/limitations-postgresql-hyperscale.md
Title: Limitations of Azure Arc enabled PostgreSQL Hyperscale
-description: Limitations of Azure Arc enabled PostgreSQL Hyperscale
+ Title: Limitations of Azure Arc-enabled PostgreSQL Hyperscale
+description: Limitations of Azure Arc-enabled PostgreSQL Hyperscale
Last updated 02/11/2021
-# Limitations of Azure Arc enabled PostgreSQL Hyperscale
+# Limitations of Azure Arc-enabled PostgreSQL Hyperscale
-This article describes limitations of Azure Arc enabled PostgreSQL Hyperscale.
+This article describes limitations of Azure Arc-enabled PostgreSQL Hyperscale.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
Managing users and roles is not supported. For now, continue to use the postgre
## Roles and responsibilities
-The roles and responsibilities between Microsoft and its customers differ between Azure PaaS services (Platform As A Service) and Azure hybrid (like Azure Arc enabled PostgreSQL Hyperscale).
+The roles and responsibilities between Microsoft and its customers differ between Azure PaaS services (Platform As A Service) and Azure hybrid (like Azure Arc-enabled PostgreSQL Hyperscale).
### Frequently asked questions
__Why doesn't Microsoft provide SLAs on Azure Arc hybrid services?__ Because Mic
3. [Create an Azure Database for PostgreSQL Hyperscale server group on Azure Arc](create-postgresql-hyperscale-server-group.md) - **Learn**
- - [Read more about Azure Arc enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services)
+ - [Read more about Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services)
- [Read about Azure Arc](https://aka.ms/azurearc)
azure-arc List Server Groups Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/list-server-groups-postgres-hyperscale.md
Title: List the Azure Arc enabled PostgreSQL Hyperscale server groups created in an Azure Arc Data Controller
-description: List the Azure Arc enabled PostgreSQL Hyperscale server groups created in an Azure Arc Data Controller
+ Title: List the Azure Arc-enabled PostgreSQL Hyperscale server groups created in an Azure Arc Data Controller
+description: List the Azure Arc-enabled PostgreSQL Hyperscale server groups created in an Azure Arc Data Controller
Last updated 09/22/2020
-# List the Azure Arc enabled PostgreSQL Hyperscale server groups created in an Azure Arc Data Controller
+# List the Azure Arc-enabled PostgreSQL Hyperscale server groups created in an Azure Arc Data Controller
This article explains how you can retrieve the list of server groups created in your Arc Data Controller.
To list the server groups running the version 11 of Postgres, replace _postgresq
## Next steps: * [Read the article about how to get the connection end points and form the connection strings to connect to your server group](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md)
-* [Read the article about showing the configuration of an Azure Arc enabled PostgreSQL Hyperscale server group](show-configuration-postgresql-hyperscale-server-group.md)
+* [Read the article about showing the configuration of an Azure Arc-enabled PostgreSQL Hyperscale server group](show-configuration-postgresql-hyperscale-server-group.md)
azure-arc Manage Postgresql Hyperscale Server Group With Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/manage-postgresql-hyperscale-server-group-with-azure-data-studio.md
Last updated 09/22/2020
-# Use Azure Data Studio to manage your Azure Arc enabled PostgreSQL Hyperscale server group
+# Use Azure Data Studio to manage your Azure Arc-enabled PostgreSQL Hyperscale server group
This article describes how to:
Enter the connection information to your Azure Data Controller:
Azure Data Studio shows your Arc Data Controller. Expand it to see the list of PostgreSQL instances that it manages.
-## Manage your Azure Arc enabled PostgreSQL Hyperscale server groups
+## Manage your Azure Arc-enabled PostgreSQL Hyperscale server groups
Right-click on the PostgreSQL instance you want to manage and select [Manage]
azure-arc Managed Instance Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-features.md
Title: Features and Capabilities of Azure Arc enabled SQL Managed Instance
-description: Features and Capabilities of Azure Arc enabled SQL Managed Instance
+ Title: Features and Capabilities of Azure Arc-enabled SQL Managed Instance
+description: Features and Capabilities of Azure Arc-enabled SQL Managed Instance
Last updated 09/22/2020
-# Features and Capabilities of Azure Arc enabled SQL Managed Instance
+# Features and Capabilities of Azure Arc-enabled SQL Managed Instance
-Azure Arc enabled SQL Managed Instance share a common code base with the latest stable version of SQL Server. Most of the standard SQL language, query processing, and database management features are identical. The features that are common between SQL Server and SQL Database or SQL Managed Instance are:
+Azure Arc-enabled SQL Managed Instance shares a common code base with the latest stable version of SQL Server. Most of the standard SQL language, query processing, and database management features are identical. The features that are common between SQL Server and SQL Database or SQL Managed Instance are:
- Language features - [Control of flow language keywords](/sql/t-sql/language-elements/control-of-flow), [Cursors](/sql/t-sql/language-elements/cursors-transact-sql), [Data types](/sql/t-sql/data-types/data-types-transact-sql), [DML statements](/sql/t-sql/queries/queries), [Predicates](/sql/t-sql/queries/predicates), [Sequence numbers](/sql/relational-databases/sequence-numbers/sequence-numbers), [Stored procedures](/sql/relational-databases/stored-procedures/stored-procedures-database-engine), and [Variables](/sql/t-sql/language-elements/variables-transact-sql). - Database features - [Automatic tuning (plan forcing)](/sql/relational-databases/automatic-tuning/automatic-tuning), [Change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server), [Database collation](/sql/relational-databases/collations/set-or-change-the-database-collation), [Contained databases](/sql/relational-databases/databases/contained-databases), [Contained users](/sql/relational-databases/security/contained-database-users-making-your-database-portable), [Data compression](/sql/relational-databases/data-compression/data-compression), [Database configuration settings](/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql), [Online index operations](/sql/relational-databases/indexes/perform-index-operations-online), [Partitioning](/sql/relational-databases/partitions/partitioned-tables-and-indexes), and [Temporal tables](/sql/relational-databases/tables/temporal-tables) ([see getting started guide](/sql/relational-databases/tables/getting-started-with-system-versioned-temporal-tables)).
Azure Arc enabled SQL Managed Instance share a common code base with the latest
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
-## Features of Azure Arc enabled SQL Managed Instance
+## Features of Azure Arc-enabled SQL Managed Instance
### <a name="RDBMSHA"></a> RDBMS High Availability
-|Feature|Azure Arc enabled SQL Managed Instance|
+|Feature|Azure Arc-enabled SQL Managed Instance|
|-|-| |Always On failover cluster instance<sup>1</sup>| Not Applicable. Similar capabilities available | |Always On availability groups<sup>2</sup>|HA capabilities are planned.|
Azure Arc enabled SQL Managed Instance share a common code base with the latest
### <a name="RDBMSSP"></a> RDBMS Scalability and Performance
-| Feature | Azure Arc enabled SQL Managed Instance |
+| Feature | Azure Arc-enabled SQL Managed Instance |
|--|--| | Columnstore | Yes | | Large object binaries in clustered columnstore indexes | Yes |
Azure Arc enabled SQL Managed Instance share a common code base with the latest
### <a name="RDBMSS"></a> RDBMS Security
-| Feature | Azure Arc enabled SQL Managed Instance |
+| Feature | Azure Arc-enabled SQL Managed Instance |
|--|--| | Row-level security | Yes | | Always Encrypted | Yes |
Azure Arc enabled SQL Managed Instance share a common code base with the latest
### <a name="RDBMSM"></a> RDBMS Manageability
-| Feature | Azure Arc enabled SQL Managed Instance |
+| Feature | Azure Arc-enabled SQL Managed Instance |
|--|--| | Dedicated admin connection | Yes | | PowerShell scripting support | Yes |
Azure Arc enabled SQL Managed Instance share a common code base with the latest
### <a name="Programmability"></a> Programmability
-| Feature | Azure Arc enabled SQL Managed Instance |
+| Feature | Azure Arc-enabled SQL Managed Instance |
|--|--| | JSON | Yes | | Query Store | Yes |
Azure Arc enabled SQL Managed Instance share a common code base with the latest
### Tools
-Azure Arc enabled SQL Managed Instance support various data tools that can help you manage your data.
+Azure Arc-enabled SQL Managed Instance supports various data tools that can help you manage your data.
-| **Tool** | Azure Arc enabled SQL Managed Instance|
+| **Tool** | Azure Arc-enabled SQL Managed Instance|
| | | | | Azure portal <sup>1</sup> | No | | Azure CLI | No |
Azure Arc enabled SQL Managed Instance support various data tools that can help
| [SQL Server PowerShell](/sql/relational-databases/scripting/sql-server-powershell) | Yes | | [SQL Server Profiler](/sql/tools/sql-server-profiler/sql-server-profiler) | Yes |
-<sup>1</sup> The Azure portal is only used to view Azure Arc enabled SQL Managed Instances in read-only mode during preview.
+<sup>1</sup> The Azure portal is only used to view Azure Arc-enabled SQL Managed Instances in read-only mode during preview.
### <a name="Unsupported"></a> Unsupported Features & Services
-The following features and services are not available for Azure Arc enabled SQL Managed Instance. The support of these features will be increasingly enabled over time.
+The following features and services are not available for Azure Arc-enabled SQL Managed Instance. Support for these features will be enabled progressively over time.
| Area | Unsupported feature or service | |--|--|
azure-arc Managed Instance High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-high-availability.md
Title: Azure Arc enabled Managed Instance high availability-
-description: Learn how to deploy Azure Arc enabled Managed Instance with high availability.
+ Title: Azure Arc-enabled SQL Managed Instance high availability
+
+description: Learn how to deploy Azure Arc-enabled SQL Managed Instance with high availability.
Previously updated : 03/02/2021 Last updated : 07/13/2021
-# Azure Arc enabled Managed Instance high availability
+# Azure Arc-enabled SQL Managed Instance high availability
-Azure Arc enabled Managed Instance is deployed on Kubernetes as a containerized application and uses kubernetes constructs such as stateful sets and persistent storage to provide built-in health monitoring, failure detection, and failover mechanisms to maintain service health. For increased reliability, you can also configure Azure Arc enabled Managed Instance to deploy with extra replicas in a high availability configuration. Monitoring, failure detection, and automatic failover are managed by the Arc data services data controller. This service is provided without user intervention ΓÇô all from availability group setup, configuring database mirroring endpoints, to adding databases to the availability group or failover and upgrade coordination. This document explores both types of high availability.
+Azure Arc-enabled SQL Managed Instance is deployed on Kubernetes as a containerized application and uses Kubernetes constructs such as stateful sets and persistent storage to provide built-in health monitoring, failure detection, and failover mechanisms to maintain service health. For increased reliability, you can also configure Azure Arc-enabled SQL Managed Instance to deploy with extra replicas in a high availability configuration. Monitoring, failure detection, and automatic failover are managed by the Arc data services data controller. This service is provided without user intervention: from availability group setup and configuring database mirroring endpoints, to adding databases to the availability group, to failover and upgrade coordination. This document explores both types of high availability.
## Built-in high availability
-Built-in high availability is provided by Kubernetes when remote persistent storage is configured and shared with nodes used by the Arc data service deployment. In this configuration, Kubernetes plays the role of the cluster orchestrator. When the managed instance in a container or the underlying node fails, the orchestrator bootstraps another instance of the container and attaches to the same persistent storage. This type is enabled by default when you deploy Azure Arc enabled Managed Instance.
+Built-in high availability is provided by Kubernetes when remote persistent storage is configured and shared with nodes used by the Arc data service deployment. In this configuration, Kubernetes plays the role of the cluster orchestrator. When the managed instance in a container or the underlying node fails, the orchestrator bootstraps another instance of the container and attaches to the same persistent storage. This type is enabled by default when you deploy Azure Arc-enabled SQL Managed Instance.
### Verify built-in high availability
In this section, you verify the built-in high availability provided by Kubernetes.
### Prerequisites - Kubernetes cluster must have [shared, remote storage](storage-configuration.md#factors-to-consider-when-choosing-your-storage-configuration) -- An Azure Arc enabled Managed Instance deployed with one replica (default)
+- An Azure Arc-enabled SQL Managed Instance deployed with one replica (default)
1. View the pods.
After all containers within the pod have recovered, you can connect to the manag
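A condensed sketch of that verification flow with `kubectl` follows; the namespace and pod name are placeholders for your own deployment:

```console
# List the pods in the namespace used by the Arc data services deployment
kubectl get pods -n <namespace>

# Simulate a failure by deleting the managed instance pod
kubectl delete pod <managed instance pod name> -n <namespace>

# Watch Kubernetes bootstrap a replacement pod that attaches to the same persistent storage
kubectl get pods -n <namespace> --watch
```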
## Deploy with Always On availability groups
-For increased reliability, you can configure Azure Arc enabled Managed Instance to deploy with extra replicas in a high availability configuration.
+For increased reliability, you can configure Azure Arc-enabled SQL Managed Instance to deploy with extra replicas in a high availability configuration.
Capabilities that availability groups enable: -- When deployed with multiple replicas, a single availability group named `containedag` is created. By default, `containedag` has three replicas, including primary. All CRUD operations for the availability group are managed internally, including creating the availability group or joining replicas to the availability group created. Additional availability groups cannot be created in the Azure Arc enabled Managed Instance.
+- When deployed with multiple replicas, a single availability group named `containedag` is created. By default, `containedag` has three replicas, including primary. All CRUD operations for the availability group are managed internally, including creating the availability group or joining replicas to the availability group created. Additional availability groups cannot be created in the Azure Arc-enabled SQL Managed Instance.
- All databases are automatically added to the availability group, including all user and system databases like `master` and `msdb`. This capability provides a single-system view across the availability group replicas. Notice both `containedag_master` and `containedag_msdb` databases if you connect directly to the instance. The `containedag_*` databases represent the `master` and `msdb` inside the availability group.
Capabilities that availability groups enable:
To deploy a managed instance with availability groups, run the following command.
-```console
-azdata arc sql mi create -n <name of instance> --replicas 3
+```azurecli
+az sql mi-arc create -n <name of instance> --replicas 3
``` ### Check status Once the instance has been deployed, run the following commands to check the status of your instance:
-```console
-azdata arc sql mi list
-azdata arc sql mi show -n <name of instance>
+```azurecli
+az sql mi-arc list
+az sql mi-arc show -n <name of instance>
``` Example output: ```output
-user@pc:/# azdata arc sql mi list
+user@pc:/# az sql mi-arc list
ExternalEndpoint Name Replicas State - - 20.131.31.58,1433 sql2 3/3 Ready
-user@pc:/# azdata arc sql mi show -n sql2
+user@pc:/# az sql mi-arc show -n sql2
{ ... "status": {
Additional steps are required to restore a database into an availability group.
### Limitations
-Azure Arc enabled Managed Instance availability groups has the same [limitations as Big Data Cluster availability groups. Click here to learn more.](/sql/big-data-cluster/deployment-high-availability#known-limitations)
+Azure Arc-enabled SQL Managed Instance availability groups have the same [limitations as Big Data Cluster availability groups](/sql/big-data-cluster/deployment-high-availability#known-limitations).
## Next steps
-Learn more about [Features and Capabilities of Azure Arc enabled SQL Managed Instance](managed-instance-features.md)
+Learn more about [Features and Capabilities of Azure Arc-enabled SQL Managed Instance](managed-instance-features.md)
azure-arc Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-overview.md
Title: Azure Arc enabled SQL Managed Instance Overview
-description: Azure Arc enabled SQL Managed Instance Overview
+ Title: Azure Arc-enabled SQL Managed Instance Overview
+description: Azure Arc-enabled SQL Managed Instance Overview
Last updated 03/02/2021
-# Azure Arc enabled SQL Managed Instance Overview
+# Azure Arc-enabled SQL Managed Instance Overview
-Azure Arc enabled SQL Managed Instance is an Azure SQL data service that can be created on the infrastructure of your choice.
+Azure Arc-enabled SQL Managed Instance is an Azure SQL data service that can be created on the infrastructure of your choice.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## Description
-Azure Arc enabled SQL Managed Instance has near 100% compatibility with the latest SQL Server database engine, and enables existing SQL Server customers to lift and shift their applications to Azure Arc data services with minimal application and database changes while maintaining data sovereignty. At the same time, SQL Managed Instance includes built-in management capabilities that drastically reduce management overhead.
+Azure Arc-enabled SQL Managed Instance has near 100% compatibility with the latest SQL Server database engine, and enables existing SQL Server customers to lift and shift their applications to Azure Arc data services with minimal application and database changes while maintaining data sovereignty. At the same time, SQL Managed Instance includes built-in management capabilities that drastically reduce management overhead.
To learn more about these capabilities, you can also refer to this Data Exposed episode. > [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/What-is-Azure-Arc-Enabled-SQL-Managed-Instance--Data-Exposed/player?format=ny] ## Next steps
-Learn more about [Features and Capabilities of Azure Arc enabled SQL Managed Instance](managed-instance-features.md)
+Learn more about [Features and Capabilities of Azure Arc-enabled SQL Managed Instance](managed-instance-features.md)
-[Azure Arc enabled Managed Instance high availability](managed-instance-high-availability.md)
+[Azure Arc-enabled Managed Instance high availability](managed-instance-high-availability.md)
[Start by creating a Data Controller](create-data-controller.md)
-Already created a Data Controller? [Create an Azure Arc enabled SQL Managed Instance](create-sql-managed-instance.md)
+Already created a Data Controller? [Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
azure-arc Migrate Postgresql Data Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/migrate-postgresql-data-into-postgresql-hyperscale-server-group.md
Title: Migrate data from a PostgreSQL database into an Azure Arc enabled PostgreSQL Hyperscale server group-
-description: Migrate data from a PostgreSQL database into an Azure Arc enabled PostgreSQL Hyperscale server group
+ Title: Migrate data from a PostgreSQL database into an Azure Arc-enabled PostgreSQL Hyperscale server group
+
+description: Migrate data from a PostgreSQL database into an Azure Arc-enabled PostgreSQL Hyperscale server group
Last updated 06/02/2021
-# Migrate PostgreSQL database to Azure Arc enabled PostgreSQL Hyperscale server group
+# Migrate PostgreSQL database to Azure Arc-enabled PostgreSQL Hyperscale server group
-This document describes the steps to get your existing PostgreSQL database (one that not hosted in Azure Arc enabled Data Services) into your Azure Arc enabled PostgreSQL Hyperscale server group.
+This document describes the steps to get your existing PostgreSQL database (one that is not hosted in Azure Arc-enabled Data Services) into your Azure Arc-enabled PostgreSQL Hyperscale server group.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## Considerations
-Azure Arc enabled PostgreSQL Hyperscale server group is the community version of PostgreSQL and runs with the CitusData extension enabled. So any tool that that works on PostgreSQL outside of Azure Arc should work with Azure Arc enabled PostgreSQL Hyperscale server group.
+Azure Arc-enabled PostgreSQL Hyperscale server group is the community version of PostgreSQL and runs with the CitusData extension enabled. So any tool that works with PostgreSQL outside of Azure Arc should work with Azure Arc-enabled PostgreSQL Hyperscale server group.
As such, with the set of tools you use today for Postgres, you should be able to: 1. Back up your Postgres database from your instance hosted outside of Azure Arc
-2. Restore it in your Azure Arc enabled PostgreSQL Hyperscale server group
+2. Restore it in your Azure Arc-enabled PostgreSQL Hyperscale server group
What will be left for you to do is: - reset the server parameters
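As one illustration of that backup and restore flow, the standard community client tools can be used; this is a hedged sketch in which the host names, database name, and file path are placeholders, not a prescribed procedure:

```console
# Back up the source database (custom format) from the instance hosted outside of Azure Arc
pg_dump -h <source host> -U postgres -Fc -d <database name> -f /tmp/<database name>.dump

# Restore it into an empty database on the Azure Arc-enabled PostgreSQL Hyperscale server group
pg_restore -h <server group endpoint IP> -p <port> -U postgres -d <database name> /tmp/<database name>.dump
```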
Configure it:
The backup completes successfully: :::image type="content" source="media/postgres-hyperscale/Migrate-PG-Source-Backup3.jpg" alt-text="Migrate-source-backup-completed":::
-### Create an empty database on the destination system in your Azure Arc enabled PostgreSQL Hyperscale server group
+### Create an empty database on the destination system in your Azure Arc-enabled PostgreSQL Hyperscale server group
> [!NOTE] > To register a Postgres instance in the `pgAdmin` tool, you need to use the public IP of your instance in your Kubernetes cluster and set the port and security context appropriately. You will find these details on the `psql` endpoint line after running the following command:
Configure the restore:
The restore is successful. :::image type="content" source="media/postgres-hyperscale/migrate-pg-destination-dbrestore3.jpg" alt-text="Migrate-db-restore-completed":::
-### Verify that the database was successfully restored in your Azure Arc enabled PostgreSQL Hyperscale server group
+### Verify that the database was successfully restored in your Azure Arc-enabled PostgreSQL Hyperscale server group
Use either of the following methods:
Within your Arc setup you can use `psql` to connect to your Postgres instance, s
``` > [!NOTE]
-> - You will not see so much performance benefits of running on Azure Arc enabled PostgreSQL Hyperscale until you scale out and you shard/distribute the data across the worker nodes of your PostgreSQL Hyperscale server group. See [Next steps](#next-steps).
+> - You will not see much of the performance benefit of running on Azure Arc-enabled PostgreSQL Hyperscale until you scale out and shard/distribute the data across the worker nodes of your PostgreSQL Hyperscale server group. See [Next steps](#next-steps).
> > - It is not possible today to "onboard into Azure Arc" an existing Postgres instance that is running on-premises or in any other cloud. In other words, it is not possible to install some sort of "Azure Arc agent" on your existing Postgres instance to make it a Postgres setup enabled by Azure Arc. Instead, you need to create a new Postgres instance and transfer data into it. You may use the technique shown above to do this or you may use any ETL tool of your choice.
Within your Arc setup you can use `psql` to connect to your Postgres instance, s
* [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)* * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
-> *In these documents, skip the sections **Sign in to the Azure portal**, and **Create an Azure Database for Postgres - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale.
+> *In these documents, skip the sections **Sign in to the Azure portal**, and **Create an Azure Database for Postgres - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md)
azure-arc Migrate To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/migrate-to-managed-instance.md
Title: Migrate a database from SQL Server to Azure Arc enabled SQL Managed Instance
-description: Migrate database from SQL Server to Azure Arc enabled SQL Managed Instance
+ Title: Migrate a database from SQL Server to Azure Arc-enabled SQL Managed Instance
+description: Migrate database from SQL Server to Azure Arc-enabled SQL Managed Instance
Last updated 09/22/2020
-# Migrate: SQL Server to Azure Arc enabled SQL Managed Instance
+# Migrate: SQL Server to Azure Arc-enabled SQL Managed Instance
This scenario walks you through the steps for migrating a database from a SQL Server instance to Azure SQL managed instance in Azure Arc via two different backup and restore methods.
This scenario walks you through the steps for migrating a database from a SQL Se
## Use Azure blob storage
-Use Azure blob storage for migrating to Azure Arc enabled SQL Managed Instance.
+Use Azure blob storage for migrating to Azure Arc-enabled SQL Managed Instance.
This method uses Azure Blob Storage as a temporary storage location that you can back up to and then restore from.
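As an example, the staging container could be prepared with the Azure CLI before taking the backup; this is a minimal sketch, and the storage account and container names are placeholders:

```azurecli
# Create a container to hold the backup files
az storage container create --account-name <storage account> --name sql-migration-backups

# Generate a SAS token the SQL instances can use to write and read the backup
az storage container generate-sas --account-name <storage account> --name sql-migration-backups --permissions rwl --expiry 2021-12-31T00:00:00Z --output tsv
```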
GO
## Next steps
-[Learn more about Features and Capabilities of Azure Arc enabled SQL Managed Instance](managed-instance-features.md)
+[Learn more about Features and Capabilities of Azure Arc-enabled SQL Managed Instance](managed-instance-features.md)
[Start by creating a Data Controller](create-data-controller.md)
-[Already created a Data Controller? Create an Azure Arc enabled SQL Managed Instance](create-sql-managed-instance.md)
+[Already created a Data Controller? Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
azure-arc Monitor Grafana Kibana https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/monitor-grafana-kibana.md
# View logs and metrics using Kibana and Grafana
-Kibana and Grafana web dashboards are provided to bring insight and clarity to the Kubernetes namespaces being used by Azure Arc enabled data services.
+Kibana and Grafana web dashboards are provided to bring insight and clarity to the Kubernetes namespaces being used by Azure Arc-enabled data services.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
azure-arc Monitoring Log Analytics Azure Portal Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/monitoring-log-analytics-azure-portal-managed-instance.md
Title: Monitoring, log analytics, Azure portal (SQL Managed Instance)
-description: Monitor Azure Arc enabled data services for SQL Managed Instance.
+description: Monitor Azure Arc-enabled data services for SQL Managed Instance.
# Monitoring, log analytics, billing information, Azure portal (SQL Managed Instance)
-This article lists additional experiences you can have with Azure Arc enabled data services.
+This article lists additional experiences you can have with Azure Arc-enabled data services.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
This article lists additional experiences you can have with Azure Arc enabled da
[!INCLUDE [azure-arc-common-monitoring](../../../includes/azure-arc-common-monitoring.md)] ## Next steps-- [Read about the overview of Azure Arc enabled data services](overview.md)-- [Read about connectivity modes and requirements for Azure Arc enabled data services](connectivity.md)
+- [Read about the overview of Azure Arc-enabled data services](overview.md)
+- [Read about connectivity modes and requirements for Azure Arc-enabled data services](connectivity.md)
azure-arc Monitoring Log Analytics Azure Portal Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/monitoring-log-analytics-azure-portal-postgresql.md
Title: Monitoring, log analytics, Azure portal (PostgreSQL Hyperscale)
-description: Monitor Azure Arc enabled PostgreSQL services
+description: Monitor Azure Arc-enabled PostgreSQL services
# Monitoring, log analytics, billing information, Azure portal (PostgreSQL Hyperscale)
-This article lists additional experiences you can have with Azure Arc enabled data services.
+This article lists additional experiences you can have with Azure Arc-enabled data services.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
This article lists additional experiences you can have with Azure Arc enabled da
[!INCLUDE [azure-arc-common-monitoring](../../../includes/azure-arc-common-monitoring.md)] ## Next steps-- [Read about the overview of Azure Arc enabled data services](overview.md)-- [Read about connectivity modes and requirements for Azure Arc enabled data services](connectivity.md)
+- [Read about the overview of Azure Arc-enabled data services](overview.md)
+- [Read about connectivity modes and requirements for Azure Arc-enabled data services](connectivity.md)
azure-arc Offline Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/offline-deployment.md
# Offline Deployment Overview
-Typically the container images used in the creation of the Azure Arc data controller, SQL managed instances and PostgreSQL Hyperscale server groups are directly pulled from the Microsoft Container Registry (MCR). In some cases, the environment that you are deploying to will not have connectivity to the Microsoft Container Registry. For situations like this, you can pull the container images using a computer, which _does_ have access to the Microsoft Container Registry and then tag and push them to a private container registry that _is_ connectable from the environment in which you want to deploy Azure Arc enabled data services.
+Typically, the container images used in the creation of the Azure Arc data controller, SQL managed instances, and PostgreSQL Hyperscale server groups are pulled directly from the Microsoft Container Registry (MCR). In some cases, the environment that you are deploying to will not have connectivity to the Microsoft Container Registry. For situations like this, you can pull the container images using a computer that _does_ have access to the Microsoft Container Registry, and then tag and push them to a private container registry that _is_ reachable from the environment in which you want to deploy Azure Arc-enabled data services.
-Because monthly updates are provided for Azure Arc enabled data services and there are a large number of container images, it is best to perform this process of pulling, tagging, and pushing the container images to a private container registry using a script. The script can either be automated or run manually.
+Because monthly updates are provided for Azure Arc-enabled data services and there are a large number of container images, it is best to perform this process of pulling, tagging, and pushing the container images to a private container registry using a script. The script can either be automated or run manually.
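The per-image flow that such a script automates looks roughly like the following; the image name, tag, and private registry address are illustrative placeholders only:

```console
# Pull one of the Arc data services images from the Microsoft Container Registry
docker pull mcr.microsoft.com/arcdata/arc-controller:<tag>

# Re-tag it for the private registry reachable from the disconnected environment
docker tag mcr.microsoft.com/arcdata/arc-controller:<tag> <private registry>/arcdata/arc-controller:<tag>

# Push the re-tagged image to the private registry
docker push <private registry>/arcdata/arc-controller:<tag>
```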
A [sample script](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/scripts/pull-and-push-arc-data-services-images-to-private-registry.py) can be found in the Azure Arc GitHub repository.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/overview.md
Title: What are Azure Arc enabled data services
-description: Introduces Azure Arc enabled data services
+ Title: What are Azure Arc-enabled data services
+description: Introduces Azure Arc-enabled data services
Previously updated : 03/31/2021 Last updated : 07/13/2021
-# Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc enabled data services so that I can leverage the capability of the feature.
+# Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
-# What are Azure Arc enabled data services (preview)?
+# What are Azure Arc-enabled data services (preview)?
Azure Arc makes it possible to run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice.
-Currently, the following Azure Arc enabled data services are available in preview:
+Currently, the following Azure Arc-enabled data services are available in preview:
- SQL Managed Instance - PostgreSQL Hyperscale
Currently, the following Azure Arc enabled data services are available in previe
## Always current
-Azure Arc enabled data services such as Azure Arc enabled SQL managed instance and Azure Arc enabled PostgreSQL Hyperscale receive updates on a frequent basis including servicing patches and new features similar to the experience in Azure. Updates from the Microsoft Container Registry are provided to you and deployment cadences are set by you in accordance with your policies. This way, on-premises databases can stay up to date while ensuring you maintain control. Because Azure Arc enabled data services are a subscription service, you will no longer face end-of-support situations for your databases.
+Azure Arc-enabled data services such as Azure Arc-enabled SQL managed instance and Azure Arc-enabled PostgreSQL Hyperscale receive updates on a frequent basis including servicing patches and new features similar to the experience in Azure. Updates from the Microsoft Container Registry are provided to you and deployment cadences are set by you in accordance with your policies. This way, on-premises databases can stay up to date while ensuring you maintain control. Because Azure Arc-enabled data services are a subscription service, you will no longer face end-of-support situations for your databases.
## Elastic scale
Azure Arc also provides other cloud benefits such as fast deployment and automat
## Unified management
-Using familiar tools such as the Azure portal, Azure Data Studio, and the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)], you can now gain a unified view of all your data assets deployed with Azure Arc. You are able to not only view and manage a variety of relational databases across your environment and Azure, but also get logs and telemetry from Kubernetes APIs to analyze the underlying infrastructure capacity and health. Besides having localized log analytics and performance monitoring, you can now leverage Azure Monitor for comprehensive operational insights across your entire estate.
+Using familiar tools such as the Azure portal, Azure Data Studio, and the Azure CLI (`az`) with the `arcdata` extension, you can now gain a unified view of all your data assets deployed with Azure Arc. You are able to not only view and manage a variety of relational databases across your environment and Azure, but also get logs and telemetry from Kubernetes APIs to analyze the underlying infrastructure capacity and health. Besides having localized log analytics and performance monitoring, you can now leverage Azure Monitor for comprehensive operational insights across your entire estate.
## Disconnected scenario support
-Many of the services such as self-service provisioning, automated backups/restore, and monitoring can run locally in your infrastructure with or without a direct connection to Azure. Connecting directly to Azure opens up additional options for integration with other Azure services such as Azure Monitor and the ability to use the Azure portal and Azure Resource Manager APIs from anywhere in the world to manage your Azure Arc enabled data services.
+Many of the services such as self-service provisioning, automated backups/restore, and monitoring can run locally in your infrastructure with or without a direct connection to Azure. Connecting directly to Azure opens up additional options for integration with other Azure services such as Azure Monitor and the ability to use the Azure portal and Azure Resource Manager APIs from anywhere in the world to manage your Azure Arc-enabled data services.
## Supported regions
The following table describes the scenarios that are currently supported for Arc
[Install the client tools](install-client-tools.md)
-[Create the Azure Arc data controller](create-data-controller.md) (requires installing the client tools first)
+[Plan your Azure Arc data services deployment](plan-azure-arc-data-services.md) (requires installing the client tools first)
[Create an Azure SQL managed instance on Azure Arc](create-sql-managed-instance.md) (requires creation of an Azure Arc data controller first)
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/plan-azure-arc-data-services.md
+
+ Title: Plan Azure Arc-enabled data services deployment
+description: Explains considerations for planning the Azure Arc-enabled data services deployment
+++++ Last updated : 07/13/2021++
+# Plan to deploy Azure Arc-enabled data services
+
+This article describes how to plan to deploy Azure Arc-enabled data services.
++
+First, deployment of Azure Arc data services involves proper understanding of the database workloads and the business requirements for those workloads. For example, consider things like availability, business continuity, and capacity requirements for memory, CPU, and storage for those workloads. Second, the infrastructure to support those database workloads needs to be prepared based on the business requirements.
+
+## Prerequisites
+
+Before you deploy Azure Arc-enabled data services, it's important to understand the prerequisites and to have all the necessary information ready: an infrastructure environment properly configured with the right level of access, and appropriate capacity for storage, CPU, and memory. This preparation helps ensure a successful deployment.
+
+Review the following sections:
+- [Sizing guidance](sizing-guidance.md)
+- [Storage configuration](storage-configuration.md)
+- [Connectivity modes and their requirements](connectivity.md)
+
+Verify that you have:
+- installed the [`arcdata` CLI extension](install-arcdata-extension.md).
+- installed the other [client tools](install-client-tools.md).
+- access to the Kubernetes cluster.
+- your `kubeconfig` file configured. It should point to the Kubernetes cluster you want to deploy to. Run the following command to verify the current context (a sketch for switching contexts follows this list):
+ ```console
+ kubectl cluster-info
+ ```
+- an Azure subscription to which resources such as the Azure Arc data controller, Azure Arc-enabled SQL managed instance, or Azure Arc-enabled PostgreSQL Hyperscale server will be projected and billed.
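+
+If the current context does not point to the intended cluster, you can switch it before deploying. A minimal sketch using standard `kubectl` commands; the context name is a placeholder:
+
+```console
+# List the contexts defined in your kubeconfig file
+kubectl config get-contexts
+
+# Switch to the context for the cluster you want to deploy to
+kubectl config use-context <context-name>
+```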
++
+> [!NOTE]
+> Billing applies after general availability, and does not apply when you use the service for development purposes.
+
+Once the infrastructure is prepared, deploy Azure Arc-enabled data services in the following way:
+1. Create an Azure Arc-enabled data controller on one of the validated distributions of a Kubernetes cluster
+1. Create an Azure Arc-enabled SQL managed instance or an Azure Arc-enabled PostgreSQL Hyperscale server group.
+
+## Overview: Create the Azure Arc-enabled data controller
+
+You can create Azure Arc-enabled data services on multiple different types of Kubernetes clusters and managed Kubernetes services using multiple different approaches.
+
+Currently, the validated list of Kubernetes services and distributions includes:
++
+- AWS Elastic Kubernetes Service (EKS)
+- Azure Kubernetes Service (AKS)
+- Azure Kubernetes Service Engine (AKS Engine) on Azure Stack
+- Azure Kubernetes Service on Azure Stack HCI
+- Azure RedHat OpenShift (ARO)
+- Google Cloud Kubernetes Engine (GKE)
+- Open source, upstream Kubernetes typically deployed using kubeadm
+- OpenShift Container Platform (OCP)
+
+> [!IMPORTANT]
+> * The minimum supported version of Kubernetes is v1.19. See [Known issues](./release-notes.md#known-issues) for additional information.
+> * The minimum supported version of OCP is 4.7.
+> * If you are using Azure Kubernetes Service, your cluster's worker node VM size should be at least **Standard_D8s_v3** and use **premium disks.** The cluster should not span multiple availability zones. See [Known issues](./release-notes.md#known-issues) for additional information.
++
+> [!NOTE]
+> If you are using Red Hat OpenShift Container Platform on Azure, it is recommended to use the latest available version.
+
+Regardless of the option you choose, during the creation process you will need to provide the following information (a hedged CLI sketch follows this list):
+
+- **Data controller name** - descriptive name for your data controller - e.g. "production-dc", "seattle-dc". The name must meet [Kubernetes naming standards](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).
+- **Data controller username** - username for the data controller administrator user.
+- **Data controller password** - password for the data controller administrator user.
+- **Name of your Kubernetes namespace** - the name of the Kubernetes namespace that you want to create the data controller in.
+- **Connectivity mode** - Connectivity mode determines the degree of connectivity from your Azure Arc-enabled data services environment to Azure. Preview currently only supports indirectly connected and directly connected modes. For information, see [connectivity mode](./connectivity.md).
+- **Azure subscription ID** - The Azure subscription GUID for where you want the data controller resource in Azure to be created.
+- **Azure resource group name** - The name of the resource group where you want the data controller resource in Azure to be created.
+- **Azure location** - The Azure location where the data controller resource metadata will be stored in Azure. For a list of available regions, see [Azure global infrastructure / Products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc). The metadata and billing information about the Azure resources managed by the data controller that you are deploying will be stored only in the location in Azure that you specify as the location parameter. If you are deploying in the directly connected mode, the location parameter for the data controller will be the same as the location of the custom location resource that you target.
+- **Service Principal information** - as described in the [Upload prerequisites](upload-metrics-and-logs-to-azure-monitor.md) article, you will need the Service Principal information during Azure Arc data controller create when deploying in *direct* connectivity mode. For *indirect* connectivity mode, the Service Principal is still needed to export and upload manually but after the Azure Arc data controller is created.
+- **Infrastructure** - For billing purposes, it is required to indicate the infrastructure on which you are running Arc-enabled data services. The options are `alibaba`, `aws`, `azure`, `gcp`, `onpremises`, or `other`.
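+
+As a reference, a minimal sketch of supplying these values with the `arcdata` Azure CLI extension in indirect connectivity mode. The values shown are placeholders, a deployment (or custom configuration) profile is typically also required, the administrator credentials are usually prompted for or read from environment variables, and exact parameter spellings (for example, `--namespace` versus `--k8s-namespace`) can vary by extension version:
+
+```azurecli
+# Hypothetical example values; adjust to your environment and deployment profile
+az arcdata dc create \
+  --name production-dc \
+  --namespace arc \
+  --connectivity-mode indirect \
+  --subscription <subscription-id> \
+  --resource-group <resource-group> \
+  --location eastus \
+  --infrastructure onpremises \
+  --use-k8s
+```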
+
+## Additional concepts for direct connected mode
+
+As described in the [connectivity modes](./connectivity.md), Azure Arc data controller can be deployed in **direct** or **indirect** connectivity modes. Deploying Azure Arc data services in **direct** connected mode requires understanding of some additional concepts and considerations.
+First, the Kubernetes cluster where the Arc enabled data services will be deployed needs to be an [Azure Arc-enabled Kubernetes cluster](../kubernetes/overview.md). Onboarding the Kubernetes cluster to Azure Arc provides Azure connectivity that is leveraged for capabilities such as automatic upload of usage information, logs, metrics etc. Connecting your Kubernetes cluster to Azure also allows you to deploy and manage Azure Arc data services to your cluster directly from the Azure portal.
+
+Connecting your Kubernetes cluster to Azure involves the following steps (a hedged sketch follows):
+- Install the required az extensions
+- [Connect your cluster to Azure](../kubernetes/quickstart-connect-cluster.md)
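+
+A rough sketch of these two steps with the Azure CLI; the cluster and resource group names are placeholders, and the linked quickstart remains the authoritative procedure:
+
+```azurecli
+# Add the extension used to connect Kubernetes clusters to Azure Arc
+az extension add --name connectedk8s
+
+# Onboard the cluster that your current kubeconfig context points to
+az connectedk8s connect --name <cluster-name> --resource-group <resource-group>
+```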
+
+Second, after the Kubernetes cluster is onboarded to Azure Arc, deploying Azure Arc data services on an Azure Arc-enabled Kubernetes cluster involves the following (see the sketch after this list):
+- Create the Arc data services extension, learn more about [cluster extensions](../kubernetes/conceptual-extensions.md)
+- Create a custom location, learn more about [custom locations](../kubernetes/conceptual-custom-locations.md)
+- Create the Azure Arc data controller
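+
+A hedged sketch of the first two of these steps; the extension name, namespace, and resource IDs are illustrative assumptions, and the linked articles describe the supported configuration in detail. The data controller itself is then created from the Azure portal or the CLI:
+
+```azurecli
+# Create the Arc data services (bootstrapper) extension on the connected cluster
+az k8s-extension create \
+  --name arc-data-services \
+  --extension-type microsoft.arcdatacontroller \
+  --cluster-type connectedClusters \
+  --cluster-name <cluster-name> \
+  --resource-group <resource-group>
+
+# Create a custom location that targets the namespace managed by the extension
+az customlocation create \
+  --name <custom-location-name> \
+  --resource-group <resource-group> \
+  --namespace arc \
+  --host-resource-id <connected-cluster-resource-id> \
+  --cluster-extension-ids <extension-resource-id>
+```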
+
+After the Azure Arc data controller is installed, data services such as Azure Arc-enabled SQL managed instance or Azure Arc-enabled PostgreSQL Hyperscale Server can be created.
++
+## Next steps
+
+There are multiple options for creating the Azure Arc data controller:
+
+> **Just want to try things out?**
+> Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM!
+>
+- [Create a data controller in direct connected mode with the Azure portal](create-data-controller-direct-prerequisites.md)
+- [Create a data controller in indirect connected mode with CLI](create-data-controller-indirect-cli.md)
+- [Create a data controller in indirect connected mode with Azure Data Studio](create-data-controller-indirect-azure-data-studio.md)
+- [Create a data controller in indirect connected mode from the Azure portal via a Jupyter notebook in Azure Data Studio](create-data-controller-indirect-azure-portal.md)
+- [Create a data controller in indirect connected mode with Kubernetes tools such as kubectl or oc](create-data-controller-using-kubernetes-native-tools.md)
+- [Create a data controller with Azure Arc Jumpstart for an accelerated experience of a test deployment](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/)
azure-arc Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/point-in-time-restore.md
+
+ Title: Restore a database to a Point in Time
+description: Explains how to perform a Point in Time Restore operation
++++++ Last updated : 07/13/2021+++
+# Perform a Point in Time Restore
++
+Azure Arc-enabled SQL Managed Instance comes built in with many PaaS-like capabilities. One such capability is the ability to restore a database to a point in time, within the pre-configured retention settings. This article describes how to do a point-in-time restore of a database in Azure Arc-enabled SQL managed instance.
+
+Point-in-time restore is an instance-level setting with two properties: Recovery Point Objective (RPO) and Retention Time (RT). The Recovery Point Objective setting determines how often transaction log backups are taken; it is also the maximum amount of data loss to be expected. Retention Time is how long the backups (full, differential, and transaction log) are kept.
+
+Currently, Point-in-time restore can restore a database:
+
+- from an existing database on a SQL instance
+- to a new database on the same SQL instance
+
+### Limitations
+
+Point-in-time restore to Azure Arc-enabled SQL Managed Instance has the following limitations:
+
+- You can only restore to the same Azure Arc-enabled SQL managed instance
+- Point-in-time restore can be performed only via a yaml file
+- Older backup files that are beyond the pre-configured retention period need to be manually cleaned up
+- Renaming a database starts a new backup chain in a new folder
+- Dropping and creating different databases with same names isn't handled properly at this time
+
+### Edit PITR settings
+
+##### Enable/disable automated backups
+
+Point-In-Time-Restore (PITR) service is enabled by default with the following settings:
+
+- Recovery Point Objective (RPO) = 300 seconds. Accepted values are 0, or between 300 and 600 (in seconds)
+
+This means that, by default, log backups for all databases on the Azure Arc-enabled SQL managed instance are taken every 300 seconds (5 minutes). The value can be changed to 0 to disable automated backups, or to a higher value in seconds depending on the RPO requirement for the databases on the SQL instance.
+
+The PITR service itself cannot be disabled but the automated backups for a specific instance of Azure Arc-enabled SQL managed instance can either be disabled, or the default settings changed.
+
+The RPO can be edited by changing the value for the property ```recoveryPointObjectiveInSeconds``` as follows:
+
+```console
+kubectl edit sqlmi <sqlinstancename> -n <namespace> -o yaml
+```
+
+This should open up the Custom Resource spec for Azure Arc-enabled SQL managed instance in your default editor. Look for ```backup``` setting under ```spec```:
+
+```yaml
+backup:
+ recoveryPointObjectiveInSeconds: 300
+```
+
+Edit the value for ```recoveryPointObjectiveInSeconds``` in the editor and save the changes for the new setting to take effect.
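+
+If you prefer a scripted change over the interactive editor, the same field can be updated with `kubectl patch`. A minimal sketch, assuming the `sqlmi` resource kind and the `spec.backup.recoveryPointObjectiveInSeconds` path shown above:
+
+```console
+kubectl patch sqlmi <sqlinstancename> -n <namespace> --type merge \
+  -p '{"spec": {"backup": {"recoveryPointObjectiveInSeconds": 600}}}'
+```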
+
+> [!NOTE]
+> Editing the RPO setting will reboot the pod containing the Azure Arc-enabled SQL managed instance.
+
+### Restore a database to a Point-In-Time
+
+A restore operation can be performed on an Azure Arc-enabled SQL managed instance to restore from a source database to a point-in-time within the retention period.
+**(1) Create a yaml file as shown below in your editor:**
+
+```yaml
+apiVersion: tasks.sql.arcdata.microsoft.com/v1beta1
+kind: SqlManagedInstanceRestoreTask
+metadata:
+ name: sql01-restore-20210707
+ namespace: arc
+spec:
+ source:
+ name: sql01
+ database: db01
+ restorePoint: "2021-07-01T02:00:00Z"
+ destination:
+ name: sql01
+ database: db02
+```
+
+- name - Unique string for each custom resource, which is a Kubernetes requirement
+- namespace - Namespace where the Azure Arc-enabled SQL managed instance is running
+- source > name - Name of the Azure Arc-enabled SQL managed instance
+- source > database - Name of the source database on the Azure Arc-enabled SQL managed instance
+- restorePoint - Point in time for the restore operation, in UTC date-time format
+- destination > name - Name of the target Azure Arc-enabled SQL managed instance to restore to. Currently, only restores to the same instance are supported.
+- destination > database - Name of the new database that the restore will be applied to
+
+**(2) Apply the yaml file to create a task to initiate the restore operation**
+
+Run the command as follows to initiate the restore operation:
+
+```console
+kubectl apply -f sql01-restore-task.yaml
+```
+
+> [!NOTE]
+> The name of the task inside the custom resource and the file name don't have to be the same.
++
+**Check the status of restore**
+
+- Restore task status gets updated about every 10 seconds and the status changes from "Waiting" --> "Restoring" --> "Completed"/"Failed".
+- While a database is being restored, the status would reflect "Restoring".
+
+The status of the task can be retrieved as follows:
+
+```console
+kubectl get sqlmirestoretask -n arc
+```
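+
+To watch the task until it reaches a terminal state, you can add the standard `-w` (watch) flag and the task name from the sample spec above:
+
+```console
+kubectl get sqlmirestoretask sql01-restore-20210707 -n arc -w
+```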
+
+### Monitor your backups
+
+The backups are stored under the ```/var/opt/mssql/backups/archived/<dbname>/<datetime>``` folder, where ```<dbname>``` is the name of the database and ```<datetime>``` is a timestamp in UTC format for the beginning of each full backup. Each time a full backup is initiated, a new folder is created with the full backup and all subsequent differential and transaction log backups inside that folder. The most current full backup and its subsequent differential and transaction log backups are stored under the ```/var/opt/mssql/backups/current/<dbname><datetime>``` folder.
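+
+To inspect the backup folders, one option is to list them from inside the pod with `kubectl exec`. A sketch, assuming a pod named `sql01-0` in the `arc` namespace and a SQL container named `arc-sqlmi`; adjust all three to your deployment:
+
+```console
+kubectl exec -it sql01-0 -n arc -c arc-sqlmi -- ls -l /var/opt/mssql/backups/archived/<dbname>
+```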
++
+### Clean up
+
+If you need to delete older backups, either to free up space or because they are no longer needed, any of the folders under the ```/var/opt/mssql/backups/archived/``` folder can be removed. Removing folders in the middle of a timeline could impact the ability to restore to a point in time during that window. It is recommended to delete the oldest folders first, allowing for a continuous timeline of restorability.
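+
+For example, removing the oldest archived folder for a database could look like the following; the pod and container names are the same assumptions as above, and you should verify the folder name before deleting:
+
+```console
+kubectl exec -it sql01-0 -n arc -c arc-sqlmi -- rm -rf /var/opt/mssql/backups/archived/<dbname>/<oldest-datetime>
+```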
++
azure-arc Postgresql Hyperscale Server Group Placement On Kubernetes Cluster Nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/postgresql-hyperscale-server-group-placement-on-kubernetes-cluster-nodes.md
Last updated 06/02/2021
-# Azure Arc enabled PostgreSQL Hyperscale server group placement
+# Azure Arc-enabled PostgreSQL Hyperscale server group placement
-In this article, we are taking an example to illustrate how the PostgreSQL instances of Azure Arc enabled PostgreSQL Hyperscale server group are placed on the physical nodes of the Kubernetes cluster that hosts them.
+In this article, we are taking an example to illustrate how the PostgreSQL instances of Azure Arc-enabled PostgreSQL Hyperscale server group are placed on the physical nodes of the Kubernetes cluster that hosts them.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
The architecture can be represented as:
:::image type="content" source="media/migrate-postgresql-data-into-postgresql-hyperscale-server-group/2_logical_cluster.png" alt-text="Logical representation of 4 nodes grouped in a Kubernetes cluster":::
-The Kubernetes cluster hosts one Azure Arc Data Controller and one Azure Arc enabled PostgreSQL Hyperscale server group.
+The Kubernetes cluster hosts one Azure Arc Data Controller and one Azure Arc-enabled PostgreSQL Hyperscale server group.
This server group is constituted of three PostgreSQL instances: one coordinator and two workers. List the pods with the command:
postgres01c-0 3/3 Running 0 9h
postgres01w-0 3/3 Running 0 9h postgres01w-1 3/3 Running 0 9h ```
-Each of those pods host a PostgreSQL instance. Together, the pods form the Azure Arc enabled PostgreSQL Hyperscale server group:
+Each of those pods host a PostgreSQL instance. Together, the pods form the Azure Arc-enabled PostgreSQL Hyperscale server group:
```output Pod name Role in the server group
Containers:
… ```
-Each pod that is part of the Azure Arc enabled PostgreSQL Hyperscale server group hosts the following three containers:
+Each pod that is part of the Azure Arc-enabled PostgreSQL Hyperscale server group hosts the following three containers:
|Containers|Description |-|-| |`Fluentbit` |Data * log collector: https://fluentbit.io/
-|`Postgres`|PostgreSQL instance part of the Azure Arc enabled PosgreSQL Hyperscale server group
+|`Postgres`|PostgreSQL instance that is part of the Azure Arc-enabled PostgreSQL Hyperscale server group
|`Telegraf` |Metrics collector: https://www.influxdata.com/time-series-platform/telegraf/ The architecture looks like: :::image type="content" source="media/migrate-postgresql-data-into-postgresql-hyperscale-server-group/3_pod_placement.png" alt-text="3 pods each placed on separate nodes":::
-It means that, at this point, each PostgreSQL instance constituting the Azure Arc enabled PostgreSQL Hyperscale server group is hosted on specific physical host within the Kubernetes container. This configuration provides the most performance out of the Azure Arc enabled PostgreSQL Hyperscale server group as each role (coordinator and workers) uses the resources of each physical node. Those resources are not shared among several PostgreSQL roles.
+It means that, at this point, each PostgreSQL instance constituting the Azure Arc-enabled PostgreSQL Hyperscale server group is hosted on a specific physical node within the Kubernetes cluster. This configuration provides the most performance out of the Azure Arc-enabled PostgreSQL Hyperscale server group because each role (coordinator and workers) uses the resources of its own physical node. Those resources are not shared among several PostgreSQL roles.
-## Scale out Azure Arc enabled PostgreSQL Hyperscale
+## Scale out Azure Arc-enabled PostgreSQL Hyperscale
Now, letΓÇÖs scale out to add a third worker node to the server group and observe what happens. It will create a fourth PostgreSQL instance that will be hosted in a fourth pod. To scale out run the command:
The architecture looks like:
Why isnΓÇÖt the new worker/pod placed on the remaining physical node of the Kubernetes cluster aks-agentpool-42715708-vmss000003?
-The reason is that the last physical node of the Kubernetes cluster is actually hosting several pods that host additional components that are required to run Azure Arc enabled data services.
+The reason is that the last physical node of the Kubernetes cluster is actually hosting several pods that host additional components that are required to run Azure Arc-enabled data services.
Kubernetes assessed that the best candidate ΓÇô at the time of scheduling ΓÇô to host the additional worker is the aks-agentpool-42715708-vmss000000 physical node. Using the same commands as above; we see what each physical node is hosting:
The architecture looks like:
:::image type="content" source="media/migrate-postgresql-data-into-postgresql-hyperscale-server-group/5_full_list_of_pods.png" alt-text="All pods in namespace on various nodes":::
-As described above, the coordinator nodes (Pod 1) of the Azure Arc enabled Postgres Hyperscale server group shares the same physical resources as the third worker node (Pod 4) of the server group. That is acceptable because the coordinator node typically uses very few resources in comparison to what a worker node may be using. For this reason, carefully chose:
+As described above, the coordinator node (Pod 1) of the Azure Arc-enabled Postgres Hyperscale server group shares the same physical resources as the third worker node (Pod 4) of the server group. That is acceptable because the coordinator node typically uses very few resources in comparison to what a worker node may be using. For this reason, carefully choose:
- the size of the Kubernetes cluster and the characteristics of each of its physical nodes (memory, vCore) - the number of physical nodes inside the Kubernetes cluster - the applications or workloads you host on the Kubernetes cluster.
-The implication of hosting too many workloads on the Kubernetes cluster is throttling may happen for the Azure Arc enabled PostgreSQL Hyperscale server group. If that happens, you will not benefit so much from its capability to scale horizontally. The performance you get out of the system is not just about the placement or the physical characteristics of the physical nodes or the storage system. The performance you get is also about how you configure each of the resources running inside the Kubernetes cluster (including Azure Arc enabled PostgreSQL Hyperscale), for instance the requests and limits you set for memory and vCore. The amount of workload you can host on a given Kubernetes cluster is relative to the characteristics of the Kubernetes cluster, the nature of the workloads, the number of users, how the operations of the Kubernetes cluster are done…
+The implication of hosting too many workloads on the Kubernetes cluster is that throttling may happen for the Azure Arc-enabled PostgreSQL Hyperscale server group. If that happens, you will not benefit as much from its capability to scale horizontally. The performance you get out of the system is not just about the placement or the physical characteristics of the physical nodes or the storage system. It is also about how you configure each of the resources running inside the Kubernetes cluster (including Azure Arc-enabled PostgreSQL Hyperscale), for instance the requests and limits you set for memory and vCore. The amount of workload you can host on a given Kubernetes cluster is relative to the characteristics of the Kubernetes cluster, the nature of the workloads, the number of users, and how the Kubernetes cluster is operated.
## Scale out AKS
-LetΓÇÖs demonstrate that scaling horizontally both the AKS cluster and the Azure Arc enabled PostgreSQL Hyperscale server is a way to benefit the most from the high performance of Azure Arc enabled PostgreSQL Hyperscale.
+LetΓÇÖs demonstrate that scaling horizontally both the AKS cluster and the Azure Arc-enabled PostgreSQL Hyperscale server is a way to benefit the most from the high performance of Azure Arc-enabled PostgreSQL Hyperscale.
LetΓÇÖs add a fifth node to the AKS cluster: :::row:::
And letΓÇÖs update the representation of the architecture of our system:
We can observe that the new physical node of the Kubernetes cluster is hosting only the metrics pod that is necessary for Azure Arc data services. Note that, in this example, we are focusing only on the namespace of the Arc Data Controller, we are not representing the other pods.
-## Scale out Azure Arc enabled PostgreSQL Hyperscale again
+## Scale out Azure Arc-enabled PostgreSQL Hyperscale again
-The fifth physical node is not hosting any workload yet. As we scale out the Azure Arc enabled PostgreSQL Hyperscale, Kubernetes will optimize the placement of the new PostgreSQL pod and should not collocate it on physical nodes that are already hosting more workloads.
-Run the following command to scale the Azure Arc enabled PostgreSQL Hyperscale from 3 to 4 workers. At the end of the operation, the server group will be constituted and distributed across five PostgreSQL instances, one coordinator and four workers.
+The fifth physical node is not hosting any workload yet. As we scale out the Azure Arc-enabled PostgreSQL Hyperscale, Kubernetes will optimize the placement of the new PostgreSQL pod and should not collocate it on physical nodes that are already hosting more workloads.
+Run the following command to scale the Azure Arc-enabled PostgreSQL Hyperscale from 3 to 4 workers. At the end of the operation, the server group will be constituted and distributed across five PostgreSQL instances, one coordinator and four workers.
```console azdata arc postgres server edit --name postgres01 --workers 4
Kubernetes did schedule the new PostgreSQL pod in the least loaded physical node
## Summary
-To benefit the most from the scalability and the performance of scaling Azure Arc enabled server group horizontally, you should avoid resource contention inside the Kubernetes cluster:
-- between the Azure Arc enabled PostgreSQL Hyperscale server group and other workloads hosted on the same Kubernetes cluster-- between all the PostgreSQL instances that constitute the Azure Arc enabled PostgreSQL Hyperscale server group
+To benefit the most from the scalability and performance of scaling an Azure Arc-enabled server group horizontally, you should avoid resource contention inside the Kubernetes cluster:
+- between the Azure Arc-enabled PostgreSQL Hyperscale server group and other workloads hosted on the same Kubernetes cluster
+- between all the PostgreSQL instances that constitute the Azure Arc-enabled PostgreSQL Hyperscale server group
You can achieve this in several ways:-- Scale out both Kubernetes and Azure Arc enabled Postgres Hyperscale: consider scaling horizontally the Kubernetes cluster the same way you are scaling the Azure Arc enabled PostgreSQL Hyperscale server group. Add a physical node to the cluster for each worker you add to the server group.-- Scale out Azure Arc enabled Postgres Hyperscale without scaling out Kubernetes: by setting the right resource constraints (request and limits on memory and vCore) on the workloads hosted in Kubernetes (Azure Arc enabled PostgreSQL Hyperscale included), you will enable the colocation of workloads on Kubernetes and reduce the risk of resource contention. You need to make sure that the physical characteristics of the physical nodes of the Kubernetes cluster can honor the resources constraints you define. You should also ensure that equilibrium remains as the workloads evolve over time or as more workloads are added in the Kubernetes cluster.
+- Scale out both Kubernetes and Azure Arc-enabled Postgres Hyperscale: consider scaling horizontally the Kubernetes cluster the same way you are scaling the Azure Arc-enabled PostgreSQL Hyperscale server group. Add a physical node to the cluster for each worker you add to the server group.
+- Scale out Azure Arc-enabled Postgres Hyperscale without scaling out Kubernetes: by setting the right resource constraints (request and limits on memory and vCore) on the workloads hosted in Kubernetes (Azure Arc-enabled PostgreSQL Hyperscale included), you will enable the colocation of workloads on Kubernetes and reduce the risk of resource contention. You need to make sure that the physical characteristics of the physical nodes of the Kubernetes cluster can honor the resources constraints you define. You should also ensure that equilibrium remains as the workloads evolve over time or as more workloads are added in the Kubernetes cluster.
- Use the Kubernetes mechanisms (pod selector, affinity, anti-affinity) to influence the placement of the pods. ## Next steps
-[Scale out your Azure Arc enabled PostgreSQL Hyperscale server group by adding more worker nodes](scale-out-in-postgresql-hyperscale-server-group.md)
+[Scale out your Azure Arc-enabled PostgreSQL Hyperscale server group by adding more worker nodes](scale-out-in-postgresql-hyperscale-server-group.md)
azure-arc Privacy Data Collection And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/privacy-data-collection-and-reporting.md
Title: Data collection and reporting | Azure Arc enabled data services
+ Title: Data collection and reporting | Azure Arc-enabled data services
description: Explains the type of data that is transmitted by Arc enabled Data services to Microsoft.
# Azure Arc data services data collection and reporting
-This article describes the data that Azure Arc enabled data services transmits to Microsoft.
+This article describes the data that Azure Arc-enabled data services transmits to Microsoft.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## Related products
-Azure Arc enabled data services may use some or all of the following products:
+Azure Arc-enabled data services may use some or all of the following products:
- SQL MI ΓÇô Azure Arc - PostgreSQL Hyperscale ΓÇô Azure Arc
Customer Experience Improvement Program (CEIP)|[CEIP summary](/sql/sql-server/us
## Detailed description of data
-This section provides more details about the information included with the Azure Arc enabled data services transmits to Microsoft.
+This section provides more details about the information that Azure Arc-enabled data services transmits to Microsoft.
### Operational data
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/release-notes.md
Title: Azure Arc enabled data services - Release notes
-description: Latest release notes
+ Title: Azure Arc-enabled data services - Release notes
+description: Latest release notes
Previously updated : 06/02/2021 Last updated : 07/13/2021
-# Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc enabled data services so that I can leverage the capability of the feature.
+# Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
-# Release notes - Azure Arc enabled data services (Preview)
+# Release notes - Azure Arc-enabled data services (Preview)
-This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc enabled data services.
+This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
-## May 2021
+## June 2021
-This preview release is published on June 2, 2021.
+This preview release is published July 13, 2021.
-As a preview feature, the technology presented in this article is subject to [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+### Breaking changes
-### Breaking change
+#### New deployment templates
-- Kubernetes native deployment templates have been modified. Update update your .yml templates.
- - Updated templates for data controller, bootstrapper, & SQL Managed instance: [GitHub microsoft/azure-arc pr 574](https://github.com/microsoft/azure_arc/pull/574)
- - Updated templates for PostgreSQL Hyperscale: [GitHub microsoft/azure-arc pr 574](https://github.com/microsoft/azure_arc/pull/574)
+- Kubernetes native deployment templates have been modified for data controller, bootstrapper, & SQL managed instance. Update your .yaml templates. [Sample yaml files](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml)
+
+#### New Azure CLI extension for data controller and Azure Arc-enabled SQL Managed Instance
+
+This release introduces the `arcdata` extension to the Azure CLI. To add the extension, run the following command:
+
+```azurecli
+az extension add --name arcdata
+```
+
+The extension supports command-line interaction with data controller and SQL managed instance and PostgreSQL Hyperscale resources.
+
+To update your scripts for data controller, replace `azdata arc dc...` with `az arcdata dc...`.
+
+To update your scripts for managed instance, replace `azdata arc sql mi...` with `az sql mi-arc...`.
+
+For Azure Arc-enabled PostgreSQL Hyperscale, replace `azdata arc sql postgres...` with `az postgres arc-server...`.
+
+In addition to the parameters that have historically existed on the `azdata` commands, the same commands in the `arcdata` Azure CLI extension have some new parameters; for example, `--namespace` and `--use-k8s` are now required. The `--use-k8s` parameter differentiates whether the command should be sent to the Kubernetes API or to the ARM API. For now, all Azure CLI commands for Arc-enabled data services target only the Kubernetes API.
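+
+As an illustration, a hedged before-and-after for listing SQL managed instances; the namespace is a placeholder, and `--namespace` was later renamed `--k8s-namespace` in newer versions of the extension:
+
+```azurecli
+# Before: azdata
+azdata arc sql mi list
+
+# After: Azure CLI with the arcdata extension
+az sql mi-arc list --namespace arc --use-k8s
+```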
+
+Some of the short forms of the parameter names (e.g. `--core-limit` as `-cl`) have either been removed or changed. Use the new parameter short names or the long name.
+
+The `azdata arc dc export` command is no longer functional. Use `az arcdata dc export` instead.
+
+#### Required property: `infrastructure`
+
+The `infrastructure` property is a new required property when deploying a data controller. Adjust your yaml files, azdata/az scripts, and ARM templates to account for specifying this property value. Allowed values are `alibaba`, `aws`, `azure`, `gcp`, `onpremises`, or `other`.
+
+#### Kibana login
+
+The OpenDistro security pack has been removed. Logging in to Kibana is now done through a generic browser username/password prompt. More information about how to configure additional authentication/authorization options will be provided later.
+
+#### CRD version bump to `v1beta1`
+
+All CRDs have had the version bumped from `v1alpha1` to `v1beta1` for this release. Be sure to delete all CRDs as part of the uninstall process if you have deployed a version of Arc-enabled data services prior to the June 2021 release. The new CRDs deployed with the June 2021 release will have `v1beta1` as the version.
+
+#### Azure Arc-enabled SQL Managed Instance
+
+Automated backup service is available and on by default. Keep a close watch on space availability on the backup volume.
### What's new
+This release introduces `az` CLI extensions for Azure Arc-enabled data services. See information in [Breaking changes](#breaking-changes) above.
+ #### Platform -- Create and delete data controller, SQL managed instance, and PostgreSQL Hyperscale server groups from Azure portal. -- Validate portal actions when deleting Azure Arc data services. For instance, the portal alerts when you attempt to delete the data controller when there are SQL Managed Instances deployed using the data controller.-- Create custom configuration profiles to support custom settings when you deploy Arc enabled data controller using the Azure portal.-- Optionally, automatically upload your logs to Azure Log analytics workspace in the directly connected mode.
+#### Data controller
-#### Azure Arc enabled PostgreSQL Hyperscale
+- Streamlined user experience for deploying a data controller in the direct connected mode from the Azure portal. Once a Kubernetes cluster has been Arc-enabled, you can deploy the data controller entirely from the portal with the Arc data controller create wizard in one motion. This deployment also creates the custom location and Arc-enabled data services extension (bootstrapper). You can also pre-create the custom location and/or extension and configure the data controller deployment to use them.
+- New `Infrastructure` property is a required property when you deploy an Arc data controller. This property will be required for billing purposes. More information will be provided at general availability.
+- Various usability improvements in the data controller user experience in the Azure portal including the ability to better see the deployment status of resources that are in the deployment process on the Kubernetes cluster.
+- Data controller automatically uploads logs (optionally) and now also metrics to Azure in direct connected mode.
+- The monitoring stack (metrics and logs databases/dashboards) has now been packaged into its own custom resource definition (CRD) - `monitors.arcdata.microsoft.com`. When this custom resource is created the monitoring stack pods are created. When it is deleted the monitoring stack pods are deleted. When the data controller is created the monitor custom resource is automatically created.
+- New regions supported for direct connected mode (preview): East US 2, West US 2, South Central US, UK South, France Central, Southeast Asia, Australia East.
+- The custom location resource chart on the overview blade now shows Arc-enabled data services resources that are deployed to it.
+- Diagnostics and solutions have been added to the Azure portal for data controller.
+- Added new `Observed Generation` property to all Arc related custom resources.
+- Credential manager service is now included and handles the automated distribution of certificates to all services managed by the data controller.
-This release introduces the following features or capabilities:
+#### Azure Arc-enabled PostgreSQL Hyperscale
-- Delete an Azure Arc PostgreSQL Hyperscale from the Azure portal when its Data Controller was configured for Direct connectivity mode.-- Deploy Azure Arc enabled PostgreSQL Hyperscale from the Azure database for Postgres deployment page in the Azure portal. See [Select Azure Database for PostgreSQL deployment option - Microsoft Azure](https://ms.portal.azure.com/#create/Microsoft.PostgreSQLServer).-- Specify storage classes and Postgres extensions when deploying Azure Arc enabled PostgreSQL Hyperscale from the Azure portal.-- Reduce the number of worker nodes in your Azure Arc enabled PostgreSQL Hyperscale. You can do this operation (known as scale in as opposed to scale out when you increase the number of worker nodes) from `azdata` command line.
+- Azure Arc PostgreSQL Hyperscale now supports NFS storage.
+- Azure Arc PostgreSQL Hyperscale deployments now support Kubernetes pod-to-node assignment strategies with nodeSelector, nodeAffinity, and anti-affinity.
+- You can now configure compute parameters (vCore & memory) per role (Coordinator or Worker) when you deploy a PostgreSQL Hyperscale server group or after deployment from Azure Data Studio and from the Azure portal.
+- From the Azure portal, you can now view the list of PostgreSQL extensions created on your PostgreSQL Hyperscale server group.
+- From the Azure portal, you can delete Arc-enabled PostgreSQL Hyperscale groups on a data controller that is directly connected to Azure.
-#### Azure Arc enabled SQL Managed Instance
-- New [Azure CLI extension](/cli/azure/azure-cli-extensions-overview) for Arc enabled SQL Managed Instance has the same commands as `azdata arc sql mi <command>`. All Arc enabled SQL Managed Instance commands are located at `az sql mi-arc`. All Arc related `azdata` commands will be deprecated and moved to Azure CLI in a future release.
+#### Azure Arc-enabled SQL Managed Instance
- To add the extension:
-
- ```azurecli
- az extension add --source https://azurearcdatacli.blob.core.windows.net/cli-extensions/arcdata-0.0.1-py2.py3-none-any.whl -y
- az sql mi-arc --help
- ```
+- Automated backups are now enabled.
+- You can now restore a database backup as a new database on the same SQL instance by creating a new custom resource based on the `sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com` custom resource definition (CRD). See documentation for details. There is no command-line interface (`azdata` or `az`), Azure portal, or Azure Data Studio experience for restoring a database yet.
+- The version of SQL engine binaries included in this release is aligned to the latest binaries that are deployed globally in Azure SQL Managed Instance (PaaS in Azure). This alignment enables backup/restore back and forth between Azure SQL Managed Instance PaaS and Azure Arc-enabled Azure SQL Managed Instance. More details on the compatibility will be provided later.
+- You can now delete Azure Arc SQL Managed Instances from the Azure portal in direct connected mode.
+- You can now configure a SQL Managed Instance to have a pricing tier (`GeneralPurpose`, `BusinessCritical`) and a license type (`LicenseIncluded`, `BasePrice` (used for AHB pricing), or `developer`). There will be no charges incurred for using Azure Arc-enabled SQL Managed Instance until the General Availability date (publicly announced as scheduled for July 30, 2021) and until you upgrade to the General Availability version of the service.
+- The `arcdata` extension for Azure Data Studio now has additional parameters that can be configured for deploying and editing SQL Managed Instances: enable/disable agent, admin login secret, annotations, labels, service annotations, service labels, SSL/TLS configuration settings, collation, language, and trace flags.
+- New commands in `azdata`/custom resource tasks for setting up distributed availability groups. These commands are in early stages of preview; documentation will be provided soon.
-- Manually trigger a failover of using Transact-SQL. Do the following commands in order:-
- 1. On the primary replica endpoint connection:
-
- ```sql
- ALTER AVAILABILITY GROUP current SET (ROLE = SECONDARY);
- ```
+ > [!NOTE]
+ > These commands will migrate to the `az arcdata` extension.
- 1. On the secondary replica endpoint connection:
-
- ```sql
- ALTER AVAILABILITY GROUP current SET (ROLE = PRIMARY);
- ```
-
-- Transact-SQL `BACKUP` command is blocked unless using `COPY_ONLY` setting. This supports point in time restore capability.
+- `azdata arc dc export` is deprecated. It is replaced by `az arcdata dc export` in the `arcdata` extension for the Azure CLI (`az`). It uses a different approach to export the data out. It does not connect directly to the data controller API anymore. Instead it creates an export task based on the `exporttasks.tasks.arcdata.microsoft.com` custom resource definition (CRD). The export task custom resource that is created drives a workflow to generate a downloadable package. The Azure CLI waits for the completion of this task and then retrieves the secure URL from the task custom resource status to download the package.
+- Support for using NFS-based storage classes.
+- Diagnostics and solutions have been added to the Azure portal for Arc SQL Managed Instance
### Known issues #### Platform -- You can create a data controller, SQL managed instance, or PostgreSQL Hyperscale server group on a connected cluster with the Azure portal. Deployment with other Azure Arc enabled data services tools are not supported. Specifically, you can't deploy a data controller in direct connect mode with any of the following tools during this release.
+- You can create a data controller, SQL managed instance, or PostgreSQL Hyperscale server group on a connected cluster with the Azure portal. Deployment is not supported with other Azure Arc-enabled data services tools. Specifically, you can't deploy a data controller in direct connect mode with any of the following tools during this release.
- Azure Data Studio - Azure Data CLI (`azdata`) - Kubernetes native tools (`kubectl`)
+ - The `arcdata` extension for the Azure CLI (`az`)
- [Deploy Azure Arc data controller | Direct connect mode](deploy-data-controller-direct-mode.md) explains how to create the data controller in the portal.
+ [Create Azure Arc data controller in Direct connectivity mode from Azure portal](create-data-controller-direct-azure-portal.md) explains how to create the data controller in the portal.
-- You can still use `kubectl` to create resources directly on a Kubernetes cluster, however they will not be reflected in the Azure portal.
+- You can still use `kubectl` to create resources directly on a Kubernetes cluster, however they will not be reflected in the Azure portal if you are using direct connected mode.
-- In direct connected mode, upload of usage, metrics, and logs using `azdata arc dc upload` is currently blocked. Usage is automatically uploaded. Upload for data controller created in indirect connected mode should continue to work.
+- In direct connected mode, upload of usage, metrics, and logs using `az arcdata dc upload` is currently blocked. Usage is automatically uploaded. Upload for data controller created in indirect connected mode should continue to work.
- Automatic upload of usage data in direct connectivity mode will not succeed if using proxy via `ΓÇôproxy-cert <path-t-cert-file>`.-- Azure Arc enabled SQL Managed instance and Azure Arc enabled PostgreSQL Hyperscale are not GB18030 certified.-- Currently, only one Azure Arc data controller in direct connected mode per kubernetes cluster is supported.
+- Azure Arc-enabled SQL Managed instance and Azure Arc-enabled PostgreSQL Hyperscale are not GB18030 certified.
+- Currently, only one Azure Arc data controller per Kubernetes cluster is supported.
-#### Azure Arc enabled PostgreSQL Hyperscale
+#### Data controller
+
+Deleting the data controller does not in all cases delete the monitor custom resource. You can delete it manually by running the command `kubectl delete monitor monitoringstack -n <namespace>`.
+
+#### Azure Arc-enabled PostgreSQL Hyperscale
-- Point in time restore is not supported for now on NFS storage. - It is not possible to enable and configure the `pg_cron` extension at the same time. You need to use two commands for this. One command to enable it and one command to configure it. For example: 1. Enable the extension:
-
+ ```console
- azdata arc postgres server edit -n myservergroup --extensions pg_cron
+ azdata arc postgres server edit -n myservergroup --extensions pg_cron
``` 1. Restart the server group. 1. Configure the extension:
-
+ ```console azdata arc postgres server edit -n myservergroup --engine-settings cron.database_name='postgres' ``` If you execute the second command before the restart has completed it will fail. If that is the case, simply wait for a few more moments and execute the second command again. -- Passing an invalid value to the `--extensions` parameter when editing the configuration of a server group to enable additional extensions incorrectly resets the list of enabled extensions to what it was at the create time of the server group and prevents user from creating additional extensions. The only workaround available when that happens is to delete the server group and redeploy it.
+- Passing an invalid value to the `--extensions` parameter when editing the configuration of a server group to enable additional extensions incorrectly resets the list of enabled extensions to what it was at the create time of the server group and prevents user from creating additional extensions. The only workaround available when that happens is to delete the server group and redeploy it.
+
+- Point in time restore is not supported for now on NFS storage.
+
+#### Azure Arc-enabled SQL Managed Instance
+
+The automated backup service has some limitations. Refer to the point-in-time restore article to learn more.
+
+## May 2021
+
+This preview release is published on June 2, 2021.
+
+As a preview feature, the technology presented in this article is subject to [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+### Breaking change
+
+- Kubernetes native deployment templates have been modified. Update your .yml templates.
+ - Updated templates for data controller, bootstrapper, & SQL Managed instance: [GitHub microsoft/azure-arc pr 574](https://github.com/microsoft/azure_arc/pull/574)
+ - Updated templates for PostgreSQL Hyperscale: [GitHub microsoft/azure-arc pr 574](https://github.com/microsoft/azure_arc/pull/574)
+
+### What's new
+
+#### Platform
+
+- Create and delete data controller, SQL managed instance, and PostgreSQL Hyperscale server groups from Azure portal.
+- Validate portal actions when deleting Azure Arc data services. For instance, the portal alerts when you attempt to delete the data controller when there are SQL Managed Instances deployed using the data controller.
+- Create custom configuration profiles to support custom settings when you deploy Arc-enabled data controller using the Azure portal.
+- Optionally, automatically upload your logs to Azure Log analytics workspace in the directly connected mode.
+
+#### Azure Arc-enabled PostgreSQL Hyperscale
+
+This release introduces the following features or capabilities:
+
+- Delete an Azure Arc PostgreSQL Hyperscale from the Azure portal when its Data Controller was configured for Direct connectivity mode.
+- Deploy Azure Arc-enabled PostgreSQL Hyperscale from the Azure database for Postgres deployment page in the Azure portal. See [Select Azure Database for PostgreSQL deployment option - Microsoft Azure](https://ms.portal.azure.com/#create/Microsoft.PostgreSQLServer).
+- Specify storage classes and Postgres extensions when deploying Azure Arc-enabled PostgreSQL Hyperscale from the Azure portal.
+- Reduce the number of worker nodes in your Azure Arc-enabled PostgreSQL Hyperscale. You can do this operation (known as scale in as opposed to scale out when you increase the number of worker nodes) from `azdata` command-line.
+
+#### Azure Arc-enabled SQL Managed Instance
+
+- New [Azure CLI extension](/cli/azure/azure-cli-extensions-overview) for Arc-enabled SQL Managed Instance has the same commands as `az sql mi-arc <command>`. All Arc-enabled SQL Managed Instance commands are located at `az sql mi-arc`. All Arc related `azdata` commands will be deprecated and moved to Azure CLI in a future release.
+
+ To add the extension:
+
+ ```azurecli
+ az extension add --source https://azurearcdatacli.blob.core.windows.net/cli-extensions/arcdata-0.0.1-py2.py3-none-any.whl -y
+ az sql mi-arc --help
+ ```
+
+- Manually trigger a failover by using Transact-SQL. Run the following commands in order:
+
+ 1. On the primary replica endpoint connection:
+
+ ```sql
+ ALTER AVAILABILITY GROUP current SET (ROLE = SECONDARY);
+ ```
+
+ 1. On the secondary replica endpoint connection:
+
+ ```sql
+ ALTER AVAILABILITY GROUP current SET (ROLE = PRIMARY);
+ ```
+
+- Transact-SQL `BACKUP` command is blocked unless using `COPY_ONLY` setting. This supports point in time restore capability.
## April 2021
This preview release is published on April 29, 2021.
### What's new
-This section describes the new features introduced or enabled for this release.
+This section describes the new features introduced or enabled for this release.
#### Platform -- Direct connected clusters automatically upload telemetry information automatically Azure.
+- Direct connected clusters automatically upload telemetry information to Azure.
-#### Azure Arc enabled PostgreSQL Hyperscale
+#### Azure Arc-enabled PostgreSQL Hyperscale
-- Azure Arc enabled PostgreSQL Hyperscale is now supported in Direct connect mode. You now can deploy Azure Arc enabled PostgreSQL Hyperscale from the Azure Market Place in the Azure portal. -- Azure Arc enabled PostgreSQL Hyperscale ships with the Citus 10.0 extension which features columnar table storage-- Azure Arc enabled PostgreSQL Hyperscale now supports full user/role management.-- Azure Arc enabled PostgreSQL Hyperscale now supports additional extensions with `Tdigest` and `pg_partman`.-- Azure Arc enabled PostgreSQL Hyperscale now supports configuring vCore and memory settings per role of the PostgreSQL instance in the server group.-- Azure Arc enabled PostgreSQL Hyperscale now supports configuring database engine/server settings per role of the PostgreSQL instance in the server group.
+- Azure Arc-enabled PostgreSQL Hyperscale is now supported in Direct connect mode. You can now deploy Azure Arc-enabled PostgreSQL Hyperscale from Azure Marketplace in the Azure portal.
+- Azure Arc-enabled PostgreSQL Hyperscale ships with the Citus 10.0 extension, which features columnar table storage.
+- Azure Arc-enabled PostgreSQL Hyperscale now supports full user/role management.
+- Azure Arc-enabled PostgreSQL Hyperscale now supports additional extensions with `Tdigest` and `pg_partman`.
+- Azure Arc-enabled PostgreSQL Hyperscale now supports configuring vCore and memory settings per role of the PostgreSQL instance in the server group.
+- Azure Arc-enabled PostgreSQL Hyperscale now supports configuring database engine/server settings per role of the PostgreSQL instance in the server group.
-#### Azure Arc enabled SQL Managed Instance
+#### Azure Arc-enabled SQL Managed Instance
-- Restore a database to SQL Managed Instance with three replicas and it will be automatically added to the availability group.
+- Restore a database to SQL Managed Instance with three replicas and it will be automatically added to the availability group.
- Connect to a secondary read-only endpoint on SQL Managed Instances deployed with three replicas. Use `azdata arc sql endpoint list` to see the secondary read-only connection endpoint. ## March 2021
Azure Data CLI (`azdata`) version number: 20.3.2. You can install `azdata` from
### Data controller -- Deploy Azure Arc enabled data services data controller in direct connect mode from the portal. Start from [Deploy data controller - direct connect mode - prerequisites](deploy-data-controller-direct-mode-prerequisites.md).
+- Deploy Azure Arc-enabled data services data controller in direct connect mode from the portal. Start from [Deploy data controller - direct connect mode - prerequisites](create-data-controller-direct-prerequisites.md).
-### Azure Arc enabled PostgreSQL Hyperscale
+### Azure Arc-enabled PostgreSQL Hyperscale
Both custom resource definitions (CRD) for PostgreSQL have been consolidated into a single CRD. See the following table.
Both custom resource definitions (CRD) for PostgreSQL have been consolidated int
You will delete the previous CRDs as you cleanup past installations. See [Cleanup from past installations](create-data-controller-using-kubernetes-native-tools.md#cleanup-from-past-installations).
-### Azure Arc enabled SQL Managed Instance
+### Azure Arc-enabled SQL Managed Instance
- You can now create a SQL managed instance from the Azure portal in the direct connected mode. -- You can now restore a database to SQL Managed Instance with three replicas and it will be automatically added to the availability group.
+- You can now restore a database to SQL Managed Instance with three replicas and it will be automatically added to the availability group.
- You can now connect to a secondary read-only endpoint on SQL Managed Instances deployed with three replicas. Use `azdata arc sql endpoint list` to see the secondary read-only connection endpoint.
Azure Data CLI (`azdata`) version number: 20.3.1. You can install `azdata` from
Additional updates include: -- Azure Arc enabled SQL Managed Instance
+- Azure Arc-enabled SQL Managed Instance
- High availability with Always On availability groups -- Azure Arc enabled PostgreSQL Hyperscale
- Azure Data Studio:
+- Azure Arc-enabled PostgreSQL Hyperscale
+ Azure Data Studio:
- The overview page shows the status of the server group itemized per node - A new properties page shows more details about the server group - Configure Postgres engine parameters from **Node Parameters** page
Additional updates include:
- PostgreSQL deployments honor the volume size parameters indicated in create commands - The engine version parameters are now honored when editing a server group-- The naming convention of the pods for Azure Arc enabled PostgreSQL Hyperscale has changed
+- The naming convention of the pods for Azure Arc-enabled PostgreSQL Hyperscale has changed
It is now in the form: `ServergroupName{c, w}-n`. For example, a server group with three nodes, one coordinator node and two worker nodes is represented as: - `Postgres01c-0` (coordinator node)
View endpoints for SQL Managed Instance and PostgreSQL Hyperscale using the Azur
Edit SQL Managed Instance resource (CPU core and memory) requests and limits using Azure Data Studio.
-Azure Arc enabled PostgreSQL Hyperscale now supports point in time restore in addition to full backup restore for both versions 11 and 12 of PostgreSQL. The point in time restore capability allows you to indicate a specific date and time to restore to.
+Azure Arc-enabled PostgreSQL Hyperscale now supports point in time restore in addition to full backup restore for both versions 11 and 12 of PostgreSQL. The point in time restore capability allows you to indicate a specific date and time to restore to.
-The naming convention of the pods for Azure Arc enabled PostgreSQL Hyperscale has changed. It is now in the form: ServergroupName{r, s}-_n_. For example, a server group with three nodes, one coordinator node and two worker nodes is represented as:
+The naming convention of the pods for Azure Arc-enabled PostgreSQL Hyperscale has changed. It is now in the form: ServergroupName{r, s}-_n_. For example, a server group with three nodes, one coordinator node and two worker nodes is represented as:
- `postgres02r-0` (coordinator node) - `postgres02s-0` (worker node) - `postgres02s-1` (worker node)
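One way to see these pod names on your own cluster, assuming `kubectl` access to the namespace that hosts the server group (namespace and server group name are placeholders, and the `grep` filter assumes a Linux/macOS shell):

```console
kubectl get pods -n <namespace> | grep postgres02
```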
The naming convention of the pods for Azure Arc enabled PostgreSQL Hyperscale ha
#### New resource provider
-This release introduces an updated [resource provider](../../azure-resource-manager/management/azure-services-resource-providers.md) called `Microsoft.AzureArcData`. Before you can use this feature, you need to register this resource provider.
+This release introduces an updated [resource provider](../../azure-resource-manager/management/azure-services-resource-providers.md) called `Microsoft.AzureArcData`. Before you can use this feature, you need to register this resource provider.
-To register this resource provider:
+To register this resource provider:
-1. In the Azure portal, select **Subscriptions**
+1. In the Azure portal, select **Subscriptions**
2. Choose your subscription
-3. Under **Settings**, select **Resource providers**
-4. Search for `Microsoft.AzureArcData` and select **Register**
+3. Under **Settings**, select **Resource providers**
+4. Search for `Microsoft.AzureArcData` and select **Register**
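If you prefer the CLI over the portal, the standard Azure CLI resource provider commands achieve the same result (generic Azure CLI, not specific to Arc):

```azurecli
az provider register --namespace Microsoft.AzureArcData
az provider show --namespace Microsoft.AzureArcData --query registrationState
```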
-You can review detailed steps at [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). This change also removes all the existing Azure resources that you have uploaded to the Azure portal. In order to use the resource provider, you need to update the data controller and use the latest `azdata` CLI.
+You can review detailed steps at [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). This change also removes all the existing Azure resources that you have uploaded to the Azure portal. In order to use the resource provider, you need to update the data controller and use the latest `azdata` CLI.
### Platform release notes #### Direct connectivity mode
-This release introduces direct connectivity mode. Direct connectivity mode enables the data controller to automatically upload the usage information to Azure. As part of the usage upload, the Arc data controller resource is automatically created in the portal, if it is not already created via `azdata` upload.
+This release introduces direct connectivity mode. Direct connectivity mode enables the data controller to automatically upload the usage information to Azure. As part of the usage upload, the Arc data controller resource is automatically created in the portal, if it is not already created via `azdata` upload.
-You can specify direct connectivity when you create the data controller. The following example creates a data controller with `azdata arc dc create` named `arc` using direct connectivity mode (`connectivity-mode direct`). Before you run the example, replace `<subscription id>` with your subscription ID.
+You can specify direct connectivity when you create the data controller. The following example creates a data controller with `az arcdata dc create` named `arc` using direct connectivity mode (`connectivity-mode direct`). Before you run the example, replace `<subscription id>` with your subscription ID.
-```console
-azdata arc dc create --profile-name azure-arc-aks-hci --namespace arc --name arc --subscription <subscription id> --resource-group my-resource-group --location eastus --connectivity-mode direct
+```azurecli
+az arcdata dc create --profile-name azure-arc-aks-hci --namespace arc --name arc --subscription <subscription id> --resource-group my-resource-group --location eastus --connectivity-mode direct
```
-## October 2020
+## October 2020
Azure Data CLI (`azdata`) version number: 20.2.3. You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata). ### Breaking changes
-This release introduces the following breaking changes:
+This release introduces the following breaking changes:
* In the PostgreSQL custom resource definition (CRD), the term `shards` is renamed to `workers`. This term (`workers`) matches the command-line parameter name.
-* `azdata arc postgres server delete` prompts for confirmation before deleting a postgres instance. Use `--force` to skip prompt.
+* `azdata arc postgres server delete` prompts for confirmation before deleting a postgres instance. Use `--force` to skip prompt.
### Additional changes
-* A new optional parameter was added to `azdata arc postgres server create` called `--volume-claim mounts`. The value is a comma-separated list of volume claim mounts. A volume claim mount is a pair of volume type and PVC name. The only volume type currently supported is `backup`. In PostgreSQL, when volume type is `backup`, the PVC is mounted to `/mnt/db-backups`. This enables sharing backups between PostgresSQL instances so that the backup of one PostgresSQL instance can be restored in another instance.
+* A new optional parameter was added to `azdata arc postgres server create` called `--volume-claim-mounts`. The value is a comma-separated list of volume claim mounts. A volume claim mount is a pair of volume type and PVC name. The only volume type currently supported is `backup`. In PostgreSQL, when the volume type is `backup`, the PVC is mounted to `/mnt/db-backups`. This enables sharing backups between PostgreSQL instances so that the backup of one PostgreSQL instance can be restored in another instance.
-* New short names for PostgresSQL custom resource definitions:
- * `pg11`
+* New short names for PostgreSQL custom resource definitions:
+ * `pg11`
* `pg12` * Telemetry upload provides user with either: * Number of points uploaded to Azure
- or
+ or
* If no data has been loaded to Azure, a prompt to try it again.
-* `azdata arc dc debug copy-logs` now also reads from `/var/opt/controller/log` folder and collects PostgreSQL engine logs on Linux.
+* `az arcdata dc debug copy-logs` now also reads from `/var/opt/controller/log` folder and collects PostgreSQL engine logs on Linux.
* Display a working indicator while creating and restoring a backup with PostgreSQL Hyperscale. * `azdata arc postgres backup list` now includes backup size information. * SQL Managed Instance admin name property was added to the right column of the overview blade in the Azure portal.
-* Azure Data Studio supports configuring number of worker nodes, vCore, and memory settings for PostgreSQL Hyperscale.
+* Azure Data Studio supports configuring number of worker nodes, vCore, and memory settings for PostgreSQL Hyperscale.
* Preview supports backup/restore for Postgres version 11 and 12. ## September 2020
-Azure Arc enabled data services is released for public preview. Arc enabled data services allow you to manage data services anywhere.
+Azure Arc-enabled data services are released for public preview. Arc-enabled data services allow you to manage data services anywhere.
- SQL Managed Instance - PostgreSQL Hyperscale
-For instructions see [What are Azure Arc enabled data services?](overview.md)
+For instructions see [What are Azure Arc-enabled data services?](overview.md)
## Next steps
-> **Just want to try things out?**
+> **Just want to try things out?**
> Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on AKS, AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. - [Install the client tools](install-client-tools.md)
azure-arc Restore Adventureworks Sample Db Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/restore-adventureworks-sample-db-into-postgresql-hyperscale-server-group.md
Title: Import the AdventureWorks sample database to Azure Arc enabled PostgreSQL Hyperscale
-description: Restore the AdventureWorks sample database to Azure Arc enabled PostgreSQL Hyperscale
+ Title: Import the AdventureWorks sample database to Azure Arc-enabled PostgreSQL Hyperscale
+description: Restore the AdventureWorks sample database to Azure Arc-enabled PostgreSQL Hyperscale
Last updated 06/02/2021
-# Import the AdventureWorks sample database to Azure Arc enabled PostgreSQL Hyperscale
+# Import the AdventureWorks sample database to Azure Arc-enabled PostgreSQL Hyperscale
[AdventureWorks](/sql/samples/adventureworks-install-configure) is a sample OLTP database used in tutorials and examples. It's provided and maintained by Microsoft as part of the [SQL Server samples GitHub repository](https://github.com/microsoft/sql-server-samples/tree/master/samples/databases).
-An open-source project has converted the AdventureWorks database to be compatible with Azure Arc enabled PostgreSQL Hyperscale.
+An open-source project has converted the AdventureWorks database to be compatible with Azure Arc-enabled PostgreSQL Hyperscale.
- [Original project](https://github.com/lorint/AdventureWorks-for-Postgres) - [Follow on project that pre-converts the CSV files to be compatible with PostgreSQL](https://github.com/NorfolkDataSci/adventure-works-postgres)
kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --use
```
-> **Note: You will not see so much performance benefits of running on Azure Arc enabled PostgreSQL Hyperscale until you scale out and you shard/distribute the data/tables across the worker nodes of your PostgreSQL Hyperscale server group. See [Suggested next steps](#suggested-next-steps).**
+> **Note: You will not see significant performance benefits from running on Azure Arc-enabled PostgreSQL Hyperscale until you scale out and shard/distribute the data/tables across the worker nodes of your PostgreSQL Hyperscale server group. See [Suggested next steps](#suggested-next-steps).**
## Suggested next steps - Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for PostgreSQL Hyperscale. :
kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --use
* [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)* * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
- > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale.
+ > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md)
azure-arc Retrieve The Username Password For Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/retrieve-the-username-password-for-data-controller.md
Previously updated : 09/22/2020 Last updated : 07/13/2021
You may be in a situation where you need to retrieve the user name and password for the data controller. These are the commands you need.
-```console
-azdata login
-```
- If you are the Kubernetes administrator for the cluster. As such you have the privileges to run commands to retrieve from the Kubernetes secret stores the information that Azure Arc persists there. > [!NOTE]
azure-arc Scale Out In Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/scale-out-in-postgresql-hyperscale-server-group.md
Last updated 06/02/2021
-# Scale out and in your Azure Arc enabled PostgreSQL Hyperscale server group by adding more worker nodes
-This document explains how to scale out and scale in an Azure Arc enabled PostgreSQL Hyperscale server group. It does so by taking you through a scenario. **If you do not want to run through the scenario and want to just read about how to scale out, jump to the paragraph [Scale out](#scale-out)** or [Scale in]().
+# Scale out and in your Azure Arc-enabled PostgreSQL Hyperscale server group by adding more worker nodes
+This document explains how to scale out and scale in an Azure Arc-enabled PostgreSQL Hyperscale server group. It does so by taking you through a scenario. **If you do not want to run through the scenario and just want to read about how to scale out, jump to the section [Scale out](#scale-out)** or [Scale in]().
-You scale out when you add Postgres instances (Postgres Hyperscale worker nodes) to your Azure Arc enabled PosrgreSQL Hyperscale.
+You scale out when you add Postgres instances (Postgres Hyperscale worker nodes) to your Azure Arc-enabled PostgreSQL Hyperscale server group.
-You scale in when you remove Postgres instances (Postgres Hyperscale worker nodes) from your Azure Arc enabled PosrgreSQL Hyperscale.
+You scale in when you remove Postgres instances (Postgres Hyperscale worker nodes) from your Azure Arc-enabled PostgreSQL Hyperscale server group.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## Get started
-If you are already familiar with the scaling model of Azure Arc enabled PostgreSQL Hyperscale or Azure Database for PostgreSQL Hyperscale (Citus), you may skip this paragraph. If you are not, it is recommended you start by reading about this scaling model in the documentation page of Azure Database for PostgreSQL Hyperscale (Citus). Azure Database for PostgreSQL Hyperscale (Citus) is the same technology that is hosted as a service in Azure (Platform As A Service also known as PAAS) instead of being offered as part of Azure Arc enabled Data
+If you are already familiar with the scaling model of Azure Arc-enabled PostgreSQL Hyperscale or Azure Database for PostgreSQL Hyperscale (Citus), you may skip this paragraph. If you are not, it is recommended that you start by reading about this scaling model in the documentation for Azure Database for PostgreSQL Hyperscale (Citus). Azure Database for PostgreSQL Hyperscale (Citus) is the same technology that is hosted as a service in Azure (platform as a service, also known as PaaS) instead of being offered as part of Azure Arc-enabled data services:
- [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md) - [Determine application type](../../postgresql/concepts-hyperscale-app-type.md) - [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
If you are already familiar with the scaling model of Azure Arc enabled PostgreS
- [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)* - [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
-> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale.
+> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
## Scenario
-This scenario refers to the PostgreSQL Hyperscale server group that was created as an example in the [Create an Azure Arc enabled PostgreSQL Hyperscale server group](create-postgresql-hyperscale-server-group.md) documentation.
+This scenario refers to the PostgreSQL Hyperscale server group that was created as an example in the [Create an Azure Arc-enabled PostgreSQL Hyperscale server group](create-postgresql-hyperscale-server-group.md) documentation.
### Load test data The scenario uses a sample of publicly available GitHub data, available from the [Citus Data website](https://www.citusdata.com/) (Citus Data is part of Microsoft).
-#### Connect to your Azure Arc enabled PostgreSQL Hyperscale server group
+#### Connect to your Azure Arc-enabled PostgreSQL Hyperscale server group
##### List the connection information
-Connect to your Azure Arc enabled PostgreSQL Hyperscale server group by first getting the connection information:
+Connect to your Azure Arc-enabled PostgreSQL Hyperscale server group by first getting the connection information:
The general format of this command is ```console azdata arc postgres endpoint list -n <server name>
Note the execution time.
> [!NOTE]
-> Depending on your environment - for example if you have deployed your test server group with `kubeadm` on a single node VM - you may see a modest improvement in the execution time. To get a better idea of the type of performance improvement you could reach with Azure Arc enabled PostgreSQL Hyperscale, watch the following short videos:
+> Depending on your environment - for example if you have deployed your test server group with `kubeadm` on a single node VM - you may see a modest improvement in the execution time. To get a better idea of the type of performance improvement you could reach with Azure Arc-enabled PostgreSQL Hyperscale, watch the following short videos:
>* [High performance HTAP with Azure PostgreSQL Hyperscale (Citus)](https://www.youtube.com/watch?v=W_3e07nGFxY) >* [Building HTAP applications with Python & Azure PostgreSQL Hyperscale (Citus)](https://www.youtube.com/watch?v=YDT8_riLLs0)
The scale-in operation is an online operation. Your applications continue to acc
## Next steps -- Read about how to [scale up and down (memory, vCores) your Azure Arc enabled PostgreSQL Hyperscale server group](scale-up-down-postgresql-hyperscale-server-group-using-cli.md)-- Read about how to set server parameters in your Azure Arc enabled PostgreSQL Hyperscale server group
+- Read about how to [scale up and down (memory, vCores) your Azure Arc-enabled PostgreSQL Hyperscale server group](scale-up-down-postgresql-hyperscale-server-group-using-cli.md)
+- Read about how to set server parameters in your Azure Arc-enabled PostgreSQL Hyperscale server group
- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for Postgres Hyperscale. : * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md) * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
The scale-in operation is an online operation. Your applications continue to acc
* [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)* * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
- > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale.
+ > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
- [Storage configuration and Kubernetes storage concepts](storage-configuration.md) - [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
azure-arc Show Configuration Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/show-configuration-postgresql-hyperscale-server-group.md
Title: Show the configuration of an Arc enabled PostgreSQL Hyperscale server group-+ description: Show the configuration of an Arc enabled PostgreSQL Hyperscale server group
This article explains how to display the configuration of your server group(s). It does so by anticipating some questions you may be asking yourself and answering them. At times there may be several valid answers; this article presents the most common or useful ones. It groups those questions by theme: - from a Kubernetes point of view-- from an Azure Arc enabled data services point of view
+- from an Azure Arc-enabled data services point of view
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## From a Kubernetes point of view
-### How many pods are used by Azure Arc enabled PostgreSQL Hyperscale?
+### How many pods are used by Azure Arc-enabled PostgreSQL Hyperscale?
List the Kubernetes resources of type Postgres. Run the command:
postgresql-12.arcdata.microsoft.com/postgres02 Ready 3/3 10.0.0.4:3
This example shows that 2 server groups are created and each runs on 3 pods (1 coordinator + 2 workers). That means the server groups created in this Azure Arc Data Controller use 6 pods.
-### What pods are used by Azure Arc enabled PostgreSQL Hyperscale server groups?
+### What pods are used by Azure Arc-enabled PostgreSQL Hyperscale server groups?
Run:
logs-few7hh0k4npx9phsiobdc3hq-postgres01-2 Bound local-pv-5ccd02e6 193
```
-## From an Azure Arc enabled data services point of view:
+## From an Azure Arc-enabled data services point of view:
* How many server groups are created in an Arc Data Controller? * What are their names?
Events: <none>
``` >[!NOTE]
->Prior to October 2020 release, `Workers` was `Shards` in the previous example. See [Release notes - Azure Arc enabled data services (Preview)](release-notes.md) for more information.
+>Prior to the October 2020 release, `Workers` was `Shards` in the previous example. See [Release notes - Azure Arc-enabled data services (Preview)](release-notes.md) for more information.
Let's call out some specific points of interest in the description of the `servergroup` shown above. What does it tell us about this server group?
Returns the below output in a format and content very similar to the one returne
``` ## Next steps-- [Read about the concepts of Azure Arc enabled PostgreSQL Hyperscale](concepts-distributed-postgres-hyperscale.md)
+- [Read about the concepts of Azure Arc-enabled PostgreSQL Hyperscale](concepts-distributed-postgres-hyperscale.md)
- [Read about how to scale out (add worker nodes) a server group](scale-out-in-postgresql-hyperscale-server-group.md) - [Read about how to scale up/down (increase or reduce memory and/or vCores) a server group](scale-up-down-postgresql-hyperscale-server-group-using-cli.md) - [Read about storage configuration](storage-configuration.md) - [Read how to monitor a database instance](monitor-grafana-kibana.md)-- [Use PostgreSQL extensions in your Azure Arc enabled PostgreSQL Hyperscale server group](using-extensions-in-postgresql-hyperscale-server-group.md)-- [Configure security for your Azure Arc enabled PostgreSQL Hyperscale server group](configure-security-postgres-hyperscale.md)
+- [Use PostgreSQL extensions in your Azure Arc-enabled PostgreSQL Hyperscale server group](using-extensions-in-postgresql-hyperscale-server-group.md)
+- [Configure security for your Azure Arc-enabled PostgreSQL Hyperscale server group](configure-security-postgres-hyperscale.md)
azure-arc Sizing Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/sizing-guidance.md
Title: Sizing guidance
-description: Plan for the size of a deployment of Azure Arc enabled data services.
+description: Plan for the size of a deployment of Azure Arc-enabled data services.
## Overview of sizing guidance
-When planning for the deployment of Azure Arc data services you should plan for the correct amount of compute, memory, and storage that will be required to run the Azure Arc data controller and for the number of SQL managed instance and PostgreSQL Hyperscale server groups that you will be deploying. Because Azure Arc enabled data services is deployed on Kubernetes, you have the flexibility of adding additional capacity to your Kubernetes cluster over time by adding additional compute nodes or storage. This guide will provide guidance on minimum requirements as well as provide guidance on recommended sizes for some common requirements.
+When planning the deployment of Azure Arc data services, plan for the correct amount of compute, memory, and storage required to run the Azure Arc data controller and the SQL managed instances and PostgreSQL Hyperscale server groups that you will deploy. Because Azure Arc-enabled data services is deployed on Kubernetes, you have the flexibility of adding capacity to your Kubernetes cluster over time by adding compute nodes or storage. This guide covers minimum requirements as well as recommended sizes for some common configurations.
## General sizing requirements
Limit values for cores are the billable metric on SQL managed instance and Postg
## Minimum deployment requirements
-A minimum size Azure Arc enabled data services deployment could be considered to be the Azure Arc data controller plus one SQL managed instance plus one PostgreSQL Hyperscale server group with two worker nodes. For this configuration, you need at least 16 GB of RAM and 4 cores of _available_ capacity on your Kubernetes cluster. You should ensure that you have a minimum Kubernetes node size of 8 GB RAM and 4 cores and a sum total capacity of 16 GB RAM available across all of your Kubernetes nodes. For example, you could have 1 node at 32 GB RAM and 4 cores or you could have 2 nodes with 16GB RAM and 4 cores each.
+A minimum-size Azure Arc-enabled data services deployment consists of the Azure Arc data controller plus one SQL managed instance plus one PostgreSQL Hyperscale server group with two worker nodes. For this configuration, you need at least 16 GB of RAM and 4 cores of _available_ capacity on your Kubernetes cluster. Ensure that you have a minimum Kubernetes node size of 8 GB RAM and 4 cores, and a sum total capacity of 16 GB RAM available across all of your Kubernetes nodes. For example, you could have 1 node with 32 GB RAM and 4 cores, or 2 nodes with 16 GB RAM and 4 cores each.
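As a quick sanity check of the capacity actually available, you can inspect the allocatable resources on your nodes with standard Kubernetes tooling (illustrative only; the `grep` filter assumes a Linux/macOS shell):

```console
kubectl describe nodes | grep -A 5 "Allocatable"
```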
See the [storage-configuration](storage-configuration.md) article for details on storage sizing.
Each PostgreSQL Hyperscale server group coordinator or worker pod that is create
## Cumulative sizing
-The overall size of an environment required for Azure Arc enabled data services is primarily a function of the number and size of the database instances that will be created. The overall size can be difficult to predict ahead of time knowing that the number of instances will grow and shrink and the amount of resources that are required for each database instance will change.
+The overall size of an environment required for Azure Arc-enabled data services is primarily a function of the number and size of the database instances that will be created. The overall size can be difficult to predict ahead of time because the number of instances will grow and shrink, and the amount of resources required for each database instance will change.
-The baseline size for a given Azure Arc enabled data services environment is the size of the data controller which requires 4 cores and 16 GB of RAM. From there you can add on top the cumulative total of cores and memory required for the database instances. For SQL managed instance the number of pods is equal to the number of SQL managed instances that are created. For PostgreSQL Hyperscale server groups the number of pods is equivalent to the number of worker nodes plus one for the coordinator node. For example, if you have a PostgreSQL Server group with 3 worker nodes, the total number of pods will be 4.
+The baseline size for a given Azure Arc-enabled data services environment is the size of the data controller, which requires 4 cores and 16 GB of RAM. From there, add the cumulative total of cores and memory required for the database instances. For SQL managed instances, the number of pods equals the number of SQL managed instances created. For PostgreSQL Hyperscale server groups, the number of pods equals the number of worker nodes plus one for the coordinator node. For example, if you have a PostgreSQL Hyperscale server group with 3 worker nodes, the total number of pods will be 4.
In addition to the cores and memory you request for each database instance, you should add 250m of cores and 250Mi of RAM for the agent containers.
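As an illustrative calculation only: a data controller (4 cores / 16 GB) plus one SQL managed instance requested at 2 cores / 8 GB plus one PostgreSQL Hyperscale server group with a coordinator and two workers at 2 cores / 8 GB each comes to 4 database instance pods. Adding 250m cores and 250Mi of RAM per pod for the agent containers gives roughly 4 + 2 + 6 + 1 = 13 cores and 16 + 8 + 24 + 1 = 49 GB of RAM, before accounting for Kubernetes system pods and any other workloads on the cluster.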
Keep in mind that a given database instance size request for cores or RAM cannot
It is a good idea to maintain at least 25% of available capacity across the Kubernetes nodes to allow Kubernetes to efficiently schedule pods to be created and to allow for elastic scaling and longer term growth on demand.
-In your sizing calculations, don't forget to add in the resource requirements of the Kubernetes system pods and any other workloads which may be sharing capacity with Azure Arc enabled data services on the same Kubernetes cluster.
+In your sizing calculations, don't forget to add in the resource requirements of the Kubernetes system pods and any other workloads which may be sharing capacity with Azure Arc-enabled data services on the same Kubernetes cluster.
To maintain high availability during planned maintenance and disaster continuity, you should plan for at least one of the Kubernetes nodes in your cluster to be unavailable at any given point in time. Kubernetes will attempt to reschedule the pods that were running on a given node that was taken down for maintenance or due to a failure. If there is no available capacity on the remaining nodes those pods will not be rescheduled for creation until there is available capacity again. Be extra careful with large database instances. For example, if there is only one Kubernetes node big enough to meet the resource requirements of a large database instance and that node fails then Kubernetes will not be able to schedule that database instance pod onto another Kubernetes node.
azure-arc Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/storage-configuration.md
Title: Storage configuration
-description: Explains Azure Arc enabled data services storage configuration options
+description: Explains Azure Arc-enabled data services storage configuration options
Previously updated : 10/12/2020 Last updated : 07/13/2021
Some services in Azure Arc for data services depend upon being configured to use
|**Controller SQL instance**|`<namespace>/logs-controldb`, `<namespace>/data-controldb`| |**Controller API service**|`<namespace>/data-controller`|
-At the time the data controller is provisioned, the storage class to be used for each of these persistent volumes is specified by either passing the --storage-class | -sc parameter to the `azdata arc dc create` command or by setting the storage classes in the control.json deployment template file that is used.
+At the time the data controller is provisioned, the storage class to be used for each of these persistent volumes is specified by either passing the `--storage-class`/`-sc` parameter to the `az arcdata dc create` command or by setting the storage classes in the control.json deployment template file that is used.
The deployment templates that are provided out of the box have a default storage class specified that is appropriate for the target environment, but it can be overridden during deployment. See the detailed steps to [alter the deployment profile](create-data-controller.md) to change the storage class configuration for the data controller pods at deployment time.
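For example, a sketch of overriding the storage class at creation time; every bracketed value is a placeholder, and the storage class must already exist in your cluster:

```azurecli
az arcdata dc create --profile-name <deployment profile> --namespace arc --name arc --subscription <subscription id> --resource-group <resource group> --location <location> --connectivity-mode indirect --storage-class <storage class name>
```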
Important factors to consider when choosing a storage class for the data control
Each database instance has data, logs, and backup persistent volumes. The storage classes for these persistent volumes can be specified at deployment time. If no storage class is specified the default storage class will be used.
-When creating an instance using either `azdata arc sql mi create` or `azdata arc postgres server create`, there are two parameters that can be used to set the storage classes:
+When creating an instance using either `az sql mi-arc create` or `azdata arc postgres server create`, there are two parameters that can be used to set the storage classes:
> [!NOTE]
-> Some of these parameters are in development and will become available on `azdata arc sql mi create` and `azdata arc postgres server create` in the upcoming releases.
+> Some of these parameters are in development and will become available on `az sql mi-arc create` and `azdata arc postgres server create` in the upcoming releases.
|Parameter name, short name|Used for| |||
azure-arc Supported Versions Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/supported-versions-postgres-hyperscale.md
Title: Supported versions Postgres with Azure Arc enabled PostgreSQL Hyperscale
-description: Supported versions Postgres with Azure Arc enabled PostgreSQL Hyperscale
+ Title: Supported versions Postgres with Azure Arc-enabled PostgreSQL Hyperscale
+description: Supported versions Postgres with Azure Arc-enabled PostgreSQL Hyperscale
Last updated 09/22/2020
-# Supported versions of Postgres with Azure Arc enabled PostgreSQL Hyperscale
+# Supported versions of Postgres with Azure Arc-enabled PostgreSQL Hyperscale
-This article explains what versions of Postgres are available with Azure Arc enabled PostgreSQL Hyperscale.
+This article explains what versions of Postgres are available with Azure Arc-enabled PostgreSQL Hyperscale.
The list of supported versions evolves over time. Today, the major versions that are supported are: - Postgres 12 (default) - Postgres 11
To learn more, read about each version on the official Postgres site:
- [Postgres 12 (default)](https://www.postgresql.org/docs/12/https://docsupdatetracker.net/index.html) - [Postgres 11](https://www.postgresql.org/docs/11/https://docsupdatetracker.net/index.html)
-## How to create a particular version in Azure Arc enabled PostgreSQL Hyperscale?
+## How to create a particular version in Azure Arc-enabled PostgreSQL Hyperscale?
At creation time, you can indicate which version to create by passing the _--engine-version_ parameter. If you do not indicate a version, a server group of Postgres version 12 is created by default.
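For example, a minimal sketch of requesting Postgres version 11 at creation time; the server group name is a placeholder, and any other parameters your environment requires (such as the number of workers) are omitted:

```console
azdata arc postgres server create -n <server group name> --engine-version 11
```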
These are CRDs, not server groups. The presence of a CRD is not an indication th
The CRD is an indication of what kind of resources can be created. ## Next steps:-- [Read about creating Azure Arc enabled PostgreSQL Hyperscale](create-postgresql-hyperscale-server-group.md)-- [Read about getting a list of the Azure Arc enabled PostgreSQL Hyperscale server groups created in your Arc Data Controller](list-server-groups-postgres-hyperscale.md)
+- [Read about creating Azure Arc-enabled PostgreSQL Hyperscale](create-postgresql-hyperscale-server-group.md)
+- [Read about getting a list of the Azure Arc-enabled PostgreSQL Hyperscale server groups created in your Arc Data Controller](list-server-groups-postgres-hyperscale.md)
azure-arc Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/troubleshoot-guide.md
Title: Troubleshoot Azure Arc enabled data services
+ Title: Troubleshoot Azure Arc-enabled data services
description: Introduction to troubleshooting resources
# Troubleshooting resources
-This article identifies troubleshooting resources for Azure Arc enabled data services.
+This article identifies troubleshooting resources for Azure Arc-enabled data services.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
azure-arc Troubleshoot Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/troubleshoot-postgresql-hyperscale-server-group.md
Previously updated : 09/22/2020 Last updated : 07/13/2021
azdata arc postgres server create --help
## Collecting logs of the data controller and your server groups
-Read the article about [getting logs for Azure Arc enabled data services](troubleshooting-get-logs.md)
+Read the article about [getting logs for Azure Arc-enabled data services](troubleshooting-get-logs.md)
For example, let's troubleshoot a PostgreSQL Hyperscale server group that might
### Install tools
-Install Azure Data Studio, `kubectl` and [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] on the client machine you are using to run the notebook in Azure Data Studio. To do this, please follow the instructions at [Install client tools](install-client-tools.md)
+Install Azure Data Studio, `kubectl`, and the Azure CLI (`az`) with the `arcdata` extension on the client machine you use to run the notebook in Azure Data Studio. To do this, follow the instructions at [Install client tools](install-client-tools.md).
### Update the PATH environment variable
Implement the steps described in [033-manage-Postgres-with-AzureDataStudio.md](
:::image type="content" source="media/postgres-hyperscale/ads-controller-postgres-troubleshooting-notebook.jpg" alt-text="Azure Data Studio - Open PostgreSQL troubleshooting Notebook":::
-The **TSG100 - The Azure Arc enabled PostgreSQL Hyperscale troubleshooter notebook** opens up:
+The **TSG100 - The Azure Arc-enabled PostgreSQL Hyperscale troubleshooter notebook** opens up:
:::image type="content" source="media/postgres-hyperscale/ads-controller-postgres-troubleshooting-notebook2.jpg" alt-text="Azure Data Studio - Use PostgreSQL troubleshooting notebook"::: #### Run the scripts
View the output from the execution of the code cells for any potential issues.
We'll add more details to the notebook over time about how to recognize common problems and how to solve them. ## Next step-- Read about [getting logs for Azure Arc enabled data services](troubleshooting-get-logs.md)
+- Read about [getting logs for Azure Arc-enabled data services](troubleshooting-get-logs.md)
- Read about [searching logs with Kibana](monitor-grafana-kibana.md) - Read about [monitoring with Grafana](monitor-grafana-kibana.md) - Create your own notebooks
azure-arc Troubleshooting Get Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/troubleshooting-get-logs.md
Title: Get logs to troubleshoot Azure Arc enabled data services
-description: Learn how to get log files from a data controller to troubleshoot Azure Arc enabled data services.
+ Title: Get logs to troubleshoot Azure Arc-enabled data services
+description: Learn how to get log files from a data controller to troubleshoot Azure Arc-enabled data services.
Previously updated : 09/22/2020 Last updated : 07/13/2021
-# Get logs to troubleshoot Azure Arc enabled data services
+# Get logs to troubleshoot Azure Arc-enabled data services
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
Before you proceed, you need:
-* [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]. For more information, see [Install client tools for deploying and managing Azure Arc data services](./install-client-tools.md).
-* An administrator account to sign in to the Azure Arc enabled data controller.
+* Azure CLI (`az`) with the `arcdata` extension. For more information, see [Install client tools for deploying and managing Azure Arc data services](./install-client-tools.md).
+* An administrator account to sign in to the Azure Arc-enabled data controller.
## Get log files
-You can get service logs across all pods or specific pods for troubleshooting purposes. One way is to use standard Kubernetes tools such as the `kubectl logs` command. In this article, you'll use the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] tool, which makes it easier to get all of the logs at once.
+You can get service logs across all pods or specific pods for troubleshooting purposes. One way is to use standard Kubernetes tools such as the `kubectl logs` command. In this article, you'll use the Azure CLI (`az`) `arcdata` extension, which makes it easier to get all of the logs at once.
-1. Sign in to the data controller with an administrator account.
+Run the following command to dump the logs:
```console
- azdata login
- ```
-
-2. Run the following command to dump the logs:
-
- ```console
- azdata arc dc debug copy-logs --namespace <namespace name> --exclude-dumps --skip-compress
+ az arcdata dc debug copy-logs --namespace <namespace name> --exclude-dumps --skip-compress
``` For example: ```console
- #azdata arc dc debug copy-logs --namespace arc --exclude-dumps --skip-compress
+ #az arcdata dc debug copy-logs --namespace arc --exclude-dumps --skip-compress
``` The data controller creates the log files in the current working directory in a subdirectory called `logs`. ## Options
-The `azdata arc dc debug copy-logs` command provides the following options to manage the output:
+The `az arcdata dc debug copy-logs` command provides the following options to manage the output:
* Output the log files to a different directory by using the `--target-folder` parameter. * Compress the files by omitting the `--skip-compress` parameter.
The `azdata arc dc debug copy-logs` command provides the following options to ma
With these parameters, you can replace the `<parameters>` in the following example:
-```console
-azdata arc dc debug copy-logs --target-folder <desired folder> --exclude-dumps --skip-compress -resource-kind <custom resource definition name> --resource-name <resource name> --namespace <namespace name>
+```azurecli
+az arcdata dc debug copy-logs --target-folder <desired folder> --exclude-dumps --skip-compress --resource-kind <custom resource definition name> --resource-name <resource name> --namespace <namespace name>
``` For example: ```console
-#azdata arc dc debug copy-logs --target-folder C:\temp\logs --exclude-dumps --skip-compress --resource-kind postgresql-12 --resource-name pg1 --namespace arc
+#az arcdata dc debug copy-logs --target-folder C:\temp\logs --exclude-dumps --skip-compress --resource-kind postgresql-12 --resource-name pg1 --namespace arc
``` The following folder hierarchy is an example. It's organized by pod name, then container, and then by directory hierarchy within the container.
The following folder hierarchy is an example. It's organized by pod name, then c
## Next steps
-[azdata arc dc debug copy-logs](/sql/azdata/reference/reference-azdata-arc-dc-debug#azdata-arc-dc-debug-copy-logs?toc=/azure/azure-arc/data/toc.json&bc=/azure/azure-arc/data/breadcrumb/toc.json)
+[az arcdata dc debug copy-logs](/sql/azdata/reference/reference-azdata-arc-dc-debug#azdata-arc-dc-debug-copy-logs?toc=/azure/azure-arc/data/toc.json&bc=/azure/azure-arc/data/breadcrumb/toc.json)
azure-arc Uninstall Azure Arc Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/uninstall-azure-arc-data-controller.md
Previously updated : 09/22/2020 Last updated : 07/13/2021
The following article describes how to delete an Azure Arc data controller.
Before you proceed, ensure all the data services that have been created on the data controller are removed as follows:
-## Log in to the data controller
-
-Log in to the data controller that you want to delete:
-
-```
-azdata login
-```
- ## List & delete existing data services Run the following command to check if there are any SQL managed instances created:
-```
-azdata arc sql mi list
+```azurecli
+az sql mi-arc list
``` For each SQL managed instance from the list above, run the delete command as follows:
-```
-azdata arc sql mi delete -n <name>
-# for example: azdata arc sql mi delete -n sqlinstance1
+```azurecli
+az sql mi-arc delete -n <name>
+# for example: az sql mi-arc delete -n sqlinstance1
``` Similarly, to check for PostgreSQL Hyperscale instances, run: ```
+azdata login
azdata arc postgres server list ```
azdata arc postgres server delete -n <name>
After all the SQL managed instances and PostgreSQL Hyperscale instances have been removed, the data controller can be deleted as follows:
-```
-azdata arc dc delete -n <name> -ns <namespace>
-# for example: azdata arc dc delete -ns arc -n arcdc
+```azurecli
+az arcdata dc delete -n <name> -ns <namespace>
+# for example: az arcdata dc delete -ns arc -n arcdc
``` ### Remove SCCs (Red Hat OpenShift only)
kubectl delete ns <nameSpecifiedDuringCreation>
## Next steps
-[What are Azure Arc enabled data services?](overview.md)
+[What are Azure Arc-enabled data services?](overview.md)
azure-arc Upload Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upload-logs.md
Title: Upload logs to Azure Monitor
-description: Upload logs for Azure Arc enabled data services to Azure Monitor
+description: Upload logs for Azure Arc-enabled data services to Azure Monitor
Previously updated : 09/22/2020 Last updated : 07/13/2021 zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
Periodically, you can export logs and then upload them to Azure. Exporting and uploading logs also creates and updates the data controller, SQL managed instance, and PostgreSQL Hyperscale server group resources in Azure. > [!NOTE]
-> During the preview period, there is no cost for using Azure Arc enabled data services.
+> During the preview period, there is no cost for using Azure Arc-enabled data services.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
With the environment variables set, you can upload logs to the log workspace.
## Upload logs to Azure Monitor
- To upload logs for your Azure Arc enabled SQL managed instances and AzureArc enabled PostgreSQL Hyperscale server groups run the following CLI commands-
+ To upload logs for your Azure Arc-enabled SQL managed instances and Azure Arc-enabled PostgreSQL Hyperscale server groups, run the following CLI commands:
-1. Log in to to the Azure Arc data controller with [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)].
+1. Log in to the Azure Arc data controller with the Azure CLI (`az`) `arcdata` extension.
```console
- azdata login
+ az arcdata login
``` Follow the prompts to set the namespace, the administrator username, and the password.
With the environment variables set, you can upload logs to the log workspace.
1. Export all logs to the specified file: ```console
- azdata arc dc export --type logs --path logs.json
+ az arcdata dc export --type logs --path logs.json
``` 2. Upload logs to an Azure monitor log analytics workspace: ```console
- azdata arc dc upload --path logs.json
+ az arcdata dc upload --path logs.json
``` ## View your logs in Azure portal
If you want to upload metrics and logs on a scheduled basis, you can create a sc
In your favorite text/code editor, add the following script to the file and save as a script executable file such as .sh (Linux/Mac) or .cmd, .bat, .ps1.
-```console
-azdata arc dc export --type metrics --path metrics.json --force
-azdata arc dc upload --path metrics.json
+```azurecli
+az arcdata dc export --type metrics --path metrics.json --force
+az arcdata dc upload --path metrics.json
``` Make the script file executable
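For example, on Linux or macOS (the file name is a placeholder):

```console
chmod +x ./upload-metrics.sh
```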
azure-arc Upload Metrics And Logs To Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upload-metrics-and-logs-to-azure-monitor.md
Previously updated : 09/22/2020 Last updated : 07/13/2021 zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
Periodically, you can export out usage information for billing purposes, monitoring metrics, and logs and then upload it to Azure. The export and upload of any of these three types of data will also create and update the data controller, SQL managed instance, and PostgreSQL Hyperscale server group resources in Azure. > [!NOTE]
-> During the preview period, there is no cost for using Azure Arc enabled data services.
+> During the preview period, there is no cost for using Azure Arc-enabled data services.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
Before you can upload usage data, metrics, or logs you need to:
The required tools include: * Azure CLI (az)
-* [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]
+* `arcdata` extension
See [Install tools](./install-client-tools.md).
The specific steps for uploading logs, metrics, or user data vary depending abou
## General guidance on exporting and uploading usage, metrics
-Create, read, update, and delete (CRUD) operations on Azure Arc enabled data services are logged for billing and monitoring purposes. There are background services that monitor for these CRUD operations and calculate the consumption appropriately. The actual calculation of usage or consumption happens on a scheduled basis and is done in the background.
+Create, read, update, and delete (CRUD) operations on Azure Arc-enabled data services are logged for billing and monitoring purposes. There are background services that monitor for these CRUD operations and calculate the consumption appropriately. The actual calculation of usage or consumption happens on a scheduled basis and is done in the background.
During preview, this process happens nightly. The general guidance is to upload the usage only once per day. When usage information is exported and uploaded multiple times within the same 24 hour period, only the resource inventory is updated in Azure portal but not the resource usage.
azure-arc Upload Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upload-metrics.md
Title: Upload metrics to Azure Monitor
-description: Upload Azure Arc enabled data services metrics to Azure Monitor
+description: Upload Azure Arc-enabled data services metrics to Azure Monitor
Previously updated : 09/22/2020 Last updated : 07/13/2021 zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
Periodically, you can export monitoring metrics and then upload them to Azure. The export and upload of data also creates and update the data controller, SQL managed instance, and PostgreSQL Hyperscale server group resources in Azure. > [!NOTE]
-> During the preview period, there is no cost for using Azure Arc enabled data services.
+> During the preview period, there is no cost for using Azure Arc-enabled data services.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
echo %SPN_AUTHORITY%
## Upload metrics to Azure Monitor
-To upload metrics for your Azure arc enabled SQL managed instances and Azure Arc enabled PostgreSQL Hyperscale server groups run, the following CLI commands:
+To upload metrics for your Azure Arc-enabled SQL managed instances and Azure Arc-enabled PostgreSQL Hyperscale server groups, run the following CLI commands:
1. Log in to the data controller with `azdata`. 1. Export all metrics to the specified file: ```console
- azdata arc dc export --type metrics --path metrics.json
+ az arcdata dc export --type metrics --path metrics.json
``` 2. Upload metrics to Azure monitor: ```console
- azdata arc dc upload --path metrics.json
+ az arcdata dc upload --path metrics.json
``` >[!NOTE]
- >Wait for at least 30 mins after the Azure Arc enabled data instances are created for the first upload.
+ >Wait at least 30 minutes after the Azure Arc-enabled data instances are created before the first upload.
> >Make sure `upload` the metrics right away after `export` as Azure Monitor only accepts metrics for the last 30 minutes. [Learn more](../../azure-monitor/essentials/metrics-store-custom-rest-api.md#troubleshooting). If you see any errors indicating "Failure to get metrics" during export, check if data collection is set to `true` by running the following command:
-```console
-azdata arc dc config show
+```azurecli
+az arcdata dc config show
``` Look under "security section"
If you want to upload metrics and logs on a scheduled basis, you can create a sc
In your favorite text/code editor, add the following script to the file and save as a script executable file such as .sh (Linux/Mac) or .cmd, .bat, .ps1.
-```console
-azdata arc dc export --type metrics --path metrics.json --force
-azdata arc dc upload --path metrics.json
+```azurecli
+az arcdata dc export --type metrics --path metrics.json --force
+az arcdata dc upload --path metrics.json
``` Make the script file executable
You could also use a job scheduler like cron or Windows Task Scheduler or an orc
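As an illustration, a cron entry that runs such a script nightly at 2 AM (the paths are placeholders):

```console
0 2 * * * /path/to/upload-metrics.sh >> $HOME/arc-upload.log 2>&1
```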
## General guidance on exporting and uploading usage, metrics
-Create, read, update, and delete (CRUD) operations on Azure Arc enabled data services are logged for billing and monitoring purposes. There are background services that monitor for these CRUD operations and calculate the consumption appropriately. The actual calculation of usage or consumption happens on a scheduled basis and is done in the background.
+Create, read, update, and delete (CRUD) operations on Azure Arc-enabled data services are logged for billing and monitoring purposes. There are background services that monitor for these CRUD operations and calculate the consumption appropriately. The actual calculation of usage or consumption happens on a scheduled basis and is done in the background.
During preview, this process happens nightly. The general guidance is to upload the usage only once per day. When usage information is exported and uploaded multiple times within the same 24 hour period, only the resource inventory is updated in Azure portal but not the resource usage.
azure-arc Upload Usage Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upload-usage-data.md
Title: Upload usage data to Azure Monitor
-description: Upload usage Azure Arc enabled data services data to Azure Monitor
+description: Upload usage data for Azure Arc-enabled data services to Azure Monitor
Previously updated : 09/22/2020 Last updated : 07/13/2021 zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
Periodically, you can export out usage information. The export and upload of this information creates and updates the data controller, SQL managed instance, and PostgreSQL Hyperscale server group resources in Azure. > [!NOTE]
-> During the preview period, there is no cost for using Azure Arc enabled data services.
+> During the preview period, there is no cost for using Azure Arc-enabled data services.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
Before you proceed, make sure you have created the required service principal an
Usage information such as inventory and resource usage can be uploaded to Azure in the following two-step way:
-1. Log in to the data controller. Enter the values at the prompt.
+1. Export the usage data using `az arcdata dc export` command, as follows:
```console
- azdata login
- ```
-
-1. Export the usage data using `azdata arc dc export` command, as follows:
-
- ```console
- azdata arc dc export --type usage --path usage.json
+ az arcdata dc export --type usage --path usage.json
```
- This command creates a `usage.json` file with all the Azure Arc enabled data resources such as SQL managed instances and PostgreSQL Hyperscale instances etc. that are created on the data controller.
+ This command creates a `usage.json` file with all the Azure Arc-enabled data resources, such as SQL managed instances and PostgreSQL Hyperscale instances, that are created on the data controller.
2. Upload the usage data using the `az arcdata dc upload` command: ```console
- azdata arc dc upload --path usage.json
+ az arcdata dc upload --path usage.json
``` ## Automating uploads (optional)
If you want to upload metrics and logs on a scheduled basis, you can create a sc
In your favorite text/code editor, add the following script to the file and save as a script executable file such as `.sh` (Linux/Mac) or `.cmd`, `.bat`, or `.ps1`.
-```console
-azdata arc dc export --type metrics --path metrics.json --force
-azdata arc dc upload --path metrics.json
+```azurecli
+az arcdata dc export --type metrics --path metrics.json --force
+az arcdata dc upload --path metrics.json
``` Make the script file executable
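As a rough sketch for Linux or macOS, you might make the script executable and schedule a daily run with cron (the script path and schedule are placeholders):

```console
# Sketch: make the upload script executable (path is a placeholder)
chmod +x /path/to/export-upload-usage.sh
# Example crontab entry to run it once a day at 01:00:
# 0 1 * * * /path/to/export-upload-usage.sh >> /var/log/arc-usage-upload.log 2>&1
```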
azure-arc Using Extensions In Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/using-extensions-in-postgresql-hyperscale-server-group.md
Title: Use PostgreSQL extensions description: Use PostgreSQL extensions-+
Last updated 09/22/2020
-# Use PostgreSQL extensions in your Azure Arc enabled PostgreSQL Hyperscale server group
+# Use PostgreSQL extensions in your Azure Arc-enabled PostgreSQL Hyperscale server group
PostgreSQL is at its best when you use it with extensions. In fact, a key element of our own Hyperscale functionality is the Microsoft-provided `citus` extension that is installed by default, which allows Postgres to transparently shard data across multiple nodes. [!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## Supported extensions
-The standard [`contrib`](https://www.postgresql.org/docs/12/contrib.html) extensions and the following extensions are already deployed in the containers of your Azure Arc enabled PostgreSQL Hyperscale server group:
+The standard [`contrib`](https://www.postgresql.org/docs/12/contrib.html) extensions and the following extensions are already deployed in the containers of your Azure Arc-enabled PostgreSQL Hyperscale server group:
- [`citus`](https://github.com/citusdata/citus), v: 10.0. The Citus extension by [Citus Data](https://www.citusdata.com/) is loaded by default as it brings the Hyperscale capability to the PostgreSQL engine. Dropping the Citus extension from your Azure Arc PostgreSQL Hyperscale server group is not supported. - [`pg_cron`](https://github.com/citusdata/pg_cron), v: 1.3 - [`pgaudit`](https://www.pgaudit.org/), v: 1.4
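For example, once you connect to a database in your server group, enabling one of the standard `contrib` extensions is a plain `CREATE EXTENSION` statement. The following is a sketch only; the host, user, and database values are placeholders:

```console
# Sketch: enable the pgcrypto contrib extension on the connected database
# (host, user, and database names are placeholders)
psql "host=<server-ip> port=5432 dbname=postgres user=postgres sslmode=require" \
  -c "CREATE EXTENSION IF NOT EXISTS pgcrypto;"
```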
azure-arc View Billing Data In Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/view-billing-data-in-azure.md
Previously updated : 03/02/2021 Last updated : 07/13/2021 # Upload billing data to Azure and view it in the Azure portal > [!IMPORTANT]
-> There is no cost to use Azure Arc enabled data services during the preview period. Although the billing system works end to end the billing meter is set to $0. If you follow this scenario, you will see entries in your billing for a service currently named **hybrid data services** and for resources of a type called **Microsoft.AzureArcData/`<resource type>`**. You will be able to see a record for each data service - Azure Arc that you create, but each record will be billed for $0.
+> There is no cost to use Azure Arc-enabled data services during the preview period. Although the billing system works end to end, the billing meter is set to $0. If you follow this scenario, you will see entries in your billing for a service currently named **hybrid data services** and for resources of a type called **Microsoft.AzureArcData/`<resource type>`**. You will be able to see a record for each data service - Azure Arc that you create, but each record will be billed for $0.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## Connectivity Modes - Implications for billing data
-In the future, there will be two modes in which you can run your Azure Arc enabled data
+In the future, there will be two modes in which you can run your Azure Arc-enabled data
- **Indirectly connected** - There is no direct connection to Azure. Data is sent to Azure only through an export/upload process. All Azure Arc data services deployments work in this mode today in preview.-- **Directly connected** - In this mode there will be a dependency on the Azure Arc enabled Kubernetes service to provide a direct connection between Azure and the Kubernetes cluster on which the Azure Arc enabled data services are running. This will enable more capabilities and will also enable you to use the Azure portal and the Azure CLI to manage your Azure Arc enabled data services just like you manage your data services in Azure PaaS. This connectivity mode is not yet available in preview, but will be coming soon.
+- **Directly connected** - In this mode there will be a dependency on the Azure Arc-enabled Kubernetes service to provide a direct connection between Azure and the Kubernetes cluster on which the Azure Arc-enabled data services are running. This will enable more capabilities and will also enable you to use the Azure portal and the Azure CLI to manage your Azure Arc-enabled data services just like you manage your data services in Azure PaaS. This connectivity mode is not yet available in preview, but will be coming soon.
You can read more about the difference between the [connectivity modes](./connectivity.md).
In the indirectly connected mode, billing data is periodically exported out of t
To upload billing data to Azure, the following should happen first:
-1. Create an Azure Arc enabled data service if you don't have one already. For example create one of the following:
+1. Create an Azure Arc-enabled data service if you don't have one already. For example, create one of the following:
- [Create an Azure SQL managed instance on Azure Arc](create-sql-managed-instance.md)
- - [Create an Azure Arc enabled PostgreSQL Hyperscale server group](create-postgresql-hyperscale-server-group.md)
+ - [Create an Azure Arc-enabled PostgreSQL Hyperscale server group](create-postgresql-hyperscale-server-group.md)
1. [Upload resource inventory, usage data, metrics and logs to Azure Monitor](upload-metrics-and-logs-to-azure-monitor.md) if you haven't already. 1. Wait for at least 2 hours since the creation of the data service so that the billing telemetry collection process can collect some billing data. Run the following command to export out the billing data:
-```console
-azdata arc dc export -t usage -p usage.json
+```azurecli
+az arcdata dc export -t usage -p usage.json
``` For now, the file is not encrypted so that you can see the contents. Feel free to open in a text editor and see what the contents look like.
Example of a `data` entry:
Run the following command to upload the usage.json file to Azure:
-```console
-azdata arc dc upload -p usage.json
+```azurecli
+az arcdata dc upload -p usage.json
``` ## View billing data in Azure portal
Follow these steps to view billing data in the Azure portal:
1. Make sure that your Scope is set to the subscription in which your data service resources were created. 1. Select **Cost by resource** in the View drop down next to the Scope selector near the top of the view. 1. Make sure the date filter is set to **This month** or some other time range that makes sense given the timing of when you created your data service resources.
-1. Click **Add filter** to add a filter by **Resource type** = `Microsoft.AzureArcData/<data service type>` if you want to filter down to just one type of Azure Arc enabled data service.
+1. Click **Add filter** to add a filter by **Resource type** = `Microsoft.AzureArcData/<data service type>` if you want to filter down to just one type of Azure Arc-enabled data service.
1. You will now see a list of all the resources that were created and uploaded to Azure. Since the billing meter is $0, you will see that the cost is always $0. ## Download billing data
azure-arc What Is Azure Arc Enabled Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/what-is-azure-arc-enabled-postgres-hyperscale.md
Title: What is Azure Arc enabled PostgreSQL Hyperscale?
-description: What is Azure Arc enabled PostgreSQL Hyperscale?
+ Title: What is Azure Arc-enabled PostgreSQL Hyperscale?
+description: What is Azure Arc-enabled PostgreSQL Hyperscale?
Last updated 02/11/2021
-# What is Azure Arc enabled PostgreSQL Hyperscale?
+# What is Azure Arc-enabled PostgreSQL Hyperscale?
-Azure Arc enabled PostgreSQL Hyperscale is one of the database services available as part of Azure Arc enabled data services. Azure Arc makes it possible to run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. The value proposition of Azure Arc enabled data services articulates around:
+Azure Arc-enabled PostgreSQL Hyperscale is one of the database services available as part of Azure Arc-enabled data services. Azure Arc makes it possible to run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. The value proposition of Azure Arc-enabled data services centers on:
- Always current - Elastic scale - Self-service provisioning
Azure Arc enabled PostgreSQL Hyperscale is one of the database services availabl
- Disconnected scenario support Read more details at:-- [What are Azure Arc enabled data services](overview.md)
+- [What are Azure Arc-enabled data services](overview.md)
- [Connectivity modes and requirements](connectivity.md) [!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
To learn more about these capabilities, you can also refer to this Data Exposed
## Compare solutions
-This section describes how Azure Arc enabled PostgreSQL Hyperscale differs from Azure Database for PostgreSQL Hyperscale (Citus)?
+This section describes how Azure Arc-enabled PostgreSQL Hyperscale differs from Azure Database for PostgreSQL Hyperscale (Citus).
## Azure Database for PostgreSQL Hyperscale (Citus)
This section describes how Azure Arc enabled PostgreSQL Hyperscale differs from
This is the hyperscale form factor of the Postgres database engine available as database as a service in Azure (PaaS). It is powered by the Citus extension that enables the hyperscale experience. In this form factor, the service runs in the Microsoft datacenters and is operated by Microsoft.
-## Azure Arc enabled PostgreSQL Hyperscale
+## Azure Arc-enabled PostgreSQL Hyperscale
-This is the hyperscale form factor of the Postgres database engine that is available with Azure Arc enabled data services. It is also powered by the Citus extension that enables the hyperscale experience. In this form factor, our customers provide the infrastructure that hosts the systems and operate them.
+This is the hyperscale form factor of the Postgres database engine that is available with Azure Arc-enabled data services. It is also powered by the Citus extension that enables the hyperscale experience. In this form factor, our customers provide the infrastructure that hosts the systems and operate them.
## Next steps - **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
This is the hyperscale form factor of the Postgres database engine that is avail
3. [Create an Azure Database for PostgreSQL Hyperscale server group on Azure Arc](create-postgresql-hyperscale-server-group.md) - **Learn**
- - [Read more about Azure Arc enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services)
+ - [Read more about Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services)
- [Read about Azure Arc](https://aka.ms/azurearc)
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/custom-locations.md
If you are logged into Azure CLI using a service principal, to enable this featu
1. Deploy the Azure service cluster extension of the Azure service instance you eventually want on your cluster:
- * [Azure Arc enabled Data Services](../dat#create-the-arc-data-services-extension)
+ * [Azure Arc enabled Data Services](../dat#create-the-arc-data-services-extension)
> [!NOTE] > Outbound proxy without authentication and outbound proxy with basic authentication are supported by the Arc enabled Data Services cluster extension. Outbound proxy that expects trusted certificates is currently not supported.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/overview.md
Key features of Azure Arc include:
* Run [Azure data services](../azure-arc/kubernetes/custom-locations.md) on any Kubernetes environment as if it runs in Azure (specifically Azure SQL Managed Instance and Azure Database for PostgreSQL Hyperscale, with benefits such as upgrades, updates, security, and monitoring). Use elastic scale and apply updates without any application downtime, even without continuous connection to Azure.
-* Create [custom locations](./kubernetes/custom-locations.md) on top of your [Azure Arc enabled Kubernetes](./kubernetes/overview.md) clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for [Azure Arc enabled Data Services](./dat).
+* Create [custom locations](./kubernetes/custom-locations.md) on top of your [Azure Arc enabled Kubernetes](./kubernetes/overview.md) clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for [Azure Arc enabled Data Services](./dat).
* A unified experience viewing your Azure Arc enabled resources whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or Azure REST API.
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-scale.md
Azure Cache for Redis has different cache offerings, which provide flexibility i
## When to scale
-You can use the [monitoring](cache-how-to-monitor.md) features of Azure Cache for Redis to monitor the health and performance of your cache and help determine when to scale the cache.
+You can use the [monitoring](cache-how-to-monitor.md) features of Azure Cache for Redis to monitor the health and performance of your cache. Use that information to determine when to scale the cache.
You can monitor the following metrics to help determine if you need to scale.
-* Redis Server Load
-* Memory Usage
-* Network Bandwidth
-* CPU Usage
+- Redis Server Load
+ - Redis is a single-threaded process, and a high Redis server load means that Redis is unable to keep pace with the requests from all the client connections. In such situations, it helps to enable clustering or increase the shard count so that client connections are distributed across multiple Redis processes.
+- Memory Usage
+ - High memory usage indicates that your data size is too large for the current cache size, and you should consider scaling to a cache size with more memory.
+- Client connections
+ - Each cache size has a limit to the number of client connections it can support. If your client connections are close to the limit for the cache size, consider scaling up to a larger tier, or scaling out to enable clustering and increase shard count. Your choice depends on the Redis server load and memory usage.
+ - For more information on connection limits by cache size, see [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq).
+- Network Bandwidth
+ - If the Redis server exceeds the available bandwidth, client requests could time out because the server can't push data to the client fast enough. Check the "Cache Read" and "Cache Write" metrics to see how much server-side bandwidth is being used. If your Redis server is exceeding available network bandwidth, you should consider scaling up to a larger cache size with higher network bandwidth.
+ - For more information on available network bandwidth by cache size, see [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq).
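To check the metrics above from a script instead of the portal, a minimal Azure CLI sketch follows. The resource ID is a placeholder and the metric names are assumptions; confirm them against the metrics listed for your cache:

```azurecli
# Sketch: pull recent server load and client connection metrics for a cache
# (resource ID is a placeholder; metric names should be confirmed for your cache)
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Cache/Redis/<cache-name>" \
  --metric "serverLoad" "connectedclients" \
  --interval PT5M
```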
-If you determine your cache is no longer meeting your application requirements, you can scale it to a larger or smaller cache pricing tier that is right for your application. For more information on determining which cache pricing tier to use, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier).
+If you determine your cache is no longer meeting your application's requirements, you can scale to an appropriate cache pricing tier for your application. You can choose a larger or smaller cache to match your needs.
+
+For more information on determining the cache pricing tier to use, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier) and [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq) to view the complete list of available SKU specifications.
## Scale a cache
To scale your cache, [browse to the cache](cache-configure.md#configure-azure-ca
On the left, select the pricing tier you want from **Select pricing tier** and **Select**.
-![Pricing tier][redis-cache-pricing-tier-blade]
You can scale to a different pricing tier with the following restrictions:
-* You can't scale from a higher pricing tier to a lower pricing tier.
- * You can't scale from a **Premium** cache down to a **Standard** or a **Basic** cache.
- * You can't scale from a **Standard** cache down to a **Basic** cache.
-* You can scale from a **Basic** cache to a **Standard** cache but you can't change the size at the same time. If you need a different size, you can later do a scaling operation to the wanted size.
-* You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in the next scaling operation.
-* You can't scale from a larger size down to the **C0 (250 MB)** size. However, you can scale down to any other size within the same pricing tier. For example, you can scale down from C5 Standard to C1 Standard.
+- You can't scale from a higher pricing tier to a lower pricing tier.
+ - You can't scale from a **Premium** cache down to a **Standard** or a **Basic** cache.
+ - You can't scale from a **Standard** cache down to a **Basic** cache.
+- You can scale from a **Basic** cache to a **Standard** cache but you can't change the size at the same time. If you need a different size, you can later do a scaling operation to the wanted size.
+- You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in the next scaling operation.
+- You can't scale from a larger size down to the **C0 (250 MB)** size. However, you can scale down to any other size within the same pricing tier. For example, you can scale down from C5 Standard to C1 Standard.
While the cache is scaling to the new pricing tier, a **Scaling** status is displayed on the left in the **Azure Cache for Redis**.
-![Scaling][redis-cache-scaling]
When scaling is complete, the status changes from **Scaling** to **Running**.
When scaling is complete, the status changes from **Scaling** to **Running**.
You can scale your cache instances in the Azure portal. And, you can scale using PowerShell cmdlets, Azure CLI, and by using the Microsoft Azure Management Libraries (MAML).
-* [Scale using PowerShell](#scale-using-powershell)
-* [Scale using Azure CLI](#scale-using-azure-cli)
-* [Scale using MAML](#scale-using-maml)
+- [Scale using PowerShell](#scale-using-powershell)
+- [Scale using Azure CLI](#scale-using-azure-cli)
+- [Scale using MAML](#scale-using-maml)
### Scale using PowerShell
For more information, see the [Manage Azure Cache for Redis using MAML](https://
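As a quick illustration, a scripted scale operation with the Azure CLI might look like the following sketch. The cache and resource group names are placeholders; verify the exact parameters against `az redis update --help`:

```azurecli
# Sketch: scale an existing cache to a Standard C2 (names and sizes are placeholders)
az redis update \
  --name myCache \
  --resource-group myGroup \
  --sku Standard \
  --vm-size c2
```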
The following list contains answers to commonly asked questions about Azure Cache for Redis scaling.
-* [Can I scale to, from, or within a Premium cache?](#can-i-scale-to-from-or-within-a-premium-cache)
-* [After scaling, do I have to change my cache name or access keys?](#after-scaling-do-i-have-to-change-my-cache-name-or-access-keys)
-* [How does scaling work?](#how-does-scaling-work)
-* [Will I lose data from my cache during scaling?](#will-i-lose-data-from-my-cache-during-scaling)
-* [Is my custom databases setting affected during scaling?](#is-my-custom-databases-setting-affected-during-scaling)
-* [Will my cache be available during scaling?](#will-my-cache-be-available-during-scaling)
-* [Are there scaling limitations with geo-replication?](#are-there-scaling-limitations-with-geo-replication)
-* [Operations that aren't supported](#operations-that-arent-supported)
-* [How long does scaling take?](#how-long-does-scaling-take)
-* [How can I tell when scaling is complete?](#how-can-i-tell-when-scaling-is-complete)
+- [Can I scale to, from, or within a Premium cache?](#can-i-scale-to-from-or-within-a-premium-cache)
+- [After scaling, do I have to change my cache name or access keys?](#after-scaling-do-i-have-to-change-my-cache-name-or-access-keys)
+- [How does scaling work?](#how-does-scaling-work)
+- [Will I lose data from my cache during scaling?](#will-i-lose-data-from-my-cache-during-scaling)
+- [Is my custom databases setting affected during scaling?](#is-my-custom-databases-setting-affected-during-scaling)
+- [Will my cache be available during scaling?](#will-my-cache-be-available-during-scaling)
+- [Are there scaling limitations with geo-replication?](#are-there-scaling-limitations-with-geo-replication)
+- [Operations that aren't supported](#operations-that-arent-supported)
+- [How long does scaling take?](#how-long-does-scaling-take)
+- [How can I tell when scaling is complete?](#how-can-i-tell-when-scaling-is-complete)
### Can I scale to, from, or within a Premium cache?
-* You can't scale from a **Premium** cache down to a **Basic** or **Standard** pricing tier.
-* You can scale from one **Premium** cache pricing tier to another.
-* You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in a later scaling operation.
-* If you enabled clustering when you created your **Premium** cache, you can [change the cluster size](cache-how-to-premium-clustering.md#cluster-size). If your cache was created without clustering enabled, you can configure clustering at a later time.
+- You can't scale from a **Premium** cache down to a **Basic** or **Standard** pricing tier.
+- You can scale from one **Premium** cache pricing tier to another.
+- You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in a later scaling operation.
+- If you enabled clustering when you created your **Premium** cache, you can [change the cluster size](cache-how-to-premium-clustering.md#cluster-size). If your cache was created without clustering enabled, you can configure clustering at a later time.
For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
No, your cache name and keys are unchanged during a scaling operation.
### How does scaling work?
-* When you scale a **Basic** cache to a different size, it's shut down and a new cache is provisioned using the new size. During this time, the cache is unavailable and all data in the cache is lost.
-* When you scale a **Basic** cache to a **Standard** cache, a replica cache is provisioned and the data is copied from the primary cache to the replica cache. The cache remains available during the scaling process.
-* When you scale a **Standard** cache to a different size or to a **Premium** cache, one of the replicas is shut down and reprovisioned to the new size and the data transferred over, and then the other replica does a failover before it's reprovisioned, similar to the process that occurs during a failure of one of the cache nodes.
-* When you scale out a clustered cache, new shards are provisioned and added to the Redis server cluster. Data is then resharded across all shards.
-* When you scale in a clustered cache, data is first resharded and then cluster size is reduced to required shards.
+- When you scale a **Basic** cache to a different size, it's shut down and a new cache is provisioned using the new size. During this time, the cache is unavailable and all data in the cache is lost.
+- When you scale a **Basic** cache to a **Standard** cache, a replica cache is provisioned and the data is copied from the primary cache to the replica cache. The cache remains available during the scaling process.
+- When you scale a **Standard** cache to a different size or to a **Premium** cache, one of the replicas is shut down and reprovisioned to the new size and the data transferred over, and then the other replica does a failover before it's reprovisioned, similar to the process that occurs during a failure of one of the cache nodes.
+- When you scale out a clustered cache, new shards are provisioned and added to the Redis server cluster. Data is then resharded across all shards.
+- When you scale in a clustered cache, data is first resharded and then cluster size is reduced to required shards.
### Will I lose data from my cache during scaling?
-* When you scale a **Basic** cache to a new size, all data is lost and the cache is unavailable during the scaling operation.
-* When you scale a **Basic** cache to a **Standard** cache, the data in the cache is typically preserved.
-* When you scale a **Standard** cache to a larger size or tier, or a **Premium** cache is scaled to a larger size, all data is typically preserved. When scaling down a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when it's scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
+- When you scale a **Basic** cache to a new size, all data is lost and the cache is unavailable during the scaling operation.
+- When you scale a **Basic** cache to a **Standard** cache, the data in the cache is typically preserved.
+- When you scale a **Standard** cache to a larger size or tier, or a **Premium** cache is scaled to a larger size, all data is typically preserved. When scaling down a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when it's scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
### Is my custom databases setting affected during scaling? If you configured a custom value for the `databases` setting during cache creation, keep in mind that some pricing tiers have different [databases limits](cache-configure.md#databases). Here are some considerations when scaling in this scenario:
-* When scaling to a pricing tier with a lower `databases` limit than the current tier:
- * If you're using the default number of `databases`, which is 16 for all pricing tiers, no data is lost.
- * If you're using a custom number of `databases` that falls within the limits for the tier to which you're scaling, this `databases` setting is kept and no data is lost.
- * If you're using a custom number of `databases` that exceeds the limits of the new tier, the `databases` setting is lowered to the limits of the new tier and all data in the removed databases is lost.
-* When scaling to a pricing tier with the same or higher `databases` limit than the current tier, your `databases` setting is kept and no data is lost.
+- When scaling to a pricing tier with a lower `databases` limit than the current tier:
+ - If you're using the default number of `databases`, which is 16 for all pricing tiers, no data is lost.
+ - If you're using a custom number of `databases` that falls within the limits for the tier to which you're scaling, this `databases` setting is kept and no data is lost.
+ - If you're using a custom number of `databases` that exceeds the limits of the new tier, the `databases` setting is lowered to the limits of the new tier and all data in the removed databases is lost.
+- When scaling to a pricing tier with the same or higher `databases` limit than the current tier, your `databases` setting is kept and no data is lost.
While Standard and Premium caches have a 99.9% SLA for availability, there's no SLA for data loss. ### Will my cache be available during scaling?
-* **Standard** and **Premium** caches remain available during the scaling operation. However, connection blips can occur while scaling Standard and Premium caches, and also while scaling from Basic to Standard caches. These connection blips are expected to be small and redis clients can generally re-establish their connection instantly.
-* **Basic** caches are offline during scaling operations to a different size. Basic caches remain available when scaling from **Basic** to **Standard** but might experience a small connection blip. If a connection blip occurs, Redis clients can generally re-establish their connection instantly.
+- **Standard** and **Premium** caches remain available during the scaling operation. However, connection blips can occur while scaling Standard and Premium caches, and also while scaling from Basic to Standard caches. These connection blips are expected to be small and Redis clients can generally re-establish their connection instantly.
+- **Basic** caches are offline during scaling operations to a different size. Basic caches remain available when scaling from **Basic** to **Standard** but might experience a small connection blip. If a connection blip occurs, Redis clients can generally re-establish their connection instantly.
### Are there scaling limitations with geo-replication?
With geo-replication configured, you might notice that you cannot scale a cache
### Operations that aren't supported
-* You can't scale from a higher pricing tier to a lower pricing tier.
- * You can't scale from a **Premium** cache down to a **Standard** or a **Basic** cache.
- * You can't scale from a **Standard** cache down to a **Basic** cache.
-* You can scale from a **Basic** cache to a **Standard** cache but you can't change the size at the same time. If you need a different size, you can do a scaling operation to the size you want at a later time.
-* You can't scale from a **Basic** cache directly to a **Premium** cache. First scale from **Basic** to **Standard** in one scaling operation, and then scale from **Standard** to **Premium** in a later operation.
-* You can't scale from a larger size down to the **C0 (250 MB)** size.
+- You can't scale from a higher pricing tier to a lower pricing tier.
+ - You can't scale from a **Premium** cache down to a **Standard** or a **Basic** cache.
+ - You can't scale from a **Standard** cache down to a **Basic** cache.
+- You can scale from a **Basic** cache to a **Standard** cache but you can't change the size at the same time. If you need a different size, you can do a scaling operation to the size you want at a later time.
+- You can't scale from a **Basic** cache directly to a **Premium** cache. First scale from **Basic** to **Standard** in one scaling operation, and then scale from **Standard** to **Premium** in a later operation.
+- You can't scale from a larger size down to the **C0 (250 MB)** size.
If a scaling operation fails, the service tries to revert the operation, and the cache will revert to the original size.
If a scaling operation fails, the service tries to revert the operation, and the
Scaling time depends on a few factors. Here are some factors that can affect how long scaling takes.
-* Amount of data: Larger amounts of data take a longer time to be replicated
-* High write requests: Higher number of writes mean more data replicates across nodes or shards
-* High server load: Higher server load means Redis server is busy and has limited CPU cycles to complete data redistribution
+- Amount of data: Larger amounts of data take a longer time to be replicated
+- High write requests: Higher number of writes mean more data replicates across nodes or shards
+- High server load: Higher server load means Redis server is busy and has limited CPU cycles to complete data redistribution
Generally, when you scale a cache with no data, it takes approximately 20 minutes. For clustered caches, scaling takes approximately 20 minutes per shard with minimal data.
-<!-- Scaling time depends on how much data is in the cache, with larger amounts of data taking a longer time to complete. Scaling takes approximately 20 minutes. For clustered caches, scaling takes approximately 20 minutes per shard.
- -->
- ### How can I tell when scaling is complete? In the Azure portal, you can see the scaling operation in progress. When scaling is complete, the status of the cache changes to **Running**.
azure-cache-for-redis Cache Troubleshoot Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-server.md
Last updated 10/18/2019 + # Troubleshoot Azure Cache for Redis server-side issues This section discusses troubleshooting issues that occur because of a condition on an Azure Cache for Redis instance or the virtual machines hosting it.
There are several possible changes you can make to help keep memory usage health
- Break up your large cached objects into smaller related objects. - [Create alerts](cache-how-to-monitor.md#alerts) on metrics like used memory to be notified early about potential impacts. - [Scale](cache-how-to-scale.md) to a larger cache size with more memory capacity.
+- [Scale](cache-how-to-scale.md) to a larger cache size with more memory capacity. For more information, see [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq).
## High CPU usage or server load
-A high server load or CPU usage means the server can't process requests in a timely fashion. The server may be slow to respond and unable to keep up with request rates.
+A high server load or CPU usage means the server can't process requests in a timely fashion. The server might be slow to respond and unable to keep up with request rates.
[Monitor metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor-metrics-explorer) such as CPU or server load. Watch for spikes in CPU usage that correspond with timeouts.
There are several changes you can make to mitigate high server load:
- Investigate what is causing CPU spikes such as [long-running commands](#long-running-commands) noted below or page faulting because of high memory pressure. - [Create alerts](cache-how-to-monitor.md#alerts) on metrics like CPU or server load to be notified early about potential impacts.-- [Scale](cache-how-to-scale.md) to a larger cache size with more CPU capacity.
+- [Scale](cache-how-to-scale.md) out to more shards to distribute load across multiple Redis processes or scale up to a larger cache size with more CPU cores. For more information, see [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq).
## Long-running commands
-Some Redis commands are more expensive to execute than others. The [Redis commands documentation](https://redis.io/commands) shows the time complexity of each command. Because Redis command processing is single-threaded, a command that takes time to run will block all others that come after it. You should review the commands that you're issuing to your Redis server to understand their performance impacts. For instance, the [KEYS](https://redis.io/commands/keys) command is often used without knowing that it's an O(N) operation. You can avoid KEYS by using [SCAN](https://redis.io/commands/scan) to reduce CPU spikes.
+Some Redis commands are more expensive to execute than others. The [Redis commands documentation](https://redis.io/commands) shows the time complexity of each command. Because Redis command processing is single-threaded, a command that takes time to run blocks all others that come after it. Review the commands that you're issuing to your Redis server to understand their performance impacts. For instance, the [KEYS](https://redis.io/commands/keys) command is often used without knowing that it's an O(N) operation. You can avoid KEYS by using [SCAN](https://redis.io/commands/scan) to reduce CPU spikes.
Using the [SLOWLOG](https://redis.io/commands/slowlog) command, you can measure expensive commands being executed against the server.
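As a rough sketch, you can list the most recent slow entries with `redis-cli`. The host name and access key are placeholders, and this assumes a `redis-cli` build with TLS support:

```console
# Sketch: show the 10 most recent slow commands recorded by the server
# (host name and access key are placeholders; requires redis-cli built with TLS support)
redis-cli -h mycache.redis.cache.windows.net -p 6380 -a <access-key> --tls SLOWLOG GET 10
```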
To mitigate situations where network bandwidth usage is close to maximum capacit
- Change client call behavior to reduce network demand. - [Create alerts](cache-how-to-monitor.md#alerts) on metrics like cache read or cache write to be notified early about potential impacts.-- [Scale](cache-how-to-scale.md) to a larger cache size with more network bandwidth capacity.
+- [Scale](cache-how-to-scale.md) to a larger cache size with more network bandwidth capacity. For more information, see [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq).
## Additional information
azure-functions Create First Function Vs Code Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-node.md
Title: Create a JavaScript function using Visual Studio Code - Azure Functions description: Learn how to create a JavaScript function, then publish the local Node.js project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 11/03/2020 Last updated : 07/01/2021 adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B
adobe-target-content: ./create-first-function-vs-code-node_uiex
[!INCLUDE [functions-language-selector-quickstart-vs-code](../../includes/functions-language-selector-quickstart-vs-code.md)]
-In this article, you use Visual Studio Code to create a JavaScript function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+Use Visual Studio Code to create a JavaScript function that responds to HTTP requests. Test the code locally, then deploy it to the serverless environment of Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
Before you get started, make sure you have the following requirements in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions (10.14.1 recommended). Use the `node --version` command to check your version.
++ [Node.js 10.14.1+](https://nodejs.org/). Use the `node --version` command to check your version. + [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
In this section, you use Visual Studio Code to create a local Azure Functions pr
1. Provide the following information at the prompts:
- + **Select a language for your function project**: Choose `JavaScript`.
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language for your function project**|Choose `JavaScript`.|
+ |**Select a template for your project's first function**|Choose `HTTP trigger`.|
+ |**Provide a function name**|Type `HttpExample`.|
+ |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
+ |**Select how you would like to open your project**|Choose `Add to workspace`.|
- + **Select a template for your project's first function**: Choose `HTTP trigger`.
-
- + **Provide a function name**: Type `HttpExample`.
-
- + **Authorization level**: Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
-
- + **Select how you would like to open your project**: Choose `Add to workspace`.
-
-1. Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md#generated-project-files).
+ Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md#generated-project-files).
[!INCLUDE [functions-run-function-test-local-vs-code](../../includes/functions-run-function-test-local-vs-code.md)]
After you've verified that the function runs correctly on your local computer, i
[!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)]
-## Publish the project to Azure
+<a name="Publish the project to Azure"></a>
+
+## Deploy the project to Azure
In this section, you create a function app and related resources in your Azure subscription and then deploy your code. > [!IMPORTANT]
-> Publishing to an existing function app overwrites the content of that app in Azure.
+> Deploying to an existing function app overwrites the content of that app in Azure.
1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app...** button.
In this section, you create a function app and related resources in your Azure s
1. Provide the following information at the prompts:
- + **Select folder**: Choose a folder from your workspace or browse to one that contains your function app. You won't see this if you already have a valid function app opened.
-
- + **Select subscription**: Choose the subscription to use. You won't see this if you only have one subscription.
-
- + **Select Function App in Azure**: Choose `+ Create new Function App`. (Don't choose the `Advanced` option, which isn't covered in this article.)
-
- + **Enter a globally unique name for the function app**: Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
-
- + **Select a runtime**: Choose the version of Node.js you've been running on locally. You can use the `node --version` command to check your version.
-
- + **Select a location for new resources**: For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.
+ |Prompt| Selection|
+ |--|--|
+ |**Select Function App in Azure**|Choose `+ Create new Function App`. (Don't choose the `Advanced` option, which isn't covered in this article.)|
+ |**Enter a globally unique name for the function app**|Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.|
+ |**Select a runtime**|Choose the version of Node.js you've been running on locally. You can use the `node --version` command to check your version.|
+ |**Select a location for new resources**|For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.|
The extension shows the status of individual resources as they are being created in Azure in the notification area. :::image type="content" source="../../includes/media/functions-publish-project-vscode/resource-notification.png" alt-text="Notification of Azure resource creation":::
-1. When completed, the following Azure resources are created in your subscription, using names based on your function app name:
+ When completed, the following Azure resources are created in your subscription, using names based on your function app name:
[!INCLUDE [functions-vs-code-created-resources](../../includes/functions-vs-code-created-resources.md)]
- A notification is displayed after your function app is created and the deployment package is applied.
+1. A notification is displayed after your function app is created and the deployment package is applied.
[!INCLUDE [functions-vs-code-create-tip](../../includes/functions-vs-code-create-tip.md)]
In this section, you create a function app and related resources in your Azure s
[!INCLUDE [functions-vs-code-run-remote](../../includes/functions-vs-code-run-remote.md)]
+## Change the code and redeploy to Azure
+
+1. In the Visual Studio Code Explorer view, select the `./HttpExample/index.js` file.
+1. Replace the file with the following code to construct a JSON object and return it.
+
+ ```javascript
+ module.exports = async function (context, req) {
+
+ try {
+ context.log('JavaScript HTTP trigger function processed a request.');
+
+ // Read incoming data
+ const name = req.query.name;
+ const sport = req.query.sport;
+
+ // Fail if required incoming data is missing
+ if (!name || !sport) {
+
+ context.res = {
+ status: 400
+ };
+ return;
+ }
+
+ // Add or change code here
+ const message = `${name} likes ${sport}`;
+
+ // Construct response
+ const responseJSON = {
+ "name": name,
+ "sport": sport,
+ "message": message,
+ "success": true
+ }
+
+ context.res = {
+ // status: 200, /* Defaults to 200 */
+ body: responseJSON,
+ contentType: 'application/json'
+ };
+ } catch(err) {
+ context.res = {
+ status: 500
+ };
+ }
+ }
+ ```
+1. [Rerun the function](#run-the-function-locally) app locally.
+1. In the prompt **Enter request body**, change the request message body to `{ "name": "Tom", "sport": "basketball" }`. Press Enter to send this request message to your function.
+1. View the response in the notification:
+
+ ```json
+ {
+ "name": "Tom",
+ "sport": "basketball",
+ "message": "Tom likes basketball",
+ "success": true
+ }
+ ```
+
+1. [Redeploy the function](#deploy-the-project-to-azure) to Azure.
+
+## Troubleshooting
+
+Use the table below to resolve the most common issues encountered when using this quickstart.
+
+|Problem|Solution|
+|--|--|
+|Can't create a local function project?|Make sure you have the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed.|
+|Can't run the function locally?|Make sure you have the [Azure Functions Core Tools](functions-run-local.md?tabs=windows%2Ccsharp%2Cbash) installed. <br/>When running on Windows, make sure that the default terminal shell for Visual Studio Code isn't set to WSL Bash.|
+|Can't deploy function to Azure?|Review the Output for error information. The bell icon in the lower right corner is another way to view the output. Did you publish to an existing function app? That action overwrites the content of that app in Azure.|
+|Couldn't run the cloud-based Function app?|Remember to use the query string to send in parameters.|
+ ## Next steps
You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=javascript)
> [Connect to a database](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-javascript) > [!div class="nextstepaction"] > [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-javascript)
+> [Securing your Function](security-concepts.md)
[Azure Functions Core Tools]: functions-run-local.md [Azure Functions extension for Visual Studio Code]: https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-python.md
To request a specific Python version when you create your function app in Azure,
When running locally, the runtime uses the available Python version.
+### Changing Python version
+
+To set a Python function app to a specific language version, you need to specify both the language and its version in the `linuxFxVersion` field in the site config. For example, to change a Python app to use Python 3.8:
+
+Set `linuxFxVersion` to `python|3.8`.
+
+To see the full list of Python versions supported by function apps, see [Supported languages](./supported-languages.md).
+
+# [Azure CLI](#tab/azurecli-linux)
+
+You can view and set the `linuxFxVersion` from the Azure CLI.
+
+Using the Azure CLI, view the current `linuxFxVersion` with the [az functionapp config show](/cli/azure/functionapp/config) command.
+
+```azurecli-interactive
+az functionapp config show --name <function_app> \
+--resource-group <my_resource_group>
+```
+
+In this code, replace `<function_app>` with the name of your function app. Also replace `<my_resource_group>` with the name of the resource group for your function app.
+
+You see the `linuxFxVersion` in the following output, which has been truncated for clarity:
+
+```output
+{
+ ...
+ "kind": null,
+ "limits": null,
+ "linuxFxVersion": <LINUX_FX_VERSION>,
+ "loadBalancing": "LeastRequests",
+ "localMySqlEnabled": false,
+ "location": "West US",
+ "logsDirectorySizeLimit": 35,
+ ...
+}
+```
+
+You can update the `linuxFxVersion` setting in the function app with the [az functionapp config set](/cli/azure/functionapp/config) command.
+
+```azurecli-interactive
+az functionapp config set --name <FUNCTION_APP> \
+--resource-group <RESOURCE_GROUP> \
+--linux-fx-version <LINUX_FX_VERSION>
+```
+
+Replace `<FUNCTION_APP>` with the name of your function app, and replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also replace `<LINUX_FX_VERSION>` with the Python version you want to use, prefixed by `python|`, for example `python|3.9`.
+
+You can run this command from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to run this command after running [az login](/cli/azure/reference-index#az-login) to sign in.
+
+The function app restarts after the change is made to the site config.
+
+
++ ## Package management When developing locally using the Azure Functions Core Tools or Visual Studio Code, add the names and versions of the required packages to the `requirements.txt` file and install them using `pip`.
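For example, a minimal sketch of this flow, where the package names and versions are illustrative only:

```console
# Sketch: pin the packages your functions need (illustrative names/versions only)
echo "azure-functions" >> requirements.txt
echo "requests==2.31.0" >> requirements.txt
# Install them into the local environment
pip install -r requirements.txt
```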
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-indoor-module.md
Title: Use the Azure Maps Indoor Maps module with Microsoft Creator services
description: Learn how to use the Microsoft Azure Maps Indoor Maps module to render maps by embedding the module's JavaScript libraries. Previously updated : 07/20/2020 Last updated : 07/13/2021
if (statesetId.length > 0) {
} ```
+## Geographic Settings (Optional)
+
+This guide assumes that you've created your Creator service in the United States. If so, you can skip this section. However, if your Creator service was created in Europe, add the following code:
+
+```javascript
+ indoorManager.setOptions({ geography: 'eu' });
+```
+ ## Indoor Level Picker Control The *Indoor Level Picker* control allows you to change the level of the rendered map. You can optionally initialize the *Indoor Level Picker* control via the *Indoor Manager*. Here's the code to initialize the level control picker:
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agents-overview.md
The following tables provide a quick comparison of the Azure Monitor agents for
### Linux agents
-| | Azure Monitor agent (preview) | Diagnostics<br>extension (LAD) | Telegraf<br>agent | Log Analytics<br>agent | Dependency<br>agent |
+| | Azure Monitor agent | Diagnostics<br>extension (LAD) | Telegraf<br>agent | Log Analytics<br>agent | Dependency<br>agent |
|:|:|:|:|:|:| | **Environments supported** | Azure<br>Other cloud (Azure Arc)<br>On-premises (Azure Arc) | Azure | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises | | **Agent requirements** | None | None | None | None | Requires Log Analytics agent |
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-migration.md
+
+ Title: Migrate from legacy agents to the new Azure Monitor agent
+description: Guidance for migrating from the existing legacy agents to the new Azure Monitor agent (AMA) and Data Collection Rules (DCR)
+++ Last updated : 7/12/2021 ++++
+# Migrating from Log Analytics agent
+This article provides high-level guidance on when and how to migrate to the new Azure Monitor agent (AMA) and Data Collection Rules (DCR). This document will be updated as new migration tooling becomes available.
++
+## Review
+- Go through the guidance [here](./azure-monitor-agent-overview.md#should-i-switch-to-azure-monitor-agent) to decide whether to migrate to the new Azure Monitor agent now or at a later time.
+- For the Azure Monitor agent, review the new capabilities, availability of existing features, services, and solutions, as well as current limitations [here](./agents-overview.md#azure-monitor-agent).
++
+## Test migration using Azure portal
+1. To ensure safe deployment during migration, begin by testing with a few resources in your non-production environment that are running the existing Log Analytics agent. After you validate the data collected on these test resources, roll out to production by following the same steps.
+2. Go to 'Monitor > Settings > Data Collection Rules' and [create new data collection rule(s)](./data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal) to start collecting some of the existing data types. When you use the portal, it performs the following steps on all the target resources on your behalf:
+ - Enable Managed Identity (System Assigned)
+ - Install the AMA extension
+ - Create and deploy DCR associations
+3. Validate that data is flowing as expected via AMA (check the 'Heartbeat' table for new agent version values), and ensure it matches data flowing through the existing Log Analytics agent. A sample validation query is sketched below.
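A minimal sketch of that validation, assuming the `az monitor log-analytics query` command from the Log Analytics CLI extension and the standard `Heartbeat` table columns (the workspace GUID is a placeholder):

```azurecli
# Sketch: latest heartbeat per computer, with the reporting agent category and version
# (workspace GUID is a placeholder; column names assume the standard Heartbeat schema)
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "Heartbeat | where TimeGenerated > ago(1h) | summarize arg_max(TimeGenerated, Category, Version) by Computer"
```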
++
+## At-scale migration using policies
+1. Start by analyzing your current monitoring setup with MMA/OMS, using the following criteria:
+ - Sources (virtual machines, virtual machine scale sets, on-premises servers)
+ - Data Sources (Perf. Counters, Windows Event Logs, Syslog)
+ - Destinations (Log Analytics workspaces)
+2. [Create new data collection rule(s)](/rest/api/monitor/datacollectionrules/create#examples) as per the above configuration. As a **best practice**, you might want a separate DCR for Windows versus Linux sources, or separate DCRs for individual teams with different data collection needs.
+3. [Enable Managed Identity (System Assigned)](../../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md#system-assigned-managed-identity) on target resources.
+4. Install the AMA extension and deploy DCR associations on all target resources using the [built-in policy initiative](../deploy-scale.md#built-in-policy-initiatives), and providing the above DCR as input parameter.
+5. Validate that data is flowing as expected via AMA (check the 'Heartbeat' table for new agent version values), and ensure that it matches the data flowing through the existing Log Analytics agent.
+6. Validate that all downstream dependencies, such as dashboards, alerts, runbook workers, and workbooks, continue to function using data from the new agent.
+7. [Uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from the resources, unless you need it for System Center Operations Manager (SCOM) scenarios or other solutions not yet available on AMA.
+8. Clean up any configuration files, workspace keys or certificates that were being used previously by Log Analytics agent.
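The following is a minimal CLI sketch of the individual calls that the built-in policy initiative automates for a single VM. The resource names, the DCR definition file, and the API version are placeholders or assumptions for illustration; for an actual at-scale rollout, assign the policy initiative referenced in step 4.

```bash
# Hypothetical names; replace with your own values.
SUB="00000000-0000-0000-0000-000000000000"
RG="contoso-rg"
VM="contoso-vm"
DCR="contoso-dcr"

# 1. Create a data collection rule from a JSON definition (see the REST API examples above).
az rest --method put \
  --uri "https://management.azure.com/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.Insights/dataCollectionRules/$DCR?api-version=2019-11-01-preview" \
  --body @dcr-definition.json

# 2. Enable a system-assigned managed identity on the target VM.
az vm identity assign --resource-group "$RG" --name "$VM"

# 3. Install the Azure Monitor agent extension (use AzureMonitorLinuxAgent for Linux VMs).
az vm extension set --resource-group "$RG" --vm-name "$VM" \
  --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor

# 4. Associate the DCR with the VM.
az rest --method put \
  --uri "https://management.azure.com/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.Compute/virtualMachines/$VM/providers/Microsoft.Insights/dataCollectionRuleAssociations/${DCR}-association?api-version=2019-11-01-preview" \
  --body "{\"properties\":{\"dataCollectionRuleId\":\"/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.Insights/dataCollectionRules/$DCR\"}}"
```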
++
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of Azure virtual machines and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent including how to install it and how to configure data collection. ## Relationship to other agents
-The Azure Monitor Agent replaces the following agents that are currently used by Azure Monitor to collect guest data from virtual machines ([view known gaps](/azure/azure-monitor/faq#is-the-new-azure-monitor-agent-at-parity-with-existing-agents)):
+The Azure Monitor Agent replaces the following legacy agents that are currently used by Azure Monitor to collect guest data from virtual machines ([view known gaps](/azure/azure-monitor/faq#is-the-new-azure-monitor-agent-at-parity-with-existing-agents)):
- [Log Analytics agent](./log-analytics-agent.md) - Sends data to Log Analytics workspace and supports VM insights and monitoring solutions. - [Diagnostic extension](./diagnostics-extension-overview.md) - Sends data to Azure Monitor Metrics (Windows only), Azure Event Hubs, and Azure Storage.
The methods for defining data collection for the existing agents are distinctly
- Diagnostic extension has a configuration for each virtual machine. It's easy to define independent definitions for different virtual machines but difficult to centrally manage. It can only send data to Azure Monitor Metrics, Azure Event Hubs, or Azure Storage. For Linux agents, the open source Telegraf agent is required to send data to Azure Monitor Metrics.
-Azure Monitor agent uses [Data Collection Rules (DCR)](data-collection-rule-overview.md) to configure data to collect from each agent. Data collection rules enable manageability of collection settings at scale while still enabling unique, scoped configurations for subsets of machines. They are independent of the workspace and independent of the virtual machine, which allows them to be defined once and reused across machines and environments. See [Configure data collection for the Azure Monitor agent (preview)](data-collection-rule-azure-monitor-agent.md).
+Azure Monitor agent uses [Data Collection Rules (DCR)](data-collection-rule-overview.md) to configure data to collect from each agent. Data collection rules enable manageability of collection settings at scale while still enabling unique, scoped configurations for subsets of machines. They are independent of the workspace and independent of the virtual machine, which allows them to be defined once and reused across machines and environments. See [Configure data collection for the Azure Monitor agent](data-collection-rule-azure-monitor-agent.md).
## Should I switch to Azure Monitor agent?
-Azure Monitor agent coexists with the [generally available agents for Azure Monitor](agents-overview.md), but you may consider transitioning your VMs off the current agents during the Azure Monitor agent public preview period. Consider the following factors when making this determination.
+Azure Monitor agent replaces the [legacy agents for Azure Monitor](agents-overview.md), and you can start transitioning your VMs from the current agents to the new agent. Consider the following factors when doing so:
- **Environment requirements.** Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environment support, and networking requirements will most likely be provided in this new agent. You should assess whether your environment is supported by Azure Monitor agent. If not, then you may need to stay with the current agent. If Azure Monitor agent supports your current environment, then you should consider transitioning to it. - **Current and new feature requirements.** Azure Monitor agent introduces several new capabilities such as filtering, scoping, and multi-homing, but it isn't at parity yet with the current agents for other functionality such as custom log collection and integration with all solutions ([see solutions in preview](/azure/azure-monitor/faq#which-log-analytics-solutions-are-supported-on-the-new-azure-monitor-agent)). Most new capabilities in Azure Monitor will only be made available with Azure Monitor agent, so over time more functionality will only be available in the new agent. Consider whether Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent. If Azure Monitor agent has all the core capabilities you require, then consider transitioning to it. If there are critical features that you require, then continue with the current agent until Azure Monitor agent reaches parity.
azure-monitor Alerts Log Create Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-log-create-templates.md
description: Learn how to use a Resource Manager template to create a log alert
Previously updated : 09/22/2020 Last updated : 07/12/2021 # Create a log alert with a Resource Manager template
This JSON can be saved and deployed using [Azure Resource Manager in Azure porta
This JSON can be saved and deployed using [Azure Resource Manager in Azure portal](../../azure-resource-manager/templates/deploy-portal.md#deploy-resources-from-custom-template).
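If you prefer the command line over the portal, the same saved template can also be deployed with the Azure CLI. This is a minimal sketch; the file names and resource group are placeholders, and the parameters file is assumed to contain values for the template's required parameters.

```bash
# Minimal sketch: deploy the saved alert template with its parameter values.
az deployment group create \
  --resource-group contoso-rg \
  --template-file log-alert.json \
  --parameters @log-alert.parameters.json
```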
-## Template for all resource types (from API version 2020-05-01-preview)
+## Template for all resource types (from API version 2021-02-01-preview)
[Scheduled Query Rules creation](/rest/api/monitor/scheduledqueryrules/createorupdate) template for all resource types (sample data set as variables):
This JSON can be saved and deployed using [Azure Resource Manager in Azure porta
"description": "Specifies whether the alert is enabled" } },
+ "autoMitigate": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert will automatically resolve"
+ }
+ },
+ "checkWorkspaceAlertsStorageConfigured": {
+ "type": "bool",
+ "defaultValue": false,
+ "metadata": {
+ "description": "Specifies whether to check linked storage and fail creation if the storage was not found"
+ }
+ },
"resourceId": { "type": "string", "minLength": 1,
This JSON can be saved and deployed using [Azure Resource Manager in Azure porta
}, "muteActionsDuration": { "type": "string",
- "defaultValue": "PT5M",
+ "defaultValue": null,
"allowedValues": [ "PT1M", "PT5M",
This JSON can be saved and deployed using [Azure Resource Manager in Azure porta
"name": "[parameters('alertName')]", "type": "Microsoft.Insights/scheduledQueryRules", "location": "[parameters('location')]",
- "apiVersion": "2020-05-01-preview",
+ "apiVersion": "2021-02-01-preview",
"tags": {}, "properties": { "description": "[parameters('alertDescription')]",
This JSON can be saved and deployed using [Azure Resource Manager in Azure porta
] }, "muteActionsDuration": "[parameters('muteActionsDuration')]",
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
+ "autoMitigate": "[parameters('autoMitigate')]",
+ "checkWorkspaceAlertsStorageConfigured": "[parameters('checkWorkspaceAlertsStorageConfigured')]",
+ "actions": {
+ "actionGroups": "[parameters('actionGroupId')]",
+ "customProperties": {
+ "key1": "value1",
+ "key2": "value2"
}
- ]
+ }
} } ]
This JSON can be saved and deployed using [Azure Resource Manager in Azure porta
* Learn about [log alerts](./alerts-unified-log.md) * Learn about [managing log alerts](./alerts-log.md) * Understand [webhook actions for log alerts](./alerts-log-webhook.md)
-* Learn more about [log queries](../logs/log-query-overview.md).
+* Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
For information about installing ITSMC, see [Add the IT Service Management Conne
### OAuth setup
-ServiceNow supported versions include Paris, Orlando, New York, Madrid, London, Kingston, Jakarta, Istanbul, Helsinki, and Geneva.
+ServiceNow supported versions include Quebec, Paris, Orlando, New York, Madrid, London, Kingston, Jakarta, Istanbul, Helsinki, and Geneva.
ServiceNow admins must generate a client ID and client secret for their ServiceNow instance. See the following information as required:
+- [Set up OAuth for Quebec](https://docs.servicenow.com/bundle/quebec-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
- [Set up OAuth for Paris](https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) - [Set up OAuth for Orlando](https://docs.servicenow.com/bundle/orlando-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) - [Set up OAuth for New York](https://docs.servicenow.com/bundle/newyork-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
azure-monitor Resource Manager Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/resource-manager-alerts-log.md
description: Sample Azure Resource Manager templates to deploy Azure Monitor log
Previously updated : 09/22/2020 Last updated : 07/12/2021
The following sample creates a [metric measurement alert rule](../alerts/alerts-
} ```
-## Template for all resource types (from version 2020-05-01-preview)
+## Template for all resource types (from version 2021-02-01-preview)
The following sample creates a rule that can target any resource. ```json
The following sample creates a rule that can target any resource.
"description": "Specifies whether the alert is enabled" } },
+ "autoMitigate": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Specifies whether the alert will automatically resolve"
+ }
+ },
+ "checkWorkspaceAlertsStorageConfigured": {
+ "type": "bool",
+ "defaultValue": false,
+ "metadata": {
+ "description": "Specifies whether to check linked storage and fail creation if the storage was not found"
+ }
+ },
"resourceId": { "type": "string", "minLength": 1,
The following sample creates a rule that can target any resource.
}, "windowSize": { "type": "string",
- "defaultValue": "PT5M",
+ "defaultValue": null,
"allowedValues": [ "PT1M", "PT5M",
The following sample creates a rule that can target any resource.
"name": "[parameters('alertName')]", "type": "Microsoft.Insights/scheduledQueryRules", "location": "[parameters('location')]",
- "apiVersion": "2020-05-01-preview",
+ "apiVersion": "2021-02-01-preview",
"tags": {}, "properties": { "description": "[parameters('alertDescription')]",
The following sample creates a rule that can target any resource.
] }, "muteActionsDuration": "[parameters('muteActionsDuration')]",
- "actions": [
- {
- "actionGroupId": "[parameters('actionGroupId')]"
+ "autoMitigate": "[parameters('autoMitigate')]",
+ "checkWorkspaceAlertsStorageConfigured": "[parameters('checkWorkspaceAlertsStorageConfigured')]",
+ "actions": {
+ "actionGroups": "[parameters('actionGroupId')]",
+ "customProperties": {
+ "key1": "value1",
+ "key2": "value2"
}
- ]
+ }
} } ]
The following sample creates a rule that can target any resource.
## Next steps * [Get other sample templates for Azure Monitor](../resource-manager-samples.md).
-* [Learn more about alert rules](./alerts-overview.md).
+* [Learn more about alert rules](./alerts-overview.md).
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Supported tables currently are limited to those specified below. All data from t
| MicrosoftHealthcareApisAuditLogs | | | NWConnectionMonitorPathResult | | | NWConnectionMonitorTestResult | |
-| OfficeActivity | Partial support – some of the data is ingested via webhooks from O365 into LA. This portion is missing in export currently. |
+| OfficeActivity | Partial support (relevant to government clouds only) – some of the data is ingested via webhooks from O365 into LA. This portion is missing in export currently. |
| Operation | Partial support – some of the data is ingested through internal services that isn't supported for export. This portion is missing in export currently. | | Perf | Partial support – only Windows perf data is currently supported. The Linux perf data is missing in export currently. | | PowerBIDatasetsWorkspace | |
azure-monitor Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualizations.md
Here is a video walkthrough on creating dashboards.
![Screenshot shows Grafana visualizations.](media/visualizations/grafana.png)
+> [!IMPORTANT]
+> The Internet Explorer browser and older Microsoft Edge browsers are not compatible with Grafana; you must use a Chromium-based browser, such as the current version of Microsoft Edge. See [supported browsers for Grafana](https://grafana.com/docs/grafana/latest/installation/requirements/#supported-web-browsers).
+ ### Advantages - Rich visualizations. - Rich ecosystem of datasources.
azure-monitor Grafana Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualize/grafana-plugin.md
To set up a local Grafana server, [download and install Grafana in your local en
## Sign in to Grafana
+> [!IMPORTANT]
+> The Internet Explorer browser and older Microsoft Edge browsers are not compatible with Grafana; you must use a Chromium-based browser, such as the current version of Microsoft Edge. See [supported browsers for Grafana](https://grafana.com/docs/grafana/latest/installation/requirements/#supported-web-browsers).
+ 1. Using the IP address of your server, open the Login page at *http://\<IP address\>:3000* or the *\<DNSName>\:3000* in your browser. While 3000 is the default port, note you might have selected a different port during setup. You should see a login page for the Grafana server you built. ![Grafana login screen](./media/grafana-plugin/grafana-login-screen.png)
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na ms.devlang: na Previously updated : 06/14/2021 Last updated : 07/12/2021 # Create an SMB volume for Azure NetApp Files
Before creating an SMB volume, you need to create an Active Directory connection
* **Volume name** Specify the name for the volume that you are creating.
- A volume name must be unique within each capacity pool. It must be at least three characters long. You can use any alphanumeric characters.
+ A volume name must be unique within each capacity pool. It must be at least three characters long. The name must begin with a letter. It can contain letters, numbers, underscores ('_'), and hyphens ('-') only.
You can't use `default` or `bin` as the volume name.
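As an informal illustration of these naming rules, the following shell check expresses them as a regular expression. This is a sketch only, not an official validation; the service remains the source of truth.

```bash
# Sketch: name must start with a letter, contain only letters, digits,
# underscores, or hyphens, be at least three characters long, and must not
# be the reserved name "default" or "bin".
name="smbvol01"
if [[ "$name" =~ ^[A-Za-z][A-Za-z0-9_-]{2,}$ && "$name" != "default" && "$name" != "bin" ]]; then
  echo "Volume name looks valid"
else
  echo "Volume name violates the naming rules"
fi
```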
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
na ms.devlang: na Previously updated : 06/14/2021 Last updated : 07/12/2021 # Create an NFS volume for Azure NetApp Files
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* **Volume name** Specify the name for the volume that you are creating.
- A volume name must be unique within each capacity pool. It must be at least three characters long. You can use any alphanumeric characters.
+ A volume name must be unique within each capacity pool. It must be at least three characters long. The name must begin with a letter. It can contain letters, numbers, underscores ('_'), and hyphens ('-') only.
You cannot use `default` or `bin` as the volume name.
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* [Mount or unmount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Configure export policy for an NFS volume](azure-netapp-files-configure-export-policy.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
-* [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md)
+* [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md)
azure-netapp-files Azure Netapp Files Manage Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-manage-snapshots.md
na ms.devlang: na Previously updated : 06/14/2021 Last updated : 07/12/2021 # Manage snapshots by using Azure NetApp Files
You can create volume snapshots on demand.
You can schedule for volume snapshots to be taken automatically by using snapshot policies. You can also modify a snapshot policy as needed, or delete a snapshot policy that you no longer need.
+### Register the feature
+
+The **snapshot policy** feature is currently in preview. If you are using this feature for the first time, you need to register the feature first.
+
+1. Register the feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSnapshotPolicy
+ ```
+
+2. Check the status of the feature registration:
+
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSnapshotPolicy
+ ```
+You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
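For example, the equivalent Azure CLI calls look like the following (a minimal sketch of the two commands named above):

```bash
# Register the snapshot policy feature.
az feature register --namespace Microsoft.NetApp --name ANFSnapshotPolicy

# Check the registration status; wait until it reports "Registered".
az feature show --namespace Microsoft.NetApp --name ANFSnapshotPolicy --query properties.state
```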
+ ### Create a snapshot policy A snapshot policy enables you to specify the snapshot creation frequency in hourly, daily, weekly, or monthly cycles. You also need to specify the maximum number of snapshots to retain for the volume.
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
na ms.devlang: na Previously updated : 06/30/2021 Last updated : 07/12/2021 # Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
* **Volume name** Specify the name for the volume that you are creating.
- A volume name must be unique within each capacity pool. It must be at least three characters long. You can use any alphanumeric characters.
+ A volume name must be unique within each capacity pool. It must be at least three characters long. The name must begin with a letter. It can contain letters, numbers, underscores ('_'), and hyphens ('-') only.
You cannot use `default` or `bin` as the volume name.
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/cross-region-replication-introduction.md
na ms.devlang: na Previously updated : 04/22/2021 Last updated : 07/12/2021
Azure NetApp Files volume replication is supported between various [Azure region
## Service-level objectives
-Recovery Point Objectives (RPO), or the maximum tolerable data loss, is defined as twice the replication schedule. The actual RPO observed might vary based on factors such as the total dataset size along with the change rate, the percentage of data overwrites, and the replication bandwidth available for transfer.
+Recovery Point Objective (RPO) indicates the point in time to which data can be recovered. The RPO target is typically less than twice the replication schedule, but it can vary. In some cases, it can go beyond the target RPO based on factors such as the total dataset size, the change rate, the percentage of data overwrites, and the replication bandwidth available for transfer.
-* For the replication schedule of 10 minutes, the maximum RPO is 20 minutes.
-* For the hourly replication schedule, the maximum RPO is two hours.
-* For the daily replication schedule, the maximum RPO is two days.
+* For the replication schedule of 10 minutes, the typical RPO is less than 20 minutes.
+* For the hourly replication schedule, the typical RPO is less than two hours.
+* For the daily replication schedule, the typical RPO is less than two days.
Recovery Time Objective (RTO), or the maximum tolerable business application downtime, is determined by factors in bringing up the application and providing access to the data at the second site. The storage portion of the RTO for breaking the peering relationship to activate the destination volume and provide read and write data access in the second site is expected to be complete within a minute.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
na ms.devlang: na Previously updated : 06/15/2021 Last updated : 07/12/2021
Azure NetApp Files is updated regularly. This article provides a summary about t
**NetApp add-ons** is the first category of add-ons introduced under **Storage service add-ons**. It provides access to **NetApp Cloud Compliance**. Clicking the **NetApp Cloud Compliance** tile opens a new browser and directs you to the add-on installation page.
-* Features now generally available (GA)
+* [Manual QoS capacity pool](manual-qos-capacity-pool-introduction.md) now generally available (GA)
- The following Azure NetApp Files features are now generally available. You no longer need to register the features before using them:
- * [Snapshot policy](azure-netapp-files-manage-snapshots.md#manage-snapshot-policies)
- * [Manual QoS capacity pool](manual-qos-capacity-pool-introduction.md)
+ The Manual QoS capacity pool feature is now generally available. You no longer need to register the feature before using it.
* [Shared AD support for multiple accounts to one Active Directory per region per subscription](create-active-directory-connections.md#shared_ad) (Preview)
azure-percept Troubleshoot Dev Kit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-dev-kit.md
sudo journalctl -u hostapd.service -u wpa_supplicant.service -u ztpd.service -u
|```sudo docker image prune``` |[removes all dangling images](https://docs.docker.com/engine/reference/commandline/image_prune/) | |```sudo watch docker ps``` <br> ```watch ifconfig [interface]``` |check docker container download status |
-## USB updates
+## USB update errors
|Error: |Solution: | ||--| |LIBUSB_ERROR_XXX during USB flash via UUU |This error is the result of a USB connection failure during UUU updating. If the USB cable is not properly connected to the USB ports on the PC or the Percept DK carrier board, an error of this form will occur. Try unplugging and reconnecting both ends of the USB cable and jiggling the cable to ensure a secure connection. This almost always solves the issue. |
+## Clearing hard drive space on the Azure Percept DK
+There are two components that take up hard drive space on the Azure Percept DK: the Docker container logs and the Docker containers themselves. To ensure the container logs don't take up all of the hard drive space, the Azure Percept DK has log rotation built in, which rotates out old logs as new logs are generated.
+
+For situations where the number of Docker containers causes hard drive space issues, you can delete unused containers by following these steps:
+1. [SSH into the dev kit](./how-to-ssh-into-percept-dk.md)
+1. Run this command:
+ `docker system prune`
+
+This command removes all unused containers, networks, images, and, optionally, volumes. See the [docker system prune documentation](https://docs.docker.com/engine/reference/commandline/system_prune/) for more details.
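If you want to see how much space is being reclaimed, you can check disk usage before and after pruning. These are general Linux and Docker commands, not Percept-specific tooling:

```bash
# Overall disk usage on the dev kit.
df -h

# Space consumed by Docker images, containers, and volumes.
docker system df
```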
+ ## Azure Percept DK carrier board LED states There are three small LEDs on top of the carrier board housing. A cloud icon is printed next to LED 1, a Wi-Fi icon is printed next to LED 2, and an exclamation mark icon is printed next to LED 3. See the table below for information on each LED state.
azure-resource-manager Learn Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/learn-bicep.md
In addition to the preceding path, the following modules contain Bicep content.
| [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif/) | This module teaches you how to preview your changes with the what-if operation. By using what-if, you can make sure your Bicep file only makes changes that you expect. | | [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs/) | Learn how to create and publish template specs, and how to deploy them. | | [Authenticate your Azure deployment pipeline by using service principals](/learn/modules/authenticate-azure-deployment-pipeline-service-principals/) | Service principals enable your deployment pipelines to authenticate securely with Azure. In this module, you'll learn what service principals are, how they work, and how to create them. You'll also learn how to grant them permission to your Azure resources so that your pipelines can deploy your Bicep files. |
+| [Manage changes to your Bicep code by using Git](/learn/modules/manage-changes-bicep-code-git/) | Learn how to use Git to support your Bicep development workflow by keeping track of the changes you make as you work. You'll find out how to commit files, view the history of the files you've changed, and how to use branches to develop multiple versions of your code at the same time. You'll also learn how to use GitHub or Azure Repos to publish a repository so that you can collaborate with team members. |
## Next steps
-* For short introduction to Bicep, see [Bicep quickstart](quickstart-create-bicep-use-visual-studio-code.md).
+* For a short introduction to Bicep, see [Bicep quickstart](quickstart-create-bicep-use-visual-studio-code.md).
* For suggestions about how to improve your Bicep files, see [Best practices for Bicep](best-practices.md).
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | Entity | Scope | Length | Valid Characters | > | | | | | > | availabilitySets | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End with alphanumeric or underscore. |
+> | cloudservices | resource group | 1-15 <br><br>See note below. | Can't use space or these characters:<br> `~ ! @ # $ % ^ & * ( ) = + _ [ ] { } \ | ; : . ' " , < > / ?`<br><br>Can't start with underscore. Can't end with period or hyphen. |
> | diskEncryptionSets | resource group | 1-80 | Alphanumerics and underscores. | > | disks | resource group | 1-80 | Alphanumerics, underscores, and hyphens. | > | galleries | resource group | 1-80 | Alphanumerics and periods.<br><br>Start and end with alphanumeric. |
azure-signalr Howto Shared Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/howto-shared-private-endpoints.md
+
+ Title: Secure outbound traffic through Shared Private Endpoints
+
+description: How to secure outbound traffic through Shared Private Endpoints to avoid traffic going to the public network
+++++ Last updated : 07/08/2021+++
+# Secure outbound traffic through Shared Private Endpoints
+
+If you're using [serverless mode](concept-service-mode.md#serverless-mode) in Azure SignalR Service, you might have outbound traffic to upstream endpoints. Upstream endpoints, such as
+Azure Web App and Azure Functions, can be configured to accept connections from a list of virtual networks and refuse outside connections that originate from a public network. You can create an outbound [private endpoint connection](../private-link/private-endpoint-overview.md) to reach these endpoints.
+
+ :::image type="content" alt-text="Shared private endpoint overview." source="media\howto-shared-private-endpoints\shared-private-endpoint-overview.png" :::
+
+This outbound method is subject to the following requirements:
+++ The upstream must be Azure Web App or Azure Function.+++ The Azure SignalR Service must be on the Standard tier.+++ The Azure Web App or Azure Function must be on certain SKUs. See [Use Private Endpoints for Azure Web App](../app-service/networking/private-endpoint.md).+
+## Shared Private Link Resources Management APIs
+
+Private endpoints of secured resources that are created through Azure SignalR Service APIs are referred to as *shared private link resources*. This is because you're "sharing" access to a resource, such as an Azure Function, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside the Azure SignalR Service execution environment and are not directly visible to you.
+
+Currently, you can use the Management REST API to create or delete *shared private link resources*. In the remainder of this article, we'll use the [Azure CLI](/cli/azure/) to demonstrate the REST API calls.
+
+> [!NOTE]
+> The examples in this article are based on the following assumptions:
+> * The resource ID of this Azure SignalR Service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr_.
+> * The resource ID of the upstream Azure Function is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Web/sites/contoso-func_.
+
+The rest of the examples show how the _contoso-signalr_ service can be configured so that its upstream calls to the function go through a private endpoint rather than the public network.
+
+### Step 1: Create a shared private link resource to the function
+
+You can make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource:
+
+```dotnetcli
+az rest --method put --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview --body @create-pe.json
+```
+
+The contents of the *create-pe.json* file, which represent the request body to the API, are as follows:
+
+```json
+{
+ "name": "func-pe",
+ "properties": {
+ "privateLinkResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Web/sites/contoso-func",
+ "groupId": "sites",
+ "requestMessage": "please approve"
+ }
+}
+```
+
+The process of creating an outbound private endpoint is a long-running (asynchronous) operation. As in all asynchronous Azure operations, the `PUT` call returns an `Azure-AsyncOperation` header value that looks like the following:
+
+```plaintext
+"Azure-AsyncOperation": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview"
+```
+
+You can poll this URI periodically to obtain the status of the operation.
+
+If you're using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperation` header value:
+
+```dotnetcli
+az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview
+```
+
+Wait until the status changes to "Succeeded" before proceeding to the next steps.
+
+### Step 2a: Approve the private endpoint connection for the function
+
+> [!NOTE]
+> In this section, you use the Azure portal to walk through the approval flow for a private endpoint to Azure Function. Alternately, you could use the [REST API](/rest/api/appservice/web-apps/approve-or-reject-private-endpoint-connection) that's available via the App Service provider.
+
+> [!IMPORTANT]
+> After you approve the private endpoint connection, the Function is no longer accessible from the public network. You may need to create other private endpoints in your own virtual network to access the Function endpoint.
+
+1. In the Azure portal, select the **Networking** tab of your Function App and navigate to **Private endpoint connections**. Click **Configure your private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
+
+ :::image type="content" alt-text="Screenshot of the Azure portal, showing the Private endpoint connections pane." source="media\howto-shared-private-endpoints\portal-function-approve-private-endpoint.png" :::
+
+1. Select the private endpoint that Azure SignalR Service created. In the **Private endpoint** column, identify the private endpoint connection by the name that's specified in the previous API call, and then select **Approve**.
+
+ Make sure that the private endpoint connection appears as shown in the following screenshot. It could take one to two minutes for the status to be updated in the portal.
+
+ :::image type="content" alt-text="Screenshot of the Azure portal, showing an Approved status on the Private endpoint connections pane." source="media\howto-shared-private-endpoints\portal-function-approved-private-endpoint.png" :::
+
+### Step 2b: Query the status of the shared private link resource
+
+It takes a few minutes for the approval to be propagated to Azure SignalR Service. To confirm that the shared private link resource has been updated after approval, you can also obtain the "Connection state" by using the GET API.
+
+```dotnetcli
+az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview
+```
+
+This returns a JSON response in which the connection state appears as "status" under the "properties" section.
+
+```json
+{
+ "name": "func-pe",
+ "properties": {
+ "privateLinkResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Web/sites/contoso-func",
+ "groupId": "sites",
+ "requestMessage": "please approve",
+ "status": "Approved",
+ "provisioningState": "Succeeded"
+ }
+}
+
+```
+
+If the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, it means that the shared private link resource is functional and Azure SignalR Service can communicate over the private endpoint.
+
+### Step 3: Verify upstream calls are from a private IP
+
+Once the private endpoint is set up, you can verify that incoming calls are from a private IP by checking the `X-Forwarded-For` header on the upstream side.
++
+## Next steps
+
+Learn more about private endpoints:
+++ [What are private endpoints?](../private-link/private-endpoint-overview.md)
azure-sql Always Encrypted Enclaves Configure Attestation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/always-encrypted-enclaves-configure-attestation.md
Title: "Configure Azure Attestation for your Azure SQL logical server"
+ Title: "Configure attestation for Always Encrypted using Azure Attestation"
description: "Configure Azure Attestation for Always Encrypted with secure enclaves in Azure SQL Database." keywords: encrypt data, sql encryption, database encryption, sensitive data, Always Encrypted, secure enclaves, SGX, attestation
ms.reviwer: vanto Previously updated : 05/01/2021 Last updated : 07/14/2021
-# Configure Azure Attestation for your Azure SQL logical server
+# Configure attestation for Always Encrypted using Azure Attestation
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-> [!NOTE]
-> Always Encrypted with secure enclaves for Azure SQL Database is currently in **public preview**.
- [Microsoft Azure Attestation](../../attestation/overview.md) is a solution for attesting Trusted Execution Environments (TEEs), including Intel Software Guard Extensions (Intel SGX) enclaves. To use Azure Attestation for attesting Intel SGX enclaves used for [Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database, you need to:
An [attestation provider](../../attestation/basic-concepts.md#attestation-provid
Attestation policies are specified using the [claim rule grammar](../../attestation/claim-rule-grammar.md).
+> [!IMPORTANT]
+> An attestation provider gets created with the default policy for Intel SGX enclaves, which does not validate the code running inside the enclave. Microsoft strongly advises that you set the recommended policy below, and not use the default policy, for Always Encrypted with secure enclaves.
+ Microsoft recommends the following policy for attesting Intel SGX enclaves used for Always Encrypted in Azure SQL Database: ```output
The above policy verifies:
> One of the main goals of attestation is to convince clients that the binary running in the enclave is the binary that is supposed to run. Attestation policies provide two mechanisms for this purpose. One is the **mrenclave** claim which is the hash of the binary that is supposed to run in an enclave. The problem with the **mrenclave** is that the binary hash changes even with trivial changes to the code, which makes it hard to rev the code running in the enclave. Hence, we recommend the use of the **mrsigner**, which is a hash of a key that is used to sign the enclave binary. When Microsoft revs the enclave, the **mrsigner** stays the same as long as the signing key does not change. In this way, it becomes feasible to deploy updated binaries without breaking customers' applications. > [!IMPORTANT]
-> An attestation provider gets created with the default policy for Intel SGX enclaves, which does not validate the code running inside the enclave. Microsoft strongly advises you set the above recommended policy, and not use the default policy, for Always Encrypted with secure enclaves.
+> Microsoft may need to rotate the key used to sign the Always Encrypted enclave binary, which is expected to be a rare event. Before a new version of the enclave binary, signed with a new key, is deployed to Azure SQL Database, this article will be updated to provide a new recommended attestation policy and instructions on how you should update the policy in your attestation providers to ensure your applications continue to work uninterrupted.
For instructions for how to create an attestation provider and configure with an attestation policy using:
For instructions for how to create an attestation provider and configure with an
> [!IMPORTANT] > When you configure your attestation policy with Azure CLI, set the `attestation-type` parameter to `SGX-IntelSDK`. + ## Determine the attestation URL for your attestation policy After you've configured an attestation policy, you need to share the attestation URL with administrators of applications that use Always Encrypted with secure enclaves in Azure SQL Database. The attestation URL is the `Attest URI` of the attestation provider containing the attestation policy, which looks like this: `https://MyAttestationProvider.wus.attest.azure.net`.
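As a rough CLI sketch, creating an attestation provider, setting the policy, and retrieving the attestation URL can look like the following. This assumes the `attestation` CLI extension; the provider name, resource group, and policy file are placeholders, and the exact parameter names for `az attestation policy set` may vary by extension version, so verify them with `--help` before use.

```bash
# Hypothetical names; replace with your own values.
az attestation create --name MyAttestationProvider --resource-group contoso-rg --location westus2

# Set the recommended policy (assumed to be saved locally as sgx-policy.txt).
# Parameter names are an assumption; confirm with: az attestation policy set --help
az attestation policy set --name MyAttestationProvider --resource-group contoso-rg \
  --attestation-type SGX-IntelSDK --new-attestation-policy "$(cat sgx-policy.txt)"

# Retrieve the Attest URI to share with application administrators.
az attestation show --name MyAttestationProvider --resource-group contoso-rg --query attestUri
```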
azure-sql Always Encrypted Enclaves Enable Sgx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/always-encrypted-enclaves-enable-sgx.md
ms.reviwer: vanto Previously updated : 01/15/2021 Last updated : 07/14/2021 # Enable Intel SGX for Always Encrypted for your Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-> [!NOTE]
-> Always Encrypted with secure enclaves for Azure SQL Database is currently in **public preview**.
[Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database uses [Intel Software Guard Extensions (Intel SGX)](https://itpeernetwork.intel.com/microsoft-azure-confidential-computing/) enclaves. For Intel SGX to be available, the database must use the [vCore model](service-tiers-vcore.md) and the [DC-series](service-tiers-sql-database-vcore.md#dc-series) hardware generation.
azure-sql Always Encrypted Enclaves Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/always-encrypted-enclaves-getting-started.md
ms.reviwer: vanto Previously updated : 05/01/2021 Last updated : 07/14/2021 # Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-> [!NOTE]
-> Always Encrypted with secure enclaves for Azure SQL Database is currently in **public preview**.
- This tutorial teaches you how to get started with [Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database. It will show you: > [!div class="checklist"]
azure-sql Always Encrypted Enclaves Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/always-encrypted-enclaves-plan.md
ms.reviwer: vanto Previously updated : 01/15/2021 Last updated : 07/14/2021 # Plan for Intel SGX enclaves and attestation in Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-> [!NOTE]
-> Always Encrypted with secure enclaves for Azure SQL Database is currently in **public preview**.
- [Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database uses [Intel Software Guard Extensions (Intel SGX)](https://itpeernetwork.intel.com/microsoft-azure-confidential-computing/) enclaves and requires [Microsoft Azure Attestation](/sql/relational-databases/security/encryption/always-encrypted-enclaves#secure-enclave-attestation). ## Plan for Intel SGX in Azure SQL Database
Intel SGX is a hardware-based trusted execution environment technology. Intel SG
## Plan for attestation in Azure SQL Database
-[Microsoft Azure Attestation](../../attestation/overview.md) (preview) is a solution for attesting Trusted Execution Environments (TEEs), including Intel SGX enclaves in Azure SQL databases using the DC-series hardware generation.
-
-To use Azure Attestation for attesting Intel SGX enclaves in Azure SQL Database, you need to:
-
-1. Create an [attestation provider](../../attestation/basic-concepts.md#attestation-provider) and configure it with an attestation policy.
+[Microsoft Azure Attestation](../../attestation/overview.md) is a solution for attesting Trusted Execution Environments (TEEs), including Intel SGX enclaves in Azure SQL databases using the DC-series hardware generation.
-2. Grant your Azure SQL logical server access to the created attestation provider.
+To use Azure Attestation for attesting Intel SGX enclaves in Azure SQL Database, you need to create an [attestation provider](../../attestation/basic-concepts.md#attestation-provider) and configure it with the Microsoft-provided attestation policy. See [Configure attestation for Always Encrypted using Azure Attestation](always-encrypted-enclaves-configure-attestation.md).
## Roles and responsibilities when configuring SGX enclaves and attestation
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
Previously updated : 05/18/2021 Last updated : 07/13/2021 # Features comparison: Azure SQL Database and Azure SQL Managed Instance
The Azure platform provides a number of PaaS capabilities that are added as an a
| File system access | No. Use [BULK INSERT](/sql/t-sql/statements/bulk-insert-transact-sql#f-importing-data-from-a-file-in-azure-blob-storage) or [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql#i-accessing-data-from-a-file-stored-on-azure-blob-storage) to access and load data from Azure Blob Storage as an alternative. | No. Use [BULK INSERT](/sql/t-sql/statements/bulk-insert-transact-sql#f-importing-data-from-a-file-in-azure-blob-storage) or [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql#i-accessing-data-from-a-file-stored-on-azure-blob-storage) to access and load data from Azure Blob Storage as an alternative. | | [Geo-restore](recovery-using-backups.md#geo-restore) | Yes | Yes | | [Hyperscale architecture](service-tier-hyperscale.md) | Yes | No |
-| [Long-term backup retention - LTR](long-term-retention-overview.md) | Yes, keep automatically taken backups up to 10 years. | Not yet. Use `COPY_ONLY` [manual backups](../managed-instance/transact-sql-tsql-differences-sql-server.md#backup) as a temporary workaround. |
+| [Long-term backup retention - LTR](long-term-retention-overview.md) | Yes, keep automatically taken backups up to 10 years. | Yes, keep automatically taken backups up to 10 years. |
| Pause/resume | Yes, in [serverless model](serverless-tier-overview.md) | No | | [Policy-based management](/sql/relational-databases/policy-based-management/administer-servers-by-using-policy-based-management) | No | No | | Public IP address | Yes. The access can be restricted using firewall or service endpoints. | Yes. Needs to be explicitly enabled and port 3342 must be enabled in NSG rules. Public IP can be disabled if needed. See [Public endpoint](../managed-instance/public-endpoint-overview.md) for more details. |
azure-sql Long Term Retention Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/long-term-retention-overview.md
Previously updated : 02/25/2021 Last updated : 07/13/2021 # Long-term retention - Azure SQL Database and Azure SQL Managed Instance
You can configure long-term backup retention using the Azure portal and PowerShe
To learn how to configure long-term retention or restore a database from backup for SQL Database using the Azure portal or PowerShell, see [Manage Azure SQL Database long-term backup retention](long-term-backup-retention-configure.md).
-To learn how to configure long-term retention or restore a database from backup for SQL Managed Instance using PowerShell, see [Manage Azure SQL Managed Instance long-term backup retention](../managed-instance/long-term-backup-retention-configure.md).
+To learn how to configure long-term retention or restore a database from backup for SQL Managed Instance using the Azure portal or PowerShell, see [Manage Azure SQL Managed Instance long-term backup retention](../managed-instance/long-term-backup-retention-configure.md).
To restore a database from the LTR storage, you can select a specific backup based on its timestamp. The database can be restored to any existing server under the same subscription as the original database. To learn how to restore your database from an LTR backup, using the Azure portal, or PowerShell, see [Manage Azure SQL Database long-term backup retention](long-term-backup-retention-configure.md).
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/maintenance-window.md
Expected duration of configuring maintenance window on managed instance can be c
> A short reconfiguration happens at the end of the maintenance operation and typically lasts up to 8 seconds even in case of interrupted long-running transactions. To minimize the impact of the reconfiguration you should schedule the operation outside of the peak hours. ### IP address space requirements
-Each new virtual cluster in subnet requires additional IP addresses according to the [virtual cluster IP address allocation](../managed-instance/vnet-subnet-determine-size.md#determine-subnet-size). Changing maintenance window for existing managed instance also requires [temporary additional IP capacity](../managed-instance/vnet-subnet-determine-size.md#address-requirements-for-update-scenarios) as in scaling vCores scenario for corresponding service tier.
+Each new virtual cluster in the subnet requires additional IP addresses according to the [virtual cluster IP address allocation](../managed-instance/vnet-subnet-determine-size.md#determine-subnet-size). Changing the maintenance window for an existing managed instance also requires [temporary additional IP capacity](../managed-instance/vnet-subnet-determine-size.md#update-scenarios), as in the vCore scaling scenario for the corresponding service tier.
### IP address change Configuring and changing maintenance window causes change of the IP address of the instance, within the IP address range of the subnet.
azure-sql Move Resources Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/move-resources-across-regions.md
This article provides a general workflow for moving resources to a different reg
1. Create a target server for each source server. 1. Configure the firewall with the right exceptions by using [PowerShell](scripts/create-and-configure-database-powershell.md). 1. Configure the servers with the correct logins. If you're not the subscription administrator or SQL server administrator, work with the administrator to assign the permissions that you need. For more information, see [How to manage Azure SQL Database security after disaster recovery](active-geo-replication-security-configure.md).
-1. If your databases are encrypted with transparent data encryption and use your own encryption key in Azure Key Vault, ensure that the correct encryption material is provisioned in the target regions. For more information, see [Azure SQL transparent data encryption with customer-managed keys in Azure Key Vault](transparent-data-encryption-byok-overview.md).
+1. If your databases are encrypted with transparent data encryption (TDE) and use your own encryption key (BYOK, also known as a customer-managed key) in Azure Key Vault, ensure that the correct encryption material is provisioned in the target regions.
+ - The simplest way to do this is to add the encryption key from the existing key vault (the one being used as the TDE protector on the source server) to the target server, and then set that key as the TDE protector on the target server. A CLI sketch of these steps is shown after this list.
+ > [!NOTE]
+ > A server or managed instance in one region can now be connected to a key vault in any other region.
+ - As a best practice to ensure the target server has access to older encryption keys (required for restoring database backups), run the [Get-AzSqlServerKeyVaultKey](/powershell/module/az.sql/get-azsqlserverkeyvaultkey) cmdlet on the source server or [Get-AzSqlInstanceKeyVaultKey](/powershell/module/az.sql/get-azsqlinstancekeyvaultkey) cmdlet on the source managed instance to return the list of available keys and add those keys to the target server.
+ - For more information and best practices on configuring customer-managed TDE on the target server, see [Azure SQL transparent data encryption with customer-managed keys in Azure Key Vault](transparent-data-encryption-byok-overview.md).
+ - To move the key vault to the new region, see [Move an Azure key vault across regions](https://docs.microsoft.com/azure/key-vault/general/move-region).
1. If database-level audit is enabled, disable it and enable server-level auditing instead. After failover, database-level auditing will require the cross-region traffic, which isn't desired or possible after the move. 1. For server-level audits, ensure that: - The storage container, Log Analytics, or event hub with the existing audit logs is moved to the target region.
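For reference, the following is a minimal CLI sketch of the key-provisioning steps for a target logical server; the names are placeholders. The PowerShell cmdlets mentioned above achieve the same result, and managed instances have corresponding `az sql mi key` and `az sql mi tde-key` commands.

```bash
# Hypothetical Key Vault key identifier; replace with the key currently used
# as the TDE protector on the source server.
KEY_ID="https://contoso-vault.vault.azure.net/keys/contoso-tde-key/0123456789abcdef0123456789abcdef"

# Add the existing Key Vault key to the target server...
az sql server key create --resource-group target-rg --server target-server --kid "$KEY_ID"

# ...and set it as the TDE protector on the target server.
az sql server tde-key set --resource-group target-rg --server target-server \
  --server-key-type AzureKeyVault --kid "$KEY_ID"
```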
azure-sql Service Tiers Sql Database Vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-sql-database-vcore.md
Previously updated : 06/02/2021 Last updated : 07/14/2021 # vCore purchase model overview - Azure SQL Database
To enable M-series hardware for a subscription and region, a support request mus
### DC-series
-> [!NOTE]
-> DC-series is currently in **public preview**.
- - DC-series hardware uses Intel processors with Software Guard Extensions (Intel SGX) technology. - DC-series is required for [Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves), which is not supported with other hardware configurations. - DC-series is designed for workloads that process sensitive data and demand confidential query processing capabilities, provided by Always Encrypted with secure enclaves.
Approved support requests are typically fulfilled within 5 business days.
#### DC-series
-> [!NOTE]
-> DC-series is currently in **public preview**.
- DC-series is available in the following regions: Canada Central, Canada East, East US, North Europe, UK South, West Europe, West US.
-If you need DC-series in a currently unsupported region, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) following the instructions in [Request quota increases for Azure SQL Database and SQL Managed Instance](quota-increase-request.md).
+If you need DC-series in a currently unsupported region, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). On the **Basics** page, provide the following:
+
+1. For **Issue type**, select **Technical**.
+1. For **Service type**, select **SQL Database**.
+1. For **Problem type**, select **Security, Private and Compliance**.
+1. For **Problem subtype**, select **Always Encrypted**.
+ ## Next steps
azure-sql Long Term Backup Retention Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/long-term-backup-retention-configure.md
Previously updated : 02/25/2021 Last updated : 07/13/2021
-# Manage Azure SQL Managed Instance long-term backup retention (PowerShell)
+# Manage Azure SQL Managed Instance long-term backup retention
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)] In Azure SQL Managed Instance, you can configure a [long-term backup retention](../database/long-term-retention-overview.md) policy (LTR) as a public preview feature. This allows you to automatically retain database backups in separate Azure Blob storage containers for up to 10 years. You can then recover a database using these backups with PowerShell.
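In addition to the portal and PowerShell, recent Azure CLI versions also expose LTR policy management for managed instance databases. The following is a hedged sketch assuming the `az sql midb ltr-policy` command group; parameter names may differ by CLI version, so confirm with `az sql midb ltr-policy set --help`.

```bash
# Sketch: keep weekly backups for 4 weeks, monthly backups for 12 months, and
# the week-16 backup of each year for 5 years. Names are placeholders.
az sql midb ltr-policy set \
  --resource-group contoso-rg \
  --managed-instance contoso-mi \
  --name contoso-db \
  --weekly-retention "P4W" \
  --monthly-retention "P12M" \
  --yearly-retention "P5Y" \
  --week-of-year 16
```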
azure-sql Vnet Subnet Determine Size https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/vnet-subnet-determine-size.md
Last updated 06/14/2021
-# Determine required subnet size & range for Azure SQL Managed Instance
+# Determine required subnet size and range for Azure SQL Managed Instance
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-Azure SQL Managed Instance must be deployed within an Azure [virtual network (VNet)](../../virtual-network/virtual-networks-overview.md).
+Azure SQL Managed Instance must be deployed within an Azure [virtual network](../../virtual-network/virtual-networks-overview.md). The number of managed instances that can be deployed in the subnet of a virtual network depends on the size of the subnet (subnet range).
-The number of managed instances that can be deployed in the subnet of a VNet depends on the size of the subnet (subnet range).
+When you create a managed instance, Azure allocates a number of virtual machines that depends on the tier you selected during provisioning. Because these virtual machines are associated with your subnet, they require IP addresses. To ensure high availability during regular operations and service maintenance, Azure might allocate more virtual machines. The number of required IP addresses in a subnet then becomes larger than the number of managed instances in that subnet.
-When you create a managed instance, Azure allocates a number of virtual machines depending on the tier you selected during provisioning. Because these virtual machines are associated with your subnet, they require IP addresses. To ensure high availability during regular operations and service maintenance, Azure may allocate additional virtual machines. As a result, the number of required IP addresses in a subnet is larger than the number of managed instances in that subnet.
+By design, a managed instance needs a minimum of 32 IP addresses in a subnet. As a result, you can use a minimum subnet mask of /27 when defining your subnet IP ranges. We recommend careful planning of subnet size for your managed instance deployments. Consider the following inputs during planning:
-By design, a managed instance needs a minimum of 32 IP addresses in a subnet. As a result, you can use minimum subnet mask of /27 when defining your subnet IP ranges. Careful planning of subnet size for your managed instance deployments is recommended. Inputs that should be taken into consideration during planning are:
--- Number of managed instances including following instance parameters:
- - service tier
- - hardware generation
- - number of vCores
- - [maintenance window](../database/maintenance-window.md)
-- Plans to scale up/down or change service tier
+- Number of managed instances, including the following instance parameters:
+ - Service tier
+ - Hardware generation
+ - Number of vCores
+ - [Maintenance window](../database/maintenance-window.md)
+- Plans to scale up/down or change the service tier
> [!IMPORTANT]
-> A subnet size with 16 IP addresses (subnet mask /28) will allow deploying managed instance inside it, but it should be used only for deploying single instance used for evaluation or in dev/test scenarios, in which scaling operations will not be performed.
+> A subnet size of 16 IP addresses (subnet mask /28) allows the deployment of a single managed instance inside it. It should be used only for evaluation or for dev/test scenarios where scaling operations won't be performed.
## Determine subnet size
-Size your subnet according to the future instance deployment and scaling needs. Following parameters can help you in forming a calculation:
+Size your subnet according to your future needs for instance deployment and scaling. The following parameters can help you in forming a calculation:
-- Azure uses five IP addresses in the subnet for its own needs-- Each virtual cluster allocates additional number of addresses -- Each managed instance uses number of addresses that depends on pricing tier and hardware generation-- Each scaling request temporarily allocates additional number of addresses
+- Azure uses five IP addresses in the subnet for its own needs.
+- Each virtual cluster allocates an additional number of addresses.
+- Each managed instance uses a number of addresses that depends on pricing tier and hardware generation.
+- Each scaling request temporarily allocates an additional number of addresses.
> [!IMPORTANT]
-> It is not possible to change the subnet address range if any resource exists in the subnet. It is also not possible to move managed instances from one subnet to another. Whenever possible, please consider using bigger subnets rather than smaller to prevent issues in the future.
+> It's not possible to change the subnet address range if any resource exists in the subnet. It's also not possible to move managed instances from one subnet to another. Consider using bigger subnets rather than smaller ones to prevent issues in the future.
GP = general purpose; BC = business critical; VC = virtual cluster
-| **Hardware gen** | **Pricing tier** | **Azure usage** | **VC usage** | **Instance usage** | **Total*** |
+| **Hardware generation** | **Pricing tier** | **Azure usage** | **VC usage** | **Instance usage** | **Total** |
| | | | | | |
| Gen4 | GP | 5 | 1 | 5 | 11 |
| Gen4 | BC | 5 | 1 | 5 | 11 |
| Gen5 | GP | 5 | 6 | 3 | 14 |
| Gen5 | BC | 5 | 6 | 5 | 16 |
- \* Column total displays number of addresses that would be taken when one instance is deployed in subnet. Each additional instance in subnet adds number of addresses represented with instance usage column. Addresses represented with Azure usage column are shared across multiple virtual clusters while addresses represented with VC usage column are shared across instances placed in that virtual cluster.
+In the preceding table:
+
+- The **Total** column displays the total number of addresses that are used by a single deployed instance to the subnet.
+- When you add more instances to the subnet, the number of addresses used by the instance increases. The total number of addresses then also increases. For example, adding another Gen4 GP managed instance would increase the **Instance usage** value to 10 and would increase the **Total** value of used addresses to 16.
+- Addresses represented in the **Azure usage** column are shared across multiple virtual clusters.
+- Addresses represented in the **VC usage** column are shared across instances placed in that virtual cluster.
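To see how these columns add up for a concrete deployment, here's a minimal Python sketch based only on the values in the preceding table. The helper name and the single-virtual-cluster assumption are illustrative, not official tooling.

```python
# Address usage per (hardware generation, pricing tier), taken from the preceding table:
# (Azure usage, VC usage, instance usage)
ADDRESS_USAGE = {
    ("Gen4", "GP"): (5, 1, 5),
    ("Gen4", "BC"): (5, 1, 5),
    ("Gen5", "GP"): (5, 6, 3),
    ("Gen5", "BC"): (5, 6, 5),
}

def used_addresses(hardware: str, tier: str, instance_count: int) -> int:
    """Estimate addresses used by instances of one generation/tier that share a single virtual cluster."""
    azure, vc, per_instance = ADDRESS_USAGE[(hardware, tier)]
    return azure + vc + per_instance * instance_count

print(used_addresses("Gen4", "GP", 1))  # 11, matching the Total column
print(used_addresses("Gen4", "GP", 2))  # 16, matching the example above
```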
-Additional input for consideration when determining subnet size (especially when multiple instances will be deployed inside the same subnet) is [maintenance window feature](../database/maintenance-window.md). Specifying maintenance window for managed instance during its creation or afterwards means that it must be placed in virtual cluster with corresponding maintenance window. If there is no such virtual cluster in the subnet, a new one must be created first to accommodate the instance.
+Also consider the [maintenance window feature](../database/maintenance-window.md) when you're determining the subnet size, especially when multiple instances will be deployed inside the same subnet. Specifying a maintenance window for a managed instance during its creation or afterward means that it must be placed in a virtual cluster with the corresponding maintenance window. If there is no such virtual cluster in the subnet, a new one must be created first to accommodate the instance.
-Update operation typically requires virtual cluster resize (for more details check [management operations article](management-operations-overview.md)). When new create or update request comes, managed instance service communicates with compute platform with a request for new nodes that need to be added. Based on the compute response, deployment system either expands existing virtual cluster or creates a new one. Even if in most cases operation will be completed within same virtual cluster, there is no guarantee from the compute side that new one will not be spawned. This will increase number of IP addresses required for performing create or update operation and also reserve additional IP addresses in the subnet for newly created virtual cluster.
+An update operation typically requires [resizing the virtual cluster](management-operations-overview.md). When a new create or update request comes in, the SQL Managed Instance service communicates with the compute platform with a request for the new nodes that need to be added. Based on the compute response, the deployment system either expands the existing virtual cluster or creates a new one. Even though in most cases the operation is completed within the same virtual cluster, a new one might be created on the compute side.
-### Address requirements for update scenarios
-During scaling operation instances temporarily require additional IP capacity that depends on pricing tier and hardware generation
+## Update scenarios
-| **Hardware gen** | **Pricing tier** | **Scenario** | **Additional addresses*** |
+During a scaling operation, instances temporarily require additional IP capacity that depends on pricing tier and hardware generation:
+
+| **Hardware generation** | **Pricing tier** | **Scenario** | **Additional addresses** |
| | | | |
-| Gen4 | GP or BC | Scaling vCores | 5 |
-| Gen4 | GP or BC | Scaling storage | 5 |
+| Gen4<sup>1</sup> | GP or BC | Scaling vCores | 5 |
+| Gen4<sup>1</sup> | GP or BC | Scaling storage | 5 |
| Gen4 | GP or BC | Switching from GP to BC or BC to GP | 5 |
-| Gen4 | GP | Switching to Gen5* | 9 |
-| Gen4 | BC | Switching to Gen5* | 11 |
+| Gen4 | GP | Switching to Gen5 | 9 |
+| Gen4 | BC | Switching to Gen5 | 11 |
| Gen5 | GP | Scaling vCores | 3 |
| Gen5 | GP | Scaling storage | 0 |
| Gen5 | GP | Switching to BC | 5 |
During scaling operation instances temporarily require additional IP capacity th
| Gen5 | BC | Scaling storage | 5 |
| Gen5 | BC | Switching to GP | 3 |
- \* Gen4 hardware is being phased out and is no longer available for new deployments. Update hardware generation from Gen4 to Gen5 to take advantage of the capabilities specific to Gen5 hardware generation.
+<sup>1</sup> Gen4 hardware is being phased out and is no longer available for new deployments. Update the hardware generation from Gen4 to Gen5 to take advantage of the capabilities specific to Gen5.
-## Recommended subnet calculator
+## Calculate the number of IP addresses
-Taking into the account potential creation of new virtual cluster during subsequent create request or instance update, and maintenance window requirement of virtual cluster per window, recommended formula for calculating total number of IP addresses required is:
+We recommend the following formula for calculating the total number of IP addresses. This formula takes into account the potential creation of a new virtual cluster during a later create request or instance update. It also takes into account the maintenance window requirements of virtual clusters.
-**Formula: 5 + a * 12 + b * 16 + c * 16**
+**Formula: 5 + (a * 12) + (b * 16) + (c * 16)**
- a = number of GP instances
- b = number of BC instances
Explanation:
- 16 addresses as a backup = scenario where new virtual cluster is created
Example:
-- You plan to have three general purpose and two business critical managed instances deployed in the same subnet. All instances will have same maintenance window configured. That means you need 5 + 3 * 12 + 2 * 16 + 1 * 16 = 85 IP addresses. As IP ranges are defined in power of 2, your subnet requires minimum IP range of 128 (2^7) for this deployment. Therefore, you need to reserve the subnet with subnet mask of /25.
+- You plan to have three general-purpose and two business-critical managed instances deployed in the same subnet. All instances will have the same maintenance window configured. That means you need 5 + (3 * 12) + (2 * 16) + (1 * 16) = 89 IP addresses.
+
+ Because IP ranges are defined in powers of 2, your subnet requires a minimum IP range of 128 (2^7) for this deployment. You need to reserve the subnet with a subnet mask of /25.
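As a quick sanity check of the formula and the power-of-two rounding in the example, the following Python sketch is one way to automate the calculation. It assumes that c counts the distinct maintenance window configurations, as the example implies, and the function name is illustrative.

```python
import math

def plan_subnet(gp_instances: int, bc_instances: int, maintenance_window_configs: int):
    """Apply the formula 5 + (a * 12) + (b * 16) + (c * 16) and round up to a power-of-two subnet size."""
    required = 5 + gp_instances * 12 + bc_instances * 16 + maintenance_window_configs * 16
    host_bits = max(5, math.ceil(math.log2(required)))  # /27 (32 addresses) is the recommended minimum
    return required, 2 ** host_bits, f"/{32 - host_bits}"

print(plan_subnet(3, 2, 1))  # (89, 128, '/25'), matching the example above
```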
> [!NOTE]
-> Even though it is possible to deploy managed instances in the subnet with number of IP addresses less than the subnet calculator output, always consider using bigger subnets rather than smaller to avoid issue with lack of IP addresses in the future, including unability to create new instances in the subnet or scale existing ones.
+> Though it's possible to deploy managed instances to a subnet with a number of IP addresses that's less than the output of the subnet formula, always consider using bigger subnets instead. Using a bigger subnet can help avoid future issues stemming from a lack of IP addresses, such as the inability to create additional instances within the subnet or scale existing instances.
## Next steps
- For an overview, see [What is Azure SQL Managed Instance?](sql-managed-instance-paas-overview.md).
- Learn more about [connectivity architecture for SQL Managed Instance](connectivity-architecture-overview.md).
-- See how to [create a VNet where you will deploy SQL Managed Instance](virtual-network-subnet-create-arm-template.md).
+- See how to [create a virtual network where you'll deploy SQL Managed Instance](virtual-network-subnet-create-arm-template.md).
- For DNS issues, see [Configure a custom DNS](custom-dns-configure.md).
azure-video-analyzer Computer Vision For Spatial Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/computer-vision-for-spatial-analysis.md
You can examine the Video Analyzer video resource that was created by the live p
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/record-stream-inference-data-with-video/bounding-box.png" alt-text="Bounding box icon":::
-> [!NOTE]
-> Because the source of the video was a container simulating a camera feed, the time stamps in the video are related to when you activated the live pipeline and when you deactivated it.
## Troubleshooting
azure-video-analyzer Detect Motion Record Video Clips Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/detect-motion-record-video-clips-cloud.md
This article walks you through the steps to use Azure Video Analyzer edge module
* An Azure account that includes an active subscription. [Create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) for free if you don't already have one.
- > [!NOTE]
- > You will need an Azure subscription where you have access to both [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role, and [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role. If you do not have the right permissions, please reach out to your account administrator to grant you those permissions.
+ [!INCLUDE [azure-subscription-permissions](./includes/common-includes/azure-subscription-permissions.md)]
* [Visual Studio Code](https://code.visualstudio.com/), with the following extensions: * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools)
You can examine the Video Analyzer video resource that was created by the live p
<!--TODO: add image -- ![Video playback]() TODO: new screenshot is needed here -->
-> [!NOTE]
-> Because the source of the video was a container simulating a camera feed, the time stamps in the video are related to when you activated the live pipeline and when you deactivated it.
-
## Clean up resources
azure-video-analyzer Get Started Detect Motion Emit Events Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/get-started-detect-motion-emit-events-portal.md
After you complete the setup steps, you'll be able to run the simulated live vid
* [Deploy to an IoT Edge for Linux on Windows](deploy-iot-edge-linux-on-windows.md) * [Visual Studio Code](https://code.visualstudio.com/), with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) extension.
-> [!TIP]
-> You might be prompted to install Docker while you're installing the Azure IoT Tools extension. Feel free to ignore the prompt.
## Prepare your IoT Edge device The Azure Video Analyzer module should be configured to run on the IoT Edge device with a non-privileged local user account. The module needs certain local folders for storing application configuration data. The RTSP camera simulator module needs video files with which it can synthesize a live video feed.
azure-video-analyzer Get Started Detect Motion Emit Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/get-started-detect-motion-emit-events.md
After completing the setup steps, you'll be able to run the simulated live video
* An Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) if you don't already have one.
- > [!NOTE]
- > You will need an Azure subscription where you have access to both [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role, and [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role. If you do not have the right permissions, please reach out to your account administrator to grant you those permissions.
+ [!INCLUDE [azure-subscription-permissions](./includes/common-includes/azure-subscription-permissions.md)]
* [Visual Studio Code](https://code.visualstudio.com/), with the following extensions: * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools)
-> [!TIP]
-> You might be prompted to install Docker while you're installing the Azure IoT Tools extension. Feel free to ignore the prompt.
## Set up Azure resources
azure-video-analyzer Record Event Based Live Video https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/record-event-based-live-video.md
Read these articles before you begin:
Prerequisites for this tutorial are: * An Azure account that includes an active subscription. [Create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) for free if you don't already have one.
- > [!NOTE]
- > You will need an Azure subscription where you have access to both [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role, and [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role. If you do not have the right permissions, please reach out to your account administrator to grant you those permissions.
+ [!INCLUDE [azure-subscription-permissions](./includes/common-includes/azure-subscription-permissions.md)]
* [Install Docker](https://docs.docker.com/desktop/#download-and-install) on your machine. * [Visual Studio Code](https://code.visualstudio.com/), with the following extensions: * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools)
You can examine the Video Analyzer video resource that was created by the live p
<!--TODO: add image -- ![Video playback]() TODO: new screenshot is needed here -->
-> [!NOTE]
-> Because the source of the video was a container simulating a camera feed, the time stamps in the video are related to when you activated the live pipeline and when you deactivated it.
->
## Clean up resources
-If you intend to try the other tutorials, hold on to the resources you created. Otherwise, go to the Azure portal, browse to your resource groups, select the resource group under which you ran this tutorial, and delete the resource group.
## Next steps
azure-video-analyzer Record Stream Inference Data With Video https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/record-stream-inference-data-with-video.md
You can examine the Video Analyzer video resource that was created by the live p
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/record-stream-inference-data-with-video/video-playback.png" alt-text="Screenshot of video playback":::
-> [!NOTE]
-> Because the source of the video was a container simulating a camera feed, the time stamps in the video are related to when you activated the live pipeline and when you deactivated it.
## Clean up resources
-If you intend to try the other tutorials, hold on to the resources you created. Otherwise, go to the Azure portal, browse to your resource groups, select the resource group under which you ran this tutorial, and delete the resource group.
## Next steps
azure-video-analyzer Use Continuous Video Recording https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/use-continuous-video-recording.md
You can examine the Video Analyzer video resource that was created by the live p
1. Select the video. 1. The video details page will open and the playback should start automatically.
-> [!NOTE]
-> Because the source of the video was a container simulating a camera feed, the time stamps in the video are related to when you activated the live pipeline and when you deactivated it.
## Clean up resources
-If you intend to try the other tutorials, hold on to the resources you created. Otherwise, go to the Azure portal, browse to your resource groups, select the resource group under which you ran this tutorial, and delete the resource group.
## Next steps
azure-video-analyzer Use Intel Grpc Video Analytics Serving Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/use-intel-grpc-video-analytics-serving-tutorial.md
This tutorial shows you how to use the Intel OpenVINO™ DL Streamer – Edge AI
This tutorial uses an Azure VM as a simulated IoT Edge device, and it uses a simulated live video stream. It's based on sample code written in C#, and it builds on the [Detect motion and emit events](detect-motion-emit-events-quickstart.md) quickstart.
-> [!NOTE]
-> This tutorial requires the use of an x86-64 machine as your Edge device.
## Prerequisites
azure-video-analyzer Use Intel Openvino Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/use-intel-openvino-tutorial.md
This tutorial shows you how to use the [OpenVINO™ Model Server – AI Extensio
This tutorial uses an Azure VM as an IoT Edge device, and it uses a simulated live video stream. It's based on sample code written in C#.
-> [!NOTE]
-> This tutorial requires the use of an x86-64 machine as your Edge device.
## Prerequisites
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix.md
The following table describes the features of Recovery Services vaults:
**Move vaults** | You can [move vaults](./backup-azure-move-recovery-services-vault.md) across subscriptions or between resource groups in the same subscription. However, moving vaults across regions isn't supported. **Move data between vaults** | Moving backed-up data between vaults isn't supported. **Modify vault storage type** | You can modify the storage replication type (either geo-redundant storage or locally redundant storage) for a vault before backups are stored. After backups begin in the vault, the replication type can't be modified.
-**Zone-redundant storage (ZRS)** | Available in the UK South, South East Asia, Australia East, North Europe and Central US.
+**Zone-redundant storage (ZRS)** | Available in the UK South, South East Asia, Australia East, North Europe, Central US and Japan East.
**Private Endpoints** | See [this section](./private-endpoints.md#before-you-start) for requirements to create private endpoints for a recovery service vault. ## On-premises backup support
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-faq.md
Previously updated : 07/12/2021 Last updated : 07/13/2021 # Azure Bastion FAQ
Azure Bastion needs to be able to communicate with certain internal endpoints to
* core.windows.net * azure.com
-Note that if you are using a Private endpoint integrated Azure Private DNS Zone, the [recommended DNS zone name](https://docs.microsoft.com/azure/private-link/private-endpoint-dns#azure-services-dns-zone-configuration) for several Azure services overlap with the names listed above. The use of Azure Bastion is *not* supported with these setups.
+Note that if you're using a private endpoint-integrated Azure Private DNS zone, the [recommended DNS zone names](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for several Azure services overlap with the names listed above. The use of Azure Bastion is *not* supported with these setups.
The use of Azure Bastion is also not supported with Azure Private DNS Zones in national clouds. - ### <a name="rdpssh"></a>Do I need an RDP or SSH client? No. You don't need an RDP or SSH client to get RDP/SSH access to your Azure virtual machine in the Azure portal. Use the [Azure portal](https://portal.azure.com) to get RDP/SSH access to your virtual machine directly in the browser.
No. You don't need an RDP or SSH client to access the RDP/SSH to your Azure virt
No. You don't need to install an agent or any software on your browser or your Azure virtual machine. The Bastion service is agentless and doesn't require any additional software for RDP/SSH.
-### <a name="limits"></a>How many concurrent RDP and SSH sessions does each Azure Bastion support?
-
-Both RDP and SSH are a usage-based protocol. High usage of sessions will cause the bastion host to support a lower total number of sessions. The numbers below assume normal day-to-day workflows.
-- ### <a name="rdpfeaturesupport"></a>What features are supported in an RDP session? At this time, only text copy/paste is supported. Features, such as file copy, are not supported. Feel free to share your feedback about new features on the [Azure Bastion Feedback page](https://feedback.azure.com/forums/217313-networking?category_id=367303).
This feature doesn't work with AADJ VM extension-joined machines using Azure AD
The browser must support HTML 5. Use the Microsoft Edge browser or Google Chrome on Windows. For Apple Mac, use the Google Chrome browser. Microsoft Edge Chromium is also supported on both Windows and Mac.
+### <a name="pricingpage"></a>What is the pricing?
+
+For more information, see the [pricing page](https://aka.ms/BastionHostPricing).
+ ### <a name="data"></a>Where does Azure Bastion store customer data? Azure Bastion doesn't move or store customer data out of the region it is deployed in.
In order to make a connection, the following roles are required:
* Reader role on the Azure Bastion resource. * Reader Role on the Virtual Network (Not needed if there is no peered virtual network).
-### <a name="pricingpage"></a>What is the pricing?
-
-For more information, see the [pricing page](https://aka.ms/BastionHostPricing).
- ### <a name="rdscal"></a>Does Azure Bastion require an RDS CAL for administrative purposes on Azure-hosted VMs? No, access to Windows Server VMs by Azure Bastion does not require an [RDS CAL](https://www.microsoft.com/p/windows-server-remote-desktop-services-cal/dg7gmgf0dvsv?activetab=pivot:overviewtab) when used solely for administrative purposes.
No. UDR is not supported on an Azure Bastion subnet.
For scenarios that include both Azure Bastion and Azure Firewall/Network Virtual Appliance (NVA) in the same virtual network, you don't need to force traffic from an Azure Bastion subnet to Azure Firewall because the communication between Azure Bastion and your VMs is private. For more information, see [Accessing VMs behind Azure Firewall with Bastion](https://azure.microsoft.com/blog/accessing-virtual-machines-behind-azure-firewall-with-azure-bastion/).
+### <a name="upgradesku"></a> Can I upgrade from a Basic SKU to a Standard SKU?
+
+Yes. For steps, see [Upgrade a SKU](upgrade-sku.md). For more information about SKUs, see the [Configuration settings](configuration-settings.md#skus) article.
+
+### <a name="downgradesku"></a> Can I downgrade from a Standard SKU to a Basic SKU?
+
+No. Downgrading from a Standard SKU to a Basic SKU is not supported. For more information about SKUs, see the [Configuration settings](configuration-settings.md#skus) article.
+ ### <a name="subnet"></a> Can I deploy multiple Azure resources in my Azure Bastion subnet? No. The Azure Bastion subnet (*AzureBastionSubnet*) is reserved only for the deployment of your Azure Bastion resource.
bastion Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/configuration-settings.md
Previously updated : 07/12/2021 Last updated : 07/13/2021
During Preview, you must use the Azure portal if you want to specify the Standar
### <a name="upgradesku"></a>Upgrade a SKU
-Azure Bastion supports upgrading from a Basic to a Standard SKU. However, downgrading from Standard to Basic is not supported. To downgrade, you must delete and recreate Azure Bastion. The Standard SKU is in Preview.
+Azure Bastion supports upgrading from a Basic to a Standard SKU. The Standard SKU is in Preview.
+
+> [!NOTE]
+> Downgrading from a Standard SKU to a Basic SKU is not supported. To downgrade, you must delete and recreate Azure Bastion.
+>
#### Configuration methods
bastion Configure Host Scaling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/configure-host-scaling.md
Previously updated : 07/12/2021 Last updated : 07/13/2021 # Customer intent: As someone with a networking background, I want to configure host scaling.
This article helps you add additional scale units (instances) to Azure Bastion i
## Configuration steps + 1. In the Azure portal, navigate to your Bastion host. 1. Host scaling instance count requires Standard tier. On the **Configuration** page, for **Tier**, verify the tier is **Standard**. If the tier is Basic, select **Standard** from the dropdown.
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/quickstart-host-portal.md
Previously updated : 07/12/2021 Last updated : 07/13/2021 # Customer intent: As someone with a networking background, I want to connect to a virtual machine securely via RDP/SSH using a private IP address through my browser.
You can use the following example values when creating this configuration, or yo
There are a few different ways to configure a bastion host. In the following steps, you'll create a bastion host in the Azure portal directly from your VM. When you create a host from a VM, various settings are automatically populated to correspond to your virtual machine and/or virtual network.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Sign in to the Azure portal.
1. Navigate to the VM that you want to connect to, then select **Connect**. :::image type="content" source="./media/quickstart-host-portal/vm-connect.png" alt-text="Screenshot of virtual machine settings." lightbox="./media/quickstart-host-portal/vm-connect.png":::
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/tutorial-create-host-portal.md
Previously updated : 07/12/2021 Last updated : 07/13/2021
You can use the following example values when creating this configuration, or yo
## Sign in to the Azure portal
-Sign in to the [Azure portal](https://portal.azure.com).
+
+Sign in to the Azure portal.
## <a name="createhost"></a>Create a bastion host
bastion Upgrade Sku https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/upgrade-sku.md
Previously updated : 07/12/2021 Last updated : 07/13/2021 # Customer intent: As someone with a networking background, I want to upgrade to the Standard SKU.
This article helps you upgrade from the Basic Tier (SKU) to Standard. Once you u
## Configuration steps + 1. In the Azure portal, navigate to your Bastion host. 1. On the **Configuration** page, for **Tier**, select **Standard** from the dropdown.
bastion Vnet Peering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/vnet-peering.md
Previously updated : 06/22/2021 Last updated : 07/13/2021
This figure shows the architecture of an Azure Bastion deployment in a hub-and-s
3. To see Bastion in the **Connect** dropdown menu, you must select the subscriptions you have access to in **Subscription > global subscription**. 4. Select the virtual machine to connect to. 5. Azure Bastion is seamlessly detected across the peered VNet.
-6. With a single click, the RDP/SSH session opens in the browser. For RDP and SSH concurrent session limits, see [RDP and SSH sessions](bastion-faq.md#limits).
+6. With a single click, the RDP/SSH session opens in the browser.
:::image type="content" source="../../includes/media/bastion-vm-rdp/connect-vm.png" alt-text="Connect":::
certification How To Edit Published Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/how-to-edit-published-device.md
Previously updated : 06/30/2021 Last updated : 07/13/2021
On the project summary, you should notice that your project is in read-only mode
1. Acknowledge the notification on the page that you will be required to submit your product for review after editing. > [!NOTE] > By confirming this edit, you are **not** removing your device from the Azure Certified Device catalog if it has already been published. Your previous version of the product will remain on the catalog until you have republished your device.
+ > You will also not have to repeat the Connect & test section of the portal.
1. After acknowledging this warning, you can edit your device details. Make sure to leave a note in the `Comments for Reviewer` section of `Device Details` describing what has been changed.
cognitive-services Luis How To Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-how-to-azure-subscription.md
Previously updated : 05/28/2021 Last updated : 07/12/2021
To create LUIS resources, you can use the LUIS portal, [Azure portal](https://ms
[!INCLUDE [Create LUIS Prediction resource in LUIS portal](./includes/add-prediction-resource-portal.md)]
-# [Azure CLI](#tab/cli)
+# [Without LUIS portal](#tab/without-portal)
-### Create LUIS resources in the Azure CLI
+### Create LUIS resources without using the LUIS portal
Use the [Azure CLI](/cli/azure/install-azure-cli) to create each resource individually.
The following procedure assigns a resource to a specific app.
1. On the **Prediction resource** or **Authoring resource** tab, select the **Add prediction resource** or **Add authoring resource** button. 1. Use the fields in the form to find the correct resource, and then select **Save**.
-# [Azure CLI](#tab/cli)
+# [Without LUIS portal](#tab/without-portal)
-## Assign prediction resource programmatically
+## Assign prediction resource without using the LUIS portal
For automated processes like CI/CD pipelines, you can automate the assignment of a LUIS resource to a LUIS app with the following steps:
When you unassign a resource, it's not deleted from Azure. It's only unlinked fr
1. Go to **Manage** > **Azure Resources**. 1. Select the **Unassign resource** button for the resource.
-# [Azure CLI](#tab/cli)
+# [Without LUIS portal](#tab/without-portal)
-## Unassign prediction resource programmatically
+## Unassign prediction resource without using the LUIS portal
1. Get an [Azure Resource Manager token](https://resources.azure.com/api/token?plaintext=true) which is an alphanumeric string of characters. This token does expire, so use it right away. You can also use the following Azure CLI command.
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You ca
Consider re-recording any utterances with low pronunciation scores or poor signal-to-noise ratios. If you can't re-record, consider excluding those utterances from your dataset.
-On the **Data details**, you can check the data details of the training set. In case of some typical issues with the data, follow the instructions in the message displayed to fix them before training.
+On the **Data details**, you can check the data details of the training set. If there are any typical issues with the data, follow the instructions in the message displayed to fix them before training.
The issues are divided into three types. Refer to the following three tables to check the respective types of errors.
-The first type of errors listed in the table below must be fixed manually, otherwise the data with these errors will be excluded during training.
-
-| Category | Name | Description | Suggestion |
-| | -- | -- | |
-| Script | Invalid separator| These script lines don't have valid separator TAB:{}.| Use TAB to separate ID and content.|
-| Script | Invalid script ID| Script ID format is invalid.| Script line ID should be numeric.|
-| Script | Script content duplicated| Line {} script content is duplicated with line {}.| Script line content should be unique.|
-| Script | Script content too long| Script line content is longer than maximum 1000.| Script line content length should be less than 1000 characters.|
-| Script | Script has no matching audio| Script line ID doesn't have matching audio.| Script line ID should match audio ID.|
-| Script | No valid script| No valid script found in this dataset.| Fix the problematic script lines according to detailed issue list.|
-| Audio | Audio has no matching script| Audio file doesn't match script ID.| Wav file name should match ID in script file.|
-| Audio | Invalid audio format| Wav file has an invalid format and cannot be read.| Check wav file format by audio tool like sox.|
-| Audio | Low sampling rate| Audio sampling rate is lower than 16 KHz. | Wav file sampling rate should be equal to or higher than 16 KHz. |
-| Audio | Audio duration too long| Audio duration is longer than 30 seconds.| Split long duration audio to multiple files to make sure each is less than 15 seconds.|
-| Audio | No valid audio| No valid audio found in this dataset.| Fix the problematic audio according to detailed issue list.|
-
-The second type of errors listed in the table below will be automatically fixed, but double checking the fixed data is recommended.
-
-| Category | Name | Description | Suggestion |
-| | -- | -- | |
-| Audio | Stereo audio | Only one channel in stereo audio will be used for TTS model training.| Use mono in TTS recording or data preparation. This audio is converted into mono. Download normalized dataset and review.|
-| Volume | Volume peak out of range |Volume peak is not within range -3 dB (70% of max volume) to -6 dB (50%). It's auto adjusted to -4 dB (65%) now.| Control volume peak to proper range during recording or data preparation. This audio is linear scaled to fit the peak range. Download normalized dataset and review.|
-|Mismatch | Long silence detected before first word | Long silence detected before first word.| The start silence is trimmed to 200 ms. Download normalized dataset and review. |
-| Mismatch | Long silence detected after last word | Long silence detected after last word. | The end silence is trimmed to 200 ms. Download normalized dataset and review. |
-| Mismatch |Start silence too short | Start silence is shorter than 100 ms. | The start silence is extended to 100 ms. Download normalized dataset and review. |
-| Mismatch | End silence too short | End silence is shorter than 100 ms. | The end silence is extended to 100 ms. Download normalized dataset and review. |
+The first type of errors listed in the table below must be fixed manually. Otherwise, the data with these errors will be excluded during training.
+
+| Category | Name | Description |
+| | -- | |
+| Script | Invalid separator| You must separate the utterance ID and the script content with a TAB character.|
+| Script | Invalid script ID| Script line ID must be numeric.|
+| Script | Duplicated script|Each line of the script content must be unique. The line is duplicated with {}.|
+| Script | Script too long| The script must be less than 1,000 characters.|
+| Script | No matching audio| The ID of each utterance (each line of the script file) must match the audio ID.|
+| Script | No valid script| No valid script found in this dataset. Fix the script lines that appear in the detailed issue list.|
+| Audio | No matching script| No audio files match the script ID. The name of the wav files must match with the IDs in the script file.|
+| Audio | Invalid audio format| The audio format of the .wav files is invalid. Check the wav file format using an audio tool like [SoX](http://sox.sourceforge.net/).|
+| Audio | Low sampling rate| The sampling rate of the .wav files cannot be lower than 16 KHz.|
+| Audio | Too long audio| Audio duration is longer than 30 seconds. Split the long audio into multiple files. We suggest utterances should be shorter than 15 seconds.|
+| Audio | No valid audio| No valid audio is found in this dataset. Check your audio data and upload again.|
+
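If you want to catch the most common "must fix" issues before uploading, the following Python sketch runs a few of the script-side checks from the preceding table. It's a minimal sketch that assumes the individual-utterances layout (one `ID<TAB>content` line per utterance, with .wav files named after the IDs) and doesn't attempt the audio-format or duration checks.

```python
from pathlib import Path

def check_training_script(script_path: str, audio_dir: str) -> list[str]:
    """Report script issues that would otherwise exclude data during training."""
    issues, seen = [], set()
    wav_ids = {p.stem for p in Path(audio_dir).glob("*.wav")}
    lines = Path(script_path).read_text(encoding="utf-8").splitlines()
    for line_no, line in enumerate(lines, start=1):
        if "\t" not in line:
            issues.append(f"line {line_no}: no TAB separator between ID and content")
            continue
        utt_id, content = line.split("\t", 1)
        if not utt_id.isdigit():
            issues.append(f"line {line_no}: script ID '{utt_id}' is not numeric")
        if len(content) >= 1000:
            issues.append(f"line {line_no}: content is 1,000 characters or longer")
        if content in seen:
            issues.append(f"line {line_no}: duplicated script content")
        seen.add(content)
        if utt_id not in wav_ids:
            issues.append(f"line {line_no}: no matching audio file '{utt_id}.wav'")
    return issues
```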
+The second type of errors listed in the table below will be automatically fixed, but double checking the fixed data is recommended.
+
+| Category | Name | Description |
+| | -- | |
+| Audio | Stereo audio auto fixed | Use mono in your audio sample recordings. Stereo audio channels are automatically merged into a mono channel, which can cause content loss. Download the normalized dataset and review it.|
+| Volume | Volume peak auto fixed |The volume peak should be within the range of -3 dB (70% of max volume) to -6 dB (50%). Control the volume peak during the sample recording or data preparation. This audio is linearly scaled to fit the peak range automatically (-4 dB or 65%). Download the normalized dataset and review it.|
+|Mismatch | Silence auto fixed| The start silence is detected to be longer than 200 ms, and has been trimmed to 200 ms automatically. Download the normalized dataset and review it. |
+| Mismatch |Silence auto fixed | The end silence is detected to be longer than 200 ms, and has been trimmed to 200 ms automatically. Download the normalized dataset and review it. |
+| Mismatch |Silence auto fixed |The start silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it. |
+| Mismatch |Silence auto fixed | The end silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it.|
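The volume thresholds in the table above are expressed in dB relative to maximum volume. If it helps to see how those map to the quoted percentages, here's a small illustrative Python calculation (not part of the Speech tooling):

```python
def db_to_fraction(db: float) -> float:
    """Convert a level in dB relative to full scale into a linear amplitude fraction."""
    return 10 ** (db / 20)

print(round(db_to_fraction(-3), 2))  # ~0.71, roughly the 70% upper bound
print(round(db_to_fraction(-4), 2))  # ~0.63, close to the 65% auto-adjust target
print(round(db_to_fraction(-6), 2))  # ~0.50, the 50% lower bound
```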
The third type of errors listed in the table below won't cause the data to be excluded during training if they're left unfixed, but they will affect the quality of training. For higher-quality training, manually fixing these errors is recommended.
-| Category | Name | Description | Suggestion |
-| | -- | -- | |
-| Script | Contain digit 0-9| These script lines contain digit 0-9.| The script lines contain digit 0-9. Expand them to normalized words and match with audio. For example, '123' to 'one hundred and twenty three'.|
-| Script | Pronunciation confused word '{}' | Script contains pronunciation confused word: '{}'.| Expand word to its actual pronunciation. For example, {}.|
-| Script | Question utterances too few| Question script lines are less than 1/6 of total script lines.| Question script lines should be at least 1/6 of total lines for voice font properly expressing question tone.|
-| Script | Exclamation utterances too few| Exclamation script lines are less than 1/6 of total script lines.| Exclamation script lines should be at least 1/6 of total lines for voice font properly expressing Exclamation tone.|
-| Audio| Low sampling rate for neural voice | Audio sampling rate is lower than 24 KHz.| Wav file sampling rate should be equal or higher than 24 KHz for high-quality neural voice.|
-| Volume | Overall volume too low | Volume of {} samples is lower than -18 dB (10% of max volume).| Control volume average level to proper range during recording or data preparation.|
-| Volume | Volume truncation| Volume truncation is detected at {}s.| Adjust recording equipment to avoid volume truncation at its peak value.|
-| Volume | Start silence not clean | First 100 ms silence isn't clean. Detect volume larger than -40 dB (1% of max volume).| Reduce recording noise floor level and leave the starting 100 ms as silence.|
-| Volume| End silence not clean| Last 100 ms silence isn't clean. Detect volume larger than -40 dB (1% of max volume).| Reduce recording noise level and leave the end 100 ms as silence.|
-| Mismatch | Script audio mismatch detected| There's a mismatch between script and audio content. | Review script and audio content to make sure they match and control the noise floor level. Reduce the long silence length or split into multiple utterances.|
-| Mismatch | Extra audio energy detected before first word | Extra audio energy detected before first word. It may also be because of too short start silence before first word.| Review script and audio content to make sure they match and control the noise floor level. Also leave 100 ms silence before first word.|
-| Mismatch | Extra audio energy detected after last word| Extra audio energy detected after last word. It may also be because of too short silence after last word.| Review script and audio content to make sure they match and control the noise floor level. Also leave 100 ms silence after last word.|
-| Mismatch | Low signal-noise ratio | Audio SNR level is lower than {} dB.| Reduce audio noise level during recording or data preparation.|
-| Mismatch | Recognize speech content fail | Fail to do speech recognition on this audio.| Check audio and script content to make sure the audio is valid speech, and match with script.|
+| Category | Name | Description |
+| | -- | |
+| Script | Non-normalized text|This script contains digit 0-9. Expand them to normalized words and match with the audio. For example, normalize '123' to 'one hundred and twenty-three'.|
+| Script | Non-normalized text|This script contains symbols {}. Normalize the symbols to match the audio. For example, '50%' to 'fifty percent'.|
+| Script | Not enough question utterances| At least 10% of the total utterances should be question sentences. This helps the voice model properly express a questioning tone.|
+| Script |Not enough exclamation utterances| At least 10% of the total utterances should be exclamation sentences. This helps the voice model properly express an excited tone.|
+| Audio| Low sampling rate for neural voice | It's recommended that the sampling rate of your .wav files should be 24 KHz or higher for creating neural voices. It will be automatically upsampled to 24 KHz if it's lower.|
+| Volume |Overall volume too low|Volume shouldn't be lower than -18 dB (10% of max volume). Control the volume average level within proper range during the sample recording or data preparation.|
+| Volume | Volume overflow| Overflowing volume is detected at {}s. Adjust the recording equipment to avoid the volume overflow at its peak value.|
+| Volume | Start silence issue | The first 100 ms silence isn't clean. Reduce the recording noise floor level and leave the first 100 ms at the start silent.|
+| Volume| End silence issue| The last 100 ms silence isn't clean. Reduce the recording noise floor level and leave the last 100 ms at the end silent.|
+| Mismatch | Script and audio mismatch|Review the script and the audio content to make sure they match and control the noise floor level. Reduce the length of long silence or split the audio into multiple utterances if it's too long.|
+| Mismatch | Start silence issue |Extra audio was heard before the first word. Review the script and the audio content to make sure they match, control the noise floor level, and make the first 100 ms silent.|
+| Mismatch | End silence issue| Extra audio was heard after the last word. Review the script and the audio content to make sure they match, control the noise floor level, and make the last 100 ms silent.|
+| Mismatch | Low signal-noise ratio | Audio SNR level is lower than 20 dB. At least 35 dB is recommended.|
+| Mismatch | No score available |Failed to recognize speech content in this audio. Check the audio and the script content to make sure the audio is valid, and matches the script.|
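Several issues above involve non-normalized text, where digits or symbols in the script don't match what's actually spoken. As a rough illustration of the normalization step, this Python sketch uses the third-party `inflect` package (an assumption for this example; the Speech service doesn't require it) to expand digits into words:

```python
# pip install inflect
import re

import inflect

p = inflect.engine()

def normalize_numbers(text: str) -> str:
    """Replace each run of digits with its spelled-out form."""
    return re.sub(r"\d+", lambda m: p.number_to_words(int(m.group())), text)

print(normalize_numbers("Call me at extension 123"))
# Call me at extension one hundred and twenty-three
```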
## Train your custom neural voice model
We also provide an online tool, [Audio Content Creation](https://speech.microsof
- [How to record voice samples](record-custom-voice-samples.md) - [Text-to-Speech API reference](rest-text-to-speech.md)-- [Long Audio API](long-audio-api.md)
+- [Long Audio API](long-audio-api.md)
cognitive-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
This table lists data types and how each is used to create a custom text-to-spee
| Data type | Description | When to use | Additional processing required | | | -- | -- | | | **Individual utterances + matching transcript** | A collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt). | Professional recordings with matching transcripts | Ready for training. |
-| **Long audio + transcript (beta)** | A collection (.zip) of long, unsegmented audio files (longer than 20 seconds), paired with a transcript (.txt) that contains all spoken words. | You have audio files and matching transcripts, but they are not segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation where required. |
+| **Long audio + transcript (beta)** | A collection (.zip) of long, unsegmented audio files (longer than 20 seconds), paired with a collection (.zip) of transcripts that contains all spoken words. | You have audio files and matching transcripts, but they are not segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation where required. |
| **Audio only (beta)** | A collection (.zip) of audio files without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation where required.| Files should be grouped by type into a dataset and uploaded as a zip file. Each dataset can only contain a single data type.
cognitive-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
# Record voice samples to create a custom voice
-Creating a high-quality production custom neural voice from scratch is not a casual undertaking. The central component of a custom neural voice is a large collection of audio samples of human speech. It's vital that these audio recordings be of high quality. Choose a voice talent who has experience making these kinds of recordings, and have them recorded by a recording engineer using professional equipment.
+Creating a high-quality production custom neural voice from scratch isn't a casual undertaking. The central component of a custom neural voice is a large collection of audio samples of human speech. It's vital that these audio recordings be of high quality. Choose a voice talent who has experience making these kinds of recordings, and have them recorded by a recording engineer using professional equipment.
-Before you can make these recordings, though, you need a script: the words that will be spoken by your voice talent to create the audio samples. For best results, your script must have good phonetic coverage and sufficient variety to train the custom neural voice model.
+Before you can make these recordings, though, you need a script: the words that will be spoken by your voice talent to create the audio samples.
Many small but important details go into creating a professional voice recording. This guide is a roadmap for a process that will help you get good, consistent results. > [!NOTE] > To train a neural voice, you must specify a voice talent profile with an audio consent file in which the voice talent acknowledges the use of his/her speech data to train a custom neural voice model. When preparing your recording script, make sure you include the following sentence.
-> ΓÇ£I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice.ΓÇ¥
+> "I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice."
This sentence is used to verify that the training data was recorded by the same person who gave the consent. Read more about [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) here. > Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply for access here](https://aka.ms/customneural).
-> [!TIP]
-> For the highest quality results, consider engaging Microsoft to help develop your custom neural voice. Microsoft has extensive experience producing high-quality voices for its own products, including Cortana and Office.
- ## Voice recording roles There are four basic roles in a custom neural voice recording project:
Recording engineer |Oversees the technical aspects of the recording and operate
Director |Prepares the script and coaches the voice talent's performance.
Editor |Finalizes the audio files and prepares them for upload to Speech Studio.
-An individual may fill more than one role. This guide assumes that you will be primarily filling the director role and hiring both a voice talent and a recording engineer. If you want to make the recordings yourself, this article includes some information about the recording engineer role. The editor role isn't needed until after the session, so can be performed by the director or the recording engineer.
+An individual may fill more than one role. This guide assumes that you'll be primarily filling the director role and hiring both a voice talent and a recording engineer. If you want to make the recordings yourself, this article includes some information about the recording engineer role. The editor role isn't needed until after the session, so can be performed by the director or the recording engineer.
## Choose your voice talent
-Actors with experience in voiceover or voice character work make good custom neural voice talent. You can also often find suitable talent among announcers and newsreaders. Choose voice talent whose natural voice you like. It is possible to create unique "character" voices, but it's much harder for most talent to perform them consistently, and the effort can cause voice strain. The single most important factor for choosing voice talent is consistency. Your recordings should all sound like they were made on the same day in the same room. You can approach this ideal through good recording practices and engineering.
+Actors with experience in voiceover or voice character work make good custom neural voice talent. You can also often find suitable talent among announcers and newsreaders. Choose voice talent whose natural voice you like. It's possible to create unique "character" voices, but it's much harder for most talent to perform them consistently, and the effort can cause voice strain. The single most important factor for choosing voice talent is consistency. Your recordings for the same voice style should all sound like they were made on the same day in the same room. You can approach this ideal through good recording practices and engineering.
Your voice talent is the other half of the equation. They must be able to speak with consistent rate, volume level, pitch, and tone. Clear diction is a must. The talent also needs to be able to strictly control their pitch variation, emotional affect, and speech mannerisms. Recording voice samples can be more fatiguing than other kinds of voice work. Most voice talent can record for two or three hours a day. Limit sessions to three or four a week, with a day off in-between if possible.
-Work with your voice talent to develop a "persona" that defines the overall sound and emotional tone of the custom neural voice. In the process, you'll pinpoint what "neutral" sounds like for that persona. Using the Custom Neural Voice capability, you can train a model that speaks with emotions. Define the "speaking styles" and ask your voice talent to read the script in a way that resonate the styles you want.
+Work with your voice talent to develop a "persona" that defines the overall sound and emotional tone of the custom neural voice. In the process, you'll pinpoint what "neutral" sounds like for that persona. Using the Custom Neural Voice capability, you can train a model that speaks with emotions. Define the "speaking styles" and ask your voice talent to read the script in a way that resonates with the styles you want.
A persona might have, for example, a naturally upbeat personality. So "their" voice might carry a note of optimism even when they speak neutrally. However, such a personality trait should be subtle and consistent. Listen to readings by existing voices to get an idea of what you're aiming for.
The starting point of any custom neural voice recording session is the script, w
The utterances in your script can come from anywhere: fiction, non-fiction, transcripts of speeches, news reports, and anything else available in printed form. If you want to make sure your voice does well on specific kinds of words (such as medical terminology or programming jargon), you might want to include sentences from scholarly papers or technical documents. For a brief discussion of potential legal issues, see the ["Legalities"](#legalities) section. You can also write your own text.
-Your utterances don't need to come from the same source, or the same kind of source. They don't even need to have anything to do with each other. However, if you will use set phrases (for example, "You have successfully logged in") in your speech application, make sure to include them in your script. This will give your custom neural voice a better chance of pronouncing those phrases well. And if you should decide to use a recording in place of synthesized speech, you'll already have it in the same voice.
+Your utterances don't need to come from the same source, or the same kind of source. They don't even need to have anything to do with each other. However, if you'll use set phrases (for example, "You have successfully logged in") in your speech application, make sure to include them in your script. It will give your custom neural voice a better chance of pronouncing those phrases well.
+
+We recommend that the recording scripts include both general sentences and your domain-specific sentences. For example, if you plan to record 2,000 sentences, 1,000 of them could be general sentences and the other 1,000 could be sentences from your target domain or the use case of your application.
+
+We provide [sample scripts in the 'General', 'Chat' and 'Customer Service' domains for each language](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script) to help you prepare your recording scripts. You can use these Microsoft shared scripts for your recordings directly or use them as a reference to create your own. Building a custom neural voice requires at least 300 recorded sentences as training data.
+
+You can select your domain-specific scripts from the sentences that your custom voice will be used to read.
+
+### Script selection criteria
+
+Below are some general guidelines that you can follow to create a good corpus (recorded audio samples) for Custom Neural Voice training.
+
+- Balance your script to cover the different sentence types in your domain, including statements, questions, exclamations, long sentences, and short sentences.
+
+    In general, each sentence should contain 4 to 30 words, and your script must not contain any duplicate sentences.<br>
+    Statement sentences make up the major part of the script, about 70%-80% of the total.<br>
+    Question sentences should make up about 10%-20% of your domain script, with both rising and falling tones covered.<br>
+    If exclamations normally result in a different tone in your target language, consider devoting 10%-20% of your samples to exclamations.<br>
+    Short words and phrases should also make up about 10% of the total utterances, with 5 to 7 words per case. (A minimal script-check sketch follows this list.)
+
+ Best practices include:
+    - Balanced coverage of parts of speech, such as verbs, nouns, and adjectives.
+    - Balanced coverage of pronunciations. Include all letters from A to Z so the TTS engine learns how to pronounce each letter in your defined style.
+    - Readable, understandable, common-sense sentences that are natural for the speaker to read out.
+    - Avoid too many similar word or phrase patterns, such as "easy" and "easier".
+    - Include different formats of numbers (addresses, units, phone numbers, quantities, dates, and so on) in all sentence types.
+    - Include spelling sentences if your TTS voice will be used to read them. For example, "The spelling of Apple is A P P L E".
-While consistency is key in choosing voice talent, variety is the hallmark of a good script. Your script should include many different words and sentences with a variety of sentence lengths, structures, and moods. Every sound in the language should be represented multiple times and in numerous contexts (called *phonetic coverage*).
+- Don't put multiple sentences into one line/one utterance. Put each utterance on its own line.
-Furthermore, the text should incorporate all the ways that a particular sound can be represented in writing, and place each sound at varying places in the sentences. Both declarative sentences and questions should be included and read with appropriate intonation.
+- Make sure the sentences are mostly clean. In general, don't include too many non-standard words like numbers or abbreviations, as they're usually hard to read. Some applications may need to read many numbers or acronyms. In this case, you can include these words, but normalize them into their spoken form.
-It's difficult to write a script that provides *just enough* data to allow Speech Studio to build a good voice. In practice, the simplest way to make a script that achieves robust phonetic coverage is to include a large number of samples. The standard voices that Microsoft provides were built from tens of thousands of utterances. You should be prepared to record a few to several thousand utterances at minimum to build a production-quality custom neural voice.
+    Below are some examples of these best practices:
+    - For lines with abbreviations, instead of "BTW", you have "by the way".
+    - For lines with digits, instead of "911", you have "nine one one".
+    - For lines with acronyms, instead of "ABC", you have "A B C".
+    Make sure your voice talent pronounces these words in the expected way, and keep your script and recordings consistent throughout the training process.
-Check the script carefully for errors. If possible, have someone else check it too. When you run through the script with your talent, you'll probably catch a few more mistakes.
+- Your script should include many different words and sentences with a variety of sentence lengths, structures, and moods.
+
+- Check the script carefully for errors. If possible, have someone else check it too. When you run through the script with your talent, you'll probably catch a few more mistakes.
+
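+As a rough aid, the following sketch shows one way to sanity-check a plain-text script (one utterance per line) against the guidelines above: word counts of 4 to 30, no duplicate sentences, and an approximate breakdown of sentence types based on ending punctuation and length. The file name and the classification heuristic are illustrative assumptions, not part of Speech Studio.
+
+```python
+from collections import Counter
+
+def check_script(path):
+    """Rough checks of a one-utterance-per-line script against the guidelines above."""
+    with open(path, encoding="utf-8") as f:
+        sentences = [line.strip() for line in f if line.strip()]
+    counts, seen, issues = Counter(), set(), []
+    for i, s in enumerate(sentences, start=1):
+        n_words = len(s.split())
+        if not 4 <= n_words <= 30:
+            issues.append(f"line {i}: {n_words} words (expected 4 to 30)")
+        if s.lower() in seen:
+            issues.append(f"line {i}: duplicate sentence")
+        seen.add(s.lower())
+        # Very rough sentence-type classification by final punctuation and length.
+        if s.endswith("?"):
+            counts["question"] += 1
+        elif s.endswith("!"):
+            counts["exclamation"] += 1
+        elif n_words <= 7:
+            counts["short word/phrase"] += 1
+        else:
+            counts["statement"] += 1
+    for kind, n in counts.items():
+        print(f"{kind}: {n} ({100 * n / len(sentences):.0f}%)")
+    return issues
+
+print(check_script("recording-script.txt"))
+```
+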
+### Typical defects of a script
+
+Poor script quality can adversely affect the training results. To achieve high-quality training results, it's crucial to avoid these defects. (A minimal lint sketch for a few of them follows the table.)
+
+The script defects generally fall into the following categories:
+
+| Category | Example |
+| :--- | :--- |
+| Meaningless or nonsensical content. | |
+| Incomplete sentences. |- "This was my last eve" (no subject, no specific meaning) <br>- "He's obviously already funny" (no ending punctuation; not a complete sentence) |
+| Typos in the sentences. | - Starts with a lowercase letter<br>- Missing ending punctuation where needed<br>- Misspellings<br>- Lack of punctuation: no period at the end (except news titles)<br>- Ends with symbols, except a comma, question mark, or exclamation point<br>- Wrong format, such as:<br>&emsp;- 45$ (should be $45)<br>&emsp;- Missing or excess space between words and punctuation |
+|Duplication in similar format; one per each pattern is enough. |- "Now is 1pm in New York"<br>- "Now is 2pm in New York"<br>- "Now is 3pm in New York"<br>- "Now is 1pm in Seattle"<br>- "Now is 1pm in Washington D.C." |
+|Uncommon foreign words: only commonly used foreign words are acceptable in the script. | |
+|Emoji or any other uncommon symbols. | |
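+
+As a companion to the table above, the sketch below flags a few of the listed typo patterns: a lowercase start, missing ending punctuation, a misplaced currency symbol, and spacing issues. The regular expressions are simplified assumptions for illustration, not an exhaustive or official validation.
+
+```python
+import re
+
+def lint_utterance(text):
+    """Flag a few of the typo defects listed in the table above."""
+    problems = []
+    text = text.strip()
+    if not text:
+        return ["empty line"]
+    if text[0].islower():
+        problems.append("starts with a lowercase letter")
+    if text[-1] not in ".?!":
+        problems.append("missing ending punctuation")
+    if re.search(r"\d\$", text):
+        problems.append("currency symbol placed after the amount (write $45, not 45$)")
+    if re.search(r"\s[,.?!]", text) or "  " in text:
+        problems.append("missing or excess space around words and punctuation")
+    return problems
+
+print(lint_utterance("now is 1pm in New York ,costing 45$"))
+```
+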
### Script format
You can write your script in Microsoft Word. The script is for use during the re
A basic script format contains three columns:
-* The number of the utterance, starting at 1. Numbering makes it easy for everyone in the studio to refer to a particular utterance ("let's try number 356 again"). You can use the Word paragraph numbering feature to number the rows of the table automatically.
-* A blank column where you'll write the take number or time code of each utterance to help you find it in the finished recording.
-* The text of the utterance itself.
+- The number of the utterance, starting at 1. Numbering makes it easy for everyone in the studio to refer to a particular utterance ("let's try number 356 again"). You can use the Word paragraph numbering feature to number the rows of the table automatically.
+- A blank column where you'll write the take number or time code of each utterance to help you find it in the finished recording.
+- The text of the utterance itself. (A minimal sketch for generating this table appears after the note below.)
-![Sample script](media/custom-voice/script.png)
+ ![Sample script](media/custom-voice/script.png)
> [!NOTE] > Most studios record in short segments known as *takes*. Each take typically contains 10 to 24 utterances. Just noting the take number is sufficient to find an utterance later. If you're recording in a studio that prefers to make longer recordings, you'll want to note the time code instead. The studio will have a prominent time display.
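+
+If you prefer to generate the three-column table programmatically rather than building it by hand in Word, the sketch below shows one possible approach using the third-party python-docx package. The sample utterances and output file name are placeholders.
+
+```python
+# pip install python-docx
+from docx import Document
+
+utterances = [
+    "You have successfully logged in.",
+    "What's the weather like in Seattle today?",
+]  # replace with your script sentences
+
+doc = Document()
+table = doc.add_table(rows=1, cols=3)
+header = table.rows[0].cells
+header[0].text = "No."
+header[1].text = "Take / time code"  # left blank; filled in during the session
+header[2].text = "Utterance"
+
+for number, text in enumerate(utterances, start=1):
+    cells = table.add_row().cells
+    cells[0].text = str(number)
+    cells[2].text = text
+
+doc.save("recording-script.docx")
+```
+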
Print three copies of the script: one for the talent, one for the engineer, and
### Legalities
-Under copyright law, an actor's reading of copyrighted text might be a performance for which the author of the work should be compensated. This performance will not be recognizable in the final product, the custom neural voice. Even so, the legality of using a copyrighted work for this purpose is not well established. Microsoft cannot provide legal advice on this issue; consult your own counsel.
+Under copyright law, an actor's reading of copyrighted text might be a performance for which the author of the work should be compensated. This performance won't be recognizable in the final product, the custom neural voice. Even so, the legality of using a copyrighted work for this purpose isn't well established. Microsoft can't provide legal advice on this issue; consult your own counsel.
-Fortunately, it is possible to avoid these issues entirely. There are many sources of text you can use without permission or license.
+Fortunately, it's possible to avoid these issues entirely. There are many sources of text you can use without permission or license.
|Text source|Description| |-|-|
Fortunately, it is possible to avoid these issues entirely. There are many sourc
|Works no longer<br>under copyright|Typically works published prior to 1923. For English, [Project Gutenberg](https://www.gutenberg.org/) offers tens of thousands of such works. You may want to focus on newer works, as the language will be closer to modern English.| |Government&nbsp;works|Works created by the United States government are not copyrighted in the United States, though the government may claim copyright in other countries/regions.| |Public domain|Works for which copyright has been explicitly disclaimed or that have been dedicated to the public domain. It may not be possible to waive copyright entirely in some jurisdictions.|
-|Permissively-licensed works|Works distributed under a license like Creative Commons or the GNU Free Documentation License (GFDL). Wikipedia uses the GFDL. Some licenses, however, may impose restrictions on performance of the licensed content that may impact the creation of a custom neural voice model, so read the license carefully.|
+|Permissively licensed works|Works distributed under a license like Creative Commons or the GNU Free Documentation License (GFDL). Wikipedia uses the GFDL. Some licenses, however, may impose restrictions on performance of the licensed content that may impact the creation of a custom neural voice model, so read the license carefully.|
## Recording your script
-Record your script at a professional recording studio that specializes in voice work. They'll have a recording booth, the right equipment, and the right people to operate it. It pays not to skimp on recording.
+Record your script at a professional recording studio that specializes in voice work. They'll have a recording booth, the right equipment, and the right people to operate it. It's recommended not to skimp on recording.
+
+Discuss your project with the studio's recording engineer and listen to their advice. The recording should have little or no dynamic range compression (maximum of 4:1). It's critical that the audio has consistent volume and a high signal-to-noise ratio, while being free of unwanted sounds.
+
+### Recording requirements
+
+To achieve high-quality training results, you need to comply with the following requirements during recording or data preparation:
+
+- Clear and well-pronounced speech
+
+- Natural speed: not too slow or too fast, and consistent between audio files
+
+- Appropriate volume, prosody, and breaks: stable within the same sentence and between sentences, with correct breaks for punctuation
+
+- No noise during recording
+
+- Fits your persona design
+
+- No wrong accent: the accent fits the target design
+
+- No wrong pronunciation
+
+As a best practice, you can refer to the specification below when preparing your audio samples. (A minimal format-check sketch follows the note after the table.)
+
+| Property | Value |
+| :--- | :--- |
+| File format | *.wav, Mono |
+| Sampling rate | 24 KHz |
+| Sample format | 16 bit, PCM |
+| Peak volume levels | -3 dB to -6 dB |
+| SNR | > 35 dB |
+| Silence | - There should be some silence (100 ms recommended) at the beginning and end, but no longer than 200 ms<br>- Silence between words or phrases < -30 dB<br>- Silence in the wave after the last word is spoken < -60 dB |
+| Environment noise, echo | - The level of noise at start of the wave before speaking < -70 dB |
+
+> [!Note]
+> You can record at a higher sampling rate and bit depth, for example in the format of 48 KHz 24-bit PCM. During the custom voice training, we'll downsample it to 24 KHz 16-bit PCM automatically.
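+
+For example, the sketch below uses Python's standard wave module to compare a recording's header against the format rows in the table above (mono, 24 KHz or higher, 16-bit PCM). It's an illustrative check only; the file name is a placeholder, and the check isn't part of the Speech Studio tooling.
+
+```python
+import wave
+
+def check_wav_format(path):
+    """Compare a recording's header against the specification above."""
+    with wave.open(path, "rb") as wav:
+        channels = wav.getnchannels()      # expect 1 (mono)
+        sample_rate = wav.getframerate()   # expect 24000 Hz or higher
+        sample_width = wav.getsampwidth()  # expect 2 bytes (16-bit PCM)
+    problems = []
+    if channels != 1:
+        problems.append(f"expected mono, found {channels} channels")
+    if sample_rate < 24000:
+        problems.append(f"sampling rate is {sample_rate} Hz, below 24 kHz")
+    if sample_width < 2:
+        problems.append(f"sample width is {8 * sample_width}-bit, below 16-bit")
+    return problems
+
+print(check_wav_format("0001.wav"))
+```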
+
+### Typical audio errors
+
+For high-quality training results, avoiding audio errors is highly recommended. Audio errors normally fall into the following categories (a minimal level-check sketch follows this list):
+
+- Audio file name doesn't match the script ID.
+- Wav file has an invalid format and can't be read.
+- Audio sampling rate is lower than 16 KHz. It's recommended that the wav file sampling rate be equal to or higher than 24 KHz for a high-quality neural voice.
+- Volume peak isn't within the range of -3 dB (70% of max volume) to -6 dB (50%).
+- Waveform overflow. That is, the waveform at its peak value is cut and thus not complete.
+
+ ![waveform overflow](media/custom-voice/overflow.png)
+
+- The silence parts aren't clean; for example, they contain ambient noise, mouth noise, or echo.
+
+    For example, the audio below contains environment noise between utterances.
+
+ ![environment noise](media/custom-voice/environment-noise.png)
+
+    The sample below contains DC offset noise or echo.
+
+ ![DC offset or echo](media/custom-voice/dc-offset-noise.png)
+
+- The overall volume is too low. Your data will be tagged as an issue if the volume is lower than -18 dB (10% of max volume). Make sure all audio files are at a consistent volume level.
+
+ ![overall volume](media/custom-voice/overall-volume.png)
+
+- No silence before the first word or after the last word. Also, the start or end silence should not be longer than 200 ms or shorter than 100 ms.
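+
+The sketch below estimates two of the properties called out above for a mono 16-bit PCM recording: the peak level relative to full scale (target roughly -6 dB to -3 dB) and the length of silence before the first word, using a simple amplitude threshold. The threshold and file name are illustrative assumptions.
+
+```python
+import math
+import wave
+from array import array
+
+def load_samples(path):
+    """Read a mono 16-bit PCM wav file into signed samples."""
+    with wave.open(path, "rb") as wav:
+        assert wav.getsampwidth() == 2 and wav.getnchannels() == 1
+        samples = array("h")
+        samples.frombytes(wav.readframes(wav.getnframes()))
+        return samples, wav.getframerate()
+
+def peak_dbfs(samples):
+    """Peak level in dB relative to full scale; the target above is about -6 dB to -3 dB."""
+    peak = max(abs(s) for s in samples)
+    return 20 * math.log10(max(peak, 1) / 32768.0)
+
+def leading_silence_ms(samples, rate, threshold_db=-60.0):
+    """Milliseconds before the first sample that rises above the threshold."""
+    threshold = 32768.0 * 10 ** (threshold_db / 20)
+    for i, s in enumerate(samples):
+        if abs(s) > threshold:
+            return 1000.0 * i / rate
+    return 1000.0 * len(samples) / rate
+
+samples, rate = load_samples("0001.wav")
+print(f"peak: {peak_dbfs(samples):.1f} dBFS")
+print(f"leading silence: {leading_silence_ms(samples, rate):.0f} ms")
+```
+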
-Discuss your project with