Updates from: 04/30/2022 01:17:00
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-spa-app.md
Previously updated : 03/30/2022 Last updated : 04/30/2022
Your resulting code should look similar to the following sample:
```javascript
const msalConfig = {
- auth: {
- clientId: "<your-MyApp-application-ID>"
- authority: b2cPolicies.authorities.signUpSignIn.authority,
- knownAuthorities: [b2cPolicies.authorityDomain],
- },
- cache: {
- cacheLocation: "localStorage",
- storeAuthStateInCookie: true
- }
+ auth: {
+ clientId: "<your-MyApp-application-ID>", // This is the ONLY mandatory field; everything else is optional.
+ authority: b2cPolicies.authorities.signUpSignIn.authority, // Choose sign-up/sign-in user-flow as your default.
+ knownAuthorities: [b2cPolicies.authorityDomain], // You must identify your tenant's domain as a known authority.
+ redirectUri: "http://localhost:6420", // You must register this URI on Azure Portal/App Registration. Defaults to "window.location.href".
+ },
+ cache: {
+ cacheLocation: "sessionStorage",
+ storeAuthStateInCookie: false,
+ },
+ system: {
+ loggerOptions: {
+ loggerCallback: (level, message, containsPii) => {
+ if (containsPii) {
+ return;
+ }
+ switch (level) {
+ case msal.LogLevel.Error:
+ console.error(message);
+ return;
+ case msal.LogLevel.Info:
+ console.info(message);
+ return;
+ case msal.LogLevel.Verbose:
+ console.debug(message);
+ return;
+ case msal.LogLevel.Warning:
+ console.warn(message);
+ return;
+ }
+ }
+ }
+ }
+ };

const loginRequest = {
- scopes: ["openid", "profile"],
+ scopes: ["openid", ...apiConfig.b2cScopes],
};

const tokenRequest = {
- scopes: apiConfig.b2cScopes
+ scopes: [...apiConfig.b2cScopes], // e.g. ["https://fabrikamb2c.onmicrosoft.com/helloapi/demo.read"]
+ forceRefresh: false // Set this to "true" to skip a cached token and go to the server to get a new token
};
```
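For context, here's a minimal sketch of how a SPA typically wires up this configuration with MSAL.js 2.x; the instance and helper names below are illustrative, not part of the sample app:

```javascript
// Minimal usage sketch, assuming msal-browser 2.x is loaded as `msal`
// and b2cPolicies/apiConfig are defined as in the sample app.
const msalInstance = new msal.PublicClientApplication(msalConfig);

// Interactive sign-in with the scopes from loginRequest.
msalInstance.loginPopup(loginRequest)
  .then((result) => msalInstance.setActiveAccount(result.account))
  .catch((error) => console.error(error));

// Later, acquire an access token for the API, silently where possible.
async function getApiToken() {
  const account = msalInstance.getActiveAccount();
  const response = await msalInstance.acquireTokenSilent({ ...tokenRequest, account });
  return response.accessToken;
}
```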
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
description: Use filter for devices in Conditional Access to enhance security po
Previously updated : 04/05/2022 Last updated : 04/28/2022
When creating Conditional Access policies, administrators have asked for the abi
There are multiple scenarios that organizations can now enable using the filter for devices condition. Below are some core scenarios with examples of how to use this new condition.

-- Restrict access to privileged resources like Microsoft Azure Management, to privileged users, accessing from [privileged or secure admin workstations](/security/compass/privileged-access-devices). For this scenario, organizations would create two Conditional Access policies:
+- **Restrict access to privileged resources**. For this example, let's say you want to allow access to Microsoft Azure Management for a user who is assigned the privileged role of Global Administrator, has satisfied multifactor authentication, and is accessing from a [privileged or secure admin workstation](/security/compass/privileged-access-devices) attested as compliant. For this scenario, organizations would create two Conditional Access policies:
- Policy 1: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
- - Policy 2: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, excluding a filter for devices using rule expression device.extensionAttribute1 equals SAW and for Access controls, Block.
-- Block access to organization resources from devices running an unsupported Operating System version like Windows 7. For this scenario, organizations would create the following two Conditional Access policies:
- - Policy 1: All users, accessing all cloud apps and for Access controls, Grant access, but require device to be marked as compliant or require device to be hybrid Azure AD joined.
- - Policy 2: All users, accessing all cloud apps, including a filter for devices using rule expression device.operatingSystem equals Windows and device.operatingSystemVersion startsWith "6.1" and for Access controls, Block.
-- Do not require multifactor authentication for specific accounts like service accounts when used on specific devices like Teams phones or Surface Hub devices. For this scenario, organizations would create the following two Conditional Access policies:
+ - Policy 2: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, excluding a filter for devices using rule expression device.extensionAttribute1 equals SAW and for Access controls, Block. Learn how to [update extensionAttributes on an Azure AD device object](https://docs.microsoft.com/graph/api/device-update?view=graph-rest-1.0&tabs=http).
+- **Block access to organization resources from devices running an unsupported Operating System**. For this example, let's say you want to block access to resources from devices running a Windows OS version older than Windows 10. For this scenario, organizations would create the following Conditional Access policy:
+ - All users, accessing all cloud apps, excluding a filter for devices using rule expression device.operatingSystem equals Windows and device.operatingSystemVersion startsWith "10.0" and for Access controls, Block.
+- **Do not require multifactor authentication for specific accounts on specific devices**. For this example, let's say you don't want to require multifactor authentication when service accounts are used on specific devices like Teams phones or Surface Hub devices. For this scenario, organizations would create the following two Conditional Access policies:
  - Policy 1: All users excluding service accounts, accessing all cloud apps, and for Access controls, Grant access, but require multifactor authentication.
  - Policy 2: Select users and groups and include group that contains service accounts only, accessing all cloud apps, excluding a filter for devices using rule expression device.extensionAttribute2 not equals TeamsPhoneDevice and for Access controls, Block. (Sample rule expressions for all three scenarios follow this list.)
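For reference, the rule expressions named in these scenarios are written in the filter for devices rule syntax; a hedged sketch of the three rules above (attribute values such as SAW and TeamsPhoneDevice are the illustrative ones from the scenarios):

```
device.extensionAttribute1 -eq "SAW"
(device.operatingSystem -eq "Windows") -and (device.operatingSystemVersion -startsWith "10.0")
device.extensionAttribute2 -ne "TeamsPhoneDevice"
```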
+> [!NOTE]
+> Azure AD uses device authentication to evaluate device filter rules. For devices that are unregistered with Azure AD, all device properties are considered as null values.
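To set the extension attribute used in the first scenario, the device update API linked above can be called; a hedged sketch (the device object ID is a placeholder):

```HTTP
PATCH https://graph.microsoft.com/v1.0/devices/{device-object-id}
Content-Type: application/json

{
  "extensionAttributes": {
    "extensionAttribute1": "SAW"
  }
}
```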
+## Create a Conditional Access policy

Filter for devices is an option when creating a Conditional Access policy in the Azure portal or using the Microsoft Graph API.
active-directory Msal V1 App Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-v1-app-scopes.md
The logic used by Azure AD is the following:
- For ADAL (Azure AD v1.0) endpoint with a v1.0 access token (the only possible), aud=resource
- For MSAL (Microsoft identity platform) asking an access token for a resource accepting v2.0 tokens, `aud=resource.AppId`
-- For MSAL (v2.0 endpoint) asking an access token for a resource that accepts a v1.0 access token (which is the case above), Azure AD parses the desired audience from the requested scope by taking everything before the last slash and using it as the resource identifier. Therefore, if `https://database.windows.net` expects an audience of `https://database.windows.net`, you'll need to request a scope of `https://database.windows.net//.default`. See also GitHub issue [#747: `Resource url's trailing slash is omitted, which caused sql auth failure`](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/747).
+- For MSAL (v2.0 endpoint) asking an access token for a resource that accepts a v1.0 access token (which is the case above), Azure AD parses the desired audience from the requested scope by taking everything before the last slash and using it as the resource identifier. Therefore, if `https://database.windows.net` expects an audience of `https://database.windows.net/`, you'll need to request a scope of `https://database.windows.net//.default`. See also GitHub issue [#747: `Resource url's trailing slash is omitted, which caused sql auth failure`](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/747).
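In client code, that double slash shows up directly in the scope string; a minimal sketch, assuming MSAL.js and the Azure SQL resource from the example (the request name is a placeholder):

```javascript
// Hedged sketch: the resource expects an audience with a trailing slash,
// so the ".default" scope keeps that slash -- hence the double slash.
const sqlTokenRequest = {
  scopes: ["https://database.windows.net//.default"],
};
```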
## Scopes to request access to all the permissions of a v1.0 application
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
A blue "verified" badge appears on the Azure AD consent prompt and other screens
![Consent prompt](./media/publisher-verification-overview/consent-prompt.png)
-> [!NOTE]
-> We recently changed the color of the "verified" badge from blue to gray. We will revert that change sometime in the last half of February 2022, so the "verified" badge will be blue.
This feature is primarily for developers building multi-tenant apps that leverage [OAuth 2.0 and OpenID Connect](active-directory-v2-protocols.md) with the [Microsoft identity platform](v2-overview.md). These apps can sign users in using OpenID Connect, or they may use OAuth 2.0 to request access to data using APIs like [Microsoft Graph](https://developer.microsoft.com/graph/).

## Benefits
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md
Review the [UserInfo documentation](userinfo.md#calling-the-api) to look over ho
When you want to sign out the user from your app, it isn't sufficient to clear your app's cookies or otherwise end the user's session. You must also redirect the user to the Microsoft identity platform to sign out. If you don't do this, the user reauthenticates to your app without entering their credentials again, because they'll have a valid single sign-on session with the Microsoft identity platform.
-You can redirect the user to the `end_session_endpoint` listed in the OpenID Connect metadata document:
+You can redirect the user to the `end_session_endpoint` (which supports both HTTP GET and POST requests) listed in the OpenID Connect metadata document:
```HTTP
GET https://login.microsoftonline.com/common/oauth2/v2.0/logout?
```
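A complete sign-out request typically also carries a post-logout redirect; a hedged sketch (the redirect URI is a placeholder you'd register for your app):

```HTTP
GET https://login.microsoftonline.com/common/oauth2/v2.0/logout?
post_logout_redirect_uri=https%3A%2F%2Fcontoso.example%2Fsigned-out
```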
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation.md
Typically, a software workload (such as an application, service, script, or cont
You use workload identity federation to configure an Azure AD app registration to trust tokens from an external identity provider (IdP), such as GitHub. Once that trust relationship is created, your software workload can exchange trusted tokens from the external IdP for access tokens from Microsoft identity platform. Your software workload then uses that access token to access the Azure AD protected resources to which the workload has been granted access. This eliminates the maintenance burden of manually managing credentials and eliminates the risk of leaking secrets or having certificates expire. ## Supported scenarios
+> [!NOTE]
+> Azure AD-issued tokens can't be used for federated identity flows.
The following scenarios are supported for accessing Azure AD protected resources using workload identity federation:
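As an illustration of configuring that trust relationship, a hedged sketch of creating a federated identity credential on an app registration through Microsoft Graph (the credential name, repo, and object ID are placeholders):

```HTTP
POST https://graph.microsoft.com/beta/applications/{application-object-id}/federatedIdentityCredentials
Content-Type: application/json

{
  "name": "GitHubActionsDeploy",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:contoso/my-repo:ref:refs/heads/main",
  "audiences": ["api://AzureADTokenExchange"]
}
```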
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The What's new in Azure Active Directory? release notes provide information abou
+## October 2021
+
+### Limits on the number of configured API permissions for an application registration will be enforced starting in October 2021
+
+**Type:** Plan for change
+**Service category:** Other
+**Product capability:** Developer Experience
+
+Sometimes, application developers configure their apps to require more permissions than it's possible to grant. To prevent this from happening, a limit on the total number of required permissions that can be configured for an app registration will be enforced.
+
+The total number of required permissions for any single application registration mustn't exceed 400 permissions, across all APIs. The change to enforce this limit will begin rolling out mid-October 2021. Applications exceeding the limit can't increase the number of permissions they're configured for. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and may not exceed 50 APIs.
+
+In the Azure portal, the required permissions are listed under API permissions for the application you wish to configure. Using Microsoft Graph or Microsoft Graph PowerShell, the required permissions are listed in the requiredResourceAccess property of an [application](/graph/api/resources/application) entity. [Learn more](../enterprise-users/directory-service-limits-restrictions.md).
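To gauge how close an app registration is to these limits, you can inspect the requiredResourceAccess property mentioned above; a hedged sketch (the application object ID is a placeholder):

```HTTP
GET https://graph.microsoft.com/v1.0/applications/{application-object-id}?$select=displayName,requiredResourceAccess
```

The number of entries in requiredResourceAccess corresponds to the distinct APIs (limit 50), and the total of their resourceAccess items corresponds to the overall permission count (limit 400).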
+
++
+### Email one-time passcode on by default change beginning rollout in November 2021
+
+**Type:** Plan for change
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Previously, we announced that starting October 31, 2021, Microsoft Azure Active Directory [email one-time passcode](../external-identities/one-time-passcode.md) authentication will become the default method for inviting accounts and tenants for B2B collaboration scenarios. However, because of deployment schedules, we'll begin rolling out on November 1, 2021. Most of the tenants will see the change rolled out in January 2022 to minimize disruptions during the holidays and deployment lockdowns. After this change, Microsoft will no longer allow redemption of invitations using Azure Active Directory accounts that are unmanaged. [Learn more](../external-identities/one-time-passcode.md#frequently-asked-questions).
+
++
+### Conditional Access Guest Access Blocking Screen
+
+**Type:** Fixed
+**Service category:** Conditional Access
+**Product capability:** End User Experiences
+
+If there's no trust relation between a home and resource tenant, a guest user would have previously been asked to re-register their device, which would break the previous registration. However, the user would end up in a registration loop because only home tenant device registration is supported. In this specific scenario, instead of this loop, we've created a new conditional access blocking page. The page tells the end user that they can't get access to conditional access protected resources as a guest user. [Learn more](../external-identities/b2b-quickstart-add-guest-users-portal.md#prerequisites).
+
++
+### 50105 Errors will now result in a UX error message instead of an error response to the application
+
+**Type:** Fixed
+**Service category:** Authentications (Logins)
+**Product capability:** Developer Experience
+
+Azure AD has fixed a bug in an error response that occurs when a user isn't assigned to an app that requires a user assignment. Previously, Azure AD would return error 50105 with the OIDC error code "interaction_required" even during interactive authentication. This would cause well-coded applications to loop indefinitely, as they do interactive authentication and receive an error telling them to do interactive authentication, which they would then do.
+
+The bug has been fixed, so that during non-interactive auth an "interaction_required" error will still be returned. Also, during interactive authentication an error page will be directly displayed to the user.
+
+For greater details, see the change notices for [Azure AD protocols](../develop/reference-breaking-changes.md#error-50105-has-been-fixed-to-not-return-interaction_required-during-interactive-authentication).
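For application code, the robust pattern when "interaction_required" comes back from a silent call is a single interactive fallback rather than a loop; a hedged sketch with MSAL.js (the helper name is illustrative):

```javascript
// Hedged sketch, assuming msal-browser 2.x is loaded as `msal`.
async function getTokenWithFallback(msalInstance, request) {
  try {
    // Try silently first; this is where "interaction_required" can surface.
    return await msalInstance.acquireTokenSilent(request);
  } catch (error) {
    if (error instanceof msal.InteractionRequiredAuthError) {
      // Fall back to one interactive attempt instead of looping.
      return await msalInstance.acquireTokenPopup(request);
    }
    throw error;
  }
}
```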
+++
+### Public preview - New claims transformation capabilities
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** SSO
+
+The following new capabilities have been added to the claims transformations available for manipulating claims in tokens issued from Azure AD:
+
+- Join() on NameID. This used to be restricted to joining an email-format address with a verified domain. Now Join() can be used on the NameID claim in the same way as any other claim, so NameID transforms can be used to create Windows account style NameIDs or any other string. For now, if the result is an email address, Azure AD will still validate that the domain is one that is verified in the tenant.
+- Substring(). A new transformation in the claims configuration UI allows extraction of defined-position substrings, such as five characters starting at character three: substring(3,5).
+- Claims transformations. These transformations can now be performed on Multi-valued attributes, and can emit multi-valued claims. Microsoft Graph can now be used to read/write multi-valued directory schema extension attributes. [Learn more](../develop/active-directory-saml-claims-customization.md).
+++
+### Public Preview - Flagged Sign-ins
+
+**Type:** New feature
+**Service category:** Reporting
+**Product capability:** Monitoring & Reporting
+
+Flagged sign-ins is a feature that will increase the signal-to-noise ratio for user sign-ins where users need help. The functionality is intended to empower users to raise awareness about sign-in errors they want help with, and to help admins and help desk workers find the right sign-in events quickly and efficiently. [Learn more](../reports-monitoring/overview-flagged-sign-ins.md).
+++
+### Public preview - Device overview
+
+**Type:** New feature
+**Service category:** Device Registration and Management
+**Product capability:** Device Lifecycle Management
+
+The new Device Overview feature provides actionable insights about devices in your tenant. [Learn more](../devices/device-management-azure-portal.md).
+
++
+### Public preview - Azure Active Directory workload identity federation
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** Developer Experience
+
+Azure AD workload identity federation is a new capability that's in public preview. It frees developers from handling application secrets or certificates. This includes secrets in scenarios such as using GitHub Actions and building applications on Kubernetes. Rather than creating an application secret and using that to get tokens for that application, developers can instead use tokens provided by the respective platforms such as GitHub and Kubernetes without having to manage any secrets manually. [Learn more](../develop/workload-identity-federation.md).
+++
+### Public Preview - Updates to Sign-in Diagnostic
+
+**Type:** Changed feature
+**Service category:** Reporting
+**Product capability:** Monitoring & Reporting
+
+With this update, the diagnostic covers more scenarios and is made more easily available to admins.
+
+New scenarios covered when using the Sign-in Diagnostic:
+- Pass Through Authentication sign-in failures
+- Seamless Single-Sign On sign-in failures
+
+Other changes include:
+- Flagged Sign-ins will automatically appear for investigation when using the Sign-in Diagnostic from Diagnose and Solve.
+- Sign-in Diagnostic is now available from the Enterprise Apps Diagnose and Solve blade.
+- The Sign-in Diagnostic is now available in the Basic Info tab of the Sign-in Log event view for all sign-in events. [Learn more](../reports-monitoring/concept-sign-in-diagnostics-scenarios.md#supported-scenarios).
+++
+### General Availability - Privileged Role Administrators can now create Azure AD access reviews on role-assignable groups
+
+**Type:** Fixed
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+Privileged Role Administrators can now create Azure AD access reviews on Azure AD role-assignable groups, in addition to Azure AD roles. [Learn more](../governance/deploy-access-reviews.md#who-will-create-and-manage-access-reviews).
+
++
+### General Availability - Azure AD single Sign on and device-based Conditional Access support in Firefox on Windows 10/11
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** SSO
+
+We now support native single sign-on (SSO) and device-based Conditional Access in the Firefox browser on Windows 10 and Windows Server 2019, starting in Firefox version 91. [Learn more](../conditional-access/require-managed-devices.md#prerequisites).
+
++
+### General Availability - New app indicator in My Apps
+
+**Type:** New feature
+**Service category:** My Apps
+**Product capability:** End User Experiences
+
+Apps that have been recently assigned to the user show up with a "new" indicator. When the app is launched or the page is refreshed, this indicator disappears. [Learn more](/azure/active-directory/user-help/my-apps-portal-end-user-access).
+
++
+### General availability - Custom domain support in Azure AD B2C
+
+**Type:** New feature
+**Service category:** B2C - Consumer Identity Management
+**Product capability:** B2B/B2C
+
+Azure AD B2C customers can now enable custom domains so their end-users are redirected to a custom URL domain for authentication. This is done via integration with Azure Front Door's custom domains capability. [Learn more](../../active-directory-b2c/custom-domain.md?pivots=b2c-user-flow).
+
++
+### General availability - Edge Administrator built-in role
+
+**Type:** New feature
+**Service category:** RBAC
+**Product capability:** Access Control
+
+
+Users in this role can create and manage the enterprise site list required for Internet Explorer mode on Microsoft Edge. This role grants permissions to create, edit, and publish the site list and additionally allows access to manage support tickets. [Learn more](/deployedge/edge-ie-mode-cloud-site-list-mgmt)
+
++
+### General availability - Windows 365 Administrator built-in role
+
+**Type:** New feature
+**Service category:** RBAC
+**Product capability:** Access Control
+
+Users with this role have global permissions on Windows 365 resources, when the service is present. Additionally, this role contains the ability to manage users and devices to associate a policy, and create and manage groups. [Learn more](../roles/permissions-reference.md)
+
++
+### New Federated Apps available in Azure AD Application gallery - October 2021
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In October 2021 we've added the following 10 new applications in our App gallery with Federation support:
+
+[Adaptive Shield](../saas-apps/adaptive-shield-tutorial.md), [SocialChorus Search](https://socialchorus.com/), [Hiretual-SSO](../saas-apps/hiretual-tutorial.md), [TeamSticker by Communitio](../saas-apps/teamsticker-by-communitio-tutorial.md), [embed signage](../saas-apps/embed-signage-tutorial.md), [JoinedUp](../saas-apps/joinedup-tutorial.md), [VECOS Releezme Locker management system](../saas-apps/vecos-releezme-locker-management-system-tutorial.md), [Altoura](../saas-apps/altoura-tutorial.md), [Dagster Cloud](../saas-apps/dagster-cloud-tutorial.md), [Qualaroo](../saas-apps/qualaroo-tutorial.md)
+
+You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
+
+For listing your application in the Azure AD app gallery, read the following article: https://aka.ms/AzureADAppRequest
+++
+### Continuous Access Evaluation migration with Conditional Access
+
+**Type:** Changed feature
+**Service category:** Conditional Access
+**Product capability:** User Authentication
+
+A new user experience is available for our CAE tenants. Tenants will now access CAE as part of Conditional Access. Any tenants that were previously using CAE for some (but not all) user accounts under the old UX, or had previously disabled the old CAE UX, will now be required to undergo a one-time migration experience. [Learn more](../conditional-access/concept-continuous-access-evaluation.md#migration).
+
++
+### Improved group list blade
+
+**Type:** Changed feature
+**Service category:** Group Management
+**Product capability:** Directory
+
+The new group list blade offers more sort and filtering capabilities, infinite scrolling, and better performance. [Learn more](../enterprise-users/groups-members-owners-search.md).
+
++
+### General availability - Google deprecation of Gmail sign-in support on embedded webviews on September 30, 2021
+
+**Type:** Changed feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Google deprecated Gmail sign-ins on Microsoft Teams mobile and in custom apps that run Gmail authentication in embedded webviews on September 30, 2021.
+
+If you would like to request an extension: impacted customers with affected OAuth client IDs should have received an email from Google Developers with information about a one-time policy enforcement extension, which must be completed by January 31, 2022.
+
+To continue allowing your Gmail users to sign in and redeem, we strongly recommend that you refer to [Embedded vs System Web](../develop/msal-net-web-browsers.md#embedded-vs-system-web-ui) UI in the MSAL.NET documentation and modify your apps to use the system browser for sign-in. All MSAL SDKs use the system web-view by default.
+
+As a workaround, we're deploying the device login flow by October 8. Until then, the flow may not be rolled out to all regions yet; in that case, end users will see an error screen until it's deployed to their region.
+
+For more details on the device login flow and details on requesting extension to Google, see [Add Google as an identity provider for B2B guest users](../external-identities/google-federation.md#deprecation-of-web-view-sign-in-support).
+
++
+### Identity Governance Administrator can create and manage Azure AD access reviews of groups and applications
+
+**Type:** Changed feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+Identity Governance Administrator can create and manage Azure AD access reviews of groups and applications. [Learn more](../governance/deploy-access-reviews.md#who-will-create-and-manage-access-reviews).
+
+++++

## September 2021

### Limits on the number of configured API permissions for an application registration will be enforced starting in October 2021
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## April 2022
+
+### General Availability - Microsoft Defender for Endpoint Signal in Identity Protection
++
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+
+Identity Protection now integrates a signal from Microsoft Defender for Endpoint (MDE) that helps detect and protect against Primary Refresh Token (PRT) theft. To learn more, see: [What is risk? Azure AD Identity Protection | Microsoft Docs](../identity-protection/concept-identity-protection-risks.md).
+
+++
+### General availability - Entitlement management 3 stages of approval
++
+**Type:** Changed feature
+**Service category:** Other
+**Product capability:** Entitlement Management
+**Clouds impacted:** Public (Microsoft 365, GCC)
+
+
+This update extends the Azure AD entitlement management access package policy to allow a third approval stage. This can be configured via the Azure portal or Microsoft Graph. For more information, see: [Change approval and requestor information settings for an access package in Azure AD entitlement management](../governance/entitlement-management-access-package-approval-policy.md).
+
+++
+### General Availability - Improvements to Azure AD Smart Lockout
++
+**Type:** Changed feature
+**Service category:** Identity Protection
+**Product capability:** User Management
+**Clouds impacted:** Public (Microsoft 365, GCC), China, US Gov(GCC-H, DOD), US Nat, US Sec
+
+
+With a recent improvement, Smart Lockout now synchronizes the lockout state across Azure AD data centers, so the total number of failed sign-in attempts allowed before an account is locked out will match the configured lockout threshold. For more information, see: [Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md).
+
++++
+### Public Preview - Enabling customization capabilities for the Self-Service Password Reset (SSPR) hyperlinks, footer hyperlinks, and browser icons in Company Branding
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+We're updating the Company Branding functionality on the Azure AD/Microsoft 365 sign-in experience to allow customizing Self-Service Password Reset (SSPR) hyperlinks, footer hyperlinks, and the browser icon. For more information, see: [Add branding to your organization's Azure Active Directory sign-in page](customize-branding.md).
+++
+### Public Preview - Integration of Microsoft 365 App Certification details into AAD UX and Consent Experiences
++
+**Type:** New feature
+**Service category:** User Access Management
+**Product capability:** AuthZ/Access Delegation
+**Clouds impacted:** Public (Microsoft 365, GCC)
+
+Microsoft 365 Certification status for an app is now available in Azure AD consent UX, and custom app consent policies. The status will later be displayed in several other Identity-owned interfaces such as enterprise apps. For more information, see: [Understanding Azure AD application consent experiences](../develop/application-consent-experience.md).
+++
+### Public Preview - Organizations can replace all references to Microsoft on the AAD auth experience
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+This update to the Company Branding functionality on the Azure AD/Microsoft 365 sign-in experience lets organizations replace all references to Microsoft in the authentication experience with their own branding. For more information, see: [Add branding to your organization's Azure Active Directory sign-in page](customize-branding.md).
+++
+### Public preview - Use Azure AD access reviews to review access of B2B direct connect users in Teams shared channels
++
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
++
+Use Azure AD access reviews to review access of B2B direct connect users in Teams shared channels. For more information, see: [Include B2B direct connect users and teams accessing Teams Shared Channels in access reviews (preview)](../governance/create-access-review.md#include-b2b-direct-connect-users-and-teams-accessing-teams-shared-channels-in-access-reviews-preview).
+++
+### Public Preview - New MS Graph APIs to configure federated settings when federated with Azure AD
+
+**Type:** New feature
+**Service category:** MS Graph
+**Product capability:** Identity Security & Protection
+**Clouds impacted:** Public (Microsoft 365, GCC)
++
+We're announcing the public preview of the following MS Graph APIs and PowerShell cmdlets for configuring federated settings when federated with Azure AD:
++
+|Action |MS Graph API |PowerShell cmdlet |
+|---|---|---|
+|Get federation settings for a federated domain | [Get internalDomainFederation](https://docs.microsoft.com/graph/api/internaldomainfederation-get?view=graph-rest-beta) | [Get-MgDomainFederationConfiguration](https://docs.microsoft.com/powershell/module/microsoft.graph.identity.directorymanagement/get-mgdomainfederationconfiguration?view=graph-powershell-beta) |
+|Create federation settings for a federated domain | [Create internalDomainFederation](https://docs.microsoft.com/graph/api/domain-post-federationconfiguration?view=graph-rest-beta) | [New-MgDomainFederationConfiguration](https://docs.microsoft.com/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdomainfederationconfiguration?view=graph-powershell-beta) |
+|Remove federation settings for a federated domain | [Delete internalDomainFederation](https://docs.microsoft.com/graph/api/internaldomainfederation-delete?view=graph-rest-beta) | [Remove-MgDomainFederationConfiguration](https://docs.microsoft.com/powershell/module/microsoft.graph.identity.directorymanagement/remove-mgdomainfederationconfiguration?view=graph-powershell-beta) |
+|Update federation settings for a federated domain | [Update internalDomainFederation](https://docs.microsoft.com/graph/api/internaldomainfederation-update?view=graph-rest-beta) | [Update-MgDomainFederationConfiguration](https://docs.microsoft.com/powershell/module/microsoft.graph.identity.directorymanagement/update-mgdomainfederationconfiguration?view=graph-powershell-beta) |
+++
+If using older MSOnline cmdlets ([Get-MsolDomainFederationSettings](https://docs.microsoft.com/powershell/module/msonline/get-msoldomainfederationsettings?view=azureadps-1.0) and [Set-MsolDomainFederationSettings](https://docs.microsoft.com/powershell/module/msonline/set-msoldomainfederationsettings?view=azureadps-1.0)), we highly recommend transitioning to the latest MS Graph APIs and PowerShell cmdlets.
++
+For more information, see [internalDomainFederation resource type - Microsoft Graph beta | Microsoft Docs](https://docs.microsoft.com/graph/api/resources/internaldomainfederation?view=graph-rest-beta).
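As a hedged illustration of these APIs, reading the federation settings for a domain looks roughly like this (the domain name is a placeholder):

```HTTP
GET https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration
```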
++++
+### Public Preview - Ability to force reauthentication on Intune enrollment, risky sign-ins, and risky users
+
+**Type:** New feature
+**Service category:** RBAC role
+**Product capability:** AuthZ/Access Delegation
+**Clouds impacted:** Public (Microsoft 365, GCC)
+
+Added functionality to session controls allowing admins to reauthenticate a user on every sign-in if a user or particular sign-in event is deemed risky, or when enrolling a device in Intune. For more information, see [Configure authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
+++
+### Public Preview - Protect against bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD
+
+**Type:** New feature
+**Service category:** MS Graph
+**Product capability:** Identity Security & Protection
+**Clouds impacted:** Public (Microsoft 365, GCC)
+
+We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by imitating that multifactor authentication has already been performed by the identity provider. The protection can be enabled via the new security setting, [federatedIdpMfaBehavior](https://docs.microsoft.com/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values).
+
+We highly recommend enabling this new protection when using Azure AD Multi-Factor Authentication for your federated users. To learn more about the protection and how to enable it, visit [Enable protection to prevent by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD](https://docs.microsoft.com/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-ad-multi-factor-authentication-when-federated-with-azure-ad).
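A hedged sketch of turning the protection on for a federated domain through the beta Graph API (the domain name and configuration ID are placeholders; rejectMfaByFederatedIdp is one of the documented federatedIdpMfaBehavior values):

```HTTP
PATCH https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration/{configuration-id}
Content-Type: application/json

{
  "federatedIdpMfaBehavior": "rejectMfaByFederatedIdp"
}
```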
+++
+### New Federated Apps available in Azure AD Application gallery - April 2022
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** Third Party Integration
+
+In April 2022 we added the following 24 new applications in our App gallery with Federation support:
+[X-1FBO](https://www.x1fbo.com/), [select Armor](https://app.clickarmor.c)
+
+You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial.
++
+For listing your application in the Azure AD app gallery, please read the details here: https://aka.ms/AzureADAppRequest
+++
+### General Availability - Customer data storage for Japan customers in Japanese data centers
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** GoLocal
+**Clouds impacted:** Public (Microsoft 365, GCC)
+
+From April 15, 2022, Microsoft began storing Azure AD's Customer Data for new tenants with a Japan billing address within the Japanese data centers. For more information, see: [Customer data storage for Japan customers in Azure Active Directory](active-directory-data-storage-japan.md).
++++++
+### Public Preview - New provisioning connectors in the Azure AD Application Gallery - April 2022
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** Third Party Integration
+**Clouds impacted:** Public (Microsoft 365, GCC)
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+- [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-provisioning-oidc-tutorial.md)
+- [embed signage](../saas-apps/embed-signage-provisioning-tutorial.md)
+- [KnowBe4 Security Awareness Training](../saas-apps/knowbe4-security-awareness-training-provisioning-tutorial.md)
+- [NordPass](../saas-apps/nordpass-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
++++

## March 2022
This page is updated monthly, so revisit it regularly. If you're looking for ite
**Clouds impacted:** Public (Microsoft 365, GCC)
-We announced in April 2020 General Availability of our new combined registration experience, enabling users to register security information for multi-factor authentication and self-service password reset at the same time, which was available for existing customers to opt in. We're happy to announce the combined security information registration experience will be enabled to all non-enabled customers after September 30th, 2022. This change does not impact tenants created after August 15th, 2020, or tenants located in the China region. For more information, see: [Combined security information registration for Azure Active Directory overview](../authentication/concept-registration-mfa-sspr-combined.md).
+We announced in April 2020 General Availability of our new combined registration experience, enabling users to register security information for multi-factor authentication and self-service password reset at the same time, which was available for existing customers to opt in. We're happy to announce the combined security information registration experience will be enabled to all non-enabled customers after September 30, 2022. This change doesn't impact tenants created after August 15, 2020, or tenants located in the China region. For more information, see: [Combined security information registration for Azure Active Directory overview](../authentication/concept-registration-mfa-sspr-combined.md).
We announced in April 2020 General Availability of our new combined registration
**Type:** New feature
**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
+**Product capability:** Third Party Integration
For more information about how to better secure your organization by using autom
**Type:** New feature
**Service category:** Reporting
**Product capability:** Monitoring & Reporting
-**Clouds impacted:** Public (Microsoft 365,GCC)
+**Clouds impacted:** Public (Microsoft 365, GCC)
Azure AD Recommendations is now in public preview. This feature provides personalized insights with actionable guidance to help you identify opportunities to implement Azure AD best practices, and optimize the state of your tenant. For more information, see: [What is Azure Active Directory recommendations](../reports-monitoring/overview-recommendations.md)
Azure AD Recommendations is now in public preview. This feature provides persona
### Public Preview: Dynamic administrative unit membership for users and devices

**Type:** New feature
-**Service category:** RBAC
+**Service category:** RBAC role
**Product capability:** Access Control
-**Clouds impacted:** Public (Microsoft 365,GCC)
+**Clouds impacted:** Public (Microsoft 365, GCC)
Administrative units now support dynamic membership rules for user and device members. Instead of manually assigning users and devices to administrative units, tenant admins can set up a query for the administrative unit. The membership will be automatically maintained by Azure AD. For more information, see: [Administrative units in Azure Active Directory](../roles/administrative-units.md).
Administrative units now support dynamic membership rules for user and device me
### Public Preview: Devices in Administrative Units

**Type:** New feature
-**Service category:** RBAC
+**Service category:** RBAC role
**Product capability:** AuthZ/Access Delegation
**Clouds impacted:** Public (Microsoft 365, GCC)
Devices can now be added as members of administrative units. This enables scoped
**Type:** New feature
**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
+**Product capability:** Third Party Integration
-In March 2022 we have added the following 29 new applications in our App gallery with Federation support:
+In March 2022 we've added the following 29 new applications in our App gallery with Federation support:
[Informatica Platform](../saas-apps/informatica-platform-tutorial.md), [Buttonwood Central SSO](../saas-apps/buttonwood-central-sso-tutorial.md), [Blockbax](../saas-apps/blockbax-tutorial.md), [Datto Workplace Single Sign On](../saas-apps/datto-workplace-tutorial.md), [Atlas by Workland](https://atlas.workland.com/), [Simply.Coach](https://app.simply.coach/signup), [Benevity](https://benevity.com/), [Engage Absence Management](https://engage.honeydew-health.com/users/sign_in), [LitLingo App Authentication](https://www.litlingo.com/litlingo-deployment-guide), [ADP EMEA French HR Portal mon.adp.com](../saas-apps/adp-emea-french-hr-portal-tutorial.md), [Ready Room](https://app.readyroom.net/), [Rainmaker UPSMQDEV](https://upsmqdev.rainmaker.aero/rainmaker.security.web/), [Axway CSOS](../saas-apps/axway-csos-tutorial.md), [Alloy](https://alloyapp.io/), [U.S. Bank Prepaid](../saas-apps/us-bank-prepaid-tutorial.md), [EdApp](https://admin.edapp.com/login), [GoSimplo](https://app.gosimplo.com/External/Microsoft/Signup), [Snow Atlas SSO](https://www.snowsoftware.io/), [Abacus.AI](https://alloyapp.io/), [Culture Shift](../saas-apps/culture-shift-tutorial.md), [StaySafe Hub](https://hub.staysafeapp.net/login), [OpenLearning](../saas-apps/openlearning-tutorial.md), [Draup, Inc](https://draup.com/platformlogin/), [Air](../saas-apps/air-tutorial.md), [Regulatory Lab](https://clientidentification.com/), [SafetyLine](https://slmonitor.com/login), [Zest](../saas-apps/zest-tutorial.md), [iGrafx Platform](../saas-apps/igrafx-platform-tutorial.md), [Tracker Software Technologies](../saas-apps/tracker-software-technologies-tutorial.md)
For listing your application in the Azure AD app gallery, please read the detail
### Public Preview - New APIs for fetching transitive role assignments and role permissions

**Type:** New feature
-**Service category:** RBAC
+**Service category:** RBAC role
**Product capability:** Access Control
Use multi-stage reviews to create Azure AD access reviews in sequential stages,
**Type:** New feature
**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
+**Product capability:** Third Party Integration
In February 2022 we added the following 20 new applications in our App gallery with Federation support:
For more information about how to better secure your organization by using autom
In January 2022, we've added the following 47 new applications in our App gallery with Federation support:
-[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems Login Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://www.healthnote.com/), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), [DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
+[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems sign-in Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://www.healthnote.com/), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), [DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
You can also find the documentation of all the applications from: https://aka.ms/AppsTutorial,
New updates have been made to the Microsoft Authenticator app icon. To learn mor
-### General availability - Azure AD single Sign on and device-based Conditional Access support in Firefox on Windows 10/11
+### General availability - Azure AD single Sign-on and device-based Conditional Access support in Firefox on Windows 10/11
**Type:** New feature
**Service category:** Authentications (Logins)
Updated "switch organizations" user interface in My Account. This visually impro
-## October 2021
-
-### Limits on the number of configured API permissions for an application registration will be enforced starting in October 2021
-
-**Type:** Plan for change
-**Service category:** Other
-**Product capability:** Developer Experience
-
-Sometimes, application developers configure their apps to require more permissions than it's possible to grant. To prevent this from happening, a limit on the total number of required permissions that can be configured for an app registration will be enforced.
-
-The total number of required permissions for any single application registration mustn't exceed 400 permissions, across all APIs. The change to enforce this limit will begin rolling out mid-October 2021. Applications exceeding the limit can't increase the number of permissions they're configured for. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and may not exceed 50 APIs.
-
-In the Azure portal, the required permissions are listed under API permissions for the application you wish to configure. Using Microsoft Graph or Microsoft Graph PowerShell, the required permissions are listed in the requiredResourceAccess property of an [application](/graph/api/resources/application) entity. [Learn more](../enterprise-users/directory-service-limits-restrictions.md).
-
--
-### Email one-time passcode on by default change beginning rollout in November 2021
-
-**Type:** Plan for change
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-Previously, we announced that starting October 31, 2021, Microsoft Azure Active Directory [email one-time passcode](../external-identities/one-time-passcode.md) authentication will become the default method for inviting accounts and tenants for B2B collaboration scenarios. However, because of deployment schedules, we'll begin rolling out on November 1, 2021. Most of the tenants will see the change rolled out in January 2022 to minimize disruptions during the holidays and deployment lock downs. After this change, Microsoft will no longer allow redemption of invitations using Azure Active Directory accounts that are unmanaged. [Learn more](../external-identities/one-time-passcode.md#frequently-asked-questions).
-
--
-### Conditional Access Guest Access Blocking Screen
-
-**Type:** Fixed
-**Service category:** Conditional Access
-**Product capability:** End User Experiences
-
-If there's no trust relation between a home and resource tenant, a guest user would have previously been asked to re-register their device, which would break the previous registration. However, the user would end up in a registration loop because only home tenant device registration is supported. In this specific scenario, instead of this loop, we've created a new conditional access blocking page. The page tells the end user that they can't get access to conditional access protected resources as a guest user. [Learn more](../external-identities/b2b-quickstart-add-guest-users-portal.md#prerequisites).
-
--
-### 50105 Errors will now result in a UX error message instead of an error response to the application
-
-**Type:** Fixed
-**Service category:** Authentications (Logins)
-**Product capability:** Developer Experience
-
-Azure AD has fixed a bug in an error response that occurs when a user isn't assigned to an app that requires a user assignment. Previously, Azure AD would return error 50105 with the OIDC error code "interaction_required" even during interactive authentication. This would cause well-coded applications to loop indefinitely, as they do interactive authentication and receive an error telling them to do interactive authentication, which they would then do.
-
-The bug has been fixed, so that during non-interactive auth an "interaction_required" error will still be returned. Also, during interactive authentication an error page will be directly displayed to the user.
-
-For greater details, see the change notices for [Azure AD protocols](../develop/reference-breaking-changes.md#error-50105-has-been-fixed-to-not-return-interaction_required-during-interactive-authentication).
---
-### Public preview - New claims transformation capabilities
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** SSO
-
-The following new capabilities have been added to the claims transformations available for manipulating claims in tokens issued from Azure AD:
-
-- Join() on NameID. Used to be restricted to joining an email format address with a verified domain. Now Join() can be used on the NameID claim in the same way as any other claim, so NameID transforms can be used to create Windows account style NameIDs or any other string. For now if the result is an email address, the Azure AD will still validate that the domain is one that is verified in the tenant.
-- Substring(). A new transformation in the claims configuration UI allows extraction of defined position substrings such as five characters starting at character three - substring(3,5)
-- Claims transformations. These transformations can now be performed on Multi-valued attributes, and can emit multi-valued claims. Microsoft Graph can now be used to read/write multi-valued directory schema extension attributes. [Learn more](../develop/active-directory-saml-claims-customization.md).
---
-### Public Preview - Flagged Sign-ins
-
-**Type:** New feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-Flagged sign-ins is a feature that will increase the signal to noise ratio for user sign-ins where users need help. The functionality is intended to empower users to raise awareness about sign-in errors they want help with. Also to help admins and help desk workers find the right sign-in events quickly and efficiently. [Learn more](../reports-monitoring/overview-flagged-sign-ins.md).
---
-### Public preview - Device overview
-
-**Type:** New feature
-**Service category:** Device Registration and Management
-**Product capability:** Device Lifecycle Management
-
-The new Device Overview feature provides actionable insights about devices in your tenant. [Learn more](../devices/device-management-azure-portal.md).
-
--
-### Public preview - Azure Active Directory workload identity federation
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** Developer Experience
-
-Azure AD workload identity federation is a new capability that's in public preview. It frees developers from handling application secrets or certificates. This includes secrets in scenarios such as using GitHub Actions and building applications on Kubernetes. Rather than creating an application secret and using that to get tokens for that application, developers can instead use tokens provided by the respective platforms such as GitHub and Kubernetes without having to manage any secrets manually.[Learn more](../develop/workload-identity-federation.md).
---
-### Public Preview - Updates to Sign-in Diagnostic
-
-**Type:** Changed feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-With this update, the diagnostic covers more scenarios and is made more easily available to admins.
-
-New scenarios covered when using the Sign-in Diagnostic:
-- Pass-through Authentication sign-in failures
-- Seamless Single Sign-On sign-in failures
-
-Other changes include:
-- Flagged Sign-ins will automatically appear for investigation when using the Sign-in Diagnostic from Diagnose and Solve.
-- Sign-in Diagnostic is now available from the Enterprise Apps Diagnose and Solve blade.
-- The Sign-in Diagnostic is now available in the Basic Info tab of the Sign-in Log event view for all sign-in events. [Learn more](../reports-monitoring/concept-sign-in-diagnostics-scenarios.md#supported-scenarios).
---
-### General Availability - Privileged Role Administrators can now create Azure AD access reviews on role-assignable groups
-
-**Type:** Fixed
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-Privileged Role Administrators can now create Azure AD access reviews on Azure AD role-assignable groups, in addition to Azure AD roles. [Learn more](../governance/deploy-access-reviews.md#who-will-create-and-manage-access-reviews).
-
--
-### General Availability - Azure AD single sign-on and device-based Conditional Access support in Firefox on Windows 10/11
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** SSO
-
-We now provide native single sign-on (SSO) and device-based Conditional Access support for the Firefox browser on Windows 10 and Windows Server 2019, starting with Firefox version 91. [Learn more](../conditional-access/require-managed-devices.md#prerequisites).
-
--
-### General Availability - New app indicator in My Apps
-
-**Type:** New feature
-**Service category:** My Apps
-**Product capability:** End User Experiences
-
-Apps that have been recently assigned to the user show up with a "new" indicator. When the app is launched or the page is refreshed, this indicator disappears. [Learn more](/azure/active-directory/user-help/my-apps-portal-end-user-access).
-
--
-### General availability - Custom domain support in Azure AD B2C
-
-**Type:** New feature
-**Service category:** B2C - Consumer Identity Management
-**Product capability:** B2B/B2C
-
-Azure AD B2C customers can now enable custom domains so their end-users are redirected to a custom URL domain for authentication. This is done via integration with Azure Front Door's custom domains capability. [Learn more](../../active-directory-b2c/custom-domain.md?pivots=b2c-user-flow).
-
--
-### General availability - Edge Administrator built-in role
-
-**Type:** New feature
-**Service category:** RBAC
-**Product capability:** Access Control
-
-
-Users in this role can create and manage the enterprise site list required for Internet Explorer mode on Microsoft Edge. This role grants permissions to create, edit, and publish the site list and additionally allows access to manage support tickets. [Learn more](/deployedge/edge-ie-mode-cloud-site-list-mgmt)
-
--
-### General availability - Windows 365 Administrator built-in role
-
-**Type:** New feature
-**Service category:** RBAC
-**Product capability:** Access Control
-
-Users with this role have global permissions on Windows 365 resources, when the service is present. Additionally, this role lets you manage users and devices in order to associate policies, and create and manage groups. [Learn more](../roles/permissions-reference.md)
-
--
-### New Federated Apps available in Azure AD Application gallery - October 2021
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In October 2021, we added the following 10 new applications with Federation support to our App gallery:
-
-[Adaptive Shield](../saas-apps/adaptive-shield-tutorial.md), [SocialChorus Search](https://socialchorus.com/), [Hiretual-SSO](../saas-apps/hiretual-tutorial.md), [TeamSticker by Communitio](../saas-apps/teamsticker-by-communitio-tutorial.md), [embed signage](../saas-apps/embed-signage-tutorial.md), [JoinedUp](../saas-apps/joinedup-tutorial.md), [VECOS Releezme Locker management system](../saas-apps/vecos-releezme-locker-management-system-tutorial.md), [Altoura](../saas-apps/altoura-tutorial.md), [Dagster Cloud](../saas-apps/dagster-cloud-tutorial.md), [Qualaroo](../saas-apps/qualaroo-tutorial.md)
-
-You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
-
-To list your application in the Azure AD app gallery, read the following article: https://aka.ms/AzureADAppRequest
---
-### Continuous Access Evaluation migration with Conditional Access
-
-**Type:** Changed feature
-**Service category:** Conditional Access
-**Product capability:** User Authentication
-
-A new user experience is available for our CAE tenants. Tenants will now access CAE as part of Conditional Access. Any tenants that were previously using CAE for some (but not all) user accounts under the old UX, or had previously disabled the old CAE UX, will now be required to undergo a one-time migration experience. [Learn more](../conditional-access/concept-continuous-access-evaluation.md#migration).
-
--
-### Improved group list blade
-
-**Type:** Changed feature
-**Service category:** Group Management
-**Product capability:** Directory
-
-The new group list blade offers more sort and filtering capabilities, infinite scrolling, and better performance. [Learn more](../enterprise-users/groups-members-owners-search.md).
-
--
-### General availability - Google deprecation of Gmail sign-in support on embedded webviews on September 30, 2021
-
-**Type:** Changed feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-As of September 30, 2021, Google has deprecated Gmail sign-in support on Microsoft Teams mobile and on custom apps that run Gmail authentication in embedded webviews.
-
-If you would like to request an extension: impacted customers with affected OAuth client IDs should have received an email from Google Developers with information about a one-time policy enforcement extension, which must be completed by January 31, 2022.
-
-To continue allowing your Gmail users to sign in and redeem, we strongly recommend that you refer to [Embedded vs System Web](../develop/msal-net-web-browsers.md#embedded-vs-system-web-ui) UI in the MSAL.NET documentation and modify your apps to use the system browser for sign-in. All MSAL SDKs use the system web-view by default.
-
-As a workaround, we are deploying the device login flow by October 8. Until then, it might not yet be rolled out to all regions; in that case, end users will see an error screen until the flow is deployed to their region.
-
-For more details on the device login flow and details on requesting extension to Google, see [Add Google as an identity provider for B2B guest users](../external-identities/google-federation.md#deprecation-of-web-view-sign-in-support).
-
--
-### Identity Governance Administrator can create and manage Azure AD access reviews of groups and applications
-
-**Type:** Changed feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-Identity Governance Administrator can create and manage Azure AD access reviews of groups and applications. [Learn more](../governance/deploy-access-reviews.md#who-will-create-and-manage-access-reviews).
-
--
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
Follow these steps to view the list of users who have assignments to two access
### Identifying users who already have incompatible access programmatically
+Using Microsoft Graph, you can retrieve the assignments to an access package, scoped to just those users who also have an assignment to another access package. A user in an administrative role, with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission, can call the API to [list additional access](/graph/api/accesspackageassignment-additionalaccess?view=graph-rest-beta&preserve-view=true).
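As an illustration, here is a minimal sketch of calling this API directly with the Graph PowerShell SDK's generic request cmdlet. The URL template and the placeholder access package IDs are assumptions; verify them against the linked Graph reference.

```powershell
# Sketch only: call the beta "additional access" function directly.
# Requires the Microsoft.Graph.Authentication module; the URL template and
# placeholder IDs below are assumptions to verify against the Graph reference.
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

$uri = "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/" +
       "accessPackageAssignments/additionalAccess(accessPackageId='<first-package-id>'," +
       "incompatibleAccessPackageId='<second-package-id>')"

$result = Invoke-MgGraphRequest -Method GET -Uri $uri
$result.value | ForEach-Object { $_.target.displayName }  # users holding both assignments
```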
+
+### Identifying users who already have incompatible access using PowerShell
+ You can also query the users who have assignments to an access package with the `Get-MgEntitlementManagementAccessPackageAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.0 or later. For example, if you have two access packages, one with ID `29be137f-b006-426c-b46a-0df3d4e25ccd` and the other with ID `cce10272-68d8-4482-8ba3-a5965c86cfe5`, then you could retrieve the users who have assignments to the first access package, and then compare them to the users who have assignments to the second access package. You can also report the users who have assignments delivered to both, using a PowerShell script similar to the following:
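A minimal sketch of such a comparison, assuming the two example access package IDs above and the Microsoft.Graph.Identity.Governance module (1.6.0 or later, with the beta profile selected), might look like this:

```powershell
# Sketch only: list users who hold assignments to both access packages.
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"
Select-MgProfile -Name "beta"   # the entitlement management cmdlets use the beta profile

$first  = Get-MgEntitlementManagementAccessPackageAssignment `
            -AccessPackageId "29be137f-b006-426c-b46a-0df3d4e25ccd" -ExpandProperty target -All
$second = Get-MgEntitlementManagementAccessPackageAssignment `
            -AccessPackageId "cce10272-68d8-4482-8ba3-a5965c86cfe5" -ExpandProperty target -All

# Index the second package's assignment targets by user ID, then probe with the first's.
$secondByUser = @{}
foreach ($a in $second) { if ($a.Target -and $a.Target.Id) { $secondByUser[$a.Target.Id] = $a } }
foreach ($a in $first) {
    if ($a.Target -and $a.Target.Id -and $secondByUser.ContainsKey($a.Target.Id)) {
        Write-Output $a.Target.Email   # this user has assignments delivered to both
    }
}
```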
active-directory Datto File Protection Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/datto-file-protection-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Datto File Protection Single Sign On'
+description: Learn how to configure single sign-on between Azure Active Directory and Datto File Protection Single Sign On.
+Last updated : 04/13/2022
+# Tutorial: Azure AD SSO integration with Datto File Protection Single Sign On
+
+In this tutorial, you'll learn how to integrate Datto File Protection Single Sign On with Azure Active Directory (Azure AD). When you integrate Datto File Protection Single Sign On with Azure AD, you can:
+
+* Control in Azure AD who has access to Datto File Protection Single Sign On.
+* Enable your users to be automatically signed-in to Datto File Protection Single Sign On with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Datto File Protection Single Sign On enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Datto File Protection Single Sign On supports **SP** and **IDP** initiated SSO.
+
+## Add Datto File Protection Single Sign On from the gallery
+
+To configure the integration of Datto File Protection Single Sign On into Azure AD, you need to add Datto File Protection Single Sign On from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Datto File Protection Single Sign On** in the search box.
+1. Select **Datto File Protection Single Sign On** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Datto File Protection Single Sign On
+
+Configure and test Azure AD SSO with Datto File Protection Single Sign On using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Datto File Protection Single Sign On.
+
+To configure and test Azure AD SSO with Datto File Protection Single Sign On, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Datto File Protection Single Sign On SSO](#configure-datto-file-protection-single-sign-on-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Datto File Protection Single Sign On test user](#create-datto-file-protection-single-sign-on-test-user)** - to have a counterpart of B.Simon in Datto File Protection Single Sign On that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Datto File Protection Single Sign On** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, you don't have to perform any step, as the app is already pre-integrated with Azure.
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **SP** initiated mode then perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `https://saml.fileprotection.datto.com/singlesignon/saml/metadata`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://saml.fileprotection.datto.com/singlesignon/saml/SSO`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.fileprotection.datto.com`
+
+ > [!NOTE]
+ > This value is not real. Update this value with the actual Sign on URL. Contact [Datto File Protection Single Sign On Client support team](mailto:ms-sso-support@ot.soonr.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Datto File Protection Single Sign On.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Datto File Protection Single Sign On**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Datto File Protection Single Sign On SSO
+
+To configure single sign-on on the **Datto File Protection Single Sign On** side, you need to send the **App Federation Metadata Url** to the [Datto File Protection Single Sign On support team](mailto:ms-sso-support@ot.soonr.com). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create Datto File Protection Single Sign On test user
+
+In this section, you create a user called Britta Simon in Datto File Protection Single Sign On. Work with [Datto File Protection Single Sign On support team](mailto:ms-sso-support@ot.soonr.com) to add the users in the Datto File Protection Single Sign On platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. You're redirected to the Datto File Protection Single Sign On Sign-on URL, where you can initiate the login flow.
+
+* Go to the Datto File Protection Single Sign On Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Datto File Protection Single Sign On instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Datto File Protection Single Sign On tile in My Apps, if the app is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you should be automatically signed in to the Datto File Protection Single Sign On instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Datto File Protection Single Sign On, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Debroome Brand Portal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/debroome-brand-portal-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with deBroome Brand Portal'
+description: Learn how to configure single sign-on between Azure Active Directory and deBroome Brand Portal.
+Last updated : 04/29/2022
+# Tutorial: Azure AD SSO integration with deBroome Brand Portal
+
+In this tutorial, you'll learn how to integrate deBroome Brand Portal with Azure Active Directory (Azure AD). When you integrate deBroome Brand Portal with Azure AD, you can:
+
+* Control in Azure AD who has access to deBroome Brand Portal.
+* Enable your users to be automatically signed-in to deBroome Brand Portal with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* deBroome Brand Portal single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* deBroome Brand Portal supports **SP and IDP** initiated SSO.
+* deBroome Brand Portal supports **Just In Time** user provisioning.
+
+## Add deBroome Brand Portal from the gallery
+
+To configure the integration of deBroome Brand Portal into Azure AD, you need to add deBroome Brand Portal from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **deBroome Brand Portal** in the search box.
+1. Select **deBroome Brand Portal** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for deBroome Brand Portal
+
+Configure and test Azure AD SSO with deBroome Brand Portal using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in deBroome Brand Portal.
+
+To configure and test Azure AD SSO with deBroome Brand Portal, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure deBroome Brand Portal SSO](#configure-debroome-brand-portal-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create deBroome Brand Portal test user](#create-debroome-brand-portal-test-user)** - to have a counterpart of B.Simon in deBroome Brand Portal that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **deBroome Brand Portal** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<CustomerBrandPortalUrl>/rv2/saml2/metadata`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<CustomerBrandPortalUrl>/rv2/saml2/acs`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CustomerBrandPortalUrl>/sso`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [deBroome Brand Portal Client support team](mailto:support@debroome.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. deBroome Brand Portal application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the deBroome Brand Portal application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to deBroome Brand Portal.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **deBroome Brand Portal**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure deBroome Brand Portal SSO
+
+To configure single sign-on on the **deBroome Brand Portal** side, you need to send the **App Federation Metadata Url** to the [deBroome Brand Portal support team](mailto:support@debroome.com). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create deBroome Brand Portal test user
+
+In this section, a user called B.Simon is created in deBroome Brand Portal. deBroome Brand Portal supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in deBroome Brand Portal, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. You're redirected to the deBroome Brand Portal Sign-on URL, where you can initiate the login flow.
+
+* Go to the deBroome Brand Portal Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the deBroome Brand Portal instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the deBroome Brand Portal tile in My Apps, if the app is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you should be automatically signed in to the deBroome Brand Portal instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure deBroome Brand Portal, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Goalquest Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/goalquest-tutorial.md
Previously updated : 04/13/2022 Last updated : 04/29/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking the **Save** button.
-1. Airtable application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. GoalQuest application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![Screenshot that shows attributes configuration image.](common/default-attributes.png "Image")
-1. In addition to above, Airtable application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements.
+1. In addition to the above, the GoalQuest application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
| Name | Source Attribute|
| - | - |
active-directory Memo 22 09 Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-multi-factor-authentication.md
# Meet multifactor authentication requirements of memorandum 22-09
-This series of articles offers guidance for using Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles, as described in the US federal government's Office of Management and Budget (OMB) [memorandum 22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf).
+This series of articles offers guidance for using Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles, as described in the US federal government's Office of Management and Budget (OMB) [memorandum 22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf).
The memo requires that all employees use enterprise-managed identities to access applications, and that phishing-resistant multifactor authentication (MFA) protect those personnel from sophisticated online attacks. Phishing is the attempt to obtain and compromise credentials, such as by sending a spoofed email that leads to an inauthentic site.
Adoption of MFA is critical for preventing unauthorized access to accounts and d
## Phishing-resistant methods
-* Active Directory Federation Services (AD FS) as a federated identity provider that's configured with certificate-based authentication.
+U.S. Federal agencies will be approaching this guidance from different starting points. Some agencies will have already deployed modern credentials such as [FIDO2 security keys](../authentication/concept-authentication-passwordless.md#fido2-security-keys) or [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview), many are evaluating [Azure AD certificate-based authentication](../authentication/concept-certificate-based-authentication.md) (currently in public preview), and some are just starting to modernize their authentication credentials. This guidance is meant to inform agencies about the multiple options available to meet phishing-resistant MFA requirements with Azure AD. The reality is that phishing-resistant MFA is needed sooner rather than later. Microsoft recommends adopting a phishing-resistant MFA method as soon as possible, using whichever method below best matches the agency's current capability. Agencies should approach the phishing-resistant MFA requirement of the memorandum by asking what they can do **now** to gain phishing resistance for their accounts. Implementing phishing-resistant MFA will have a significant positive impact on the agency's overall cybersecurity posture. The end goal is to fully implement one or more of the modern credentials. However, if the quickest path to phishing resistance is not one of the modern approaches below, agencies should take that step as a starting point on their journey toward the more modern approaches.
-* Azure AD certificate-based authentication.
+![Table of Azure AD phishing-resistant methods.](media/memo-22-09/azure-active-directory-pr-methods.png)
-* FIDO2 security keys.
+### Modern approaches
-* Windows Hello for Business.
+- **[FIDO2 security keys](../authentication/concept-authentication-passwordless.md#fido2-security-keys)** are, according to the [Cybersecurity & Infrastructure Security Agency (CISA)](https://www.cisa.gov/mfa), the gold standard of multifactor authentication.
-* Microsoft Authenticator and conditional access policies that enforce managed or compliant devices to access the application or service. Microsoft Authenticator native phishing resistance is in development.
+- **[Azure AD certificate-based authentication](../authentication/concept-certificate-based-authentication.md)** offers cloud-native certificate-based authentication, without a dependency on a federated identity provider. This includes smart card implementations such as Common Access Card (CAC) and Personal Identity Verification (PIV), as well as derived PIV credentials deployed to mobile devices or security keys.
-Your current device capabilities, user personas, and other requirements might dictate specific multifactor methods. For example, if you're adopting FIDO2 security keys that have only USB-C support, they can be used only from devices with USB-C ports.
+- **[Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview)** offers passwordless multifactor authentication that is phishing-resistant. For more information, see the [Windows Hello for Business deployment overview](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/hello-deployment-guide).
-Consider the following approaches to evaluating phishing-resistant MFA methods:
+### Protection from external phishing
-* Device types and capabilities that you want to support. Examples include kiosks, laptops, mobile phones, biometric readers, USB, Bluetooth, and near-field communication devices.
+**[Microsoft Authenticator](../authentication/concept-authentication-authenticator-app.md) and conditional access policies that enforce managed devices**. Managed devices are hybrid Azure AD joined devices or devices marked as compliant.
-* User personas within your organization. Examples include front-line workers, remote workers with and without company-owned hardware, administrators with privileged access workstations, and business-to-business guest users.
+Microsoft Authenticator can be installed on the device accessing the application protected by Azure AD or on a separate device.
-* Logistics of distributing, configuring, and registering MFA methods such as FIDO2 security keys, smart cards, government-furnished equipment, or Windows devices with TPM chips.
+>[!Important]
+>
+>To meet the phishing-resistant requirement with this approach:
+>
+>- Only the device accessing the protected application needs to be managed.
+>- All users allowed to use Microsoft Authenticator must be in scope for a conditional access policy that requires a managed device for access to all applications.
+>- An additional conditional access policy is needed to block access targeting the Microsoft Intune Enrollment cloud app. All users allowed to use Microsoft Authenticator must be in scope for this conditional access policy.
+>
+>Microsoft recommends that you use the same group(s) used to allow the Microsoft Authenticator app authentication method within both conditional access policies, to ensure that once a user is enabled for the authentication method, they are simultaneously in scope of both policies.
+>
+>This conditional access policy effectively prevents both:
+>
+>- The most significant vector of phishing threats from malicious external actors.
+>- A malicious actor's ability to phish with Microsoft Authenticator in order to register a new credential, or to join a device and enroll it in Intune so that it's marked as compliant.
-* Need for FIPS 140 validation at a specific [authenticator assurance level](nist-about-authenticator-assurance-levels.md). For example, some FIDO security keys are FIPS 140 validated at levels required for [AAL3](nist-authenticator-assurance-level-3.md), as set by [NIST SP 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html).
+For more information on deploying this method, see the following resources:
+- [Plan your hybrid Azure Active Directory join implementation](../devices/hybrid-azuread-join-plan.md) **or** [How to: Plan your Azure AD join implementation](../devices/azureadjoin-plan.md)
+
+- [Conditional Access: Require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md)
+
+>[!NOTE]
+>
+> Today, Microsoft Authenticator by itself is **not** phishing-resistant. You must additionally secure the authentication with the phishing-resistant properties gained from conditional access policy enforcement of managed devices.
+>
+>**Microsoft Authenticator native phishing resistance is in development.** Once available, Microsoft Authenticator will be natively phishing-resistant, without reliance on conditional access policies that enforce hybrid Azure AD joined or compliant devices.
+
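To make the two policies described above concrete, here is a minimal sketch using the Microsoft Graph PowerShell SDK. This is an illustration under assumptions, not agency-ready policy: the group ID and the Microsoft Intune Enrollment app ID are placeholders you must supply, and both policies are created in report-only mode so they can be validated before enforcement.

```powershell
# Sketch only: the two conditional access policies described above.
# <authenticator-users-group-id> and <intune-enrollment-app-id> are placeholders.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

# Policy 1: require a compliant or hybrid Azure AD joined device for all apps.
New-MgIdentityConditionalAccessPolicy -BodyParameter @{
    displayName = "Require managed device - Authenticator users"
    state       = "enabledForReportingButNotEnforced"   # report-only while validating
    conditions  = @{
        users        = @{ includeGroups = @("<authenticator-users-group-id>") }
        applications = @{ includeApplications = @("All") }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("compliantDevice", "domainJoinedDevice")  # domainJoinedDevice = hybrid Azure AD joined
    }
}

# Policy 2: block access targeting the Microsoft Intune Enrollment cloud app for
# the same group, so a phished credential can't be used to enroll a "compliant" device.
New-MgIdentityConditionalAccessPolicy -BodyParameter @{
    displayName = "Block Intune enrollment - Authenticator users"
    state       = "enabledForReportingButNotEnforced"
    conditions  = @{
        users        = @{ includeGroups = @("<authenticator-users-group-id>") }
        applications = @{ includeApplications = @("<intune-enrollment-app-id>") }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
}
```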
+### Legacy
+
+**A federated identity provider (IdP), such as Active Directory Federation Services (AD FS), that's configured with phishing-resistant method(s).** While agencies can achieve phishing resistance via a federated IdP, adopting or continuing to use a federated IdP adds significant cost, complexity, and risk. Microsoft encourages agencies to realize the security benefits of Azure AD as a cloud-based identity provider, removing the [associated risk of a federated IdP](../fundamentals/protect-m365-from-on-premises-attacks.md).
+
+For more information on deploying this method, see the following resources:
+
+- [Deploying Active Directory Federation Services in Azure](https://docs.microsoft.com/windows-server/identity/ad-fs/deployment/how-to-connect-fed-azure-adfs)
+- [Configuring AD FS for user certificate authentication](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)
+
+### Additional phishing-resistant method considerations
+
+Your current device capabilities, user personas, and other requirements might dictate specific multifactor methods. For example, if you're adopting FIDO2 security keys that have only USB-C support, they can be used only from devices with USB-C ports.
+
+Consider the following when evaluating phishing-resistant MFA methods:
+
+- Device types and capabilities that you want to support. Examples include kiosks, laptops, mobile phones, biometric readers, USB, Bluetooth, and near-field communication devices.
+
+- User personas within your organization. Examples include front-line workers, remote workers with and without company-owned hardware, administrators with privileged access workstations, and business-to-business guest users.
+
+- Logistics of distributing, configuring, and registering MFA methods such as FIDO2 security keys, smart cards, government-furnished equipment, or Windows devices with TPM chips.
+
+- Need for FIPS 140 validation at a specific [authenticator assurance level](nist-about-authenticator-assurance-levels.md). For example, some FIDO security keys are FIPS 140 validated at levels required for [AAL3](nist-authenticator-assurance-level-3.md), as set by [NIST SP 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html).
## Implementation considerations for phishing-resistant MFA
The following sections describe support for implementing phishing-resistant meth
The following table details the availability of phishing-resistant MFA scenarios, based on the device type that's used to sign in to the applications:
-| Device | AD FS as a federated identity provider configured with certificate-based authentication| Azure AD certificate-based authentication| FIDO2 security keys| Windows Hello for Business| Microsoft authenticator + certificate authority for managed devices |
+| Device | AD FS as a federated identity provider configured with certificate-based authentication| Azure AD certificate-based authentication| FIDO2 security keys| Windows Hello for Business| Microsoft Authenticator with conditional access policies that enforce hybrid Azure AD join or compliant devices |
| - | - | - | - | - | - |
| Windows device| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg) |
| iOS mobile device| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| ![Checkmark with solid fill](media/memo-22-09/check.jpg)| Not applicable| Not applicable| ![Checkmark with solid fill](media/memo-22-09/check.jpg) |
For each of the five phishing-resistant MFA types previously mentioned, you use
| VMs hosted on-premises or in other clouds| Enable [Azure Arc](../../azure-arc/overview.md) on the VM and then enable Azure AD sign-in. (Currently in private preview for Linux. Support for Windows VMs hosted in these environments is on our roadmap.) |
| Non-Microsoft virtual desktop solution| Integrate the virtual desktop solution as an app in Azure AD. |

### Enforcing phishing-resistant MFA
-Conditional access enables you to enforce MFA for users in your tenant. With the addition of [cross-tenant access policies](../external-identities/cross-tenant-access-overview.md), you can enforce it on external users.
+Conditional access enables you to enforce MFA for users in your tenant. With the addition of [cross-tenant access policies](../external-identities/cross-tenant-access-overview.md), you can enforce it on external users.
#### Enforcement across agencies
Conditional access enables you to enforce MFA for users in your tenant. With the
- Limiting what other Microsoft tenants your users can access.
- Enabling you to allow access to users whom you don't have to manage in your own tenant, but whom you can subject to your MFA and other access requirements.
-You must enforce MFA for partners and external users who access your organization's resources. This is common in many inter-agency collaboration scenarios. Azure AD provides cross-tenant access policies to help you configure MFA for external users who access your applications and resources.
+You must enforce MFA for partners and external users who access your organization's resources. This is common in many inter-agency collaboration scenarios. Azure AD provides cross-tenant access policies to help you configure MFA for external users who access your applications and resources.
-By using trust settings in cross-tenant access policies, you can trust the MFA method that the guest user's tenant is using instead of having them register an MFA method directly with your tenant. These policies can be configured on a per-organization basis. This ability requires you to understand the available MFA methods in the user's home tenant and determine if they meet the requirement for phishing resistance.
+By using trust settings in cross-tenant access policies, you can trust the MFA method that the guest user's tenant is using instead of having them register an MFA method directly with your tenant. These policies can be configured on a per-organization basis. This ability requires you to understand the available MFA methods in the user's home tenant and determine if they meet the requirement for phishing resistance.
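As a sketch of what configuring these trust settings programmatically could look like, assuming the cross-tenant access cmdlets in the Microsoft.Graph.Identity.SignIns module (verify availability in your SDK version) and a placeholder partner tenant ID:

```powershell
# Sketch only: trust MFA claims from one partner tenant instead of requiring
# guests to re-register MFA in your tenant. <partner-tenant-id> is a placeholder.
Connect-MgGraph -Scopes "Policy.ReadWrite.CrossTenantAccess"

New-MgPolicyCrossTenantAccessPolicyPartner -BodyParameter @{
    tenantId     = "<partner-tenant-id>"
    inboundTrust = @{
        isMfaAccepted                       = $true    # accept the home tenant's MFA claim
        isCompliantDeviceAccepted           = $false
        isHybridAzureADJoinedDeviceAccepted = $false
    }
}
```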
## Password policies
The memo requires organizations to change password policies that are proven inef
* Use [Azure AD Identity Protection](..//identity-protection/concept-identity-protection-risks.md) to be alerted about compromised credentials so you can take immediate action.
-Although the memo isn't specific on which policies to use with passwords, consider the standard from [NIST 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html).
+Although the memo isn't specific on which policies to use with passwords, consider the standard from [NIST 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html).
## Next steps
The following articles are part of this documentation set:
For more information about Zero Trust, see:
-[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
+[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Filesystem Size Used Avail Use% Mounted on
## Windows containers
-The Azure disk CSI driver also supports Windows nodes and containers. If you want to use Windows containers, follow the [Windows containers tutorial](windows-container-cli.md) to add a Windows node pool.
+The Azure disk CSI driver also supports Windows nodes and containers. If you want to use Windows containers, follow the [Windows containers quickstart][aks-quickstart-cli] to add a Windows node pool.
After you have a Windows node pool, you can now use the built-in storage classes like `managed-csi`. You can deploy an example [Windows-based stateful set](https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/deploy/example/windows/statefulset.yaml) that saves timestamps into the file `data.txt` by deploying the following command with the [kubectl apply][kubectl-apply] command:
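A plausible form of that command, assuming the raw-content URL of the manifest linked above (the GitHub `blob` link rendered via `raw.githubusercontent.com`), is:

```powershell
# Assumed raw URL of the manifest linked above; adjust if the path differs.
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/windows/statefulset.yaml
```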
$ kubectl exec -it busybox-azuredisk-0 -- cat c:\mnt\azuredisk\data.txt # on Win
[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create [az-disk-create]: /cli/azure/disk#az_disk_create [az-disk-show]: /cli/azure/disk#az_disk_show
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
[install-azure-cli]: /cli/azure/install-azure-cli [operator-best-practices-storage]: operator-best-practices-storage.md [concepts-storage]: concepts-storage.md
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
For more information on Kubernetes volumes, see [Storage options for application
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
For more information about AKS clusters interact with Azure disks, see the [Kube
[az-disk-create]: /cli/azure/disk#az_disk_create [az-group-list]: /cli/azure/group#az_group_list [az-resource-show]: /cli/azure/resource#az_resource_show
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[az-aks-show]: /cli/azure/aks#az_aks_show [install-azure-cli]: /cli/azure/install-azure-cli [azure-files-volume]: azure-files-volume.md
aks Azure Disks Dynamic Pv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disks-dynamic-pv.md
For more information on Kubernetes volumes, see [Storage options for application
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
Learn more about Kubernetes persistent volumes using Azure disks.
[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create [az-disk-create]: /cli/azure/disk#az_disk_create [az-disk-show]: /cli/azure/disk#az_disk_show
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [operator-best-practices-storage]: operator-best-practices-storage.md [concepts-storage]: concepts-storage.md
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
accountname.file.core.windows.net:/accountname/pvc-fa72ec43-ae64-42e4-a8a2-55660
## Windows containers
-The Azure Files CSI driver also supports Windows nodes and containers. If you want to use Windows containers, follow the [Windows containers tutorial](windows-container-cli.md) to add a Windows node pool.
+The Azure Files CSI driver also supports Windows nodes and containers. If you want to use Windows containers, follow the [Windows containers quickstart](./learn/quick-windows-container-deploy-cli.md) to add a Windows node pool.
After you have a Windows node pool, use the built-in storage classes like `azurefile-csi` or create custom ones. You can deploy an example [Windows-based stateful set](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/windows/statefulset.yaml) that saves timestamps into a file `data.txt` by deploying the following command with the [kubectl apply][kubectl-apply] command:
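Likewise, assuming the raw-content URL of the manifest linked above:

```powershell
# Assumed raw URL of the linked azurefile manifest; adjust if the path differs.
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/windows/statefulset.yaml
```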
$ kubectl exec -it busybox-azurefile-0 -- cat c:\mnt\azurefile\data.txt # on Win
[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create [az-disk-create]: /cli/azure/disk#az_disk_create [az-disk-show]: /cli/azure/disk#az_disk_show
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [operator-best-practices-storage]: operator-best-practices-storage.md [concepts-storage]: concepts-storage.md
aks Azure Files Dynamic Pv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-dynamic-pv.md
For more information on Kubernetes volumes, see [Storage options for application
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
Learn more about Kubernetes persistent volumes using Azure Files.
[az-storage-key-list]: /cli/azure/storage/account/keys#az_storage_account_keys_list [az-storage-share-create]: /cli/azure/storage/share#az_storage_share_create [mount-options]: #mount-options
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [az-aks-show]: /cli/azure/aks#az_aks_show [storage-skus]: ../storage/common/storage-redundancy.md
aks Azure Files Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-volume.md
For more information on Kubernetes volumes, see [Storage options for application
## Before you begin
-This article assumes that you have an existing AKS 1.21 or above cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
For associated best practices, see [Best practices for storage and backups in AK
[CSI driver parameters]: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md#static-provisionbring-your-own-file-share <!-- LINKS - internal -->
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [operator-best-practices-storage]: operator-best-practices-storage.md [concepts-storage]: concepts-storage.md
aks Azure Hpc Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-hpc-cache.md
Last updated 09/08/2021
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
> [!IMPORTANT] > Your AKS cluster must be [in a region that supports Azure HPC Cache][hpc-cache-regions].
We'd love to hear from you! Please send any feedback or questions to <aks-hpcca
* For more information on Azure HPC Cache, see [HPC Cache Overview][hpc-cache]. * For more information on using NFS with AKS, see [Manually create and use an NFS (Network File System) Linux Server volume with Azure Kubernetes Service (AKS)][aks-nfs].
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[aks-nfs]: azure-nfs-volume.md [hpc-cache]: ../hpc-cache/hpc-cache-overview.md [hpc-cache-access-policies]: ../hpc-cache/access-policies.md
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
Using a CSI driver to directly consume Azure NetApp Files volumes from AKS workl
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
> [!IMPORTANT] > Your AKS cluster must also be [in a region that supports Azure NetApp Files][anf-regions].
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
* For more information on Azure NetApp Files, see [What is Azure NetApp Files][anf].
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[aks-nfs]: azure-nfs-volume.md [anf]: ../azure-netapp-files/azure-netapp-files-introduction.md [anf-delegate-subnet]: ../azure-netapp-files/azure-netapp-files-delegate-subnet.md
aks Azure Nfs Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-nfs-volume.md
# Manually create and use an NFS (Network File System) Linux Server volume with Azure Kubernetes Service (AKS)
-Sharing data between containers is often a necessary component of container-based services and applications. You usually have various pods that need access to the same information on an external persistent volume.
+Sharing data between containers is often a necessary component of container-based services and applications. You usually have various pods that need access to the same information on an external persistent volume.
While Azure Files is an option, creating an NFS server on an Azure VM is another form of persistent shared storage. This article shows you how to create an NFS server on an Ubuntu virtual machine and give your AKS containers access to this shared file system. ## Before you begin
-This article assumes that you have an existing AKS Cluster. If you need an AKS Cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
Your AKS cluster needs to be in the same virtual network as the NFS server, or in a peered virtual network. The cluster must be created in an existing VNET, which can be the same VNET as your VM.
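If the cluster and the NFS server end up in separate virtual networks, a peering sketch along these lines connects them (all names below are placeholders, and this assumes both VNETs live in the same resource group):

```azurecli
# Peer the AKS VNET to the NFS server's VNET and allow traffic between them.
az network vnet peering create \
    --resource-group myResourceGroup \
    --name aksToNfsPeering \
    --vnet-name myAksVnet \
    --remote-vnet myNfsVnet \
    --allow-vnet-access
```

Note that peering must also be created in the reverse direction (from the NFS VNET back to the AKS VNET) for traffic to flow both ways.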
For associated best practices, see [Best practices for storage and backups in AK
[peer-virtual-networks]: ../virtual-network/tutorial-connect-virtual-networks-portal.md <!-- LINKS - internal -->
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[operator-best-practices-storage]: operator-best-practices-storage.md
aks Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices.md
To help understand some of the features and components of these best practices,
## Next steps
-If you need to get started with AKS, follow one of the quickstarts to deploy an Azure Kubernetes Service (AKS) cluster using the [Azure CLI](kubernetes-walkthrough.md) or [Azure portal](kubernetes-walkthrough-portal.md).
+If you need to get started with AKS, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+
+<!-- LINKS - internal -->
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
By using `containerd` for AKS nodes, pod startup latency improves and node resou
> [!IMPORTANT] > Clusters with Linux node pools created on Kubernetes v1.19 or greater default to `containerd` for their container runtime. Clusters with node pools on earlier supported Kubernetes versions receive Docker for their container runtime. Linux node pools will be updated to `containerd` once the node pool Kubernetes version is updated to a version that supports `containerd`. You can still use Docker node pools and clusters on older supported versions until those fall off support. >
-> Using `containerd` with Windows Server 2019 node pools is generally available, although the default for node pools created on Kubernetes v1.22 and earlier is still Docker. For more details, see [Add a Windows Server node pool with `containerd`][aks-add-np-containerd].
+> Using `containerd` with Windows Server 2019 node pools is generally available, although the default for node pools created on Kubernetes v1.22 and earlier is still Docker. For more details, see [Add a Windows Server node pool with `containerd`][aks-add-np-containerd].
> > It is highly recommended to test your workloads on AKS node pools with `containerd` prior to using clusters with a Kubernetes version that supports `containerd` for your node pools.
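One quick way to confirm which runtime a node pool actually received (assuming `kubectl` is already connected to the cluster):

```console
# The CONTAINER-RUNTIME column shows containerd://<version> or docker://<version> per node.
kubectl get nodes -o wide
```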
aks Concepts Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-scale.md
For more information on core Kubernetes and AKS concepts, see the following arti
[virtual-kubelet]: https://virtual-kubelet.io/ <!-- LINKS - internal -->
-[aks-quickstart]: kubernetes-walkthrough.md
+[aks-quickstart]: ./learn/quick-kubernetes-deploy-cli.md
[aks-hpa]: tutorial-kubernetes-scale.md#autoscale-pods [aks-scale]: tutorial-kubernetes-scale.md [aks-manually-scale-pods]: tutorial-kubernetes-scale.md#manually-scale-pods
For more information on core Kubernetes and AKS concepts, see the following arti
[aks-concepts-storage]: concepts-storage.md [aks-concepts-identity]: concepts-identity.md [aks-concepts-network]: concepts-network.md
-[virtual-nodes-cli]: virtual-nodes-cli.md
+[virtual-nodes-cli]: virtual-nodes-cli.md
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
When an AKS cluster is created or scaled up, the nodes are automatically deploye
> [!NOTE] > AKS clusters using:
-> * Kubernetes version 1.19 and greater for Linux node pools use `containerd` as its container runtime. Using `containerd` with Windows Server 2019 node pools is currently in preview. For more details, see [Add a Windows Server node pool with `containerd`][aks-add-np-containerd].
+> * Kubernetes version 1.19 and greater for Linux node pools use `containerd` as the container runtime. Using `containerd` with Windows Server 2019 node pools is currently in preview. For more details, see [Add a Windows Server node pool with `containerd`][aks-add-np-containerd].
> * Kubernetes prior to v1.19 for Linux node pools use Docker as the container runtime. For Windows Server 2019 node pools, Docker is the default container runtime. ### Node security patches
aks Control Kubeconfig Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/control-kubeconfig-access.md
This article shows you how to assign Azure roles that limit who can get the conf
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
This article also requires that you are running the Azure CLI version 2.0.65 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
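For illustration, a sketch of assigning the built-in Cluster User role scoped to a single cluster (the user and resource names below are placeholders):

```azurecli
# Look up the cluster's resource ID to use as the assignment scope.
AKS_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query id -o tsv)

# Grant a user permission to download the regular (non-admin) kubeconfig.
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Azure Kubernetes Service Cluster User Role" \
    --scope $AKS_ID
```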
For enhanced security on access to AKS clusters, [integrate Azure Active Directo
[kubectl-config-view]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#config <!-- LINKS - internal -->
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[azure-cli-install]: /cli/azure/install-azure-cli [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [azure-rbac]: ../role-based-access-control/overview.md
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md
This article shows you how to use ConfigMaps for basic customization options of
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
When creating a configuration like the examples below, your names in the *data* section must end in either *.server* or *.override*. This naming convention is defined in the default AKS CoreDNS ConfigMap, which you can view using the `kubectl get configmaps --namespace=kube-system coredns -o yaml` command.
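As a minimal sketch of that convention (the zone name and forward address below are placeholder values; AKS looks specifically for a ConfigMap named *coredns-custom* in *kube-system*):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom   # AKS merges customizations from this ConfigMap into CoreDNS
  namespace: kube-system
data:
  example.server: |      # data keys must end in .server or .override
    example.local:53 {
        errors
        cache 30
        forward . 192.11.0.1
    }
```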
To learn more about core network concepts, see [Network concepts for application
<!-- LINKS - internal --> [concepts-network]: concepts-network.md
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
Label: ```"admissions.enforcer/disabled": "true"``` or Annotation: ```"admission
## Can I run Windows Server containers on AKS?
-Yes, Windows Server containers are available on AKS. To run Windows Server containers in AKS, you create a node pool that runs Windows Server as the guest OS. Windows Server containers can use only Windows Server 2019. To get started, see [Create an AKS cluster with a Windows Server node pool][aks-windows-cli].
+Yes, Windows Server containers are available on AKS. To run Windows Server containers in AKS, you create a node pool that runs Windows Server as the guest OS. Windows Server containers can use only Windows Server 2019. To get started, see [Create an AKS cluster with a Windows Server node pool](./learn/quick-windows-container-deploy-cli.md).
Windows Server support for node pool includes some limitations that are part of the upstream Windows Server in Kubernetes project. For more information on these limitations, see [Windows Server containers in AKS limitations][aks-windows-limitations].
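As a sketch, adding a Windows Server node pool to an existing cluster looks roughly like this (resource names are placeholders; note that Windows node pool names are limited to six characters):

```azurecli
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npwin \
    --os-type Windows \
    --node-count 1
```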
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
Currently, using GPU-enabled node pools is only available for Linux node pools.
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see [Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI][aks-quickstart].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
For information on using Azure Kubernetes Service with Azure Machine Learning, s
[az-group-create]: /cli/azure/group#az_group_create [az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
-[aks-quickstart]: kubernetes-walkthrough.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[aks-spark]: spark-job.md [gpu-skus]: ../virtual-machines/sizes-gpu.md [install-azure-cli]: /cli/azure/install-azure-cli
aks Ingress Internal Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-internal-ip.md
You can also:
[aks-http-app-routing]: http-application-routing.md [aks-ingress-own-tls]: ingress-own-tls.md [client-source-ip]: concepts-network.md#ingress-controllers
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-powershell]: kubernetes-walkthrough-powershell.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
[aks-configure-kubenet-networking]: configure-kubenet.md [aks-configure-advanced-networking]: configure-azure-cni.md [aks-supported versions]: supported-kubernetes-versions.md
aks Ingress Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-static-ip.md
You can also:
[aks-ingress-tls]: ingress-tls.md [aks-http-app-routing]: http-application-routing.md [aks-ingress-own-tls]: ingress-own-tls.md
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-powershell]: kubernetes-walkthrough-powershell.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
[client-source-ip]: concepts-network.md#ingress-controllers [aks-static-ip]: static-ip.md [aks-supported versions]: supported-kubernetes-versions.md
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
You can also:
[aks-ingress-basic]: ingress-basic.md [aks-http-app-routing]: http-application-routing.md [aks-ingress-own-tls]: ingress-own-tls.md
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-powershell]: kubernetes-walkthrough-powershell.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
[client-source-ip]: concepts-network.md#ingress-controllers [install-azure-cli]: /cli/azure/install-azure-cli [aks-supported versions]: supported-kubernetes-versions.md
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
To restrict access to your applications in Azure Kubernetes Service (AKS), you c
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
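The mechanism this article builds on is a `LoadBalancer` service carrying the internal load balancer annotation; a minimal sketch (the app selector here is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer   # receives a private IP from the cluster's subnet instead of a public IP
  ports:
  - port: 80
  selector:
    app: internal-app
```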
Learn more about Kubernetes services at the [Kubernetes services documentation][
[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create [azure-lb-comparison]: ../load-balancer/skus.md [use-kubenet]: configure-kubenet.md
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [aks-sp]: kubernetes-service-principal.md#delegate-access-to-other-azure-resources [different-subnet]: #specify-a-different-subnet
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance. Because the Kubernetes masters are managed by Azure, you only manage and maintain the agent nodes. AKS itself is free; you pay only for the agent nodes within your clusters, not for the masters. You can create an AKS cluster using:
-* [The Azure CLI](kubernetes-walkthrough.md)
-* [The Azure portal](kubernetes-walkthrough-portal.md)
-* [Azure PowerShell](kubernetes-walkthrough-powershell.md)
-* Using template-driven deployment options, like [Azure Resource Manager templates](kubernetes-walkthrough-rm-template.md), [Bicep](../azure-resource-manager/bicep/overview.md) and Terraform
+* [The Azure CLI][aks-quickstart-cli]
+* [The Azure portal][aks-quickstart-portal]
+* [Azure PowerShell][aks-quickstart-powershell]
+* Template-driven deployment options, like [Azure Resource Manager templates][aks-quickstart-template], [Bicep](../azure-resource-manager/bicep/overview.md), and Terraform.
-When you deploy an AKS cluster, the Kubernetes master and all nodes are deployed and configured for you. Advanced networking, Azure Active Directory (Azure AD) integration, monitoring, and other features can be configured during the deployment process.
+When you deploy an AKS cluster, the Kubernetes master and all nodes are deployed and configured for you. Advanced networking, Azure Active Directory (Azure AD) integration, monitoring, and other features can be configured during the deployment process.
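For example, a minimal end-to-end sketch of the CLI option above (the resource names and region are placeholders):

```azurecli
# Create a resource group, then a one-node AKS cluster inside it.
az group create --name myResourceGroup --location eastus
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 1 \
    --generate-ssh-keys
```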
For more information on Kubernetes basics, see [Kubernetes core concepts for AKS][concepts-clusters-workloads].
AKS is compliant with SOC, ISO, PCI DSS, and HIPAA. For more information, see [O
Learn more about deploying and managing AKS with the Azure CLI Quickstart. > [!div class="nextstepaction"]
-> [Deploy an AKS Cluster using Azure CLI][aks-cli]
+> [Deploy an AKS Cluster using Azure CLI][aks-quickstart-cli]
<!-- LINKS - external --> [aks-engine]: https://github.com/Azure/aks-engine
Learn more about deploying and managing AKS with the Azure CLI Quickstart.
<!-- LINKS - internal --> [acr-docs]: ../container-registry/container-registry-intro.md [aks-aad]: ./azure-ad-integration-cli.md
-[aks-cli]: ./kubernetes-walkthrough.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
+[aks-quickstart-template]: ./learn/quick-kubernetes-deploy-rm-template.md
[aks-gpu]: ./gpu-cluster.md [aks-http-routing]: ./http-application-routing.md [aks-networking]: ./concepts-network.md
-[aks-portal]: ./kubernetes-walkthrough-portal.md
[aks-scale]: ./tutorial-kubernetes-scale.md [aks-upgrade]: ./upgrade-cluster.md [azure-dev-spaces]: /previous-versions/azure/dev-spaces/
aks Kubelet Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubelet-logs.md
This article shows you how you can use `journalctl` to view the *kubelet* logs o
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
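For reference, once an SSH session to a node is open (see the next section), the kubelet logs can be read straight from journald; a minimal sketch:

```console
# Stream the kubelet service logs without the journal's metadata columns.
sudo journalctl -u kubelet -o cat
```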
## Create an SSH connection
If you need additional troubleshooting information from the Kubernetes master, s
<!-- LINKS - internal --> [aks-ssh]: ssh.md [aks-master-logs]: monitor-aks-reference.md#resource-logs
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[aks-master-logs]: monitor-aks-reference.md#resource-logs [azure-container-logs]: ../azure-monitor/containers/container-insights-overview.md
aks Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-helm.md
This article shows you how to configure and use Helm in a Kubernetes cluster on
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
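If the ACR integration isn't in place yet, one way to wire it up on an existing cluster is shown below (the registry name is a placeholder; see the linked article for the authoritative steps):

```azurecli
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --attach-acr myContainerRegistry
```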
For more information about managing Kubernetes application deployments with Helm
<!-- LINKS - internal --> [acr-helm]: ../container-registry/container-registry-helm-repos.md [aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[taints]: operator-best-practices-advanced-scheduler.md
aks Kubernetes Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-portal.md
The Kubernetes resource view from the Azure portal replaces the AKS dashboard ad
## Prerequisites
-To view Kubernetes resources in the Azure portal, you need an AKS cluster. Any cluster is supported, but if using Azure Active Directory (Azure AD) integration, your cluster must use [AKS-managed Azure AD integration][aks-managed-aad]. If your cluster uses legacy Azure AD, you can upgrade your cluster in the portal or with the [Azure CLI][cli-aad-upgrade]. You can also [use the Azure portal][portal-cluster] to create a new AKS cluster.
+To view Kubernetes resources in the Azure portal, you need an AKS cluster. Any cluster is supported, but if using Azure Active Directory (Azure AD) integration, your cluster must use [AKS-managed Azure AD integration][aks-managed-aad]. If your cluster uses legacy Azure AD, you can upgrade your cluster in the portal or with the [Azure CLI][cli-aad-upgrade]. You can also [use the Azure portal][aks-quickstart-portal] to create a new AKS cluster.
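As a rough sketch of that CLI upgrade path (the group object ID below is a placeholder; the linked article remains the authoritative reference):

```azurecli
# Move a legacy Azure AD cluster to AKS-managed Azure AD integration.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-aad \
    --aad-admin-group-object-ids <admin-group-object-id>
```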
## View Kubernetes resources
To see the Kubernetes resources, navigate to your AKS cluster in the Azure porta
### Deploy an application
-In this example, we'll use our sample AKS cluster to deploy the Azure Vote application from the [AKS quickstart][portal-quickstart].
+In this example, we'll use our sample AKS cluster to deploy the Azure Vote application from the [AKS quickstart][aks-quickstart-portal].
1. Select **Add** from any of the resource views (Namespace, Workloads, Services and ingresses, Storage, or Configuration).
-1. Paste the YAML for the Azure Vote application from the [AKS quickstart][portal-quickstart].
-1. Select **Add** at the bottom of the YAML editor to deploy the application.
+1. Paste the YAML for the Azure Vote application from the [AKS quickstart][aks-quickstart-portal].
+1. Select **Add** at the bottom of the YAML editor to deploy the application.
Once the YAML file is added, the resource viewer shows both Kubernetes services that were created: the internal service (azure-vote-back), and the external service (azure-vote-front) to access the Azure Vote application. The external service includes a linked external IP address so you can easily view the application in your browser.
Once the YAML file is added, the resource viewer shows both Kubernetes services
### Monitor deployment insights
-AKS clusters with [Azure Monitor for containers][enable-monitor] enabled can quickly view deployment and other insights. From the Kubernetes resources view, users can see the live status of individual deployments, including CPU and memory usage, as well as transition to Azure monitor for more in-depth information about specific nodes and containers. Here's an example of deployment insights from a sample AKS cluster:
+In AKS clusters with [Container insights][enable-monitor] enabled, you can quickly view deployment and other insights. From the Kubernetes resources view, users can see the live status of individual deployments, including CPU and memory usage, and can transition to Azure Monitor for more in-depth information about specific nodes and containers. Here's an example of deployment insights from a sample AKS cluster:
:::image type="content" source="media/kubernetes-portal/deployment-insights.png" alt-text="Deployment insights displayed in the Azure portal." lightbox="media/kubernetes-portal/deployment-insights.png":::
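For clusters that don't have Container insights yet, it can typically be enabled after the fact with the monitoring add-on (resource names below are placeholders):

```azurecli
az aks enable-addons \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --addons monitoring
```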
This article showed you how to access Kubernetes resources for your AKS cluster.
<!-- LINKS - internal --> [concepts-identity]: concepts-identity.md
-[portal-quickstart]: kubernetes-walkthrough-portal.md#run-the-application
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
[deployments]: concepts-clusters-workloads.md#deployments-and-yaml-manifests [aks-managed-aad]: managed-aad.md [cli-aad-upgrade]: managed-aad.md#upgrading-to-aks-managed-azure-ad-integration [enable-monitor]: ../azure-monitor/containers/container-insights-enable-existing-clusters.md
-[portal-cluster]: kubernetes-walkthrough-portal.md
aks Kubernetes Walkthrough Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-walkthrough-portal.md
- Title: 'Quickstart: Deploy an AKS cluster by using the Azure portal'-
-description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure portal.
-- Previously updated : 1/13/2022-
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
--
-# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal
-
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
-* Deploy an AKS cluster using the Azure portal.
-* Run a multi-container application with a web front-end and a Redis instance in the cluster.
-* Monitor the health of the cluster and pods that run your application.
--
-This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Prerequisites
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Create an AKS cluster
-
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-
-2. Select **Containers** > **Kubernetes Service**.
-
-3. On the **Basics** page, configure the following options:
- - **Project details**:
- * Select an Azure **Subscription**.
- * Select or create an Azure **Resource group**, such as *myResourceGroup*.
- - **Cluster details**:
- * Ensure the **Preset configuration** is *Standard ($$)*. For more details on preset configurations, see [Cluster configuration presets in the Azure portal][preset-config].
- * Enter a **Kubernetes cluster name**, such as *myAKSCluster*.
- * Select a **Region** and **Kubernetes version** for the AKS cluster.
- - **Primary node pool**:
- * Leave the default values selected.
-
- :::image type="content" source="media/kubernetes-walkthrough-portal/create-cluster-basics.png" alt-text="Create AKS cluster - provide basic information":::
-
- > [!NOTE]
- > You can change the preset configuration when creating your cluster by selecting *View all preset configurations* and choosing a different option.
- > :::image type="content" source="media/kubernetes-walkthrough-portal/cluster-preset-options.png" alt-text="Create AKS cluster - portal preset options":::
-
-4. Select **Next: Node pools** when complete.
-
-5. Keep the default **Node pools** options. At the bottom of the screen, click **Next: Authentication**.
- > [!CAUTION]
- > Newly created Azure AD service principals may take several minutes to propagate and become available, causing "service principal not found" errors and validation failures in Azure portal. If you hit this bump, please visit [our troubleshooting article](troubleshooting.md#received-an-error-saying-my-service-principal-wasnt-found-or-is-invalid-when-i-try-to-create-a-new-cluster) for mitigation.
-
-6. On the **Authentication** page, configure the following options:
- - Create a new cluster identity by either:
- * Leaving the **Authentication** field with **System-assigned managed identity**, or
- * Choosing **Service Principal** to use a service principal.
- * Select *(new) default service principal* to create a default service principal, or
- * Select *Configure service principal* to use an existing one. You will need to provide the existing principal's SPN client ID and secret.
- - Enable the Kubernetes role-based access control (Kubernetes RBAC) option to provide more fine-grained control over access to the Kubernetes resources deployed in your AKS cluster.
-
- By default, *Basic* networking is used, and Azure Monitor for containers is enabled.
-
-7. Click **Review + create** and then **Create** when validation completes.
--
-8. It takes a few minutes to create the AKS cluster. When your deployment is complete, navigate to your resource by either:
- * Clicking **Go to resource**, or
- * Browsing to the AKS cluster resource group and selecting the AKS resource.
- * For example, in the cluster dashboard below, browsing to *myResourceGroup* and selecting the *myAKSCluster* resource.
-
- :::image type="content" source="media/kubernetes-walkthrough-portal/aks-portal-dashboard.png" alt-text="Example AKS dashboard in the Azure portal":::
-
-## Connect to the cluster
-
-To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
-
-1. Open Cloud Shell using the `>_` button on the top of the Azure portal.
-
- ![Open the Azure Cloud Shell in the portal](media/kubernetes-walkthrough-portal/aks-cloud-shell.png)
-
- > [!NOTE]
- > To perform these operations in a local shell installation:
- > 1. Verify Azure CLI is installed.
- > 2. Connect to Azure via the `az login` command.
-
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command downloads credentials and configures the Kubernetes CLI to use them.
-
- ```azurecli
- az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
- ```
-
-3. Verify the connection to your cluster using `kubectl get` to return a list of the cluster nodes.
-
- ```console
- kubectl get nodes
- ```
-
- Output shows the nodes created in the previous steps. Make sure the node status is *Ready*:
-
- ```output
- NAME STATUS ROLES AGE VERSION
- aks-agentpool-12345678-vmss000000 Ready agent 23m v1.19.11
- aks-agentpool-12345678-vmss000001 Ready agent 24m v1.19.11
- ```
-
-## Run the application
-
-A Kubernetes manifest file defines a cluster's desired state, like which container images to run.
-
-In this quickstart, you will use a manifest to create all objects needed to run the Azure Vote application. This manifest includes two Kubernetes deployments:
-* The sample Azure Vote Python applications.
-* A Redis instance.
-
-Two Kubernetes Services are also created:
-* An internal service for the Redis instance.
-* An external service to access the Azure Vote application from the internet.
-
-1. In the Cloud Shell, use an editor to create a file named `azure-vote.yaml`, such as:
- * `code azure-vote.yaml`
- * `nano azure-vote.yaml`, or
- * `vi azure-vote.yaml`.
-
-1. Copy in the following YAML definition:
-
- ```yaml
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: azure-vote-back
- spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-back
- template:
- metadata:
- labels:
- app: azure-vote-back
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
- env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 6379
- name: redis
-
- apiVersion: v1
- kind: Service
- metadata:
- name: azure-vote-back
- spec:
- ports:
- - port: 6379
- selector:
- app: azure-vote-back
-
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: azure-vote-front
- spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-front
- template:
- metadata:
- labels:
- app: azure-vote-front
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
-
- apiVersion: v1
- kind: Service
- metadata:
- name: azure-vote-front
- spec:
- type: LoadBalancer
- ports:
- - port: 80
- selector:
- app: azure-vote-front
- ```
-
-1. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest:
-
- ```console
- kubectl apply -f azure-vote.yaml
- ```
-
- Output shows the successfully created deployments and services:
-
- ```output
- deployment "azure-vote-back" created
- service "azure-vote-back" created
- deployment "azure-vote-front" created
- service "azure-vote-front" created
- ```
-
-## Test the application
-
-When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-
-To monitor progress, use the `kubectl get service` command with the `--watch` argument.
-
-```console
-kubectl get service azure-vote-front --watch
-```
-
-The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
-
-```output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
-```
-
-Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
--
-```output
-azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
-```
-
-To see the Azure Vote app in action, open a web browser to the external IP address of your service.
--
-## Monitor health and logs
-
-When you created the cluster, Azure Monitor for containers was enabled. Azure Monitor for containers provides health metrics for both the AKS cluster and pods running on the cluster.
-
-Metric data takes a few minutes to populate in the Azure portal. To see current health status, uptime, and resource usage for the Azure Vote pods:
-
-1. Browse back to the AKS resource in the Azure portal.
-1. Under **Monitoring** on the left-hand side, choose **Insights**.
-1. Across the top, choose to **+ Add Filter**.
-1. Select **Namespace** as the property, then choose *\<All but kube-system\>*.
-1. Select **Containers** to view them.
-
-The `azure-vote-back` and `azure-vote-front` containers will display, as shown in the following example:
--
-To view logs for the `azure-vote-front` pod, select **View in Log Analytics** from the top of the *azure-vote-front | Overview* area on the right side. These logs include the *stdout* and *stderr* streams from the container.
--
-## Delete cluster
-
-To avoid Azure charges, clean up your unnecessary resources. Select the **Delete** button on the AKS cluster dashboard. You can also use the [az aks delete][az-aks-delete] command in the Cloud Shell:
-
-```azurecli
-az aks delete --resource-group myResourceGroup --name myAKSCluster --yes --no-wait
-```
-> [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
->
-> If you used a managed identity, the identity is managed by the platform and does not require removal.
-
-## Get the code
-
-Pre-existing container images were used in this quickstart to create a Kubernetes deployment. The related application code, Dockerfile, and Kubernetes manifest file are [available on GitHub.][azure-vote-app]
-
-## Next steps
-
-In this quickstart, you deployed a Kubernetes cluster and then deployed a multi-container application to it. Access the Kubernetes web dashboard for your AKS cluster.
-
-To learn more about AKS by walking through a complete example, including building an application, deploying from Azure Container Registry, updating a running application, and scaling and upgrading your cluster, continue to the Kubernetes cluster tutorial.
-
-> [!div class="nextstepaction"]
-> [AKS tutorial][aks-tutorial]
-
-<!-- LINKS - external -->
-[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
-[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubernetes-documentation]: https://kubernetes.io/docs/home/
-
-<!-- LINKS - internal -->
-[kubernetes-concepts]: concepts-clusters-workloads.md
-[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
-[az-aks-delete]: /cli/azure/aks#az_aks_delete
-[aks-monitor]: ../azure-monitor/containers/container-insights-overview.md
-[aks-network]: ./concepts-network.md
-[aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
-[http-routing]: ./http-application-routing.md
-[preset-config]: ./quotas-skus-regions.md#cluster-configuration-presets-in-the-azure-portal
-[sp-delete]: kubernetes-service-principal.md#additional-considerations
aks Kubernetes Walkthrough Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-walkthrough-powershell.md
- Title: 'Quickstart: Deploy an AKS cluster by using PowerShell'
-description: Learn how to quickly create a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using PowerShell.
-- Previously updated : 01/13/2022-
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
--
-# Quickstart: Deploy an Azure Kubernetes Service cluster using PowerShell
-
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
-* Deploy an AKS cluster using PowerShell.
-* Run a multi-container application with a web front-end and a Redis instance in the cluster.
-
-To learn more about creating a Windows Server node pool, see
-[Create an AKS cluster that supports Windows Server containers][windows-container-powershell].
-
-![Voting app deployed in Azure Kubernetes Service](./media/kubernetes-walkthrough-powershell/voting-app-deployed-in-azure-kubernetes-service.png)
-
-This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see
-[Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
-
-If you're running PowerShell locally, install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell].
--
-If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the
-[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
-
-```azurepowershell-interactive
-Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
-```
-
-## Create a resource group
-
-An [Azure resource group](../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you will be prompted to specify a location. This location is:
-* The storage location of your resource group metadata.
-* Where your resources will run in Azure if you don't specify another region during resource creation.
-
-The following example creates a resource group named **myResourceGroup** in the **eastus** region.
-
-Create a resource group using the [New-AzResourceGroup][new-azresourcegroup]
-cmdlet.
-
-```azurepowershell-interactive
-New-AzResourceGroup -Name myResourceGroup -Location eastus
-```
-
-Output for successfully created resource group:
-
-```plaintext
-ResourceGroupName : myResourceGroup
-Location : eastus
-ProvisioningState : Succeeded
-Tags :
-ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup
-```
-
-## Create AKS cluster
-
-1. Generate an SSH key pair using the `ssh-keygen` command-line utility. For more details, see:
- * [Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md)
- * [How to use SSH keys with Windows on Azure](../virtual-machines/linux/ssh-from-windows.md)
-
-1. Create an AKS cluster using the [New-AzAksCluster][new-azakscluster] cmdlet.
-
- The following example creates a cluster named **myAKSCluster** with one node.
-
- ```azurepowershell-interactive
- New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1
- ```
-
-After a few minutes, the command completes and returns information about the cluster.
-
-> [!NOTE]
-> When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](./faq.md#why-are-two-resource-groups-created-with-aks)
-
-## Connect to the cluster
-
-To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
-
-1. Install `kubectl` locally using the `Install-AzAksKubectl` cmdlet:
-
- ```azurepowershell
- Install-AzAksKubectl
- ```
-
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following cmdlet downloads credentials and configures the Kubernetes CLI to use them.
-
- ```azurepowershell-interactive
- Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
- ```
-
-3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
-
- ```azurepowershell-interactive
- kubectl get nodes
- ```
-
- Output shows the single node created in the previous steps. Make sure the node status is *Ready*:
-
- ```plaintext
- NAME STATUS ROLES AGE VERSION
- aks-nodepool1-31718369-0 Ready agent 6m44s v1.15.10
- ```
-
-## Run the application
-
-A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
-
-In this quickstart, you will use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
-* The sample Azure Vote Python applications.
-* A Redis instance.
-
-Two [Kubernetes Services][kubernetes-service] are also created:
-* An internal service for the Redis instance.
-* An external service to access the Azure Vote application from the internet.
-
-1. Create a file named `azure-vote.yaml`.
- * If you use the Azure Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical system.
-1. Copy in the following YAML definition:
-
- ```yaml
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: azure-vote-back
- spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-back
- template:
- metadata:
- labels:
- app: azure-vote-back
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
- env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 6379
- name: redis
-
- apiVersion: v1
- kind: Service
- metadata:
- name: azure-vote-back
- spec:
- ports:
- - port: 6379
- selector:
- app: azure-vote-back
-
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: azure-vote-front
- spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-front
- template:
- metadata:
- labels:
- app: azure-vote-front
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
-
- apiVersion: v1
- kind: Service
- metadata:
- name: azure-vote-front
- spec:
- type: LoadBalancer
- ports:
- - port: 80
- selector:
- app: azure-vote-front
- ```
-
-1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-
- ```azurepowershell-interactive
- kubectl apply -f azure-vote.yaml
- ```
-
- Output shows the successfully created deployments and services:
-
- ```plaintext
- deployment.apps/azure-vote-back created
- service/azure-vote-back created
- deployment.apps/azure-vote-front created
- service/azure-vote-front created
- ```
-
-## Test the application
-
-When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-
-Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
-
-```azurepowershell-interactive
-kubectl get service azure-vote-front --watch
-```
-
-The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
-
-```plaintext
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
-```
-
-Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
-
-```plaintext
-azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
-```
-
-To see the Azure Vote app in action, open a web browser to the external IP address of your service.
-
-![Voting app deployed in Azure Kubernetes Service](./media/kubernetes-walkthrough-powershell/voting-app-deployed-in-azure-kubernetes-service.png)
-
-## Delete the cluster
-
-To avoid Azure charges, clean up your unnecessary resources. Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
-
-> [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
->
-> If you used a managed identity, the identity is managed by the platform and does not require removal.
-
-## Get the code
-
-Pre-existing container images were used in this quickstart to create a Kubernetes deployment. The related application code, Dockerfile, and Kubernetes manifest file are [available on GitHub.][azure-vote-app]
-
-## Next steps
-
-In this quickstart, you deployed a Kubernetes cluster and then deployed a multi-container application to it.
-
-To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
-
-> [!div class="nextstepaction"]
-> [AKS tutorial][aks-tutorial]
-
-<!-- LINKS - external -->
-[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[azure-dev-spaces]: /previous-versions/azure/dev-spaces/
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
-
-<!-- LINKS - internal -->
-[windows-container-powershell]: windows-container-powershell.md
-[kubernetes-concepts]: concepts-clusters-workloads.md
-[install-azure-powershell]: /powershell/azure/install-az-ps
-[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup
-[new-azakscluster]: /powershell/module/az.aks/new-azakscluster
-[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
-[kubernetes-deployment]: concepts-clusters-workloads.md#deployments-and-yaml-manifests
-[kubernetes-service]: concepts-network.md#services
-[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
-[sp-delete]: kubernetes-service-principal.md#additional-considerations
-[aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
aks Kubernetes Walkthrough Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-walkthrough-rm-template.md
- Title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster
-description: Learn how to quickly create a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS)
-- Previously updated : 03/15/2021-
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
--
-# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using an ARM template
-
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
-* Deploy an AKS cluster using an Azure Resource Manager template.
-* Run a multi-container application with a web front-end and a Redis instance in the cluster.
-
-![Image of browsing to Azure Vote](media/container-service-kubernetes-walkthrough/azure-voting-application.png)
--
-This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
-
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.kubernetes%2Faks%2Fazuredeploy.json)
-- This article requires version 2.0.61 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-- To create an AKS cluster using a Resource Manager template, you provide an SSH public key. If you need this resource, see the following section; otherwise skip to the [Review the template](#review-the-template) section.
-### Create an SSH key pair
-
-To access AKS nodes, you connect using an SSH key pair (public and private), which you generate using the `ssh-keygen` command. By default, these files are created in the *~/.ssh* directory. Running the `ssh-keygen` command will overwrite any SSH key pair with the same name already existing in the given location.
-
-1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser.
-
-1. Run the `ssh-keygen` command. The following example creates an SSH key pair using RSA encryption and a bit length of 4096:
-
- ```console
- ssh-keygen -t rsa -b 4096
- ```
-
-For more information about creating SSH keys, see [Create and manage SSH keys for authentication in Azure][ssh-keys].
-
-## Review the template
-
-The template used in this quickstart is from [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/aks/).
--
-For more AKS samples, see the [AKS quickstart templates][aks-quickstart-templates] site.
-
-## Deploy the template
-
-1. Select the following button to sign in to Azure and open a template.
-
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.kubernetes%2Faks%2Fazuredeploy.json)
-
-2. Select or enter the following values.
-
- For this quickstart, leave the default values for the *OS Disk Size GB*, *Agent Count*, *Agent VM Size*, *OS Type*, and *Kubernetes Version*. Provide your own values for the following template parameters:
-
- * **Subscription**: Select an Azure subscription.
- * **Resource group**: Select **Create new**. Enter a unique name for the resource group, such as *myResourceGroup*, then choose **OK**.
- * **Location**: Select a location, such as **East US**.
- * **Cluster name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*.
- * **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myakscluster*.
- * **Linux Admin Username**: Enter a username to connect using SSH, such as *azureuser*.
- * **SSH RSA Public Key**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*).
-
- ![Resource Manager template to create an Azure Kubernetes Service cluster in the portal](./media/kubernetes-walkthrough-rm-template/create-aks-cluster-using-template-portal.png)
-
-3. Select **Review + Create**.
-
-It takes a few minutes to create the AKS cluster. Wait for the cluster to be successfully deployed before you move on to the next step.
-
-## Validate the deployment
-
-### Connect to the cluster
-
-To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
-
-1. Install `kubectl` locally using the [az aks install-cli][az-aks-install-cli] command:
-
- ```azurecli
- az aks install-cli
- ```
-
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
-
- ```azurecli-interactive
- az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
- ```
-
-3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
-
- ```console
- kubectl get nodes
- ```
-
- Output shows the nodes created in the previous steps. Make sure that the status for all the nodes is *Ready*:
-
- ```output
- NAME STATUS ROLES AGE VERSION
- aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6
- aks-agentpool-41324942-1 Ready agent 6m46s v1.12.6
- aks-agentpool-41324942-2 Ready agent 6m45s v1.12.6
- ```
-
-### Run the application
-
-A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
-
-In this quickstart, you will use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
-* The sample Azure Vote Python applications.
-* A Redis instance.
-
-Two [Kubernetes Services][kubernetes-service] are also created:
-* An internal service for the Redis instance.
-* An external service to access the Azure Vote application from the internet.
-
-1. Create a file named `azure-vote.yaml`.
-    * If you use the Azure Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical system.
-1. Copy in the following YAML definition:
-
- ```yaml
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: azure-vote-back
- spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-back
- template:
- metadata:
- labels:
- app: azure-vote-back
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
- env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 6379
- name: redis
-    ---
- apiVersion: v1
- kind: Service
- metadata:
- name: azure-vote-back
- spec:
- ports:
- - port: 6379
- selector:
- app: azure-vote-back
-    ---
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: azure-vote-front
- spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-front
- template:
- metadata:
- labels:
- app: azure-vote-front
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
-    ---
- apiVersion: v1
- kind: Service
- metadata:
- name: azure-vote-front
- spec:
- type: LoadBalancer
- ports:
- - port: 80
- selector:
- app: azure-vote-front
- ```
-
-1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-
- ```console
- kubectl apply -f azure-vote.yaml
- ```
-
-    Output shows the successfully created deployments and services:
-
- ```output
- deployment "azure-vote-back" created
- service "azure-vote-back" created
- deployment "azure-vote-front" created
- service "azure-vote-front" created
- ```
-
-### Test the application
-
-When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-
-Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
-
-```console
-kubectl get service azure-vote-front --watch
-```
-
-The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
-
-```output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
-```
-
-Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
-
-```output
-azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
-```
-
-To see the Azure Vote app in action, open a web browser to the external IP address of your service.
-
-![Image of browsing to Azure Vote](media/container-service-kubernetes-walkthrough/azure-voting-application.png)
-
-## Clean up resources
-
-To avoid Azure charges, clean up your unnecessary resources. Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
-
-```azurecli-interactive
-az group delete --name myResourceGroup --yes --no-wait
-```
-
-> [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
->
-> If you used a managed identity, the identity is managed by the platform and does not require removal.
-
-## Get the code
-
-Pre-existing container images were used in this quickstart to create a Kubernetes deployment. The related application code, Dockerfile, and Kubernetes manifest file are [available on GitHub.][azure-vote-app]
-
-## Next steps
-
-In this quickstart, you deployed a Kubernetes cluster and then deployed a multi-container application to it.
-
-To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
-
-> [!div class="nextstepaction"]
-> [AKS tutorial][aks-tutorial]
-
-<!-- LINKS - external -->
-[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
-[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[azure-dev-spaces]: /previous-versions/azure/dev-spaces/
-[aks-quickstart-templates]: https://azure.microsoft.com/resources/templates/?term=Azure+Kubernetes+Service
-
-<!-- LINKS - internal -->
-[kubernetes-concepts]: concepts-clusters-workloads.md
-[aks-monitor]: ../azure-monitor/containers/container-insights-onboard.md
-[aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
-[az-aks-browse]: /cli/azure/aks#az_aks_browse
-[az-aks-create]: /cli/azure/aks#az_aks_create
-[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
-[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
-[az-group-create]: /cli/azure/group#az_group_create
-[az-group-delete]: /cli/azure/group#az_group_delete
-[azure-cli-install]: /cli/azure/install-azure-cli
-[sp-delete]: kubernetes-service-principal.md#additional-considerations
-[azure-portal]: https://portal.azure.com
-[kubernetes-deployment]: concepts-clusters-workloads.md#deployments-and-yaml-manifests
-[kubernetes-service]: concepts-network.md#services
-[ssh-keys]: ../virtual-machines/linux/create-ssh-keys-detailed.md
-[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
aks Kubernetes Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-walkthrough.md
- Title: 'Quickstart: Deploy an AKS cluster by using Azure CLI'
-description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure CLI.
-- Previously updated : 01/18/2022-
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
--
-# Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI
-
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
-* Deploy an AKS cluster using the Azure CLI.
-* Run a multi-container application with a web front-end and a Redis instance in the cluster.
-* Monitor the health of the cluster and pods that run your application.
-
- ![Voting app deployed in Azure Kubernetes Service](./media/container-service-kubernetes-walkthrough/voting-app-deployed-in-azure-kubernetes-service.png)
-
-This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
--
-To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers][windows-container-cli].
-- This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-
-- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](concepts-identity.md).
-
-- Verify *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* are registered on your subscription. To check the registration status:
- ```azurecli
- az provider show -n Microsoft.OperationsManagement -o table
- az provider show -n Microsoft.OperationalInsights -o table
- ```
-
- If they are not registered, register *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* using:
-
- ```azurecli
- az provider register --namespace Microsoft.OperationsManagement
- az provider register --namespace Microsoft.OperationalInsights
- ```
-
-> [!NOTE]
-> Run the commands in this quickstart as administrator if you plan to run them locally instead of in Azure Cloud Shell.
-
-## Create a resource group
-
-An [Azure resource group](../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you will be prompted to specify a location. This location is:
-* The storage location of your resource group metadata.
-* Where your resources will run in Azure if you don't specify another region during resource creation.
-
-The following example creates a resource group named *myResourceGroup* in the *eastus* location.
-
-Create a resource group using the [az group create][az-group-create] command.
--
-```azurecli-interactive
-az group create --name myResourceGroup --location eastus
-```
-
-Output for successfully created resource group:
-
-```json
-{
- "id": "/subscriptions/<guid>/resourceGroups/myResourceGroup",
- "location": "eastus",
- "managedBy": null,
- "name": "myResourceGroup",
- "properties": {
- "provisioningState": "Succeeded"
- },
- "tags": null
-}
-```
-
-## Create AKS cluster
-
-Create an AKS cluster using the [az aks create][az-aks-create] command with the *--enable-addons monitoring* parameter to enable [Azure Monitor container insights][azure-monitor-containers]. The following example creates a cluster named *myAKSCluster* with one node:
-
-```azurecli-interactive
-az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys
-```
-
-After a few minutes, the command completes and returns JSON-formatted information about the cluster.
-
-> [!NOTE]
-> When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](./faq.md#why-are-two-resource-groups-created-with-aks)
-
-## Connect to the cluster
-
-To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
-
-1. Install `kubectl` locally using the [az aks install-cli][az-aks-install-cli] command:
-
- ```azurecli
- az aks install-cli
- ```
-
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command:
- * Downloads credentials and configures the Kubernetes CLI to use them.
- * Uses `~/.kube/config`, the default location for the [Kubernetes configuration file][kubeconfig-file]. Specify a different location for your Kubernetes configuration file using *--file*.
--
- ```azurecli-interactive
- az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
- ```
-
-3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
-
- ```azurecli-interactive
- kubectl get nodes
- ```
-
- Output shows the single node created in the previous steps. Make sure the node status is *Ready*:
-
- ```output
- NAME STATUS ROLES AGE VERSION
- aks-nodepool1-31718369-0 Ready agent 6m44s v1.12.8
- ```
-
-## Run the application
-
-A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
-
-In this quickstart, you will use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
-* The sample Azure Vote Python applications.
-* A Redis instance.
-
-Two [Kubernetes Services][kubernetes-service] are also created:
-* An internal service for the Redis instance.
-* An external service to access the Azure Vote application from the internet.
-
-1. Create a file named `azure-vote.yaml`.
-    * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system.
-1. Copy in the following YAML definition:
-
- ```yaml
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: azure-vote-back
- spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-back
- template:
- metadata:
- labels:
- app: azure-vote-back
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
- env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 6379
- name: redis
-    ---
- apiVersion: v1
- kind: Service
- metadata:
- name: azure-vote-back
- spec:
- ports:
- - port: 6379
- selector:
- app: azure-vote-back
-    ---
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: azure-vote-front
- spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-front
- template:
- metadata:
- labels:
- app: azure-vote-front
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
-    ---
- apiVersion: v1
- kind: Service
- metadata:
- name: azure-vote-front
- spec:
- type: LoadBalancer
- ports:
- - port: 80
- selector:
- app: azure-vote-front
- ```
-
-1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-
- ```console
- kubectl apply -f azure-vote.yaml
- ```
-
-    Output shows the successfully created deployments and services:
-
- ```output
- deployment "azure-vote-back" created
- service "azure-vote-back" created
- deployment "azure-vote-front" created
- service "azure-vote-front" created
- ```
-
-## Test the application
-
-When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-
-Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
-
-```azurecli-interactive
-kubectl get service azure-vote-front --watch
-```
-
-The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
-
-```output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
-```
-
-Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
-
-```output
-azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
-```
-
-To see the Azure Vote app in action, open a web browser to the external IP address of your service.
-
-![Voting app deployed in Azure Kubernetes Service](./media/container-service-kubernetes-walkthrough/voting-app-deployed-in-azure-kubernetes-service.png)
-
-View the cluster nodes' and pods' health metrics captured by [Azure Monitor container insights][azure-monitor-containers] in the Azure portal.
-
-## Delete the cluster
-
-To avoid Azure charges, clean up your unnecessary resources. Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
-
-```azurecli-interactive
-az group delete --name myResourceGroup --yes --no-wait
-```
-
-> [!NOTE]
-> If the AKS cluster was created with system-assigned managed identity (default identity option used in this quickstart), the identity is managed by the platform and does not require removal.
->
-> If the AKS cluster was created with service principal as the identity option instead, then when you delete the cluster, the service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
-
-## Get the code
-
-Pre-existing container images were used in this quickstart to create a Kubernetes deployment. The related application code, Dockerfile, and Kubernetes manifest file are [available on GitHub.][azure-vote-app]
-
-## Next steps
-
-In this quickstart, you deployed a Kubernetes cluster and then deployed a multi-container application to it.
-
-To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
-
-> [!div class="nextstepaction"]
-> [AKS tutorial][aks-tutorial]
-
-This quickstart is for introductory purposes. For guidance on creating full solutions with AKS for production, see [AKS solution guidance][aks-solution-guidance].
-
-<!-- LINKS - external -->
-[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
-[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubeconfig-file]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
-
-<!-- LINKS - internal -->
-[kubernetes-concepts]: concepts-clusters-workloads.md
-[aks-monitor]: ../azure-monitor/containers/container-insights-onboard.md
-[aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
-[az-aks-browse]: /cli/azure/aks#az-aks-browse
-[az-aks-create]: /cli/azure/aks#az-aks-create
-[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
-[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli
-[az-group-create]: /cli/azure/group#az-group-create
-[az-group-delete]: /cli/azure/group#az-group-delete
-[azure-cli-install]: /cli/azure/install-azure-cli
-[azure-monitor-containers]: ../azure-monitor/containers/container-insights-overview.md
-[sp-delete]: kubernetes-service-principal.md#additional-considerations
-[azure-portal]: https://portal.azure.com
-[kubernetes-deployment]: concepts-clusters-workloads.md#deployments-and-yaml-manifests
-[kubernetes-service]: concepts-network.md#services
-[windows-container-cli]: windows-container-cli.md
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
+
+ Title: 'Quickstart: Deploy an AKS cluster by using Azure CLI'
+description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure CLI.
++ Last updated : 04/29/2022+
+#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
++
+# Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI
+
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
+
+* Deploy an AKS cluster using the Azure CLI.
+* Run a sample multi-container application with a web front-end and a Redis instance in the cluster.
++
+This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
++
+To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-cli.md).
++
+- This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+
+- If you have multiple Azure subscriptions, select the appropriate subscription in which the resources should be billed using the [az account](/cli/azure/account) command; for example (the subscription ID below is a placeholder for your own):
+
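+    ```azurecli
+    # Placeholder subscription ID: replace with one of your own.
+    az account set --subscription 00000000-0000-0000-0000-000000000000
+    ```
+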
+- Verify *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* are registered on your subscription. To check the registration status:
+
+ ```azurecli
+ az provider show -n Microsoft.OperationsManagement -o table
+ az provider show -n Microsoft.OperationalInsights -o table
+ ```
+
+ If they are not registered, register *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* using:
+
+ ```azurecli
+ az provider register --namespace Microsoft.OperationsManagement
+ az provider register --namespace Microsoft.OperationalInsights
+ ```
+
+> [!NOTE]
+> Run the commands in this quickstart with administrative privileges if you plan to run them locally instead of in Azure Cloud Shell.
+
+## Create a resource group
+
+An [Azure resource group](../../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are prompted to specify a location. This location is:
+
+* The storage location of your resource group metadata.
+* Where your resources will run in Azure if you don't specify another region during resource creation.
+
+The following example creates a resource group named *myResourceGroup* in the *eastus* location.
+
+Create a resource group using the [az group create][az-group-create] command.
+
+```azurecli-interactive
+az group create --name myResourceGroup --location eastus
+```
+
+The following example output shows successful creation of the resource group:
+
+```json
+{
+ "id": "/subscriptions/<guid>/resourceGroups/myResourceGroup",
+ "location": "eastus",
+ "managedBy": null,
+ "name": "myResourceGroup",
+ "properties": {
+ "provisioningState": "Succeeded"
+ },
+ "tags": null
+}
+```
+
+## Create AKS cluster
+
+Create an AKS cluster using the [az aks create][az-aks-create] command with the *--enable-addons monitoring* parameter to enable [Container insights][azure-monitor-containers]. The following example creates a cluster named *myAKSCluster* with one node:
+
+```azurecli-interactive
+az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys
+```
+
+After a few minutes, the command completes and returns JSON-formatted information about the cluster.
+
+> [!NOTE]
+> When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](../faq.md#why-are-two-resource-groups-created-with-aks)
+
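+To see the name of this automatically created node resource group, one option (assuming the cluster created above) is to query the cluster with `az aks show`:
+
+```azurecli-interactive
+az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
+```
+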
+## Connect to the cluster
+
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
+
+1. Install `kubectl` locally using the [az aks install-cli][az-aks-install-cli] command:
+
+ ```azurecli
+ az aks install-cli
+ ```
+
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command:
+ * Downloads credentials and configures the Kubernetes CLI to use them.
+    * Uses `~/.kube/config`, the default location for the [Kubernetes configuration file][kubeconfig-file]. Specify a different location for your Kubernetes configuration file using the *--file* argument.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
+
+3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
+
+ ```azurecli-interactive
+ kubectl get nodes
+ ```
+
+    The following example output shows the single node created in the previous steps. Make sure the node status is *Ready*:
+
+ ```output
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-31718369-0 Ready agent 6m44s v1.12.8
+ ```
+
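+As an optional extra check, the standard `kubectl cluster-info` command prints the address of the API server that `kubectl` is configured to talk to:
+
+```console
+kubectl cluster-info
+```
+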
+## Deploy the application
+
+A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
+
+In this quickstart, you will use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
+
+* The sample Azure Vote Python applications.
+* A Redis instance.
+
+Two [Kubernetes Services][kubernetes-service] are also created:
+
+* An internal service for the Redis instance.
+* An external service to access the Azure Vote application from the internet.
+
+1. Create a file named `azure-vote.yaml`.
+    * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system.
+1. Copy in the following YAML definition:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-back
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-back
+ template:
+ metadata:
+ labels:
+ app: azure-vote-back
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-back
+ image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ env:
+ - name: ALLOW_EMPTY_PASSWORD
+ value: "yes"
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 6379
+ name: redis
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-back
+ spec:
+ ports:
+ - port: 6379
+ selector:
+ app: azure-vote-back
+    ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-front
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-front
+ template:
+ metadata:
+ labels:
+ app: azure-vote-front
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-front
+ image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 80
+ env:
+ - name: REDIS
+ value: "azure-vote-back"
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-front
+ spec:
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: azure-vote-front
+ ```
+
+1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+
+ ```console
+ kubectl apply -f azure-vote.yaml
+ ```
+
+    The following example output shows the successfully created deployments and services:
+
+ ```output
+ deployment "azure-vote-back" created
+ service "azure-vote-back" created
+ deployment "azure-vote-front" created
+ service "azure-vote-front" created
+ ```
+
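+Before exposing the application, you can optionally confirm that both pods reached the *Running* state using the [kubectl get][kubectl-get] command:
+
+```console
+kubectl get pods
+```
+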
+## Test the application
+
+When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
+
+Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
+
+```azurecli-interactive
+kubectl get service azure-vote-front --watch
+```
+
+The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
+
+```output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
+```
+
+Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+
+```output
+azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
+```
+
+To see the Azure Vote app in action, open a web browser to the external IP address of your service.
++
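+If you prefer to check from the command line, a plain `curl` request against the external IP reported by the service (52.179.23.131 in the example output above) should return the voting app's HTML:
+
+```console
+curl http://52.179.23.131
+```
+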
+## Delete the cluster
+
+If you don't plan on going through the tutorials that follow, clean up your unnecessary resources to avoid Azure charges. Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
+
+```azurecli-interactive
+az group delete --name myResourceGroup --yes --no-wait
+```
+
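+Because the command runs with `--no-wait`, it returns before the deletion finishes. If you want to confirm that the resource group is gone, one option is to poll with `az group exists`:
+
+```azurecli-interactive
+az group exists --name myResourceGroup
+```
+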
+> [!NOTE]
+> Because the AKS cluster was created with a system-assigned managed identity (the default identity option used in this quickstart), the identity is managed by the platform and does not require removal.
+
+## Next steps
+
+In this quickstart, you deployed a Kubernetes cluster and then deployed a simple multi-container application to it.
+
+To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
+
+> [!div class="nextstepaction"]
+> [AKS tutorial][aks-tutorial]
+
+This quickstart is for introductory purposes. For guidance on creating full solutions with AKS for production, see [AKS solution guidance][aks-solution-guidance].
+
+<!-- LINKS - external -->
+[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
+[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubeconfig-file]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
+
+<!-- LINKS - internal -->
+[kubernetes-concepts]: ../concepts-clusters-workloads.md
+[aks-monitor]: ../../azure-monitor/containers/container-insights-onboard.md
+[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
+[az-aks-browse]: /cli/azure/aks#az-aks-browse
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli
+[az-group-create]: /cli/azure/group#az-group-create
+[az-group-delete]: /cli/azure/group#az-group-delete
+[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-monitor-containers]: ../../azure-monitor/containers/container-insights-overview.md
+[sp-delete]: ../kubernetes-service-principal.md#additional-considerations
+[azure-portal]: https://portal.azure.com
+[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
+[kubernetes-service]: ../concepts-network.md#services
+[windows-container-cli]: ../windows-container-cli.md
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
+
+ Title: 'Quickstart: Deploy an AKS cluster by using the Azure portal'
+
+description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure portal.
++ Last updated : 04/29/2022+
+#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
++
+# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal
+
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
+
+* Deploy an AKS cluster using the Azure portal.
+* Run a sample multi-container application with a web front-end and a Redis instance in the cluster.
++
+This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+
+## Prerequisites
++
+- If you are unfamiliar with using the Bash environment in Azure Cloud Shell, review [Overview of Azure Cloud Shell](../../cloud-shell/overview.md).
+
+- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+
+## Create an AKS cluster
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. On the Azure portal menu or from the **Home** page, select **Create a resource**.
+
+3. Select **Containers** > **Kubernetes Service**.
+
+4. On the **Basics** page, configure the following options:
+
+ - **Project details**:
+ * Select an Azure **Subscription**.
+ * Select or create an Azure **Resource group**, such as *myResourceGroup*.
+ - **Cluster details**:
+        * Ensure the **Preset configuration** is *Standard ($$)*. For more details on preset configurations, see [Cluster configuration presets in the Azure portal][preset-config].
+ * Enter a **Kubernetes cluster name**, such as *myAKSCluster*.
+ * Select a **Region** for the AKS cluster, and leave the default value selected for **Kubernetes version**.
+ * Select **99.5%** for **API server availability**.
+ - **Primary node pool**:
+ * Leave the default values selected.
+
+ :::image type="content" source="media/quick-kubernetes-deploy-portal/create-cluster-basics.png" alt-text="Screenshot of Create AKS cluster - provide basic information.":::
+
+ > [!NOTE]
+ > You can change the preset configuration when creating your cluster by selecting *Learn more and compare presets* and choosing a different option.
+ > :::image type="content" source="media/quick-kubernetes-deploy-portal/cluster-preset-options.png" alt-text="Screenshot of Create AKS cluster - portal preset options.":::
+
+5. Select **Next: Node pools** when complete.
+
+6. Keep the default **Node pools** options. At the bottom of the screen, click **Next: Access**.
+
+7. On the **Access** page, configure the following options:
+
+ - The default value for **Resource identity** is **System-assigned managed identity**. Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. For more details about managed identities, see [What are managed identities for Azure resources?](../../active-directory/managed-identities-azure-resources/overview.md).
+ - The Kubernetes role-based access control (RBAC) option is the default value to provide more fine-grained control over access to the Kubernetes resources deployed in your AKS cluster.
+
+ By default, *Basic* networking is used, and [Container insights](../../azure-monitor/containers/container-insights-overview.md) is enabled.
+
+8. Click **Review + create**. When you navigate to the **Review + create** tab, Azure runs validation on the settings that you have chosen. If validation passes, you can proceed to create the AKS cluster by selecting **Create**. If validation fails, the portal indicates which settings need to be modified.
+
+9. It takes a few minutes to create the AKS cluster. When your deployment is complete, navigate to your resource by either:
+ * Selecting **Go to resource**, or
+ * Browsing to the AKS cluster resource group and selecting the AKS resource. In this example you browse for *myResourceGroup* and select the resource *myAKSCluster*.
+
+ :::image type="content" source="media/quick-kubernetes-deploy-portal/aks-portal-dashboard.png" alt-text="Screenshot of AKS dashboard in the Azure portal.":::
+
+## Connect to the cluster
+
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. If you are unfamiliar with the Cloud Shell, review [Overview of Azure Cloud Shell](../../cloud-shell/overview.md).
+
+1. Open Cloud Shell using the `>_` button on the top of the Azure portal.
+
+ :::image type="content" source="media/quick-kubernetes-deploy-portal/aks-cloud-shell.png" alt-text="Screenshot of Open the Azure Cloud Shell in the portal option.":::
+
+ > [!NOTE]
+ > To perform these operations in a local shell installation:
+ >
+ > 1. Verify Azure CLI is installed.
+ > 2. Connect to Azure via the `az login` command.
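+    > For example, a minimal local setup (assuming the Azure CLI is already installed) is:
+    >
+    > ```azurecli
+    > az login
+    > ```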
+
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command downloads credentials and configures the Kubernetes CLI to use them.
+
+ ```azurecli
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
+
+3. Verify the connection to your cluster using `kubectl get` to return a list of the cluster nodes.
+
+ ```console
+ kubectl get nodes
+ ```
+
+ Output shows the single node created in the previous steps. Make sure the node status is *Ready*:
+
+ ```output
+ NAME STATUS ROLES AGE VERSION
+ aks-agentpool-12345678-vmss000000 Ready agent 23m v1.19.11
+ aks-agentpool-12345678-vmss000001 Ready agent 24m v1.19.11
+ ```
+
+## Deploy the application
+
+A Kubernetes manifest file defines a cluster's desired state, like which container images to run.
+
+In this quickstart, you will use a manifest to create all objects needed to run the Azure Vote application. This manifest includes two Kubernetes deployments:
+
+* The sample Azure Vote Python applications.
+* A Redis instance.
+
+Two Kubernetes Services are also created:
+
+* An internal service for the Redis instance.
+* An external service to access the Azure Vote application from the internet.
+
+1. In the Cloud Shell, use an editor to create a file named `azure-vote.yaml`, such as:
+ * `code azure-vote.yaml`
+ * `nano azure-vote.yaml`, or
+ * `vi azure-vote.yaml`.
+
+1. Copy in the following YAML definition:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-back
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-back
+ template:
+ metadata:
+ labels:
+ app: azure-vote-back
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-back
+ image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ env:
+ - name: ALLOW_EMPTY_PASSWORD
+ value: "yes"
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 6379
+ name: redis
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-back
+ spec:
+ ports:
+ - port: 6379
+ selector:
+ app: azure-vote-back
+    ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-front
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-front
+ template:
+ metadata:
+ labels:
+ app: azure-vote-front
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-front
+ image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 80
+ env:
+ - name: REDIS
+ value: "azure-vote-back"
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-front
+ spec:
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: azure-vote-front
+ ```
+
+1. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest:
+
+ ```console
+ kubectl apply -f azure-vote.yaml
+ ```
+
+    Output shows the successfully created deployments and services:
+
+ ```output
+ deployment "azure-vote-back" created
+ service "azure-vote-back" created
+ deployment "azure-vote-front" created
+ service "azure-vote-front" created
+ ```
+
+## Test the application
+
+When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
+
+To monitor progress, use the `kubectl get service` command with the `--watch` argument.
+
+```console
+kubectl get service azure-vote-front --watch
+```
+
+The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
+
+```output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
+```
+
+Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+
+```output
+azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
+```
+
+To see the Azure Vote app in action, open a web browser to the external IP address of your service.
++
+## Delete cluster
+
+If you don't plan on going through the tutorials that follow, clean up your unnecessary resources to avoid Azure charges. Select the **Delete** button on the AKS cluster dashboard. You can also use the [az aks delete][az-aks-delete] command in the Cloud Shell:
+
+```azurecli
+az aks delete --resource-group myResourceGroup --name myAKSCluster --yes --no-wait
+```
+
+> [!NOTE]
+> When you delete the cluster, the system-assigned managed identity is managed by the platform and does not require removal.
+
+## Next steps
+
+In this quickstart, you deployed a Kubernetes cluster and then deployed a sample multi-container application to it.
+
+To learn more about AKS by walking through a complete example, including building an application, deploying from Azure Container Registry, updating a running application, and scaling and upgrading your cluster, continue to the Kubernetes cluster tutorial.
+
+> [!div class="nextstepaction"]
+> [AKS tutorial][aks-tutorial]
+
+<!-- LINKS - external -->
+[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
+[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubernetes-documentation]: https://kubernetes.io/docs/home/
+
+<!-- LINKS - internal -->
+[kubernetes-concepts]: ../concepts-clusters-workloads.md
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-aks-delete]: /cli/azure/aks#az_aks_delete
+[aks-monitor]: ../../azure-monitor/containers/container-insights-overview.md
+[aks-network]: ../concepts-network.md
+[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
+[http-routing]: ../http-application-routing.md
+[preset-config]: ../quotas-skus-regions.md#cluster-configuration-presets-in-the-azure-portal
+[sp-delete]: ../kubernetes-service-principal.md#additional-considerations
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
+
+ Title: 'Quickstart: Deploy an AKS cluster by using PowerShell'
+description: Learn how to quickly create a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using PowerShell.
++ Last updated : 04/29/2022+
+#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
++
+# Quickstart: Deploy an Azure Kubernetes Service cluster using PowerShell
+
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
+
+* Deploy an AKS cluster using PowerShell.
+* Run a sample multi-container application with a web front-end and a Redis instance in the cluster.
++
+This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+
+## Prerequisites
++
+- If you're running PowerShell locally, install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell].
+
+- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+
+- If you have multiple Azure subscriptions, select the appropriate subscription in which the resources should be billed using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet; for example:
+
+ ```azurepowershell-interactive
+ Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
+ ```
++
+## Create a resource group
+
+An [Azure resource group](../../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you will be prompted to specify a location. This location is:
+
+* The storage location of your resource group metadata.
+* Where your resources will run in Azure if you don't specify another region during resource creation.
+
+The following example creates a resource group named *myResourceGroup* in the *eastus* region.
+
+Create a resource group using the [New-AzResourceGroup][new-azresourcegroup]
+cmdlet.
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name myResourceGroup -Location eastus
+```
+
+The following example output shows successful creation of the resource group:
+
+```plaintext
+ResourceGroupName : myResourceGroup
+Location : eastus
+ProvisioningState : Succeeded
+Tags :
+ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup
+```
+
+## Create AKS cluster
+
+Create an AKS cluster using the [New-AzAksCluster][new-azakscluster] cmdlet with the *-WorkspaceResourceId* parameter to enable [Azure Monitor container insights][azure-monitor-containers].
+
+1. Generate an SSH key pair using the `ssh-keygen` command-line utility. For more details, see:
+ * [Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure](../../virtual-machines/linux/mac-create-ssh-keys.md)
+ * [How to use SSH keys with Windows on Azure](../../virtual-machines/linux/ssh-from-windows.md)
+
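+    For example, the same command used elsewhere in these quickstarts creates an RSA key pair in the default *~/.ssh* location:
+
+    ```console
+    ssh-keygen -t rsa -b 4096
+    ```
+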
+1. Create an AKS cluster named **myAKSCluster** with one node.
+
+ ```azurepowershell-interactive
+ New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1
+ ```
+
+After a few minutes, the command completes and returns information about the cluster.
+
+> [!NOTE]
+> When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](../faq.md#why-are-two-resource-groups-created-with-aks)
+
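+If you want to re-display the cluster's details later, one option is the [Get-AzAksCluster](/powershell/module/az.aks/get-azakscluster) cmdlet:
+
+```azurepowershell-interactive
+Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster
+```
+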
+## Connect to the cluster
+
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
+
+1. Install `kubectl` locally using the `Install-AzAksKubectl` cmdlet:
+
+ ```azurepowershell
+ Install-AzAksKubectl
+ ```
+
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following cmdlet downloads credentials and configures the Kubernetes CLI to use them.
+
+ ```azurepowershell-interactive
+ Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
+ ```
+
+3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
+
+ ```azurepowershell-interactive
+ kubectl get nodes
+ ```
+
+    The following example output shows the single node created in the previous steps. Make sure the node status is *Ready*:
+
+ ```plaintext
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-31718369-0 Ready agent 6m44s v1.15.10
+ ```
+
+## Deploy the application
+
+A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
+
+In this quickstart, you will use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
+
+* The sample Azure Vote Python applications.
+* A Redis instance.
+
+Two [Kubernetes Services][kubernetes-service] are also created:
+
+* An internal service for the Redis instance.
+* An external service to access the Azure Vote application from the internet.
+
+1. Create a file named `azure-vote.yaml`.
+    * If you use the Azure Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical system.
+1. Copy in the following YAML definition:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-back
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-back
+ template:
+ metadata:
+ labels:
+ app: azure-vote-back
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-back
+ image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ env:
+ - name: ALLOW_EMPTY_PASSWORD
+ value: "yes"
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 6379
+ name: redis
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-back
+ spec:
+ ports:
+ - port: 6379
+ selector:
+ app: azure-vote-back
+    ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-front
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-front
+ template:
+ metadata:
+ labels:
+ app: azure-vote-front
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-front
+ image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 80
+ env:
+ - name: REDIS
+ value: "azure-vote-back"
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-front
+ spec:
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: azure-vote-front
+ ```
+
+1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+
+ ```azurepowershell-interactive
+ kubectl apply -f azure-vote.yaml
+ ```
+
+    The following example output shows the successfully created deployments and services:
+
+ ```plaintext
+ deployment.apps/azure-vote-back created
+ service/azure-vote-back created
+ deployment.apps/azure-vote-front created
+ service/azure-vote-front created
+ ```
+
+## Test the application
+
+When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
+
+Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
+
+```azurepowershell-interactive
+kubectl get service azure-vote-front --watch
+```
+
+The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
+
+```plaintext
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
+```
+
+Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+
+```plaintext
+azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
+```
+
+To see the Azure Vote app in action, open a web browser to the external IP address of your service.
++
+## Delete the cluster
+
+If you don't plan on going through the tutorials that follow, clean up your unnecessary resources to avoid Azure charges. Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name myResourceGroup
+```
+
+> [!NOTE]
+> Because the AKS cluster was created with a system-assigned managed identity (the default identity option used in this quickstart), the identity is managed by the platform and does not require removal.
+
+## Next steps
+
+In this quickstart, you deployed a Kubernetes cluster and then deployed a sample multi-container application to it.
+
+To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
+
+> [!div class="nextstepaction"]
+> [AKS tutorial][aks-tutorial]
+
+<!-- LINKS - external -->
+[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[azure-dev-spaces]: /previous-versions/azure/dev-spaces/
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
+
+<!-- LINKS - internal -->
+[windows-container-powershell]: ../windows-container-powershell.md
+[kubernetes-concepts]: ../concepts-clusters-workloads.md
+[aks-monitor]: ../../azure-monitor/containers/container-insights-onboard.md
+[install-azure-powershell]: /powershell/azure/install-az-ps
+[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup
+[new-azakscluster]: /powershell/module/az.aks/new-azakscluster
+[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
+[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
+[azure-monitor-containers]: ../../azure-monitor/containers/container-insights-overview.md
+[kubernetes-service]: ../concepts-network.md#services
+[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
+[sp-delete]: ../kubernetes-service-principal.md#additional-considerations
+[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
+
+ Title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster
+description: Learn how to quickly create a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS)
++ Last updated : 04/29/2021+
+#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
++
+# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using an ARM template
+
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
+
+* Deploy an AKS cluster using an Azure Resource Manager template.
+* Run a sample multi-container application with a web front-end and a Redis instance in the cluster.
+++
+This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.kubernetes%2Faks%2Fazuredeploy.json)
+++
+- This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+- To create an AKS cluster using a Resource Manager template, you provide an SSH public key. If you need this resource, see the following section; otherwise skip to the [Review the template](#review-the-template) section.
+
+- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+
+- To deploy a Bicep file or ARM template, you need write access on the resources you're deploying and access to all operations on the Microsoft.Resources/deployments resource type. For example, to deploy a virtual machine, you need Microsoft.Compute/virtualMachines/write and Microsoft.Resources/deployments/* permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+
+### Create an SSH key pair
+
+To access AKS nodes, you connect using an SSH key pair (public and private), which you generate using the `ssh-keygen` command. By default, these files are created in the *~/.ssh* directory. Running the `ssh-keygen` command will overwrite any SSH key pair with the same name already existing in the given location.
+
+1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser.
+
+1. Run the `ssh-keygen` command. The following example creates an SSH key pair using RSA encryption and a bit length of 4096:
+
+ ```console
+ ssh-keygen -t rsa -b 4096
+ ```
+
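+As a convenience (assuming you kept the default file name), you can print the *public* half of the key pair, which the template deployment later in this article asks for:
+
+```console
+cat ~/.ssh/id_rsa.pub
+```
+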
+For more information about creating SSH keys, see [Create and manage SSH keys for authentication in Azure][ssh-keys].
+
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/aks/).
++
+For more AKS samples, see the [AKS quickstart templates][aks-quickstart-templates] site.
+
+## Deploy the template
+
+1. Select the following button to sign in to Azure and open a template.
+
+ [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.kubernetes%2Faks%2Fazuredeploy.json)
+
+2. Select or enter the following values.
+
+ For this quickstart, leave the default values for the *OS Disk Size GB*, *Agent Count*, *Agent VM Size*, *OS Type*, and *Kubernetes Version*. Provide your own values for the following template parameters:
+
+ * **Subscription**: Select an Azure subscription.
+ * **Resource group**: Select **Create new**. Enter a unique name for the resource group, such as *myResourceGroup*, then choose **OK**.
+ * **Location**: Select a location, such as **East US**.
+ * **Cluster name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*.
+ * **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myakscluster*.
+ * **Linux Admin Username**: Enter a username to connect using SSH, such as *azureuser*.
+ * **SSH RSA Public Key**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*).
+
+ :::image type="content" source="./media/quick-kubernetes-deploy-rm-template/create-aks-cluster-using-template-portal.png" alt-text="Screenshot of Resource Manager template to create an Azure Kubernetes Service cluster in the portal.":::
+
+3. Select **Review + Create**.
+
+It takes a few minutes to create the AKS cluster. Wait for the cluster to be successfully deployed before you move on to the next step.
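+
+If you prefer to deploy from the command line, a sketch like the following uses the [az deployment group create](/cli/azure/deployment/group) command with the same template URI as the **Deploy to Azure** button. The parameter names (*clusterName*, *dnsPrefix*, *linuxAdminUsername*, *sshRSAPublicKey*) are assumptions based on the portal labels shown above; verify them against the template's `parameters` section before running.
+
+```azurecli-interactive
+# Create a resource group, then deploy the quickstart template into it.
+# Parameter names below are assumptions -- check the template before running.
+az group create --name myResourceGroup --location eastus
+az deployment group create \
+    --resource-group myResourceGroup \
+    --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.kubernetes/aks/azuredeploy.json \
+    --parameters clusterName=myAKSCluster dnsPrefix=myakscluster \
+        linuxAdminUsername=azureuser sshRSAPublicKey="$(cat ~/.ssh/id_rsa.pub)"
+```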
+
+## Validate the deployment
+
+### Connect to the cluster
+
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
+
+1. Install `kubectl` locally using the [az aks install-cli][az-aks-install-cli] command:
+
+ ```azurecli
+ az aks install-cli
+ ```
+
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
+
+3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
+
+ ```console
+ kubectl get nodes
+ ```
+
+    The following example output shows the three nodes created in the previous steps. Make sure the status of each node is *Ready*:
+
+ ```output
+ NAME STATUS ROLES AGE VERSION
+ aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6
+ aks-agentpool-41324942-1 Ready agent 6m46s v1.12.6
+ aks-agentpool-41324942-2 Ready agent 6m45s v1.12.6
+ ```
+
+### Deploy the application
+
+A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
+
+In this quickstart, you will use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
+
+* The sample Azure Vote Python application.
+* A Redis instance.
+
+Two [Kubernetes Services][kubernetes-service] are also created:
+
+* An internal service for the Redis instance.
+* An external service to access the Azure Vote application from the internet.
+
+1. Create a file named `azure-vote.yaml`.
+ * If you use the Azure Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical system
+1. Copy in the following YAML definition:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-back
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-back
+ template:
+ metadata:
+ labels:
+ app: azure-vote-back
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-back
+ image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ env:
+ - name: ALLOW_EMPTY_PASSWORD
+ value: "yes"
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 6379
+ name: redis
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-back
+ spec:
+ ports:
+ - port: 6379
+ selector:
+ app: azure-vote-back
+    ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-front
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-front
+ template:
+ metadata:
+ labels:
+ app: azure-vote-front
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-front
+ image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 80
+ env:
+ - name: REDIS
+ value: "azure-vote-back"
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-front
+ spec:
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: azure-vote-front
+ ```
+
+1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+
+ ```console
+ kubectl apply -f azure-vote.yaml
+ ```
+
+    The following example output shows the successfully created deployments and services:
+
+ ```output
+ deployment "azure-vote-back" created
+ service "azure-vote-back" created
+ deployment "azure-vote-front" created
+ service "azure-vote-front" created
+ ```
+
+### Test the application
+
+When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
+
+Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
+
+```console
+kubectl get service azure-vote-front --watch
+```
+
+The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
+
+```output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
+```
+
+Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+
+```output
+azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
+```
+
+To see the Azure Vote app in action, open a web browser to the external IP address of your service.
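+
+If you'd like a quick check from the shell first, you can `curl` the same address; substitute the **EXTERNAL-IP** from your own output (the address below is from the example above):
+
+```console
+curl http://52.179.23.131
+```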
++
+## Clean up resources
+
+If you don't plan on going through the tutorials that follow, avoid Azure charges by cleaning up your unnecessary resources. Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
+
+```azurecli-interactive
+az group delete --name myResourceGroup --yes --no-wait
+```
+
+> [!NOTE]
+> The AKS cluster was created with a system-assigned managed identity (the default identity option used in this quickstart). The identity is managed by the platform and doesn't require removal.
+
+## Next steps
+
+In this quickstart, you deployed a Kubernetes cluster and then deployed a sample multi-container application to it.
+
+To learn more about AKS and walk through a complete code-to-deployment example, continue to the Kubernetes cluster tutorial.
+
+> [!div class="nextstepaction"]
+> [AKS tutorial][aks-tutorial]
+
+<!-- LINKS - external -->
+[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
+[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[azure-dev-spaces]: /previous-versions/azure/dev-spaces/
+[aks-quickstart-templates]: https://azure.microsoft.com/resources/templates/?term=Azure+Kubernetes+Service
+
+<!-- LINKS - internal -->
+[kubernetes-concepts]: ../concepts-clusters-workloads.md
+[aks-monitor]: ../../azure-monitor/containers/container-insights-onboard.md
+[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
+[az-aks-browse]: /cli/azure/aks#az_aks_browse
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
+[az-group-create]: /cli/azure/group#az_group_create
+[az-group-delete]: /cli/azure/group#az_group_delete
+[azure-cli-install]: /cli/azure/install-azure-cli
+[sp-delete]: ../kubernetes-service-principal.md#additional-considerations
+[azure-portal]: https://portal.azure.com
+[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
+[kubernetes-service]: ../concepts-network.md#services
+[ssh-keys]: ../../virtual-machines/linux/create-ssh-keys-detailed.md
+[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
+
+ Title: Create a Windows Server container on an AKS cluster by using Azure CLI
+description: Learn how to quickly create a Kubernetes cluster, deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using the Azure CLI.
++ Last updated : 04/29/2022++
+#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
++
+# Create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using the Azure CLI
+
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this article, you deploy an AKS cluster that runs Windows Server 2019 containers using the Azure CLI. You also deploy an ASP.NET sample application in a Windows Server container to the cluster.
++
+This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md).
+++
+- This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+
+- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the
+[az account](/cli/azure/account) command, as shown in the sketch below.
+
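+For example, a minimal sketch that targets a specific subscription before any resources are created (the subscription ID below is a placeholder):
+
+```azurecli-interactive
+# Placeholder subscription ID -- replace it with your own.
+az account set --subscription 00000000-0000-0000-0000-000000000000
+```
+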
+### Limitations
+
+The following limitations apply when you create and manage AKS clusters that support multiple node pools:
+
+* You can't delete the first node pool.
+
+The following additional limitations apply to Windows Server node pools:
+
+* The AKS cluster can have a maximum of 10 node pools.
+* The AKS cluster can have a maximum of 100 nodes in each node pool.
+* The Windows Server node pool name has a limit of 6 characters.
+
+## Create a resource group
+
+An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are asked to specify a location. This location is where resource group metadata is stored and where your resources run in Azure if you don't specify another region during resource creation. Create a resource group using the [az group create][az-group-create] command.
+
+The following example creates a resource group named *myResourceGroup* in the *eastus* location.
+
+> [!NOTE]
+> This article uses Bash syntax for the commands in this tutorial.
+> If you are using Azure Cloud Shell, ensure that the dropdown in the upper-left of the Cloud Shell window is set to **Bash**.
+
+```azurecli-interactive
+az group create --name myResourceGroup --location eastus
+```
+
+The following example output shows the resource group created successfully:
+
+```json
+{
+ "id": "/subscriptions/<guid>/resourceGroups/myResourceGroup",
+ "location": "eastus",
+ "managedBy": null,
+ "name": "myResourceGroup",
+ "properties": {
+ "provisioningState": "Succeeded"
+ },
+ "tags": null,
+ "type": null
+}
+```
+
+## Create an AKS cluster
+
+To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to use the [Azure CNI][azure-cni-about] (advanced) network plugin. For more detailed information to help plan out the required subnet ranges and network considerations, see [configure Azure CNI networking][use-advanced-networking]. Use the [az aks create][az-aks-create] command to create an AKS cluster named *myAKSCluster*. This command will create the necessary network resources if they don't exist.
+
+* The cluster is configured with two nodes.
+* The `--windows-admin-password` and `--windows-admin-username` parameters set the administrator credentials for any Windows Server nodes on the cluster and must meet [Windows Server password requirements][windows-server-password]. If you don't specify the *windows-admin-password* parameter, you will be prompted to provide a value.
+* The node pool uses `VirtualMachineScaleSets`.
+
+> [!NOTE]
+> To ensure your cluster operates reliably, you should run at least two nodes in the default node pool.
+
+Create a username to use as administrator credentials for the Windows Server nodes on your cluster. The following command prompts you for a username and sets it to *WINDOWS_USERNAME* for use in a later command (remember that the commands in this article are entered into a Bash shell).
+
+```azurecli-interactive
+echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
+```
+
+Create your cluster, ensuring that you specify the `--windows-admin-username` parameter. The following example command creates a cluster using the value from *WINDOWS_USERNAME* you set in the previous command. Alternatively, you can provide a different username directly in the parameter instead of using *WINDOWS_USERNAME*. The following command will also prompt you to create a password for the administrator credentials for the Windows Server nodes on your cluster. Alternatively, you can use the `--windows-admin-password` parameter and specify your own value there.
+
+```azurecli-interactive
+az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 2 \
+ --enable-addons monitoring \
+ --generate-ssh-keys \
+ --windows-admin-username $WINDOWS_USERNAME \
+ --vm-set-type VirtualMachineScaleSets \
+ --kubernetes-version 1.20.7 \
+ --network-plugin azure
+```
+
+> [!NOTE]
+> If you get a password validation error, verify the password you set meets the [Windows Server password requirements][windows-server-password]. If your password meets the requirements, try creating your resource group in another region. Then try creating the cluster with the new resource group.
+>
+> If you do not specify an administrator username and password when setting `--vm-set-type VirtualMachineScaleSets` and `--network-plugin azure`, the username is set to *azureuser* and the password is set to a random value.
+>
+> The administrator username can't be changed, but you can change the administrator password your AKS cluster uses for Windows Server nodes using `az aks update`. For more details, see [Windows Server node pools FAQ][win-faq-change-admin-creds].
+
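+As a sketch of that rotation (the `--windows-admin-password` parameter name should be verified against `az aks update --help` for your CLI version):
+
+```azurecli-interactive
+# Prompt for a new password without echoing it, then update the Windows Server
+# node credentials on the cluster.
+read -s -p "New Windows admin password: " NEW_WINDOWS_PASSWORD && echo
+az aks update \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --windows-admin-password "$NEW_WINDOWS_PASSWORD"
+```
+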
+After a few minutes, the command completes and returns JSON-formatted information about the cluster. Occasionally the cluster can take longer than a few minutes to provision. Allow up to 10 minutes in these cases.
+
+## Add a Windows Server node pool
+
+By default, an AKS cluster is created with a node pool that can run Linux containers. Use the `az aks nodepool add` command to add a node pool that can run Windows Server containers alongside the Linux node pool.
+
+```azurecli
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --os-type Windows \
+ --name npwin \
+ --node-count 1
+```
+
+The above command creates a new node pool named *npwin* and adds it to *myAKSCluster*. It also uses the default subnet in the default virtual network created when running `az aks create`.
+
+## Optional: Using `containerd` with Windows Server node pools
+
+Beginning with Kubernetes version 1.20, you can specify `containerd` as the container runtime for Windows Server 2019 node pools. Starting with Kubernetes 1.23, `containerd` will be the default container runtime for Windows.
+
+> [!IMPORTANT]
+> When using `containerd` with Windows Server 2019 node pools:
+> - Both the control plane and Windows Server 2019 node pools must use Kubernetes version 1.20 or greater.
+> - When creating or updating a node pool to run Windows Server containers, the default value for *node-vm-size* is *Standard_D2s_v3*, which was the minimum recommended size for Windows Server 2019 node pools prior to Kubernetes 1.20. The minimum recommended size for Windows Server 2019 node pools using `containerd` is *Standard_D4s_v3*. When setting the *node-vm-size* parameter, please check the list of [restricted VM sizes][restricted-vm-sizes].
+> - It is highly recommended that you use [taints or labels][aks-taints] with your Windows Server 2019 node pools running `containerd` and tolerations or node selectors with your deployments to guarantee your workloads are scheduled correctly.
+
+### Add a Windows Server node pool with `containerd`
+
+Use the `az aks nodepool add` command to add an additional node pool that can run Windows Server containers with the `containerd` runtime.
+
+> [!NOTE]
+> If you do not specify the *WindowsContainerRuntime=containerd* custom header, the node pool will use Docker as the container runtime.
+
+```azurecli
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --os-type Windows \
+ --name npwcd \
+ --node-vm-size Standard_D4s_v3 \
+ --kubernetes-version 1.20.5 \
+ --aks-custom-headers WindowsContainerRuntime=containerd \
+ --node-count 1
+```
+
+The above command creates a new Windows Server node pool named *npwcd* that uses `containerd` as its runtime, and adds it to *myAKSCluster*. It also uses the default subnet in the default virtual network created when running `az aks create`.
+
+### Upgrade an existing Windows Server node pool to `containerd`
+
+Use the `az aks nodepool upgrade` command to upgrade a specific node pool from Docker to `containerd`.
+
+```azurecli
+az aks nodepool upgrade \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name npwd \
+ --kubernetes-version 1.20.7 \
+ --aks-custom-headers WindowsContainerRuntime=containerd
+```
+
+The above command upgrades a node pool named *npwd* to the `containerd` runtime.
+
+To upgrade all existing Windows Server node pools in a cluster to use the `containerd` runtime:
+
+```azurecli
+az aks upgrade \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --kubernetes-version 1.20.7 \
+ --aks-custom-headers WindowsContainerRuntime=containerd
+```
+
+The above command upgrades all Windows Server node pools in *myAKSCluster* to use the `containerd` runtime.
+
+> [!NOTE]
+> After upgrading all existing Windows Server node pools to use the `containerd` runtime, Docker will still be the default runtime when adding new Windows Server node pools.
+
+## Connect to the cluster
+
+To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli][az-aks-install-cli] command:
+
+```azurecli
+az aks install-cli
+```
+
+To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
+
+```azurecli-interactive
+az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+```
+
+To verify the connection to your cluster, use the [kubectl get][kubectl-get] command to return a list of the cluster nodes.
+
+```console
+kubectl get nodes -o wide
+```
+
+The following example output shows all the nodes in the cluster. Make sure that the status of all nodes is *Ready*:
+
+```output
+NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
+aks-nodepool1-12345678-vmss000000 Ready agent 34m v1.20.7 10.240.0.4 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
+aks-nodepool1-12345678-vmss000001 Ready agent 34m v1.20.7 10.240.0.35 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
+aksnpwcd123456 Ready agent 9m6s v1.20.7 10.240.0.97 <none> Windows Server 2019 Datacenter 10.0.17763.1879 containerd://1.4.4+unknown
+aksnpwin987654 Ready agent 25m v1.20.7 10.240.0.66 <none> Windows Server 2019 Datacenter 10.0.17763.1879 docker://19.3.14
+```
+
+> [!NOTE]
+> The container runtime for each node pool is shown under *CONTAINER-RUNTIME*. Notice that *aksnpwin987654* begins with `docker://`, which means it uses Docker as its container runtime, and *aksnpwcd123456* begins with `containerd://`, which means it uses `containerd`.
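+
+To list just the runtime for each node without the other columns, a `kubectl` custom-columns query such as the following (a convenience, not part of the original steps) also works:
+
+```console
+kubectl get nodes -o custom-columns=NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion
+```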
+
+## Deploy the application
+
+A Kubernetes manifest file defines a desired state for the cluster, such as what container images to run. In this article, a manifest is used to create all objects needed to run the ASP.NET sample application in a Windows Server container. This manifest includes a [Kubernetes deployment][kubernetes-deployment] for the ASP.NET sample application and an external [Kubernetes service][kubernetes-service] to access the application from the internet.
+
+The ASP.NET sample application is provided as part of the [.NET Framework Samples][dotnet-samples] and runs in a Windows Server container. AKS requires Windows Server containers to be based on images of *Windows Server 2019* or greater. The Kubernetes manifest file must also define a [node selector][node-selector] to tell your AKS cluster to run your ASP.NET sample application's pod on a node that can run Windows Server containers.
+
+Create a file named `sample.yaml` and copy in the following YAML definition. If you use the Azure Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical system:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: sample
+ labels:
+ app: sample
+spec:
+ replicas: 1
+ template:
+ metadata:
+ name: sample
+ labels:
+ app: sample
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": windows
+ containers:
+ - name: sample
+ image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
+ resources:
+ limits:
+ cpu: 1
+ memory: 800M
+ requests:
+ cpu: .1
+ memory: 300M
+ ports:
+ - containerPort: 80
+ selector:
+ matchLabels:
+ app: sample
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: sample
+spec:
+ type: LoadBalancer
+ ports:
+ - protocol: TCP
+ port: 80
+ selector:
+ app: sample
+```
+
+Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+
+```console
+kubectl apply -f sample.yaml
+```
+
+The following example output shows the Deployment and Service created successfully:
+
+```output
+deployment.apps/sample created
+service/sample created
+```
+
+## Test the application
+
+When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete. Occasionally the service can take longer than a few minutes to provision. Allow up to 10 minutes in these cases.
+
+To monitor progress, use the [kubectl get service][kubectl-get] command with the `--watch` argument.
+
+```console
+kubectl get service sample --watch
+```
+
+Initially the *EXTERNAL-IP* for the *sample* service is shown as *pending*.
+
+```output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+sample LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
+```
+
+When the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+
+```output
+sample LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
+```
+
+To see the sample app in action, open a web browser to the external IP address of your service.
++
+> [!Note]
+> If you receive a connection timeout when trying to load the page, verify that the sample app is ready by running `kubectl get pods --watch`. Sometimes the Windows container won't be started by the time your external IP address is available.
+
+## Delete cluster
+
+If you don't plan on going through the tutorials that follow, avoid Azure charges by using the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
+
+```azurecli-interactive
+az group delete --name myResourceGroup --yes --no-wait
+```
+
+> [!NOTE]
+> The AKS cluster was created with a system-assigned managed identity (the default identity option used in this quickstart). The identity is managed by the platform and doesn't require removal.
+
+## Next steps
+
+In this article, you deployed a Kubernetes cluster and deployed an ASP.NET sample application in a Windows Server container to it.
+
+To learn more about AKS and walk through a complete code-to-deployment example, continue to the Kubernetes cluster tutorial.
+
+> [!div class="nextstepaction"]
+> [AKS tutorial][aks-tutorial]
+
+<!-- LINKS - external -->
+[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[node-selector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+[dotnet-samples]: https://hub.docker.com/_/microsoft-dotnet-framework-samples/
+[azure-cni]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md
+
+<!-- LINKS - internal -->
+[kubernetes-concepts]: ../concepts-clusters-workloads.md
+[aks-monitor]: ../../azure-monitor/containers/container-insights-onboard.md
+[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
+[aks-taints]: ../use-multiple-node-pools.md#specify-a-taint-label-or-tag-for-a-node-pool
+[az-aks-browse]: /cli/azure/aks#az_aks_browse
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-feature-list]: /cli/azure/feature#az_feature_list
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-group-create]: /cli/azure/group#az_group_create
+[az-group-delete]: /cli/azure/group#az_group_delete
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-cni-about]: ../concepts-network.md#azure-cni-advanced-networking
+[sp-delete]: ../kubernetes-service-principal.md#additional-considerations
+[azure-portal]: https://portal.azure.com
+[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
+[kubernetes-service]: ../concepts-network.md#services
+[restricted-vm-sizes]: ../quotas-skus-regions.md#restricted-vm-sizes
+[use-advanced-networking]: ../configure-azure-cni.md
+[aks-support-policies]: ../support-policies.md
+[aks-faq]: faq.md
+[az-extension-update]: /cli/azure/extension#az-extension-update
+[windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference
+[win-faq-change-admin-creds]: ../windows-faq.md#how-do-i-change-the-administrator-password-for-windows-server-nodes-on-my-cluster
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
+
+ Title: Create a Windows Server container on an AKS cluster by using PowerShell
+description: Learn how to quickly create a Kubernetes cluster, deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using PowerShell.
++ Last updated : 04/29/2022+++
+#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
++
+# Create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using PowerShell
+
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and
+manage clusters. In this article, you deploy an AKS cluster running Windows Server 2019 containers using PowerShell. You also deploy an
+`ASP.NET` sample application in a Windows Server container to the cluster.
++
+This article assumes a basic understanding of Kubernetes concepts. For more information, see
+[Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
+before you begin.
+
+* The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+* If you choose to use PowerShell locally, you need to install the [Az PowerShell](/powershell/azure/new-azureps-module-az)
+module and connect to your Azure account using the
+[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information
+about installing the Az PowerShell module, see
+[Install Azure PowerShell][install-azure-powershell].
+* You also must install the [Az.Aks](/powershell/module/az.aks) PowerShell module:
+
+ ```azurepowershell-interactive
+ Install-Module Az.Aks
+ ```
++
+If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources
+should be billed. Select a specific subscription ID using the
+[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
+
+```azurepowershell-interactive
+Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
+```
+
+## Limitations
+
+The following limitations apply when you create and manage AKS clusters that support multiple node pools:
+
+* You can't delete the first node pool.
+
+The following additional limitations apply to Windows Server node pools:
+
+* The AKS cluster can have a maximum of 10 node pools.
+* The AKS cluster can have a maximum of 100 nodes in each node pool.
+* The Windows Server node pool name has a limit of 6 characters.
+
+## Create a resource group
+
+An [Azure resource group](../../azure-resource-manager/management/overview.md)
+is a logical group in which Azure resources are deployed and managed. When you create a resource
+group, you are asked to specify a location. This location is where resource group metadata is
+stored and where your resources run in Azure if you don't specify another region during
+resource creation. Create a resource group using the [New-AzResourceGroup][new-azresourcegroup]
+cmdlet.
+
+The following example creates a resource group named **myResourceGroup** in the **eastus** location.
+
+> [!NOTE]
+> This article uses PowerShell syntax for the commands in this tutorial. If you are using Azure Cloud
+> Shell, ensure that the dropdown in the upper-left of the Cloud Shell window is set to **PowerShell**.
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name myResourceGroup -Location eastus
+```
+
+The following example output shows the resource group created successfully:
+
+```plaintext
+ResourceGroupName : myResourceGroup
+Location : eastus
+ProvisioningState : Succeeded
+Tags :
+ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup
+```
+
+## Create an AKS cluster
+
+Use the `ssh-keygen` command-line utility to generate an SSH key pair. For more details, see
+[Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure](../../virtual-machines/linux/mac-create-ssh-keys.md).
+
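+As a minimal sketch, the same `ssh-keygen` invocation used in the CLI-based quickstarts also works from Cloud Shell or a local shell, creating the key pair under *~/.ssh* by default:
+
+```console
+ssh-keygen -t rsa -b 4096
+```
+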
+To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to
+use the [Azure CNI][azure-cni-about] (advanced) network plugin. For more
+detailed information to help plan out the required subnet ranges and network considerations, see
+[configure Azure CNI networking][use-advanced-networking]. Use the [New-AzAksCluster][new-azakscluster] cmdlet
+below to create an AKS cluster named **myAKSCluster**. The following example creates the necessary
+network resources if they don't exist.
+
+> [!NOTE]
+> To ensure your cluster operates reliably, you should run at least 2 (two) nodes in the default
+> node pool.
+
+```azurepowershell-interactive
+$Username = Read-Host -Prompt 'Please create a username for the administrator credentials on your Windows Server containers: '
+$Password = Read-Host -Prompt 'Please create a password for the administrator credentials on your Windows Server containers: ' -AsSecureString
+New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -NetworkPlugin azure -NodeVmSetType VirtualMachineScaleSets -WindowsProfileAdminUserName $Username -WindowsProfileAdminUserPassword $Password
+```
+
+> [!Note]
+> If you are unable to create the AKS cluster because the version is not supported in this region
+> then you can use the `Get-AzAksVersion -Location eastus` command to find the supported version
+> list for this region.
+
+After a few minutes, the command completes and returns information about the cluster. Occasionally
+the cluster can take longer than a few minutes to provision. Allow up to 10 minutes in these cases.
+
+## Add a Windows Server node pool
+
+By default, an AKS cluster is created with a node pool that can run Linux containers. Use the
+`New-AzAksNodePool` cmdlet to add a node pool that can run Windows Server containers alongside the
+Linux node pool.
+
+```azurepowershell-interactive
+New-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -VmSetType VirtualMachineScaleSets -OsType Windows -Name npwin
+```
+
+The above command creates a new node pool named **npwin** and adds it to **myAKSCluster**. When
+creating a node pool to run Windows Server containers, the default value for **VmSize** is
+**Standard_D2s_v3**. If you choose to set the **VmSize** parameter, check the list of
+[restricted VM sizes][restricted-vm-sizes]. The minimum recommended size is **Standard_D2s_v3**. The
+previous command also uses the default subnet in the default vnet created when running `New-AzAksCluster`.
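+
+If you need a different size, a sketch of the same cmdlet with **VmSize** set explicitly follows. The pool name *npwin2* is hypothetical (remember the six-character limit), and the size shown is only an example:
+
+```azurepowershell-interactive
+# Hypothetical second Windows Server node pool with an explicit VM size.
+New-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -VmSetType VirtualMachineScaleSets -OsType Windows -Name npwin2 -VmSize Standard_D4s_v3
+```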
+
+## Connect to the cluster
+
+To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If
+you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the
+`Install-AzAksKubectl` cmdlet:
+
+```azurepowershell-interactive
+Install-AzAksKubectl
+```
+
+To configure `kubectl` to connect to your Kubernetes cluster, use the
+[Import-AzAksCredential][import-azakscredential] cmdlet. This command
+downloads credentials and configures the Kubernetes CLI to use them.
+
+```azurepowershell-interactive
+Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
+```
+
+To verify the connection to your cluster, use the [kubectl get][kubectl-get] command to return a
+list of the cluster nodes.
+
+```azurepowershell-interactive
+kubectl get nodes
+```
+
+The following example output shows all the nodes in the cluster. Make sure that the status of all
+nodes is **Ready**:
+
+```plaintext
+NAME STATUS ROLES AGE VERSION
+aks-nodepool1-12345678-vmssfedcba Ready agent 13m v1.16.7
+aksnpwin987654 Ready agent 108s v1.16.7
+```
+
+## Deploy the application
+
+A Kubernetes manifest file defines a desired state for the cluster, such as what container images to
+run. In this article, a manifest is used to create all objects needed to run the ASP.NET sample
+application in a Windows Server container. This manifest includes a
+[Kubernetes deployment][kubernetes-deployment] for the ASP.NET sample application and an external
+[Kubernetes service][kubernetes-service] to access the application from the internet.
+
+The ASP.NET sample application is provided as part of the [.NET Framework Samples][dotnet-samples]
+and runs in a Windows Server container. AKS requires Windows Server containers to be based on images
+of **Windows Server 2019** or greater. The Kubernetes manifest file must also define a
+[node selector][node-selector] to tell your AKS cluster to run your ASP.NET sample application's pod
+on a node that can run Windows Server containers.
+
+Create a file named `sample.yaml` and copy in the following YAML definition. If you use the Azure
+Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical
+system:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: sample
+ labels:
+ app: sample
+spec:
+ replicas: 1
+ template:
+ metadata:
+ name: sample
+ labels:
+ app: sample
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": windows
+ containers:
+ - name: sample
+ image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
+ resources:
+ limits:
+ cpu: 1
+ memory: 800M
+ requests:
+ cpu: .1
+ memory: 300M
+ ports:
+ - containerPort: 80
+ selector:
+ matchLabels:
+ app: sample
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: sample
+spec:
+ type: LoadBalancer
+ ports:
+ - protocol: TCP
+ port: 80
+ selector:
+ app: sample
+```
+
+Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your
+YAML manifest:
+
+```azurepowershell-interactive
+kubectl apply -f sample.yaml
+```
+
+The following example output shows the Deployment and Service created successfully:
+
+```plaintext
+deployment.apps/sample created
+service/sample created
+```
+
+## Test the application
+
+When the application runs, a Kubernetes service exposes the application frontend to the internet.
+This process can take a few minutes to complete. Occasionally the service can take longer than a few
+minutes to provision. Allow up to 10 minutes in these cases.
+
+To monitor progress, use the [kubectl get service][kubectl-get] command with the `--watch` argument.
+
+```azurepowershell-interactive
+kubectl get service sample --watch
+```
+
+Initially the **EXTERNAL-IP** for the **sample** service is shown as **pending**.
+
+```plaintext
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+sample LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
+```
+
+When the **EXTERNAL-IP** address changes from **pending** to an actual public IP address, use `CTRL-C`
+to stop the `kubectl` watch process. The following example output shows a valid public IP address
+assigned to the service:
+
+```plaintext
+sample LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
+```
+
+To see the sample app in action, open a web browser to the external IP address of your service.
++
+> [!Note]
+> If you receive a connection timeout when trying to load the page, verify that the sample app is
+> ready by running `kubectl get pods --watch`. Sometimes the Windows
+> container will not be started by the time your external IP address is available.
+
+## Delete cluster
+
+If you don't plan on going through the tutorials that follow, avoid Azure charges by using the
+[Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name myResourceGroup
+```
+
+> [!NOTE]
+> The AKS cluster was created with a system-assigned managed identity (the default identity option used in this quickstart). The identity is managed by the platform and doesn't require removal.
+
+## Next steps
+
+In this article, you deployed a Kubernetes cluster and deployed an `ASP.NET` sample application in a
+Windows Server container to it.
+
+To learn more about AKS and walk through a complete code-to-deployment example, continue to the
+Kubernetes cluster tutorial.
+
+> [!div class="nextstepaction"]
+> [AKS tutorial][aks-tutorial]
+
+<!-- LINKS - external -->
+[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[dotnet-samples]: https://hub.docker.com/_/microsoft-dotnet-framework-samples/
+[node-selector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+
+<!-- LINKS - internal -->
+[kubernetes-concepts]: ../concepts-clusters-workloads.md
+[install-azure-powershell]: /powershell/azure/install-az-ps
+[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup
+[azure-cni-about]: ../concepts-network.md#azure-cni-advanced-networking
+[use-advanced-networking]: ../configure-azure-cni.md
+[new-azakscluster]: /powershell/module/az.aks/new-azakscluster
+[restricted-vm-sizes]: ../quotas-skus-regions.md#restricted-vm-sizes
+[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
+[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
+[kubernetes-service]: ../concepts-network.md#services
+[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
+[sp-delete]: ../kubernetes-service-principal.md#additional-considerations
+[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
Azure Load Balancer is available in two SKUs - *Basic* and *Standard*. By defaul
For more information on the *Basic* and *Standard* SKUs, see [Azure load balancer SKU comparison][azure-lb-comparison].
-This article assumes you have an AKS cluster with the *Standard* SKU Azure Load Balancer and walks through how to use and configure some of the capabilities and features of the load balancer.
-If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes you have an AKS cluster with the *Standard* SKU Azure Load Balancer and walks through how to use and configure some of the capabilities and features of the load balancer. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
> [!IMPORTANT] > If you prefer not to leverage the Azure Load Balancer to provide outbound connection and instead have your own gateway, firewall or proxy for that purpose you can skip the creation of the load balancer outbound pool and respective frontend IP by using [**Outbound type as UserDefinedRouting (UDR)**](egress-outboundtype.md). The Outbound type defines the egress method for a cluster and it defaults to type: load balancer.
az aks update \
When SNAT port resources are exhausted, outbound flows fail until existing flows release SNAT ports. Load Balancer reclaims SNAT ports when the flow closes and the AKS-configured load balancer uses a 30-minute idle timeout for reclaiming SNAT ports from idle flows. You can also use transport (for example, **`TCP keepalives`**) or **`application-layer keepalives`** to refresh an idle flow and reset this idle timeout if necessary. You can configure this timeout following the below example: - ```azurecli-interactive az aks update \ --resource-group myResourceGroup \
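A complete version of the truncated `az aks update` example above, assuming the idle timeout is set with the `--load-balancer-idle-timeout` parameter (in minutes; 4 is only an illustration), would look similar to:

```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --load-balancer-idle-timeout 4
```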
The following limitations apply when you create and manage AKS clusters that sup
* You can only use one type of load balancer SKU (Basic or Standard) in a single cluster. * *Standard* SKU Load Balancers only support *Standard* SKU IP Addresses. - ## Next steps Learn more about Kubernetes services at the [Kubernetes services documentation][kubernetes-services].
Learn more about using Internal Load Balancer for Inbound traffic at the [AKS In
[advanced-networking]: configure-azure-cni.md [aks-support-policies]: support-policies.md [aks-faq]: faq.md
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[aks-sp]: kubernetes-service-principal.md#delegate-access-to-other-azure-resources [az-aks-show]: /cli/azure/aks#az_aks_show [az-aks-create]: /cli/azure/aks#az_aks_create
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
This article shows you how to create a connection to an AKS node.
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
This article also assumes you have an SSH key. You can create an SSH key using [macOS or Linux][ssh-nix] or [Windows][ssh-windows]. If you use PuTTY Gen to create the key pair, save the key pair in an OpenSSH format rather than the default PuTTy private key format (.ppk file).
kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
If you need more troubleshooting data, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs]. - <!-- INTERNAL LINKS --> [view-kubelet-logs]: kubelet-logs.md [view-master-logs]: monitor-aks-reference.md#resource-logs
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [aks-windows-rdp]: rdp.md [ssh-nix]: ../virtual-machines/linux/mac-create-ssh-keys.md
aks Node Auto Repair https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-auto-repair.md
If AKS finds multiple unhealthy nodes during a health check, each node is repair
## Node Autodrain
-[Scheduled Events][scheduled-events] can occur on the underlying virtual machines (VMs) in any of your node pools. For [spot node pools][spot-node-pools], scheduled events may cause a *preempt* node event for the node. Certain node events, such as *preempt*, cause AKS node autodrain to attempt a cordon and drain of the affected node, which allows for a graceful reschedule of any affected workloads on that node.
+[Scheduled Events][scheduled-events] can occur on the underlying virtual machines (VMs) in any of your node pools. For [spot node pools][spot-node-pools], scheduled events may cause a *preempt* node event for the node. Certain node events, such as *preempt*, cause AKS node autodrain to attempt a cordon and drain of the affected node, which allows for a graceful reschedule of any affected workloads on that node. When this happens, you might notice that the node receives a taint of *"remediator.aks.microsoft.com/unschedulable"* because of *"kubernetes.azure.com/scalesetpriority: spot"*.
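+
+To check whether a node has received this taint, a `kubectl describe` query like the following can be used (the node name below is hypothetical):
+
+```console
+kubectl describe node aks-nodepool1-12345678-vmss000000 | grep -i taints
+```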
The following table shows the node events, and the actions they cause for AKS node autodrain.
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-pool-snapshot.md
The snapshot is an Azure resource that will contain the configuration informatio
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
### Limitations
az aks create --name myAKSCluster2 --resource-group myResourceGroup --snapshot-i
- Learn more about multiple node pools and how to upgrade node pools with [Create and manage multiple node pools][use-multiple-node-pools]. <!-- LINKS - internal -->
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[supported-versions]: supported-kubernetes-versions.md [upgrade-cluster]: upgrade-cluster.md [node-image-upgrade]: node-image-upgrade.md
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
This article shows you how to use the open-source [kured (KUbernetes REboot Daem
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Understand the AKS node update experience
For AKS clusters that use Windows Server nodes, see [Upgrade a node pool in AKS]
[kubectl-get-nodes]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get <!-- LINKS - internal -->
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [DaemonSet]: concepts-clusters-workloads.md#statefulsets-and-daemonsets [aks-ssh]: ssh.md
aks Node Upgrade Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-upgrade-github-actions.md
This article shows you how you can automate the update process of AKS nodes. You
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
jobs:
[cron-syntax]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/crontab.html#tag_20_25_07 <!-- LINKS - internal -->
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [managed-node-upgrades-article]: node-image-upgrade.md [cluster-upgrades-article]: upgrade-cluster.md
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
Your AKS cluster has regular maintenance performed on it automatically. By defau
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
Planned Maintenance will detect if you are using Cluster Auto-Upgrade and schedu
- To get started with upgrading your AKS cluster, see [Upgrade an AKS cluster][aks-upgrade] - <!-- LINKS - Internal -->
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[aks-support-policies]: support-policies.md [aks-faq]: faq.md [az-extension-add]: /cli/azure/extension#az_extension_add
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md
This article shows you how to create an RDP connection with an AKS node using th
## Before you begin
-This article assumes that you have an existing AKS cluster with a Windows Server node. If you need an AKS cluster, see the article on [creating an AKS cluster with a Windows container using the Azure CLI][aks-windows-cli]. You need the Windows administrator username and password for the Windows Server node you want to troubleshoot. You also need an RDP client such as [Microsoft Remote Desktop][rdp-mac].
+This article assumes that you have an existing AKS cluster with a Windows Server node. If you need an AKS cluster, see the article on [creating an AKS cluster with a Windows container using the Azure CLI][aks-quickstart-windows-cli]. You need the Windows administrator username and password for the Windows Server node you want to troubleshoot. You also need an RDP client such as [Microsoft Remote Desktop][rdp-mac].
If you need to reset the password, you can use `az aks update` to change it.
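A sketch, assuming a cluster named *myAKSCluster* in resource group *myResourceGroup*:

```azurecli
# Prompt for a new Windows administrator password without echoing it.
read -s NEW_WINDOWS_PW

az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --windows-admin-password $NEW_WINDOWS_PW
```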
If you need additional troubleshooting data, you can [view the Kubernetes master
[rdp-mac]: https://aka.ms/rdmac <!-- INTERNAL LINKS -->
-[aks-windows-cli]: windows-container-cli.md
+[aks-quickstart-windows-cli]: ./learn/quick-windows-container-deploy-cli.md
[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [az-vm-delete]: /cli/azure/vm#az_vm_delete
aks Scale Down Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-down-mode.md
When an Azure VM is in the `Stopped` (deallocated) state, you will not be charge
> [!WARNING] > In order to preserve any deallocated VMs, you must set Scale-down Mode to Deallocate. That includes VMs that have been deallocated using IaaS APIs (Virtual Machine Scale Set APIs). Setting Scale-down Mode to Delete will remove any deallocated VMs.
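A minimal sketch, assuming an existing node pool named *nodepool2*; the `--scale-down-mode` flag is accepted by both `az aks nodepool add` and `az aks nodepool update`:

```azurecli
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool2 \
    --scale-down-mode Deallocate
```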
-This article assumes that you have an existing AKS cluster and the latest version of the Azure CLI installed. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
### Limitations
az aks nodepool add --enable-cluster-autoscaler --min-count 1 --max-count 10 --m
- To learn more about the cluster autoscaler, see [Automatically scale a cluster to meet application demands on AKS][cluster-autoscaler] <!-- LINKS - Internal -->
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[aks-support-policies]: support-policies.md [aks-faq]: faq.md [az-extension-add]: /cli/azure/extension#az_extension_add
aks Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-cluster.md
To optimize your costs further during these periods, you can completely turn off
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][kubernetes-walkthrough-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
### Limitations
If the `ProvisioningState` shows `Starting`, that means your cluster hasn't fully
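One way to check, assuming the usual sample resource names:

```azurecli
az aks show \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --query provisioningState \
    --output tsv
```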
<!-- LINKS - external --> <!-- LINKS - internal -->
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update
aks Start Stop Nodepools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-nodepools.md
Your AKS workloads may not need to run continuously, for example a development c
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][kubernetes-walkthrough-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
### Install aks-preview CLI extension
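If you don't have the extension yet, a typical install-or-update sequence looks like this:

```azurecli
# Install the aks-preview extension.
az extension add --name aks-preview

# Or update it to the latest version if it's already installed.
az extension update --name aks-preview
```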
You can verify your node pool has started using [az aks show][az-aks-show] and c
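For example, a node-pool-level check might look like the following (pool and cluster names are assumptions):

```azurecli
az aks nodepool show \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name testnodepool \
    --query powerState.code \
    --output tsv
```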
<!-- LINKS - external --> <!-- LINKS - internal -->
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update
aks Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md
This article shows you how to create a static public IP address and assign it to
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
This article covers using a *Standard* SKU IP with a *Standard* SKU load balancer. For more information, see [IP address types and allocation methods in Azure][ip-sku].
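As a sketch, creating such a *Standard* SKU static IP (the resource group and IP name are assumptions; for AKS, the IP is typically created in the cluster's node resource group so the cluster can use it):

```azurecli
az network public-ip create \
    --resource-group MC_myResourceGroup_myAKSCluster_eastus \
    --name myAKSPublicIP \
    --sku Standard \
    --allocation-method static
```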
For additional control over the network traffic to your applications, you may wa
[az-aks-show]: /cli/azure/aks#az_aks_show [aks-ingress-basic]: ingress-basic.md [aks-static-ingress]: ingress-static-ip.md
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [ip-sku]: ../virtual-network/ip-services/public-ip-addresses.md#sku
aks Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-policy.md
This article shows you how to apply policy definitions to your cluster and verif
## Prerequisites -- An existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+- This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
- The Azure Policy Add-on for AKS installed on an AKS cluster. Follow these [steps to install the Azure Policy Add-on][azure-policy-addon]. ## Assign a built-in policy definition or initiative
For more information about how Azure Policy works:
<!-- LINKS - internal --> [aks-policies]: policy-reference.md
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[azure-policy]: ../governance/policy/overview.md [azure-policy-addon]: ../governance/policy/concepts/policy-for-kubernetes.md#install-azure-policy-add-on-for-aks [azure-policy-addon-remove]: ../governance/policy/concepts/policy-for-kubernetes.md#remove-the-add-on-from-aks
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Learn more about [system node pools][use-system-pool].
In this article, you learned how to create and manage multiple node pools in an AKS cluster. For more information about how to control pods across node pools, see [Best practices for advanced scheduler features in AKS][operator-best-practices-advanced-scheduler].
-To create and use Windows Server container node pools, see [Create a Windows Server container in AKS][aks-windows].
+To create and use Windows Server container node pools, see [Create a Windows Server container in AKS][aks-quickstart-windows-cli].
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your AKS applications.
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[capacity-reservation-groups]:/azure/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set <!-- INTERNAL LINKS -->
-[aks-windows]: windows-container-cli.md
+[aks-quickstart-windows-cli]: ./learn/quick-windows-container-deploy-cli.md
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md
To improve the security of your AKS cluster, you can limit what pods can be sche
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
### Install aks-preview CLI extension
For more information about limiting pod network traffic, see [Secure traffic bet
[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ [kubernetes-policy-reference]: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#policy-reference <!-- LINKS - internal -->
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [network-policies]: use-network-policies.md [az-feature-register]: /cli/azure/feature#az_feature_register
aks Use Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-tags.md
$ az aks show -g myResourceGroup -n myAKSCluster --query 'agentPoolProfiles[].{n
You can apply Azure tags to public IPs, disks, and files by using a Kubernetes manifest.
-For public IPs, use *service.beta.kubernetes.io/azure-pip-tags*. For example:
+For public IPs, use *service.beta.kubernetes.io/azure-pip-tags* under *annotations*. For example:
```yml
apiVersion: v1
kind: Service
-...
+metadata:
+  annotations:
+    service.beta.kubernetes.io/azure-pip-tags: costcenter=3333,team=beta
spec:
  ...
- service.beta.kubernetes.io/azure-pip-tags: costcenter=3333,team=beta
- ...
```

For files and disks, use *tags* under *parameters*. For example:
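A sketch of what that looks like for an Azure Files storage class (the class name and SKU are assumptions):

```yml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-csi-tagged
provisioner: file.csi.azure.com
parameters:
  skuName: Standard_LRS
  tags: costcenter=3333,team=beta
```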
aks Windows Container Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-container-cli.md
- Title: Create a Windows Server container on an AKS cluster by using Azure CLI
-description: Learn how to quickly create a Kubernetes cluster, deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using the Azure CLI.
-- Previously updated : 08/06/2021--
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
--
-# Create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using the Azure CLI
-
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this article, you deploy an AKS cluster using the Azure CLI. You also deploy an ASP.NET sample application in a Windows Server container to the cluster.
-
-![Image of browsing to ASP.NET sample application](media/windows-container/asp-net-sample-app.png)
-
-This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
---
-### Limitations
-
-The following limitations apply when you create and manage AKS clusters that support multiple node pools:
-
-* You can't delete the first node pool.
-
-The following additional limitations apply to Windows Server node pools:
-
-* The AKS cluster can have a maximum of 10 node pools.
-* The AKS cluster can have a maximum of 100 nodes in each node pool.
-* The Windows Server node pool name has a limit of 6 characters.
-
-## Create a resource group
-
-An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are asked to specify a location. This location is where resource group metadata is stored; it's also where your resources run in Azure if you don't specify another region during resource creation. Create a resource group using the [az group create][az-group-create] command.
-
-The following example creates a resource group named *myResourceGroup* in the *eastus* location.
-
-> [!NOTE]
-> This article uses Bash syntax for the commands in this tutorial.
-> If you are using Azure Cloud Shell, ensure that the dropdown in the upper-left of the Cloud Shell window is set to **Bash**.
-
-```azurecli-interactive
-az group create --name myResourceGroup --location eastus
-```
-
-The following example output shows the resource group created successfully:
-
-```json
-{
- "id": "/subscriptions/<guid>/resourceGroups/myResourceGroup",
- "location": "eastus",
- "managedBy": null,
- "name": "myResourceGroup",
- "properties": {
- "provisioningState": "Succeeded"
- },
- "tags": null,
- "type": null
-}
-```
-
-## Create an AKS cluster
-
-To run an AKS cluster that supports node pools for Windows Server containers, your cluster must use the [Azure CNI][azure-cni-about] (advanced) network plugin. For more detailed information to help you plan the required subnet ranges and network considerations, see [configure Azure CNI networking][use-advanced-networking]. Use the [az aks create][az-aks-create] command to create an AKS cluster named *myAKSCluster*. This command creates the necessary network resources if they don't exist.
-
-* The cluster is configured with two nodes.
-* The `--windows-admin-password` and `--windows-admin-username` parameters set the administrator credentials for any Windows Server nodes on the cluster and must meet [Windows Server password requirements][windows-server-password]. If you don't specify the *windows-admin-password* parameter, you will be prompted to provide a value.
-* The node pool uses `VirtualMachineScaleSets`.
-
-> [!NOTE]
-> To ensure your cluster operates reliably, you should run at least two nodes in the default node pool.
-
-Create a username to use as administrator credentials for the Windows Server nodes on your cluster. The following commands prompt you for a username and set it to *WINDOWS_USERNAME* for use in a later command (remember that the commands in this article are entered in a Bash shell).
-
-```azurecli-interactive
-echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
-```
-
-Create your cluster, ensuring that you specify the `--windows-admin-username` parameter. The following example command creates a cluster using the value from *WINDOWS_USERNAME* that you set in the previous command. Alternatively, you can provide a different username directly in the parameter instead of using *WINDOWS_USERNAME*. The following command will also prompt you to create a password for the administrator credentials for the Windows Server nodes on your cluster. Alternatively, you can use the *windows-admin-password* parameter and specify your own value there.
-
-```azurecli-interactive
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 2 \
- --enable-addons monitoring \
- --generate-ssh-keys \
- --windows-admin-username $WINDOWS_USERNAME \
- --vm-set-type VirtualMachineScaleSets \
- --kubernetes-version 1.20.7 \
- --network-plugin azure
-```
-
-> [!NOTE]
-> If you get a password validation error, verify the password you set meets the [Windows Server password requirements][windows-server-password]. If your password meets the requirements, try creating your resource group in another region. Then try creating the cluster with the new resource group.
->
-> If you do not specify an administrator username and password when setting `--vm-set-type VirtualMachineScaleSets` and `--network-plugin azure`, the username is set to *azureuser* and the password is set to a random value.
->
-> The administrator username can't be changed, but you can change the administrator password your AKS cluster uses for Windows Server nodes using `az aks update`. For more details, see [Windows Server node pools FAQ][win-faq-change-admin-creds].
-
-After a few minutes, the command completes and returns JSON-formatted information about the cluster. Occasionally the cluster can take longer than a few minutes to provision. Allow up to 10 minutes in these cases.
-
-## Add a Windows Server node pool
-
-By default, an AKS cluster is created with a node pool that can run Linux containers. Use the `az aks nodepool add` command to add a node pool that can run Windows Server containers alongside the Linux node pool.
-
-```azurecli
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --os-type Windows \
- --name npwin \
- --node-count 1
-```
-
-The above command creates a new node pool named *npwin* and adds it to *myAKSCluster*. It also uses the default subnet in the default virtual network created when running `az aks create`.
-
-## Optional: Using `containerd` with Windows Server node pools
-
-Beginning with Kubernetes version 1.20, you can specify `containerd` as the container runtime for Windows Server 2019 node pools. From Kubernetes 1.23, `containerd` is the default container runtime for Windows.
--
-> [!IMPORTANT]
-> When using `containerd` with Windows Server 2019 node pools:
-> - Both the control plane and Windows Server 2019 node pools must use Kubernetes version 1.20 or greater.
-> - When creating or updating a node pool to run Windows Server containers, the default value for *node-vm-size* is *Standard_D2s_v3*, which was the minimum recommended size for Windows Server 2019 node pools prior to Kubernetes 1.20. The minimum recommended size for Windows Server 2019 node pools using `containerd` is *Standard_D4s_v3*. When setting the *node-vm-size* parameter, check the list of [restricted VM sizes][restricted-vm-sizes].
-> - It is highly recommended that you use [taints or labels][aks-taints] with your Windows Server 2019 node pools running `containerd` and tolerations or node selectors with your deployments to guarantee your workloads are scheduled correctly.
--
-### Add a Windows Server node pool with `containerd`
-
-Use the `az aks nodepool add` command to add an additional node pool that can run Windows Server containers with the `containerd` runtime.
-
-> [!NOTE]
-> If you do not specify the *WindowsContainerRuntime=containerd* custom header, the node pool will use Docker as the container runtime.
-
-```azurecli
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --os-type Windows \
- --name npwcd \
- --node-vm-size Standard_D4s_v3 \
- --kubernetes-version 1.20.5 \
- --aks-custom-headers WindowsContainerRuntime=containerd \
- --node-count 1
-```
-
-The above command creates a new Windows Server node pool named *npwcd* that uses `containerd` as the runtime, and adds it to *myAKSCluster*. It also uses the default subnet in the default virtual network created when running `az aks create`.
-
-### Upgrade an existing Windows Server node pool to `containerd`
-
-Use the `az aks nodepool upgrade` command to upgrade a specific node pool from Docker to `containerd`.
-
-```azurecli
-az aks nodepool upgrade \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name npwd \
- --kubernetes-version 1.20.7 \
- --aks-custom-headers WindowsContainerRuntime=containerd
-```
-
-The above command upgrades a node pool named *npwd* to the `containerd` runtime.
-
-To upgrade all existing node pools in a cluster to use the `containerd` runtime for all Windows Server node pools:
-
-```azurecli
-az aks upgrade \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --kubernetes-version 1.20.7 \
- --aks-custom-headers WindowsContainerRuntime=containerd
-```
-
-The above command upgrades all Windows Server node pools in the *myAKSCluster* to use the `containerd` runtime.
-
-> [!NOTE]
-> After upgrading all existing Windows Server node pools to use the `containerd` runtime, Docker will still be the default runtime when adding new Windows Server node pools.
-
-## Connect to the cluster
-
-To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli][az-aks-install-cli] command:
-
-```azurecli
-az aks install-cli
-```
-
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
-
-```azurecli-interactive
-az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
-```
-
-To verify the connection to your cluster, use the [kubectl get][kubectl-get] command to return a list of the cluster nodes.
-
-```console
-kubectl get nodes -o wide
-```
-
-The following example output shows all the nodes in the cluster. Make sure that the status of all nodes is *Ready*:
-
-```output
-NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
-aks-nodepool1-12345678-vmss000000 Ready agent 34m v1.20.7 10.240.0.4 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aks-nodepool1-12345678-vmss000001 Ready agent 34m v1.20.7 10.240.0.35 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aksnpwcd123456 Ready agent 9m6s v1.20.7 10.240.0.97 <none> Windows Server 2019 Datacenter 10.0.17763.1879 containerd://1.4.4+unknown
-aksnpwin987654 Ready agent 25m v1.20.7 10.240.0.66 <none> Windows Server 2019 Datacenter 10.0.17763.1879 docker://19.3.14
-```
-
-> [!NOTE]
-> The container runtime for each node pool is shown under *CONTAINER-RUNTIME*. Notice *aksnpwin987654* begins with `docker://` which means it is using Docker for the container runtime. Notice *aksnpwcd123456* begins with `containerd://` which means it is using `containerd` for the container runtime.
-
-## Run the application
-
-A Kubernetes manifest file defines a desired state for the cluster, such as what container images to run. In this article, a manifest is used to create all objects needed to run the ASP.NET sample application in a Windows Server container. This manifest includes a [Kubernetes deployment][kubernetes-deployment] for the ASP.NET sample application and an external [Kubernetes service][kubernetes-service] to access the application from the internet.
-
-The ASP.NET sample application is provided as part of the [.NET Framework Samples][dotnet-samples] and runs in a Windows Server container. AKS requires Windows Server containers to be based on images of *Windows Server 2019* or greater. The Kubernetes manifest file must also define a [node selector][node-selector] to tell your AKS cluster to run your ASP.NET sample application's pod on a node that can run Windows Server containers.
-
-Create a file named `sample.yaml` and copy in the following YAML definition. If you use the Azure Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical system:
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: sample
- labels:
- app: sample
-spec:
- replicas: 1
- template:
- metadata:
- name: sample
- labels:
- app: sample
- spec:
- nodeSelector:
- "kubernetes.io/os": windows
- containers:
- - name: sample
- image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
- resources:
- limits:
- cpu: 1
- memory: 800M
- requests:
- cpu: .1
- memory: 300M
- ports:
- - containerPort: 80
- selector:
- matchLabels:
- app: sample
----
-apiVersion: v1
-kind: Service
-metadata:
- name: sample
-spec:
- type: LoadBalancer
- ports:
- - protocol: TCP
- port: 80
- selector:
- app: sample
-```
-
-Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-
-```console
-kubectl apply -f sample.yaml
-```
-
-The following example output shows the Deployment and Service created successfully:
-
-```output
-deployment.apps/sample created
-service/sample created
-```
-
-## Test the application
-
-When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete. Occasionally the service can take longer than a few minutes to provision. Allow up to 10 minutes in these cases.
-
-To monitor progress, use the [kubectl get service][kubectl-get] command with the `--watch` argument.
-
-```console
-kubectl get service sample --watch
-```
-
-Initially the *EXTERNAL-IP* for the *sample* service is shown as *pending*.
-
-```output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-sample LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
-```
-
-When the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
-
-```output
-sample LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
-```
-
-To see the sample app in action, open a web browser to the external IP address of your service.
-
-![Image of browsing to ASP.NET sample application](media/windows-container/asp-net-sample-app.png)
-
-> [!Note]
-> If you receive a connection timeout when trying to load the page, verify the sample app is ready with the command `kubectl get pods --watch`. Sometimes the Windows container won't be started by the time your external IP address is available.
-
-## Delete cluster
-
-When the cluster is no longer needed, use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
-
-```azurecli-interactive
-az group delete --name myResourceGroup --yes --no-wait
-```
-
-> [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. If you used a managed identity, the identity is managed by the platform and does not require removal.
-
-## Next steps
-
-In this article, you deployed a Kubernetes cluster and deployed an ASP.NET sample application in a Windows Server container to it.
-
-To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
-
-> [!div class="nextstepaction"]
-> [AKS tutorial][aks-tutorial]
-
-<!-- LINKS - external -->
-[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[node-selector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
-[dotnet-samples]: https://hub.docker.com/_/microsoft-dotnet-framework-samples/
-[azure-cni]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md
-
-<!-- LINKS - internal -->
-[kubernetes-concepts]: concepts-clusters-workloads.md
-[aks-monitor]: ../azure-monitor/containers/container-insights-onboard.md
-[aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
-[aks-taints]: use-multiple-node-pools.md#specify-a-taint-label-or-tag-for-a-node-pool
-[az-aks-browse]: /cli/azure/aks#az_aks_browse
-[az-aks-create]: /cli/azure/aks#az_aks_create
-[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
-[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-group-create]: /cli/azure/group#az_group_create
-[az-group-delete]: /cli/azure/group#az_group_delete
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[azure-cli-install]: /cli/azure/install-azure-cli
-[azure-cni-about]: concepts-network.md#azure-cni-advanced-networking
-[sp-delete]: kubernetes-service-principal.md#additional-considerations
-[azure-portal]: https://portal.azure.com
-[kubernetes-deployment]: concepts-clusters-workloads.md#deployments-and-yaml-manifests
-[kubernetes-service]: concepts-network.md#services
-[restricted-vm-sizes]: quotas-skus-regions.md#restricted-vm-sizes
-[use-advanced-networking]: configure-azure-cni.md
-[aks-support-policies]: support-policies.md
-[aks-faq]: faq.md
-[az-extension-add]: /cli/azure/extension#az-extension-add
-[az-extension-update]: /cli/azure/extension#az-extension-update
-[windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference
-[win-faq-change-admin-creds]: windows-faq.md#how-do-i-change-the-administrator-password-for-windows-server-nodes-on-my-cluster
aks Windows Container Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-container-powershell.md
- Title: Create a Windows Server container on an AKS cluster by using PowerShell
-description: Learn how to quickly create a Kubernetes cluster, deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using PowerShell.
-- Previously updated : 03/12/2021---
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
--
-# Create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using PowerShell
-
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and
-manage clusters. In this article, you deploy an AKS cluster using PowerShell. You also deploy an
-`ASP.NET` sample application in a Windows Server container to the cluster.
-
-![Image of browsing to ASP.NET sample application](media/windows-container-powershell/asp-net-sample-app.png)
-
-This article assumes a basic understanding of Kubernetes concepts. For more information, see
-[Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
-
-If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
-module and connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information
-about installing the Az PowerShell module, see
-[Install Azure PowerShell][install-azure-powershell]. You also must install the Az.Aks PowerShell module:
-
-```azurepowershell-interactive
-Install-Module Az.Aks
-```
--
-If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources
-should be billed. Select a specific subscription ID using the
-[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
-
-```azurepowershell-interactive
-Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
-```
-
-## Limitations
-
-The following limitations apply when you create and manage AKS clusters that support multiple node pools:
-
-* You can't delete the first node pool.
-
-The following additional limitations apply to Windows Server node pools:
-
-* The AKS cluster can have a maximum of 10 node pools.
-* The AKS cluster can have a maximum of 100 nodes in each node pool.
-* The Windows Server node pool name has a limit of 6 characters.
-
-## Create a resource group
-
-An [Azure resource group](../azure-resource-manager/management/overview.md)
-is a logical group in which Azure resources are deployed and managed. When you create a resource
-group, you are asked to specify a location. This location is where resource group metadata is
-stored; it's also where your resources run in Azure if you don't specify another region during
-resource creation. Create a resource group using the [New-AzResourceGroup][new-azresourcegroup]
-cmdlet.
-
-The following example creates a resource group named **myResourceGroup** in the **eastus** location.
-
-> [!NOTE]
-> This article uses PowerShell syntax for the commands in this tutorial. If you are using Azure Cloud
-> Shell, ensure that the dropdown in the upper-left of the Cloud Shell window is set to **PowerShell**.
-
-```azurepowershell-interactive
-New-AzResourceGroup -Name myResourceGroup -Location eastus
-```
-
-The following example output shows the resource group created successfully:
-
-```plaintext
-ResourceGroupName : myResourceGroup
-Location : eastus
-ProvisioningState : Succeeded
-Tags :
-ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup
-```
-
-## Create an AKS cluster
-
-Use the `ssh-keygen` command-line utility to generate an SSH key pair. For more details, see
-[Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md).
-
-To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to
-use the [Azure CNI][azure-cni-about] (advanced) network plugin. For more
-detailed information to help plan out the required subnet ranges and network considerations, see
-[configure Azure CNI networking][use-advanced-networking]. Use the [New-AzAksCluster][new-azakscluster] cmdlet
-below to create an AKS cluster named **myAKSCluster**. The following example creates the necessary
-network resources if they don't exist.
-
-> [!NOTE]
-> To ensure your cluster operates reliably, you should run at least two nodes in the default
-> node pool.
-
-```azurepowershell-interactive
-$Username = Read-Host -Prompt 'Please create a username for the administrator credentials on your Windows Server containers: '
-$Password = Read-Host -Prompt 'Please create a password for the administrator credentials on your Windows Server containers: ' -AsSecureString
-New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -NetworkPlugin azure -NodeVmSetType VirtualMachineScaleSets -WindowsProfileAdminUserName $Username -WindowsProfileAdminUserPassword $Password
-```
-
-> [!Note]
-> If you are unable to create the AKS cluster because the version is not supported in this region
-> then you can use the `Get-AzAksVersion -Location eastus` command to find the supported version
-> list for this region.
-
-After a few minutes, the command completes and returns information about the cluster. Occasionally
-the cluster can take longer than a few minutes to provision. Allow up to 10 minutes in these cases.
-
-## Add a Windows Server node pool
-
-By default, an AKS cluster is created with a node pool that can run Linux containers. Use the
-`New-AzAksNodePool` cmdlet to add a node pool that can run Windows Server containers alongside the
-Linux node pool.
-
-```azurepowershell-interactive
-New-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -VmSetType VirtualMachineScaleSets -OsType Windows -Name npwin
-```
-
-The above command creates a new node pool named **npwin** and adds it to **myAKSCluster**. When
-creating a node pool to run Windows Server containers, the default value for **VmSize** is
-**Standard_D2s_v3**. If you choose to set the **VmSize** parameter, check the list of
-[restricted VM sizes][restricted-vm-sizes]. The minimum recommended size is **Standard_D2s_v3**. The
-previous command also uses the default subnet in the default vnet created when running `New-AzAksCluster`.
-
-## Connect to the cluster
-
-To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If
-you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the
-`Install-AzAksKubectl` cmdlet:
-
-```azurepowershell-interactive
-Install-AzAksKubectl
-```
-
-To configure `kubectl` to connect to your Kubernetes cluster, use the
-[Import-AzAksCredential][import-azakscredential] cmdlet. This command
-downloads credentials and configures the Kubernetes CLI to use them.
-
-```azurepowershell-interactive
-Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
-```
-
-To verify the connection to your cluster, use the [kubectl get][kubectl-get] command to return a
-list of the cluster nodes.
-
-```azurepowershell-interactive
-kubectl get nodes
-```
-
-The following example output shows all the nodes in the cluster. Make sure that the status of all
-nodes is **Ready**:
-
-```plaintext
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-12345678-vmssfedcba Ready agent 13m v1.16.7
-aksnpwin987654 Ready agent 108s v1.16.7
-```
-
-## Run the application
-
-A Kubernetes manifest file defines a desired state for the cluster, such as what container images to
-run. In this article, a manifest is used to create all objects needed to run the ASP.NET sample
-application in a Windows Server container. This manifest includes a
-[Kubernetes deployment][kubernetes-deployment] for the ASP.NET sample application and an external
-[Kubernetes service][kubernetes-service] to access the application from the internet.
-
-The ASP.NET sample application is provided as part of the [.NET Framework Samples][dotnet-samples]
-and runs in a Windows Server container. AKS requires Windows Server containers to be based on images
-of **Windows Server 2019** or greater. The Kubernetes manifest file must also define a
-[node selector][node-selector] to tell your AKS cluster to run your ASP.NET sample application's pod
-on a node that can run Windows Server containers.
-
-Create a file named `sample.yaml` and copy in the following YAML definition. If you use the Azure
-Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical
-system:
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: sample
- labels:
- app: sample
-spec:
- replicas: 1
- template:
- metadata:
- name: sample
- labels:
- app: sample
- spec:
- nodeSelector:
- "kubernetes.io/os": windows
- containers:
- - name: sample
- image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
- resources:
- limits:
- cpu: 1
- memory: 800M
- requests:
- cpu: .1
- memory: 300M
- ports:
- - containerPort: 80
- selector:
- matchLabels:
- app: sample
----
-apiVersion: v1
-kind: Service
-metadata:
- name: sample
-spec:
- type: LoadBalancer
- ports:
- - protocol: TCP
- port: 80
- selector:
- app: sample
-```
-
-Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your
-YAML manifest:
-
-```azurepowershell-interactive
-kubectl apply -f sample.yaml
-```
-
-The following example output shows the Deployment and Service created successfully:
-
-```plaintext
-deployment.apps/sample created
-service/sample created
-```
-
-## Test the application
-
-When the application runs, a Kubernetes service exposes the application frontend to the internet.
-This process can take a few minutes to complete. Occasionally the service can take longer than a few
-minutes to provision. Allow up to 10 minutes in these cases.
-
-To monitor progress, use the [kubectl get service][kubectl-get] command with the `--watch` argument.
-
-```azurepowershell-interactive
-kubectl get service sample --watch
-```
-
-Initially the **EXTERNAL-IP** for the **sample** service is shown as **pending**.
-
-```plaintext
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-sample LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
-```
-
-When the **EXTERNAL-IP** address changes from **pending** to an actual public IP address, use `CTRL-C`
-to stop the `kubectl` watch process. The following example output shows a valid public IP address
-assigned to the service:
-
-```plaintext
-sample LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
-```
-
-To see the sample app in action, open a web browser to the external IP address of your service.
-
-![Image of browsing to ASP.NET sample application](media/windows-container-powershell/asp-net-sample-app.png)
-
-> [!Note]
-> If you receive a connection timeout when trying to load the page, verify that the sample
-> app is ready with the following command `kubectl get pods --watch`. Sometimes the Windows
-> container will not be started by the time your external IP address is available.
-
-## Delete cluster
-
-When the cluster is no longer needed, use the
-[Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove
-the resource group, container service, and all related resources.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
-
-> [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster
-> is not removed. For steps on how to remove the service principal, see
-> [AKS service principal considerations and deletion][sp-delete]. If you used a managed identity,
-> the identity is managed by the platform and does not require removal.
-
-## Next steps
-
-In this article, you deployed a Kubernetes cluster and deployed an `ASP.NET` sample application in a
-Windows Server container to it.
-
-To learn more about AKS, and walk through a complete code to deployment example, continue to the
-Kubernetes cluster tutorial.
-
-> [!div class="nextstepaction"]
-> [AKS tutorial][aks-tutorial]
-
-<!-- LINKS - external -->
-[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[dotnet-samples]: https://hub.docker.com/_/microsoft-dotnet-framework-samples/
-[node-selector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-
-<!-- LINKS - internal -->
-[kubernetes-concepts]: concepts-clusters-workloads.md
-[install-azure-powershell]: /powershell/azure/install-az-ps
-[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup
-[azure-cni-about]: concepts-network.md#azure-cni-advanced-networking
-[use-advanced-networking]: configure-azure-cni.md
-[new-azakscluster]: /powershell/module/az.aks/new-azakscluster
-[restricted-vm-sizes]: quotas-skus-regions.md#restricted-vm-sizes
-[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
-[kubernetes-deployment]: concepts-clusters-workloads.md#deployments-and-yaml-manifests
-[kubernetes-service]: concepts-network.md#services
-[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
-[sp-delete]: kubernetes-service-principal.md#additional-considerations
-[aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
To get started with Windows Server containers in AKS, see [Create a node pool th
[azure-network-models]: concepts-network.md#azure-virtual-networks [configure-azure-cni]: configure-azure-cni.md [nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
-[windows-node-cli]: windows-container-cli.md
+[windows-node-cli]: ./learn/quick-windows-container-deploy-cli.md
[aks-support-policies]: support-policies.md [aks-faq]: faq.md [upgrade-cluster]: upgrade-cluster.md
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
In the following example, request forwarding is retried up to ten times using an
```
+### Example
+
+In the following example, sending a request to a URL other than the defined backend is retried up to three times if the connection is dropped or times out, or if the request results in a server-side error. Since `first-fast-retry` is set to true, the first retry is executed immediately after the initial request fails. Note that `send-request` must set `ignore-error` to true so that `response-variable-name` is null in the event of an error.
+
+```xml
+
+<retry
+ condition="@(context.Variables["response"] == null || ((IResponse)context.Variables["response"]).StatusCode >= 500)"
+ count="3"
+ interval="1"
+ first-fast-retry="true">
+ <send-request
+ mode="new"
+ response-variable-name="response"
+ timeout="3"
+ ignore-error="true">
+ <set-url>https://api.contoso.com/products/5</set-url>
+ <set-method>GET</set-method>
+ </send-request>
+</retry>
+
+```
+ ### Elements | Element | Description | Required |
api-management How To Deploy Self Hosted Gateway Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md
This article provides the steps for deploying self-hosted gateway component of A
## Prerequisites - [Create an Azure API Management instance](get-started-create-service-instance.md)-- [Create an Azure Kubernetes cluster](../aks/kubernetes-walkthrough-portal.md)
+- Create an Azure Kubernetes cluster [using the Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md).
- [Provision a gateway resource in your API Management instance](api-management-howto-provision-self-hosted-gateway.md). ## Deploy the self-hosted gateway to AKS
api-management How To Deploy Self Hosted Gateway Kubernetes Opentelemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md
You learn how to:
## Prerequisites - [Create an Azure API Management instance](get-started-create-service-instance.md)-- [Create an Azure Kubernetes cluster](../aks/kubernetes-walkthrough-portal.md)
+- Create an Azure Kubernetes cluster [using the Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md).
- [Provision a self-hosted gateway resource in your API Management instance](api-management-howto-provision-self-hosted-gateway.md). - ## Introduction to OpenTelemetry [OpenTelemetry](https://opentelemetry.io/) is a set of open-source tools and frameworks for logging, metrics, and tracing in a vendor-neutral way.
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md
The app must be running in the **Standard**, **Premium**, or **Isolated** tier i
![Configuration source](./media/web-sites-staged-publishing/ConfigurationSource1.png) You can clone a configuration from any existing slot; a CLI sketch follows these steps. Settings that can be cloned include app settings, connection strings, language framework versions, web sockets, HTTP version, and platform bitness.
+
+ > [!NOTE]
+ > Currently, VNet and private endpoint configurations are not cloned across slots.
+ >
4. After the slot is added, select **Close** to close the dialog box. The new slot is now shown on the **Deployment slots** page. By default, **Traffic %** is set to 0 for the new slot, with all customer traffic routed to the production slot.
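The same clone-on-create behavior is available from the Azure CLI; a sketch, assuming an app named *myWebApp* in resource group *myResourceGroup*:

```azurecli
az webapp deployment slot create \
    --resource-group myResourceGroup \
    --name myWebApp \
    --slot staging \
    --configuration-source myWebApp
```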
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
App Service Environment v3 requires the subnet it's in to have a single delegati
```azurecli az network vnet subnet update -g $ASE_RG -n <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments ```-
-![subnet delegation sample](./media/migration/subnet-delegation.png)
## 6. Migrate to App Service Environment v3
az appservice ase show --name $ASE_NAME --resource-group $ASE_RG
From the [Azure portal](https://portal.azure.com), navigate to the **Migration** page for the App Service Environment you'll be migrating. You can do this by clicking on the banner at the top of the **Overview** page for your App Service Environment or by clicking the **Migration** item on the left-hand side. ![migration access points](./media/migration/portal-overview.png) On the migration page, the platform will validate if migration is supported for your App Service Environment. If your environment isn't supported for migration, a banner will appear at the top of the page and include an error message with a reason. See the [troubleshooting](migrate.md#troubleshooting) section for descriptions of the error messages you may see if you aren't eligible for migration. If your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state, you won't be able to use the migration feature. If your environment [won't be supported for migration with the migration feature](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
-![migration not supported sample](./media/migration/migration-not-supported.png)
If migration is supported for your App Service Environment, you'll be able to proceed to the next step in the process. The migration page will guide you through the series of steps to complete the migration.
-![migration page sample](./media/migration/migration-ux-pre.png)
## 2. Generate IP addresses for your new App Service Environment v3 Under **Get new IP addresses**, confirm you understand the implications and start the process. This step will take about 15 minutes to complete. You won't be able to scale or make changes to your existing App Service Environment during this time. If after 15 minutes you don't see your new IP addresses, select refresh as shown in the sample to allow your new IP addresses to appear.
-![pre-migration request to refresh](./media/migration/pre-migration-refresh.png)
## 3. Update dependent resources with new IPs When the previous step finishes, you'll be shown the IP addresses for your new App Service Environment v3. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. Don't move on to the next step until you confirm that you have made these updates.
-![sample IPs](./media/migration/ip-sample.png)
## 4. Delegate your App Service Environment subnet App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You'll need to confirm your subnet is delegated properly and/or update the delegation if needed before migrating. A link to your subnet is given so that you can confirm and update as needed.
-![ux subnet delegation sample](./media/migration/subnet-delegation-ux.png)
## 5. Migrate to App Service Environment v3
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 4/27/2022 Last updated : 4/29/2022
If your App Service Environment doesn't pass the validation checks or you try to
|Migration to ASEv3 is not allowed for this ASE|You won't be able to migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). | |Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](/azure/azure-resource-manager/management/azure-subscription-service-limits#app-service-limits) has been met. |Remove unneeded environments or contact support to review your options. | |`<ZoneRedundant><DedicatedHosts><ASEv3/ASE>` is not available in this location|You'll see this error if you're trying to migrate an App Service Environment in a region that doesn't support one of your requested features. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
+|Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](using-an-ase.md#upgrade-preference) from the Azure portal. |Wait until the upgrade finishes and then migrate. |
## Overview of the migration process using the migration feature
There's no cost to migrate your App Service Environment. You'll stop being charg
> [App Service Environment v3 Networking](networking.md) > [!div class="nextstepaction"]
-> [Using an App Service Environment v3](using.md)
+> [Using an App Service Environment v3](using.md)
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
After your app integrates with your virtual network, it uses the same DNS server
There are some limitations with using regional virtual network integration:
-* The feature is available from all App Service deployments in Premium v2 and Premium v3. It's also available in Basic and Standard tier but only from newer App Service deployments. If you're on an older deployment, you can only use the feature from a Premium v2 App Service plan. If you want to make sure you can use the feature in a Standard App Service plan, create your app in a Premium v3 App Service plan. Those plans are only supported on our newest deployments. You can scale down if you want after the plan is created.
+* The feature is available from all App Service deployments in Premium v2 and Premium v3. It's also available in Basic and Standard tier but only from newer App Service deployments. If you're on an older deployment, you can only use the feature from a Premium v2 App Service plan. If you want to make sure you can use the feature in a Basic or Standard App Service plan, create your app in a Premium v3 App Service plan. Those plans are only supported on our newest deployments. You can scale down if you want after the plan is created.
* The feature can't be used by Isolated plan apps that are in an App Service Environment.
* You can't reach resources across peering connections with classic virtual networks.
* The feature requires an unused subnet that's an IPv4 `/28` block or larger in an Azure Resource Manager virtual network.
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
[App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to Azure databases, including: -- [Azure SQL Database](/azure/sql-database/)
+- [Azure SQL Database](/azure/azure-sql/database/)
- [Azure Database for MySQL](/azure/mysql/) - [Azure Database for PostgreSQL](/azure/postgresql/)
Prepare your environment for the Azure CLI.
First, enable Azure Active Directory authentication to the Azure database by assigning an Azure AD user as the administrator of the server. For the scenario in the tutorial, you'll use this user to connect to your Azure database from the local development environment. Later, you set up the managed identity for your App Service app to connect from within Azure. > [!NOTE]
-> This user is different from the Microsoft account you used to sign up for your Azure subscription. It must be a user that you created, imported, synced, or invited into Azure AD. For more information on allowed Azure AD users, see [Azure AD features and limitations in SQL Database](../azure-sql/database/authentication-aad-overview.md#azure-ad-features-and-limitations).
+> This user is different from the Microsoft account you used to sign up for your Azure subscription. It must be a user that you created, imported, synced, or invited into Azure AD. For more information on allowed Azure AD users, see [Azure AD features and limitations in SQL Database](/azure/azure-sql/database/authentication-aad-overview#azure-ad-features-and-limitations).
1. If your Azure AD tenant doesn't have a user yet, create one by following the steps at [Add or delete users using Azure Active Directory](../active-directory/fundamentals/add-users-azure-active-directory.md).
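The `az sql server ad-admin create` command that follows references an `$azureaduser` variable holding the user's object ID. One way to populate it, sketched here under the assumption that the signed-in CLI user is the Azure AD user you want as administrator:

```azurecli
# Store the object ID of the signed-in Azure AD user
# (on Azure CLI versions before 2.37, the property is named objectId instead of id)
azureaduser=$(az ad signed-in-user show --query id --output tsv)
```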
First, enable Azure Active Directory authentication to the Azure database by ass
az sql server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name ADMIN --object-id $azureaduser ```
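To confirm the administrator was set, you can list the server's Azure AD admins. A quick sketch, assuming the same placeholder group and server names as above:

```azurecli
az sql server ad-admin list --resource-group <group-name> --server-name <server-name>
```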
- For more information on adding an Active Directory administrator, see [Provision an Azure Active Directory administrator for your server](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance)
+ For more information on adding an Active Directory administrator, see [Provision an Azure Active Directory administrator for your server](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-managed-instance)
# [Azure Database for MySQL](#tab/mysql)
For Azure Database for MySQL and Azure Database for PostgreSQL, the database use
connection.connect(); ```
- The [tedious](https://tediousjs.github.io/tedious/) library also has an authentication type `azure-active-directory-msi-app-service`, which doesn't require you to retrieve the token yourself, but the use of `DefaultAzureCredential` in this example works both in App Service and in your local development environment. For more information, see [Quickstart: Use Node.js to query a database in Azure SQL Database or Azure SQL Managed Instance](../azure-sql/database/connect-query-nodejs.md)
+ The [tedious](https://tediousjs.github.io/tedious/) library also has an authentication type `azure-active-directory-msi-app-service`, which doesn't require you to retrieve the token yourself, but the use of `DefaultAzureCredential` in this example works both in App Service and in your local development environment. For more information, see [Quickstart: Use Node.js to query a database in Azure SQL Database or Azure SQL Managed Instance](/azure/azure-sql/database/connect-query-nodejs)
# [Azure Database for MySQL](#tab/mysql)
app-service Tutorial Networking Isolate Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-networking-isolate-vnet.md
# Tutorial: Isolate back-end communication in Azure App Service with Virtual Network integration
-In this article you will configure an App Service app with secure, network-isolated communication to backend services. The example scenario used is in [Tutorial: Secure Cognitive Service connection from App Service using Key Vault](tutorial-connect-msi-key-vault.md). When you're finished, you have an App Service app that accesses both Key Vault and Cognitive Services through an [Azure virtual network](../virtual-network/virtual-networks-overview.md) (VNet), and no other traffic is allowed to access those back-end resources. All traffic will be isolated within your VNet using [VNet integration](web-sites-integrate-with-vnet.md) and [private endpoints](../private-link/private-endpoint-overview.md).
+In this article, you'll configure an App Service app with secure, network-isolated communication to back-end services. The example scenario used is in [Tutorial: Secure Cognitive Service connection from App Service using Key Vault](tutorial-connect-msi-key-vault.md). When you're finished, you'll have an App Service app that accesses both Key Vault and Cognitive Services through an [Azure virtual network](../virtual-network/virtual-networks-overview.md), and no other traffic is allowed to access those back-end resources. All traffic will be isolated within your virtual network using [virtual network integration](web-sites-integrate-with-vnet.md) and [private endpoints](../private-link/private-endpoint-overview.md).
-As a multi-tenanted service, outbound network traffic from your App Service app to other Azure services shares the same environment with other apps or even other subscriptions. While the traffic itself can be encrypted, certain scenarios may require an extra level of security by isolating back-end communication from other network traffic. These scenarios are typically accessible to large enterprises with a high level of expertise, but App Service puts it within reach with VNet integration.
+As a multi-tenanted service, outbound network traffic from your App Service app to other Azure services shares the same environment with other apps or even other subscriptions. While the traffic itself can be encrypted, certain scenarios may require an extra level of security by isolating back-end communication from other network traffic. Such isolation was typically within reach only for large enterprises with deep networking expertise, but App Service makes it accessible with virtual network integration.
![scenario architecture](./media/tutorial-networking-isolate-vnet/architecture.png) With this architecture: - Public traffic to the back-end services is blocked.-- Outbound traffic from App Service is routed to the VNet and can reach the back-end services.
+- Outbound traffic from App Service is routed to the virtual network and can reach the back-end services.
- App Service is able to perform DNS resolution to the back-end services through the private DNS zones. What you will learn: > [!div class="checklist"]
-> * Create a VNet and subnets for App Service VNet integration
+> * Create a virtual network and subnets for App Service virtual network integration
> * Create private DNS zones > * Create private endpoints
-> * Configure VNet integration in App Service
+> * Configure virtual network integration in App Service
## Prerequisites
The tutorial continues to use the following environment variables from the previ
vaultName=<vault-name> ```
-## Create VNet and subnets
+## Create virtual network and subnets
-1. Create a VNet. Replace *\<virtual-network-name>* with a unique name.
+1. Create a virtual network. Replace *\<virtual-network-name>* with a unique name.
```azurecli-interactive # Save vnet name as variable for convenience
The tutorial continues to use the following environment variables from the previ
az network vnet create --resource-group $groupName --location $region --name $vnetName --address-prefixes 10.0.0.0/16 ```
-1. Create a subnet for the App Service VNet integration.
+1. Create a subnet for the App Service virtual network integration.
```azurecli-interactive az network vnet subnet create --resource-group $groupName --vnet-name $vnetName --name vnet-integration-subnet --address-prefixes 10.0.0.0/24 --delegations Microsoft.Web/serverfarms ```
- For App Service, the VNet integration subnet is recommended to have a CIDR block of `/26` at a minimum (see [VNet integration subnet requirements](overview-vnet-integration.md#subnet-requirements)). `/24` is more than sufficient. `--delegations Microsoft.Web/serverfarms` specifies that the subnet is [delegated for App Service VNet integration](../virtual-network/subnet-delegation-overview.md).
+ For App Service, the virtual network integration subnet is recommended to have a CIDR block of `/26` at a minimum (see [Virtual network integration subnet requirements](overview-vnet-integration.md#subnet-requirements)). `/24` is more than sufficient. `--delegations Microsoft.Web/serverfarms` specifies that the subnet is [delegated for App Service virtual network integration](../virtual-network/subnet-delegation-overview.md).
1. Create another subnet for the private endpoints.
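   As a sketch of that step, the subnet name `private-endpoint-subnet` and the `10.0.1.0/24` prefix below are illustrative assumptions; note that private endpoint network policies are disabled on the subnet, and the flag for doing so may be named differently on newer CLI versions:

   ```azurecli-interactive
   az network vnet subnet create --resource-group $groupName --vnet-name $vnetName --name private-endpoint-subnet --address-prefixes 10.0.1.0/24 --disable-private-endpoint-network-policies true
   ```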
Because your Key Vault and Cognitive Services resources will sit behind [private
For more information on these settings, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration)
-1. Link the private DNS zones to the VNet.
+1. Link the private DNS zones to the virtual network.
```azurecli-interactive az network private-dns link vnet create --resource-group $groupName --name cognitiveservices-zonelink --zone-name privatelink.cognitiveservices.azure.com --virtual-network $vnetName --registration-enabled False
Because your Key Vault and Cognitive Services resources will sit behind [private
## Create private endpoints
-1. In the private endpoint subnet of your VNet, create a private endpoint for your key vault.
+1. In the private endpoint subnet of your virtual network, create a private endpoint for your key vault.
```azurecli-interactive # Get Cognitive Services resource ID
Because your Key Vault and Cognitive Services resources will sit behind [private
> [!NOTE] > Again, you can observe the behavior change in the sample app. You can no longer load the app because it can no longer access the key vault references. The app has lost its connectivity to the key vault through the shared networking.
-The two private endpoints are only accessible to clients inside the VNet you created. You can't even access the secrets in the key vault through **Secrets** page in the Azure portal, because the portal accesses them through the public internet (see [Manage the locked down resources](#manage-the-locked-down-resources)).
+The two private endpoints are only accessible to clients inside the virtual network you created. You can't even access the secrets in the key vault through the **Secrets** page in the Azure portal, because the portal accesses them through the public internet (see [Manage the locked down resources](#manage-the-locked-down-resources)).
-## Configure VNet integration in your app
+## Configure virtual network integration in your app
-1. Scale the app up to **Standard** tier. VNet integration requires **Standard** tier or above (see [Integrate your app with an Azure virtual network](overview-vnet-integration.md)).
+1. Scale the app up to a supported pricing tier (see [Integrate your app with an Azure virtual network](overview-vnet-integration.md)).
```azurecli-interactive az appservice plan update --name $appName --resource-group $groupName --sku S1
The two private endpoints are only accessible to clients inside the VNet you cre
az webapp update --resource-group $groupName --name $appName --https-only ```
-1. Enable VNet integration on your app.
+1. Enable virtual network integration on your app.
```azurecli-interactive az webapp vnet-integration add --resource-group $groupName --name $appName --vnet $vnetName --subnet vnet-integration-subnet ```
- VNet integration allows outbound traffic to flow directly into the VNet. By default, only local IP traffic defined in [RFC-1918](https://tools.ietf.org/html/rfc1918#section-3) is routed to the VNet, which is what you need for the private endpoints. To route all your traffic to the VNet, see [Manage virtual network integration routing](configure-vnet-integration-routing.md). Routing all traffic can also be used if you want to route internet traffic through your VNet, such as through an [Azure VNet NAT](../virtual-network/nat-gateway/nat-overview.md) or an [Azure Firewall](../firewall/overview.md).
+ Virtual network integration allows outbound traffic to flow directly into the virtual network. By default, only local IP traffic defined in [RFC-1918](https://tools.ietf.org/html/rfc1918#section-3) is routed to the virtual network, which is what you need for the private endpoints. To route all your traffic to the virtual network, see [Manage virtual network integration routing](configure-vnet-integration-routing.md). Routing all traffic can also be used if you want to route internet traffic through your virtual network, such as through an [Azure Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) or an [Azure Firewall](../firewall/overview.md).
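   If you do want all outbound traffic routed into the virtual network, a hedged sketch follows. The `--vnet-route-all-enabled` flag assumes a reasonably recent Azure CLI; on older versions, the `WEBSITE_VNET_ROUTE_ALL=1` app setting served the same purpose:

   ```azurecli-interactive
   # Route all outbound traffic from the app into the virtual network
   az webapp config set --resource-group $groupName --name $appName --vnet-route-all-enabled true
   ```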
1. In the browser, navigate to `<app-name>.azurewebsites.net` again and wait for the integration to take effect. If you get an HTTP 500 error, wait a few minutes and try again. If you can load the page and get detection results, then you're connecting to the Cognitive Services endpoint with key vault references.
The two private endpoints are only accessible to clients inside the VNet you cre
Depending on your scenarios, you may not be able to manage the private endpoint protected resources through the Azure portal, Azure CLI, or Azure PowerShell (for example, Key Vault). These tools all make REST API calls to access the resources through the public internet, and are blocked by your configuration. Here are a few options for accessing the locked down resources: - For Key Vault, add the public IP of your local machine to view or update the private endpoint protected secrets (see the sketch after this list).-- If your on premises network is extended into the Azure VNet through a [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoute](../expressroute/expressroute-introduction.md), you can manage the private endpoint protected resources directly from your on premises network. -- Manage the private endpoint protected resources from a [jump server](https://wikipedia.org/wiki/Jump_server) in the VNet.-- [Deploy Cloud Shell into the VNet](../cloud-shell/private-vnet.md).
+- If your on-premises network is extended into the Azure virtual network through a [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoute](../expressroute/expressroute-introduction.md), you can manage the private endpoint protected resources directly from your on-premises network.
+- Manage the private endpoint protected resources from a [jump server](https://wikipedia.org/wiki/Jump_server) in the virtual network.
+- [Deploy Cloud Shell into the virtual network](../cloud-shell/private-vnet.md).
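For the Key Vault option, a minimal sketch that reuses the `$vaultName` variable from earlier; the `ifconfig.me` lookup is an arbitrary way to discover your public IP:

```azurecli-interactive
# Allow your local machine's public IP through the key vault firewall
myip=$(curl -s https://ifconfig.me)
az keyvault network-rule add --name $vaultName --ip-address $myip
```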
## Clean up resources
app-service Webjobs Dotnet Deploy Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-dotnet-deploy-vs.md
# Develop and deploy WebJobs using Visual Studio
-This article explains how to use Visual Studio to deploy a console app project to a web app in [Azure App Service](overview.md) as an [Azure WebJob](https://go.microsoft.com/fwlink/?LinkId=390226). For information about how to deploy WebJobs by using the [Azure portal](https://portal.azure.com), see [Run background tasks with WebJobs in Azure App Service](webjobs-create.md).
+This article explains how to use Visual Studio to deploy a console app project to a web app in [Azure App Service](overview.md) as an [Azure WebJob](/azure/app-service/webjobs-create). For information about how to deploy WebJobs by using the [Azure portal](https://portal.azure.com), see [Run background tasks with WebJobs in Azure App Service](webjobs-create.md).
You can choose to develop a WebJob that runs as either a [.NET Core app](#webjobs-as-net-core-console-apps) or a [.NET Framework app](#webjobs-as-net-framework-console-apps). Version 3.x of the [Azure WebJobs SDK](webjobs-sdk-how-to.md) lets you develop WebJobs that run as either .NET Core apps or .NET Framework apps, while version 2.x supports only the .NET Framework. The way that you deploy a WebJobs project is different for .NET Core projects than for .NET Framework projects.
To create a new WebJobs-enabled project, use the console app project template an
Create a project that is configured to deploy automatically as a WebJob when you deploy a web project in the same solution. Use this option when you want to run your WebJob in the same web app in which you run the related web application. > [!NOTE]
-> The WebJobs new-project template automatically installs NuGet packages and includes code in *Program.cs* for the [WebJobs SDK](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/getting-started-with-windows-azure-webjobs). If you don't want to use the WebJobs SDK, remove or change the `host.RunAndBlock` statement in *Program.cs*.
+> The WebJobs new-project template automatically installs NuGet packages and includes code in *Program.cs* for the [WebJobs SDK](/azure/app-service/webjobs-sdk-get-started). If you don't want to use the WebJobs SDK, remove or change the `host.RunAndBlock` statement in *Program.cs*.
> >
application-gateway Http Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md
An HTTP 408 response can be observed when client requests to the frontend listen
#### 499 – Client closed the connection
-An HTTP 499 response is presented if a client request that is sent to application gateways using v2 sku is closed before the server finished responding. This error can be observed when a large response is returned to the client, but the client may have closed or refreshed their browser/application before the server had a chance to finish responding.
+An HTTP 499 response is presented if a client request sent to an application gateway using the v2 SKU is closed before the server has finished responding. This error can be observed when a large response is returned to the client, but the client may have closed or refreshed the browser or application before the server had a chance to finish responding. In application gateways using the v1 SKU, an HTTP 0 response code may be raised when the client closes the connection before the server has finished responding.
## 5XX response codes (server error)
application-gateway Ingress Controller Install New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md
Kubernetes infrastructure.
For the following steps, we need setup [kubectl](https://kubectl.docs.kubernetes.io/) command, which we will use to connect to our new Kubernetes cluster. [Cloud Shell](https://shell.azure.com/) has `kubectl` already installed. We will use `az` CLI to obtain credentials for Kubernetes.
-Get credentials for your newly deployed AKS ([read more](../aks/kubernetes-walkthrough.md#connect-to-the-cluster)):
+Get credentials for your newly deployed AKS ([read more](../aks/manage-azure-rbac.md#use-azure-rbac-for-kubernetes-authorization-with-kubectl)):
+ ```azurecli # use the deployment-outputs.json created after deployment to get the cluster name and resource group name aksClusterName=$(jq -r ".aksClusterName.value" deployment-outputs.json)
az aks get-credentials --resource-group $resourceGroupName --name $aksClusterNam
* [Managed Identity Controller (MIC)](https://github.com/Azure/aad-pod-identity#managed-identity-controllermic) component * [Node Managed Identity (NMI)](https://github.com/Azure/aad-pod-identity#node-managed-identitynmi) component - To install AAD Pod Identity to your cluster: - *Kubernetes RBAC enabled* AKS cluster
applied-ai-services Compose Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models.md
You can see how data is extracted from custom forms by trying our Sample Labelin
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your API key and endpoint.
+* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
Using the REST API, you can make an [Analyze Form](https://westus.dev.cognitive.
### [**Client-library SDKs**](#tab/sdks)
-Using the programming language of your choice to analyze a form or document with a custom or composed model. You'll need your Form Recognizer endpoint, API key, and model ID.
+Using the programming language of your choice to analyze a form or document with a custom or composed model. You'll need your Form Recognizer endpoint, key, and model ID.
* [**C#/.NET**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md)
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
See how data, including name, job title, address, email, and company name, is ex
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your API key and endpoint.
+* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
The following tools are supported by Form Recognizer v3.0:
See how data is extracted from your specific or unique documents by using custom models. You need the following resources: * An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your API key and endpoint.
+* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot that shows the keys and endpoint location in the Azure portal.":::
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
You'll need the following resources:
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Form Recognizer instance](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your API key and endpoint.
+* A [Form Recognizer instance](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
See how to extract data, including name, birth date, machine-readable zone, and
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your API key and endpoint.
+* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
See how data, including customer information, vendor details, and line items, is
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your API key and endpoint.
+* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
See how data is extracted from forms and documents using the Form Recognizer Stu
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your API key and endpoint.
+* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
See how text is extracted from forms and documents using the Form Recognizer Stu
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your API key and endpoint.
+* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
See how data, including time and date of transactions, merchant information, and
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your API key and endpoint.
+* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
See how data is extracted from W-2 forms using the Form Recognizer Studio. You'l
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Form Recognizer instance](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your API key and endpoint.
+* A [Form Recognizer instance](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot of keys and endpoint location in the Azure portal.":::
applied-ai-services Form Recognizer Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-configuration.md
Each container has the following configuration settings:
|Required|Setting|Purpose| |--|--|--|
-|Yes|[ApiKey](#apikey-and-billing-configuration-setting)|Tracks billing information.|
-|Yes|[Billing](#apikey-and-billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure. _See_ [Billing]](form-recognizer-container-install-run.md#billing), for more information. For more information and a complete list of regional endpoints, _see_ [Custom subdomain names for Cognitive Services](../../../cognitive-services/cognitive-services-custom-subdomains.md).|
+|Yes|[Key](#key-and-billing-configuration-setting)|Tracks billing information.|
+|Yes|[Billing](#key-and-billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure. _See_ [Billing](form-recognizer-container-install-run.md#billing) for more information. For a complete list of regional endpoints, _see_ [Custom subdomain names for Cognitive Services](../../../cognitive-services/cognitive-services-custom-subdomains.md).|
|Yes|[Eula](#eula-setting)| Indicates that you've accepted the license for the container.|
|No|[ApplicationInsights](#applicationinsights-setting)|Enables adding [Azure Application Insights](/azure/application-insights) telemetry support to your container.|
|No|[Fluentd](#fluentd-settings)|Writes log and, optionally, metric data to a Fluentd server.|
Each container has the following configuration settings:
|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. | > [!IMPORTANT]
-> The [`ApiKey`](#apikey-and-billing-configuration-setting), [`Billing`](#apikey-and-billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together. You must provide valid values for all three settings; otherwise, your containers won't start. For more information about using these configuration settings to instantiate a container, see [Billing](form-recognizer-container-install-run.md#billing).
+> The [`Key`](#key-and-billing-configuration-setting), [`Billing`](#key-and-billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together. You must provide valid values for all three settings; otherwise, your containers won't start. For more information about using these configuration settings to instantiate a container, see [Billing](form-recognizer-container-install-run.md#billing).
-## ApiKey and Billing configuration setting
+## Key and Billing configuration setting
-The `ApiKey` setting specifies the Azure resource key that's used to track billing information for the container. The value for the ApiKey must be a valid key for the resource that's specified for `Billing` in the "Billing configuration setting" section.
+The `Key` setting specifies the Azure resource key that's used to track billing information for the container. The value for the Key must be a valid key for the resource that's specified for `Billing` in the "Billing configuration setting" section.
The `Billing` setting specifies the endpoint URI of the resource on Azure that's used to meter billing information for the container. The value for this configuration setting must be a valid endpoint URI for a resource on Azure. The container reports usage about every 10 to 15 minutes.
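To illustrate how these settings are passed at container startup, here's a hedged `docker run` sketch for the Layout container; the image path and the exact argument names are assumptions to verify against the install documentation:

```bash
# Minimal sketch: run the Layout container with the three required settings
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
  mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout \
  Eula=accept \
  Billing={FORM_RECOGNIZER_ENDPOINT_URI} \
  Key={FORM_RECOGNIZER_KEY}
```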
The **docker compose** method is built from three steps:
### Single container example
-In this example, enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_API_KEY} values for your Layout container instance.
+In this example, enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
#### **Layout container**
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apikey={FORM_RECOGNIZER_API_KEY}
+ - key={FORM_RECOGNIZER_KEY}
ports: - "5000"
networks:
#### **Receipt and OCR Read containers**
-In this example, enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_API_KEY} values for your Receipt container and {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_API_KEY} values for your Computer Vision Read container.
+In this example, enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt container and {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
```yml version: "3"
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apikey={FORM_RECOGNIZER_API_KEY}
+ - key={FORM_RECOGNIZER_KEY}
- AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 ports: - "5000:5050"
environment: - EULA=accept - billing={COMPUTER_VISION_ENDPOINT_URI}
- - apikey={COMPUTER_VISION_API_KEY}
+ - key={COMPUTER_VISION_KEY}
networks: - ocrvnet
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
Azure Form Recognizer is an Azure Applied AI Service that lets you build automated data processing software using machine-learning technology. Form Recognizer enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your form documents and output structured data that includes the relationships in the original file.
-In this article you'll learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements. Form Recognizer features are supported by six Form Recognizer feature containersΓÇö**Layout**, **Business Card**,**ID Document**, **Receipt**, **Invoice**, and **Custom** (for Receipt, Business Card and ID Document containers you will also need the **Read** OCR container).
+In this article, you'll learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment, which helps you meet specific security and data governance requirements. Form Recognizer features are supported by six Form Recognizer feature containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom** (for the Receipt, Business Card, and ID Document containers, you'll also need the **Read** OCR container).
## Prerequisites
You'll also need the following to use Form Recognizer containers:
|-|| | **Familiarity with Docker** | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology). | | **Docker Engine installed** | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer-requirements). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |
-|**Form Recognizer resource** | A [**single-service Azure Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**multi-service Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. To use the containers, you must have the associated API key and endpoint URI. Both values are available on the Azure portal Form Recognizer **Keys and Endpoint** page: <ul><li>**{FORM_RECOGNIZER_API_KEY}**: one of the two available resource keys.<li>**{FORM_RECOGNIZER_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></li></ul>|
-| **Computer Vision API resource** | **To process business cards, ID documents, or Receipts, you'll need a Computer Vision resource.** <ul><li>You can access the Recognize Text feature as either an Azure resource (the REST API or SDK) or a **cognitive-services-recognize-text** [container](../../../cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image-with-docker-pull). The usual [billing](#billing) fees apply.</li> <li>If you use the **cognitive-services-recognize-text** container, make sure that your Computer Vision key for the Form Recognizer container is the key specified in the Computer Vision `docker run` or `docker compose` command for the **cognitive-services-recognize-text** container and your billing endpoint is the container's endpoint (for example, `http://localhost:5000`). If you use both the Computer Vision container and Form Recognizer container together on the same host, they can't both be started with the default port of *5000*. </li></ul></br>Pass in both the API key and endpoints for your Computer Vision Azure cloud or Cognitive Services container:<ul><li>**{COMPUTER_VISION_API_KEY}**: one of the two available resource keys.</li><li> **{COMPUTER_VISION_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></ul> |
+|**Form Recognizer resource** | A [**single-service Azure Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**multi-service Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. To use the containers, you must have the associated key and endpoint URI. Both values are available on the Azure portal Form Recognizer **Keys and Endpoint** page: <ul><li>**{FORM_RECOGNIZER_KEY}**: one of the two available resource keys.<li>**{FORM_RECOGNIZER_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></li></ul>|
+| **Computer Vision API resource** | **To process business cards, ID documents, or Receipts, you'll need a Computer Vision resource.** <ul><li>You can access the Recognize Text feature as either an Azure resource (the REST API or SDK) or a **cognitive-services-recognize-text** [container](../../../cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image-with-docker-pull). The usual [billing](#billing) fees apply.</li> <li>If you use the **cognitive-services-recognize-text** container, make sure that your Computer Vision key for the Form Recognizer container is the key specified in the Computer Vision `docker run` or `docker compose` command for the **cognitive-services-recognize-text** container and your billing endpoint is the container's endpoint (for example, `http://localhost:5000`). If you use both the Computer Vision container and Form Recognizer container together on the same host, they can't both be started with the default port of *5000*. </li></ul></br>Pass in both the key and endpoints for your Computer Vision Azure cloud or Cognitive Services container:<ul><li>**{COMPUTER_VISION_KEY}**: one of the two available resource keys.</li><li> **{COMPUTER_VISION_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></ul> |
|Optional|Purpose| ||-|
-|**Azure CLI (command-line interface)** | The [Azure CLI](/cli/azure/install-azure-cli) enables you to use a set of online commands to create and manage Azure resources. It is available to install in Windows, macOS, and Linux environments and can be run in a Docker container and Azure Cloud Shell. |
+|**Azure CLI (command-line interface)** | The [Azure CLI](/cli/azure/install-azure-cli) enables you to use a set of online commands to create and manage Azure resources. It's available to install in Windows, macOS, and Linux environments and can be run in a Docker container and Azure Cloud Shell. |
||| ## Request approval to run the container
Complete and submit the [Application for Gated Services form](https://aka.ms/csg
The form requests information about you, your company, and the user scenario for which you'll use the container. After you submit the form, the Azure Cognitive Services team will review it and email you with a decision within 10 business days.
-On the form, you must use an email address associated with an Azure subscription ID. The Azure resource you use to run the container must have been created with the approved Azure subscription ID. Check your email (both inbox and junk folders) for updates on the status of your application from Microsoft. After you're approved, you will be able to run the container after downloading it from the Microsoft Container Registry (MCR), described later in the article.
+On the form, you must use an email address associated with an Azure subscription ID. The Azure resource you use to run the container must have been created with the approved Azure subscription ID. Check your email (both inbox and junk folders) for updates on the status of your application from Microsoft. After you're approved, you'll be able to run the container after downloading it from the Microsoft Container Registry (MCR), described later in the article.
## Host computer requirements
The host is a x64-based computer that runs the Docker container. It can be a com
#### Required containers
-The following table lists the additional supporting container(s) for each Form Recognizer container you download. Refer to the [Billing](#billing) section for more information.
+The following table lists the supporting container(s) for each Form Recognizer container you download. For more information, see the [Billing](#billing) section.
| Feature container | Supporting container(s) | ||--|
The following host machine requirements are applicable to **train and analyze**
| Custom API| 0.5 cores, 0.5-GB memory| 1 cores, 1-GB memory | |Custom Supervised | 4 cores, 2-GB memory | 8 cores, 4-GB memory|
-If you are only making analyze calls, the host machine requirements are as follows:
+If you're only making analyze calls, the host machine requirements are as follows:
| Container | Minimum | Recommended | |--||-|
If you are only making analyze calls, the host machine requirements are as follo
## Run the container with the **docker-compose up** command
-* Replace the {ENDPOINT_URI} and {API_KEY} values with your resource Endpoint URI and the API Key from the Azure resource page.
+* Replace the {ENDPOINT_URI} and {KEY} values with your resource endpoint URI and the key from the Azure resource page.
:::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: Azure portal keys and endpoint page."::: * Ensure that the EULA value is set to "accept".
-* The `EULA`, `Billing`, and `ApiKey` values must be specified; otherwise the container won't start.
+* The `EULA`, `Billing`, and `Key` values must be specified; otherwise the container won't start.
> [!IMPORTANT]
-> The subscription keys are used to access your Form Recognizer resource. Do not share your keys. Store them securely, for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
+> The keys are used to access your Form Recognizer resource. Do not share your keys. Store them securely, for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
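If you want to rotate keys from the command line, a sketch with placeholder resource names:

```azurecli
# Regenerate the first key while clients continue using the second key
az cognitiveservices account keys regenerate --name <form-recognizer-resource> --resource-group <resource-group> --key-name Key1
```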
### [Layout](#tab/layout)
-Below is a self-contained `docker compose` example to run the Form Recognizer Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {{FORM_RECOGNIZER_API_KEY} values for your Layout container instance.
+Below is a self-contained `docker compose` example to run the Form Recognizer Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
```yml version: "3.9"
azure-cognitive-service-layout:
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apikey={FORM_RECOGNIZER_API_KEY}
+ - key={FORM_RECOGNIZER_KEY}
ports: - "5000" networks:
docker-compose up
### [Business Card](#tab/business-card)
-Below is a self-contained `docker compose` example to run Form Recognizer Business Card and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_API_KEY} values for your Business Card container instance. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_API_KEY} for your Computer Vision Read container.
+Below is a self-contained `docker compose` example to run Form Recognizer Business Card and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Business Card container instance. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} for your Computer Vision Read container.
```yml version: "3.9"
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apikey={FORM_RECOGNIZER_API_KEY}
+ - key={FORM_RECOGNIZER_KEY}
- AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 ports: - "5000:5050"
environment: - EULA=accept - billing={COMPUTER_VISION_ENDPOINT_URI}
- - apikey={COMPUTER_VISION_API_KEY}
+ - key={COMPUTER_VISION_KEY}
networks: - ocrvnet
docker-compose up
### [ID Document](#tab/id-document)
-Below is a self-contained `docker compose` example to run Form Recognizer ID Document and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_API_KEY} values for your ID document container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_API_KEY} values for your Computer Vision Read container.
+Below is a self-contained `docker compose` example to run Form Recognizer ID Document and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID document container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
```yml version: "3.9"
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apikey={FORM_RECOGNIZER_API_KEY}
+ - key={FORM_RECOGNIZER_KEY}
- AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 ports: - "5000:5050"
environment: - EULA=accept - billing={COMPUTER_VISION_ENDPOINT_URI}
- - apikey={COMPUTER_VISION_API_KEY}
+ - key={COMPUTER_VISION_KEY}
networks: - ocrvnet
docker-compose up
### [Invoice](#tab/invoice)
-Below is a self-contained `docker compose` example to run Form Recognizer Invoice and Layout containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_API_KEY} values for your Invoice and Layout containers.
+Below is a self-contained `docker compose` example to run Form Recognizer Invoice and Layout containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Invoice and Layout containers.
```yml version: "3.9"
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apikey={FORM_RECOGNIZER_API_KEY}
+ - key={FORM_RECOGNIZER_KEY}
- AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000 ports: - "5000:5050"
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apikey={FORM_RECOGNIZER_API_KEY}
+ - key={FORM_RECOGNIZER_KEY}
networks: - ocrvnet
docker-compose up
### [Receipt](#tab/receipt)
-Below is a self-contained `docker compose` example to run Form Recognizer Receipt and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_API_KEY} values for your Receipt container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_API_KEY} values for your Computer Vision Read container.
+Below is a self-contained `docker compose` example to run Form Recognizer Receipt and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
```yml version: "3.9"
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apikey={FORM_RECOGNIZER_API_KEY}
+ - key={FORM_RECOGNIZER_KEY}
- AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 ports: - "5000:5050"
environment: - EULA=accept - billing={COMPUTER_VISION_ENDPOINT_URI}
- - apikey={COMPUTER_VISION_API_KEY}
+ - key={COMPUTER_VISION_KEY}
networks: - ocrvnet
docker-compose up
### [Custom](#tab/custom)
-In addition to the [prerequisites](#prerequisites) mentioned above, you will need to do the following to process a custom document:
+In addition to the [prerequisites](#prerequisites) mentioned above, you'll need to do the following to process a custom document:
#### &bullet; Create a folder to store the following files:
In addition to the [prerequisites](#prerequisites) mentioned above, you will nee
#### &bullet; Create a folder to store your input data 1. Name this folder **shared**.
- 1. We will reference the file path for this folder as **{SHARED_MOUNT_PATH}**.
+ 1. We'll reference the file path for this folder as **{SHARED_MOUNT_PATH}**.
1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You'll need to add it to your **.env** file, below. #### &bullet; Create a folder to store the logs written by the Form Recognizer service on your local machine. 1. Name this folder **output**.
- 1. We will reference the file path for this folder as **{OUTPUT_MOUNT_PATH}**.
+ 1. We'll reference the file path for this folder as **{OUTPUT_MOUNT_PATH}**.
1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You'll need to add it to your **.env** file, below. #### &bullet; Create an environment file
In addition to the [prerequisites](#prerequisites) mentioned above, you will nee
SHARED_MOUNT_PATH="<file-path-to-shared-folder>" OUTPUT_MOUNT_PATH="<file -path-to-output-folder>" FORM_RECOGNIZER_ENDPOINT_URI="<your-form-recognizer-endpoint>"
- FORM_RECOGNIZER_API_KEY="<your-form-recognizer-apiKey>"
+ FORM_RECOGNIZER_KEY="<your-form-recognizer-key>"
RABBITMQ_HOSTNAME="rabbitmq" RABBITMQ_PORT=5672 NGINX_CONF_FILE="<file-path>"
http {
- rabbitmq environment: eula: accept
- apikey: ${FORM_RECOGNIZER_API_KEY}
+ key: ${FORM_RECOGNIZER_KEY}
billing: ${FORM_RECOGNIZER_ENDPOINT_URI} Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME} Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
http {
- rabbitmq environment: eula: accept
- apikey: ${FORM_RECOGNIZER_API_KEY}
+ key: ${FORM_RECOGNIZER_KEY}
billing: ${FORM_RECOGNIZER_ENDPOINT_URI} Logging:Console:LogLevel:Default: Information Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
http {
- rabbitmq environment: eula: accept
- apikey: ${FORM_RECOGNIZER_API_KEY}
+ key: ${FORM_RECOGNIZER_KEY}
billing: ${FORM_RECOGNIZER_ENDPOINT_URI} CustomFormRecognizer:ContainerPhase: All CustomFormRecognizer:LayoutAnalyzeUri: http://azure-cognitive-service-layout:5000/formrecognizer/v2.1/layout/analyze
docker-compose down
The Form Recognizer containers send billing information to Azure by using a Form Recognizer resource on your Azure account.
-Queries to the container are billed at the pricing tier of the Azure resource that's used for the `ApiKey`. You will be billed for each container instance used to process your documents and images. Thus, If you use the business card feature, you will be billed for the Form Recognizer `BusinessCard` and `Computer Vision Read` container instances. For the invoice feature, you will be billed for the Form Recognizer `Invoice` and `Layout` container instances. *See*, [Form Recognizer](https://azure.microsoft.com/pricing/details/form-recognizer/) and Computer Vision [Read feature](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) container pricing.
+Queries to the container are billed at the pricing tier of the Azure resource that's used for the `Key`. You'll be billed for each container instance used to process your documents and images. Thus, if you use the business card feature, you'll be billed for the Form Recognizer `BusinessCard` and `Computer Vision Read` container instances. For the invoice feature, you'll be billed for the Form Recognizer `Invoice` and `Layout` container instances. *See* [Form Recognizer](https://azure.microsoft.com/pricing/details/form-recognizer/) and Computer Vision [Read feature](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) container pricing.
Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. Containers must be enabled to communicate billing information with the billing endpoint at all times. Cognitive Services containers don't send customer data, such as the image or text that's being analyzed, to Microsoft.
The [**docker-compose up**](https://docs.docker.com/engine/reference/commandline
| Option | Description | |--|-|
-| `ApiKey` | The API key of the Cognitive Services resource that's used to track billing information.<br/>The value of this option must be set to an API key for the provisioned resource that's specified in `Billing`. |
+| `Key` | The key of the Cognitive Services resource that's used to track billing information.<br/>The value of this option must be set to a key for the provisioned resource that's specified in `Billing`. |
| `Billing` | The endpoint of the Cognitive Services resource that's used to track billing information.<br/>The value of this option must be set to the endpoint URI of a provisioned Azure resource.| | `Eula` | Indicates that you accepted the license for the container.<br/>The value of this option must be set to **accept**. |
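The same options appear when starting a single container directly with `docker run`. The following is a minimal sketch only; the image path is an assumed placeholder, and the option names mirror the table above:

```bash
# All three billing options are required, or the container exits shortly after startup.
docker run --rm -it -p 5000:5050 \
  mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt \
  Eula=accept \
  Billing={FORM_RECOGNIZER_ENDPOINT_URI} \
  Key={FORM_RECOGNIZER_KEY}
```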
applied-ai-services Create A Form Recognizer Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-a-form-recognizer-resource.md
Let's get started:
:::image type="content" source="media/logic-apps-tutorial/form-recognizer-validation.gif" alt-text="GIF showing the Azure portal validation process.":::
-## Get Endpoint URL and API keys
+## Get Endpoint URL and keys
1. Once you receive the *deployment is complete* message, select the **Go to resource** button.
applied-ai-services Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/disaster-recovery.md
If your app or business depends on the use of a Form Recognizer custom model, we
## Prerequisites 1. Two Form Recognizer Azure resources in different Azure regions. If you don't have them, go to the Azure portal and <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" title="Create a new Form Recognizer resource" target="_blank">create a new Form Recognizer resource </a>.
-1. The subscription key, endpoint URL, and subscription ID of your Form Recognizer resource. You can find these values on the resource's **Overview** tab on the Azure portal.
+1. The key, endpoint URL, and subscription ID of your Form Recognizer resource. You can find these values on the resource's **Overview** tab on the Azure portal.
## Copy API overview
The following HTTP request gets copy authorization from your target resource. Yo
``` POST https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/copyAuthorization
-Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_API_KEY}
+Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_KEY}
``` You'll get a `201 Created` response with a `modelId` value in the body. This string is the ID of the newly created (blank) model. The `accessToken` is needed for the API to copy data to this resource, and the `expirationDateTimeTicks` value is the expiration of the token. Save all three of these values to a secure location.
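The response body has the following general shape; the sample values here are the same illustrative ones used in the cURL samples later in this article:

```json
{
  "modelId": "33f4d42c-cd2f-4e74-b990-a1aeafab5a5d",
  "accessToken": "1855fe23-5ffc-427b-aab2-e5196641502f",
  "expirationDateTimeTicks": 637233481531659440
}
```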
The following HTTP request starts the Copy operation on the source resource. You
``` POST https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/<your model ID>/copy HTTP/1.1
-Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_API_KEY}
+Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}
``` The body of your request needs to have the following format. You'll need to enter the resource ID and region name of your target resource. You can find your resource ID on the **Properties** tab of your resource in the Azure portal, and you can find the region name on the **Keys and endpoint** tab. You'll also need the model ID, access token, and expiration value that you copied from the previous step.
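Concretely, the request body looks like the following sketch; the placeholders and the `copyAuthorization` values mirror the cURL samples later in this article:

```json
{
  "targetResourceId": "{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_ID}",
  "targetResourceRegion": "{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_REGION_NAME}",
  "copyAuthorization": {
    "modelId": "33f4d42c-cd2f-4e74-b990-a1aeafab5a5d",
    "accessToken": "1855fe23-5ffc-427b-aab2-e5196641502f",
    "expirationDateTimeTicks": 637233481531659440
  }
}
```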
Track your progress by querying the **Get Copy Model Result** API against the so
``` GET https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/eccc3f13-8289-4020-ba16-9f1d1374e96f/copyresults/02989ba8-1296-499f-aaf4-55cfff41b8f1 HTTP/1.1
-Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_API_KEY}
+Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}
``` Your response will vary depending on the status of the operation. Look for the `"status"` field in the JSON body. If you're automating this API call in a script, we recommend querying the operation once every second.
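A once-per-second polling loop can be as simple as the following sketch. It assumes `jq` is installed and that `succeeded` and `failed` are the terminal values of the `"status"` field:

```bash
# Poll the Get Copy Model Result API once per second until the copy finishes.
status="running"
until [ "$status" = "succeeded" ] || [ "$status" = "failed" ]; do
  sleep 1
  status=$(curl -s "https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/{SOURCE_MODELID}/copyresults/{RESULT_ID}" \
    -H "Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}" | jq -r '.status')
  echo "Copy status: $status"
done
```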
Content-Type: application/json; charset=utf-8
|Error|Resolution| |:--|:--| |"errors":[{"code":"AuthorizationError",<br>"message":"Authorization failure due to <br>missing or invalid authorization claims."}] | Occurs when the `copyAuthorization` payload or content is modified from what was returned by the `copyAuthorization` API. Ensure that the payload is the same exact content that was returned from the earlier `copyAuthorization` call.|
-|"errors":[{"code":"AuthorizationError",<br>"message":"Could not retrieve authorization <br>metadata. If this issue persists use a different <br>target model to copy into."}] | Indicates that the `copyAuthorization` payload is being reused with a copy request. A copy request that succeeds will not allow any further requests that use the same `copyAuthorization` payload. If you raise a separate error (like the ones noted below) and you subsequently retry the copy with the same authorization payload, this error gets raised. The resolution is to generate a new `copyAuthorization` payload and then reissue the copy request.|
-|"errors":[{"code":"DataProtectionTransformServiceError",<br>"message":"Data transfer request is not allowed <br>as it downgrades to a less secure data protection scheme. Refer documentation or contact your service administrator <br>for details."}] | Occurs when copying between an `AEK` enabled resource to a non `AEK` enabled resource. To allow copying encrypted model to the target as unencrypted specify `x-ms-forms-copy-degrade: true` header with the copy request.|
-|"errors":[{"code":"ResourceResolverError",<br>"message":"Could not fetch information for Cognitive resource with Id '...'. Ensure the resource is valid and exists in the specified region 'westus2'.."}] | Indicates that the Azure resource indicated by the `targetResourceId` is not a valid Cognitive resource or does not exist. Verify and reissue the copy request to resolve this issue.|
+|"errors":[{"code":"AuthorizationError",<br>"message":"Couldn't retrieve authorization <br>metadata. If this issue persists use a different <br>target model to copy into."}] | Indicates that the `copyAuthorization` payload is being reused with a copy request. A copy request that succeeds won't allow any further requests that use the same `copyAuthorization` payload. If you raise a separate error (like the ones noted below) and you later retry the copy with the same authorization payload, this error gets raised. The resolution is to generate a new `copyAuthorization` payload and then reissue the copy request.|
+|"errors":[{"code":"DataProtectionTransformServiceError",<br>"message":"Data transfer request isn't allowed <br>as it downgrades to a less secure data protection scheme. Refer documentation or contact your service administrator <br>for details."}] | Occurs when copying between an `AEK` enabled resource to a non `AEK` enabled resource. To allow copying encrypted model to the target as unencrypted specify `x-ms-forms-copy-degrade: true` header with the copy request.|
+|"errors":[{"code":"ResourceResolverError",<br>"message":"Couldn't fetch information for Cognitive resource with ID '...'. Ensure the resource is valid and exists in the specified region 'westus2'.."}] | Indicates that the Azure resource indicated by the `targetResourceId` isn't a valid Cognitive resource or doesn't exist. Verify and reissue the copy request to resolve this issue.|
### [Optional] Track the target model ID
You can also use the **Get Custom Model** API to track the status of the operati
``` GET https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/33f4d42c-cd2f-4e74-b990-a1aeafab5a5d HTTP/1.1
-Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_API_KEY}
+Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_KEY}
``` In the response body, you'll see information about the model. Check the `"status"` field for the status of the model.
Content-Type: application/json; charset=utf-8
## cURL sample code
-The following code snippets use cURL to make the API calls outlined in the steps above. You'll still need to fill in the model IDs and subscription information specific to your own resources.
+The following code snippets use cURL to make the API calls outlined in the steps above. You'll still need to fill in the model IDs and subscription information specific to your own resources.
### Generate Copy authorization request ```bash
-curl -i -X POST "https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/copyAuthorization" -H "Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_API_KEY}"
+curl -i -X POST "https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/copyAuthorization" -H "Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_KEY}"
``` ### Start Copy operation ```bash
-curl -i -X POST "https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/copyAuthorization" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_API_KEY}" --data-ascii "{ \"targetResourceId\": \"{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_ID}\", \"targetResourceRegion\": \"{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_REGION_NAME}\", \"copyAuthorization\": "{\"modelId\":\"33f4d42c-cd2f-4e74-b990-a1aeafab5a5d\",\"accessToken\":\"1855fe23-5ffc-427b-aab2-e5196641502f\",\"expirationDateTimeTicks\":637233481531659440}"}"
+curl -i -X POST "https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/{SOURCE_MODELID}/copy" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}" --data-ascii "{ \"targetResourceId\": \"{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_ID}\", \"targetResourceRegion\": \"{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_REGION_NAME}\", \"copyAuthorization\": {\"modelId\":\"33f4d42c-cd2f-4e74-b990-a1aeafab5a5d\",\"accessToken\":\"1855fe23-5ffc-427b-aab2-e5196641502f\",\"expirationDateTimeTicks\":637233481531659440}}"
``` ### Track Copy progress ```bash
-curl -i GET "https://<SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT>/formrecognizer/v2.1/custom/models/{SOURCE_MODELID}/copyResults/{RESULT_ID}" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_API_KEY}"
+curl -i -X GET "https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/{SOURCE_MODELID}/copyResults/{RESULT_ID}" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}"
``` ## Next steps
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
Fill in the fields with the following values:
In the Sample Labeling tool, projects store your configurations and settings. Create a new project and fill in the fields with the following values: * **Display Name** - the project display name
-* **Security Token** - Some project settings can include sensitive values, such as API keys or other shared secrets. Each project will generate a security token that can be used to encrypt/decrypt sensitive project settings. You can find security tokens in the Application Settings by selecting the gear icon at the bottom of the left navigation bar.
+* **Security Token** - Some project settings can include sensitive values, such as keys or other shared secrets. Each project will generate a security token that can be used to encrypt/decrypt sensitive project settings. You can find security tokens in the Application Settings by selecting the gear icon at the bottom of the left navigation bar.
* **Source Connection** - The Azure Blob Storage connection you created in the previous step that you would like to use for this project. * **Folder Path** - Optional - If your source forms are located in a folder on the blob container, specify the folder name here * **Form Recognizer Service Uri** - Your Form Recognizer endpoint URL.
-* **API Key** - Your Form Recognizer subscription key.
+* **Key** - Your Form Recognizer key.
* **Description** - Optional - Project description :::image type="content" source="media/label-tool/new-project.png" alt-text="New project page on Sample Labeling tool.":::
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
Form Recognizer offers several prebuilt models to choose from. Each model has it
1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
-1. In the **API key** field, paste the subscription key you obtained from your Form Recognizer resource.
+1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
:::image type="content" source="../media/fott-select-form-type.png" alt-text="Screenshot: select form type dropdown window.":::
Azure the Form Recognizer Layout API extracts text, tables, selection marks, and
1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
-1. In the **API key** field, paste the subscription key you obtained from your Form Recognizer resource.
+1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
1. In the **Source: URL** field, paste the following URL `https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg` and select the **Fetch** button.
Configure the **Project Settings** fields with the following values:
1. **Form Recognizer Service Uri** - Your Form Recognizer endpoint URL.
-1. **API Key**. Your Form Recognizer subscription key.
+1. **Key**. Your Form Recognizer key.
1. **API version**. Keep the v2.1 (default) value.
attestation Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/audit-logs.md
Individual blobs are stored as text, formatted as a JSON blob. Let's look at a
} ```
-Most of these fields are documented in the [Top-level common schema](/azure-monitor/essentials/resource-logs-schema#top-level-common-schema). The following table lists the field names and descriptions for the entries not included in the top-level common schema:
+Most of these fields are documented in the [Top-level common schema](/azure/azure-monitor/essentials/resource-logs-schema#top-level-common-schema). The following table lists the field names and descriptions for the entries not included in the top-level common schema:
| Field Name | Description | ||--|
attestation Claim Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/claim-sets.md
Azure Attestation includes the below claims in the attestation token for all att
- **x-ms-policy-hash**: Hash of Azure Attestation evaluation policy computed as BASE64URL(SHA256(UTF8(BASE64URL(UTF8(policy text)))))
- **x-ms-policy-signer**: JSON object with a "jwk" member representing the key a customer used to sign their policy. This is applicable when a customer uploads a signed policy.
- **x-ms-runtime**: JSON object containing "claims" that are defined and generated within the attested environment. This is a specialization of the "enclave held data" concept, where the "enclave held data" is specifically formatted as a UTF-8 encoding of well-formed JSON.
-- **x-ms-inittime**: JSON object containing "claims" that are defined and enforced at secure environment initialization time
+- **x-ms-inittime**: JSON object containing "claims" that are defined and verified at initialization time of the attested environment
The following claim names are used from the [IETF JWT specification](https://tools.ietf.org/html/rfc7519)
automation Automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md
The process to install a user Hybrid Runbook Worker depends on the operating sys
|Linux | [Manual](automation-linux-hrw-install.md#install-a-linux-hybrid-runbook-worker) | |Either | For user Hybrid Runbook Workers, see [Deploy an extension-based Windows or Linux user Hybrid Runbook Worker in Automation](./extension-based-hybrid-runbook-worker-install.md). This is the recommended method. |
+>[!NOTE]
+> Hybrid Runbook Worker is currently not supported on VM Scale Sets.
+ ## <a name="network-planning"></a>Network planning Check [Azure Automation Network Configuration](automation-network-configuration.md#network-planning-for-hybrid-runbook-worker) for detailed information on the ports, URLs, and other networking details required for the Hybrid Runbook Worker.
automation Automation Tutorial Runbook Textual Python 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual-python-3.md
To do this, the script has to authenticate using the Run As account credential f
> [!NOTE] > The Automation account must have been created with the Run As account for there to be a Run As certificate. > If your Automation account was not created with the Run As account, you can authenticate as described in
-> [Authenticate with the Azure Management Libraries for Python](/azure/python/python-sdk-azure-authenticate) or [create a Run As account](../create-run-as-account.md).
+> [Authenticate with the Azure Management Libraries for Python](/azure/developer/python/sdk/authentication-overview) or [create a Run As account](../create-run-as-account.md).
1. Open the textual editor by selecting **Edit** on the **MyFirstRunbook-Python3** pane.
azure-arc Create Complete Managed Instance Directly Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-directly-connected.md
In addition, you need the following additional extensions to connect the cluster
## Access your Kubernetes cluster
-After installing the client tools, you need access to a Kubernetes cluster. You can create Kubernetes cluster with [`az aks create`](/cli/azure/aks#az-aks-create), or you can follow the steps below to create the cluster in the Azure portal.
+After installing the client tools, you need access to a Kubernetes cluster. You can create a Kubernetes cluster with [`az aks create`](/cli/azure/aks#az-aks-create), or you can follow the steps below to create the cluster in the Azure portal.
### Create a cluster
After creating the cluster, connect to the cluster through the Azure CLI.
### Arc enable the Kubernetes cluster
-Now that the cluster is running, connect the cluster to Azure. When you connect a cluster to Azure, you Arc enable it. Arc enabling your cluster allow you to view and manage the cluster, and deploy and manage additional services such as Arc-enabled data services on the cluster directly from Azure portal.
+Now that the cluster is running, connect the cluster to Azure. When you connect a cluster to Azure, you Arc enable it. Arc enabling your cluster allows you to view and manage the cluster, and deploy and manage additional services such as Arc-enabled data services on the cluster directly from Azure portal.
Use `az connectedk8s connect` to connect the cluster to Azure:
NAME STATE
<namespace> Ready ```
-## Create Azure Arc-enabled SQL Managed Instance
+## Create an Azure Arc-enabled SQL Managed Instance
1. In the portal, locate the resource group. 1. In the resource group, select **Create**.
azure-arc Create Complete Managed Instance Indirectly Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-indirectly-connected.md
Follow the steps below to deploy the cluster from the Azure CLI.
For command details, see [az aks create](/cli/azure/aks#az-aks-create).
- For a complete demonstration, including an application on a single-node Kubernetes cluster, go to [Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI](../../aks/kubernetes-walkthrough.md).
+ For a complete demonstration, including an application on a single-node Kubernetes cluster, go to [Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md).
1. Get credentials
azure-arc Deployment Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deployment-options.md
Connecting machines in your hybrid environment directly with Azure can be accomp
| At scale | [Connect machines using a service principal](onboard-service-principal.md) to install the agent at scale non-interactively.| | At scale | [Connect machines by running PowerShell scripts with Configuration Manager](onboard-configuration-manager-powershell.md) | At scale | [Connect machines with a Configuration Manager custom task sequence](onboard-configuration-manager-custom-task.md)
+| At scale | [Connect Windows machines using Group Policy](onboard-group-policy.md)
| At scale | [Connect machines from Automation Update Management](onboard-update-management-machines.md) to create a service principal that installs and configures the agent for multiple machines managed with Azure Automation Update Management to connect machines non-interactively. | > [!IMPORTANT]
azure-arc Tutorial Assign Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/tutorial-assign-policy-portal.md
Title: Tutorial - New policy assignment with Azure portal description: In this tutorial, you use Azure portal to create an Azure Policy assignment to identify non-compliant resources. Previously updated : 04/21/2021 Last updated : 04/20/2022 # Tutorial: Create a policy assignment to identify non-compliant resources
-The first step in understanding compliance in Azure is to identify the status of your resources. Azure Policy supports auditing the state of your Azure Arc-enabled server with guest configuration policies. Azure Policy's guest configuration definitions can audit or apply settings inside the machine. This tutorial steps you through the process of creating and assigning a policy, identifying which of your Azure Arc-enabled servers don't have the Log Analytics agent installed.
+The first step in understanding compliance in Azure is to identify the status of your resources. Azure Policy supports auditing the state of your Azure Arc-enabled server with guest configuration policies. Azure Policy's guest configuration definitions can audit or apply settings inside the machine.
+
+This tutorial steps you through the process of creating and assigning a policy in order to identify which of your Azure Arc-enabled servers don't have the Log Analytics agent for Windows or Linux installed. These machines are considered _non-compliant_ with the policy assignment.
+
+In this tutorial, you will learn how to:
+
+> [!div class="checklist"]
+> * Create policy assignment and assign a definition to it
+> * Identify resources that aren't compliant with the new policy
+> * Remove the policy from non-compliant resources
-At the end of this process, you'll successfully identify machines that don't have the Log Analytics agent for Windows or Linux installed. They're _non-compliant_ with the policy assignment.
## Prerequisites
before you begin.
## Create a policy assignment
-In this tutorial, you create a policy assignment and assign the _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_ policy definition.
+Follow the steps below to create a policy assignment and assign the policy definition _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_:
-1. Launch the Azure Policy service in the Azure portal by clicking **All services**, then searching
+1. Launch the Azure Policy service in the Azure portal by selecting **All services**, then searching
for and selecting **Policy**.
- :::image type="content" source="./media/tutorial-assign-policy-portal/search-policy.png" alt-text="Search for Policy in All Services" border="false":::
+ :::image type="content" source="./media/tutorial-assign-policy-portal/all-services-page.png" alt-text="Screenshot of All services window showing search for policy service." border="true":::
1. Select **Assignments** on the left side of the Azure Policy page. An assignment is a policy that has been assigned to take place within a specific scope.
- :::image type="content" source="./media/tutorial-assign-policy-portal/select-assignment.png" alt-text="Select Assignments page from Policy Overview page" border="false":::
+ :::image type="content" source="./media/tutorial-assign-policy-portal/assignments-tab.png" alt-text="Screenshot of All services Policy window showing policy assignments." border="true":::
1. Select **Assign Policy** from the top of the **Policy - Assignments** page.
- :::image type="content" source="./media/tutorial-assign-policy-portal/select-assign-policy.png" alt-text="Assign a policy definition from Assignments page" border="false":::
- 1. On the **Assign Policy** page, select the **Scope** by clicking the ellipsis and selecting either a management group or subscription. Optionally, select a resource group. A scope determines what resources or grouping of resources the policy assignment gets enforced on. Then click **Select**
In this tutorial, you create a policy assignment and assign the _\[Preview]: Log
For a partial list of available built-in policies, see [Azure Policy samples](../../../governance/policy/samples/index.md). 1. Search through the policy definitions list to find the _\[Preview]: Log Analytics extension should be installed on your Windows Azure Arc machines_
- definition if you have enabled the Arc-enabled servers agent on a Windows-based machine. For a Linux-based machine, find the corresponding _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_ policy definition. Click on that policy and click **Select**.
+ definition (if you have enabled the Arc-enabled servers agent on a Windows-based machine). For a Linux-based machine, find the corresponding _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_ policy definition. Click on that policy and click **Select**.
1. The **Assignment name** is automatically populated with the policy name you selected, but you can
- change it. For this example, leave _\[Preview]: Log Analytics extension should be installed on your Windows Azure Arc machines_ or _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_ depending on which one you selected. You can also add an optional **Description**. The description provides details about this policy assignment.
- **Assigned by** will automatically fill based on who is logged in. This field is optional, so
- custom values can be entered.
-
-1. Leave **Create a Managed Identity** unchecked. This box _must_ be checked when the policy or
- initiative includes a policy with the
- [deployIfNotExists](../../../governance/policy/concepts/effects.md#deployifnotexists) effect. As the policy used for this
- quickstart doesn't, leave it blank. For more information, see
- [managed identities](../../../active-directory/managed-identities-azure-resources/overview.md) and
- [how remediation security works](../../../governance/policy/how-to/remediate-resources.md#how-remediation-security-works).
-
-1. Click **Assign**.
+ change it. For this example, leave the policy name as is, and don't change any of the remaining options on the page.
+
+1. For this example, we don't need to change any settings on the other tabs. Select **Review + Create** to review your new policy assignment, then select **Create**.
You're now ready to identify non-compliant resources to understand the compliance state of your environment.
environment.
Select **Compliance** in the left side of the page. Then locate the **\[Preview]: Log Analytics extension should be installed on your Windows Azure Arc machines** or **\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines** policy assignment you created. If there are any existing resources that aren't compliant with this new assignment, they appear under **Non-compliant resources**. When a condition is evaluated against your existing resources and found true, then those resources
-are marked as non-compliant with the policy. The following table shows how different policy effects
+are marked as non-compliant with the policy. The following table shows how different policy effects
work with the condition evaluation for the resulting compliance state. Although you don't see the evaluation logic in the Azure portal, the compliance state results are shown. The compliance state result is either compliant or non-compliant.
-| **Resource State** | **Effect** | **Policy Evaluation** | **Compliance State** |
+| **Resource state** | **Effect** | **Policy evaluation** | **Compliance state** |
| | | | |
-| Exists | Deny, Audit, Append\*, DeployIfNotExist\*, AuditIfNotExist\* | True | Non-Compliant |
+| Exists | Deny, Audit, Append\*, DeployIfNotExist\*, AuditIfNotExist\* | True | Non-compliant |
| Exists | Deny, Audit, Append\*, DeployIfNotExist\*, AuditIfNotExist\* | False | Compliant |
-| New | Audit, AuditIfNotExist\* | True | Non-Compliant |
+| New | Audit, AuditIfNotExist\* | True | Non-compliant |
| New | Audit, AuditIfNotExist\* | False | Compliant | \* The Append, DeployIfNotExist, and AuditIfNotExist effects require the IF statement to be TRUE.
To remove the assignment created, follow these steps:
1. Right-click the policy assignment and select **Delete assignment**.
- :::image type="content" source="./media/tutorial-assign-policy-portal/delete-assignment.png" alt-text="Delete an assignment from the Compliance page" border="false":::
- ## Next steps In this tutorial, you assigned a policy definition to a scope and evaluated its compliance report. The policy definition validates that all the resources in the scope are compliant and identifies which ones aren't. Now you are ready to monitor your Azure Arc-enabled servers machine by enabling [VM insights](../../../azure-monitor/vm/vminsights-overview.md).
azure-arc Onboard Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy.md
+
+ Title: Connect machines at scale using group policy
+description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using group policy.
Last updated : 04/29/2022++++
+# Connect machines at scale using Group Policy
+
+You can onboard Active Directory-joined Windows machines to Azure Arc-enabled servers at scale using Group Policy.
+
+You'll first need to set up a remote network share that hosts the Connected Machine Agent and define a configuration file specifying the Arc-enabled server's landing zone within Azure. You'll then define a Group Policy Object to run an onboarding script using a scheduled task. This Group Policy can be applied at the site, domain, or organizational unit level. Assignment can also use Access Control List (ACL) and other security filtering native to Group Policy. Machines in the scope of the Group Policy will be onboarded to Azure Arc-enabled servers.
+
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prepare a remote share
+
+The Group Policy to onboard Azure Arc-enabled servers requires a remote share with the Connected Machine Agent. You will need to:
+
+1. Prepare a remote share to host the Azure Connected Machine agent package for Windows and the configuration file. You need to be able to add files to the distributed location.
+
+1. Download the latest version of the [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center and save it to the remote share.
+
+## Generate an onboarding script and configuration file from Azure Portal
+
+Before you can run the script to connect your machines, you'll need to do the following:
+
+1. Follow the steps to [create a service principal for onboarding at scale](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale).
+
+ * Assign the Azure Connected Machine Onboarding role to your service principal and limit the scope of the role to the target Azure landing zone.
+ * Make a note of the Service Principal Secret; you'll need this value later.
+
+1. Modify and save the following configuration file to the remote share as `ArcConfig.json`. Edit the file with your Azure subscription, resource group, and location details. Use the service principal details from step 1 for the last two fields:
+
+```
+{
+ "tenant-id": "INSERT AZURE TENANTID",
+ "subscription-id": "INSERT AZURE SUBSCRIPTION ID",
+ "resource-group": "INSERT RESOURCE GROUP NAME",
+ "location": "INSERT REGION",
+ "service-principal-id": "INSERT SPN ID",
+ "service-principal-secret": "INSERT SPN Secret"
+ }
+```
+
+The Group Policy will onboard machines as Arc-enabled servers in the Azure subscription, resource group, and region specified in this configuration file.
+
+## Modify and save the onboarding script
+
+Before you can run the script to connect your machines, you'll need to modify and save the onboarding script:
+
+1. Edit the field for `remotePath` to reflect the distributed share location with the configuration file and Connected Machine Agent.
+
+1. Edit the `localPath` with the local path where the logs generated from the onboarding to Azure Arc-enabled servers will be saved per machine.
+
+1. Save the modified onboarding script locally and note its location. This will be referenced when creating the Group Policy Object.
+
+```
+[string] $remotePath = "\\dc-01.contoso.lcl\Software\Arc"
+[string] $localPath = "$env:HOMEDRIVE\ArcDeployment"
+
+[string] $RegKey = "HKLM\SOFTWARE\Microsoft\Azure Connected Machine Agent"
+[string] $logFile = "installationlog.txt"
+[string] $InstallationFolder = "ArcDeployment"
+[string] $configFilename = "ArcConfig.json"
+
+if (!(Test-Path $localPath) ) {
+ $BitsDirectory = new-item -path C:\ -Name $InstallationFolder -ItemType Directory
+ $logpath = new-item -path $BitsDirectory -Name $logFile -ItemType File
+}
+else{
+ $BitsDirectory = "C:\ArcDeployment"
+ }
+
+function Deploy-Agent {
+ [bool] $isDeployed = Test-Path $RegKey
+ if ($isDeployed) {
+        $logMessage = "Azure Arc-enabled server agent is already deployed, exiting process"
+ $logMessage >> $logpath
+ exit
+ }
+ else {
+ Copy-Item -Path "$remotePath\*" -Destination $BitsDirectory -Recurse -Verbose
+ $exitCode = (Start-Process -FilePath msiexec.exe -ArgumentList @("/i", "$BitsDirectory\AzureConnectedMachineAgent.msi" , "/l*v", "$BitsDirectory\$logFile", "/qn") -Wait -Passthru).ExitCode
+
+ if($exitCode -eq 0){
+ Start-Sleep -Seconds 120
+            $x = & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --config "$BitsDirectory\$configFilename"
+            $x >> $logpath
+ }
+ else {
+ $message = (net helpmsg $exitCode)
+ $message >> $logpath
+ }
+ }
+}
+
+Deploy-Agent
+```
+
+## Create a Group Policy Object
+
+Create a new Group Policy Object (GPO) to run the onboarding script using the configuration file details:
+
+1. Open the Group Policy Management Console (GPMC).
+
+1. Navigate to the Organization Unit (OU), Domain, or Security Group in your AD forest that contains the machines you want to onboard to Azure Arc-enabled servers.
+
+1. Right-click on this set of resources and select **Create a GPO in this domain, and Link it here.**
+
+1. Assign the name "Onboard servers to Azure Arc-enabled servers" to this new Group Policy Object (GPO).
+
+## Create a scheduled task
+
+The newly created GPO needs to be modified to run the onboarding script at the appropriate cadence. Use Group Policy's built-in Scheduled Task capabilities to do so:
+
+1. Select **Computer Configuration > Preferences > Control Panel Settings > Scheduled Tasks**.
+
+1. Right-click in the blank area and select **New > Scheduled Task**.
+
+Your workstation must be running Windows 7 or higher to be able to create a Scheduled Task from Group Policy Management Console.
+
+### Assign general parameters for the task
+
+In the **General** tab, set the following parameters under **Security Options**:
+
+1. In the field **When running the task, use the following user account:**, enter "NT AUTHORITY\System".
+
+1. Select **Run whether user is logged on or not**.
+
+1. Check the box for **Run with highest privileges**.
+
+1. In the field **Configure for**, select **Windows Vista or Windows Server 2008**.
++
+### Assign trigger parameters for the task
+
+In the **Triggers** tab, select **New**, then enter the following parameters in the **New Trigger** window:
+
+1. In the field **Begin the task**, select **On a schedule**.
+
+1. Under **Settings**, select **One time** and enter the date and time for the task to run.
+
+1. Under **Advanced Settings**, check the box for **Enabled**.
+
+1. Once you've set the trigger parameters, select **OK**.
++
+### Assign action parameters for the task
+
+In the **Actions** tab, select **New**, then enter the following parameters in the **New Action** window:
+
+1. For **Action**, select **Start a program** from the dropdown.
+
+1. For **Program/script**, enter `C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe`.
+
+1. For **Add arguments (optional)**, enter `-ExecutionPolicy Bypass -command <Path to Deployment Script>`.
+
+    Note that you must enter the location of the deployment script, modified earlier with the `remotePath` and `localPath` values, instead of the placeholder "Path to Deployment Script" (see the filled-in example after this list).
+
+1. For **Start In (Optional)**, enter `C:\`.
+
+1. Once you've set the action parameters, select **OK**.
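+
+For example, if you saved the modified script to the remote share as `ArcEnroll.ps1` (a hypothetical file name), the completed argument field would read:
+
+```
+-ExecutionPolicy Bypass -command "& '\\dc-01.contoso.lcl\Software\Arc\ArcEnroll.ps1'"
+```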
++
+## Apply the Group Policy Object
+
+On the Group Policy Management Console, right-click on the desired Organizational Unit and select the option to link an existing GPO. Choose the Group Policy Object defined in the scheduled task. After 10 to 20 minutes, the Group Policy Object will be replicated to the respective domain controllers. Learn more about [creating and managing group policy in Azure AD Domain Services](../../active-directory-domain-services/manage-group-policy.md).
+
+After you've successfully installed the agent and configured it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the servers in your Organizational Unit have successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal).
+
+## Next steps
+
+- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
+- Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
+- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
+- Learn more about [Group Policy](/troubleshoot/windows-server/group-policy/group-policy-overview).
azure-fluid-relay Deploy Fluid Static Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/deploy-fluid-static-web-apps.md
Title: 'How to: Deploy Fluid applications using Azure Static Web Apps' description: Detailed explanation about how Fluid applications can be hosted on Azure Static Web Apps-+ Last updated 08/19/2021
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
def main(req: func.HttpRequest, messageJSON) -> func.HttpResponse:
return func.HttpResponse(f"Table row: {messageJSON}") ```
-With this simple binding, you can't programmatically handle a case in which no row that has a row key ID is found. For more fine-grained data selection, use the [storage SDK](/azure/developer/python/azure-sdk-example-storage-use?tabs=cmd).
+With this simple binding, you can't programmatically handle a case in which no row that has a row key ID is found. For more fine-grained data selection, use the [storage SDK](/azure/developer/python/sdk/examples/azure-sdk-example-storage-use?tabs=cmd).
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-glossary-cloud-terminology.md
The agreement that describes Microsoft's commitments for uptime and connectivity
See [Service Level Agreements](https://azure.microsoft.com/support/legal/sla/) ## <a name="sas"></a>shared access signature (SAS)
-A signature that enables you to grant limited access to a resource, without exposing your account key. For example, [Azure Storage uses SAS](./storage/common/storage-sas-overview.md) to grant client access to objects such as blobs. [IoT Hub uses SAS](iot-hub/iot-hub-dev-guide-sas.md#security-tokens) to grant devices permission to send telemetry.
+A signature that enables you to grant limited access to a resource, without exposing your account key. For example, [Azure Storage uses SAS](./storage/common/storage-sas-overview.md) to grant client access to objects such as blobs. [IoT Hub uses SAS](iot-hub/iot-hub-dev-guide-sas.md#sas-tokens) to grant devices permission to send telemetry.
## storage account An account that gives you access to the Azure Blob, Queue, Table, and File services in Azure Storage. The storage account name defines the unique namespace for Azure Storage data objects.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Synapse Analytics](../../synapse-analytics/index.yml) | &#x2705; | &#x2705; | | [Time Series Insights](../../time-series-insights/index.yml) | &#x2705; | &#x2705; | | [Traffic Manager](../../traffic-manager/index.yml) | &#x2705; | &#x2705; |
-| [Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/index.yml) (formerly Video Indexer) | &#x2705; | &#x2705; |
+| [Video Analyzer for Media](../../azure-video-indexer/index.yml) (formerly Video Indexer) | &#x2705; | &#x2705; |
| [Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) | &#x2705; | &#x2705; | | [Virtual Machines](../../virtual-machines/index.yml) (incl. [Reserved VM Instances](../../virtual-machines/prepay-reserved-vm-instances.md)) | &#x2705; | &#x2705; | | [Virtual Network](../../virtual-network/index.yml) | &#x2705; | &#x2705; |
azure-government Documentation Government Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-developer-guide.md
Navigate through the following links to get started using Azure Government:
- [Connect with CLI](./documentation-government-get-started-connect-with-cli.md) - [Connect with Visual Studio](./documentation-government-connect-vs.md) - [Connect to Azure Storage](./documentation-government-get-started-connect-to-storage.md)-- [Connect with Azure SDK for Python](/azure/developer/python/azure-sdk-sovereign-domain)
+- [Connect with Azure SDK for Python](/azure/developer/python/sdk/azure-sdk-sovereign-domain)
### Azure Government Video Library
azure-monitor Data Model Pageview Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-pageview-telemetry.md
+
+ Title: Azure Application Insights Data Model - PageView Telemetry
+description: Application Insights data model for page view telemetry
+ Last updated : 03/24/2022+++
+# PageView telemetry: Application Insights data model
+
+PageView telemetry (in [Application Insights](./app-insights-overview.md)) is logged when an application user opens a new page of a monitored application. The `Page` in this context is a logical unit that is defined by the developer to be an application tab or a screen and is not necessarily correlated to a browser webpage load or refresh action. This distinction can be further understood in the context of single-page applications (SPA) where the switch between pages is not tied to browser page actions. [`pageViews.duration`](https://docs.microsoft.com/azure/azure-monitor/reference/tables/pageviews) is the time it takes for the application to present the page to the user.
+
+> [!NOTE]
+> By default, Application Insights SDKs log single PageView events on each browser webpage load action, with [`pageViews.duration`](https://docs.microsoft.com/azure/azure-monitor/reference/tables/pageviews) populated by [browser timing](#measuring-browsertiming-in-application-insights). Developers can extend additional tracking of PageView events by using the [trackPageView API call](./api-custom-events-metrics.md#page-views).
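+
+For example, a single-page application can log a PageView whenever its router swaps views. The following is a minimal sketch using the JavaScript SDK; the `appInsights` instance, page name, and URI are assumptions standing in for your own app:
+
+```javascript
+// Log a logical page change in a SPA. "name" and "uri" describe the logical page,
+// not a browser navigation; appInsights is assumed to be initialized elsewhere.
+appInsights.trackPageView({
+  name: "ProductDetails", // hypothetical logical page name
+  uri: "/products/42"     // hypothetical logical URI
+});
+```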
+
+## Measuring browserTiming in Application Insights
+
+Modern browsers expose measurements for page load actions with the [Performance API](https://developer.mozilla.org/en-US/docs/Web/API/Performance_API). Application Insights simplifies these measurements by consolidating related timings into [standard browser metrics](../essentials/metrics-supported.md#microsoftinsightscomponents) as defined by these processing time definitions:
+
+1. Client <--> DNS : Client reaches out to DNS to resolve website hostname, DNS responds with IP address.
+1. Client <--> Web Server : Client creates TCP then TLS handshakes with web server.
+1. Client <--> Web Server : Client sends request payload, waits for server to execute request, and receives first response packet.
+1. Client <-- Web Server : Client receives the rest of the response payload bytes from the web server.
+1. Client : Client now has full response payload and has to render contents into browser and load the DOM.
+
+* `browserTimings/networkDuration` = #1 + #2
+* `browserTimings/sendDuration` = #3
+* `browserTimings/receiveDuration` = #4
+* `browserTimings/processingDuration` = #5
+* `browsertimings/totalDuration` = #1 + #2 + #3 + #4 + #5
+* `pageViews/duration`
+ * The PageView duration is from the browser's performance timing interface, [`PerformanceNavigationTiming.duration`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceEntry/duration).
+ * If `PerformanceNavigationTiming` is available that duration is used.
+ * If it's not, then the *deprecated* [`PerformanceTiming`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming) interface is used and the delta between [`NavigationStart`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming/navigationStart) and [`LoadEventEnd`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming/loadEventEnd) is calculated.
+ * The developer specifies a duration value when logging custom PageView events using the [trackPageView API call](./api-custom-events-metrics.md#page-views).
+
+![Screenshot of the Metrics page in Application Insights showing graphic displays of metrics data for a web application.](./media/javascript/page-view-load-time.png)
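+
+To see roughly where these numbers come from, you can read the browser's `PerformanceNavigationTiming` entry directly. The mapping below onto the five segments is an illustrative approximation, not the SDK's exact implementation:
+
+```javascript
+// Approximate the five processing-time segments from the Performance API.
+const [nav] = performance.getEntriesByType("navigation");
+if (nav) {
+  const network = nav.connectEnd - nav.startTime;        // ~ #1 + #2: DNS, TCP, TLS
+  const send = nav.responseStart - nav.requestStart;     // ~ #3: request out, first byte back
+  const receive = nav.responseEnd - nav.responseStart;   // ~ #4: rest of the response payload
+  const processing = nav.loadEventEnd - nav.responseEnd; // ~ #5: render and DOM load
+  console.log({ network, send, receive, processing, total: nav.duration });
+}
+```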
azure-monitor Container Insights Enable New Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-new-cluster.md
You can enable monitoring of an AKS cluster using one of the supported methods:
## Enable using Azure CLI
-To enable monitoring of a new AKS cluster created with Azure CLI, follow the step in the quickstart article under the section [Create AKS cluster](../../aks/kubernetes-walkthrough.md#create-aks-cluster).
+To enable monitoring of a new AKS cluster created with Azure CLI, follow the steps in the quickstart article under the section [Create AKS cluster](../../aks/learn/quick-kubernetes-deploy-cli.md).
>[!NOTE] >If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.0.74 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
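For orientation, enabling monitoring at cluster creation time is a single flag on `az aks create`; the resource group and cluster names below are placeholders:

```azurecli
az aks create --resource-group myResourceGroup --name myAKSCluster --enable-addons monitoring --generate-ssh-keys
```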
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
This article provides an overview of the options that are available for setting
- [Azure Red Hat OpenShift](../../openshift/intro-openshift.md) version 4.x - [Red Hat OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/https://docsupdatetracker.net/index.html) version 4.x - You can enable Container insights for a new deployment or for one or more existing deployments of Kubernetes by using any of the following supported methods: - The Azure portal
To enable Container insights, use one of the methods that's described in the fol
| Deployment state | Method | Description | ||--|-|
-| New Kubernetes cluster | [Create an AKS cluster by using the Azure CLI](../../aks/kubernetes-walkthrough.md#create-aks-cluster)| You can enable monitoring for a new AKS cluster that you create by using the Azure CLI. |
+| New Kubernetes cluster | [Create an AKS cluster by using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md)| You can enable monitoring for a new AKS cluster that you create by using the Azure CLI. |
| | [Create an AKS cluster by using Terraform](container-insights-enable-new-cluster.md#enable-using-terraform)| You can enable monitoring for a new AKS cluster that you create by using the open-source tool Terraform. | | | [Create an OpenShift cluster by using an Azure Resource Manager template](container-insights-azure-redhat-setup.md#enable-for-a-new-cluster-using-an-azure-resource-manager-template) | You can enable monitoring for a new OpenShift cluster that you create by using a preconfigured Azure Resource Manager template. | | | [Create an OpenShift cluster by using the Azure CLI](/cli/azure/openshift#az-openshift-create) | You can enable monitoring when you deploy a new OpenShift cluster by using the Azure CLI. |
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Load Balancer |[Log Analytics for Azure Load Balancer](../../load-balancer/monitor-load-balancer.md) | | Azure Logic Apps |[Logic Apps B2B custom tracking schema](../../logic-apps/logic-apps-track-integration-account-custom-tracking-schema.md) | | Azure Machine Learning | [Diagnostic logging in Azure Machine Learning](../../machine-learning/monitor-resource-reference.md) |
-| Azure Media Services | [Media Services monitoring schemas](/azure/media-services/latest/monitoring/monitor-media-services-data-reference#schemas) |
+| Azure Media Services | [Media Services monitoring schemas](/azure/media-services/latest/monitoring/monitor-media-services#schemas) |
| Network security groups |[Log Analytics for network security groups (NSGs)](../../virtual-network/virtual-network-nsg-manage-log.md) | | Azure Power BI Embedded | [Logging for Power BI Embedded in Azure](/power-bi/developer/azure-pbie-diag-logs) | | Recovery Services | [Data model for Azure Backup](../../backup/backup-azure-reports-data-model.md)|
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 03/02/2022 Last updated : 04/28/2022 # Resource limits for Azure NetApp Files
The following table describes resource limits for Azure NetApp Files:
For more information, see [Capacity management FAQs](faq-capacity-management.md).
+For limits and constraints related to Azure NetApp Files network features, see [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#considerations).
+ ## Determine if a directory is approaching the limit size <a name="directory-limit"></a> You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB).
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
description: Describes the functions to use in a Bicep file to retrieve values a
Previously updated : 03/02/2022 Last updated : 04/28/2022 # Resource functions for Bicep
resource roleAssignment 'Microsoft.Authorization/roleAssignments@2018-09-01-prev
} ```
+## managementGroupResourceId
+
+`managementGroupResourceId(resourceType, resourceName1, [resourceName2], ...)`
+
+Returns the unique identifier for a resource deployed at the management group level.
+
+Namespace: [az](bicep-functions.md#namespaces-for-functions).
+
+The `managementGroupResourceId` function is available in Bicep files, but typically you don't need it. Instead, use the symbolic name for the resource and access the `id` property.
+
+The identifier is returned in the following format:
+
+```json
+/providers/Microsoft.Management/managementGroups/{managementGroupName}/providers/{resourceType}/{resourceName}
+```
+
+### Remarks
+
+You use this function to get the resource ID for resources that are [deployed to the management group](deploy-to-management-group.md) rather than a resource group. The returned ID differs from the value returned by the [resourceId](#resourceid) function by not including a subscription ID and a resource group value.
+
+### managementGroupResourceId example
+
+The following template creates a policy definition and assigns it at the management group scope. It uses the `managementGroupResourceId` function to get the resource ID for the policy definition.
+
+```bicep
+targetScope = 'managementGroup'
+
+@description('Target Management Group')
+param targetMG string
+
+@description('An array of the allowed locations, all other locations will be denied by the created policy.')
+param allowedLocations array = [
+ 'australiaeast'
+ 'australiasoutheast'
+ 'australiacentral'
+]
+
+var mgScope = tenantResourceId('Microsoft.Management/managementGroups', targetMG)
+var policyDefinitionName = 'LocationRestriction'
+
+resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2020-03-01' = {
+ name: policyDefinitionName
+ properties: {
+ policyType: 'Custom'
+ mode: 'All'
+ parameters: {}
+ policyRule: {
+ if: {
+ not: {
+ field: 'location'
+ in: allowedLocations
+ }
+ }
+ then: {
+ effect: 'deny'
+ }
+ }
+ }
+}
+
+resource location_lock 'Microsoft.Authorization/policyAssignments@2020-03-01' = {
+ name: 'location-lock'
+ properties: {
+ scope: mgScope
+ policyDefinitionId: managementGroupResourceId('Microsoft.Authorization/policyDefinitions', policyDefinitionName)
+ }
+ dependsOn: [
+ policyDefinition
+ ]
+}
+```
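+
+To try the example, you can deploy the file at management group scope with the Azure CLI. A minimal sketch; the management group ID, location, and file name are placeholders.
+
+```azurecli
+az deployment mg create \
+  --management-group-id myMG \
+  --location eastus \
+  --template-file main.bicep \
+  --parameters targetMG=myMG
+```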
+ ## tenantResourceId `tenantResourceId(resourceType, resourceName1, [resourceName2], ...)`
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 04/26/2022 Last updated : 04/28/2022 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> [!div class="mx-tableFixed"] > | Entity | Scope | Length | Valid Characters | > | | | | |
-> | netAppAccounts | resource group | 1-128 | Alphanumerics, underscores, periods, and hyphens. |
-> | netAppAccounts / capacityPools | NetApp account | 1-64 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. |
-> | netAppAccounts / snapshotPolicies | NetApp account | 1-64 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. |
-> | netAppAccounts / volumeGroups | NetApp account | 1-64 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. |
+> | netAppAccounts | resource group | 1-128 | Alphanumerics, underscores, and hyphens. <br><br> Start with alphanumeric. |
+> | netAppAccounts / backups | NetApp account | 3-225 | Alphanumerics, underscores, periods, and hyphens. <br><br> Start with alphanumeric. |
+> | netAppAccounts / backupPolicies | NetApp account | 1-64 | Alphanumerics, underscores, and hyphens. <br><br> Start with alphanumeric. |
+> | netAppAccounts / capacityPools | NetApp account | 1-64 | Alphanumerics, underscores, and hyphens.<br><br>Start with alphanumeric. |
+> | netAppAccounts / snapshots | NetApp account | 1-255 | Alphanumerics, underscores, and hyphens. <br><br> Start with alphanumeric. |
+> | netAppAccounts / snapshotPolicies | NetApp account | 1-64 | Alphanumerics, underscores, and hyphens.<br><br>Start with alphanumeric. |
+> | netAppAccounts / volumes | NetApp account | 1-64 | Alphanumerics, underscores, and hyphens. <br><br> Start with alphanumeric. |
+> | netAppAccounts / volumeGroups | NetApp account | 3-64 | Alphanumerics, underscores, and hyphens.<br><br>Start with alphanumeric. |
## Microsoft.Network
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 03/24/2022 Last updated : 03/31/2022
Resource Manager provides the following functions for getting resource values in
* [reference](#reference) * [resourceId](#resourceid) * [subscriptionResourceId](#subscriptionresourceid)
+* [managementGroupResourceId](#managementgroupresourceid)
* [tenantResourceId](#tenantresourceid) To get values from parameters, variables, or the current deployment, see [Deployment value functions](template-functions-deployment.md).
Continue adding resource names as parameters when the resource type includes mor
### Return value
-When the template is deployed at the scope of a resource group, the resource ID is returned in the following format:
+The resource ID is returned in different formats at different scopes:
-```json
-/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
-```
+* Resource group scope:
-You can use the `resourceId` function for other deployment scopes, but the format of the ID changes.
+ ```json
+ /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
+ ```
-If you use `resourceId` while deploying to a subscription, the resource ID is returned in the following format:
+* Subscription scope:
-```json
-/subscriptions/{subscriptionId}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
-```
+ ```json
+ /subscriptions/{subscriptionId}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
+ ```
-If you use `resourceId` while deploying to a management group or tenant, the resource ID is returned in the following format:
+* Management group or tenant scope:
-```json
-/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
-```
+ ```json
+ /providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
+ ```
To avoid confusion, we recommend that you don't use `resourceId` when working with resources deployed to the subscription, management group, or tenant. Instead, use the ID function that is designed for the scope.
-For [subscription-level resources](deploy-to-subscription.md), use the [subscriptionResourceId](#subscriptionresourceid) function.
-
-For [management group-level resources](deploy-to-management-group.md), use the [extensionResourceId](#extensionresourceid) function to reference a resource that is implemented as an extension of a management group. For example, custom policy definitions that are deployed to a management group are extensions of the management group. Use the [tenantResourceId](#tenantresourceid) function to reference resources that are deployed to the tenant but available in your management group. For example, built-in policy definitions are implemented as tenant level resources.
-
-For [tenant-level resources](deploy-to-tenant.md), use the [tenantResourceId](#tenantresourceid) function. Use `tenantResourceId` for built-in policy definitions because they're implemented at the tenant level.
+* For [subscription-level resources](deploy-to-subscription.md), use the [subscriptionResourceId](#subscriptionresourceid) function.
+* For [management group-level resources](deploy-to-management-group.md), use the [managementGroupResourceId](#managementgroupresourceid) function. Use the [extensionResourceId](#extensionresourceid) function to reference a resource that is implemented as an extension of a management group. For example, custom policy definitions that are deployed to a management group are extensions of the management group. Use the [tenantResourceId](#tenantresourceid) function to reference resources that are deployed to the tenant but available in your management group. For example, built-in policy definitions are implemented as tenant level resources.
+* For [tenant-level resources](deploy-to-tenant.md), use the [tenantResourceId](#tenantresourceid) function. Use `tenantResourceId` for built-in policy definitions because they're implemented at the tenant level.
### Remarks
The following template assigns a built-in role. You can deploy it to either a re
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/functions/resource/subscriptionresourceid.json":::
+## managementGroupResourceId
+
+`managementGroupResourceId([managementGroupResourceId], resourceType, resourceName1, [resourceName2], ...)`
+
+Returns the unique identifier for a resource deployed at the management group level.
+
+In Bicep, use the [managementGroupResourceId](../bicep/bicep-functions-resource.md#managementgroupresourceid) function.
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|: |: |: |: |
+| managementGroupResourceId |No |string (in GUID format) |Default value is the current management group. Specify this value when you need to retrieve a resource in another management group. |
+| resourceType |Yes |string |Type of resource including resource provider namespace. |
+| resourceName1 |Yes |string |Name of resource. |
+| resourceName2 |No |string |Next resource name segment, if needed. |
+
+Continue adding resource names as parameters when the resource type includes more segments.
+
+### Return value
+
+The identifier is returned in the following format:
+
+```json
+/providers/Microsoft.Management/managementGroups/{managementGroupName}/providers/{resourceType}/{resourceName}
+```
+
+### Remarks
+
+You use this function to get the resource ID for resources that are [deployed to the management group](deploy-to-management-group.md) rather than a resource group. The returned ID differs from the value returned by the [resourceId](#resourceid) function by not including a subscription ID and a resource group value.
+
+### managementGroupResourceId example
+
+The following template creates a policy definition and assigns it. It uses the `managementGroupResourceId` function to get the resource ID for the policy definition.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "targetMG": {
+ "type": "string",
+ "metadata": {
+ "description": "Target Management Group"
+ }
+ },
+ "allowedLocations": {
+ "type": "array",
+ "defaultValue": [
+ "australiaeast",
+ "australiasoutheast",
+ "australiacentral"
+ ],
+ "metadata": {
+ "description": "An array of the allowed locations, all other locations will be denied by the created policy."
+ }
+ }
+ },
+ "functions": [],
+ "variables": {
+ "mgScope": "[tenantResourceId('Microsoft.Management/managementGroups', parameters('targetMG'))]",
+ "policyDefinitionName": "LocationRestriction"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Authorization/policyDefinitions",
+ "apiVersion": "2020-03-01",
+ "name": "[variables('policyDefinitionName')]",
+ "properties": {
+ "policyType": "Custom",
+ "mode": "All",
+ "parameters": {},
+ "policyRule": {
+ "if": {
+ "not": {
+ "field": "location",
+ "in": "[parameters('allowedLocations')]"
+ }
+ },
+ "then": {
+ "effect": "deny"
+ }
+ }
+ }
+ },
+ {
+ "type": "Microsoft.Authorization/policyAssignments",
+ "apiVersion": "2020-03-01",
+ "name": "location-lock",
+ "properties": {
+ "scope": "[variables('mgScope')]",
+ "policyDefinitionId": "[managementGroupResourceId('Microsoft.Authorization/policyDefinitions', variables('policyDefinitionName'))]"
+ },
+ "dependsOn": [
+ "[format('Microsoft.Authorization/policyDefinitions/{0}', variables('policyDefinitionName'))]"
+ ]
+ }
+ ]
+}
+```
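+
+Before deploying, you can check the JSON template for errors at management group scope with the Azure CLI. A minimal sketch; the management group ID, location, and file name are placeholders.
+
+```azurecli
+az deployment mg validate \
+  --management-group-id myMG \
+  --location eastus \
+  --template-file azuredeploy.json \
+  --parameters targetMG=myMG
+```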
+ ## tenantResourceId `tenantResourceId(resourceType, resourceName1, [resourceName2], ...)`
azure-sql Accelerated Database Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/accelerated-database-recovery.md
- Title: Accelerated database recovery-
-description: Accelerated database recovery provides fast and consistent database recovery, instantaneous transaction rollback, and aggressive log truncation for databases in the Azure SQL portfolio.
------- Previously updated : 02/18/2022-
-# Accelerated Database Recovery in Azure SQL
-
-**Accelerated Database Recovery (ADR)** is a SQL Server database engine feature that greatly improves database availability, especially in the presence of long-running transactions, by redesigning the SQL Server database engine recovery process.
-
-ADR is currently available for Azure SQL Database, Azure SQL Managed Instance, databases in Azure Synapse Analytics, and SQL Server on Azure VMs starting with SQL Server 2019. For information on ADR in SQL Server, see [Manage accelerated database recovery](/sql/relational-databases/accelerated-database-recovery-management).
-
-> [!NOTE]
-> ADR is enabled by default in Azure SQL Database and Azure SQL Managed Instance. Disabling ADR in Azure SQL Database and Azure SQL Managed Instance is not supported.
-
-## Overview
-
-The primary benefits of ADR are:
--- **Fast and consistent database recovery**-
- With ADR, long running transactions do not impact the overall recovery time, enabling fast and consistent database recovery irrespective of the number of active transactions in the system or their sizes.
--- **Instantaneous transaction rollback**-
- With ADR, transaction rollback is instantaneous, irrespective of the time that the transaction has been active or the number of updates that it has performed.
--- **Aggressive log truncation**-
- With ADR, the transaction log is aggressively truncated, even in the presence of active long-running transactions, which prevents it from growing out of control.
-
-## Standard database recovery process
-
-Database recovery follows the [ARIES](https://people.eecs.berkeley.edu/~brewer/cs262/Aries.pdf) recovery model and consists of three phases, which are illustrated in the following diagram and explained in more detail following the diagram.
-
-![current recovery process](./media/accelerated-database-recovery/current-recovery-process.png)
--- **Analysis phase**-
- Forward scan of the transaction log from the beginning of the last successful checkpoint (or the oldest dirty page LSN) until the end, to determine the state of each transaction at the time the database stopped.
--- **Redo phase**-
- Forward scan of the transaction log from the oldest uncommitted transaction until the end, to bring the database to the state it was at the time of the crash by redoing all committed operations.
--- **Undo phase**-
- For each transaction that was active as of the time of the crash, traverses the log backwards, undoing the operations that this transaction performed.
-
-Based on this design, the time it takes the SQL Server database engine to recover from an unexpected restart is (roughly) proportional to the size of the longest active transaction in the system at the time of the crash. Recovery requires a rollback of all incomplete transactions. The length of time required is proportional to the work that the transaction has performed and the time it has been active. Therefore, the recovery process can take a long time in the presence of long-running transactions (such as large bulk insert operations or index build operations against a large table).
-
-Also, cancelling or rolling back a large transaction under this design can take a long time, as it uses the same Undo recovery phase described above.
-
-In addition, the SQL Server database engine cannot truncate the transaction log when there are long-running transactions because their corresponding log records are needed for the recovery and rollback processes. As a result of this design, some customers faced transaction logs that grew very large and consumed huge amounts of drive space.
-
-## The Accelerated Database Recovery process
-
-ADR addresses the above issues by completely redesigning the SQL Server database engine recovery process to:
--- Make it constant time/instant by avoiding having to scan the log from/to the beginning of the oldest active transaction. With ADR, the transaction log is only processed from the last successful checkpoint (or oldest dirty page Log Sequence Number (LSN)). As a result, recovery time is not impacted by long running transactions.-- Minimize the required transaction log space since there is no longer a need to process the log for the whole transaction. As a result, the transaction log can be truncated aggressively as checkpoints and backups occur.-
-At a high level, ADR achieves fast database recovery by versioning all physical database modifications and only undoing logical operations, which are limited and can be undone almost instantly. Any transactions that were active at the time of a crash are marked as aborted and, therefore, any versions generated by these transactions can be ignored by concurrent user queries.
-
-The ADR recovery process has the same three phases as the current recovery process. How these phases operate with ADR is illustrated in the following diagram and explained in more detail following the diagram.
-
-![ADR recovery process](./media/accelerated-database-recovery/adr-recovery-process.png)
--- **Analysis phase**-
- The process remains the same as before with the addition of reconstructing SLOG and copying log records for non-versioned operations.
-
- **Redo phase**-
 Broken into two phases:
- - Phase 1
-
- Redo from SLOG (oldest uncommitted transaction up to last checkpoint). Redo is a fast operation as it only needs to process a few records from the SLOG.
-
- - Phase 2
-
- Redo from Transaction Log starts from last checkpoint (instead of oldest uncommitted transaction)
--- **Undo phase**-
- The Undo phase with ADR completes almost instantaneously by using SLOG to undo non-versioned operations and Persisted Version Store (PVS) with Logical Revert to perform row level version-based Undo.
-
-## ADR recovery components
-
-The four key components of ADR are:
--- **Persisted version store (PVS)**-
- The persisted version store is a new SQL Server database engine mechanism for persisting the row versions generated in the database itself instead of the traditional `tempdb` version store. PVS enables resource isolation and improves the availability of readable secondaries.
--- **Logical revert**-
- Logical revert is the asynchronous process responsible for performing row-level version-based Undo - providing instant transaction rollback and undo for all versioned operations. Logical revert is accomplished by:
-
- - Keeping track of all aborted transactions and marking them invisible to other transactions.
- - Performing rollback by using PVS for all user transactions, rather than physically scanning the transaction log and undoing changes one at a time.
- - Releasing all locks immediately after transaction abort. Since abort involves simply marking changes in memory, the process is very efficient and therefore locks do not have to be held for a long time.
--- **SLOG**-
- SLOG is a secondary in-memory log stream that stores log records for non-versioned operations (such as metadata cache invalidation, lock acquisitions, and so on). The SLOG is:
-
- - Low volume and in-memory
- - Persisted on disk by being serialized during the checkpoint process
- - Periodically truncated as transactions commit
- - Accelerates redo and undo by processing only the non-versioned operations
- - Enables aggressive transaction log truncation by preserving only the required log records
--- **Cleaner**-
- The cleaner is the asynchronous process that wakes up periodically and cleans page versions that are not needed.
-
-## Accelerated Database Recovery (ADR) patterns
-
-The following types of workloads benefit most from ADR:
--- ADR is recommended for workloads with long running transactions. -- ADR is recommended for workloads that have seen cases where active transactions are causing the transaction log to grow significantly. -- ADR is recommended for workloads that have experienced long periods of database unavailability due to long running recovery (such as unexpected service restart or manual transaction rollback).-
-## Best practices for Accelerated Database Recovery
-- Avoid long-running transactions in the database. Though one objective of ADR is to speed up database recovery when long active transactions must be redone, long-running transactions can delay version cleanup and increase the size of the PVS.--- Avoid large transactions with data definition changes or DDL operations. ADR uses a SLOG (system log stream) mechanism to track DDL operations used in recovery. The SLOG is only used while the transaction is active. SLOG is checkpointed, so avoiding large transactions that use SLOG can help overall performance. These scenarios can cause the SLOG to take up more space:-
- - Many DDLs are executed in one transaction. For example, in one transaction, rapidly creating and dropping temp tables.
-
 - A table has a very large number of partitions/indexes that are modified. For example, a DROP TABLE operation on such a table would require a large reservation of SLOG memory, which would delay truncation of the transaction log and delay undo/redo operations. A workaround can be to drop the indexes individually and gradually, then drop the table. For more information on the SLOG, see [ADR recovery components](/sql/relational-databases/accelerated-database-recovery-concepts).
-- Prevent or reduce unnecessary aborted transactions. A high abort rate puts pressure on the PVS cleaner and lowers ADR performance. The aborts may come from a high rate of deadlocks, duplicate keys, or other constraint violations. -
- - The `sys.dm_tran_aborted_transactions` DMV shows all aborted transactions on the SQL Server instance. The `nested_abort` column indicates that the transaction committed but there are portions that aborted (savepoints or nested transactions) which can block the PVS cleanup process. For more information, see [sys.dm_tran_aborted_transactions (Transact-SQL)](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-aborted-transactions).
-
- - To activate the PVS cleanup process manually between workloads or during maintenance windows, use `sys.sp_persistent_version_cleanup`. For more information, see [sys.sp_persistent_version_cleanup](/sql/relational-databases/system-stored-procedures/sys-sp-persistent-version-cleanup-transact-sql).
-- If you observe issues with storage usage, a high rate of aborted transactions, or other factors, see [Troubleshooting Accelerated Database Recovery (ADR) on SQL Server](/sql/relational-databases/accelerated-database-recovery-troubleshoot).-
-## Next steps
--- [Accelerated database recovery](/sql/relational-databases/accelerated-database-recovery-concepts)-- [Troubleshooting Accelerated Database Recovery (ADR) on SQL Server](/sql/relational-databases/accelerated-database-recovery-troubleshoot).
azure-sql Azure Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/azure-hybrid-benefit.md
- Title: Azure Hybrid Benefit -
-description: Use existing SQL Server licenses for Azure SQL Database and SQL Managed Instance discounts.
-------- Previously updated : 11/09/2021-
-# Azure Hybrid Benefit - Azure SQL Database & SQL Managed Instance
-
-[Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) allows you to exchange your existing licenses for discounted rates on Azure SQL Database and Azure SQL Managed Instance. You can save up to 30 percent or more on SQL Database and SQL Managed Instance by using your Software Assurance-enabled SQL Server licenses on Azure. The [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) page has a calculator to help determine savings.
-
-Changing to Azure Hybrid Benefit does not require any downtime.
-
-## Overview
-
-![vcore pricing structure](./media/azure-hybrid-benefit/pricing.png)
-
-With Azure Hybrid Benefit, you pay only for the underlying Azure infrastructure by using your existing SQL Server license for the SQL Server database engine itself (Base Compute pricing). If you do not use Azure Hybrid Benefit, you pay for both the underlying infrastructure and the SQL Server license (License-Included pricing).
-
-For Azure SQL Database, Azure Hybrid Benefit is only available when using the provisioned compute tier of the [vCore-based purchasing model](database/service-tiers-vcore.md). Azure Hybrid Benefit doesn't apply to [DTU-based purchasing models](database/service-tiers-dtu.md) or the [serverless compute tier](database/serverless-tier-overview.md).
-
-## Enable Azure Hybrid Benefit
-
-### Azure SQL Database
-
-You can choose or change your licensing model for Azure SQL Database using the Azure portal or the API of your choice.
-
-You can only apply the Azure Hybrid licensing model when you choose a vCore-based purchasing model and the provisioned compute tier for your Azure SQL Database. Azure Hybrid Benefit isn't available for service tiers under the DTU-based purchasing model or for the serverless compute tier.
-
-#### [Portal](#tab/azure-portal)
-
-To set or update the license type using the Azure portal:
--- For new databases, during creation, select **Configure database** on the **Basics** tab and select the option to **Save money**.-- For existing databases, select **Compute + storage** in the **Settings** menu and select the option to **Save money**.-
-If you don't see the **Save money** option in the Azure portal, verify that you selected a service tier using the vCore-based purchasing model and the provisioned compute tier.
-#### [PowerShell](#tab/azure-powershell)
-
-To set or update the license type using PowerShell:
--- [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase) with the -LicenseType parameter-- [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) with the -LicenseType parameter-
-#### [Azure CLI](#tab/azure-cli)
-
-To set or update the license type using the Azure CLI:
--- [az sql db create](/cli/azure/sql/db#az-sql-db-create) with the --license-type parameter-
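
For example, a minimal sketch that creates a vCore database with Azure Hybrid Benefit pricing; the resource names are placeholders. Setting `--license-type` to `BasePrice` applies your existing license, while `LicenseIncluded` uses pay-as-you-go pricing.

```azurecli
# Create a General Purpose database that uses Azure Hybrid Benefit (BasePrice).
az sql db create \
  --resource-group myResourceGroup \
  --server myserver \
  --name mydb \
  --edition GeneralPurpose \
  --family Gen5 \
  --capacity 2 \
  --license-type BasePrice
```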
-#### [REST API](#tab/rest)
-
-To set or update the license type using the REST API:
--- [Create or update](/rest/api/sql/databases/createorupdate) with the properties.licenseType parameter-- [Update](/rest/api/sql/databases/update) with the properties.licenseType parameter---
-### Azure SQL Managed Instance
-
-You can choose or change your licensing model for Azure SQL Managed Instance using the Azure portal or the API of your choice.
-#### [Portal](#tab/azure-portal)
-
-To set or update the license type using the Azure portal:
--- For new managed instances, during creation, select **Configure Managed Instance** on the **Basics** tab and select the option for **Azure Hybrid Benefit**.-- For existing managed instances, select **Compute + storage** in the **Settings** menu and select the option for **Azure Hybrid Benefit**.-
-#### [PowerShell](#tab/azure-powershell)
-
-To set or update the license type using PowerShell:
--- [New-AzSqlInstance](/powershell/module/az.sql/new-azsqlinstance) with the -LicenseType parameter-- [Set-AzSqlInstance](/powershell/module/az.sql/set-azsqlinstance) with the -LicenseType parameter-
-#### [Azure CLI](#tab/azure-cli)
-
-To set or update the license type using the Azure CLI:
--- [az sql mi create](/cli/azure/sql/mi#az-sql-mi-create) with the --license-type parameter-- [az sql mi update](/cli/azure/sql/mi#az-sql-mi-update) with the --license-type parameter-
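
For example, a minimal sketch that switches an existing managed instance to Azure Hybrid Benefit pricing; the resource names are placeholders.

```azurecli
# Apply Azure Hybrid Benefit (BasePrice) to an existing managed instance.
az sql mi update \
  --resource-group myResourceGroup \
  --name mymanagedinstance \
  --license-type BasePrice
```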
-#### [REST API](#tab/rest)
-
-To set or update the license type using the REST API:
--- [Create or update](/rest/api/sql/managedinstances/createorupdate) with the properties.licenseType parameter-- [Update](/rest/api/sql/managedinstances/update) with the properties.licenseType parameter--
-## Frequently asked questions
-
-### Are there dual-use rights with Azure Hybrid Benefit for SQL Server?
-
-You have 180 days of dual use rights of the license to ensure migrations are running seamlessly. After that 180-day period, you can only use the SQL Server license on Azure. You no longer have dual use rights on-premises and on Azure.
-
-### How does Azure Hybrid Benefit for SQL Server differ from license mobility?
-
-We offer license mobility benefits to SQL Server customers with Software Assurance. License mobility allows reassignment of their licenses to a partner's shared servers. You can use this benefit on Azure IaaS and AWS EC2.
-
-Azure Hybrid Benefit for SQL Server differs from license mobility in two key areas:
--- It provides economic benefits for moving highly virtualized workloads to Azure. SQL Server Enterprise Edition customers can get four cores in Azure in the General Purpose SKU for every core they own on-premises for highly virtualized applications. License mobility doesn't allow any special cost benefits for moving virtualized workloads to the cloud.-- It provides for a PaaS destination on Azure (SQL Managed Instance) that's highly compatible with SQL Server.-
-### What are the specific rights of the Azure Hybrid Benefit for SQL Server?
-
-SQL Database and SQL Managed Instance customers have the following rights associated with Azure Hybrid Benefit for SQL Server:
-
-|License footprint|What does Azure Hybrid Benefit for SQL Server get you?|
-|||
-|SQL Server Enterprise Edition core customers with SA|<li>Can pay base rate on Hyperscale, General Purpose, or Business Critical SKU</li><br><li>One core on-premises = Four vCores in Hyperscale SKU</li><br><li>One core on-premises = Four vCores in General Purpose SKU</li><br><li>One core on-premises = One vCore in Business Critical SKU</li>|
-|SQL Server Standard Edition core customers with SA|<li>Can pay base rate on Hyperscale, General Purpose, or Business Critical SKU</li><br><li>One core on-premises = One vCore in Hyperscale SKU</li><br><li>One core on-premises = One vCore in General Purpose SKU</li><br><li>Four cores on-premises = One vCore in Business Critical SKU</li>|
-
-## Next steps
--- For help with choosing an Azure SQL deployment option, see [Service comparison](azure-sql-iaas-vs-paas-what-is-overview.md).-- For a comparison of SQL Database and SQL Managed Instance features, see [Features of SQL Database and SQL Managed Instance](database/features-comparison.md).
azure-sql Azure Sql Iaas Vs Paas What Is Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/azure-sql-iaas-vs-paas-what-is-overview.md
- Title: "What is Azure SQL?"
-description: "Learn about the different options within the Azure SQL family of
-----
-keywords: SQL Server cloud, SQL Server in the cloud, PaaS database, cloud SQL Server, DBaaS, IaaS
--- Previously updated : 03/18/2022-
-# What is Azure SQL?
-
-Azure SQL is a family of managed, secure, and intelligent products that use the SQL Server database engine in the Azure cloud.
--- **Azure SQL Database**: Support modern cloud applications on an intelligent, managed database service, that includes serverless compute. -- **Azure SQL Managed Instance**: Modernize your existing SQL Server applications at scale with an intelligent fully managed instance as a service, with almost 100% feature parity with the SQL Server database engine. Best for most migrations to the cloud.-- **SQL Server on Azure VMs**: Lift-and-shift your SQL Server workloads with ease and maintain 100% SQL Server compatibility and operating system-level access.
-
-Azure SQL is built upon the familiar SQL Server engine, so you can migrate applications with ease and continue to use the tools, languages, and resources you're familiar with. Your skills and experience transfer to the cloud, so you can do even more with what you already have.
-
-Learn how each product fits into Microsoft's Azure SQL data platform to match the right option for your business requirements. Whether you prioritize cost savings or minimal administration, this article can help you decide which approach delivers against the business requirements you care about most.
-
-If you're new to Azure SQL, check out the *What is Azure SQL* video from our in-depth [Azure SQL video series](/shows/Azure-SQL-for-Beginners/?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
-> [!VIDEO https://docs.microsoft.com/shows/Azure-SQL-for-Beginners/What-is-Azure-SQL-3-of-61/player]
---
-## Overview
-
-In today's data-driven world, driving digital transformation increasingly depends on our ability to manage massive amounts of data and harness its potential. But today's data estates are increasingly complex, with data hosted on-premises, in the cloud, or at the edge of the network. Developers who are building intelligent and immersive applications can find themselves constrained by limitations that can ultimately impact their experience. Limitations arising from incompatible platforms, inadequate data security, insufficient resources and price-performance barriers create complexity that can inhibit app modernization and development.
-
-One of the first things to understand in any discussion of Azure versus on-premises SQL Server databases is that you can use it all. Microsoft's data platform leverages SQL Server technology and makes it available across physical on-premises machines, private cloud environments, third-party hosted private cloud environments, and the public cloud.
--
-### Fully managed and always up to date
-
-Spend more time innovating and less time patching, updating, and backing up your databases. Azure is the only cloud with evergreen SQL that automatically applies the latest updates and patches so that your databases are always up to date, eliminating end-of-support hassle. Even complex tasks like performance tuning, high availability, disaster recovery, and backups are automated, freeing you to focus on applications.
-
-### Protect your data with built-in intelligent security
-
-Azure constantly monitors your data for threats. With Azure SQL, you can:
--- Remediate potential threats in real time with intelligent [advanced threat detection](../security/fundamentals/threat-detection.md#threat-protection-features-other-azure-services) and proactive vulnerability assessment alerts. -- Get industry-leading, multi-layered protection with [built-in security controls](https://azure.microsoft.com/overview/security/) including T-SQL, authentication, networking, and key management. -- Take advantage of the most comprehensive [compliance](https://azure.microsoft.com/overview/trusted-cloud/compliance/) coverage of any cloud database service. --
-### Business motivations
-
-There are several factors that can influence your decision to choose between the different data offerings:
-- [Cost](#cost): Both platform as a service (PaaS) and infrastructure as a service (IaaS) options include a base price that covers the underlying infrastructure and licensing. However, with the IaaS option you need to invest additional time and resources to manage your database, while in PaaS you get these administration features included in the price. IaaS enables you to shut down resources while you are not using them to decrease the cost, while PaaS is always running unless you drop and re-create your resources when they are needed.-- [Administration](#administration): PaaS options reduce the amount of time that you need to invest to administer the database. However, they also limit the range of custom administration tasks and scripts that you can perform or run. For example, the CLR is not supported with SQL Database, but is supported for an instance of SQL Managed Instance. Also, no deployment options in PaaS support the use of trace flags.-- [Service-level agreement](#service-level-agreement-sla): Both IaaS and PaaS provide a high, industry-standard SLA. The PaaS option guarantees a 99.99% SLA, while IaaS guarantees a 99.95% SLA for infrastructure, meaning that you need to implement additional mechanisms to ensure availability of your databases. You can attain a 99.99% SLA by creating an additional SQL virtual machine, and implementing the SQL Server Always On availability group high availability solution. -- [Time to move to Azure](#market): SQL Server on Azure VM is an exact match of your environment, so migration from on-premises to the Azure VM is no different than moving the databases from one on-premises server to another. SQL Managed Instance also enables easy migration; however, there might be some changes that you need to apply before your migration. --
-## Service comparison
-
- ![Cloud SQL Server options: SQL Server on IaaS, or SaaS SQL Database in the cloud.](./media/azure-sql-iaas-vs-paas-what-is-overview/SQLIAAS_SQL_Server_Cloud_Continuum.png)
-
-As seen in the diagram, each service offering can be characterized by the level of administration you have over the infrastructure, and by the degree of cost efficiency.
-
-In Azure, you can have your SQL Server workloads running as a hosted service ([PaaS](https://azure.microsoft.com/overview/what-is-paas/)), or a hosted infrastructure ([IaaS](https://azure.microsoft.com/overview/what-is-iaas/)). Within PaaS, you have multiple product options, and service tiers within each option. The key question that you need to ask when deciding between PaaS or IaaS is do you want to manage your database, apply patches, and take backups, or do you want to delegate these operations to Azure?
-
-### Azure SQL Database
-
-[Azure SQL Database](database/sql-database-paas-overview.md) is a relational database-as-a-service (DBaaS) hosted in Azure that falls into the industry category of *Platform-as-a-Service (PaaS)*.
-- Best for modern cloud applications that want to use the latest stable SQL Server features and have time constraints in development and marketing. -- A fully managed SQL Server database engine, based on the latest stable Enterprise Edition of SQL Server. SQL Database has two deployment options built on standardized hardware and software that is owned, hosted, and maintained by Microsoft. -
-With SQL Server, you can use built-in features and functionality that requires extensive configuration (either on-premises or in an Azure virtual machine). When using SQL Database, you pay-as-you-go with options to scale up or out for greater power with no interruption. SQL Database has some additional features that are not available in SQL Server, such as built-in high availability, intelligence, and management.
--
-Azure SQL Database offers the following deployment options:
- - As a [*single database*](database/single-database-overview.md) with its own set of resources managed via a [logical SQL server](database/logical-servers.md). A single database is similar to a [contained database](/sql/relational-databases/databases/contained-databases) in SQL Server. This option is optimized for modern application development of new cloud-born applications. [Hyperscale](database/service-tier-hyperscale.md) and [serverless](database/serverless-tier-overview.md) options are available.
- - An [*elastic pool*](database/elastic-pool-overview.md), which is a collection of databases with a shared set of resources managed via a [logical server](database/logical-servers.md). Single databases can be moved into and out of an elastic pool. This option is optimized for modern application development of new cloud-born applications using the multi-tenant SaaS application pattern. Elastic pools provide a cost-effective solution for managing the performance of multiple databases that have variable usage patterns.
-
-### Azure SQL Managed Instance
-
-[Azure SQL Managed Instance](managed-instance/sql-managed-instance-paas-overview.md) falls into the industry category of *Platform-as-a-Service (PaaS)*, and is best for most migrations to the cloud. SQL Managed Instance is a collection of system and user databases with a shared set of resources that is lift-and-shift ready.
-- Best for new applications or existing on-premises applications that want to use the latest stable SQL Server features and that are migrated to the cloud with minimal changes. An instance of SQL Managed Instance is similar to an instance of the [Microsoft SQL Server database engine](/sql/database-engine/sql-server-database-engine-overview) offering shared resources for databases and additional instance-scoped features. -- SQL Managed Instance supports database migration from on-premises with minimal to no database change. This option provides all of the PaaS benefits of Azure SQL Database but adds capabilities that were previously only available in SQL Server VMs. This includes a native virtual network and near 100% compatibility with on-premises SQL Server. Instances of SQL Managed Instance provide full SQL Server access and feature compatibility for migrating SQL Servers to Azure.-
-### SQL Server on Azure VM
-
-[SQL Server on Azure VM](virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) falls into the industry category *Infrastructure-as-a-Service (IaaS)* and allows you to run SQL Server inside a fully managed virtual machine (VM) in Azure.
-- SQL Server installed and hosted in the cloud runs on Windows Server or Linux virtual machines running on Azure, also known as an infrastructure as a service (IaaS). SQL virtual machines are a good option for migrating on-premises SQL Server databases and applications without any database change. All recent versions and editions of SQL Server are available for installation in an IaaS virtual machine. -- Best for migrations and applications requiring OS-level access. SQL virtual machines in Azure are lift-and-shift ready for existing applications that require fast migration to the cloud with minimal changes or no changes. SQL virtual machines offer full administrative control over the SQL Server instance and underlying OS for migration to Azure. -- The most significant difference from SQL Database and SQL Managed Instance is that SQL Server on Azure Virtual Machines allows full control over the database engine. You can choose when to start maintenance/patching, change the recovery model to simple or bulk-logged, pause or start the service when needed, and you can fully customize the SQL Server database engine. With this additional control comes the added responsibility to manage the virtual machine.-- Rapid development and test scenarios when you do not want to buy on-premises non-production SQL Server hardware. SQL virtual machines also run on standardized hardware that is owned, hosted, and maintained by Microsoft. When using SQL virtual machines, you can either pay-as-you-go for a SQL Server license already included in a SQL Server image or easily use an existing license. You can also stop or resume the VM as needed. -- Optimized for migrating existing applications to Azure or extending existing on-premises applications to the cloud in hybrid deployments. In addition, you can use SQL Server in a virtual machine to develop and test traditional SQL Server applications. With SQL virtual machines, you have the full administrative rights over a dedicated SQL Server instance and a cloud-based VM. It is a perfect choice when an organization already has IT resources available to maintain the virtual machines. These capabilities allow you to build a highly customized system to address your application's specific performance and availability requirements.--
-### Comparison table
-
-Additional differences are listed in the following table, but *both SQL Database and SQL Managed Instance are optimized to reduce overall management costs to a minimum for provisioning and managing many databases.* Ongoing administration costs are reduced since you do not have to manage any virtual machines, operating system, or database software. You do not have to manage upgrades, high availability, or [backups](database/automated-backups-overview.md).
-
-In general, SQL Database and SQL Managed Instance can dramatically increase the number of databases managed by a single IT or development resource. [Elastic pools](database/elastic-pool-overview.md) also support SaaS multi-tenant application architectures with features including tenant isolation and the ability to scale to reduce costs by sharing resources across databases. [SQL Managed Instance](managed-instance/sql-managed-instance-paas-overview.md) provides support for instance-scoped features enabling easy migration of existing applications, as well as sharing resources among databases. Whereas, [SQL Server on Azure VMs](virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) provide DBAs with an experience most similar to the on-premises environment they're familiar with.
--
-| Azure SQL Database | Azure SQL Managed Instance | SQL Server on Azure VM |
-| : | : | : |
-|Supports most on-premises database-level capabilities. The most commonly used SQL Server features are available.<br/>99.995% availability guaranteed.<br/>Built-in backups, patching, recovery.<br/>Latest stable Database Engine version.<br/>Ability to assign necessary resources (CPU/storage) to individual databases.<br/>Built-in advanced intelligence and security.<br/>Online change of resources (CPU/storage).| Supports almost all on-premises instance-level and database-level capabilities. High compatibility with SQL Server.<br/>99.99% availability guaranteed.<br/>Built-in backups, patching, recovery.<br/>Latest stable Database Engine version.<br/>Easy migration from SQL Server.<br/>Private IP address within Azure Virtual Network.<br/>Built-in advanced intelligence and security.<br/>Online change of resources (CPU/storage).| You have full control over the SQL Server engine. Supports all on-premises capabilities.<br/>Up to 99.99% availability.<br/>Full parity with the matching version of on-premises SQL Server.<br/>Fixed, well-known Database Engine version.<br/>Easy migration from SQL Server.<br/>Private IP address within Azure Virtual Network.<br/>You have the ability to deploy application or services on the host where SQL Server is placed.|
-|Migration from SQL Server might be challenging.<br/>Some SQL Server features are not available.<br/>Configurable [maintenance windows](database/maintenance-window.md).<br/>Compatibility with the SQL Server version can be achieved only using database compatibility levels.<br/>Private IP address support with [Azure Private Link](database/private-endpoint-overview.md).|There is still a minimal number of SQL Server features that are not available.<br/>Configurable [maintenance windows](database/maintenance-window.md).<br/>Compatibility with the SQL Server version can be achieved only using database compatibility levels.|You may use [manual or automated backups](virtual-machines/windows/backup-restore.md).<br>You need to implement your own High-Availability solution.<br/>There is a downtime while changing the resources (CPU/storage).|
-| Databases of up to 100 TB. | Up to 16 TB. | SQL Server instances with up to 256 TB of storage. The instance can support as many databases as needed. |
-| On-premises application can access data in Azure SQL Database. | [Native virtual network implementation](managed-instance/vnet-existing-add-subnet.md) and connectivity to your on-premises environment using Azure Express Route or VPN Gateway. | With SQL virtual machines, you can have applications that run partly in the cloud and partly on-premises. For example, you can extend your on-premises network and Active Directory Domain to the cloud via [Azure Virtual Network](../virtual-network/virtual-networks-overview.md). For more information on hybrid cloud solutions, see [Extending on-premises data solutions to the cloud](/azure/architecture/data-guide/scenarios/hybrid-on-premises-and-cloud). |
--
-## Cost
-
-Whether you're a startup that is strapped for cash, or a team in an established company that operates under tight budget constraints, limited funding is often the primary driver when deciding how to host your databases. In this section, you learn about the billing and licensing basics in Azure associated with the Azure SQL family of services. You also learn about calculating the total application cost.
-
-### Billing and licensing basics
-
-Currently, both **SQL Database** and **SQL Managed Instance** are sold as a service and are available with several options and in several service tiers with different prices for resources, all of which are billed hourly at a fixed rate based on the service tier and compute size you choose. For the latest information on the current supported service tiers, compute sizes, and storage amounts, see [DTU-based purchasing model for SQL Database](database/service-tiers-dtu.md) and [vCore-based purchasing model for both SQL Database and SQL Managed Instance](database/service-tiers-vcore.md).
-- With SQL Database, you can choose a service tier that fits your needs from a wide range of prices starting from $5/month for the Basic tier, and you can create [elastic pools](database/elastic-pool-overview.md) to share resources among databases to reduce costs and accommodate usage spikes.-- With SQL Managed Instance, you can also bring your own license. For more information on bring-your-own licensing, see [License Mobility through Software Assurance on Azure](https://azure.microsoft.com/pricing/license-mobility/) or use the [Azure Hybrid Benefit calculator](https://azure.microsoft.com/pricing/hybrid-benefit/#sql-database) to see how to **save up to 40%**.-
-In addition, you are billed for outgoing Internet traffic at regular [data transfer rates](https://azure.microsoft.com/pricing/details/data-transfers/). You can dynamically adjust service tiers and compute sizes to match your application's varied throughput needs.
-
-With **SQL Database** and **SQL Managed Instance**, the database software is automatically configured, patched, and upgraded by Azure, which reduces your administration costs. In addition, its [built-in backup](database/automated-backups-overview.md) capabilities help you achieve significant cost savings, especially when you have a large number of databases.
-
-With **SQL on Azure VMs**, you can use any of the platform-provided SQL Server images (which include a license) or bring your SQL Server license. All the supported SQL Server versions (2008R2, 2012, 2014, 2016, 2017, 2019) and editions (Developer, Express, Web, Standard, Enterprise) are available. In addition, Bring-Your-Own-License versions (BYOL) of the images are available. When using the Azure provided images, the operational cost depends on the VM size and the edition of SQL Server you choose. Regardless of VM size or SQL Server edition, you pay the per-minute licensing cost of SQL Server and the Windows or Linux Server, along with the Azure Storage cost for the VM disks. The per-minute billing option allows you to use SQL Server for as long as you need without buying additional SQL Server licenses. If you bring your own SQL Server license to Azure, you are charged for server and storage costs only. For more information on bring-your-own licensing, see [License Mobility through Software Assurance on Azure](https://azure.microsoft.com/pricing/license-mobility/). In addition, you are billed for outgoing Internet traffic at regular [data transfer rates](https://azure.microsoft.com/pricing/details/data-transfers/).
-
-#### Calculating the total application cost
-
-When you start using a cloud platform, the cost of running your application includes the cost for new development and ongoing administration costs, plus the public cloud platform service costs.
-
-For more information on pricing, see the following resources:
--- [SQL Database & SQL Managed Instance pricing](https://azure.microsoft.com/pricing/details/sql-database/)-- [Virtual machine pricing](https://azure.microsoft.com/pricing/details/virtual-machines/) for [SQL](https://azure.microsoft.com/pricing/details/virtual-machines/#sql) and for [Windows](https://azure.microsoft.com/pricing/details/virtual-machines/#windows)-- [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)-
-## Administration
-
-For many businesses, the decision to transition to a cloud service is as much about offloading complexity of administration as it is cost. With IaaS and PaaS, Azure administers the underlying infrastructure and automatically replicates all data to provide disaster recovery, configures and upgrades the database software, manages load balancing, and does transparent failover if there is a server failure within a data center.
--- With **SQL Database** and **SQL Managed Instance**, you can continue to administer your database, but you no longer need to manage the database engine, the operating system, or the hardware. Examples of items you can continue to administer include databases and logins, index and query tuning, and auditing and security. Additionally, configuring high availability to another data center requires minimal configuration and administration.-- With **SQL on Azure VM**, you have full control over the operating system and SQL Server instance configuration. With a VM, it's up to you to decide when to update/upgrade the operating system and database software and when to install any additional software such as anti-virus. Some automated features are provided to dramatically simplify patching, backup, and high availability. In addition, you can control the size of the VM, the number of disks, and their storage configurations. Azure allows you to change the size of a VM as needed. For information, see [Virtual Machine and Cloud Service Sizes for Azure](../virtual-machines/sizes.md).-
-## Service-level agreement (SLA)
-
-For many IT departments, meeting up-time obligations of a service-level agreement (SLA) is a top priority. In this section, we look at what SLA applies to each database hosting option.
-
-For both **Azure SQL Database** and **Azure SQL Managed Instance**, Microsoft provides an availability SLA of 99.99%. For the latest information, see [Service-level agreement](https://azure.microsoft.com/support/legal/sla/azure-sql-database).
-
-For **SQL on Azure VM**, Microsoft provides an availability SLA of 99.95% that covers just the virtual machine. This SLA does not cover the processes (such as SQL Server) running on the VM and requires that you host at least two VM instances in an availability set. For the latest information, see the [VM SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines/). For database high availability (HA) within VMs, you should configure one of the supported high availability options in SQL Server, such as [Always On availability groups](/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server). Using a supported high availability option doesn't provide an additional SLA, but allows you to achieve >99.99% database availability.
-
-## <a name="market"></a>Time to move to Azure
-
-**Azure SQL Database** is the right solution for cloud-designed applications when developer productivity and fast time-to-market for new solutions are critical. With programmatic DBA-like functionality, it is perfect for cloud architects and developers as it lowers the need for managing the underlying operating system and database.
-
-**Azure SQL Managed Instance** greatly simplifies the migration of existing applications to Azure, enabling you to bring migrated database applications to market in Azure quickly.
-
-**SQL on Azure VM** is perfect if your existing or new applications require large databases or access to all features in SQL Server or Windows/Linux, and you want to avoid the time and expense of acquiring new on-premises hardware. It is also a good fit when you want to migrate existing on-premises applications and databases to Azure as-is - in cases where SQL Database or SQL Managed Instance is not a good fit. Since you do not need to change the presentation, application, and data layers, you save time and budget on re-architecting your existing solution. Instead, you can focus on migrating all your solutions to Azure and in doing some performance optimizations that may be required by the Azure platform. For more information, see [Performance Best Practices for SQL Server on Azure Virtual Machines](./virtual-machines/windows/performance-guidelines-best-practices-checklist.md).
--
-## Next steps
--- See [Your first Azure SQL Database](database/single-database-create-quickstart.md) to get started with SQL Database.-- See [Your first Azure SQL Managed Instance](managed-instance/instance-create-quickstart.md) to get started with SQL Managed Instance. -- See [SQL Database pricing](https://azure.microsoft.com/pricing/details/sql-database/).-- See [Azure SQL Managed Instance pricing](https://azure.microsoft.com/pricing/details/azure-sql-managed-instance/single/).-- See [Provision a SQL Server virtual machine in Azure](virtual-machines/windows/create-sql-vm-portal.md) to get started with SQL Server on Azure VMs.-- [Identify the right SQL Database or SQL Managed Instance SKU for your on-premises database](/sql/dma/dma-sku-recommend-sql-db/).
azure-sql Capacity Errors Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/capacity-errors-troubleshoot.md
- Title: Resolve capacity errors with Azure SQL resources
-description: Learn how to resolve possible capacity errors when attempting to deploy or scale Azure SQL Database or Azure SQL Managed Instance resources.
------- Previously updated : 09/03/2021---
-# Resolve capacity errors with Azure SQL Database or Azure SQL Managed Instance
-
-In this article, learn how to resolve capacity errors when deploying Azure SQL Database or Azure SQL Managed Instance resources.
-
-## Exceeded quota
-
-If you encounter any of the following errors when attempting to deploy your Azure SQL resource, please [request to increase your quota](database/quota-increase-request.md):
-- `Server quota limit has been reached for this location. Please select a different location with lower server count.`
-- `Could not perform the operation because server would exceed the allowed Database Throughput Unit quota of xx.`
-- During a scale operation, you may see the following error:
-  `Could not perform the operation because server would exceed the allowed Database Throughput Unit quota of xx.`
-
-## Subscription access
-
-Your subscription may not have access to create a server in the selected region if your subscription has not been registered with the SQL resource provider (RP).
-
-If you see the following errors, please [register your subscription with the SQL RP](#register-with-sql-rp):
-- `Your subscription does not have access to create a server in the selected region.`
-- `Provisioning is restricted in this region. Please choose a different region. For exceptions to this rule please open a support request with issue type of 'Service and subscription limits'`
-- `Location 'region name' is not accepting creation of new Windows Azure SQL Database servers for the subscription 'subscription id' at this time`
-
-## Enable region
-
-Your subscription may not have access to create a server in the selected region if that region has not been enabled. To resolve this, file a [support request to enable a specific region](database/quota-increase-request.md#region) for your subscription.
-
-If you see the following errors, file a support ticket to enable a specific region:
-- `Your subscription does not have access to create a server in the selected region.`
-- `Provisioning is restricted in this region. Please choose a different region. For exceptions to this rule please open a support request with issue type of 'Service and subscription limits'`
-- `Location 'region name' is not accepting creation of new Windows Azure SQL Database servers for the subscription 'subscription id' at this time`
-
-## Register with SQL RP
-
-To deploy Azure SQL resources, register your subscription with the SQL resource provider (RP).
-
-You can register your subscription using the Azure portal, [the Azure CLI](/cli/azure/install-azure-cli), or [Azure PowerShell](/powershell/azure/install-az-ps).
-
-# [Azure portal](#tab/portal)
-
-To register your subscription in the Azure portal, follow these steps:
-
-1. Open the Azure portal and go to **All Services**.
-1. Go to **Subscriptions** and select the subscription of interest.
-1. On the **Subscriptions** page, select **Resource providers** under **Settings**.
-1. Enter **sql** in the filter to bring up the SQL-related resource providers.
-1. Select **Register**, **Re-register**, or **Unregister** for the **Microsoft.Sql** provider, depending on your desired action.
-
- ![Modify the provider](./media/capacity-errors-troubleshoot/register-with-sql-rp.png)
-
-# [Azure CLI](#tab/bash)
-
-To register your subscription using [the Azure CLI](/cli/azure/install-azure-cli), run this command:
-
-```azurecli-interactive
-# Register the SQL resource provider to your subscription
-az provider register --namespace Microsoft.Sql
-```
-
-# [Azure PowerShell](#tab/powershell)
-
-To register your subscription using [Azure PowerShell](/powershell/azure/install-az-ps), run this cmdlet:
-
-```powershell-interactive
-# Register the SQL resource provider to your subscription
-Register-AzResourceProvider -ProviderNamespace Microsoft.Sql
-
-```
---
-## Additional provisioning issues
-
-If you're still experiencing provisioning issues, open a **Region** access request under the support topic of SQL Database and specify the DTUs or vCores you want to consume on Azure SQL Database or Azure SQL Managed Instance.
-
-## Azure Program regions
-
-Azure Program offerings (Azure Pass, Imagine, Azure for Students, MPN, BizSpark, BizSpark Plus, Microsoft for Startups / Sponsorship Offers, Visual Studio Subscriptions / MSDN) have access to a limited set of regions.
-
-If your subscription is part of an Azure Program offering and you would like to request access to any of the following regions, consider using an alternate region instead:
-
-_Australia Central, Australia Central 2, Australia SouthEast, Brazil SouthEast, Canada East, China East, China North, China North 2, France South, Germany North, Japan West, JIO India Central, JIO India West, Korea South, Norway West, South Africa West, South India, Switzerland West, UAE Central, UK West, US DoD Central, US DoD East, US Gov Arizona, US Gov Texas, West Central US, West India._
-
-## Next steps
-
-After you submit your request, it will be reviewed. You will be contacted with an answer based on the information you provided in the form.
-
-For more information about other Azure limits, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
azure-sql Active Directory Interactive Connect Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-directory-interactive-connect-azure-sql-db.md
- Title: ActiveDirectoryInteractive connects to SQL
-description: "C# Code example, with explanations, for connecting to Azure SQL Database by using SqlAuthenticationMethod.ActiveDirectoryInteractive mode."
-------- Previously updated : 04/06/2022-
-# Connect to Azure SQL Database with Azure AD Multi-Factor Authentication
-
-This article provides a C# program that connects to Azure SQL Database. The program uses interactive mode authentication, which supports [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md).
-
-For more information about Multi-Factor Authentication support for SQL tools, see [Using multi-factor Azure Active Directory authentication](./authentication-mfa-ssms-overview.md).
-
-## Multi-Factor Authentication for Azure SQL Database
-
-`Active Directory Interactive` authentication supports multi-factor authentication using [Microsoft.Data.SqlClient](/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace) to connect to Azure SQL data sources. In a client C# program, the `SqlAuthenticationMethod.ActiveDirectoryInteractive` enum value directs the driver to use the Azure Active Directory (Azure AD) interactive mode, which supports Multi-Factor Authentication, to connect to Azure SQL Database. The user who runs the program sees the following dialog boxes:
-
-* A dialog box that displays an Azure AD user name and asks for the user's password.
-
- If the user's domain is federated with Azure AD, the dialog box doesn't appear, because no password is needed.
-
- If the Azure AD policy imposes Multi-Factor Authentication on the user, a dialog box for signing in to your account appears.
-
-* The first time a user goes through Multi-Factor Authentication, the system displays a dialog box that asks for a mobile phone number to send text messages to. Each message provides the *verification code* that the user must enter in the next dialog box.
-
-* A dialog box that asks for a Multi-Factor Authentication verification code, which the system has sent to a mobile phone.
-
-For information about how to configure Azure AD to require Multi-Factor Authentication, see [Getting started with Azure AD Multi-Factor Authentication in the cloud](../../active-directory/authentication/howto-mfa-getstarted.md).
-
-For screenshots of these dialog boxes, see [Configure multi-factor authentication for SQL Server Management Studio and Azure AD](authentication-mfa-ssms-configure.md).
-
-> [!TIP]
-> You can search .NET Framework APIs with the [.NET API Browser tool page](/dotnet/api/).
->
-> You can also search directly with the [optional ?term=&lt;search value&gt; parameter](/dotnet/api/?term=SqlAuthenticationMethod).
-
-## Prerequisite
-
-Before you begin, you should have a [logical SQL server](logical-servers.md) created and available.
-
-### Set an Azure AD admin for your server
-
-For the C# example to run, a [logical SQL server](logical-servers.md) admin needs to assign an Azure AD admin for your server.
-
-On the **SQL server** page, select **Active Directory admin** > **Set admin**.
-
-For more information about Azure AD admins and users for Azure SQL Database, see the screenshots in [Configure and manage Azure Active Directory authentication with SQL Database](authentication-aad-configure.md#provision-azure-ad-admin-sql-database).
-
-## Microsoft.Data.SqlClient
-
-The C# example relies on the [Microsoft.Data.SqlClient](/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace) namespace. For more information, see [Using Azure Active Directory authentication with SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication).
-
-> [!NOTE]
-> [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) uses the Azure Active Directory Authentication Library (ADAL), which will be deprecated. If you're using the [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) namespace for Azure Active Directory authentication, migrate applications to [Microsoft.Data.SqlClient](/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace) and the [Microsoft Authentication Library (MSAL)](../../active-directory/develop/msal-migration.md). For more information about using Azure AD authentication with SqlClient, see [Using Azure Active Directory authentication with SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication).
-
-## Verify with SQL Server Management Studio
-
-Before you run the C# example, it's a good idea to check that your setup and configurations are correct in [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms). Any C# program failure can then be narrowed to source code.
-
-### Verify server-level firewall IP addresses
-
-Run SSMS from the same computer, in the same building, where you plan to run the C# example. For this test, any **Authentication** mode is OK. If there's any indication that the server isn't accepting your IP address, see [server-level and database-level firewall rules](firewall-configure.md) for help.
-
-### Verify Azure Active Directory Multi-Factor Authentication
-
-Run SSMS again, this time with **Authentication** set to **Azure Active Directory - Universal with MFA**. This option requires SSMS version 17.5 or later.
-
-For more information, see [Configure Multi-Factor Authentication for SSMS and Azure AD](authentication-mfa-ssms-configure.md).
-
-> [!NOTE]
-> If you are a guest user in the database, you also need to provide the Azure AD domain name for the database: Select **Options** > **AD domain name or tenant ID**. If you are running SSMS 18.x or later, the AD domain name or tenant ID is no longer needed for guest users because 18.x or later automatically recognizes it.
->
-> To find the domain name in the Azure portal, select **Azure Active Directory** > **Custom domain names**. In the C# example program, providing a domain name is not necessary.
-
-## C# code example
-
-> [!NOTE]
-> If you are using .NET Core, you will want to use the [Microsoft.Data.SqlClient](/dotnet/api/microsoft.data.sqlclient) namespace. For more information, see the following [blog](https://devblogs.microsoft.com/dotnet/introducing-the-new-microsoftdatasqlclient/).
-
-This is an example of C# source code.
-
-```csharp
-
-using System;
-using Microsoft.Data.SqlClient;
-
-public class Program
-{
-    public static void Main(string[] args)
-    {
-        // Use your own server and database names.
-        // Connection string - the user ID is not provided and is requested interactively.
-        string ConnectionString = @"Server=<your server>.database.windows.net; Authentication=Active Directory Interactive; Database=<your database>";
-
-        using (SqlConnection conn = new SqlConnection(ConnectionString))
-        {
-            conn.Open();
-            Console.WriteLine("ConnectionString2 succeeded.");
-            using (var cmd = new SqlCommand("SELECT @@Version", conn))
-            {
-                Console.WriteLine("select @@version");
-                var result = cmd.ExecuteScalar();
-                Console.WriteLine(result.ToString());
-            }
-        }
-        Console.ReadKey();
-    }
-}
-
-```
-
-
-This is an example of the C# test output.
-
-```output
-ConnectionString2 succeeded.
-select @@version
-Microsoft SQL Azure (RTM) - 12.0.2000.8
- ...
-```
-
-## Next steps
-
-- [Azure Active Directory server principals](authentication-azure-ad-logins.md)
-- [Azure AD-only authentication with Azure SQL](authentication-azure-ad-only-authentication.md)
-- [Using multi-factor Azure Active Directory authentication](authentication-mfa-ssms-overview.md)
azure-sql Active Geo Replication Configure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-geo-replication-configure-portal.md
- Title: "Tutorial: Geo-replication & failover in portal"
-description: Learn how to configure geo-replication for an SQL database using the Azure portal or Azure CLI, and initiate failover.
-------- Previously updated : 08/20/2021-
-# Tutorial: Configure active geo-replication and failover (Azure SQL Database)
--
-This article shows you how to configure [active geo-replication for Azure SQL Database](active-geo-replication-overview.md#active-geo-replication-terminology-and-capabilities) using the [Azure portal](https://portal.azure.com) or Azure CLI and to initiate failover.
-
-For best practices using auto-failover groups, see [Auto-failover groups with Azure SQL Database](auto-failover-group-sql-db.md) and [Auto-failover groups with Azure SQL Managed Instance](../managed-instance/auto-failover-group-sql-mi.md).
---
-## Prerequisites
-
-# [Portal](#tab/portal)
-
-To configure active geo-replication by using the Azure portal, you need the following resource:
-
-* A database in Azure SQL Database: The primary database that you want to replicate to a different geographical region.
-
-> [!NOTE]
-> When using the Azure portal, you can only create a secondary database within the same subscription as the primary. If a secondary database is required to be in a different subscription, use the [Create Database REST API](/rest/api/sql/databases/createorupdate) or the [ALTER DATABASE Transact-SQL API](/sql/t-sql/statements/alter-database-transact-sql).
-
-# [Azure CLI](#tab/azure-cli)
-
-To configure active geo-replication, you need a database in Azure SQL Database. It's the primary database that you want to replicate to a different geographical region.
-
-Prepare your environment for the Azure CLI.
----
-## Add a secondary database
-
-The following steps create a new secondary database in a geo-replication partnership.
-
-To add a secondary database, you must be the subscription owner or co-owner.
-
-The secondary database has the same name as the primary database and has, by default, the same service tier and compute size. The secondary database can be a single database or a pooled database. For more information, see [DTU-based purchasing model](service-tiers-dtu.md) and [vCore-based purchasing model](service-tiers-vcore.md).
-After the secondary is created and seeded, data begins replicating from the primary database to the new secondary database.
-
-> [!NOTE]
-> If the partner database already exists (for example, as a result of terminating a previous geo-replication relationship), the command fails.
-
-# [Portal](#tab/portal)
-
-1. In the [Azure portal](https://portal.azure.com), browse to the database that you want to set up for geo-replication.
-2. On the SQL Database page, select your database, scroll to **Data management**, select **Replicas**, and then select **Create replica**.
-
- :::image type="content" source="./media/active-geo-replication-configure-portal/azure-cli-create-geo-replica.png" alt-text="Configure geo-replication":::
-
-3. Select or create the server for the secondary database, and configure the **Compute + storage** options if necessary. You can select any region for your secondary server, but we recommend the [paired region](../../availability-zones/cross-region-replication-azure.md).
-
- :::image type="content" source="./media/active-geo-replication-configure-portal/azure-portal-create-and-configure-replica.png" alt-text="Screenshot that shows the page for creating and configuring a geo-replica of the database.":::
-
- Optionally, you can add a secondary database to an elastic pool. To create the secondary database in a pool, select **Yes** next to **Want to use SQL elastic pool?** and select a pool on the target server. A pool must already exist on the target server. This workflow doesn't create a pool.
-
-4. Click **Review + create**, review the information, and then click **Create**.
-5. The secondary database is created and the deployment process begins.
-
- :::image type="content" source="./media/active-geo-replication-configure-portal/azure-portal-geo-replica-deployment.png" alt-text="Screenshot that shows the deployment status of the secondary database.":::
-
-6. When the deployment is complete, the secondary database displays its status.
-
- :::image type="content" source="./media/active-geo-replication-configure-portal/azure-portal-sql-database-secondary-status.png" alt-text="Screenshot that shows the secondary database status after deployment.":::
-
-7. Return to the primary database page, and then select **Replicas**. Your secondary database is listed under **Geo replicas**.
-
- :::image type="content" source="./media/active-geo-replication-configure-portal/azure-sql-db-geo-replica-list.png" alt-text="Screenshot that shows the SQL database primary and geo replicas.":::
-
-# [Azure CLI](#tab/azure-cli)
-
-Select the database you want to set up for geo-replication. You'll need the following information:
-- Your original Azure SQL database name.
-- The Azure SQL server name.
-- Your resource group name.
-- The name of the server to create the new replica in.
-
-> [!NOTE]
-> The secondary database must have the same service tier as the primary.
-
-You can select any region for your secondary server, but we recommend the [paired region](../../availability-zones/cross-region-replication-azure.md).
-
-Run the [az sql db replica create](/cli/azure/sql/db/replica#az-sql-db-replica-create) command.
-
-```azurecli
-az sql db replica create --resource-group ContosoHotel --server contosoeast --name guestlist --partner-server contosowest --family Gen5 --capacity 2 --secondary-type Geo
-```
-
-Optionally, you can add a secondary database to an elastic pool. To create the secondary database in a pool, use the `--elastic-pool` parameter. A pool must already exist on the target server. This workflow doesn't create a pool.
-
-The secondary database is created and the deployment process begins.
-
-When the deployment is complete, you can check the status of the secondary database by running the [az sql db replica list-links](/cli/azure/sql/db/replica#az-sql-db-replica-list-links) command:
-
-```azurecli
-az sql db replica list-links --name guestlist --resource-group ContosoHotel --server contosowest
-```
---
-## Initiate a failover
-
-The secondary database can be switched to become the primary.
-
-# [Portal](#tab/portal)
-
-1. In the [Azure portal](https://portal.azure.com), browse to the primary database in the geo-replication partnership.
-2. Scroll to **Data management**, and then select **Replicas**.
-3. In the **Geo replicas** list, select the database you want to become the new primary, select the ellipsis, and then select **Forced failover**.
-
- :::image type="content" source="./media/active-geo-replication-configure-portal/azure-portal-select-forced-failover.png" alt-text="Screenshot that shows selecting forced failover from the drop-down.":::
-4. Select **Yes** to begin the failover.
-
-# [Azure CLI](#tab/azure-cli)
-
-Run the [az sql db replica set-primary](/cli/azure/sql/db/replica#az-sql-db-replica-set-primary) command.
-
-```azurecli
-az sql db replica set-primary --name guestlist --resource-group ContosoHotel --server contosowest
-```
---
-The command immediately switches the secondary database into the primary role. This process should normally complete within 30 seconds.
-
-There's a short period during which both databases are unavailable, on the order of 0 to 25 seconds, while the roles are switched. If the primary database has multiple secondary databases, the command automatically reconfigures the other secondaries to connect to the new primary. The entire operation should take less than a minute to complete under normal circumstances.
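-
-If you also manage geo-replication with T-SQL, you can confirm the new roles by querying the geo-replication link status from the geo-replicated user database on either server. This is a minimal sketch; the column list follows the [sys.dm_geo_replication_link_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-geo-replication-link-status-azure-sql-database) view:
-
-```sql
--- Run in the geo-replicated user database (not master) on either partner server.
--- role_desc reports whether this copy currently holds the PRIMARY or SECONDARY role.
-SELECT partner_server, partner_database, role_desc, replication_state_desc
-FROM sys.dm_geo_replication_link_status;
-```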
-
-> [!NOTE]
-> This command is designed for quick recovery of the database in case of an outage. It triggers a failover without data synchronization (a forced failover). If the primary is online and committing transactions when the command is issued, some data loss may occur.
-
-## Remove secondary database
-
-This operation permanently stops the replication to the secondary database, and changes the role of the secondary to a regular read-write database. If the connectivity to the secondary database is broken, the command succeeds but the secondary doesn't become read-write until after connectivity is restored.
-
-# [Portal](#tab/portal)
-
-1. In the [Azure portal](https://portal.azure.com), browse to the primary database in the geo-replication partnership.
-2. Select **Replicas**.
-3. In the **Geo replicas** list, select the database you want to remove from the geo-replication partnership, select the ellipsis, and then select **Stop replication**.
-
- :::image type="content" source="./media/active-geo-replication-configure-portal/azure-portal-select-stop-replication.png" alt-text="Screenshot that shows selecting stop replication from the drop-down.":::
-4. A confirmation window opens. Click **Yes** to remove the database from the geo-replication partnership and set it to a read-write database that is no longer part of any replication.
-
-# [Azure CLI](#tab/azure-cli)
-
-Run the [az sql db replica delete-link](/cli/azure/sql/db/replica#az-sql-db-replica-delete-link) command.
-
-```azurecli
-az sql db replica delete-link --name guestlist --resource-group ContosoHotel --server contosoeast --partner-server contosowest
-```
-
-Confirm that you want to perform the operation.
---
-## Next steps
-
-* To learn more about active geo-replication, see [active geo-replication](active-geo-replication-overview.md).
-* To learn about auto-failover groups, see [Auto-failover groups](auto-failover-group-overview.md)
-* For a business continuity overview and scenarios, see [Business continuity overview](business-continuity-high-availability-disaster-recover-hadr-overview.md).
azure-sql Active Geo Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-geo-replication-overview.md
- Title: Active geo-replication
-description: Use active geo-replication to create readable secondary databases of individual databases in Azure SQL Database in the same or different regions.
------- Previously updated : 4/14/2022--
-# Active geo-replication
-
-Active geo-replication is a feature that lets you create a continuously synchronized readable secondary database for a primary database. The readable secondary database may be in the same Azure region as the primary, or, more commonly, in a different region. These readable secondary databases are also known as geo-secondaries or geo-replicas.
-
-Active geo-replication is designed as a business continuity solution that lets you perform quick disaster recovery of individual databases in case of a regional disaster or a large-scale outage. Once geo-replication is set up, you can initiate a geo-failover to a geo-secondary in a different Azure region. The geo-failover is initiated programmatically by the application or manually by the user.
-
-> [!NOTE]
-> Active geo-replication for Azure SQL Hyperscale is [now in public preview](service-tier-hyperscale-replicas.md#geo-replica-in-preview). Current limitations include:
-> - Primary can have only one geo-secondary replica.
-> - Restore or database copy from geo-secondary is not supported.
-> - Can't use geo-secondary as a source for geo-replication to another database.
--
-> [!NOTE]
-> Active geo-replication is not supported by Azure SQL Managed Instance. For geographic failover of instances of SQL Managed Instance, use [Auto-failover groups](auto-failover-group-overview.md).
-
-> [!NOTE]
-> To migrate SQL databases from Azure Germany using active geo-replication, see [Migrate SQL Database using active geo-replication](../../germany/germany-migration-databases.md#migrate-sql-database-using-active-geo-replication).
-
-If your application requires a stable connection endpoint and automatic geo-failover support in addition to geo-replication, use [Auto-failover groups](auto-failover-group-overview.md).
-
-The following diagram illustrates a typical configuration of a geo-redundant cloud application using active geo-replication.
-
-![active geo-replication](./media/active-geo-replication-overview/geo-replication.png)
-
-If for any reason your primary database fails, you can initiate a geo-failover to any of your secondary databases. When a secondary is promoted to the primary role, all other secondaries are automatically linked to the new primary.
-
-You can manage geo-replication and initiate a geo-failover using the following:
-
-- The [Azure portal](active-geo-replication-configure-portal.md)
-- [PowerShell: Single database](scripts/setup-geodr-and-failover-database-powershell.md)
-- [PowerShell: Elastic pool](scripts/setup-geodr-and-failover-elastic-pool-powershell.md)
-- [Transact-SQL: Single database or elastic pool](/sql/t-sql/statements/alter-database-azure-sql-database)
-- [REST API: Single database](/rest/api/sql/replicationlinks)
-
-Active geo-replication leverages the [Always On availability group](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server) technology to asynchronously replicate the transaction log generated on the primary replica to all geo-replicas. While a secondary database might be slightly behind the primary at any given point in time, the data on a secondary is guaranteed to be transactionally consistent. In other words, changes made by uncommitted transactions are not visible.
-
-> [!NOTE]
-> Active geo-replication replicates changes by streaming database transaction log from the primary replica to secondary replicas. It is unrelated to [transactional replication](/sql/relational-databases/replication/transactional/transactional-replication), which replicates changes by executing DML (INSERT, UPDATE, DELETE) commands on subscribers.
-
-Regional redundancy provided by geo-replication enables applications to quickly recover from a permanent loss of an entire Azure region, or parts of a region, caused by natural disasters, catastrophic human errors, or malicious acts. Geo-replication RPO can be found in [Overview of Business Continuity](business-continuity-high-availability-disaster-recover-hadr-overview.md).
-
-The following figure shows an example of active geo-replication configured with a primary in the North Central US region and a geo-secondary in the South Central US region.
-
-![geo-replication relationship](./media/active-geo-replication-overview/geo-replication-relationship.png)
-
-In addition to disaster recovery, active geo-replication can be used in the following scenarios:
-
-- **Database migration**: You can use active geo-replication to migrate a database from one server to another with minimal downtime.
-- **Application upgrades**: You can create an extra secondary as a failback copy during application upgrades.
-
-To achieve full business continuity, adding database regional redundancy is only a part of the solution. Recovering an application (service) end-to-end after a catastrophic failure requires recovery of all components that constitute the service and any dependent services. Examples of these components include the client software (for example, a browser with a custom JavaScript), web front ends, storage, and DNS. It is critical that all components are resilient to the same failures and become available within the recovery time objective (RTO) of your application. Therefore, you need to identify all dependent services and understand the guarantees and capabilities they provide. Then, you must take adequate steps to ensure that your service functions during the failover of the services on which it depends. For more information about designing solutions for disaster recovery, see [Designing Cloud Solutions for Disaster Recovery Using active geo-replication](designing-cloud-solutions-for-disaster-recovery.md).
-
-## Active geo-replication terminology and capabilities
-- **Automatic asynchronous replication**
-
- You can only create a geo-secondary for an existing database. The geo-secondary can be created on any logical server, other than the server with the primary database. Once created, the geo-secondary replica is populated with the data of the primary database. This process is known as seeding. After a geo-secondary has been created and seeded, updates to the primary database are automatically and asynchronously replicated to the geo-secondary replica. Asynchronous replication means that transactions are committed on the primary database before they are replicated.
-- **Readable geo-secondary replicas**
-
- An application can access a geo-secondary replica to execute read-only queries using the same or different security principals used for accessing the primary database. For more information, see [Use read-only replicas to offload read-only query workloads](read-scale-out.md).
-
- > [!IMPORTANT]
- > You can use geo-replication to create secondary replicas in the same region as the primary. You can use these secondaries to satisfy read scale-out scenarios in the same region. However, a secondary replica in the same region does not provide additional resilience to catastrophic failures or large scale outages, and therefore is not a suitable failover target for disaster recovery purposes. It also does not guarantee availability zone isolation. Use Business Critical or Premium service tiers [zone redundant configuration](high-availability-sla.md#premium-and-business-critical-service-tier-zone-redundant-availability) or General Purpose service tier [zone redundant configuration](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability) to achieve availability zone isolation.
- >
-- **Planned geo-failover**
-
- Planned geo-failover switches the roles of primary and geo-secondary databases after completing full data synchronization. A planned failover does not result in data loss. The duration of planned geo-failover depends on the size of transaction log on the primary that needs to be synchronized to the geo-secondary. Planned geo-failover is designed for the following scenarios:
-
- - Perform DR drills in production when the data loss is not acceptable;
- - Relocate the database to a different region;
- - Return the database to the primary region after the outage has been mitigated (known as failback).
-- **Unplanned geo-failover**
-
- Unplanned, or forced, geo-failover immediately switches the geo-secondary to the primary role without any synchronization with the primary. Any transactions committed on the primary but not yet replicated to the secondary are lost. This operation is designed as a recovery method during outages when the primary is not accessible, but database availability must be quickly restored. When the original primary is back online, it will be automatically re-connected, reseeded using the current primary data, and become a new geo-secondary.
-
- > [!IMPORTANT]
- > After either planned or unplanned geo-failover, the connection endpoint for the new primary changes because the new primary is now located on a different logical server.
-- **Multiple readable geo-secondaries**
-
- Up to four geo-secondaries can be created for a primary. If there is only one secondary, and it fails, the application is exposed to higher risk until a new secondary is created. If multiple secondaries exist, the application remains protected even if one of the secondaries fails. Additional secondaries can also be used to scale out read-only workloads.
-
- > [!TIP]
- > If you are using active geo-replication to build a globally distributed application and need to provide read-only access to data in more than four regions, you can create a secondary of a secondary (a process known as chaining) to create additional geo-replicas. Replication lag on chained geo-replicas may be higher than on geo-replicas connected directly to the primary. Setting up chained geo-replication topologies is only supported programmatically, and not from Azure portal.
-- **Geo-replication of databases in an elastic pool**
-
- Each geo-secondary can be a single database or a database in an elastic pool. The elastic pool choice for each geo-secondary database is separate and does not depend on the configuration of any other replica in the topology (either primary or secondary). Each elastic pool is contained within a single logical server. Because database names on a logical server must be unique, multiple geo-secondaries of the same primary can never share an elastic pool.
-- **User-controlled geo-failover and failback**
-
- A geo-secondary that has finished initial seeding can be explicitly switched to the primary role (failed over) at any time by the application or the user. During an outage where the primary is inaccessible, only an unplanned geo-failover can be used. That immediately promotes a geo-secondary to be the new primary. When the outage is mitigated, the system automatically makes the recovered primary a geo-secondary, and brings it up-to-date with the new primary. Due to the asynchronous nature of geo-replication, recent transactions may be lost during unplanned geo-failovers if the primary fails before these transactions are replicated to a geo-secondary. When a primary with multiple geo-secondaries fails over, the system automatically reconfigures replication relationships and links the remaining geo-secondaries to the newly promoted primary, without requiring any user intervention. After the outage that caused the geo-failover is mitigated, it may be desirable to return the primary to its original region. To do that, invoke a planned geo-failover.
-
-## <a name="preparing-secondary-database-for-failover"></a> Prepare for geo-failover
-
-To ensure that your application can immediately access the new primary after geo-failover, validate that authentication and network access for your secondary server are properly configured. For details, see [SQL Database security after disaster recovery](active-geo-replication-security-configure.md). Also validate that the backup retention policy on the secondary database matches that of the primary. This setting is not part of the database and is not replicated from the primary. By default, the geo-secondary is configured with a point-in-time restore (PITR) retention period of seven days. For details, see [SQL Database automated backups](automated-backups-overview.md).
-
-> [!IMPORTANT]
-> If your database is a member of a failover group, you cannot initiate its failover using the geo-replication failover command. Use the failover command for the group. If you need to failover an individual database, you must remove it from the failover group first. See [Auto-failover groups](auto-failover-group-overview.md) for details.
-
-## <a name="configuring-secondary-database"></a> Configure geo-secondary
-
-Both primary and geo-secondary are required to have the same service tier. It is also strongly recommended that the geo-secondary is configured with the same backup storage redundancy and compute size (DTUs or vCores) as the primary. If the primary is experiencing a heavy write workload, a geo-secondary with a lower compute size may not be able to keep up. That will cause replication lag on the geo-secondary, and may eventually cause unavailability of the geo-secondary. To mitigate these risks, active geo-replication will reduce (throttle) the primary's transaction log rate if necessary to allow its secondaries to catch up.
-
-Another consequence of an imbalanced geo-secondary configuration is that after failover, application performance may suffer due to insufficient compute capacity of the new primary. In that case, it will be necessary to scale up the database to have sufficient resources, which may take significant time, and will require a [high availability](high-availability-sla.md) failover at the end of the scale up process, which may interrupt application workloads.
-
-If you decide to create the geo-secondary with a lower compute size, you should monitor log IO rate on the primary over time. This lets you estimate the minimal compute size of the geo-secondary required to sustain the replication load. For example, if your primary database is P6 (1000 DTU) and its log IO is sustained at 50%, the geo-secondary needs to be at least P4 (500 DTU). To retrieve historical log IO data, use the [sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) view. To retrieve recent log IO data with higher granularity that better reflects short-term spikes, use the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view.
-
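-For example, the following query is a minimal sketch, assuming you're connected to the primary database; it returns the log write utilization that [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) records in roughly 15-second intervals for about the last hour. Sustained values close to the limit of a smaller candidate SKU indicate the geo-secondary would fall behind:
-
-```sql
--- Run in the primary database: recent log write utilization (percent of the SLO limit).
-SELECT end_time, avg_log_write_percent
-FROM sys.dm_db_resource_stats
-ORDER BY end_time DESC;
-```
-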
-> [!TIP]
-> Transaction log IO throttling on the primary due to lower compute size on a geo-secondary is reported using the HADR_THROTTLE_LOG_RATE_MISMATCHED_SLO wait type, visible in the [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) and [sys.dm_os_wait_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-wait-stats-transact-sql) database views.
->
-> Transaction log IO on the primary may be throttled for reasons unrelated to lower compute size on a geo-secondary. This kind of throttling may occur even if the geo-secondary has the same or higher compute size than the primary. For details, including wait types for different kinds of log IO throttling, see [Transaction log rate governance](resource-limits-logical-server.md#transaction-log-rate-governance).
-
-By default, the backup storage redundancy of the geo-secondary is the same as that of the primary database. You can choose to configure a geo-secondary with a different backup storage redundancy. Backups are always taken on the primary database. If the secondary is configured with a different backup storage redundancy, then after a geo-failover, when the geo-secondary is promoted to the primary, new backups will be stored and billed according to the type of storage (RA-GRS, ZRS, LRS) selected on the new primary (previous secondary).
-
-## Cross-subscription geo-replication
-
-To create a geo-secondary in a subscription different from the subscription of the primary (whether under the same Azure Active Directory tenant or not), follow the steps in this section.
-
-1. Add the IP address of the client machine executing the T-SQL commands below to the server firewalls of **both** the primary and secondary servers. You can confirm that IP address by executing the following query while connected to the primary server from the same client machine.
-
- ```sql
- select client_net_address from sys.dm_exec_connections where session_id = @@SPID;
- ```
-
- For more information, see [Configure firewall](firewall-configure.md).
-
-2. In the master database on the **primary** server, create a SQL authentication login dedicated to active geo-replication setup. Adjust login name and password as needed.
-
- ```sql
- create login geodrsetup with password = 'ComplexPassword01';
- ```
-
-3. In the same database, create a user for the login, and add it to the `dbmanager` role:
-
- ```sql
- create user geodrsetup for login geodrsetup;
- alter role dbmanager add member geodrsetup;
- ```
-
-4. Take note of the SID value of the new login. Obtain the SID value using the following query.
-
- ```sql
- select sid from sys.sql_logins where name = 'geodrsetup';
- ```
-
-5. Connect to the **primary** database (not the master database), and create a user for the same login.
-
- ```sql
- create user geodrsetup for login geodrsetup;
- ```
-
-6. In the same database, add the user to the `db_owner` role.
-
- ```sql
- alter role db_owner add member geodrsetup;
- ```
-
-7. In the master database on the **secondary** server, create the same login as on the primary server, using the same name, password, and SID. Replace the hexadecimal SID value in the sample command below with the one obtained in Step 4.
-
- ```sql
- create login geodrsetup with password = 'ComplexPassword01', sid=0x010600000000006400000000000000001C98F52B95D9C84BBBA8578FACE37C3E;
- ```
-
-8. In the same database, create a user for the login, and add it to the `dbmanager` role.
-
- ```sql
- create user geodrsetup for login geodrsetup;
- alter role dbmanager add member geodrsetup;
- ```
-
-9. Connect to the master database on the **primary** server using the new `geodrsetup` login, and initiate geo-secondary creation on the secondary server. Adjust database name and secondary server name as needed. Once the command is executed, you can monitor geo-secondary creation by querying the [sys.dm_geo_replication_link_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-geo-replication-link-status-azure-sql-database) view in the **primary** database, and the [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) view in the master database on the **primary** server. The time needed to create a geo-secondary depends on the primary database size.
-
- ```sql
- alter database [dbrep] add secondary on server [servername];
- ```
-
-10. After the geo-secondary is successfully created, the users, logins, and firewall rules created by this procedure can be removed.
-
-> [!NOTE]
-> Cross-subscription geo-replication operations including setup and geo-failover are only supported using T-SQL commands.
->
-> Adding a geo-secondary using T-SQL is not supported when connecting to the primary server over a [private endpoint](private-endpoint-overview.md). If a private endpoint is configured but public network access is allowed, adding a geo-secondary is supported when connected to the primary server from a public IP address. Once a geo-secondary is added, public access can be [denied](connectivity-settings.md#deny-public-network-access).
->
-> Creating a geo-secondary on a logical server in a different Azure tenant is not supported when [Azure Active Directory only](https://techcommunity.microsoft.com/t5/azure-sql/azure-active-directory-only-authentication-for-azure-sql/ba-p/2417673) authentication for Azure SQL is active (enabled) on either primary or secondary logical server.
-
-## <a name="keeping-credentials-and-firewall-rules-in-sync"></a> Keep credentials and firewall rules in sync
-
-When using public network access for connecting to the database, we recommend using [database-level IP firewall rules](firewall-configure.md) for geo-replicated databases. These rules are replicated with the database, which ensures that all geo-secondaries have the same IP firewall rules as the primary. This approach eliminates the need for customers to manually configure and maintain firewall rules on the servers hosting the primary and secondary databases. Similarly, using [contained database users](logins-create-manage.md) for data access ensures that both primary and secondary databases always have the same authentication credentials. This way, after a geo-failover, there are no disruptions due to authentication credential mismatches. If you are using logins and users (rather than contained users), you must take extra steps to ensure that the same logins exist for your secondary database. For configuration details, see [How to configure logins and users](active-geo-replication-security-configure.md).
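-
-As an illustration, a database-level firewall rule is created inside the user database itself, so it travels with the database to every geo-secondary. This is a minimal sketch; the rule name and IP range are placeholders:
-
-```sql
--- Run in the geo-replicated user database (not master).
--- Database-level rules replicate with the database to all geo-secondaries.
-EXECUTE sp_set_database_firewall_rule
-    @name = N'AppClients',
-    @start_ip_address = '203.0.113.1',
-    @end_ip_address = '203.0.113.10';
-```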
-
-## <a name="upgrading-or-downgrading-primary-database"></a> Scale primary database
-
-You can scale up or scale down the primary database to a different compute size (within the same service tier) without disconnecting any geo-secondaries. When scaling up, we recommend that you scale up the geo-secondary first, and then scale up the primary. When scaling down, reverse the order: scale down the primary first, and then scale down the secondary.
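-
-With T-SQL, the scale-up order looks like the following sketch; the database name and the 'P6' service objective are placeholders, and the same change can be made in the Azure portal, PowerShell, or the Azure CLI:
-
-```sql
--- 1. Scale up the geo-secondary first (run in master on the secondary server).
-ALTER DATABASE [mydb] MODIFY (SERVICE_OBJECTIVE = 'P6');
-
--- 2. Then scale up the primary (run in master on the primary server).
-ALTER DATABASE [mydb] MODIFY (SERVICE_OBJECTIVE = 'P6');
-```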
-
-> [!NOTE]
-> If you created a geo-secondary as part of failover group configuration, it is not recommended to scale it down. This is to ensure your data tier has sufficient capacity to process your regular workload after a geo-failover.
-
-> [!IMPORTANT]
-> The primary database in a failover group can't scale to a higher service tier (edition) unless the secondary database is first scaled to the higher tier. For example, if you want to scale up the primary from General Purpose to Business Critical, you have to first scale the geo-secondary to Business Critical. If you try to scale the primary or geo-secondary in a way that violates this rule, you will receive the following error:
->
-> `The source database 'Primaryserver.DBName' cannot have higher edition than the target database 'Secondaryserver.DBName'. Upgrade the edition on the target before upgrading the source.`
->
-
-## <a name="preventing-the-loss-of-critical-data"></a> Prevent loss of critical data
-
-Due to the high latency of wide area networks, geo-replication uses an asynchronous replication mechanism. With asynchronous replication, some data loss is unavoidable if the primary fails. To protect critical transactions from data loss, an application developer can call the [sp_wait_for_database_copy_sync](/sql/relational-databases/system-stored-procedures/active-geo-replication-sp-wait-for-database-copy-sync) stored procedure immediately after committing the transaction. Calling `sp_wait_for_database_copy_sync` blocks the calling thread until the last committed transaction has been transmitted and hardened in the transaction log of the secondary database. However, it does not wait for the transmitted transactions to be replayed (redone) on the secondary. `sp_wait_for_database_copy_sync` is scoped to a specific geo-replication link. Any user with the connection rights to the primary database can call this procedure.
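-
-The following sketch shows the calling pattern. The table name is hypothetical, and the parameters identifying the geo-replication link (modeled here as the secondary server and database names) are an assumption; verify the exact signature on the procedure's reference page linked above:
-
-```sql
-BEGIN TRANSACTION;
-UPDATE dbo.Orders SET Status = 'Confirmed' WHERE OrderId = 42; -- hypothetical critical change
-COMMIT TRANSACTION;
-
--- Block until the committed transaction is hardened in the geo-secondary's transaction log.
-EXEC sys.sp_wait_for_database_copy_sync
-    @target_server = N'<secondary-server>',
-    @target_database = N'<database-name>';
-```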
-
-> [!NOTE]
-> `sp_wait_for_database_copy_sync` prevents data loss after geo-failover for specific transactions, but does not guarantee full synchronization for read access. The delay caused by a `sp_wait_for_database_copy_sync` procedure call can be significant and depends on the size of the not yet transmitted transaction log on the primary at the time of the call.
-
-## <a name="monitoring-geo-replication-lag"></a> Monitor geo-replication lag
-
-To monitor lag with respect to RPO, use *replication_lag_sec* column of [sys.dm_geo_replication_link_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-geo-replication-link-status-azure-sql-database) on the primary database. It shows lag in seconds between the transactions committed on the primary, and hardened to the transaction log on the secondary. For example, if the lag is one second, it means that if the primary is impacted by an outage at this moment and a geo-failover is initiated, transactions committed in the last second will be lost.
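-
-For example, the following minimal sketch, run in the primary database, surfaces the current lag for each geo-replication link:
-
-```sql
--- Seconds of committed-but-not-yet-hardened transactions per geo-secondary,
--- plus the time of the most recent replication.
-SELECT partner_server, partner_database, replication_lag_sec, last_replication
-FROM sys.dm_geo_replication_link_status;
-```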
-
-To measure lag with respect to changes on the primary database that have been hardened on the geo-secondary, compare *last_commit* time on the geo-secondary with the same value on the primary.
-
-> [!TIP]
-> If *replication_lag_sec* on the primary is NULL, it means that the primary does not currently know how far behind a geo-secondary is. This typically happens after process restarts and should be a transient condition. Consider sending an alert if *replication_lag_sec* returns NULL for an extended period of time. It may indicate that the geo-secondary cannot communicate with the primary due to a connectivity failure.
->
-> There are also conditions that could cause the difference between *last_commit* time on the geo-secondary and on the primary to become large. For example, if a commit is made on the primary after a long period of no changes, the difference will jump up to a large value before quickly returning to zero. Consider sending an alert if the difference between these two values remains large for a long time.
-
-## <a name="programmatically-managing-active-geo-replication"></a> Programmatically manage active geo-replication
-
-As discussed previously, active geo-replication can also be managed programmatically using T-SQL, Azure PowerShell, and REST API. The following tables describe the set of commands available. Active geo-replication includes a set of Azure Resource Manager APIs for management, including the [Azure SQL Database REST API](/rest/api/sql/) and [Azure PowerShell cmdlets](/powershell/azure/). These APIs support Azure role-based access control (Azure RBAC). For more information on how to implement access roles, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-
-### <a name="t-sql-manage-failover-of-single-and-pooled-databases"></a> T-SQL: Manage geo-failover of single and pooled databases
-
-> [!IMPORTANT]
-> These T-SQL commands only apply to active geo-replication and do not apply to failover groups. As such, they also do not apply to SQL Managed Instance, which only supports failover groups.
-
-| Command | Description |
-| | |
-| [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql?preserve-view=true&view=azuresqldb-current) |Use the **ADD SECONDARY ON SERVER** argument to create a secondary database for an existing database and start data replication. |
-| [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql?preserve-view=true&view=azuresqldb-current) |Use **FAILOVER** or **FORCE_FAILOVER_ALLOW_DATA_LOSS** to switch a secondary database to the primary role and initiate failover. |
-| [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql?preserve-view=true&view=azuresqldb-current) |Use **REMOVE SECONDARY ON SERVER** to terminate data replication between a database and the specified secondary database. |
-| [sys.geo_replication_links](/sql/relational-databases/system-dynamic-management-views/sys-geo-replication-links-azure-sql-database) |Returns information about all existing replication links for each database on a server. |
-| [sys.dm_geo_replication_link_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-geo-replication-link-status-azure-sql-database) |Gets the last replication time, last replication lag, and other information about the replication link for a given database. |
-| [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) |Shows the status for all database operations including changes to replication links. |
-| [sys.sp_wait_for_database_copy_sync](/sql/relational-databases/system-stored-procedures/active-geo-replication-sp-wait-for-database-copy-sync) |Causes the application to wait until all committed transactions are hardened to the transaction log of a geo-secondary. |
--
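-As a brief illustration of the commands in this table, the following sketch creates a geo-secondary and later promotes it; the database and server names are placeholders, and each statement runs in the master database of the server indicated in its comment:
-
-```sql
--- Run in master on the primary server: create a geo-secondary and start seeding.
-ALTER DATABASE [mydb] ADD SECONDARY ON SERVER [secondaryserver];
-
--- Run in master on the secondary server: planned failover after full synchronization.
-ALTER DATABASE [mydb] FAILOVER;
-
--- Or, during an outage: forced failover with possible data loss.
--- ALTER DATABASE [mydb] FORCE_FAILOVER_ALLOW_DATA_LOSS;
-```
-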
-### <a name="powershell-manage-failover-of-single-and-pooled-databases"></a> PowerShell: Manage geo-failover of single and pooled databases
-
-> [!IMPORTANT]
-> The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is for the Az.Sql module. For these cmdlets, see [AzureRM.Sql](/powershell/module/AzureRM.Sql/). The arguments for the commands in the Az module and in the AzureRm modules are substantially identical.
-
-| Cmdlet | Description |
-| | |
-| [Get-AzSqlDatabase](/powershell/module/az.sql/get-azsqldatabase) |Gets one or more databases. |
-| [New-AzSqlDatabaseSecondary](/powershell/module/az.sql/new-azsqldatabasesecondary) |Creates a secondary database for an existing database and starts data replication. |
-| [Set-AzSqlDatabaseSecondary](/powershell/module/az.sql/set-azsqldatabasesecondary) |Switches a secondary database to be primary to initiate failover. |
-| [Remove-AzSqlDatabaseSecondary](/powershell/module/az.sql/remove-azsqldatabasesecondary) |Terminates data replication between a SQL Database and the specified secondary database. |
-| [Get-AzSqlDatabaseReplicationLink](/powershell/module/az.sql/get-azsqldatabasereplicationlink) |Gets the geo-replication links for a database. |
-
-> [!TIP]
-> For sample scripts, see [Configure and failover a single database using active geo-replication](scripts/setup-geodr-and-failover-database-powershell.md) and [Configure and failover a pooled database using active geo-replication](scripts/setup-geodr-and-failover-elastic-pool-powershell.md).
-
-### <a name="rest-api-manage-failover-of-single-and-pooled-databases"></a> REST API: Manage geo-failover of single and pooled databases
-
-| API | Description |
-| | |
-| [Create or Update Database (createMode=Restore)](/rest/api/sql/databases/createorupdate) |Creates, updates, or restores a primary or a secondary database. |
-| [Get Create or Update Database Status](/rest/api/sql/databases/createorupdate) |Returns the status during a create operation. |
-| [Set Secondary Database as Primary (Planned Failover)](/rest/api/sql/replicationlinks/failover) |Sets which secondary database is primary by failing over from the current primary database. **This option is not supported for SQL Managed Instance.**|
-| [Set Secondary Database as Primary (Unplanned Failover)](/rest/api/sql/replicationlinks/failoverallowdataloss) |Sets which secondary database is primary by failing over from the current primary database. This operation might result in data loss. **This option is not supported for SQL Managed Instance.**|
-| [Get Replication Link](/rest/api/sql/replicationlinks/get) |Gets a specific replication link for a given database in a geo-replication partnership. It retrieves the information visible in the sys.geo_replication_links catalog view. **This option is not supported for SQL Managed Instance.**|
-| [Replication Links - List By Database](/rest/api/sql/replicationlinks/listbydatabase) | Gets all replication links for a given database in a geo-replication partnership. It retrieves the information visible in the sys.geo_replication_links catalog view. |
-| [Delete Replication Link](/rest/api/sql/replicationlinks/delete) | Deletes a database replication link. Cannot be done during failover. |
--
-## Next steps
--- For sample scripts, see:
- - [Configure and failover a single database using active geo-replication](scripts/setup-geodr-and-failover-database-powershell.md).
- - [Configure and failover a pooled database using active geo-replication](scripts/setup-geodr-and-failover-elastic-pool-powershell.md).
-- SQL Database also supports auto-failover groups. For more information, see [Auto-failover groups](auto-failover-group-overview.md).
-- For a business continuity overview and scenarios, see [Business continuity overview](business-continuity-high-availability-disaster-recover-hadr-overview.md).
-- To learn about Azure SQL Database automated backups, see [SQL Database automated backups](automated-backups-overview.md).
-- To learn about using automated backups for recovery, see [Restore a database from the service-initiated backups](recovery-using-backups.md).
-- To learn about authentication requirements for a new primary server and database, see [SQL Database security after disaster recovery](active-geo-replication-security-configure.md).
azure-sql Active Geo Replication Security Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-geo-replication-security-configure.md
- Title: Configure security for disaster recovery
-description: Learn the security considerations for configuring and managing security after a database restore or a failover to a secondary server.
-------- Previously updated : 12/18/2018-
-# Configure and manage Azure SQL Database security for geo-restore or failover
-
-This article describes the authentication requirements to configure and control [active geo-replication](active-geo-replication-overview.md) and [auto-failover groups](auto-failover-group-overview.md). It also provides the steps required to set up user access to the secondary database. Finally, it describes how to enable access to the recovered database after using [geo-restore](recovery-using-backups.md#geo-restore). For more information on recovery options, see [Business Continuity Overview](business-continuity-high-availability-disaster-recover-hadr-overview.md).
-
-## Disaster recovery with contained users
-
-Unlike traditional users, which must be mapped to logins in the master database, a contained user is managed completely by the database itself. This has two benefits. First, in the disaster recovery scenario, the users can continue to connect to the new primary database or to the database recovered using geo-restore without any additional configuration, because the database itself manages the users. Second, there are potential scalability and performance benefits, because contained users authenticate at the database level rather than through a login in the master database. For more information, see [Contained Database Users - Making Your Database Portable](/sql/relational-databases/security/contained-database-users-making-your-database-portable).
-
-The main trade-off is that managing the disaster recovery process at scale is more challenging. When multiple databases use the same login, maintaining the corresponding credentials separately in each database may negate the benefits of contained users. For example, a password rotation policy requires that the change be made consistently in each database, rather than changing the password once for the login in the master database. For this reason, if you have multiple databases that use the same user name and password, contained users are not recommended.
-
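-For example, the following sketch creates a contained user that travels with the database; `app_user` is a hypothetical name used here for illustration. Because the user authenticates at the database level, it can sign in to the new primary or the geo-restored database immediately, with no matching login required on the target server:
-
-```sql
--- Run in the user database (not in master). app_user is a placeholder name.
-CREATE USER [app_user] WITH PASSWORD = '<strong password>';
-ALTER ROLE [db_datareader] ADD MEMBER [app_user];
-```
-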
-## How to configure logins and users
-
-If you are using logins and users (rather than contained users), you must take extra steps to ensure that the same logins exist in the master database of the target server. The following sections outline the steps involved and additional considerations.
-
-> [!NOTE]
-> It is also possible to use Azure Active Directory (Azure AD) logins to manage your databases. For more information, see [Azure SQL logins and users](./logins-create-manage.md).
-
-### Set up user access to a secondary or recovered database
-
-For the secondary database to be usable as a read-only secondary, and to ensure proper access to the new primary database or the database recovered using geo-restore, the master database of the target server must have the appropriate security configuration in place before the recovery.
-
-The specific permissions for each step are described later in this topic.
-
-Preparing user access to a geo-replication secondary should be performed as part of configuring geo-replication. Preparing user access to geo-restored databases should be performed at any time when the original server is online (for example, as part of a DR drill).
-
-> [!NOTE]
-> If you fail over or geo-restore to a server that does not have properly configured logins, access to it will be limited to the server admin account.
-
-Setting up logins on the target server involves three steps outlined below:
-
-#### 1. Determine logins with access to the primary database
-
-The first step of the process is to determine which logins must be duplicated on the target server. This is accomplished with a pair of SELECT statements, one in the logical master database on the source server and one in the primary database itself.
-
-Only the server admin or a member of the **LoginManager** server role can determine the logins on the source server with the following SELECT statement.
-
-```sql
-SELECT [name], [sid]
-FROM [sys].[sql_logins]
-WHERE [type_desc] = 'SQL_Login'
-```
-
-Only a member of the **db_owner** database role, the **dbo** user, or the server admin can determine all of the database user principals in the primary database.
-
-```sql
-SELECT [name], [sid]
-FROM [sys].[database_principals]
-WHERE [type_desc] = 'SQL_USER'
-```
-
-#### 2. Find the SID for the logins identified in step 1
-
-By comparing the output of the queries from the previous section and matching the SIDs, you can map each server login to its database user. Logins that have a database user with a matching SID have access to that database as that database user principal.
-
-The following query can be used to see all of the user principals and their SIDs in a database. Only a member of the **db_owner** database role or the server admin can run this query.
-
-```sql
-SELECT [name], [sid]
-FROM [sys].[database_principals]
-WHERE [type_desc] = 'SQL_USER'
-```
-
-> [!NOTE]
-> The **INFORMATION_SCHEMA** and **sys** users have *NULL* SIDs, and the **guest** SID is **0x00**. The **dbo** SID may start with *0x01060000000001648000000000048454* if the database creator was the server admin instead of a member of **DbManager**.
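-
-Where cross-database queries are available (for example, on SQL Server or Azure SQL Managed Instance), you can perform the comparison in a single query instead of matching the two result sets by hand. The following is a sketch that assumes a user database named `MyDB`; on Azure SQL Database, run the two queries separately and compare the SIDs manually.
-
-```sql
--- Run on the source server; MyDB is a placeholder database name.
-SELECT l.[name] AS login_name, u.[name] AS user_name, l.[sid]
-FROM [master].[sys].[sql_logins] AS l
-INNER JOIN [MyDB].[sys].[database_principals] AS u
-    ON l.[sid] = u.[sid]
-WHERE u.[type_desc] = 'SQL_USER';
-```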
-
-#### 3. Create the logins on the target server
-
-The last step is to go to the target server, or servers, and generate the logins with the appropriate SIDs. The basic syntax is as follows.
-
-```sql
-CREATE LOGIN [<login name>]
-WITH PASSWORD = '<login password>',
-SID = 0x1234 /*replace 0x1234 with the desired login SID*/
-```
-
-> [!NOTE]
-> If you want to grant user access to the secondary, but not to the primary, you can do that by disabling the login on the primary server by using the following syntax.
->
-> ```sql
-> ALTER LOGIN [<login name>] DISABLE
-> ```
->
-> DISABLE doesn't change the password, so you can always re-enable the login if needed.
-
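-To re-enable the login later:
-
-```sql
-ALTER LOGIN [<login name>] ENABLE
-```
-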
-## Next steps
-
-* For more information on managing database access and logins, see [SQL Database security: Manage database access and login security](logins-create-manage.md).
-* For more information on contained database users, see [Contained Database Users - Making Your Database Portable](/sql/relational-databases/security/contained-database-users-making-your-database-portable).
-* To learn about active geo-replication, see [Active geo-replication](active-geo-replication-overview.md).
-* To learn about auto-failover groups, see [Auto-failover groups](auto-failover-group-overview.md).
-* For information about using geo-restore, see [geo-restore](recovery-using-backups.md#geo-restore).
azure-sql Adonet V12 Develop Direct Route Ports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/adonet-v12-develop-direct-route-ports.md
- Title: Ports beyond 1433
-description: Client connections from ADO.NET to Azure SQL Database can bypass the proxy and interact directly with the database using ports other than 1433.
- Previously updated : 06/11/2020
-# Ports beyond 1433 for ADO.NET 4.5
-
-This topic describes the Azure SQL Database connection behavior for clients that use ADO.NET 4.5 or a later version.
-
-> [!IMPORTANT]
-> For information about connectivity architecture, see [Azure SQL Database connectivity architecture](connectivity-architecture.md).
->
-
-## Outside vs inside
-
-For connections to Azure SQL Database, we must first ask whether your client program runs *outside* or *inside* the Azure cloud boundary. The subsections discuss two common scenarios.
-
-### *Outside:* Client runs on your desktop computer
-
-Port 1433 is the only port that must be open on your desktop computer that hosts your SQL Database client application.
-
-### *Inside:* Client runs on Azure
-
-When your client runs inside the Azure cloud boundary, it uses what we can call a *direct route* to interact with SQL Database. After a connection is established, further interactions between the client and database involve no Azure SQL Database Gateway.
-
-The sequence is as follows:
-
-1. ADO.NET 4.5 (or later) initiates a brief interaction with the Azure cloud, and receives a dynamically identified port number.
-
- * The dynamically identified port number is in the range of 11000-11999.
-2. ADO.NET then connects to SQL Database directly, with no middleware in between.
-3. Queries are sent directly to the database, and results are returned directly to the client.
-
-Ensure that the port range 11000-11999 on your Azure client machine is left available for ADO.NET 4.5 client interactions with SQL Database.
-
-* In particular, ports in the range must be free of any other outbound blockers.
-* On your Azure VM, the **Windows Firewall with Advanced Security** controls the port settings.
-
-   * You can use the [firewall's user interface](/sql/sql-server/install/configure-the-windows-firewall-to-allow-sql-server-access) to add a rule that specifies the **TCP** protocol and the port range **11000-11999**.
-
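-One quick way to see whether redirection is in effect is to ask the server which TCP port your session is using. This is a diagnostic sketch; redirected sessions typically report a port in the 11000-11999 range.
-
-```sql
--- A port in the 11000-11999 range typically indicates a redirected
--- (direct route) connection rather than a gateway (proxy) connection.
-SELECT CONNECTIONPROPERTY('local_tcp_port') AS local_tcp_port,
-       CONNECTIONPROPERTY('client_net_address') AS client_net_address;
-```
-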
-## Version clarifications
-
-This section clarifies the monikers that refer to product versions. It also lists some pairings of versions between products.
-
-### ADO.NET
-
-* ADO.NET 4.0 supports the TDS 7.3 protocol, but not 7.4.
-* ADO.NET 4.5 and later support the TDS 7.4 protocol.
-
-### ODBC
-
-* Microsoft SQL Server ODBC 11 or above
-
-### JDBC
-
-* Microsoft SQL Server JDBC 4.2 or above (JDBC 4.0 actually supports TDS 7.4 but does not implement "redirection")
-
-## Related links
-
-* ADO.NET 4.6 was released on July 20, 2015. A blog announcement from the .NET team is available [here](https://devblogs.microsoft.com/dotnet/announcing-net-framework-4-6/).
-* ADO.NET 4.5 was released on August 15, 2012. A blog announcement from the .NET team is available [here](https://devblogs.microsoft.com/dotnet/announcing-the-release-of-net-framework-4-5-rtm-product-and-source-code/).
- * A blog post about ADO.NET 4.5.1 is available [here](https://devblogs.microsoft.com/dotnet/announcing-the-net-framework-4-5-1-preview/).
-
-* [Microsoft ODBC Driver 17 for SQL Server](https://aka.ms/downloadmsodbcsql)
-
-* [Connect to Azure SQL Database V12 via Redirection](https://techcommunity.microsoft.com/t5/DataCAT/Connect-to-Azure-SQL-Database-V12-via-Redirection/ba-p/305362)
-
-* [TDS protocol version list](https://www.freetds.org/)
-* [SQL Database Development Overview](develop-overview.md)
-* [Azure SQL Database firewall](firewall-configure.md)
azure-sql Advance Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/advance-notifications.md
- Title: Advance notifications (Preview) for planned maintenance events
-description: Get notification before planned maintenance for Azure SQL Database.
- Previously updated : 04/04/2022
-# Advance notifications for planned maintenance events (Preview)
-
-Advance notifications (Preview) are available for databases configured to use a non-default [maintenance window](maintenance-window.md) and managed instances with any configuration (including the default one). Advance notifications enable customers to configure notifications to be sent up to 24 hours in advance of any planned event.
-
-Notifications can be configured so you can get texts, emails, Azure push notifications, and voicemails when planned maintenance is due to begin in the next 24 hours. Additional notifications are sent when maintenance begins and when maintenance ends.
-
-> [!IMPORTANT]
-> For Azure SQL Database, advance notifications cannot be configured for the **System default** maintenance window option. Choose a maintenance window other than the **System default** to configure and enable Advance notifications.
-
-> [!NOTE]
-> While [maintenance windows](maintenance-window.md) are generally available, advance notifications for maintenance windows are in public preview for Azure SQL Database and Azure SQL Managed Instance.
-
-## Create an advance notification
-
-Advance notifications are available for Azure SQL databases that have their maintenance window configured.
-
-Complete the following steps to enable a notification.
-
-1. Go to the [Planned maintenance](https://portal.azure.com/#blade/Microsoft_Azure_Health/AzureHealthBrowseBlade/plannedMaintenance) page, select **Health alerts**, then **Add service health alert**.
-
- :::image type="content" source="media/advance-notifications/health-alerts.png" alt-text="create a new health alert menu option":::
-
-2. In the **Actions** section, select **Add action groups**.
-
- :::image type="content" source="media/advance-notifications/add-action-group.png" alt-text="add an action group menu option":::
-
-3. Complete the **Create action group** form, then select **Next: Notifications**.
-
- :::image type="content" source="media/advance-notifications/create-action-group.png" alt-text="create action group form":::
-
-4. On the **Notifications** tab, select the **Notification type**. The **Email/SMS message/Push/Voice** option offers the most flexibility and is the recommended option. Select the pen to configure the notification.
-
- :::image type="content" source="media/advance-notifications/notifications.png" alt-text="configure notifications":::
-
-   1. Complete the **Add or edit notification** form that opens and select **OK**.
-
-   2. The **Actions** and **Tags** tabs are optional. Here you can configure additional actions to be triggered or use tags to categorize and organize your Azure resources.
-
-   3. Check the details on the **Review + create** tab and select **Create**.
-
-5. After you select **Create**, the alert rule configuration screen opens with the action group already selected. Give your new alert rule a name, choose a resource group for it, and select **Create alert rule**.
-
-6. Select the **Health alerts** menu item again, and the list of alerts now contains your new alert.
--
-You're all set. Next time there's a planned Azure SQL maintenance event, you'll receive an advance notification.
-
-## Receiving notifications
-
-The following table shows the general-information notifications you may receive:
-
-|Status|Description|
-|:--|:--|
-|**Planned Deployment**| Received 24 hours prior to the maintenance event. Maintenance is planned on DATE between 5 PM and 8 AM (local time) for database *xyz*.|
-|**In-Progress** | Maintenance for database *xyz* is starting.|
-|**Complete** | Maintenance of database *xyz* is complete. |
-
-The following table shows additional notifications that may be sent while maintenance is ongoing:
-
-|Status|Description|
-|:--|:--|
-|**Extended** | Maintenance is in progress but didn't complete for database *xyz*. Maintenance will continue at the next maintenance window.|
-|**Canceled**| Maintenance for database *xyz* is canceled and will be rescheduled later. |
-|**Blocked**|There was a problem during maintenance for database *xyz*. We'll notify you when we resume.|
-|**Resumed**|The problem has been resolved and maintenance will continue at the next maintenance window.|
-
-## Permissions
-
-While Advance Notifications can be sent to any email address, Azure subscription RBAC (role-based access control) policy determines who can access the links in the email. Querying resource graph is covered by [Azure RBAC](../../role-based-access-control/overview.md) access management. To enable read access, each recipient should have resource group level read access. For more information, see [Steps to assign an Azure role](../../role-based-access-control/role-assignments-steps.md).
-
-## Retrieve the list of impacted resources
-
-[Azure Resource Graph](../../governance/resource-graph/overview.md) is an Azure service designed to extend Azure Resource Management. The Azure Resource Graph Explorer provides efficient and performant resource exploration with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment.
-
-You can use the Azure Resource Graph Explorer to query for maintenance events. For an introduction on how to run these queries, see [Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer](../../governance/resource-graph/first-query-portal.md).
-
-When the advance notification for planned maintenance is received, you get a link that opens Azure Resource Graph and executes the query for the exact event, similar to the following example. Note that the `notificationId` value is unique per maintenance event.
-
-```kusto
-resources
-| project resource = tolower(id)
-| join kind=inner (
- maintenanceresources
- | where type == "microsoft.maintenance/updates"
- | extend p = parse_json(properties)
- | mvexpand d = p.value
- | where d has 'notificationId' and d.notificationId == 'LNPN-R9Z'
- | project resource = tolower(name), status = d.status, resourceGroup, location, startTimeUtc = d.startTimeUtc, endTimeUtc = d.endTimeUtc, impactType = d.impactType
-) on resource
-| project resource, status, resourceGroup, location, startTimeUtc, endTimeUtc, impactType
-```
-
-For the full reference of the sample queries and how to use them across tools like PowerShell or Azure CLI, visit [Azure Resource Graph sample queries for Azure Service Health](../../service-health/resource-graph-samples.md).
--
-## Next steps
-- [Maintenance window](maintenance-window.md)
-- [Maintenance window FAQ](maintenance-window-faq.yml)
-- [Overview of alerts in Microsoft Azure](../../azure-monitor/alerts/alerts-overview.md)
-- [Email Azure Resource Manager Role](../../azure-monitor/alerts/action-groups.md#email-azure-resource-manager-role)
azure-sql Alerts Insights Configure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/alerts-insights-configure-portal.md
- Title: Set up alerts and notifications in the Azure portal
-description: Use the Azure portal to create alerts, which can trigger notifications or automation when the conditions you specify are met.
- Previously updated : 03/23/2022
-# Create alerts for Azure SQL Database and Azure Synapse Analytics using the Azure portal
--
-## Overview
-
-This article shows you how to set up alerts for databases in Azure SQL Database and Azure Synapse Analytics using the Azure portal. Alerts can send you an email or call a webhook when a metric (for example, database size or CPU usage) reaches a threshold you specify.
-
-> [!NOTE]
-> For Azure SQL Managed Instance specific instructions, see [Create alerts for Azure SQL Managed Instance](../managed-instance/alerts-create.md).
-
-You can receive an alert based on monitoring metrics for, or events on, your Azure services.
-
-* **Metric values** - The alert triggers when the value of a specified metric crosses a threshold you assign, in either direction. That is, it triggers both when the condition is first met and again later when the condition is no longer being met.
-* **Activity log events** - An alert can trigger on *every* event, or, only when a certain number of events occur.
-
-You can configure an alert to do the following when it triggers:
-
-* Send email notifications to the service administrator and co-administrators
-* Send email to additional addresses that you specify.
-* Call a webhook
-
-You can configure and get information about alert rules using:
-
-* [The Azure portal](../../azure-monitor/alerts/alerts-classic-portal.md)
-* [PowerShell](../../azure-monitor/alerts/alerts-classic-portal.md)
-* [A command-line interface (CLI)](../../azure-monitor/alerts/alerts-classic-portal.md)
-* [Azure Monitor REST API](/rest/api/monitor/alertrules)
-
-## Create an alert rule on a metric with the Azure portal
-
-1. In the [portal](https://portal.azure.com/), locate the resource you are interested in monitoring and select it.
-2. Select **Alerts** in the Monitoring section. The text and icon may vary slightly for different resources.
-
- ![Monitoring](./media/alerts-insights-configure-portal/Alerts.png)
-
-3. Select the **New alert rule** button to open the **Create rule** page.
- ![Create rule](./media/alerts-insights-configure-portal/create-rule.png)
-
-4. In the **Condition** section, click **Add**.
- ![Define condition](./media/alerts-insights-configure-portal/create-rule.png)
-5. In the **Configure signal logic** page, select a signal.
- ![Select signal](./media/alerts-insights-configure-portal/select-signal.png)
-6. After selecting a signal, such as **CPU percentage**, the **Configure signal logic** page appears.
- ![Configure signal logic](./media/alerts-insights-configure-portal/configure-signal-logic.png)
-7. On this page, configure the threshold type, operator, aggregation type, threshold value, aggregation granularity, and frequency of evaluation. Then click **Done**.
-8. On the **Create rule** page, select an existing **Action group** or create a new group. An action group enables you to define the action to be taken when an alert condition occurs.
- ![Define action group](./media/alerts-insights-configure-portal/action-group.png)
-
-9. Define a name for the rule, provide an optional description, choose a severity level for the rule, choose whether to enable the rule upon rule creation, and then click **Create alert rule** to create the metric alert rule.
-
-Within 10 minutes, the alert is active and triggers as previously described.
-
-## Next steps
-
-* Learn more about [configuring webhooks in alerts](../../azure-monitor/alerts/alerts-webhooks.md).
azure-sql Always Encrypted Azure Key Vault Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/always-encrypted-azure-key-vault-configure.md
- Title: "Configure Always Encrypted by using Azure Key Vault"
-description: This tutorial shows you how to secure sensitive data in a database in Azure SQL Database with data encryption by using the Always Encrypted wizard in SQL Server Management Studio.
-keywords: data encryption, encryption key, cloud encryption
- Previously updated : 11/02/2020
-# Configure Always Encrypted by using Azure Key Vault
--
-This article shows you how to secure sensitive data in a database in Azure SQL Database with data encryption by using the [Always Encrypted wizard](/sql/relational-databases/security/encryption/always-encrypted-wizard) in [SQL Server Management Studio (SSMS)](/sql/ssms/sql-server-management-studio-ssms). It also includes instructions that will show you how to store each encryption key in Azure Key Vault.
-
-Always Encrypted is a data encryption technology that helps protect sensitive data at rest on the server, during movement between client and server, and while the data is in use. Always Encrypted ensures that sensitive data never appears as plaintext inside the database system. After you configure data encryption, only client applications or app servers that have access to the keys can access plaintext data. For detailed information, see [Always Encrypted (Database Engine)](/sql/relational-databases/security/encryption/always-encrypted-database-engine).
-
-After you configure the database to use Always Encrypted, you will create a client application in C# with Visual Studio to work with the encrypted data.
-
-Follow the steps in this article and learn how to set up Always Encrypted for your database in Azure SQL Database or SQL Managed Instance. In this article you will learn how to perform the following tasks:
--- Use the Always Encrypted wizard in SSMS to create [Always Encrypted keys](/sql/relational-databases/security/encryption/always-encrypted-database-engine#Anchor_3).
- - Create a [column master key (CMK)](/sql/t-sql/statements/create-column-master-key-transact-sql).
- - Create a [column encryption key (CEK)](/sql/t-sql/statements/create-column-encryption-key-transact-sql).
-- Create a database table and encrypt columns.
-- Create an application that inserts, selects, and displays data from the encrypted columns.
-
-## Prerequisites
-
-- An Azure account and subscription. If you don't have one, sign up for a [free trial](https://azure.microsoft.com/pricing/free-trial/).
-- A database in [Azure SQL Database](single-database-create-quickstart.md) or [Azure SQL Managed Instance](../managed-instance/instance-create-quickstart.md).
-- [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) version 13.0.700.242 or later.
-- [.NET Framework 4.6](/dotnet/framework/) or later (on the client computer).
-- [Visual Studio](https://www.visualstudio.com/downloads/download-visual-studio-vs.aspx).
-- [Azure PowerShell](/powershell/azure/) or [Azure CLI](/cli/azure/install-azure-cli).
-
-## Enable client application access
-
-You must enable your client application to access your database in SQL Database by setting up an Azure Active Directory (Azure AD) application and copying the *Application ID* and *key* that you will need to authenticate your application.
-
-To get the *Application ID* and *key*, follow the steps in [create an Azure Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md).
-
-## Create a key vault to store your keys
-
-Now that your client app is configured and you have your application ID, it's time to create a key vault and configure its access policy so you and your application can access the vault's secrets (the Always Encrypted keys). The *create*, *get*, *list*, *sign*, *verify*, *wrapKey*, and *unwrapKey* permissions are required for creating a new column master key and for setting up encryption with SQL Server Management Studio.
-
-You can quickly create a key vault by running the following script. For a detailed explanation of these commands and more information about creating and configuring a key vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
-
-# [PowerShell](#tab/azure-powershell)
-
-> [!IMPORTANT]
-> The PowerShell Azure Resource Manager (RM) module is still supported by Azure SQL Database, but all future development is for the Az.Sql module. The AzureRM module will continue to receive bug fixes until at least December 2020. The arguments for the commands in the Az module and in the AzureRm modules are substantially identical. For more about their compatibility, see [Introducing the new Azure PowerShell Az module](/powershell/azure/new-azureps-module-az).
-
-```powershell
-$subscriptionName = '<subscriptionName>'
-$userPrincipalName = '<username@domain.com>'
-$applicationId = '<applicationId from AAD application>'
-$resourceGroupName = '<resourceGroupName>' # use the same resource group name when creating your SQL Database below
-$location = '<datacenterLocation>'
-$vaultName = '<vaultName>'
-
-Connect-AzAccount
-$subscriptionId = (Get-AzSubscription -SubscriptionName $subscriptionName).Id
-Set-AzContext -SubscriptionId $subscriptionId
-
-New-AzResourceGroup -Name $resourceGroupName -Location $location
-New-AzKeyVault -VaultName $vaultName -ResourceGroupName $resourceGroupName -Location $location
-
-Set-AzKeyVaultAccessPolicy -VaultName $vaultName -ResourceGroupName $resourceGroupName -PermissionsToKeys create,get,wrapKey,unwrapKey,sign,verify,list -UserPrincipalName $userPrincipalName
-Set-AzKeyVaultAccessPolicy -VaultName $vaultName -ResourceGroupName $resourceGroupName -ServicePrincipalName $applicationId -PermissionsToKeys get,wrapKey,unwrapKey,sign,verify,list
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-$subscriptionName = '<subscriptionName>'
-$userPrincipalName = '<username@domain.com>'
-$applicationId = '<applicationId from AAD application>'
-$resourceGroupName = '<resourceGroupName>' # use the same resource group name when creating your database in Azure SQL Database below
-$location = '<datacenterLocation>'
-$vaultName = '<vaultName>'
-
-az login
-az account set --subscription $subscriptionName
-
-az group create --location $location --name $resourceGroupName
-
-az keyvault create --name $vaultName --resource-group $resourceGroupName --location $location
-
-az keyvault set-policy --name $vaultName --key-permissions create get list sign unwrapKey verify wrapKey --resource-group $resourceGroupName --upn $userPrincipalName
-az keyvault set-policy --name $vaultName --key-permissions get list sign unwrapKey verify wrapKey --resource-group $resourceGroupName --spn $applicationId
-```
---
-## Connect with SSMS
-
-Open SQL Server Management Studio (SSMS) and connect to the server or managed instance hosting your database.
-
-1. Open SSMS. (Go to **Connect** > **Database Engine** to open the **Connect to Server** window if it isn't open.)
-
-2. Enter your server name or instance name and credentials.
-
- ![Copy the connection string](./media/always-encrypted-azure-key-vault-configure/ssms-connect.png)
-
-If the **New Firewall Rule** window opens, sign in to Azure and let SSMS create a new firewall rule for you.
-
-## Create a table
-
-In this section, you will create a table to hold patient data. It's not initially encrypted--you will configure encryption in the next section.
-
-1. Expand **Databases**.
-2. Right-click the database and click **New Query**.
-3. Paste the following Transact-SQL (T-SQL) into the new query window and **Execute** it.
-
-```sql
-CREATE TABLE [dbo].[Patients](
- [PatientId] [int] IDENTITY(1,1),
- [SSN] [char](11) NOT NULL,
- [FirstName] [nvarchar](50) NULL,
- [LastName] [nvarchar](50) NULL,
- [MiddleName] [nvarchar](50) NULL,
- [StreetAddress] [nvarchar](50) NULL,
- [City] [nvarchar](50) NULL,
- [ZipCode] [char](5) NULL,
- [State] [char](2) NULL,
-    [BirthDate] [date] NOT NULL,
- PRIMARY KEY CLUSTERED ([PatientId] ASC) ON [PRIMARY] );
-GO
-```
-
-## Encrypt columns (configure Always Encrypted)
-
-SSMS provides a wizard that helps you easily configure Always Encrypted by setting up the column master key, column encryption key, and encrypted columns for you.
-
-1. Expand **Databases** > **Clinic** > **Tables**.
-2. Right-click the **Patients** table and select **Encrypt Columns** to open the Always Encrypted wizard:
-
- ![Screenshot that highlights the Encrypt Columns... menu option.](./media/always-encrypted-azure-key-vault-configure/encrypt-columns.png)
-
-The Always Encrypted wizard includes the following sections: **Column Selection**, **Master Key Configuration**, **Validation**, and **Summary**.
-
-### Column Selection
-
-Click **Next** on the **Introduction** page to open the **Column Selection** page. On this page, you will select which columns you want to encrypt, [the type of encryption, and what column encryption key (CEK)](/sql/relational-databases/security/encryption/always-encrypted-wizard#Anchor_2) to use.
-
-Encrypt **SSN** and **BirthDate** information for each patient. The SSN column will use deterministic encryption, which supports equality lookups, joins, and group by. The BirthDate column will use randomized encryption, which does not support any of these operations.
-
-Set the **Encryption Type** for the SSN column to **Deterministic** and the BirthDate column to **Randomized**. Click **Next**.
-
-![Encrypt columns](./media/always-encrypted-azure-key-vault-configure/column-selection.png)
-
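-To make the difference concrete, the following sketch shows the kind of predicate each choice allows once encryption is in place. The queries assume a client connection with **Column Encryption Setting=Enabled** that passes the values as parameters (shown here as hypothetical `@SSN` and `@BirthDate` parameters).
-
-```sql
--- Supported: SSN uses deterministic encryption, so an equality
--- comparison against a parameter works.
-SELECT [FirstName], [LastName]
-FROM [dbo].[Patients]
-WHERE [SSN] = @SSN;
-
--- Not supported: BirthDate uses randomized encryption, so any
--- comparison on the column fails.
-SELECT [FirstName], [LastName]
-FROM [dbo].[Patients]
-WHERE [BirthDate] = @BirthDate;
-```
-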
-### Master Key Configuration
-
-The **Master Key Configuration** page is where you set up your CMK and select the key store provider where the CMK will be stored. Currently, you can store a CMK in the Windows certificate store, Azure Key Vault, or a hardware security module (HSM).
-
-This tutorial shows how to store your keys in Azure Key Vault.
-
-1. Select **Azure Key Vault**.
-2. Select the desired key vault from the drop-down list.
-3. Click **Next**.
-
-![Master key configuration](./media/always-encrypted-azure-key-vault-configure/master-key-configuration.png)
-
-### Validation
-
-You can encrypt the columns now or save a PowerShell script to run later. For this tutorial, select **Proceed to finish now** and click **Next**.
-
-### Summary
-
-Verify that the settings are all correct and click **Finish** to complete the setup for Always Encrypted.
-
-![Screenshot shows the results page with tasks marked as passed.](./media/always-encrypted-azure-key-vault-configure/summary.png)
-
-### Verify the wizard's actions
-
-After the wizard is finished, your database is set up for Always Encrypted. The wizard performed the following actions:
-
-- Created a column master key and stored it in Azure Key Vault.
-- Created a column encryption key and stored it in Azure Key Vault.
-- Configured the selected columns for encryption. The Patients table currently has no data, but any existing data in the selected columns is now encrypted.
-
-You can verify the creation of the keys in SSMS by expanding **Clinic** > **Security** > **Always Encrypted Keys**.
-
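-You can also inspect the key metadata with T-SQL by querying the Always Encrypted catalog views:
-
-```sql
--- Lists the column master key and column encryption key metadata
--- that the wizard created in the database.
-SELECT * FROM sys.column_master_keys;
-SELECT * FROM sys.column_encryption_keys;
-```
-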
-## Create a client application that works with the encrypted data
-
-Now that Always Encrypted is set up, you can build an application that performs *inserts* and *selects* on the encrypted columns.
-
-> [!IMPORTANT]
-> Your application must use [SqlParameter](/dotnet/api/system.data.sqlclient.sqlparameter) objects when passing plaintext data to the server with Always Encrypted columns. Passing literal values without using SqlParameter objects will result in an exception.
-
-1. Open Visual Studio and create a new C# **Console Application** (Visual Studio 2015 and earlier) or **Console App (.NET Framework)** (Visual Studio 2017 and later). Make sure your project is set to **.NET Framework 4.6** or later.
-2. Name the project **AlwaysEncryptedConsoleAKVApp** and click **OK**.
-3. Install the following NuGet packages by going to **Tools** > **NuGet Package Manager** > **Package Manager Console**.
-
-   Run these two lines of code in the Package Manager Console:
-
- ```powershell
- Install-Package Microsoft.SqlServer.Management.AlwaysEncrypted.AzureKeyVaultProvider
- Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory
- ```
-
-## Modify your connection string to enable Always Encrypted
-
-This section explains how to enable Always Encrypted in your database connection string.
-
-To enable Always Encrypted, you need to add the **Column Encryption Setting** keyword to your connection string and set it to **Enabled**.
-
-You can set this directly in the connection string, or you can set it by using [SqlConnectionStringBuilder](/dotnet/api/system.data.sqlclient.sqlconnectionstringbuilder). The sample application in the next section shows how to use **SqlConnectionStringBuilder**.
-
-### Enable Always Encrypted in the connection string
-
-Add the following keyword to your connection string.
-
- `Column Encryption Setting=Enabled`
-
-### Enable Always Encrypted with SqlConnectionStringBuilder
-
-The following code shows how to enable Always Encrypted by setting [SqlConnectionStringBuilder.ColumnEncryptionSetting](/dotnet/api/system.data.sqlclient.sqlconnectionstringbuilder.columnencryptionsetting) to [Enabled](/dotnet/api/system.data.sqlclient.sqlconnectioncolumnencryptionsetting).
-
-```csharp
-// Instantiate a SqlConnectionStringBuilder.
-SqlConnectionStringBuilder connStringBuilder = new SqlConnectionStringBuilder("replace with your connection string");
-
-// Enable Always Encrypted.
-connStringBuilder.ColumnEncryptionSetting = SqlConnectionColumnEncryptionSetting.Enabled;
-```
-
-## Register the Azure Key Vault provider
-The following code shows how to register the Azure Key Vault provider with the ADO.NET driver.
-
-```csharp
-private static ClientCredential _clientCredential;
-
-static void InitializeAzureKeyVaultProvider() {
- _clientCredential = new ClientCredential(applicationId, clientKey);
-
- SqlColumnEncryptionAzureKeyVaultProvider azureKeyVaultProvider = new SqlColumnEncryptionAzureKeyVaultProvider(GetToken);
-
- Dictionary<string, SqlColumnEncryptionKeyStoreProvider> providers = new Dictionary<string, SqlColumnEncryptionKeyStoreProvider>();
-
- providers.Add(SqlColumnEncryptionAzureKeyVaultProvider.ProviderName, azureKeyVaultProvider);
- SqlConnection.RegisterColumnEncryptionKeyStoreProviders(providers);
-}
-```
-
-## Always Encrypted sample console application
-
-This sample demonstrates how to:
-
-- Modify your connection string to enable Always Encrypted.
-- Register Azure Key Vault as the application's key store provider.
-- Insert data into the encrypted columns.
-- Select a record by filtering for a specific value in an encrypted column.
-
-Replace the contents of *Program.cs* with the following code. Replace the connection string for the global connectionString variable in the line that directly precedes the Main method with your valid connection string from the Azure portal. This is the only change you need to make to this code.
-
-Run the app to see Always Encrypted in action.
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Text;
-using System.Threading.Tasks;
-using System.Data;
-using System.Data.SqlClient;
-using Microsoft.IdentityModel.Clients.ActiveDirectory;
-using Microsoft.SqlServer.Management.AlwaysEncrypted.AzureKeyVaultProvider;
-
-namespace AlwaysEncryptedConsoleAKVApp {
- class Program {
- // Update this line with your Clinic database connection string from the Azure portal.
- static string connectionString = @"<connection string from the portal>";
- static string applicationId = @"<application ID from your AAD application>";
- static string clientKey = "<key from your AAD application>";
-
- static void Main(string[] args) {
- InitializeAzureKeyVaultProvider();
-
- Console.WriteLine("Signed in as: " + _clientCredential.ClientId);
-
- Console.WriteLine("Original connection string copied from the Azure portal:");
- Console.WriteLine(connectionString);
-
- // Create a SqlConnectionStringBuilder.
- SqlConnectionStringBuilder connStringBuilder =
- new SqlConnectionStringBuilder(connectionString);
-
- // Enable Always Encrypted for the connection.
- // This is the only change specific to Always Encrypted
- connStringBuilder.ColumnEncryptionSetting =
- SqlConnectionColumnEncryptionSetting.Enabled;
-
- Console.WriteLine(Environment.NewLine + "Updated connection string with Always Encrypted enabled:");
- Console.WriteLine(connStringBuilder.ConnectionString);
-
- // Update the connection string with a password supplied at runtime.
- Console.WriteLine(Environment.NewLine + "Enter server password:");
- connStringBuilder.Password = Console.ReadLine();
-
- // Assign the updated connection string to our global variable.
- connectionString = connStringBuilder.ConnectionString;
-
- // Delete all records to restart this demo app.
- ResetPatientsTable();
-
- // Add sample data to the Patients table.
- Console.Write(Environment.NewLine + "Adding sample patient data to the database...");
-
- InsertPatient(new Patient() {
- SSN = "999-99-0001",
- FirstName = "Orlando",
- LastName = "Gee",
- BirthDate = DateTime.Parse("01/04/1964")
- });
- InsertPatient(new Patient() {
- SSN = "999-99-0002",
- FirstName = "Keith",
- LastName = "Harris",
- BirthDate = DateTime.Parse("06/20/1977")
- });
- InsertPatient(new Patient() {
- SSN = "999-99-0003",
- FirstName = "Donna",
- LastName = "Carreras",
- BirthDate = DateTime.Parse("02/09/1973")
- });
- InsertPatient(new Patient() {
- SSN = "999-99-0004",
- FirstName = "Janet",
- LastName = "Gates",
- BirthDate = DateTime.Parse("08/31/1985")
- });
- InsertPatient(new Patient() {
- SSN = "999-99-0005",
- FirstName = "Lucy",
- LastName = "Harrington",
- BirthDate = DateTime.Parse("05/06/1993")
- });
-
- // Fetch and display all patients.
- Console.WriteLine(Environment.NewLine + "All the records currently in the Patients table:");
-
- foreach (Patient patient in SelectAllPatients()) {
- Console.WriteLine(patient.FirstName + " " + patient.LastName + "\tSSN: " + patient.SSN + "\tBirthdate: " + patient.BirthDate);
- }
-
- // Get patients by SSN.
-        Console.WriteLine(Environment.NewLine + "Now let's locate records by searching the encrypted SSN column.");
-
- string ssn;
-
- // This very simple validation only checks that the user entered 11 characters.
- // In production be sure to check all user input and use the best validation for your specific application.
- do {
- Console.WriteLine("Please enter a valid SSN (ex. 999-99-0003):");
- ssn = Console.ReadLine();
- } while (ssn.Length != 11);
-
-        // The query returns the record that matches the provided value
-        // and stores the result in selectedPatient.
- Patient selectedPatient = SelectPatientBySSN(ssn);
-
- // Check if any records were returned and display our query results.
- if (selectedPatient != null) {
- Console.WriteLine("Patient found with SSN = " + ssn);
- Console.WriteLine(selectedPatient.FirstName + " " + selectedPatient.LastName + "\tSSN: "
- + selectedPatient.SSN + "\tBirthdate: " + selectedPatient.BirthDate);
- }
- else {
- Console.WriteLine("No patients found with SSN = " + ssn);
- }
-
- Console.WriteLine("Press Enter to exit...");
- Console.ReadLine();
- }
-
- private static ClientCredential _clientCredential;
-
- static void InitializeAzureKeyVaultProvider() {
- _clientCredential = new ClientCredential(applicationId, clientKey);
-
- SqlColumnEncryptionAzureKeyVaultProvider azureKeyVaultProvider =
- new SqlColumnEncryptionAzureKeyVaultProvider(GetToken);
-
- Dictionary<string, SqlColumnEncryptionKeyStoreProvider> providers =
- new Dictionary<string, SqlColumnEncryptionKeyStoreProvider>();
-
- providers.Add(SqlColumnEncryptionAzureKeyVaultProvider.ProviderName, azureKeyVaultProvider);
- SqlConnection.RegisterColumnEncryptionKeyStoreProviders(providers);
- }
-
- public async static Task<string> GetToken(string authority, string resource, string scope) {
- var authContext = new AuthenticationContext(authority);
- AuthenticationResult result = await authContext.AcquireTokenAsync(resource, _clientCredential);
-
- if (result == null)
- throw new InvalidOperationException("Failed to obtain the access token");
- return result.AccessToken;
- }
-
- static int InsertPatient(Patient newPatient) {
- int returnValue = 0;
-
- string sqlCmdText = @"INSERT INTO [dbo].[Patients] ([SSN], [FirstName], [LastName], [BirthDate])
- VALUES (@SSN, @FirstName, @LastName, @BirthDate);";
-
- SqlCommand sqlCmd = new SqlCommand(sqlCmdText);
-
- SqlParameter paramSSN = new SqlParameter(@"@SSN", newPatient.SSN);
- paramSSN.DbType = DbType.AnsiStringFixedLength;
- paramSSN.Direction = ParameterDirection.Input;
- paramSSN.Size = 11;
-
- SqlParameter paramFirstName = new SqlParameter(@"@FirstName", newPatient.FirstName);
- paramFirstName.DbType = DbType.String;
- paramFirstName.Direction = ParameterDirection.Input;
-
- SqlParameter paramLastName = new SqlParameter(@"@LastName", newPatient.LastName);
- paramLastName.DbType = DbType.String;
- paramLastName.Direction = ParameterDirection.Input;
-
- SqlParameter paramBirthDate = new SqlParameter(@"@BirthDate", newPatient.BirthDate);
- paramBirthDate.SqlDbType = SqlDbType.Date;
- paramBirthDate.Direction = ParameterDirection.Input;
-
- sqlCmd.Parameters.Add(paramSSN);
- sqlCmd.Parameters.Add(paramFirstName);
- sqlCmd.Parameters.Add(paramLastName);
- sqlCmd.Parameters.Add(paramBirthDate);
-
- using (sqlCmd.Connection = new SqlConnection(connectionString)) {
- try {
- sqlCmd.Connection.Open();
- sqlCmd.ExecuteNonQuery();
- }
- catch (Exception ex) {
- returnValue = 1;
- Console.WriteLine("The following error was encountered: ");
- Console.WriteLine(ex.Message);
- Console.WriteLine(Environment.NewLine + "Press Enter key to exit");
- Console.ReadLine();
- Environment.Exit(0);
- }
- }
- return returnValue;
- }
--
- static List<Patient> SelectAllPatients() {
- List<Patient> patients = new List<Patient>();
-
- SqlCommand sqlCmd = new SqlCommand(
- "SELECT [SSN], [FirstName], [LastName], [BirthDate] FROM [dbo].[Patients]",
- new SqlConnection(connectionString));
-
-        using (sqlCmd.Connection = new SqlConnection(connectionString)) {
- try {
- sqlCmd.Connection.Open();
- SqlDataReader reader = sqlCmd.ExecuteReader();
-
- if (reader.HasRows) {
- while (reader.Read()) {
- patients.Add(new Patient() {
- SSN = reader[0].ToString(),
- FirstName = reader[1].ToString(),
- LastName = reader["LastName"].ToString(),
- BirthDate = (DateTime)reader["BirthDate"]
- });
- }
- }
- }
-            catch (Exception) {
- throw;
- }
- }
-
- return patients;
- }
-
- static Patient SelectPatientBySSN(string ssn) {
- Patient patient = new Patient();
-
- SqlCommand sqlCmd = new SqlCommand(
- "SELECT [SSN], [FirstName], [LastName], [BirthDate] FROM [dbo].[Patients] WHERE [SSN]=@SSN",
- new SqlConnection(connectionString));
-
- SqlParameter paramSSN = new SqlParameter(@"@SSN", ssn);
- paramSSN.DbType = DbType.AnsiStringFixedLength;
- paramSSN.Direction = ParameterDirection.Input;
- paramSSN.Size = 11;
-
- sqlCmd.Parameters.Add(paramSSN);
-
- using (sqlCmd.Connection = new SqlConnection(connectionString)) {
- try {
- sqlCmd.Connection.Open();
- SqlDataReader reader = sqlCmd.ExecuteReader();
-
- if (reader.HasRows) {
- while (reader.Read()) {
- patient = new Patient() {
- SSN = reader[0].ToString(),
- FirstName = reader[1].ToString(),
- LastName = reader["LastName"].ToString(),
- BirthDate = (DateTime)reader["BirthDate"]
- };
- }
- }
- else {
- patient = null;
- }
- }
-            catch (Exception) {
- throw;
- }
- }
- return patient;
- }
-
- // This method simply deletes all records in the Patients table to reset our demo.
- static int ResetPatientsTable() {
- int returnValue = 0;
-
- SqlCommand sqlCmd = new SqlCommand("DELETE FROM Patients");
- using (sqlCmd.Connection = new SqlConnection(connectionString)) {
- try {
- sqlCmd.Connection.Open();
- sqlCmd.ExecuteNonQuery();
-
- }
-            catch (Exception) {
- returnValue = 1;
- }
- }
- return returnValue;
- }
- }
-
- class Patient {
- public string SSN { get; set; }
- public string FirstName { get; set; }
- public string LastName { get; set; }
- public DateTime BirthDate { get; set; }
- }
-}
-```
-
-## Verify that the data is encrypted
-
-You can quickly check that the actual data on the server is encrypted by querying the Patients data with SSMS (using your current connection where **Column Encryption Setting** is not yet enabled).
-
-Run the following query on the Clinic database.
-
-```sql
-SELECT FirstName, LastName, SSN, BirthDate FROM Patients;
-```
-
-You can see that the encrypted columns do not contain any plaintext data.
-
- ![Screenshot that shows that the encrypted columns do not contain any plaintext data.](./media/always-encrypted-azure-key-vault-configure/ssms-encrypted.png)
-
-To use SSMS to access the plaintext data, you first need to ensure that the user has proper permissions to the Azure Key Vault: *get*, *unwrapKey*, and *verify*. For detailed information, see [Create and Store Column Master Keys (Always Encrypted)](/sql/relational-databases/security/encryption/create-and-store-column-master-keys-always-encrypted).
-
-Then add the *Column Encryption Setting=enabled* parameter to your connection.
-
-1. In SSMS, right-click your server in **Object Explorer** and choose **Disconnect**.
-2. Click **Connect** > **Database Engine** to open the **Connect to Server** window and click **Options**.
-3. Click **Additional Connection Parameters** and type **Column Encryption Setting=enabled**.
-
-   ![Screenshot that shows the Additional Connection Parameters tab.](./media/always-encrypted-azure-key-vault-configure/ssms-connection-parameter.png)
-
-4. Run the following query on the Clinic database.
-
- ```sql
- SELECT FirstName, LastName, SSN, BirthDate FROM Patients;
- ```
-
- You can now see the plaintext data in the encrypted columns.
-
- ![New console application](./media/always-encrypted-azure-key-vault-configure/ssms-plaintext.png)
-
-## Next steps
-
-After your database is configured to use Always Encrypted, you may want to do the following:
-
-- [Rotate and clean up your keys](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio).
-- [Migrate data that is already encrypted with Always Encrypted](/sql/relational-databases/security/encryption/migrate-sensitive-data-protected-by-always-encrypted).
-
-## Related information
-
-- [Always Encrypted (client development)](/sql/relational-databases/security/encryption/always-encrypted-client-development)
-- [Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption)
-- [SQL Server encryption](/sql/relational-databases/security/encryption/sql-server-encryption)
-- [Always Encrypted wizard](/sql/relational-databases/security/encryption/always-encrypted-wizard)
-- [Always Encrypted blog](/archive/blogs/sqlsecurity/always-encrypted-key-metadata)
azure-sql Always Encrypted Certificate Store Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/always-encrypted-certificate-store-configure.md
- Title: Configure Always Encrypted by using the Windows certificate store
-description: This article shows you how to secure sensitive data in Azure SQL Database with database encryption by using the Always Encrypted wizard in SQL Server Management Studio (SSMS). It also shows you how to store your encryption keys in the Windows certificate store.
-keywords: encrypt data, sql encryption, database encryption, sensitive data, Always Encrypted
- Previously updated : 04/23/2020
-# Configure Always Encrypted by using the Windows certificate store
--
-This article shows you how to secure sensitive data in Azure SQL Database or Azure SQL Managed Instance with database encryption by using the [Always Encrypted wizard](/sql/relational-databases/security/encryption/always-encrypted-wizard) in [SQL Server Management Studio (SSMS)](/sql/ssms/sql-server-management-studio-ssms). It also shows you how to store your encryption keys in the Windows certificate store.
-
-Always Encrypted is a data encryption technology that helps protect sensitive data at rest on the server, during movement between client and server, and while the data is in use, ensuring that sensitive data never appears as plaintext inside the database system. After you encrypt data, only client applications or app servers that have access to the keys can access plaintext data. For detailed information, see [Always Encrypted (Database Engine)](/sql/relational-databases/security/encryption/always-encrypted-database-engine).
-
-After configuring the database to use Always Encrypted, you will create a client application in C# with Visual Studio to work with the encrypted data.
-
-Follow the steps in this article to learn how to set up Always Encrypted for SQL Database or SQL Managed Instance. In this article, you will learn how to perform the following tasks:
-
-* Use the Always Encrypted wizard in SSMS to create [Always Encrypted Keys](/sql/relational-databases/security/encryption/always-encrypted-database-engine#Anchor_3).
- * Create a [Column Master Key (CMK)](/sql/t-sql/statements/create-column-master-key-transact-sql).
- * Create a [Column Encryption Key (CEK)](/sql/t-sql/statements/create-column-encryption-key-transact-sql).
-* Create a database table and encrypt columns.
-* Create an application that inserts, selects, and displays data from the encrypted columns.
-
-## Prerequisites
-
-For this tutorial, you'll need:
-
-* An Azure account and subscription. If you don't have one, sign up for a [free trial](https://azure.microsoft.com/pricing/free-trial/).
-* A database in [Azure SQL Database](single-database-create-quickstart.md) or [Azure SQL Managed Instance](../managed-instance/instance-create-quickstart.md).
-* [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) version 13.0.700.242 or later.
-* [.NET Framework 4.6](/dotnet/framework/) or later (on the client computer).
-* [Visual Studio](https://www.visualstudio.com/downloads/download-visual-studio-vs.aspx).
-
-## Enable client application access
-
-You must enable your client application to access SQL Database or SQL Managed Instance by setting up an Azure Active Directory (AAD) application and copying the *Application ID* and *key* that you will need to authenticate your application.
-
-To get the *Application ID* and *key*, follow the steps in [create an Azure Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md).
---
-## Connect with SSMS
-
-Open SQL Server Management Studio (SSMS) and connect to the server or managed instance hosting your database.
-
-1. Open SSMS. (Click **Connect** > **Database Engine** to open the **Connect to Server** window if it is not open).
-2. Enter your server name and credentials.
-
- ![Copy the connection string](./media/always-encrypted-certificate-store-configure/ssms-connect.png)
-
-If the **New Firewall Rule** window opens, sign in to Azure and let SSMS create a new firewall rule for you.
-
-## Create a table
-
-In this section, you will create a table to hold patient data. This will be a normal table initially--you will configure encryption in the next section.
-
-1. Expand **Databases**.
-2. Right-click the **Clinic** database and click **New Query**.
-3. Paste the following Transact-SQL (T-SQL) into the new query window and **Execute** it.
-
- ```tsql
- CREATE TABLE [dbo].[Patients](
- [PatientId] [int] IDENTITY(1,1),
- [SSN] [char](11) NOT NULL,
- [FirstName] [nvarchar](50) NULL,
- [LastName] [nvarchar](50) NULL,
- [MiddleName] [nvarchar](50) NULL,
- [StreetAddress] [nvarchar](50) NULL,
- [City] [nvarchar](50) NULL,
- [ZipCode] [char](5) NULL,
- [State] [char](2) NULL,
-        [BirthDate] [date] NOT NULL,
- PRIMARY KEY CLUSTERED ([PatientId] ASC) ON [PRIMARY] );
- GO
- ```
-
-## Encrypt columns (configure Always Encrypted)
-
-SSMS provides a wizard to easily configure Always Encrypted by setting up the CMK, CEK, and encrypted columns for you.
-
-1. Expand **Databases** > **Clinic** > **Tables**.
-2. Right-click the **Patients** table and select **Encrypt Columns** to open the Always Encrypted wizard:
-
-   ![Screenshot that shows the Encrypt Columns... menu option in the Patients table.](./media/always-encrypted-certificate-store-configure/encrypt-columns.png)
-
-The Always Encrypted wizard includes the following sections: **Column Selection**, **Master Key Configuration** (CMK), **Validation**, and **Summary**.
-
-### Column Selection
-
-Click **Next** on the **Introduction** page to open the **Column Selection** page. On this page, you will select which columns you want to encrypt, [the type of encryption, and what column encryption key (CEK)](/sql/relational-databases/security/encryption/always-encrypted-wizard#Anchor_2) to use.
-
-Encrypt **SSN** and **BirthDate** information for each patient. The **SSN** column will use deterministic encryption, which supports equality lookups, joins, and group by. The **BirthDate** column will use randomized encryption, which does not support any of these operations.
-
-Set the **Encryption Type** for the **SSN** column to **Deterministic** and the **BirthDate** column to **Randomized**. Click **Next**.
-
-![Encrypt columns](./media/always-encrypted-certificate-store-configure/column-selection.png)
-
-### Master Key Configuration
-
-The **Master Key Configuration** page is where you set up your CMK and select the key store provider where the CMK will be stored. Currently, you can store a CMK in the Windows certificate store, Azure Key Vault, or a hardware security module (HSM). This tutorial shows how to store your keys in the Windows certificate store.
-
-Verify that **Windows certificate store** is selected and click **Next**.
-
-![Master key configuration](./media/always-encrypted-certificate-store-configure/master-key-configuration.png)
-
-### Validation
-
-You can encrypt the columns now or save a PowerShell script to run later. For this tutorial, select **Proceed to finish now** and click **Next**.
-
-### Summary
-
-Verify that the settings are all correct and click **Finish** to complete the setup for Always Encrypted.
-
-![Screenshot shows the results page with tasks marked as passed.](./media/always-encrypted-certificate-store-configure/summary.png)
-
-### Verify the wizard's actions
-
-After the wizard is finished, your database is set up for Always Encrypted. The wizard performed the following actions:
-
-* Created a CMK.
-* Created a CEK.
-* Configured the selected columns for encryption. Your **Patients** table currently has no data, but any existing data in the selected columns is now encrypted.
-
-You can verify the creation of the keys in SSMS by going to **Clinic** > **Security** > **Always Encrypted Keys**. You can now see the new keys that the wizard generated for you.
-
-## Create a client application that works with the encrypted data
-
-Now that Always Encrypted is set up, you can build an application that performs *inserts* and *selects* on the encrypted columns. To successfully run the sample application, you must run it on the same computer where you ran the Always Encrypted wizard. To run the application on another computer, you must deploy your Always Encrypted certificates to the computer running the client app.
-
-> [!IMPORTANT]
-> Your application must use [SqlParameter](/dotnet/api/system.data.sqlclient.sqlparameter) objects when passing plaintext data to the server with Always Encrypted columns. Passing literal values without using SqlParameter objects will result in an exception.
-
-1. Open Visual Studio and create a new C# console application. Make sure your project is set to **.NET Framework 4.6** or later.
-2. Name the project **AlwaysEncryptedConsoleApp** and click **OK**.
-
-![Screenshot that shows the newly named AlwaysEncryptedConsoleApp project.](./media/always-encrypted-certificate-store-configure/console-app.png)
-
-## Modify your connection string to enable Always Encrypted
-
-This section explains how to enable Always Encrypted in your database connection string. You will modify the console app you just created; the complete sample appears in the "Always Encrypted sample console application" section below.
-
-To enable Always Encrypted, you need to add the **Column Encryption Setting** keyword to your connection string and set it to **Enabled**.
-
-You can set this directly in the connection string, or you can set it by using a [SqlConnectionStringBuilder](/dotnet/api/system.data.sqlclient.sqlconnectionstringbuilder). The sample application in the next section shows how to use **SqlConnectionStringBuilder**.
-
-> [!NOTE]
-> This is the only change required in a client application specific to Always Encrypted. If you have an existing application that stores its connection string externally (that is, in a config file), you might be able to enable Always Encrypted without changing any code.
-
-### Enable Always Encrypted in the connection string
-
-Add the following keyword to your connection string:
-
-`Column Encryption Setting=Enabled`
-
-### Enable Always Encrypted with a SqlConnectionStringBuilder
-
-The following code shows how to enable Always Encrypted by setting the [SqlConnectionStringBuilder.ColumnEncryptionSetting](/dotnet/api/system.data.sqlclient.sqlconnectionstringbuilder.columnencryptionsetting) to [Enabled](/dotnet/api/system.data.sqlclient.sqlconnectioncolumnencryptionsetting).
-
-```csharp
-// Instantiate a SqlConnectionStringBuilder.
-SqlConnectionStringBuilder connStringBuilder =
- new SqlConnectionStringBuilder("replace with your connection string");
-
-// Enable Always Encrypted.
-connStringBuilder.ColumnEncryptionSetting =
- SqlConnectionColumnEncryptionSetting.Enabled;
-```
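-
-If it helps, here's a minimal follow-on sketch that opens a connection with the updated string (the placeholder connection string above still needs to be replaced with a real one):
-
-```cs
-// Open a connection over which the driver can transparently encrypt
-// parameters targeting encrypted columns and decrypt query results.
-using (SqlConnection connection = new SqlConnection(connStringBuilder.ConnectionString))
-{
- connection.Open();
- // Run parameterized commands here.
-}
-```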
-
-## Always Encrypted sample console application
-
-This sample demonstrates how to:
-
-* Modify your connection string to enable Always Encrypted.
-* Insert data into the encrypted columns.
-* Select a record by filtering for a specific value in an encrypted column.
-
-Replace the contents of **Program.cs** with the following code, and then replace the value assigned to the global connectionString variable (declared directly above the Main method) with your valid connection string from the Azure portal. That is the only change you need to make to this code.
-
-Run the app to see Always Encrypted in action.
-
-```cs
-using System;
-using System.Collections.Generic;
-using System.Data;
-using System.Data.SqlClient;
-using System.Globalization;
-
-namespace AlwaysEncryptedConsoleApp
-{
- class Program
- {
- // Update this line with your Clinic database connection string from the Azure portal.
- static string connectionString = @"Data Source = SPE-T640-01.sys-sqlsvr.local; Initial Catalog = Clinic; Integrated Security = true";
-
- static void Main(string[] args)
- {
- Console.WriteLine("Original connection string copied from the Azure portal:");
- Console.WriteLine(connectionString);
-
- // Create a SqlConnectionStringBuilder.
- SqlConnectionStringBuilder connStringBuilder =
- new SqlConnectionStringBuilder(connectionString);
-
- // Enable Always Encrypted for the connection.
- // This is the only change specific to Always Encrypted
- connStringBuilder.ColumnEncryptionSetting =
- SqlConnectionColumnEncryptionSetting.Enabled;
-
- Console.WriteLine(Environment.NewLine + "Updated connection string with Always Encrypted enabled:");
- Console.WriteLine(connStringBuilder.ConnectionString);
-
- // Update the connection string with a password supplied at runtime.
- Console.WriteLine(Environment.NewLine + "Enter server password:");
- connStringBuilder.Password = Console.ReadLine();
-
- // Assign the updated connection string to our global variable.
- connectionString = connStringBuilder.ConnectionString;
-
- // Delete all records to restart this demo app.
- ResetPatientsTable();
-
- // Add sample data to the Patients table.
- Console.Write(Environment.NewLine + "Adding sample patient data to the database...");
-
- CultureInfo culture = CultureInfo.CreateSpecificCulture("en-US");
- InsertPatient(new Patient()
- {
- SSN = "999-99-0001",
- FirstName = "Orlando",
- LastName = "Gee",
- BirthDate = DateTime.Parse("01/04/1964", culture)
- });
- InsertPatient(new Patient()
- {
- SSN = "999-99-0002",
- FirstName = "Keith",
- LastName = "Harris",
- BirthDate = DateTime.Parse("06/20/1977", culture)
- });
- InsertPatient(new Patient()
- {
- SSN = "999-99-0003",
- FirstName = "Donna",
- LastName = "Carreras",
- BirthDate = DateTime.Parse("02/09/1973", culture)
- });
- InsertPatient(new Patient()
- {
- SSN = "999-99-0004",
- FirstName = "Janet",
- LastName = "Gates",
- BirthDate = DateTime.Parse("08/31/1985", culture)
- });
- InsertPatient(new Patient()
- {
- SSN = "999-99-0005",
- FirstName = "Lucy",
- LastName = "Harrington",
- BirthDate = DateTime.Parse("05/06/1993", culture)
- });
-
- // Fetch and display all patients.
- Console.WriteLine(Environment.NewLine + "All the records currently in the Patients table:");
-
- foreach (Patient patient in SelectAllPatients())
- {
- Console.WriteLine(patient.FirstName + " " + patient.LastName + "\tSSN: " + patient.SSN + "\tBirthdate: " + patient.BirthDate);
- }
-
- // Get patients by SSN.
- Console.WriteLine(Environment.NewLine + "Now let's locate records by searching the encrypted SSN column.");
-
- string ssn;
-
- // This very simple validation only checks that the user entered 11 characters.
- // In production be sure to check all user input and use the best validation for your specific application.
- do
- {
- Console.WriteLine("Please enter a valid SSN (ex. 123-45-6789):");
- ssn = Console.ReadLine();
- } while (ssn.Length != 11);
-
- // The example allows duplicate SSN entries, so SelectPatientBySSN returns
- // the last record that matches the provided value.
- Patient selectedPatient = SelectPatientBySSN(ssn);
-
- // Check if any records were returned and display our query results.
- if (selectedPatient != null)
- {
- Console.WriteLine("Patient found with SSN = " + ssn);
- Console.WriteLine(selectedPatient.FirstName + " " + selectedPatient.LastName + "\tSSN: "
- + selectedPatient.SSN + "\tBirthdate: " + selectedPatient.BirthDate);
- }
- else
- {
- Console.WriteLine("No patients found with SSN = " + ssn);
- }
-
- Console.WriteLine("Press Enter to exit...");
- Console.ReadLine();
- }
-
- static int InsertPatient(Patient newPatient)
- {
- int returnValue = 0;
-
- string sqlCmdText = @"INSERT INTO [dbo].[Patients] ([SSN], [FirstName], [LastName], [BirthDate])
- VALUES (@SSN, @FirstName, @LastName, @BirthDate);";
-
- SqlCommand sqlCmd = new SqlCommand(sqlCmdText);
-
- SqlParameter paramSSN = new SqlParameter(@"@SSN", newPatient.SSN);
- paramSSN.DbType = DbType.AnsiStringFixedLength;
- paramSSN.Direction = ParameterDirection.Input;
- paramSSN.Size = 11;
-
- SqlParameter paramFirstName = new SqlParameter(@"@FirstName", newPatient.FirstName);
- paramFirstName.DbType = DbType.String;
- paramFirstName.Direction = ParameterDirection.Input;
-
- SqlParameter paramLastName = new SqlParameter(@"@LastName", newPatient.LastName);
- paramLastName.DbType = DbType.String;
- paramLastName.Direction = ParameterDirection.Input;
-
- SqlParameter paramBirthDate = new SqlParameter(@"@BirthDate", newPatient.BirthDate);
- paramBirthDate.SqlDbType = SqlDbType.Date;
- paramBirthDate.Direction = ParameterDirection.Input;
-
- sqlCmd.Parameters.Add(paramSSN);
- sqlCmd.Parameters.Add(paramFirstName);
- sqlCmd.Parameters.Add(paramLastName);
- sqlCmd.Parameters.Add(paramBirthDate);
-
- using (sqlCmd.Connection = new SqlConnection(connectionString))
- {
- try
- {
- sqlCmd.Connection.Open();
- sqlCmd.ExecuteNonQuery();
- }
- catch (Exception ex)
- {
- returnValue = 1;
- Console.WriteLine("The following error was encountered: ");
- Console.WriteLine(ex.Message);
- Console.WriteLine(Environment.NewLine + "Press Enter key to exit");
- Console.ReadLine();
- Environment.Exit(1); // Use a non-zero exit code to signal failure.
- }
- }
- return returnValue;
- }
-
- static List<Patient> SelectAllPatients()
- {
- List<Patient> patients = new List<Patient>();
-
- SqlCommand sqlCmd = new SqlCommand(
- "SELECT [SSN], [FirstName], [LastName], [BirthDate] FROM [dbo].[Patients]");
-
- using (sqlCmd.Connection = new SqlConnection(connectionString))
- {
- try
- {
- sqlCmd.Connection.Open();
- SqlDataReader reader = sqlCmd.ExecuteReader();
-
- if (reader.HasRows)
- {
- while (reader.Read())
- {
- patients.Add(new Patient()
- {
- SSN = reader[0].ToString(),
- FirstName = reader[1].ToString(),
- LastName = reader["LastName"].ToString(),
- BirthDate = (DateTime)reader["BirthDate"]
- });
- }
- }
- }
- catch (Exception)
- {
- throw;
- }
- }
-
- return patients;
- }
-
- static Patient SelectPatientBySSN(string ssn)
- {
- Patient patient = new Patient();
-
- SqlCommand sqlCmd = new SqlCommand(
- "SELECT [SSN], [FirstName], [LastName], [BirthDate] FROM [dbo].[Patients] WHERE [SSN]=@SSN");
-
- SqlParameter paramSSN = new SqlParameter(@"@SSN", ssn);
- paramSSN.DbType = DbType.AnsiStringFixedLength;
- paramSSN.Direction = ParameterDirection.Input;
- paramSSN.Size = 11;
-
- sqlCmd.Parameters.Add(paramSSN);
-
- using (sqlCmd.Connection = new SqlConnection(connectionString))
- {
- try
- {
- sqlCmd.Connection.Open();
- SqlDataReader reader = sqlCmd.ExecuteReader();
-
- if (reader.HasRows)
- {
- while (reader.Read())
- {
- patient = new Patient()
- {
- SSN = reader[0].ToString(),
- FirstName = reader[1].ToString(),
- LastName = reader["LastName"].ToString(),
- BirthDate = (DateTime)reader["BirthDate"]
- };
- }
- }
- else
- {
- patient = null;
- }
- }
- catch (Exception)
- {
- throw;
- }
- }
- return patient;
- }
-
- // This method simply deletes all records in the Patients table to reset our demo.
- static int ResetPatientsTable()
- {
- int returnValue = 0;
-
- SqlCommand sqlCmd = new SqlCommand("DELETE FROM Patients");
- using (sqlCmd.Connection = new SqlConnection(connectionString))
- {
- try
- {
- sqlCmd.Connection.Open();
- sqlCmd.ExecuteNonQuery();
-
- }
- catch (Exception)
- {
- returnValue = 1;
- }
- }
- return returnValue;
- }
- }
-
- class Patient
- {
- public string SSN { get; set; }
- public string FirstName { get; set; }
- public string LastName { get; set; }
- public DateTime BirthDate { get; set; }
- }
-}
-```
-
-## Verify that the data is encrypted
-
-You can quickly check that the actual data on the server is encrypted by querying the **Patients** data with SSMS. (Use your current connection where the column encryption setting is not yet enabled.)
-
-Run the following query on the Clinic database.
-
-```tsql
-SELECT FirstName, LastName, SSN, BirthDate FROM Patients;
-```
-
-You can see that the encrypted columns do not contain any plaintext data.
-
- ![Screenshot that shows encrypted data in the encrypted columns.](./media/always-encrypted-certificate-store-configure/ssms-encrypted.png)
-
-To use SSMS to access the plaintext data, you can add the **Column Encryption Setting=enabled** parameter to the connection.
-
-1. In SSMS, right-click your server in **Object Explorer**, and then click **Disconnect**.
-2. Click **Connect** > **Database Engine** to open the **Connect to Server** window, and then click **Options**.
-3. Click **Additional Connection Parameters** and type **Column Encryption Setting=enabled**.
-
- ![Screenshot that shows the Additional Connection Parameters tab with Column Encryption Setting=enabled typed in the box.](./media/always-encrypted-certificate-store-configure/ssms-connection-parameter.png)
-4. Run the following query on the **Clinic** database.
-
- ```tsql
- SELECT FirstName, LastName, SSN, BirthDate FROM Patients;
- ```
-
- You can now see the plaintext data in the encrypted columns.
-
- ![Screenshot that shows the plaintext data in the previously encrypted columns.](./media/always-encrypted-certificate-store-configure/ssms-plaintext.png)
-
-> [!NOTE]
-> If you connect with SSMS (or any client) from a different computer, it will not have access to the encryption keys and will not be able to decrypt the data.
-
-## Next steps
-
-After you create a database that uses Always Encrypted, you may want to do the following:
-
-* Run this sample from a different computer. It won't have access to the encryption keys, so it will not have access to the plaintext data and will not run successfully.
-* [Rotate and clean up your keys](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio).
-* [Migrate data that is already encrypted with Always Encrypted](/sql/relational-databases/security/encryption/migrate-sensitive-data-protected-by-always-encrypted).
-* [Deploy Always Encrypted certificates to other client machines](/sql/relational-databases/security/encryption/create-and-store-column-master-keys-always-encrypted#Anchor_1) (see the "Making Certificates Available to Applications and Users" section).
-
-## Related information
-
-* [Always Encrypted (client development)](/sql/relational-databases/security/encryption/always-encrypted-client-development)
-* [Transparent Data Encryption](/sql/relational-databases/security/encryption/transparent-data-encryption)
-* [SQL Server Encryption](/sql/relational-databases/security/encryption/sql-server-encryption)
-* [Always Encrypted Wizard](/sql/relational-databases/security/encryption/always-encrypted-wizard)
-* [Always Encrypted Blog](/archive/blogs/sqlsecurity/always-encrypted-key-metadata)
azure-sql Always Encrypted Enclaves Configure Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/always-encrypted-enclaves-configure-attestation.md
- Title: "Configure attestation for Always Encrypted using Azure Attestation"
-description: "Configure Azure Attestation for Always Encrypted with secure enclaves in Azure SQL Database."
-keywords: encrypt data, sql encryption, database encryption, sensitive data, Always Encrypted, secure enclaves, SGX, attestation
-ms.reviewer: vanto
Previously updated : 07/14/2021
-# Configure attestation for Always Encrypted using Azure Attestation
-
-[Microsoft Azure Attestation](../../attestation/overview.md) is a solution for attesting Trusted Execution Environments (TEEs), including Intel Software Guard Extensions (Intel SGX) enclaves.
-
-To use Azure Attestation for attesting Intel SGX enclaves used for [Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database, you need to:
-
-1. Create an [attestation provider](../../attestation/basic-concepts.md#attestation-provider) and configure it with the recommended attestation policy.
-
-2. Determine the attestation URL and share it with application administrators.
-
-> [!NOTE]
-> Configuring attestation is the responsibility of the attestation administrator. See [Roles and responsibilities when configuring SGX enclaves and attestation](always-encrypted-enclaves-plan.md#roles-and-responsibilities-when-configuring-sgx-enclaves-and-attestation).
-
-## Create and configure an attestation provider
-
-An [attestation provider](../../attestation/basic-concepts.md#attestation-provider) is a resource in Azure Attestation that evaluates [attestation requests](../../attestation/basic-concepts.md#attestation-request) against [attestation policies](../../attestation/basic-concepts.md#attestation-request) and issues [attestation tokens](../../attestation/basic-concepts.md#attestation-token).
-
-Attestation policies are specified using the [claim rule grammar](../../attestation/claim-rule-grammar.md).
-
-> [!IMPORTANT]
-> An attestation provider is created with the default policy for Intel SGX enclaves, which does not validate the code running inside the enclave. Microsoft strongly advises that you set the recommended policy below, and not use the default policy, for Always Encrypted with secure enclaves.
-
-Microsoft recommends the following policy for attesting Intel SGX enclaves used for Always Encrypted in Azure SQL Database:
-
-```output
-version= 1.0;
-authorizationrules
-{
- [ type=="x-ms-sgx-is-debuggable", value==false ]
- && [ type=="x-ms-sgx-product-id", value==4639 ]
- && [ type=="x-ms-sgx-svn", value>= 0 ]
- && [ type=="x-ms-sgx-mrsigner", value=="e31c9e505f37a58de09335075fc8591254313eb20bb1a27e5443cc450b6e33e5"]
- => permit();
-};
-```
-
-The above policy verifies:
-
-- The enclave inside Azure SQL Database doesn't support debugging.
- > Enclaves can be loaded with debugging disabled or enabled. Debugging support is designed to allow developers to troubleshoot the code running in an enclave. In a production system, debugging could enable an administrator to examine the content of the enclave, which would reduce the level of protection the enclave provides. The recommended policy disables debugging to ensure that if a malicious admin tries to turn on debugging support by taking over the enclave machine, attestation will fail.
-- The product ID of the enclave matches the product ID assigned to Always Encrypted with secure enclaves.
- > Each enclave has a unique product ID that differentiates the enclave from other enclaves. The product ID assigned to the Always Encrypted enclave is 4639.
-- The security version number (SVN) of the library is greater than or equal to 0.
- > The SVN allows Microsoft to respond to potential security bugs identified in the enclave code. In case a security issue is discovered and fixed, Microsoft will deploy a new version of the enclave with a new (incremented) SVN. The above recommended policy will be updated to reflect the new SVN. By updating your policy to match the recommended policy, you can ensure that if a malicious administrator tries to load an older and insecure enclave, attestation will fail.
-- The library in the enclave has been signed using the Microsoft signing key (the value of the x-ms-sgx-mrsigner claim is the hash of the signing key).
- > One of the main goals of attestation is to convince clients that the binary running in the enclave is the binary that is supposed to run. Attestation policies provide two mechanisms for this purpose. One is the **mrenclave** claim which is the hash of the binary that is supposed to run in an enclave. The problem with the **mrenclave** is that the binary hash changes even with trivial changes to the code, which makes it hard to rev the code running in the enclave. Hence, we recommend the use of the **mrsigner**, which is a hash of a key that is used to sign the enclave binary. When Microsoft revs the enclave, the **mrsigner** stays the same as long as the signing key does not change. In this way, it becomes feasible to deploy updated binaries without breaking customers' applications.
-
-> [!IMPORTANT]
-> Microsoft may need to rotate the key used to sign the Always Encrypted enclave binary, which is expected to be a rare event. Before a new version of the enclave binary, signed with a new key, is deployed to Azure SQL Database, this article will be updated to provide a new recommended attestation policy and instructions on how you should update the policy in your attestation providers to ensure your applications continue to work uninterrupted.
-
-For instructions on how to create an attestation provider and configure it with an attestation policy, see:
-
-- [Quickstart: Set up Azure Attestation with Azure portal](../../attestation/quickstart-portal.md)
- > [!IMPORTANT]
- > When you configure your attestation policy with Azure portal, set Attestation Type to `SGX-IntelSDK`.
-- [Quickstart: Set up Azure Attestation with Azure PowerShell](../../attestation/quickstart-powershell.md)
- > [!IMPORTANT]
- > When you configure your attestation policy with Azure PowerShell, set the `Tee` parameter to `SgxEnclave`.
-- [Quickstart: Set up Azure Attestation with Azure CLI](../../attestation/quickstart-azure-cli.md)
- > [!IMPORTANT]
- > When you configure your attestation policy with Azure CLI, set the `attestation-type` parameter to `SGX-IntelSDK`.
-
-## Determine the attestation URL for your attestation policy
-
-After you've configured an attestation policy, you need to share the attestation URL with administrators of applications that use Always Encrypted with secure enclaves in Azure SQL Database. The attestation URL is the `Attest URI` of the attestation provider containing the attestation policy, which looks like this: `https://MyAttestationProvider.wus.attest.azure.net`.
-
-### Use Azure portal to determine the attestation URL
-
-In the Overview pane for your attestation provider, copy the value of the `Attest URI` property to the clipboard.
-
-### Use PowerShell to determine the attestation URL
-
-Use the `Get-AzAttestation` cmdlet to retrieve the attestation provider properties, including the attestation URL (`AttestUri`).
-
-```powershell
-Get-AzAttestation -Name $attestationProviderName -ResourceGroupName $attestationResourceGroupName
-```
-
-For more information, see [Create and manage an attestation provider](../../attestation/quickstart-powershell.md#create-and-manage-an-attestation-provider).
-
-## Next steps
-
-- [Manage keys for Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves-manage-keys)
-
-## See also
-
-- [Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database](always-encrypted-enclaves-getting-started.md)
azure-sql Always Encrypted Enclaves Enable Sgx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/always-encrypted-enclaves-enable-sgx.md
- Title: "Enable Intel SGX for Always Encrypted"
-description: "Learn how to enable Intel SGX for Always Encrypted with secure enclaves in Azure SQL Database by selecting SGX-enabled hardware."
-ms.reviewer: vanto
Previously updated : 04/06/2022
-# Enable Intel SGX for Always Encrypted for your Azure SQL Database
-
-[Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database uses [Intel Software Guard Extensions (Intel SGX)](https://itpeernetwork.intel.com/microsoft-azure-confidential-computing/) enclaves. For Intel SGX to be available, the database must use the [vCore model](service-tiers-vcore.md) and [DC-series](service-tiers-sql-database-vcore.md#dc-series) hardware.
-
-Configuring the DC-series hardware to enable Intel SGX enclaves is the responsibility of the Azure SQL Database administrator. See [Roles and responsibilities when configuring SGX enclaves and attestation](always-encrypted-enclaves-plan.md#roles-and-responsibilities-when-configuring-sgx-enclaves-and-attestation).
-
-> [!NOTE]
-> Intel SGX is not available in hardware configurations other than DC-series. For example, Intel SGX is not available for Gen5 hardware, and it is not available for databases using the [DTU model](service-tiers-dtu.md).
-
-> [!IMPORTANT]
-> Before you configure the DC-series hardware for your database, check the regional availability of DC-series and make sure you understand its performance limitations. For more information, see [DC-series](service-tiers-sql-database-vcore.md#dc-series).
-
-For detailed instructions for how to configure a new or existing database to use a specific hardware configuration, see [Hardware configuration](service-tiers-sql-database-vcore.md#hardware-configuration).
-
-## Next steps
-
-- [Configure Azure Attestation for your Azure SQL database server](always-encrypted-enclaves-configure-attestation.md)
-
-## See also
-
-- [Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database](always-encrypted-enclaves-getting-started.md)
azure-sql Always Encrypted Enclaves Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/always-encrypted-enclaves-getting-started.md
- Title: "Tutorial: Getting started with Always Encrypted with secure enclaves"
-description: This tutorial teaches you how to create a basic environment for Always Encrypted with secure enclaves in Azure SQL Database, and how to encrypt data in place and issue rich confidential queries against encrypted columns using SQL Server Management Studio (SSMS).
-ms.reviewer: vanto
Previously updated : 04/06/2022
-# Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database
-
-This tutorial teaches you how to get started with [Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database. It will show you:
-
-> [!div class="checklist"]
-> - How to create an environment for testing and evaluating Always Encrypted with secure enclaves.
-> - How to encrypt data in place and issue rich confidential queries against encrypted columns using SQL Server Management Studio (SSMS).
-
-## Prerequisites
-
-- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/). You need to be a member of the Contributor role or the Owner role for the subscription to be able to create resources and configure an attestation policy.
-- SQL Server Management Studio (SSMS), version 18.9.1 or later. See [Download SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) for information on how to download SSMS.
-### PowerShell requirements
-
-> [!NOTE]
-> The prerequisites listed in this section apply only if you choose to use PowerShell for some of the steps in this tutorial. If you plan to use Azure portal instead, you can skip this section.
-
-Make sure the following PowerShell modules are installed on your machine.
-
-1. Az version 6.5.0 or later. For details on how to install the Az PowerShell module, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps). To determine the version of the Az module installed on your machine, run the following command from a PowerShell session.
-
- ```powershell
- Get-InstalledModule -Name Az
- ```
-
-The PowerShell Gallery has deprecated Transport Layer Security (TLS) versions 1.0 and 1.1. TLS 1.2 or a later version is recommended. You may receive the following errors if you are using a TLS version lower than 1.2:
-
-- `WARNING: Unable to resolve package source 'https://www.powershellgallery.com/api/v2'`
-- `PackageManagement\Install-Package: No match was found for the specified search criteria and module name.`
-To continue to interact with the PowerShell Gallery, run the following command before running the Install-Module commands:
-
-```powershell
-[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
-```
-
-## Step 1: Create and configure a server and a DC-series database
-
-In this step, you will create a new Azure SQL Database logical server and a new database that uses DC-series hardware, which is required for Always Encrypted with secure enclaves. For more information, see [DC-series](service-tiers-sql-database-vcore.md#dc-series).
-
-# [Portal](#tab/azure-portal)
-
-1. Browse to the [Select SQL deployment option](https://portal.azure.com/#create/Microsoft.AzureSQL) page.
-1. If you are not already signed in to Azure portal, sign in when prompted.
-1. Under **SQL databases**, leave **Resource type** set to **Single database**, and select **Create**.
-
- :::image type="content" source="./media/single-database-create-quickstart/select-deployment.png" alt-text="Add to Azure SQL":::
-
-1. On the **Basics** tab of the **Create SQL Database** form, under **Project details**, select the desired Azure **Subscription**.
-1. For **Resource group**, select **Create new**, enter a name for your resource group, and select **OK**.
-1. For **Database name** enter *ContosoHR*.
-1. For **Server**, select **Create new**, and fill out the **New server** form with the following values:
- **Server name**: Enter *mysqlserver*, and add some characters for uniqueness. We can't provide an exact server name to use because server names must be globally unique for all servers in Azure, not just unique within a subscription. Enter something like *mysqlserver135*, and the portal will let you know whether it's available.
- - **Server admin login**: Enter an admin login name, for example: *azureuser*.
- - **Password**: Enter a password that meets requirements, and enter it again in the **Confirm password** field.
- - **Location**: Select a location from the dropdown list.
- > [!IMPORTANT]
- > You need to select a location (an Azure region) that supports both the DC-series hardware and Microsoft Azure Attestation. For the list of regions supporting DC-series, see [DC-series availability](service-tiers-sql-database-vcore.md#dc-series). For the [regional availability of Microsoft Azure Attestation](https://azure.microsoft.com/global-infrastructure/services/?products=azure-attestation), see Azure products available by region.
-
- Select **OK**.
-1. Leave **Want to use SQL elastic pool** set to **No**.
-1. Under **Compute + storage**, select **Configure database**, and click **Change configuration**.
-
- :::image type="content" source="./media/always-encrypted-enclaves/portal-configure-database.png" alt-text="Configure database" lightbox="./media/always-encrypted-enclaves/portal-configure-database.png":::
-
-1. Select the **DC-series** hardware configuration, and then select **OK**.
-
- :::image type="content" source="./media/always-encrypted-enclaves/portal-configure-dc-series-database.png" alt-text="Configure DC-series database":::
-
-1. Select **Apply**.
-1. Back on the **Basics** tab, verify **Compute + storage** is set to **General Purpose**, **DC, 2 vCores, 32 GB storage**.
-1. Select **Next: Networking** at the bottom of the page.
-
- :::image type="content" source="./media/always-encrypted-enclaves/portal-configure-dc-series-database-basics.png" alt-text="Configure DC-series database - basics":::
-
-1. On the **Networking** tab, for **Connectivity method**, select **Public endpoint**.
-1. For **Firewall rules**, set **Add current client IP address** to **Yes**. Leave **Allow Azure services and resources to access this server** set to **No**.
-1. Select **Review + create** at the bottom of the page.
-
- :::image type="content" source="./media/always-encrypted-enclaves/portal-configure-database-networking.png" alt-text="New SQL database - networking":::
-
-1. On the **Review + create** page, after reviewing, select **Create**.
-
-# [PowerShell](#tab/azure-powershell)
-
-1. Open a PowerShell console and import the required version of Az.
-
- ```PowerShell
- Import-Module "Az" -MinimumVersion "6.5.0"
- ```
-
-1. Sign into Azure. If needed, [switch to the subscription](/powershell/azure/manage-subscriptions-azureps) you are using for this tutorial.
-
- ```PowerShell
- Connect-AzAccount
- $subscriptionId = "<your subscription ID>"
- $context = Set-AzContext -Subscription $subscriptionId
- ```
-
-1. Create a new resource group.
-
- > [!IMPORTANT]
- > You need to create your resource group in a region (location) that supports both the DC-series hardware and Microsoft Azure Attestation. For the list of regions supporting DC-series, see [DC-series availability](service-tiers-sql-database-vcore.md#dc-series). For the [regional availability of Microsoft Azure Attestation](https://azure.microsoft.com/global-infrastructure/services/?products=azure-attestation), see Azure products available by region.
-
- ```powershell
- $resourceGroupName = "<your new resource group name>"
- $location = "<Azure region supporting DC-series and Microsoft Azure Attestation>"
- New-AzResourceGroup -Name $resourceGroupName -Location $location
- ```
-
-1. Create an Azure SQL logical server. When prompted, enter the server administrator name and a password. Make sure you remember the admin name and the password; you will need them later to connect to the server.
-
- ```powershell
- $serverName = "<your server name>"
- New-AzSqlServer -ServerName $serverName -ResourceGroupName $resourceGroupName -Location $location
- ```
-
-1. Create a server firewall rule that allows access from the specified IP range.
-
- ```powershell
- $startIp = "<start of IP range>"
- $endIp = "<end of IP range>"
- $serverFirewallRule = New-AzSqlServerFirewallRule -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -FirewallRuleName "AllowedIPs" -StartIpAddress $startIp -EndIpAddress $endIp
- ```
-
-1. Create a DC-series database.
-
- ```powershell
- $databaseName = "ContosoHR"
- $edition = "GeneralPurpose"
- $vCore = 2
- $generation = "DC"
- New-AzSqlDatabase -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -DatabaseName $databaseName `
- -Edition $edition `
- -Vcore $vCore `
- -ComputeGeneration $generation
- ```
---
-## Step 2: Configure an attestation provider
-
-In this step, you'll create and configure an attestation provider in Microsoft Azure Attestation. This is needed to attest the secure enclave your database uses.
-
-# [Portal](#tab/azure-portal)
-
-1. Browse to the [Create attestation provider](https://portal.azure.com/#create/Microsoft.Attestation) page.
-1. On the **Create attestation provider** page, provide the following inputs:
-
- - **Subscription**: Choose the same subscription you created the Azure SQL logical server in.
- - **Resource Group**: Choose the same resource group you created the Azure SQL logical server in.
- **Name**: Enter *myattestprovider*, and add some characters for uniqueness. We can't provide an exact attestation provider name to use because names must be globally unique. Enter something like *myattestprovider12345*, and the portal will let you know whether it's available.
- **Location**: Choose the location in which you created the Azure SQL logical server.
- - **Policy signer certificates file**: Leave this field empty, as you will configure an unsigned policy.
-
-1. After you provide the required inputs, select **Review + create**.
-
- :::image type="content" source="./media/always-encrypted-enclaves/portal-create-attestation-provider-basics.png" alt-text="Create attestation provider":::
-
-1. Select **Create**.
-1. Once the attestation provider is created, click **Go to resource**.
-1. On the **Overview** tab for the attestation provider, copy the value of the **Attest URI** property to the clipboard and save it in a file. This is the attestation URL that you will need in later steps.
-
- :::image type="content" source="./media/always-encrypted-enclaves/portal-attest-uri.png" alt-text="Attestation URL":::
-
-1. Select **Policy** on the resource menu on the left side of the window or on the lower pane.
-1. Set **Attestation Type** to **SGX-IntelSDK**.
-1. Select **Configure** on the upper menu.
-
- :::image type="content" source="./media/always-encrypted-enclaves/portal-configure-attestation-policy.png" alt-text="Configure attestation policy":::
-
-1. Set **Policy Format** to **Text**. Leave **Policy options** set to **Enter policy**.
-1. In the **Policy text** field, replace the default policy with the policy below. For information about this policy, see [Create and configure an attestation provider](always-encrypted-enclaves-configure-attestation.md#create-and-configure-an-attestation-provider).
-
- ```output
- version= 1.0;
- authorizationrules
- {
- [ type=="x-ms-sgx-is-debuggable", value==false ]
- && [ type=="x-ms-sgx-product-id", value==4639 ]
- && [ type=="x-ms-sgx-svn", value>= 0 ]
- && [ type=="x-ms-sgx-mrsigner", value=="e31c9e505f37a58de09335075fc8591254313eb20bb1a27e5443cc450b6e33e5"]
- => permit();
- };
- ```
-
-1. Click **Save**.
-
- :::image type="content" source="./media/always-encrypted-enclaves/portal-edit-attestation-policy.png" alt-text="Edit attestation policy":::
-
-1. Click **Refresh** on the upper menu to view the configured policy.
-
-# [PowerShell](#tab/azure-powershell)
-
-1. Copy the attestation policy below and save it in a text file (.txt). For information about this policy, see [Create and configure an attestation provider](always-encrypted-enclaves-configure-attestation.md#create-and-configure-an-attestation-provider).
-
- ```output
- version= 1.0;
- authorizationrules
- {
- [ type=="x-ms-sgx-is-debuggable", value==false ]
- && [ type=="x-ms-sgx-product-id", value==4639 ]
- && [ type=="x-ms-sgx-svn", value>= 0 ]
- && [ type=="x-ms-sgx-mrsigner", value=="e31c9e505f37a58de09335075fc8591254313eb20bb1a27e5443cc450b6e33e5"]
- => permit();
- };
- ```
-
-1. Import the required version of `Az.Attestation`.
-
- ```powershell
- Import-Module "Az.Attestation" -MinimumVersion "0.1.8"
- ```
-
-1. Create an attestation provider.
-
- ```powershell
- $attestationProviderName = "<your attestation provider name>"
- New-AzAttestation -Name $attestationProviderName -ResourceGroupName $resourceGroupName -Location $location
- ```
-1. Assign yourself to the Attestation Contributor role for the attestation provider, to ensure you have permissions to configure an attestation policy.
-
- ```powershell
- New-AzRoleAssignment -SignInName $context.Account.Id `
- -RoleDefinitionName "Attestation Contributor" `
- -ResourceName $attestationProviderName `
- -ResourceType "Microsoft.Attestation/attestationProviders" `
- -ResourceGroupName $resourceGroupName
- ```
-
-1. Configure your attestation policy.
-
- ```powershell
- $policyFile = "<the pathname of the file from step 1 in this section>"
- $teeType = "SgxEnclave"
- $policyFormat = "Text"
- $policy=Get-Content -path $policyFile -Raw
- Set-AzAttestationPolicy -Name $attestationProviderName `
- -ResourceGroupName $resourceGroupName `
- -Tee $teeType `
- -Policy $policy `
- -PolicyFormat $policyFormat
- ```
-
-1. Retrieve the attestation URL (the Attest URI of your attestation provider).
-
- ```powershell
- $attestationUrl = (Get-AzAttestation -Name $attestationProviderName -ResourceGroupName $resourceGroupName).AttestUri
- Write-Host "Your attestation URL is: $attestationUrl"
- ```
-
- The attestation URL should look like this: `https://myattestprovider12345.eus.attest.azure.net`
----
-## Step 3: Populate your database
-
-In this step, you'll create a table and populate it with some data that you'll later encrypt and query.
-
-1. Open SSMS and connect to the **ContosoHR** database in the Azure SQL logical server you created **without** Always Encrypted enabled in the database connection.
- 1. In the **Connect to Server** dialog, specify the fully qualified name of your server (for example, *myserver135.database.windows.net*), and enter the administrator user name and the password you specified when you created the server.
- 2. Click **Options >>** and select the **Connection Properties** tab. Make sure to select the **ContosoHR** database (not the default, master database).
- 3. Select the **Always Encrypted** tab.
- 4. Make sure the **Enable Always Encrypted (column encryption)** checkbox is **not** selected.
-
- :::image type="content" source="./media/always-encrypted-enclaves/connect-without-always-encrypted-ssms.png" alt-text="Connect without Always Encrypted":::
-
- 5. Click **Connect**.
-
-2. Create a new table named **Employees**.
-
- ```sql
- CREATE SCHEMA [HR];
- GO
-
- CREATE TABLE [HR].[Employees]
- (
- [EmployeeID] [int] IDENTITY(1,1) NOT NULL,
- [SSN] [char](11) NOT NULL,
- [FirstName] [nvarchar](50) NOT NULL,
- [LastName] [nvarchar](50) NOT NULL,
- [Salary] [money] NOT NULL
- ) ON [PRIMARY];
- GO
- ```
-
-3. Add a few employee records to the **Employees** table.
-
- ```sql
- INSERT INTO [HR].[Employees]
- ([SSN]
- ,[FirstName]
- ,[LastName]
- ,[Salary])
- VALUES
- ('795-73-9838'
- , N'Catherine'
- , N'Abel'
- , $31692);
-
- INSERT INTO [HR].[Employees]
- ([SSN]
- ,[FirstName]
- ,[LastName]
- ,[Salary])
- VALUES
- ('990-00-6818'
- , N'Kim'
- , N'Abercrombie'
- , $55415);
- ```
-
-## Step 4: Provision enclave-enabled keys
-
-In this step, you'll create a column master key and a column encryption key that allow enclave computations.
-
-1. Using the SSMS instance from the previous step, in **Object Explorer**, expand your database and navigate to **Security** > **Always Encrypted Keys**.
-1. Provision a new enclave-enabled column master key:
- 1. Right-click **Always Encrypted Keys** and select **New Column Master Key...**.
- 2. Select your column master key name: **CMK1**.
- 3. Make sure you select either **Windows Certificate Store (Current User or Local Machine)** or **Azure Key Vault**.
- 4. Select **Allow enclave computations**.
- 5. If you selected Azure Key Vault, sign into Azure and select your key vault. For more information on how to create a key vault for Always Encrypted, see [Manage your key vaults from Azure portal](/archive/blogs/kv/manage-your-key-vaults-from-new-azure-portal).
- 6. Select your certificate or Azure Key Vault key if it already exists, or click the **Generate Certificate** button to create a new one.
- 7. Select **OK**.
-
- :::image type="content" source="./media/always-encrypted-enclaves/allow-enclave-computations.png" alt-text="Allow enclave computations":::
-
-1. Create a new enclave-enabled column encryption key:
-
- 1. Right-click **Always Encrypted Keys** and select **New Column Encryption Key**.
- 2. Enter a name for the new column encryption key: **CEK1**.
- 3. In the **Column master key** dropdown, select the column master key you created in the previous steps.
- 4. Select **OK**.
-
-## Step 5: Encrypt some columns in place
-
-In this step, you'll encrypt the data stored in the **SSN** and **Salary** columns inside the server-side enclave, and then test a SELECT query on the data.
-
-1. Open a new SSMS instance and connect to your database **with** Always Encrypted enabled for the database connection.
- 1. Start a new instance of SSMS.
- 2. In the **Connect to Server** dialog, specify the fully qualified name of your server (for example, *myserver135.database.windows.net*), and enter the administrator user name and the password you specified when you created the server.
- 3. Click **Options >>** and select the **Connection Properties** tab. Make sure to select the **ContosoHR** database (not the default, master database).
- 4. Select the **Always Encrypted** tab.
- 5. Make sure the **Enable Always Encrypted (column encryption)** checkbox **is** selected.
- 6. Specify the enclave attestation URL that you obtained in [Step 2: Configure an attestation provider](#step-2-configure-an-attestation-provider), as shown in the following screenshot.
-
- :::image type="content" source="./media/always-encrypted-enclaves/connect-to-server-configure-attestation.png" alt-text="Connect with attestation":::
-
- 7. Select **Connect**.
- 8. If you're prompted to enable Parameterization for Always Encrypted queries, select **Enable**.
-
-1. Using the same SSMS instance (with Always Encrypted enabled), open a new query window and encrypt the **SSN** and **Salary** columns by running the below statements.
-
- ```sql
- ALTER TABLE [HR].[Employees]
- ALTER COLUMN [SSN] [char] (11) COLLATE Latin1_General_BIN2
- ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK1], ENCRYPTION_TYPE = Randomized, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
- WITH
- (ONLINE = ON);
-
- ALTER TABLE [HR].[Employees]
- ALTER COLUMN [Salary] [money]
- ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK1], ENCRYPTION_TYPE = Randomized, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
- WITH
- (ONLINE = ON);
-
- ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
- ```
-
- > [!NOTE]
-> Notice the ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE statement in the above script, which clears the query plan cache for the database. After you have altered the table, you need to clear the plans for all batches and stored procedures that access the table to refresh parameter encryption information.
-
-1. To verify the **SSN** and **Salary** columns are now encrypted, open a new query window in the SSMS instance **without** Always Encrypted enabled for the database connection and execute the below statement. The query window should return encrypted values in the **SSN** and **Salary** columns. If you execute the same query using the SSMS instance with Always Encrypted enabled, you should see the data decrypted.
-
- ```sql
- SELECT * FROM [HR].[Employees];
- ```
-
-## Step 6: Run rich queries against encrypted columns
-
-You can run rich queries against the encrypted columns. Some query processing will be performed inside your server-side enclave.
-
-1. In the SSMS instance **with** Always Encrypted enabled, make sure Parameterization for Always Encrypted is also enabled.
- 1. Select **Tools** from the main menu of SSMS.
- 2. Select **Options...**.
- 3. Navigate to **Query Execution** > **SQL Server** > **Advanced**.
- 4. Ensure that **Enable Parameterization for Always Encrypted** is checked.
- 5. Select **OK**.
-2. Open a new query window, paste in the below query, and execute. The query should return plaintext values and rows meeting the specified search criteria.
-
- ```sql
- DECLARE @SSNPattern [char](11) = '%6818';
- DECLARE @MinSalary [money] = $1000;
- SELECT * FROM [HR].[Employees]
- WHERE SSN LIKE @SSNPattern AND [Salary] >= @MinSalary;
- ```
-
-3. Try the same query again in the SSMS instance that doesn't have Always Encrypted enabled. The query should fail, because without Always Encrypted enabled on the connection, the client sends plaintext parameters that can't be compared against the encrypted columns.
-
-## Next steps
-
-After completing this tutorial, you can go to one of the following tutorials:
-
-- [Tutorial: Develop a .NET application using Always Encrypted with secure enclaves](/sql/connect/ado-net/sql/tutorial-always-encrypted-enclaves-develop-net-apps)
-- [Tutorial: Develop a .NET Framework application using Always Encrypted with secure enclaves](/sql/relational-databases/security/tutorial-always-encrypted-enclaves-develop-net-framework-apps)
-- [Tutorial: Creating and using indexes on enclave-enabled columns using randomized encryption](/sql/relational-databases/security/tutorial-creating-using-indexes-on-enclave-enabled-columns-using-randomized-encryption)
-## See also
-
-- [Configure and use Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/configure-always-encrypted-enclaves)
azure-sql Always Encrypted Enclaves Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/always-encrypted-enclaves-plan.md
- Title: "Plan for Intel SGX enclaves and attestation in Azure SQL Database"
-description: "Plan the deployment of Always Encrypted with secure enclaves in Azure SQL Database."
-ms.reviewer: vanto
Previously updated : 04/06/2022
-# Plan for Intel SGX enclaves and attestation in Azure SQL Database
-
-[Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database uses [Intel Software Guard Extensions (Intel SGX)](https://itpeernetwork.intel.com/microsoft-azure-confidential-computing/) enclaves and requires [Microsoft Azure Attestation](/sql/relational-databases/security/encryption/always-encrypted-enclaves#secure-enclave-attestation).
-
-## Plan for Intel SGX in Azure SQL Database
-
-Intel SGX is a hardware-based trusted execution environment technology. Intel SGX is available for databases that use the [vCore model](service-tiers-sql-database-vcore.md) and [DC-series](service-tiers-sql-database-vcore.md#dc-series) hardware. Therefore, to ensure you can use Always Encrypted with secure enclaves in your database, either select DC-series hardware when you create the database or update your existing database to use DC-series hardware.
-
-> [!NOTE]
-> Intel SGX is not available in hardware other than DC-series. For example, Intel SGX is not available for Gen5 hardware, and it is not available for databases using the [DTU model](service-tiers-dtu.md).
-
-> [!IMPORTANT]
-> Before you configure the DC-series hardware for your database, check the regional availability of DC-series and make sure you understand its performance limitations. For details, see [DC-series](service-tiers-sql-database-vcore.md#dc-series).
-
-## Plan for attestation in Azure SQL Database
-
-[Microsoft Azure Attestation](../../attestation/overview.md) is a solution for attesting Trusted Execution Environments (TEEs), including Intel SGX enclaves in Azure SQL databases using DC-series hardware.
-
-To use Azure Attestation for attesting Intel SGX enclaves in Azure SQL Database, you need to create an [attestation provider](../../attestation/basic-concepts.md#attestation-provider) and configure it with the Microsoft-provided attestation policy. See [Configure attestation for Always Encrypted using Azure Attestation](always-encrypted-enclaves-configure-attestation.md).
-
-## Roles and responsibilities when configuring SGX enclaves and attestation
-
-Configuring your environment to support Intel SGX enclaves and attestation for Always Encrypted in Azure SQL Database involves setting up components of different types: Microsoft Azure Attestation, Azure SQL Database, and applications that trigger enclave attestation. Configuring components of each type is performed by users assuming one of the following distinct roles:
-
-- Attestation administrator: creates an attestation provider in Microsoft Azure Attestation, authors the attestation policy, grants the Azure SQL logical server access to the attestation provider, and shares the attestation URL that points to the policy with application administrators.
-- Azure SQL Database administrator: enables SGX enclaves in databases by selecting the DC-series hardware, and provides the attestation administrator with the identity of the Azure SQL logical server that needs to access the attestation provider.
-- Application administrator: configures applications with the attestation URL obtained from the attestation administrator.
-In production environments (handling real sensitive data), it is important that your organization adhere to role separation when configuring attestation, with each distinct role assumed by different people. In particular, if the goal of deploying Always Encrypted in your organization is to reduce the attack surface area by ensuring Azure SQL Database administrators cannot access sensitive data, Azure SQL Database administrators should not control attestation policies.
-
-## Next steps
-
-- [Enable Intel SGX for your Azure SQL database](always-encrypted-enclaves-enable-sgx.md)
-## See also
-
-- [Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database](always-encrypted-enclaves-getting-started.md)
azure-sql Analyze Prevent Deadlocks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/analyze-prevent-deadlocks.md
- Title: Analyze and prevent deadlocks
-description: Learn how to analyze deadlocks and prevent them from reoccurring in Azure SQL Database
Previously updated : 4/8/2022
-# Analyze and prevent deadlocks in Azure SQL Database
-
-This article teaches you how to identify deadlocks in Azure SQL Database, use deadlock graphs and Query Store to identify the queries in the deadlock, and plan and test changes to prevent deadlocks from reoccurring.
-
-This article focuses on identifying and analyzing deadlocks due to lock contention. Learn more about other types of deadlocks in [resources that can deadlock](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#deadlock_resources).
-
-## How deadlocks occur in Azure SQL Database
-
-Each new database in Azure SQL Database has the [read committed snapshot](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current&preserve-view=true#read_committed_snapshot--on--off--1) (RCSI) database setting enabled by default. [Blocking](understand-resolve-blocking.md) between sessions reading data and sessions writing data is minimized under RCSI, which uses row versioning to increase concurrency. However, blocking and deadlocks may still occur in databases in Azure SQL Database because:
-
-- Queries that modify data may block one another.
-- Queries may run under isolation levels that increase blocking. Isolation levels may be specified via client library methods, [query hints](/sql/t-sql/queries/hints-transact-sql-query), or [SET statements](/sql/t-sql/statements/set-transaction-isolation-level-transact-sql) in Transact-SQL.
-- [RCSI may be disabled](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current&preserve-view=true#read_committed_snapshot--on--off--1), causing the database to use shared (S) locks to protect SELECT statements run under the read committed isolation level. This may increase blocking and deadlocks.
-### An example deadlock
-
-A deadlock occurs when two or more tasks permanently block one another because each task has a lock on a resource the other task is trying to lock. A deadlock is also called a cyclic dependency: in the case of a two-task deadlock, transaction A has a dependency on transaction B, and transaction B closes the circle by having a dependency on transaction A.
-
-For example:
-
-1. **Session A** begins an explicit transaction and runs an update statement that acquires an update (U) lock on one row on table `SalesLT.Product` that is [converted to an exclusive (X) lock](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#behavior-when-modifying-data).
-1. **Session B** runs an update statement that modifies the `SalesLT.ProductDescription` table. The update statement joins to the `SalesLT.Product` table to find the correct rows to update.
- - **Session B** acquires an update (U) lock on 72 rows on the `SalesLT.ProductDescription` table.
- - **Session B** needs a shared lock on rows on the table `SalesLT.Product`, including the row that is locked by **Session A**. **Session B** is blocked on `SalesLT.Product`.
-1. **Session A** continues its transaction, and now runs an update against the `SalesLT.ProductDescription` table. **Session A** is blocked by Session B on `SalesLT.ProductDescription`.
--
-All transactions in a deadlock will wait indefinitely unless one of the participating transactions is rolled back, for example, because its session was terminated.
-
-The database engine deadlock monitor periodically checks for tasks that are in a deadlock. If the deadlock monitor detects a cyclic dependency, it chooses one of the tasks as a victim and terminates its transaction with error 1205, "Transaction (Process ID *N*) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction." Breaking the deadlock in this way allows the other task or tasks in the deadlock to complete their transactions.
-
-> [!NOTE]
-> Learn more about the criteria for choosing a deadlock victim in the [Deadlock process list](#deadlock-process-list) section of this article.
-
-The application with the transaction chosen as the deadlock victim should retry the transaction, which usually completes after the other transaction or transactions involved in the deadlock have finished.
-
-It is a best practice to introduce a short, randomized delay before retry to avoid encountering the same deadlock again. Learn more about how to design [retry logic for transient errors](troubleshoot-common-connectivity-issues.md#retry-logic-for-transient-errors).
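-
-A minimal sketch of such retry logic (the attempt count and delay bounds are illustrative assumptions, not guidance from this article):
-
-```cs
-using System;
-using System.Data.SqlClient;
-using System.Threading;
-
-static void ExecuteWithDeadlockRetry(Action work, int maxAttempts = 3)
-{
- Random random = new Random();
- for (int attempt = 1; ; attempt++)
- {
- try
- {
- work();
- return;
- }
- catch (SqlException ex) when (ex.Number == 1205 && attempt < maxAttempts)
- {
- // Error 1205: this session's transaction was chosen as the deadlock
- // victim. Wait a short, randomized interval so the retry doesn't
- // immediately collide with the surviving transaction.
- Thread.Sleep(random.Next(100, 500));
- }
- }
-}
-```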
-
-### Default isolation level in Azure SQL Database
-
-New databases in Azure SQL Database enable read committed snapshot (RCSI) by default. RCSI changes the behavior of the [read committed isolation level](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#database-engine-isolation-levels) to use [row-versioning](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#Row_versioning) to provide statement-level consistency without the use of shared (S) locks for SELECT statements.
-
-With RCSI enabled:
-
-- Statements reading data do not block statements modifying data.
-- Statements modifying data do not block statements reading data.
-[Snapshot isolation level](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current&preserve-view=true#b-enable-snapshot-isolation-on-a-database) is also enabled by default for new databases in Azure SQL Database. Snapshot isolation is an additional row versioning-based isolation level that provides transaction-level consistency for data and uses row versions to select rows to update. To use snapshot isolation, queries or connections must explicitly set their transaction isolation level to `SNAPSHOT`. This may only be done when snapshot isolation is enabled for the database.
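-
-For example, a .NET client can opt a transaction into snapshot isolation as in this hedged sketch (it assumes snapshot isolation is enabled on the database, a valid `connectionString`, and the `AdventureWorksLT` schema used later in this article):
-
-```cs
-using System.Data;
-using System.Data.SqlClient;
-
-using (SqlConnection connection = new SqlConnection(connectionString))
-{
- connection.Open();
- // Equivalent to SET TRANSACTION ISOLATION LEVEL SNAPSHOT for this
- // transaction: reads see a consistent snapshot and take no shared (S) locks.
- using (SqlTransaction transaction = connection.BeginTransaction(IsolationLevel.Snapshot))
- {
- SqlCommand command = new SqlCommand(
- "SELECT COUNT(*) FROM SalesLT.Product", connection, transaction);
- int productCount = (int)command.ExecuteScalar();
- transaction.Commit();
- }
-}
-```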
-
-You can identify if RCSI and/or snapshot isolation are enabled with Transact-SQL. Connect to your database in Azure SQL Database and run the following query:
-
-```sql
-SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
-FROM sys.databases
-WHERE name = DB_NAME();
-GO
-```
-
-If RCSI is enabled, the `is_read_committed_snapshot_on` column will return the value **1**. If snapshot isolation is enabled, the `snapshot_isolation_state_desc` column will return the value **ON**.
-
-If [RCSI has been disabled](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current&preserve-view=true#read_committed_snapshot--on--off--1) for a database in Azure SQL Database, investigate why RCSI was disabled before re-enabling it. Application code may have been written expecting that queries reading data will be blocked by queries writing data, resulting in incorrect results from race conditions when RCSI is enabled.
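-
-If you've confirmed that the application behaves correctly under RCSI, re-enabling it is a single `ALTER DATABASE` statement. A minimal sketch; the termination clause rolls back in-flight transactions so the change can complete:
-
-```sql
-ALTER DATABASE CURRENT
-    SET READ_COMMITTED_SNAPSHOT ON
-    WITH ROLLBACK IMMEDIATE;
-```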
-
-### Interpreting deadlock events
-
-A deadlock event is emitted after the deadlock manager in Azure SQL Database detects a deadlock and selects a transaction as the victim. In other words, if you set up alerts for deadlocks, the notification fires after an individual deadlock has been resolved. There is no user action that needs to be taken for that deadlock. Applications should be written to include [retry logic](troubleshoot-common-connectivity-issues.md#retry-logic-for-transient-errors) so that they automatically continue after receiving error 1205, "Transaction (Process ID *N*) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction."
-
-It's useful to set up alerts, however, as deadlocks may reoccur. Deadlock alerts enable you to investigate if a pattern of repeat deadlocks is happening in your database, in which case you may choose to take action to prevent deadlocks from reoccurring. Learn more about alerting in the [Monitor and alert on deadlocks](#monitor-and-alert-on-deadlocks) section of this article.
-
-### Top methods to prevent deadlocks
-
-The lowest risk approach to preventing deadlocks from reoccurring is generally to [tune nonclustered indexes](#prevent-a-deadlock-from-reoccurring) to optimize queries involved in the deadlock.
-
-- Risk is low for this approach because tuning nonclustered indexes doesn't require changes to the query code itself, reducing the risk of a user error when rewriting Transact-SQL that causes incorrect data to be returned to the user.
-- Effective nonclustered index tuning helps queries find the data to read and modify more efficiently. By reducing the amount of data that a query needs to access, the likelihood of blocking is reduced and deadlocks can often be prevented.
-
-In some cases, creating or tuning a clustered index can reduce blocking and deadlocks. Because the clustered index is included in all nonclustered index definitions, creating or modifying a clustered index can be an IO intensive and time consuming operation on larger tables with existing nonclustered indexes. Learn more about [Clustered index design guidelines](/sql/relational-databases/sql-server-index-design-guide#Clustered).
-
-When index tuning isn't successful at preventing deadlocks, other methods are available:
-
-- If the deadlock occurs only when a particular plan is chosen for one of the queries involved in the deadlock, [forcing a query plan](/sql/relational-databases/system-stored-procedures/sp-query-store-force-plan-transact-sql) with Query Store may prevent deadlocks from reoccurring; a sketch of this approach follows this list.
-- Rewriting Transact-SQL for one or more transactions involved in the deadlock can also help prevent deadlocks. Breaking apart explicit transactions into smaller transactions requires careful coding and testing to ensure data validity when concurrent modifications occur.
-
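-For example, forcing a plan involves identifying the query and plan IDs in Query Store and then forcing the chosen plan. The IDs in the following sketch are placeholders that you must replace with values from your own Query Store:
-
-```sql
--- Find candidate queries and their plans in Query Store.
-SELECT q.query_id, p.plan_id, qt.query_sql_text
-FROM sys.query_store_query AS q
-JOIN sys.query_store_query_text AS qt
-    ON q.query_text_id = qt.query_text_id
-JOIN sys.query_store_plan AS p
-    ON q.query_id = p.query_id;
-
--- Force the chosen plan for the chosen query (placeholder IDs).
-EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 17;
-```
-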
-Learn more about each of these approaches in the [Prevent a deadlock from reoccurring](#prevent-a-deadlock-from-reoccurring) section of this article.
-
-## Monitor and alert on deadlocks
-
-In this article, we will use the `AdventureWorksLT` sample database to set up alerts for deadlocks, cause an example deadlock, analyze the deadlock graph for the example deadlock, and test changes to prevent the deadlock from reoccurring.
-
-We'll use the [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) (SSMS) client for the examples, as it contains functionality to display deadlock graphs in an interactive visual mode. You can use other clients such as [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) to follow along, but you may only be able to view deadlock graphs as XML.
--
-### Create the AdventureWorksLT database
-
-To follow along with the examples, create a new database in Azure SQL Database and select **Sample** data as the **Data source**.
-
-For detailed instructions on how to create `AdventureWorksLT` with the Azure portal, Azure CLI, or PowerShell, select the approach of your choice in [Quickstart: Create an Azure SQL Database single database](single-database-create-quickstart.md).
-
-### Set up deadlock alerts in the Azure portal
-
-To set up alerts for deadlock events, follow the steps in the article [Create alerts for Azure SQL Database and Azure Synapse Analytics using the Azure portal](alerts-insights-configure-portal.md).
-
-Select **Deadlocks** as the signal name for the alert. Configure the **Action group** to notify you using the method of your choice, such as the **Email/SMS/Push/Voice** action type.
-
-## Collect deadlock graphs in Azure SQL Database with Extended Events
-
-Deadlock graphs are a rich source of information regarding the processes and locks involved in a deadlock. To collect deadlock graphs with Extended Events (XEvents) in Azure SQL Database, capture the `sqlserver.database_xml_deadlock_report` event.
-
-You can collect deadlock graphs with XEvents using either the [ring buffer target](xevent-code-ring-buffer.md) or an [event file target](xevent-code-event-file.md). Considerations for selecting the appropriate target type are summarized in the following table:
--
-|Approach |Benefits |Considerations |Usage scenarios |
-|||||
-|Ring buffer target | <ul><li>Simple setup with Transact-SQL only.</li></ul> | <ul><li>Event data is cleared when the XEvents session is stopped for any reason, such as taking the database offline or a database failover.</li><li>Database resources are used to maintain data in the ring buffer and to query session data.</li></ul> | <ul><li>Collect sample trace data for testing and learning.</li><li>Create for short term needs if you cannot set up a session using an event file target immediately.</li><li>Use as a "landing pad" for trace data, when you have set up an automated process to persist trace data into a table.</li> </ul> |
|Event file target | <ul><li>Persists event data to a blob in Azure Storage so data is available even after the session is stopped.</li><li>Event files may be downloaded from the Azure portal or [Azure Storage Explorer](#use-azure-storage-explorer) and analyzed locally, which does not require using database resources to query session data.</li></ul> | <ul><li>Setup is more complex and requires configuration of an Azure Storage container and database scoped credential.</li></ul> | <ul><li>General use when you want event data to persist even after the event session stops.</li><li>You want to run a trace that generates larger amounts of event data than you would like to persist in memory.</li></ul> |
-
-Select the target type you would like to use:
-
-# [Ring buffer target](#tab/ring-buffer)
-
-The ring buffer target is convenient and easy to set up, but has a limited capacity, which can cause older events to be lost. The ring buffer does not persist events to storage, and its contents are cleared when the XEvents session is stopped. This means that any XEvents collected will not be available when the database engine restarts for any reason, such as a failover. The ring buffer target is best suited to learning and short-term needs if you cannot set up an XEvents session with an event file target immediately.
-
-This sample code creates an XEvents session that captures deadlock graphs in memory using the [ring buffer target](/sql/relational-databases/extended-events/targets-for-extended-events-in-sql-server#ring_buffer-target). The maximum memory allowed for the ring buffer target is 4 MB, and the session will automatically run when the database comes online, such as after a failover.
-
-To create and then start an XEvents session for the `sqlserver.database_xml_deadlock_report` event that writes to the ring buffer target, connect to your database and run the following Transact-SQL:
-
-```sql
-CREATE EVENT SESSION [deadlocks] ON DATABASE
-ADD EVENT sqlserver.database_xml_deadlock_report
-ADD TARGET package0.ring_buffer
-WITH (STARTUP_STATE=ON, MAX_MEMORY=4 MB)
-GO
-
-ALTER EVENT SESSION [deadlocks] ON DATABASE
- STATE = START;
-GO
-```
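-
-Optionally, confirm that the session is running by querying the database-scoped Extended Events DMV. The following query returns one row while the session is started:
-
-```sql
-SELECT name
-FROM sys.dm_xe_database_sessions
-WHERE name = N'deadlocks';
-```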
-
-# [Event file target](#tab/event-file)
-
-The event file target persists deadlock graphs to files so they are available even after the XEvents session is stopped. The event file target also allows you to capture more deadlock graphs without allocating additional memory for a ring buffer. The event file target is suitable for long term use and for collecting larger amounts of trace data.
-
-To create an XEvents session that writes to an event file target, we will:
-
-1. Configure an Azure Storage container to hold the trace files using the Azure portal.
-1. Create a database scoped credential with Transact-SQL.
-1. Create the XEvents session with Transact-SQL.
-
-### Configure an Azure Storage container
-
-To configure an Azure Storage container, first create or select an existing Azure Storage account, then create the container. Generate a Shared Access Signature (SAS) token for the container. This section describes completing this process in the Azure portal.
-
-> [!NOTE]
-> If you wish to create and configure the Azure Storage blob container with PowerShell, see [Event File target code for extended events in Azure SQL Database](xevent-code-event-file.md). Alternately, you may find it convenient to [Use Azure Storage Explorer](#use-azure-storage-explorer) to create and configure the Azure Storage blob container instead of using the Azure portal.
-
-#### Create or select an Azure Storage account
-
-You can use an existing Azure Storage account or create a new Azure Storage account to host a container for trace files.
-
-To use an existing Azure Storage account:
-1. Navigate to the resource group you want to work with in the Azure portal.
-1. On the **Overview** pane, under **Resources**, set the **Type** dropdown to *Storage account*.
-1. Select the storage account you want to use.
-
-To create a new Azure Storage account, follow the steps in [Create an Azure storage account](/azure/media-services/latest/storage-create-how-to). Complete the process by selecting **Go to resource** in the final step.
-
-#### Create a container
-
-From the storage account page in the Azure portal:
-
-1. Under **Data storage**, select **Containers**.
-1. Select **+ Container** to create a new container. The New container pane will appear.
-1. Enter a name for the container under **Name**.
-1. Select **Create**.
-1. Select the container from the list after it has been created.
-
-#### Create a shared access token
-
-From the container page in the Azure portal:
-
-1. Under **Settings**, select **Shared access tokens**.
-1. Leave the **Signing method** radio button set to the default selection, **Account key**.
-1. Under the **Permissions** dropdown, select the **Read**, **Write**, and **List** permissions.
-1. Set **Start** to the date and time you would like to be able to write trace files. Optionally, configure the time zone in the dropdown below **Start**.
-1. Set **Expiry** to the date and time you would like these permissions to expire. Optionally, configure the time zone in the dropdown below **Expiry**. You are able to set this to a date far in the future, such as ten years, if you wish.
-1. Select **Generate SAS token and URL**. The Blob SAS token and Blob SAS URL will be displayed on the screen.
-1. Copy and preserve the *Blob SAS token* and *Blob SAS URL* values for use in further steps.
-
-### Create a database scoped credential
-
-Connect to your database in Azure SQL Database with SSMS to run the following steps.
-
-To create a database scoped credential, you must first create a [master key](/sql/t-sql/statements/create-master-key-transact-sql) in the database if one does not exist.
-
-Run the following Transact-SQL to create a master key if one does not exist:
-
-```sql
-IF 0 = (SELECT COUNT(*)
- FROM sys.symmetric_keys
- WHERE symmetric_key_id = 101 and name=N'##MS_DatabaseMasterKey##')
-BEGIN
- PRINT N'Creating master key';
- CREATE MASTER KEY;
-END
-ELSE
-BEGIN
- PRINT N'Master key already exists, no action taken';
-END
-GO
-```
-
-Next, create a database scoped credential with the following Transact-SQL. Before running the code:
-- Modify the URL to reflect your storage account name and your container name. This URL will be present at the beginning of the *Blob SAS URL* you copied when you created the shared access token. You only need the text prior to the first `?` in the string.
-- Modify the `SECRET` to contain the *Blob SAS token* value you copied when you created the shared access token.
-
-```sql
-CREATE DATABASE SCOPED CREDENTIAL
- [https://yourstorageaccountname.blob.core.windows.net/yourcontainername]
- WITH
- IDENTITY = 'SHARED ACCESS SIGNATURE',
- SECRET = 'sp=r&st=2022-04-08T14:34:21Z&se=2032-04-08T22:34:21Z&sv=2020-08-04&sr=c&sig=pUNbbsmDiMzXr1vuNGZh84zyOMBFaBjgWv53IhOzYWQ%3D'
- ;
-GO
-```
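-
-Optionally, verify that the credential was created by querying `sys.database_scoped_credentials`:
-
-```sql
-SELECT name, credential_identity, create_date
-FROM sys.database_scoped_credentials;
-```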
-
-### Create the XEvents session
-
-Create and start the XEvents session with the following Transact-SQL. Before running the statement:
-- Replace the `filename` value to reflect your storage account name and your container name. This URL will be present at the beginning of the *Blob SAS URL* you copied when you created the shared access token. You only need the text prior to the first `?` in the string.
-- Optionally change the filename stored. The filename you specify here will be part of the actual filename(s) used for the blob(s) storing event data: additional values will be appended so that all event files have a unique name.
-- Optionally add additional events to the session.
-
-```sql
-CREATE EVENT SESSION [deadlocks_eventfile] ON DATABASE
-ADD EVENT sqlserver.database_xml_deadlock_report
-ADD TARGET package0.event_file
- (SET filename =
- 'https://yourstorageaccountname.blob.core.windows.net/yourcontainername/deadlocks.xel'
- )
-WITH (STARTUP_STATE=ON, MAX_MEMORY=4 MB)
-GO
-
-ALTER EVENT SESSION [deadlocks_eventfile] ON DATABASE
- STATE = START;
-GO
-```
---
-## Cause a deadlock in AdventureWorksLT
-
-> [!NOTE]
-> This example works in the AdventureWorksLT database with the default schema and data when RCSI has been enabled. See [Create the AdventureWorksLT database](#create-the-adventureworkslt-database) for instructions to create the database.
-
-To cause a deadlock, you will need to connect two sessions to the `AdventureWorksLT` database. We'll refer to these sessions as **Session A** and **Session B**.
-
-In **Session A**, run the following Transact-SQL. This code begins an [explicit transaction](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#starting-transactions) and runs a single statement that updates the `SalesLT.Product` table. To do this, the transaction acquires an [update (U) lock](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#behavior-when-modifying-data) on one row in the `SalesLT.Product` table, which is converted to an exclusive (X) lock. We leave the transaction open.
-
-```sql
-BEGIN TRAN
-
- UPDATE SalesLT.Product SET SellEndDate = SellEndDate + 1
- WHERE Color = 'Red';
-
-```
-
-Now, in **Session B**, run the following Transact-SQL. This code doesn't explicitly begin a transaction. Instead, it operates in [autocommit transaction mode](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#starting-transactions). This statement updates the `SalesLT.ProductDescription` table. The update will take out an update (U) lock on 72 rows on the `SalesLT.ProductDescription` table. The query joins to other tables, including the `SalesLT.Product` table.
-
-```sql
-UPDATE SalesLT.ProductDescription SET Description = Description
- FROM SalesLT.ProductDescription as pd
- JOIN SalesLT.ProductModelProductDescription as pmpd on
- pd.ProductDescriptionID = pmpd.ProductDescriptionID
- JOIN SalesLT.ProductModel as pm on
- pmpd.ProductModelID = pm.ProductModelID
- JOIN SalesLT.Product as p on
- pm.ProductModelID=p.ProductModelID
- WHERE p.Color = 'Silver';
-```
-
-To complete this update, **Session B** needs a shared (S) lock on rows on the table `SalesLT.Product`, including the row that is locked by **Session A**. **Session B** will be blocked on `SalesLT.Product`.
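-
-Optionally, from a third session you can observe the blocking (which is not yet a deadlock at this point) with a diagnostic query against `sys.dm_exec_requests`:
-
-```sql
--- Lists requests that are currently blocked and the session blocking them.
-SELECT session_id, blocking_session_id, wait_type, wait_resource
-FROM sys.dm_exec_requests
-WHERE blocking_session_id <> 0;
-```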
-
-Return to **Session A**. Run the following Transact-SQL statement. This runs a second UPDATE statement as part of the open transaction.
-
-```sql
- UPDATE SalesLT.ProductDescription SET Description = Description
- FROM SalesLT.ProductDescription as pd
- JOIN SalesLT.ProductModelProductDescription as pmpd on
- pd.ProductDescriptionID = pmpd.ProductDescriptionID
- JOIN SalesLT.ProductModel as pm on
- pmpd.ProductModelID = pm.ProductModelID
- JOIN SalesLT.Product as p on
- pm.ProductModelID=p.ProductModelID
- WHERE p.Color = 'Red';
-```
-
-The second update statement in **Session A** will be blocked by **Session B** on the `SalesLT.ProductDescription` table.
-
-**Session A** and **Session B** are now mutually blocking one another. Neither transaction can proceed, as they each need a resource that is locked by the other.
-
-After a few seconds, the deadlock monitor will identify that the transactions in **Session A** and **Session B** are mutually blocking one another, and that neither can make progress. You should see a deadlock occur, with **Session A** chosen as the deadlock victim. An error message will appear in **Session A** with text similar to the following:
-
-> Msg 1205, Level 13, State 51, Line 7
-> Transaction (Process ID 91) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
-
-**Session B** will complete successfully.
-
-If you [set up deadlock alerts in the Azure portal](#set-up-deadlock-alerts-in-the-azure-portal), you should receive a notification shortly after the deadlock occurs.
-
-## View deadlock graphs from an XEvents session
-
-If you have [set up an XEvents session to collect deadlocks](#collect-deadlock-graphs-in-azure-sql-database-with-extended-events) and a deadlock has occurred after the session was started, you can view an interactive graphic display of the deadlock graph as well as the XML for the deadlock graph.
-
-Different methods are available to obtain deadlock information for the ring buffer target and event file targets. Select the target you used for your XEvents session:
-
-# [Ring buffer target](#tab/ring-buffer)
-
-If you set up an XEvents session writing to the ring buffer, you can query deadlock information with the following Transact-SQL. Before running the query, replace the value of `@tracename` with the name of your XEvents session.
-
-```sql
-DECLARE @tracename sysname = N'deadlocks';
-
-WITH ring_buffer AS (
- SELECT CAST(target_data AS XML) as rb
- FROM sys.dm_xe_database_sessions AS s
- JOIN sys.dm_xe_database_session_targets AS t
- ON CAST(t.event_session_address AS BINARY(8)) = CAST(s.address AS BINARY(8))
- WHERE s.name = @tracename and
- t.target_name = N'ring_buffer'
-), dx AS (
- SELECT
- dxdr.evtdata.query('.') as deadlock_xml_deadlock_report
- FROM ring_buffer
- CROSS APPLY rb.nodes('/RingBufferTarget/event[@name=''database_xml_deadlock_report'']') AS dxdr(evtdata)
-)
-SELECT
- d.query('/event/data[@name=''deadlock_cycle_id'']/value').value('(/value)[1]', 'int') AS [deadlock_cycle_id],
- d.value('(/event/@timestamp)[1]', 'DateTime2') AS [deadlock_timestamp],
- d.query('/event/data[@name=''database_name'']/value').value('(/value)[1]', 'nvarchar(256)') AS [database_name],
- d.query('/event/data[@name=''xml_report'']/value/deadlock') AS deadlock_xml,
- LTRIM(RTRIM(REPLACE(REPLACE(d.value('.', 'nvarchar(2000)'),CHAR(10),' '),CHAR(13),' '))) as query_text
-FROM dx
-CROSS APPLY deadlock_xml_deadlock_report.nodes('(/event/data/value/deadlock/process-list/process/inputbuf)') AS ib(d)
-ORDER BY [deadlock_timestamp] DESC;
-GO
-```
-
-# [Event file target](#tab/event-file)
-
-If you set up an XEvents session writing to an event file, you can download files from the Azure portal and view them locally, or you can query event files with Transact-SQL.
-
-Downloading files from the Azure portal is recommended because this method does not require using database resources to query session data.
-
-### Optionally restart the XEvents session
-
-If an Extended Events session is currently running and writing to an event file target, the blob container being written to will have a **Lease state** of *Leased* in the Azure portal. The file size displayed will be the maximum allocated size of the file. To download a smaller file, you may wish to stop and restart the Extended Events session before downloading files. This causes the file's **Lease state** to change to *Available*, and the file size shown will be the space actually used by events in the file.
-
-To stop and restart an XEvents session, connect to your database and run the following Transact-SQL. Before running the code, replace the name of the XEvents session with the appropriate value.
-
-```sql
-ALTER EVENT SESSION [deadlocks_eventfile] ON DATABASE
- STATE = STOP;
-GO
-ALTER EVENT SESSION [deadlocks_eventfile] ON DATABASE
- STATE = START;
-GO
-```
-
-### Download trace files from the Azure portal
-
-To view deadlock events that have been collected across multiple files, download the event session files to your local computer and view the files in SSMS.
-
-> [!NOTE]
-> You can also [use Azure Storage Explorer](#use-azure-storage-explorer) to quickly and conveniently download event session files from a blob container in Azure Storage.
-
-To download the files from the Azure portal:
-
-1. Navigate to the storage account hosting your container in the Azure portal.
-1. Under **Data storage**, select **Containers**.
-1. Select the container holding your XEvent trace files.
-1. For each file you wish to download, select **...**, then **Download**.
-
-### View XEvents trace files in SSMS
-
-If you have downloaded multiple files, you can open events from all of the files together in the XEvents viewer in SSMS. To do so:
-1. Open SSMS.
-1. Select **File**, then **Open**, then **Merge Extended Events files...**.
-1. Select **Add**.
-1. Navigate to the directory where you downloaded the files. Use the **Shift** key to select multiple files.
-1. Select **Open**.
-1. Select **OK** in the **Merge Extended Events Files** dialog.
-
-If you have downloaded a single file, right-click the file and select **Open with**, then **SSMS**. This will open the XEvents viewer in SSMS.
-
-Navigate between events collected by selecting the relevant timestamp. To view the XML for a deadlock, double-click the `xml_report` row in the lower pane.
-
-### Query trace files with Transact-SQL
-
-> [!IMPORTANT]
-> Querying large (1 GB and larger) XEvents trace files using this method is not recommended because it may consume large amounts of memory in your database or elastic pool.
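-
-If you do query the trace files with Transact-SQL, `sys.fn_xe_file_target_read_file` reads events from the blob path used by the session. A minimal sketch, assuming the database scoped credential created earlier and substituting your storage account and container names in the path:
-
-```sql
-SELECT
-    CAST(event_data AS XML) AS event_xml,
-    file_name,
-    file_offset
-FROM sys.fn_xe_file_target_read_file(
-    'https://yourstorageaccountname.blob.core.windows.net/yourcontainername/deadlocks*.xel',
-    NULL, NULL, NULL);
-```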