Updates from: 11/30/2022 02:15:55
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Active Directory Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/active-directory-technical-profile.md
Previously updated : 12/11/2020 Last updated : 12/29/2022
The name of the claim is the name of the Azure AD attribute unless the **Partner
- The value of the **userPrincipalName** claim must be in the format of `user@tenant.onmicrosoft.com`.
- The **displayName** claim is required and cannot be an empty string.
-## Azure AD technical provider operations
+## Azure AD technical profile operations
### Read
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs to provide secure hybrid access to on
| ISV partner | Description and integration walkthroughs |
|:-|:--|
-| ![Screenshot of an Akamai logo.](./medi) is a Zero Trust Network Access (ZTNA) solution that enables secure remote access to modern and legacy applications that reside in private datacenters. |
+| ![Screenshot of an Akamai logo.](./medi) provides a Zero Trust Network Access (ZTNA) solution that enables secure remote access to modern and legacy applications that reside in private datacenters. |
| ![Screenshot of a Datawiza logo](./medi) enables SSO and granular access control for your applications and extends Azure AD B2C to protect on-premises legacy applications. |
| ![Screenshot of an F5 logo](./medi) enables legacy applications to be securely exposed to the internet through BIG-IP security combined with Azure AD B2C pre-authentication, Conditional Access (CA) and SSO. |
| ![Screenshot of a Ping logo](./medi) enables secure hybrid access to on-premises legacy applications across multiple clouds. |
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md) and [Azure AD B2C developer release notes](custom-policy-developer-notes.md).
+## November 2022
+
+### New articles
+
+- [Configure Azure Active Directory B2C with Akamai Enterprise Application Access for SSO and secure hybrid access](partner-akamai-secure-hybrid-access.md)
+
+### Updated articles
+
+- [Manage your Azure Active Directory B2C tenant](tenant-management.md)
+- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md)
+- [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)
+- [Roles and resource access control](roles-resource-access-control.md)
+- [Define an Azure Active Directory technical profile in an Azure Active Directory B2C custom policy](active-directory-technical-profile.md)
## October 2022
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
Previously updated : 11/12/2022 Last updated : 11/29/2022
After you configure the provisioning agent and ECMA host, it's time to test conn
7. Ensure that you're using a valid certificate that has not expired. Go to the **Settings** tab of the ECMA host to view the certificate expiration date. If the certificate has expired, click `Generate certificate` to generate a new certificate.
8. Restart the provisioning agent by going to the taskbar on your VM and searching for the Microsoft Azure AD Connect provisioning agent. Right-click **Stop**, and then select **Start**.
1. If you continue to see `The ECMA host is currently importing data from the target application` even after restarting the ECMA Connector Host and the provisioning agent, and waiting for the initial import to complete, then you may need to cancel and start over configuring provisioning to the application in the Azure portal.
- 1. When you provide the tenant URL in the Azure portal, ensure that it follows the following pattern. You can replace `localhost` with your host name, but it isn't required. Replace `connectorName` with the name of the connector you specified in the ECMA host. The error message 'invalid resource' generally indicates that the URL does not follow the expected format.
+1. When configuring the ECMA host, ensure that you provide a certificate with a subject that matches the hostname of your Windows server. The certificate generated by the ECMA host does this for you automatically, but it should only be used for testing purposes.
+
+```
+Error code: SystemForCrossDomainIdentityManagementCredentialValidationUnavailable
+
+Details: We received this unexpected response from your application: Received response from Web resource. Resource: https://localhost/Users?filter=PLACEHOLDER+eq+"8646d011-1693-4cd3-9ee6-0d7482ca2219" Operation: GET Response Status Code: InternalServerError Response Headers: Response Content: An error occurred while sending the request. Please check the service and try again.
+```
+
+1. When you provide the tenant URL in the Azure portal, ensure that it matches the following pattern. You can replace `localhost` with your host name, but it isn't required. Replace `connectorName` with the name of the connector you specified in the ECMA host. The error message 'invalid resource' generally indicates that the URL doesn't follow the expected format.
```
https://localhost:8585/ecma2host_connectorName/scim
active-directory Concept Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md
The following table lists the combinations of authentication methods for each bu
|Email One-time pass (Guest)| | | | -->
-<sup>1</sup> Something you have refers to one of the following methods: SMS, voice, push notification, software OATH token. Hardware OATH token is currently not supported.
+<sup>1</sup> Something you have refers to one of the following methods: SMS, voice, push notification, software OATH token, and hardware OATH token.
The following API call can be used to list definitions of all the built-in authentication strengths:
An authentication strength Conditional Access policy works together with [MFA tr
- **Users who signed in by using certificate-based authentication aren't prompted to reauthenticate** - If a user first authenticated by using certificate-based authentication and the authentication strength requires another method, such as a FIDO2 security key, the user isn't prompted to use a FIDO2 security key and authentication fails. The user must restart their session to sign in with a FIDO2 security key.
-- **Authentication methods that are currently not supported by authentication strength** - The following authentication methods are included in the available combinations but currently have limited functionality:
- - Email one-time pass (Guest)
- - Hardware-based OATH token
--- **Authentication strength is not enforced on Register security information user action** - If an Authentication strength Conditional Access policy targets **Register security information** user action, the policy would not apply.
+- **Authentication methods that are currently not supported by authentication strength** - The Email one-time pass (Guest) authentication method is not included in the available combinations.
- **Using 'Require one of the selected controls' with 'require authentication strength' control** - After you select authentication strengths grant control and additional controls, all the selected controls must be satisfied in order to gain access to the resource. Using **Require one of the selected controls** isn't applicable, and will default to requiring all the controls in the policy.
+- **Multiple Conditional Access policies may be created when using "Require authentication strength" grant control**. These are two different policies and you can safely delete one of them.
+
+- **Windows Hello for Business** - If the user has used Windows Hello for Business as their primary authentication method, it can be used to satisfy an authentication strength requirement that includes Windows Hello for Business. However, if the user has used another method as their primary authentication method (for example, password) and the authentication strength requires them to use Windows Hello for Business, they will not be prompted to use or register for Windows Hello for Business.
+ - **Authentication loop** can happen in one of the following scenarios:
  1. **Microsoft Authenticator (Phone Sign-in)** - When the user is required to use Microsoft Authenticator (Phone Sign-in) but isn't registered for this method, they're given instructions on how to set up the Microsoft Authenticator that don't include how to enable Passwordless sign-in. As a result, the user can get into an authentication loop. To avoid this issue, make sure the user is registered for the method before the Conditional Access policy is enforced. Phone Sign-in can be registered using the steps outlined here: [Add your work or school account to the Microsoft Authenticator app ("Sign in with your credentials")](https://support.microsoft.com/en-us/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c)
  2. **Conditional Access Policy is targeting all apps** - When the Conditional Access policy targets "All apps" but the user isn't registered for any of the methods required by the authentication strength, the user will get into an authentication loop. To avoid this issue, target specific applications in the Conditional Access policy or make sure the user is registered for at least one of the authentication methods required by the authentication strength Conditional Access policy.
active-directory Concept Certificate Based Authentication Certificateuserids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md
To map the pattern supported by certificateUserIds, administrators must use expr
You can use the following expression for mapping to SKI and SHA1-PUKEY:
```
-IF(IsPresent([alternativeSecurityId]),
+IIF(IsPresent([alternativeSecurityId]),
Where($item,[alternativeSecurityId],BitOr(InStr($item, "x509:<SKI>"),InStr($item, "x509:<SHA1-PUKEY>"))>0),[alternativeSecurityId] )
```
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Some claims are used to help the Microsoft identity platform secure tokens for r
| `preferred_username` | String, only present in v2.0 tokens. | The primary username that represents the user. The value could be an email address, phone number, or a generic username without a specified format. The value is mutable and might change over time. Since the value is mutable, it must not be used to make authorization decisions. The value can be used for username hints, however, and in human-readable UI as a username. The `profile` scope is required in order to receive this claim. |
| `name` | String | Provides a human-readable value that identifies the subject of the token. The value isn't guaranteed to be unique, it's mutable, and is only used for display purposes. The `profile` scope is required in order to receive this claim. |
| `scp` | String, a space separated list of scopes | The set of scopes exposed by the application for which the client application has requested (and received) consent. The application should verify that these scopes are valid ones exposed by the application, and make authorization decisions based on the value of these scopes. Only included for user tokens. |
-| `roles` | Array of strings, a list of permissions | The set of permissions exposed by the application that the requesting application or user has been given permission to call. For application tokens, this set of permissions is used during the client credential flow ([v1.0](../azuread-dev/v1-oauth2-client-creds-grant-flow.md), [v2.0](v2-oauth2-client-creds-grant-flow.md)) in place of user scopes. For user tokens, this set of values is populated with the roles the user was assigned to on the target application. |
+| `roles` | Array of strings, a list of permissions | The set of permissions exposed by the application that the requesting application or user has been given permission to call. For application tokens, this set of permissions is used during the [client credential flow](v2-oauth2-client-creds-grant-flow.md) in place of user scopes. For user tokens, this set of values is populated with the roles the user was assigned to on the target application. |
| `wids` | Array of [RoleTemplateID](../roles/permissions-reference.md#all-roles) GUIDs | Denotes the tenant-wide roles assigned to this user, from the section of roles present in [Azure AD built-in roles](../roles/permissions-reference.md#all-roles). This claim is configured on a per-application basis, through the `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md). Setting it to `All` or `DirectoryRole` is required. May not be present in tokens obtained through the implicit flow due to token length concerns. |
| `groups` | JSON array of GUIDs | Provides object IDs that represent the group memberships of the subject. These values are unique and can be safely used for managing access, such as enforcing authorization to access a resource. The groups included in the groups claim are configured on a per-application basis, through the `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md). A value of `null` excludes all groups, a value of `SecurityGroup` includes only Active Directory Security Group memberships, and a value of `All` includes both Security Groups and Microsoft 365 Distribution Lists. <br><br>See the `hasgroups` claim for details on using the `groups` claim with the implicit grant. For other flows, if the number of groups the user is in goes over 150 for SAML and 200 for JWT, then Azure AD adds an overage claim to the claim sources. The claim sources point to the Microsoft Graph endpoint that contains the list of groups for the user. |
| `hasgroups` | Boolean | If present, always `true`, indicates whether the user is in at least one group. Used in place of the `groups` claim for JWTs in implicit grant flows if the full groups claim would extend the URI fragment beyond the URL length limits (currently six or more groups). Indicates that the client should use the Microsoft Graph API to determine the groups (`https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects`) of the user. |
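The overage path in the `groups` and `hasgroups` rows above comes down to calling the Graph endpoint they mention. A minimal sketch of that call, assuming you already hold an access token with the appropriate Graph permissions (the function name and token variable are placeholders, and note that `getMemberObjects` is invoked with POST and a `securityEnabledOnly` body):

```javascript
// Sketch: resolve group membership via Microsoft Graph when the groups claim
// is replaced by an overage indicator. accessToken is a placeholder.
async function getMemberObjects(userId, accessToken) {
  const response = await fetch(
    `https://graph.microsoft.com/v1.0/users/${userId}/getMemberObjects`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ securityEnabledOnly: false }),
    }
  );
  const data = await response.json();
  return data.value; // Array of directory object IDs the user is a member of.
}
```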
Refresh tokens can be revoked by the server due to a change in credentials, or d
| Admin resets password | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
| User revokes their refresh tokens by using [PowerShell](/powershell/module/azuread/revoke-azureadsignedinuserallrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked |
| Admin revokes all refresh tokens for a user by using [PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked |
-| Single sign-out ([v1.0](../azuread-dev/v1-protocols-openid-connect-code.md#single-sign-out), [v2.0](v2-protocols-oidc.md#single-sign-out) ) on web | Revoked | Stays alive | Revoked | Stays alive | Stays alive |
+| [Single sign-out](v2-protocols-oidc.md#single-sign-out) on web | Revoked | Stays alive | Revoked | Stays alive | Stays alive |
#### Non-password-based
Check out [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md)
## Next steps
- Learn about [`id_tokens` in Azure AD](id-tokens.md).
-- Learn about permission and consent ( [v1.0](../azuread-dev/v1-permissions-consent.md), [v2.0](permissions-consent-overview.md)).
+- Learn about [permission and consent](permissions-consent-overview.md).
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/apple-sso-plugin.md
When this flag is on (its value is set to `1`), all MDM-managed apps not in the
#### Enable SSO for all apps with a specific bundle ID prefix
- **Key**: `AppPrefixAllowList`
- **Type**: `String`
-- **Value**: Comma-delimited list of application bundle ID prefixes for the applications that are allowed to participate in SSO. This parameter allows all apps that start with a particular prefix to participate in SSO.
+- **Value**: Comma-delimited list of application bundle ID prefixes for the applications that are allowed to participate in SSO. This parameter allows all apps that start with a particular prefix to participate in SSO. For iOS, the default value is `com.apple.`, which enables SSO for all Apple apps. For macOS, the default value is `com.apple.` and `com.microsoft.`, which enables SSO for all Apple and Microsoft apps. Admins can override the default value or add apps to `AppBlockList` to prevent them from participating in SSO.
- **Example**: `com.contoso., com.fabrikam.`

#### Disable SSO for specific apps
If your users have problems signing in to an application even after you've enabl
- **Key**: `AppCookieSSOAllowList`
- **Type**: `String`
-- **Value**: Comma-delimited list of application bundle ID prefixes for the applications that are allowed to participate in the SSO. All apps that start with the listed prefixes will be allowed to participate in SSO.
+- **Value**: Comma-delimited list of application bundle ID prefixes for the applications that are allowed to participate in SSO. All apps that start with the listed prefixes are allowed to participate in SSO. This key applies only to iOS apps, not macOS apps.
- **Example**: `com.contoso.myapp1, com.fabrikam.myapp2`

**Other requirements**: To enable SSO for applications by using `AppCookieSSOAllowList`, you must also add their bundle ID prefixes to `AppPrefixAllowList`.
Try this configuration only for applications that have unexpected sign-in failur
| `Enable_SSO_On_All_ManagedApps` | Integer | `1` to enable SSO for all managed apps, `0` to disable SSO for all managed apps. |
| `AppAllowList` | String<br/>*(comma-delimited list)* | Bundle IDs of applications allowed to participate in SSO. |
| `AppBlockList` | String<br/>*(comma-delimited list)* | Bundle IDs of applications not allowed to participate in SSO. |
-| `AppPrefixAllowList` | String<br/>*(comma-delimited list)* | Bundle ID prefixes of applications allowed to participate in SSO. |
-| `AppCookieSSOAllowList` | String<br/>*(comma-delimited list)* | Bundle ID prefixes of applications allowed to participate in SSO but that use special network settings and have trouble with SSO using the other settings. Apps you add to `AppCookieSSOAllowList` must also be added to `AppPrefixAllowList`. |
+| `AppPrefixAllowList` | String<br/>*(comma-delimited list)* | Bundle ID prefixes of applications allowed to participate in SSO. For iOS, the default value is `com.apple.`, which enables SSO for all Apple apps. For macOS, the default value is `com.apple.` and `com.microsoft.`, which enables SSO for all Apple and Microsoft apps. Developers, customers, or admins can override the default value or add apps to `AppBlockList` to prevent them from participating in SSO. |
+| `AppCookieSSOAllowList` | String<br/>*(comma-delimited list)* | Bundle ID prefixes of applications allowed to participate in SSO but that use special network settings and have trouble with SSO using the other settings. Apps you add to `AppCookieSSOAllowList` must also be added to `AppPrefixAllowList`. This key applies only to iOS apps, not macOS apps. |
#### Settings for common scenarios
Use these parameters to enable the flag:
- **Key**: `browser_sso_interaction_enabled`
- **Type**: `Integer`
-- **Value**: 1 or 0
+- **Value**: 1 or 0. This value is set to 1 by default.
macOS requires this setting so it can provide a consistent experience across all apps. iOS and iPadOS don't require this setting because most apps use the Authenticator application for sign-in. But we recommend that you enable this setting because if some of your applications don't use the Authenticator app on iOS or iPadOS, this flag will improve the experience. The setting is disabled by default.
Disable the app prompt and display the account picker:
- **Key**: `disable_explicit_app_prompt`
- **Type**: `Integer`
-- **Value**: 1 or 0
+- **Value**: 1 or 0. This value is set to 1 by default, which reduces the prompts.
Disable app prompt and select an account from the list of matching SSO accounts automatically:
- **Key**: `disable_explicit_app_prompt_and_autologin`
active-directory Consent Framework Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/consent-framework-links.md
This article is to help you learn more about how the Azure AD consent framework
- Get a general understanding of [how consent allows a resource owner to govern an application's access to resources](./developer-glossary.md#consent).
- Get a step-by-step overview of [how the Azure AD consent framework implements consent](./quickstart-register-app.md).
- For more depth, learn [how a multi-tenant application can use the consent framework](./howto-convert-app-to-be-multi-tenant.md) to implement "user" and "admin" consent, supporting more advanced multi-tier application patterns.
-- For more depth, learn [how consent is supported at the OAuth 2.0 protocol layer during the authorization code grant flow.](../azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code)
+- For more depth, learn [how consent is supported at the OAuth 2.0 protocol layer during the authorization code grant flow.](v2-oauth2-auth-code-flow.md#request-an-authorization-code)
## Next steps

[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Identity Platform Integration Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-platform-integration-checklist.md
Use the following checklist to ensure that your application is effectively integ
![checkbox](./medi). If you must hand-code for the authentication protocols, you should follow the [Microsoft SDL](https://www.microsoft.com/sdl/default.aspx) or similar development methodology. Pay close attention to the security considerations in the standards specifications for each protocol.
-![checkbox](./medi) apps.
+![checkbox](./medi) apps.
![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) For mobile apps, configure each platform using the application registration experience. In order for your application to take advantage of the Microsoft Authenticator or Microsoft Company Portal for single sign-in, your app needs a "broker redirect URI" configured. This allows Microsoft to return control to your application after authentication. When configuring each platform, the app registration experience will guide you through the process. Use the quickstart to download a working example. On iOS, use brokers and system webview whenever possible.
Use the following checklist to ensure that your application is effectively integ
![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Where applicable, enrich your application with user data. Using the [Microsoft Graph API](https://developer.microsoft.com/graph) is an easy way to do this. The [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) tool can help you get started.
-![checkbox](./medi#incremental-and-dynamic-consent) at run time to help users understand why your app is requesting permissions that may concern or confuse users when requested on first start.
+![checkbox](./medi#consent) at run time to help users understand why your app is requesting permissions that may concern or confuse users when requested on first start.
![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Implement a [clean single sign-out experience](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-6-SignOut). It's a privacy and a security requirement, and makes for a good user experience.
active-directory Migrate Adal Msal Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-adal-msal-java.md
MSAL for Java is the auth library we recommend you use with the Microsoft identi
You can learn more about MSAL and get started with an [overview of the Microsoft Authentication Library](msal-overview.md).
-## Differences
-
-If you have been working with the Azure AD for developers (v1.0) endpoint (and ADAL4J), you might want to read [What's different about the Microsoft identity platform?](../azuread-dev/azure-ad-endpoint-comparison.md).
- ## Scopes not resources
-ADAL4J acquires tokens for resources whereas MSAL for Java acquires tokens for scopes. A number of MSAL for Java classes require a scopes parameter. This parameter is a list of strings that declare the desired permissions and resources that are requested. See [Microsoft Graph's scopes](/graph/permissions-reference) to see example scopes.
+ADAL4J acquires tokens for resources whereas MSAL for Java acquires tokens for scopes. Many MSAL for Java classes require a scopes parameter. This parameter is a list of strings that declare the desired permissions and resources that are requested. See [Microsoft Graph's scopes](/graph/permissions-reference) to see example scopes.
-You can add the `/.default` scope suffix to the resource to help migrate your apps from the ADAL to MSAL. For example, for the resource value of `https://graph.microsoft.com`, the equivalent scope value is `https://graph.microsoft.com/.default`. If the resource is not in the URL form, but a resource ID of the form `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX`, you can still use the scope value as `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX/.default`.
+You can add the `/.default` scope suffix to the resource to help migrate your apps from ADAL to MSAL. For example, for the resource value of `https://graph.microsoft.com`, the equivalent scope value is `https://graph.microsoft.com/.default`. If the resource isn't in the URL form, but a resource ID of the form `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX`, you can still use the scope value as `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX/.default`.
For more details about the different types of scopes, refer [Permissions and consent in the Microsoft identity platform](./v2-permissions-and-consent.md) and the [Scopes for a Web API accepting v1.0 tokens](./msal-v1-app-scopes.md) articles.
The following table shows how ADAL4J functions map to the new MSAL for Java func
ADAL4J manipulated users. Although a user represents a single human or software agent, it can have one or more accounts in the Microsoft identity system. For example, a user may have several Azure AD, Azure AD B2C, or Microsoft personal accounts.
-MSAL for Java defines the concept of Account via the `IAccount` interface. This is a breaking change from ADAL4J, but it is a good one because it captures the fact that the same user can have several accounts, and perhaps even in different Azure AD directories. MSAL for Java provides better information in guest scenarios because home account information is provided.
+MSAL for Java defines the concept of Account via the `IAccount` interface. This is a breaking change from ADAL4J, but it's a good one because it captures the fact that the same user can have several accounts, and perhaps even in different Azure AD directories. MSAL for Java provides better information in guest scenarios because home account information is provided.
## Cache persistence
-ADAL4J did not have support for token cache.
+ADAL4J didn't have support for token cache.
MSAL for Java adds a [token cache](msal-acquire-cache-tokens.md) to simplify managing token lifetimes by automatically refreshing expired tokens when possible and preventing unnecessary prompts for the user to provide credentials when possible.

## Common Authority
-In v1.0, if you use the `https://login.microsoftonline.com/common` authority, users can sign in with any Azure Active Directory (AAD) account (for any organization).
+In v1.0, if you use the `https://login.microsoftonline.com/common` authority, users can sign in with any Azure Active Directory (Azure AD) account (for any organization).
-If you use the `https://login.microsoftonline.com/common` authority in v2.0, users can sign in with any AAD organization, or even a Microsoft personal account (MSA). In MSAL for Java, if you want to restrict login to any AAD account, use the `https://login.microsoftonline.com/organizations` authority (which is the same behavior as with ADAL4J). To specify an authority, set the `authority` parameter in the [PublicClientApplication.Builder](https://javadoc.io/doc/com.microsoft.azure/msal4j/1.0.0/com/microsoft/aad/msal4j/PublicClientApplication.Builder.html) method when you create your `PublicClientApplication` class.
+If you use the `https://login.microsoftonline.com/common` authority in v2.0, users can sign in with any Azure AD organization, or even a Microsoft personal account (MSA). In MSAL for Java, if you want to restrict login to any Azure AD account, use the `https://login.microsoftonline.com/organizations` authority (which is the same behavior as with ADAL4J). To specify an authority, set the `authority` parameter in the [PublicClientApplication.Builder](https://javadoc.io/doc/com.microsoft.azure/msal4j/1.0.0/com/microsoft/aad/msal4j/PublicClientApplication.Builder.html) method when you create your `PublicClientApplication` class.
## v1.0 and v2.0 tokens
For more information about v1.0 and v2.0 tokens, see [Azure Active Directory acc
In ADAL4J, the refresh tokens were exposed, which allowed developers to cache them. They would then use `AcquireTokenByRefreshToken()` to enable solutions such as implementing long-running services that refresh dashboards on behalf of the user when the user is no longer connected.
-MSAL for Java does not expose refresh tokens for security reasons. Instead, MSAL handles refreshing tokens for you.
+MSAL for Java doesn't expose refresh tokens for security reasons. Instead, MSAL handles refreshing tokens for you.
MSAL for Java has an API that allows you to migrate refresh tokens you acquired with ADAL4j into the ClientApplication: [acquireToken(RefreshTokenParameters)](https://javadoc.io/static/com.microsoft.azure/msal4j/1.0.0/com/microsoft/aad/msal4j/PublicClientApplication.html#acquireToken-com.microsoft.aad.msal4j.RefreshTokenParameters-). With this method, you can provide the previously used refresh token along with any scopes (resources) you desire. The refresh token will be exchanged for a new one and cached for use by your application.
active-directory Migrate Objc Adal Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-objc-adal-msal.md
Title: ADAL to MSAL migration guide (MSAL iOS/macOS)
-description: Learn the differences between MSAL for iOS/macOS and the Azure AD Authentication Library for ObjectiveC (ADAL.ObjC) and how to migrate to MSAL for iOS/macOS.
+description: Learn the differences between MSAL for iOS/macOS and the Azure AD Authentication Library for Objective-C (ADAL.ObjC) and how to migrate to MSAL for iOS/macOS.
The Microsoft identity platform has a few key differences with Azure Active Dire
### Incremental and dynamic consent
-* The Azure Active Directory v1.0 endpoint requires that all permissions be declared in advance during application registration. This means those permissions are static.
-* The Microsoft identity platform allows you to request permissions dynamically. Apps can ask for permissions only as needed and request more as the app needs them.
-
-For more about differences between Azure Active Directory v1.0 and the Microsoft identity platform, see [Why update to Microsoft identity platform?](../azuread-dev/azure-ad-endpoint-comparison.md).
+* The Microsoft identity platform allows you to request permissions dynamically. Apps can ask for permissions only as needed and request more as the app needs them. For more information, see [permissions and consent](./permissions-consent-overview.md#consent).
## ADAL and MSAL library differences
The MSAL public API reflects a few key differences between Azure AD v1.0 and the
`ADAuthenticationContext` is the first object an ADAL app creates. It represents an instantiation of ADAL. Apps create a new instance of `ADAuthenticationContext` for each Azure Active Directory cloud and tenant (authority) combination. The same `ADAuthenticationContext` can be used to get tokens for multiple public client applications.
-In MSAL, the main interaction is through an `MSALPublicClientApplication` object, which is modeled after [OAuth 2.0 Public Client](https://tools.ietf.org/html/rfc6749#section-2.1). One instance of `MSALPublicClientApplication` can be used to interact with multiple AAD clouds, and tenants, without needing to create a new instance for each authority. For most apps, one `MSALPublicClientApplication` instance is sufficient.
+In MSAL, the main interaction is through an `MSALPublicClientApplication` object, which is modeled after [OAuth 2.0 Public Client](https://tools.ietf.org/html/rfc6749#section-2.1). One instance of `MSALPublicClientApplication` can be used to interact with multiple Azure AD clouds, and tenants, without needing to create a new instance for each authority. For most apps, one `MSALPublicClientApplication` instance is sufficient.
### Scopes instead of resources
You can read more information about using the "/.default" scope [here](./v2-perm
ADAL only supports UIWebView/WKWebView for iOS, and WebView for macOS. MSAL for iOS supports more options for displaying web content when requesting an authorization code, and no longer supports `UIWebView`; which can improve the user experience and security.
-By default, MSAL on iOS uses [ASWebAuthenticationSession](https://developer.apple.com/documentation/authenticationservices/aswebauthenticationsession?language=objc), which is the web component Apple recommends for authentication on iOS 12+ devices. It provides Single Sign-On (SSO) benefits through cookie sharing between apps and the Safari browser.
+By default, MSAL on iOS uses [ASWebAuthenticationSession](https://developer.apple.com/documentation/authenticationservices/aswebauthenticationsession?language=objc), which is the web component Apple recommends for authentication on iOS 12+ devices. It provides single sign-on (SSO) benefits through cookie sharing between apps and the Safari browser.
You can choose to use a different web component depending on app requirements and the end-user experience you want. See [supported web view types](customize-webviews.md) for more options.
See [Handling exceptions and errors using MSAL](msal-error-handling-ios.md) for
### Broker support
-MSAL, starting with version 0.3.0, provides support for brokered authentication using the Microsoft Authenticator app. Microsoft Authenticator also enables support for Conditional Access scenarios. Examples of Conditional Access scenarios include device compliance policies that require the user to enroll the device through Intune or register with AAD to get a token. And Mobile Application Management (MAM) Conditional Access policies, which require proof of compliance before your app can get a token.
+MSAL, starting with version 0.3.0, provides support for brokered authentication using the Microsoft Authenticator app. Microsoft Authenticator also enables support for Conditional Access scenarios. Examples of Conditional Access scenarios include device compliance policies that require the user to enroll the device through Intune or register with Azure AD to get a token. And Mobile Application Management (MAM) Conditional Access policies, which require proof of compliance before your app can get a token.
To enable broker for your application:
Objective-C:
### Business to business (B2B)
-In ADAL, you create separate instances of `ADAuthenticationContext` for each tenant that the app requests tokens for. This is no longer a requirement in MSAL. In MSAL, you can create a single instance of `MSALPublicClientApplication` and use it for any AAD cloud and organization by specifying a different authority for acquireToken and acquireTokenSilent calls.
+In ADAL, you create separate instances of `ADAuthenticationContext` for each tenant that the app requests tokens for. This is no longer a requirement in MSAL. In MSAL, you can create a single instance of `MSALPublicClientApplication` and use it for any Azure AD cloud and organization by specifying a different authority for acquireToken and acquireTokenSilent calls.
## SSO in partnership with other SDKs
ADAL and MSAL coexistence between multiple applications is fully supported.
### App registration migration
-You don't need to change your existing AAD application to switch to MSAL and enable AAD accounts. However, if your ADAL-based application doesn't support brokered authentication, you'll need to register a new redirect URI for the application before you can switch to MSAL.
+You don't need to change your existing Azure AD application to switch to MSAL and enable Azure AD accounts. However, if your ADAL-based application doesn't support brokered authentication, you'll need to register a new redirect URI for the application before you can switch to MSAL.
The redirect URI should be in this format: `msauth.<app.bundle.id>://auth`. Replace `<app.bundle.id>` with your application's bundle ID. Specify the redirect URI in the [Azure portal](https://aka.ms/MobileAppReg).
We recommend all apps register both redirect URIs.
If you wish to add support for incremental consent, select the APIs and permissions your app is configured to request access to in your app registration under the **API permissions** tab.
-If you're migrating from ADAL and want to support both AAD and MSA accounts, your existing application registration needs to be updated to support both. We don't recommend you update your existing production app to support both AAD and MSA right away. Instead, create another client ID that supports both AAD and MSA for testing, and after you've verified that all scenarios work, update the existing app.
+If you're migrating from ADAL and want to support both Azure AD and MSA accounts, your existing application registration needs to be updated to support both. We don't recommend you update your existing production app to support both Azure AD and MSA right away. Instead, create another client ID that supports both Azure AD and MSA for testing, and after you've verified that all scenarios work, update the existing app.
### Add MSAL to your app
active-directory Migrate Python Adal Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-python-adal-msal.md
Supports:
- OAuth v2.0 - OpenID Connect (OIDC)
-See [What's different about the Microsoft identity platform?](../azuread-dev/azure-ad-endpoint-comparison.md) for more details.
+For more information about MSAL, see [MSAL overview](./msal-overview.md).
### Scopes not resources
ADAL Python acquires tokens for resources, but MSAL Python acquires tokens for s
You can add the `/.default` scope suffix to the resource to help migrate your apps from the v1.0 endpoint (ADAL) to the Microsoft identity platform (MSAL). For example, for the resource value of `https://graph.microsoft.com`, the equivalent scope value is `https://graph.microsoft.com/.default`. If the resource is not in the URL form, but a resource ID of the form `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX`, you can still use the scope value as `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX/.default`.
-For more details about the different types of scopes, refer
-[Permissions and consent in the Microsoft identity platform](./v2-permissions-and-consent.md) and the [Scopes for a Web API accepting v1.0 tokens](./msal-v1-app-scopes.md) articles.
+For more details about the different types of scopes, refer to [Permissions and consent in the Microsoft identity platform](./v2-permissions-and-consent.md) and the [Scopes for a Web API accepting v1.0 tokens](./msal-v1-app-scopes.md) articles.
### Error handling
def get_preexisting_rt_and_their_scopes_from_elsewhere():
# https://github.com/AzureAD/azure-activedirectory-library-for-python/blob/1.2.3/sample/device_code_sample.py#L72
# which uses a resource rather than a scope,
# you need to convert your v1 resource into v2 scopes
- # See https://learn.microsoft.com/azure/active-directory/azuread-dev/azure-ad-endpoint-comparison#scopes-not-resources
+ # See https://learn.microsoft.com/azure/active-directory/develop/migrate-python-adal-msal#scopes-not-resources
# You may be able to append "/.default" to your v1 resource to form a scope
# See https://learn.microsoft.com/azure/active-directory/develop/v2-permissions-and-consent#the-default-scope
for old_rt, scopes in get_preexisting_rt_and_their_scopes_from_elsewhere():
print("Migration completed") ```--
-## Next steps
-
-For more information, refer to [v1.0 and v2.0 comparison](../azuread-dev/azure-ad-endpoint-comparison.md).
active-directory Mobile Sso Support Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-sso-support-overview.md
The best choice for implementing single sign-on in your application is to use [t
> [!NOTE]
> It is possible to configure MSAL to use an embedded web view. This will prevent single sign-on. Use the default behavior (that is, the system web browser) to ensure that SSO will work.
-If you're currently using the [ADAL library](../azuread-dev/active-directory-authentication-libraries.md) in your application, then we highly recommend that you [migrate it to MSAL](msal-migration.md), as [ADAL is being deprecated](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/update-your-applications-to-use-microsoft-authentication-library/ba-p/1257363).
+If you're currently using the ADAL library in your application, you need to [migrate it to MSAL](msal-migration.md), as [ADAL is being deprecated](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/update-your-applications-to-use-microsoft-authentication-library/ba-p/1257363).
For iOS applications, we have a [quickstart](quickstart-v2-ios.md) that shows you how to set up sign-ins using MSAL, as well as [guidance for configuring MSAL for various SSO scenarios](single-sign-on-macos-ios.md).
active-directory Msal Acquire Cache Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-acquire-cache-tokens.md
You can also clear the token cache, which is achieved by removing the accounts f
## Scopes when acquiring tokens
-[Scopes](v2-permissions-and-consent.md) are the permissions that a web API exposes that client applications can request access to. Client applications request the user's consent for these scopes when making authentication requests to get tokens to access the web APIs. MSAL allows you to get tokens to access Azure AD for developers (v1.0) and the Microsoft identity platform APIs. v2.0 protocol uses scopes instead of resource in the requests. For more information, read [v1.0 and v2.0 comparison](../azuread-dev/azure-ad-endpoint-comparison.md). Based on the web API's configuration of the token version it accepts, the v2.0 endpoint returns the access token to MSAL.
+[Scopes](v2-permissions-and-consent.md) are the permissions that a web API exposes that client applications can request access to. Client applications request the user's consent for these scopes when making authentication requests to get tokens to access the web APIs. MSAL allows you to get tokens to access Azure AD for developers (v1.0) and the Microsoft identity platform APIs. The v2.0 protocol uses scopes instead of resources in the requests. Based on the web API's configuration of the token version it accepts, the v2.0 endpoint returns the access token to MSAL.
Several of MSAL's token acquisition methods require a `scopes` parameter. The `scopes` parameter is a list of strings that declare the desired permissions and the resources requested. Well-known scopes are the [Microsoft Graph permissions](/graph/permissions-reference).
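As a rough illustration of that `scopes` parameter, here's a minimal sketch using MSAL.js as one example; the client ID is a placeholder and the scope names are just example Microsoft Graph permissions, not values taken from the article:

```javascript
// Sketch: the scopes array is a list of strings naming the permissions the app needs.
import { PublicClientApplication } from "@azure/msal-browser";

const msalInstance = new PublicClientApplication({
  auth: { clientId: "00000000-0000-0000-0000-000000000000" }, // placeholder
});

const request = { scopes: ["User.Read", "Mail.Read"] };

async function signIn() {
  // The user is asked to consent to these scopes (if not already granted).
  const result = await msalInstance.loginPopup(request);
  console.log(result.account.username);
}
```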
active-directory Msal Compare Msal Js And Adal Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-compare-msal-js-and-adal-js.md
## Prerequisites
-- You must set the **Platform** / **Reply URL Type** to **Single-page application** on App Registration portal (if you have other platforms added in your app registration, such as **Web**, you need to make sure the redirect URIs do not overlap. See: [Redirect URI restrictions](./reply-url.md))
-- You must provide [polyfills](./msal-js-use-ie-browser.md) for ES6 features that MSAL.js relies on (e.g. promises) in order to run your apps on **Internet Explorer**
-- Make sure you have migrated your Azure AD apps to [v2 endpoint](../azuread-dev/azure-ad-endpoint-comparison.md) if you haven't already
+- You must set the **Platform** / **Reply URL Type** to **Single-page application** on App Registration portal (if you have other platforms added in your app registration, such as **Web**, you need to make sure the redirect URIs don't overlap. See: [Redirect URI restrictions](./reply-url.md))
+- You must provide [polyfills](./msal-js-use-ie-browser.md) for ES6 features that MSAL.js relies on (for example, promises) in order to run your apps on **Internet Explorer**
+- Migrate your Azure AD apps to [v2 endpoint](v2-overview.md) if you haven't already
## Install and import MSAL

There are two ways to install the MSAL.js 2.x library:
-### Via NPM:
+### Via npm:
```console
npm install @azure/msal-browser
const msalConfig = {
const msalInstance = new msal.PublicClientApplication(msalConfig);
```
-In both ADAL.js and MSAL.js, the authority URI defaults to `https://login.microsoftonline.com/common` if you do not specify it.
+In both ADAL.js and MSAL.js, the authority URI defaults to `https://login.microsoftonline.com/common` if you don't specify it.
> [!NOTE] > If you use the `https://login.microsoftonline.com/common` authority in v2.0, you will allow users to sign in with any Azure AD organization or a personal Microsoft account (MSA). In MSAL.js, if you want to restrict login to any Azure AD account (same behavior as with ADAL.js), use `https://login.microsoftonline.com/organizations` instead.
const getAccessToken = async() => {
## Cache and retrieve tokens
-Like ADAL.js, MSAL.js caches tokens and other authentication artifacts in browser storage, using the [Web Storage API](https://developer.mozilla.org/docs/Web/API/Web_Storage_API). You are recommended to use `sessionStorage` option (see: [configuration](#configure-msal)) because it is more secure in storing tokens that are acquired by your users, but `localStorage` will give you [Single Sign On](./msal-js-sso.md) across tabs and user sessions.
+Like ADAL.js, MSAL.js caches tokens and other authentication artifacts in browser storage, using the [Web Storage API](https://developer.mozilla.org/docs/Web/API/Web_Storage_API). We recommend the `sessionStorage` option (see: [configuration](#configure-msal)) because it's more secure for storing the tokens acquired by your users, but `localStorage` will give you [Single Sign On](./msal-js-sso.md) across tabs and user sessions.
-Importantly, you are not supposed to access the cache directly. Instead, you should use an appropriate MSAL.js API for retrieving authentication artifacts like access tokens or user accounts.
+Importantly, you aren't supposed to access the cache directly. Instead, you should use an appropriate MSAL.js API for retrieving authentication artifacts like access tokens or user accounts.
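To make that concrete, here's a minimal configuration sketch (the client ID is a placeholder, not a value from the article) that keeps the cache in `sessionStorage` and reads cached accounts through the MSAL.js API rather than from browser storage directly:

```javascript
// Sketch: prefer sessionStorage for the MSAL.js token cache.
import { PublicClientApplication } from "@azure/msal-browser";

const msalInstance = new PublicClientApplication({
  auth: { clientId: "00000000-0000-0000-0000-000000000000" }, // placeholder
  cache: {
    cacheLocation: "sessionStorage", // "localStorage" enables SSO across tabs at some security cost
    storeAuthStateInCookie: false,
  },
});

// Read cached accounts through the API instead of touching storage directly.
const accounts = msalInstance.getAllAccounts();
```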
## Renew tokens with refresh tokens
-ADAL.js uses the [OAuth 2.0 implicit flow](./v2-oauth2-implicit-grant-flow.md), which does not return refresh tokens for security reasons (refresh tokens have longer lifetime than access tokens and are therefore more dangerous in the hands of malicious actors). Hence, ADAL.js performs token renewal using a hidden Iframe so that the user is not repeatedly prompted to authenticate.
+ADAL.js uses the [OAuth 2.0 implicit flow](./v2-oauth2-implicit-grant-flow.md), which doesn't return refresh tokens for security reasons (refresh tokens have longer lifetime than access tokens and are therefore more dangerous in the hands of malicious actors). Hence, ADAL.js performs token renewal using a hidden IFrame so that the user isn't repeatedly prompted to authenticate.
-With the auth code flow with PKCE support, apps using MSAL.js 2.x obtain refresh tokens along with ID and access tokens, which can be used to renew them. The usage of refresh tokens is abstracted away, and the developers are not supposed to build logic around them. Instead, MSAL manages token renewal using refresh tokens by itself. Your previous token cache with ADAL.js will not be transferable to MSAL.js, as the token cache schema has changed and incompatible with the schema used in ADAL.js.
+With the auth code flow with PKCE support, apps using MSAL.js 2.x obtain refresh tokens along with ID and access tokens, which can be used to renew them. The usage of refresh tokens is abstracted away, and developers aren't supposed to build logic around them. Instead, MSAL manages token renewal using refresh tokens by itself. Your previous token cache with ADAL.js won't be transferable to MSAL.js, as the token cache schema has changed and is incompatible with the schema used in ADAL.js.
## Handle errors and exceptions
-When using MSAL.js, the most common type of error you might face is the `interaction_in_progress` error. This error is thrown when an interactive API (`loginPopup`, `loginRedirect`, `acquireTokenPopup`, `acquireTokenRedirect`) is invoked while another interactive API is still in progress. The `login*` and `acquireToken*` APIs are *async* so you will need to ensure that the resulting promises have resolved before invoking another one.
+When using MSAL.js, the most common type of error you might face is the `interaction_in_progress` error. This error is thrown when an interactive API (`loginPopup`, `loginRedirect`, `acquireTokenPopup`, `acquireTokenRedirect`) is invoked while another interactive API is still in progress. The `login*` and `acquireToken*` APIs are *async* so you'll need to ensure that the resulting promises have resolved before invoking another one.
-Another common error is `interaction_required`. This error is often resolved by simply initiating an interactive token acquisition prompt. For instance, the web API you are trying to access might have a [conditional access](../conditional-access/overview.md) policy in place, requiring the user to perform [multifactor authentication](../authentication/concept-mfa-howitworks.md) (MFA). In that case, handling `interaction_required` error by triggering `acquireTokenPopup` or `acquireTokenRedirect` will prompt the user for MFA, allowing them to fullfil it.
+Another common error is `interaction_required`. This error is often resolved by initiating an interactive token acquisition prompt. For instance, the web API you're trying to access might have a [conditional access](../conditional-access/overview.md) policy in place, requiring the user to perform [multifactor authentication](../authentication/concept-mfa-howitworks.md) (MFA). In that case, handling the `interaction_required` error by triggering `acquireTokenPopup` or `acquireTokenRedirect` will prompt the user for MFA, allowing them to fulfill it.
-Yet another common error you might face is `consent_required`, which occurs when permissions required for obtaining an access token for a protected resource are not consented by the user. As in `interaction_required`, the solution for `consent_required` error is often initiating an interactive token acquisition prompt, using either `acquireTokenPopup` or `acquireTokenRedirect`.
+Yet another common error you might face is `consent_required`, which occurs when permissions required for obtaining an access token for a protected resource aren't consented by the user. As in `interaction_required`, the solution for `consent_required` error is often initiating an interactive token acquisition prompt, using either `acquireTokenPopup` or `acquireTokenRedirect`.
See for more: [Common MSAL.js errors and how to handle them](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/errors.md)
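A hedged sketch of that fallback pattern (the `msalInstance` and `request` names are assumptions carried over from the configuration above, not code quoted from the article):

```javascript
// Sketch: try silent acquisition first, then fall back to an interactive prompt
// when MSAL signals that user interaction (MFA, consent) is required.
import { InteractionRequiredAuthError } from "@azure/msal-browser";

async function getToken(msalInstance, request) {
  // request should include the scopes and the account to use.
  try {
    const result = await msalInstance.acquireTokenSilent(request);
    return result.accessToken;
  } catch (error) {
    if (error instanceof InteractionRequiredAuthError) {
      // Covers interaction_required and consent_required style failures.
      const result = await msalInstance.acquireTokenPopup(request);
      return result.accessToken;
    }
    throw error; // Anything else is unexpected; surface it to the caller.
  }
}
```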
const callbackId = msalInstance.addEventCallback((message) => {
}); ```
-For performance, it is important to unregister event callbacks when they are no longer needed. See for more: [MSAL.js Events API](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/events.md)
+For performance, it's important to unregister event callbacks when they're no longer needed. See for more: [MSAL.js Events API](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/events.md)
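For example, a minimal sketch of registering a callback for a specific event and unregistering it when it's no longer needed (the event-type check and names here are illustrative, not code from the article):

```javascript
// Sketch: react to a specific event, then unregister the callback when done.
import { EventType } from "@azure/msal-browser";

const loginCallbackId = msalInstance.addEventCallback((message) => {
  if (message.eventType === EventType.LOGIN_SUCCESS) {
    console.log("Signed in:", message.payload.account?.username);
  }
});

// Later, for example when the component that registered it is torn down:
msalInstance.removeEventCallback(loginCallbackId);
```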
## Handle multiple accounts
-ADAL.js has the concept of a *user* to represent the currently authenticated entity. MSAL.js replaces *users* with *accounts*, given the fact that a user can have more than one account associated with her. This also means that you now need to control for multiple accounts and choose the appropriate one to work with. The snippet below illustrates this process:
+ADAL.js has the concept of a *user* to represent the currently authenticated entity. MSAL.js replaces *users* with *accounts*, given the fact that a user can have more than one account associated with them. This also means that you now need to control for multiple accounts and choose the appropriate one to work with. The snippet below illustrates this process:
```javascript
let homeAccountId = null; // Initialize global accountId (can also be localAccountId or username) used for account lookup later, ideally stored in app state
For more information, see: [Accounts in MSAL.js](https://github.com/AzureAD/micr
## Use the wrappers libraries
-If you are developing for Angular and React frameworks, you can use [MSAL Angular v2](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-angular) and [MSAL React](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react), respectively. These wrappers expose the same public API as MSAL.js while offering framework-specific methods and components that can streamline the authentication and token acquisition processes.
+If you're developing for Angular and React frameworks, you can use [MSAL Angular v2](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-angular) and [MSAL React](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react), respectively. These wrappers expose the same public API as MSAL.js while offering framework-specific methods and components that can streamline the authentication and token acquisition processes.
## Run the app
active-directory Msal Error Handling Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-js.md
myMSALObj.acquireTokenSilent(request).then(function (response) {
[!INCLUDE [Active directory error handling claims challenges](../../../includes/active-directory-develop-error-handling-claims-challenges.md)]
-When getting tokens silently (using `acquireTokenSilent`) using MSAL.js, your application may receive errors when a [Conditional Access claims challenge](../azuread-dev/conditional-access-dev-guide.md) such as MFA policy is required by an API you're trying to access.
+When acquiring tokens silently (using `acquireTokenSilent`) with MSAL.js, your application may receive errors when an API you're trying to access requires a [Conditional Access claims challenge](v2-conditional-access-dev-guide.md), such as an MFA policy.
The pattern for handling this error is to make an interactive token acquisition call in MSAL.js, such as `acquireTokenPopup` or `acquireTokenRedirect`, as in the following example:
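A minimal sketch of that fallback is shown below; the `request` shape and the error-code check are assumptions based on common MSAL.js usage, not necessarily this article's exact sample.

```javascript
// "myMSALObj" is the MSAL.js instance used above; "request" is a placeholder with your scopes
// (and, in MSAL.js 2.x, the account to use).
const request = { scopes: ["user.read"] };

myMSALObj.acquireTokenSilent(request)
    .then(function (response) {
        // Use response.accessToken to call the protected API.
        return response;
    })
    .catch(function (error) {
        // Claims challenges (such as an MFA policy) can't be satisfied silently,
        // so fall back to an interactive prompt.
        var interactionErrors = ["interaction_required", "consent_required", "login_required"];
        if (error.errorCode && interactionErrors.indexOf(error.errorCode) !== -1) {
            return myMSALObj.acquireTokenPopup(request);
        }
        throw error;
    });
```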
active-directory Msal Net Differences Adal Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-differences-adal-net.md
# Differences between ADAL.NET and MSAL.NET apps
-Migrating your applications from using ADAL to using MSAL comes with security and resiliency benefits. This article outlines differences between MSAL.NET and ADAL.NET. In most cases you want to use MSAL.NET and the Microsoft identity platform, which is the latest generation of Microsoft Authentication Libraries. Using MSAL.NET, you acquire tokens for users signing-in to your application with Azure AD (work and school accounts), Microsoft (personal) accounts (MSA), or Azure AD B2C.
+Migrating your applications from ADAL to MSAL brings security and resiliency benefits. This article outlines the differences between MSAL.NET and ADAL.NET. All new applications should use MSAL.NET and the Microsoft identity platform, which is the latest generation of the Microsoft Authentication Libraries. Using MSAL.NET, you acquire tokens for users signing in to your application with Azure AD (work and school accounts), Microsoft (personal) accounts (MSA), or Azure AD B2C. If you have an existing application that uses ADAL.NET, migrate it to MSAL.NET.
-If you're already familiar with ADAL.NET and the Azure AD for developers (v1.0) endpoint, get to know [what's different about the Microsoft identity platform?](../azuread-dev/azure-ad-endpoint-comparison.md). You still need to use ADAL.NET if your application needs to sign in users with earlier versions of [Active Directory Federation Services (ADFS)](/windows-server/identity/active-directory-federation-services). For more information, see [ADFS support](https://aka.ms/msal-net-adfs-support).
+You still need to use ADAL.NET if your application needs to sign in users with earlier versions of [Active Directory Federation Services (ADFS)](/windows-server/identity/active-directory-federation-services). For more information, see [ADFS support](https://aka.ms/msal-net-adfs-support).
+
+## Prerequisites
+
+Read the [MSAL overview](./msal-overview.md) to learn more about MSAL.
+
+## Differences
| | **ADAL NET** | **MSAL NET** | |--|--||
If you're already familiar with ADAL.NET and the Azure AD for developers (v1.0)
| **Token acquisition** | In public clients, ADAL uses `AcquireTokenAsync` and `AcquireTokenSilentAsync` for authentication calls. | In public clients, MSAL uses `AcquireTokenInteractive` and `AcquireTokenSilent` for the same authentication calls. The parameters are different from the ADAL ones. <br><br>In confidential client applications, there are [token acquisition methods](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Acquiring-Tokens) with an explicit name depending on the scenario. Another difference is that, in MSAL.NET, you no longer have to pass in the `ClientID` of your application in every AcquireTokenXX call. The `ClientID` is set only once when building `IPublicClientApplication` or `IConfidentialClientApplication`.| | **IAccount and IUser** | ADAL defines the notion of a user through the IUser interface. However, a user is a human or a software agent. As such, a user can own one or more accounts in the Microsoft identity platform (several Azure AD accounts, Azure AD B2C, Microsoft personal accounts). The user can also be responsible for one or more Microsoft identity platform accounts. | MSAL.NET defines the concept of an account (through the IAccount interface). The IAccount interface represents information about a single account. The user can have several accounts in different tenants. MSAL.NET provides better information in guest scenarios, as home account information is provided. You can read more about the [differences between IUser and IAccount](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/msal-net-2-released#iuser-is-replaced-by-iaccount).| | **Cache persistence** | ADAL.NET allows you to extend the `TokenCache` class to implement the desired persistence functionality on platforms without secure storage (.NET Framework and .NET Core) by using the `BeforeAccess` and `BeforeWrite` methods. For details, see [token cache serialization in ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Token-cache-serialization). | MSAL.NET makes the token cache a sealed class, removing the ability to extend it. As such, your implementation of token cache persistence must be in the form of a helper class that interacts with the sealed token cache. This interaction is described in the [token cache serialization in MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization) article. The serialization for a public client application (see [token cache for a public client application](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization#token-cache-for-a-public-client-application)) is different from that of a confidential client application (see [token cache for a web app or web API](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization#token-cache-for-a-public-client-application)). |
-| **Common authority** | ADAL uses Azure AD v1.0. `https://login.microsoftonline.com/common` authority in Azure AD v1.0 (which ADAL uses) allows users to sign in using any AAD organization (work or school) account. Azure AD v1.0 doesn't allow sign in with Microsoft personal accounts. For more information, see [authority validation in ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/AuthenticationContext:-the-connection-to-Azure-AD#authority-validation). | MSAL uses Azure AD v2.0. `https://login.microsoftonline.com/common` authority in Azure AD v2.0 (which MSAL uses) allows users to sign in with any AAD organization (work or school) account or with a Microsoft personal account. To restrict sign in using only organization accounts (work or school account) in MSAL, you'll need to use the `https://login.microsoftonline.com/organizations` endpoint. For details, see the `authority` parameter in [public client application](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-Applications#publicclientapplication). |
+| **Common authority** | ADAL uses Azure AD v1.0. The `https://login.microsoftonline.com/common` authority in Azure AD v1.0 (which ADAL uses) allows users to sign in using any Azure AD organization (work or school) account. Azure AD v1.0 doesn't allow sign-in with Microsoft personal accounts. For more information, see [authority validation in ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/AuthenticationContext:-the-connection-to-Azure-AD#authority-validation). | MSAL uses Azure AD v2.0. The `https://login.microsoftonline.com/common` authority in Azure AD v2.0 (which MSAL uses) allows users to sign in with any Azure AD organization (work or school) account or with a Microsoft personal account. To restrict sign-in to organization accounts (work or school accounts) in MSAL, use the `https://login.microsoftonline.com/organizations` endpoint. For details, see the `authority` parameter in [public client application](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-Applications#publicclientapplication). |
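As a rough, non-authoritative sketch of the MSAL.NET calls referenced in this table, the client ID, scopes, authority, and redirect URI below are placeholders only:

```csharp
// A sketch only: placeholder values, not a production sample.
using System;
using System.Linq;
using Microsoft.Identity.Client;

string clientId = "YOUR_CLIENT_ID";
string[] scopes = { "User.Read" };

IPublicClientApplication app = PublicClientApplicationBuilder.Create(clientId)
    .WithAuthority("https://login.microsoftonline.com/organizations") // work/school accounts only
    .WithRedirectUri("http://localhost")
    .Build();

var accounts = await app.GetAccountsAsync();
AuthenticationResult result;
try
{
    // The client ID isn't passed again; it was set once when building the application.
    result = await app.AcquireTokenSilent(scopes, accounts.FirstOrDefault()).ExecuteAsync();
}
catch (MsalUiRequiredException)
{
    result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
}
Console.WriteLine(result.Account.Username);
```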
## Supported grants
active-directory Msal Net Initializing Client Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-initializing-client-applications.md
Previously updated : 09/18/2019 Last updated : 11/23/2019 -+ #Customer intent: As an application developer, I want to learn about initializing client applications so I can decide if this platform meets my application development needs and requirements. # Initialize client applications using MSAL.NET+ This article describes initializing public client and confidential client applications using the Microsoft Authentication Library for .NET (MSAL.NET). To learn more about the client application types, see [Public client and confidential client applications](msal-client-applications.md).
-With MSAL.NET 3.x, the recommended way to instantiate an application is by using the application builders: `PublicClientApplicationBuilder` and `ConfidentialClientApplicationBuilder`. They offer a powerful mechanism to configure the application either from the code, or from a configuration file, or even by mixing both approaches.
+With MSAL.NET 3.x, the recommended way to instantiate an application is by using the application builders: `PublicClientApplicationBuilder` and `ConfidentialClientApplicationBuilder`. They offer a powerful mechanism to configure the application from code, from a configuration file, or by mixing both approaches.
[API reference documentation](/dotnet/api/microsoft.identity.client) | [Package on NuGet](https://www.nuget.org/packages/Microsoft.Identity.Client/) | [Library source code](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | [Code samples](sample-v2-code.md) ## Prerequisites
-Before initializing an application, you first need to [register it](quickstart-register-app.md) so that your app can be integrated with the Microsoft identity platform. After registration, you may need the following information (which can be found in the Azure portal):
-- The client ID (a string representing a GUID)-- The identity provider URL (named the instance) and the sign-in audience for your application. These two parameters are collectively known as the authority.-- The tenant ID if you are writing a line of business application solely for your organization (also named single-tenant application).-- The application secret (client secret string) or certificate (of type X509Certificate2) if it's a confidential client app.-- For web apps, and sometimes for public client apps (in particular when your app needs to use a broker), you'll have also set the redirectUri where the identity provider will contact back your application with the security tokens.
+Before initializing an application, you first need to [register it](quickstart-register-app.md) so that your app can be integrated with the Microsoft identity platform. After registration, you may need the following information (which can be found in the Azure portal):
+
+- **Application (client) ID** - This is a string representing a GUID.
+- **Directory (tenant) ID** - The identifier of the Azure AD tenant (directory) that provides identity and access management (IAM) capabilities to the applications and resources used by your organization. Specify it if you're writing a line-of-business application solely for your organization (also called a single-tenant application).
+- The identity provider URL (named the **instance**) and the sign-in audience for your application. These two parameters are collectively known as the authority.
+- **Client credentials** - Required for confidential client apps; they can take the form of an application secret (client secret string) or a certificate (of type X509Certificate2).
+- For web apps, and sometimes for public client apps (in particular when your app needs to use a broker), you'll also have set the **Redirect URI** where the identity provider returns security tokens to your application.
## Ways to initialize applications+ There are many different ways to instantiate client applications. ### Initializing a public client application from code
-The following code instantiates a public client application, signing-in users in the Microsoft Azure public cloud, with their work and school accounts, or their personal Microsoft accounts.
+The following code instantiates a public client application that signs in users in the Microsoft Azure public cloud with their work and school accounts or their personal Microsoft accounts.
```csharp IPublicClientApplication app = PublicClientApplicationBuilder.Create(clientId)
IConfidentialClientApplication app = ConfidentialClientApplicationBuilder.Create
.Build(); ```
-As you might know, in production, rather than using a client secret, you might want to share with Azure AD a certificate. The code would then be the following:
+In production, however, certificates are recommended because they're more secure than client secrets. Certificates can be created and uploaded to the Azure portal. The code would then be the following:
```csharp IConfidentialClientApplication app = ConfidentialClientApplicationBuilder.Create(clientId)
IPublicClientApplication app = PublicClientApplicationBuilder.CreateWithApplicat
### Initializing a confidential client application from configuration options
-The same kind of pattern applies to confidential client applications. You can also add other parameters using `.WithXXX` modifiers (here a certificate).
+The same kind of pattern applies to confidential client applications. You can also add other parameters using `.WithXXX` modifiers. This example uses `.WithCertificate`.
```csharp ConfidentialClientApplicationOptions options = GetOptions(); // your own method
IConfidentialClientApplication app = ConfidentialClientApplicationBuilder.Create
## Builder modifiers
-In the code snippets using application builders, a number of `.With` methods can be applied as modifiers (for example, `.WithCertificate` and `.WithRedirectUri`).
+In code snippets that use application builders, many `.With` methods can be applied as modifiers (for example, `.WithCertificate` and `.WithRedirectUri`).
### Modifiers common to public and confidential client applications
-The modifiers you can set on a public client or confidential client application builder are:
-
-|Modifier | Description|
-| | |
-|[`.WithAuthority()`](/dotnet/api/microsoft.identity.client.abstractapplicationbuilder-1.withauthority) | Sets the application default authority to an Azure AD authority, with the possibility of choosing the Azure Cloud, the audience, the tenant (tenant ID or domain name), or providing directly the authority URI.|
-|`.WithAdfsAuthority(string)` | Sets the application default authority to be an ADFS authority.|
-|`.WithB2CAuthority(string)` | Sets the application default authority to be an Azure AD B2C authority.|
-|`.WithClientId(string)` | Overrides the client ID.|
-|`.WithComponent(string)` | Sets the name of the library using MSAL.NET (for telemetry reasons). |
-|`.WithDebugLoggingCallback()` | If called, the application will call `Debug.Write` simply enabling debugging traces. See [Logging](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/logging) for more information.|
-|`.WithExtraQueryParameters(IDictionary<string,string> eqp)` | Set the application level extra query parameters that will be sent in all authentication request. This is overridable at each token acquisition method level (with the same `.WithExtraQueryParameters pattern`).|
-|`.WithHttpClientFactory(IMsalHttpClientFactory httpClientFactory)` | Enables advanced scenarios such as configuring for an HTTP proxy, or to force MSAL to use a particular HttpClient (for instance in ASP.NET Core web apps/APIs).|
-|`.WithLogging()` | If called, the application will call a callback with debugging traces. See [Logging](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/logging) for more information.|
-|`.WithRedirectUri(string redirectUri)` | Overrides the default redirect URI. In the case of public client applications, this will be useful for scenarios involving the broker.|
-|`.WithTelemetry(TelemetryCallback telemetryCallback)` | Sets the delegate used to send telemetry.|
-|`.WithTenantId(string tenantId)` | Overrides the tenant ID, or the tenant description.|
+The modifiers you can set on a public client or confidential client application builder are defined on the `AbstractApplicationBuilder<T>` class. The individual methods are listed in the [Azure SDK for .NET documentation](/dotnet/api/microsoft.identity.client.abstractapplicationbuilder-1).
### Modifiers specific to Xamarin.iOS applications
The modifiers you can set on a public client application builder on Xamarin.iOS
### Modifiers specific to confidential client applications
-The modifiers you can set on a confidential client application builder are:
+The modifiers specific to a confidential client application builder are defined on the `ConfidentialClientApplicationBuilder` class. The individual methods are listed in the [Azure SDK for .NET documentation](/dotnet/api/microsoft.identity.client.confidentialclientapplicationbuilder).
-|Modifier | Description|
-| | |
-|`.WithCertificate(X509Certificate2 certificate)` | Sets the certificate identifying the application with Azure AD.|
-|`.WithClientSecret(string clientSecret)` | Sets the client secret (app password) identifying the application with Azure AD.|
-
-These modifiers are mutually exclusive. If you provide both, MSAL will throw a meaningful exception.
+Modifiers such as `.WithCertificate(X509Certificate2 certificate)` and `.WithClientSecret(string clientSecret)` are mutually exclusive. If you provide both, MSAL will throw a meaningful exception.
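For illustration only, a sketch with placeholder values; choose exactly one credential modifier:

```csharp
using System.Security.Cryptography.X509Certificates;
using Microsoft.Identity.Client;

string clientId = "YOUR_CLIENT_ID";
// Placeholder certificate; load your own certificate however your environment provides it.
var certificate = new X509Certificate2("mycert.pfx", "certificate-password");

// Use .WithCertificate *or* .WithClientSecret, never both on the same builder.
IConfidentialClientApplication app = ConfidentialClientApplicationBuilder.Create(clientId)
    .WithCertificate(certificate)
    // .WithClientSecret("YOUR_CLIENT_SECRET")  // would conflict with .WithCertificate above
    .Build();
```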
### Example of usage of modifiers
-Let's assume that your application is a line-of-business application, which is only for your organization. Then you can write:
+Let's assume that your application is a line-of-business application that's only for your organization. Then you can write:
```csharp IPublicClientApplication app;
app = PublicClientApplicationBuilder.Create(clientId)
.Build(); ```
-Where it becomes interesting is that programming for national clouds has now simplified. If you want your application to be a multi-tenant application in a national cloud, you could write, for instance:
+Where this approach becomes interesting is that it simplifies programming for national clouds. If you want your application to be a multi-tenant application in a national cloud, you could write, for instance:
```csharp IPublicClientApplication app;
app = PublicClientApplicationBuilder.Create(clientId)
.Build(); ```
-There is also an override for ADFS (ADFS 2019 is currently not supported):
+There's also an override for ADFS (ADFS 2019 is currently not supported):
+ ```csharp IPublicClientApplication app; app = PublicClientApplicationBuilder.Create(clientId)
app = PublicClientApplicationBuilder.Create(clientId)
.Build(); ```
-Finally, if you are an Azure AD B2C developer, you can specify your tenant like this:
+Finally, if you're an Azure AD B2C developer, you can specify your tenant like this:
```csharp IPublicClientApplication app;
active-directory Msal Node Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-node-migration.md
When working with ADAL Node, you were likely using the **Azure AD v1.0 endpoint**. Apps migrating from ADAL to MSAL should switch to the **Azure AD v2.0 endpoint**.
-1. Review the [differences between v1 and v2 endpoints](../azuread-dev/azure-ad-endpoint-comparison.md)
-1. Update, if necessary, your existing app registrations accordingly.
- ## Install and import MSAL
-1. install MSAL Node package via NPM:
+1. Install the MSAL Node package via npm:
-```console
-npm install @azure/msal-node
-```
+ ```console
+ npm install @azure/msal-node
+ ```
1. After that, import MSAL Node in your code:
-```javascript
-const msal = require('@azure/msal-node');
-```
+ ```javascript
+ const msal = require('@azure/msal-node');
+ ```
1. Finally, uninstall the ADAL Node package and remove any references in your code:
-```console
-npm uninstall adal-node
-```
+ ```console
+ npm uninstall adal-node
+ ```
## Initialize MSAL
-In ADAL Node, you initialize an `AuthenticationContext` object, which then exposes the methods you can use in different authentication flows (e.g. `acquireTokenWithAuthorizationCode` for web apps). When initializing, the only mandatory parameter is the **authority URI**:
+In ADAL Node, you initialize an `AuthenticationContext` object, which then exposes the methods you can use in different authentication flows (for example, `acquireTokenWithAuthorizationCode` for web apps). When initializing, the only mandatory parameter is the **authority URI**:
```javascript var adal = require('adal-node');
var authorityURI = "https://login.microsoftonline.com/common";
var authenticationContext = new adal.AuthenticationContext(authorityURI); ```
-In MSAL Node, you have two alternatives instead: If you are building a mobile app or a desktop app, you instantiate a `PublicClientApplication` object. The constructor expects a [configuration object](#configure-msal) that contains the `clientId` parameter at the very least. MSAL defaults the authority URI to `https://login.microsoftonline.com/common` if you do not specify it.
+In MSAL Node, you have two alternatives instead: If you're building a mobile app or a desktop app, you instantiate a `PublicClientApplication` object. The constructor expects a [configuration object](#configure-msal) that contains the `clientId` parameter at the very least. MSAL defaults the authority URI to `https://login.microsoftonline.com/common` if you don't specify it.
```javascript const msal = require('@azure/msal-node');
const pca = new msal.PublicClientApplication({
> [!NOTE] > If you use the `https://login.microsoftonline.com/common` authority in v2.0, you will allow users to sign in with any Azure AD organization or a personal Microsoft account (MSA). In MSAL Node, if you want to restrict login to any Azure AD account (same behavior as with ADAL Node), use `https://login.microsoftonline.com/organizations` instead.
-On the other hand, if you are building a web app or a daemon app, you instantiate a `ConfidentialClientApplication` object. With such apps you also need to supply a *client credential*, such as a client secret or a certificate:
+On the other hand, if you're building a web app or a daemon app, you instantiate a `ConfidentialClientApplication` object. With such apps you also need to supply a *client credential*, such as a client secret or a certificate:
```javascript const msal = require('@azure/msal-node');
Both `PublicClientApplication` and `ConfidentialClientApplication`, unlike ADAL'
## Configure MSAL
-When building apps on Microsoft identity platform, your app will contain many parameters related to authentication. In ADAL Node, the `AuthenticationContext` object has a limited number of configuration parameters that you can instantiate it with, while the remaining parameters hang freely in your code (e.g. *clientSecret*):
+When building apps on the Microsoft identity platform, your app will contain many parameters related to authentication. In ADAL Node, the `AuthenticationContext` object has a limited number of configuration parameters that you can instantiate it with, while the remaining parameters hang freely in your code (for example, *clientSecret*):
```javascript var adal = require('adal-node');
var authenticationContext = new adal.AuthenticationContext(authority, validateAu
- `authority`: URL that identifies a token authority - `validateAuthority`: a feature that prevents your code from requesting tokens from a potentially malicious authority-- `cache`: sets the token cache used by this AuthenticationContext instance. If this parameter is not set, then a default, in memory cache is used
+- `cache`: sets the token cache used by this AuthenticationContext instance. If this parameter isn't set, a default in-memory cache is used.
MSAL Node, on the other hand, uses a configuration object of type [Configuration](https://azuread.github.io/microsoft-authentication-library-for-js/ref/modules/_azure_msal_node.html#configuration). It contains the following properties:
const msalConfig = {
const cca = new msal.ConfidentialClientApplication(msalConfig); ```
-As a notable difference, MSAL does not have a flag to disable authority validation and authorities are always validated by default. MSAL compares your requested authority against a list of authorities known to Microsoft or a list of authorities you've specified in your configuration. See for more: [Configuration Options](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/configuration.md)
+As a notable difference, MSAL doesn't have a flag to disable authority validation; authorities are always validated by default. MSAL compares your requested authority against a list of authorities known to Microsoft or a list of authorities you've specified in your configuration. For more information, see [Configuration Options](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/configuration.md)
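As a hedged sketch of declaring the authorities you trust in the MSAL Node configuration (the client ID, B2C host, and policy name below are placeholders):

```javascript
const msal = require('@azure/msal-node');

const pca = new msal.PublicClientApplication({
    auth: {
        clientId: "YOUR_CLIENT_ID",
        authority: "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_signin", // placeholder
        // Hosts that aren't known to Microsoft (such as B2C domains) must be listed here
        // so MSAL treats them as valid authorities.
        knownAuthorities: ["contoso.b2clogin.com"]
    }
});
```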
## Switch to MSAL API
var authorityURI = "https://login.microsoftonline.com/common";
var context = new AuthenticationContext(authorityURI, true, cache); ```
-MSAL Node uses an in-memory token cache by default. You do not need to explicitly import it; in-memory token cache is exposed as part of the `ConfidentialClientApplication` and `PublicClientApplication` classes.
+MSAL Node uses an in-memory token cache by default. You don't need to explicitly import it; the in-memory token cache is exposed as part of the `ConfidentialClientApplication` and `PublicClientApplication` classes.
```javascript const msalTokenCache = publicClientApplication.getTokenCache(); ```
-Importantly, your previous token cache with ADAL Node will not be transferable to MSAL Node, since cache schemas are incompatible. However, you may use the valid refresh tokens your app obtained previously with ADAL Node in MSAL Node. See the section on [refresh tokens](#remove-logic-around-refresh-tokens) for more.
+Importantly, your previous ADAL Node token cache isn't transferable to MSAL Node, because the cache schemas are incompatible. However, you may use the valid refresh tokens your app obtained previously with ADAL Node in MSAL Node. For more information, see the section on [refresh tokens](#remove-logic-around-refresh-tokens).
You can also write your cache to disk by providing your own **cache plugin**. The cache plugin must implement the interface [ICachePlugin](https://azuread.github.io/microsoft-authentication-library-for-js/ref/interfaces/_azure_msal_common.icacheplugin.html). Like logging, caching is part of the configuration options and is created with the initialization of the MSAL Node instance:
const cachePlugin = {
}; ```
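As a rough sketch only, a file-based plugin matching the `ICachePlugin` shape might look like the following; the cache file path is an assumption:

```javascript
const fs = require('fs');

const cachePath = './tokenCache.json'; // assumed location for the persisted cache

const cachePlugin = {
    // Load the persisted cache (if any) before MSAL reads from its in-memory cache.
    beforeCacheAccess: async (cacheContext) => {
        if (fs.existsSync(cachePath)) {
            cacheContext.tokenCache.deserialize(fs.readFileSync(cachePath, 'utf-8'));
        }
    },
    // Persist the cache back to disk if MSAL changed it during the operation.
    afterCacheAccess: async (cacheContext) => {
        if (cacheContext.cacheHasChanged) {
            fs.writeFileSync(cachePath, cacheContext.tokenCache.serialize());
        }
    }
};

// Pass the plugin when constructing the MSAL Node application, for example:
// new msal.PublicClientApplication({ auth: { clientId: "YOUR_CLIENT_ID" }, cache: { cachePlugin } });
```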
-If you are developing [public client applications](./msal-client-applications.md) like desktop apps, the [Microsoft Authentication Extensions for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/extensions/msal-node-extensions) offers secure mechanisms for client applications to perform cross-platform token cache serialization and persistence. Supported platforms are Windows, Mac and Linux.
+If you're developing [public client applications](./msal-client-applications.md) like desktop apps, the [Microsoft Authentication Extensions for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/extensions/msal-node-extensions) offers secure mechanisms for client applications to perform cross-platform token cache serialization and persistence. Supported platforms are Windows, Mac and Linux.
> [!NOTE] > [Microsoft Authentication Extensions for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/extensions/msal-node-extensions) is **not** recommended for web applications, as it may lead to scale and performance issues. Instead, web apps are recommended to persist the cache in session.
In ADAL Node, the refresh tokens (RT) were exposed allowing you to develop solut
- Long running services that do actions including refreshing dashboards on behalf of the users where the users are no longer connected. - WebFarm scenarios for enabling the client to bring the RT to the web service (caching is done client side, encrypted cookie, and not server side).
-MSAL Node, along with other MSALs, does not expose refresh tokens for security reasons. Instead, MSAL handles refreshing tokens for you. As such, you no longer need to build logic for this. However, you **can** make use of your previously acquired (and still valid) refresh tokens from ADAL Node's cache to get a new set of tokens with MSAL Node. To do this, MSAL Node offers `acquireTokenByRefreshToken`, which is equivalent to ADAL Node's `acquireTokenWithRefreshToken` method:
+MSAL Node, along with other MSALs, doesn't expose refresh tokens for security reasons. Instead, MSAL handles refreshing tokens for you. As such, you no longer need to build logic for this. However, you **can** make use of your previously acquired (and still valid) refresh tokens from ADAL Node's cache to get a new set of tokens with MSAL Node. To do this, MSAL Node offers `acquireTokenByRefreshToken`, which is equivalent to ADAL Node's `acquireTokenWithRefreshToken` method:
```javascript var msal = require('@azure/msal-node');
For more information, please refer to the [ADAL Node to MSAL Node migration samp
## Handle errors and exceptions
-When using MSAL Node, the most common type of error you might face is the `interaction_required` error. This error is often resolved by simply initiating an interactive token acquisition prompt. For instance, when using `acquireTokenSilent`, if there are no cached refresh tokens, MSAL Node will not be able to acquire an access token silently. Similarly, the web API you are trying to access might have a [conditional access](../conditional-access/overview.md) policy in place, requiring the user to perform [multi-factor authentication](../authentication/concept-mfa-howitworks.md) (MFA). In such cases, handling `interaction_required` error by triggering `acquireTokenByCode` will prompt the user for MFA, allowing them to fullfil it.
+When using MSAL Node, the most common type of error you might face is the `interaction_required` error. This error is often resolved by initiating an interactive token acquisition prompt. For instance, when using `acquireTokenSilent`, if there are no cached refresh tokens, MSAL Node won't be able to acquire an access token silently. Similarly, the web API you're trying to access might have a [conditional access](../conditional-access/overview.md) policy in place, requiring the user to perform [multi-factor authentication](../authentication/concept-mfa-howitworks.md) (MFA). In such cases, handling the `interaction_required` error by triggering `acquireTokenByCode` will prompt the user for MFA, allowing them to fulfill it.
-Yet another common error you might face is `consent_required`, which occurs when permissions required for obtaining an access token for a protected resource are not consented by the user. As in `interaction_required`, the solution for `consent_required` error is often initiating an interactive token acquisition prompt, using the `acquireTokenByCode` method.
+Yet another common error you might face is `consent_required`, which occurs when the permissions required for obtaining an access token for a protected resource haven't been consented to by the user. As with `interaction_required`, the solution for the `consent_required` error is often initiating an interactive token acquisition prompt, using the `acquireTokenByCode` method.
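A non-authoritative sketch of that flow in an Express-style web app follows; the client credentials, scopes, redirect URI, and route handling are placeholders:

```javascript
const msal = require('@azure/msal-node');

// Placeholder app and request values.
const cca = new msal.ConfidentialClientApplication({
    auth: { clientId: "YOUR_CLIENT_ID", clientSecret: "YOUR_CLIENT_SECRET" }
});
const scopes = ["User.Read"];
const redirectUri = "http://localhost:3000/redirect";

async function getTokenForUser(account, req, res) {
    try {
        // Try the cache first.
        return await cca.acquireTokenSilent({ account, scopes });
    } catch (error) {
        // interaction_required / consent_required: send the user back through the
        // authorization code flow so they can satisfy the prompt (for example, MFA or consent).
        if (error instanceof msal.InteractionRequiredAuthError) {
            const authCodeUrl = await cca.getAuthCodeUrl({ scopes, redirectUri });
            return res.redirect(authCodeUrl);
        }
        throw error;
    }
}

// In the redirect handler, exchange the returned code for tokens:
// const result = await cca.acquireTokenByCode({ code: req.query.code, scopes, redirectUri });
```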
## Run the app
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
For example, if you received the error code "AADSTS50058" then do a search in [h
The [OAuth2.0 spec](https://tools.ietf.org/html/rfc6749#section-5.2) provides guidance on how to handle errors during authentication using the `error` portion of the error response.
-Here is a sample error response:
+Here's a sample error response:
```json {
Here is a sample error response:
| `timestamp` | The time at which the error occurred. | | `trace_id` | A unique identifier for the request that can help in diagnostics. | | `correlation_id` | A unique identifier for the request that can help in diagnostics across components. |
-| `error_uri` | A link to the error lookup page with additional information about the error. This is for developer usage only, do not present it to users. Only present when the error lookup system has additional information about the error - not all error have additional information provided.|
+| `error_uri` | A link to the error lookup page with additional information about the error. This is for developer usage only; don't present it to users. Only present when the error lookup system has additional information about the error - not all errors have additional information provided.|
The `error` field has several possible values - review the protocol documentation links and OAuth 2.0 specs to learn more about specific errors (for example, `authorization_pending` in the [device code flow](v2-oauth2-device-code.md)) and how to react to them. Some common ones are listed here:
The `error` field has several possible values - review the protocol documentatio
| `unauthorized_client` | The authenticated client isn't authorized to use this authorization grant type. | This usually occurs when the client application isn't registered in Azure AD or isn't added to the user's Azure AD tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. | | `invalid_client` | Client authentication failed. | The client credentials aren't valid. To fix, the application administrator updates the credentials. | | `unsupported_grant_type` | The authorization server doesn't support the authorization grant type. | Change the grant type in the request. This type of error should occur only during development and be detected during initial testing. |
-| `invalid_resource` | The target resource is invalid because it doesn't exist, Azure AD can't find it, or it's not correctly configured. | This indicates the resource, if it exists, has not been configured in the tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. During development, this usually indicates an incorrectly setup test tenant or a typo in the name of the scope being requested. |
+| `invalid_resource` | The target resource is invalid because it doesn't exist, Azure AD can't find it, or it's not correctly configured. | This indicates the resource, if it exists, hasn't been configured in the tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. During development, this usually indicates an incorrectly set up test tenant or a typo in the name of the scope being requested. |
| `interaction_required` | The request requires user interaction. For example, an additional authentication step is required. | Retry the request with the same resource, interactively, so that the user can complete any challenges required. | | `temporarily_unavailable` | The server is temporarily too busy to handle the request. | Retry the request. The client application might explain to the user that its response is delayed because of a temporary condition. |
The `error` field has several possible values - review the protocol documentatio
| Error | Description | ||| | AADSTS16000 | SelectUserAccount - This is an interrupt thrown by Azure AD, which results in UI that allows the user to select from among multiple valid SSO sessions. This error is fairly common and may be returned to the application if `prompt=none` is specified. |
-| AADSTS16001 | UserAccountSelectionInvalid - You'll see this error if the user clicks on a tile that the session select logic has rejected. When triggered, this error allows the user to recover by picking from an updated list of tiles/sessions, or by choosing another account. This error can occur because of a code defect or race condition. |
-| AADSTS16002 | AppSessionSelectionInvalid - The app-specified SID requirement was not met. |
+| AADSTS16001 | UserAccountSelectionInvalid - You'll see this error if the user selects a tile that the session select logic has rejected. When triggered, this error allows the user to recover by picking from an updated list of tiles/sessions, or by choosing another account. This error can occur because of a code defect or race condition. |
+| AADSTS16002 | AppSessionSelectionInvalid - The app-specified SID requirement wasn't met. |
| AADSTS16003 | SsoUserAccountNotFoundInResourceTenant - Indicates that the user hasn't been explicitly added to the tenant. | | AADSTS17003 | CredentialKeyProvisioningFailed - Azure AD can't provision the user key. | | AADSTS20001 | WsFedSignInResponseError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. | | AADSTS20012 | WsFedMessageInvalid - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. | | AADSTS20033 | FedMetadataInvalidTenantName - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
-| AADSTS28002 | Provided value for the input parameter scope '{scope}' isn't valid when requesting an access token. Please specify a valid scope. |
-| AADSTS28003 | Provided value for the input parameter scope can't be empty when requesting an access token using the provided authorization code. Please specify a valid scope.|
+| AADSTS28002 | Provided value for the input parameter scope '{scope}' isn't valid when requesting an access token. Specify a valid scope. |
+| AADSTS28003 | Provided value for the input parameter scope can't be empty when requesting an access token using the provided authorization code. Specify a valid scope.|
| AADSTS40008 | OAuth2IdPUnretryableServerError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. | | AADSTS40009 | OAuth2IdPRefreshTokenRedemptionUserError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. | | AADSTS40010 | OAuth2IdPRetryableServerError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS50057 | UserDisabled - The user account is disabled. The user object in Active Directory backing this account has been disabled. An admin can re-enable this account [through PowerShell](/powershell/module/activedirectory/enable-adaccount) | | AADSTS50058 | UserInformationNotProvided - Session information isn't sufficient for single-sign-on. This means that a user isn't signed in. This is a common error that's expected when a user is unauthenticated and has not yet signed in.</br>If this error is encountered in an SSO context where the user has previously signed in, this means that the SSO session was either not found or invalid.</br>This error may be returned to the application if prompt=none is specified. | | AADSTS50059 | MissingTenantRealmAndNoUserInformationProvided - Tenant-identifying information was not found in either the request or implied by any provided credentials. The user can contact the tenant admin to help resolve the issue. |
-| AADSTS50061 | SignoutInvalidRequest - Unable to complete signout. The request was invalid. |
+| AADSTS50061 | SignoutInvalidRequest - Unable to complete sign out. The request was invalid. |
| AADSTS50064 | CredentialAuthenticationError - Credential validation on username or password has failed. | | AADSTS50068 | SignoutInitiatorNotParticipant - Sign out has failed. The app that initiated sign out isn't a participant in the current session. | | AADSTS50070 | SignoutUnknownSessionIdentifier - Sign out has failed. The sign out request specified a name identifier that didn't match the existing session(s). |
The `error` field has several possible values - review the protocol documentatio
| AADSTS50107 | InvalidRealmUri - The requested federation realm object doesn't exist. Contact the tenant admin. | | AADSTS50120 | ThresholdJwtInvalidJwtFormat - Issue with JWT header. Contact the tenant admin. | | AADSTS50124 | ClaimsTransformationInvalidInputParameter - Claims Transformation contains invalid input parameter. Contact the tenant admin to update the policy. |
-| AADSTS501241 | Mandatory Input '{paramName}' missing from transformation id '{transformId}'. This error is returned while Azure AD is trying to build a SAML response to the application. NameID claim or NameIdentifier is mandatory in SAML response and if Azure AD failed to get source attribute for NameID claim, it will return this error. As a resolution, ensure you add claim rules in Azure Portal > Azure Active Directory > Enterprise Applications > Select your application > Single Sign-On > User Attributes & Claims > Unique User Identifier (Name ID). |
+| AADSTS501241 | Mandatory Input '{paramName}' missing from transformation ID '{transformId}'. This error is returned while Azure AD is trying to build a SAML response to the application. The NameID (NameIdentifier) claim is mandatory in the SAML response, and if Azure AD fails to get the source attribute for the NameID claim, it returns this error. As a resolution, ensure that you add claim rules in *Azure portal* > *Azure Active Directory* > *Enterprise Applications* > *Select your application* > *Single Sign-On* > *User Attributes & Claims* > *Unique User Identifier (Name ID)*. |
| AADSTS50125 | PasswordResetRegistrationRequiredInterrupt - Sign-in was interrupted because of a password reset or password registration entry. | | AADSTS50126 | InvalidUserNameOrPassword - Error validating credentials due to invalid username or password. | | AADSTS50127 | BrokerAppNotInstalled - User needs to install a broker app to gain access to this content. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS65005 | MisconfiguredApplication - The app required resource access list does not contain apps discoverable by the resource or The client app has requested access to resource, which was not specified in its required resource access list or Graph service returned bad request or resource not found. If the app supports SAML, you may have configured the app with the wrong Identifier (Entity). To learn more, see the troubleshooting article for error [AADSTS650056](/troubleshoot/azure/active-directory/error-code-aadsts650056-misconfigured-app). | | AADSTS650052 | The app needs access to a service `(\"{name}\")` that your organization `\"{organization}\"` has not subscribed to or enabled. Contact your IT Admin to review the configuration of your service subscriptions. | | AADSTS650054 | The application asked for permissions to access a resource that has been removed or is no longer available. Make sure that all resources the app is calling are present in the tenant you're operating in. |
-| AADSTS650056 | Misconfigured application. This could be due to one of the following: the client has not listed any permissions for '{name}' in the requested permissions in the client's application registration. Or, the admin has not consented in the tenant. Or, check the application identifier in the request to ensure it matches the configured client application identifier. Or, check the certificate in the request to ensure it's valid. Please contact your admin to fix the configuration or consent on behalf of the tenant. Client app ID: {id}. Please contact your admin to fix the configuration or consent on behalf of the tenant.|
+| AADSTS650056 | Misconfigured application. This could be due to one of the following: the client hasn't listed any permissions for '{name}' in the requested permissions in the client's application registration. Or, the admin hasn't consented in the tenant. Or, check the application identifier in the request to ensure it matches the configured client application identifier. Or, check the certificate in the request to ensure it's valid. Client app ID: {ID}. Contact your admin to fix the configuration or consent on behalf of the tenant.|
| AADSTS650057 | Invalid resource. The client has requested access to a resource which isn't listed in the requested permissions in the client's application registration. Client app ID: {appId}({appName}). Resource value from request: {resource}. Resource app ID: {resourceAppId}. List of valid resources from app registration: {regList}. | | AADSTS67003 | ActorNotValidServiceIdentity | | AADSTS70000 | InvalidGrant - Authentication failed. The refresh token isn't valid. Error may be due to the following reasons:<ul><li>Token binding header is empty</li><li>Token binding hash does not match</li></ul> |
The `error` field has several possible values - review the protocol documentatio
| AADSTS700030 | Invalid certificate - subject name in certificate isn't authorized. SubjectNames/SubjectAlternativeNames (up to 10) in token certificate are: {certificateSubjects}. | | AADSTS70004 | InvalidRedirectUri - The app returned an invalid redirect URI. The redirect address specified by the client does not match any configured addresses or any addresses on the OIDC approve list. | | AADSTS70005 | UnsupportedResponseType - The app returned an unsupported response type due to the following reasons:<ul><li>response type 'token' isn't enabled for the app</li><li>response type 'id_token' requires the 'OpenID' scope -contains an unsupported OAuth parameter value in the encoded wctx</li></ul> |
-| AADSTS700054 | Response_type 'id_token' isn't enabled for the application. The application requested an ID token from the authorization endpoint, but did not have ID token implicit grant enabled. Go to Azure Portal > Azure Active Directory > App registrations > Select your application > Authentication > Under 'Implicit grant and hybrid flows', make sure 'ID tokens' is selected.|
+| AADSTS700054 | Response_type 'id_token' isn't enabled for the application. The application requested an ID token from the authorization endpoint, but did not have ID token implicit grant enabled. Go to Azure portal > Azure Active Directory > App registrations > Select your application > Authentication > Under 'Implicit grant and hybrid flows', make sure 'ID tokens' is selected.|
| AADSTS70007 | UnsupportedResponseMode - The app returned an unsupported value of `response_mode` when requesting a token. | | AADSTS70008 | ExpiredOrRevokedGrant - The refresh token has expired due to inactivity. The token was issued on XXX and was inactive for a certain amount of time. | | AADSTS700084 | The refresh token was issued to a single page app (SPA), and therefore has a fixed, limited lifetime of {time}, which can't be extended. It is now expired and a new sign in request must be sent by the SPA to the sign in page. The token was issued on {issueDate}.|
The `error` field has several possible values - review the protocol documentatio
| AADSTS900432 | Confidential Client isn't supported in Cross Cloud request.| | AADSTS90051 | InvalidNationalCloudId - The national cloud identifier contains an invalid cloud identifier. | | AADSTS90055 | TenantThrottlingError - There are too many incoming requests. This exception is thrown for blocked tenants. |
-| AADSTS90056 | BadResourceRequest - To redeem the code for an access token, the app should send a POST request to the `/token` endpoint. Also, prior to this, you should provide an authorization code and send it in the POST request to the `/token` endpoint. Refer to this article for an overview of OAuth 2.0 authorization code flow: [../azuread-dev/v1-protocols-oauth-code.md](../azuread-dev/v1-protocols-oauth-code.md). Direct the user to the `/authorize` endpoint, which will return an authorization_code. By posting a request to the `/token` endpoint, the user gets the access token. Log in the Azure portal, and check **App registrations > Endpoints** to confirm that the two endpoints were configured correctly. |
+| AADSTS90056 | BadResourceRequest - To redeem the code for an access token, the app should send a POST request to the `/token` endpoint, providing the authorization code in that request. Refer to this article for an overview of the [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md). Direct the user to the `/authorize` endpoint, which will return an authorization_code. By posting a request to the `/token` endpoint, the user gets the access token. Sign in to the Azure portal, and check **App registrations > Endpoints** to confirm that the two endpoints were configured correctly. |
| AADSTS90072 | PassThroughUserMfaError - The external account that the user signs in with doesn't exist on the tenant that they signed into; so the user can't satisfy the MFA requirements for the tenant. This error also might occur if the users are synced, but there is a mismatch in the ImmutableID (sourceAnchor) attribute between Active Directory and Azure AD. The account must be added as an external user in the tenant first. Sign out and sign in with a different Azure AD user account. | | AADSTS90081 | OrgIdWsFederationMessageInvalid - An error occurred when the service tried to process a WS-Federation message. The message isn't valid. | | AADSTS90082 | OrgIdWsFederationNotSupported - The selected authentication policy for the request isn't currently supported. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS700005 | InvalidGrantRedeemAgainstWrongTenant - Provided Authorization Code is intended to use against other tenant, thus rejected. OAuth2 Authorization Code must be redeemed against same tenant it was acquired for (/common or /{tenant-ID} as appropriate) | | AADSTS1000000 | UserNotBoundError - The Bind API requires the Azure AD user to also authenticate with an external IDP, which hasn't happened yet. | | AADSTS1000002 | BindCompleteInterruptError - The bind completed successfully, but the user must be informed. |
-| AADSTS100007 | AAD Regional ONLY supports auth either for MSIs OR for requests from MSAL using SN+I for 1P apps or 3P apps in Microsoft infrastructure tenants.|
+| AADSTS100007 | Azure AD Regional ONLY supports auth either for MSIs OR for requests from MSAL using SN+I for 1P apps or 3P apps in Microsoft infrastructure tenants.|
| AADSTS1000031 | Application {appDisplayName} can't be accessed at this time. Contact your administrator. | | AADSTS7000112 | UnauthorizedClientApplicationDisabled - The application is disabled. | | AADSTS7000114| Application 'appIdentifier' isn't allowed to make application on-behalf-of calls.|
-| AADSTS7500529 | The value ΓÇÿSAMLId-GuidΓÇÖ isn't a valid SAML ID - Azure AD uses this attribute to populate the InResponseTo attribute of the returned response. ID must not begin with a number, so a common strategy is to prepend a string like "id" to the string representation of a GUID. For example, id6c1c178c166d486687be4aaf5e482730 is a valid ID. |
+| AADSTS7500529 | The value 'SAMLId-Guid' isn't a valid SAML ID - Azure AD uses this attribute to populate the InResponseTo attribute of the returned response. The ID must not begin with a number, so a common strategy is to prepend a string like "id" to the string representation of a GUID. For example, id6c1c178c166d486687be4aaf5e482730 is a valid ID. |
## Next steps
active-directory Registration Config Sso How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-sso-how-to.md
Enabling federated single sign-on (SSO) in your app is automatically enabled whe
* If you're building a mobile app, you may need additional configurations to enable brokered or non-brokered SSO.
-For Android, see [Enabling Cross App SSO in Android](../azuread-dev/howto-v1-enable-sso-android.md).<br>
+For Android, see [Enabling Cross App SSO in Android](msal-android-single-sign-on.md).
-For iOS, see [Enabling Cross App SSO in iOS](../azuread-dev/howto-v1-enable-sso-ios.md).
+For iOS, see [Enabling Cross App SSO in iOS](single-sign-on-macos-ios.md).
## Next steps [Azure AD SSO](../manage-apps/what-is-single-sign-on.md)<br>
-[Enabling Cross App SSO in Android](../azuread-dev/howto-v1-enable-sso-android.md)<br>
+[Enabling Cross App SSO in Android](msal-android-single-sign-on.md)<br>
-[Enabling Cross App SSO in iOS](../azuread-dev/howto-v1-enable-sso-ios.md)<br>
+[Enabling Cross App SSO in iOS](single-sign-on-macos-ios.md)<br>
[Integrating Apps to AzureAD](./quickstart-register-app.md)<br>
active-directory Scenario Mobile Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-call-api.md
To call the same API several times, or call multiple APIs, then consider the fol
- **Incremental consent**: The Microsoft identity platform allows apps to get user consent when permissions are required rather than all at the start. Each time your app is ready to call an API, it should request only the scopes that it needs. -- **Conditional Access**: When you make several API requests, in certain scenarios you might have to meet additional conditional-access requirements. Requirements can increase in this way if the first request has no conditional-access policies and your app attempts to silently access a new API that requires conditional access. To handle this problem, be sure to catch errors from silent requests, and be prepared to make an interactive request. For more information, see [Guidance for conditional access](../azuread-dev/conditional-access-dev-guide.md).
+- **Conditional Access**: When you make several API requests, in certain scenarios you might have to meet additional conditional-access requirements. Requirements can increase in this way if the first request has no conditional-access policies and your app attempts to silently access a new API that requires conditional access. To handle this problem, be sure to catch errors from silent requests, and be prepared to make an interactive request. For more information, see [Guidance for conditional access](v2-conditional-access-dev-guide.md).
## Call several APIs by using incremental consent and conditional access
active-directory Scenario Protected Web Api Verification Scope App Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-verification-scope-app-roles.md
You can also verify the scopes for the whole controller
##### Verify the scopes on a controller with hardcoded scopes
-The following code snippet shows the usage of the `[RequiredScope]` attribute with hardcoded scopes on the controller.
+The following code snippet shows the usage of the `[RequiredScope]` attribute with hardcoded scopes on the controller. To use `RequiredScopeAttribute`, you'll need to either:
+
+- Use `AddMicrosoftIdentityWebApi` in *Startup.cs*, as seen in [Code configuration](scenario-protected-web-api-app-configuration.md)
+- Or otherwise add the `ScopeAuthorizationRequirement` to your authorization policies, as explained in [authorization policies](https://github.com/AzureAD/microsoft-identity-web/wiki/authorization-policies).
```csharp using Microsoft.Identity.Web
private void ValidateScopes(IEnumerable<string> acceptedScopes)
For a full version of `ValidateScopes` for ASP.NET Core, see [_ScopesRequiredHttpContextExtensions.cs_](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web/Resource/ScopesRequiredHttpContextExtensions.cs) - ## Verify app roles in APIs called by daemon apps If your web API is called by a [daemon app](scenario-daemon-overview.md), that app should require an application permission to your web API. As shown in [Exposing application permissions (app roles)](./scenario-protected-web-api-app-registration.md#expose-application-permissions-app-roles), your API exposes such permissions. One example is the `access_as_application` app role.
public class TodoListController : ApiController
} ``` - Instead, you can use the `[Authorize(Roles = "access_as_application")]` attribute on the controller or an action (or a Razor page). ```CSharp
private void ValidateAppRole(string appRole)
For a full version of `ValidateAppRole` for ASP.NET Core, see [_RolesRequiredHttpContextExtensions.cs_](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web/Resource/RolesRequiredHttpContextExtensions.cs) code. - ### Verify app roles in APIs called on behalf of users Users can also use roles claims in user assignment patterns, as shown in [How to add app roles in your application and receive them in the token](howto-add-app-roles-in-azure-ad-apps.md). If the roles are assignable to both, checking roles will let apps sign in as users and users sign in as apps. We recommend that you declare different roles for users and apps to prevent this confusion.
If you set `AllowWebApiToBeAuthorizedByACL` to true, this is **your responsibili
Move on to the next article in this scenario, [Move to production](scenario-protected-web-api-production.md).+
active-directory Single And Multi Tenant Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-and-multi-tenant-apps.md
In the Azure portal, you can configure your app to be single-tenant or multi-ten
Building great multi-tenant apps can be challenging because of the number of different policies that IT administrators can set in their tenants. If you choose to build a multi-tenant app, follow these best practices: -- Test your app in a tenant that has configured [Conditional Access policies](../azuread-dev/conditional-access-dev-guide.md).
+- Test your app in a tenant that has configured [Conditional Access policies](v2-conditional-access-dev-guide.md).
- Follow the principle of least user access to ensure that your app only requests permissions it actually needs. - Provide appropriate names and descriptions for any permissions you expose as part of your app. This helps users and admins know what they're agreeing to when they attempt to use your app's APIs. For more information, see the best practices section in the [permissions guide](v2-permissions-and-consent.md).
active-directory Recover From Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-deletions.md
Currently, service principals can be listed, viewed, hard deleted, or restored v
### Administrative units
-AUs can be listed, viewed, hard deleted, or restored via the deletedItems Microsoft Graph API. To restore AUs using Microsoft Graph, see [Restore deleted item - Microsoft Graph v1.0.](/graph/api/directory-deleteditems-restore?tabs=http).
+AUs can be listed, viewed, or restored via the deletedItems Microsoft Graph API. To restore AUs using Microsoft Graph, see [Restore deleted item - Microsoft Graph v1.0](/graph/api/directory-deleteditems-restore?tabs=http). Once an AU is deleted, it remains in a soft-deleted state and can be restored for 30 days, but it can't be hard deleted during that time. Soft-deleted AUs are hard deleted automatically after 30 days.
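As a hedged sketch of the restore call (the object ID and access token are placeholders; the endpoint is the documented `deletedItems` restore action):

```csharp
using System.Net.Http;
using System.Net.Http.Headers;

// Sketch: restore a soft-deleted directory object, such as an administrative
// unit, via the Microsoft Graph deletedItems API. ID and token are placeholders.
var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<access-token>");

string deletedObjectId = "00000000-0000-0000-0000-000000000000";
var response = await http.PostAsync(
    $"https://graph.microsoft.com/v1.0/directory/deletedItems/{deletedObjectId}/restore",
    null);

response.EnsureSuccessStatusCode(); // the restored object is returned on success
```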
## Hard deletions
A hard deletion is the permanent removal of an object from your Azure AD tenant.
* Users * Microsoft 365 Groups * Application registration
+* Service principal
+* Administrative unit
> [!IMPORTANT] > All other item types are hard deleted. When an item is hard deleted, it can't be restored. It must be re-created. Neither administrators nor Microsoft can restore hard-deleted items. Prepare for this situation by ensuring that you have processes and documentation to minimize potential disruption from a hard delete.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## November 2022
+
+### General Availability - Use Web Sign-in on Windows for passwordless recovery with Temporary Access Pass
+++
+**Type:** Changed feature
+**Service category:** N/A
+**Product capability:** User Authentication
+
+For users who don't know or use a password, the Temporary Access Pass can now be used to recover Azure AD-joined PCs when the EnableWebSignIn policy is enabled on the device. For more information, see: [Authentication/EnableWebSignIn](/windows/client-management/mdm/policy-csp-authentication#authentication-enablewebsignin).
++++
+### Public Preview - Workload identity federation for managed identities
+++
+**Type:** New feature
+**Service category:** Managed identities for Azure resources
+**Product capability:** Developer Experience
+
+Developers can now use managed identities for their software workloads running anywhere, and for accessing Azure resources, without needing secrets. Key scenarios include:
+
+- Accessing Azure resources from Kubernetes pods running on-premises or in any cloud.
+- GitHub workflows to deploy to Azure, no secrets necessary.
+- Accessing Azure resources from other cloud platforms that support OIDC, such as Google Cloud.
+
+For more information, see:
+- [Configure a user-assigned managed identity to trust an external identity provider (preview)](../develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md)
+- [Workload identity federation](../develop/workload-identity-federation.md)
+- [Use an Azure AD workload identity (preview) on Azure Kubernetes Service (AKS)](/azure/aks/workload-identity-overview)
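A rough sketch of creating a federated identity credential on a user-assigned managed identity through the ARM REST API is below; the subscription, resource group, identity and credential names, issuer, subject, and the `api-version` value are all assumptions to verify against the current Microsoft.ManagedIdentity reference before use.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Sketch: add a federated identity credential to a user-assigned managed
// identity so an external OIDC issuer (here, GitHub Actions) can exchange its
// tokens for the identity without a secret. All values, including the
// api-version, are placeholders/assumptions.
var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<arm-access-token>");

string url =
    "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
    "/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>" +
    "/federatedIdentityCredentials/<credential-name>?api-version=2023-01-31";

string body = """
{
  "properties": {
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": "repo:contoso/app:ref:refs/heads/main",
    "audiences": [ "api://AzureADTokenExchange" ]
  }
}
""";

var response = await http.PutAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
response.EnsureSuccessStatusCode();
```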
++++
+### General Availability - Authenticator on iOS is FIPS 140 compliant
+++
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+Authenticator version 6.6.8 and higher on iOS will be FIPS 140 compliant for all Azure AD authentications using push multifactor authentication (MFA), passwordless phone sign-in (PSI), and time-based one-time passcodes (TOTP). No changes in configuration are required in the Authenticator app or Azure portal to enable this capability. For more information, see: [FIPS 140 compliant for Azure AD authentication](../authentication/concept-authentication-authenticator-app.md#fips-140-compliant-for-azure-ad-authentication).
++++
+### General Availability - New Federated Apps available in Azure AD Application gallery - November 2022
+++
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In November 2022, we've added the following 22 new applications to our App gallery with Federation support:
+
+[Adstream](/active-directory/saas-apps/adstream-tutorial), [Databook](/active-directory/saas-apps/databook-tutorial), [Ecospend IAM](https://ecospend.com/), [Digital Pigeon](/active-directory/saas-apps/digital-pigeon-tutorial), [Drawboard Projects](/active-directory/saas-apps/drawboard-projects-tutorial), [Vellum](https://www.vellum.ink/request-demo), [Veracity](https://aie-veracity.com/connect/azure), [Microsoft OneNote to Bloomberg Note Sync](https://www.bloomberg.com/professional/support/software-updates/), [DX NetOps Portal](/active-directory/saas-apps/dx-netops-portal-tutorial), [itslearning Outlook integration](https://itslearning.com/global/), [Tranxfer](/active-directory/saas-apps/tranxfer-tutorial), [Occupop](https://app.occupop.com/), [Nialli Workspace](https://ws.nialli.com/), [Tideways](https://app.tideways.io/login), [SOWELL](https://manager.sowellapp.com/#/?sso=true), [Prewise Learning](https://prewiselearning.com/), [CAPTOR for Intune](https://www.inkscreen.com/microsoft), [wayCloud Platform](https://app.way-cloud.de/login), [Nura Space Meeting Room](https://play.google.com/store/apps/details?id=com.meetingroom.prod), [Flexopus Exchange Integration](https://help.flexopus.com/de/microsoft-graph-integration), [Ren Systems](https://app.rensystems.com/login), [Nudge Security](https://www.nudgesecurity.io/login)
+
+You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial.
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
++++
+### General Availability - New provisioning connectors in the Azure AD Application Gallery - November 2022
+++
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+We've added the following new applications to our App gallery with Provisioning support. You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Keepabl](../saas-apps/keepabl-provisioning-tutorial.md)
+- [Uber](../saas-apps/uber-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
++++
+### Public Preview - Dynamic Group Pause Functionality
+++
+**Type:** New feature
+**Service category:** Group Management
+**Product capability:** Directory
+
+Admins can now pause and resume the processing of individual dynamic groups in the Entra admin center. For more information, see: [Create or update a dynamic group in Azure Active Directory](../enterprise-users/groups-create-rule.md).
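A hedged sketch of toggling the underlying group property with Microsoft Graph (the group ID and access token are placeholders):

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Sketch: pause dynamic membership processing for one group by setting
// membershipRuleProcessingState to "Paused"; set it back to "On" to resume.
// Group ID and token are placeholders.
var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<access-token>");

var request = new HttpRequestMessage(
    new HttpMethod("PATCH"),
    "https://graph.microsoft.com/v1.0/groups/<group-id>")
{
    Content = new StringContent(
        "{ \"membershipRuleProcessingState\": \"Paused\" }",
        Encoding.UTF8, "application/json")
};

var response = await http.SendAsync(request);
response.EnsureSuccessStatusCode();
```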
++++
+### Public Preview - Enabling extended customization capabilities for sign-in and sign-up pages in Company Branding
+++
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+Update the Azure AD and Microsoft 365 sign-in experience with new company branding capabilities. You can apply your company's brand guidance to authentication experiences with pre-defined templates. For more information, see: [Configure your company branding](../fundamentals/customize-branding.md).
++++
+### Public Preview - Enabling customization capabilities for the Self-Service Password Reset (SSPR) hyperlinks, footer hyperlinks, and browser icons in Company Branding
+++
+**Type:** New feature
+**Service category:** Directory Management
+**Product capability:** Directory
+
+Update the company branding functionality on the Azure AD/Microsoft 365 sign-in experience to allow customizing Self-Service Password Reset (SSPR) hyperlinks, footer hyperlinks, and browser icons. For more information, see: [Configure your company branding](../fundamentals/customize-branding.md).
++++
+### General Availability - Soft Delete for Administrative Units
+++
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Administrative Units now support soft deletion. Admins can now list, view properties of, perform ad hoc hard delete, or restore deleted Administrative Units using Microsoft Graph. When an Administrative Unit is restored from soft delete, all of its configuration is restored, including memberships, admin roles, processing rules, and processing rule state.
+
+This functionality greatly enhances recoverability and resilience when using Administrative Units. Now, when an Administrative Unit is accidentally deleted, it can be restored quickly to the same state it was in at the time of deletion. This removes uncertainty around how things were configured, and makes restoration quick and easy. For more information, see: [Soft deletions](../fundamentals/recover-from-deletions.md#soft-deletions).
++++
+### Public Preview - IPv6 coming to Azure AD
+++
+**Type:** Plan for change
+**Service category:** Identity Protection
+**Product capability:** Platform
+
+With the growing adoption and support of IPv6 across enterprise networks, service providers, and devices, many customers are wondering if their users can continue to access their services and applications from IPv6 clients and networks. Today, we're excited to announce our plan to bring IPv6 support to Microsoft Azure Active Directory (Azure AD). This will allow customers to reach the Azure AD services over both IPv4 and IPv6 network protocols (dual stack).
+For most customers, IPv4 won't completely disappear from their digital landscape, so we aren't planning to require IPv6 or to de-prioritize IPv4 in any Azure Active Directory features or services.
+We'll begin introducing IPv6 support into Azure AD services in a phased approach, beginning March 31, 2023.
+The guidance below is specifically for Azure AD customers who use IPv6 addresses and also use named locations in their Conditional Access policies.
+
+Customers who use named locations to identify specific network boundaries in their organization need to:
+1. Conduct an audit of existing named locations to anticipate potential impact.
+1. Work with your network partner to identify egress IPv6 addresses in use in your environment.
+1. Review and update existing named locations to include the identified IPv6 ranges.
+
+Customers who use Conditional Access location-based policies to restrict and secure access to their apps from specific networks need to:
+1. Conduct an audit of existing Conditional Access policies to identify use of named locations as a condition to anticipate potential impact.
+1. Review and update existing Conditional Access location-based policies to ensure they continue to meet your organization's security requirements.
+
+We'll continue to share additional guidance on IPv6 enablement in Azure AD at this easy-to-remember link: https://aka.ms/azureadipv6.
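For the step of updating named locations, a hedged Microsoft Graph sketch of adding an IPv6 range to an existing IP named location is shown below; the location ID, access token, and CIDR values are placeholders, and `ipRanges` replaces the existing list, so include the ranges you want to keep.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Sketch: update an existing IP named location so it also covers an IPv6
// egress range. ID, token, and CIDR values are placeholders.
var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<access-token>");

string body = """
{
  "@odata.type": "#microsoft.graph.ipNamedLocation",
  "ipRanges": [
    { "@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": "203.0.113.0/24" },
    { "@odata.type": "#microsoft.graph.iPv6CidrRange", "cidrAddress": "2001:db8:1234::/48" }
  ]
}
""";

var request = new HttpRequestMessage(
    new HttpMethod("PATCH"),
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations/<location-id>")
{
    Content = new StringContent(body, Encoding.UTF8, "application/json")
};

var response = await http.SendAsync(request);
response.EnsureSuccessStatusCode();
```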
++++ ## October 2022 ### General Availability - Upgrade Azure AD Provisioning agent to the latest version (version number: 1.1.977.0)
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
With the separation of duties settings on an access package, you can configure t
For example, you have an access package, *Marketing Campaign*, that people across your organization and other organizations can request access to, to work with your organization's marketing department while that campaign is going on. Since employees in the marketing department should already have access to that marketing campaign material, you don't want employees in the marketing department to request access to that access package. Or, you may already have a dynamic group, *Marketing department employees*, with all of the marketing employees in it. You could indicate that the access package is incompatible with the membership of that dynamic group. Then, if a marketing department employee is looking for an access package to request, they couldn't request access to the *Marketing campaign* access package.
-Similarly, you may have an application with two roles - **Western Sales** and **Eastern Sales** - and want to ensure that a user can only have one sales territory at a time. If you have two access packages, one access package **Western Territory** giving the **Western Sales** role and the other access package **Eastern Territory** giving the **Eastern Sales** role, then you can configure
+Similarly, you may have an application with two app roles - **Western Sales** and **Eastern Sales** - representing sales territories, and you want to ensure that a user can only have one sales territory at a time. If you have two access packages, one access package **Western Territory** giving the **Western Sales** role and the other access package **Eastern Territory** giving the **Eastern Sales** role, then you can configure
- the **Western Territory** access package has the **Eastern Territory** package as incompatible, and - the **Eastern Territory** access package has the **Western Territory** package as incompatible.
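A hedged Microsoft Graph sketch of configuring this relationship follows; the access package IDs and token are placeholders, and the call should be repeated in the other direction so each package lists the other as incompatible.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Sketch: mark the Eastern Territory access package as incompatible with the
// Western Territory access package. IDs and token are placeholders; repeat
// with the packages swapped to make the relationship mutual.
var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<access-token>");

string westernId = "<western-territory-access-package-id>";
string easternId = "<eastern-territory-access-package-id>";

string body =
    "{ \"@odata.id\": \"https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackages/"
    + easternId + "\" }";

var response = await http.PostAsync(
    $"https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackages/{westernId}/incompatibleAccessPackages/$ref",
    new StringContent(body, Encoding.UTF8, "application/json"));

response.EnsureSuccessStatusCode();
```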
If there's an exceptional situation where separation of duties rules might need
For example, if there was a scenario that some users would need to have access to both production and deployment environments at the same time, you could create a new access package **Production and development environments**. That access package could have as its resource roles some of the resource roles of the **Production environment** access package and some of the resource roles of the **Development environment** access package.
-If the motivation of the incompatible access is one resource's roles are particularly problematic, then that resource could be omitted from the combined access package, and require explicit administrator assignment of a user to the role. If that is a third party application or your own application, then you can ensure oversight by monitoring those role assignments using the *Application role assignment activity* workbook described in the next section.
+If the motivation for the incompatible access is that one resource's roles are particularly problematic, then that resource could be omitted from the combined access package, and require explicit administrator assignment of a user to the resource's role. If that is a third-party application or your own application, then you can ensure oversight by monitoring those role assignments using the *Application role assignment activity* workbook described in the next section.
Depending on your governance processes, that combined access package could have as its policy either:
active-directory Entitlement Management Access Package Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-resources.md
For more information, see [Compare groups](/office365/admin/create-groups/compar
You can have Azure AD automatically assign users access to an Azure AD enterprise application, including both SaaS applications and your organization's applications integrated with Azure AD, when a user is assigned an access package. For applications that integrate with Azure AD through federated single sign-on, Azure AD will issue federation tokens for users assigned to the application.
-Applications can have multiple roles. When you add an application to an access package, if that application has more than one role, you'll need to specify the appropriate role for those users in each access package. If you're developing applications, you can read more about how those roles are added to your applications in [How to: Configure the role claim issued in the SAML token for enterprise applications](../develop/active-directory-enterprise-app-role-management.md).
+Applications can have multiple app roles defined in their manifest. When you add an application to an access package, if that application has more than one app role, you'll need to specify the appropriate role for those users in each access package. If you're developing applications, you can read more about how those roles are added to your applications in [How to: Configure the role claim issued in the SAML token for enterprise applications](../develop/active-directory-enterprise-app-role-management.md).
> [!NOTE]
-> If an application has multiple roles, and more than one role of that application are in an access package, then the user will receive all the roles. If instead you want users to only have some of the roles, then you will need to create multiple access packages in the catalog, with separate access packages for each of the roles.
+> If an application has multiple roles, and more than one role of that application is in an access package, then the user will receive all of that application's roles that are included in the access package. If instead you want users to have only some of the application's roles, then you'll need to create multiple access packages in the catalog, with a separate access package for each of the application roles.
Once an application role is part of an access package:
Once an application role is part of an access package:
Here are some considerations when selecting an application: -- Applications may also have groups assigned to their roles as well. You can choose to add a group in place of an application role in an access package, however then the application will not be visible to the user as part of the access package in the My Access portal.
+- Applications may also have groups assigned to their app roles. You can choose to add a group in place of an application role in an access package; however, the application won't then be visible to the user as part of the access package in the My Access portal.
- Azure portal may also show service principals for services that cannot be selected as applications. In particular, **Exchange Online** and **SharePoint Online** are services, not applications that have resource roles in the directory, so they cannot be included in an access package. Instead, use group-based licensing to establish an appropriate license for a user who needs access to those services. - Applications which only support Personal Microsoft Account users for authentication, and do not support organizational accounts in your directory, do not have application roles and cannot be added to access package catalogs.
active-directory Entitlement Management Delegate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate.md
# Delegation and roles in Azure AD entitlement management
+In Azure AD, you can use role models to manage access at scale through identity governance.
+
+ * You can use access packages to represent organizational roles, such as "sales representative". An access package representing that enterprise role would include all the access rights that a sales representative might typically need, across multiple resources.
+ * Applications [can define their own roles](../develop/howto-add-app-roles-in-azure-ad-apps.md). For example, if you had a sales application, and that application included the app role "salesperson", you could then [include that role in an access package](entitlement-management-access-package-resources.md).
+ * You can use roles for delegating administrative access. If you have a catalog for all the access packages needed by sales, you could assign someone to be responsible for that catalog, by assigning them a catalog-specific role.
+
+This article discusses how to use roles to manage aspects within Azure AD entitlement management.
+ By default, Global administrators and Identity governance administrators can create and manage all aspects of Azure AD entitlement management. However, the users in these roles may not know all the situations where access packages are required. Typically it's users within the respective departments, teams, or projects who know who they're collaborating with, using what resources, and for how long. Instead of granting unrestricted permissions to non-administrators, you can grant users the least permissions they need to do their job and avoid creating conflicting or inappropriate access rights. This video provides an overview of how to delegate access governance from IT administrator to users who aren't administrators.
Here is one way that Hana could delegate access governance to the marketing, fin
1. Mamta creates a **Marketing** catalog, which is a container of resources.
-1. Mamta adds the resources that her marketing department owns to this catalog.
+1. Mamta adds the resources that the marketing department owns to this catalog.
-1. Mamta can add other people from her department as catalog owners for this catalog, which helps share the catalog management responsibilities.
+1. Mamta can add other people from that department as catalog owners for this catalog, which helps share the catalog management responsibilities.
1. Mamta can further delegate the creation and management of access packages in the Marketing catalog to project managers in the Marketing department. She can do this by assigning them to the access package manager role. An access package manager can create and manage access packages.
The following diagram shows catalogs with resources for the marketing, finance,
After delegation, the marketing department might have roles similar to the following table.
-| User | Job role | Azure AD role | Entitlement management role |
+| User | Organizational role | Azure AD role | Entitlement management role |
| | | | | | Hana | IT administrator | Global administrator or Identity Governance administrator | | | Mamta | Marketing manager | User | Catalog creator and Catalog owner |
After delegation, the marketing department might have roles similar to the follo
## Entitlement management roles
-Entitlement management has the following roles that apply across all catalogs.
+Entitlement management has the following roles, with permissions for administering entitlement management itself, that apply across all catalogs.
| Entitlement management role | Role definition ID | Description | | | | -- | | Catalog creator | `ba92d953-d8e0-4e39-a797-0cbedb0a89e8` | Create and manage catalogs. Typically an IT administrator who isn't a Global administrator, or a resource owner for a collection of resources. The person that creates a catalog automatically becomes the catalog's first catalog owner, and can add more catalog owners. A catalog creator canΓÇÖt manage or see catalogs that they donΓÇÖt own and canΓÇÖt add resources they donΓÇÖt own to a catalog. If the catalog creator needs to manage another catalog or add resources they donΓÇÖt own, they can request to be a co-owner of that catalog or resource. |
-Entitlement management has the following roles that are defined for each particular catalog. An administrator or a catalog owner can add users, groups of users, or service principals to these roles.
+Entitlement management has the following roles that are defined for each particular catalog, for administering access packages and other configuration within a catalog. An administrator or a catalog owner can add users, groups of users, or service principals to these roles.
| Entitlement management role | Role definition ID | Description | | | | -- |
Also, the chosen approver and a requestor of an access package have rights, alth
| Approver | Authorized by a policy to approve or deny requests to access packages, though they can't change the access package definitions. | | Requestor | Authorized by a policy of an access package to request that access package. |
-The following table lists the tasks that the entitlement management roles can do.
+The following table lists the tasks that the entitlement management roles can do within entitlement management.
| Task | Admin | Catalog creator | Catalog owner | Access package manager | Access package assignment manager | | | :: | :: | :: | :: | :: |
A Global administrator can add or remove any group (cloud-created security group
> [!NOTE] > Users that have been assigned the User administrator role will no longer be able to create catalogs or manage access packages in a catalog they do not own. If users in your organization have been assigned the User administrator role to configure catalogs, access packages, or policies in entitlement management, you should instead assign these users the **Identity Governance administrator** role.
-For a user who isn't a global administrator, to add groups, applications, or SharePoint Online sites to a catalog, that user must have *both* an Azure AD directory role or ownership of the resource, and a and catalog owner entitlement management role for the catalog. The following table lists the role combinations that are required to add resources to a catalog. To remove resources from a catalog, you must have the same roles.
+For a user who isn't a global administrator, to add groups, applications, or SharePoint Online sites to a catalog, that user must have *both* an Azure AD directory role or ownership of the resource, and a catalog owner entitlement management role for the catalog. The following table lists the role combinations that are required to add resources to a catalog. To remove resources from a catalog, you must have the same roles.
| Azure AD directory role | Entitlement management role | Can add security group | Can add Microsoft 365 Group | Can add app | Can add SharePoint Online site | | | :: | :: | :: | :: | :: |
For a user who isn't a global administrator, to add groups, applications, or Sha
To determine the least privileged role for a task, you can also reference [Administrator roles by admin task in Azure Active Directory](../roles/delegate-by-task.md#entitlement-management).
-## Manage role assignments programmatically (preview)
+## Manage role assignments to entitlement management roles programmatically (preview)
You can also view and update catalog creators and entitlement management catalog-specific role assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the Graph API to [list the role definitions](/graph/api/rbacapplication-list-roledefinitions) of entitlement management, and [list role assignments](/graph/api/rbacapplication-list-roleassignments) to those role definitions.
-For example, to view the entitlement management-specific roles which a particular user or group has been assigned, use the Graph query to list role assignments, and provide the user or group's ID as the value of the `principalId` query filter, as in
+For example, to view the entitlement management-specific roles that a particular user or group has been assigned, use the Graph query to list role assignments, and provide the user or group's ID as the value of the `principalId` query filter, as in
```http
GET https://graph.microsoft.com/beta/roleManagement/entitlementManagement/roleAssignments?$filter=principalId eq '10850a21-5283-41a6-9df3-3d90051dd111'&$expand=roleDefinition&$select=id,appScopeId,roleDefinition
```
active-directory Identity Governance Applications Define https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-define.md
Once you've identified one or more applications that you want to use Azure AD to
Organizations with compliance requirements or risk management plans will have sensitive or business-critical applications. If this application is an existing application in your environment, you may already have documented the access policies for who 'should have access' to this application. If not, you may need to consult with various stakeholders, such as compliance and risk management teams, to ensure that the policies being used to automate access decisions are appropriate for your scenario.
-1. **Collect the roles and permissions that each application provides.** Some applications may have only a single role, for example, an application that only has the role "User". More complex applications may surface multiple roles to be managed through Azure AD. These application roles typically make broad constraints on the access a user with that role would have within the app. For example, an application that has an administrator persona might have two roles, "User" and "Administrator". Other applications may also rely upon group memberships or claims for finer-grained role checks, which can be provided to the application from Azure AD in provisioning or claims issued using federation SSO protocols, or written to AD as a security group membership. Finally, there may be roles that don't surface in Azure AD - perhaps the application doesn't permit defining the administrators in Azure AD, instead relying upon its own authorization rules to identify administrators.
+1. **Collect the roles and permissions that each application provides.** Some applications may have only a single role, for example, an application that only has the role "User". More complex applications may surface multiple roles to be managed through Azure AD. These application roles typically make broad constraints on the access a user with that role would have within the app. For example, an application that has an administrator persona might have two roles, "User" and "Administrator". Other applications may also rely upon group memberships or claims for finer-grained role checks, which can be provided to the application from Azure AD in provisioning or claims issued using federation SSO protocols, or written to AD as a security group membership. Finally, there may be application-specific roles that don't surface in Azure AD - perhaps the application doesn't permit defining the administrators in Azure AD, instead relying upon its own authorization rules to identify administrators.
> [!Note] > If you're using an application from the Azure AD application gallery that supports provisioning, then Azure AD may import defined roles in the application and automatically update the application manifest with the application's roles automatically, once provisioning is configured.
In this section, you'll write down the organizational policies you plan to use t
1. **Determine how long a user who has been approved for access, should have access, and when that access should go away.** For many applications, a user might retain access indefinitely, until they're no longer affiliated with the organization. In some situations, access may be tied to particular projects or milestones, so that when the project ends, access is removed automatically. Or, if only a few users are using an application through a policy, you may configure quarterly or yearly reviews of everyone's access through that policy, so that there's regular oversight. These processes can ensure users lose access eventually when access is no longer needed, even if there isn't a pre-determined project end date.
-1. **Inquire if there are separation of duties constraints.** For example, you may have an application with two roles, *Western Sales* and *Eastern Sales*, and you want to ensure that a user can only have one sales territory at a time. Include a list of any pairs of roles that are incompatible for your application, so that if a user has one role, they aren't allowed to request the second role.
+1. **Inquire if there are separation of duties constraints.** For example, you may have an application with two app roles, *Western Sales* and *Eastern Sales*, and you want to ensure that a user can only have one sales territory at a time. Include a list of any pairs of app roles that are incompatible for your application, so that if a user has one role, they aren't allowed to request the second role.
1. **Select the appropriate conditional access policy for access to the application.** We recommend that you analyze your applications and group them into applications that have the same resource requirements for the same users. If this is the first federated SSO application you're integrating with Azure AD for identity governance, you may need to create a new conditional access policy to express constraints, such as requirements for Multifactor authentication (MFA) or location-based access. You can configure users to be required to agree to [a terms of use](../conditional-access/require-tou.md). See [plan a conditional access deployment](../conditional-access/plan-conditional-access.md) for more considerations on how to define a conditional access policy.
active-directory Identity Governance Applications Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-deploy.md
Conditional access is only possible for applications that rely upon Azure AD for
1. **Verify users are ready for Azure Active Directory Multi-Factor Authentication.** We recommend requiring Azure AD Multi-Factor Authentication for business critical applications integrated via federation. For these applications, there should be a policy that requires the user to have met a multi-factor authentication requirement prior to Azure AD permitting them to sign into the application. Some organizations may also block access by locations, or [require the user to access from a registered device](../conditional-access/howto-conditional-access-policy-compliant-device.md). If there's no suitable policy already that includes the necessary conditions for authentication, location, device and TOU, then [add a policy to your conditional access deployment](../conditional-access/plan-conditional-access.md). 1. **Bring the application web endpoint into scope of the appropriate conditional access policy**. If you have an existing conditional access policy that was created for another application subject to the same governance requirements, you could update that policy to have it apply to this application as well, to avoid having a large number of policies. Once you have made the updates, check to ensure that the expected policies are being applied. You can see what policies would apply to a user with the [Conditional Access what if tool](../conditional-access/troubleshoot-conditional-access-what-if.md). 1. **Create a recurring access review if any users will need temporary policy exclusions**. In some cases, it may not be possible to immediately enforce conditional access policies for every authorized user. For example, some users may not have an appropriate registered device. If it's necessary to exclude one or more users from the CA policy and allow them access, then configure an access review for the group of [users who are excluded from Conditional Access policies](../governance/conditional-access-exclusion.md).
-1. **Document the token lifetime and applications' session settings.** How long a user who has been denied continued access can continue to use a federated application will depend upon the application's own session lifetime, and on the access token lifetime. The session lifetime for an application depends upon the application itself. To learn more about controlling the lifetime of access tokens, see [configurable token lifetimes](../develop/active-directory-configurable-token-lifetimes.md).
+1. **Document the token lifetime and application's session settings.** How long a user who has been denied continued access can continue to use a federated application will depend upon the application's own session lifetime, and on the access token lifetime. The session lifetime for an application depends upon the application itself. To learn more about controlling the lifetime of access tokens, see [configurable token lifetimes](../develop/active-directory-configurable-token-lifetimes.md).
## Deploy entitlement management policies for automating access assignment
In this section, you'll configure Azure AD entitlement management so users can r
1. **Access packages for governed applications should be in a designated catalog.** If you don't already have a catalog for your application governance scenario, [create a catalog](../governance/entitlement-management-catalog-create.md) in Azure AD entitlement management. 1. **Populate the catalog with necessary resources.** Add the application, as well as any Azure AD groups that the application relies upon, [as resources in that catalog](../governance/entitlement-management-catalog-create.md).
-1. **Create an access package for each role or group which users can request.** For each of the applications' roles or groups, [create an access package](../governance/entitlement-management-access-package-create.md) that includes that role or group as its resource. At this stage of configuring that access package, configure the access package assignment policy for direct assignment, so that only administrators can create assignments. In that policy, set the access review requirements for existing users, if any, so that they don't keep access indefinitely.
+1. **Create an access package for each role or group which users can request.** For each of the applications, and for each of their application roles or groups, [create an access package](../governance/entitlement-management-access-package-create.md) that includes that role or group as its resource. At this stage of configuring that access package, configure the access package assignment policy for direct assignment, so that only administrators can create assignments. In that policy, set the access review requirements for existing users, if any, so that they don't keep access indefinitely.
1. **Configure access packages to enforce separation of duties requirements.** If you have [separation of duties](entitlement-management-access-package-incompatible.md) requirements, then configure the incompatible access packages or existing groups for your access package. If your scenario requires the ability to override a separation of duties check, then you can also [set up additional access packages for those override scenarios](entitlement-management-access-package-incompatible.md#configuring-multiple-access-packages-for-override-scenarios). 1. **Add assignments of existing users, who already have access to the application, to the access packages.** For each access package, assign existing users of the application in that role, or members of that group, to the access package. You can [directly assign a user](entitlement-management-access-package-assignments.md) to an access package using the Azure portal, or in bulk via Graph or PowerShell. 1. **Create policies for users to request access.** In each access package, [create additional access package assignment policies](../governance/entitlement-management-access-package-request-policy.md#open-an-existing-access-package-and-add-a-new-policy-with-different-request-settings) for users to request access. Configure the approval and recurring access review requirements in that policy.
At regular intervals, such as weekly, monthly or quarterly, based on the volume
* **Check that provisioning and deprovisioning are working as expected.** If you had previously configured provisioning of users to the application, then when the results of a review are applied, or a user's assignment to an access package expires, Azure AD will begin deprovisioning denied users from the application. You can [monitor the process of deprovisioning users](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md). If provisioning indicates an error with the application, you can [download the provisioning log](../reports-monitoring/concept-provisioning-logs.md) to investigate if there was a problem with the application.
-* **Update the Azure AD configuration with any role or group changes in the application.** If the application adds new roles, updates existing roles, or relies upon additional groups, then you'll need to update the access packages and access reviews to account for those new roles or groups.
+* **Update the Azure AD configuration with any role or group changes in the application.** If the application adds new application roles in its manifest, updates existing roles, or relies upon additional groups, then you'll need to update the access packages and access reviews to account for those new roles or groups.
## Next steps
active-directory Identity Governance Applications Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-integrate.md
Next, if the application implements a provisioning protocol, then you should con
| Integrated Windows Auth (IWA) | Deploy the [application proxy](../app-proxy/application-proxy.md), configure an application for [Integrated Windows authentication SSO](../app-proxy/application-proxy-configure-single-sign-on-with-kcd.md), and set firewall rules to prevent access to the application's endpoints except via the proxy.| | header-based authentication | Deploy the [application proxy](../app-proxy/application-proxy.md) and configure an application for [header-based SSO](../app-proxy/application-proxy-configure-single-sign-on-with-headers.md) |
-1. If your application has multiple roles, and relies upon Azure AD to send a user's role as part of a user signing into the application, then configure those application roles in Azure AD on your application. You can use the [app roles UI](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui) to add those roles.
+1. If your application has multiple roles, and relies upon Azure AD to send a user's application-specific role as a claim of a user signing into the application, then configure those application roles in Azure AD on your application. You can use the [app roles UI](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui) to add those roles.
1. If the application supports provisioning, then [configure provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md) of assigned users and groups from Azure AD to that application. If this is a private or custom application, you can also select the integration that's most appropriate, based on the location and capabilities of the application.
active-directory Identity Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md
Identity Governance helps organizations achieve a balance between *productivity*
![Identity lifecycle](./media/identity-governance-overview/identity-lifecycle.png)
-For many organizations, identity lifecycle for employees is tied to the representation of that user in an HCM (human capital management) system. Azure AD Premium automatically maintains user identities for people represented in Workday and SuccessFactors in both Active Directory and Azure Active Directory, as described in the [cloud HR application to Azure Active Directory user provisioning planning guide](../app-provisioning/plan-cloud-hr-provision.md). Azure AD Premium also includes [Microsoft Identity Manager](/microsoft-identity-manager/), which can import records from on-premises HCM systems such as SAP HCM, Oracle eBusiness, and Oracle PeopleSoft.
+For many organizations, identity lifecycle for employees is tied to the representation of that user in an HCM (human capital management) system. Azure AD Premium, through inbound provisioning, automatically maintains user identities for people represented in Workday and SuccessFactors in both Active Directory and Azure Active Directory, as described in the [cloud HR application to Azure Active Directory user provisioning planning guide](../app-provisioning/plan-cloud-hr-provision.md). Azure AD Premium also includes [Microsoft Identity Manager](/microsoft-identity-manager/), which can import records from on-premises HCM systems such as SAP HCM, Oracle eBusiness, and Oracle PeopleSoft.
Increasingly, scenarios require collaboration with people outside your organization. [Azure AD B2B](/azure/active-directory/b2b/) collaboration enables you to securely share your organization's applications and services with guest users and external partners from any organization, while maintaining control over your own corporate data. [Azure AD entitlement management](entitlement-management-overview.md) enables you to select which organization's users are allowed to request access and be added as B2B guests to your organization's directory, and ensures that these guests are removed when they no longer need access.
Organizations need a process to manage access beyond what was initially provisio
Typically, IT delegates access approval decisions to business decision makers. Furthermore, IT can involve the users themselves. For example, users that access confidential customer data in a company's marketing application in Europe need to know the company's policies. Guest users may be unaware of the handling requirements for data in an organization to which they've been invited.
-Organizations can automate the access lifecycle process through technologies such as [dynamic groups](../enterprise-users/groups-dynamic-membership.md), coupled with user provisioning to [SaaS apps](../saas-apps/tutorial-list.md) or [apps integrated with SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md). Organizations can also control which [guest users have access to on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md). These access rights can then be regularly reviewed using recurring [Azure AD access reviews](access-reviews-overview.md). [Azure AD entitlement management](entitlement-management-overview.md) also enables you to define how users request access across packages of group and team memberships, application roles, and SharePoint Online roles. For more information, see the [simplifying identity governance tasks with automation](#simplifying-identity-governance-tasks-with-automation) section below to select the appropriate Azure AD features for your access lifecycle automation scenarios.
+Organizations can automate the access lifecycle process through technologies such as [dynamic groups](../enterprise-users/groups-dynamic-membership.md), coupled with user provisioning to [SaaS apps](../saas-apps/tutorial-list.md) or [apps integrated with SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md). Azure AD can also provision access to apps that use [AD groups](../enterprise-users/groups-write-back-portal.md), [other on-premises directories](../app-provisioning/on-premises-ldap-connector-configure.md) or [databases](../app-provisioning/on-premises-sql-connector-configure.md). Organizations can also control which [guest users have access to on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md). These access rights can then be regularly reviewed using recurring [Azure AD access reviews](access-reviews-overview.md). [Azure AD entitlement management](entitlement-management-overview.md) also enables you to define how users request access across packages of group and team memberships, application roles, and SharePoint Online roles. For more information, see the [simplifying identity governance tasks with automation](#simplifying-identity-governance-tasks-with-automation) section below to select the appropriate Azure AD features for your access lifecycle automation scenarios.
Lifecycle access can be automated using workflows. [Workflows can be created](create-lifecycle-workflow.md) to automatically add user to groups, where access to applications and resources are granted. Users can also be moved when their condition within the organization changes to different groups, and can even be removed entirely from all groups.
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
Title: Securing workload identities with Azure AD Identity Protection (Preview)
+ Title: Securing workload identities with Azure AD Identity Protection
description: Workload identity risk in Azure Active Directory Identity Protection
-# Securing workload identities with Identity Protection Preview
+# Securing workload identities with Identity Protection
-Azure AD Identity Protection has historically protected users in detecting, investigating, and remediating identity-based risks. We're now extending these capabilities to workload identities to protect applications, service principals, and Managed Identities.
+Azure AD Identity Protection has historically protected users in detecting, investigating, and remediating identity-based risks. We're now extending these capabilities to workload identities to protect applications and service principals.
A [workload identity](../develop/workload-identities-overview.md) is an identity that allows an application or service principal access to resources, sometimes in the context of a user. These workload identities differ from traditional user accounts as they:
A [workload identity](../develop/workload-identities-overview.md) is an identity
These differences make workload identities harder to manage and put them at higher risk for compromise. > [!IMPORTANT]
-> In public preview, you can secure workload identities with Identity Protection and Azure Active Directory Premium P2 edition active in your tenant. After general availability, additional licenses might be required.
+> Detections are visible only to Workload Identities Premium customers. Customers without Workload Identities Premium licenses still receive all detections but the reporting of details is limited.
## Prerequisites
-To make use of workload identity risk, including the new **Risky workload identities (preview)** blade and the **Workload identity detections** tab in the **Risk detections** blade, in the Azure portal you must have the following.
+To make use of workload identity risk, including the new **Risky workload identities** blade and the **Workload identity detections** tab in the **Risk detections** blade in the portal, you must have the following.
-- Azure AD Premium P2 licensing
+- Workload Identities Premium licensing: You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) in the Azure portal.
- One of the following administrator roles assigned - Global Administrator - Security Administrator
We detect risk on workload identities across sign-in behavior and offline indica
Organizations can find workload identities that have been flagged for risk in one of two locations: 1. Navigate to the [Azure portal](https://portal.azure.com).
-1. Browse to **Azure Active Directory** > **Security** > **Risky workload identities (preview)**.
+1. Browse to **Azure Active Directory** > **Security** > **Risky workload identities**.
1. Or browse to **Azure Active Directory** > **Security** > **Risk detections**.
- 1. Select the **Workload identity detections** tab.
-
+   1. Select the **Workload identity detections** tab.
+
:::image type="content" source="media/concept-workload-identity-risk/workload-identity-detections-in-risk-detections-report.png" alt-text="Screenshot showing risks detected against workload identities in the report." lightbox="media/concept-workload-identity-risk/workload-identity-detections-in-risk-detections-report.png"::: ### Graph APIs
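For reference, a minimal hedged sketch of listing risky workload identities through Microsoft Graph is shown below; the access token is a placeholder, and the beta endpoint and required permissions should be confirmed against the current Identity Protection API reference.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

// Sketch: list service principals that Identity Protection has flagged as
// risky. The access token is a placeholder.
var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<access-token>");

string json = await http.GetStringAsync(
    "https://graph.microsoft.com/beta/identityProtection/riskyServicePrincipals");

Console.WriteLine(json);
```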
Some of the key questions to answer during your investigation include:
The [Azure Active Directory security operations guide for Applications](../fundamentals/security-operations-applications.md) provides detailed guidance on the above investigation areas.
-Once you determine if the workload identity was compromised, dismiss the accountΓÇÖs risk, or confirm the account as compromised in the Risky workload identities (preview) report. You can also select ΓÇ£Disable service principalΓÇ¥ if you want to block the account from further sign-ins.
+Once you determine whether the workload identity was compromised, dismiss the account's risk, or confirm the account as compromised in the Risky workload identities report. You can also select "Disable service principal" if you want to block the account from further sign-ins.
:::image type="content" source="media/concept-workload-identity-risk/confirm-compromise-or-dismiss-risk.png" alt-text="Confirm workload identity compromise or dismiss the risk in the Azure portal." lightbox="media/concept-workload-identity-risk/confirm-compromise-or-dismiss-risk.png":::
active-directory Silverfort Azure Ad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/silverfort-azure-ad-integration.md
-# Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Silverfort
+# Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Silverfort
-[Silverfort](https://www.silverfort.com/) uses innovative agent-less and proxy-less technology to connect your assets on-premises and in the cloud to Azure Active Directory (Azure AD). This solution enables organizations to apply identity protection, visibility, and user experience across environments in Azure AD. It enables universal risk-based monitoring and assessment of authentication activity for on-premises and cloud environments, and proactively prevents threats.
+[Silverfort](https://www.silverfort.com/) uses agent-less and proxy-less technology to connect your assets on-premises and in the cloud to Azure Active Directory (Azure AD). This solution enables organizations to apply identity protection, visibility, and user experience across environments in Azure AD. It enables universal risk-based monitoring and assessment of authentication activity for on-premises and cloud environments, and helps to prevent threats.
-In this tutorial, learn how to integrate your on-premises Silverfort implementation with Azure AD for [hybrid access](../devices/concept-azure-ad-join-hybrid.md).
+In this tutorial, learn how to integrate your on-premises Silverfort implementation with Azure AD.
-Silverfort connects assets with Azure AD. These bridged assets appear as regular applications in Azure AD and can be protected with Conditional Access, single-sign-on (SSO), multifactor authentication (MFA), auditing and more. Use Silverfort to connect assets including:
+Learn more: [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md).
+
+Silverfort connects assets with Azure AD. These bridged assets appear as regular applications in Azure AD and can be protected with [Conditional Access](../conditional-access/overview.md), single-sign-on (SSO), multi-factor authentication (MFA), auditing and more. Use Silverfort to connect assets including:
- Legacy and homegrown applications - Remote desktop and Secure Shell (SSH)
Use this tutorial to configure and test the Silverfort Azure AD bridge in your A
## Silverfort with Azure AD authentication architecture
-The following diagram describes the authentication architecture orchestrated by Silverfort in a hybrid environment.
+The following diagram shows the authentication architecture orchestrated by Silverfort, in a hybrid environment.
![image shows the architecture diagram](./media/silverfort-azure-ad-integration/silverfort-architecture-diagram.png) ### User flow
-1. User sends authentication request to the original Identity provider (IdP) through protocols such as Kerberos, SAML, NTLM, OIDC, and LDAP(s).
-2. The response is routed as-is to Silverfort for validation to check authentication state.
-3. Silverfort provides visibility, discovery, and bridging to Azure AD.
-4. If the application is bridged, the authentication decision is passed to Azure AD. Azure AD evaluates Conditional Access policies and validates authentication.
-5. The authentication state response goes as-is to the IdP by Silverfort.
-6. IdP grants or denies access to the resource.
-7. User is notified if access request is granted or denied.
+1. User sends authentication request to the original Identity Provider (IdP) through protocols such as Kerberos, SAML, NTLM, OIDC, and LDAP(s)
+2. The response is routed as-is to Silverfort for validation to check authentication state
+3. Silverfort provides visibility, discovery, and a bridge to Azure AD
+4. If the application is bridged, the authentication decision passes to Azure AD. Azure AD evaluates Conditional Access policies and validates authentication.
+5. The authentication state response goes as-is from Silverfort to the IdP
+6. IdP grants or denies access to the resource
+7. User is notified if access request is granted or denied
## Prerequisites
-You need Silverfort deployed in your tenant or infrastructure to perform this tutorial. To deploy Silverfort in your tenant or infrastructure, go to [Silverfort](https://www.silverfort.com/). Install Silverfort Desktop app on your workstations.
+You need Silverfort deployed in your tenant or infrastructure to perform this tutorial. To deploy Silverfort in your tenant or infrastructure, go to [Silverfort](https://www.silverfort.com/), and install the Silverfort desktop app on your workstations.
-This tutorial requires you to set up Silverfort Azure AD Adapter in your Azure AD tenant. You'll need:
+Set up Silverfort Azure AD Adapter in your Azure AD tenant:
- An Azure account with an active subscription - You can create an [Azure free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
This tutorial requires you to set up Silverfort Azure AD Adapter in your Azure A
- Cloud Application Administrator - Application Administrator - Service Principal Owner-- The Silverfort Azure AD Adapter application in the Azure AD gallery is pre-configured to support SSO. Add the Silverfort Azure AD Adapter to your tenant as an Enterprise application, from the gallery.
+- The Silverfort Azure AD Adapter application in the Azure AD gallery is pre-configured to support SSO. From the gallery, add the Silverfort Azure AD Adapter to your tenant as an Enterprise application.
## Configure Silverfort and create a policy 1. From a browser, sign in to the Silverfort admin console. 2. In the main menu, navigate to **Settings** and then scroll to **Azure AD Bridge Connector** in the General section. 3. Confirm your tenant ID, and then select **Authorize**.
+4. Select **Save Changes**.
+5. On the **Permissions requested** dialog, select **Accept**.
![image shows azure ad bridge connector](./media/silverfort-azure-ad-integration/azure-ad-bridge-connector.png) ![image shows registration confirmation](./media/silverfort-azure-ad-integration/grant-permission.png)
-4. A registration confirmation appears in a new tab. Close this tab.
+6. A Registration Completed message appears in a new tab. Close this tab.
![image shows registration completed](./media/silverfort-azure-ad-integration/registration-completed.png)
-5. On the **Settings** page, select **Save Changes**.
+7. On the **Settings** page, select **Save Changes**.
![image shows the azure ad adapter](./media/silverfort-azure-ad-integration/silverfort-azure-ad-adapter.png)
-6. Sign in to your Azure AD console. You'll see **Silverfort Azure AD Adapter** application registered as an Enterprise application.
+8. Sign in to your Azure AD console. In the left pane, select **Enterprise applications**. The **Silverfort Azure AD Adapter** application appears as registered.
![image shows enterprise application](./media/silverfort-azure-ad-integration/enterprise-application.png)
-7. In the Silverfort admin console, navigate to the **Policies** page and select **Create Policy**. The **New Policy** dialog appears.
-8. Enter a **Policy Name**, the application name to be created in Azure. For example, if adding multiple servers or applications under this policy, name it to reflect the resources covered by the policy. In the example, we create a policy for the SL-APP1 server.
+9. In the Silverfort admin console, navigate to the **Policies** page and select **Create Policy**. The **New Policy** dialog appears.
+10. Enter a **Policy Name**, which is the application name to be created in Azure. For example, if you're adding multiple servers or applications for this policy, name it to reflect the resources covered by the policy. In the example, we create a policy for the SL-APP1 server.
![image shows define policy](./media/silverfort-azure-ad-integration/define-policy.png)
-9. Select the **Authentication** type, and **Protocol**.
+11. Select the **Auth Type**, and **Protocol**.
-10. In the **Users and Groups** field, select the **edit** icon to configure users affected by the policy. These users' authentication will be bridged to Azure AD.
+12. In the **Users and Groups** field, select the **edit** icon to configure users affected by the policy. These users' authentication bridges to Azure AD.
![image shows user and groups](./media/silverfort-azure-ad-integration/user-groups.png)
-11. Search and select users, groups, or Organization Units (OUs).
+13. Search and select users, groups, or Organization Units (OUs).
![image shows search users](./media/silverfort-azure-ad-integration/search-users.png)
-12. Selected users appear in the **SELECTED** box.
+14. Selected users appear in the **SELECTED** box.
![image shows selected user](./media/silverfort-azure-ad-integration/select-user.png)
-13. Select the **Source** for which the policy will apply. In this example, All Devices are selected.
+15. Select the **Source** for which the policy will apply. In this example, **All Devices** is selected.
![image shows source](./media/silverfort-azure-ad-integration/source.png)
-14. Set the **Destination** to SL-App1. Optional: You can select the **edit** button to change or add more resources or groups of resources.
+16. Set the **Destination** to SL-App1. Optional: You can select the **edit** button to change or add more resources, or groups of resources.
![image shows destination](./media/silverfort-azure-ad-integration/destination.png)
-15. For Action, select **AZURE AD BRIDGE**.
+17. For Action, select **AZURE AD BRIDGE**.
![image shows save azure ad bridge](./media/silverfort-azure-ad-integration/save-azure-ad-bridge.png)
-16. Select **Save** to save the policy. You're prompted to enable or activate it.
+18. Select **Save**. You're prompted to turn on the policy.
![image shows change status](./media/silverfort-azure-ad-integration/change-status.png)
-17. The policy appears on the Policies page, in the Azure AD Bridge section.
+19. In the Azure AD Bridge section, the policy appears on the Policies page.
![image shows add policy](./media/silverfort-azure-ad-integration/add-policy.png)
-18. Return to the Azure AD console, and navigate to **Enterprise applications**. The new Silverfort application appears. You can include this application in [Conditional Access policies](../authentication/tutorial-enable-azure-mfa.md?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json%23create-a-conditional-access-policy).
+20. Return to the Azure AD console, and navigate to **Enterprise applications**. The new Silverfort application appears. You can include this application in Conditional Access policies.
+
+Learn more: [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](../authentication/tutorial-enable-azure-mfa.md?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json%23create-a-conditional-access-policy).
## Next steps
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md
You need the Azure CLI version 2.0.76 or later installed and configured. Run `a
## Limitations and region availability
-AKS clusters can currently be created using availability zones in the following regions:
-
-* Australia East
-* Brazil South
-* Canada Central
-* Central India
-* Central US
-* East Asia
-* East US
-* East US 2
-* France Central
-* Germany West Central
-* Japan East
-* Korea Central
-* North Europe
-* Norway East
-* Southeast Asia
-* South Africa North
-* South Central US
-* Sweden Central
-* Switzerland North
-* UK South
-* US Gov Virginia
-* West Europe
-* West US 2
-* West US 3
+AKS clusters can be created using availability zones in any Azure region that has availability zones.
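For example, a minimal sketch of creating a cluster that spreads its system node pool across three zones; the resource group, cluster name, and region are placeholders:

```azurecli-interactive
# Illustrative sketch: create an AKS cluster whose default node pool spans availability zones 1, 2, and 3.
# Choose a region that supports availability zones; the names below are placeholders.
az group create --name myResourceGroup --location eastus2

az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --zones 1 2 3 \
  --generate-ssh-keys
```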
The following limitations apply when you create an AKS cluster using availability zones:
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
Replace *myKeyVaultName* with the name of your key vault. You will also need a
```azurecli-interactive # Retrieve the Key Vault Id and store it in a variable
-$keyVaultId=az keyvault show --name myKeyVaultName --query "[id]" -o tsv
+keyVaultId=$(az keyvault show --name myKeyVaultName --query "[id]" -o tsv)
# Retrieve the Key Vault key URL and store it in a variable
-$keyVaultKeyUrl=az keyvault key show --vault-name myKeyVaultName --name myKeyName --query "[key.kid]" -o tsv
+keyVaultKeyUrl=$(az keyvault key show --vault-name myKeyVaultName --name myKeyName --query "[key.kid]" -o tsv)
# Create a DiskEncryptionSet az disk-encryption-set create -n myDiskEncryptionSetName -l myAzureRegionName -g myResourceGroup --source-vault $keyVaultId --key-url $keyVaultKeyUrl
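# Illustrative continuation (not part of the original sample): pass the DiskEncryptionSet resource ID
# when creating the cluster so node OS disks use the customer-managed key.
# Resource names are placeholders; verify parameter names against the current Azure CLI reference.
diskEncryptionSetId=$(az disk-encryption-set show -n myDiskEncryptionSetName -g myResourceGroup --query "[id]" -o tsv)

az aks create -n myAKSCluster -g myResourceGroup --node-osdisk-diskencryptionset-id $diskEncryptionSetId --generate-ssh-keys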
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
The `return-response` policy aborts pipeline execution and returns either a defa
```xml <return-response response-variable-name="existing context variable">
+ <set-status/>
<set-header/> <set-body/>
- <set-status/>
</return-response> ```
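As a usage illustration, here's a minimal sketch of the policy returning an unauthorized response; the status code, reason, and header values are arbitrary examples rather than values defined in this article:

```xml
<return-response>
   <set-status code="401" reason="Unauthorized" />
   <set-header name="WWW-Authenticate" exists-action="override">
      <value>Bearer error="invalid_token"</value>
   </set-header>
</return-response>
```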
The `return-response` policy aborts pipeline execution and returns either a defa
| Element | Description | Required | | | -- | -- | | return-response | Root element. | Yes |
+| set-status | A [set-status](api-management-advanced-policies.md#SetStatus) policy statement. | No |
| set-header | A [set-header](api-management-transformation-policies.md#SetHTTPheader) policy statement. | No | | set-body | A [set-body](api-management-transformation-policies.md#SetBody) policy statement. | No |
-| set-status | A [set-status](api-management-advanced-policies.md#SetStatus) policy statement. | No |
### Attributes
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
Title: Configure Java apps description: Learn how to configure Java apps to run on Azure App Service. This article shows the most common configuration tasks. keywords: azure app service, web app, windows, oss, java, tomcat, jboss- ms.devlang: java Last updated 04/12/2019- zone_pivot_groups: app-service-platform-windows-linux
app-service Deploy Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-best-practices.md
Title: Deployment best practices description: Learn about the key mechanisms of deploying to Azure App Service. Find language-specific recommendations and other caveats. keywords: azure app service, web app, deploy, deployment, pipelines, build-- ms.assetid: bb51e565-e462-4c60-929a-2ff90121f41d Last updated 07/31/2019- # Deployment Best Practices
app-service Deploy Container Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-container-github-action.md
Title: Custom container CI/CD from GitHub Actions
description: Learn how to use GitHub Actions to deploy your custom Linux container to App Service from a CI/CD pipeline. Last updated 12/15/2021- ms.devlang: azurecli
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md
Title: Configure CI/CD with GitHub Actions
description: Learn how to deploy your code to Azure App Service from a CI/CD pipeline with GitHub Actions. Customize the build tasks and execute complex deployments. Last updated 12/14/2021-
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
app-service Quickstart Java Uiex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java-uiex.md
Title: 'Quickstart: Create a Java app on Azure App Service' description: Deploy your first Java Hello World to Azure App Service in minutes. The Azure Web App Plugin for Maven makes it convenient to deploy Java apps. keywords: azure, app service, web app, windows, linux, java, maven, quickstart- ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a ms.devlang: java Last updated 08/01/2020- zone_pivot_groups: app-service-platform-windows-linux
app-service Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java.md
Title: 'Quickstart: Create a Java app on Azure App Service' description: Deploy your first Java Hello World to Azure App Service in minutes. The Azure Web App Plugin for Maven makes it convenient to deploy Java apps. keywords: azure, app service, web app, windows, linux, java, maven, quickstart- ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a ms.devlang: java Last updated 03/03/2022- zone_pivot_groups: app-service-platform-environment adobe-target: true
app-service Quickstart Multi Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-multi-container.md
This article requires version 2.0.32 or later of the Azure CLI. If using Azure C
## Download the sample
-For this quickstart, you use the compose file from [Docker](https://docs.docker.com/compose/wordpress/#define-the-project). The configuration file can be found at [Azure Samples](https://github.com/Azure-Samples/multicontainerwordpress).
+For this quickstart, you use the compose file from [Docker](https://docs.docker.com/samples/wordpress/). The configuration file can be found at [Azure Samples](https://github.com/Azure-Samples/multicontainerwordpress).
[!code-yml[Main](../../azure-app-service-multi-container/docker-compose-wordpress.yml)]
app-service Tutorial Multi Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-container-app.md
To complete this tutorial, you need experience with [Docker Compose](https://doc
## Download the sample
-For this tutorial, you use the compose file from [Docker](https://docs.docker.com/compose/wordpress/#define-the-project), but you'll modify it to include Azure Database for MySQL, persistent storage, and Redis. The configuration file can be found at [Azure Samples](https://github.com/Azure-Samples/multicontainerwordpress). For supported configuration options, see [Docker Compose options](configure-custom-container.md#docker-compose-options).
+For this tutorial, you use the compose file from [Docker](https://docs.docker.com/samples/wordpress/), but you'll modify it to include Azure Database for MySQL, persistent storage, and Redis. The configuration file can be found at [Azure Samples](https://github.com/Azure-Samples/multicontainerwordpress). For supported configuration options, see [Docker Compose options](configure-custom-container.md#docker-compose-options).
[!code-yml[Main](../../azure-app-service-multi-container/docker-compose-wordpress.yml)]
application-gateway Self Signed Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/self-signed-certificates.md
Previously updated : 07/23/2019 Last updated : 11/28/2022
To upload the certificate in Application Gateway, you must export the .crt certi
### Azure portal
-To upload the trusted root certificate from the portal, select the **HTTP Settings** and choose the **HTTPS** protocol.
-
-![Add a certificate using the portal](media/self-signed-certificates/portal-cert.png)
+To upload the trusted root certificate from the portal, select the **Backend Settings** and select **HTTPS** in the **Backend protocol**.
### Azure PowerShell Or, you can use Azure CLI or Azure PowerShell to upload the root certificate. The following code is an Azure PowerShell sample.
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
See how data, including name, job title, address, email, and company name, is ex
* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
Previously updated : 08/22/2022 Last updated : 11/28/2022
-monikerRange: '>=form-recog-2.1.0'
recommendations: false
-# Custom document models
+# Azure Form Recognizer Custom document model
+ Form Recognizer uses advanced machine learning technology to detect and extract information from forms and documents and returns the extracted data in a structured JSON output. With Form Recognizer, you can use prebuilt or pre-trained models or you can train standalone custom models. Custom models extract and analyze distinct data and use cases from forms and documents specific to your business. Standalone custom models can be combined to create [composed models](concept-composed-models.md). To create a custom model, you label a dataset of documents with the values you want extracted and train the model on the labeled dataset. You only need five examples of the same form or document type to get started. + ## Custom document model types Custom document models can be one of two types: [**custom template**](concept-custom-template.md) (also called custom form) or [**custom neural**](concept-custom-neural.md) (also called custom document) models. The labeling and training process for both models is identical, but the models differ as follows:
-### Custom template model (v3.0)
+### Custom template model
The custom template or custom form model relies on a consistent visual template to extract the labeled data. The accuracy of your model is affected by variances in the visual structure of your documents. Structured forms such as questionnaires or applications are examples of consistent visual templates.
Your training set will consist of structured documents where the formatting and
> > For more information, *see* [Interpret and improve accuracy and confidence for custom models](concept-accuracy-confidence.md).
-### Custom neural model (v3.0)
+### Custom neural model
The custom neural (custom document) model uses deep learning models and a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you're choosing between the two model types, start with a neural model to determine if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
-## Build mode
+### Build mode
The build custom model operation has added support for the *template* and *neural* custom models. Previous versions of the REST API and SDKs only supported a single build mode that is now known as the *template* mode.
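To show where the build mode surfaces in the v3.0 REST API, here's a rough sketch of a build request; the endpoint, key, model ID, and SAS container URL are placeholders, and the exact payload field names should be confirmed against the current REST reference:

```bash
# Illustrative sketch: build a custom model with an explicit buildMode ("template" or "neural").
# Placeholders: {endpoint}, <your-key>, the model ID, and the SAS URL of the labeled training data container.
curl -X POST "{endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{
        "modelId": "my-custom-model",
        "buildMode": "template",
        "azureBlobSource": { "containerUrl": "<sas-url-to-labeled-training-data>" }
      }'
```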
The following tools are supported by Form Recognizer v3.0:
|||:| |Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[Python SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|***custom-model-id***| ++ The following tools are supported by Form Recognizer v2.1:
-| Feature | Resources | Model ID|
-|||:|
-|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
+| Feature | Resources |
+|||
+|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-### Try building a custom model
+## Build a custom model
-Try extracting data from your specific or unique documents using custom models. You need the following resources:
+Extract data from your specific or unique documents using custom models. You need the following resources:
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/). * A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint. :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot that shows the keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio
+
+## Sample Labeling tool
+
+>[!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
+> * The v3.0 Studio supports any model trained with v2.1 labeled data.
+> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
+
+* The Form Recognizer Sample Labeling tool is an open source tool that enables you to test the latest features of Azure Form Recognizer and Optical Character Recognition (OCR) services.
+
+* Try the [**Sample Labeling tool quickstart**](quickstarts/try-sample-label-tool.md#train-a-custom-model) to get started building and using a custom model.
+++
+## Form Recognizer Studio
> [!NOTE] > Form Recognizer Studio is available with the v3.0 API.
The following table describes the features available with the associated tools a
| Document type | REST API | SDK | Label and Test Models| |--|--|--|--| | Custom form 2.1 | [Form Recognizer 2.1 GA API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) | [Form Recognizer SDK](quickstarts/get-started-v2-1-sdk-rest-api.md?pivots=programming-language-python)| [Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
-| Custom template 3.0 | [Form Recognizer 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
-| Custom neural | [Form Recognizer 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+| Custom template 3.0 | [Form Recognizer 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
+| Custom neural | [Form Recognizer 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
> [!NOTE] > Custom template models trained with the 3.0 API will have a few improvements over the 2.1 API stemming from improvements to the OCR engine. Datasets used to train a custom template model using the 2.1 API can still be used to train a new model using the 3.0 API.
The following table describes the features available with the associated tools a
> Training data: > >* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
- > * Please supply only a single instance of the form per document.
+ > * Please supply only a single instance of the form per document.
> * For filled-in forms, use examples that have all their fields filled in. > * Use forms with different values in each field. >* If your form images are of lower quality, use a larger dataset. For example, use 10 to 15 images.
-The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) doesn't support the BMP file format. This limitation relates to the tool, not the Form Recognizer service.
- ## Supported languages and locales >[!NOTE]
The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) doesn't support
The Form Recognizer v3.0 version introduces more language support for custom models. For a list of supported handwritten and printed text, see [Language support](language-support.md).
-## Form Recognizer v3.0
- Form Recognizer v3.0 introduces several new features and capabilities:
-* **Custom model API (v3.0)**: This version supports signature detection for custom forms. When you train custom models, you can specify certain fields as signatures. When a document is analyzed with your custom model, it indicates whether a signature was detected or not.
+* **Custom model API**: This version supports signature detection for custom forms. When you train custom models, you can specify certain fields as signatures. When a document is analyzed with your custom model, it indicates whether a signature was detected or not.
* [Form Recognizer v3.0 migration guide](v3-migration-guide.md): This guide shows you how to use the v3.0 version in your applications and workflows.
-* [REST API ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument): This API shows you more about the v3.0 version and new capabilities.
+* [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument): This API shows you more about the v3.0 version and new capabilities.
### Try signature detection
The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) doesn't support
After your training set is labeled, you can train your custom model and use it to analyze documents. The signature fields specify whether a signature was detected or not. ++ ## Next steps
-Explore Form Recognizer quickstarts and REST APIs:
+
+* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+++
+* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
-| Quickstart | REST API|
-|--|--|
-|[v3.0 Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) |[Form Recognizer v3.0 API 2022-08-31](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|
-| [v2.1 quickstart](quickstarts/get-started-v2-1-sdk-rest-api.md) | [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/BuildDocumentModel) |
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Previously updated : 11/08/2022 Last updated : 11/28/2022 recommendations: false
recommendations: false
::: moniker range="form-recog-3.0.0"
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) for developers to build intelligent document processing solutions. Form Recognizer applies machine-learning based optical character recognition (OCR) and document understanding technologies to extract print and handwritten text, tables, structure, and key-value pairs from documents. You can also label and train custom models to automate data extraction from structured, semi-structured, and unstructured documents. To learn more about each model, *see* the Concepts articles:
+Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) for developers to build intelligent document processing solutions. Form Recognizer applies machine-learning-based optical character recognition (OCR) and document understanding technologies to extract text, tables, structure, and key-value pairs from documents. You can also label and train custom models to automate data extraction from structured, semi-structured, and unstructured documents. To learn more about each model, *see* the Concepts articles:
| Model type | Model name | ||--|
You can Use Form Recognizer to automate your document processing in applications
|[**General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li></ul> | |[**Layout analysis model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li></ul>| |[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:<ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|
-|[ **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul> |
+|[**W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul> |
|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. |<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>| |[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>| |[**Identity document (ID) model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
You can Use Form Recognizer to automate your document processing in applications
::: moniker range="form-recog-2.1.0"
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) for developers to build intelligent document processing solutions. Form Recognizer applies machine-learning based optical character recognition (OCR) and document understanding technologies to extract print and handwritten text, tables, structure, and key-value pairs from documents. You can also label and train custom models to automate data extraction from structured, semi-structured, and unstructured documents. To learn more about each model, *see* the Concepts articles:
+Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) for developers to build intelligent document processing solutions. Form Recognizer applies machine-learning-based optical character recognition (OCR) and document understanding technologies to extract text, tables, structure, and key-value pairs from documents. You can also label and train custom models to automate data extraction from structured, semi-structured, and unstructured documents. To learn more about each model, *see* the Concepts articles:
| Model type | Model name | ||--|
-|**Document analysis model**| &#9679; [**Layout analysis model**](concept-layout.md) </br> |
-| **Prebuilt models** | &#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**Identity document (ID) model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br>
-| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md)|
+|**Document analysis model**| &#9679; [**Layout analysis model**](concept-layout.md?view=form-recog-2.1.0&preserve-view=true) </br> |
+| **Prebuilt models** | &#9679; [**Invoice model**](concept-invoice.md?view=form-recog-2.1.0&preserve-view=true)</br>&#9679; [**Receipt model**](concept-receipt.md?view=form-recog-2.1.0&preserve-view=true) </br>&#9679; [**Identity document (ID) model**](concept-id-document.md?view=form-recog-2.1.0&preserve-view=true) </br>&#9679; [**Business card model**](concept-business-card.md?view=form-recog-2.1.0&preserve-view=true) </br>
+| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md?view=form-recog-2.1.0&preserve-view=true)|
## Which document processing model should I use?
This section will help you decide which Form Recognizer v2.1 supported model you
| Type of document | Data to extract |Document format | Your best solution | | --|-| -|-|
-|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables and selection marks.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout analysis model**](concept-layout.md)
-|**Invoice**|You want to extract key information such as customer name, billing address, and amount due from invoices.</li></ul> |The invoice document is written or printed in a [supported language](language-support.md#invoice-model).|[**Invoice model**](concept-invoice.md)
- |**Receipt**|You want to extract key information such as merchant name, transaction date, and transaction total from a sales or single-page hotel receipt.</li></ul> |The receipt is written or printed in a [supported language](language-support.md#receipt-model). |[**Receipt model**](concept-receipt.md)|
-|**Identity document (ID)** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**ID document model**](concept-id-document.md)|
-|**Business card**|You want to extract key information such as first name, last name, company name, email address, and phone number from business cards.</li></ul>|The business card document is in English or Japanese text. | [**Business card model**](concept-business-card.md)|
-|**Mixed-type document(s)**| You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| You have various documents with structured, semi-structured, and/or unstructured elements.| [**Custom model**](concept-custom.md)|
+|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables and selection marks.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout analysis model**](concept-layout.md?view=form-recog-2.1.0&preserve-view=true)
+|**Invoice**|You want to extract key information such as customer name, billing address, and amount due from invoices.</li></ul> |The invoice document is written or printed in a [supported language](language-support.md#invoice-model).|[**Invoice model**](concept-invoice.md?view=form-recog-2.1.0&preserve-view=true)
+ |**Receipt**|You want to extract key information such as merchant name, transaction date, and transaction total from a sales or single-page hotel receipt.</li></ul> |The receipt is written or printed in a [supported language](language-support.md#receipt-model). |[**Receipt model**](concept-receipt.md?view=form-recog-2.1.0&preserve-view=true)|
+|**Identity document (ID)** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**ID document model**](concept-id-document.md?view=form-recog-2.1.0&preserve-view=true)|
+|**Business card**|You want to extract key information such as first name, last name, company name, email address, and phone number from business cards.</li></ul>|The business card document is in English or Japanese text. | [**Business card model**](concept-business-card.md?view=form-recog-2.1.0&preserve-view=true)|
+|**Mixed-type document(s)**| You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| You have various documents with structured, semi-structured, and/or unstructured elements.| [**Custom model**](concept-custom.md?view=form-recog-2.1.0&preserve-view=true)|
## Form Recognizer models and development options
Use the links in the table to learn more about each model and browse the API ref
| Model| Description | Development options | |-|--|-|
-|[**Layout analysis**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Identity document (ID) model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Layout analysis**](concept-layout.md?view=form-recog-2.1.0&preserve-view=true) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Custom model**](concept-custom.md?view=form-recog-2.1.0&preserve-view=true) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md)</li><li>[**Sample Labeling Tool**](concept-custom.md?view=form-recog-2.1.0&preserve-view=true#build-a-custom-model)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Invoice model**](concept-invoice.md?view=form-recog-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md#try-it-prebuilt-model)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Receipt model**](concept-receipt.md?view=form-recog-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Identity document (ID) model**](concept-id-document.md?view=form-recog-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Business card model**](concept-business-card.md?view=form-recog-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end
Use the links in the table to learn more about each model and browse the API ref
::: moniker range="form-recog-3.0.0"
-> [!div class="checklist"]
->
-> * Try our [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)
-> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more.
-> * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes.
+* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
::: moniker-end ::: moniker range="form-recog-2.1.0"
-> [!div class="checklist"]
->
-> * Try our [**Sample Labeling online tool**](https://aka.ms/fott-2.1-ga/)
-> * Follow our [**client library / REST API quickstart**](./quickstarts/try-sdk-rest-api.md) to get started extracting data from your documents. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) to learn more.
-> * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes.
+* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
::: moniker-end
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
You'll need the following to get started:
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
> [!TIP] > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
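If you prefer the command line, the following is a minimal sketch of creating a single-service Form Recognizer resource and retrieving its key and endpoint with the Azure CLI. The resource name, resource group, and location are placeholders; the portal steps above remain the documented path.

```azurecli
# Create a single-service Form Recognizer resource on the free tier (placeholders in angle brackets).
az cognitiveservices account create --name <your-resource-name> --resource-group <your-resource-group> --kind FormRecognizer --sku F0 --location westus --yes

# Retrieve the key and endpoint used by the Sample Labeling tool.
az cognitiveservices account keys list --name <your-resource-name> --resource-group <your-resource-group>
az cognitiveservices account show --name <your-resource-name> --resource-group <your-resource-group> --query "properties.endpoint"
```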
Form Recognizer offers several prebuilt models to choose from. Each model has it
1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
- :::image type="content" source="../media/fott-select-form-type.png" alt-text="Screenshot: select form type dropdown menu.":::
+ :::image type="content" source="../media/fott-select-form-type.png" alt-text="Screenshot of the 'select-form-type' dropdown menu.":::
1. Select **Run analysis**. The Form Recognizer Sample Labeling tool will call the Analyze Prebuilt API and analyze the document.
Train a custom model to analyze and extract data from forms and documents specif
1. Select the save button at the top of the page to save the changes.
- CORS should now be configured to use the storage account from Form Recognizer Studio.
- ### Use the Sample Labeling tool 1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
Choose the Train icon on the left pane to open the Training page. Then select th
* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](./get-started-sdks-rest-api.md?pivots=programming-language-rest-api) or [client library](./get-started-sdks-rest-api.md).
-* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by labeling additional forms and retraining to create a new model. We recommend starting by labeling five forms analyzing and testing the results and then if needed adding more forms as needed.
+* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by labeling more forms and retraining to create a new model. We recommend starting by labeling five forms, analyzing and testing the results, and then adding more forms as needed.
* The list of tags, and the estimated accuracy per tag. For more information, _see_ [Interpret and improve accuracy and confidence](../concept-accuracy-confidence.md). :::image type="content" source="../media/label-tool/custom-3.jpg" alt-text="Training view tool.":::
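For example, the model ID from the training results can be used to run a prediction through the v2.1 REST API. The endpoint, key, model ID, and document URL below are placeholders; this is a sketch rather than a full walkthrough.

```console
# Submit a document for analysis with your trained custom model (placeholders in angle brackets).
curl -X POST "https://<your-endpoint>/formrecognizer/v2.1/custom/models/<model-id>/analyze" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"source": "<document-url>"}'

# The response includes an Operation-Location header; poll that URL to retrieve the results.
curl -H "Ocp-Apim-Subscription-Key: <your-key>" "<operation-location-url>"
```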
applied-ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-migration-guide.md
In this migration guide, you've learned how to upgrade your existing Form Recogn
* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) * [What is Form Recognizer?](overview.md)
-* [Form Recognizer quickstart](./quickstarts/try-sdk-rest-api.md)
-
+* [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md)
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
Azure Attestation is the preferred choice for attesting TEEs as it offers the fo
- Protects its data while in use with implementation in an SGX enclave - Highly available service
+## How to establish trust with Azure Attestation
+
+1. **Verify that the attestation token is generated by Azure Attestation** - Attestation tokens generated by Azure Attestation are signed using a self-signed certificate. The URL of the signing certificates is exposed via an [OpenID metadata endpoint](/rest/api/attestation/metadata-configuration/get?tabs=HTTP#get-openid-metadata). The relying party can retrieve the signing certificate and verify the signature of the attestation token. For more information, see the [code samples](https://github.com/Azure-Samples/microsoft-azure-attestation/blob/master/sgx.attest.sample.oe.sdk/validatequotes.net/Helpers/JwtValidationHelper.cs#L21-L22); a minimal sketch of retrieving the signing metadata appears after the rollout schedule below.
+
+2. **Verify that Azure Attestation is running inside an SGX enclave** - The token signing certificates include the SGX quote of the TEE inside which Azure Attestation runs. If the relying party prefers to check whether Azure Attestation is running inside a valid SGX enclave, the SGX quote can be retrieved from the signing certificate and validated locally. For more information, see the [code samples](https://github.com/Azure-Samples/microsoft-azure-attestation/blob/e7f296ee2ca1dd93b75acdc6bab0cc9a6a20c17c/sgx.attest.sample.oe.sdk/validatequotes.net/MaaQuoteValidator.cs#L62-L65).
+
+3. **Validate the binding of the Azure Attestation SGX quote with the key that signed the attestation token** - The relying party can verify that the hash of the public key that signed the attestation token matches the report data field of the Azure Attestation SGX quote. For more information, see the [code samples](https://github.com/Azure-Samples/microsoft-azure-attestation/blob/e7f296ee2ca1dd93b75acdc6bab0cc9a6a20c17c/sgx.attest.sample.oe.sdk/validatequotes.net/MaaQuoteValidator.cs#L78-L105).
+
+4. **Validate that Azure Attestation code measurements match the Azure published values** - The SGX quote embedded in the attestation token signing certificates includes code measurements of Azure Attestation, such as MRSIGNER. If the relying party wants to validate that the SGX quote belongs to Azure Attestation running inside Azure, the MRSIGNER value can be retrieved from the SGX quote in the attestation token signing certificate and compared with the value provided by the Azure Attestation team. To perform this validation, submit a request on the [Azure support](/support/) page. The Azure Attestation team will reach out to you when MRSIGNER is planned for rotation.
+
+MRSIGNER of Azure Attestation is expected to change when code signing certificates are rotated. The Azure Attestation team follows this rollout schedule for every MRSIGNER rotation:
+
+1. The Azure Attestation team announces the upcoming MRSIGNER value with a two-month grace period for making the relevant code changes.
+2. After the two-month grace period, Azure Attestation starts using the new MRSIGNER value.
+3. Three months after the notification date, Azure Attestation stops using the old MRSIGNER value.
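+As an illustrative sketch of the first step only (the provider URI is a placeholder, and `curl` and `jq` are assumed to be available), you can download the OpenID metadata and the signing certificates it advertises before validating a token signature:
+
+```console
+# Placeholder: replace with your attestation provider URI.
+ATTEST_URI="https://<your-provider>.<region>.attest.azure.net"
+
+# Retrieve the OpenID metadata; it advertises the jwks_uri that hosts the token signing certificates.
+curl -s "${ATTEST_URI}/.well-known/openid-configuration"
+
+# Download the signing certificates from the advertised jwks_uri for use in signature verification.
+curl -s "$(curl -s "${ATTEST_URI}/.well-known/openid-configuration" | jq -r .jwks_uri)"
+```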
++ ## Business Continuity and Disaster Recovery (BCDR) support [Business Continuity and Disaster Recovery](../availability-zones/cross-region-replication-azure.md) (BCDR) for Azure Attestation enables you to mitigate service disruptions resulting from significant availability issues or disaster events in a region.
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md
Title: "Upgrade Azure Arc-enabled Kubernetes agents"-- Last updated 09/09/2022 description: "Control agent upgrades for Azure Arc-enabled Kubernetes"
-keywords: "Kubernetes, Arc, Azure, K8s, containers, agent, update, auto upgrade"
# Upgrade Azure Arc-enabled Kubernetes agents
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
Title: "Azure RBAC for Azure Arc-enabled Kubernetes clusters"-- Previously updated : 04/05/2021 Last updated : 11/28/2022 description: "Use Azure RBAC for authorization checks on Azure Arc-enabled Kubernetes clusters."
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
```azurecli az extension add --name connectedk8s ```
-
- If the `connectedk8s` extension is already installed, you can update it to the latest version by using the following command:
+
+ If the `connectedk8s` extension is already installed, you can update it to the latest version by using the following command:
```azurecli az extension update --name connectedk8s ``` - Connect an existing Azure Arc-enabled Kubernetes cluster:
- - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
+ - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
> [!NOTE] > You can't set up this feature for managed Kubernetes offerings of cloud providers like Elastic Kubernetes Service or Google Kubernetes Engine where the user doesn't have access to the API server of the cluster. For Azure Kubernetes Service (AKS) clusters, this [feature is available natively](../../aks/manage-azure-rbac.md) and doesn't require the AKS cluster to be connected to Azure Arc. This feature isn't supported on AKS on Azure Stack HCI. ## Set up Azure AD applications - ### [AzureCLI >= v2.37](#tab/AzureCLI)
-#### Create a server application
-1. Create a new Azure AD application and get its `appId` value. This value is used in later steps as `serverApplicationId`.
- ```azurecli
- CLUSTER_NAME="<clusterName>"
- TENANT_ID="<tenant>"
- SERVER_UNIQUE_SUFFIX="<identifier_suffix>"
- SERVER_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Server" --identifier-uris "api://${TENANT_ID}/${SERVER_UNIQUE_SUFFIX}" --query appId -o tsv)
- echo $SERVER_APP_ID
- ```
-
-1. To grant "Sign in and read user profile" API permissions to the server application. Copy this JSON and save it in a file called oauth2-permissions.json:
-
- ```json
- {
- "oauth2PermissionScopes": [
- {
- "adminConsentDescription": "Sign in and read user profile",
- "adminConsentDisplayName": "Sign in and read user profile",
- "id": "<unique_guid>",
- "isEnabled": true,
- "type": "User",
- "userConsentDescription": "Sign in and read user profile",
- "userConsentDisplayName": "Sign in and read user profile",
- "value": "User.Read"
- }
- ]
- }
- ```
+#### Create a server application
-1. Update the application's group membership claims. Run the commands in the same directory as `oauth2-permissions.json` file. RBAC for Azure Arc-enabled Kubernetes requires [`signInAudience` to be set to **AzureADMyOrg**](../../active-directory/develop/supported-accounts-validation.md):
-
- ```azurecli
- az ad app update --id "${SERVER_APP_ID}" --set groupMembershipClaims=All
- az ad app update --id ${SERVER_APP_ID} --set api=@oauth2-permissions.json
- az ad app update --id ${SERVER_APP_ID} --set signInAudience=AzureADMyOrg
- SERVER_OBJECT_ID=$(az ad app show --id "${SERVER_APP_ID}" --query "id" -o tsv)
- az rest --method PATCH --headers "Content-Type=application/json" --uri https://graph.microsoft.com/v1.0/applications/${SERVER_OBJECT_ID}/ --body '{"api":{"requestedAccessTokenVersion": 1}}'
- ```
+1. Create a new Azure AD application and get its `appId` value. This value is used in later steps as `serverApplicationId`.
+ ```azurecli
+ CLUSTER_NAME="<clusterName>"
+ TENANT_ID="<tenant>"
+ SERVER_UNIQUE_SUFFIX="<identifier_suffix>"
+ SERVER_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Server" --identifier-uris "api://${TENANT_ID}/${SERVER_UNIQUE_SUFFIX}" --query appId -o tsv)
+ echo $SERVER_APP_ID
+ ```
+
+1. To grant "Sign in and read user profile" API permissions to the server application, copy this JSON and save it in a file called oauth2-permissions.json:
+
+ ```json
+ {
+ "oauth2PermissionScopes": [
+ {
+ "adminConsentDescription": "Sign in and read user profile",
+ "adminConsentDisplayName": "Sign in and read user profile",
+ "id": "<unique_guid>",
+ "isEnabled": true,
+ "type": "User",
+ "userConsentDescription": "Sign in and read user profile",
+ "userConsentDisplayName": "Sign in and read user profile",
+ "value": "User.Read"
+ }
+ ]
+ }
+ ```
+
+1. Update the application's group membership claims. Run the commands in the same directory as the `oauth2-permissions.json` file. RBAC for Azure Arc-enabled Kubernetes requires [`signInAudience` to be set to **AzureADMyOrg**](../../active-directory/develop/supported-accounts-validation.md):
+
+ ```azurecli
+ az ad app update --id "${SERVER_APP_ID}" --set groupMembershipClaims=All
+ az ad app update --id ${SERVER_APP_ID} --set api=@oauth2-permissions.json
+ az ad app update --id ${SERVER_APP_ID} --set signInAudience=AzureADMyOrg
+ SERVER_OBJECT_ID=$(az ad app show --id "${SERVER_APP_ID}" --query "id" -o tsv)
+ az rest --method PATCH --headers "Content-Type=application/json" --uri https://graph.microsoft.com/v1.0/applications/${SERVER_OBJECT_ID}/ --body '{"api":{"requestedAccessTokenVersion": 1}}'
+ ```
1. Create a service principal and get its `password` field value. This value is required later as `serverApplicationSecret` when you're enabling this feature on the cluster. Please note that this secret is valid for 1 year by default and will need to be [rotated after that](./azure-rbac.md#refresh-the-secret-of-the-server-application). Please refer to [this](/cli/azure/ad/sp/credential?view=azure-cli-latest&preserve-view=true#az-ad-sp-credential-reset) to set a custom expiry duration.
- ```azurecli
- az ad sp create --id "${SERVER_APP_ID}"
- SERVER_APP_SECRET=$(az ad sp credential reset --id "${SERVER_APP_ID}" --query password -o tsv)
- ```
+ ```azurecli
+ az ad sp create --id "${SERVER_APP_ID}"
+ SERVER_APP_SECRET=$(az ad sp credential reset --id "${SERVER_APP_ID}" --query password -o tsv)
+ ```
-1. Grant "Sign in and read user profile" API permissions to the application. [Additional information](/cli/azure/ad/app/permission?view=azure-cli-latest#az-ad-app-permission-add-examples):
+1. Grant "Sign in and read user profile" API permissions to the application. [Additional information](/cli/azure/ad/app/permission?view=azure-cli-latest&preserve-view=true#az-ad-app-permission-add-examples):
- ```azurecli
- az ad app permission add --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000 --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope
- az ad app permission grant --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000 --scope User.Read
- ```
+ ```azurecli
+ az ad app permission add --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000 --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope
+ az ad app permission grant --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000 --scope User.Read
+ ```
- > [!NOTE]
- > An Azure tenant administrator has to run this step.
- >
- > For usage of this feature in production, we recommend that you create a different server application for every cluster.
+ > [!NOTE]
+ > An Azure tenant administrator has to run this step.
+ >
+ > For usage of this feature in production, we recommend that you create a different server application for every cluster.
#### Create a client application 1. Create a new Azure AD application and get its `appId` value. This value is used in later steps as `clientApplicationId`.
- ```azurecli
- CLIENT_UNIQUE_SUFFIX="<identifier_suffix>"
- CLIENT_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Client" --is-fallback-public-client --public-client-redirect-uris "api://${TENANT_ID}/${CLIENT_UNIQUE_SUFFIX}" --query appId -o tsv)
- echo $CLIENT_APP_ID
- ```
-
+ ```azurecli
+ CLIENT_UNIQUE_SUFFIX="<identifier_suffix>"
+ CLIENT_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Client" --is-fallback-public-client --public-client-redirect-uris "api://${TENANT_ID}/${CLIENT_UNIQUE_SUFFIX}" --query appId -o tsv)
+ echo $CLIENT_APP_ID
+ ```
2. Create a service principal for this client application:
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
az rest --method PATCH --headers "Content-Type=application/json" --uri https://graph.microsoft.com/v1.0/applications/${CLIENT_OBJECT_ID}/ --body '{"api":{"requestedAccessTokenVersion": 1}}' ``` - ### [AzureCLI < v2.37](#tab/AzureCLI236)+ #### Create a server application+ 1. Create a new Azure AD application and get its `appId` value. This value is used in later steps as `serverApplicationId`.
- ```azurecli
- CLUSTER_NAME="<clusterName>"
- TENANT_ID="<tenant>"
- SERVER_UNIQUE_SUFFIX="<identifier_suffix>"
- SERVER_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Server" --identifier-uris "api://${TENANT_ID}/${SERVER_UNIQUE_SUFFIX}" --query appId -o tsv)
- echo $SERVER_APP_ID
- ```
+ ```azurecli
+ CLUSTER_NAME="<clusterName>"
+ TENANT_ID="<tenant>"
+ SERVER_UNIQUE_SUFFIX="<identifier_suffix>"
+ SERVER_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Server" --identifier-uris "api://${TENANT_ID}/${SERVER_UNIQUE_SUFFIX}" --query appId -o tsv)
+ echo $SERVER_APP_ID
+ ```
-1. Update the application's group membership claims:
- ```azurecli
- az ad app update --id "${SERVER_APP_ID}" --set groupMembershipClaims=All
- ```
+1. Update the application's group membership claims:
+
+ ```azurecli
+ az ad app update --id "${SERVER_APP_ID}" --set groupMembershipClaims=All
+ ```
1. Create a service principal and get its `password` field value. This value is required later as `serverApplicationSecret` when you're enabling this feature on the cluster. This secret is valid for one year by default and will need to be [rotated after that](./azure-rbac.md#refresh-the-secret-of-the-server-application). You can also [set a custom expiration duration](/cli/azure/ad/sp/credential?view=azure-cli-latest&preserve-view=true#az-ad-sp-credential-reset).
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
SERVER_APP_SECRET=$(az ad sp credential reset --name "${SERVER_APP_ID}" --credential-description "ArcSecret" --query password -o tsv) ```
-1. Grant "Sign in and read user profile" API permissions to the application. [Additional information](/cli/azure/ad/app/permission?view=azure-cli-latest#az-ad-app-permission-add-examples):
+1. Grant "Sign in and read user profile" API permissions to the application. [Additional information](/cli/azure/ad/app/permission?view=azure-cli-latest&preserve-view=true#az-ad-app-permission-add-examples):
- ```azurecli
+ ```azurecli
az ad app permission add --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000 --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope az ad app permission grant --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000 ``` > [!NOTE] > An Azure tenant administrator has to run this step.
- >
+ >
> For usage of this feature in production, we recommend that you create a different server application for every cluster. #### Create a client application 1. Create a new Azure AD application and get its `appId` value. This value is used in later steps as `clientApplicationId`.
- ```azurecli
- CLIENT_UNIQUE_SUFFIX="<identifier_suffix>"
- CLIENT_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Client" --native-app --reply-urls "api://${TENANT_ID}/${CLIENT_UNIQUE_SUFFIX}" --query appId -o tsv)
- echo $CLIENT_APP_ID
- ```
+ ```azurecli
+ CLIENT_UNIQUE_SUFFIX="<identifier_suffix>"
+ CLIENT_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Client" --native-app --reply-urls "api://${TENANT_ID}/${CLIENT_UNIQUE_SUFFIX}" --query appId -o tsv)
+ echo $CLIENT_APP_ID
+ ```
2. Create a service principal for this client application:
- ```azurecli
- az ad sp create --id "${CLIENT_APP_ID}"
- ```
+ ```azurecli
+ az ad sp create --id "${CLIENT_APP_ID}"
+ ```
3. Get the `oAuthPermissionId` value for the server application:
- ```azurecli
- az ad app show --id "${SERVER_APP_ID}" --query "oauth2Permissions[0].id" -o tsv
- ```
+ ```azurecli
+ az ad app show --id "${SERVER_APP_ID}" --query "oauth2Permissions[0].id" -o tsv
+ ```
4. Grant the required permissions for the client application:
- ```azurecli
- az ad app permission add --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}" --api-permissions <oAuthPermissionId>=Scope
- az ad app permission grant --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}"
- ```
+ ```azurecli
+ az ad app permission add --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}" --api-permissions <oAuthPermissionId>=Scope
+ az ad app permission grant --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}"
+ ```
+ ## Create a role assignment for the server application
The server application needs the `Microsoft.Authorization/*/read` permissions to
1. Create a file named *accessCheck.json* with the following contents:
- ```json
- {
- "Name": "Read authorization",
- "IsCustom": true,
- "Description": "Read authorization",
- "Actions": ["Microsoft.Authorization/*/read"],
- "NotActions": [],
- "DataActions": [],
- "NotDataActions": [],
- "AssignableScopes": [
- "/subscriptions/<subscription-id>"
- ]
- }
- ```
+ ```json
+ {
+ "Name": "Read authorization",
+ "IsCustom": true,
+ "Description": "Read authorization",
+ "Actions": ["Microsoft.Authorization/*/read"],
+ "NotActions": [],
+ "DataActions": [],
+ "NotDataActions": [],
+ "AssignableScopes": [
+ "/subscriptions/<subscription-id>"
+ ]
+ }
+ ```
Replace `<subscription-id>` with the actual subscription ID.
Enable Azure role-based access control (RBAC) on your Azure Arc-enabled Kubernet
```azurecli az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features azure-rbac --app-id "${SERVER_APP_ID}" --app-secret "${SERVER_APP_SECRET}" ```
-
+ > [!NOTE] > Before you run the preceding command, ensure that the `kubeconfig` file on the machine is pointing to the cluster on which you'll enable the Azure RBAC feature. >
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --featur
``` 1. Open the `apiserver` manifest in edit mode:
-
- ```console
- sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
- ```
+
+ ```console
+ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
+ ```
1. Add the following specification under `volumes`:
-
- ```yml
- - name: azure-rbac
- hostPath:
- path: /etc/guard
- type: Directory
- ```
+
+ ```yml
+ - name: azure-rbac
+ hostPath:
+ path: /etc/guard
+ type: Directory
+ ```
1. Add the following specification under `volumeMounts`:
- ```yml
- - mountPath: /etc/guard
- name: azure-rbac
- readOnly: true
- ```
+ ```yml
+ - mountPath: /etc/guard
+ name: azure-rbac
+ readOnly: true
+ ```
**If your `kube-apiserver` is a not a static pod:** 1. Open the `apiserver` manifest in edit mode:
-
- ```console
- sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
- ```
+
+ ```console
+ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
+ ```
1. Add the following specification under `volumes`:
-
- ```yml
- - name: azure-rbac
- secret:
- secretName: azure-arc-guard-manifests
- ```
+
+ ```yml
+ - name: azure-rbac
+ secret:
+ secretName: azure-arc-guard-manifests
+ ```
1. Add the following specification under `volumeMounts`:
- ```yml
- - mountPath: /etc/guard
- name: azure-rbac
- readOnly: true
- ```
+ ```yml
+ - mountPath: /etc/guard
+ name: azure-rbac
+ readOnly: true
+ ```
1. Add the following `apiserver` arguments:
- ```yml
- - --authentication-token-webhook-config-file=/etc/guard/guard-authn-webhook.yaml
- - --authentication-token-webhook-cache-ttl=5m0s
- - --authorization-webhook-cache-authorized-ttl=5m0s
- - --authorization-webhook-config-file=/etc/guard/guard-authz-webhook.yaml
- - --authorization-webhook-version=v1
- - --authorization-mode=Node,RBAC,Webhook
- ```
+ ```yml
+ - --authentication-token-webhook-config-file=/etc/guard/guard-authn-webhook.yaml
+ - --authentication-token-webhook-cache-ttl=5m0s
+ - --authorization-webhook-cache-authorized-ttl=5m0s
+ - --authorization-webhook-config-file=/etc/guard/guard-authz-webhook.yaml
+ - --authorization-webhook-version=v1
+ - --authorization-mode=Node,RBAC,Webhook
+ ```
- If the Kubernetes cluster is version 1.19.0 or later, you also need to set the following `apiserver` argument:
+ If the Kubernetes cluster is version 1.19.0 or later, you also need to set the following `apiserver` argument:
- ```yml
- - --authentication-token-webhook-version=v1
- ```
+ ```yml
+ - --authentication-token-webhook-version=v1
+ ```
1. Save and close the editor to update the `apiserver` pod. - ### Cluster created by using Cluster API 1. Copy the guard secret that contains authentication and authorization webhook configuration files from the workload cluster onto your machine:
- ```console
- kubectl get secret azure-arc-guard-manifests -n kube-system -o yaml > azure-arc-guard-manifests.yaml
- ```
+ ```console
+ kubectl get secret azure-arc-guard-manifests -n kube-system -o yaml > azure-arc-guard-manifests.yaml
+ ```
1. Change the `namespace` field in the *azure-arc-guard-manifests.yaml* file to the namespace within the management cluster where you're applying the custom resources for creation of workload clusters. 1. Apply this manifest:
- ```console
- kubectl apply -f azure-arc-guard-manifests.yaml
- ```
+ ```console
+ kubectl apply -f azure-arc-guard-manifests.yaml
+ ```
1. Edit the `KubeadmControlPlane` object by running `kubectl edit kcp <clustername>-control-plane`:
-
- 1. Add the following snippet under `files`:
-
- ```console
- - contentFrom:
- secret:
- key: guard-authn-webhook.yaml
- name: azure-arc-guard-manifests
- owner: root:root
- path: /etc/kubernetes/guard-authn-webhook.yaml
- permissions: "0644"
- - contentFrom:
- secret:
- key: guard-authz-webhook.yaml
- name: azure-arc-guard-manifests
- owner: root:root
- path: /etc/kubernetes/guard-authz-webhook.yaml
- permissions: "0644"
- ```
-
- 1. Add the following snippet under `apiServer` > `extraVolumes`:
-
- ```console
- - hostPath: /etc/kubernetes/guard-authn-webhook.yaml
- mountPath: /etc/guard/guard-authn-webhook.yaml
- name: guard-authn
- readOnly: true
- - hostPath: /etc/kubernetes/guard-authz-webhook.yaml
- mountPath: /etc/guard/guard-authz-webhook.yaml
- name: guard-authz
- readOnly: true
- ```
-
- 1. Add the following snippet under `apiServer` > `extraArgs`:
-
- ```console
- authentication-token-webhook-cache-ttl: 5m0s
- authentication-token-webhook-config-file: /etc/guard/guard-authn-webhook.yaml
- authentication-token-webhook-version: v1
- authorization-mode: Node,RBAC,Webhook
- authorization-webhook-cache-authorized-ttl: 5m0s
- authorization-webhook-config-file: /etc/guard/guard-authz-webhook.yaml
- authorization-webhook-version: v1
- ```
-
- 1. Save and close to update the `KubeadmControlPlane` object. Wait for these changes to appear on the workload cluster.
+ 1. Add the following snippet under `files`:
+
+ ```console
+ - contentFrom:
+ secret:
+ key: guard-authn-webhook.yaml
+ name: azure-arc-guard-manifests
+ owner: root:root
+ path: /etc/kubernetes/guard-authn-webhook.yaml
+ permissions: "0644"
+ - contentFrom:
+ secret:
+ key: guard-authz-webhook.yaml
+ name: azure-arc-guard-manifests
+ owner: root:root
+ path: /etc/kubernetes/guard-authz-webhook.yaml
+ permissions: "0644"
+ ```
+
+ 1. Add the following snippet under `apiServer` > `extraVolumes`:
+
+ ```console
+ - hostPath: /etc/kubernetes/guard-authn-webhook.yaml
+ mountPath: /etc/guard/guard-authn-webhook.yaml
+ name: guard-authn
+ readOnly: true
+ - hostPath: /etc/kubernetes/guard-authz-webhook.yaml
+ mountPath: /etc/guard/guard-authz-webhook.yaml
+ name: guard-authz
+ readOnly: true
+ ```
+
+ 1. Add the following snippet under `apiServer` > `extraArgs`:
+
+ ```console
+ authentication-token-webhook-cache-ttl: 5m0s
+ authentication-token-webhook-config-file: /etc/guard/guard-authn-webhook.yaml
+ authentication-token-webhook-version: v1
+ authorization-mode: Node,RBAC,Webhook
+ authorization-webhook-cache-authorized-ttl: 5m0s
+ authorization-webhook-config-file: /etc/guard/guard-authz-webhook.yaml
+ authorization-webhook-version: v1
+ ```
+
+ 1. Save and close to update the `KubeadmControlPlane` object. Wait for these changes to appear on the workload cluster.
## Create role assignments for users to access the cluster
Copy the following JSON object into a file called *custom-role.json*. Replace th
1. Create the role definition by running the following command from the folder where you saved *custom-role.json*:
- ```azurecli
- az role definition create --role-definition @custom-role.json
- ```
+ ```azurecli
+ az role definition create --role-definition @custom-role.json
+ ```
1. Create a role assignment by using this custom role definition:
- ```azurecli
- az role assignment create --role "Arc Deployment Viewer" --assignee <AZURE-AD-ENTITY-ID> --scope $ARM_ID/namespaces/<namespace-name>
- ```
+ ```azurecli
+ az role assignment create --role "Arc Deployment Viewer" --assignee <AZURE-AD-ENTITY-ID> --scope $ARM_ID/namespaces/<namespace-name>
+ ```
## Configure kubectl with user credentials
az connectedk8s proxy -n <clusterName> -g <resourceGroupName>
After the proxy process is running, you can open another tab in your console to [start sending your requests to the cluster](#send-requests-to-the-cluster).
-### If the cluster admin shared the kubeconfig file with you
+### If the cluster admin shared the kubeconfig file with you
1. Run the following command to set the credentials for the user:
- ```console
- kubectl config set-credentials <testuser>@<mytenant.onmicrosoft.com> \
- --auth-provider=azure \
- --auth-provider-arg=environment=AzurePublicCloud \
- --auth-provider-arg=client-id=<clientApplicationId> \
- --auth-provider-arg=tenant-id=<tenantId> \
- --auth-provider-arg=apiserver-id=<serverApplicationId>
- ```
+ ```console
+ kubectl config set-credentials <testuser>@<mytenant.onmicrosoft.com> \
+ --auth-provider=azure \
+ --auth-provider-arg=environment=AzurePublicCloud \
+ --auth-provider-arg=client-id=<clientApplicationId> \
+ --auth-provider-arg=tenant-id=<tenantId> \
+ --auth-provider-arg=apiserver-id=<serverApplicationId>
+ ```
1. Open the *kubeconfig* file that you created earlier. Under `contexts`, verify that the context associated with the cluster points to the user credentials that you created in the previous step. 1. Add the **config-mode** setting under `user` > `config`:
- ```console
- name: testuser@mytenant.onmicrosoft.com
- user:
- auth-provider:
- config:
- apiserver-id: $SERVER_APP_ID
- client-id: $CLIENT_APP_ID
- environment: AzurePublicCloud
- tenant-id: $TENANT_ID
- config-mode: "1"
- name: azure
- ```
+ ```console
+ name: testuser@mytenant.onmicrosoft.com
+ user:
+ auth-provider:
+ config:
+ apiserver-id: $SERVER_APP_ID
+ client-id: $CLIENT_APP_ID
+ environment: AzurePublicCloud
+ tenant-id: $TENANT_ID
+ config-mode: "1"
+ name: azure
+ ```
## Send requests to the cluster 1. Run any `kubectl` command. For example:
- * `kubectl get nodes`
- * `kubectl get pods`
+
+ - `kubectl get nodes`
+ - `kubectl get pods`
1. When you're prompted for browser-based authentication, copy the device login URL (`https://microsoft.com/devicelogin`) and open it in your web browser.
To create an example Conditional Access policy to use with the cluster, complete
1. On the menu for Azure Active Directory on the left side, select **Enterprise applications**. 1. On the menu for enterprise applications on the left side, select **Conditional Access**. 1. On the menu for Conditional Access on the left side, select **Policies** > **New policy**.
-
+ [ ![Screenshot that shows the button for adding a conditional access policy.](./media/azure-rbac/conditional-access-new-policy.png) ](./media/azure-rbac/conditional-access-new-policy.png#lightbox) 1. Enter a name for the policy, such as **arc-k8s-policy**.
To create an example Conditional Access policy to use with the cluster, complete
1. Under **Access controls**, select **Grant**. Select **Grant access** > **Require device to be marked as compliant**. [ ![Screenshot that shows selecting to only allow compliant devices for the Conditional Access policy.](./media/azure-rbac/conditional-access-grant-compliant.png) ](./media/azure-rbac/conditional-access-grant-compliant.png#lightbox)
-
+ 1. Under **Enable policy**, select **On** > **Create**. [ ![Screenshot that shows enabling the Conditional Access policy.](./media/azure-rbac/conditional-access-enable-policies.png) ](./media/azure-rbac/conditional-access-enable-policies.png#lightbox)
kubectl get nodes
Follow the instructions to sign in again. An error message states that you're successfully logged in, but your admin requires the device that's requesting access to be managed by Azure AD to access the resource. Follow these steps: 1. In the Azure portal, go to **Azure Active Directory**.
-1. Select **Enterprise applications**. Then under **Activity**, select **Sign-ins**.
+1. Select **Enterprise applications**. Then under **Activity**, select **Sign-ins**.
1. An entry at the top shows **Failed** for **Status** and **Success** for **Conditional Access**. Select the entry, and then select **Conditional Access** in **Details**. Notice that your Conditional Access policy is listed. [ ![Screenshot that shows a failed sign-in entry due to the Conditional Access policy.](./media/azure-rbac/conditional-access-sign-in-activity.png) ](./media/azure-rbac/conditional-access-sign-in-activity.png#lightbox)
SERVER_APP_SECRET=$(az ad sp credential reset --name "${SERVER_APP_ID}" --creden
``` Update the secret on the cluster. Please add any optional parameters you configured when this command was originally run.+ ```azurecli az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features azure-rbac --app-id "${SERVER_APP_ID}" --app-secret "${SERVER_APP_SECRET}" ```
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --featur
## Next steps > [!div class="nextstepaction"]
-> Securely connect to the cluster by using [Cluster Connect](cluster-connect.md).
+> Securely connect to the cluster by using [Cluster Connect](cluster-connect.md).
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
Title: "Use the cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters"-- Last updated 08/30/2022
-description: "Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters"
+description: "With cluster connect, you can securely connect to Azure Arc-enabled Kubernetes clusters without requiring any inbound port to be enabled on the firewall."
# Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters
azure-arc Conceptual Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-agent-overview.md
Title: "Azure Arc-enabled Kubernetes agent architecture"--
+ Title: "Azure Arc-enabled Kubernetes agent overview"
Last updated 08/03/2021
-description: "This article provides an architectural overview of Azure Arc-enabled Kubernetes agents."
-keywords: "Kubernetes, Arc, Azure, containers"
+description: "This article provides an overview of the Azure Arc agents deployed on the Kubernetes clusters when connecting them to Azure Arc."
# Azure Arc-enabled Kubernetes agent overview
azure-arc Conceptual Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-azure-rbac.md
Title: "Azure RBAC - Azure Arc-enabled Kubernetes"-- Last updated 04/05/2021
-description: "This article provides a conceptual overview of Azure RBAC capability on Azure Arc-enabled Kubernetes"
+description: "This article provides a conceptual overview of Azure RBAC capability on Azure Arc-enabled Kubernetes."
# Azure RBAC on Azure Arc-enabled Kubernetes
azure-arc Conceptual Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-cluster-connect.md
Title: "Access Azure Arc-enabled Kubernetes clusters from anywhere using cluster connect"-- Last updated 07/22/2022
-description: "This article provides a conceptual overview of cluster connect capability of Azure Arc-enabled Kubernetes."
+description: "Cluster connect allows developers to access their Azure Arc-enabled Kubernetes clusters from anywhere for interactive development and debugging."
# Access Azure Arc-enabled Kubernetes clusters from anywhere using cluster connect
azure-arc Conceptual Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-configurations.md
Title: "GitOps Flux v1 configurations with Azure Arc-enabled Kubernetes"-- Last updated 05/24/2022 description: "This article provides a conceptual overview of GitOps and configurations capability of Azure Arc-enabled Kubernetes."
-keywords: "Kubernetes, Arc, Azure, containers, configuration, GitOps"
# GitOps Flux v1 configurations with Azure Arc-enabled Kubernetes
azure-arc Conceptual Connectivity Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-connectivity-modes.md
Title: "Azure Arc-enabled Kubernetes connectivity modes"-- Last updated 08/22/2022 description: "This article provides an overview of the connectivity modes supported by Azure Arc-enabled Kubernetes"
-keywords: "Kubernetes, Arc, Azure, containers"
# Azure Arc-enabled Kubernetes connectivity modes
azure-arc Conceptual Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-custom-locations.md
Title: "Custom Locations - Azure Arc-enabled Kubernetes"-- Last updated 07/21/2022 description: "This article provides a conceptual overview of the custom locations capability of Azure Arc-enabled Kubernetes"
azure-arc Conceptual Data Exchange https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-data-exchange.md
Title: "Data exchanged between Azure Arc-enabled Kubernetes cluster and Azure"-- Last updated 11/23/2021
-description: "This article provides information on data exchanged between Azure Arc-enabled Kubernetes cluster and Azure"
-keywords: "Kubernetes, Arc, Azure, containers"
+description: "The scenarios enabled by Azure Arc-enabled Kubernetes involve the exchange of desired state configurations, metadata, and other scenario-specific operational data."
# Data exchanged between Azure Arc-enabled Kubernetes cluster and Azure
-The scenarios enabled by Azure Arc-enabled Kubernetes involve exchange of desired state configurations, metadata, and other scenario specific operational data between the Azure Arc-enabled Kubernetes cluster environment and Azure service. For all types of data, the Azure Arc agents initiate outbound communication to Azure services and thus require only egress access to endpoints listed under the network prerequisites. Enabling inbound ports on firewall is not required for Azure Arc agents.
+Azure Arc-enabled Kubernetes scenarios involve the exchange of desired state configurations, metadata, and other scenario-specific operational data between the Azure Arc-enabled Kubernetes cluster environment and Azure services. For all types of data, the Azure Arc agents initiate outbound communication to Azure services and thus require only egress access to the endpoints listed under the network prerequisites. Enabling inbound ports on the firewall is not required for Azure Arc agents.
The following table presents a per-scenario breakdown of the data exchanged between these environments.
azure-arc Conceptual Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-extensions.md
Title: "Cluster extensions - Azure Arc-enabled Kubernetes"-- Last updated 07/12/2022
-description: "This article provides a conceptual overview of cluster extensions capability of Azure Arc-enabled Kubernetes"
+description: "This article provides a conceptual overview of the Azure Arc-enabled Kubernetes cluster extensions capability."
# Cluster extensions
description: "This article provides a conceptual overview of cluster extensions
A cluster operator or admin can use the cluster extensions feature to: - Install and manage key management, data, and application offerings on your Kubernetes cluster. List of available extensions can be found [here](extensions.md#currently-available-extensions)-- Use Azure Policy to automate at-scale deployment of cluster extensions across all clusters in your environment.
+- Use Azure Policy to automate at-scale deployment of cluster extensions across all clusters in your environment.
- Subscribe to release trains (for example, preview or stable) for each extension. - Set up auto-upgrade for extensions or pin to a specific version and manually upgrade versions. - Update extension properties or delete extension instances.
azure-arc Conceptual Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-ci-cd.md
Title: "CI/CD Workflow using GitOps - Azure Arc-enabled Kubernetes"-- Last updated 05/24/2022 description: "This article provides a conceptual overview of a CI/CD workflow using GitOps with Flux"
-keywords: "GitOps, Kubernetes, K8s, Azure, Helm, Arc, AKS, Azure Kubernetes Service, containers, CI, CD, Azure DevOps"
+ # CI/CD workflow using GitOps - Azure Arc-enabled Kubernetes > [!IMPORTANT]
azure-arc Conceptual Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2-ci-cd.md
Title: "CI/CD Workflow using GitOps (Flux v2) - Azure Arc-enabled Kubernetes"
-description: "This article provides a conceptual overview of a CI/CD workflow using GitOps"
-keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Helm, Arc, AKS, CI/CD, Azure DevOps"
--
+description: "This article provides a conceptual overview of a CI/CD workflow using GitOps."
Last updated 11/29/2021
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
Title: "GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes" description: "This article provides a conceptual overview of GitOps in Azure for use in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters."
-keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops"
-- Previously updated : 10/24/2022 Last updated : 11/29/2022
With GitOps, you declare the desired state of your Kubernetes clusters in files
Because these files are stored in a Git repository, they're versioned, and changes between versions are easily tracked. Kubernetes controllers run in the clusters and continually reconcile the cluster state with the desired state declared in the Git repository. These operators pull the files from the Git repositories and apply the desired state to the clusters. The operators also continuously assure that the cluster remains in the desired state.
-GitOps on Azure Arc-enabled Kubernetes or Azure Kubernetes Service uses [Flux](https://fluxcd.io/docs/), a popular open-source tool set. Flux provides support for common file sources (Git and Helm repositories, Buckets, Azure Blob Storage) and template types (YAML, Helm, and Kustomize). Flux also supports multi-tenancy and deployment dependency management, among [other features](https://fluxcd.io/docs/).
+GitOps on Azure Arc-enabled Kubernetes or Azure Kubernetes Service uses [Flux](https://fluxcd.io/docs/), a popular open-source tool set. Flux provides support for common file sources (Git and Helm repositories, Buckets, Azure Blob Storage) and template types (YAML, Helm, and Kustomize). Flux also supports [multi-tenancy](#multi-tenancy) and deployment dependency management, among [other features](https://fluxcd.io/docs/).
## Flux cluster extension
The `microsoft.flux` extension installs by default the [Flux controllers](https:
:::image type="content" source="media/gitops/flux2-config-install.png" alt-text="Diagram showing the installation of a Flux configuration in an Azure Arc-enabled Kubernetes or Azure Kubernetes Service cluster." lightbox="media/gitops/flux2-config-install.png":::
-You create Flux configuration resources (`Microsoft.KubernetesConfiguration/fluxConfigurations`) to enable GitOps management of the cluster from your Git repos, Bucket sources or Azure Blob Storage. When you create a `fluxConfigurations` resource, the values you supply for the parameters, such as the target Git repo, are used to create and configure the Kubernetes objects that enable the GitOps process in that cluster. To ensure data security, the `fluxConfigurations` resource data is stored encrypted at rest in an Azure Cosmos DB database by the Cluster Configuration service.
+You create Flux configuration resources (`Microsoft.KubernetesConfiguration/fluxConfigurations`) to enable GitOps management of the cluster from your Git repos, Bucket sources or Azure Blob Storage. When you create a `fluxConfigurations` resource, the values you supply for the [parameters](#parameters), such as the target Git repo, are used to create and configure the Kubernetes objects that enable the GitOps process in that cluster. To ensure data security, the `fluxConfigurations` resource data is stored encrypted at rest in an Azure Cosmos DB database by the Cluster Configuration service.
The `fluxconfig-agent` and `fluxconfig-controller` agents, installed with the `microsoft.flux` extension, manage the GitOps configuration process.
Because Azure Resource Manager manages your configurations, you can automate cre
[Learn how to use the built-in policies for Flux v2](./use-azure-policy-flux-2.md).
-## Next steps
+## Parameters
+
+For a description of all parameters that Flux supports, see the [official Flux documentation](https://fluxcd.io/docs/). Flux in Azure doesn't support all parameters yet. Let us know if a parameter you need is missing from the Azure implementation.
+
+You can see the full list of parameters that the `k8s-configuration flux` Azure CLI command supports by using the `-h` parameter:
+
+```azurecli
+az k8s-configuration flux -h
+
+Group
+ az k8s-configuration flux : Commands to manage Flux v2 Kubernetes configurations.
+
+Subgroups:
+ deployed-object : Commands to see deployed objects associated with Flux v2 Kubernetes
+ configurations.
+ kustomization : Commands to manage Kustomizations associated with Flux v2 Kubernetes
+ configurations.
+
+Commands:
+ create : Create a Flux v2 Kubernetes configuration.
+ delete : Delete a Flux v2 Kubernetes configuration.
+ list : List all Flux v2 Kubernetes configurations.
+ show : Show a Flux v2 Kubernetes configuration.
+ update : Update a Flux v2 Kubernetes configuration.
+```
+
+Here are the parameters for the `k8s-configuration flux create` CLI command:
+
+```azurecli
+az k8s-configuration flux create -h
+
+This command is from the following extension: k8s-configuration
+
+Command
+ az k8s-configuration flux create : Create a Flux v2 Kubernetes configuration.
+
+Arguments
+ --cluster-name -c [Required] : Name of the Kubernetes cluster.
+ --cluster-type -t [Required] : Specify Arc connected clusters or AKS managed clusters.
+ Allowed values: connectedClusters, managedClusters.
+ --name -n [Required] : Name of the flux configuration.
+ --resource-group -g [Required] : Name of resource group. You can configure the default group
+ using `az configure --defaults group=<name>`.
+ --url -u [Required] : URL of the source to reconcile.
+ --bucket-insecure : Communicate with a bucket without TLS. Allowed values: false,
+ true.
+ --bucket-name : Name of the S3 bucket to sync.
+ --container-name : Name of the Azure Blob Storage container to sync
+ --interval --sync-interval : Time between reconciliations of the source on the cluster.
+ --kind : Source kind to reconcile. Allowed values: bucket, git, azblob.
+ Default: git.
+ --kustomization -k : Define kustomizations to sync sources with parameters ['name',
+ 'path', 'depends_on', 'timeout', 'sync_interval',
+ 'retry_interval', 'prune', 'force'].
+ --namespace --ns : Namespace to deploy the configuration. Default: default.
+ --no-wait : Do not wait for the long-running operation to finish.
+ --scope -s : Specify scope of the operator to be 'namespace' or 'cluster'.
+ Allowed values: cluster, namespace. Default: cluster.
+ --suspend : Suspend the reconciliation of the source and kustomizations
+ associated with this configuration. Allowed values: false,
+ true.
+ --timeout : Maximum time to reconcile the source before timing out.
+
+Auth Arguments
+ --local-auth-ref --local-ref : Local reference to a kubernetes secret in the configuration
+ namespace to use for communication to the source.
+
+Bucket Auth Arguments
+ --bucket-access-key : Access Key ID used to authenticate with the bucket.
+ --bucket-secret-key : Secret Key used to authenticate with the bucket.
+
+Git Auth Arguments
+ --https-ca-cert : Base64-encoded HTTPS CA certificate for TLS communication with
+ private repository sync.
+ --https-ca-cert-file : File path to HTTPS CA certificate file for TLS communication
+ with private repository sync.
+ --https-key : HTTPS token/password for private repository sync.
+ --https-user : HTTPS username for private repository sync.
+ --known-hosts : Base64-encoded known_hosts data containing public SSH keys
+ required to access private Git instances.
+ --known-hosts-file : File path to known_hosts contents containing public SSH keys
+ required to access private Git instances.
+ --ssh-private-key : Base64-encoded private ssh key for private repository sync.
+ --ssh-private-key-file : File path to private ssh key for private repository sync.
+
+Git Repo Ref Arguments
+ --branch : Branch within the git source to reconcile with the cluster.
+ --commit : Commit within the git source to reconcile with the cluster.
+ --semver : Semver range within the git source to reconcile with the
+ cluster.
+ --tag : Tag within the git source to reconcile with the cluster.
+
+Global Arguments
+ --debug : Increase logging verbosity to show all debug logs.
+ --help -h : Show this help message and exit.
+ --only-show-errors : Only show errors, suppressing warnings.
+ --output -o : Output format. Allowed values: json, jsonc, none, table, tsv,
+ yaml, yamlc. Default: json.
+ --query : JMESPath query string. See http://jmespath.org/ for more
+ information and examples.
+ --subscription : Name or ID of subscription. You can configure the default
+ subscription using `az account set -s NAME_OR_ID`.
+ --verbose : Increase logging verbosity. Use --debug for full debug logs.
+
+Azure Blob Storage Account Auth Arguments
+ --sp_client_id : The client ID for authenticating a service principal with Azure Blob, required for this authentication method
+ --sp_tenant_id : The tenant ID for authenticating a service principal with Azure Blob, required for this authentication method
+ --sp_client_secret : The client secret for authenticating a service principal with Azure Blob
+ --sp_client_cert : The Base64 encoded client certificate for authenticating a service principal with Azure Blob
+ --sp_client_cert_password : The password for the client certificate used to authenticate a service principal with Azure Blob
+ --sp_client_cert_send_chain : Specifies whether to include x5c header in client claims when acquiring a token to enable subject name / issuer based authentication for the client certificate
+ --account_key : The Azure Blob Shared Key for authentication
+ --sas_token : The Azure Blob SAS Token for authentication
+ --mi_client_id : The client ID of the managed identity for authentication with Azure Blob
+
+Examples
+ Create a Flux v2 Kubernetes configuration
+ az k8s-configuration flux create --resource-group my-resource-group \
+ --cluster-name mycluster --cluster-type connectedClusters \
+ --name myconfig --scope cluster --namespace my-namespace \
+ --kind git --url https://github.com/Azure/arc-k8s-demo \
+ --branch main --kustomization name=my-kustomization
+
+ Create a Kubernetes v2 Flux Configuration with Bucket Source Kind
+ az k8s-configuration flux create --resource-group my-resource-group \
+ --cluster-name mycluster --cluster-type connectedClusters \
+ --name myconfig --scope cluster --namespace my-namespace \
+ --kind bucket --url https://bucket-provider.minio.io \
+ --bucket-name my-bucket --kustomization name=my-kustomization \
+ --bucket-access-key my-access-key --bucket-secret-key my-secret-key
+
+ Create a Kubernetes v2 Flux Configuration with Azure Blob Storage Source Kind
+ az k8s-configuration flux create --resource-group my-resource-group \
+ --cluster-name mycluster --cluster-type connectedClusters \
+ --name myconfig --scope cluster --namespace my-namespace \
+ --kind azblob --url https://mystorageaccount.blob.core.windows.net \
+ --container-name my-container --kustomization name=my-kustomization \
+ --account-key my-account-key
+```
+
+### Configuration general arguments
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--cluster-name` `-c` | String | Name of the cluster resource in Azure. |
+| `--cluster-type` `-t` | `connectedClusters`, `managedClusters` | Use `connectedClusters` for Azure Arc-enabled Kubernetes clusters and `managedClusters` for AKS clusters. |
+| `--resource-group` `-g` | String | Name of the Azure resource group that holds the Azure Arc or AKS cluster resource. |
+| `--name` `-n`| String | Name of the Flux configuration in Azure. |
+| `--namespace` `--ns` | String | Name of the namespace to deploy the configuration. Default: `default`. |
+| `--scope` `-s` | String | Permission scope for the operators. Possible values are `cluster` (full access) or `namespace` (restricted access). Default: `cluster`. |
+| `--suspend` | flag | Suspends all source and kustomize reconciliations defined in this Flux configuration. Reconciliations active at the time of suspension will continue. |
+
+### Source general arguments
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--kind` | String | Source kind to reconcile. Allowed values: `bucket`, `git`, `azblob`. Default: `git`. |
+| `--timeout` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Maximum time to attempt to reconcile the source before timing out. Default: `10m`. |
+| `--sync-interval` `--interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Time between reconciliations of the source on the cluster. Default: `10m`. |
+
+### Git repository source reference arguments
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--branch` | String | Branch within the Git source to sync to the cluster. Default: `master`. Newer repositories might have a root branch named `main`, in which case you need to set `--branch=main`. |
+| `--tag` | String | Tag within the Git source to sync to the cluster. Example: `--tag=3.2.0`. |
+| `--semver` | String | Git tag `semver` range within the Git source to sync to the cluster. Example: `--semver=">=3.1.0-rc.1 <3.2.0"`. |
+| `--commit` | String | Git commit SHA within the Git source to sync to the cluster. Example: `--commit=363a6a8fe6a7f13e05d34c163b0ef02a777da20a`. |
+
+For more information, see the [Flux documentation on Git repository checkout strategies](https://fluxcd.io/docs/components/source/gitrepositories/#checkout-strategies).
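+
+As a minimal sketch (the repository URL and tag value are illustrative placeholders), you can sync a specific tag instead of a branch:
+
+```azurecli
+az k8s-configuration flux create -g my-resource-group -c my-cluster -t connectedClusters \
+  -n myconfig --scope cluster --kind git -u https://github.com/Azure/arc-k8s-demo \
+  --tag 3.2.0 --kustomization name=my-kustomization
+```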
+
+### Public Git repository
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--url` `-u` | http[s]://server/repo[.git] | URL of the Git repository source to reconcile with the cluster. |
+
+### Private Git repository with SSH and Flux-created keys
+
+Add the public key generated by Flux to the user account in your Git service provider.
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--url` `-u` | ssh://user@server/repo[.git] | `git@` should replace `user@` if the public key is associated with the repository instead of the user account. |
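+
+For example, here's a minimal sketch that lets Flux generate the key pair (the SSH URL is an illustrative placeholder):
+
+```azurecli
+az k8s-configuration flux create -g my-resource-group -c my-cluster -t connectedClusters \
+  -n myconfig --scope cluster --kind git -u ssh://git@github.com/Azure/arc-k8s-demo \
+  --branch main --kustomization name=my-kustomization
+```
+
+After the configuration is created, the generated public key appears in the `repositoryPublicKey` property returned by `az k8s-configuration flux show`; add that key to the user account (or repository) in your Git service provider.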
+
+### Private Git repository with SSH and user-provided keys
+
+Use your own private key directly or from a file. The key must be in [PEM format](https://aka.ms/PEMformat) and end with a newline (`\n`).
+
+Add the associated public key to the user account in your Git service provider.
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--url` `-u` | ssh://user@server/repo[.git] | `git@` should replace `user@` if the public key is associated with the repository instead of the user account. |
+| `--ssh-private-key` | Base64 key in [PEM format](https://aka.ms/PEMformat) | Provide the key directly. |
+| `--ssh-private-key-file` | Full path to local file | Provide the full path to the local file that contains the PEM-format key.
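+
+A minimal sketch that reads the private key from a local file (the repository URL and file path are placeholders):
+
+```azurecli
+az k8s-configuration flux create -g my-resource-group -c my-cluster -t connectedClusters \
+  -n myconfig --scope cluster --kind git -u ssh://git@github.com/Azure/arc-k8s-demo \
+  --branch main --kustomization name=my-kustomization \
+  --ssh-private-key-file ./id_rsa
+```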
+
+### Private Git host with SSH and user-provided known hosts
+
+The Flux operator maintains a list of common Git hosts in its `known_hosts` file. Flux uses this information to verify the Git host's identity before establishing the SSH connection. If you're using an uncommon Git repository or your own Git host, you can supply the host key so that Flux can identify your repository.
+
+Just like private keys, you can provide your `known_hosts` content directly or in a file. When you're providing your own content, use the [known_hosts content format specifications](https://aka.ms/KnownHostsFormat), along with either of the preceding SSH key scenarios.
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--url` `-u` | ssh://user@server/repo[.git] | `git@` can replace `user@`. |
+| `--known-hosts` | Base64 string | Provide `known_hosts` content directly. |
+| `--known-hosts-file` | Full path to local file | Provide `known_hosts` content in a local file. |
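+
+For instance, a sketch that combines a user-provided private key with a `known_hosts` file for a self-hosted Git server (all values are placeholders):
+
+```azurecli
+az k8s-configuration flux create -g my-resource-group -c my-cluster -t connectedClusters \
+  -n myconfig --scope cluster --kind git -u ssh://git@git.example.com/my-org/my-repo \
+  --branch main --kustomization name=my-kustomization \
+  --ssh-private-key-file ./id_rsa --known-hosts-file ./known_hosts
+```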
+
+### Private Git repository with an HTTPS user and key
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--url` `-u` | `https://server/repo[.git]` | HTTPS with Basic Authentication. |
+| `--https-user` | Raw string | HTTPS username. |
+| `--https-key` | Raw string | HTTPS personal access token or password.
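+
+As a minimal sketch that authenticates with a username and personal access token (the URL, username, and token are placeholders):
+
+```azurecli
+az k8s-configuration flux create -g my-resource-group -c my-cluster -t connectedClusters \
+  -n myconfig --scope cluster --kind git -u https://github.com/my-org/my-private-repo \
+  --branch main --kustomization name=my-kustomization \
+  --https-user my-username --https-key my-personal-access-token
+```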
+
+### Private Git repository with an HTTPS CA certificate
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--url` `-u` | `https://server/repo[.git]` | HTTPS with Basic Authentication. |
+| `--https-ca-cert` | Base64 string | CA certificate for TLS communication. |
+| `--https-ca-cert-file` | Full path to local file | Provide CA certificate content in a local file. |
+
+### Bucket source arguments
+
+If you use a `bucket` source instead of a `git` source, here are the bucket-specific command arguments.
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--url` `-u` | URL String | The URL for the `bucket`. Formats supported: http://, https://. |
+| `--bucket-name` | String | Name of the `bucket` to sync. |
+| `--bucket-access-key` | String | Access Key ID used to authenticate with the `bucket`. |
+| `--bucket-secret-key` | String | Secret Key used to authenticate with the `bucket`. |
+| `--bucket-insecure` | Boolean | Communicate with a `bucket` without TLS. If not provided, assumed false; if provided, assumed true. |
+
+### Azure Blob Storage Account source arguments
+
+If you use an `azblob` source, here are the blob-specific command arguments.
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--url` `-u` | URL String | The URL of the Azure Blob Storage endpoint to sync (for example, `https://mystorageaccount.blob.core.windows.net`). |
+| `--container-name` | String | Name of the Azure Blob Storage container to sync. |
+| `--sp_client_id` | String | The client ID for authenticating a service principal with Azure Blob. Required for this authentication method. |
+| `--sp_tenant_id` | String | The tenant ID for authenticating a service principal with Azure Blob. Required for this authentication method. |
+| `--sp_client_secret` | String | The client secret for authenticating a service principal with Azure Blob. |
+| `--sp_client_cert` | String | The Base64-encoded client certificate for authenticating a service principal with Azure Blob. |
+| `--sp_client_cert_password` | String | The password for the client certificate used to authenticate a service principal with Azure Blob. |
+| `--sp_client_cert_send_chain` | String | Specifies whether to include the x5c header in client claims when acquiring a token, to enable subject name / issuer based authentication for the client certificate. |
+| `--account_key` | String | The Azure Blob Shared Key for authentication. |
+| `--sas_token` | String | The Azure Blob SAS token for authentication. |
+| `--mi_client_id` | String | The client ID of the managed identity for authentication with Azure Blob. |
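+
+For example, here's a sketch that authenticates with a user-assigned managed identity instead of an account key (the storage account, container, and client ID are placeholders, and the identity is assumed to already have access to the storage account):
+
+```azurecli
+az k8s-configuration flux create -g my-resource-group -c my-cluster -t managedClusters \
+  -n myconfig --scope cluster --kind azblob \
+  -u https://mystorageaccount.blob.core.windows.net \
+  --container-name my-container --kustomization name=my-kustomization \
+  --mi_client_id 11111111-2222-3333-4444-555555555555
+```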
+
+### Local secret for authentication with source
+
+You can use a local Kubernetes secret for authentication with a `git`, `bucket`, or `azblob` source. The local secret must contain all of the authentication parameters needed for the source and must be created in the same namespace as the Flux configuration.
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--local-auth-ref` `--local-ref` | String | Local reference to a Kubernetes secret in the Flux configuration namespace to use for authentication with the source. |
+
+For HTTPS authentication, you create a secret with the `username` and `password`:
-Advance to the next tutorial to learn how to enable GitOps on your AKS or Azure Arc-enabled Kubernetes clusters:
+```console
+kubectl create ns flux-config
+kubectl create secret generic -n flux-config my-custom-secret --from-literal=username=<my-username> --from-literal=password=<my-password-or-key>
+```
+
+For SSH authentication, you create a secret with the `identity` and `known_hosts` fields:
+
+```console
+kubectl create ns flux-config
+kubectl create secret generic -n flux-config my-custom-secret --from-file=identity=./id_rsa --from-file=known_hosts=./known_hosts
+```
+
+For both cases, when you create the Flux configuration, use `--local-auth-ref my-custom-secret` in place of the other authentication parameters:
+
+```azurecli
+az k8s-configuration flux create -g <cluster_resource_group> -c <cluster_name> -n <config_name> -t connectedClusters --scope cluster --namespace flux-config -u <git-repo-url> --kustomization name=kustomization1 --local-auth-ref my-custom-secret
+```
+
+Learn more about using a local Kubernetes secret with these authentication methods:
+
+* [Git repository HTTPS authentication](https://fluxcd.io/docs/components/source/gitrepositories/#https-authentication)
+* [Git repository HTTPS self-signed certificates](https://fluxcd.io/docs/components/source/gitrepositories/#https-self-signed-certificates)
+* [Git repository SSH authentication](https://fluxcd.io/docs/components/source/gitrepositories/#ssh-authentication)
+* [Bucket static authentication](https://fluxcd.io/docs/components/source/buckets/#static-authentication)
+
+> [!NOTE]
+> If you need Flux to access the source through your proxy, you'll need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md?tabs=azure-cli-connect-using-an-outbound-proxy-server).
+
+### Git implementation
+
+To support various repository providers that implement Git, Flux can be configured to use one of two Git libraries: `go-git` or `libgit2`. See the [Flux documentation](https://fluxcd.io/docs/components/source/gitrepositories/#git-implementation) for details.
+
+The GitOps implementation of Flux v2 automatically determines which library to use for public cloud repositories:
+
+* For GitHub, GitLab, and BitBucket repositories, Flux uses `go-git`.
+* For Azure DevOps and all other repositories, Flux uses `libgit2`.
+
+For on-premises repositories, Flux uses `libgit2`.
+
+### Kustomization
+
+By using `az k8s-configuration flux create`, you can create one or more kustomizations during the configuration.
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--kustomization` | No value | Start of a string of parameters that configure a kustomization. You can use it multiple times to create multiple kustomizations. |
+| `name` | String | Unique name for this kustomization. |
+| `path` | String | Path within the Git repository to reconcile with the cluster. Default is the top level of the branch. |
+| `prune` | Boolean | Default is `false`. Set `prune=true` to assure that the objects that Flux deployed to the cluster will be cleaned up if they're removed from the repository or if the Flux configuration or kustomizations are deleted. Using `prune=true` is important for environments where users don't have access to the clusters and can make changes only through the Git repository. |
+| `depends_on` | String | Name of one or more kustomizations (within this configuration) that must reconcile before this kustomization can reconcile. For example: `depends_on=["kustomization1","kustomization2"]`. Note that if you remove a kustomization that has dependent kustomizations, the dependent kustomizations will get a `DependencyNotReady` state and reconciliation will halt.|
+| `timeout` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Default: `10m`. |
+| `sync_interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Default: `10m`. |
+| `retry_interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Default: `10m`. |
+| `validation` | String | Values: `none`, `client`, `server`. Default: `none`. See [Flux documentation](https://fluxcd.io/docs/) for details.|
+| `force` | Boolean | Default: `false`. Set `force=true` to instruct the kustomize controller to re-create resources when patching fails because of an immutable field change. |
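+
+For example, here's a minimal sketch that creates two kustomizations, where `apps` reconciles only after `infra` and both prune removed objects (the repository URL and paths are placeholders):
+
+```azurecli
+az k8s-configuration flux create -g my-resource-group -c my-cluster -t connectedClusters \
+  -n myconfig --scope cluster -u https://github.com/my-org/my-repo --branch main \
+  --kustomization name=infra path=./infrastructure prune=true sync_interval=10m \
+  --kustomization name=apps path=./apps prune=true depends_on=\["infra"\]
+```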
+
+You can also use `az k8s-configuration flux kustomization` to create, update, list, show, and delete kustomizations in a Flux configuration:
+
+```console
+az k8s-configuration flux kustomization -h
+
+Group
+ az k8s-configuration flux kustomization : Commands to manage Kustomizations associated with Flux
+ v2 Kubernetes configurations.
+
+Commands:
+ create : Create a Kustomization associated with a Flux v2 Kubernetes configuration.
+ delete : Delete a Kustomization associated with a Flux v2 Kubernetes configuration.
+ list : List Kustomizations associated with a Flux v2 Kubernetes configuration.
+ show : Show a Kustomization associated with a Flux v2 Kubernetes configuration.
+ update : Update a Kustomization associated with a Flux v2 Kubernetes configuration.
+```
+
+Here are the kustomization creation options:
+
+```azurecli
+az k8s-configuration flux kustomization create -h
+
+This command is from the following extension: k8s-configuration
+
+Command
+ az k8s-configuration flux kustomization create : Create a Kustomization associated with a
+ Kubernetes Flux v2 Configuration.
+
+Arguments
+ --cluster-name -c [Required] : Name of the Kubernetes cluster.
+ --cluster-type -t [Required] : Specify Arc connected clusters or AKS managed clusters.
+ Allowed values: connectedClusters, managedClusters.
+ --kustomization-name -k [Required] : Specify the name of the kustomization to target.
+ --name -n [Required] : Name of the flux configuration.
+ --resource-group -g [Required] : Name of resource group. You can configure the default
+ group using `az configure --defaults group=<name>`.
+ --dependencies --depends --depends-on : Comma-separated list of kustomization dependencies.
+ --force : Re-create resources that cannot be updated on the
+ cluster (i.e. jobs). Allowed values: false, true.
+ --interval --sync-interval : Time between reconciliations of the kustomization on the
+ cluster.
+ --no-wait : Do not wait for the long-running operation to finish.
+ --path : Specify the path in the source that the kustomization
+ should apply.
+ --prune : Garbage collect resources deployed by the kustomization
+ on the cluster. Allowed values: false, true.
+ --retry-interval : Time between reconciliations of the kustomization on the
+ cluster on failures, defaults to --sync-interval.
+ --timeout : Maximum time to reconcile the kustomization before
+ timing out.
+
+Global Arguments
+ --debug : Increase logging verbosity to show all debug logs.
+ --help -h : Show this help message and exit.
+ --only-show-errors : Only show errors, suppressing warnings.
+ --output -o : Output format. Allowed values: json, jsonc, none,
+ table, tsv, yaml, yamlc. Default: json.
+ --query : JMESPath query string. See http://jmespath.org/ for more
+ information and examples.
+ --subscription : Name or ID of subscription. You can configure the
+ default subscription using `az account set -s
+ NAME_OR_ID`.
+ --verbose : Increase logging verbosity. Use --debug for full debug
+ logs.
+
+Examples
+ Create a Kustomization associated with a Kubernetes v2 Flux Configuration
+ az k8s-configuration flux kustomization create --resource-group my-resource-group \
+ --cluster-name mycluster --cluster-type connectedClusters --name myconfig \
+ --kustomization-name my-kustomization-2 --path ./my/path --prune --force
+```
+
+## Multi-tenancy
+
+Flux v2 supports [multi-tenancy](https://github.com/fluxcd/flux2-multi-tenancy) in [version 0.26](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). This capability has been integrated into Azure GitOps with Flux v2.
+
+> [!NOTE]
+> For the multi-tenancy feature, you need to know if your manifests contain any cross-namespace sourceRef for HelmRelease, Kustomization, ImagePolicy, or other objects, or [if you use a Kubernetes version less than 1.20.6](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). To prepare:
+>
+> * Upgrade to Kubernetes version 1.20.6 or greater.
+> * In your Kubernetes manifests, make sure that all `sourceRef` values point to objects within the same namespace as the GitOps configuration.
+> * If you need time to update your manifests, you can [opt out of multi-tenancy](#opt-out-of-multi-tenancy). However, you still need to upgrade your Kubernetes version.
+
+### Update manifests for multi-tenancy
+
+Let's say you deploy a `fluxConfiguration` to one of your Kubernetes clusters in the **cluster-config** namespace with cluster scope. You configure the source to sync the `https://github.com/fluxcd/flux2-kustomize-helm-example` repo. This is the same sample Git repo used in the [Deploy applications using GitOps with Flux v2 tutorial](tutorial-use-gitops-flux2.md). After Flux syncs the repo, it deploys the resources described in the manifests (YAML files). Two of the manifests describe HelmRelease and HelmRepository objects.
+
+```yaml
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+ name: nginx
+ namespace: nginx
+spec:
+ releaseName: nginx-ingress-controller
+ chart:
+ spec:
+ chart: nginx-ingress-controller
+ sourceRef:
+ kind: HelmRepository
+ name: bitnami
+ namespace: flux-system
+ version: "5.6.14"
+ interval: 1h0m0s
+ install:
+ remediation:
+ retries: 3
+ # Default values
+ # https://github.com/bitnami/charts/blob/master/bitnami/nginx-ingress-controller/values.yaml
+ values:
+ service:
+ type: NodePort
+```
+
+```yaml
+apiVersion: source.toolkit.fluxcd.io/v1beta1
+kind: HelmRepository
+metadata:
+ name: bitnami
+ namespace: flux-system
+spec:
+ interval: 30m
+ url: https://charts.bitnami.com/bitnami
+```
+
+By default, the Flux extension deploys the `fluxConfigurations` by impersonating the **flux-applier** service account, which is deployed only in the **cluster-config** namespace. With the above manifests, when multi-tenancy is enabled, the HelmRelease would be blocked, because it's in the **nginx** namespace but references a HelmRepository in the **flux-system** namespace. In addition, the Flux helm-controller can't apply the HelmRelease, because there is no **flux-applier** service account in the **nginx** namespace.
+
+To work with multi-tenancy, the correct approach is to deploy all Flux objects into the same namespace as the `fluxConfigurations`. This approach avoids the cross-namespace reference issue and gives the Flux controllers the permissions they need to apply the objects. For a GitOps configuration created in the **cluster-config** namespace, the above manifests would change to these:
+
+```yaml
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+ name: nginx
+ namespace: cluster-config
+spec:
+ releaseName: nginx-ingress-controller
+ targetNamespace: nginx
+ chart:
+ spec:
+ chart: nginx-ingress-controller
+ sourceRef:
+ kind: HelmRepository
+ name: bitnami
+ namespace: cluster-config
+ version: "5.6.14"
+ interval: 1h0m0s
+ install:
+ remediation:
+ retries: 3
+ # Default values
+ # https://github.com/bitnami/charts/blob/master/bitnami/nginx-ingress-controller/values.yaml
+ values:
+ service:
+ type: NodePort
+```
+
+```yaml
+apiVersion: source.toolkit.fluxcd.io/v1beta1
+kind: HelmRepository
+metadata:
+ name: bitnami
+ namespace: cluster-config
+spec:
+ interval: 30m
+ url: https://charts.bitnami.com/bitnami
+```
+
+### Opt out of multi-tenancy
+
+When the `microsoft.flux` extension is installed, multi-tenancy is enabled by default to keep your clusters secure by default. However, if you need to disable multi-tenancy, you can opt out by creating or updating the `microsoft.flux` extension in your clusters with `--configuration-settings multiTenancy.enforce=false`:
+
+```azurecli
+az k8s-extension create --extension-type microsoft.flux --configuration-settings multiTenancy.enforce=false -c CLUSTER_NAME -g RESOURCE_GROUP -n flux -t <managedClusters or connectedClusters>
+```
+
+```azurecli
+az k8s-extension update --configuration-settings multiTenancy.enforce=false -c CLUSTER_NAME -g RESOURCE_GROUP -n flux -t <managedClusters or connectedClusters>
+```
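+
+To confirm the resulting setting, you can inspect the extension afterward (a sketch; review the configuration settings in the command output):
+
+```azurecli
+az k8s-extension show -c CLUSTER_NAME -g RESOURCE_GROUP -n flux -t <managedClusters or connectedClusters>
+```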
+
+## Next steps
-> [!div class="nextstepaction"]
-* [Enable GitOps with Flux](./tutorial-use-gitops-flux2.md)
+* Use our tutorial to learn how to [enable GitOps on your AKS or Azure Arc-enabled Kubernetes clusters](tutorial-use-gitops-flux2.md).
+* Learn about [CI/CD workflow using GitOps](conceptual-gitops-flux2-ci-cd.md).
azure-arc Conceptual Inner Loop Gitops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-inner-loop-gitops.md
Title: "Inner Loop Developer Experience for Teams Adopting GitOps"-- Last updated 06/18/2021
-description: "This article provides a conceptual overview of Inner Loop Developer Experience for Teams Adopting GitOps "
-keywords: "GitOps, Kubernetes, K8s, Azure, Helm, Arc, AKS, Azure Kubernetes Service, containers, CI, CD, Azure DevOps, Inner loop, Dev Experience"
+description: "Learn how an established inner loop can enhance developer productivity and help in a seamless transition for teams adopting GitOps."
# Inner Loop Developer Experience for teams adopting GitOps
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md
Title: "Create and manage custom locations on Azure Arc-enabled Kubernetes"- Last updated 11/01/2022
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
Title: "Azure Arc-enabled Kubernetes cluster extensions"-- Last updated 10/12/2022
-description: "Deploy and manage lifecycle of extensions on Azure Arc-enabled Kubernetes"
+description: "Deploy and manage lifecycle of extensions on Azure Arc-enabled Kubernetes clusters."
# Deploy and manage Azure Arc-enabled Kubernetes cluster extensions
azure-arc Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/faq.md
Title: "Azure Arc-enabled Kubernetes and GitOps frequently asked questions"-- Last updated 08/22/2022
-description: "This article contains a list of frequently asked questions related to Azure Arc-enabled Kubernetes and Azure GitOps"
-keywords: "Kubernetes, Arc, Azure, containers, configuration, GitOps, faq"
+description: "This article contains a list of frequently asked questions related to Azure Arc-enabled Kubernetes and Azure GitOps."
azure-arc Kubernetes Resource View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/kubernetes-resource-view.md
Title: Access Kubernetes resources from Azure portal-- Last updated 07/22/2022 description: Learn how to interact with Kubernetes resources to manage an Azure Arc-enabled Kubernetes cluster from the Azure portal.
azure-arc Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/move-regions.md
Title: "Move Arc-enabled Kubernetes clusters between regions"-- Last updated 03/03/2021
-description: "Manually move your Azure Arc-enabled Kubernetes between regions"
-keywords: "Kubernetes, Arc, Azure, K8s, containers, region, move"
+description: "Manually move your Azure Arc-enabled Kubernetes (or connected cluster resources) between regions."
#Customer intent: As a Kubernetes cluster administrator, I want to move my Arc-enabled Kubernetes cluster to another Azure region.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/overview.md
Title: "Overview of Azure Arc-enabled Kubernetes"-- Last updated 05/03/2022 description: "This article provides an overview of Azure Arc-enabled Kubernetes."
-keywords: "Kubernetes, Arc, Azure, containers"
# What is Azure Arc-enabled Kubernetes?
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/plan-at-scale-deployment.md
Title: How to plan and deploy Azure Arc-enabled Kubernetes-- Last updated 04/12/2021 description: Onboard large number of clusters to Azure Arc-enabled Kubernetes for configuration management
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022 #
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes
description: Sample Azure Resource Graph queries for Azure Arc-enabled Kubernetes showing use of resource types and tables to access Azure Arc-enabled Kubernetes related resources and properties. Last updated 07/07/2022 - # Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues"--
-#
Last updated 11/04/2022 description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps."
-keywords: "Kubernetes, Arc, Azure, containers, GitOps, Flux"
# Azure Arc-enabled Kubernetes and GitOps troubleshooting
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
Title: Use Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters description: Learn how to set up the Azure Key Vault Provider for Secrets Store CSI Driver interface as an extension on Azure Arc enabled Kubernetes cluster-- Last updated 10/12/2022
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
Title: Azure Arc-enabled Open Service Mesh description: Open Service Mesh (OSM) extension on Azure Arc-enabled Kubernetes cluster- Last updated 10/12/2022
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md
Title: 'Tutorial: Implement CI/CD with GitOps using Azure Arc-enabled Kubernetes clusters'
-description: This tutorial walks through setting up a CI/CD solution using GitOps with Azure Arc-enabled Kubernetes clusters. For a conceptual take on this workflow, see the CI/CD Workflow using GitOps - Azure Arc-enabled Kubernetes article.
-
+description: This tutorial walks through setting up a CI/CD solution using GitOps with Azure Arc-enabled Kubernetes clusters.
Last updated 05/24/2021
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
Title: 'Tutorial: Implement CI/CD with GitOps (Flux v2)'
-description: This tutorial walks through setting up a CI/CD solution using GitOps (Flux v2) in Azure Arc-enabled Kubernetes or Azure Kubernetes Service clusters. For a conceptual take on this workflow, see the CI/CD Workflow using GitOps article.
-keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, ci/cd, devops"
+ Title: "Tutorial: Implement CI/CD with GitOps (Flux v2)"
+description: "This tutorial walks through setting up a CI/CD solution using GitOps (Flux v2) in Azure Arc-enabled Kubernetes or Azure Kubernetes Service clusters."
- Last updated 05/24/2022
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
Title: 'Tutorial: Deploy configurations using GitOps on an Azure Arc-enabled Kubernetes cluster'
-description: This tutorial demonstrates applying configurations on an Azure Arc-enabled Kubernetes cluster. For a conceptual take on this process, see the Configurations and GitOps - Azure Arc-enabled Kubernetes article.
-
+description: This tutorial demonstrates applying configurations on an Azure Arc-enabled Kubernetes cluster.
Last updated 05/24/2022
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
Title: "Tutorial: Use GitOps with Flux v2 in Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters"
+ Title: "Tutorial: Deploy applications using GitOps with Flux v2"
description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters."
-keywords: "GitOps, Flux, Flux v2, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops"
-- Previously updated : 10/24/2022 Last updated : 11/29/2022
-# Tutorial: Use GitOps with Flux v2 in Azure Arc-enabled Kubernetes or AKS clusters
+# Tutorial: Deploy applications using GitOps with Flux v2
-GitOps with Flux v2 can be enabled in Azure Kubernetes Service (AKS) managed clusters or Azure Arc-enabled Kubernetes connected clusters as a cluster extension. After the `microsoft.flux` cluster extension is installed, you can create one or more `fluxConfigurations` resources that sync your Git repository sources to the cluster and reconcile the cluster to the desired state. With GitOps, you can use your Git repository as the source of truth for cluster configuration and application deployment.
+GitOps with Flux v2 can be enabled as a [cluster extension](conceptual-extensions.md) in Azure Arc-enabled Kubernetes clusters or Azure Kubernetes Service (AKS) clusters. After the `microsoft.flux` cluster extension is installed, you can create one or more `fluxConfigurations` resources that sync your Git repository sources to the cluster and reconcile the cluster to the desired state. With GitOps, you can use your Git repository as the source of truth for cluster configuration and application deployment.
> [!NOTE] > Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
GitOps with Flux v2 can be enabled in Azure Kubernetes Service (AKS) managed clu
This tutorial describes how to use GitOps in a Kubernetes cluster. Before you dive in, take a moment to [learn how GitOps with Flux works conceptually](./conceptual-gitops-flux2.md). > [!IMPORTANT]
-> The `microsoft.flux` extension released major version 1.0.0. This includes the [multi-tenancy feature](#multi-tenancy). If you have existing GitOps Flux v2 configurations that use a previous version of the `microsoft.flux` extension you can upgrade to the latest extension manually using the Azure CLI: "az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t <CLUSTER_TYPE>" (use "-t connectedClusters" for Arc clusters and "-t managedClusters" for AKS clusters).
+> The `microsoft.flux` extension released major version 1.0.0. This includes the [multi-tenancy feature](conceptual-gitops-flux2.md#multi-tenancy). If you have existing GitOps Flux v2 configurations that use a previous version of the `microsoft.flux` extension, you can upgrade to the latest extension manually using the Azure CLI: `az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t <CLUSTER_TYPE>` (use `-t connectedClusters` for Arc clusters and `-t managedClusters` for AKS clusters).
> [!TIP] > When using this extension with [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview) you must set `--cluster-type` to use `provisionedClusters` and also add `--cluster-resource-provider microsoft.hybridcontainerservice` to the command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
* An MSI-based AKS cluster that's up and running. > [!IMPORTANT]
- > **Ensure that the AKS cluster is created with MSI** (not SPN), because the `microsoft.flux` extension won't work with SPN-based AKS clusters.
- > For new AKS clusters created with ΓÇ£az aks createΓÇ¥, the cluster will be MSI-based by default. For already created SPN-based clusters that need to be converted to MSI run ΓÇ£az aks update -g $RESOURCE_GROUP -n $CLUSTER_NAME --enable-managed-identityΓÇ¥. For more information, refer to [managed identity docs](../../aks/use-managed-identity.md).
+ > Ensure that the AKS cluster is created with MSI (not SPN), because the `microsoft.flux` extension won't work with SPN-based AKS clusters.
+ > For new AKS clusters created with `az aks create`, the cluster will be MSI-based by default. For already created SPN-based clusters that need to be converted to MSI, run `az aks update -g $RESOURCE_GROUP -n $CLUSTER_NAME --enable-managed-identity`. For more information, see [Use a managed identity in AKS](../../aks/use-managed-identity.md).
* Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type.
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
* Azure CLI version 2.15 or later. [Install the Azure CLI](/cli/azure/install-azure-cli) or use the following commands to update to the latest version:
- ```console
+ ```azurecli
az version az upgrade ```
-* Registration of the following Azure service providers. (It's OK to re-register an existing provider.)
+* The Kubernetes command-line client, [kubectl](https://kubernetes.io/docs/user-guide/kubectl/). `kubectl` is already installed if you use Azure Cloud Shell.
- ```console
+ Install `kubectl` locally using the [`az aks install-cli`](/cli/azure/aks#az-aks-install-cli) command:
+
+ ```azurecli
+ az aks install-cli
+ ```
+
+* Registration of the following Azure resource providers:
+
+ ```azurecli
az provider register --namespace Microsoft.Kubernetes az provider register --namespace Microsoft.ContainerService az provider register --namespace Microsoft.KubernetesConfiguration ```
- Registration is an asynchronous process and should finish within 10 minutes. Use the following code to monitor the registration process:
+ Registration is an asynchronous process and should finish within ten minutes. To monitor the registration process, use the following command:
- ```console
+ ```azurecli
az provider show -n Microsoft.KubernetesConfiguration -o table Namespace RegistrationPolicy RegistrationState
The most recent version of the Flux v2 extension and the two previous versions (
### Network requirements
-The GitOps agents require outbound (egress) TCP to the repo source on either port 22 (SSH) or port 443 (HTTPS) to function. The agents also require the following outbound URLs:
+The GitOps agents require outbound (egress) TCP to the repo source on either port 22 (SSH) or port 443 (HTTPS) to function. The agents also require access to the following outbound URLs:
| Endpoint (DNS) | Description | | | |
The GitOps agents require outbound (egress) TCP to the repo source on either por
## Enable CLI extensions
->[!NOTE]
->The `k8s-configuration` CLI extension manages either Flux v2 or Flux v1 configurations. Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
- Install the latest `k8s-configuration` and `k8s-extension` CLI extension packages:
-```console
+```azurecli
az extension add -n k8s-configuration az extension add -n k8s-extension ```
-To update these packages, use the following commands:
+To update these packages to the latest versions:
-```console
+```azurecli
az extension update -n k8s-configuration az extension update -n k8s-extension ```
-To see the list of az CLI extensions installed and their versions, use the following command:
+To see a list of all installed Azure CLI extensions and their versions, use the following command:
-```console
+```azurecli
az extension list -o table Experimental ExtensionType Name Path Preview Version
In the following example:
* The namespace for configuration installation is `cluster-config`. * The URL for the public Git repository is `https://github.com/Azure/gitops-flux2-kustomize-helm-mt`. * The Git repository branch is `main`.
-* The scope of the configuration is `cluster`. This gives the operators permissions to make changes throughout cluster. To use `namespace` scope with this tutorial, [see the changes needed](#multi-tenancy).
+* The scope of the configuration is `cluster`. This gives the operators permissions to make changes throughout the cluster. To use `namespace` scope with this tutorial, [see the changes needed](conceptual-gitops-flux2.md#multi-tenancy).
* Two kustomizations are specified with names `infra` and `apps`. Each is associated with a path in the repository. * The `apps` kustomization depends on the `infra` kustomization. (The `infra` kustomization must finish before the `apps` kustomization runs.) * Set `prune=true` on both kustomizations. This setting assures that the objects that Flux deployed to the cluster will be cleaned up if they're removed from the repository or if the Flux configuration or kustomizations are deleted.
-If the `microsoft.flux` extension isn't already installed in the cluster, it'll be installed. When the flux configuration is installed, the initial compliance state may be "Pending" or "Non-compliant" because reconciliation is still on-going. After a minute, you can query the configuration again and see the final compliance state.
-
-```console
+```azurecli
az k8s-configuration flux create -g flux-demo-rg \ -c flux-demo-arc \ -n cluster-config \
az k8s-configuration flux create -g flux-demo-rg \
--branch main \ --kustomization name=infra path=./infrastructure prune=true \ --kustomization name=apps path=./apps/staging prune=true dependsOn=\["infra"\]-
-'Microsoft.Flux' extension not found on the cluster, installing it now. This may take a few minutes...
-'Microsoft.Flux' extension was successfully installed on the cluster
-Creating the flux configuration 'cluster-config' in the cluster. This may take a few minutes...
-{
- "complianceState": "Pending",
- ... (not shown because of pending status)
-}
```
-Show the configuration after allowing time to finish reconciliations.
+The `microsoft.flux` extension will be installed on the cluster if it isn't already. When the flux configuration is first installed, the initial compliance state may be `Pending` or `Non-compliant` because reconciliation is still ongoing. After a minute or so, query the configuration again to see the final compliance state.
```console az k8s-configuration flux show -g flux-demo-rg -c flux-demo-arc -n cluster-config -t connectedClusters
-{
- "bucket": null,
- "complianceState": "Compliant",
- "configurationProtectedSettings": {},
- "errorMessage": "",
- "gitRepository": {
- "httpsCaCert": null,
- "httpsUser": null,
- "localAuthRef": null,
- "repositoryRef": {
- "branch": "main",
- "commit": null,
- "semver": null,
- "tag": null
- },
- "sshKnownHosts": null,
- "syncIntervalInSeconds": 600,
- "timeoutInSeconds": 600,
- "url": "https://github.com/Azure/gitops-flux2-kustomize-helm-mt"
- },
- "id": "/subscriptions/REDACTED/resourceGroups/flux-demo-rg/providers/Microsoft.Kubernetes/connectedClusters/flux-demo-arc/providers/Microsoft.KubernetesConfiguration/fluxConfigurations/cluster-config",
- "kustomizations": {
- "apps": {
- "dependsOn": [
- "infra"
- ],
- "force": false,
- "name": "apps",
- "path": "./apps/staging",
- "prune": true,
- "retryIntervalInSeconds": null,
- "syncIntervalInSeconds": 600,
- "timeoutInSeconds": 600
- },
- "infra": {
- "dependsOn": null,
- "force": false,
- "name": "infra",
- "path": "./infrastructure",
- "prune": true,
- "retryIntervalInSeconds": null,
- "syncIntervalInSeconds": 600,
- "timeoutInSeconds": 600
- }
- },
- "name": "cluster-config",
- "namespace": "cluster-config",
- "provisioningState": "Succeeded",
- "repositoryPublicKey": "",
- "resourceGroup": "Flux2-Test-RG-EUS",
- "scope": "cluster",
- "sourceKind": "GitRepository",
- "sourceSyncedCommitId": "main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf",
- "sourceUpdatedAt": "2022-04-06T17:34:03+00:00",
- "statusUpdatedAt": "2022-04-06T17:44:56.417000+00:00",
- "statuses": [
- {
- "appliedBy": null,
- "complianceState": "Compliant",
- "helmReleaseProperties": null,
- "kind": "GitRepository",
- "name": "cluster-config",
- "namespace": "cluster-config",
- "statusConditions": [
- {
- "lastTransitionTime": "2022-04-06T17:33:32+00:00",
- "message": "Fetched revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf",
- "reason": "GitOperationSucceed",
- "status": "True",
- "type": "Ready"
- }
- ]
- },
- {
- "appliedBy": null,
- "complianceState": "Compliant",
- "helmReleaseProperties": null,
- "kind": "Kustomization",
- "name": "cluster-config-apps",
- "namespace": "cluster-config",
- "statusConditions": [
- {
- "lastTransitionTime": "2022-04-06T17:44:04+00:00",
- "message": "Applied revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf",
- "reason": "ReconciliationSucceeded",
- "status": "True",
- "type": "Ready"
- }
- ]
- },
- {
- "appliedBy": {
- "name": "cluster-config-apps",
- "namespace": "cluster-config"
- },
- "complianceState": "Compliant",
- "helmReleaseProperties": {
- "failureCount": 0,
- "helmChartRef": {
- "name": "cluster-config-podinfo",
- "namespace": "cluster-config"
- },
- "installFailureCount": 0,
- "lastRevisionApplied": 1,
- "upgradeFailureCount": 0
- },
- "kind": "HelmRelease",
- "name": "podinfo",
- "namespace": "cluster-config",
- "statusConditions": [
- {
- "lastTransitionTime": "2022-04-06T17:33:43+00:00",
- "message": "Release reconciliation succeeded",
- "reason": "ReconciliationSucceeded",
- "status": "True",
- "type": "Ready"
- },
- {
- "lastTransitionTime": "2022-04-06T17:33:43+00:00",
- "message": "Helm install succeeded",
- "reason": "InstallSucceeded",
- "status": "True",
- "type": "Released"
- }
- ]
- },
- {
- "appliedBy": null,
- "complianceState": "Compliant",
- "helmReleaseProperties": null,
- "kind": "Kustomization",
- "name": "cluster-config-infra",
- "namespace": "cluster-config",
- "statusConditions": [
- {
- "lastTransitionTime": "2022-04-06T17:43:33+00:00",
- "message": "Applied revision: main/4f1bdad4d0a54b939a5e3d52c51464f67e474fcf",
- "reason": "ReconciliationSucceeded",
- "status": "True",
- "type": "Ready"
- }
- ]
- },
- {
- "appliedBy": {
- "name": "cluster-config-infra",
- "namespace": "cluster-config"
- },
- "complianceState": "Compliant",
- "helmReleaseProperties": null,
- "kind": "HelmRepository",
- "name": "bitnami",
- "namespace": "cluster-config",
- "statusConditions": [
- {
- "lastTransitionTime": "2022-04-06T17:33:36+00:00",
- "message": "Fetched revision: 46a41610ea410558eb485bcb673fd01c4d1f47b86ad292160b256555b01cce81",
- "reason": "IndexationSucceed",
- "status": "True",
- "type": "Ready"
- }
- ]
- },
- {
- "appliedBy": {
- "name": "cluster-config-infra",
- "namespace": "cluster-config"
- },
- "complianceState": "Compliant",
- "helmReleaseProperties": null,
- "kind": "HelmRepository",
- "name": "podinfo",
- "namespace": "cluster-config",
- "statusConditions": [
- {
- "lastTransitionTime": "2022-04-06T17:33:33+00:00",
- "message": "Fetched revision: 421665ba04fab9b275b9830947417b2cebf67764eee46d568c94cf2a95a6341d",
- "reason": "IndexationSucceed",
- "status": "True",
- "type": "Ready"
- }
- ]
- },
- {
- "appliedBy": {
- "name": "cluster-config-infra",
- "namespace": "cluster-config"
- },
- "complianceState": "Compliant",
- "helmReleaseProperties": {
- "failureCount": 0,
- "helmChartRef": {
- "name": "cluster-config-nginx",
- "namespace": "cluster-config"
- },
- "installFailureCount": 0,
- "lastRevisionApplied": 1,
- "upgradeFailureCount": 0
- },
- "kind": "HelmRelease",
- "name": "nginx",
- "namespace": "cluster-config",
- "statusConditions": [
- {
- "lastTransitionTime": "2022-04-06T17:34:13+00:00",
- "message": "Release reconciliation succeeded",
- "reason": "ReconciliationSucceeded",
- "status": "True",
- "type": "Ready"
- },
- {
- "lastTransitionTime": "2022-04-06T17:34:13+00:00",
- "message": "Helm install succeeded",
- "reason": "InstallSucceeded",
- "status": "True",
- "type": "Released"
- }
- ]
- },
- {
- "appliedBy": {
- "name": "cluster-config-infra",
- "namespace": "cluster-config"
- },
- "complianceState": "Compliant",
- "helmReleaseProperties": {
- "failureCount": 0,
- "helmChartRef": {
- "name": "cluster-config-redis",
- "namespace": "cluster-config"
- },
- "installFailureCount": 0,
- "lastRevisionApplied": 1,
- "upgradeFailureCount": 0
- },
- "kind": "HelmRelease",
- "name": "redis",
- "namespace": "cluster-config",
- "statusConditions": [
- {
- "lastTransitionTime": "2022-04-06T17:33:57+00:00",
- "message": "Release reconciliation succeeded",
- "reason": "ReconciliationSucceeded",
- "status": "True",
- "type": "Ready"
- },
- {
- "lastTransitionTime": "2022-04-06T17:33:57+00:00",
- "message": "Helm install succeeded",
- "reason": "InstallSucceeded",
- "status": "True",
- "type": "Released"
- }
- ]
- },
- {
- "appliedBy": {
- "name": "cluster-config-infra",
- "namespace": "cluster-config"
- },
- "complianceState": "Compliant",
- "helmReleaseProperties": null,
- "kind": "HelmChart",
- "name": "test-chart",
- "namespace": "cluster-config",
- "statusConditions": [
- {
- "lastTransitionTime": "2022-04-06T17:33:40+00:00",
- "message": "Pulled 'redis' chart with version '11.3.4'.",
- "reason": "ChartPullSucceeded",
- "status": "True",
- "type": "Ready"
- }
- ]
- }
- ],
- "suspend": false,
- "systemData": {
- "createdAt": "2022-04-06T17:32:44.646629+00:00",
- "createdBy": null,
- "createdByType": null,
- "lastModifiedAt": "2022-04-06T17:32:44.646629+00:00",
- "lastModifiedBy": null,
- "lastModifiedByType": null
- },
- "type": "Microsoft.KubernetesConfiguration/fluxConfigurations"
-}
``` These namespaces were created:
These namespaces were created:
* `cluster-config`: Holds the Flux configuration objects. * `nginx`, `podinfo`, `redis`: Namespaces for workloads described in manifests in the Git repository.
-```console
+To confirm the namespaces, run the following command:
+
+```azurecli
kubectl get namespaces ```
The `flux-system` namespace contains the Flux extension objects:
* Azure Flux controllers: `fluxconfig-agent`, `fluxconfig-controller` * OSS Flux controllers: `source-controller`, `kustomize-controller`, `helm-controller`, `notification-controller`
-The Flux agent and controller pods should be in a running state.
+The Flux agent and controller pods should be in a running state. Confirm this using the following command:
-```console
+```azurecli
kubectl get pods -n flux-system NAME READY STATUS RESTARTS AGE
volumesnapshots.snapshot.storage.k8s.io 2022-03-28T21:06:12Z
websites.extensions.example.com 2022-03-30T23:42:32Z ```
+Confirm additional details of the configuration by using the following commands.
+ ```console kubectl get fluxconfigs -A
NAME READY AGE
statefulset.apps/redis-master 1/1 68m ```
-### Delete the Flux configuration
-
-You can delete the Flux configuration by using the following command. This action deletes both the `fluxConfigurations` resource in Azure and the Flux configuration objects in the cluster. Because the Flux configuration was originally created with the `prune=true` parameter for the kustomization, all of the objects created in the cluster based on manifests in the Git repository will be removed when the Flux configuration is removed.
-
-```console
-az k8s-configuration flux delete -g flux-demo-rg -c flux-demo-arc -n cluster-config -t connectedClusters --yes
-```
-
-For an AKS cluster, use the same command but with `-t managedClusters`replacing `-t connectedClusters`.
-
-Note that this action does *not* remove the Flux extension.
-
-### Delete the Flux cluster extension
-
-You can delete the Flux extension by using either the CLI or the portal. The delete action removes both the `microsoft.flux` extension resource in Azure and the Flux extension objects in the cluster.
-
-If the Flux extension was created automatically when the Flux configuration was first created, the extension name will be `flux`.
-
-For an Azure Arc-enabled Kubernetes cluster, use this command:
-
-```console
-az k8s-extension delete -g flux-demo-rg -c flux-demo-arc -n flux -t connectedClusters --yes
-```
-
-For an AKS cluster, use the same command but with `-t managedClusters`replacing `-t connectedClusters`.
- ### Control which controllers are deployed with the Flux cluster extension The `source`, `helm`, `kustomize`, and `notification` Flux controllers are installed by default. The `image-automation` and `image-reflector` controllers must be enabled explicitly. You can use the `k8s-extension` CLI to make those choices:
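
As a sketch, enabling the optional controllers might look like the following. The `image-automation-controller.enabled` and `image-reflector-controller.enabled` setting names are assumptions here, so verify them against the current extension documentation before relying on them:

```azurecli
# Assumed setting names for enabling the optional controllers (verify before use).
az k8s-extension create -g <cluster_resource_group> -c <cluster_name> \
  -t <connectedClusters or managedClusters> --name flux --extension-type microsoft.flux \
  --configuration-settings image-automation-controller.enabled=true image-reflector-controller.enabled=true
```
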
For more information on OpenShift guidance for onboarding Flux, refer to the [Fl
## Work with parameters
-For a description of all parameters that Flux supports, see the [official Flux documentation](https://fluxcd.io/docs/). Flux in Azure doesn't support all parameters yet. Let us know if a parameter you need is missing from the Azure implementation.
-
-You can see the full list of parameters that the `k8s-configuration flux` CLI command supports by using the `-h` parameter:
-
-```console
-az k8s-configuration flux -h
-
-Group
- az k8s-configuration flux : Commands to manage Flux v2 Kubernetes configurations.
-
-Subgroups:
- deployed-object : Commands to see deployed objects associated with Flux v2 Kubernetes
- configurations.
- kustomization : Commands to manage Kustomizations associated with Flux v2 Kubernetes
- configurations.
-
-Commands:
- create : Create a Flux v2 Kubernetes configuration.
- delete : Delete a Flux v2 Kubernetes configuration.
- list : List all Flux v2 Kubernetes configurations.
- show : Show a Flux v2 Kubernetes configuration.
- update : Update a Flux v2 Kubernetes configuration.
-```
-
-Here are the parameters for the `k8s-configuration flux create` CLI command:
-
-```console
-az k8s-configuration flux create -h
-
-This command is from the following extension: k8s-configuration
-
-Command
- az k8s-configuration flux create : Create a Flux v2 Kubernetes configuration.
-
-Arguments
- --cluster-name -c [Required] : Name of the Kubernetes cluster.
- --cluster-type -t [Required] : Specify Arc connected clusters or AKS managed clusters.
- Allowed values: connectedClusters, managedClusters.
- --name -n [Required] : Name of the flux configuration.
- --resource-group -g [Required] : Name of resource group. You can configure the default group
- using `az configure --defaults group=<name>`.
- --url -u [Required] : URL of the source to reconcile.
- --bucket-insecure : Communicate with a bucket without TLS. Allowed values: false,
- true.
- --bucket-name : Name of the S3 bucket to sync.
- --container-name : Name of the Azure Blob Storage container to sync
- --interval --sync-interval : Time between reconciliations of the source on the cluster.
- --kind : Source kind to reconcile. Allowed values: bucket, git, azblob.
- Default: git.
- --kustomization -k : Define kustomizations to sync sources with parameters ['name',
- 'path', 'depends_on', 'timeout', 'sync_interval',
- 'retry_interval', 'prune', 'force'].
- --namespace --ns : Namespace to deploy the configuration. Default: default.
- --no-wait : Do not wait for the long-running operation to finish.
- --scope -s : Specify scope of the operator to be 'namespace' or 'cluster'.
- Allowed values: cluster, namespace. Default: cluster.
- --suspend : Suspend the reconciliation of the source and kustomizations
- associated with this configuration. Allowed values: false,
- true.
- --timeout : Maximum time to reconcile the source before timing out.
-
-Auth Arguments
- --local-auth-ref --local-ref : Local reference to a kubernetes secret in the configuration
- namespace to use for communication to the source.
-
-Bucket Auth Arguments
- --bucket-access-key : Access Key ID used to authenticate with the bucket.
- --bucket-secret-key : Secret Key used to authenticate with the bucket.
-
-Git Auth Arguments
- --https-ca-cert : Base64-encoded HTTPS CA certificate for TLS communication with
- private repository sync.
- --https-ca-cert-file : File path to HTTPS CA certificate file for TLS communication
- with private repository sync.
- --https-key : HTTPS token/password for private repository sync.
- --https-user : HTTPS username for private repository sync.
- --known-hosts : Base64-encoded known_hosts data containing public SSH keys
- required to access private Git instances.
- --known-hosts-file : File path to known_hosts contents containing public SSH keys
- required to access private Git instances.
- --ssh-private-key : Base64-encoded private ssh key for private repository sync.
- --ssh-private-key-file : File path to private ssh key for private repository sync.
-
-Git Repo Ref Arguments
- --branch : Branch within the git source to reconcile with the cluster.
- --commit : Commit within the git source to reconcile with the cluster.
- --semver : Semver range within the git source to reconcile with the
- cluster.
- --tag : Tag within the git source to reconcile with the cluster.
-
-Global Arguments
- --debug : Increase logging verbosity to show all debug logs.
- --help -h : Show this help message and exit.
- --only-show-errors : Only show errors, suppressing warnings.
- --output -o : Output format. Allowed values: json, jsonc, none, table, tsv,
- yaml, yamlc. Default: json.
- --query : JMESPath query string. See http://jmespath.org/ for more
- information and examples.
- --subscription : Name or ID of subscription. You can configure the default
- subscription using `az account set -s NAME_OR_ID`.
- --verbose : Increase logging verbosity. Use --debug for full debug logs.
-
-Azure Blob Storage Account Auth Arguments
- --sp_client_id : The client ID for authenticating a service principal with Azure Blob, required for this authentication method
- --sp_tenant_id : The tenant ID for authenticating a service principal with Azure Blob, required for this authentication method
- --sp_client_secret : The client secret for authenticating a service principal with Azure Blob
- --sp_client_cert : The Base64 encoded client certificate for authenticating a service principal with Azure Blob
- --sp_client_cert_password : The password for the client certificate used to authenticate a service principal with Azure Blob
- --sp_client_cert_send_chain : Specifies whether to include x5c header in client claims when acquiring a token to enable subject name / issuer based authentication for the client certificate
- --account_key : The Azure Blob Shared Key for authentication
- --sas_token : The Azure Blob SAS Token for authentication
- --mi_client_id : The client ID of the managed identity for authentication with Azure Blob
-
-Examples
- Create a Flux v2 Kubernetes configuration
- az k8s-configuration flux create --resource-group my-resource-group \
- --cluster-name mycluster --cluster-type connectedClusters \
- --name myconfig --scope cluster --namespace my-namespace \
- --kind git --url https://github.com/Azure/arc-k8s-demo \
- --branch main --kustomization name=my-kustomization
-
- Create a Kubernetes v2 Flux Configuration with Bucket Source Kind
- az k8s-configuration flux create --resource-group my-resource-group \
- --cluster-name mycluster --cluster-type connectedClusters \
- --name myconfig --scope cluster --namespace my-namespace \
- --kind bucket --url https://bucket-provider.minio.io \
- --bucket-name my-bucket --kustomization name=my-kustomization \
- --bucket-access-key my-access-key --bucket-secret-key my-secret-key
-
- Create a Kubernetes v2 Flux Configuration with Azure Blob Storage Source Kind
- az k8s-configuration flux create --resource-group my-resource-group \
- --cluster-name mycluster --cluster-type connectedClusters \
- --name myconfig --scope cluster --namespace my-namespace \
- --kind azblob --url https://mystorageaccount.blob.core.windows.net \
- --container-name my-container --kustomization name=my-kustomization \
- --account-key my-account-key
-```
-
-### Configuration general arguments
-
-| Parameter | Format | Notes |
-| - | - | - |
-| `--cluster-name` `-c` | String | Name of the cluster resource in Azure. |
-| `--cluster-type` `-t` | `connectedClusters`, `managedClusters` | Use `connectedClusters` for Azure Arc-enabled Kubernetes clusters and `managedClusters` for AKS clusters. |
-| `--resource-group` `-g` | String | Name of the Azure resource group that holds the Azure Arc or AKS cluster resource. |
-| `--name` `-n`| String | Name of the Flux configuration in Azure. |
-| `--namespace` `--ns` | String | Name of the namespace to deploy the configuration. Default: `default`. |
-| `--scope` `-s` | String | Permission scope for the operators. Possible values are `cluster` (full access) or `namespace` (restricted access). Default: `cluster`.
-| `--suspend` | flag | Suspends all source and kustomize reconciliations defined in this Flux configuration. Reconciliations active at the time of suspension will continue. |
-
-### Source general arguments
-
-| Parameter | Format | Notes |
-| - | - | - |
-| `--kind` | String | Source kind to reconcile. Allowed values: `bucket`, `git`, `azblob`. Default: `git`. |
-| `--timeout` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Maximum time to attempt to reconcile the source before timing out. Default: `10m`. |
-| `--sync-interval` `--interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Time between reconciliations of the source on the cluster. Default: `10m`. |
-
-### Git repository source reference arguments
-
-| Parameter | Format | Notes |
-| - | - | - |
-| `--branch` | String | Branch within the Git source to sync to the cluster. Default: `master`. Newer repositories might have a root branch named `main`, in which case you need to set `--branch=main`. |
-| `--tag` | String | Tag within the Git source to sync to the cluster. Example: `--tag=3.2.0`. |
-| `--semver` | String | Git tag `semver` range within the Git source to sync to the cluster. Example: `--semver=">=3.1.0-rc.1 <3.2.0"`. |
-| `--commit` | String | Git commit SHA within the Git source to sync to the cluster. Example: `--commit=363a6a8fe6a7f13e05d34c163b0ef02a777da20a`. |
-
-For more information, see the [Flux documentation on Git repository checkout strategies](https://fluxcd.io/docs/components/source/gitrepositories/#checkout-strategies).
-
-### Public Git repository
-
-| Parameter | Format | Notes |
-| - | - | - |
-| `--url` `-u` | http[s]://server/repo[.git] | URL of the Git repository source to reconcile with the cluster. |
-
-### Private Git repository with SSH and Flux-created keys
-
-Add the public key generated by Flux to the user account in your Git service provider.
-
-| Parameter | Format | Notes |
-| - | - | - |
-| `--url` `-u` | ssh://user@server/repo[.git] | `git@` should replace `user@` if the public key is associated with the repository instead of the user account. |
-
-### Private Git repository with SSH and user-provided keys
-
-Use your own private key directly or from a file. The key must be in [PEM format](https://aka.ms/PEMformat) and end with a newline (`\n`).
-
-Add the associated public key to the user account in your Git service provider.
-
-| Parameter | Format | Notes |
-| - | - | - |
-| `--url` `-u` | ssh://user@server/repo[.git] | `git@` should replace `user@` if the public key is associated with the repository instead of the user account. |
-| `--ssh-private-key` | Base64 key in [PEM format](https://aka.ms/PEMformat) | Provide the key directly. |
-| `--ssh-private-key-file` | Full path to local file | Provide the full path to the local file that contains the PEM-format key.
-
-### Private Git host with SSH and user-provided known hosts
-
-The Flux operator maintains a list of common Git hosts in its `known_hosts` file. Flux uses this information to authenticate the Git repository before establishing the SSH connection. If you're using an uncommon Git repository or your own Git host, you can supply the host key so that Flux can identify your repository.
-
-Just like private keys, you can provide your `known_hosts` content directly or in a file. When you're providing your own content, use the [known_hosts content format specifications](https://aka.ms/KnownHostsFormat), along with either of the preceding SSH key scenarios.
-
-| Parameter | Format | Notes |
-| - | - | - |
-| `--url` `-u` | ssh://user@server/repo[.git] | `git@` can replace `user@`. |
-| `--known-hosts` | Base64 string | Provide `known_hosts` content directly. |
-| `--known-hosts-file` | Full path to local file | Provide `known_hosts` content in a local file. |
-
-### Private Git repository with an HTTPS user and key
-
-| Parameter | Format | Notes |
-| - | - | - |
-| `--url` `-u` | `https://server/repo[.git]` | HTTPS with Basic Authentication. |
-| `--https-user` | Raw string | HTTPS username. |
-| `--https-key` | Raw string | HTTPS personal access token or password.
-
-### Private Git repository with an HTTPS CA certificate
-
-| Parameter | Format | Notes |
-| - | - | - |
-| `--url` `-u` | `https://server/repo[.git]` | HTTPS with Basic Authentication. |
-| `--https-ca-cert` | Base64 string | CA certificate for TLS communication. |
-| `--https-ca-cert-file` | Full path to local file | Provide CA certificate content in a local file. |
-
-### Bucket source arguments
-
-If you use a `bucket` source instead of a `git` source, here are the bucket-specific command arguments.
-
-| Parameter | Format | Notes |
-| - | - | - |
-| `--url` `-u` | URL String | The URL for the `bucket`. Formats supported: http://, https://. |
-| `--bucket-name` | String | Name of the `bucket` to sync. |
-| `--bucket-access-key` | String | Access Key ID used to authenticate with the `bucket`. |
-| `--bucket-secret-key` | String | Secret Key used to authenticate with the `bucket`. |
-| `--bucket-insecure` | Boolean | Communicate with a `bucket` without TLS. If not provided, assumed false; if provided, assumed true. |
-
-### Azure Blob Storage Account source arguments
+Flux supports many parameters to enable various scenarios. For a description of all parameters that Flux supports, see the [official Flux documentation](https://fluxcd.io/docs/). Flux in Azure doesn't support all parameters yet. Let us know if a parameter you need is missing from the Azure implementation.
-If you use a `azblob` source, here are the blob-specific command arguments.
-
-| Parameter | Format | Notes |
-| - | - | - |
-| `--url` `-u` | URL String | The URL for the `azblob`. |
-| `--container-name` | String | Name of the Azure Blob Storage container to sync |
-| `--sp_client_id` | String | The client ID for authenticating a service principal with Azure Blob, required for this authentication method |
-| `--sp_tenant_id` | String | The tenant ID for authenticating a service principal with Azure Blob, required for this authentication method |
-| `--sp_client_secret` | String | The client secret for authenticating a service principal with Azure Blob |
-| `--sp_client_cert` | String | The Base64 encoded client certificate for authenticating a service principal with Azure Blob |
-| `--sp_client_cert_password` | String | The password for the client certificate used to authenticate a service principal with Azure Blob |
-| `--sp_client_cert_send_chain` | String | Specifies whether to include x5c header in client claims when acquiring a token to enable subject name / issuer based authentication for the client certificate |
-| `--account_key` | String | The Azure Blob Shared Key for authentication |
-| `--sas_token` | String | The Azure Blob SAS Token for authentication |
-| `--mi_client_id` | String | The client ID of the managed identity for authentication with Azure Blob |
-
-### Local secret for authentication with source
-
-You can use a local Kubernetes secret for authentication with a `git`, `bucket` or `azBlob` source. The local secret must contain all of the authentication parameters needed for the source and must be created in the same namespace as the Flux configuration.
-
-| Parameter | Format | Notes |
-| - | - | - |
-| `--local-auth-ref` `--local-ref` | String | Local reference to a Kubernetes secret in the Flux configuration namespace to use for authentication with the source. |
-
-For HTTPS authentication, you create a secret with the `username` and `password`:
-
-```console
-kubectl create ns flux-config
-kubectl create secret generic -n flux-config my-custom-secret --from-literal=username=<my-username> --from-literal=password=<my-password-or-key>
-```
-
-For SSH authentication, you create a secret with the `identity` and `known_hosts` fields:
-
-```console
-kubectl create ns flux-config
-kubectl create secret generic -n flux-config my-custom-secret --from-file=identity=./id_rsa --from-file=known_hosts=./known_hosts
-```
-
-For both cases, when you create the Flux configuration, use `--local-auth-ref my-custom-secret` in place of the other authentication parameters:
-
-```console
-az k8s-configuration flux create -g <cluster_resource_group> -c <cluster_name> -n <config_name> -t connectedClusters --scope cluster --namespace flux-config -u <git-repo-url> --kustomization name=kustomization1 --local-auth-ref my-custom-secret
-```
-
-Learn more about using a local Kubernetes secret with these authentication methods:
-
-* [Git repository HTTPS authentication](https://fluxcd.io/docs/components/source/gitrepositories/#https-authentication)
-* [Git repository HTTPS self-signed certificates](https://fluxcd.io/docs/components/source/gitrepositories/#https-self-signed-certificates)
-* [Git repository SSH authentication](https://fluxcd.io/docs/components/source/gitrepositories/#ssh-authentication)
-* [Bucket static authentication](https://fluxcd.io/docs/components/source/buckets/#static-authentication)
-
-> [!NOTE]
-> If you need Flux to access the source through your proxy, you'll need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md?tabs=azure-cli-connect-using-an-outbound-proxy-server).
-
-### Git implementation
-
-To support various repository providers that implement Git, Flux can be configured to use one of two Git libraries: `go-git` or `libgit2`. See the [Flux documentation](https://fluxcd.io/docs/components/source/gitrepositories/#git-implementation) for details.
-
-The GitOps implementation of Flux v2 automatically determines which library to use for public cloud repositories:
-
-* For GitHub, GitLab, and BitBucket repositories, Flux uses `go-git`.
-* For Azure DevOps and all other repositories, Flux uses `libgit2`.
-
-For on-premises repositories, Flux uses `libgit2`.
-
-### Kustomization
-
-By using `az k8s-configuration flux create`, you can create one or more kustomizations during the configuration.
-
-| Parameter | Format | Notes |
-| - | - | - |
-| `--kustomization` | No value | Start of a string of parameters that configure a kustomization. You can use it multiple times to create multiple kustomizations. |
-| `name` | String | Unique name for this kustomization. |
-| `path` | String | Path within the Git repository to reconcile with the cluster. Default is the top level of the branch. |
-| `prune` | Boolean | Default is `false`. Set `prune=true` to assure that the objects that Flux deployed to the cluster will be cleaned up if they're removed from the repository or if the Flux configuration or kustomizations are deleted. Using `prune=true` is important for environments where users don't have access to the clusters and can make changes only through the Git repository. |
-| `depends_on` | String | Name of one or more kustomizations (within this configuration) that must reconcile before this kustomization can reconcile. For example: `depends_on=["kustomization1","kustomization2"]`. Note that if you remove a kustomization that has dependent kustomizations, the dependent kustomizations will get a `DependencyNotReady` state and reconciliation will halt.|
-| `timeout` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Default: `10m`. |
-| `sync_interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Default: `10m`. |
-| `retry_interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Default: `10m`. |
-| `validation` | String | Values: `none`, `client`, `server`. Default: `none`. See [Flux documentation](https://fluxcd.io/docs/) for details.|
-| `force` | Boolean | Default: `false`. Set `force=true` to instruct the kustomize controller to re-create resources when patching fails because of an immutable field change. |
-
-You can also use `az k8s-configuration flux kustomization` to create, update, list, show, and delete kustomizations in a Flux configuration:
-
-```console
-az k8s-configuration flux kustomization -h
-
-Group
- az k8s-configuration flux kustomization : Commands to manage Kustomizations associated with Flux
- v2 Kubernetes configurations.
-
-Commands:
- create : Create a Kustomization associated with a Flux v2 Kubernetes configuration.
- delete : Delete a Kustomization associated with a Flux v2 Kubernetes configuration.
- list : List Kustomizations associated with a Flux v2 Kubernetes configuration.
- show : Show a Kustomization associated with a Flux v2 Kubernetes configuration.
- update : Update a Kustomization associated with a Flux v2 Kubernetes configuration.
-```
-
-Here are the kustomization creation options:
-
-```console
-az k8s-configuration flux kustomization create -h
-
-This command is from the following extension: k8s-configuration
-
-Command
- az k8s-configuration flux kustomization create : Create a Kustomization associated with a
- Kubernetes Flux v2 Configuration.
-
-Arguments
- --cluster-name -c [Required] : Name of the Kubernetes cluster.
- --cluster-type -t [Required] : Specify Arc connected clusters or AKS managed clusters.
- Allowed values: connectedClusters, managedClusters.
- --kustomization-name -k [Required] : Specify the name of the kustomization to target.
- --name -n [Required] : Name of the flux configuration.
- --resource-group -g [Required] : Name of resource group. You can configure the default
- group using `az configure --defaults group=<name>`.
- --dependencies --depends --depends-on : Comma-separated list of kustomization dependencies.
- --force : Re-create resources that cannot be updated on the
- cluster (i.e. jobs). Allowed values: false, true.
- --interval --sync-interval : Time between reconciliations of the kustomization on the
- cluster.
- --no-wait : Do not wait for the long-running operation to finish.
- --path : Specify the path in the source that the kustomization
- should apply.
- --prune : Garbage collect resources deployed by the kustomization
- on the cluster. Allowed values: false, true.
- --retry-interval : Time between reconciliations of the kustomization on the
- cluster on failures, defaults to --sync-interval.
- --timeout : Maximum time to reconcile the kustomization before
- timing out.
-
-Global Arguments
- --debug : Increase logging verbosity to show all debug logs.
- --help -h : Show this help message and exit.
- --only-show-errors : Only show errors, suppressing warnings.
- --output -o : Output format. Allowed values: json, jsonc, none,
- table, tsv, yaml, yamlc. Default: json.
- --query : JMESPath query string. See http://jmespath.org/ for more
- information and examples.
- --subscription : Name or ID of subscription. You can configure the
- default subscription using `az account set -s
- NAME_OR_ID`.
- --verbose : Increase logging verbosity. Use --debug for full debug
- logs.
-
-Examples
- Create a Kustomization associated with a Kubernetes v2 Flux Configuration
- az k8s-configuration flux kustomization create --resource-group my-resource-group \
- --cluster-name mycluster --cluster-type connectedClusters --name myconfig \
- --kustomization-name my-kustomization-2 --path ./my/path --prune --force
-```
+For more information about available parameters and how to use them, see [GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes](conceptual-gitops-flux2.md#parameters).
## Manage GitOps configurations by using the Azure portal
-The Azure portal is useful for managing GitOps configurations and the Flux extension in Azure Arc-enabled Kubernetes or AKS clusters. The portal displays all Flux configurations associated with each cluster and enables drilling in to each.
+The Azure portal is useful for managing GitOps configurations and the Flux extension in Azure Arc-enabled Kubernetes or AKS clusters. In the Azure portal, you can see all of the Flux configurations associated with each cluster and get detailed information, including the overall compliance state of each cluster.
-The portal provides the overall compliance state of the cluster. The Flux objects that have been deployed to the cluster are also shown, along with their installation parameters, compliance state, and any errors.
+The Flux objects that have been deployed to each cluster are also shown, along with their installation parameters, compliance state, and any errors.
-You can also use the portal to create, update, and delete GitOps configurations.
+You can also use the Azure portal to create, update, and delete GitOps configurations.
## Manage cluster configuration by using the Flux Kustomize controller
-The Flux Kustomize controller is installed as part of the `microsoft.flux` cluster extension. It allows the declarative management of cluster configuration and application deployment by using Kubernetes manifests synced from a Git repository. These Kubernetes manifests can include a *kustomize.yaml* file, but it isn't required.
+The Flux Kustomize controller is installed as part of the `microsoft.flux` cluster extension. It allows the declarative management of cluster configuration and application deployment by using Kubernetes manifests synced from a Git repository. These Kubernetes manifests can optionally include a *kustomization.yaml* file.
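For example, here's a minimal sketch of such an optional kustomization file, assuming the synced path contains hypothetical `deployment.yaml` and `service.yaml` manifests:

```yaml
# kustomization.yaml (optional): if omitted, Flux generates one that includes
# every manifest found in the configured path.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
```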
For usage details, see the following:
For usage details, see the following:
* [Flux Helm controller](https://fluxcd.io/docs/components/helm/) > [!TIP]
-> Because of how Helm handles index files, processing helm charts is an expensive operation and can have very high memory footprint. As a result, helm chart reconciliation, when occurring in parallel, can cause memory spikes and OOMKilled if you are reconciling a large number of helm charts at a given time. By default, the source-controller sets its memory limit at 1Gi and its memory requests at 64Mi. If you need to increase this limit and requests due to a high number of large helm chart reconciliations, run the following command after installing the microsoft.flux extension:
+> Because of how Helm handles index files, processing Helm charts is an expensive operation and can have a very high memory footprint. As a result, reconciling a large number of Helm charts at once can cause memory spikes and `OOMKilled` errors. By default, the source-controller sets its memory limit at 1Gi and its memory requests at 64Mi. If you need to increase this limit and these requests because of a high number of large Helm chart reconciliations, run the following command after installing the `microsoft.flux` extension:
> > `az k8s-extension update -g <resource-group> -c <cluster-name> -n flux -t connectedClusters --config source-controller.resources.limits.memory=2Gi source-controller.resources.requests.memory=300Mi` ### Use the GitRepository source for Helm charts
-If your Helm charts are stored in the `GitRepository` source that you configure as part of the `fluxConfigurations` resource, you can indicate that the configured source should be used as the source of the Helm charts by adding `clusterconfig.azure.com/use-managed-source: "true"` to your HelmRelease yaml, as shown in the following example:
+If your Helm charts are stored in the `GitRepository` source that you configure as part of the `fluxConfigurations` resource, you can indicate that the configured source should be used as the source of the Helm charts by adding `clusterconfig.azure.com/use-managed-source: "true"` to your HelmRelease.yaml file, as shown in the following example:
-```console
+```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1 kind: HelmRelease
spec:
By using this annotation, the HelmRelease that is deployed will be patched with the reference to the configured source. Currently, only `GitRepository` source is supported.
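As a fuller sketch, a HelmRelease that reuses the managed `GitRepository` source might look like the following. The release name, namespace, and chart path are assumptions for illustration, not values from the article.

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-app
  namespace: cluster-config
  annotations:
    # Tells the Flux extension to patch in the GitRepository source created by the Flux configuration.
    clusterconfig.azure.com/use-managed-source: "true"
spec:
  releaseName: my-app
  interval: 10m
  chart:
    spec:
      # Path to the chart within the managed Git source (hypothetical).
      chart: ./charts/my-app
```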
-## Multi-tenancy
-
-Flux v2 supports [multi-tenancy](https://github.com/fluxcd/flux2-multi-tenancy) in [version 0.26](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). This capability has been integrated into Azure GitOps with Flux v2.
-
->[ !NOTE]
-> For the multi-tenancy feature, you need to know if your manifests contain any cross-namespace sourceRef for HelmRelease, Kustomization, ImagePolicy, or other objects, or [if you use a Kubernetes version less than 1.20.6](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). To prepare, take these actions:
->
-> * Upgrade to Kubernetes version 1.20.6 or greater.
-> * In your Kubernetes manifests, assure that all `sourceRef` are to objects within the same namespace as the GitOps configuration.
-> * If you need time to update your manifests, you can [opt out of multi-tenancy](#opt-out-of-multi-tenancy). However, you still need to upgrade your Kubernetes version.
-
-### Update manifests for multi-tenancy
+## Delete the Flux configuration and extension
-Let's say you deploy a `fluxConfiguration` to one of our Kubernetes clusters in the **cluster-config** namespace with cluster scope. You configure the source to sync the https://github.com/fluxcd/flux2-kustomize-helm-example repo. This is the same sample Git repo used in the tutorial earlier in this doc. After Flux syncs the repo, it will deploy the resources described in the manifests (YAML files). Two of the manifests describe HelmRelease and HelmRepository objects.
-
-```yaml
-apiVersion: helm.toolkit.fluxcd.io/v2beta1
-kind: HelmRelease
-metadata:
- name: nginx
- namespace: nginx
-spec:
- releaseName: nginx-ingress-controller
- chart:
- spec:
- chart: nginx-ingress-controller
- sourceRef:
- kind: HelmRepository
- name: bitnami
- namespace: flux-system
- version: "5.6.14"
- interval: 1h0m0s
- install:
- remediation:
- retries: 3
- # Default values
- # https://github.com/bitnami/charts/blob/master/bitnami/nginx-ingress-controller/values.yaml
- values:
- service:
- type: NodePort
-```
+Use the commands below to delete your Flux configuration and, if desired, the Flux extension itself.
-```yaml
-apiVersion: source.toolkit.fluxcd.io/v1beta1
-kind: HelmRepository
-metadata:
- name: bitnami
- namespace: flux-system
-spec:
- interval: 30m
- url: https://charts.bitnami.com/bitnami
-```
+### Delete the Flux configuration
-By default, the Flux extension will deploy the `fluxConfigurations` by impersonating the **flux-applier** service account that is deployed only in the **cluster-config** namespace. Using the above manifests, when multi-tenancy is enabled the HelmRelease would be blocked. This is because the HelmRelease is in the **nginx** namespace and is referencing a HelmRepository in the **flux-system** namespace. Also, the Flux helm-controller cannot apply the HelmRelease, because there is no **flux-applier** service account in the **nginx** namespace.
+The command below deletes both the `fluxConfigurations` resource in Azure and the Flux configuration objects in the cluster. Because the Flux configuration was originally created with the `prune=true` parameter for the kustomization, all of the objects created in the cluster based on manifests in the Git repository will be removed when the Flux configuration is removed. However, this command does not remove the Flux extension itself.
-To work with multi-tenancy, the correct approach is to deploy all Flux objects into the same namespace as the `fluxConfigurations`. This avoids the cross-namespace reference issue, and allows the Flux controllers to get the permissions to apply the objects. Thus, for a GitOps configuration created in the **cluster-config** namespace, the above manifests would change to these:
+For an Azure Arc-enabled Kubernetes cluster, use this command:
-```yaml
-apiVersion: helm.toolkit.fluxcd.io/v2beta1
-kind: HelmRelease
-metadata:
- name: nginx
- namespace: cluster-config
-spec:
- releaseName: nginx-ingress-controller
- targetNamespace: nginx
- chart:
- spec:
- chart: nginx-ingress-controller
- sourceRef:
- kind: HelmRepository
- name: bitnami
- namespace: cluster-config
- version: "5.6.14"
- interval: 1h0m0s
- install:
- remediation:
- retries: 3
- # Default values
- # https://github.com/bitnami/charts/blob/master/bitnami/nginx-ingress-controller/values.yaml
- values:
- service:
- type: NodePort
+```azurecli
+az k8s-configuration flux delete -g flux-demo-rg -c flux-demo-arc -n cluster-config -t connectedClusters --yes
```
-```yaml
-apiVersion: source.toolkit.fluxcd.io/v1beta1
-kind: HelmRepository
-metadata:
- name: bitnami
- namespace: cluster-config
-spec:
- interval: 30m
- url: https://charts.bitnami.com/bitnami
-```
+For an AKS cluster, use the same command but with `-t managedClusters` replacing `-t connectedClusters`.
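For example, a minimal sketch of the AKS form, assuming a hypothetical AKS cluster named `flux-demo-aks` in the same resource group:

```azurecli
az k8s-configuration flux delete -g flux-demo-rg -c flux-demo-aks -n cluster-config -t managedClusters --yes
```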
-### Opt out of multi-tenancy
+### Delete the Flux cluster extension
-When the `microsoft.flux` extension is installed, multi-tenancy is enabled by default to assure security by default in your clusters. However, if you need to disable multi-tenancy, you can opt out by creating or updating the `microsoft.flux` extension in your clusters with "--configuration-settings multiTenancy.enforce=false".
+You can delete the Flux extension by using either Azure CLI or the Azure portal. The delete action removes both the `microsoft.flux` extension resource in Azure and the Flux extension objects in the cluster.
-```console
-az k8s-extension create --extension-type microsoft.flux --configuration-settings multiTenancy.enforce=false -c CLUSTER_NAME -g RESOURCE_GROUP -n flux -t <managedClusters or connectedClusters>
+If the Flux extension was created automatically when the Flux configuration was first created, the extension name will be `flux`.
-or
+For an Azure Arc-enabled Kubernetes cluster, use this command:
-az k8s-extension update --configuration-settings multiTenancy.enforce=false -c CLUSTER_NAME -g RESOURCE_GROUP -n flux -t <managedClusters or connectedClusters>
+```azurecli
+az k8s-extension delete -g flux-demo-rg -c flux-demo-arc -n flux -t connectedClusters --yes
```
+For an AKS cluster, use the same command but with `-t managedClusters` replacing `-t connectedClusters`.
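For example, assuming the same hypothetical AKS cluster name:

```azurecli
az k8s-extension delete -g flux-demo-rg -c flux-demo-aks -n flux -t managedClusters --yes
```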
+ ## Migrate from Flux v1 If you've been using Flux v1 in Azure Arc-enabled Kubernetes or AKS clusters and want to migrate to using Flux v2 in the same clusters, you first need to delete the Flux v1 `sourceControlConfigurations` from the clusters. The `microsoft.flux` cluster extension won't install if there are Flux v1 `sourceControlConfigurations` resources in the cluster. Use these Azure CLI commands to find and then delete existing `sourceControlConfigurations` in a cluster:
-```console
+```azurecli
az k8s-configuration list --cluster-name <Arc or AKS cluster name> --cluster-type <connectedClusters OR managedClusters> --resource-group <resource group name> az k8s-configuration delete --name <configuration name> --cluster-name <Arc or AKS cluster name> --cluster-type <connectedClusters OR managedClusters> --resource-group <resource group name> ```
-You can also use the Azure portal to view and delete GitOps configurations in Azure Arc-enabled Kubernetes or AKS clusters.
+You can also use the Azure portal to view and delete existing GitOps configurations in Azure Arc-enabled Kubernetes or AKS clusters.
-General information about migration from Flux v1 to Flux v2 is available in the fluxed project: [Migrate from Flux v1 to v2](https://fluxcd.io/docs/migration/).
+More information about migration from Flux v1 to Flux v2 is available in the fluxcd project: [Migrate from Flux v1 to v2](https://fluxcd.io/docs/migration/).
## Next steps
-Advance to the next tutorial to learn how to apply configuration at scale with Azure Policy.
-> [!div class="nextstepaction"]
-> [Use Azure Policy to enforce GitOps at scale](./use-azure-policy-flux-2.md).
+* Read more about [configurations and GitOps](conceptual-gitops-flux2.md).
+* Learn how to [use Azure Policy to enforce GitOps at scale](./use-azure-policy-flux-2.md).
azure-arc Use Azure Policy Flux 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-azure-policy-flux-2.md
Title: "Deploy applications consistently at scale using Flux v2 configurations and Azure Policy"- Last updated 8/23/2022 description: "Use Azure Policy to apply Flux v2 configurations at scale on Azure Arc-enabled Kubernetes or AKS clusters."
-keywords: "Kubernetes, K8s, Arc, AKS, Azure, containers, GitOps, Flux v2, policy"
# Deploy applications consistently at scale using Flux v2 configurations and Azure Policy
azure-arc Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-azure-policy.md
Title: "Apply Flux v1 configurations at-scale using Azure Policy"--
-#
Last updated 8/23/2022 description: "Apply Flux v1 configurations at-scale using Azure Policy"
-keywords: "Kubernetes, Arc, Azure, K8s, containers, GitOps, Flux v1, policy"
# Apply Flux v1 configurations at-scale using Azure Policy
azure-arc Use Gitops With Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-gitops-with-helm.md
Title: "Deploy Helm Charts using GitOps on Azure Arc-enabled Kubernetes cluster"--
-#
Last updated 05/24/2022 description: "Use GitOps with Helm for an Azure Arc-enabled cluster configuration"
-keywords: "GitOps, Kubernetes, K8s, Azure, Helm, Arc, AKS, Azure Kubernetes Service, containers"
# Deploy Helm Charts using GitOps on an Azure Arc-enabled Kubernetes cluster
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
Title: "Azure Arc-enabled Kubernetes validation"-- Last updated 03/03/2021 description: "Describes Arc validation program for Kubernetes distributions"
-keywords: "Kubernetes, Arc, Azure, K8s, validation"
# Azure Arc-enabled Kubernetes validation
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 10/08/2022 Last updated : 11/18/2022
The Azure Connected Machine agent is designed to manage agent and system resourc
* If the extension installs background services that run independent of Azure Arc, such as the Microsoft Monitoring Agent, those services will not be subject to the resource governance constraints listed above. * The Log Analytics agent and Azure Monitor Agent are allowed to use up to 60% of the CPU during their install/upgrade/uninstall operations on Red Hat Linux, CentOS, and other enterprise Linux variants. The limit is higher for this combination of extensions and operating systems to accommodate the performance impact of [SELinux](https://www.redhat.com/en/topics/linux/what-is-selinux) on these systems. * The Azure Monitor Agent can use up to 30% of the CPU during normal operations.
+ * The Linux OS Update Extension (used by Azure Update Management Center) can use up to 30% of the CPU to patch the server.
## Instance metadata
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
Title: Archive for What's new with Azure Arc-enabled servers agent description: The What's new release notes in the Overview section for Azure Arc-enabled servers agent contains six months of activity. Thereafter, the items are removed from the main article and put into this article. Previously updated : 10/11/2022 Last updated : 11/18/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.19 - June 2022
+
+### Known issues
+
+- Agents configured to use private endpoints will incorrectly try to download extensions from a public endpoint. [Upgrade the agent](manage-agent.md#upgrade-the-agent) to version 1.20 or later to restore correct functionality.
+- Some systems may incorrectly report their cloud provider as Azure Stack HCI.
+
+### New features
+
+- When installed on a Google Compute Engine virtual machine, the agent will now detect and report Google Cloud metadata in the "detected properties" of the Azure Arc-enabled servers resource. [Learn more](agent-overview.md#instance-metadata) about the new metadata.
+
+### Fixed
+
+- An issue that could cause the extension manager to hang during extension installation, update, and removal operations has been resolved.
+- Improved support for TLS 1.3
+ ## Version 1.18 - May 2022 ### New features
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 10/11/2022 Last updated : 11/15/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
+## Version 1.24 - November 2022
+
+### New features
+
+- `azcmagent logs` improvements:
+ - Only the most recent log file for each component is collected by default. To collect all log files, use the new `--full` flag.
+ - Journal logs for the agent services are now collected on Linux operating systems
+ - Logs from extensions are now collected
+- Agent telemetry is no longer sent to `dc.services.visualstudio.com`. You may be able to remove this URL from any firewall or proxy server rules if no other applications in your environment require it.
+- Failed extension installs can now be retried without removing the old extension as long as the extension settings are different
+- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Azure Update Management Center extension on Linux to reduce downtime during update operations
+
+### Fixed
+
+- Improved logic for detecting machines running on Azure Stack HCI to reduce false positives
+- Auto-registration of required resource providers only happens when they are unregistered
+- Agent will now detect drift between the proxy settings of the command line tool and background services
+- Fixed a bug with proxy bypass feature that caused the agent to incorrectly use the proxy server for bypassed URLs
+- Improved error handling when extensions don't download successfully, fail validation, or have corrupt state files
+ ## Version 1.23 - October 2022 ### New features
This page is updated monthly, so revisit it regularly. If you're looking for ite
- Agents configured to use private endpoints will now download extensions over the private endpoint - The `--use-private-link` flag on [azcmagent check](manage-agent.md#check) has been renamed to `--enable-pls-check` to more accurately represent its function
-## Version 1.19 - June 2022
-
-### Known issues
--- Agents configured to use private endpoints will incorrectly try to download extensions from a public endpoint. [Upgrade the agent](manage-agent.md#upgrade-the-agent) to version 1.20 or later to restore correct functionality.-- Some systems may incorrectly report their cloud provider as Azure Stack HCI.-
-### New features
--- When installed on a Google Compute Engine virtual machine, the agent will now detect and report Google Cloud metadata in the "detected properties" of the Azure Arc-enabled servers resource. [Learn more](agent-overview.md#instance-metadata) about the new metadata.-
-### Fixed
--- An issue that could cause the extension manager to hang during extension installation, update, and removal operations has been resolved.-- Improved support for TLS 1.3- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions.md
The following extensions are available for Windows and Linux machines:
|SUSE Linux Enterprise Server 15 |X |X |X |X |X |X |X |X | |SUSE Linux Enterprise Server 15 SP5 |X |X |X |X |X | |X |X | |SUSE Linux Enterprise Server 12 SP5 |X |X |X |X |X | |X |X |
-|Unbuntu 20.04 LTS |X |X |X |X |X | |X |X |
-|Unbuntu 18.04 LTS |X |X |X |X |X |X |X |X |
-|Unbuntu 16.04 LTS |X |X |X |X | | |X |X |
-|Unbuntu 140.04 LTS | |X | |X | | |X | |
+|Ubuntu 20.04 LTS |X |X |X |X |X | |X |X |
+|Ubuntu 18.04 LTS |X |X |X |X |X |X |X |X |
+|Ubuntu 16.04 LTS |X |X |X |X | | |X |X |
+|Ubuntu 14.04 LTS | |X | |X | | |X | |
For the regional availabilities of different Azure services and VM extensions available for Azure Arc-enabled servers, [refer to Azure Global's Product Availability Roadmap](https://global.azure.com/product-availability/roadmap).
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
Title: Connected Machine agent network requirements description: Learn about the networking requirements for using the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 09/29/2022 Last updated : 11/15/2022
The table below lists the URLs that must be available in order to install and us
|`*.servicebus.windows.net`|For Windows Admin Center and SSH scenarios|If using SSH or Windows Admin Center from Azure|Public| |`*.waconazure.com`|For Windows Admin Center connectivity|If using Windows Admin Center|Public| |`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured |
-|`dc.services.visualstudio.com`|Agent telemetry|Optional| Public |
+|`dc.services.visualstudio.com`|Agent telemetry|Optional, not used in agent versions 1.24+| Public |
> [!NOTE] > To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
The table below lists the URLs that must be available in order to install and us
|`*.his.arc.azure.us`|Metadata and hybrid identity services|Always| Private | |`*.guestconfiguration.azure.us`| Extension management and guest configuration services |Always| Private | |`*.blob.core.usgovcloudapi.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured |
-|`dc.applicationinsights.us`|Agent telemetry|Optional| Public |
+|`dc.applicationinsights.us`|Agent telemetry|Optional, not used in agent versions 1.24+| Public |
### [Azure China](#tab/azure-china)
The table below lists the URLs that must be available in order to install and us
|`azgn*.servicebus.chinacloudapi.cn`|Notification service for extension and connectivity scenarios|Always| |`*.servicebus.chinacloudapi.cn`|For Windows Admin Center and SSH scenarios|If using SSH or Windows Admin Center from Azure| |`*.blob.core.chinacloudapi.cn`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints|
-|`dc.applicationinsights.azure.cn`|Agent telemetry|Optional|
+|`dc.applicationinsights.azure.cn`|Agent telemetry|Optional, not used in agent versions 1.24+|
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Title: Connected Machine agent prerequisites description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 10/11/2022 Last updated : 11/18/2022
The following versions of the Windows and Linux operating system are officially
Windows operating systems: * NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).
-* Windows PowerShell 4.0 or later is required. No action is required for Windows Server 2012 R2 and above. For Windows Server 2008 R2, [Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616).
+* Windows PowerShell 4.0 or later is required. No action is required for Windows Server 2012 R2 and above. For Windows Server 2008 R2 SP1, [Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616).
Linux operating systems: * systemd * wget (to download the installation script)
+* openssl
+* gnupg
## Required permissions
azure-cache-for-redis Cache Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-insights-overview.md
+
+ Title: Azure Monitor for Azure Cache for Redis | Microsoft Docs
+description: This article describes the Azure Monitor for Azure Redis Cache feature, which provides cache owners with a quick understanding of performance and utilization problems.
+++++ Last updated : 11/22/2022++++
+# Explore Azure Monitor for Azure Cache for Redis
+
+For all of your Azure Cache for Redis resources, Azure Monitor for Azure Cache for Redis provides a unified, interactive view of:
+
+- Overall performance
+- Failures
+- Capacity
+- Operational health
+
+This article helps you understand the benefits of this new monitoring experience. It also shows how to modify and adapt the experience to fit the unique needs of your organization.
+
+## Introduction
+
+Before starting the experience, you should understand how Azure Monitor for Azure Cache for Redis visually presents information.
+
+It delivers:
+
+- **At scale perspective** of your Azure Cache for Redis resources in a single location across all of your subscriptions. You can selectively scope to only the subscriptions and resources you want to evaluate.
+
+- **Drill-down analysis** of a particular Azure Cache for Redis resource. You can diagnose problems and see detailed analysis of utilization, failures, capacity, and operations. Select any of these categories to see an in-depth view of relevant information.
+
+- **Customization** of this experience, which is built atop Azure Monitor workbook templates. The experience lets you change what metrics are displayed and modify or set thresholds that align with your limits. You can save the changes in a custom workbook and then pin workbook charts to Azure dashboards.
+
+This feature doesn't require you to enable or configure anything. Azure Cache for Redis information is collected by default.
+
+>[!NOTE]
+>There is no charge to access this feature. You're charged only for the Azure Monitor essential features you configure or enable, as described on the [Azure Monitor pricing details](https://azure.microsoft.com/pricing/details/monitor/) page.
+
+## View utilization and performance metrics for Azure Cache for Redis
+
+To view the utilization and performance of your Azure Cache for Redis resources across all of your subscriptions, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for **Monitor**, and select **Monitor**.
+
+ :::image type="content" source="../cosmos-db/media/insights-overview/search-monitor.png" alt-text="Search box with the word 'Monitor' and the Services search result that shows 'Monitor' with a speedometer symbol":::
+
+1. Select **Azure Cache for Redis**. If this option isn't present, select **More** > **Azure Cache for Redis**.
+
+### Overview
+
+On **Overview**, the table displays interactive Azure Cache for Redis metrics. You can filter the results based on the options you select from the following drop-down lists:
+
+- **Subscriptions**: Only subscriptions that have an Azure Cache for Redis resource are listed.
+
+- **Azure Cache for Redis**: You can select all, a subset, or a single Azure Cache for Redis resource.
+
+- **Time Range**: By default, the table displays the last four hours of information based on the corresponding selections.
+
+There's a counter tile under the drop-down lists. The tile shows the total number of Azure Cache for Redis resources in the selected subscriptions. Conditional color codes or heat maps for workbook columns report transaction metrics. The deepest color represents the highest value. Lighter colors represent lower values.
+
+Selecting a drop-down list arrow next to one of the Azure Cache for Redis resources reveals a breakdown of the performance metrics at the individual resource level.
++
+When you select the Azure Cache for Redis resource name highlighted in blue, you see the default **Overview** table for that resource. It shows these columns:
+
+- **Used Memory**
+- **Used Memory Percentage**
+- **Server Load**
+- **Server Load Timeline**
+- **CPU**
+- **Connected Clients**
+- **Cache Misses**
+- **Errors (Max)**
+
+### Operations
+
+When you select **Operations** at the top of the page, the **Operations** table of the workbook template opens. It shows these columns:
+
+- **Total Operations**
+- **Total Operations Timeline**
+- **Operations Per Second**
+- **Gets**
+- **Sets**
++
+### Usage
+
+When you select **Usage** at the top of the page, the **Usage** table of the workbook template opens. It shows these columns:
+
+- **Cache Read**
+- **Cache Read Timeline**
+- **Cache Write**
+- **Cache Hits**
+- **Cache Misses**
++
+### Failures
+
+When you select **Failures** at the top of the page, the **Failures** table of the workbook template opens. It shows these columns:
+
+- **Total Errors**
+- **Failover/Errors**
+- **UnresponsiveClient/Errors**
+- **RDB/Errors**
+- **AOF/Errors**
+- **Export/Errors**
+- **Dataloss/Errors**
+- **Import/Errors**
++
+### Metric definitions
+
+For a full list of the metric definitions that form these workbooks, check out the [article on available metrics and reporting intervals](./cache-how-to-monitor.md#create-your-own-metrics).
+
+## View from an Azure Cache for Redis resource
+
+To access Azure Monitor for Azure Cache for Redis directly from an individual resource:
+
+1. In the Azure portal, select Azure Cache for Redis.
+
+2. From the list, choose an individual Azure Cache for Redis resource. In the **Monitoring** section, choose **Insights**.
+
+ :::image type="content" source="./media/cache-insights-overview/insights.png" alt-text="Screenshot of Menu options with the words 'Insights' highlighted in a red box.":::
+
+These views are also accessible by selecting the resource name of an Azure Cache for Redis resource from the Azure Monitor level workbook.
+
+### Resource-level overview
+
+The **Overview** workbook for an Azure Cache for Redis resource shows several performance metrics that give you access to:
+
+- Interactive performance charts showing the most essential details related to Azure Cache for Redis performance.
+
+- Metrics and status tiles highlighting shard performance, total number of connected clients, and overall latency.
++
+Selecting either of the other tabs, **Performance** or **Operations**, opens the corresponding workbook.
+
+### Resource-level performance
++
+### Resource-level operations
++
+## Pin, export, and expand
+
+To pin any metric section to an [Azure dashboard](../azure-portal/azure-portal-dashboards.md), select the pushpin symbol in the section's upper right.
++
+To export your data into an Excel format, select the down arrow symbol to the left of the pushpin symbol.
++
+To expand or collapse all views in a workbook, select the expand symbol to the left of the export symbol.
++
+## Customize Azure Monitor for Azure Cache for Redis
+
+Because this experience is built atop Azure Monitor workbook templates, you can select **Customize** > **Edit** > **Save** to save a copy of your modified version into a custom workbook.
++
+Workbooks are saved within a resource group in either the **My Reports** section or the **Shared Reports** section. **My Reports** is available only to you. **Shared Reports** is available to everyone with access to the resource group.
+
+After you save a custom workbook, go to the workbook gallery to open it.
++
+## Troubleshooting
+
+For troubleshooting guidance, refer to the dedicated workbook-based insights [troubleshooting article](../azure-monitor/insights/troubleshoot-workbooks.md).
+
+## Next steps
+
+- Configure [metric alerts](../azure-monitor/alerts/alerts-metric.md) and [service health notifications](../service-health/alerts-activity-log-service-notifications-portal.md) to set up automated alerts that aid in detecting problems.
+- Learn the scenarios that workbooks support, how to author or customize reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
You may have a limited number of Logic Apps actions per action group.
### Secure webhook
-When you use a secure webhook action, you must use Azure AD to secure the connection between your action group and your protected web API, which is your webhook endpoint. For an overview of Azure AD applications and service principals, see [Microsoft identity platform (v2.0) overview](../../active-directory/develop/v2-overview.md). Follow these steps to take advantage of the secure webhook functionality.
+When you use a secure webhook action, you must use Azure AD to secure the connection between your action group and your protected web API, which is your webhook endpoint.
+
+The secure webhook action authenticates to the protected API by using a service principal instance of the "AZNS AAD Webhook" Azure AD application in your Azure AD tenant. To make the action group work, this service principal must be added as a member of a role on the target Azure AD application that grants access to the target endpoint.
+
+For an overview of Azure AD applications and service principals, see [Microsoft identity platform (v2.0) overview](../../active-directory/develop/v2-overview.md). Follow these steps to take advantage of the secure webhook functionality.
> [!NOTE] >
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
When you're successfully connected and synced:
> [!NOTE] > ServiceNow has a rate limit for requests per hour. To configure the limit, define **Inbound REST API rate limiting** in the ServiceNow instance.
+## Payload structure
+
+The payload that is sent to ServiceNow has a common structure. The structure includes a `<Description>` section that contains all of the alert data.
+
+For all alert types except log search alerts, the payload follows the [common schema](./alerts-common-schema.md).
+
+For log search alerts, the structure is as follows; an example with sample values appears after the list:
+
+- Alert (alert rule name) : \<value>
+- Search Query : \<value>
+- Search Start Time(UTC) : \<value>
+- Search End Time(UTC) : \<value>
+- AffectedConfigurationItems : [\<list of impacted configuration items>]
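For example, the `<Description>` section of a log search alert payload might look like this (all values are hypothetical):

```
Alert : Contoso - High request failure rate
Search Query : AppRequests | where Success == false | summarize count() by AppRoleName
Search Start Time(UTC) : 2022-11-28 01:00:00
Search End Time(UTC) : 2022-11-28 02:00:00
AffectedConfigurationItems : ["contoso-web-01","contoso-web-02"]
```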
+ ## Next steps * [ITSM Connector overview](itsmc-overview.md)
azure-monitor Access Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/access-api.md
Title: Access the API
-description: There are two endpoints through which you can communicate with the Azure Monitor Log Analytics API.
+ Title: API Access and Authentication
+description: How to Authenticate and access the Azure Monitor Log Analytics API.
Previously updated : 11/18/2021 Last updated : 11/28/2022 # Access the Azure Monitor Log Analytics API
-You can communicate with the Azure Monitor Log Analytics API using this endpoint: `https://api.loganalytics.io`. To access the API, you must authenticate through Azure Active Directory (Azure AD).
-## Public API format
+You can submit a query request to a workspace using the Azure Monitor Log Analytics endpoint `https://api.loganalytics.azure.com`. To access the endpoint, you must authenticate through Azure Active Directory (Azure AD).
+> [!NOTE]
+> The `api.loganalytics.io` endpoint is being replaced by `api.loganalytics.azure.com`. `api.loganalytics.io` will continue to be supported for the foreseeable future.
+## Authenticating with a demo API key
-The public API format is:
+To quickly explore the API without Azure Active Directory authentication, use the demonstration workspace with sample data, which supports API key authentication.
+
+To authenticate and run queries against the sample workspace, use `DEMO_WORKSPACE` as the {workspace-id} and pass in the API key `DEMO_KEY`.
+
+If either the Application ID or the API key is incorrect, the API service will return a [403](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_Error) (Forbidden) error.
+
+The API key `DEMO_KEY` can be passed in three different ways, depending on whether you prefer to use the URL, a header, or basic authentication.
+
+1. **Custom header**: provide the API key in the custom header `X-Api-Key`
+2. **Query parameter**: provide the API key in the URL parameter `api_key`
+3. **Basic authentication**: provide the API key as either username or password. If you provide both, the API key must be in the username.
+
+This example uses the Workspace ID and API key in the header:
+
+```
+ POST https://api.loganalytics.azure.com/v1/workspaces/DEMO_WORKSPACE/query
+ X-Api-Key: DEMO_KEY
+ Content-Type: application/json
+
+ {
+ "query": "AzureActivity | summarize count() by Category"
+ }
+```
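As a sketch of the query-parameter option, the same demo key can instead be passed in the URL; the request body is unchanged:

```
 POST https://api.loganalytics.azure.com/v1/workspaces/DEMO_WORKSPACE/query?api_key=DEMO_KEY
 Content-Type: application/json

 {
   "query": "AzureActivity | summarize count() by Category"
 }
```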
+## Public API endpoint
+
+The public API endpoint is:
```
- https://{hostname}/{api-version}/workspaces/{workspaceId}/query?[parameters]
+ https://api.loganalytics.azure.com/{api-version}/workspaces/{workspaceId}
``` where: - **api-version**: The API version. The current version is "v1"
+ - **workspaceId**: Your workspace ID
-## Next Steps
-Get detailed information about the [API format](request-format.md).
+The query is passed in the request body.
+
+For example,
+ ```
+ https://api.loganalytics.azure.com/v1/workspaces/1234abcd-def89-765a-9abc-def1234abcde
+
+ Body:
+ {
+ "query": "Usage"
+ }
+```
+## Set up Authentication
+
+To access the API, you need to register a client app with Azure Active Directory and request a token.
+1. [Register an app in Azure Active Directory](./register-app-for-token.md).
+1. After completing the Active Directory setup and workspace permissions, request an authorization token.
+
+## Request an Authorization Token
+
+Before beginning, make sure you have all the values required to make the request successfully. All requests require:
+- Your Azure Active Directory tenant ID.
+- Your workspace ID.
+- Your Azure Active Directory client ID for the app.
+- An Azure Active Directory client secret for the app.
+
+The Log Analytics API supports Azure Active Directory authentication with three different [Azure AD OAuth2](/azure/active-directory/develop/active-directory-protocols-oauth-code) flows:
+- Client credentials
+- Authorization code
+- Implicit
++
+### Client Credentials Flow
+
+In the client credentials flow, the token is used with the Log Analytics endpoint. A single request is made to receive a token, using the credentials provided for your app in the [Register an app in Azure Active Directory](./register-app-for-token.md) step above.
+Use the `https://api.loganalytics.azure.com` endpoint.
+
+##### Client Credentials Token URL (POST request)
+
+```http
+  POST /<your-tenant-id>/oauth2/v2.0/token
+  Host: https://login.microsoftonline.com
+  Content-Type: application/x-www-form-urlencoded
+
+  grant_type=client_credentials
+  &client_id=<app-client-id>
+  &scope=https://api.loganalytics.io/.default
+  &client_secret=<app-client-secret>
+```
+
+A successful request receives an access token in the response:
+
+```http
+  {
+    "token_type": "Bearer",
+    "expires_in": "86399",
+    "ext_expires_in": "86399",
+    "access_token": "eyJ0eXAiOiJKV1QiLCJ.....Ax"
+  }
+```
+
+Use the token in requests to the log analytics endpoint:
+
+```http
+ POST /v1/workspaces/your workspace id/query?timespan=P1D
+ Host: https://api.loganalytics.azure.com
+ Content-Type: application/json
+ Authorization: bearer <your access token>
+
+ Body:
+ {
+ "query": "AzureActivity |summarize count() by Category"
+ }
+```
++
+Example Response:
+
+```http
+ {
+ "tables": [
+ {
+ "name": "PrimaryResult",
+ "columns": [
+ {
+ "name": "OperationName",
+ "type": "string"
+ },
+ {
+ "name": "Level",
+ "type": "string"
+ },
+ {
+ "name": "ActivityStatus",
+ "type": "string"
+ }
+ ],
+ "rows": [
+ [
+ "Metric Alert",
+ "Informational",
+ "Resolved",
+ ...
+ ],
+ ...
+ ]
+ },
+ ...
+ ]
+ }
+```
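As an end-to-end sketch of the client credentials flow from a shell, using curl; the tenant ID, client ID, client secret, and workspace ID placeholders are assumptions:

```console
# Request a token with the client credentials flow.
curl -X POST "https://login.microsoftonline.com/<your-tenant-id>/oauth2/v2.0/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials&client_id=<app-client-id>&client_secret=<app-client-secret>&scope=https://api.loganalytics.io/.default"

# Use the returned access_token to run a query against the workspace.
curl -X POST "https://api.loganalytics.azure.com/v1/workspaces/<workspace-id>/query" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{"query": "AzureActivity | summarize count() by Category"}'
```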
+
+### Authorization Code Flow
+
+The main OAuth2 flow supported is through [authorization codes](/azure/active-directory/develop/active-directory-protocols-oauth-code). This method requires two HTTP requests to acquire a token with which to call the Azure Monitor Log Analytics API. There are two URLs, one endpoint per request. Their formats are:
+
+#### Authorization Code URL (GET request):
+
+```http
+  GET https://login.microsoftonline.com/YOUR_AAD_TENANT/oauth2/authorize?
+  client_id=<app-client-id>
+  &response_type=code
+  &redirect_uri=<app-redirect-uri>
+  &resource=https://api.loganalytics.io
+```
+
+When making a request to the Authorize URL, the client\_id is the Application ID from your Azure AD App, copied from the App's properties menu. The redirect\_uri is the home page/login URL from the same Azure AD App. When a request is successful, this endpoint redirects you to the sign-in page you provided at sign-up with the authorization code appended to the URL. See the following example:
+
+```http
+ http://<app-redirect-uri>/?code=<authorization-code>&session_state=<state-guid>
+```
+
+At this point, you've obtained an authorization code, which you now use to request an access token.
+
+#### Authorization Code Token URL (POST request)
+
+```http
+ POST /<your-tenant-id>/oauth2/token HTTP/1.1
+ Host: https://login.microsoftonline.com
+ Content-Type: application/x-www-form-urlencoded
+
+ grant_type=authorization_code
+ &client_id=<app-client-id>
+ &code=<authorization-code-from-GET-request>
+ &redirect_uri=<app-redirect-uri>
+ &resource=https://api.loganalytics.io
+ &client_secret=<app-client-secret>
+```
+
+All values are the same as before, with some additions. The authorization code is the same code you received in the previous request after a successful redirect, and it's combined with the client secret obtained for the Azure AD app. If you didn't save the secret, you can delete it and create a new one from the **Certificates & secrets** page of the Azure AD app. The response is a JSON string that contains the token, with the following schema. Types are indicated for the token values.
+
+Response example:
+
+```http
+ {
+ "access_token": "eyJ0eXAiOiJKV1QiLCJ.....Ax",
+ "expires_in": "3600",
+ "ext_expires_in": "1503641912",
+ "id_token": "not_needed_for_log_analytics",
+ "not_before": "1503638012",
+ "refresh_token": "eyJ0esdfiJKV1ljhgYF.....Az",
+ "resource": "https://api.loganalytics.io",
+ "scope": "Data.Read",
+ "token_type": "bearer"
+ }
+```
+
+The access token portion of this response is what you present to the Log Analytics API in the `Authorization: Bearer` header. You can also use the refresh token later to acquire a new access\_token and refresh\_token when the current ones expire. For this request, the format and endpoint are:
+
+```http
+ POST /<your-tenant-id>/oauth2/token HTTP/1.1
+ Host: https://login.microsoftonline.com
+ Content-Type: application/x-www-form-urlencoded
+
+ client_id=<app-client-id>
+ &refresh_token=<refresh-token>
+ &grant_type=refresh_token
+ &resource=https://api.loganalytics.io
+ &client_secret=<app-client-secret>
+```
+
+Response example:
+
+```http
+ {
+ "token_type": "Bearer",
+ "expires_in": "3600",
+ "expires_on": "1460404526",
+ "resource": "https://api.loganalytics.io",
+ "access_token": "eyJ0eXAiOiJKV1QiLCJ.....Ax",
+ "refresh_token": "eyJ0esdfiJKV1ljhgYF.....Az"
+ }
+```
+
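+As an illustration only, the following Python sketch (again assuming the third-party `requests` package) exchanges an authorization code for tokens and later redeems the refresh token. The placeholders correspond to the values used in the requests above.
+
+```python
+import requests
+
+TENANT_ID = "<your-tenant-id>"
+CLIENT_ID = "<app-client-id>"
+CLIENT_SECRET = "<app-client-secret>"
+REDIRECT_URI = "<app-redirect-uri>"
+AUTH_CODE = "<authorization-code-from-redirect>"
+
+TOKEN_URL = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/token"
+
+# Exchange the authorization code for an access token and a refresh token.
+tokens = requests.post(
+    TOKEN_URL,
+    data={
+        "grant_type": "authorization_code",
+        "client_id": CLIENT_ID,
+        "client_secret": CLIENT_SECRET,
+        "code": AUTH_CODE,
+        "redirect_uri": REDIRECT_URI,
+        "resource": "https://api.loganalytics.io",
+    },
+).json()
+
+# Later, redeem the refresh token for a new access token.
+refreshed = requests.post(
+    TOKEN_URL,
+    data={
+        "grant_type": "refresh_token",
+        "client_id": CLIENT_ID,
+        "client_secret": CLIENT_SECRET,
+        "refresh_token": tokens["refresh_token"],
+        "resource": "https://api.loganalytics.io",
+    },
+).json()
+
+print(refreshed["token_type"], refreshed["expires_in"])
+```
+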
+### Implicit Code Flow
+
+The Log Analytics API supports the OAuth2 [implicit flow](/azure/active-directory/develop/active-directory-dev-understanding-oauth2-implicit-grant). For this flow, only a single request is required but no refresh token can be acquired.
+
+#### Implicit Code Authorize URL
+
+```http
+ GET https://login.microsoftonline.com/<your-tenant-id>/oauth2/authorize?
+ client_id=<app-client-id>
+ &response_type=token
+ &redirect_uri=<app-redirect-uri>
+ &resource=https://api.loganalytics.io
+```
+
+A successful request produces a redirect to your redirect URI with the token in the URL, as follows.
+
+```http
+ http://<app-redirect-uri>/#access_token=<access-token>&token_type=Bearer&expires_in=3600&session_state=<state-guid>
+```
+
+This access\_token can be used as the `Authorization: Bearer` header value when passed to the Log Analytics API to authorize requests.
+
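+If you capture the redirect URL, for example by copying it from the browser's address bar, the access token can be read from the URL fragment with standard-library helpers. A minimal sketch with hypothetical values:
+
+```python
+from urllib.parse import urlparse, parse_qs
+
+# Hypothetical redirect produced by the implicit flow.
+redirect_url = "http://localhost/#access_token=eyJ0eXAi...&token_type=Bearer&expires_in=3600"
+
+# The token arrives in the URL fragment (after the #), not the query string.
+fragment = urlparse(redirect_url).fragment
+access_token = parse_qs(fragment)["access_token"][0]
+
+# Pass the token in the Authorization header, as with the other flows.
+headers = {"Authorization": f"Bearer {access_token}"}
+print(headers)
+```
+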
+## More Information
+
+You can find documentation about OAuth2 with Azure AD here:
+ - [Azure AD Authorization Code flow](/azure/active-directory/develop/active-directory-protocols-oauth-code)
+ - [Azure AD Implicit Grant flow](/azure/active-directory/develop/active-directory-dev-understanding-oauth2-implicit-grant)
+ - [Azure AD S2S Client Credentials flow](/azure/active-directory/develop/active-directory-protocols-oauth-service-to-service)
++
+## Next steps
+
+- [Request format](./request-format.md)
+- [Response format](./response-format.md)
+- [Querying logs for Azure resources](./azure-resource-queries.md)
+- [Batch queries](./batch-queries.md)
azure-monitor Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/authentication-authorization.md
- Title: Request an authorization token
-description: Set up authentication and authorization for the Azure Monitor Log Analytics API.
-- Previously updated : 11/22/2021--
-# Set Up Authentication and Authorization for the Azure Monitor Log Analytics API
-
-To set up authentication and authorization for the Azure Monitor Log Analytics API:
-
-## Set Up Authentication
-1. [Set up Azure Directory](../../../active-directory/develop/quickstart-register-app.md). During setup, use these settings at the relevant steps:
- - When asked for the API to connect to, select **APIs my organization uses** and then search for "Log Analytics API".
- - For the API permissions, select **Delegated permissions**.
-1. After completing the Active Directory setup, [Request an Authorization Token](#request-an-authorization-token).
-1. (Optional) If you only want to work with sample data in a non-production environment, you can just [use an API key](#authenticating-with-an-api-key).
-## Request an Authorization Token
-
-Before beginning, make sure you have all the values required to make OAuth2 calls successfully. All requests require:
-- Your Azure AD tenant-- Your workspace ID-- Your client ID for the Azure AD app-- A client secret for the Azure AD app (referred to as "keys" in the Azure AD App menu bar).--
-### Client Credentials Flow
-
-In the client credentials flow, the token is used with the ARM endpoint. A single request is made to receive a token, using the application permissions provided during the Azure AD application setup.
-The resource requested is: `https://management.azure.com`.
-You can also use this flow to request a token to `https://api.loganalytics.io`. Replace the "resource" in the example.
-
-#### Client Credentials Token URL (POST request)
-
-```
- POST /YOUR_AAD_TENANT/oauth2/token HTTP/1.1
- Host: https://login.microsoftonline.com
- Content-Type: application/x-www-form-urlencoded
-
- grant_type=client_credentials
- &client_id=YOUR_CLIENT_ID
- &redirect_uri=YOUR_REDIRECT_URI
- &resource=https://management.azure.com/
- &client_secret=YOUR_CLIENT_SECRET
-```
-
-##### Microsoft identity platform v2.0
-
-```
- POST /YOUR_AAD_TENANT/oauth2/v2.0/token HTTP/1.1
- Host: https://login.microsoftonline.com
- Content-Type: application/x-www-form-urlencoded
-
- grant_type=client_credentials
- &client_id=YOUR_CLIENT_ID
- &scope=https://management.azure.com/.default
- &client_secret=YOUR_CLIENT_SECRET
-```
-
-A successful request receives an access token:
-
-```
- {
- "token_type": "Bearer",
- "expires_in": "3600",
- "ext_expires_in": "0",
- "expires_on": "1505929459",
- "not_before": "1505925559",
- "resource": "https://management.azure.com/",
- "access_token": "ey.....A"
- }
-```
-
-The token can be used for authorization against the ARM API endpoint:
-
-```
- GET https://management.azure.com/subscriptions/6c3ac85e-59d5-4e5d-90eb-27979f57cb16/resourceGroups/demo/providers/Microsoft.OperationalInsights/workspaces/demo-ws/api/query
-
- Authorization: Bearer <access_token>
- Prefer: response-v1=true
-
- {
- "query": "AzureActivity | limit 10"
- }
-```
-
-Example Response:
-
-```
- {
- "tables": [
- {
- "name": "PrimaryResult",
- "columns": [
- {
- "name": "OperationName",
- "type": "string"
- },
- {
- "name": "Level",
- "type": "string"
- },
- {
- "name": "ActivityStatus",
- "type": "string"
- }
- ],
- "rows": [
- [
- "Metric Alert",
- "Informational",
- "Resolved",
- ...
- ],
- ...
- ]
- },
- ...
- ]
- }
-```
-
-### Authorization Code Flow
-
-The main OAuth2 flow supported is through [authorization codes](/azure/active-directory/develop/active-directory-protocols-oauth-code). This method requires two HTTP requests to acquire a token with which to call the Azure Monitor Log Analytics API. There are two URLs, one endpoint per request. Their formats are:
-
-#### Authorization Code URL (GET request):
-
-```
- GET https://login.microsoftonline.com/YOUR_Azure AD_TENANT/oauth2/authorize?
- client_id=YOUR_CLIENT_ID
- &response_type=code
- &redirect_uri=YOUR_REDIRECT_URI
- &resource=https://api.loganalytics.io
-```
-
-When making a request to the Authorize URL, the client\_id is the Application ID from your Azure AD App, copied from the App's properties menu. The redirect\_uri is the home page/login URL from the same Azure AD App. When a request is successful, this endpoint redirects you to the sign in page you provided at sign-up with the authorization code appended to the URL. See the following example:
-
-```
- http://YOUR_REDIRECT_URI/?code=AUTHORIZATION_CODE&session_state=STATE_GUID
-```
-
-At this point you will have obtained an authorization code, which you need now to request an access token.
-
-#### Authorization Code Token URL (POST request)
-
-```
- POST /YOUR_Azure AD_TENANT/oauth2/token HTTP/1.1
- Host: https://login.microsoftonline.com
- Content-Type: application/x-www-form-urlencoded
-
- grant_type=authorization_code
- &client_id=YOUR_CLIENT_ID
- &code=AUTHORIZATION_CODE
- &redirect_uri=YOUR_REDIRECT_URI
- &resource=https://api.loganalytics.io
- &client_secret=YOUR_CLIENT_SECRET
-```
-
-All values are the same as before, with some additions. The authorization code is the same code you received in the previous request after a successful redirect. The code is combined with the key obtained from the Azure AD App. If you did not save the key, you can delete it and create a new one from the keys tab of the Azure AD App menu. The response is a JSON string containing the token with the following schema. Exact values are indicated where they should not be changed. Types are indicated for the token values.
-
-Response example:
-
-```
- {
- "access_token": "YOUR_ACCESS_TOKEN",
- "expires_in": "3600",
- "ext_expires_in": "1503641912",
- "id_token": "not_needed_for_log_analytics",
- "not_before": "1503638012",
- "refresh_token": "YOUR_REFRESH_TOKEN",
- "resource": "https://api.loganalytics.io",
- "scope": "Data.Read",
- "token_type": "bearer"
- }
-```
-
-The access token portion of this response is what you present to the Log Analytics API in the `Authorization: Bearer` header. You may also use the refresh token in the future to acquire a new access\_token and refresh\_token when yours have gone stale. For this request, the format and endpoint are:
-
-```
- POST /YOUR_AAD_TENANT/oauth2/token HTTP/1.1
- Host: https://login.microsoftonline.com
- Content-Type: application/x-www-form-urlencoded
-
- client_id=YOUR_CLIENT_ID
- &refresh_token=YOUR_REFRESH_TOKEN
- &grant_type=refresh_token
- &resource=https://api.loganalytics.io
- &client_secret=YOUR_CLIENT_SECRET
-```
-
-Response example:
-
-```
- {
- "token_type": "Bearer",
- "expires_in": "3600",
- "expires_on": "1460404526",
- "resource": "https://api.loganalytics.io",
- "access_token": "YOUR_TOKEN_HERE",
- "refresh_token": "YOUR_REFRESH_TOKEN_HERE"
- }
-```
-
-### Implicit Code Flow
-
-The Log Analytics API also supports the OAuth2 [implicit flow](/azure/active-directory/develop/active-directory-dev-understanding-oauth2-implicit-grant). For this flow, only a single request is required but no refresh token can be acquired.
-
-#### Implicit Code Authorize URL
-
-```
- GET https://login.microsoftonline.com/YOUR_AAD_TENANT/oauth2/authorize?
- client_id=YOUR_CLIENT_ID
- &response_type=token
- &redirect_uri=YOUR_REDIRECT_URI
- &resource=https://api.loganalytics.io
-```
-
-A successful request will produce a redirect to your redirect URI with the token in the URL as follows.
-
-```
- http://YOUR_REDIRECT_URI/#access_token=YOUR_ACCESS_TOKEN&token_type=Bearer&expires_in=3600&session_state=STATE_GUID
-```
-
-This access\_token can be used as the `Authorization: Bearer` header value when passed to the Log Analytics API to authorize requests.
-
-## Authenticating with an API key
-
-To quickly explore the API without needing to use Azure AD authentication, use the demonstration workspace with sample data, which supports API key authentication.
-
-To authenticate and run queries against the sample workspace, use `DEMO_WORKSPACE` as the {workspace-id} and pass in the API key `DEMO_KEY`.
-
-If either the Application ID or the API key are incorrect, the API service will return a [403](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_Error) (Forbidden) error.
-
-The API key `DEMO_KEY` can be passed in three different ways, depending on whether you prefer to use the URL, a header, or basic authentication.
-
-1. **Custom header**: provide the API key in the custom header `X-Api-Key`
-2. **Query parameter**: provide the API key in the URL parameter `api_key`
-3. **Basic authentication**: provide the API key as either username or password. If you provide both, the API key must be in the username.
-
-This example uses the Workspace ID and API key in the header:
-
-```
- POST https://api.loganalytics.io/v1/workspaces/DEMO_WORKSPACE/query
- X-Api-Key: DEMO_KEY
- Content-Type: application/json
-
- {
- "query": "AzureActivity | summarize count() by Category"
- }
-```
-## More Information
-
-You can find documentation about OAuth2 with Azure AD here:
azure-monitor Azure Resource Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/azure-resource-queries.md
Consider an Azure resource with a fully qualified identifier:
A query for this resource's logs against the direct API endpoint would go to the following URL: ```
- https://api.loganalytics.io/v1/subscriptions/<sid>/resourceGroups/<rg>/providers/<providerName>/<resourceType>/<resourceName>/query
+ https://api.loganalytics.azure.com/v1/subscriptions/<sid>/resourceGroups/<rg>/providers/<providerName>/<resourceType>/<resourceName>/query
``` A query to the same resource via ARM would use the following URL:
azure-monitor Batch Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/batch-queries.md
The Azure Monitor Log Analytics API supports batching queries together. Batch queries currently require Azure AD authentication. ## Request format
-To batch queries, use the API endpoint, adding $batch at the end of the URL: `https://api.loganalytics.io/v1/$batch`.
+To batch queries, use the API endpoint, adding $batch at the end of the URL: `https://api.loganalytics.azure.com/v1/$batch`.
If no method is included, batching defaults to the GET method. On GET requests, the API ignores the body parameter of the request object.
The body of the request is an array of objects containing the following properti
Example: ```
- POST https://api.loganalytics.io/v1/$batch
+ POST https://api.loganalytics.azure.com/v1/$batch
Content-Type: application/json Authorization: Bearer <user token> Cache-Control: no-cache
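As a hedged illustration, here's a Python sketch of a small batch request using the third-party `requests` package. The property names in the batch body (`id`, `workspace`, `method`, `path`, `headers`, `body`) are an assumption based on the batch request format this article describes; confirm them against the full article.

```python
import requests

ACCESS_TOKEN = "<access-token>"
WORKSPACE_ID = "<workspace-id>"

# Two queries bundled into a single $batch call (body shape assumed).
batch = {
    "requests": [
        {
            "id": "1",
            "workspace": WORKSPACE_ID,
            "method": "POST",
            "path": "/query",
            "headers": {"Content-Type": "application/json"},
            "body": {"query": "AzureActivity | summarize count() by Category"},
        },
        {
            "id": "2",
            "workspace": WORKSPACE_ID,
            "method": "POST",
            "path": "/query",
            "headers": {"Content-Type": "application/json"},
            "body": {"query": "Usage | take 10"},
        },
    ]
}

response = requests.post(
    "https://api.loganalytics.azure.com/v1/$batch",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=batch,
)
print(response.json())
```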
azure-monitor Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/cache.md
The API supports the standard `max-age`, `no-cache`, and `no-store` directives.
For example, the following request allows a maximum cache age of 30 seconds ```
- POST https://api.loganalytics.io/v1/workspaces/{workspace-id}/query
+ POST https://api.loganalytics.azure.com/v1/workspaces/{workspace-id}/query
Authorization: Bearer <access token> Cache-Control: max-age=30
azure-monitor Cross Workspace Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/cross-workspace-queries.md
For implicit syntax, specify the workspaces that you want to include in your que
Example: ```
- POST https://api.loganalytics.io/v1/workspaces/00000000-0000-0000-0000-000000000000/query
+ POST https://api.loganalytics.azure.com/v1/workspaces/00000000-0000-0000-0000-000000000000/query
Authorization: Bearer <user token> Content-Type: application/json
Example:
The same request as a GET (line breaks for readability of query parameters): ```
- GET https://api.loganalytics.io/v1/workspaces/00000000-0000-0000-0000-000000000000/query?query=union+*+%7C+where+TimeGenerated+%3E+ago(1d)+%7C+summarize+count()+by+Type%2C+TenantId&workspaces=AIFabrikamDemo1%2CAIFabrikamDemo2
+ GET https://api.loganalytics.azure.com/v1/workspaces/00000000-0000-0000-0000-000000000000/query?query=union+*+%7C+where+TimeGenerated+%3E+ago(1d)+%7C+summarize+count()+by+Type%2C+TenantId&workspaces=AIFabrikamDemo1%2CAIFabrikamDemo2
Authorization: Bearer <user token>
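As an illustration, here's the same implicit-syntax request in Python with the third-party `requests` package (an assumption); the `workspaces` array in the body mirrors the `workspaces` query parameter shown in the GET form above.

```python
import requests

ACCESS_TOKEN = "<access-token>"
PRIMARY_WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"

response = requests.post(
    f"https://api.loganalytics.azure.com/v1/workspaces/{PRIMARY_WORKSPACE_ID}/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "query": "union * | where TimeGenerated > ago(1d) | summarize count() by Type, TenantId",
        # Additional workspaces (names or IDs) included in the query.
        "workspaces": ["AIFabrikamDemo1", "AIFabrikamDemo2"],
    },
)
print(response.json())
```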
The syntax to reference another application is: workspace('identifier').table.
Example: ```
- POST https://api.loganalytics.io/v1/workspaces/00000000-0000-0000-0000-000000000000/query
+ POST https://api.loganalytics.azure.com/v1/workspaces/00000000-0000-0000-0000-000000000000/query
Content-Type: application/json Authorization: Bearer <user token>
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/overview.md
Title: Overview
-description: This site describes the REST API created to make the data collected by Azure Log Analytics easily available.
+description: This article describes the REST API, created to make the data collected by Azure Log Analytics easily available.
Previously updated : 11/08/2022 Last updated : 11/27/2022 # Azure Monitor Log Analytics API Overview
-The Log Analytics **Query API** is a REST API that lets you query the full set of data collected by Azure Monitor logs using the same query language used throughout the service. You can use this API to build new visualizations of your data and extend the capabilities of Log Analytics.
+The Log Analytics **Query API** is a REST API that lets you query the full set of data collected by Azure Monitor logs using the same query language used throughout the service. Use this API to retrieve data, build new visualizations of your data, and extend the capabilities of Log Analytics.
## Log Analytics API Authentication
The Log Analytics API supports Azure AD authentication with three different [Azu
- Implicit - Client credentials
-The authorization code flow and implicit flow both require at least one user-interactive login to your application. If you need a completely non-interactive flow, you must use the client credentials flow.
+The authorization code flow and implicit flow both require at least one user interactive sign-in to your application. If you need a non-interactive flow, use the client credentials flow.
-After receiving a token, the process for calling the Log Analytics API is identical for all flows. Requests require the `Authorization: Bearer` header, populated with the token received from the OAuth2 flow.
+After receiving a token, the process for calling the Log Analytics API is the same for all flows. Requests require the `Authorization: Bearer` header, populated with the token received from the OAuth2 flow.
### API key authentication for sample data
-To quickly explore the API without using Azure AD authentication, we provide a demonstration workspace with sample data, which allows [authenticating with an API key](authentication-authorization.md#authenticating-with-an-api-key).
+To quickly explore the API without using Azure AD authentication, we provide a demonstration workspace with sample data, which allows [authenticating with an API key](./access-api.md#authenticating-with-a-demo-api-key).
> [!NOTE] > When using Azure AD authentication, it may take up to 60 minutes for the Azure Application Insights REST API to recognize new
To quickly explore the API without using Azure AD authentication, we provide a d
## Log Analytics API Query Limits
-See [the **Query API** section of this page](../../service-limits.md#la-query-api) for information about query limits.
+See [the **Query API** section of this page](../../service-limits.md) for information about query limits.
## Trying the Log Analytics API
azure-monitor Prefer Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/prefer-options.md
The header includes a `render` property in the response that specifies the type
For example, the following request specifies a visualization of a bar chart with title "Perf events in the last day": ```
- POST https://api.loganalytics.io/v1/workspaces/{workspace-id}/query
+ POST https://api.loganalytics.azure.com/v1/workspaces/{workspace-id}/query
Authorization: Bearer <access token> Prefer: include-render=true Content-Type: application/json
azure-monitor Register App For Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/register-app-for-token.md
+
+ Title: Register an App for API Access
+description: How to register an app and assign a role so it can access a log analytics workspace using the API
++ Last updated : 11/18/2021+++
+# Register an App to work with Log Analytics APIs
+
+To access the Log Analytics API, you generate a token based on a client ID and secret. This article shows you how to register a client app and assign it permissions to access a Log Analytics workspace.
+
+## Register an App
+
+1. To register an app, open the Azure Active Directory overview page in the Azure portal.
+
+1. Select **App registrations** from the side bar.
+
+1. Select **New registration**
+1. On the Register an application page, enter a **Name** for the application.
+1. Select **Register**
+1. On the app's overview page, select **API permissions**
+1. Select **Add a permission**
+1. In the **APIs my organization uses** tab search for *log analytics* and select **Log Analytics API** from the list.
+
+1. Select **Delegated permissions**
+1. Check the checkbox for **Data.Read**
+1. Select **Add permissions**
+
+1. On the app's overview page, select **Certificates and Secrets**
+1. Note the **Application (client) ID**. It's used in the HTTP request for a token.
+
+1. In the **Client secrets** tab, select **New client secret**
+1. Enter a **Description** and select **Add**
+ :::image type="content" source="../media/api-register-app/add-a-client-secret.png" alt-text="A screenshot showing the Add client secret page.":::
+
+1. Copy and save the client secret **Value**.
+
+ > [!NOTE]
+ > Client secret values can only be viewed immediately after creation. Be sure to save the secret before leaving the page.
+
+ :::image type="content" source="../media/api-register-app/client-secret.png" alt-text="A screenshot showing the client secrets page.":::
+
+## Grant your app access to a Log Analytics Workspace
+
+1. From your Log Analytics workspace overview page, select **Access control (IAM)**.
+1. Select **Add role assignment**.
+
+ :::image type="content" source="../media/api-register-app/workspace-access-control.png" alt-text="A screenshot showing the access control page for a log analytics workspace.":::
+
+1. Select the **Reader** role then select **Members**
+
+ :::image type="content" source="../media/api-register-app/add-role-assignment.png" alt-text="A screenshot showing the add role assignment page for a log analytics workspace.":::
+
+1. In the Members tab, select **Select members**
+1. Enter the name of your app in the **Select** field.
+1. Choose your app and select **Select**
+1. Select **Review and assign**
+
+ :::image type="content" source="../media/api-register-app/select-members.png" alt-text="A screenshot showing the select members blade on the role assignment page for a log analytics workspace.":::
+
+## Next steps
+
+You can use your client ID and client secret to generate a bearer token to access the Log Analytics API. For more information, see [Access the API](./access-api.md).
+
+> [!NOTE]
+> When using Azure AD authentication, it may take up to 60 minutes for the Azure Application Insights REST API to recognize new role-based access control (RBAC) permissions. While permissions are propagating, REST API calls may fail with error code 403.
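+
+If you prefer a library over raw OAuth2 requests, the following minimal Python sketch uses the `azure-identity` package to acquire a token with the client ID and secret registered above. The package choice and the scope value are assumptions; the linked article describes the plain HTTP flows.
+
+```python
+from azure.identity import ClientSecretCredential
+
+# Values from the app registration steps above.
+credential = ClientSecretCredential(
+    tenant_id="<your-tenant-id>",
+    client_id="<app-client-id>",
+    client_secret="<app-client-secret>",
+)
+
+# Request a token scoped to the Log Analytics API (scope value assumed).
+token = credential.get_token("https://api.loganalytics.io/.default")
+print(token.token[:20], "...")
+```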
azure-monitor Request Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/request-format.md
# Azure Monitor Log Analytics API request format There are two endpoints through which you can communicate with the Log Analytics API:-- A direct URL for the API: `https://api.loganalytics.io`
+- A direct URL for the API: `https://api.loganalytics.azure.com`
- Through Azure Resource Manager (ARM). While the URLs are different, the query parameters are the same for each endpoint. Both endpoints require authorization through Azure Active Directory (Azure AD).
The API supports the `POST` and `GET` methods.
The Public API format is: ```
- https://api.loganalytics.io/{api-version}/workspaces/{workspaceId}/query?[parameters]
+ https://api.loganalytics.azure.com/{api-version}/workspaces/{workspaceId}/query?[parameters]
``` where: - **api-version**: The API version. The current version is "v1"
When the HTTP method executed is `GET`, the parameters are included in the query
For example, to count AzureActivity events by Category, make this call: ```
- GET https://api.loganalytics.io/v1/workspaces/{workspace-id}/query?query=AzureActivity%20|%20summarize%20count()%20by%20Category
+ GET https://api.loganalytics.azure.com/v1/workspaces/{workspace-id}/query?query=AzureActivity%20|%20summarize%20count()%20by%20Category
Authorization: Bearer <access token> ``` ## POST /query
When the HTTP method executed is `POST`:
For example, to count AzureActivity events by Category, make this call: ```
- POST https://api.loganalytics.io/v1/workspaces/{workspace-id}/query
+ POST https://api.loganalytics.azure.com/v1/workspaces/{workspace-id}/query
Authorization: Bearer <access token> Content-Type: application/json
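For illustration, here's a minimal Python sketch of both methods using the third-party `requests` package (an assumption; any HTTP client works). For the GET form, the library URL-encodes the query string for you.

```python
import requests

ACCESS_TOKEN = "<access-token>"
WORKSPACE_ID = "<workspace-id>"
BASE_URL = f"https://api.loganalytics.azure.com/v1/workspaces/{WORKSPACE_ID}/query"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
KQL = "AzureActivity | summarize count() by Category"

# GET: the query travels in the URL query string.
get_response = requests.get(BASE_URL, headers=HEADERS, params={"query": KQL})

# POST: the query travels in the JSON body.
post_response = requests.post(BASE_URL, headers=HEADERS, json={"query": KQL})

print(get_response.status_code, post_response.status_code)
```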
azure-monitor Timeouts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/timeouts.md
If a query takes longer than the specified timeout (or default timeout, if unspe
For example, the following request allows a maximum server timeout age of 30 seconds ```
- POST https://api.loganalytics.io/v1/workspaces/{workspace-id}/query
+ POST https://api.loganalytics.azure.com/v1/workspaces/{workspace-id}/query
Authorization: Bearer <access token> Prefer: wait=30
azure-monitor Azure Data Explorer Monitor Cross Service Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-data-explorer-monitor-cross-service-query.md
Exporting data from Azure Monitor to an Azure storage account enables low-cost r
Use Azure Data Explorer to query data that was exported from your Log Analytics workspaces. Once configured, supported tables that are sent from your workspaces to an Azure storage account will be available as a data source for Azure Data Explorer. [Query exported data from Azure Monitor using Azure Data Explorer](../logs/azure-data-explorer-query-storage.md).
![Azure Data Explorer query from storage flow](media/azure-data-explorer-query-storage/exported-data-query.png)
>[!tip]
-> * To export all data from your Log Analytics workspace to an Azure storage account or event hub, use the Log Analytics workspace data export feature of Azure Monitor Logs. [See Log Analytics workspace data export in Azure Monitor](/azure/data-explorer/query-monitor-data).
+> To export all data from your Log Analytics workspace to an Azure storage account or event hub, use the [Log Analytics workspace data export feature](/azure/data-explorer/query-monitor-data).
## Next steps
-Learn more about:
-* [create cross service queries between Azure Data Explorer and Azure Monitor](/azure/data-explorer/query-monitor-data). Query Azure Monitor data from Azure Data Explorer
-* [create cross service queries between Azure Monitor and Azure Data Explorer](./azure-monitor-data-explorer-proxy.md). Query Azure Data Explorer data from Azure Monitor
-* [Log Analytics workspace data export in Azure Monitor](/azure/data-explorer/query-monitor-data). Link and query Azure Blob storage account with Log Analytics Exported data.
+Learn how to:
+* [Query data in Azure Monitor from Azure Data Explorer](/azure/data-explorer/query-monitor-data).
+* [Query data in Azure Data Explorer from Azure Monitor](./azure-monitor-data-explorer-proxy.md).
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Title: Set a table's log data plan in Azure Monitor Logs
-description: Learn how to configure the table log data plan to optimize log ingestion and retention costs in Azure Monitor Logs.
+ Title: Set a table's log data plan to Basic Logs or Analytics Logs
+description: Learn how to use Basic Logs and Analytics Logs to reduce costs and take advantage of advanced features and analytics capabilities in Azure Monitor Logs.
Last updated 11/09/2022
-# Set a table's log data plan in Azure Monitor Logs
+# Set a table's log data plan to Basic or Analytics
Azure Monitor Logs offers two log data plans that let you reduce log ingestion and retention costs and take advantage of Azure Monitor's advanced features and analytics capabilities based on your needs:
The following table summarizes the two plans.
| Category | Analytics | Basic | |:|:|:| | Ingestion | Cost for ingestion. | Reduced cost for ingestion. |
-| Log queries | No extra cost. Full query capabilities. | Extra cost.<br>[Subset of query capabilities](basic-logs-query.md#limitations). |
+| Log queries | No extra cost. Full query capabilities. | Extra cost.<br/>[Subset of query capabilities](basic-logs-query.md#limitations). |
| Retention | Configure retention from 30 days to 730 days. | Retention fixed at eight days. | | Alerts | Supported. | Not supported. |
By default, all tables in your Log Analytics workspace are Analytics tables, and
| [AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) | Azure Media Services information about requests to streaming endpoints. | | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | Azure Container Apps logs, generated within a Container Apps environment. | | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | Used in [Container insights](../containers/container-insights-overview.md) and includes verbose text-based log records. |
-| [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) | Data plane audit related to Dev Center resources, e.g. dev boxes and environments stop, start, deletes. |
+| [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) | Data plane audit related to Dev Center resources; for example, dev boxes and environment stop, start, delete. |
> [!NOTE] > Tables created with the [Data Collector API](data-collector-api.md) don't support Basic Logs.
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
azure-resource-manager Delete Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md
az resource delete \
To delete a resource group, you need access to the delete action for the **Microsoft.Resources/subscriptions/resourceGroups** resource.
+> [!IMPORTANT]
+> The only permission required to delete a resource group is permission to the delete action for deleting resource groups. You do **not** need permission to delete individual resources within that resource group. Additionally, delete actions that are specified in **notActions** for a role assignment are superseded by the resource group delete action. This is consistent with the scope hierarchy in the Azure role-based access control model.
+ For a list of operations, see [Azure resource provider operations](../../role-based-access-control/resource-provider-operations.md). For a list of built-in roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). If you have the required access, but the delete request fails, it may be because there's a [lock on the resources or resource group](lock-resources.md). Even if you didn't manually lock a resource group, it may have been [automatically locked by a related service](lock-resources.md#managed-applications-and-locks). Or, the deletion can fail if the resources are connected to resources in other resource groups that aren't being deleted. For example, you can't delete a virtual network with subnets that are still in use by a virtual machine.
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
azure-video-indexer Add Contributor Role On The Media Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/add-contributor-role-on-the-media-service.md
This article describes how to assign contributor role on the Media Services acco
2. User-assigned managed identity > [!NOTE]
-> You'll need an Azure subscription where you have access to both the [Contributor][docs-role-contributor] role and the [User Access Administrator][docs-role-administrator] role to the Azure Media Services and the User-assigned managed identity. If you don't have the right permissions, ask your account administrator to grant you those permissions. The associated Azure Media Services must be in the same region as the Azure Video Indexer account.
+> You need an Azure subscription with access to both the [Contributor][docs-role-contributor] role and the [User Access Administrator][docs-role-administrator] role to the Azure Media Services and the User-assigned managed identity. If you don't have the right permissions, ask your account administrator to grant you those permissions. The associated Azure Media Services must be in the same region as the Azure Video Indexer account.
## Add Contributor role on the Media Services ### [Azure portal](#tab/portal/)
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
Connecting a classic account to be ARM-based triggers a 30 days of a transition
The transition state moves all account management functionality to be managed by ARM and will be handled by [Azure RBAC][docs-rbac-overview].
-The [invite users](invite-users.md) feature in the Azure Video Indexer portal gets disabled. The invited users on this account lose their access to the Azure Video Indexer account Media in the portal.
+The [invite users](invite-users.md) feature in the [Azure Video Indexer website](https://www.videoindexer.ai/) gets disabled. The invited users on this account lose their access to the Azure Video Indexer account Media in the portal.
However, this can be resolved by assigning the right role-assignment to these users through Azure RBAC, see [How to assign RBAC][docs-rbac-assignment]. Only the account owner, who performed the connect action, is automatically assigned as the owner on the connected account. When [Azure policies][docs-governance-policy] are enforced, they override the settings on the account.
-If users are not added through Azure RBAC to the account after 30 days, they will lose access through API as well as Azure Video Indexer portal.
+If users are not added through Azure RBAC to the account after 30 days, they will lose access through API as well as the [Azure Video Indexer website](https://www.videoindexer.ai/).
After the transition state ends, users will only be able to generate a valid access token through ARM, making Azure RBAC the exclusive way to manage role-based access control on the account. > [!NOTE] > If there are invited users you wish to remove access from, do it before connecting the account to ARM.
-Before the end of the 30 days of transition state, you can remove access from users through the Azure Video Indexer portal on the account settings page.
+Before the end of the 30 days of transition state, you can remove access from users through the [Azure Video Indexer website](https://www.videoindexer.ai/) account settings page.
## Get started
Before the end of the 30 days of transition state, you can remove access from us
1. Select the Azure Video Indexer classic account you wish to connect to ARM (classic accounts will be tagged with a *classic tag*). 1. Click **Settings**.
- :::image type="content" alt-text="Screenshot that shows the Azure Video Indexer portal settings." source="./media/connect-classic-account-to-arm/classic-account-settings.png":::
+ :::image type="content" alt-text="Screenshot that shows the Azure Video Indexer website settings." source="./media/connect-classic-account-to-arm/classic-account-settings.png":::
1. Click **Connect to an ARM-based account**. :::image type="content" alt-text="Screenshot that shows the connect to an ARM-based account dialog." source="./media/connect-classic-account-to-arm/connect-classic-to-arm.png":::
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
To automate the creation of the account is a two steps process:
> [!NOTE] > The Azure Government cloud does not include a *trial* experience of Azure Video Indexer.
-To create a paid account via the Azure Video Indexer portal:
+To create a paid account via the Azure Video Indexer website:
1. Go to https://videoindexer.ai.azure.us 1. Sign-in with your Azure Government Azure AD account.
-1. If you don't have any Azure Video Indexer accounts in Azure Government that you're an owner or a contributor to, you'll get an empty experience from which you can start creating your account.
+1. If you don't have any Azure Video Indexer accounts in Azure Government that you're an owner or a contributor to, you'll get an empty experience from which you can start creating your account.
The rest of the flow is as described above; the only difference is that the regions to select from are Government regions in which Azure Video Indexer is available
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
Resource Provider and Type: [Microsoft.VideoIndexer/accounts](/azure/azure-monit
| Category | Display Name | Additional information | |:|:-||
-| VIAudit | Azure Video Indexer Audit Logs | Logs are produced from both the Video Indexer portal and the REST API. |
+| VIAudit | Azure Video Indexer Audit Logs | Logs are produced from both the [Azure Video Indexer website](https://www.videoindexer.ai/) and the REST API. |
| IndexingLogs | Indexing Logs | Azure Video Indexer indexing logs to monitor all files uploads, indexing and reindexing jobs. | <!-- --**END Examples** - -->
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
For a list of file formats that you can use with Azure Video Indexer, see [Stand
`https://onedrive.live.com/download?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk`
-1. Now enter this URL in the Azure Video Indexer portal in the URL field.
+1. Now enter this URL in the Azure Video Indexer website in the URL field.
+ > [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/avam-odrv-url.png" alt-text="Screenshot that shows the onedrive url field.":::
This section describes some of the optional parameters and when to set them. For
#### externalID
-Use this parameter to specify an ID that will be associated with the video. The ID can be applied to integration into an external video content management (VCM) system. The videos that are in the Azure Video Indexer portal can be searched via the specified external ID.
+Use this parameter to specify an ID that will be associated with the video. The ID can be used for integration with an external video content management (VCM) system. The videos in the Azure Video Indexer website can be searched via the specified external ID.
#### callbackUrl
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Also, see the following: the [announcement blog post](https://aka.ms/AAh91ff) an
### Line breaking in transcripts
-Improved line break logic to better split transcript into sentences. New editing capabilities are now available through the Azure Video Indexer portal, such as adding a new line and editing the lineΓÇÖs timestamp. For more information, see [Insert or remove transcript lines](edit-transcript-lines-portal.md).
+Improved line break logic to better split the transcript into sentences. New editing capabilities are now available through the Azure Video Indexer website, such as adding a new line and editing the line's timestamp. For more information, see [Insert or remove transcript lines](edit-transcript-lines-portal.md).
### Azure Monitor integration
The feature is also available in the JSON file generated by Azure Video Indexer.
### Detected acoustic events with **Audio Effects Detection** (preview)
-You can now see the detected acoustic events in the closed captions file. The file can be downloaded from the Azure Video Indexer portal and is available as an artifact in the GetArtifact API.
+You can now see the detected acoustic events in the closed captions file. The file can be downloaded from the Azure Video Indexer website and is available as an artifact in the GetArtifact API.
**Audio Effects Detection** (preview) component detects various acoustics events and classifies them into different acoustic categories (such as Gunshot, Screaming, Crowd Reaction and more). For more information, see [Audio effects detection](audio-effects-detection.md).
Azure Video Indexer unified **authentications** and **operations** into a single
Update a specific section in the transcript using the [Update-Video-Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index) API.
-### Fix account configuration from the Azure Video Indexer portal
+### Fix account configuration from the Azure Video Indexer website
You can now update Media Services connection configuration in order to self-help with issues like:
You can now update Media Services connection configuration in order to self-help
* password changes * Media Services resources were moved between subscriptions
-To fix the account configuration, in the Azure Video Indexer portal navigate to Settings > Account tab (as owner).
+To fix the account configuration, in the Azure Video Indexer website, navigate to Settings > Account tab (as owner).
### Configure the custom vision account
-Configure the custom vision account on paid accounts using the Azure Video Indexer portal (previously, this was only supported by API). To do that, sign in to the Azure Video Indexer portal, choose Model Customization > Animated characters > Configure.
+Configure the custom vision account on paid accounts using the Azure Video Indexer website (previously, this was only supported by API). To do that, sign in to the Azure Video Indexer website, choose Model Customization > Animated characters > Configure.
### Scenes, shots and keyframes – now in one insight pane
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
This section describes some of the optional parameters and when to set them. For
#### externalID
-Use this parameter to specify an ID that will be associated with the video. The ID can be applied to integration into an external video content management (VCM) system. The videos that are in the Azure Video Indexer portal can be searched via the specified external ID.
+Use this parameter to specify an ID that will be associated with the video. The ID can be used for integration with an external video content management (VCM) system. The videos in the Azure Video Indexer website can be searched via the specified external ID.
#### callbackUrl
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
Title: Sign up for Azure Video Indexer and upload your first video - Azure
-description: Learn how to sign up and upload your first video using the Azure Video Indexer portal.
+description: Learn how to sign up and upload your first video using the Azure Video Indexer website.
Last updated 08/24/2022
You can access Azure Video Indexer capabilities in three ways:
-* Azure Video Indexer portal: An easy-to-use solution that lets you evaluate the product, manage the account, and customize models (as described in this article).
+* The [Azure Video Indexer website](https://www.videoindexer.ai/): An easy-to-use solution that lets you evaluate the product, manage the account, and customize models (as described in this article).
* API integration: All of Azure Video Indexer's capabilities are available through a REST API, which lets you integrate the solution into your apps and infrastructure. To get started, see [Use Azure Video Indexer REST API](video-indexer-use-apis.md). * Embeddable widget: Lets you embed the Azure Video Indexer insights, player, and editor experiences into your app. For more information, see [Embed visual widgets in your application](video-indexer-embed-widgets.md).
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Unless specified otherwise, a model is generally available.
* **Translation**: Creates translations of the audio transcript to many different languages. For more information, see [Azure Video Indexer language support](language-support.md). * **Audio effects detection** (preview): Detects the following audio effects in the non-speech segments of the content: alarm or siren, dog barking, crowd reactions (cheering, clapping, and booing), gunshot or explosion, laughter, breaking glass, and silence.
- The detected acoustic events are in the closed captions file. The file can be downloaded from the Azure Video Indexer portal. For more information, see [Audio effects detection](audio-effects-detection.md).
+ The detected acoustic events are in the closed captions file. The file can be downloaded from the Azure Video Indexer website. For more information, see [Audio effects detection](audio-effects-detection.md).
> [!NOTE] > The full set of events is available only when you choose **Advanced Audio Analysis** when uploading a file, in upload preset. By default, only silence is detected.
backup Backup Afs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-afs.md
In this article, you'll learn how to:
* [Learn](azure-file-share-backup-overview.md) about the Azure file share snapshot-based backup solution. * Ensure that the file share is present in one of the [supported storage account types](azure-file-share-support-matrix.md). * Identify or create a [Recovery Services vault](#create-a-recovery-services-vault) in the same region and subscription as the storage account that hosts the file share.
+* If you've restricted access to your storage account, check the firewall settings of the account to ensure that the exception "Allow Azure services on the trusted services list to access this storage account" is granted. For the steps to grant the exception, see [Manage exceptions](../storage/common/storage-network-security.md?tabs=azure-portal#manage-exceptions).
[!INCLUDE [How to create a Recovery Services vault](../../includes/backup-create-rs-vault.md)]
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 11/25/2022 Last updated : 11/28/2022
Azure Backup provides several ways to restore a VM.
**Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
-**Cross Subscription Restore (preview)** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Subscription Restore (preview)** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [unmanaged VMs](#restoring-unmanaged-vms-and-disks-as-managed), [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
**Cross Zonal Restore (preview)** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup). >[!Tip]
backup Backup Azure Backup Sharepoint Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-sharepoint-mabs.md
Title: Back up a SharePoint farm to Azure with MABS description: Use Azure Backup Server to back up and restore your SharePoint data. This article provides the information to configure your SharePoint farm so that desired data can be stored in Azure. You can restore protected SharePoint data from disk or from Azure.- Previously updated : 07/30/2021+ Last updated : 11/29/2022++++
-# Back up a SharePoint farm to Azure with MABS
+# Back up a SharePoint farm to Azure using Microsoft Azure Backup Server
-You back up a SharePoint farm to Microsoft Azure by using Microsoft Azure Backup Server (MABS) in much the same way that you back up other data sources. Azure Backup provides flexibility in the backup schedule to create daily, weekly, monthly, or yearly backup points and gives you retention policy options for various backup points. MABS provides the capability to store local disk copies for quick recovery-time objectives (RTO) and to store copies to Azure for economical, long-term retention.
+This article describes how to back up a SharePoint farm by using Microsoft Azure Backup Server (MABS).
-Backing up SharePoint to Azure with MABS is a similar process to backing up SharePoint to DPM (Data Protection Manager) locally. Particular considerations for Azure will be noted in this article.
+Microsoft Azure Backup Server (MABS) enables you to back up a SharePoint farm to Microsoft Azure, which gives an experience similar to backing up other data sources. Azure Backup provides flexibility in the backup schedule to create daily, weekly, monthly, or yearly backup points, and gives you retention policy options for various backup points. It also provides the capability to store local disk copies for quick recovery-time objectives (RTO) and to store copies to Azure for economical, long-term retention.
-## SharePoint supported versions and related protection scenarios
+In this article, you'll learn about:
-For a list of supported SharePoint versions and the MABS versions required to back them up see [the MABS protection matrix](./backup-mabs-protection-matrix.md)
+> [!div class="checklist"]
+> - SharePoint supported scenarios
+> - Prerequisites
+> - Configure the backup
+> - Monitor the operations
+> - Restore a SharePoint item from disk using MABS
+> - Restore a SharePoint database from Azure using MABS
+> - Switch the front-end Web server
+> - Remove a database from a SharePoint farm
-## Before you start
+>[!Note]
+>The backup process for SharePoint to Azure using MABS is similar to backing up SharePoint to Data Protection Manager (DPM) locally. Particular considerations for Azure are noted in this article.
-There are a few things you need to confirm before you back up a SharePoint farm to Azure.
+## SharePoint supported scenarios
-### What's not supported
+Review the following supported and unsupported scenarios before you back up a SharePoint farm to Azure.
-* MABS that protects a SharePoint farm doesn't protect search indexes or application service databases. You'll need to configure the protection of these databases separately.
+### Supported scenarios
-* MABS doesn't provide backup of SharePoint SQL Server databases that are hosted on scale-out file server (SOFS) shares.
+For information on the supported SharePoint versions and the MABS versions required to back them up, see [the MABS protection matrix](./backup-mabs-protection-matrix.md).
-### Prerequisites
+### Unsupported scenarios
-Before you continue, make sure that you've met all the [prerequisites for using Microsoft Azure Backup](backup-azure-dpm-introduction.md#prerequisites-and-limitations) to protect workloads. Some tasks for prerequisites include: create a backup vault, download vault credentials, install Azure Backup Agent, and register the Azure Backup Server with the vault.
+* MABS that protects a SharePoint farm doesn't protect search indexes or application service databases. You need to configure the protection of these databases separately.
+* MABS doesn't provide backup of SharePoint SQL Server databases that are hosted on scale-out file server (SOFS) shares.
-Additional prerequisites and limitations:
+## Prerequisites
-* By default when you protect SharePoint, all content databases (and the SharePoint_Config and SharePoint_AdminContent* databases) will be protected. If you want to add customizations such as search indexes, templates or application service databases, or the user profile service you'll need to configure these for protection separately. Be sure that you enable protection for all folders that include these types of features or customization files.
+Before you continue, ensure that you've met all the [prerequisites for using Microsoft Azure Backup](backup-azure-dpm-introduction.md#prerequisites-and-limitations) to protect workloads. These prerequisites include tasks such as creating a backup vault, downloading vault credentials, installing the Azure Backup Agent, and registering the Azure Backup Server with the vault.
-* You can't protect SharePoint databases as a SQL Server data source. You can recover individual databases from a farm backup.
+Additional prerequisites:
-* Remember that MABS runs as **Local System**, and to back up SQL Server databases it needs sysadmin privileges on that account for the SQL server. On the SQL Server you want to back up, set NT AUTHORITY\SYSTEM to **sysadmin**.
+* By default, when you protect SharePoint, all content databases (and the SharePoint_Config and SharePoint_AdminContent* databases) are protected. If you want to add customizations (such as search indexes, templates or application service databases, or the user profile service), you need to configure protection for them separately. Ensure that you enable protection for all folders that include these types of features or customization files.
+
+* Remember that MABS runs as **Local System**. To back up SQL Server databases, that account needs *sysadmin* privileges on the SQL Server. On the SQL Server you want to back up, set *NT AUTHORITY\SYSTEM* to **sysadmin** (a minimal sketch follows this list).
* For every 10 million items in the farm, there must be at least 2 GB of space on the volume where the MABS folder is located. This space is required for catalog generation. To enable you to use MABS to perform a specific recovery of items (site collections, sites, lists, document libraries, folders, individual documents, and list items), catalog generation creates a list of the URLs contained within each content database. You can view the list of URLs in the recoverable item pane in the Recovery task area of the MABS Administrator Console.
-* In the SharePoint farm, if you have SQL Server databases that are configured with SQL Server aliases, install the SQL Server client components on the front-end Web server that MABS will protect.
+* In the SharePoint farm, if you have SQL Server databases that are configured with SQL Server aliases, install the SQL Server client components on the front-end Web server that MABS will protect.
+
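The following is a minimal sketch of setting *NT AUTHORITY\SYSTEM* to **sysadmin**, as mentioned in the list above. It assumes the `sqlcmd` utility is available on the SQL Server and targets the default local instance (`.`); adjust the server or instance name for your environment.

```powershell
# Minimal sketch (assumptions: sqlcmd is installed and "." targets the default local instance).
# Adds the Local System account, which the MABS protection agent runs as, to the sysadmin role.
sqlcmd -S . -E -Q "ALTER SERVER ROLE [sysadmin] ADD MEMBER [NT AUTHORITY\SYSTEM];"
```
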
+### Limitations
+
+* You can't protect SharePoint databases as a SQL Server data source. You can recover individual databases from a farm backup.
* Protecting application store items isn't supported with SharePoint 2013. * MABS doesn't support protecting remote FILESTREAM. The FILESTREAM should be part of the database.
-## Configure backup
+## Configure the backup
+
+To back up the SharePoint farm, configure protection for SharePoint by using *ConfigureSharePoint.exe* and then create a protection group in MABS.
+
+Follow these steps:
-To back up the SharePoint farm, configure protection for SharePoint by using ConfigureSharePoint.exe and then create a protection group in MABS.
+1. **Run ConfigureSharePoint.exe**: This tool configures the SharePoint VSS Writer service \(WSS\) and provides the protection agent with credentials for the SharePoint farm. After you've deployed the protection agent, the ConfigureSharePoint.exe file can be found in the `<MABS Installation Path\>\bin` folder on the front\-end Web server.
+
+   If you have multiple WFE servers, you only need to install it on one of them.
-1. **Run ConfigureSharePoint.exe** - This tool configures the SharePoint VSS Writer service \(WSS\) and provides the protection agent with credentials for the SharePoint farm. After you've deployed the protection agent, the ConfigureSharePoint.exe file can be found in the `<MABS Installation Path\>\bin` folder on the front\-end Web server. If you have multiple WFE servers, you only need to install it on one of them. Run as follows:
+ Run as follows:
- * On the WFE server, at a command prompt navigate to `\<MABS installation location\>\\bin\\` and run `ConfigureSharePoint \[\-EnableSharePointProtection\] \[\-EnableSPSearchProtection\] \[\-ResolveAllSQLAliases\] \[\-SetTempPath <path>\]`, where:
+ 1. On the WFE server, at a command prompt navigate to `\<MABS installation location\>\\bin\\` and run `ConfigureSharePoint \[\-EnableSharePointProtection\] \[\-EnableSPSearchProtection\] \[\-ResolveAllSQLAliases\] \[\-SetTempPath <path>\]`, where:
- * **EnableSharePointProtection** enables protection of the SharePoint farm, enables the VSS writer, and registers the identity of the DCOM application WssCmdletsWrapper to run as a user whose credentials are entered with this option. This account should be a farm admin and also local admin on the front\-end Web Server.
+ 1. **EnableSharePointProtection** enables protection of the SharePoint farm, enables the VSS writer, and registers the identity of the DCOM application WssCmdletsWrapper to run as a user whose credentials are entered with this option. This account should be a farm admin and also local admin on the front\-end Web Server.
- * **EnableSPSearchProtection** enables the protection of WSS 3.0 SP Search by using the registry key SharePointSearchEnumerationEnabled under HKLM\\Software\\Microsoft\\ Microsoft Data Protection Manager\\Agent\\2.0\\ on the front\-end Web Server, and registers the identity of the DCOM application WssCmdletsWrapper to run as a user whose credentials are entered with this option. This account should be a farm admin and also local admin on the front\-end Web Server.
+ * **EnableSPSearchProtection** enables the protection of WSS 3.0 SP Search by using the registry key SharePointSearchEnumerationEnabled under HKLM\\Software\\Microsoft\\ Microsoft Data Protection Manager\\Agent\\2.0\\ on the front\-end Web Server, and registers the identity of the DCOM application WssCmdletsWrapper to run as a user whose credentials are entered with this option. This account should be a farm admin and also local admin on the front\-end Web Server.
- * **ResolveAllSQLAliases** displays all the aliases reported by the SharePoint VSS writer and resolves them to the corresponding SQL server. It also displays their resolved instance names. If the servers are mirrored, it will also display the mirrored server. It reports all the aliases that aren't being resolved to a SQL Server.
+ * **ResolveAllSQLAliases** displays all the aliases reported by the SharePoint VSS writer and resolves them to the corresponding SQL server. It also displays their resolved instance names. If the servers are mirrored, it will also display the mirrored server. It reports all the aliases that aren't being resolved to a SQL Server.
- * **SetTempPath** sets the environment variable TEMP and TMP to the specified path. Item level recovery fails if a large site collection, site, list, or item is being recovered and there's insufficient space in the farm admin Temporary folder. This option allows you to change the folder path of the temporary files to a volume that has sufficient space to store the site collection or site being recovered.
+ * **SetTempPath** sets the environment variable TEMP and TMP to the specified path. Item level recovery fails if a large site collection, site, list, or item is being recovered and there's insufficient space in the farm admin Temporary folder. This option allows you to change the folder path of the temporary files to a volume that has sufficient space to store the site collection or site being recovered.
- * Enter the farm administrator credentials. This account should be a member of the local Administrator group on the WFE server. If the farm administrator isn't a local admin, grant the following permissions on the WFE server:
+ 1. Enter the farm administrator credentials. This account should be a member of the local Administrator group on the WFE server. If the farm administrator isn't a local admin, grant the following permissions on the WFE server:
- * Grant the **WSS_Admin_WPG** group full control to the MABS folder (`%Program Files%\Data Protection Manager\DPM\`).
+ * Grant the **WSS_Admin_WPG** group full control to the MABS folder (`%Program Files%\Data Protection Manager\DPM\`).
- * Grant the **WSS_Admin_WPG** group read access to the MABS Registry key (`HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager`).
+ * Grant the **WSS_Admin_WPG** group read access to the MABS Registry key (`HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager`).
- After running ConfigureSharePoint.exe, you'll need to rerun it if there's a change in the SharePoint farm administrator credentials.
+ After running ConfigureSharePoint.exe, you'll need to rerun it if there's a change in the SharePoint farm administrator credentials.
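   For example, a typical run might look like the following minimal sketch; the installation path is a placeholder, and include `-ResolveAllSQLAliases` only if the farm uses SQL Server aliases.

   ```powershell
   # Minimal sketch: run in an elevated prompt on the front-end Web server.
   # Replace the placeholder path with your actual MABS installation location.
   Set-Location '<MABS installation path>\bin'
   .\ConfigureSharePoint.exe -EnableSharePointProtection -ResolveAllSQLAliases
   ```
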
1. To create a protection group, select **Protection** > **Actions** > **Create Protection Group** to open the **Create New Protection Group** wizard in the MABS console.
-1. In **Select Protection Group Type**, select **Servers**.
+1. On **Select Protection Group Type**, select **Servers**.
+
+1. On **Select Group Members**, expand the server that holds the WFE role.
+
+ If there's more than one WFE server, select the one on which you installed *ConfigureSharePoint.exe*.
-1. In **Select Group Members**, expand the server that holds the WFE role. If there's more than one WFE server, select the one on which you installed ConfigureSharePoint.exe.
+ When you expand the computer running SharePoint, MABS queries VSS to see what data MABS can protect. If the SharePoint database is remote, MABS connects to it. If SharePoint data sources don't appear, check that the VSS writer is running on the computer that's running SharePoint and on any remote instance of SQL Server. Then, ensure that the MABS agent is installed both on the computer running SharePoint and on the remote instance of SQL Server. Also, ensure that SharePoint databases aren't being protected elsewhere as SQL Server databases.
- When you expand the computer running SharePoint, MABS queries VSS to see what data MABS can protect. If the SharePoint database is remote, MABS connects to it. If SharePoint data sources don't appear, check that the VSS writer is running on the computer that's running SharePoint and on any remote instance of SQL Server. Then, ensure that the MABS agent is installed both on the computer running SharePoint and on the remote instance of SQL Server. Also, ensure that SharePoint databases aren't being protected elsewhere as SQL Server databases.
+1. On **Select data protection method**, specify how you want to handle short and long\-term backup. Short\-term backup is always to disk first, with the option of backing up from the disk to the Azure cloud with Azure Backup \(for short or long\-term\).
-1. In **Select data protection method**, specify how you want to handle short and long\-term backup. Short\-term back up is always to disk first, with the option of backing up from the disk to the Azure cloud with Azure Backup \(for short or long\-term\).
+1. On **Select short\-term goals**, specify how you want to back up to short\-term storage on disk. In **Retention range**, you specify how long you want to keep the data on disk. In **Synchronization frequency**, you specify how often you want to run an incremental backup to disk.
-1. In **Select short\-term goals**, specify how you want to back up to short\-term storage on disk. In **Retention range** you specify how long you want to keep the data on disk. In **Synchronization frequency**, you specify how often you want to run an incremental backup to disk. If you don't want to set a backup interval, you can check just before a recovery point so that MABS will run an express full backup just before each recovery point is scheduled.
+   If you don't want to set a backup interval, you can choose the option to synchronize just before a recovery point, so that MABS runs an express full backup just before each recovery point is scheduled.
-1. In the Review disk allocation page, review the storage pool disk space allocated for the protection group.
+1. On the **Review disk allocation** page, review the storage pool disk space allocated for the protection group.
- **Total Data size** is the size of the data you want to back up, and **Disk space to be provisioned on MABS** is the space that MABS recommends for the protection group. MABS chooses the ideal backup volume, based on the settings. However, you can edit the backup volume choices in the **Disk allocation details**. For the workloads, select the preferred storage in the dropdown menu. Your edits change the values for **Total Storage** and **Free Storage** in the **Available Disk Storage** pane. Underprovisioned space is the amount of storage MABS suggests you add to the volume, to continue with backups smoothly in the future.
+ **Total Data size** is the size of the data you want to back up, and **Disk space to be provisioned on MABS** is the space that MABS recommends for the protection group. MABS chooses the ideal backup volume, based on the settings. However, you can edit the backup volume choices in the **Disk allocation details**. For the workloads, select the preferred storage in the dropdown menu. Your edits change the values for **Total Storage** and **Free Storage** in the **Available Disk Storage** pane. Underprovisioned space is the amount of storage MABS suggests you add to the volume, to continue with backups smoothly in the future.
-1. In **Choose replica creation method**, select how you want to handle the initial full data replication. If you select to replicate over the network, we recommended you choose an off-peak time. For large amounts of data or less than optimal network conditions, consider replicating the data offline using removable media.
+1. On **Choose replica creation method**, select how you want to handle the initial full data replication.
-1. In **Choose consistency check options**, select how you want to automate consistency checks. You can enable a check to run only when replica data becomes inconsistent, or according to a schedule. If you don't want to configure automatic consistency checking, you can run a manual check at any time by right-clicking the protection group in the **Protection** area of the MABS console, and selecting **Perform Consistency Check**.
+   If you select to replicate over the network, we recommend that you choose an off-peak time. For large amounts of data or less than optimal network conditions, consider replicating the data offline using removable media.
+
+1. On **Choose consistency check options**, select how you want to automate consistency checks.
+
+ You can enable a check to run only when replica data becomes inconsistent, or according to a schedule. If you don't want to configure automatic consistency checking, you can run a manual check at any time by right-clicking the protection group in the **Protection** area of the MABS console, and selecting **Perform Consistency Check**.
1. If you've selected to back up to the cloud with Azure Backup, on the **Specify online protection data** page make sure the workloads you want to back up to Azure are selected.
-1. In **Specify online backup schedule**, specify how often incremental backups to Azure should occur. You can schedule backups to run every day/week/month/year and the time/date at which they should run. Backups can occur up to twice a day. Each time a backup runs, a data recovery point is created in Azure from the copy of the backed-up data stored on the MABS disk.
+1. On **Specify online backup schedule**, specify how often incremental backups to Azure should occur.
+
+ You can schedule backups to run every day/week/month/year and the time/date at which they should run. Backups can occur up to twice a day. Each time a backup runs, a data recovery point is created in Azure from the copy of the backed-up data stored on the MABS disk.
+
+1. On **Specify online retention policy**, you can specify how the recovery points created from the daily/weekly/monthly/yearly backups are retained in Azure.
+
+1. On **Choose online replication**, specify how the initial full replication of data will occur.
-1. In **Specify online retention policy**, you can specify how the recovery points created from the daily/weekly/monthly/yearly backups are retained in Azure.
+ You can replicate over the network, or do an offline backup (offline seeding). Offline backup uses the Azure Import feature. [Read more](./backup-azure-backup-import-export.md).
-1. In **Choose online replication**, specify how the initial full replication of data will occur. You can replicate over the network, or do an offline backup (offline seeding). Offline backup uses the Azure Import feature. [Read more](./backup-azure-backup-import-export.md).
+1. On the **Summary** page, review your settings. After you select **Create Group**, initial replication of the data occurs.
-1. On the **Summary** page, review your settings. After you select **Create Group**, initial replication of the data occurs. When it finishes, the protection group status will show as **OK** on the **Status** page. Backup then takes place in line with the protection group settings.
+ When it finishes, the protection group status will show as **OK** on the **Status** page. Backup then takes place in line with the protection group settings.
-## Monitoring
+## Monitor the operations
-After the protection group's been created, the initial replication occurs and MABS starts backing up and synchronizing the SharePoint data. MABS monitors the initial synchronization and subsequent backups. You can monitor the SharePoint data in a couple of ways:
+After the protection group is created, the initial replication occurs and MABS starts backing up and synchronizing the SharePoint data. MABS monitors the initial synchronization and subsequent backups. You can monitor the SharePoint data in a couple of ways:
* Using default MABS monitoring, you can set up notifications for proactive monitoring by publishing alerts and configuring notifications. You can send notifications by e-mail for critical, warning, or informational alerts, and for the status of instantiated recoveries.
After the protection group's been created, the initial replication occurs and MA
### Set up monitoring notifications
-1. In the MABS Administrator Console, select **Monitoring** > **Action** > **Options**.
+To set up monitoring notifications, follow these steps:
-2. Select **SMTP Server**, type the server name, port, and email address from which notifications will be sent. The address must be valid.
+1. In the **MABS Administrator Console**, select **Monitoring** > **Action** > **Options**.
-3. In **Authenticated SMTP server**, type a user name and password. The user name and password must be the domain account name of the person whose "From" address is described in the previous step. Otherwise, the notification delivery fails.
+2. Select **SMTP Server**, enter the server name, port, and email address from which notifications will be sent. The address must be valid.
-4. To test the SMTP server settings, select **Send Test E-mail**, type the e-mail address where you want MABS to send the test message, and then select **OK**. Select **Options** > **Notifications** and select the types of alerts about which recipients want to be notified. In **Recipients** type the e-mail address for each recipient to whom you want MABS to send copies of the notifications.
+3. On **Authenticated SMTP server**, enter a user name and password.
+
+ The user name and password must be the domain account name of the person whose "From" address is described in the previous step. Otherwise, the notification delivery fails.
+
+4. To test the SMTP server settings, select **Send Test E-mail**, enter the e-mail address where you want MABS to send the test message, and then select **OK**. Select **Options** > **Notifications**, and then select the types of alerts about which recipients want to be notified. In **Recipients**, enter the e-mail address for each recipient to whom you want MABS to send copies of the notifications.
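Optionally, before you configure these settings in MABS, you can verify that the SMTP server accepts mail from the notification "From" address. The following is a minimal PowerShell sketch; the server name, port, and addresses are placeholders.

```powershell
# Minimal sketch: send a test message through the SMTP server that MABS will use.
# Replace the server, port, and addresses with your own values.
Send-MailMessage -SmtpServer 'smtp.contoso.com' -Port 25 `
    -From 'mabs-notifications@contoso.com' -To 'backup-admin@contoso.com' `
    -Subject 'MABS SMTP test' -Body 'Test message for MABS notification setup.'
```
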
### Publish Operations Manager alerts
-1. In the MABS Administrator Console, select **Monitoring** > **Action** > **Options** > **Alert Publishing** > **Publish Active Alerts**
+To publish Operations Manager alerts, follow these steps:
+
+1. In the **MABS Administrator Console**, select **Monitoring** > **Action** > **Options** > **Alert Publishing** > **Publish Active Alerts**.
+
+2. After you enable **Alert Publishing**, all existing MABS alerts that might require a user action are published to the **MABS Alerts** event log.
+
+ The Operations Manager agent that's installed on the MABS server then publishes these alerts to the Operations Manager and continues to update the console as new alerts are generated.
-2. After you enable **Alert Publishing**, all existing MABS alerts that might require a user action are published to the **MABS Alerts** event log. The Operations Manager agent that's installed on the MABS server then publishes these alerts to the Operations Manager and continues to update the console as new alerts are generated.
+## Restore a SharePoint item from disk using MABS
-## Restore a SharePoint item from disk by using MABS
+In the following example, the *Recovering SharePoint item* is accidentally deleted and needs to be recovered.
-In the following example, the *Recovering SharePoint item* has been accidentally deleted and needs to be recovered.
-![MABS SharePoint Protection4](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection5.png)
+![Diagram shows the SharePoint item that was accidentally deleted and needs to be recovered.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection5.png)
-1. Open the **MABS Administrator Console**. All SharePoint farms that are protected by MABS are shown in the **Protection** tab.
+To restore a SharePoint item from disk using MABS, follow these steps:
- ![MABS SharePoint Protection3](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection4.png)
-2. To begin to recover the item, select the **Recovery** tab.
+1. Open the **MABS Administrator Console**.
- ![MABS SharePoint Protection5](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection6.png)
-3. You can search SharePoint for *Recovering SharePoint item* by using a wildcard-based search within a recovery point range.
+ All SharePoint farms that are protected by MABS are shown in the **Protection** tab.
+
+   ![Screenshot shows how to open the MABS Administrator Console.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection4.png)
+
+2. To start recovering the item, select the **Recovery** tab.
+
+ ![Screenshot shows how to start recovering deleted items.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection6.png)
+
+3. You can search SharePoint for *Recovering SharePoint item* using a wildcard-based search within a recovery point range.
+
+ ![Screenshot shows how to search SharePoint for Recovering SharePoint item.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection7.png)
- ![MABS SharePoint Protection6](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection7.png)
4. Select the appropriate recovery point from the search results, right-click the item, and then select **Recover**. 5. You can also browse through various recovery points and select a database or item to recover. Select **Date > Recovery time**, and then select the correct **Database > SharePoint farm > Recovery point > Item**.
- ![MABS SharePoint Protection7](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection8.png)
-6. Right-click the item, and then select **Recover** to open the **Recovery Wizard**. Select **Next**.
+ ![Screenshot shows how to browse through various recovery points and select a database or item for recovery.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection8.png)
+
+6. Right-click the item, select **Recover** to open the **Recovery Wizard**, and then select **Next**.
+
+ ![Screenshot shows how to open the Recovery Wizard.](./media/backup-azure-backup-sharepoint/review-recovery-selection.png)
- ![Review Recovery Selection](./media/backup-azure-backup-sharepoint/review-recovery-selection.png)
7. Select the type of recovery that you want to perform, and then select **Next**.
- ![Recovery Type](./media/backup-azure-backup-sharepoint/select-recovery-type.png)
+ ![Screenshot shows how to select recovery type to perform.](./media/backup-azure-backup-sharepoint/select-recovery-type.png)
> [!NOTE] > The selection of **Recover to original** in the example recovers the item to the original SharePoint site.
- >
- >
+ 8. Select the **Recovery Process** that you want to use. * Select **Recover without using a recovery farm** if the SharePoint farm hasn't changed and is the same as the recovery point that's being restored. * Select **Recover using a recovery farm** if the SharePoint farm has changed since the recovery point was created.
- ![Recovery Process](./media/backup-azure-backup-sharepoint/recovery-process.png)
-9. Provide a staging SQL Server instance location to recover the database temporarily, and provide a staging file share on MABS and the server that's running SharePoint to recover the item.
+ ![Screenshot shows how to select the recovery process.](./media/backup-azure-backup-sharepoint/recovery-process.png)
+
+9. Provide a staging SQL Server instance location to recover the database temporarily, and provide a staging file share on MABS and the server that's running SharePoint to recover the item.
+
+ ![Screenshot shows how to provide a staging SQL Server instance location to recover the database temporarily.](./media/backup-azure-backup-sharepoint/staging-location1.png)
- ![Staging Location1](./media/backup-azure-backup-sharepoint/staging-location1.png)
+ MABS attaches the content database that's hosting the SharePoint item to the temporary SQL Server instance. From the content database, it recovers the item and puts it on the staging file location on MABS. The recovered item that's on the staging location now needs to be exported to the staging location on the SharePoint farm.
- MABS attaches the content database that's hosting the SharePoint item to the temporary SQL Server instance. From the content database, it recovers the item and puts it on the staging file location on MABS. The recovered item that's on the staging location now needs to be exported to the staging location on the SharePoint farm.
+ ![Screenshot shows the recovery of item and placing it on the staging file location on MABS.](./media/backup-azure-backup-sharepoint/staging-location2.png)
- ![Staging Location2](./media/backup-azure-backup-sharepoint/staging-location2.png)
-10. Select **Specify recovery options**, and apply security settings to the SharePoint farm or apply the security settings of the recovery point. Select **Next**.
+10. On **Specify recovery options**, apply security settings to the SharePoint farm or apply the security settings of the recovery point, and then select **Next**.
- ![Recovery Options](./media/backup-azure-backup-sharepoint/recovery-options.png)
+ ![Screenshot shows how to apply security settings to the SharePoint farm.](./media/backup-azure-backup-sharepoint/recovery-options.png)
> [!NOTE] > You can choose to throttle the network bandwidth usage. This minimizes impact to the production server during production hours.
- >
- >
+ 11. Review the summary information, and then select **Recover** to begin recovery of the file.
- ![Recovery summary](./media/backup-azure-backup-sharepoint/recovery-summary.png)
+   ![Screenshot shows how to review the recovery summary.](./media/backup-azure-backup-sharepoint/recovery-summary.png)
12. Now select the **Monitoring** tab in the **MABS Administrator Console** to view the **Status** of the recovery.
- ![Recovery Status](./media/backup-azure-backup-sharepoint/recovery-monitoring.png)
+ ![Screenshot shows the recovery status.](./media/backup-azure-backup-sharepoint/recovery-monitoring.png)
> [!NOTE] > The file is now restored. You can refresh the SharePoint site to check the restored file.
- >
- >
-## Restore a SharePoint database from Azure by using MABS
-1. To recover a SharePoint content database, browse through various recovery points (as shown previously), and select the recovery point that you want to restore.
+## Restore a SharePoint database from Azure using MABS
+
+To restore a SharePoint database from Azure using MABS, follow these steps:
+
+1. Browse through various recovery points (as shown previously), and select the recovery point that you want to restore.
+
+ ![Screenshot shows how to browse through recovery points.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection9.png)
- ![MABS SharePoint Protection8](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection9.png)
2. Double-click the SharePoint recovery point to show the available SharePoint catalog information. > [!NOTE] > Because the SharePoint farm is protected for long-term retention in Azure, no catalog information (metadata) is available on the MABS server. As a result, whenever a point-in-time SharePoint content database needs to be recovered, you need to catalog the SharePoint farm again.
- >
- >
+ 3. Select **Re-catalog**.
- ![MABS SharePoint Protection10](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection12.png)
+ ![Screenshot shows how to select re-catalog.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection12.png)
- The **Cloud Recatalog** status window opens.
+ The **Cloud Recatalog** status window opens.
- ![MABS SharePoint Protection11](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection13.png)
+ ![Screenshot shows the Cloud Recatalog status window.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection13.png)
- After cataloging is finished, the status changes to *Success*. Select **Close**.
+ After cataloging is finished, the status changes to *Success*. Select **Close**.
- ![MABS SharePoint Protection12](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection14.png)
-4. Select the SharePoint object shown in the MABS **Recovery** tab to get the content database structure. Right-click the item, and then select **Recover**.
+ ![Screenshot shows the status as Success.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection14.png)
- ![MABS SharePoint Protection13](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection15.png)
-5. At this point, follow the recovery steps earlier in this article to recover a SharePoint content database from disk.
+4. On the MABS **Recovery** tab, select the SharePoint object to get the content database structure. Right-click the item, and then select **Recover**.
-## Switching the Front-End Web Server
+ ![Screenshot shows how to select the SharePoint object.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection15.png)
-If you have more than one front-end web server, and want to switch the server that MABS uses to protect the farm, follow the instructions:
+5. Now, follow the steps in [Restore a SharePoint item from disk using MABS](#restore-a-sharepoint-item-from-disk-using-mabs) to recover the SharePoint content database from disk.
-The following procedure uses the example of a server farm with two front-end Web servers, *Server1* and *Server2*. MABS uses *Server1* to protect the farm. Change the front-end Web server that MABS uses to *Server2* so that you can remove *Server1* from the farm.
+## Switch the front-end Web server
+
+If you have more than one front-end web server, you can switch the server that MABS uses to protect the farm.
+
+The following sections use the example of a server farm with two front-end Web servers, *Server1* and *Server2*. MABS uses *Server1* to protect the farm. Change the front-end Web server that MABS uses to *Server2* so that you can remove *Server1* from the farm.
> [!NOTE]
-> If the front-end Web server that MABS uses to protect the farm is unavailable, use the following procedure to change the front-end Web server by starting at step 4.
+> If the front-end Web server that MABS uses to protect the farm is unavailable, use the following procedure to change the front-end Web server by starting at *step 4*.
+
+### Change the front-end Web server used by MABS
-### To change the front-end Web server that MABS uses to protect the farm
+To change the front-end Web server that MABS uses to protect the farm, follow these steps:
1. Stop the SharePoint VSS Writer service on *Server1* by running the following command at a command prompt:
The following procedure uses the example of a server farm with two front-end Web
stsadm -o unregisterwsswriter ```
-1. On *Server1*, open the Registry Editor and navigate to the following key:
+1. On *Server1*, open the Registry Editor and go to the following key:
**HKLM\System\CCS\Services\VSS\VssAccessControl**
-1. Check all values listed in the VssAccessControl subkey. If any entry has a value data of 0 and another VSS writer is running under the associated account credentials, change the value data to 1.
+1. Check all values listed in the *VssAccessControl* subkey.
+
+ If any entry has a value data of 0 and another VSS writer is running under the associated account credentials, change the value data to 1.
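   For example, a minimal PowerShell sketch of this check follows; the value name in the last line is a placeholder, and you should only change entries that meet the conditions above.

   ```powershell
   # Minimal sketch: list the VssAccessControl entries on Server1.
   $key = 'HKLM:\SYSTEM\CurrentControlSet\Services\VSS\VssAccessControl'
   Get-ItemProperty -Path $key
   # Placeholder value name: set an entry to 1 only if its value is 0 and another
   # VSS writer runs under that account.
   Set-ItemProperty -Path $key -Name 'NT AUTHORITY\NetworkService' -Value 1
   ```
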
1. Install a protection agent on *Server2*. > [!WARNING] > You can only switch Web front-end servers if both the servers are on the same domain.
-1. On *Server2*, at a command prompt, change the directory to `_MABS installation location_\bin\` and run **ConfigureSharepoint**. For more information about ConfigureSharePoint, see [Configure backup](#configure-backup).
+1. On *Server2*, at a command prompt, change the directory to `_MABS installation location_\bin\` and run **ConfigureSharepoint**.
+
+ For more information about ConfigureSharePoint, see [Configure backup](#configure-the-backup).
1. Select the protection group that the server farm belongs to, and then select **Modify protection group**.
-1. In the Modify Group Wizard, on the **Select Group Members** page, expand *Server2* and select the server farm, and then complete the wizard.
+1. In the *Modify Group Wizard*, on the **Select Group Members** page, expand *Server2* and select the server farm, and then complete the wizard.
A consistency check will start.
-1. If you performed step 6, you can now remove the volume from the protection group.
+1. If you performed *step 6*, you can now remove the volume from the protection group.
## Remove a database from a SharePoint farm
To resolve this alert, follow these steps:
1. In **MABS Administrator Console**, click **Protection** on the navigation bar. 1. In the **Display** pane, right-click the protection group for the SharePoint farm, and then click **Stop Protection of member**. 1. In the **Stop Protection** dialog box, click **Retain Protected Data**.
- 1. Click **Stop Protection**.
+ 1. Select **Stop Protection**.
You can add the SharePoint farm back for protection by using the **Modify Protection Group** wizard. During re-protection, select the SharePoint front-end server and click **Refresh** to update the SharePoint database cache, then select the SharePoint farm and proceed. ## Next steps
-See the [Back up Exchange server](backup-azure-exchange-mabs.md) article.
-See the [Back up SQL Server](backup-azure-sql-mabs.md) article.
+- [Back up Exchange server](backup-azure-exchange-mabs.md)
+- [Back up SQL Server](backup-azure-sql-mabs.md)
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 11/22/2022 Last updated : 11/28/2022
Recovery points on DPM/MABS disk | 64 for file servers, and 448 for app servers.
**Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs and for VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's not supported for unmanaged disks and VMs, classic VMs, and [generalized VMs](../virtual-machines/windows/capture-image-resource.md).<br/><br/> If the restore point has more or fewer disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) and [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> We don't currently support the [Replace existing disks](./backup-azure-arm-restore-vms.md#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
-**Cross Subscription (preview)** | Cross Subscription restore can be used to restore Azure managed VMs in different subscriptions.<br><br> You can restore Azure VMs or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross Subscription Restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Subscription (preview)** | Cross Subscription restore can be used to restore Azure managed VMs in different subscriptions.<br><br> You can restore Azure VMs or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross Subscription Restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. It's unsupported for [unmanaged VMs](backup-azure-arm-restore-vms.md#restoring-unmanaged-vms-and-disks-as-managed), [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
**Cross Zonal Restore (preview)** | Cross Zonal restore can be used to restore Azure zone pinned VMs in available zones.<br><br> You can restore Azure VMs or disks to different zones (as per the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross Zonal Restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore points. It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Support
Windows Storage Spaces configuration of standalone Azure VMs | Supported [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for flexible orchestration model to back up and restore Single Azure VM. Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Cross Region Restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
-<a name="tvm-backup">Trusted Launch VM</a> | Backup supported. <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where Trusted Launch VM is available. <br><br> - Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. <br><br> - Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm).
+<a name="tvm-backup">Trusted Launch VM</a> | Backup supported. <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where Trusted Launch VM is available. <br><br> - Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. <br><br> - Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm). <br><br> - Item-level restore is not supported.
[Confidential VM](../confidential-computing/confidential-vm-overview.md) | The backup support is in Limited Preview. <br><br> Backup is supported only for those Confidential VMs with no confidential disk encryption and for Confidential VMs with confidential OS disk encryption using Platform Managed Key (PMK). <br><br> Backup is currently not supported for Confidential VMs with confidential OS disk encryption using Customer Managed Key (CMK). <br><br> **Feature details** <br><br> - Backup is supported in [all regions where Confidential VM is available](../confidential-computing/confidential-vm-overview.md#regions). <br><br> - Backup is supported using [Enhanced Policy](backup-azure-vms-enhanced-policy.md) only. You can configure backup through [Create VM blade](backup-azure-arm-vms-prepare.md), [VM Manage blade](backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Recovery Services vault](backup-azure-arm-vms-prepare.md). <br><br> - [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore) and File Recovery (Item level Restore) for Confidential VM are currently not supported. ## VM storage support
backup Compliance Offerings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/compliance-offerings.md
Title: Azure Backup compliance offerings description: Summary of compliance offerings for Azure Backup- Previously updated : 03/16/2020+ Last updated : 11/29/2022++++ # Azure Backup compliance offerings
-To help organizations comply with national, regional, and industry-specific requirements governing the collection and use of individuals' data, Microsoft Azure & Azure Backup offer a comprehensive set of certifications and attestations.
+Microsoft Azure & Azure Backup offer a comprehensive set of certifications and attestations that help organizations comply with national, regional, and industry-specific requirements governing the collection and use of individuals' data.
-You can find below compliance offerings for Azure Backup to ensure your service is regulated when using the Azure Backup service.
+In this article, you'll learn about the various compliance offerings for Azure Backup to ensure that the service is regulated when you use the Azure Backup service.
+
+> [!div class="checklist"]
+> - Global
+> - US Government
+> - Industry
+> - Regional
## Global
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/28/2022
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Previously updated : 11/10/2022 Last updated : 11/28/2022
After you validate your data files, you can use them to build your Custom Neural
- [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model) (Preview): Create a secondary language for your voice model to speak a different language from your training data. For example, with the `zh-CN` training data, you can create a voice that speaks `en-US`. The language of the training data and the target language must both be one of the [languages that are supported](language-support.md?tabs=stt-tts) for cross lingual voice training. You don't need to prepare training data in the target language, but your test script must be in the target language. -- [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model) (Preview): Create a custom neural voice that speaks in multiple styles and emotions, without adding new training data. Multi-style voices are particularly useful for video game characters, conversational chatbots, audiobooks, content readers, and more. To create a multi-style voice, you just need to prepare a set of general training data (at least 300 utterances), and select one or more of the preset target speaking styles. You can also create up to 10 custom styles by providing style samples as additional training data for the same voice.
+- [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model) (Preview): Create a custom neural voice that speaks in multiple styles and emotions, without adding new training data. Multi-style voices are particularly useful for video game characters, conversational chatbots, audiobooks, content readers, and more. To create a multi-style voice, you just need to prepare a set of general training data (at least 300 utterances), and select one or more of the preset target speaking styles. You can also create up to 10 custom styles by providing style samples (at least 100 utterances per style) as additional training data for the same voice.
The language of the training data must be one of the [languages that are supported](language-support.md?tabs=stt-tts) for custom neural voice neural, cross-lingual, or multi-style training.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/language-support.md
Previously updated : 10/13/2022 Last updated : 11/29/2022
Use this article to learn about the languages currently supported by different f
| Language | Language code | [Custom text classification](../custom-text-classification/language-support.md) | [Custom named entity recognition(NER)](../custom-named-entity-recognition/language-support.md) | [Conversational language understanding](../conversational-language-understanding/language-support.md) | [Entity linking](../entity-linking/language-support.md) | [Language detection](../language-detection/language-support.md) | [Key phrase extraction](../key-phrase-extraction/language-support.md) | [Named entity recognition(NER)](../named-entity-recognition/language-support.md) | [Orchestration workflow](../orchestration-workflow/language-support.md) | [Personally Identifiable Information (PII)](../personally-identifiable-information/language-support.md?tabs=documents) | [Conversation PII](../personally-identifiable-information/language-support.md?tabs=conversations) | [Question answering](../question-answering/language-support.md) | [Sentiment analysis](../sentiment-opinion-mining/language-support.md#sentiment-analysis-language-support) | [Opinion mining](../sentiment-opinion-mining/language-support.md#opinion-mining-language-support) | [Text Analytics for health](../text-analytics-for-health/language-support.md) | [Summarization](../summarization/language-support.md?tabs=document-summarization) | [Conversation summarization](../summarization/language-support.md?tabs=conversation-summarization) | |::|:-:|:-:|:-:|:--:|:-:|::|::|:--:|:--:|:-:|:-:|::|::|:-:|:--:|::|:--:|
-| Afrikaans | `af` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Albanian | `sq` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Amharic | `am` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Arabic | `ar` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
-| Armenian | `hy` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Assamese | `as` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Azerbaijani | `az` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Basque | `eu` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Belarusian | `be` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Bengali | `bn` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Bosnian | `bs` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Breton | `br` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Bulgarian | `bg` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Burmese | `my` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Catalan | `ca` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Afrikaans | `af` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Albanian | `sq` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Amharic | `am` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Arabic | `ar` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | &check; | | | |
+| Armenian | `hy` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Assamese | `as` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Azerbaijani | `az` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Basque | `eu` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Belarusian | `be` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Bengali | `bn` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Bosnian | `bs` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Breton | `br` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Bulgarian | `bg` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Burmese | `my` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Catalan | `ca` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
| Central Khmer | `km` | | | | | &check; | | | | | | | | | | | |
-| Chinese (Simplified) | `zh-hans` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | | | &check; | |
-| Chinese (Traditional) | `zh-hant` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Chinese (Simplified) | `zh-hans` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | &check; | |
+| Chinese (Traditional) | `zh-hant` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | &check; | | | |
| Corsican | `co` | | | | | &check; | | | | | | | | | | | |
-| Croatian | `hr` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Czech | `cs` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
-| Danish | `da` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Croatian | `hr` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Czech | `cs` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | &check; | | | |
+| Danish | `da` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | &check; | | | |
| Dari | `prs` | | | | | &check; | | | | | | | | | | | |
| Divehi | `dv` | | | | | &check; | | | | | | | | | | | |
-| Dutch | `nl` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Dutch | `nl` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | &check; | | | |
| English (UK) | `en-gb` | &check; | &check; | &check; | | &check; | | | | | | | | | | | |
| English (US) | `en-us` | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; |
-| Esperanto | `eo` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Estonian | `et` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Esperanto | `eo` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Estonian | `et` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
| Fijian | `fj` | | | | | &check; | | | | | | | | | | | |
-| Filipino | `tl` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Finnish | `fi` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Filipino | `tl` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Finnish | `fi` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | &check; | | | |
| French | `fr` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
-| Galician | `gl` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Georgian | `ka` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Galician | `gl` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Georgian | `ka` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
| German | `de` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
-| Greek | `el` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Gujarati | `gu` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Greek | `el` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Gujarati | `gu` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
| Haitian | `ht` | | | | | &check; | | | | | | | | | | | |
-| Hausa | `ha` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Hebrew | `he` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | &check; | | |
-| Hindi | `hi` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Hausa | `ha` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Hebrew | `he` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | &check; | &check; | | |
+| Hindi | `hi` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | &check; | | | |
| Hmong Daw | `mww` | | | | | &check; | | | | | | | | | | | |
-| Hungarian | `hu` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Hungarian | `hu` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | &check; | | | |
| Icelandic | `is` | | | | | &check; | | | | | | &check; | | | | | |
| Igbo | `ig` | | | | | &check; | | | | | | | | | | | |
-| Indonesian | `id` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Indonesian | `id` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
| Inuktitut | `iu` | | | | | &check; | | | | | | | | | | | |
-| Irish | `ga` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Irish | `ga` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
| Italian | `it` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
-| Japanese | `ja` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | | | &check; | |
-| Javanese | `jv` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Kannada | `kn` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Kazakh | `kk` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Khmer | `km` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Japanese | `ja` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | &check; | |
+| Javanese | `jv` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Kannada | `kn` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Kazakh | `kk` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Khmer | `km` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
| Kinyarwanda | `rw` | | | | | &check; | | | | | | | | | | | |
-| Korean | `ko` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | | | &check; | |
-| Kurdish (Kurmanji) | `ku` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Kyrgyz | `ky` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Lao | `lo` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Latin | `la` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Latvian | `lv` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Lithuanian | `lt` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Korean | `ko` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | &check; | |
+| Kurdish (Kurmanji) | `ku` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Kyrgyz | `ky` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Lao | `lo` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Latin | `la` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Latvian | `lv` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Lithuanian | `lt` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
| Luxembourgish | `lb` | | | | | &check; | | | | | | | | | | | |
-| Macedonian | `mk` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Malagasy | `mg` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Malay | `ms` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Malayalam | `ml` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Macedonian | `mk` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Malagasy | `mg` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Malay | `ms` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Malayalam | `ml` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
| Maltese | `mt` | | | | | &check; | | | | | | | | | | | |
| Maori | `mi` | | | | | &check; | | | | | | | | | | | |
-| Marathi | `mr` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Mongolian | `mn` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Nepali | `ne` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Marathi | `mr` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Mongolian | `mn` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Nepali | `ne` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
| Norwegian (Bokmal) | `nb` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | | | | | |
-| Norwegian | `no` | | | | | &check; | | | | | | | &check; | | | | |
+| Norwegian | `no` | | | | | &check; | | | | | | | &check; | &check; | | | |
| Norwegian Nynorsk | `nn` | | | | | &check; | | | | | | | | | | | |
-| Oriya | `or` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Oromo | `om` | | | | | | &check; | | | | | | &check; | | | | |
-| Pashto | `ps` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Persian (Farsi) | `fa` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Polish | `pl` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Oriya | `or` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Oromo | `om` | | | | | | &check; | | | | | | &check; | &check; | | | |
+| Pashto | `ps` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Persian (Farsi) | `fa` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Polish | `pl` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | &check; | | | |
| Portuguese (Brazil) | `pt-br` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
| Portuguese (Portugal) | `pt-pt` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | | &check; | &check; | | &check; | |
-| Punjabi | `pa` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Punjabi | `pa` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
| Queretaro Otomi | `otq` | | | | | &check; | | | | | | | | | | | |
-| Romanian | `ro` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Russian | `ru` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Romanian | `ro` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Russian | `ru` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | &check; | | | |
| Samoan | `sm` | | | | | &check; | | | | | | | | | | | |
-| Sanskrit | `sa` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Scottish Gaelic | `gd` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Sanskrit | `sa` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Scottish Gaelic | `gd` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
| Serbian | `sr` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | | | | | |
| Shona | `sn` | | | | | &check; | | | | | | | | | | | |
-| Sindhi | `sd` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Sinhala | `si` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Slovak | `sk` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Slovenian | `sl` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Somali | `so` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Sindhi | `sd` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Sinhala | `si` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Slovak | `sk` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Slovenian | `sl` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Somali | `so` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
| Spanish | `es` | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | | |
-| Sundanese | `su` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Swahili | `sw` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Swedish | `sv` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Sundanese | `su` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Swahili | `sw` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Swedish | `sv` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | &check; | | | |
| Tahitian | `ty` | | | | | &check; | | | | | | | | | | | |
| Tajik | `tg` | | | | | &check; | | | | | | | | | | | |
-| Tamil | `ta` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Tamil | `ta` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
| Tatar | `tt` | | | | | &check; | | | | | | | | | | | |
-| Telugu | `te` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Thai | `th` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
+| Telugu | `te` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Thai | `th` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
| Tibetan | `bo` | | | | | &check; | | | | | | | | | | | |
| Tigrinya | `ti` | | | | | &check; | | | | | | | | | | | |
| Tongan | `to` | | | | | &check; | | | | | | | | | | | |
-| Turkish | `tr` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | | | | |
+| Turkish | `tr` | &check; | &check; | &check; | | &check; | &check; | &check; | | | | &check; | &check; | &check; | | | |
| Turkmen | `tk` | | | | | &check; | | | | | | | | | | | |
-| Ukrainian | `uk` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Urdu | `ur` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Uyghur | `ug` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Uzbek | `uz` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Vietnamese | `vi` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | | | | |
-| Welsh | `cy` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Western Frisian | `fy` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Xhosa | `xh` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
-| Yiddish | `yi` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | | | | |
+| Ukrainian | `uk` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Urdu | `ur` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Uyghur | `ug` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Uzbek | `uz` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Vietnamese | `vi` | &check;