Updates from: 05/03/2023 01:12:46
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure A Sample Node Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-a-sample-node-web-app.md
git clone https://github.com/Azure-Samples/active-directory-b2c-msal-node-sign-i
Extract the sample file to a folder. You'll get a web app with the following directory structure:
-```text
+```output
active-directory-b2c-msal-node-sign-in-sign-out-webapp/
├── index.js
└── package.json
active-directory-b2c Custom Policies Series Sign Up Or Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-sign-up-or-sign-in.md
In the `ContosoCustomPolicy.XML` file, locate the `SignInUser` technical profile
<Item Key="SignUpTarget">AccountTypeInputCollectorClaimsExchange</Item> </Metadata> <DisplayClaims>
- <OutputClaim ClaimTypeReferenceId="email" Required="true" />
- <OutputClaim ClaimTypeReferenceId="password" Required="true" />
+ <DisplayClaim ClaimTypeReferenceId="email" Required="true" />
+ <DisplayClaim ClaimTypeReferenceId="password" Required="true" />
</DisplayClaims> <OutputClaims> <OutputClaim ClaimTypeReferenceId="email" />
You can sign in by entering the **Email Address** and **Password** of an existin
- Learn how to [Remove the sign-up link](add-sign-in-policy.md), so users can just sign in. -- Learn more about [OpenID Connect technical profile](openid-connect-technical-profile.md).
+- Learn more about [OpenID Connect technical profile](openid-connect-technical-profile.md).
active-directory-b2c Custom Policies Series Validate User Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-validate-user-input.md
While the *Predicates* define the validation to check against a claim type, the
</Parameters> </Predicate>
- <Predicate Id="AllowedAADCharacters" Method="MatchesRegex" HelpText="An invalid character was provided.">
+ <Predicate Id="AllowedCharacters" Method="MatchesRegex" HelpText="An invalid character was provided.">
<Parameters> <Parameter Id="RegularExpression">(^([0-9A-Za-z\d@#$%^&amp;*\-_+=[\]{}|\\:',?/`~"();! ]|(\.(?!@)))+$)|(^$)</Parameter> </Parameters>
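The regular expression in the `AllowedCharacters` predicate can be sanity-checked outside of B2C. A minimal Python sketch (the `&amp;` XML entity is decoded back to `&`, and `is_allowed` is a hypothetical helper name):

```python
import re

# Character whitelist from the AllowedCharacters predicate above.
# Literal dots are only allowed when not immediately followed by '@',
# and the (^$) alternative explicitly permits the empty string.
pattern = re.compile(
    r"""(^([0-9A-Za-z\d@#$%^&*\-_+=[\]{}|\\:',?/`~"();! ]|(\.(?!@)))+$)|(^$)"""
)

def is_allowed(value: str) -> bool:
    """Return True if the value contains only permitted characters."""
    return pattern.match(value) is not None

print(is_allowed("Password1!"))  # True
print(is_allowed(""))            # True - empty string matches (^$)
print(is_allowed("<script>"))    # False - '<' is not whitelisted
```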
active-directory-b2c Partner Whoiam Rampart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam-rampart.md
Title: Configure Azure Active Directory B2C with WhoIAM Rampart
+ Title: Configure WhoIAM Rampart with Azure Active Directory B2C
description: Learn how to integrate Azure AD B2C authentication with WhoIAM Rampart -+ Previously updated : 06/20/2022 Last updated : 05/02/2023
# Configure WhoIAM Rampart with Azure Active Directory B2C
-In this sample tutorial, you'll learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with Rampart by WhoIAM. Rampart provides features for a fully integrated helpdesk and invitation-gated user registration experience. It allows support specialists to perform tasks like resetting passwords and multi-factor authentication without using Azure. It also enables apps and role-based access control (RBAC) for end-users of Azure AD B2C.
-
+In this tutorial, learn to integrate Azure Active Directory B2C (Azure AD B2C) authentication with WhoIAM Rampart. Rampart features enable an integrated helpdesk and invitation-gated user registration experience. Support specialists can reset passwords and multifactor authentication without using Azure. There are apps and role-based access control (RBAC) for Azure AD B2C users.
## Prerequisites
-To get started, you'll need:
--- An Azure AD subscription. If you don't have one, get a [free account](https://azure.microsoft.com/free/)--- An [Azure AD B2C tenant](tutorial-create-tenant.md) linked to your Azure subscription.--- An Azure DevOps Server instance--- A [SendGrid account](https://sendgrid.com/)--- A WhoIAM [trial account](https://www.whoiam.ai/contact-us/)
+* An Azure AD subscription
+ * If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/)
+* An Azure AD B2C tenant linked to the Azure subscription
+ * See, [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)
+* An Azure DevOps Server instance
+* A SendGrid account
+ * Go to sendgrid.com to [Start for Free](https://sendgrid.com/)
+* A WhoIAM trial account
+ * Go to whoiam.ai [Contact us](https://www.whoiam.ai/contact-us/) to get started
## Scenario description
-WhoIAM Rampart is built entirely in Azure and runs in your Azure environment. The following components comprise the Rampart solution with Azure AD B2C:
+WhoIAM Rampart is built in Azure and runs in the Azure environment. The following components comprise the Rampart solution with Azure AD B2C:
-- **An Azure AD tenant**: Your Azure AD B2C tenant stores your users and manages who has access (and at what scope) to Rampart itself.
+* **An Azure AD tenant** - the Azure AD B2C tenant stores users and manages access (and scope) in Rampart
+* **Custom B2C policies** - to integrate with Rampart
+* **A resource group** - hosts Rampart functionality
-- **Custom B2C policies**: To integrate with Rampart.
+ ![Diagram of the WhoIAM Rampart integration for Azure AD B2C.](./media/partner-whoiam/whoiam-rampart-integration-scenario.png)
-- **A resource group**: It hosts Rampart functionality.
+## Install Rampart
+Go to whoiam.ai [Contact us](https://www.whoiam.ai/contact-us/) to get started.
-## Step 1 - Onboard with Rampart
+Automated templates deploy Azure resources. Templates configure the DevOps instance with code and configuration.
-Contact [WhoIAM](https://www.whoiam.ai/contact-us/) to start the onboarding process. Automated templates will deploy all necessary Azure resources, and they'll configure your DevOps instance with the required code and configuration according to your needs.
+## Configure and integrate Rampart with Azure AD B2C
-## Step 2 - Configure and integrate Rampart with Azure AD B2C
+The solution integration with Azure AD B2C requires custom policies. WhoIAM provides the policies and helps integrate them with applications or policies, or both.
-The tight integration of this solution with Azure AD B2C requires custom policies. WhoIAM provides these policies and assists with integrating them with your applications or existing policies, or both.
+For details about WhoIAM custom policies, go to docs.gatekeeper.whoiamdemos.com for [Set-up Guide, Authorization Policy Execution](https://docs.gatekeeper.whoiamdemos.com/#/setup-guide?id=authorization-policy-execution).
-Follow the steps mentioned in [Authorization policy execution](https://docs.gatekeeper.whoiamdemos.com/#/setup-guide?id=authorization-policy-execution) for details on the custom policies provided by WhoIAM.
+## Test the solution
-## Step 3 - Test the solution
+The following image is an example of a list of app registrations in your Azure AD B2C tenant. WhoIAM validates the implementation by testing features and health check status endpoints.
-The image shows an example of how WhoIAM Rampart displays a list of app registrations in your Azure AD B2C tenant. WhoIAM validates the implementation by testing all features and health check status endpoints.
+ ![Screenshot of the user-created application list in the Azure AD B2C tenant.](./media/partner-whoiam/whoiam-rampart-app-registration.png)
+A list of user-created applications in your Azure AD B2C tenant appears. Likewise, the user sees a list of users in your Azure AD B2C directory and user management functions such as invitations, approvals, and RBAC management.
-The applications screen should display a list of all user-created applications in your Azure AD B2C tenant.
+ ![Screenshot of the WhoIAM Rampart user list in the Azure AD B2C tenant.](./media/partner-whoiam/whoiam-rampart-user-list.png)
-Likewise, the user's screen should display a list of all users in your Azure AD B2C directory and user management functions such as invitations, approvals, and RBAC management.
- ## Next steps
-For more information, review the following articles:
--- [WhoIAM Rampart documentation](https://docs.gatekeeper.whoiamdemos.com/#/setup-guide?id=authorization-policy-execution)--- [Custom policies in Azure AD B2C overview](custom-policy-overview.md)---- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+- [Set-up Guide, Authorization Policy Execution](https://docs.gatekeeper.whoiamdemos.com/#/setup-guide?id=authorization-policy-execution)
+- [Azure AD B2C custom policy overview](custom-policy-overview.md)
+- [Tutorial: Create user flows and custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory Configure Automatic User Provisioning Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/configure-automatic-user-provisioning-portal.md
Previously updated : 10/06/2022 Last updated : 05/02/2023 # Managing user account provisioning for enterprise apps in the Azure portal
-This article describes the general steps for managing automatic user account provisioning and de-provisioning for applications that support it. *User account provisioning* is the act of creating, updating, and/or disabling user account records in an application's local user profile store. Most cloud and SaaS applications store the users role and permissions in the user's own local user profile store, and presence of such a user record in the user's local store is *required* for single sign-on and access to work. To learn more about automatic user account provisioning, see [Automate User Provisioning and Deprovisioning to SaaS Applications with Azure Active Directory](user-provisioning.md).
+This article describes the general steps for managing automatic user account provisioning and deprovisioning for applications that support it. *User account provisioning* is the act of creating, updating, and/or disabling user account records in an application's local user profile store. Most cloud and SaaS applications store the role and permissions in the user's own local user profile store. The presence of such a user record in the user's local store is *required* for single sign-on and access to work. To learn more about automatic user account provisioning, see [Automate User Provisioning and Deprovisioning to SaaS Applications with Azure Active Directory](user-provisioning.md).
> [!IMPORTANT] > Azure Active Directory (Azure AD) has a gallery that contains thousands of pre-integrated applications that are enabled for automatic provisioning with Azure AD. You should start by finding the provisioning setup tutorial specific to your application in the [List of tutorials on how to integrate SaaS apps with Azure Active Directory](../saas-apps/tutorial-list.md). You'll likely find step-by-step guidance for configuring both the app and Azure AD to create the provisioning connection.
Use the Azure portal to view and manage all applications that are configured for
The **Provisioning** pane begins with a **Mode** menu, which shows the provisioning modes supported for an enterprise application, and lets you configure them. The available options include:
-* **Automatic** - This option is shown if Azure AD supports automatic API-based provisioning or de-provisioning of user accounts to this application. Select this mode to display an interface that helps administrators:
+* **Automatic** - This option is shown if Azure AD supports automatic API-based provisioning or deprovisioning of user accounts to this application. Select this mode to display an interface that helps administrators:
* Configure Azure AD to connect to the application's user management API * Create account mappings and workflows that define how user account data should flow between Azure AD and the app
Supported customizations include:
### Settings
-Expand **Settings** to set an email address to receive notifications and whether to receive alerts on errors. You can also select the scope of users to sync. You can choose to sync all users and groups or only those that are assigned.
+Expand **Settings** to set an email address to receive notifications and whether to receive alerts on errors. Also select the scope of users to sync. Choose to sync all users and groups or only users that are assigned.
### Provisioning Status
-If provisioning is being enabled for the first time for an application, turn on the service by changing the **Provisioning Status** to **On**. This change causes the Azure AD provisioning service to run an initial cycle. It reads the users assigned in the **Users and groups** section, queries the target application for them, and then runs the provisioning actions defined in the Azure AD **Mappings** section. During this process, the provisioning service stores cached data about what user accounts it's managing, so non-managed accounts inside the target applications that were never in scope for assignment aren't affected by de-provisioning operations. After the initial cycle, the provisioning service automatically synchronizes user and group objects on a forty-minute interval.
+If provisioning is being enabled for the first time for an application, turn on the service by changing the **Provisioning Status** to **On**. This change causes the Azure AD provisioning service to run an initial cycle. It reads the users assigned in the **Users and groups** section, queries the target application for them, and then runs the provisioning actions defined in the Azure AD **Mappings** section. During this process, the provisioning service stores cached data about what user accounts it's managing. The service stores cached data so nonmanaged accounts inside the target applications that were never in scope for assignment aren't affected in deprovisioning operations. After the initial cycle, the provisioning service automatically synchronizes user and group objects on a forty-minute interval.
Change the **Provisioning Status** to **Off** to pause the provisioning service. In this state, Azure doesn't create, update, or remove any user or group objects in the app. Change the state back to **On** and the service picks up where it left off.
active-directory Define Conditional Rules For Provisioning User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md
Previously updated : 01/23/2023 Last updated : 05/02/2023 zone_pivot_groups: app-provisioning-cross-tenant-synchronization
zone_pivot_groups: app-provisioning-cross-tenant-synchronization
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ::: zone-end
-This article describes how to use scoping filters in the Azure Active Directory (Azure AD) provisioning service to define attribute-based rules that determine which users or groups are provisioned.
+Learn how to use scoping filters in the Azure Active Directory (Azure AD) provisioning service to define attribute-based rules. The rules are used to determine which users or groups are provisioned.
## Scoping filter use cases
Scoping filters can be used optionally, in addition to scoping by assignment. A
A scoping filter consists of one or more *clauses*. Clauses determine which users are allowed to pass through the scoping filter by evaluating each user's attributes. For example, you might have one clause that requires that a user's "State" attribute equals "New York", so only New York users are provisioned into the application.
-A single clause defines a single condition for a single attribute value. If multiple clauses are created in a single scoping filter, they're evaluated together by using "AND" logic. This means all clauses must evaluate to "true" in order for a user to be provisioned.
+A single clause defines a single condition for a single attribute value. If multiple clauses are created in a single scoping filter, they're evaluated together using "AND" logic. The "AND" logic means all clauses must evaluate to "true" in order for a user to be provisioned.
-Finally, multiple scoping filters can be created for a single application. If multiple scoping filters are present, they're evaluated together by using "OR" logic. This means that if all the clauses in any of the configured scoping filters evaluate to "true", the user is provisioned.
+Finally, multiple scoping filters can be created for a single application. If multiple scoping filters are present, they're evaluated together by using "OR" logic. The "OR" logic means that if all the clauses in any of the configured scoping filters evaluate to "true", the user is provisioned.
Each user or group processed by the Azure AD provisioning service is always evaluated individually against each scoping filter.
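The evaluation order described above can be sketched in a few lines of Python (hypothetical helper names, and only the EQUALS operator shown; this is an illustration of the AND/OR combination, not the service's implementation):

```python
# Sketch of the documented evaluation order: clauses within a scoping
# filter combine with AND; multiple scoping filters combine with OR.
def clause_matches(user: dict, clause: dict) -> bool:
    value = user.get(clause["attribute"])
    if clause["operator"] == "EQUALS":
        return value == clause["value"]
    raise ValueError(f"unsupported operator: {clause['operator']}")

def user_in_scope(user: dict, scoping_filters: list) -> bool:
    # OR across filters, AND across the clauses inside each filter.
    return any(
        all(clause_matches(user, clause) for clause in f)
        for f in scoping_filters
    )

filters = [
    [{"attribute": "state", "operator": "EQUALS", "value": "New York"}],
    [{"attribute": "department", "operator": "EQUALS", "value": "sales"}],
]
print(user_in_scope({"state": "New York"}, filters))        # True
print(user_in_scope({"department": "marketing"}, filters))  # False
```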
Scoping filters are configured as part of the attribute mappings for each Azure
g. **REGEX MATCH**. Clause returns "true" if the evaluated attribute matches a regular expression pattern. For example: `([1-9][0-9])` matches any number between 10 and 99 (case sensitive).
- h. **NOT REGEX MATCH**. Clause returns "true" if the evaluated attribute doesn't match a regular expression pattern. It will return "false" if the attribute is null / empty.
+ h. **NOT REGEX MATCH**. Clause returns "true" if the evaluated attribute doesn't match a regular expression pattern. It returns "false" if the attribute is null / empty.
i. **Greater_Than.** Clause returns "true" if the evaluated attribute is greater than the value. The value specified on the scoping filter must be an integer and the attribute on the user must be an integer [0,1,2,...].
Scoping filters are configured as part of the attribute mappings for each Azure
## Common scoping filters | Target Attribute| Operator | Value | Description| |-|-|-|-|
-|userPrincipalName|REGEX MATCH|`.\*@domain.com`|All users with userPrincipal that has the domain @domain.com will be in scope for provisioning|
-|userPrincipalName|NOT REGEX MATCH|`.\*@domain.com`|All users with userPrincipal that has the domain @domain.com will be out of scope for provisioning|
+|userPrincipalName|REGEX MATCH|`.\*@domain.com`|All users with `userPrincipal` that have the domain `@domain.com` are in scope for provisioning. |
+|userPrincipalName|NOT REGEX MATCH|`.\*@domain.com`|All users with `userPrincipal` that has the domain `@domain.com` are out of scope for provisioning. |
|department|EQUALS|`sales`|All users from the sales department are in scope for provisioning|
-|workerID|REGEX MATCH|`(1[0-9][0-9][0-9][0-9][0-9][0-9])`| All employees with workerIDs between 1000000 and 2000000 are in scope for provisioning.|
+|workerID|REGEX MATCH|`(1[0-9][0-9][0-9][0-9][0-9][0-9])`| All employees with `workerID` between 1000000 and 2000000 are in scope for provisioning.|
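The `workerID` row from the table can be checked quickly in Python; `fullmatch` is used here to approximate the service's REGEX MATCH semantics (an assumption), and the pattern matches seven-digit values starting with 1, i.e. 1000000 through 1999999:

```python
import re

# The workerID pattern from the common scoping filters table.
worker_id_pattern = re.compile(r"(1[0-9][0-9][0-9][0-9][0-9][0-9])")

print(bool(worker_id_pattern.fullmatch("1234567")))  # True  - in range
print(bool(worker_id_pattern.fullmatch("2000001")))  # False - starts with 2
print(bool(worker_id_pattern.fullmatch("999999")))   # False - only six digits
```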
## Related articles * [Automate user provisioning and deprovisioning to SaaS applications](../app-provisioning/user-provisioning.md)
active-directory Application Proxy Understand Cors Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-understand-cors-issues.md
You can change your app to support CORS by adding the Access-Control-Allow-Origi
### Option 5: Extend the lifetime of the access token
-Some CORS issues can't be resolved, such as when your app redirects to *login.microsoftonline.com* to authenticate, and the access token expires. The CORS call then fails. A workaround for this scenario is to extend the lifetime of the access token, to prevent it from expiring during a user's session. For more information about how to do this, see [Configurable token lifetimes in Azure AD](../develop/active-directory-configurable-token-lifetimes.md).
+Some CORS issues can't be resolved, such as when your app redirects to *login.microsoftonline.com* to authenticate, and the access token expires. The CORS call then fails. A workaround for this scenario is to extend the lifetime of the access token, to prevent it from expiring during a user's session. For more information about how to do this, see [Configurable token lifetimes in Azure AD](../develop/configurable-token-lifetimes.md).
## See also - [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md)
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md
Smart lockout tracks the last three bad password hashes to avoid incrementing th
## Default protections
-In addition to Smart lockout, Azure AD also protects against attacks by analyzing signals including IP traffic and identifying anomalous behavior. Azure AD will block these malicious sign-ins by default and return [AADSTS50053 - IdsLocked error code](../develop/reference-aadsts-error-codes.md), regardless of the password validity.
+In addition to Smart lockout, Azure AD also protects against attacks by analyzing signals including IP traffic and identifying anomalous behavior. Azure AD will block these malicious sign-ins by default and return [AADSTS50053 - IdsLocked error code](../develop/reference-error-codes.md), regardless of the password validity.
## Next steps
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Because risk and policy are evaluated in real time, clients that negotiate conti
Token lifetime is increased to long lived, up to 28 hours, in CAE sessions. Revocation is driven by critical events and policy evaluation, not just an arbitrary time period. This change increases the stability of applications without affecting security posture.
-If you aren't using CAE-capable clients, your default access token lifetime will remain 1 hour. The default only changes if you configured your access token lifetime with the [Configurable Token Lifetime (CTL)](../develop/active-directory-configurable-token-lifetimes.md) preview feature.
+If you aren't using CAE-capable clients, your default access token lifetime will remain 1 hour. The default only changes if you configured your access token lifetime with the [Configurable Token Lifetime (CTL)](../develop/configurable-token-lifetimes.md) preview feature.
## Example flow diagrams
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
In persistent browsers, cookies stay stored in the user's device even after a
Conditional Access is an Azure AD Premium capability and requires a premium license. If you would like to learn more about Conditional Access, see [What is Conditional Access in Azure Active Directory?](overview.md#license-requirements) > [!WARNING]
-> If you are using the [configurable token lifetime](../develop/active-directory-configurable-token-lifetimes.md) feature currently in public preview, please note that we don't support creating two different policies for the same user or app combination: one with this feature and another one with configurable token lifetime feature. Microsoft retired the configurable token lifetime feature for refresh and session token lifetimes on January 30, 2021 and replaced it with the Conditional Access authentication session management feature.
+> If you are using the [configurable token lifetime](../develop/configurable-token-lifetimes.md) feature currently in public preview, please note that we don't support creating two different policies for the same user or app combination: one with this feature and another one with configurable token lifetime feature. Microsoft retired the configurable token lifetime feature for refresh and session token lifetimes on January 30, 2021 and replaced it with the Conditional Access authentication session management feature.
> > Before enabling Sign-in Frequency, make sure other reauthentication settings are disabled in your tenant. If "Remember MFA on trusted devices" is enabled, be sure to disable it before using Sign-in frequency, as using these two settings together may lead to prompting users unexpectedly. To learn more about reauthentication prompts and session lifetime, see the article, [Optimize reauthentication prompts and understand session lifetime for Azure AD Multifactor Authentication](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
active-directory Troubleshoot Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access.md
If you need to submit a support incident, provide the request ID and time and da
| 53003 | BlockedByConditionalAccess | | 53004 | ProofUpBlockedDueToRisk |
-More information about error codes can be found in the article [Azure AD Authentication and authorization error codes](../develop/reference-aadsts-error-codes.md). Error codes in the list appear with a prefix of `AADSTS` followed by the code seen in the browser, for example `AADSTS53002`.
+More information about error codes can be found in the article [Azure AD Authentication and authorization error codes](../develop/reference-error-codes.md). Error codes in the list appear with a prefix of `AADSTS` followed by the code seen in the browser, for example `AADSTS53002`.
## Service dependencies
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
The default lifetime of an access token is variable. When issued, the Microsoft
Tenants that don't use Conditional Access have a default access token lifetime of two hours for clients such as Microsoft Teams and Microsoft 365.
-Adjust the lifetime of an access token to control how often the client application expires the application session, and how often it requires the user to reauthenticate (either silently or interactively). To override the default access token lifetime variation, use [Configurable token lifetime (CTL)](active-directory-configurable-token-lifetimes.md).
+Adjust the lifetime of an access token to control how often the client application expires the application session, and how often it requires the user to reauthenticate (either silently or interactively). To override the default access token lifetime variation, use [Configurable token lifetime (CTL)](configurable-token-lifetimes.md).
Apply default token lifetime variation to organizations that have Continuous Access Evaluation (CAE) enabled. Apply default token lifetime variation even if the organizations use CTL policies. The default token lifetime for long lived token lifetime ranges from 20 to 28 hours. When the access token expires, the client must use the refresh token to silently acquire a new refresh token and access token.
Refresh tokens are invalidated or revoked at any time, for different reasons. Th
### Token timeouts
-Organizations can use [token lifetime configuration](active-directory-configurable-token-lifetimes.md) to alter the lifetime of refresh tokens. Some tokens can go without use. For example, the user doesn't open the application for three months and then the token expires. Applications can encounter scenarios where the login server rejects a refresh token due to its age.
+Organizations can use [token lifetime configuration](configurable-token-lifetimes.md) to alter the lifetime of refresh tokens. Some tokens can go without use. For example, the user doesn't open the application for three months and then the token expires. Applications can encounter scenarios where the login server rejects a refresh token due to its age.
- MaxInactiveTime: Specifies the amount of time that a token can be inactive. - MaxSessionAge: If MaxAgeSessionMultiFactor or MaxAgeSessionSingleFactor is set to something other than their default (Until-revoked), the user must reauthenticate after the time set in the MaxAgeSession*. Examples:
active-directory Configurable Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configurable-token-lifetimes.md
+
+ Title: Configurable token lifetimes
+description: Learn how to set lifetimes for access, SAML, and ID tokens issued by the Microsoft identity platform.
++++++++ Last updated : 04/04/2023++++
+# Configurable token lifetimes in the Microsoft identity platform (preview)
+
+You can specify the lifetime of an access, ID, or SAML token issued by the Microsoft identity platform. You can set token lifetimes for all apps in your organization or for a multi-tenant (multi-organization) application. We currently don't support configuring the token lifetimes for service principals or [managed identity service principals](../managed-identities-azure-resources/overview.md).
+
+In Azure AD, a policy object represents a set of rules that are enforced on individual applications or on all applications in an organization. Each policy type has a unique structure, with a set of properties that are applied to objects to which they're assigned.
+
+You can designate a policy as the default policy for your organization. The policy is applied to any application in the organization, as long as it isn't overridden by a policy with a higher priority. You also can assign a policy to specific applications. The order of priority varies by policy type.
+
+For examples, read [examples of how to configure token lifetimes](configure-token-lifetimes.md).
+
+> [!NOTE]
+> Configurable token lifetime policy only applies to mobile and desktop clients that access SharePoint Online and OneDrive for Business resources, and does not apply to web browser sessions.
+> To manage the lifetime of web browser sessions for SharePoint Online and OneDrive for Business, use the [Conditional Access session lifetime](../conditional-access/howto-conditional-access-session-lifetime.md) feature. Refer to the [SharePoint Online blog](https://techcommunity.microsoft.com/t5/SharePoint-Blog/Introducing-Idle-Session-Timeout-in-SharePoint-and-OneDrive/ba-p/119208) to learn more about configuring idle session timeouts.
+
+## License requirements
+
+Using this feature requires an Azure AD Premium P1 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
+
+Customers with [Microsoft 365 Business licenses](/office365/servicedescriptions/office-365-service-descriptions-technet-library) also have access to Conditional Access features.
+
+## Token lifetime policies for access, SAML, and ID tokens
+
+You can set token lifetime policies for access tokens, SAML tokens, and ID tokens.
+
+### Access tokens
+
+Clients use access tokens to access a protected resource. An access token can be used only for a specific combination of user, client, and resource. Access tokens cannot be revoked and are valid until their expiry. A malicious actor that has obtained an access token can use it for the extent of its lifetime. Adjusting the lifetime of an access token is a trade-off between improving system performance and increasing the amount of time that the client retains access after the user's account is disabled. Improved system performance is achieved by reducing the number of times a client needs to acquire a fresh access token.
+
+The default lifetime of an access token is variable. When issued, an access token's default lifetime is assigned a random value ranging between 60-90 minutes (75 minutes on average). The default lifetime also varies depending on the client application requesting the token or if conditional access is enabled in the tenant. For more information, see [Access token lifetime](access-tokens.md#access-token-lifetime).
+
+### SAML tokens
+
+SAML tokens are used by many web-based SaaS applications, and are obtained using Azure Active Directory's SAML2 protocol endpoint. They're also consumed by applications using WS-Federation. The default lifetime of the token is 1 hour. From an application's perspective, the validity period of the token is specified by the NotOnOrAfter value of the `<conditions …>` element in the token. After the validity period of the token has ended, the client must initiate a new authentication request, which is often satisfied without interactive sign-in as a result of the single sign-on (SSO) session token.
+
+The value of NotOnOrAfter can be changed using the `AccessTokenLifetime` parameter in a `TokenLifetimePolicy`. It's set to the lifetime configured in the policy, if any, plus a clock skew factor of five minutes.
+
+The subject confirmation NotOnOrAfter specified in the `<SubjectConfirmationData>` element is not affected by the Token Lifetime configuration.
+
+### ID tokens
+
+ID tokens are passed to websites and native clients. ID tokens contain profile information about a user. An ID token is bound to a specific combination of user and client. ID tokens are considered valid until their expiry. Usually, a web application matches a user's session lifetime in the application to the lifetime of the ID token issued for the user. You can adjust the lifetime of an ID token to control how often the web application expires the application session, and how often it requires the user to be re-authenticated with the Microsoft identity platform (either silently or interactively).
+
+## Token lifetime policies for refresh tokens and session tokens
+
+You cannot set token lifetime policies for refresh tokens and session tokens. For lifetime, timeout, and revocation information on refresh tokens, see [Refresh tokens](refresh-tokens.md).
+
+> [!IMPORTANT]
+> As of January 30, 2021, you cannot configure refresh and session token lifetimes. Azure Active Directory no longer honors refresh and session token configuration in existing policies. New tokens issued after existing tokens have expired are now set to the [default configuration](#configurable-token-lifetime-properties). You can still configure access, SAML, and ID token lifetimes after the refresh and session token configuration retirement.
+>
+> Existing tokens' lifetimes will not be changed. After they expire, a new token will be issued based on the default value.
+>
+> If you need to continue to define the time period before a user is asked to sign in again, configure sign-in frequency in Conditional Access. To learn more about Conditional Access, read [Configure authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
+
+## Configurable token lifetime properties
+A token lifetime policy is a type of policy object that contains token lifetime rules. This policy controls how long access, SAML, and ID tokens for this resource are considered valid. Token lifetime policies cannot be set for refresh and session tokens. If no policy is set, the system enforces the default lifetime value.
+
+### Access, ID, and SAML2 token lifetime policy properties
+
+Reducing the Access Token Lifetime property mitigates the risk of an access token or ID token being used by a malicious actor for an extended period of time. (These tokens cannot be revoked.) The trade-off is that performance is adversely affected, because the tokens have to be replaced more often.
+
+For an example, see [Create a policy for web sign-in](configure-token-lifetimes.md).
+
+Access, ID, and SAML2 token configuration is affected by the following properties and their respective values:
+
+- **Property**: Access Token Lifetime
+- **Policy property string**: AccessTokenLifetime
+- **Affects**: Access tokens, ID tokens, SAML2 tokens
+- **Default**:
+ - Access tokens: varies, depending on the client application requesting the token. For example, continuous access evaluation (CAE) capable clients that negotiate CAE-aware sessions see a long-lived token lifetime (up to 28 hours).
+ - ID tokens, SAML2 tokens: 1 hour
+- **Minimum**: 10 minutes
+- **Maximum**: 1 day
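
As a concrete sketch of how these properties are expressed, a token lifetime policy definition is a small JSON object; the two-hour `AccessTokenLifetime` below is an illustrative value, not a recommendation:

```json
{
  "TokenLifetimePolicy": {
    "Version": 1,
    "AccessTokenLifetime": "02:00:00"
  }
}
```

When you create a policy, this definition is passed as a stringified JSON element inside the policy's `definition` array.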
+
+### Refresh and session token lifetime policy properties
+
+Refresh and session token configuration is affected by the following properties and their respective values. After the retirement of refresh and session token configuration on January 30, 2021, Azure AD only honors the default values described below. If you decide not to use [Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md) to manage sign-in frequency, your refresh and session tokens are set to the default configuration and you can no longer change their lifetimes.
+
+|Property |Policy property string |Affects |Default |
+|--------|-----------------------|--------|--------|
+|Refresh Token Max Inactive Time |MaxInactiveTime |Refresh tokens |90 days |
+|Single-Factor Refresh Token Max Age |MaxAgeSingleFactor |Refresh tokens (for any users) |Until-revoked |
+|Multi-Factor Refresh Token Max Age |MaxAgeMultiFactor |Refresh tokens (for any users) |Until-revoked |
+|Single-Factor Session Token Max Age |MaxAgeSessionSingleFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
+|Multi-Factor Session Token Max Age |MaxAgeSessionMultiFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
+
+Non-persistent session tokens have a Max Inactive Time of 24 hours, whereas persistent session tokens have a Max Inactive Time of 90 days. Any time the SSO session token is used within its validity period, the validity period is extended by another 24 hours or 90 days. If the SSO session token isn't used within its Max Inactive Time period, it's considered expired and is no longer accepted. Any changes to this default period should be made using [Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
+
+You can use PowerShell to find the policies that are affected by the retirement. Use the [PowerShell cmdlets](configure-token-lifetimes.md#get-started) to see all the policies created in your organization, or to find which apps are linked to a specific policy.
+
+## Policy evaluation and prioritization
+You can create and then assign a token lifetime policy to a specific application and to your organization. Multiple policies might apply to a specific application. The token lifetime policy that takes effect follows these rules:
+
+* If a policy is explicitly assigned to the organization, it's enforced.
+* If no policy is explicitly assigned to the organization, the policy assigned to the application is enforced.
+* If no policy has been assigned to the organization or the application object, the default values are enforced. (See the table in [Configurable token lifetime properties](#configurable-token-lifetime-properties).)
+
+A token's validity is evaluated at the time the token is used. The policy with the highest priority on the application that is being accessed takes effect.
+
+All timespans used here are formatted according to the C# [TimeSpan](/dotnet/api/system.timespan) object - D.HH:MM:SS. So 80 days and 30 minutes would be `80.00:30:00`. The leading D (and its period) can be dropped if zero, and each component must stay in range, so 90 minutes would be `01:30:00`.
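
As a quick sanity check (the values here are arbitrary), .NET's `TimeSpan` parser accepts exactly this format, which you can try from PowerShell:

```powershell
# 80 days and 30 minutes, in D.HH:MM:SS form
[TimeSpan]::Parse("80.00:30:00")

# 90 minutes; the minutes component must stay in the 0-59 range, so use HH:MM:SS
[TimeSpan]::Parse("01:30:00")
```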
+
+## REST API reference
+
+You can configure token lifetime policies and assign them to apps using Microsoft Graph. For more information, see the [tokenLifetimePolicy resource type](/graph/api/resources/tokenlifetimepolicy) and its associated methods.
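
For instance, a request along these lines creates a policy (the display name is a placeholder; the `definition` property carries the policy rules as an array of JSON strings):

```http
POST https://graph.microsoft.com/v1.0/policies/tokenLifetimePolicies
Content-Type: application/json

{
  "definition": [
    "{\"TokenLifetimePolicy\":{\"Version\":1,\"AccessTokenLifetime\":\"02:00:00\"}}"
  ],
  "displayName": "ExampleTokenLifetimePolicy",
  "isOrganizationDefault": false
}
```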
+
+## Cmdlet reference
+
+These are the cmdlets in the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation).
+
+### Manage policies
+
+You can use the following cmdlets to manage policies.
+
+| Cmdlet | Description |
+| | |
+| [New-MgPolicyTokenLifetimePolicy](/powershell/module/microsoft.graph.identity.signins/new-mgpolicytokenlifetimepolicy) | Creates a new policy. |
+| [Get-MgPolicyTokenLifetimePolicy](/powershell/module/microsoft.graph.identity.signins/get-mgpolicytokenlifetimepolicy) | Gets all token lifetime policies or a specified policy. |
+| [Update-MgPolicyTokenLifetimePolicy](/powershell/module/microsoft.graph.identity.signins/update-mgpolicytokenlifetimepolicy) | Updates an existing policy. |
+| [Remove-MgPolicyTokenLifetimePolicy](/powershell/module/microsoft.graph.identity.signins/remove-mgpolicytokenlifetimepolicy) | Deletes the specified policy. |
+
+### Application policies
+You can use the following cmdlets for application policies.
+
+| Cmdlet | Description |
+| | |
+| [New-MgApplicationTokenLifetimePolicyByRef](/powershell/module/microsoft.graph.applications/new-mgapplicationtokenlifetimepolicybyref) | Links the specified policy to an application. |
+| [Get-MgApplicationTokenLifetimePolicyByRef](/powershell/module/microsoft.graph.applications/get-mgapplicationtokenlifetimepolicybyref) | Gets the policies that are assigned to an application. |
+| [Remove-MgApplicationTokenLifetimePolicyByRef](/powershell/module/microsoft.graph.applications/remove-mgapplicationtokenlifetimepolicybyref) | Removes a policy from an application. |
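
For example (the IDs below are placeholders for your application's object ID and an existing policy's ID), linking and verifying a policy assignment looks like this:

```powershell
# Placeholder object IDs from your own tenant
$appObjectId = "00000000-0000-0000-0000-000000000000"
$policyId    = "11111111-1111-1111-1111-111111111111"

# Link the policy to the application by reference
New-MgApplicationTokenLifetimePolicyByRef -ApplicationId $appObjectId `
    -BodyParameter @{ "@odata.id" = "https://graph.microsoft.com/v1.0/policies/tokenLifetimePolicies/$policyId" }

# Verify which policies are assigned to the application
Get-MgApplicationTokenLifetimePolicyByRef -ApplicationId $appObjectId
```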
+
+### Service principal policies
+Service principal policies are not supported.
+
+## Next steps
+
+To learn more, read [examples of how to configure token lifetimes](configure-token-lifetimes.md).
active-directory Configure Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configure-token-lifetimes.md
In the following steps, you'll implement a common policy scenario that imposes new rules for token lifetime. It's possible to specify the lifetime of an access, SAML, or ID token issued by the Microsoft identity platform. This can be set for all apps in your organization or for a specific service principal. Lifetimes can also be set for multiple organizations (multi-tenant applications).
-For more information, see [configurable token lifetimes](active-directory-configurable-token-lifetimes.md).
+For more information, see [configurable token lifetimes](configurable-token-lifetimes.md).
## Get started
active-directory Howto Add Branding In Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-branding-in-apps.md
+
+ Title: Sign in with Microsoft branding guidelines | Azure AD
+description: Learn about application branding guidelines for Microsoft identity platform.
+Last updated: 03/16/2023
+# Sign in with Microsoft: Branding guidelines for applications
+
+When developing applications with the Microsoft identity platform, you need to guide your customers who want to use their work or school account (managed in Azure AD), or their personal account, to sign up and sign in to your application.
+
+In this article, you will:
+
+- Learn about the two kinds of user accounts managed by Microsoft and how to refer to Azure AD accounts in your application
+- Learn the requirements for using the Microsoft logo in your app
+- Download the official **Sign in** or **Sign in with Microsoft** images to use in your app
+- Learn about the branding and navigation do's and don'ts
+
+## Personal accounts vs. work or school accounts from Microsoft
+
+Microsoft manages two kinds of user accounts:
+
+- **Personal accounts** (formerly known as Windows Live ID). These accounts represent the relationship between *individual* users and Microsoft, and are used to access consumer devices and services from Microsoft. These accounts are intended for personal use.
+- **Work or school accounts.** These accounts are managed by Microsoft on behalf of organizations that use Azure Active Directory. These accounts are used to sign in to Microsoft 365 and other business services from Microsoft.
+
+Microsoft work or school accounts are typically assigned to end users (employees, students, federal employees) by their organizations (company, school, government agency). These accounts are mastered directly in the cloud (in the Azure AD platform) or synced to Azure AD from an on-premises directory, such as Windows Server Active Directory. Microsoft is the *custodian* of the work or school accounts, but the accounts are owned and controlled by the organization.
+
+## Referring to Azure AD accounts in your application
+
+Microsoft doesn't expose end users to the Azure or the Active Directory brand names, and neither should you.
+
+- Once users are signed in, use the organization's name and logo as much as possible. This is better than using generic terms like "your organization."
+- When users aren't signed in, refer to their accounts as "work or school accounts" and use the Microsoft logo to convey that Microsoft manages these accounts. Don't use terms like "enterprise account," "business account," or "corporate account," which create user confusion.
+
+## User account pictogram
+
+In an earlier version of these guidelines, we recommended using a "blue badge" pictogram. Based on user and developer feedback, we now recommend using the Microsoft logo instead. The Microsoft logo helps users understand that they can reuse the account they use with Microsoft 365 or other Microsoft business services to sign in to your app.
+
+## Signing up and signing in with Azure AD
+
+Your app may present separate paths for sign-up and sign-in, and the following sections provide visual guidance for both scenarios.
+
+**If your app supports end-user sign-up (for example, a free trial or freemium model)**: You can show a **sign-in** button that allows users to access your app with their work account or their personal account. Azure AD shows a consent prompt the first time they access your app.
+
+**If your app requires permissions that only admins can consent to, or if your app requires organizational licensing**: Separate admin acquisition from user sign-in. The **"get this app" button** redirects admins to sign in and then asks them to grant consent on behalf of users in their organization, which has the added benefit of suppressing end-user consent prompts to your app.
+
+## Visual guidance for app acquisition
+
+Your "get the app" link must redirect the user to the Azure AD grant access (authorize) page, so that an organization's administrator can authorize your app to access their organization's data, which is hosted by Microsoft. Details on how to request access are discussed in the [Integrating Applications with Azure Active Directory](./quickstart-register-app.md) article.
+
+After admins consent to your app, they can choose to add it to their users' Microsoft 365 app launcher experience (accessible from the waffle and from [https://www.office.com/](https://www.office.com/)). If you want to advertise this capability, you can use terms like "Add this app to your organization" and show a button like the following example:
+
+![Button showing the Microsoft logo and "Add to my organization" text](./media/howto-add-branding-in-apps/add-to-my-org.png)
+
+However, we recommend that you write explanatory text instead of relying on buttons. For example:
+
+> *If you already use Microsoft 365 or another business service from Microsoft, you can grant <your_app_name> access to your organization's data. This will allow your users to access <your_app_name> with their existing work accounts.*
+
+To download the official Microsoft logo for use in your app, right-click the one you want to use and then save it to your computer.
+
+| Asset | PNG format | SVG format |
+| | - | - |
+| Microsoft logo | ![Downloadable Microsoft logo in PNG format](./media/howto-add-branding-in-apps/ms-symbollockup_mssymbol_19.png) | ![Downloadable Microsoft logo in SVG format](./media/howto-add-branding-in-apps/ms-symbollockup_mssymbol_19.svg) |
+
+## Visual guidance for sign-in
+
+Your app should display a sign-in button that redirects users to the sign-in endpoint that corresponds to the protocol you use to integrate with Azure AD. The following section provides details on what that button should look like.
+
+### Pictogram and "Sign in with Microsoft"
+
+It's the association of the Microsoft logo and the "Sign in with Microsoft" text that uniquely represents Azure AD among the other identity providers your app may support. If you don't have enough space for "Sign in with Microsoft," it's OK to shorten it to "Sign in." You can use a light or dark color scheme for the buttons.
+
+The following diagram shows the Microsoft-recommended redlines when using the assets with your app. The redlines apply to "Sign in with Microsoft" or the shorter "Sign in" version.
+
+![Shows the "Sign in with Microsoft" redlines](./media/howto-add-branding-in-apps/sign-in-with-microsoft-redlines.png)
+
+To download the official images for use in your app, right-click the one you want to use and then save it to your computer.
+
+| Asset | PNG format | SVG format |
+| | - | - |
+| Sign in with Microsoft (dark theme) | ![Downloadable "Sign in with Microsoft" button dark theme PNG](./media/howto-add-branding-in-apps/ms-symbollockup_signin_dark.png) | ![Downloadable "Sign in with Microsoft" button dark theme SVG](./media/howto-add-branding-in-apps/ms-symbollockup_signin_dark.svg) |
+| Sign in with Microsoft (light theme) | ![Downloadable "Sign in with Microsoft" button light theme PNG](./media/howto-add-branding-in-apps/ms-symbollockup_signin_light.png) | ![Downloadable "Sign in with Microsoft" button light theme SVG](./media/howto-add-branding-in-apps/ms-symbollockup_signin_light.svg) |
+| Sign in (dark theme) | ![Downloadable "Sign in" short button dark theme PNG](./media/howto-add-branding-in-apps/ms-symbollockup_signin_dark_short.png) | ![Downloadable "Sign in" short button dark theme SVG](./media/howto-add-branding-in-apps/ms-symbollockup_signin_dark_short.svg) |
+| Sign in (light theme) | ![Downloadable "Sign in" short button light theme PNG](./media/howto-add-branding-in-apps/ms-symbollockup_signin_light_short.png) | ![Downloadable "Sign in" short button light theme SVG](./media/howto-add-branding-in-apps/ms-symbollockup_signin_light_short.svg) |
+
+## Branding Do's and Don'ts
+
+**DO** use "work or school account" in combination with the "Sign in with Microsoft" button to provide additional explanation to help end users recognize whether they can use it. **DON'T** use other terms such as "enterprise account," "business account," or "corporate account."
+
+**DON'T** use "Microsoft 365 ID" or "Azure ID." Microsoft 365 is also the name of a consumer offering from Microsoft, which doesn't use Azure AD for authentication.
+
+**DON'T** alter the Microsoft logo.
+
+**DON'T** expose end users to the Azure or Active Directory brands. It's OK, however, to use these terms with developers, IT pros, and admins.
+
+## Navigation Do's and Don'ts
+
+**DO** provide a way for users to sign out and switch to another user account. While most people have a single personal account from Microsoft/Facebook/Google/Twitter, people are often associated with more than one organization. Support for multiple signed-in users is coming soon.
active-directory Id Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/id-tokens.md
To ensure that the token size doesn't exceed HTTP header size limits, Azure AD l
By default, an ID token is valid for one hour - after one hour, the client must acquire a new ID token.
-You can adjust the lifetime of an ID token to control how often the client application expires the application session, and how often it requires the user to re-authenticate either silently or interactively. For more information, read [Configurable token lifetimes](active-directory-configurable-token-lifetimes.md).
+You can adjust the lifetime of an ID token to control how often the client application expires the application session, and how often it requires the user to re-authenticate either silently or interactively. For more information, read [Configurable token lifetimes](configurable-token-lifetimes.md).
## Validating an ID token
active-directory Identity Platform Integration Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-platform-integration-checklist.md
Use the following checklist to ensure that your application is effectively integ
## Basics
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Read and understand the [Microsoft Platform Policies](/legal/microsoft-identity-platform/terms-of-use). Ensure that your application adheres to the terms outlined as they're designed to protect users and the platform.
+![checkbox](./media/integration-checklist/checkbox-two.svg) Read and understand the [Microsoft Platform Policies](/legal/microsoft-identity-platform/terms-of-use). Ensure that your application adheres to the terms outlined as they're designed to protect users and the platform.
## Ownership
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Make sure the information associated with the account you used to register and manage apps is up-to-date.
+![checkbox](./media/integration-checklist/checkbox-two.svg) Make sure the information associated with the account you used to register and manage apps is up-to-date.
## Branding
-![checkbox](./medi).
+![checkbox](./medi).
-![checkbox](./medi). Make sure your name and logo are representative of your company/product so that users can make informed decisions. Ensure that you're not violating any trademarks.
+![checkbox](./medi). Make sure your name and logo are representative of your company/product so that users can make informed decisions. Ensure that you're not violating any trademarks.
## Privacy
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Provide links to your app's terms of service and privacy statement.
+![checkbox](./media/integration-checklist/checkbox-two.svg) Provide links to your app's terms of service and privacy statement.
## Security
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Manage your redirect URIs: <ul><li>Maintain ownership of all your redirect URIs and keep the DNS records for them up-to-date.</li><li>Don't use wildcards (*) in your URIs.</li><li>For web apps, make sure all URIs are secure and encrypted (for example, using https schemes).</li><li>For public clients, use platform-specific redirect URIs if applicable (mainly for iOS and Android). Otherwise, use redirect URIs with a high amount of randomness to prevent collisions when calling back to your app.</li><li>If your app is being used from an isolated web agent, you may use `https://login.microsoftonline.com/common/oauth2/nativeclient`.</li><li>Review and trim all unused or unnecessary redirect URIs on a regular basis.</li></ul>
+![checkbox](./media/integration-checklist/checkbox-two.svg) Manage your redirect URIs: <ul><li>Maintain ownership of all your redirect URIs and keep the DNS records for them up-to-date.</li><li>Don't use wildcards (*) in your URIs.</li><li>For web apps, make sure all URIs are secure and encrypted (for example, using https schemes).</li><li>For public clients, use platform-specific redirect URIs if applicable (mainly for iOS and Android). Otherwise, use redirect URIs with a high amount of randomness to prevent collisions when calling back to your app.</li><li>If your app is being used from an isolated web agent, you may use `https://login.microsoftonline.com/common/oauth2/nativeclient`.</li><li>Review and trim all unused or unnecessary redirect URIs on a regular basis.</li></ul>
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) If your app is registered in a directory, minimize and manually monitor the list of app registration owners.
+![checkbox](./media/integration-checklist/checkbox-two.svg) If your app is registered in a directory, minimize and manually monitor the list of app registration owners.
-![checkbox](./medi#suitable-scenarios-for-the-oauth2-implicit-grant).
+![checkbox](./medi#suitable-scenarios-for-the-oauth2-implicit-grant).
-![checkbox](./medi).
+![checkbox](./medi).
-![checkbox](./medi) to store and regularly rotate your credentials.
+![checkbox](./medi) to store and regularly rotate your credentials.
-![checkbox](./medi#permission-types). Only use application permissions if necessary; use delegated permissions where possible. For a full list of Microsoft Graph permissions, see this [permissions reference](/graph/permissions-reference).
+![checkbox](./medi#permission-types). Only use application permissions if necessary; use delegated permissions where possible. For a full list of Microsoft Graph permissions, see this [permissions reference](/graph/permissions-reference).
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) If you're securing an API using the Microsoft identity platform, carefully think through the permissions it should expose. Consider what's the right granularity for your solution and which permission(s) require admin consent. Check for expected permissions in the incoming tokens before making any authorization decisions.
+![checkbox](./media/integration-checklist/checkbox-two.svg) If you're securing an API using the Microsoft identity platform, carefully think through the permissions it should expose. Consider what's the right granularity for your solution and which permission(s) require admin consent. Check for expected permissions in the incoming tokens before making any authorization decisions.
## Implementation
-![checkbox](./medi)) to securely sign in users.
+![checkbox](./medi)) to securely sign in users.
-![checkbox](./medi). If you must hand-code for the authentication protocols, you should follow the [Microsoft SDL](https://www.microsoft.com/sdl/default.aspx) or similar development methodology. Pay close attention to the security considerations in the standards specifications for each protocol.
+![checkbox](./medi). If you must hand-code for the authentication protocols, you should follow the [Microsoft SDL](https://www.microsoft.com/sdl/default.aspx) or similar development methodology. Pay close attention to the security considerations in the standards specifications for each protocol.
-![checkbox](./medi) apps.
+![checkbox](./medi) apps.
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) For mobile apps, configure each platform using the application registration experience. In order for your application to take advantage of the Microsoft Authenticator or Microsoft Company Portal for single sign-in, your app needs a "broker redirect URI" configured. This allows Microsoft to return control to your application after authentication. When configuring each platform, the app registration experience will guide you through the process. Use the quickstart to download a working example. On iOS, use brokers and system webview whenever possible.
+![checkbox](./media/integration-checklist/checkbox-two.svg) For mobile apps, configure each platform using the application registration experience. In order for your application to take advantage of the Microsoft Authenticator or Microsoft Company Portal for single sign-in, your app needs a "broker redirect URI" configured. This allows Microsoft to return control to your application after authentication. When configuring each platform, the app registration experience will guide you through the process. Use the quickstart to download a working example. On iOS, use brokers and system webview whenever possible.
-![checkbox](./medi).
+![checkbox](./medi).
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) If the data your app requires is available through [Microsoft Graph](https://developer.microsoft.com/graph), request permissions for this data using the Microsoft Graph endpoint rather than the individual API.
+![checkbox](./media/integration-checklist/checkbox-two.svg) If the data your app requires is available through [Microsoft Graph](https://developer.microsoft.com/graph), request permissions for this data using the Microsoft Graph endpoint rather than the individual API.
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Don't look at the access token value, or attempt to parse it as a client. They can change values, formats, or even become encrypted without warning - always use the id_token if your client needs to learn something about the user, or call Microsoft Graph. Only web APIs should parse access tokens (since they are the ones defining the format and setting the encryption keys).
+![checkbox](./media/integration-checklist/checkbox-two.svg) Don't look at the access token value, or attempt to parse it as a client. They can change values, formats, or even become encrypted without warning - always use the id_token if your client needs to learn something about the user, or call Microsoft Graph. Only web APIs should parse access tokens (since they are the ones defining the format and setting the encryption keys).
## End-user experience
-![checkbox](./medi) and configure the pieces of your app's consent prompt so that end users and admins have enough information to determine if they trust your app.
+![checkbox](./medi) and configure the pieces of your app's consent prompt so that end users and admins have enough information to determine if they trust your app.
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Minimize the number of times a user needs to enter login credentials while using your app by attempting silent authentication (silent token acquisition) before interactive flows.
+![checkbox](./media/integration-checklist/checkbox-two.svg) Minimize the number of times a user needs to enter login credentials while using your app by attempting silent authentication (silent token acquisition) before interactive flows.
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Don't use "prompt=consent" for every sign-in. Only use prompt=consent if you've determined that you need to ask for consent for additional permissions (for example, if you've changed your app's required permissions).
+![checkbox](./media/integration-checklist/checkbox-two.svg) Don't use "prompt=consent" for every sign-in. Only use prompt=consent if you've determined that you need to ask for consent for additional permissions (for example, if you've changed your app's required permissions).
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Where applicable, enrich your application with user data. Using the [Microsoft Graph API](https://developer.microsoft.com/graph) is an easy way to do this. The [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) tool that can help you get started.
+![checkbox](./media/integration-checklist/checkbox-two.svg) Where applicable, enrich your application with user data. Using the [Microsoft Graph API](https://developer.microsoft.com/graph) is an easy way to do this. The [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) tool that can help you get started.
-![checkbox](./medi#consent) at run time to help users understand why your app is requesting permissions that may concern or confuse users when requested on first start.
+![checkbox](./medi#consent) at run time to help users understand why your app is requesting permissions that may concern or confuse users when requested on first start.
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Implement a [clean single sign-out experience](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-6-SignOut). It's a privacy and a security requirement, and makes for a good user experience.
+![checkbox](./media/integration-checklist/checkbox-two.svg) Implement a [clean single sign-out experience](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-6-SignOut). It's a privacy and a security requirement, and makes for a good user experience.
## Testing
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Test for [Conditional Access policies](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-6-SignOut) that may affect your usersΓÇÖ ability to use your application.
+![checkbox](./media/integration-checklist/checkbox-two.svg) Test for [Conditional Access policies](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-6-SignOut) that may affect your usersΓÇÖ ability to use your application.
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Test your application with all possible accounts that you plan to support (for example, work or school accounts, personal Microsoft accounts, child accounts, and sovereign accounts).
+![checkbox](./media/integration-checklist/checkbox-two.svg) Test your application with all possible accounts that you plan to support (for example, work or school accounts, personal Microsoft accounts, child accounts, and sovereign accounts).
## Additional resources
active-directory Msal Android Handling Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-handling-exceptions.md
Exceptions in the Microsoft Authentication Library (MSAL) are intended to help app developers troubleshoot their application. Exception messages aren't localized.
-When processing exceptions and errors, you can use the exception type itself and the error code to distinguish between exceptions. For a list of error codes, see [Authentication and authorization error codes](reference-aadsts-error-codes.md).
+When processing exceptions and errors, you can use the exception type itself and the error code to distinguish between exceptions. For a list of error codes, see [Authentication and authorization error codes](reference-error-codes.md).
During the sign-in experience, you may encounter errors about consent, Conditional Access (MFA, device management, location-based restrictions), token issuance and redemption, and user properties.
active-directory Msal Error Handling Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-dotnet.md
When processing .NET exceptions, you can use the exception type itself and the `
You can also have a look at the fields of [MsalClientException](/dotnet/api/microsoft.identity.client.msalexception), [MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception), and [MsalUIRequiredException](/dotnet/api/microsoft.identity.client.msaluirequiredexception).
-If [MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception) is thrown, try [Authentication and authorization error codes](reference-aadsts-error-codes.md) to see if the code is listed there.
+If [MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception) is thrown, try [Authentication and authorization error codes](reference-error-codes.md) to see if the code is listed there.
If [MsalUIRequiredException](/dotnet/api/microsoft.identity.client.msaluirequiredexception) is thrown, it's an indication that an interactive flow needs to happen for the user to resolve the issue. In public client apps such as desktop and mobile app, this is resolved by calling `AcquireTokenInteractive`, which displays a browser. In confidential client apps, web apps should redirect the user to the authorization page, and web APIs should return an HTTP status code and header indicative of the authentication failure (401 Unauthorized and a WWW-Authenticate header).
active-directory Msal Error Handling Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-python.md
else:
```
-When an error is returned, the `"error_description"` key also contains a human-readable message, and there is typically also an `"error_code"` key which contains a machine-readable Microsoft identity platform error code. For more information about the various Microsoft identity platform error codes, see [Authentication and authorization error codes](./reference-aadsts-error-codes.md).
+When an error is returned, the `"error_description"` key also contains a human-readable message, and there is typically also an `"error_code"` key which contains a machine-readable Microsoft identity platform error code. For more information about the various Microsoft identity platform error codes, see [Authentication and authorization error codes](./reference-error-codes.md).
In MSAL for Python, exceptions are rare because most errors are handled by returning an error value. The `ValueError` exception is only thrown when there's an issue with how you're attempting to use the library, such as when API parameter(s) are malformed.
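The error-by-return-value pattern described above can be sketched as follows. This is a minimal illustration, not MSAL code: the helper name `describe_result` is invented here, and only the shape of the result dict (an `access_token` key on success, `error`/`error_description` keys on failure, `None` when no cached token is found) comes from the text.

```python
# A minimal sketch of MSAL for Python's error-by-return-value pattern:
# token calls return a dict rather than raising on authentication failure.
# The helper name `describe_result` is illustrative, not part of MSAL.

def describe_result(result):
    """Classify a token result dict: success, error, or no cached token."""
    if result is None:
        # acquire_token_silent returns None when no suitable token is cached.
        return "no cached token; fall back to interactive sign-in"
    if "access_token" in result:
        return "success"
    # On failure, "error" is the machine-readable code to branch on;
    # "error_description" is human-readable and should only be logged.
    return "{}: {}".format(
        result.get("error", "unknown_error"),
        result.get("error_description", ""),
    )

failure = {
    "error": "invalid_grant",
    "error_description": "AADSTS70008: The provided grant has expired.",
}
print(describe_result(failure))
```

Branching on the `error` key rather than on the description text keeps the app robust when error messages change.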
active-directory Msal Js Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-sso.md
For more information about SSO, see:
- [MSAL.js prompt behavior](msal-js-prompt-behavior.md) - [Optional token claims](active-directory-optional-claims.md)-- [Configurable token lifetimes](active-directory-configurable-token-lifetimes.md)
+- [Configurable token lifetimes](configurable-token-lifetimes.md)
active-directory Msal Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-migration.md
MSAL supports a wide range of application types and scenarios. Please refer to [
ADAL to MSAL migration guides for different platforms are available at the following links. - [Migrate to MSAL iOS and MacOS](migrate-objc-adal-msal.md) - [Migrate to MSAL Java](migrate-adal-msal-java.md)-- [Migrate to MSAL .Net](msal-net-migration.md)
+- [Migrate to MSAL .NET](msal-net-migration.md)
- [Migrate to MSAL Node](msal-node-migration.md) - [Migrate to MSAL Python](migrate-python-adal-msal.md)
active-directory Reference Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-breaking-changes.md
Error 50105 (the current designation) is emitted when an unassigned user attempt
The error scenario has been updated, so that during non-interactive authentication (where `prompt=none` is used to hide UX), the app will be instructed to perform interactive authentication using an `interaction_required` error response. In the subsequent interactive authentication, Azure AD will now hold the user and show an error message directly, preventing a loop from occurring.
-As a reminder, your application code shouldn't make decisions based on error code strings like `AADSTS50105`. Instead, [follow our error-handling guidance](reference-aadsts-error-codes.md#handling-error-codes-in-your-application) and use the [standardized authentication responses](https://openid.net/specs/openid-connect-core-1_0.html#AuthError) like `interaction_required` and `login_required` found in the standard `error` field in the response. The other response fields are intended for consumption only by humans troubleshooting their issues.
+As a reminder, your application code shouldn't make decisions based on error code strings like `AADSTS50105`. Instead, [follow our error-handling guidance](reference-error-codes.md#handling-error-codes-in-your-application) and use the [standardized authentication responses](https://openid.net/specs/openid-connect-core-1_0.html#AuthError) like `interaction_required` and `login_required` found in the standard `error` field in the response. The other response fields are intended for consumption only by humans troubleshooting their issues.
You can review the current text of the 50105 error and more on the error lookup service: https://login.microsoftonline.com/error?code=50105.
active-directory Reference Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-error-codes.md
+
+ Title: Azure AD authentication & authorization error codes
+description: Learn about the AADSTS error codes that are returned from the Azure AD security token service (STS).
+ Last updated : 03/14/2023
+# Azure AD authentication and authorization error codes
+
+Looking for info about the AADSTS error codes that are returned from the Azure Active Directory (Azure AD) security token service (STS)? Read this document to find AADSTS error descriptions, fixes, and some suggested workarounds.
+
+> [!NOTE]
+> This information is preliminary and subject to change. Have a question or can't find what you're looking for? Create a GitHub issue or see [Support and help options for developers](./developer-support-help-options.md) to learn about other ways you can get help and support.
+>
+> This documentation is provided for developer and admin guidance, but should never be used by the client itself. Error codes are subject to change at any time in order to provide more granular error messages that are intended to help the developer while building their application. Apps that take a dependency on text or error code numbers will be broken over time.
+
+## Look up current error code information
+Error codes and messages are subject to change. For the most current info, take a look at the [https://login.microsoftonline.com/error](https://login.microsoftonline.com/error) page to find AADSTS error descriptions, fixes, and some suggested workarounds.
+
+For example, if you received the error code "AADSTS50058" then do a search in [https://login.microsoftonline.com/error](https://login.microsoftonline.com/error) for "50058". You can also link directly to a specific error by adding the error code number to the URL: [https://login.microsoftonline.com/error?code=50058](https://login.microsoftonline.com/error?code=50058).
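The lookup-URL construction described above can be sketched in a few lines. The helper name `error_lookup_url` is illustrative; only the URL format comes from the text.

```python
# Minimal sketch: build a direct error-lookup link from an AADSTS code
# string such as "AADSTS50058" (the helper name is illustrative).

def error_lookup_url(aadsts_code: str) -> str:
    # The lookup page takes the bare number, so strip any "AADSTS" prefix.
    number = aadsts_code.upper().removeprefix("AADSTS")
    return "https://login.microsoftonline.com/error?code=" + number

print(error_lookup_url("AADSTS50058"))
# https://login.microsoftonline.com/error?code=50058
```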
+
+## Handling error codes in your application
+
+The [OAuth2.0 spec](https://tools.ietf.org/html/rfc6749#section-5.2) provides guidance on how to handle errors during authentication using the `error` portion of the error response.
+
+Here's a sample error response:
+
+```json
+{
+ "error": "invalid_scope",
+ "error_description": "AADSTS70011: The provided value for the input parameter 'scope' isn't valid. The scope https://example.contoso.com/activity.read isn't valid.\r\nTrace ID: 255d1aef-8c98-452f-ac51-23d051240864\r\nCorrelation ID: fb3d2015-bc17-4bb9-bb85-30c5cf1aaaa7\r\nTimestamp: 2016-01-09 02:02:12Z",
+ "error_codes": [
+ 70011
+ ],
+ "timestamp": "2016-01-09 02:02:12Z",
+ "trace_id": "255d1aef-8c98-452f-ac51-23d051240864",
+ "correlation_id": "fb3d2015-bc17-4bb9-bb85-30c5cf1aaaa7",
+ "error_uri":"https://login.microsoftonline.com/error?code=70011"
+}
+```
+
+| Parameter | Description |
+|-|-|
+| `error` | An error code string that can be used to classify types of errors that occur, and should be used to react to errors. |
+| `error_description` | A specific error message that can help a developer identify the root cause of an authentication error. Never use this field to react to an error in your code. |
+| `error_codes` | A list of STS-specific error codes that can help in diagnostics. |
+| `timestamp` | The time at which the error occurred. |
+| `trace_id` | A unique identifier for the request that can help in diagnostics. |
+| `correlation_id` | A unique identifier for the request that can help in diagnostics across components. |
+| `error_uri` | A link to the error lookup page with additional information about the error. This is for developer usage only; don't present it to users. Only present when the error lookup system has additional information about the error - not all errors have additional information provided.|
+
+The `error` field has several possible values - review the protocol documentation links and OAuth 2.0 specs to learn more about specific errors (for example, `authorization_pending` in the [device code flow](v2-oauth2-device-code.md)) and how to react to them. Some common ones are listed here:
+
+| Error Code | Description | Client Action |
+|--|--|--|
+| `invalid_request` | Protocol error, such as a missing required parameter. | Fix and resubmit the request.|
+| `invalid_grant` | Some of the authentication material (auth code, refresh token, access token, PKCE challenge) was invalid, unparseable, missing, or otherwise unusable. | Try a new request to the `/authorize` endpoint to get a new authorization code. Consider reviewing and validating the app's use of the protocols. |
+| `unauthorized_client` | The authenticated client isn't authorized to use this authorization grant type. | This usually occurs when the client application isn't registered in Azure AD or isn't added to the user's Azure AD tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. |
+| `invalid_client` | Client authentication failed. | The client credentials aren't valid. To fix, the application administrator updates the credentials. |
+| `unsupported_grant_type` | The authorization server doesn't support the authorization grant type. | Change the grant type in the request. This type of error should occur only during development and be detected during initial testing. |
+| `invalid_resource` | The target resource is invalid because it doesn't exist, Azure AD can't find it, or it's not correctly configured. | This indicates the resource, if it exists, hasn't been configured in the tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. During development, this usually indicates an incorrectly set up test tenant or a typo in the name of the scope being requested. |
+| `interaction_required` | The request requires user interaction. For example, an additional authentication step is required. | Retry the request with the same resource, interactively, so that the user can complete any challenges required. |
+| `temporarily_unavailable` | The server is temporarily too busy to handle the request. | Retry the request. The client application might explain to the user that its response is delayed because of a temporary condition. |
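The client actions in the table above can be sketched as a dispatch on the standard `error` field. This is an illustrative sketch, not a prescribed implementation: the set names and action strings are assumptions, and a real app would perform the actions rather than return labels.

```python
# Illustrative dispatch on the standard OAuth 2.0 `error` field, following
# the table above: branch on the machine-readable code, never on the text
# of `error_description`. Set names and action strings are assumptions.

NEEDS_INTERACTION = {"interaction_required", "login_required"}
RETRYABLE = {"temporarily_unavailable"}

def next_action(error_response: dict) -> str:
    code = error_response.get("error")
    if code in NEEDS_INTERACTION:
        return "reauthenticate interactively"
    if code in RETRYABLE:
        return "retry the request with backoff"
    if code == "invalid_grant":
        return "request a new authorization code"
    return "fail and log trace_id/correlation_id for diagnostics"

print(next_action({"error": "interaction_required"}))
```

Keying the logic to the `error` field, as the guidance above recommends, means the app keeps working even when AADSTS error numbers or description text change.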
+
+## AADSTS error codes
+
+| Error | Description |
+|--|--|
+| AADSTS16000 | SelectUserAccount - This is an interrupt thrown by Azure AD, which results in UI that allows the user to select from among multiple valid SSO sessions. This error is fairly common and may be returned to the application if `prompt=none` is specified. |
+| AADSTS16001 | UserAccountSelectionInvalid - You'll see this error if the user selects a tile that the session select logic has rejected. When triggered, this error allows the user to recover by picking from an updated list of tiles/sessions, or by choosing another account. This error can occur because of a code defect or race condition. |
+| AADSTS16002 | AppSessionSelectionInvalid - The app-specified SID requirement wasn't met. |
+| AADSTS16003 | SsoUserAccountNotFoundInResourceTenant - Indicates that the user hasn't been explicitly added to the tenant. |
+| AADSTS17003 | CredentialKeyProvisioningFailed - Azure AD can't provision the user key. |
+| AADSTS20001 | WsFedSignInResponseError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
+| AADSTS20012 | WsFedMessageInvalid - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
+| AADSTS20033 | FedMetadataInvalidTenantName - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
+| AADSTS28002 | Provided value for the input parameter scope '{scope}' isn't valid when requesting an access token. Specify a valid scope. |
+| AADSTS28003 | Provided value for the input parameter scope can't be empty when requesting an access token using the provided authorization code. Specify a valid scope.|
+| AADSTS40008 | OAuth2IdPUnretryableServerError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
+| AADSTS40009 | OAuth2IdPRefreshTokenRedemptionUserError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
+| AADSTS40010 | OAuth2IdPRetryableServerError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
+| AADSTS40015 | OAuth2IdPAuthCodeRedemptionUserError - There's an issue with your federated Identity Provider. Contact your IDP to resolve this issue. |
+| AADSTS50000 | TokenIssuanceError - There's an issue with the sign-in service. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) to resolve this issue. |
+| AADSTS50001 | InvalidResource - The resource is disabled or doesn't exist. Check your app's code to ensure that you have specified the exact resource URL for the resource you're trying to access. |
+| AADSTS50002 | NotAllowedTenant - Sign-in failed because of a restricted proxy access on the tenant. If it's your own tenant policy, you can change your restricted tenant settings to fix this issue. |
+| AADSTS500011 | InvalidResourceServicePrincipalNotFound - The resource principal named {name} was not found in the tenant named {tenant}. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You might have sent your authentication request to the wrong tenant. If you expect the app to be installed, you may need to provide administrator permissions to add it. Check with the developers of the resource and application to understand what the right setup for your tenant is. |
+| AADSTS500021 | Access to '{tenant}' tenant is denied. AADSTS500021 indicates that the tenant restriction feature is configured and that the user is trying to access a tenant that isn't in the list of allowed tenants specified in the header `Restrict-Access-To-Tenant`. For more information, see [Use tenant restrictions to manage access to SaaS cloud applications](../manage-apps/tenant-restrictions.md).|
+| AADSTS500022 | Access to '{tenant}' tenant is denied. AADSTS500022 indicates that the tenant restriction feature is configured and that the user is trying to access a tenant that isn't in the list of allowed tenants specified in the header `Restrict-Access-To-Tenant`. For more information, see [Use tenant restrictions to manage access to SaaS cloud applications](../manage-apps/tenant-restrictions.md).|
+| AADSTS50003 | MissingSigningKey - Sign-in failed because of a missing signing key or certificate. This might be because there was no signing key configured in the app. To learn more, see the troubleshooting article for error [AADSTS50003](/troubleshoot/azure/active-directory/error-code-aadsts50003-cert-or-key-not-configured). If you still see issues, contact the app owner or an app admin. |
+| AADSTS50005 | DevicePolicyError - User tried to log in to a device from a platform that's currently not supported through Conditional Access policy. |
+| AADSTS50006 | InvalidSignature - Signature verification failed because of an invalid signature. |
+| AADSTS50007 | PartnerEncryptionCertificateMissing - The partner encryption certificate was not found for this app. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Microsoft to get this fixed. |
+| AADSTS50008 | InvalidSamlToken - SAML assertion is missing or misconfigured in the token. Contact your federation provider. |
+| AADSTS50010 | AudienceUriValidationFailed - Audience URI validation for the app failed since no token audiences were configured. |
+| AADSTS50011 | InvalidReplyTo - The reply address is missing, misconfigured, or doesn't match reply addresses configured for the app. As a resolution, add the missing reply address to the Azure Active Directory application, or have someone with the permissions to manage your application in Active Directory do this for you. To learn more, see the troubleshooting article for error [AADSTS50011](/troubleshoot/azure/active-directory/error-code-aadsts50011-reply-url-mismatch).|
+| AADSTS50012 | AuthenticationFailed - Authentication failed for one of the following reasons:<ul><li>The subject name of the signing certificate isn't authorized</li><li>A matching trusted authority policy was not found for the authorized subject name</li><li>The certificate chain isn't valid</li><li>The signing certificate isn't valid</li><li>Policy isn't configured on the tenant</li><li>Thumbprint of the signing certificate isn't authorized</li><li>Client assertion contains an invalid signature</li></ul> |
+| AADSTS50013 | InvalidAssertion - The assertion is invalid for one of several reasons: the token issuer doesn't match the API version, the token is expired, malformed, or outside its valid time range, or the refresh token in the assertion isn't a primary refresh token. Contact the app developer. |
+| AADSTS50014 | GuestUserInPendingState - The user account doesn't exist in the directory. An application likely chose the wrong tenant to sign into, and the currently logged in user was prevented from doing so since they did not exist in your tenant. If this user should be able to log in, add them as a guest. For further information, please visit [add B2B users](/azure/active-directory/b2b/add-users-administrator). |
+| AADSTS50015 | ViralUserLegalAgeConsentRequiredState - The user requires legal age group consent. |
+| AADSTS50017 | CertificateValidationFailed - Certificate validation failed for one of the following reasons:<ul><li>Cannot find issuing certificate in trusted certificates list</li><li>Unable to find expected CrlSegment</li><li>Delta CRL distribution point is configured without a corresponding CRL distribution point</li><li>Unable to retrieve valid CRL segments because of a timeout issue</li><li>Unable to download CRL</li></ul>Contact the tenant admin. |
+| AADSTS50020 | UserUnauthorized - Users are unauthorized to call this endpoint. User account '{email}' from identity provider '{idp}' does not exist in tenant '{tenant}' and cannot access the application '{appid}'({appName}) in that tenant. This account needs to be added as an external user in the tenant first. Sign out and sign in again with a different Azure Active Directory user account. If this user should be a member of the tenant, they should be invited via the [B2B system](/azure/active-directory/b2b/add-users-administrator). For additional information, visit [AADSTS50020](/troubleshoot/azure/active-directory/error-code-aadsts50020-user-account-identity-provider-does-not-exist). |
+| AADSTS500212 | NotAllowedByOutboundPolicyTenant - The user's administrator has set an outbound access policy that doesn't allow access to the resource tenant. |
+| AADSTS500213 | NotAllowedByInboundPolicyTenant - The resource tenant's cross-tenant access policy doesn't allow this user to access this tenant. |
+| AADSTS50027 | InvalidJwtToken - Invalid JWT token because of the following reasons:<ul><li>doesn't contain nonce claim, sub claim</li><li>subject identifier mismatch</li><li>duplicate claim in idToken claims</li><li>unexpected issuer</li><li>unexpected audience</li><li>not within its valid time range </li><li>token format isn't proper</li><li>External ID token from issuer failed signature verification.</li></ul> |
+| AADSTS50029 | Invalid URI - domain name contains invalid characters. Contact the tenant admin. |
+| AADSTS50032 | WeakRsaKey - Indicates that the user attempted to use a weak RSA key. |
+| AADSTS50033 | RetryableError - Indicates a transient error not related to the database operations. |
+| AADSTS50034 | UserAccountNotFound - To sign into this application, the account must be added to the directory. This error can occur because the user mis-typed their username, or isn't in the tenant. An application may have chosen the wrong tenant to sign into, and the currently logged in user was prevented from doing so since they did not exist in your tenant. If this user should be able to log in, add them as a guest. See docs here: [Add B2B users](../external-identities/add-users-administrator.md). |
+| AADSTS50042 | UnableToGeneratePairwiseIdentifierWithMissingSalt - The salt required to generate a pairwise identifier is missing in principle. Contact the tenant admin. |
+| AADSTS50043 | UnableToGeneratePairwiseIdentifierWithMultipleSalts |
+| AADSTS50048 | SubjectMismatchesIssuer - Subject mismatches Issuer claim in the client assertion. Contact the tenant admin. |
+| AADSTS50049 | NoSuchInstanceForDiscovery - Unknown or invalid instance. |
+| AADSTS50050 | MalformedDiscoveryRequest - The request is malformed. |
+| AADSTS50053 | This error can result from two different reasons: <br><ul><li>IdsLocked - The account is locked because the user tried to sign in too many times with an incorrect user ID or password. The user is blocked due to repeated sign-in attempts. See [Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md).</li><li>Or, sign-in was blocked because it came from an IP address with malicious activity.</li></ul> <br>To determine which failure reason caused this error, sign in to the [Azure portal](https://portal.azure.com). Navigate to your Azure AD tenant and then **Monitoring** -> **Sign-ins**. Find the failed user sign-in with **Sign-in error code** 50053 and check the **Failure reason**.|
+| AADSTS50055 | InvalidPasswordExpiredPassword - The password is expired. The user's password is expired, and therefore their login or session was ended. They will be offered the opportunity to reset it, or may ask an admin to reset it via [Reset a user's password using Azure Active Directory](../fundamentals/active-directory-users-reset-password-azure-portal.md). |
+| AADSTS50056 | Invalid or null password: password doesn't exist in the directory for this user. The user should be asked to enter their password again. |
+| AADSTS50057 | UserDisabled - The user account is disabled. The user object in Active Directory backing this account has been disabled. An admin can re-enable this account [through PowerShell](/powershell/module/activedirectory/enable-adaccount) |
+| AADSTS50058 | UserInformationNotProvided - Session information isn't sufficient for single-sign-on. This means that a user isn't signed in. This is a common error that's expected when a user is unauthenticated and has not yet signed in.</br>If this error is encountered in an SSO context where the user has previously signed in, this means that the SSO session was either not found or invalid.</br>This error may be returned to the application if prompt=none is specified. |
+| AADSTS50059 | MissingTenantRealmAndNoUserInformationProvided - Tenant-identifying information was not found in either the request or implied by any provided credentials. The user can contact the tenant admin to help resolve the issue. |
+| AADSTS50061 | SignoutInvalidRequest - Unable to complete sign out. The request was invalid. |
+| AADSTS50064 | CredentialAuthenticationError - Credential validation on username or password has failed. |
+| AADSTS50068 | SignoutInitiatorNotParticipant - Sign out has failed. The app that initiated sign out isn't a participant in the current session. |
+| AADSTS50070 | SignoutUnknownSessionIdentifier - Sign out has failed. The sign out request specified a name identifier that didn't match the existing session(s). |
+| AADSTS50071 | SignoutMessageExpired - The logout request has expired. |
+| AADSTS50072 | UserStrongAuthEnrollmentRequiredInterrupt - User needs to enroll for second factor authentication (interactive). |
+| AADSTS50074 | UserStrongAuthClientAuthNRequiredInterrupt - Strong authentication is required and the user did not pass the MFA challenge. |
+| AADSTS50076 | UserStrongAuthClientAuthNRequired - Due to a configuration change made by the admin such as a Conditional Access policy, per-user enforcement, or because you moved to a new location, the user must use multi-factor authentication to access the resource. Retry with a new authorize request for the resource. |
+| AADSTS50078 | UserStrongAuthExpired - The presented multi-factor authentication has expired due to policies configured by your administrator; you must refresh your multi-factor authentication to access '{resource}'.|
+| AADSTS50079 | UserStrongAuthEnrollmentRequired - Due to a configuration change made by the admin such as a Conditional Access policy, per-user enforcement, or because the user moved to a new location, the user is required to use multi-factor authentication. Either a managed user needs to register security info to complete multi-factor authentication, or a federated user needs to get the multi-factor claim from the federated identity provider. |
+| AADSTS50085 | Refresh token needs social IDP login. Have the user try signing in again with username and password. |
+| AADSTS50086 | SasNonRetryableError |
+| AADSTS50087 | SasRetryableError - A transient error has occurred during strong authentication. Please try again. |
+| AADSTS50088 | Limit on telecom MFA calls reached. Please try again in a few minutes. |
+| AADSTS50089 | Authentication failed due to flow token expired. Expected - auth codes, refresh tokens, and sessions expire over time or are revoked by the user or an admin. The app will request a new login from the user. |
+| AADSTS50097 | DeviceAuthenticationRequired - Device authentication is required. |
+| AADSTS50099 | PKeyAuthInvalidJwtUnauthorized - The JWT signature is invalid. |
+| AADSTS50105 | EntitlementGrantsNotFound - The signed in user isn't assigned to a role for the signed in app. Assign the user to the app. To learn more, see the troubleshooting article for error [AADSTS50105](/troubleshoot/azure/active-directory/error-code-aadsts50105-user-not-assigned-role). |
+| AADSTS50107 | InvalidRealmUri - The requested federation realm object doesn't exist. Contact the tenant admin. |
+| AADSTS50120 | ThresholdJwtInvalidJwtFormat - Issue with JWT header. Contact the tenant admin. |
+| AADSTS50124 | ClaimsTransformationInvalidInputParameter - Claims Transformation contains invalid input parameter. Contact the tenant admin to update the policy. |
+| AADSTS501241 | Mandatory Input '{paramName}' missing from transformation ID '{transformId}'. This error is returned while Azure AD is trying to build a SAML response to the application. NameID claim or NameIdentifier is mandatory in SAML response and if Azure AD failed to get source attribute for NameID claim, it will return this error. As a resolution, ensure you add claim rules in *Azure portal* > *Azure Active Directory* > *Enterprise Applications* > *Select your application* > *Single Sign-On* > *User Attributes & Claims* > *Unique User Identifier (Name ID)*. |
+| AADSTS50125 | PasswordResetRegistrationRequiredInterrupt - Sign-in was interrupted because of a password reset or password registration entry. |
+| AADSTS50126 | InvalidUserNameOrPassword - Error validating credentials due to invalid username or password. The user didn't enter the right credentials. It's expected to see some number of these errors in your logs due to users making mistakes. |
+| AADSTS50127 | BrokerAppNotInstalled - User needs to install a broker app to gain access to this content. |
+| AADSTS50128 | Invalid domain name - No tenant-identifying information found in either the request or implied by any provided credentials. |
+| AADSTS50129 | DeviceIsNotWorkplaceJoined - Workplace join is required to register the device. |
+| AADSTS50131 | ConditionalAccessFailed - Indicates various Conditional Access errors such as bad Windows device state, request blocked due to suspicious activity, access policy, or security policy decisions. |
+| AADSTS50132 | SsoArtifactInvalidOrExpired - The session isn't valid due to password expiration or recent password change. |
+| AADSTS50133 | SsoArtifactRevoked - The session isn't valid due to password expiration or recent password change. |
+| AADSTS50134 | DeviceFlowAuthorizeWrongDatacenter - Wrong data center. To authorize a request that was initiated by an app in the OAuth 2.0 device flow, the authorizing party must be in the same data center where the original request resides. |
+| AADSTS50135 | PasswordChangeCompromisedPassword - Password change is required due to account risk. |
+| AADSTS50136 | RedirectMsaSessionToApp - Single MSA session detected. |
+| AADSTS50139 | SessionMissingMsaOAuth2RefreshToken - The session is invalid due to a missing external refresh token. |
+| AADSTS50140 | KmsiInterrupt - This error occurred due to a "Keep me signed in" interrupt when the user was signing in. This is an expected part of the sign-in flow, where a user is asked if they want to remain signed in to their current browser to make further sign-ins easier. For more information, see [The new Azure AD sign-in and "Keep me signed in" experiences rolling out now!](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/the-new-azure-ad-sign-in-and-keep-me-signed-in-experiences/m-p/128267). You can [open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details.|
+| AADSTS50143 | Session mismatch - Session is invalid because user tenant doesn't match the domain hint due to different resource. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details. |
+| AADSTS50144 | InvalidPasswordExpiredOnPremPassword - User's Active Directory password has expired. Generate a new password for the user or have the user use the self-service reset tool to reset their password. |
+| AADSTS50146 | MissingCustomSigningKey - This app is required to be configured with an app-specific signing key. It is either not configured with one, or the key has expired or isn't yet valid. Please contact the owner of the application. |
+| AADSTS501461 | AcceptMappedClaims is only supported for a token audience matching the application GUID or an audience within the tenant's verified domains. Either change the resource identifier, or use an application-specific signing key. |
+| AADSTS50147 | MissingCodeChallenge - The size of the code challenge parameter isn't valid. |
+| AADSTS501481 | The Code_Verifier doesn't match the code_challenge supplied in the authorization request.|
+| AADSTS501491 | InvalidCodeChallengeMethodInvalidSize - Invalid size of Code_Challenge parameter.|
+| AADSTS50155 | DeviceAuthenticationFailed - Device authentication failed for this user. |
+| AADSTS50158 | ExternalSecurityChallenge - External security challenge was not satisfied. |
+| AADSTS50161 | InvalidExternalSecurityChallengeConfiguration - The claims sent by the external provider aren't sufficient, or a claim requested from the external provider is missing. |
+| AADSTS50166 | ExternalClaimsProviderThrottled - Failed to send the request to the claims provider. |
+| AADSTS50168 | ChromeBrowserSsoInterruptRequired - The client is capable of obtaining an SSO token through the Windows 10 Accounts extension, but the token wasn't found in the request or the supplied token had expired. |
+| AADSTS50169 | InvalidRequestBadRealm - The realm isn't a configured realm of the current service namespace. |
+| AADSTS50170 | MissingExternalClaimsProviderMapping - The external controls mapping is missing. |
+| AADSTS50173 | FreshTokenNeeded - The provided grant has expired due to it being revoked, and a fresh auth token is needed. Either an admin or a user revoked the tokens for this user, causing subsequent token refreshes to fail and require reauthentication. Have the user sign in again. |
+| AADSTS50177 | ExternalChallengeNotSupportedForPassthroughUsers - External challenge isn't supported for passthrough users. |
+| AADSTS50178 | SessionControlNotSupportedForPassthroughUsers - Session control isn't supported for passthrough users. |
+| AADSTS50180 | WindowsIntegratedAuthMissing - Integrated Windows authentication is needed. Enable the tenant for Seamless SSO. |
+| AADSTS50187 | DeviceInformationNotProvided - The service failed to perform device authentication. |
+| AADSTS50194 | Application '{appId}'({appName}) isn't configured as a multi-tenant application. Usage of the /common endpoint isn't supported for such applications created after '{time}'. Use a tenant-specific endpoint or configure the application to be multi-tenant. |
+| AADSTS50196 | LoopDetected - A client loop has been detected. Check the app's logic to ensure that token caching is implemented, and that error conditions are handled correctly. The app has made too many of the same request in too short a period, indicating that it is in a faulty state or is abusively requesting tokens. |
+| AADSTS50197 | ConflictingIdentities - The user could not be found. Try signing in again. |
+| AADSTS50199 | CmsiInterrupt - For security reasons, user confirmation is required for this request. Interrupt is shown for all scheme redirects in mobile browsers. <br />No action required. The user was asked to confirm that this app is the application they intended to sign into. <br />This is a security feature that helps prevent spoofing attacks. This occurs because a system webview has been used to request a token for a native application. <br />To avoid this prompt, the redirect URI should be part of the following safe list: <br />http://<br />https://<br />chrome-extension:// (desktop Chrome browser only) |
+| AADSTS51000 | RequiredFeatureNotEnabled - The feature is disabled. |
+| AADSTS51001 | DomainHintMustbePresent - Domain hint must be present with on-premises security identifier or on-premises UPN. |
+| AADSTS1000104| XCB2BResourceCloudNotAllowedOnIdentityTenant - Resource cloud {resourceCloud} isn't allowed on identity tenant {identityTenant}. {resourceCloud} is the cloud instance that owns the resource; {identityTenant} is the tenant that the signing-in identity originates from. |
+| AADSTS51004 | UserAccountNotInDirectory - The user account doesn't exist in the directory. An application likely chose the wrong tenant to sign into, and the currently logged in user was prevented from doing so since they did not exist in your tenant. If this user should be able to log in, add them as a guest. For further information, please visit [add B2B users](/azure/active-directory/b2b/add-users-administrator). |
+| AADSTS51005 | TemporaryRedirect - Equivalent to HTTP status 307, which indicates that the requested information is located at the URI specified in the location header. When you receive this status, follow the location header associated with the response. When the original request method was POST, the redirected request will also use the POST method. |
+| AADSTS51006 | ForceReauthDueToInsufficientAuth - Integrated Windows authentication is needed. User logged in using a session token that is missing the integrated Windows authentication claim. Request the user to log in again. |
+| AADSTS52004 | DelegationDoesNotExistForLinkedIn - The user has not provided consent for access to LinkedIn resources. |
+| AADSTS53000 | DeviceNotCompliant - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune. For additional information, please visit [Conditional Access device remediation](../conditional-access/troubleshoot-conditional-access.md). |
+| AADSTS53001 | DeviceNotDomainJoined - Conditional Access policy requires a domain joined device, and the device isn't domain joined. Have the user use a domain joined device. |
+| AADSTS53002 | ApplicationUsedIsNotAnApprovedApp - The app used isn't an approved app for Conditional Access. The user needs to use one of the apps from the list of approved apps in order to get access. |
+| AADSTS53003 | BlockedByConditionalAccess - Access has been blocked by Conditional Access policies. The access policy does not allow token issuance. If this is unexpected, see the conditional access policy that applied to this request in the Azure Portal or contact your administrator. For additional information, please visit [troubleshooting sign-in with Conditional Access](../conditional-access/troubleshoot-conditional-access.md). |
+| AADSTS53004 | ProofUpBlockedDueToRisk - User needs to complete the multi-factor authentication registration process before accessing this content. User should register for multi-factor authentication. |
+| AADSTS53010 | ProofUpBlockedDueToSecurityInfoAcr - Cannot configure multi-factor authentication methods because the organization requires this information to be set from specific locations or devices. |
+| AADSTS53011 | User blocked due to risk on home tenant. |
+| AADSTS530034 | DelegatedAdminBlockedDueToSuspiciousActivity - A delegated administrator was blocked from accessing the tenant due to account risk in their home tenant. |
+| AADSTS54000 | MinorUserBlockedLegalAgeGroupRule |
+| AADSTS54005 | OAuth2 Authorization code was already redeemed, please retry with a new valid code or use an existing refresh token. |
+| AADSTS65001 | DelegationDoesNotExist - The user or administrator has not consented to use the application with ID X. Send an interactive authorization request for this user and resource. |
+| AADSTS65002 | Consent between first party application '{applicationId}' and first party resource '{resourceId}' must be configured via preauthorization - applications owned and operated by Microsoft must get approval from the API owner before requesting tokens for that API. A developer in your tenant may be attempting to reuse an App ID owned by Microsoft. This error prevents them from impersonating a Microsoft application to call other APIs. They must move to another app ID they register in https://portal.azure.com.|
+| AADSTS65004 | UserDeclinedConsent - User declined to consent to access the app. Have the user retry the sign-in and consent to the app|
+| AADSTS65005 | MisconfiguredApplication - The app required resource access list does not contain apps discoverable by the resource or The client app has requested access to resource, which was not specified in its required resource access list or Graph service returned bad request or resource not found. If the app supports SAML, you may have configured the app with the wrong Identifier (Entity). To learn more, see the troubleshooting article for error [AADSTS650056](/troubleshoot/azure/active-directory/error-code-aadsts650056-misconfigured-app). |
+| AADSTS650052 | The app needs access to a service `(\"{name}\")` that your organization `\"{organization}\"` has not subscribed to or enabled. Contact your IT Admin to review the configuration of your service subscriptions. |
+| AADSTS650054 | The application asked for permissions to access a resource that has been removed or is no longer available. Make sure that all resources the app is calling are present in the tenant you're operating in. |
+| AADSTS650056 | Misconfigured application. This could be due to one of the following: the client has not listed any permissions for '{name}' in the requested permissions in the client's application registration. Or, the admin has not consented in the tenant. Or, check the application identifier in the request to ensure it matches the configured client application identifier. Or, check the certificate in the request to ensure it's valid. Client app ID: {ID}. Please contact your admin to fix the configuration or consent on behalf of the tenant.|
+| AADSTS650057 | Invalid resource. The client has requested access to a resource which isn't listed in the requested permissions in the client's application registration. Client app ID: {appId}({appName}). Resource value from request: {resource}. Resource app ID: {resourceAppId}. List of valid resources from app registration: {regList}. |
+| AADSTS67003 | ActorNotValidServiceIdentity |
+| AADSTS70000 | InvalidGrant - Authentication failed. The refresh token isn't valid. Error may be due to the following reasons:<ul><li>Token binding header is empty</li><li>Token binding hash does not match</li></ul> |
+| AADSTS70001 | UnauthorizedClient - The application is disabled. To learn more, see the troubleshooting article for error [AADSTS70001](/troubleshoot/azure/active-directory/error-code-aadsts70001-app-not-found-in-directory). |
+| AADSTS700011 | UnauthorizedClientAppNotFoundInOrgIdTenant - Application with identifier {appIdentifier} was not found in the directory. A client application requested a token from your tenant, but the client app doesn't exist in your tenant, so the call failed. |
+| AADSTS70002 | InvalidClient - Error validating the credentials. The specified client_secret does not match the expected value for this client. Correct the client_secret and try again. For more info, see [Use the authorization code to request an access token](v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token). |
+| AADSTS700025 | InvalidClientPublicClientWithCredential - Client is public so neither 'client_assertion' nor 'client_secret' should be presented. |
+| AADSTS700027| Client assertion failed signature validation. Developer error - the app is attempting to sign in without the necessary or correct authentication parameters.|
+| AADSTS70003 | UnsupportedGrantType - The app returned an unsupported grant type. |
+| AADSTS700030 | Invalid certificate - subject name in certificate isn't authorized. SubjectNames/SubjectAlternativeNames (up to 10) in token certificate are: {certificateSubjects}. |
+| AADSTS70004 | InvalidRedirectUri - The app returned an invalid redirect URI. The redirect address specified by the client does not match any configured addresses or any addresses on the OIDC approve list. |
+| AADSTS70005 | UnsupportedResponseType - The app returned an unsupported response type due to the following reasons:<ul><li>response type 'token' isn't enabled for the app</li><li>response type 'id_token' requires the 'OpenID' scope</li><li>the request contains an unsupported OAuth parameter value in the encoded wctx</li></ul> |
+| AADSTS700054 | Response_type 'id_token' isn't enabled for the application. The application requested an ID token from the authorization endpoint, but did not have ID token implicit grant enabled. Go to Azure portal > Azure Active Directory > App registrations > Select your application > Authentication > Under 'Implicit grant and hybrid flows', make sure 'ID tokens' is selected.|
+| AADSTS70007 | UnsupportedResponseMode - The app returned an unsupported value of `response_mode` when requesting a token. |
+| AADSTS70008 | ExpiredOrRevokedGrant - The refresh token has expired due to inactivity. The token was issued on XXX and was inactive for a certain amount of time. |
+| AADSTS700082 | ExpiredOrRevokedGrantInactiveToken - The refresh token has expired due to inactivity. The token was issued on {issueDate} and was inactive for {time}. Expected part of the token lifecycle - the user went an extended period of time without using the application, so the token was expired when the app attempted to refresh it. |
+| AADSTS700084 | The refresh token was issued to a single page app (SPA), and therefore has a fixed, limited lifetime of {time}, which can't be extended. It is now expired and a new sign in request must be sent by the SPA to the sign in page. The token was issued on {issueDate}.|
+| AADSTS70011 | InvalidScope - The scope requested by the app is invalid. |
+| AADSTS70012 | MsaServerError - A server error occurred while authenticating an MSA (consumer) user. Try again. If it continues to fail, [open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) |
+| AADSTS70016 | AuthorizationPending - OAuth 2.0 device flow error. Authorization is pending. The device will retry polling the request. |
+| AADSTS70018 | BadVerificationCode - Invalid verification code due to the user typing in the wrong user code for device code flow. Authorization isn't approved. |
+| AADSTS70019 | CodeExpired - Verification code expired. Have the user retry the sign-in. |
+| AADSTS70043 | The refresh token has expired or is invalid due to sign-in frequency checks by conditional access. The token was issued on {issueDate} and the maximum allowed lifetime for this request is {time}. |
+| AADSTS75001 | BindingSerializationError - An error occurred during SAML message binding. |
+| AADSTS75003 | UnsupportedBindingError - The app returned an error related to unsupported binding (SAML protocol response can't be sent via bindings other than HTTP POST). |
+| AADSTS75005 | Saml2MessageInvalid - Azure AD doesn't support the SAML request sent by the app for SSO. To learn more, see the troubleshooting article for error [AADSTS75005](/troubleshoot/azure/active-directory/error-code-aadsts75005-not-a-valid-saml-request). |
+| AADSTS7500514 | A supported type of SAML response was not found. The supported response types are 'Response' (in XML namespace 'urn:oasis:names:tc:SAML:2.0:protocol') or 'Assertion' (in XML namespace 'urn:oasis:names:tc:SAML:2.0:assertion'). Application error - the developer will handle this error.|
+| AADSTS750054 | SAMLRequest or SAMLResponse must be present as query string parameters in HTTP request for SAML Redirect binding. To learn more, see the troubleshooting article for error [AADSTS750054](/troubleshoot/azure/active-directory/error-code-aadsts750054-saml-request-not-present). |
+| AADSTS75008 | RequestDeniedError - The request from the app was denied since the SAML request had an unexpected destination. |
+| AADSTS75011 | NoMatchedAuthnContextInOutputClaims - The authentication method by which the user authenticated with the service doesn't match requested authentication method. To learn more, see the troubleshooting article for error [AADSTS75011](/troubleshoot/azure/active-directory/error-code-aadsts75011-auth-method-mismatch). |
+| AADSTS75016 | Saml2AuthenticationRequestInvalidNameIDPolicy - SAML2 Authentication Request has invalid NameIdPolicy. |
+| AADSTS76026 | RequestIssueTimeExpired - IssueTime in an SAML2 Authentication Request is expired. |
+| AADSTS80001 | OnPremiseStoreIsNotAvailable - The Authentication Agent is unable to connect to Active Directory. Make sure that agent servers are members of the same AD forest as the users whose passwords need to be validated and they are able to connect to Active Directory. |
+| AADSTS80002 | OnPremisePasswordValidatorRequestTimedout - Password validation request timed out. Make sure that Active Directory is available and responding to requests from the agents. |
+| AADSTS80005 | OnPremisePasswordValidatorUnpredictableWebException - An unknown error occurred while processing the response from the Authentication Agent. Retry the request. If it continues to fail, [open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) to get more details on the error. |
+| AADSTS80007 | OnPremisePasswordValidatorErrorOccurredOnPrem - The Authentication Agent is unable to validate user's password. Check the agent logs for more info and verify that Active Directory is operating as expected. |
+| AADSTS80010 | OnPremisePasswordValidationEncryptionException - The Authentication Agent is unable to decrypt password. |
+| AADSTS80012 | OnPremisePasswordValidationAccountLogonInvalidHours - The user attempted to log on outside of the allowed hours (specified in AD). |
+| AADSTS80013 | OnPremisePasswordValidationTimeSkew - The authentication attempt could not be completed due to time skew between the machine running the authentication agent and AD. Fix time sync issues. |
+| AADSTS80014 | OnPremisePasswordValidationAuthenticationAgentTimeout - Validation request responded after maximum elapsed time exceeded. Open a support ticket with the error code, correlation ID, and timestamp to get more details on this error. |
+| AADSTS81004 | DesktopSsoIdentityInTicketIsNotAuthenticated - Kerberos authentication attempt failed. |
+| AADSTS81005 | DesktopSsoAuthenticationPackageNotSupported - The authentication package isn't supported. |
+| AADSTS81006 | DesktopSsoNoAuthorizationHeader - No authorization header was found. |
+| AADSTS81007 | DesktopSsoTenantIsNotOptIn - The tenant isn't enabled for Seamless SSO. |
+| AADSTS81009 | DesktopSsoAuthorizationHeaderValueWithBadFormat - Unable to validate user's Kerberos ticket. |
+| AADSTS81010 | DesktopSsoAuthTokenInvalid - Seamless SSO failed because the user's Kerberos ticket has expired or is invalid. |
+| AADSTS81011 | DesktopSsoLookupUserBySidFailed - Unable to find user object based on information in the user's Kerberos ticket. |
+| AADSTS81012 | DesktopSsoMismatchBetweenTokenUpnAndChosenUpn - The user trying to sign in to Azure AD is different from the user signed into the device. |
+| AADSTS90002 | InvalidTenantName - The tenant name wasn't found in the data store. Check to make sure you have the correct tenant ID. The application developer will receive this error if their app attempts to sign into a tenant that we cannot find. Often, this is because a cross-cloud app was used against the wrong cloud, or the developer attempted to sign in to a tenant derived from an email address, but the domain isn't registered. |
+| AADSTS90004 | InvalidRequestFormat - The request isn't properly formatted. |
+| AADSTS90005 | InvalidRequestWithMultipleRequirements - Unable to complete the request. The request isn't valid because the identifier and login hint can't be used together. |
+| AADSTS90006 | ExternalServerRetryableError - The service is temporarily unavailable.|
+| AADSTS90007 | InvalidSessionId - Bad request. The passed session ID can't be parsed. |
+| AADSTS90008 | TokenForItselfRequiresGraphPermission - The user or administrator hasn't consented to use the application. At the minimum, the application requires access to Azure AD by specifying the sign-in and read user profile permission. |
+| AADSTS90009 | TokenForItselfMissingIdenticalAppIdentifier - The application is requesting a token for itself. This scenario is supported only if the resource that's specified is using the GUID-based application ID. |
+| AADSTS90010 | NotSupported - Unable to create the algorithm. |
+| AADSTS9001023 |The grant type isn't supported over the /common or /consumers endpoints. Please use the /organizations or tenant-specific endpoint.|
+| AADSTS90012 | RequestTimeout - The request has timed out. |
+| AADSTS90013 | InvalidUserInput - The input from the user isn't valid. |
+| AADSTS90014 | MissingRequiredField - This error code may appear in various cases when an expected field isn't present in the credential. |
+| AADSTS900144 | The request body must contain the following parameter: '{name}'. Developer error - the app is attempting to sign in without the necessary or correct authentication parameters.|
+| AADSTS90015 | QueryStringTooLong - The query string is too long. |
+| AADSTS90016 | MissingRequiredClaim - The access token isn't valid. The required claim is missing. |
+| AADSTS90019 | MissingTenantRealm - Azure AD was unable to determine the tenant identifier from the request. |
+| AADSTS90020 | The SAML 1.1 Assertion is missing ImmutableID of the user. Developer error - the app is attempting to sign in without the necessary or correct authentication parameters.|
+| AADSTS90022 | AuthenticatedInvalidPrincipalNameFormat - The principal name format isn't valid, or doesn't meet the expected `name[/host][@realm]` format. The principal name is required, host and realm are optional and may be set to null. |
+| AADSTS90023 | InvalidRequest - The authentication service request isn't valid. |
+| AADSTS900236| InvalidRequestSamlPropertyUnsupported - The SAML authentication request property '{propertyName}' is not supported and must not be set. |
+| AADSTS9002313 | InvalidRequest - Request is malformed or invalid. Something was wrong with the request to a certain endpoint. To troubleshoot, capture a Fiddler trace of the failing request and check whether it is properly formatted. |
+| AADSTS9002332 | Application '{principalId}'({principalName}) is configured for use by Azure Active Directory users only. Please do not use the /consumers endpoint to serve this request. |
+| AADSTS90024 | RequestBudgetExceededError - A transient error has occurred. Try again. |
+| AADSTS90027 | We are unable to issue tokens from this API version on the MSA tenant. Please contact the application vendor as they need to use version 2.0 of the protocol to support this.|
+| AADSTS90033 | MsodsServiceUnavailable - The Microsoft Online Directory Service (MSODS) isn't available. |
+| AADSTS90036 | MsodsServiceUnretryableFailure - An unexpected, non-retryable error from the WCF service hosted by MSODS has occurred. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) to get more details on the error. |
+| AADSTS90038 | NationalCloudTenantRedirection - The specified tenant 'Y' belongs to the National Cloud 'X'. Current cloud instance 'Z' does not federate with X. A cloud redirect error is returned. |
+| AADSTS90043 | NationalCloudAuthCodeRedirection - The feature is disabled. |
+| AADSTS900432 | Confidential Client isn't supported in Cross Cloud request.|
+| AADSTS90051 | InvalidNationalCloudId - The national cloud identifier contains an invalid cloud identifier. |
+| AADSTS90055 | TenantThrottlingError - There are too many incoming requests. This exception is thrown for blocked tenants. |
+| AADSTS90056 | BadResourceRequest - To redeem the code for an access token, the app should send a POST request to the `/token` endpoint. Also, prior to this, you should provide an authorization code and send it in the POST request to the `/token` endpoint. Refer to this article for an overview of [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md). Direct the user to the `/authorize` endpoint, which will return an authorization_code. By posting a request to the `/token` endpoint, the user gets the access token. Log in the Azure portal, and check **App registrations > Endpoints** to confirm that the two endpoints were configured correctly. |
+| AADSTS900561 | BadResourceRequestInvalidRequest - The endpoint only accepts {valid_verbs} requests. Received a {invalid_verb} request. {valid_verbs} represents a list of HTTP verbs supported by the endpoint (for example, POST), {invalid_verb} is an HTTP verb used in the current request (for example, GET). This can be due to developer error, or due to users pressing the back button in their browser, triggering a bad request. It can be ignored. |
+| AADSTS90072 | PassThroughUserMfaError - The external account that the user signs in with doesn't exist on the tenant that they signed into; so the user can't satisfy the MFA requirements for the tenant. This error also might occur if the users are synced, but there is a mismatch in the ImmutableID (sourceAnchor) attribute between Active Directory and Azure AD. The account must be added as an external user in the tenant first. Sign out and sign in with a different Azure AD user account. For more information, please visit [configuring external identities](../external-identities/external-identities-overview.md). |
+| AADSTS90081 | OrgIdWsFederationMessageInvalid - An error occurred when the service tried to process a WS-Federation message. The message isn't valid. |
+| AADSTS90082 | OrgIdWsFederationNotSupported - The selected authentication policy for the request isn't currently supported. |
+| AADSTS90084 | OrgIdWsFederationGuestNotAllowed - Guest accounts aren't allowed for this site. |
+| AADSTS90085 | OrgIdWsFederationSltRedemptionFailed - The service is unable to issue a token because the company object hasn't been provisioned yet. |
+| AADSTS90086 | OrgIdWsTrustDaTokenExpired - The user DA token is expired. |
+| AADSTS90087 | OrgIdWsFederationMessageCreationFromUriFailed - An error occurred while creating the WS-Federation message from the URI. |
+| AADSTS90090 | GraphRetryableError - The service is temporarily unavailable. |
+| AADSTS90091 | GraphServiceUnreachable |
+| AADSTS90092 | GraphNonRetryableError |
+| AADSTS90093 | GraphUserUnauthorized - Graph returned with a forbidden error code for the request. |
+| AADSTS90094 | AdminConsentRequired - Administrator consent is required. |
+| AADSTS900382 | Confidential Client isn't supported in Cross Cloud request. |
+| AADSTS90095 | AdminConsentRequiredRequestAccess - In the Admin Consent Workflow experience, an interrupt that appears when the user is told they need to ask the admin for consent. |
+| AADSTS90099 | The application '{appId}' ({appName}) has not been authorized in the tenant '{tenant}'. Applications must be authorized to access the customer tenant before partner delegated administrators can use them. Provide pre-consent or execute the appropriate Partner Center API to authorize the application. |
+| AADSTS900971| No reply address provided.|
+| AADSTS90100 | InvalidRequestParameter - The parameter is empty or not valid. |
+| AADSTS901002 | The 'resource' request parameter isn't supported. |
+| AADSTS90101 | InvalidEmailAddress - The supplied data isn't a valid email address. The email address must be in the format `someone@example.com`. |
+| AADSTS90102 | InvalidUriParameter - The value must be a valid absolute URI. |
+| AADSTS90107 | InvalidXml - The request isn't valid. Make sure your data doesn't have invalid characters.|
+| AADSTS90114 | InvalidExpiryDate - The bulk token expiration timestamp will cause an expired token to be issued. |
+| AADSTS90117 | InvalidRequestInput |
+| AADSTS90119 | InvalidUserCode - The user code is null or empty.|
+| AADSTS90120 | InvalidDeviceFlowRequest - The request was already authorized or declined. |
+| AADSTS90121 | InvalidEmptyRequest - Invalid empty request.|
+| AADSTS90123 | IdentityProviderAccessDenied - The token can't be issued because the identity or claim issuance provider denied the request. |
+| AADSTS90124 | V1ResourceV2GlobalEndpointNotSupported - The resource isn't supported over the `/common` or `/consumers` endpoints. Use the `/organizations` or tenant-specific endpoint instead. |
+| AADSTS90125 | DebugModeEnrollTenantNotFound - The user isn't in the system. Make sure you entered the user name correctly. |
+| AADSTS90126 | DebugModeEnrollTenantNotInferred - The user type isn't supported on this endpoint. The system can't infer the user's tenant from the user name. |
+| AADSTS90130 | NonConvergedAppV2GlobalEndpointNotSupported - The application isn't supported over the `/common` or `/consumers` endpoints. Use the `/organizations` or tenant-specific endpoint instead. |
+| AADSTS120000 | PasswordChangeIncorrectCurrentPassword |
+| AADSTS120002 | PasswordChangeInvalidNewPasswordWeak |
+| AADSTS120003 | PasswordChangeInvalidNewPasswordContainsMemberName |
+| AADSTS120004 | PasswordChangeOnPremComplexity |
+| AADSTS120005 | PasswordChangeOnPremSuccessCloudFail |
+| AADSTS120008 | PasswordChangeAsyncJobStateTerminated - A non-retryable error has occurred.|
+| AADSTS120011 | PasswordChangeAsyncUpnInferenceFailed |
+| AADSTS120012 | PasswordChangeNeedsToHappenOnPrem |
+| AADSTS120013 | PasswordChangeOnPremisesConnectivityFailure |
+| AADSTS120014 | PasswordChangeOnPremUserAccountLockedOutOrDisabled |
+| AADSTS120015 | PasswordChangeADAdminActionRequired |
+| AADSTS120016 | PasswordChangeUserNotFoundBySspr |
+| AADSTS120018 | PasswordChangePasswordDoesnotComplyFuzzyPolicy |
+| AADSTS120020 | PasswordChangeFailure |
+| AADSTS120021 | PartnerServiceSsprInternalServiceError |
+| AADSTS130004 | NgcKeyNotFound - The user principal doesn't have the NGC ID key configured. |
+| AADSTS130005 | NgcInvalidSignature - NGC key signature verification failed.|
+| AADSTS130006 | NgcTransportKeyNotFound - The NGC transport key isn't configured on the device. |
+| AADSTS130007 | NgcDeviceIsDisabled - The device is disabled. |
+| AADSTS130008 | NgcDeviceIsNotFound - The device referenced by the NGC key wasn't found. |
+| AADSTS135010 | KeyNotFound |
+| AADSTS135011 | Device used during the authentication is disabled.|
+| AADSTS140000 | InvalidRequestNonce - Request nonce isn't provided. |
+| AADSTS140001 | InvalidSessionKey - The session key isn't valid.|
+| AADSTS165004 | Actual message content is runtime specific. Please see returned exception message for details. |
+| AADSTS165900 | InvalidApiRequest - Invalid request. |
+| AADSTS220450 | UnsupportedAndroidWebViewVersion - The Chrome WebView version isn't supported. |
+| AADSTS220501 | InvalidCrlDownload |
+| AADSTS221000 | DeviceOnlyTokensNotSupportedByResource - The resource isn't configured to accept device-only tokens. |
+| AADSTS240001 | BulkAADJTokenUnauthorized - The user isn't authorized to register devices in Azure AD. |
+| AADSTS240002 | RequiredClaimIsMissing - The id_token can't be used as `urn:ietf:params:oauth:grant-type:jwt-bearer` grant.|
+| AADSTS530032 | BlockedByConditionalAccessOnSecurityPolicy - The tenant admin has configured a security policy that blocks this request. Check the security policies that are defined on the tenant level to determine if your request meets the policy requirements. |
+| AADSTS700016 | UnauthorizedClient_DoesNotMatchRequest - The application wasn't found in the directory/tenant. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You might have misconfigured the identifier value for the application or sent your authentication request to the wrong tenant. |
+| AADSTS700020 | InteractionRequired - The access grant requires interaction. |
+| AADSTS700022 | InvalidMultipleResourcesScope - The provided value for the input parameter scope isn't valid because it contains more than one resource. |
+| AADSTS700023 | InvalidResourcelessScope - The provided value for the input parameter scope isn't valid when requesting an access token. |
+| AADSTS7000215 | Invalid client secret is provided. Developer error - the app is attempting to sign in without the necessary or correct authentication parameters.|
+| AADSTS7000218 | The request body must contain the following parameter: 'client_assertion' or 'client_secret'. |
+| AADSTS7000222 | InvalidClientSecretExpiredKeysProvided - The provided client secret keys are expired. Visit the Azure portal to create new keys for your app, or consider using certificate credentials for added security: [https://aka.ms/certCreds](./active-directory-certificate-credentials.md) |
+| AADSTS700005 | InvalidGrantRedeemAgainstWrongTenant - The provided authorization code is intended for use against a different tenant and was rejected. An OAuth2 authorization code must be redeemed against the same tenant it was acquired for (/common or /{tenant-ID} as appropriate). |
+| AADSTS1000000 | UserNotBoundError - The Bind API requires the Azure AD user to also authenticate with an external IDP, which hasn't happened yet. |
+| AADSTS1000002 | BindCompleteInterruptError - The bind completed successfully, but the user must be informed. |
+| AADSTS100007 | Azure AD Regional ONLY supports auth either for MSIs OR for requests from MSAL using SN+I for 1P apps or 3P apps in Microsoft infrastructure tenants.|
+| AADSTS1000031 | Application {appDisplayName} can't be accessed at this time. Contact your administrator. |
+| AADSTS7000112 | UnauthorizedClientApplicationDisabled - The application is disabled. |
+| AADSTS7000114 | Application 'appIdentifier' isn't allowed to make application on-behalf-of calls.|
+| AADSTS7500529 | The value 'SAMLId-Guid' isn't a valid SAML ID - Azure AD uses this attribute to populate the InResponseTo attribute of the returned response. ID must not begin with a number, so a common strategy is to prepend a string like "ID" to the string representation of a GUID. For example, id6c1c178c166d486687be4aaf5e482730 is a valid ID. |
+
+## Next steps
+
+* Have a question or can't find what you're looking for? Create a GitHub issue or see [Support and help options for developers](./developer-support-help-options.md) to learn about other ways you can get help and support.
active-directory Refresh Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/refresh-tokens.md
Refresh tokens can be revoked by the server because of a change in credentials,
## Next steps -- Learn about [configurable token lifetimes](active-directory-configurable-token-lifetimes.md)
+- Learn about [configurable token lifetimes](configurable-token-lifetimes.md)
- Check out [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md) for more details on primary refresh tokens.
active-directory Scenario Desktop Acquire Token Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-interactive.md
WithParentActivityOrWindow(IWin32Window window)
// Mac WithParentActivityOrWindow(NSWindow window)
-// .NET Standard (this will be on all platforms at runtime, but only on NetStandard at build time)
+// .NET Standard (this will be on all platforms at runtime, but only on .NET Standard platforms at build time)
WithParentActivityOrWindow(object parent). ```
active-directory Scenario Spa Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-acquire-token.md
# Single-page application: Acquire a token to call an API
-The pattern for acquiring tokens for APIs with [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) is to first attempt a silent token request by using the `acquireTokenSilent` method. When this method is called, the library first checks the cache in browser storage to see if a non-expired access token exists and returns it. If no access token is found or the access token found has expired, it attempts to use its refresh token to get a fresh access token. If the refresh token's 24-hour lifetime has also expired, MSAL.js will open a hidden iframe to silently request a new authorization code by leveraging the existing active session with Azure AD (if any), which will then be exchanged for a fresh set of tokens (access _and_ refresh tokens). For more information about single sign-on (SSO) session and token lifetime values in Azure AD, see [Token lifetimes](active-directory-configurable-token-lifetimes.md). For more information on MSAL.js cache lookup policy, see: [Acquiring an Access Token](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/acquire-token.md#acquiring-an-access-token).
+The pattern for acquiring tokens for APIs with [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) is to first attempt a silent token request by using the `acquireTokenSilent` method. When this method is called, the library first checks the cache in browser storage to see if a non-expired access token exists and returns it. If no access token is found or the access token found has expired, it attempts to use its refresh token to get a fresh access token. If the refresh token's 24-hour lifetime has also expired, MSAL.js will open a hidden iframe to silently request a new authorization code by leveraging the existing active session with Azure AD (if any), which will then be exchanged for a fresh set of tokens (access _and_ refresh tokens). For more information about single sign-on (SSO) session and token lifetime values in Azure AD, see [Token lifetimes](configurable-token-lifetimes.md). For more information on MSAL.js cache lookup policy, see: [Acquiring an Access Token](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/acquire-token.md#acquiring-an-access-token).
The silent token requests to Azure AD might fail for reasons like a password change or updated conditional access policies. More often, failures are due to the refresh token's 24-hour lifetime expiring and [the browser blocking third party cookies](reference-third-party-cookies-spas.md), which prevents the use of hidden iframes to continue authenticating the user. In these cases, you should invoke one of the interactive methods (which may prompt the user) to acquire tokens:
active-directory Scenario Spa Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-production.md
Last updated 05/07/2019
-#Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
+#Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
# Single-page application: Move to production
active-directory Signing Key Rollover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/signing-key-rollover.md
+
+ Title: Signing Key Rollover in Microsoft identity platform
+description: This article discusses the signing key rollover best practices for Azure Active Directory
+ Last updated: 03/16/2023
+# Signing key rollover in the Microsoft identity platform
+This article discusses what you need to know about the public keys that are used by the Microsoft identity platform to sign security tokens. It's important to note that these keys roll over on a periodic basis and, in an emergency, could be rolled over immediately. All applications that use the Microsoft identity platform should be able to programmatically handle the key rollover process. Continue reading to understand how the keys work, how to assess the impact of the rollover to your application and how to update your application or establish a periodic manual rollover process to handle key rollover if necessary.
+
+## Overview of signing keys in the Microsoft identity platform
+The Microsoft identity platform uses public-key cryptography built on industry standards to establish trust between itself and the applications that use it. In practical terms, this works in the following way: The Microsoft identity platform uses a signing key that consists of a public and private key pair. When a user signs in to an application that uses the Microsoft identity platform for authentication, the Microsoft identity platform creates a security token that contains information about the user. This token is signed by the Microsoft identity platform using its private key before it's sent back to the application. To verify that the token is valid and originated from the Microsoft identity platform, the application must validate the token's signature using the public keys exposed by the Microsoft identity platform, which are contained in the tenant's [OpenID Connect discovery document](https://openid.net/specs/openid-connect-discovery-1_0.html) or SAML/WS-Fed [federation metadata document](../azuread-dev/azure-ad-federation-metadata.md).
+
+For security purposes, the Microsoft identity platform's signing key rolls on a periodic basis and, in the case of an emergency, could be rolled over immediately. There's no set or guaranteed time between these key rolls - any application that integrates with the Microsoft identity platform should be prepared to handle a key rollover event no matter how frequently it may occur. If your application doesn't handle sudden refreshes, and attempts to use an expired key to verify the signature on a token, your application will incorrectly reject the token. Checking every 24 hours for updates is a best practice, with throttled (once every five minutes at most) immediate refreshes of the key document if a token is encountered that doesn't validate with the keys in your application's cache.
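As an illustration, this refresh policy can be sketched as follows (a minimal Python sketch with hypothetical names; `fetch_keys` stands in for downloading the key document from the discovery document's `jwks_uri`, and the intervals mirror the recommendation above):

```python
import time

REFRESH_INTERVAL = 24 * 60 * 60   # routine refresh: at most once a day
RETRY_THROTTLE = 5 * 60           # immediate refresh on cache miss: at most once per five minutes

class SigningKeyCache:
    """Caches signing keys and refreshes them per the policy described above."""

    def __init__(self, fetch_keys, clock=time.monotonic):
        self._fetch_keys = fetch_keys     # callable returning {kid: public key}
        self._clock = clock               # injectable for testing
        self._keys = {}
        self._last_fetch = float("-inf")  # forces a fetch on first use

    def _refresh(self):
        self._keys = self._fetch_keys()
        self._last_fetch = self._clock()

    def get_key(self, kid):
        now = self._clock()
        if now - self._last_fetch >= REFRESH_INTERVAL:
            self._refresh()               # routine daily refresh
        if kid not in self._keys and now - self._last_fetch >= RETRY_THROTTLE:
            self._refresh()               # unknown kid: throttled immediate refresh
        return self._keys.get(kid)        # None means the token should be rejected
```

A real implementation would additionally validate the token signature with the returned key; the point here is only the cadence: miss, throttled refetch, reject if the key is still absent.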
+
+There's always more than one valid key available in the OpenID Connect discovery document and the federation metadata document. Your application should be prepared to use any and all of the keys specified in the document, since one key may be rolled soon, another may be its replacement, and so forth. The number of keys present can change over time based on the internal architecture of the Microsoft identity platform as we support new platforms, new clouds, or new authentication protocols. Neither the order of the keys in the JSON response nor the order in which they were exposed should be considered meaningful to your app.
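For example, a token's `kid` header should be matched against whatever keys the document currently contains, rather than assuming any particular entry or ordering (illustrative Python sketch; the document shape follows the JWK Set format, but the values here are placeholders):

```python
# Shape of the document served from the discovery document's jwks_uri
# (illustrative placeholder values; real entries carry base64url-encoded RSA moduli).
jwks = {
    "keys": [
        {"kty": "RSA", "use": "sig", "kid": "key-A", "n": "<modulus>", "e": "AQAB"},
        {"kty": "RSA", "use": "sig", "kid": "key-B", "n": "<modulus>", "e": "AQAB"},
    ]
}

def find_signing_key(jwks, kid):
    """Return the JWK whose kid matches the token header, or None if absent."""
    return next((key for key in jwks["keys"] if key.get("kid") == kid), None)
```

Because the lookup is by `kid` rather than by position, the sketch keeps working when keys are added, reordered, or retired.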
+
+Applications that support only a single signing key, or those that require manual updates to the signing keys, are inherently less secure and less reliable. They should be updated to use [standard libraries](reference-v2-libraries.md) to ensure that they're always using up-to-date signing keys, among other best practices.
+
+## How to assess if your application will be affected and what to do about it
+How your application handles key rollover depends on variables such as the type of application or what identity protocol and library was used. The sections below assess whether the most common types of applications are impacted by the key rollover and provide guidance on how to update the application to support automatic rollover or manually update the key.
+
+* [Native client applications accessing resources](#nativeclient)
+* [Web applications / APIs accessing resources](#webclient)
+* [Web applications / APIs protecting resources and built using Azure App Services](#appservices)
+* [Web applications / APIs protecting resources using .NET OWIN OpenID Connect, WS-Fed or WindowsAzureActiveDirectoryBearerAuthentication middleware](#owin)
+* [Web applications / APIs protecting resources using .NET Core OpenID Connect or JwtBearerAuthentication middleware](#owincore)
+* [Web applications / APIs protecting resources using Node.js passport-azure-ad module](#passport)
+* [Web applications / APIs protecting resources and created with Visual Studio 2015 or later](#vs2015)
+* [Web applications protecting resources and created with Visual Studio 2013](#vs2013)
+* Web APIs protecting resources and created with Visual Studio 2013
+* [Web applications protecting resources and created with Visual Studio 2012](#vs2012)
+* [Web applications / APIs protecting resources using any other libraries or manually implementing any of the supported protocols](#other)
+
+This guidance is **not** applicable for:
+
+* Applications added from Azure AD Application Gallery (including Custom) have separate guidance with regard to signing keys. [More information.](../manage-apps/manage-certificates-for-federated-single-sign-on.md)
+* On-premises applications published via application proxy don't have to worry about signing keys.
+
+### <a name="nativeclient"></a>Native client applications accessing resources
+Applications that are only accessing resources (for example, Microsoft Graph, KeyVault, Outlook API, and other Microsoft APIs) only obtain a token and pass it along to the resource owner. Given that they aren't protecting any resources, they don't inspect the token and therefore don't need to ensure it's properly signed.
+
+Native client applications, whether desktop or mobile, fall into this category and are thus not impacted by the rollover.
+
+### <a name="webclient"></a>Web applications / APIs accessing resources
+Applications that are only accessing resources (such as Microsoft Graph, KeyVault, Outlook API, and other Microsoft APIs) only obtain a token and pass it along to the resource owner. Given that they aren't protecting any resources, they don't inspect the token and therefore don't need to ensure it's properly signed.
+
+Web applications and web APIs that are using the app-only flow (client credentials / client certificate) to request tokens fall into this category and are thus not impacted by the rollover.
+
+### <a name="appservices"></a>Web applications / APIs protecting resources and built using Azure App Services
+Azure App Services' Authentication / Authorization (EasyAuth) functionality already has the necessary logic to handle key rollover automatically.
+
+### <a name="owin"></a>Web applications / APIs protecting resources using .NET OWIN OpenID Connect, WS-Fed or WindowsAzureActiveDirectoryBearerAuthentication middleware
+If your application is using the .NET OWIN OpenID Connect, WS-Fed or WindowsAzureActiveDirectoryBearerAuthentication middleware, it already has the necessary logic to handle key rollover automatically.
+
+You can confirm that your application is using any of these by looking for any of the following snippets in your application's Startup.cs or Startup.Auth.cs files.
+
+```csharp
+app.UseOpenIdConnectAuthentication(
+ new OpenIdConnectAuthenticationOptions
+ {
+ // ...
+ });
+```
+
+```csharp
+app.UseWsFederationAuthentication(
+ new WsFederationAuthenticationOptions
+ {
+ // ...
+ });
+```
+
+```csharp
+app.UseWindowsAzureActiveDirectoryBearerAuthentication(
+ new WindowsAzureActiveDirectoryBearerAuthenticationOptions
+ {
+ // ...
+ });
+```
+
+### <a name="owincore"></a>Web applications / APIs protecting resources using .NET Core OpenID Connect or JwtBearerAuthentication middleware
+If your application is using the .NET Core OWIN OpenID Connect or JwtBearerAuthentication middleware, it already has the necessary logic to handle key rollover automatically.
+
+You can confirm that your application is using any of these by looking for any of the following snippets in your application's Startup.cs or Startup.Auth.cs
+
+```csharp
+app.UseOpenIdConnectAuthentication(
+ new OpenIdConnectAuthenticationOptions
+ {
+ // ...
+ });
+```
+```csharp
+app.UseJwtBearerAuthentication(
+ new JwtBearerAuthenticationOptions
+ {
+ // ...
+ });
+```
+
+### <a name="passport"></a>Web applications / APIs protecting resources using Node.js passport-azure-ad module
+If your application is using the Node.js passport-azure-ad module, it already has the necessary logic to handle key rollover automatically.
+
+You can confirm that your application uses passport-azure-ad by searching for the following snippet in your application's app.js
+
+```javascript
+var OIDCStrategy = require('passport-azure-ad').OIDCStrategy;
+
+passport.use(new OIDCStrategy({
+ //...
+));
+```
+
+### <a name="vs2015"></a>Web applications / APIs protecting resources and created with Visual Studio 2015 or later
+If your application was built using a web application template in Visual Studio 2015 or later and you selected **Work Or School Accounts** from the **Change Authentication** menu, it already has the necessary logic to handle key rollover automatically. This logic, embedded in the OWIN OpenID Connect middleware, retrieves and caches the keys from the OpenID Connect discovery document and periodically refreshes them.
+
+If you added authentication to your solution manually, your application might not have the necessary key rollover logic. You'll need to write it yourself, or follow the steps in [Web applications / APIs using any other libraries or manually implementing any of the supported protocols](#other).
+
+### <a name="vs2013"></a>Web applications protecting resources and created with Visual Studio 2013
+If your application was built using a web application template in Visual Studio 2013 and you selected **Organizational Accounts** from the **Change Authentication** menu, it already has the necessary logic to handle key rollover automatically. This logic stores your organization's unique identifier and the signing key information in two database tables associated with the project. You can find the connection string for the database in the project's Web.config file.
+
+If you added authentication to your solution manually, your application might not have the necessary key rollover logic. You'll need to write it yourself, or follow the steps in [Web applications / APIs using any other libraries or manually implementing any of the supported protocols](#other).
+
+The following steps help you verify that the logic is working properly in your application.
+
+1. In Visual Studio 2013, open the solution, and then select the **Server Explorer** tab in the right window.
+2. Expand **Data Connections**, **DefaultConnection**, and then **Tables**. Locate the **IssuingAuthorityKeys** table, right-click it, and then select **Show Table Data**.
+3. In the **IssuingAuthorityKeys** table, there will be at least one row, which corresponds to the thumbprint value for the key. Delete any rows in the table.
+4. Right-click the **Tenants** table, and then click **Show Table Data**.
+5. In the **Tenants** table, there will be at least one row, which corresponds to a unique directory tenant identifier. Delete any rows in the table. If you don't delete the rows in both the **Tenants** table and **IssuingAuthorityKeys** table, you will get an error at runtime.
+6. Build and run the application. After you have logged in to your account, you can stop the application.
+7. Return to the **Server Explorer** and look at the values in the **IssuingAuthorityKeys** and **Tenants** table. You'll notice that they have been automatically repopulated with the appropriate information from the federation metadata document.
+
+### <a name="vs2013-webapi"></a>Web APIs protecting resources and created with Visual Studio 2013
+If you created a web API application in Visual Studio 2013 using the Web API template, and then selected **Organizational Accounts** from the **Change Authentication** menu, you already have the necessary logic in your application.
+
+If you manually configured authentication, follow the instructions below to learn how to configure your web API to automatically update its key information.
+
+The following code snippet demonstrates how to get the latest keys from the federation metadata document, and then use the [JWT Token Handler](/previous-versions/dotnet/framework/windows-identity-foundation/json-web-token-handler) to validate the token. The code snippet assumes that you will use your own caching mechanism for persisting the key to validate future tokens from Microsoft identity platform, whether it be in a database, configuration file, or elsewhere.
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Text;
+using System.Threading.Tasks;
+using System.IdentityModel.Tokens;
+using System.Configuration;
+using System.Security.Cryptography.X509Certificates;
+using System.Xml;
+using System.IdentityModel.Metadata;
+using System.ServiceModel.Security;
+using System.Threading;
+
+namespace JWTValidation
+{
+ public class JWTValidator
+ {
+ private string MetadataAddress = "[Your Federation Metadata document address goes here]";
+
+ // Validates the JWT Token that's part of the Authorization header in an HTTP request.
+ public void ValidateJwtToken(string token)
+ {
+ JwtSecurityTokenHandler tokenHandler = new JwtSecurityTokenHandler()
+ {
+ // Do not disable for production code
+ CertificateValidator = X509CertificateValidator.None
+ };
+
+ TokenValidationParameters validationParams = new TokenValidationParameters()
+ {
+ AllowedAudience = "[Your App ID URI goes here, as registered in the Azure Portal]",
+ ValidIssuer = "[The issuer for the token goes here, such as https://sts.windows.net/68b98905-130e-4d7c-b6e1-a158a9ed8449/]",
+ SigningTokens = GetSigningCertificates(MetadataAddress)
+
+ // Cache the signing tokens by your desired mechanism
+ };
+
+ Thread.CurrentPrincipal = tokenHandler.ValidateToken(token, validationParams);
+ }
+
+ // Returns a list of certificates from the specified metadata document.
+ public List<X509SecurityToken> GetSigningCertificates(string metadataAddress)
+ {
+ List<X509SecurityToken> tokens = new List<X509SecurityToken>();
+
+ if (metadataAddress == null)
+ {
+                throw new ArgumentNullException(nameof(metadataAddress));
+ }
+
+ using (XmlReader metadataReader = XmlReader.Create(metadataAddress))
+ {
+ MetadataSerializer serializer = new MetadataSerializer()
+ {
+ // Do not disable for production code
+ CertificateValidationMode = X509CertificateValidationMode.None
+ };
+
+ EntityDescriptor metadata = serializer.ReadMetadata(metadataReader) as EntityDescriptor;
+
+ if (metadata != null)
+ {
+                    SecurityTokenServiceDescriptor stsd = metadata.RoleDescriptors.OfType<SecurityTokenServiceDescriptor>().FirstOrDefault();
+
+ if (stsd != null)
+ {
+ IEnumerable<X509RawDataKeyIdentifierClause> x509DataClauses = stsd.Keys.Where(key => key.KeyInfo != null && (key.Use == KeyType.Signing || key.Use == KeyType.Unspecified)).
+ Select(key => key.KeyInfo.OfType<X509RawDataKeyIdentifierClause>().First());
+
+ tokens.AddRange(x509DataClauses.Select(token => new X509SecurityToken(new X509Certificate2(token.GetX509RawData()))));
+ }
+ else
+ {
+ throw new InvalidOperationException("There is no RoleDescriptor of type SecurityTokenServiceType in the metadata");
+ }
+ }
+ else
+ {
+ throw new Exception("Invalid Federation Metadata document");
+ }
+ }
+ return tokens;
+ }
+ }
+}
+```
+
+### <a name="vs2012"></a>Web applications protecting resources and created with Visual Studio 2012
+If your application was built in Visual Studio 2012, you probably used the Identity and Access Tool to configure your application. It's also likely that you are using the [Validating Issuer Name Registry (VINR)](/previous-versions/dotnet/framework/windows-identity-foundation/validating-issuer-name-registry). The VINR is responsible for maintaining information about trusted identity providers (Microsoft identity platform) and the keys used to validate tokens issued by them. The VINR also makes it easy to automatically update the key information stored in a Web.config file by downloading the latest federation metadata document associated with your directory, checking if the configuration is out of date with the latest document, and updating the application to use the new key as necessary.
+
+If you created your application using any of the code samples or walkthrough documentation provided by Microsoft, the key rollover logic is already included in your project. You will notice that the code below already exists in your project. If your application does not already have this logic, follow the steps below to add it and to verify that it's working correctly.
+
+1. In **Solution Explorer**, add a reference to the **System.IdentityModel** assembly for the appropriate project.
+2. Open the **Global.asax.cs** file and add the following using directives:
+    ```csharp
+ using System.Configuration;
+ using System.IdentityModel.Tokens;
+ ```
+3. Add the following method to the **Global.asax.cs** file:
+    ```csharp
+ protected void RefreshValidationSettings()
+ {
+ string configPath = AppDomain.CurrentDomain.BaseDirectory + "\\" + "Web.config";
+ string metadataAddress =
+ ConfigurationManager.AppSettings["ida:FederationMetadataLocation"];
+ ValidatingIssuerNameRegistry.WriteToConfig(metadataAddress, configPath);
+ }
+ ```
+4. Invoke the **RefreshValidationSettings()** method in the **Application_Start()** method in **Global.asax.cs** as shown:
+    ```csharp
+ protected void Application_Start()
+ {
+ AreaRegistration.RegisterAllAreas();
+ ...
+ RefreshValidationSettings();
+ }
+ ```
+
+Once you have followed these steps, your application's Web.config will be updated with the latest information from the federation metadata document, including the latest keys. This update will occur every time your application pool recycles in IIS; by default IIS is set to recycle applications every 29 hours.
+
+Follow the steps below to verify that the key rollover logic is working.
+
+1. After you have verified that your application is using the code above, open the **Web.config** file and navigate to the **\<issuerNameRegistry>** block, specifically looking for the following few lines:
+    ```xml
+ <issuerNameRegistry type="System.IdentityModel.Tokens.ValidatingIssuerNameRegistry, System.IdentityModel.Tokens.ValidatingIssuerNameRegistry">
+ <authority name="https://sts.windows.net/ec4187af-07da-4f01-b18f-64c2f5abecea/">
+ <keys>
+ <add thumbprint="3A38FA984E8560F19AADC9F86FE9594BB6AD049B" />
+ </keys>
+ ```
+2. In the **\<add thumbprint="">** setting, change the thumbprint value by replacing any character with a different one. Save the **Web.config** file.
+3. Build the application, and then run it. If you can complete the sign-in process, your application is successfully updating the key by downloading the required information from your directory's federation metadata document. If you are having issues signing in, ensure the changes in your application are correct by reading the [Adding Sign-On to Your Web Application Using Microsoft identity platform](https://github.com/Azure-Samples/active-directory-dotnet-webapp-openidconnect) article, or downloading and inspecting the following code sample: [Multi-Tenant Cloud Application for Azure Active Directory](https://code.msdn.microsoft.com/multi-tenant-cloud-8015b84b).
+
+### <a name="other"></a>Web applications / APIs protecting resources using any other libraries or manually implementing any of the supported protocols
+If you are using some other library or manually implemented any of the supported protocols, you'll need to review the library or your implementation to ensure that the key is being retrieved from either the OpenID Connect discovery document or the federation metadata document. One way to check for this is to do a search in your code or the library's code for any calls out to either the OpenID discovery document or the federation metadata document.
+
+If the key is being stored somewhere or hardcoded in your application, you can manually retrieve the key and update it accordingly by performing a manual rollover as per the instructions at the end of this guidance document. **It is strongly encouraged that you enhance your application to support automatic rollover** using any of the approaches outlined in this article to avoid future disruptions and overhead if the Microsoft identity platform increases its rollover cadence or has an emergency out-of-band rollover.
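If your application pins keys by thumbprint, the periodic check described above reduces to comparing what you have configured against what the platform currently publishes (a hedged Python sketch; the thumbprint values in the test are illustrative, and in practice the published list would come from the discovery or metadata document):

```python
def needs_manual_rollover(configured_thumbprints, published_thumbprints):
    """True if any pinned thumbprint is no longer among the published signing keys."""
    return not set(configured_thumbprints).issubset(published_thumbprints)
```

When this returns True, the application's stored key material is stale and must be replaced with a currently published key before tokens signed with the new key can be validated.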
+
+## How to test your application to determine if it will be affected
+
+You can validate whether your application supports automatic key rollover by using the following PowerShell scripts.
+
+To check and update signing keys with PowerShell, you'll need the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell Module.
+
+1. Install the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell Module:
+
+ ```powershell
+ Install-Module -Name MSIdentityTools
+ ```
+
+1. Sign in by using the Connect-MgGraph command with an admin account to consent to the required scopes:
+
+ ```powershell
+    Connect-MgGraph -Scopes "Application.ReadWrite.All"
+ ```
+
+1. Get the list of available signing key thumbprints:
+
+ ```powershell
+ Get-MsIdSigningKeyThumbprint
+ ```
+
+1. Pick any of the key thumbprints and configure Azure Active Directory to use that key with your application (get the app ID from the [Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps)):
+
+ ```powershell
+ Update-MsIdApplicationSigningKeyThumbprint -ApplicationId <ApplicationId> -KeyThumbprint <Thumbprint>
+ ```
+
+1. Test the web application by signing in to get a new token. The key update change is instantaneous, but make sure you use a new browser session (using, for example, Internet Explorer's "InPrivate," Chrome's "Incognito," or Firefox's "Private" mode) to ensure you are issued a new token.
+
+1. For each of the returned signing key thumbprints, run the `Update-MsIdApplicationSigningKeyThumbprint` cmdlet and test your web application sign-in process.
+
+1. If the web application signs you in properly, it supports automatic rollover. If it doesn't, modify your application to support manual rollover. Check out [Establishing a manual rollover process](#how-to-perform-a-manual-rollover-if-your-application-does-not-support-automatic-rollover) for more information.
+
+1. Run the following script to revert to normal behavior:
+
+ ```powershell
+ Update-MsIdApplicationSigningKeyThumbprint -ApplicationId <ApplicationId> -Default
+ ```
+
+## How to perform a manual rollover if your application does not support automatic rollover
+
+If your application doesn't support automatic rollover, you need to establish a process that periodically monitors the Microsoft identity platform's signing keys and performs a manual rollover accordingly.
+
+To check and update signing keys with PowerShell, you'll need the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell Module.
+
+1. Install the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell Module:
+
+ ```powershell
+ Install-Module -Name MSIdentityTools
+ ```
+
+1. Get the latest signing key (get the tenant ID from the [Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview)):
+
+ ```powershell
+    Get-MsIdSigningKeyThumbprint -Tenant <tenantId> -Latest
+ ```
+
+1. Compare this key against the key your application is currently hardcoded or configured to use.
+
+1. If the latest key is different from the key your application is using, download the latest signing key:
+
+ ```powershell
+ Get-MsIdSigningKeyThumbprint -Latest -DownloadPath <DownloadFolderPath>
+ ```
+
+1. Update your application's code or configuration to use the new key.
+
+1. Configure Azure Active Directory to use that latest key with your application (get the app ID from the [portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps)):
+
+ ```powershell
+ Get-MsIdSigningKeyThumbprint -Latest | Update-MsIdApplicationSigningKeyThumbprint -ApplicationId <ApplicationId>
+ ```
+
+1. Test the web application by signing in to get a new token. The key change takes effect immediately, but be sure to use a new browser session (for example, Internet Explorer's "InPrivate," Chrome's "Incognito," or Firefox's "Private" mode) so that you're issued a new token.
+
+1. If you experience any issues, revert to the previous key you were using and contact Azure support:
+
+ ```powershell
+ Update-MsIdApplicationSigningKeyThumbprint -ApplicationId <ApplicationId> -KeyThumbprint <PreviousKeyThumbprint>
+ ```
+
+1. After you update your application to support manual rollover, revert to normal behavior:
+
+ ```powershell
+ Update-MsIdApplicationSigningKeyThumbprint -ApplicationId <ApplicationId> -Default
+ ```
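The periodic check at the heart of the manual process above boils down to comparing thumbprints. A minimal sketch, with hypothetical thumbprint values:

```javascript
// Sketch of the comparison behind a manual-rollover check. The thumbprint
// values are hypothetical; a real check would read the configured value
// from app settings and the latest one from the output of
// Get-MsIdSigningKeyThumbprint -Latest.
function needsRollover(configuredThumbprint, latestThumbprint) {
  // Thumbprints are hex strings; compare case-insensitively.
  return configuredThumbprint.toLowerCase() !== latestThumbprint.toLowerCase();
}

const configured = "A1B2C3D4E5F6"; // key currently baked into app config
const latest = "0F9E8D7C6B5A";     // latest key published by the platform
console.log(needsRollover(configured, latest) ? "rollover required" : "up to date");
```

Scheduling this comparison (for example, as a daily job) is what turns the one-off manual steps above into a repeatable monitoring process.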
active-directory Test Setup Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-setup-environment.md
Replicating permission grant policies ensures you don't encounter unexpected pro
Replicating token lifetime policies ensures tokens issued to your application don't expire unexpectedly in production.
-Token lifetime policies can currently only be managed through PowerShell. Read about [configurable token lifetimes](active-directory-configurable-token-lifetimes.md) to learn about identifying any token lifetime policies that apply to your whole production organization. Copy those policies to your test tenant.
+Token lifetime policies can currently only be managed through PowerShell. Read about [configurable token lifetimes](configurable-token-lifetimes.md) to learn about identifying any token lifetime policies that apply to your whole production organization. Copy those policies to your test tenant.
## Set up a test environment in your production tenant If you can safely constrain your test app in your production tenant, go ahead and set up your tenant for testing purposes.
active-directory Tutorial V2 Angular Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-angular-auth-code.md
Previously updated : 03/25/2022 Last updated : 04/28/2023
# Tutorial: Sign in users and call the Microsoft Graph API from an Angular single-page application (SPA) using auth code flow
-In this tutorial, you build an Angular single-page application (SPA) that signs in users and calls the Microsoft Graph API by using the authorization code flow with PKCE. The SPA you build uses the Microsoft Authentication Library (MSAL) for Angular v2.
+In this tutorial, you'll build an Angular single-page application (SPA) that signs in users and calls the Microsoft Graph API by using the authorization code flow with PKCE. The SPA you build uses the Microsoft Authentication Library (MSAL) for Angular v2.
In this tutorial: > [!div class="checklist"]
-> * Create an Angular project with `npm`
> * Register the application in the Azure portal
+> * Create an Angular project with `npm`
> * Add code to support user sign-in and sign-out > * Add code to call Microsoft Graph API > * Test the app
This tutorial uses the following libraries:
You can find the source code for all of the MSAL.js libraries in the [AzureAD/microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) repository on GitHub.
-## Create your project
-
-Once you have [Node.js](https://nodejs.org/en/download/) installed, open up a terminal window and then run the following commands to generate a new Angular application:
-
-```bash
-npm install -g @angular/cli # Install the Angular CLI
-ng new msal-angular-tutorial --routing=true --style=css --strict=false # Generate a new Angular app
-cd msal-angular-tutorial # Change to the app directory
-npm install @angular/material @angular/cdk # Install the Angular Material component library (optional, for UI)
-npm install @azure/msal-browser @azure/msal-angular # Install MSAL Browser and MSAL Angular in your application
-ng generate component home # To add a home page
-ng generate component profile # To add a profile page
-```
+## Register the application and record identifiers
-## Register your application
+To complete registration, give the application a name, specify the supported account types, and add a redirect URI. Once registered, the application **Overview** pane displays the identifiers needed in the application source code.
-Follow the [instructions to register a single-page application](./scenario-spa-app-registration.md) in the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations > New registration**.
+1. Enter a **Name** for the application, such as *Angular-SPA-auth-code*.
+1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option.
+1. Under **Redirect URI (optional)**, use the drop-down menu to select **Single-page application (SPA)** and enter `http://localhost:4200` into the text box.
+1. Select **Register**.
+1. The application's **Overview** pane is displayed when registration is complete. Record the **Directory (tenant) ID** and the **Application (client) ID** to be used in your application source code.
-On the app **Overview** page of your registration, note the **Application (client) ID** value for later use.
+## Create your project
-Register your **Redirect URI** value as **http://localhost:4200/** and type as 'SPA'.
+1. Open Visual Studio Code, select **File** > **Open Folder...**. Navigate to and select the location in which to create your project.
+1. Open a new terminal by selecting **Terminal** > **New Terminal**.
+ 1. You may need to switch terminal types. Select the down arrow next to the **+** icon in the terminal and select **Command Prompt**.
+1. Run the following commands to create a new Angular project named *msal-angular-tutorial*, install the Angular Material component libraries, MSAL Browser, and MSAL Angular, and generate the *home* and *profile* components.
+
+ ```cmd
+ npm install -g @angular/cli
+ ng new msal-angular-tutorial --routing=true --style=css --strict=false
+ cd msal-angular-tutorial
+ npm install @angular/material @angular/cdk
+ npm install @azure/msal-browser @azure/msal-angular
+ ng generate component home
+ ng generate component profile
+ ```
-## Configure the application
+## Configure the application and edit the base UI
-1. In the *src/app* folder, edit *app.module.ts* and add `MsalModule` and `MsalInterceptor` to `imports` as well as the `isIE` constant. Your code should look like this:
+1. Open *src/app/app.module.ts*. The `MsalModule` and `MsalInterceptor` need to be added to `imports`, along with the `isIE` constant. You'll also add the Angular Material modules. Replace the entire contents of the file with the following snippet:
```javascript import { BrowserModule } from '@angular/platform-browser';
+ import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { NgModule } from '@angular/core';
+ import { MatButtonModule } from '@angular/material/button';
+ import { MatToolbarModule } from '@angular/material/toolbar';
+ import { MatListModule } from '@angular/material/list';
+ import { AppRoutingModule } from './app-routing.module'; import { AppComponent } from './app.component'; import { HomeComponent } from './home/home.component'; import { ProfileComponent } from './profile/profile.component';
- import { MsalModule } from '@azure/msal-angular';
+ import { MsalModule, MsalRedirectComponent} from '@azure/msal-angular';
import { PublicClientApplication } from '@azure/msal-browser'; const isIE = window.navigator.userAgent.indexOf('MSIE ') > -1 || window.navigator.userAgent.indexOf('Trident/') > -1;
Register your **Redirect URI** value as **http://localhost:4200/** and type as '
], imports: [ BrowserModule,
+ BrowserAnimationsModule,
AppRoutingModule,
+ MatButtonModule,
+ MatToolbarModule,
+ MatListModule,
MsalModule.forRoot( new PublicClientApplication({ auth: { clientId: 'Enter_the_Application_Id_here', // Application (client) ID from the app registration
Register your **Redirect URI** value as **http://localhost:4200/** and type as '
}), null, null) ], providers: [],
- bootstrap: [AppComponent]
+ bootstrap: [AppComponent, MsalRedirectComponent]
}) export class AppModule { } ```
- Replace these values:
-
- |Value name|About|
- |||
- |Enter_the_Application_Id_Here|On the **Overview** page of your application registration, this is your **Application (client) ID** value. |
- |Enter_the_Cloud_Instance_Id_Here|This is the instance of the Azure cloud. For the main or global Azure cloud, enter **https://login.microsoftonline.com**. For national clouds (for example, China), see [National clouds](./authentication-national-cloud.md).|
- |Enter_the_Tenant_Info_Here| Set to one of the following options: If your application supports *accounts in this organizational directory*, replace this value with the directory (tenant) ID or tenant name (for example, **contoso.microsoft.com**). If your application supports *accounts in any organizational directory*, replace this value with **organizations**. If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with **common**. To restrict support to *personal Microsoft accounts only*, replace this value with **consumers**. |
- |Enter_the_Redirect_Uri_Here|Replace with **http://localhost:4200**.|
+1. Replace the following values with the values obtained from the Azure portal. For more information about available configurable options, see [Initialize client applications](msal-js-initializing-client-applications.md).
+ - `clientId` - The identifier of the application, also referred to as the client. Replace `Enter_the_Application_Id_Here` with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application.
+ - `authority` - This is composed of two parts:
+      - The *Instance* is the endpoint of the cloud provider. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For the endpoints available in national clouds, see [National clouds](authentication-national-cloud.md#azure-ad-authentication-endpoints).
+      - The *Tenant ID* is the identifier of the tenant where the application is registered. Replace `Enter_the_Tenant_Info_Here` with the **Directory (tenant) ID** value that was recorded earlier from the overview page of the registered application.
+    - `redirectUri` - The location where the authorization server sends the user once the app has been successfully authorized and granted an authorization code or access token. Replace `Enter_the_Redirect_Uri_Here` with `http://localhost:4200`.
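Putting the two parts of `authority` together is a simple concatenation; the sketch below uses a placeholder GUID for the tenant ID:

```javascript
// Sketch: composing the "authority" value from its two parts.
// The tenant ID below is a placeholder GUID, not a real tenant.
const instance = "https://login.microsoftonline.com";
const tenantId = "00000000-0000-0000-0000-000000000000";
const authority = `${instance}/${tenantId}`;
console.log(authority);
```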
- For more information about available configurable options, see [Initialize client applications](msal-js-initializing-client-applications.md).
-
-2. Add routes to the home and profile components in the *src/app/app-routing.module.ts*. Your code should look like the following:
+1. Open *src/app/app-routing.module.ts* and add routes to the *home* and *profile* components. Replace the entire contents of the file with the following snippet:
```javascript import { NgModule } from '@angular/core';
Register your **Redirect URI** value as **http://localhost:4200/** and type as '
export class AppRoutingModule { } ```
-## Replace base UI
-
-1. Replace the placeholder code in *src/app/app.component.html* with the following:
+1. Open *src/app/app.component.html* and replace the existing code with the following:
```HTML <mat-toolbar color="primary">
Register your **Redirect URI** value as **http://localhost:4200/** and type as '
</div> ```
-2. Add material modules to *src/app/app.module.ts*. Your `AppModule` should look like this:
-
- ```javascript
- import { BrowserModule } from '@angular/platform-browser';
- import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
- import { NgModule } from '@angular/core';
-
- import { MatButtonModule } from '@angular/material/button';
- import { MatToolbarModule } from '@angular/material/toolbar';
- import { MatListModule } from '@angular/material/list';
-
- import { AppRoutingModule } from './app-routing.module';
- import { AppComponent } from './app.component';
- import { HomeComponent } from './home/home.component';
- import { ProfileComponent } from './profile/profile.component';
-
- import { MsalModule } from '@azure/msal-angular';
- import { PublicClientApplication } from '@azure/msal-browser';
-
- const isIE = window.navigator.userAgent.indexOf('MSIE ') > -1 || window.navigator.userAgent.indexOf('Trident/') > -1;
-
- @NgModule({
- declarations: [
- AppComponent,
- HomeComponent,
- ProfileComponent
- ],
- imports: [
- BrowserModule,
- BrowserAnimationsModule,
- AppRoutingModule,
- MatButtonModule,
- MatToolbarModule,
- MatListModule,
- MsalModule.forRoot( new PublicClientApplication({
- auth: {
- clientId: 'Enter_the_Application_Id_here',
- authority: 'Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here',
- redirectUri: 'Enter_the_Redirect_Uri_Here'
- },
- cache: {
- cacheLocation: 'localStorage',
- storeAuthStateInCookie: isIE,
- }
- }), null, null)
- ],
- providers: [],
- bootstrap: [AppComponent]
- })
- export class AppModule { }
- ```
-
-3. (OPTIONAL) Add CSS to *src/style.css*:
+1. Open *src/style.css* to define the CSS:
```css @import '~@angular/material/prebuilt-themes/deeppurple-amber.css';
Register your **Redirect URI** value as **http://localhost:4200/** and type as '
.container { margin: 1%; } ```
-4. (OPTIONAL) Add CSS to *src/app/app.component.css*:
+4. Open *src/app/app.component.css* to add CSS styling to the application:
```css .toolbar-spacer {
Register your **Redirect URI** value as **http://localhost:4200/** and type as '
} ```
-## Sign in a user
-
-Add the code from the following sections to invoke login using a pop-up window or a full-frame redirect:
-
-### Sign in using pop-ups
+## Sign in using pop-ups
-1. Change the code in *src/app/app.component.ts* to the following to sign in a user using a pop-up window:
+1. Open *src/app/app.component.ts* and replace the contents of the file with the following to sign in a user using a pop-up window:
```javascript import { MsalService } from '@azure/msal-angular';
Add the code from the following sections to invoke login using a pop-up window o
} ```
-> [!NOTE]
-> The rest of this tutorial uses the `loginRedirect` method with Microsoft Internet Explorer because of a [known issue](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/internet-explorer.md) related to the handling of pop-up windows by Internet Explorer.
+## Sign in using redirects
-### Sign in using redirects
-
-1. Update *src/app/app.module.ts* to bootstrap the `MsalRedirectComponent`. This is a dedicated redirect component which will handle redirects. Your code should now look like this:
+1. Update *src/app/app.module.ts* to bootstrap the `MsalRedirectComponent`, a dedicated component that handles redirects. Change the `MsalModule` import and `AppComponent` bootstrap to resemble the following:
```javascript
- import { BrowserModule } from '@angular/platform-browser';
- import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
- import { NgModule } from '@angular/core';
-
- import { MatButtonModule } from '@angular/material/button';
- import { MatToolbarModule } from '@angular/material/toolbar';
- import { MatListModule } from '@angular/material/list';
-
- import { AppRoutingModule } from './app-routing.module';
- import { AppComponent } from './app.component';
- import { HomeComponent } from './home/home.component';
- import { ProfileComponent } from './profile/profile.component';
-
+ ...
import { MsalModule, MsalRedirectComponent } from '@azure/msal-angular'; // Updated import
- import { PublicClientApplication } from '@azure/msal-browser';
-
- const isIE = window.navigator.userAgent.indexOf('MSIE ') > -1 || window.navigator.userAgent.indexOf('Trident/') > -1;
-
- @NgModule({
- declarations: [
- AppComponent,
- HomeComponent,
- ProfileComponent
- ],
- imports: [
- BrowserModule,
- BrowserAnimationsModule,
- AppRoutingModule,
- MatButtonModule,
- MatToolbarModule,
- MatListModule,
- MsalModule.forRoot( new PublicClientApplication({
- auth: {
- clientId: 'Enter_the_Application_Id_here',
- authority: 'Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here',
- redirectUri: 'Enter_the_Redirect_Uri_Here'
- },
- cache: {
- cacheLocation: 'localStorage',
- storeAuthStateInCookie: isIE,
- }
- }), null, null)
- ],
- providers: [],
+ ...
bootstrap: [AppComponent, MsalRedirectComponent] // MsalRedirectComponent bootstrapped here
- })
- export class AppModule { }
+ ...
```
-2. Add the `<app-redirect>` selector to *src/https://docsupdatetracker.net/index.html*. This selector is used by the `MsalRedirectComponent`. Your *src/https://docsupdatetracker.net/index.html* should look like this:
+2. Open *src/index.html* and replace the entire contents of the file with the following snippet, which adds the `<app-redirect>` selector:
```HTML <!doctype html>
Add the code from the following sections to invoke login using a pop-up window o
</html> ```
-3. Replace the code in *src/app/app.component.ts* with the following to sign in a user using a full-frame redirect:
+3. Open *src/app/app.component.ts* and replace the code with the following to sign in a user using a full-frame redirect:
```javascript import { MsalService } from '@azure/msal-angular';
Add the code from the following sections to invoke login using a pop-up window o
} ```
-4. Replace existing code in *src/app/home/home.component.ts* to subscribe to the `LOGIN_SUCCESS` event. This will allow you to access the result from the successful login with redirect. Your code should look like this:
+4. Navigate to *src/app/home/home.component.ts* and replace the entire contents of the file with the following snippet to subscribe to the `LOGIN_SUCCESS` event:
```javascript import { Component, OnInit } from '@angular/core';
Add the code from the following sections to invoke login using a pop-up window o
## Conditional rendering
-In order to render certain UI only for authenticated users, components have to subscribe to the `MsalBroadcastService` to see if users have been signed in and interaction has completed.
+To render certain user interface (UI) elements only for authenticated users, components have to subscribe to the `MsalBroadcastService` to check whether users are signed in and interaction has completed.
1. Add the `MsalBroadcastService` to *src/app/app.component.ts* and subscribe to the `inProgress$` observable to check if interaction is complete and an account is signed in before rendering UI. Your code should now look like this:
In order to render certain UI only for authenticated users, components have to s
</div> ```
-## Guarding routes
+## Implement Angular Guard
-### Angular Guard
+You can use the `MsalGuard` class to protect routes and require authentication before a protected route can be accessed. The following steps add the `MsalGuard` to the `Profile` route. With this protection in place, even a user who hasn't signed in with the `Login` button is prompted by the `MsalGuard` to authenticate via pop-up or redirect when they try to access the `Profile` route or select the `Profile` button.
-MSAL Angular provides `MsalGuard`, a class you can use to protect routes and require authentication before accessing the protected route. The steps below add the `MsalGuard` to the `Profile` route. Protecting the `Profile` route means that even if a user does not sign in using the `Login` button, if they try to access the `Profile` route or click the `Profile` button, the `MsalGuard` will prompt the user to authenticate via pop-up or redirect before showing the `Profile` page.
-
-`MsalGuard` is a convenience class you can use improve the user experience, but it should not be relied upon for security. Attackers can potentially get around client-side guards, and you should ensure that the server does not return any data the user should not access.
+`MsalGuard` is a convenience class you can use to improve the user experience, but it shouldn't be relied upon for security. Attackers can potentially get around client-side guards, and you should ensure that the server doesn't return any data the user shouldn't access.
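The guard behavior can be sketched with plain objects. This is an illustration of the canActivate pattern only; `canNavigate`, `isAuthenticated`, and the route objects are hypothetical stand-ins, not MSAL or Angular APIs:

```javascript
// Plain-JavaScript sketch of the canActivate idea behind MsalGuard.
// All names here are hypothetical stand-ins, not MSAL APIs.
let signedIn = false;
const isAuthenticated = () => signedIn;

const routes = [
  { path: "", canActivate: [] },                       // Home: no guard
  { path: "profile", canActivate: [isAuthenticated] }  // Profile: guarded
];

// Navigation succeeds only if every guard on the route returns true.
function canNavigate(path) {
  const route = routes.find(r => r.path === path);
  if (!route) return false;
  return route.canActivate.every(guard => guard());
}

console.log(canNavigate("profile")); // false until the user signs in
signedIn = true;
console.log(canNavigate("profile"));
```

In the real application, the guard doesn't merely return `false` for an unauthenticated user; it triggers the configured pop-up or redirect interaction and then retries the navigation.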
1. Add the `MsalGuard` class as a provider in your application in *src/app/app.module.ts*, and add the configurations for the `MsalGuard`. Scopes needed for acquiring tokens later can be provided in the `authRequest`, and the type of interaction for the Guard can be set to `Redirect` or `Popup`. Your code should look like the following:
MSAL Angular provides `MsalGuard`, a class you can use to protect routes and req
MSAL Angular provides an `Interceptor` class that automatically acquires tokens for outgoing requests that use the Angular `http` client to known protected resources.
-1. Add the `Interceptor` class as a provider to your application in *src/app/app.module.ts*, with its configurations. Your code should now look like this:
+1. Add the `Interceptor` class as a provider to your application in *src/app/app.module.ts*, with its configurations. Your code should now look like the following:
```javascript import { BrowserModule } from '@angular/platform-browser';
MSAL Angular provides an `Interceptor` class that automatically acquires tokens
* `["<Application ID URL>/scope"]` for custom web APIs (that is, `api://<Application ID>/access_as_user`) Modify the values in the `protectedResourceMap` as described here:
+ - `Enter_the_Graph_Endpoint_Here` is the instance of the Microsoft Graph API the application should communicate with. For the **global** Microsoft Graph API endpoint, replace this string with `https://graph.microsoft.com`. For endpoints in **national** cloud deployments, see [National cloud deployments](/graph/deployments) in the Microsoft Graph documentation.
- |Value name| About|
- |-||
- |`Enter_the_Graph_Endpoint_Here`| The instance of the Microsoft Graph API the application should communicate with. For the **global** Microsoft Graph API endpoint, replace both instances of this string with `https://graph.microsoft.com`. For endpoints in **national** cloud deployments, see [National cloud deployments](/graph/deployments) in the Microsoft Graph documentation.|
-
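The `protectedResourceMap` lookup can be sketched in plain JavaScript. The endpoint and scope below are illustrative, and `scopesFor` is a hypothetical helper, not the MSAL interceptor itself:

```javascript
// Sketch (illustrative endpoint and scope): how an interceptor-style
// lookup maps an outgoing request URL to the scopes required for that
// protected resource.
const protectedResourceMap = new Map([
  ["https://graph.microsoft.com/v1.0/me", ["user.read"]]
]);

// Return the scopes for a protected URL, or null when the URL is not
// protected (in which case no token is attached to the request).
function scopesFor(url) {
  for (const [resource, scopes] of protectedResourceMap) {
    if (url.startsWith(resource)) return scopes;
  }
  return null;
}

console.log(scopesFor("https://graph.microsoft.com/v1.0/me"));
```

This is why only requests to URLs listed in the map carry an access token; calls to unlisted endpoints pass through untouched.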
-2. Replace the code in *src/app/profile/profile.component.ts* to retrieve a user's profile with an HTTP request:
+2. Replace the code in *src/app/profile/profile.component.ts* with the following to retrieve a user's profile with an HTTP request, and replace `GRAPH_ENDPOINT` with the Microsoft Graph endpoint:
```JavaScript import { Component, OnInit } from '@angular/core';
MSAL Angular provides an `Interceptor` class that automatically acquires tokens
} ```
-3. Replace the UI in *src/app/profile/profile.component.html* to display profile information:
+3. Replace the UI in *src/app/profile/profile.component.html* to display profile information:
```HTML <div>
MSAL Angular provides an `Interceptor` class that automatically acquires tokens
## Sign out
-Update the code in *src/app/app.component.html* to conditionally display a `Logout` button:
-
-```HTML
-<mat-toolbar color="primary">
- <a class="title" href="/">{{ title }}</a>
-
- <div class="toolbar-spacer"></div>
-
- <a mat-button [routerLink]="['profile']">Profile</a>
-
- <button mat-raised-button *ngIf="!loginDisplay" (click)="login()">Login</button>
- <button mat-raised-button *ngIf="loginDisplay" (click)="logout()">Logout</button>
+1. Update the code in *src/app/app.component.html* to conditionally display a `Logout` button:
-</mat-toolbar>
-<div class="container">
- <!--This is to avoid reload during acquireTokenSilent() because of hidden iframe -->
- <router-outlet *ngIf="!isIframe"></router-outlet>
-</div>
-```
+ ```HTML
+ <mat-toolbar color="primary">
+ <a class="title" href="/">{{ title }}</a>
+
+ <div class="toolbar-spacer"></div>
+
+ <a mat-button [routerLink]="['profile']">Profile</a>
+
+ <button mat-raised-button *ngIf="!loginDisplay" (click)="login()">Login</button>
+ <button mat-raised-button *ngIf="loginDisplay" (click)="logout()">Logout</button>
+
+ </mat-toolbar>
+ <div class="container">
+ <!--This is to avoid reload during acquireTokenSilent() because of hidden iframe -->
+ <router-outlet *ngIf="!isIframe"></router-outlet>
+ </div>
+ ```
### Sign out using redirects
-Update the code in *src/app/app.component.ts* to sign out a user using redirects:
-
-```javascript
-import { Component, OnInit, OnDestroy, Inject } from '@angular/core';
-import { MsalService, MsalBroadcastService, MSAL_GUARD_CONFIG, MsalGuardConfiguration } from '@azure/msal-angular';
-import { InteractionStatus, RedirectRequest } from '@azure/msal-browser';
-import { Subject } from 'rxjs';
-import { filter, takeUntil } from 'rxjs/operators';
-
-@Component({
- selector: 'app-root',
- templateUrl: './app.component.html',
- styleUrls: ['./app.component.css']
-})
-export class AppComponent implements OnInit, OnDestroy {
- title = 'msal-angular-tutorial';
- isIframe = false;
- loginDisplay = false;
- private readonly _destroying$ = new Subject<void>();
-
- constructor(@Inject(MSAL_GUARD_CONFIG) private msalGuardConfig: MsalGuardConfiguration, private broadcastService: MsalBroadcastService, private authService: MsalService) { }
-
- ngOnInit() {
- this.isIframe = window !== window.parent && !window.opener;
-
- this.broadcastService.inProgress$
- .pipe(
- filter((status: InteractionStatus) => status === InteractionStatus.None),
- takeUntil(this._destroying$)
- )
- .subscribe(() => {
- this.setLoginDisplay();
- })
- }
+1. Update the code in *src/app/app.component.ts* to sign out a user using redirects:
- login() {
- if (this.msalGuardConfig.authRequest){
- this.authService.loginRedirect({...this.msalGuardConfig.authRequest} as RedirectRequest);
- } else {
- this.authService.loginRedirect();
+ ```javascript
+ import { Component, OnInit, OnDestroy, Inject } from '@angular/core';
+ import { MsalService, MsalBroadcastService, MSAL_GUARD_CONFIG, MsalGuardConfiguration } from '@azure/msal-angular';
+ import { InteractionStatus, RedirectRequest } from '@azure/msal-browser';
+ import { Subject } from 'rxjs';
+ import { filter, takeUntil } from 'rxjs/operators';
+
+ @Component({
+ selector: 'app-root',
+ templateUrl: './app.component.html',
+ styleUrls: ['./app.component.css']
+ })
+ export class AppComponent implements OnInit, OnDestroy {
+ title = 'msal-angular-tutorial';
+ isIframe = false;
+ loginDisplay = false;
+ private readonly _destroying$ = new Subject<void>();
+
+ constructor(@Inject(MSAL_GUARD_CONFIG) private msalGuardConfig: MsalGuardConfiguration, private broadcastService: MsalBroadcastService, private authService: MsalService) { }
+
+ ngOnInit() {
+ this.isIframe = window !== window.parent && !window.opener;
+
+ this.broadcastService.inProgress$
+ .pipe(
+ filter((status: InteractionStatus) => status === InteractionStatus.None),
+ takeUntil(this._destroying$)
+ )
+ .subscribe(() => {
+ this.setLoginDisplay();
+ })
+ }
+
+ login() {
+ if (this.msalGuardConfig.authRequest){
+ this.authService.loginRedirect({...this.msalGuardConfig.authRequest} as RedirectRequest);
+ } else {
+ this.authService.loginRedirect();
+ }
+ }
+
+ logout() { // Add log out function here
+ this.authService.logoutRedirect({
+ postLogoutRedirectUri: 'http://localhost:4200'
+ });
+ }
+
+ setLoginDisplay() {
+ this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
+ }
+
+ ngOnDestroy(): void {
+ this._destroying$.next(undefined);
+ this._destroying$.complete();
+ }
}
- }
-
- logout() { // Add log out function here
- this.authService.logoutRedirect({
- postLogoutRedirectUri: 'http://localhost:4200'
- });
- }
-
- setLoginDisplay() {
- this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
- }
-
- ngOnDestroy(): void {
- this._destroying$.next(undefined);
- this._destroying$.complete();
- }
-}
-```
+ ```
### Sign out using pop-ups
-Update the code in *src/app/app.component.ts* to sign out a user using pop-ups:
-
-```javascript
-import { Component, OnInit, OnDestroy, Inject } from '@angular/core';
-import { MsalService, MsalBroadcastService, MSAL_GUARD_CONFIG, MsalGuardConfiguration } from '@azure/msal-angular';
-import { InteractionStatus, PopupRequest } from '@azure/msal-browser';
-import { Subject } from 'rxjs';
-import { filter, takeUntil } from 'rxjs/operators';
-
-@Component({
- selector: 'app-root',
- templateUrl: './app.component.html',
- styleUrls: ['./app.component.css']
-})
-export class AppComponent implements OnInit, OnDestroy {
- title = 'msal-angular-tutorial';
- isIframe = false;
- loginDisplay = false;
- private readonly _destroying$ = new Subject<void>();
-
- constructor(@Inject(MSAL_GUARD_CONFIG) private msalGuardConfig: MsalGuardConfiguration, private broadcastService: MsalBroadcastService, private authService: MsalService) { }
-
- ngOnInit() {
- this.isIframe = window !== window.parent && !window.opener;
-
- this.broadcastService.inProgress$
- .pipe(
- filter((status: InteractionStatus) => status === InteractionStatus.None),
- takeUntil(this._destroying$)
- )
- .subscribe(() => {
- this.setLoginDisplay();
- })
- }
+1. Update the code in *src/app/app.component.ts* to sign out a user using pop-ups:
- login() {
- if (this.msalGuardConfig.authRequest){
- this.authService.loginPopup({...this.msalGuardConfig.authRequest} as PopupRequest)
- .subscribe({
- next: (result) => {
- console.log(result);
- this.setLoginDisplay();
- },
- error: (error) => console.log(error)
- });
- } else {
- this.authService.loginPopup()
- .subscribe({
- next: (result) => {
- console.log(result);
- this.setLoginDisplay();
- },
- error: (error) => console.log(error)
+ ```javascript
+ import { Component, OnInit, OnDestroy, Inject } from '@angular/core';
+ import { MsalService, MsalBroadcastService, MSAL_GUARD_CONFIG, MsalGuardConfiguration } from '@azure/msal-angular';
+ import { InteractionStatus, PopupRequest } from '@azure/msal-browser';
+ import { Subject } from 'rxjs';
+ import { filter, takeUntil } from 'rxjs/operators';
+
+ @Component({
+ selector: 'app-root',
+ templateUrl: './app.component.html',
+ styleUrls: ['./app.component.css']
+ })
+ export class AppComponent implements OnInit, OnDestroy {
+ title = 'msal-angular-tutorial';
+ isIframe = false;
+ loginDisplay = false;
+ private readonly _destroying$ = new Subject<void>();
+
+ constructor(@Inject(MSAL_GUARD_CONFIG) private msalGuardConfig: MsalGuardConfiguration, private broadcastService: MsalBroadcastService, private authService: MsalService) { }
+
+ ngOnInit() {
+ this.isIframe = window !== window.parent && !window.opener;
+
+ this.broadcastService.inProgress$
+ .pipe(
+ filter((status: InteractionStatus) => status === InteractionStatus.None),
+ takeUntil(this._destroying$)
+ )
+ .subscribe(() => {
+ this.setLoginDisplay();
+ })
+ }
+
+ login() {
+ if (this.msalGuardConfig.authRequest){
+ this.authService.loginPopup({...this.msalGuardConfig.authRequest} as PopupRequest)
+ .subscribe({
+ next: (result) => {
+ console.log(result);
+ this.setLoginDisplay();
+ },
+ error: (error) => console.log(error)
+ });
+ } else {
+ this.authService.loginPopup()
+ .subscribe({
+ next: (result) => {
+ console.log(result);
+ this.setLoginDisplay();
+ },
+ error: (error) => console.log(error)
+ });
+ }
+ }
+
+ logout() { // Add log out function here
+ this.authService.logoutPopup({
+ mainWindowRedirectUri: "/"
});
+ }
+
+ setLoginDisplay() {
+ this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
+ }
+
+ ngOnDestroy(): void {
+ this._destroying$.next(undefined);
+ this._destroying$.complete();
+ }
}
- }
-
- logout() { // Add log out function here
- this.authService.logoutPopup({
- mainWindowRedirectUri: "/"
- });
- }
-
- setLoginDisplay() {
- this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
- }
-
- ngOnDestroy(): void {
- this._destroying$.next(undefined);
- this._destroying$.complete();
- }
-}
-```
+ ```
## Test your code
-1. Start the web server to listen to the port by running the following commands at a command-line prompt from the application folder:
+1. Start the web server to listen to the port by running the following commands at a command-line prompt from the application folder:
    ```bash
    npm install
    npm start
    ```
-1. In your browser, enter **http://localhost:4200** or **http://localhost:{port}**, where *port* is the port that your web server is listening on. You should see a page that looks like the one below.
+1. In your browser, enter `http://localhost:4200`, and you should see a page that looks like the following.
:::image type="content" source="media/tutorial-v2-angular-auth-code/angular-01-not-signed-in.png" alt-text="Web browser displaying sign-in dialog":::
+1. Select **Accept** to grant the app permissions to your profile. This prompt appears the first time that you sign in.
-### Provide consent for application access
-
-The first time that you start to sign in to your application, you're prompted to grant it access to your profile and allow it to sign you in:
--
-If you consent to the requested permissions, the web application shows a successful login page:
+ :::image type="content" source="media/tutorial-v2-javascript-auth-code/spa-02-consent-dialog.png" alt-text="Content dialog displayed in web browser":::
+1. If you consent to the requested permissions, the web application shows a successful login page.
-### Call the Graph API
+ :::image type="content" source="media/tutorial-v2-angular-auth-code/angular-02-signed-in.png" alt-text="Results of a successful sign-in in the web browser":::
-After you sign in, select **Profile** to view the user profile information returned in the response from the call to the Microsoft Graph API:
+1. Select **Profile** to view the user profile information returned in the response from the call to the Microsoft Graph API:
+ :::image type="content" source="media/tutorial-v2-angular-auth-code/angular-03-profile-data.png" alt-text="Profile information from Microsoft Graph displayed in the browser":::
## Add scopes and delegated permissions
-The Microsoft Graph API requires the _User.Read_ scope to read a user's profile. The _User.Read_ scope is added automatically to every app registration you create in the Azure portal. Other APIs for Microsoft Graph, as well as custom APIs for your back-end server, might require additional scopes. For example, the Microsoft Graph API requires the _Mail.Read_ scope in order to list the user's email.
+The Microsoft Graph API requires the _User.Read_ scope to read a user's profile. The _User.Read_ scope is added automatically to every app registration you create in the Azure portal. Other APIs for Microsoft Graph, and custom APIs for your back-end server, might require other scopes. For example, the Microsoft Graph API requires the _Mail.Read_ scope in order to list the user's email.
-As you add scopes, your users might be prompted to provide additional consent for the added scopes.
+As you add scopes, your users might be prompted to provide extra consent for the added scopes.
>[!NOTE]
>The user might be prompted for additional consents as you increase the number of scopes.
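As an illustrative sketch only (not this tutorial's exact code), the following shows how an added scope such as _Mail.Read_ might be merged with the default _User.Read_ scope before being passed to a method like `loginPopup` or `acquireTokenSilent`; the scope names come from the text above, but the request shape is a minimal assumption:

```javascript
// Hypothetical sketch: assembling a request that asks for an extra
// delegated permission (Mail.Read) on top of the default User.Read.
const baseScopes = ["User.Read"];   // added automatically at app registration
const extraScopes = ["Mail.Read"];  // needed to list the user's email

// Merge and de-duplicate before passing the request to MSAL.
const loginRequest = {
  scopes: [...new Set([...baseScopes, ...extraScopes])]
};

console.log(loginRequest.scopes); // logs the merged scope list: User.Read, Mail.Read
```

De-duplicating with a `Set` avoids requesting the same scope twice when the base and extra scope lists overlap.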
As you add scopes, your users might be prompted to provide additional consent fo
## Next steps
-Delve deeper into single-page application (SPA) development on the Microsoft identity platform in our the multi-part article series.
+Delve deeper into single-page application (SPA) development on the Microsoft identity platform in our multi-part article series.
> [!div class="nextstepaction"]
> [Scenario: Single-page application](scenario-spa-overview.md)
active-directory V2 App Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-app-types.md
In this flow, the app receives an authorization code from the Microsoft identity
![Shows the native app authentication flow](./media/v2-app-types/convergence-scenarios-native.svg)

> [!NOTE]
-> If the application uses the default system webview, check the information about "Confirm My Sign-In" functionality and error code AADSTS50199 in [Azure AD authentication and authorization error codes](reference-aadsts-error-codes.md).
+> If the application uses the default system webview, check the information about "Confirm My Sign-In" functionality and error code AADSTS50199 in [Azure AD authentication and authorization error codes](reference-error-codes.md).
## Daemons and server-side apps
active-directory V2 Howto Get Appsource Certified https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-howto-get-appsource-certified.md
The customer-led trial is the experience that AppSource recommends as it offers
1. A user finds your application in the AppSource web site, then selects **Free trial** option.
- ![Shows Free trial for customer-led trial experience](./media/active-directory-devhowto-appsource-certified/customer-led-trial-step1.png)
+ ![Shows Free trial for customer-led trial experience](./media/devhowto-appsource-certified/customer-led-trial-step1.png)
2. AppSource redirects the user to a URL in your web site. Your web site starts the *single-sign-on* process automatically (on page load).
- ![Shows how user is redirected to a URL in your web site](./media/active-directory-devhowto-appsource-certified/customer-led-trial-step2.png)
+ ![Shows how user is redirected to a URL in your web site](./media/devhowto-appsource-certified/customer-led-trial-step2.png)
3. The user is redirected to the Microsoft sign-in page and the user provides credentials to sign in.
- ![Shows the Microsoft sign-in page](./media/active-directory-devhowto-appsource-certified/customer-led-trial-step3.png)
+ ![Shows the Microsoft sign-in page](./media/devhowto-appsource-certified/customer-led-trial-step3.png)
4. The user gives consent for your application.
- ![Example: Consent page for an application](./media/active-directory-devhowto-appsource-certified/customer-led-trial-step4.png)
+ ![Example: Consent page for an application](./media/devhowto-appsource-certified/customer-led-trial-step4.png)
5. Sign-in completes and the user is redirected back to your web site. The user starts the free trial.
- ![Shows the experience the user sees when redirected back to your site](./media/active-directory-devhowto-appsource-certified/customer-led-trial-step5.png)
+ ![Shows the experience the user sees when redirected back to your site](./media/devhowto-appsource-certified/customer-led-trial-step5.png)
### Contact me (partner-led trial experience)
You can use the partner trial experience when a manual or a long-term operation
1. A user finds your application in AppSource web site, then selects **Contact Me**.
- ![Shows Contact me for partner-led trial experience](./media/active-directory-devhowto-appsource-certified/partner-led-trial-step1.png)
+ ![Shows Contact me for partner-led trial experience](./media/devhowto-appsource-certified/partner-led-trial-step1.png)
2. The user fills out a form with contact information.
- ![Shows an example form with contact info](./media/active-directory-devhowto-appsource-certified/partner-led-trial-step2.png)
+ ![Shows an example form with contact info](./media/devhowto-appsource-certified/partner-led-trial-step2.png)
3. You receive the user's information, set up a trial instance, and send the hyperlink to access your application to the user.
- ![Shows placeholder for user information](./media/active-directory-devhowto-appsource-certified/usercontact.png)
+ ![Shows placeholder for user information](./media/devhowto-appsource-certified/usercontact.png)
4. The user accesses your application and completes the single sign-on process.
- ![Shows the application sign-in screen](./media/active-directory-devhowto-appsource-certified/partner-led-trial-step3.png)
+ ![Shows the application sign-in screen](./media/devhowto-appsource-certified/partner-led-trial-step3.png)
5. The user gives consent for your application.
- ![Shows an example consent page for an application](./media/active-directory-devhowto-appsource-certified/partner-led-trial-step4.png)
+ ![Shows an example consent page for an application](./media/devhowto-appsource-certified/partner-led-trial-step4.png)
6. Sign-in completes and the user is redirected back to your web site. The user starts the free trial.
- ![Shows the experience the user sees when redirected back to your site](./media/active-directory-devhowto-appsource-certified/customer-led-trial-step5.png)
+ ![Shows the experience the user sees when redirected back to your site](./media/devhowto-appsource-certified/customer-led-trial-step5.png)
### More information
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 04/03/2023 Last updated : 05/02/2023
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## April 2023
+
+### New articles
+
+- [Configure token lifetime policies (preview)](configure-token-lifetimes.md)
+- [Secure applications and APIs by validating claims](claims-validation.md)
+
+### Updated articles
+
+- [Authentication flow support in MSAL](msal-authentication-flows.md)
+- [A web app that calls web APIs: Acquire a token for the app](scenario-web-app-call-api-acquire-token.md)
+- [A web app that calls web APIs: Code configuration](scenario-web-app-call-api-app-configuration.md)
+- [Customize claims issued in the JSON web token (JWT) for enterprise applications (Preview)](active-directory-jwt-claims-customization.md)
+- [Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)
+- [Daemon app that calls web APIs - acquire a token](scenario-daemon-acquire-token.md)
+- [Daemon app that calls web APIs - call a web API from the app](scenario-daemon-call-api.md)
+- [Daemon app that calls web APIs - code configuration](scenario-daemon-app-configuration.md)
+- [Desktop app that calls web APIs: Acquire a token using WAM](scenario-desktop-acquire-token-wam.md)
+- [Microsoft identity platform access tokens](access-tokens.md)
+- [Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity](quickstart-v2-netcore-daemon.md)
+- [Tutorial: Sign in users and call the Microsoft Graph API from an Android application](tutorial-v2-android.md)
+- [Web app that signs in users: App registration](scenario-web-app-sign-user-app-registration.md)
+- [Web app that signs in users: Code configuration](scenario-web-app-sign-user-app-configuration.md)
+- [Web app that signs in users: Sign-in and sign-out](scenario-web-app-sign-user-sign-in.md)
+
## March 2023

### New articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Overview of shared device mode](msal-shared-devices.md)
- [Run automated integration tests](test-automate-integration-testing.md)
- [Tutorial: Sign in users and call Microsoft Graph in Windows Presentation Foundation (WPF) desktop app](tutorial-v2-windows-desktop.md)
-
-## January 2023
-
-### New articles
-
-- [Customize claims issued in the JSON web token (JWT) for enterprise applications](jwt-claims-customization.md)
-
-### Updated articles
-
-- [Access Azure AD protected resources from an app in Google Cloud](workload-identity-federation-create-trust-gcp.md)
-- [Configure SAML app multi-instancing for an application in Azure Active Directory](reference-app-multi-instancing.md)
-- [Customize browsers and WebViews for iOS/macOS](customize-webviews.md)
-- [Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)
-- [Enable cross-app SSO on Android using MSAL](msal-android-single-sign-on.md)
-- [Using redirect URIs with the Microsoft Authentication Library (MSAL) for iOS and macOS](redirect-uris-ios.md)
active-directory Clean Up Unmanaged Azure Ad Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-unmanaged-azure-ad-accounts.md
Previously updated : 03/28/2023 Last updated : 05/02/2023
To learn more, see, [What is self-service sign-up for Azure AD?](./directory-sel
Use the following guidance to remove unmanaged Azure AD accounts from Azure AD tenants. Tool features help identify viral users in the Azure AD tenant. You can reset the user redemption status.
-* Use the sample application in [Azure-samples/Remove-unmanaged-guests](https://github.com/Azure-Samples/Remove-Unmanaged-Guests)
-* Use PowerShell cmdlets in [AzureAD/MSIdentityTools](https://github.com/AzureAD/MSIdentityTools/wiki/)
+* Use the sample application in [Azure-samples/Remove-unmanaged-guests](https://github.com/Azure-Samples/Remove-Unmanaged-Guests).
+* Use PowerShell cmdlets in [AzureAD/MSIdentityTools](https://github.com/AzureAD/MSIdentityTools/wiki/).
+
+### Redeem invitations
After you run a tool, users with unmanaged Azure AD accounts must re-redeem their invitations to access the tenant. However, Azure AD prevents users from redeeming with an unmanaged Azure AD account. They can redeem with another account type. Google Federation and SAML/WS-Federation aren't enabled by default. Therefore, users redeem with a Microsoft account (MSA) or email one-time password (OTP). MSA is recommended.
To delete unmanaged Azure AD accounts, run:
* `Connect-MgGraph -Scopes User.ReadWriteAll` * `Get-MsIdUnmanagedExternalUser | Remove-MgUser`
-## Resources
+## Resource
-See, [Get-MSIdUnmanagedExternalUser](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MsIdUnmanagedExternalUser). The tool returns a list of external unmanaged users, or viral users, in the tenant.
+The following tool returns a list of external unmanaged users, or viral users, in the tenant. See [Get-MSIdUnmanagedExternalUser](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MsIdUnmanagedExternalUser).
active-directory Security Operations Privileged Identity Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-identity-management.md
The following are recommended baseline settings:
| What to monitor| Risk level| Recommendation| Roles| Notes | | - |- |- |- |- |
-| Azure AD roles assignment| High| Require justification for activation.Require approval to activate. Set two-level approver process. On activation, require Azure AD Multi-Factor Authentication (MFA). Set maximum elevation duration to 8 hrs.| Privileged Role Administration, Global Administrator| A privileged role administrator can customize PIM in their Azure AD organization, including changing the experience for users activating an eligible role assignment. |
+| Azure AD roles assignment| High| Require justification for activation. Require approval to activate. Set two-level approver process. On activation, require Azure AD Multi-Factor Authentication (MFA). Set maximum elevation duration to 8 hrs.| Privileged Role Administration, Global Administrator| A privileged role administrator can customize PIM in their Azure AD organization, including changing the experience for users activating an eligible role assignment. |
| Azure Resource Role Configuration| High| Require justification for activation. Require approval to activate. Set two-level approver process. On activation, require Azure AD Multi-Factor Authentication. Set maximum elevation duration to 8 hrs.| Owner, Resource Administrator, User Access, Administrator, Global Administrator, Security Administrator| Investigate immediately if not a planned change. This setting might enable attacker access to Azure subscriptions in your environment. | ## Azure AD roles assignment
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The refreshed Authentication Methods Activity dashboard gives admins an overview
**Service category:** Other
**Product capability:** User Authentication
-Refresh and session token lifetimes configurability in CTL are retired. Azure Active Directory no longer honors refresh and session token configuration in existing policies. [Learn more](../develop/active-directory-configurable-token-lifetimes.md#token-lifetime-policies-for-refresh-tokens-and-session-tokens).
+Refresh and session token lifetimes configurability in CTL are retired. Azure Active Directory no longer honors refresh and session token configuration in existing policies. [Learn more](../develop/configurable-token-lifetimes.md#token-lifetime-policies-for-refresh-tokens-and-session-tokens).
active-directory Access Reviews Application Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-application-preparation.md
Once the reviews have started, you can monitor their progress, and update the ap
1. If you wish, you can also download a [review history report](access-reviews-downloadable-review-history.md) of completed reviews.
-1. How long a user who has been denied continued access is able to continue to use a federated application will depend upon the application's own session lifetime, and on the access token lifetime. If the applications used Kerberos, since Kerberos caches the group memberships of a user when they sign into a domain, the users may continue to have access until their Kerberos tickets expire. To learn more about controlling the lifetime of access tokens, see [configurable token lifetimes](../develop/active-directory-configurable-token-lifetimes.md).
+1. How long a user who has been denied continued access is able to continue to use a federated application will depend upon the application's own session lifetime, and on the access token lifetime. If the applications used Kerberos, since Kerberos caches the group memberships of a user when they sign into a domain, the users may continue to have access until their Kerberos tickets expire. To learn more about controlling the lifetime of access tokens, see [configurable token lifetimes](../develop/configurable-token-lifetimes.md).
## Next steps
active-directory Identity Governance Applications Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-deploy.md
Conditional access is only possible for applications that rely upon Azure AD for
1. **Verify users are ready for Azure Active Directory Multi-Factor Authentication.** We recommend requiring Azure AD Multi-Factor Authentication for business critical applications integrated via federation. For these applications, there should be a policy that requires the user to have met a multi-factor authentication requirement prior to Azure AD permitting them to sign into the application. Some organizations may also block access by locations, or [require the user to access from a registered device](../conditional-access/howto-conditional-access-policy-compliant-device.md). If there's no suitable policy already that includes the necessary conditions for authentication, location, device and TOU, then [add a policy to your conditional access deployment](../conditional-access/plan-conditional-access.md).
1. **Bring the application web endpoint into scope of the appropriate conditional access policy**. If you have an existing conditional access policy that was created for another application subject to the same governance requirements, you could update that policy to have it apply to this application as well, to avoid having a large number of policies. Once you have made the updates, check to ensure that the expected policies are being applied. You can see what policies would apply to a user with the [Conditional Access what if tool](../conditional-access/troubleshoot-conditional-access-what-if.md).
1. **Create a recurring access review if any users will need temporary policy exclusions**. In some cases, it may not be possible to immediately enforce conditional access policies for every authorized user. For example, some users may not have an appropriate registered device. If it's necessary to exclude one or more users from the CA policy and allow them access, then configure an access review for the group of [users who are excluded from Conditional Access policies](../governance/conditional-access-exclusion.md).
-1. **Document the token lifetime and application's session settings.** How long a user who has been denied continued access can continue to use a federated application will depend upon the application's own session lifetime, and on the access token lifetime. The session lifetime for an application depends upon the application itself. To learn more about controlling the lifetime of access tokens, see [configurable token lifetimes](../develop/active-directory-configurable-token-lifetimes.md).
+1. **Document the token lifetime and application's session settings.** How long a user who has been denied continued access can continue to use a federated application will depend upon the application's own session lifetime, and on the access token lifetime. The session lifetime for an application depends upon the application itself. To learn more about controlling the lifetime of access tokens, see [configurable token lifetimes](../develop/configurable-token-lifetimes.md).
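To make the interplay between token lifetime and application session lifetime concrete, here is a small illustrative sketch (not part of the article itself); the one-hour access token lifetime is an assumed default, and `accessExpiresAt` is a hypothetical helper, not an MSAL API:

```javascript
// Illustrative sketch only: how long a denied user can keep using a
// federated app is bounded by the access token lifetime and the app's
// own session lifetime. The one-hour token lifetime is an assumption.
const ACCESS_TOKEN_LIFETIME_MS = 60 * 60 * 1000; // assumed ~1 hour default

function accessExpiresAt(tokenIssuedAtMs, appSessionLifetimeMs) {
  // Worst case: the app establishes its own session just as the token
  // is issued, so access can persist for the longer of the two windows.
  return tokenIssuedAtMs + Math.max(ACCESS_TOKEN_LIFETIME_MS, appSessionLifetimeMs);
}

const issued = Date.parse("2023-05-02T09:00:00Z");
const expires = accessExpiresAt(issued, 8 * 60 * 60 * 1000); // 8-hour app session
console.log(new Date(expires).toISOString()); // "2023-05-02T17:00:00.000Z"
```

The longer either window is, the longer a denied user may retain access, which is why documenting both settings matters when planning governance.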
## Deploy entitlement management policies for automating access assignment
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
To see the default behavior in your environment for newly created groups, use th
You can also use the PowerShell cmdlet [AzureADDirectorySetting](../enterprise-users/groups-settings-cmdlets.md).
-> Example: `(Get-AzureADDirectorySetting | ? { $_.DisplayName -eq "Group.Unified"} | Select-Object -ExpandProperty Values`
+> Example: `Get-AzureADDirectorySetting | ? { $_.DisplayName -eq "Group.Unified"} | Select-Object -ExpandProperty Values`
> If nothing is returned, you're using the default directory settings. Newly created Microsoft 365 groups *will automatically* be written back.
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
To read more about securing your Active Directory environment, see [Best practic
#### Installation prerequisites
- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later. You can deploy Azure AD Connect on Windows Server 2016 but since Windows Server 2016 is in extended support, you may require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for this configuration. We recommend the usage of domain joined Windows Server 2022.
-- The minimum .NET Framework version required is 4.6.2, and newer versions of .Net are also supported.
+- The minimum .NET Framework version required is 4.6.2, and newer versions of .NET are also supported.
- Azure AD Connect can't be installed on Small Business Server or Windows Server Essentials before 2019 (Windows Server Essentials 2019 is supported). The server must be using Windows Server standard or better.
- The Azure AD Connect server must have a full GUI installed. Installing Azure AD Connect on Windows Server Core isn't supported.
- The Azure AD Connect server must not have PowerShell Transcription Group Policy enabled if you use the Azure AD Connect wizard to manage Active Directory Federation Services (AD FS) configuration. You can enable PowerShell transcription if you use the Azure AD Connect wizard to manage sync configuration.
active-directory Access Panel Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/access-panel-collections.md
Last updated 09/02/2021
+ #customer intent: As an admin, I want to enable and create collections for My Apps portal in Azure AD so that I can create a simpler My Apps experience for users.
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-configure.md
Last updated 01/26/2023
zone_pivot_groups: enterprise-apps-minus-aad-powershell+ #Customer intent: As an administrator of an Azure AD tenant, I want to configure the properties of an enterprise application.
active-directory Add Application Portal Setup Oidc Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
Last updated 04/14/2023 + # Add an OpenID Connect-based single sign-on application
active-directory Add Application Portal Setup Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
Last updated 09/29/2022 -++ #Customer intent: As an administrator of an Azure AD tenant, I want to enable single sign-on for an enterprise application.
active-directory Admin Consent Workflow Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-faq.md
Last updated 05/27/2022
-+
+
# Azure Active Directory admin consent workflow frequently asked questions

## I enabled a workflow, but when testing the functionality, why can't I see the new "Approval required" prompt that allows me to request access?
active-directory Admin Consent Workflow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-overview.md
Last updated 11/02/2022
+ #customer intent: As an admin, I want to learn about the admin consent workflow and how it affects end-user and admin consent experience
active-directory App Management Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/app-management-powershell-samples.md
Last updated 02/18/2021 + # Azure Active Directory PowerShell examples for Application Management
active-directory App Management Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/app-management-videos.md
Last updated 05/31/2022 + # Application management videos
active-directory Application List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-list.md
Last updated 01/07/2022 + # Applications listed in Enterprise applications
active-directory Application Management Certs Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-management-certs-faq.md
Last updated 03/03/2023 + # Application Management certificates frequently asked questions
active-directory Application Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-properties.md
Last updated 09/06/2022 ++ #Customer intent: As an administrator of an Azure AD tenant, I want to learn more about the properties of an enterprise application that I can configure.
active-directory Application Sign In Other Problem Access Panel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-other-problem-access-panel.md
Last updated 02/01/2022 -+ # Troubleshoot application sign-in
active-directory Application Sign In Problem Application Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md
Last updated 09/06/2022
+ # An app page shows an error message after the user signs in
In this scenario, Azure Active Directory (Azure AD) signs the user in. But the a
There are several possible reasons why the app didn't accept the response from Azure AD. If there's an error message or code displayed, use the following resources to diagnose the error:
-* [Azure AD Authentication and authorization error codes](../develop/reference-aadsts-error-codes.md)
+* [Azure AD Authentication and authorization error codes](../develop/reference-error-codes.md)
* [Troubleshooting consent prompt errors](application-sign-in-unexpected-user-consent-error.md)
To change the signing algorithm, follow these steps:
* [How to debug SAML-based single sign-on to applications in Azure AD](./debug-saml-sso-issues.md).
-* [Azure AD Authentication and authorization error codes](../develop/reference-aadsts-error-codes.md)
+* [Azure AD Authentication and authorization error codes](../develop/reference-error-codes.md)
* [Troubleshooting consent prompt errors](application-sign-in-unexpected-user-consent-error.md)
active-directory Application Sign In Problem First Party Microsoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-problem-first-party-microsoft.md
Last updated 09/10/2018
+ # Problems signing in to a Microsoft application
active-directory Application Sign In Unexpected User Consent Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-error.md
Last updated 09/06/2022
+ # Unexpected error when performing consent to an application
active-directory Application Sign In Unexpected User Consent Prompt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
Last updated 09/07/2022
+ # Unexpected consent prompt when signing in to an application
active-directory Assign App Owners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-app-owners.md
Last updated 01/26/2023
zone_pivot_groups: enterprise-apps-minus-aad-powershell+ #Customer intent: As an Azure AD administrator, I want to assign owners to enterprise applications.
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
Last updated 11/22/2022 -+ zone_pivot_groups: enterprise-apps-all #customer intent: As an admin, I want to manage user assignment for an app in Azure Active Directory using PowerShell
active-directory Certificate Signing Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/certificate-signing-options.md
Last updated 07/21/2022 -+
active-directory Cloud App Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/cloud-app-security.md
Last updated 07/29/2021
+ # Cloud app visibility and control
active-directory Cloudflare Azure Ad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/cloudflare-azure-ad-integration.md
Last updated 6/27/2022 + # Tutorial: Configure Cloudflare with Azure Active Directory for secure hybrid access
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
Last updated 09/02/2022
-+ #customer intent: As an admin, I want to configure the admin consent workflow.
active-directory Configure Authentication For Federated Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
Last updated 03/16/2023 -+ zone_pivot_groups: home-realm-discovery
active-directory Configure Linked Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-linked-sign-on.md
Last updated 09/22/2021 + # Customer intent: As an IT admin, I need to know how to implement linked single sign-on in Azure Active Directory.
active-directory Configure Password Single Sign On Non Gallery Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
Last updated 04/25/2023 + # Customer intent: As an IT admin, I need to know how to implement password-based single sign-on in Azure Active Directory.
active-directory Configure Risk Based Step Up Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-risk-based-step-up-consent.md
Last updated 11/17/2021 -+ #customer intent: As an admin, I want to configure risk-based step-up consent.
active-directory Configure User Consent Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent-groups.md
Last updated 09/06/2022 -+ #customer intent: As an admin, I want to configure group owner consent to apps accessing group data using Azure AD
active-directory Configure User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md
Last updated 04/19/2023 -+ zone_pivot_groups: enterprise-apps-minus-aad-powershell
active-directory Custom Security Attributes Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/custom-security-attributes-apps.md
Last updated 02/28/2023
zone_pivot_groups: enterprise-apps-all+
active-directory Datawiza Azure Ad Sso Mfa Oracle Ebs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-mfa-oracle-ebs.md
Last updated 01/26/2023 + # Configure Datawiza for Azure AD Multi-Factor Authentication and single sign-on to Oracle EBS
active-directory Datawiza Azure Ad Sso Oracle Jde https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-jde.md
Last updated 01/24/2023 + # Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle JD Edwards
active-directory Datawiza Azure Ad Sso Oracle Peoplesoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-peoplesoft.md
Last updated 01/25/2023 + # Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle PeopleSoft
active-directory Datawiza With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-with-azure-ad.md
Last updated 01/23/2023 -+ # Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Datawiza
active-directory Debug Saml Sso Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/debug-saml-sso-issues.md
Last updated 05/27/2022+ # Debug SAML-based single sign-on to applications
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md
Last updated 07/28/2022
zone_pivot_groups: enterprise-apps-all+ #Customer intent: As an administrator of an Azure AD tenant, I want to delete an enterprise application.
active-directory Disable User Sign In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
Last updated 2/23/2023 -+ zone_pivot_groups: enterprise-apps-all
active-directory End User Experiences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/end-user-experiences.md
Last updated 12/08/2022 + # End-user experiences for applications
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-integration.md
Last updated 12/13/2022 + # Integrate F5 BIG-IP with Azure Active Directory
active-directory F5 Aad Password Less Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-password-less-vpn.md
Last updated 12/13/2022
+ # Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO
active-directory F5 Big Ip Forms Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-forms-advanced.md
Last updated 03/27/2023 + # Configure F5 BIG-IP Access Policy Manager for form-based SSO
active-directory F5 Big Ip Header Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-header-advanced.md
Last updated 03/22/2023 + # Tutorial: Configure F5 BIG-IP Access Policy Manager for header-based single sign-on
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
Last updated 03/27/2023 + # Tutorial: Configure F5 BIG-IP Easy Button for header-based SSO
active-directory F5 Big Ip Kerberos Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md
Last updated 12/13/2022 + # Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
Last updated 12/14/2022 + # Tutorial: Configure F5 BIG-IP Easy Button for Kerberos single sign-on
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
Last updated 12/14/2022 + # Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP single sign-on
active-directory F5 Big Ip Oracle Enterprise Business Suite Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md
Last updated 03/23/2023 + # Tutorial: Configure F5 BIG-IP Easy Button for SSO to Oracle EBS
active-directory F5 Big Ip Oracle Jde Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-jde-easy-button.md
Last updated 02/03/2022 + # Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle JDE
active-directory F5 Big Ip Oracle Peoplesoft Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-peoplesoft-easy-button.md
Last updated 02/26/2022 + # Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle PeopleSoft
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-sap-erp-easy-button.md
Title: Configure F5 BIG-IP Easy Button for SSO to SAP ERP
-description: Learn to secure SAP ERP using Azure Active Directory (Azure AD), through F5's BIG-IP Easy Button guided configuration.
+description: Learn to secure SAP ERP using Azure AD with F5 BIG-IP Easy Button Guided Configuration.
Previously updated : 3/1/2022 Last updated : 05/02/2023 +
-# Tutorial: Configure F5's BIG-IP Easy Button for SSO to SAP ERP
+# Tutorial: Configure F5 BIG-IP Easy Button for SSO to SAP ERP
-In this article, learn to secure SAP ERP using Azure Active Directory (Azure AD), through F5's BIG-IP Easy Button guided configuration.
+In this article, learn to secure SAP ERP using Azure Active Directory (Azure AD), with F5 BIG-IP Easy Button Guided Configuration 16.1. Integrating a BIG-IP with Azure AD has many benefits:
-Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
+* [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
+* [What is Conditional Access?](../conditional-access/overview.md)
+* Single sign-on (SSO) between Azure AD and BIG-IP published services
+* Manage identities and access from the [Azure portal](https://portal.azure.com/)
-* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
+Learn more:
-* Full SSO between Azure AD and BIG-IP published services
-
-* Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
-
-To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+* [Integrate F5 BIG-IP with Azure AD](./f5-aad-integration.md)
+* [Enable SSO for an enterprise application](add-application-portal-setup-sso.md).
## Scenario description
-This scenario looks at the classic **SAP ERP application using Kerberos authentication** to manage access to protected content.
+This scenario includes the SAP ERP application using Kerberos authentication to manage access to protected content.
-Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
+Legacy applications lack modern protocols to support integration with Azure AD. Modernization is costly, requires planning, and introduces potential downtime risk. Instead, use an F5 BIG-IP Application Delivery Controller (ADC) to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
-Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and headers-based SSO, significantly improving the overall security posture of the application.
+A BIG-IP in front of the application enables overlay of the service with Azure AD preauthentication and headers-based SSO. This configuration improves overall application security posture.
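The header-based SSO described above can be sketched from the back end's point of view. This is a minimal illustration, not part of the article's configuration: the header name `X-Remote-User` is an assumption, since the actual header is whatever the BIG-IP APM SSO profile is configured to inject.

```python
from http.server import BaseHTTPRequestHandler

# Assumed header name -- the real one is whatever the BIG-IP APM
# SSO profile is configured to inject.
USER_HEADER = "X-Remote-User"

def resolve_user(headers):
    """Return the proxy-asserted user, or None if the header is absent."""
    user = headers.get(USER_HEADER)
    return user or None

class HeaderSSOHandler(BaseHTTPRequestHandler):
    """Legacy-style back end that trusts the identity header injected
    by the reverse proxy (BIG-IP APM) in front of it."""

    def do_GET(self):
        user = resolve_user(self.headers)
        if user is None:
            # No identity header: the request didn't come through the proxy.
            self.send_error(401, "Missing identity header")
            return
        body = f"Hello, {user}".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

Because the back end trusts the header blindly, the network path must guarantee that only the BIG-IP can reach it.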
## Scenario architecture
-The SHA solution for this scenario is made up of the following:
-
-**SAP ERP application:** BIG-IP published service to be protected by and Azure AD SHA.
-
-**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML based SSO to the BIG-IP.
+The secure hybrid access (SHA) solution has the following components:
-**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the SAP service.
+* **SAP ERP application** - a BIG-IP published service protected by Azure AD SHA
+* **Azure AD** - Security Assertion Markup Language (SAML) identity provider (IdP) that verifies user credentials, Conditional Access, and SAML-based SSO to the BIG-IP
+* **BIG-IP** - reverse-proxy and SAML service provider (SP) to the application. BIG-IP delegates authentication to the SAML IdP then performs header-based SSO to the SAP service
-SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+SHA supports SP and IdP initiated flows. The following image illustrates the SP-initiated flow.
-![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-sap-erp/sp-initiated-flow.png)
+ ![Diagram of secure hybrid access, the SP initiated flow.](./media/f5-big-ip-easy-button-sap-erp/sp-initiated-flow.png)
-| Steps| Description|
-| -- |-|
-| 1| User connects to application endpoint (BIG-IP) |
-| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
-| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
-| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
-| 5| BIG-IP requests Kerberos ticket from KDC |
-| 6| BIG-IP sends request to backend application, along with Kerberos ticket for SSO |
-| 7| Application authorizes request and returns payload |
+1. User connects to application endpoint (BIG-IP)
+2. BIG-IP APM access policy redirects user to Azure AD (SAML IdP)
+3. Azure AD pre-authenticates user and applies enforced Conditional Access policies
+4. User is redirected to BIG-IP (SAML SP) and SSO occurs with issued SAML token
+5. BIG-IP requests Kerberos ticket from KDC
+6. BIG-IP sends request to back-end application, with the Kerberos ticket for SSO
+7. Application authorizes request and returns payload
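The redirect in step 2 of the flow above uses the standard SAML HTTP-Redirect binding: the AuthnRequest is raw-deflated, base64-encoded, and passed as the `SAMLRequest` query parameter. The sketch below illustrates only that encoding; the endpoint URL, entity ID, and ACS URL are placeholders, and a real BIG-IP builds (and typically signs) this request itself.

```python
import base64
import datetime
import urllib.parse
import uuid
import zlib

# Placeholder endpoint -- substitute your tenant ID. The entity ID and
# ACS URL arguments are whatever the BIG-IP (SAML SP) is configured with.
IDP_SSO_URL = "https://login.microsoftonline.com/{tenant-id}/saml2"

def build_sp_redirect(sp_entity_id, acs_url):
    """Encode a SAML AuthnRequest for the HTTP-Redirect binding:
    raw DEFLATE, then base64, then URL-encode as SAMLRequest."""
    authn_request = (
        '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        f'ID="_{uuid.uuid4().hex}" Version="2.0" '
        f'IssueInstant="{datetime.datetime.utcnow().isoformat()}Z" '
        f'AssertionConsumerServiceURL="{acs_url}">'
        '<saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">'
        f'{sp_entity_id}</saml:Issuer>'
        '</samlp:AuthnRequest>'
    )
    # Strip the 2-byte zlib header and 4-byte checksum to get raw DEFLATE.
    deflated = zlib.compress(authn_request.encode())[2:-4]
    saml_request = base64.b64encode(deflated).decode()
    return IDP_SSO_URL + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})
```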
## Prerequisites
-Prior BIG-IP experience isn't necessary, but you will need:
-
-* An Azure AD free subscription or above
-
-* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](./f5-bigip-deployment-guide.md)
-
-* Any of the following F5 BIG-IP license offers
+* An Azure AD free account, or higher
+ * If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/active-directory/)
+* A BIG-IP or a BIG-IP Virtual Edition (VE) in Azure
+ * See, [Deploy F5 BIG-IP Virtual Edition VM in Azure](./f5-bigip-deployment-guide.md)
+* Any of the following F5 BIG-IP licenses:
 * F5 BIG-IP® Best bundle
 * F5 BIG-IP APM standalone license
 * F5 BIG-IP APM add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
- * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
-
-* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD, or created directly within Azure AD and flowed back to your on-premises directory
-
-* An account with Azure AD Application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
-
-* An [SSL Web certificate](./f5-bigip-deployment-guide.md) for publishing services over HTTPS, or use default BIG-IP certs while testing
-
-* An existing SAP ERP environment configured for Kerberos authentication
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php)
+* User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to the on-premises directory
+ * See, [Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md)
+* An account with Azure AD Application Admin permissions
+ * See, [Azure AD built-in roles](../roles/permissions-reference.md)
+* An SSL Web certificate to publish services over HTTPS, or use default BIG-IP certs for testing
+ * See, [Deploy F5 BIG-IP Virtual Edition VM in Azure](./f5-bigip-deployment-guide.md)
+* An SAP ERP environment configured for Kerberos authentication
## BIG-IP configuration methods
-There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1 offering an Easy button template.
-
-With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly, easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+This tutorial uses Guided Configuration 16.1 with an Easy Button template. With the Easy Button, admins don't go back and forth between Azure AD and a BIG-IP to enable services for SHA. The APM Guided Configuration wizard and Microsoft Graph handle deployment and policy management. This integration ensures applications support identity federation, SSO, and Conditional Access.
->[!NOTE]
-> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+ >[!NOTE]
+ > Replace example strings or values in this guide with those in your environment.
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](../develop/quickstart-register-app.md)
-
-The Easy Button client must also be registered in Azure AD, before it is allowed to establish a trust between each SAML SP instance of a BIG-IP published application, and Azure AD as the SAML IdP.
-
-1. Sign-in to the [Azure portal](https://portal.azure.com/) using an account with Application Administrative rights
-
-2. From the left navigation pane, select the **Azure Active Directory** service
-
-3. Under Manage, select **App registrations > New registration**
-
-4. Enter a display name for your application. For example, *F5 BIG-IP Easy Button*
+Before a client or service accesses Microsoft Graph, the Microsoft identity platform must trust it.
-5. Specify who can use the application > **Accounts in this organizational directory only**
+See, [Quickstart: Register an application with the Microsoft identity platform](../develop/quickstart-register-app.md)
-6. Select **Register** to complete the initial app registration
+Register the Easy Button client in Azure AD; the client can then establish a trust between each SAML SP instance of a BIG-IP published application and Azure AD as the SAML IdP.
-7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
+1. Sign in to the [Azure portal](https://portal.azure.com/) with Application Administrator permissions.
+2. In the left navigation pane, select the **Azure Active Directory** service.
+3. Under Manage, select **App registrations > New registration**.
+4. Enter a **Name**.
+5. For who can use the application, select **Accounts in this organizational directory only**.
+6. Select **Register**.
+7. Navigate to **API permissions**.
+8. Authorize the following Microsoft Graph Application permissions:
 * Application.Read.All
 * Application.ReadWrite.All
The Easy Button client must also be registered in Azure AD, before it is allowed
 * Policy.ReadWrite.ConditionalAccess
 * User.Read.All
-8. Grant admin consent for your organization
+9. Grant admin consent for your organization.
+10. On **Certificates & Secrets**, generate a new **client secret**.
+11. Note the secret to use later.
+12. From **Overview**, note the **Client ID** and **Tenant ID**.
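With the Tenant ID, Client ID, and client secret noted above, the Easy Button client obtains Microsoft Graph access tokens via the OAuth 2.0 client-credentials grant. As a rough sketch under those assumptions, this is the token request it would form (all values are placeholders; nothing is sent):

```python
import urllib.parse

def client_credentials_request(tenant_id, client_id, client_secret):
    """Build the OAuth 2.0 client-credentials token request used to
    obtain a Microsoft Graph access token. Returns the endpoint URL
    and the form-encoded body."""
    token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # /.default requests the application permissions granted by admin consent.
        "scope": "https://graph.microsoft.com/.default",
    })
    return token_url, body
```

The `/.default` scope is what ties the token to the Graph application permissions you authorized and consented to in the previous steps.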
-9. In the **Certificates & Secrets** blade, generate a new **client secret** and note it down
+## Configure the Easy Button
-10. From the **Overview** blade, note the **Client ID** and **Tenant ID**
+1. Initiate the APM Guided Configuration.
+2. Launch the Easy Button template.
+3. From a browser, sign-in to the F5 BIG-IP management console.
+4. Navigate to **Access > Guided Configuration > Microsoft Integration**.
+5. Select **Azure AD Application**.
-## Configure Easy Button
+ ![Screenshot of the Azure AD Application option on Guided Configuration.](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
-Initiate the APM's **Guided Configuration** to launch the **Easy Button** Template.
+6. Review the configuration list.
+7. Select **Next**.
-1. From a browser, sign-in to the **F5 BIG-IP management console**
+ ![Screenshot of the configuration list and the Next button.](./media/f5-big-ip-easy-button-ldap/config-steps.png)
-2. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
+8. Follow the configuration sequence under **Azure AD Application Configuration**.
- ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
-
-3. Review the list of configuration steps and select **Next**
-
- ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
-
-4. Follow the sequence of steps required to publish your application.
-
- ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png#lightbox)
+ ![Screenshot of configuration sequence.](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png#lightbox)
### Configuration Properties
-These are general and service account properties. The **Configuration Properties** tab creates a BIG-IP application config and SSO object. Consider the **Azure Service Account Details** section to represent the client you registered in your Azure AD tenant earlier, as an application. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
-
-Some of these are global settings so can be re-used for publishing more applications, further reducing deployment time and effort.
-
-1. Provide a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations
+The **Configuration Properties** tab has service account properties and creates a BIG-IP application config and SSO object. The **Azure Service Account Details** section represents the client you registered as an application in the Azure AD tenant. These settings allow the BIG-IP OAuth client to register a SAML SP in the tenant, along with the SSO properties you would otherwise configure manually. Easy Button does this action for each BIG-IP service published and enabled for SHA.
-2. Enable **Single Sign-On (SSO) & HTTP Headers**
+ > [!NOTE]
+ > Some settings are global and can be re-used to publish more applications.
-3. Enter the **Tenant Id, Client ID,** and **Client Secret** you noted when registering the Easy Button client in your tenant
+1. Enter a **Configuration Name**. Unique names differentiate Easy Button configurations.
+2. For **Single Sign-On (SSO) & HTTP Headers**, select **On**.
+3. For **Tenant ID, Client ID,** and **Client Secret**, enter the Tenant ID, Client ID, and Client Secret you noted during tenant registration.
+4. Select **Test Connection**. This action confirms the BIG-IP connects to your tenant.
+5. Select **Next**.
-4. Confirm the BIG-IP can successfully connect to your tenant and select **Next**
-
- ![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-easy-button-sap-erp/configuration-general-and-service-account-properties.png)
+ ![Screenshot of options and selections for Configuration Properties.](./media/f5-big-ip-easy-button-sap-erp/configuration-general-and-service-account-properties.png)
### Service Provider
-The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
-
-1. Enter **Host**. This is the public FQDN of the application being secured
+Use the Service Provider settings to define SAML SP instance properties of the application secured by SHA.
-2. Enter **Entity ID.** This is the identifier Azure AD will use to identify the SAML SP requesting a token
+1. For **Host**, enter the public fully qualified domain name (FQDN) of the application being secured.
+2. For **Entity ID**, enter the identifier Azure AD uses to identify the SAML SP requesting a token.
- ![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-sap-erp/service-provider-settings.png)
+ ![Screenshot options and selections for Service Provider.](./media/f5-big-ip-easy-button-sap-erp/service-provider-settings.png)
- The optional **Security Settings** specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content tokens can't be intercepted, and personal or corporate data be compromised.
-
-3. From the **Assertion Decryption Private Key** list, select **Create New**
+3. (Optional) Use **Security Settings** to indicate Azure AD encrypts issued SAML assertions. Assertions encrypted between Azure AD and the BIG-IP APM increase assurance that content tokens aren't intercepted, nor data compromised.
+4. From **Assertion Decryption Private Key**, select **Create New**.
- ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle/configure-security-create-new.png)
-
-4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
+ ![Screenshot of the Create New option from the Assertion Decryption Private Key list.](./media/f5-big-ip-oracle/configure-security-create-new.png)
-5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned close the browser tab to return to the main tab
+5. Select **OK**.
+6. The **Import SSL Certificate and Keys** dialog appears in a new tab.
- ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-easy-button-sap-erp/import-ssl-certificates-and-keys.png)
+7. To import the certificate and private key, select **PKCS 12 (IIS)**.
+8. Close the browser tab to return to the main tab.
-6. Check **Enable Encrypted Assertion**
+ ![Screenshot of options and selections for Import SSL Certificates and Keys.](./media/f5-big-ip-easy-button-sap-erp/import-ssl-certificates-and-keys.png)
-7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions
+9. For **Enable Encrypted Assertion**, check the box.
+10. If you enabled encryption, from the **Assertion Decryption Private Key** list, select the private key for the certificate BIG-IP APM uses to decrypt Azure AD assertions.
+11. If you enabled encryption, from the **Assertion Decryption Certificate** list, select the certificate BIG-IP uploads to Azure AD to encrypt the issued SAML assertions.
-8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions
-
- ![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
+ ![Screenshot of options and selections for Service Provider.](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
### Azure Active Directory
-This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant.
+Easy Button has application templates for Oracle PeopleSoft, Oracle E-Business Suite, Oracle JD Edwards, SAP ERP, and a generic SHA template.
-Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, SAP ERP as well as generic SHA template for any other apps. For this scenario, select **SAP ERP Central Component > Add** to start the Azure configurations.
+1. To start Azure configuration, select **SAP ERP Central Component > Add**.
- ![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-sap-erp/azure-config-add-app.png)
+ ![Screenshot of the SAP ERP Central Component option on Azure Configuration and the Add button.](./media/f5-big-ip-easy-button-sap-erp/azure-config-add-app.png)
+
+ > [!NOTE]
+ > You can use the information in the following sections when manually configuring a new BIG-IP SAML application in an Azure AD tenant.
#### Azure Configuration
-1. Enter **Display Name** of app that the BIG-IP creates in your Azure AD tenant, and the icon that the users will see in [MyApps portal](https://myapplications.microsoft.com/)
+1. For **Display Name**, enter a name for the app BIG-IP creates in the Azure AD tenant. The name appears with the icon in the [My Apps](https://myapplications.microsoft.com/) portal.
+2. (Optional) Leave **Sign On URL (optional)** blank to enable IdP-initiated sign-on.
-2. Leave the **Sign On URL (optional)** blank to enable IdP initiated sign-on
+ ![Screenshot of entries for Display Name and Sign On URL.](./media/f5-big-ip-easy-button-sap-erp/azure-configuration-add-display-info.png)
- ![Screenshot for Azure configuration add display info](./media/f5-big-ip-easy-button-sap-erp/azure-configuration-add-display-info.png)
+3. Next to **Signing Key**, select the refresh icon.
+4. Next to **Signing Certificate**, select the refresh icon. This action locates the certificate you imported earlier.
+5. For **Signing Key Passphrase**, enter the certificate password.
+6. (Optional) Enable **Signing Option**. This option ensures BIG-IP accepts only tokens and claims signed by Azure AD.
-3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
-
-5. Enter the certificate's password in **Signing Key Passphrase**
+ ![Screenshot of entries for Signing Key, Signing Certificate, and Signing Key Passphrase.](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
-6. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD
+7. **User and User Groups** are dynamically queried from the Azure AD tenant and authorize access to the application.
+8. Add a user or group for testing; otherwise, all access is denied.
- ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
-
-7. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing, otherwise all access will be denied
-
- ![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
+ ![Screenshot of the Add button on User And User Groups.](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
#### User Attributes & Claims
-When a user successfully authenticates to Azure AD, it issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims tab** shows the default claims to issue for the new application. It also lets you configure more claims.
+When users authenticate to Azure AD, it issues a SAML token with default claims and attributes identifying the user. The **User Attributes & Claims** tab shows the default claims to issue for the new application. Use it to configure more claims.
-As our example AD infrastructure is based on a .com domain suffix used both, internally and externally, we don't require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](./f5-big-ip-kerberos-advanced.md) for cases where you have multiple domains or user's login using an alternate suffix.
+This tutorial is based on a .com domain suffix used internally and externally. No other attributes are required to achieve a functional Kerberos constrained delegation (KCD) SSO implementation.
- ![Screenshot for user attributes and claims](./media/f5-big-ip-easy-button-sap-erp/user-attributes-claims.png)
+ ![Screenshot of the User Attributes & Claims tab.](./media/f5-big-ip-easy-button-sap-erp/user-attributes-claims.png)
-You can include additional Azure AD attributes, if necessary, but for this scenario SAP ERP only requires the default attributes.
+You can include more Azure AD attributes. For this tutorial, SAP ERP requires only the default attributes.
+
+Learn more: [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](./f5-big-ip-kerberos-advanced.md). It includes instructions for multiple domains and for users signing in with alternate suffixes.
#### Additional User Attributes
-The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories, for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
+The **Additional User Attributes** tab supports distributed systems that require attributes stored in other directories for session augmentation. Attributes fetched from an LDAP source can be injected as more SSO headers to control access based on roles, Partner IDs, and so on.
- ![Screenshot for additional user attributes](./media/f5-big-ip-easy-button-header/additional-user-attributes.png)
+ ![Screenshot of the Additional User Attributes tab.](./media/f5-big-ip-easy-button-header/additional-user-attributes.png)
->[!NOTE]
->This feature has no correlation to Azure AD but is another source of attributes.
+ >[!NOTE]
+ >This feature has no correlation to Azure AD but is another attribute source.
#### Conditional Access Policy
-CA policies are enforced post Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
+Conditional Access policies are enforced after Azure AD preauthentication. This action controls access based on device, application, location, and risk signals.
-The **Available Policies** view, by default, will list all CA policies that do not include user based actions.
+The **Available Policies** view lists Conditional Access policies without user-based actions.
-The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list as they are enforced at a tenant level.
+The **Selected Policies** view lists policies targeting all cloud apps. You can't deselect these policies, nor move them to the Available Policies list, because they're enforced at the tenant level.
-To select a policy to be applied to the application being published:
+To select a policy for the application being published:
-1. Select the desired policy in the **Available Policies** list
-2. Select the right arrow and move it to the **Selected Policies** list
+1. From the **Available Policies** list, select the policy.
+2. Select the right arrow.
+3. Move the policy to the **Selected Policies** list.
-Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced.
+Selected policies have an **Include** or **Exclude** option checked. If both options are checked, the selected policy isn't enforced.
-![ Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)
+ ![Screenshot of excluded policies in Selected Policies.](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)
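The Include/Exclude rule above can be sketched as a small truth function; treating "neither option checked" as not enforced is an assumption for illustration:

```python
def policy_enforced(include: bool, exclude: bool) -> bool:
    """Model the wizard's rule: a selected policy with both Include and
    Exclude checked isn't enforced. Treating 'neither checked' as not
    enforced is an assumption for illustration."""
    if include and exclude:
        return False
    return include or exclude

print(policy_enforced(True, True))  # False: both checked, so the policy is skipped
```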
->[!NOTE]
->The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+ >[!NOTE]
+ >The policy list is enumerated once, when you first switch to this tab. Use the **refresh** button to force the wizard to query your tenant. The button appears after the application is deployed.
### Virtual Server Properties
-A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
-
-1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the appllication itself. Using a test PC's localhost DNS is fine for testing
-
-2. Enter **Service Port** as *443* for HTTPS
-
-3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
+A virtual server is a BIG-IP data plane object represented by a virtual IP address. This server listens for client requests to the application. Received traffic is processed and evaluated against the APM profile associated with the virtual server. Traffic is then directed according to policy.
-4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites or leave the default whilst testing
+1. Enter a **Destination Address**: an available IPv4/IPv6 address BIG-IP can use to receive client traffic. A corresponding DNS record enables clients to resolve the external URL of the BIG-IP published application to this IP, instead of the application itself. You can use a test computer's localhost DNS for testing.
+2. For **Service Port**, enter **443**.
+3. Select **HTTPS**.
+4. For **Enable Redirect Port**, check the box.
+5. For **Redirect Port**, enter a number and select **HTTP**. This option redirects incoming HTTP client traffic to HTTPS.
+6. Select the **Client SSL Profile** you created. Or, leave the default for testing. The Client SSL Profile enables the virtual server for HTTPS, so client connections are encrypted over TLS.
- ![ Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
+ ![Screenshot of options and selections for Virtual Server Properties.](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
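Conceptually, the redirect port answers plain HTTP requests with a redirect to the HTTPS listener. A sketch of the URL rewrite, with a placeholder hostname:

```python
def https_redirect(host: str, path: str, https_port: int = 443) -> str:
    """Build the Location header value for redirecting HTTP traffic to HTTPS,
    mirroring what the virtual server's redirect port does."""
    netloc = host if https_port == 443 else f"{host}:{https_port}"
    return f"https://{netloc}{path}"

print(https_redirect("app.contoso.com", "/"))  # https://app.contoso.com/
```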
### Pool Properties
-The **Application Pool tab** details the services behind a BIG-IP, represented as a pool containing one or more application servers.
+The **Application Pool** tab shows the services behind the BIG-IP, represented as a pool containing one or more application servers.
-1. Choose from **Select a Pool.** Create a new pool or select an existing one
+1. For **Select a Pool**, select **Create New**, or select a pool.
+2. For **Load Balancing Method**, select **Round Robin**.
+3. For **Pool Servers**, select a server node, or enter an IP and port for the back-end node hosting the header-based application.
-2. Choose the **Load Balancing Method** as *Round Robin*
-
-3. For **Pool Servers** select an existing server node or specify an IP and port for the backend node hosting the header-based application
-
- ![ Screenshot for Application pool](./media/f5-big-ip-easy-button-ldap/application-pool.png)
+ ![Screenshot of options and selections for Application Pool.](./media/f5-big-ip-easy-button-ldap/application-pool.png)
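Round robin hands each new connection to the next pool member in turn. A sketch with fabricated pool members:

```python
from itertools import cycle

# Fabricated back-end pool members for illustration.
pool = ["192.0.2.10:443", "192.0.2.11:443"]
next_server = cycle(pool)

# Successive connections are distributed evenly across the pool.
picks = [next(next_server) for _ in range(4)]
print(picks)  # ['192.0.2.10:443', '192.0.2.11:443', '192.0.2.10:443', '192.0.2.11:443']
```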
#### Single Sign-On & HTTP Headers
-Enabling SSO allows users to access BIG-IP published services without having to enter credentials. The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO. You will need the Kerberos delegation account created earlier to complete this step.
-
-Enable **Kerberos** and **Show Advanced Setting** to enter the following:
-
-* **Username Source:** Specifies the preferred username to cache for SSO. You can provide any session variable as the source of the user ID, but *session.saml.last.identity* tends to work best as it holds the Azure AD claim containing the logged in user ID
-
-* **User Realm Source:** Required if the user domain is different to the BIG-IP's kerberos realm. In that case, the APM session variable would contain the logged in user domain. For example,*session.saml.last.attr.name.domain*
-
- ![Screenshot for SSO and HTTP headers](./media/f5-big-ip-kerberos-easy-button/sso-headers.png)
-
-* **KDC:** IP of a Domain Controller (Or FQDN if DNS is configured & efficient)
+Use SSO to enable access to BIG-IP published services without entering credentials. The Easy Button wizard supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO. For the following instructions, you need the Kerberos delegation account you created.
-* **UPN Support:** Enable for the APM to use the UPN for kerberos ticketing
+1. On **Single Sign-On & HTTP Headers**, for **Advanced Settings**, select **On**.
+2. For **Selected Single Sign-On Type**, select **Kerberos**.
+3. For **Username Source**, enter a session variable as the user ID source. `session.saml.last.identity` holds the Azure AD claim with the signed-in user ID.
+4. The **User Realm Source** option is required if the user domain differs from the BIG-IP Kerberos realm. In that case, the APM session variable contains the signed-in user domain. For example, `session.saml.last.attr.name.domain`.
-* **SPN Pattern:** Use HTTP/%h to inform the APM to use the host header of the client request and build the SPN that it is requesting a kerberos token for.
+ ![Screenshot of options and selections for Single Sign-On & HTTP Headers.](./media/f5-big-ip-kerberos-easy-button/sso-headers.png)
-* **Send Authorization:** Disable for applications that prefer negotiating authentication instead of receiving the kerberos token in the first request. For example, *Tomcat.*
+5. For **KDC**, enter a domain controller IP, or FQDN if the DNS is configured.
+6. For **UPN Support**, check the box. The APM uses the UPN for Kerberos ticketing.
+7. For **SPN Pattern**, enter **HTTP/%h**. This action informs the APM to use the client-request host header and build the SPN for which it's requesting a Kerberos token.
+8. For **Send Authorization**, disable the option for applications that negotiate authentication. For example, **Tomcat**.
- ![Screenshot for SSO method configuration](./media/f5-big-ip-kerberos-easy-button/sso-method-config.png)
+ ![Screenshot of options and selections for SSO Method Configuration.](./media/f5-big-ip-kerberos-easy-button/sso-method-config.png)
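The `HTTP/%h` pattern substitutes the host header of the client request into the SPN the APM requests a ticket for. A minimal sketch of that substitution, using a hypothetical helper name and a placeholder hostname:

```python
def build_spn(pattern: str, host_header: str) -> str:
    """Expand an APM-style SPN pattern; %h stands for the client request's host header."""
    return pattern.replace("%h", host_header)

# With SPN Pattern set to HTTP/%h, a request for sap-erp.contoso.com yields:
print(build_spn("HTTP/%h", "sap-erp.contoso.com"))  # HTTP/sap-erp.contoso.com
```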
### Session Management
-The BIG-IPs session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Consult [F5 documentation]( https://support.f5.com/csp/article/K18390492) for details on these settings.
+Use BIG-IP session management settings to define conditions when user sessions terminate or continue. Conditions include limits for users and IP addresses, and corresponding user info.
-What isn't covered however is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users log off.
- When the Easy Button deploys a SAML application to your Azure AD tenant, it also populates the Logout Url with the APM's SLO endpoint. That way IdP initiated sign-outs from the Microsoft [MyApps portal]( https://support.microsoft.com/en-us/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) also terminate the session between the BIG-IP and a client.
+To learn more, go to my.f5.com for [K18390492: Security | BIG-IP APM operations guide](https://support.f5.com/csp/article/K18390492).
-During deployment, the SAML federation metadata for the published application is imported from your tenant, providing the APM the SAML logout endpoint for Azure AD. This helps SP initiated sign-outs terminate the session between a client and Azure AD.
+The operations guide doesn't cover Single Log-Out (SLO). This feature ensures sessions between the IdP, the BIG-IP, and the user agent terminate when users sign out. The Easy Button deploys a SAML application to the Azure AD tenant and populates the Logout URL with the APM SLO endpoint. IdP-initiated sign-out from the [My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) portal terminates the BIG-IP and client session.
-## Summary
+During deployment, the published-application SAML federation metadata is imported from the tenant. This action provides the APM the SAML sign-out endpoint for Azure AD and helps SP-initiated sign-out terminate the client and Azure AD session.
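To illustrate what the imported federation metadata supplies, this sketch extracts the `SingleLogoutService` location from a minimal, fabricated metadata snippet (the entity ID and URLs are placeholders):

```python
import xml.etree.ElementTree as ET

NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}

# Fabricated minimal federation metadata for illustration only.
metadata = """
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/contoso/">
  <md:IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <md:SingleLogoutService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://login.microsoftonline.com/contoso/saml2"/>
  </md:IDPSSODescriptor>
</md:EntityDescriptor>
"""

root = ET.fromstring(metadata)
slo = root.find(".//md:SingleLogoutService", NS)
print(slo.get("Location"))
```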
-This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenants list of Enterprise applications.
+## Deployment
-## Next steps
+1. Select **Deploy**.
+2. Verify the application is in the tenant **Enterprise applications** list.
+3. With a browser, connect to the application external URL or select the application **icon** in [My Apps](https://myapps.microsoft.com/).
+4. Authenticate to Azure AD.
+5. You're redirected to the BIG-IP virtual server and signed in through SSO.
-From a browser, **connect** to the application's external URL or select the **application's icon** in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
-
-For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+For increased security, you can block direct access to the application, thereby enforcing a path through the BIG-IP.
## Advanced deployment
-There may be cases where the Guided Configuration templates lacks the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for kerberos-based SSO](./f5-big-ip-kerberos-advanced.md).
+The Guided Configuration templates sometimes lack the flexibility to meet specific requirements.
-Alternatively, the BIG-IP gives you the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though bulk of your configurations are automated through the wizard-based templates.
+Learn more: [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](./f5-big-ip-kerberos-advanced.md).
-You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your applications' configs.
-
- ![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-oracle/strict-mode-padlock.png)
+### Disable strict management mode
+
+Alternatively, in BIG-IP you can disable Guided Configuration strict management mode. You can change your configurations manually, although most configurations are automated with wizard templates.
-At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+1. Navigate to **Access > Guided Configuration**.
+2. At the end of the row for your application configuration, select the **padlock**.
+3. BIG-IP objects associated with the published application are unlocked for management. Changes via the wizard UI are no longer possible.
+
+ ![Screenshot of the padlock icon.](./media/f5-big-ip-oracle/strict-mode-padlock.png)
->[!NOTE]
->Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI, therefore we recommend the advanced configuration method for production services.
+ >[!NOTE]
+ >Re-enabling strict management mode and deploying a configuration overwrites settings made outside the Guided Configuration UI. Therefore, we recommend the advanced configuration method for production services.
## Troubleshooting
-You can fail to access the SHA protected application due to any number of factors, including a misconfiguration.
-
-* Kerberos is time sensitive, so requires that servers and clients be set to the correct time and where possible synchronized to a reliable time source
+If you're unable to access the SHA-secured application, see the following troubleshooting guidance.
-* Ensure the hostname for the domain controller and web application are resolvable in DNS
+* Kerberos is time sensitive. Ensure servers and clients are set to the correct time, and synchronized to a reliable time source.
+* Ensure the domain controller and web app hostname resolve in DNS.
+* Confirm no duplicate SPNs in the environment.
+ * On a domain computer, at the command line, use the query: `setspn -q HTTP/my_target_SPN`
-* Ensure there are no duplicate SPNs in your AD environment by executing the following query at the command line on a domain PC: setspn -q HTTP/my_target_SPN
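The time and DNS checks above can be sketched as a short script; the hostnames are placeholders, and the 5-minute skew is the common Kerberos default (verify the limit configured for your realm):

```python
import socket
from datetime import datetime, timedelta

# Placeholder hostnames; substitute your domain controller and web app.
HOSTS = ["localhost"]

def resolves(host: str) -> bool:
    """Return True when the hostname resolves in DNS (or the hosts file)."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

def within_kerberos_skew(client: datetime, server: datetime,
                         max_skew: timedelta = timedelta(minutes=5)) -> bool:
    """Kerberos rejects tickets when clocks differ by more than the allowed
    skew; 5 minutes is the common default."""
    return abs(client - server) <= max_skew

for host in HOSTS:
    print(host, "resolves" if resolves(host) else "does not resolve")
```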
+To validate an IIS application KCD configuration, see [Troubleshoot KCD configurations for Application Proxy](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md).
-You can refer to our [App Proxy guidance](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md) to validate an IIS application is configured appropriately for KCD. F5's article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
+Go to techdocs.f5.com for [Kerberos Single Sign-On Method](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html).
### Log analysis
-BIG-IP logging can help quickly isolate all sorts of issues with connectivity, SSO, policy violations, or misconfigured variable mappings. Start troubleshooting by increasing the log verbosity level.
+#### Log verbosity
+
+BIG-IP logging isolates issues with connectivity, SSO, policy violations, or misconfigured variable mappings. To start troubleshooting, increase log verbosity.
-1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+1. Navigate to **Access Policy > Overview**.
+2. Select **Event Logs**.
+3. Select **Settings**.
+4. Select the row for your published application.
+5. Select **Edit**.
+6. Select **Access System Logs**.
+7. From the SSO list, select **Debug**.
+8. Select **OK**.
+9. Reproduce your issue.
+10. Inspect the logs.
-2. Select the row for your published application, then **Edit > Access System Logs**
+When inspection is complete, revert log verbosity because this mode generates excessive data.
-3. Select **Debug** from the SSO list, and then select **OK**
+#### BIG-IP error message
-Reproduce your issue, then inspect the logs, but remember to switch this back when finished as verbose mode generates lots of data.
+If a BIG-IP error message appears after Azure AD preauthentication, the issue might relate to Azure AD to BIG-IP SSO.
-If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, itΓÇÖs possible the issue relates to SSO from Azure AD to the BIG-IP.
+1. Navigate to **Access > Overview**.
+2. Select **Access reports**.
+3. Run the report for the last hour.
+4. Inspect the logs.
-1. Navigate to **Access > Overview > Access reports**
+Use the current session's **View session variables** link to see if APM receives expected Azure AD claims.
-2. Run the report for the last hour to see logs provide any clues. The **View session variables** link for your session will also help understand if the APM is receiving the expected claims from Azure AD.
+#### No BIG-IP error message
-If you donΓÇÖt see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+If no BIG-IP error message appeared, the issue might be related to the back-end request, or BIG-IP to application SSO.
-1. Navigate to **Access Policy > Overview > Active Sessions**
+1. Navigate to **Access Policy > Overview**.
+2. Select **Active Sessions**.
+3. Select the link for the current session.
+4. Use the **View Variables** link to identify KCD issues, particularly if the BIG-IP APM doesn't obtain correct user and domain identifiers from session variables.
-2. Select the link for your active session. The **View Variables** link in this location may also help determine root cause KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers from session variables
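A fabricated snapshot of the session variables named earlier shows what a healthy session should contain; the values are placeholders:

```python
# Fabricated APM session-variable snapshot; the variable names match those
# referenced in this article, and the values are placeholders.
session = {
    "session.saml.last.identity": "murphy@contoso.com",
    "session.saml.last.attr.name.domain": "contoso.com",
}

user = session.get("session.saml.last.identity")
realm = session.get("session.saml.last.attr.name.domain")
print(f"user={user} realm={realm}")
```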
+Learn more:
-See [BIG-IP APM variable assign examples]( https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference]( https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+* Go to devcentral.f5.com for [APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107)
+* Go to techdocs.f5.com for [Session variables](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html)
active-directory F5 Bigip Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
Last updated 12/13/2022
+ # Deploy F5 BIG-IP Virtual Edition VM in Azure
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
Last updated 04/14/2023
-+ zone_pivot_groups: enterprise-apps-minus-aad-powershell #customer intent: As an admin, I want to grant tenant-wide admin consent to an application in Azure AD.
active-directory Grant Consent Single User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-consent-single-user.md
Last updated 12/09/2022
zone_pivot_groups: enterprise-apps-ms-graph-ms-powershell+ #customer intent: As an admin, I want to grant consent on behalf of a single user
active-directory Hide Application From User Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/hide-application-from-user-portal.md
zone_pivot_groups: enterprise-apps-all+ #customer intent: As an admin, I want to hide an enterprise application from user's experience so that it is not listed in the user's Active directory access portals or Microsoft 365 launchers
active-directory Home Realm Discovery Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/home-realm-discovery-policy.md
Last updated 01/02/2023 + # Home Realm Discovery for an application
active-directory Howto Enforce Signed Saml Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-enforce-signed-saml-authentication.md
Last updated 06/29/2022 -++
active-directory Howto Saml Token Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-saml-token-encryption.md
Last updated 07/21/2022
+ # Configure Azure Active Directory SAML token encryption
active-directory Manage App Consent Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-app-consent-policies.md
Last updated 02/28/2023 -+ zone_pivot_groups: enterprise-apps-minus-portal-aad #customer intent: As an admin, I want to manage app consent policies for enterprise applications in Azure AD
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
zone_pivot_groups: enterprise-apps-all + #customer intent: As an admin, I want to review permissions granted to applications so that I can restrict suspicious or over privileged applications.
active-directory Manage Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-consent-requests.md
Last updated 07/14/2022 + # Manage consent to applications and evaluate consent requests
active-directory Manage Self Service Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-self-service-access.md
Last updated 03/29/2023
+ #customer intent: As an admin, I want to enable self-service application access so that users can self-discover applications from their My Apps portal.
active-directory Methods For Removing User Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/methods-for-removing-user-access.md
Last updated 11/17/2021 + # Remove user access to applications
active-directory Migrate Adfs Application Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-application-activity.md
Last updated 03/23/2023
+ # Review the application activity report
active-directory Migrate Adfs Apps To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-apps-to-azure.md
Last updated 03/23/2023 + # Move application authentication to Azure Active Directory
active-directory Migrate Applications From Okta To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-applications-from-okta-to-azure-active-directory.md
Last updated 12/14/2022 + # Tutorial: Migrate your applications from Okta to Azure Active Directory
active-directory Migrate Okta Federation To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-federation-to-azure-active-directory.md
Last updated 05/19/2022 -+ # Tutorial: Migrate Okta federation to Azure Active Directory-managed authentication
active-directory Migrate Okta Sign On Policies To Azure Active Directory Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md
Last updated 01/13/2023 + # Tutorial: Migrate Okta sign-on policies to Azure Active Directory Conditional Access
active-directory Migrate Okta Sync Provisioning To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sync-provisioning-to-azure-active-directory.md
Last updated 05/19/2022 -+ # Tutorial: Migrate Okta sync provisioning to Azure AD Connect-based synchronization
active-directory Migration Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migration-resources.md
Last updated 02/29/2020 + # Resources for migrating applications to Azure Active Directory
active-directory Myapps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/myapps-overview.md
Last updated 11/24/2022 -+ #Customer intent: As an Azure AD administrator, I want to make applications available to users in the My Apps portal.
active-directory One Click Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/one-click-sso-tutorial.md
Last updated 06/11/2019
+ # One-click app configuration of single sign-on
active-directory Overview Application Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/overview-application-gallery.md
Last updated 01/22/2022 + # Overview of the Azure Active Directory application gallery
active-directory Overview Assign App Owners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/overview-assign-app-owners.md
Last updated 12/05/2022 + #Customer intent: As an Azure AD administrator, I want to learn about enterprise application ownership.- # Overview of enterprise application ownership in Azure Active Directory
active-directory Plan An Application Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/plan-an-application-integration.md
Last updated 04/05/2021 + # Integrating Azure Active Directory with applications getting started guide
active-directory Plan Sso Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/plan-sso-deployment.md
Last updated 03/20/2023
-+ # Customer intent: As an IT admin, I need to learn what it takes to plan a single-sign on deployment for my application in Azure Active Directory.
active-directory Prevent Domain Hints With Home Realm Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md
Last updated 03/16/2023
zone_pivot_groups: home-realm-discovery+ #customer intent: As an admin, I want to disable auto-acceleration to federated IDP during sign in using Home Realm Discovery policy
active-directory Protect Against Consent Phishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/protect-against-consent-phishing.md
Last updated 06/17/2022-+
active-directory Restore Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md
Last updated 11/02/2022 -+ zone_pivot_groups: enterprise-apps-minus-portal #Customer intent: As an administrator of an Azure AD tenant, I want to restore a soft deleted enterprise application.
active-directory Review Admin Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/review-admin-consent-requests.md
Last updated 07/21/2022 + #customer intent: As an admin, I want to review and take action on admin consent requests.
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
Last updated 01/19/2023 ++ # Secure hybrid access with Azure Active Directory partner integrations
active-directory Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access.md
Last updated 01/17/2023 + # Secure hybrid access: Protect legacy apps with Azure Active Directory
active-directory Silverfort Azure Ad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/silverfort-azure-ad-integration.md
Last updated 12/14/2022 + # Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Silverfort
active-directory Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tenant-restrictions.md
Last updated 3/16/2023
-+ # Restrict access to a tenant
active-directory Troubleshoot App Publishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/troubleshoot-app-publishing.md
Last updated 1/18/2022 + #Customer intent: As a publisher of an application, I want to troubleshoot a blocked sign-in to the Microsoft Application Network portal.
active-directory Troubleshoot Password Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md
Last updated 07/11/2017 + # Troubleshoot password-based single sign-on
active-directory Troubleshoot Saml Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/troubleshoot-saml-based-sso.md
Last updated 07/11/2017 + # Troubleshoot SAML-based single sign-on
active-directory Tutorial Govern Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-govern-monitor.md
Last updated 07/19/2022 + # Customer intent: As an administrator of an Azure AD tenant, I want to govern and monitor my applications.
active-directory Tutorial Manage Access Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-access-security.md
Last updated 07/18/2022+ # Customer intent: As an administrator of an Azure AD tenant, I want to manage access to my applications and make sure they are secure.
active-directory Tutorial Manage Certificates For Federated Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-certificates-for-federated-single-sign-on.md
Last updated 02/02/2023
+ #customer intent: As an admin of an application, I want to learn how to manage federated SAML certificates by customizing expiration dates and renewing certificates.
active-directory User Admin Consent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/user-admin-consent-overview.md
Last updated 04/04/2023
+ # User and admin consent in Azure Active Directory
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
Last updated 6/2/2022 -+ # Submit a request to publish your application in Azure Active Directory application gallery
active-directory Ways Users Get Assigned To Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md
Last updated 01/07/2021 + # Understand how users are assigned to apps
active-directory What Is Access Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-access-management.md
Last updated 07/20/2022 + # Manage access to an application
active-directory What Is Application Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-application-management.md
Last updated 12/07/2022 + # What is application management in Azure Active Directory?
active-directory What Is Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-single-sign-on.md
Last updated 12/07/2022 -+ # Customer intent: As an IT admin, I need to learn about single sign-on and my applications in Azure Active Directory.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 04/03/2023 Last updated : 05/02/2023
Welcome to what's new in Azure Active Directory (Azure AD) application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure AD](../fundamentals/whats-new.md).
+## April 2023
+### Updated articles
+
+- [Configure permission classifications](configure-permission-classifications.md)
+- [Migrate application authentication to Azure Active Directory](migrate-application-authentication-to-azure-active-directory.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for header-based SSO](f5-big-ip-headers-easy-button.md)
+- [Configure F5 BIG-IP Access Policy Manager for form-based SSO](f5-big-ip-forms-advanced.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for SSO to Oracle EBS](f5-big-ip-oracle-enterprise-business-suite-easy-button.md)
+- [Tutorial: Configure F5 BIG-IP Access Policy Manager for header-based single sign-on](f5-big-ip-header-advanced.md)
## March 2023 ### Updated articles
Welcome to what's new in Azure Active Directory (Azure AD) application managemen
- [Disable auto-acceleration sign-in](prevent-domain-hints-with-home-realm-discovery.md) - [Review permissions granted to enterprise applications](manage-application-permissions.md) - [Migrate application authentication to Azure Active Directory](migrate-application-authentication-to-azure-active-directory.md)-- [Azure Active Directory application management: What's new](whats-new-docs.md) - [Configure permission classifications](configure-permission-classifications.md) - [Restrict access to a tenant](tenant-restrictions.md) - [Tutorial: Migrate Okta sign-on policies to Azure Active Directory Conditional Access](migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md)
Welcome to what's new in Azure Active Directory (Azure AD) application managemen
- [Configure permission classifications](configure-permission-classifications.md) - [Disable user sign-in for an application](disable-user-sign-in-portal.md) - [Configure Datawiza for Azure AD Multi-Factor Authentication and single sign-on to Oracle EBS](datawiza-azure-ad-sso-mfa-oracle-ebs.md)-
-## January 2023
-
-### New articles
--- [Configure Datawiza for Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle EBS](datawiza-azure-ad-sso-mfa-oracle-ebs.md)-
-### Updated articles
--- [Manage app consent policies](manage-app-consent-policies.md)-- [Assign enterprise application owners](assign-app-owners.md)-- [Configure enterprise application properties](add-application-portal-configure.md)-- [Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle JD Edwards](datawiza-azure-ad-sso-oracle-jde.md)-- [Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle PeopleSoft](datawiza-azure-ad-sso-oracle-peoplesoft.md)-- [Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Datawiza](datawiza-with-azure-ad.md)-- [Secure hybrid access: Protect legacy apps with Azure Active Directory](secure-hybrid-access.md)-- [Create an enterprise application from a multi-tenant application in Azure Active Directory](create-service-principal-cross-tenant.md)-- [Configure sign-in behavior using Home Realm Discovery](configure-authentication-for-federated-users-portal.md)-- [Secure hybrid access with Azure Active Directory partner integrations](secure-hybrid-access-integrations.md)
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
If a sign-in failed, you can get more information about the reason in the **Basi
![Screenshot of a sign-in error code.](./media/concept-all-sign-ins/error-code.png)
-For a list of error codes related to Azure AD authentication and authorization, see the [Azure AD authentication and authorization error codes](../develop/reference-aadsts-error-codes.md) article. In some cases, the [sign-in error lookup tool](https://login.microsoftonline.com/error) may provide remediation steps. Enter the **Error code** provided in the sign-in log details into the tool and select the **Submit** button.
+For a list of error codes related to Azure AD authentication and authorization, see the [Azure AD authentication and authorization error codes](../develop/reference-error-codes.md) article. In some cases, the [sign-in error lookup tool](https://login.microsoftonline.com/error) may provide remediation steps. Enter the **Error code** provided in the sign-in log details into the tool and select the **Submit** button.
![Screenshot of the error code lookup tool.](./media/concept-all-sign-ins/error-code-lookup-tool.png)
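As a minimal illustration of the lookup step, the sketch below builds the lookup tool's address for a given error code. The base URL comes from the article; passing the code as a `code` query parameter is an assumption for illustration — pasting the code into the form on the page is the documented path.

```python
from urllib.parse import urlencode

def error_lookup_url(error_code: int) -> str:
    """Build a sign-in error lookup URL for an AADSTS error code.

    The base address is from the article above; the `code` query
    parameter is an assumption, not a documented contract.
    """
    return "https://login.microsoftonline.com/error?" + urlencode({"code": error_code})

print(error_lookup_url(50126))
```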
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md
If a sign-in failed, you can get more information about the reason in the **Basi
![Screenshot of a sign-in error code.](./media/concept-sign-ins/error-code.png)
-For a list of error codes related to Azure AD authentication and authorization, see the [Azure AD authentication and authorization error codes](../develop/reference-aadsts-error-codes.md) article. In some cases, the [sign-in error lookup tool](https://login.microsoftonline.com/error) may provide remediation steps. Enter the **Error code** provided in the sign-in log details into the tool and select the **Submit** button.
+For a list of error codes related to Azure AD authentication and authorization, see the [Azure AD authentication and authorization error codes](../develop/reference-error-codes.md) article. In some cases, the [sign-in error lookup tool](https://login.microsoftonline.com/error) may provide remediation steps. Enter the **Error code** provided in the sign-in log details into the tool and select the **Submit** button.
![Screenshot of the error code lookup tool.](./media/concept-sign-ins/error-code-lookup-tool.png)
active-directory Howto Troubleshoot Sign In Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-troubleshoot-sign-in-errors.md
You need:
6. The failure reason describes the error. For example, in the above scenario, the failure reason is **Invalid username or password or Invalid on-premises username or password**. The fix is to sign in again with the correct username and password.
-7. You can get additional information, including ideas for remediation, by searching for the error code, **50126** in this example, in the [sign-ins error codes reference](../develop/reference-aadsts-error-codes.md).
+7. You can get additional information, including ideas for remediation, by searching for the error code, **50126** in this example, in the [sign-ins error codes reference](../develop/reference-error-codes.md).
8. If all else fails, or the issue persists despite taking the recommended course of action, [open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) following the steps in the **Troubleshooting and support** tab.
active-directory Sharepoint On Premises Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sharepoint-on-premises-tutorial.md
$t.Update()
### Configure the lifetime of the security token By default, Azure AD creates a SAML token that is valid for 1 hour.
-This lifetime cannot be customized in the Azure portal, or using a conditional access policy, but it can be done by creating a [custom token lifetime policy](../develop/active-directory-configurable-token-lifetimes.md) and apply it to the enterprise application created for SharePoint.
+This lifetime cannot be customized in the Azure portal, or using a conditional access policy, but it can be done by creating a [custom token lifetime policy](../develop/configurable-token-lifetimes.md) and apply it to the enterprise application created for SharePoint.
To do this, complete the steps below using Windows PowerShell (at the time of this writing, AzureADPreview v2.0.2.149 does not work with PowerShell Core): 1. Install the module [AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview/):
active-directory Fedramp Identification And Authentication Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/fedramp-identification-and-authentication-controls.md
Each row in the following table provides prescriptive guidance to help you devel
| **IA-2(11)**<br>The information system implements multifactor authentication for remote access to privileged and non-privileged accounts such that one of the factors is provided by a device separate from the system gaining access and the device meets [*FedRAMP Assignment: FIPS 140-2, NIAP* Certification, or NSA approval*].<br><br>*National Information Assurance Partnership (NIAP)<br>**Additional FedRAMP Requirements and Guidance:**<br>**Guidance:** PIV = separate device. Please refer to NIST SP 800-157 Guidelines for Derived Personal Identity Verification (PIV) Credentials. FIPS 140-2 means validated by the Cryptographic Module Validation Program (CMVP). | **Implement Azure AD multifactor authentication to access customer-deployed resources remotely so that one of the factors is provided by a device separate from the system gaining access where the device meets FIPS-140-2, NIAP certification, or NSA approval.**<p>See guidance for IA-02(1-4). Azure AD authentication methods to consider at AAL3 meeting the separate device requirements are:<p> FIDO2 security keys<br> <li>Windows Hello for Business with hardware TPM (TPM is recognized as a valid "something you have" factor by NIST 800-63B Section 5.1.7.1.)<br> <li>Smart card<p>References<br><li>[Achieving NIST authenticator assurance levels with the Microsoft identity platform](nist-overview.md)<br> <li>[NIST 800-63B Section 5.1.7.1](https://pages.nist.gov/800-63-3/sp800-63b.html) | | **IA-2(12)*<br>The information system accepts and electronically verifies Personal Identity Verification (PIV) credentials.<br><br>**IA-2 (12) Additional FedRAMP Requirements and Guidance:**<br>**Guidance:** Include Common Access Card (CAC), i.e., the DoD technical implementation of PIV/FIPS 201/HSPD-12. | **Accept and verify personal identity verification (PIV) credentials. 
This control isn't applicable if the customer doesn't deploy PIV credentials.**<p>Configure federated authentication by using Active Directory Federation Services (AD FS) to accept PIV (certificate authentication) as both primary and multifactor authentication methods and issue the multifactor authentication (MultipleAuthN) claim when PIV is used. Configure the federated domain in Azure AD with setting **federatedIdpMfaBehavior** to `enforceMfaByFederatedIdp` (recommended) or SupportsMfa to `$True` to direct multifactor authentication requests originating at Azure AD to AD FS. Alternatively, you can use PIV for sign-in on Windows devices and later use integrated Windows authentication along with seamless single sign-on. Windows Server and client verify certificates by default when used for authentication. <p>Resources<br><li>[What is federation with Azure AD?](../hybrid/whatis-fed.md)<br> <li>[Configure AD FS support for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br> <li>[Configure authentication policies](/windows-server/identity/ad-fs/operations/configure-authentication-policies)<br> <li>[Secure resources with Azure AD multifactor authentication and AD FS](../authentication/howto-mfa-adfs.md)<br><li>[Set-MsolDomainFederationSettings](/powershell/module/msonline/set-msoldomainfederationsettings)<br> <li>[Azure AD Connect: Seamless single sign-on](../hybrid/how-to-connect-sso.md) | | **IA-3 Device Identification and Authentication**<br>The information system uniquely identifies and authenticates [*Assignment: organization-defined specific and/or types of devices] before establishing a [Selection (one or more): local; remote; network*] connection. 
| **Implement device identification and authentication prior to establishing a connection.**<p>Configure Azure AD to identify and authenticate Azure AD Registered, Azure AD Joined, and Azure AD Hybrid joined devices.<p> Resources<br><li>[What is a device identity?](../devices/overview.md)<br> <li>[Plan an Azure AD devices deployment](../devices/plan-device-deployment.md)<br><li>[Require managed devices for cloud app access with conditional access](../conditional-access/require-managed-devices.md) |
-| **IA-04 Identifier Management**<br>The organization manages information system identifiers for users and devices by:<br>**(a.)** Receiving authorization from [*FedRAMP Assignment at a minimum, the ISSO (or similar role within the organization)*] to assign an individual, group, role, or device identifier;<br>**(b.)** Selecting an identifier that identifies an individual, group, role, or device;<br>**(c.)** Assigning the identifier to the intended individual, group, role, or device;<br>**(d.)** Preventing reuse of identifiers for [*FedRAMP Assignment: at least two (2) years*]; and<br>**(e.)** Disabling the identifier after [*FedRAMP Assignment: thirty-five (35) days (see additional requirements and guidance)*]<br>**IA-4e Additional FedRAMP Requirements and Guidance:**<br>**Requirement:** The service provider defines the time period of inactivity for device identifiers.<br>**Guidance:** For DoD clouds, see DoD cloud website for specific DoD requirements that go above and beyond FedRAMP http://iase.disa.mil/cloud_security/Pages/index.aspx.<br><br>**IA-4(4)**<br>The organization manages individual identifiers by uniquely identifying each individual as [*FedRAMP Assignment: contractors; foreign nationals*]. | **Disable account identifiers after 35 days of inactivity and prevent their reuse for two years. Manage individual identifiers by uniquely identifying each individual (for example, contractors and foreign nationals).**<p>Assign and manage individual account identifiers and status in Azure AD in accordance with existing organizational policies defined in AC-02. Follow AC-02(3) to automatically disable user and device accounts after 35 days of inactivity. Ensure that organizational policy maintains all accounts that remain in the disabled state for at least two years. After this time, you can remove them. 
<p>Determine inactivity<br> <li>[Manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)<br> <li>[Manage stale devices in Azure AD](../devices/manage-stale-devices.md)<br> <li>[See AC-02 guidance](fedramp-access-controls.md) |
+| **IA-04 Identifier Management**<br>The organization manages information system identifiers for users and devices by:<br>**(a.)** Receiving authorization from [*FedRAMP Assignment at a minimum, the ISSO (or similar role within the organization)*] to assign an individual, group, role, or device identifier;<br>**(b.)** Selecting an identifier that identifies an individual, group, role, or device;<br>**(c.)** Assigning the identifier to the intended individual, group, role, or device;<br>**(d.)** Preventing reuse of identifiers for [*FedRAMP Assignment: at least two (2) years*]; and<br>**(e.)** Disabling the identifier after [*FedRAMP Assignment: thirty-five (35) days (see additional requirements and guidance)*]<br>**IA-4e Additional FedRAMP Requirements and Guidance:**<br>**Requirement:** The service provider defines the time period of inactivity for device identifiers.<br>**Guidance:** For DoD clouds, see DoD cloud website for specific DoD requirements that go above and beyond FedRAMP.<br><br>**IA-4(4)**<br>The organization manages individual identifiers by uniquely identifying each individual as [*FedRAMP Assignment: contractors; foreign nationals*]. | **Disable account identifiers after 35 days of inactivity and prevent their reuse for two years. Manage individual identifiers by uniquely identifying each individual (for example, contractors and foreign nationals).**<p>Assign and manage individual account identifiers and status in Azure AD in accordance with existing organizational policies defined in AC-02. Follow AC-02(3) to automatically disable user and device accounts after 35 days of inactivity. Ensure that organizational policy maintains all accounts that remain in the disabled state for at least two years. After this time, you can remove them. 
<p>Determine inactivity<br> <li>[Manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)<br> <li>[Manage stale devices in Azure AD](../devices/manage-stale-devices.md)<br> <li>[See AC-02 guidance](fedramp-access-controls.md) |
| **IA-5 Authenticator Management**<br>The organization manages information system authenticators by:<br>**(a.)** Verifying, as part of the initial authenticator distribution, the identity of the individual, group, role, or device receiving the authenticator;<br>**(b.)** Establishing initial authenticator content for authenticators defined by the organization;<br>**(c.)** Ensuring that authenticators have sufficient strength of mechanism for their intended use;<br>**(d.)** Establishing and implementing administrative procedures for initial authenticator distribution, for lost/compromised or damaged authenticators, and for revoking authenticators;<br>**(e.)** Changing default content of authenticators prior to information system installation;<br>**(f.)** Establishing minimum and maximum lifetime restrictions and reuse conditions for authenticators;<br>**(g.)** Changing/refreshing authenticators [*Assignment: organization-defined time period by authenticator type*].<br>**(h.)** Protecting authenticator content from unauthorized disclosure and modification;<br>**(i.)** Requiring individuals to take, and having devices implement, specific security safeguards to protect authenticators; and<br>**(j.)** Changing authenticators for group/role accounts when membership to those accounts changes.<br><br>**IA-5 Additional FedRAMP Requirements and Guidance:**<br>**Requirement:** Authenticators must be compliant with NIST SP 800-63-3 Digital Identity Guidelines IAL, AAL, FAL level 3. Link https://pages.nist.gov/800-63-3 | **Configure and manage information system authenticators.**<p>Azure AD supports various authentication methods. You can use your existing organizational policies for management. See guidance for authenticator selection in IA-02(1-4). Enable users in combined registration for SSPR and Azure AD multifactor authentication and require users to register a minimum of two acceptable multifactor authentication methods to facilitate self-remediation. 
You can revoke user-configured authenticators at any time with the authentication methods API. <p>Authenticator strength/protecting authenticator content<br> <li>[Achieving NIST authenticator assurance levels with the Microsoft identity platform](nist-overview.md)<p>Authentication methods and combined registration<br> <li>[What authentication and verification methods are available in Azure Active Directory?](../authentication/concept-authentication-methods.md)<br> <li>[Combined registration for SSPR and Azure AD multifactor authentication](../authentication/concept-registration-mfa-sspr-combined.md)<p>Authenticator revokes<br> <li>[Azure AD authentication methods API overview](/graph/api/resources/authenticationmethods-overview) | | **IA-5(1)**<br>The information system, for password-based authentication:<br>**(a.)** Enforces minimum password complexity of [*Assignment: organization-defined requirements for case sensitivity, number of characters, mix of upper-case letters, lower-case letters, numbers, and special characters, including minimum requirements for each type*];<br>**(b.)** Enforces at least the following number of changed characters when new passwords are created: [*FedRAMP Assignment: at least fifty percent (50%)*];<br>**(c.)** Stores and transmits only cryptographically-protected passwords;<br>**(d.) Enforces password minimum and maximum lifetime restrictions of [*Assignment: organization- defined numbers for lifetime minimum, lifetime maximum*];<br>**(e.)** Prohibits password reuse for [*FedRAMP Assignment: twenty-four (24)*] generations; and<br>**(f.)** Allows the use of a temporary password for system logons with an immediate change to a permanent password.<br><br>**IA-5 (1) a and d Additional FedRAMP Requirements and Guidance:**<br>**Guidance:** If password policies are compliant with NIST SP 800-63B Memorized Secret (Section 5.1.1) Guidance, the control may be considered compliant. 
| **Implement password-based authentication requirements.**<p>Per NIST SP 800-63B Section 5.1.1: Maintain a list of commonly used, expected, or compromised passwords.<p>With Azure AD password protection, default global banned password lists are automatically applied to all users in an Azure AD tenant. To support your business and security needs, you can define entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.<p>We strongly encourage passwordless strategies. This control is only applicable to password authenticators, so removing passwords as an available authenticator renders this control not applicable.<p>NIST reference documents<br><li>[NIST Special Publication 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html)<br><li>[NIST Special Publication 800-53 Revision 5](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf) - IA-5 - Control enhancement (1)<p>Resource<br><li>[Eliminate bad passwords using Azure AD password protection](../authentication/concept-password-ban-bad.md) | | **IA-5(2)**<br>The information system, for PKI-based authentication:<br>**(a.)** Validates certifications by constructing and verifying a certification path to an accepted trust anchor including checking certificate status information;<br>**(b.)** Enforces authorized access to the corresponding private key;<br>**(c.)** Maps the authenticated identity to the account of the individual or group; and<br>**(d.)** Implements a local cache of revocation data to support path discovery and validation in case of inability to access revocation information via the network. | **Implement PKI-based authentication requirements.**<p>Federate Azure AD via AD FS to implement PKI-based authentication. By default, AD FS validates certificates, locally caches revocation data, and maps users to the authenticated identity in Active Directory. 
<p> Resources<br> <li>[What is federation with Azure AD?](../hybrid/whatis-fed.md)<br> <li>[Configure AD FS support for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication) |
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
description: Learn how to automatically upgrade an Azure Kubernetes Service (AKS
Previously updated : 07/07/2022 Last updated : 05/01/2023 # Automatically upgrade an Azure Kubernetes Service (AKS) cluster
-Part of the AKS cluster lifecycle involves performing periodic upgrades to the latest Kubernetes version. It's important you apply the latest security releases, or upgrade to get the latest features. Before learning about auto-upgrade, make sure you understand upgrade fundamentals by reading [Upgrade an AKS cluster][upgrade-aks-cluster].
+Part of the AKS cluster lifecycle involves performing periodic upgrades to the latest Kubernetes version. It's important you apply the latest security releases or upgrade to get the latest features. Before learning about auto-upgrade, make sure you understand the [AKS cluster upgrade fundamentals][upgrade-aks-cluster].
> [!NOTE]
-> Any upgrade operation, whether performed manually or automatically, will upgrade the node image version if not already on the latest. The latest version is contingent on a full AKS release, and can be determined by visiting the [AKS release tracker][release-tracker].
+> Any upgrade operation, whether performed manually or automatically, upgrades the node image version if it's not already on the latest version. The latest version is contingent on a full AKS release and can be determined by visiting the [AKS release tracker][release-tracker].
>
-> Auto-upgrade will first upgrade the control plane, and then proceed to upgrade agent pools one by one.
+> Auto-upgrade first upgrades the control plane, and then upgrades agent pools one by one.
## Why use cluster auto-upgrade
-Cluster auto-upgrade provides a set once and forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest AKS features or patches from AKS and upstream Kubernetes.
+Cluster auto-upgrade provides a "set once and forget" mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest features or patches from AKS and upstream Kubernetes.
AKS follows a strict supportability versioning window. With properly selected auto-upgrade channels, you can avoid clusters falling into an unsupported version. For more on the AKS support window, see [Alias minor versions][supported-kubernetes-versions]. ## Customer versus AKS-initiated auto-upgrades
-Customers can specify cluster auto-upgrade specifics in the following guidance. These upgrades occur based on the cadence the customer specifies and are recommended for customers to remain on supported Kubernetes versions.
+You can specify cluster auto-upgrade specifics using the following guidance. The upgrades occur based on your specified cadence and are recommended to remain on supported Kubernetes versions.
AKS also initiates auto-upgrades for unsupported clusters. When a cluster in an n-3 version (where n is the latest supported AKS GA minor version) is about to drop to n-4, AKS automatically upgrades the cluster to n-2 to remain within the AKS support [policy][supported-kubernetes-versions]. Automatically upgrading a platform-supported cluster to a supported version is enabled by default.
For example, Kubernetes v1.25 will upgrade to v1.26 during the v1.29 GA release.
## Cluster auto-upgrade limitations
-If you're using cluster auto-upgrade, you can no longer upgrade the control plane first, and then upgrade the individual node pools. Cluster auto-upgrade always upgrades the control plane and the node pools together. There's no ability of upgrading the control plane only, and trying to run the command `az aks upgrade --control-plane-only` raises the following error: `NotAllAgentPoolOrchestratorVersionSpecifiedAndUnchanged: Using managed cluster api, all Agent pools' OrchestratorVersion must be all specified or all unspecified. If all specified, they must be stay unchanged or the same with control plane.`
+If you're using cluster auto-upgrade, you can no longer upgrade the control plane first, and then upgrade the individual node pools. Cluster auto-upgrade always upgrades the control plane and the node pools together. You can't upgrade the control plane only. Running the `az aks upgrade --control-plane-only` command raises the following error: `NotAllAgentPoolOrchestratorVersionSpecifiedAndUnchanged: Using managed cluster api, all Agent pools' OrchestratorVersion must be all specified or all unspecified. If all specified, they must be stay unchanged or the same with control plane.`
If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node image auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] is disabled by default.
-## Using cluster auto-upgrade
+## Use cluster auto-upgrade
-Automatically completed upgrades are functionally the same as manual upgrades. The timing of upgrades is determined by the [selected auto-upgrade channel][planned-maintenance]. When making changes to auto-upgrade, allow 24 hours for the changes to take effect.
+Automatically completed upgrades are functionally the same as manual upgrades. The [selected auto-upgrade channel][planned-maintenance] determines the timing of upgrades. When making changes to auto-upgrade, allow 24 hours for the changes to take effect. Automatically upgrading a cluster follows the same process as manually upgrading a cluster. For more information, see [Upgrade an AKS cluster][upgrade-aks-cluster].
The following upgrade channels are available:

|Channel| Action | Example |
|---|---|---|
-| `none`| disables auto-upgrades and keeps the cluster at its current version of Kubernetes| Default setting if left unchanged|
-| `patch`| automatically upgrade the cluster to the latest supported patch version when it becomes available while keeping the minor version the same.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.17.9*|
-| `stable`| automatically upgrade the cluster to the latest supported patch release on minor version *N-1*, where *N* is the latest supported minor version.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.18.6*.
-| `rapid`| automatically upgrade the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster is at a version of Kubernetes that is at an *N-2* minor version where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on *N-1* minor version. For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster first is upgraded to *1.18.6*, then is upgraded to *1.19.1*.
-| `node-image`| automatically upgrade the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes don't get the new images unless you do a node image upgrade. Turning on the node-image channel automatically updates your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades] are disabled by default.|
+| `none`| disables auto-upgrades and keeps the cluster at its current version of Kubernetes.| Default setting if left unchanged.|
+| `patch`| automatically upgrades the cluster to the latest supported patch version when it becomes available while keeping the minor version the same.| For example, if a cluster runs version *1.17.7*, and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster upgrades to *1.17.9*.|
+| `stable`| automatically upgrades the cluster to the latest supported patch release on minor version *N-1*, where *N* is the latest supported minor version.| For example, if a cluster runs version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster upgrades to *1.18.6*.|
+| `rapid`| automatically upgrades the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster's Kubernetes version is an *N-2* minor version, where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on *N-1* minor version. For example, if a cluster runs version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster first upgrades to *1.18.6*, then upgrades to *1.19.1*.|
+| `node-image`| automatically upgrades the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes don't get the new images unless you do a node image upgrade. Turning on the node-image channel automatically updates your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades] are disabled by default.|
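Before picking a channel, it can help to see which Kubernetes versions the cluster could move to. A minimal sketch using the example resource names from this article (`myResourceGroup` and `myAKSCluster` are assumptions):

```azurecli-interactive
# List the upgrades currently available to the cluster's control plane
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
```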
> [!NOTE]
-> Cluster auto-upgrade only updates to GA versions of Kubernetes and will not update to preview versions.
-
-> [!NOTE]
-> With AKS, you can create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster will run the minor version's latest GA patch. To Learn more [AKS support window][supported-kubernetes-versions]
-
-> [!NOTE]
-> Auto-upgrade requires the cluster's Kubernetes version to be within the [AKS support window][supported-kubernetes-versions], even if using the `node-image` channel.
+>
+> Keep the following information in mind when using cluster auto-upgrade:
+>
+> * Cluster auto-upgrade only updates to GA versions of Kubernetes and doesn't update to preview versions.
+>
+> * With AKS, you can create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster runs the minor version's latest GA patch. To learn more, see [AKS support window][supported-kubernetes-versions].
+>
+> * Auto-upgrade requires the cluster's Kubernetes version to be within the [AKS support window][supported-kubernetes-versions], even if using the `node-image` channel.
+>
+> * If you're using the preview API `11-02-preview` or later, and you select the `node-image` cluster auto-upgrade channel, the [node image auto-upgrade channel][node-image-auto-upgrade] automatically sets to `NodeImage`.
+>
+> * Each cluster can only be associated with a single auto-upgrade channel. This is because your specified channel determines the Kubernetes version that runs on the cluster.
-> [!NOTE]
-> If using the preview API `11-02-preview` or later, if you select the `node-image` cluster auto-upgrade channel the [node image auto-upgrade channel][node-image-auto-upgrade] will automatically be set to `NodeImage`.
+### Use cluster auto-upgrade with a new AKS cluster
-Automatically upgrading a cluster follows the same process as manually upgrading a cluster. For more information, see [Upgrade an AKS cluster][upgrade-aks-cluster].
+* Set the auto-upgrade channel when creating a new cluster using the [`az aks create`][az-aks-create] command and the `auto-upgrade-channel` parameter.
-To set the auto-upgrade channel when creating a cluster, use the *auto-upgrade-channel* parameter, similar to the following example.
+ ```azurecli-interactive
+ az aks create --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable --generate-ssh-keys
+ ```
-```azurecli-interactive
-az aks create --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable --generate-ssh-keys
-```
+### Use cluster auto-upgrade with an existing AKS cluster
-To set the auto-upgrade channel on existing cluster, update the *auto-upgrade-channel* parameter, similar to the following example.
+* Set the auto-upgrade channel on an existing cluster using the [`az aks update`][az-aks-update] command with the `auto-upgrade-channel` parameter.
-```azurecli-interactive
-az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable
-```
+ ```azurecli-interactive
+ az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable
+ ```
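After setting the channel, you can confirm what the cluster is configured to use. A sketch assuming the same example names; the query path reads the `autoUpgradeProfile` of the managed cluster resource:

```azurecli-interactive
# Show the configured auto-upgrade channel for the cluster
az aks show --resource-group myResourceGroup --name myAKSCluster --query autoUpgradeProfile.upgradeChannel --output tsv
```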
## Auto-upgrade in the Azure portal
-If you're using the Azure portal, you can find auto-upgrade settings under the *Settings* > *Cluster configuration* blade by selecting *Upgrade version*. By default, the `Patch` channel is selected.
+If using the Azure portal, you can find auto-upgrade settings under the **Settings** > **Cluster configuration** blade by selecting **Upgrade version**. The `Patch` channel is selected by default.
:::image type="content" source="./media/auto-upgrade-cluster/portal-upgrade.png" alt-text="The screenshot of the upgrade blade for an AKS cluster in the Azure portal. The automatic upgrade field shows 'patch' selected, and several APIs deprecated between the selected Kubernetes version and the cluster's current version are described.":::
-The Azure portal also highlights all the deprecated APIs between your current version and newer, available versions you intend to migrate to. For more information, see [the Kubernetes API Removal and Deprecation process][k8s-deprecation].
+The Azure portal also highlights all the deprecated APIs between your current version and newer, available versions you intend to migrate to. For more information, see the [Kubernetes API removal and deprecation process][k8s-deprecation].
-## Using auto-upgrade with Planned Maintenance
+## Use auto-upgrade with Planned Maintenance
-If you're using Planned Maintenance and cluster auto-upgrade, your upgrade starts during your specified maintenance window.
+If using Planned Maintenance and cluster auto-upgrade, your upgrade starts during your specified maintenance window.
> [!NOTE]
-> To ensure proper functionality, use a maintenance window of four hours or more.
+> To ensure proper functionality, use a maintenance window of *four hours or more*.
For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance].
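As an illustration, a maintenance window that auto-upgrade could honor might be created as follows. This is a sketch, not the canonical setup: the `default` configuration name, day, and start hour are assumptions, and window sizing options are covered in the Planned Maintenance article:

```azurecli-interactive
# Permit maintenance to start on Mondays at 01:00
az aks maintenanceconfiguration add --resource-group myResourceGroup --cluster-name myAKSCluster --name default --weekday Monday --start-hour 1
```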
For more information on Planned Maintenance, see [Use Planned Maintenance to sch
Use the following best practices to help maximize your success when using auto-upgrade:

-- In order to keep your cluster always in a supported version (i.e within the N-2 rule), choose either `stable` or `rapid` channels.
-- If you're interested in getting the latest patches as soon as possible, use the `patch` channel. The `node-image` channel is a good fit if you want your agent pools to always be running the most recent node images.
-- To automatically upgrade node images while using a different cluster upgrade channel, consider using the [node image auto-upgrade][node-image-auto-upgrade] `NodeImage` channel.
-- Follow [Operator best practices][operator-best-practices-scheduler].
-- Follow [PDB best practices][pdb-best-practices].
+* To ensure your cluster is always in a supported version (i.e within the N-2 rule), choose either `stable` or `rapid` channels.
+* If you're interested in getting the latest patches as soon as possible, use the `patch` channel. The `node-image` channel is a good fit if you want your agent pools to always run the most recent node images.
+* To automatically upgrade node images while using a different cluster upgrade channel, consider using the [node image auto-upgrade][node-image-auto-upgrade] `NodeImage` channel.
+* Follow [Operator best practices][operator-best-practices-scheduler].
+* Follow [PDB best practices][pdb-best-practices].
+* For upgrade troubleshooting information, see the [AKS troubleshooting documentation][aks-troubleshoot-docs].
<!-- INTERNAL LINKS -->
[supported-kubernetes-versions]: ./supported-kubernetes-versions.md
[upgrade-aks-cluster]: ./upgrade-cluster.md
[planned-maintenance]: ./planned-maintenance.md
[operator-best-practices-scheduler]: operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets
-[node-image-auto-upgrade]: auto-upgrade-node-image.md
+[node-image-auto-upgrade]: auto-upgrade-node-image.md
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-update]: /cli/azure/aks#az_aks_update
+[aks-troubleshoot-docs]: /support/azure/azure-kubernetes/welcome-azure-kubernetes
<!-- EXTERNAL LINKS -->
[pdb-best-practices]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md
kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"
The following example output shows the three nodes distributed across the specified region and availability zones, such as *eastus2-1* for the first availability zone and *eastus2-2* for the second availability zone:
-```console
+```output
Name:       aks-nodepool1-28993262-vmss000000
            topology.kubernetes.io/zone=eastus2-1
Name:       aks-nodepool1-28993262-vmss000001
As you add more nodes to an agent pool, the Azure platform automatically distrib
With Kubernetes versions 1.17.0 and later, AKS uses the newer label `topology.kubernetes.io/zone` and the deprecated `failure-domain.beta.kubernetes.io/zone`. You can get the same result from running the `kubectl describe nodes` command in the previous step, by running the following script:
- ```console
+ ```bash
kubectl get nodes -o custom-columns=NAME:'{.metadata.name}',REGION:'{.metadata.labels.topology\.kubernetes\.io/region}',ZONE:'{metadata.labels.topology\.kubernetes\.io/zone}'
```

The following example resembles the output with more verbose details:
-```console
+```output
NAME                                REGION   ZONE
aks-nodepool1-34917322-vmss000000   eastus   eastus-1
aks-nodepool1-34917322-vmss000001   eastus   eastus-2
az aks scale \
When the scale operation completes after a few minutes, run the command `kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"` in a Bash shell. The following output resembles the results:
-```console
+```output
Name:       aks-nodepool1-28993262-vmss000000
            topology.kubernetes.io/zone=eastus2-1
Name:       aks-nodepool1-28993262-vmss000001
kubectl scale deployment nginx --replicas=3
By viewing nodes where your pods are running, you see pods are running on the nodes corresponding to three different availability zones. For example, with the command `kubectl describe pod | grep -e "^Name:" -e "^Node:"` in a Bash shell, you see the following example output:
-```console
+```output
Name:   nginx-6db489d4b7-ktdwg
Node:   aks-nodepool1-28993262-vmss000000/10.240.0.4
Name:   nginx-6db489d4b7-v7zvj
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
The reclaim policy on both storage classes ensures that the underlying Azure Blo
Use the [kubectl get sc][kubectl-get] command to see the storage classes. The following example shows the `azureblob-fuse-premium` and `azureblob-nfs-premium` storage classes available within an AKS cluster:
-```bash
+```output
NAME                     PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
azureblob-fuse-premium   blob.csi.azure.com   Delete          Immediate           true                   23h
azureblob-nfs-premium    blob.csi.azure.com   Delete          Immediate           true                   23h
aks Azure Csi Blob Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md
The following YAML creates a pod that uses the persistent volume claim **azure-b
The output of the command resembles the following example:
- ```bash
+ ```output
    test.txt
    ```
In this example, the following manifest configures mounting a Blob storage conta
The output of the command resembles the following example:
- ```bash
+ ```output
    storageclass.storage.k8s.io/blob-nfs-premium created
    ```
In this example, the following manifest configures using blobfuse and mounts a B
The output of the command resembles the following example:
- ```bash
+ ```output
    storageclass.storage.k8s.io/blob-fuse-premium created
    ```
When you create an Azure Blob storage resource for use with AKS, you can create
For this article, create the container in the node resource group. First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**:
-```azurecli
+```azurecli-interactive
az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
```

The output of the command resembles the following example:
-```azurecli
+```azurecli-interactive
MC_myResourceGroup_myAKSCluster_eastus
```
The following YAML creates a pod that uses the persistent volume or persistent v
The output from the command resembles the following example:
- ```bash
+ ```output
    Filesystem   Size   Used   Avail   Use%   Mounted on
    ...
    blobfuse     14G    41M    13G     1%     /mnt/blob
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
For more information on Kubernetes volumes, see [Storage options for application
* Make sure you have Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
* The Azure Disk CSI driver has a per-node volume limit. The volume count changes based on the size of the node/node pool. Run the [kubectl get][kubectl-get] command to determine the number of volumes that can be allocated per node:
- ```console
+ ```bash
    kubectl get CSINode <nodename> -o yaml
    ```
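To read just the per-node volume limit from that YAML, a jsonpath filter can be used. A sketch, assuming the node runs the Azure Disk CSI driver (`disk.csi.azure.com`) and that `<nodename>` is substituted with a real node name:

```bash
# Print the number of Azure Disk volumes this node can attach
kubectl get CSINode <nodename> -o jsonpath='{.spec.drivers[?(@.name=="disk.csi.azure.com")].allocatable.count}'
```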
kubectl get sc
The output of the command resembles the following example:
-```console
+```output
NAME                PROVISIONER          AGE
default (default)   disk.csi.azure.com   1h
managed-csi         disk.csi.azure.com   1h
A persistent volume claim (PVC) is used to automatically provision storage based
The output of the command resembles the following example:
- ```console
+ ```output
    persistentvolumeclaim/azure-managed-disk created
    ```
Once the persistent volume claim has been created and the disk successfully prov
2. Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
- ```console
+ ```bash
    kubectl apply -f azure-pvc-disk.yaml
    ```

    The output of the command resembles the following example:
- ```console
+ ```output
    pod/mypod created
    ```
Once the persistent volume claim has been created and the disk successfully prov
The output of the command resembles the following example:
- ```console
+ ```output
    [...]
    Volumes:
      volume:
When you create an Azure disk for use with AKS, you can create the disk resource
The disk resource ID is displayed once the command has successfully completed, as shown in the following example output. This disk ID is used to mount the disk in the next section.
- ```console
+ ```output
    /subscriptions/<subscriptionID>/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
    ```
following command:
The output of the command resembles the following example:
- ```console
+ ```output
    NAME            STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    pvc-azuredisk   Bound    pv-azuredisk   20Gi       RWO                           5s
    ```
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
kubectl get pvc my-azurefile
The output of the command resembles the following example:
-```console
+```output
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-azurefile   Bound    pvc-8436e62e-a0d9-11e5-8521-5a8664dc0477   10Gi       RWX            my-azurefile   5m
```
Before you can use an Azure Files file share as a Kubernetes volume, you must cr
1. Get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**.
- ```azurecli
+ ```azurecli-interactive
    az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
    ```

    The output of the command resembles the following example:
- ```azurecli
+ ```azurecli-interactive
    MC_myResourceGroup_myAKSCluster_eastus
    ```
Before you can use an Azure Files file share as a Kubernetes volume, you must cr
* `nodeResourceGroupName` with the name of the resource group that the AKS cluster nodes are hosted in
* `location` with the name of the region to create the resource in. It should be the same region as the AKS cluster nodes.
- ```azurecli
+ ```azurecli-interactive
    az storage account create -n myAKSStorageAccount -g nodeResourceGroupName -l location --sku Standard_LRS
    ```

3. Run the following command to export the connection string as an environment variable. This is used when creating the Azure file share in a later step.
- ```azurecli
+ ```azurecli-interactive
    export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n storageAccountName -g resourceGroupName -o tsv)
    ```

4. Create the file share using the [Az storage share create][az-storage-share-create] command. Replace the placeholder `shareName` with a name you want to use for the share.
- ```azurecli
+ ```azurecli-interactive
    az storage share create -n shareName --connection-string $AZURE_STORAGE_CONNECTION_STRING
    ```

5. Run the following command to export the storage account key as an environment variable.
- ```azurecli
+ ```azurecli-interactive
    STORAGE_KEY=$(az storage account keys list --resource-group $AKS_PERS_RESOURCE_GROUP --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)
    ```

6. Run the following commands to echo the storage account name and key. Copy this information as these values are needed when you create the Kubernetes volume later in this article.
- ```azurecli
+ ```azurecli-interactive
    echo Storage account name: $AKS_PERS_STORAGE_ACCOUNT_NAME
    echo Storage account key: $STORAGE_KEY
    ```
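In a static-provisioning flow like this, the account name and key are typically stored in a Kubernetes secret that the volume mount later references. A sketch under that assumption; the secret name `azure-secret` is illustrative:

```bash
# Store the storage account credentials in a Kubernetes secret
kubectl create secret generic azure-secret \
    --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME \
    --from-literal=azurestorageaccountkey=$STORAGE_KEY
```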
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi
The output of the command resembles the following example:
-```bash
+```output
persistentvolumeclaim/pvc-azuredisk created
pod/nginx-azuredisk created
```
kubectl apply -f sc-azuredisk-csi-waitforfirstconsumer.yaml
The output of the command resembles the following example:
-```bash
+```output
storageclass.storage.k8s.io/azuredisk-csi-waitforfirstconsumer created
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi
The output of the command resembles the following example:
-```bash
+```output
volumesnapshotclass.snapshot.storage.k8s.io/csi-azuredisk-vsc created
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi
The output of the command resembles the following example:
-```bash
+```output
volumesnapshot.snapshot.storage.k8s.io/azuredisk-volume-snapshot created
```
kubectl describe volumesnapshot azuredisk-volume-snapshot
The output of the command resembles the following example:
-```bash
+```output
Name:         azuredisk-volume-snapshot
Namespace:    default
Labels:       <none>
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi
The output of the command resembles the following example:
-```bash
+```output
persistentvolumeclaim/pvc-azuredisk-snapshot-restored created
pod/nginx-restored created
```
kubectl exec nginx-restored -- ls /mnt/azuredisk
The output of the command resembles the following example:
-```bash
+```output
lost+found
outfile
test.txt
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi
The output of the command resembles the following example:
-```bash
+```output
persistentvolumeclaim/pvc-azuredisk-cloning created
pod/nginx-restored-cloning created
```
kubectl exec nginx-restored-cloning -- ls /mnt/azuredisk
The output of the command resembles the following example:
-```bash
+```output
lost+found
outfile
test.txt
kubectl exec -it nginx-azuredisk -- df -h /mnt/azuredisk
The output of the command resembles the following example:
-```bash
+```output
Filesystem   Size   Used   Avail   Use%   Mounted on
/dev/sdc     9.8G   42M    9.8G    1%     /mnt/azuredisk
```

Expand the PVC by increasing the `spec.resources.requests.storage` field by running the following command:
-```basj
+```bash
kubectl patch pvc pvc-azuredisk --type merge --patch '{"spec": {"resources": {"requests": {"storage": "15Gi"}}}}'
```

The output of the command resembles the following example:
-```bash
+```output
persistentvolumeclaim/pvc-azuredisk patched
```

Run the following command to confirm the volume size has increased:
-```console
+```bash
kubectl get pv
```

The output of the command resembles the following example:
-```bash
+```output
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
pvc-391ea1a6-0191-4022-b915-c8dc4216174a   15Gi       RWO            Delete           Bound    default/pvc-azuredisk   managed-csi             2d2h
(...)
kubectl get pvc pvc-azuredisk
The output of the command resembles the following example:
-```bash
+```output
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-azuredisk   Bound    pvc-391ea1a6-0191-4022-b915-c8dc4216174a   15Gi       RWO            managed-csi    2d2h
```
kubectl exec -it nginx-azuredisk -- df -h /mnt/azuredisk
The output of the command resembles the following example:
-```bash
+```output
Filesystem   Size   Used   Avail   Use%   Mounted on
/dev/sdc     15G    46M    15G     1%     /mnt/azuredisk
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi
The output of the command resembles the following example:
-```bash
+```output
statefulset.apps/busybox-azuredisk created
```
kubectl exec -it busybox-azuredisk-0 -- cat c:\mnt\azuredisk\data.txt # on Windo
The output of the command resembles the following example:
-```bash
+```output
2020-08-27 08:13:41Z
2020-08-27 08:13:42Z
2020-08-27 08:13:44Z
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi
The output of the command resembles the following example:
-```bash
+```output
persistentvolumeclaim/pvc-azurefile created
pod/nginx-azurefile created
```
kubectl exec nginx-azurefile -- ls -l /mnt/azurefile
The output of the command resembles the following example:
-```bash
+```output
total 29
-rwxrwxrwx 1 root root 29348 Aug 31 21:59 outfile
```
kubectl apply -f azure-file-sc.yaml
The output of the command resembles the following example:
-```bash
+```output
storageclass.storage.k8s.io/my-azurefile created
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi
The output of the command resembles the following example:
-```bash
+```output
volumesnapshotclass.snapshot.storage.k8s.io/csi-azurefile-vsc created
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi
The output of the command resembles the following example:
-```bash
+```output
volumesnapshot.snapshot.storage.k8s.io/azurefile-volume-snapshot created
```
kubectl exec -it nginx-azurefile -- df -h /mnt/azurefile
The output of the command resembles the following example:
-```bash
+```output
Filesystem                                                                                 Size   Used   Avail   Use%   Mounted on
//f149b5a219bd34caeb07de9.file.core.windows.net/pvc-5e5d9980-da38-492b-8581-17e3cad01770   100G   128K   100G    1%     /mnt/azurefile
```
kubectl patch pvc pvc-azurefile --type merge --patch '{"spec": {"resources": {"r
The output of the command resembles the following example:
-```bash
+```output
persistentvolumeclaim/pvc-azurefile patched
```
mountOptions:
Create the storage class by using the `kubectl apply` command:
-```console
+```bash
kubectl apply -f private-azure-file-sc.yaml
```

The output of the command resembles the following example:
-```bash
+```output
storageclass.storage.k8s.io/private-azurefile-csi created
```
spec:
Create the PVC by using the [kubectl apply][kubectl-apply] command:
-```console
+```bash
kubectl apply -f private-pvc.yaml
```
kubectl apply -f nfs-sc.yaml
The output of the command resembles the following example:
-```bash
+```output
storageclass.storage.k8s.io/azurefile-csi-nfs created
```
spec:
The output of the command resembles the following example:
-```bash
+```output
statefulset.apps/statefulset-azurefile created
```
kubectl exec -it statefulset-azurefile-0 -- df -h
The output of the command resembles the following example:
-```bash
+```output
Filesystem   Size   Used   Avail   Use%   Mounted on
...
/dev/sda1    29G    11G    19G     37%    /etc/hosts
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi
The output of the command resembles the following example:
-```bash
+```output
statefulset.apps/busybox-azurefile created
```
kubectl exec -it busybox-azurefile-0 -- cat c:\mnt\azurefile\data.txt # on Windo
The output of the commands resembles the following example:
-```bash
+```output
2020-08-27 22:11:01Z
2020-08-27 22:11:02Z
2020-08-27 22:11:04Z
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
The following considerations apply when you use Azure NetApp Files:
1. Register the *Microsoft.NetApp* resource provider by running the following command:
- ```azurecli
+ ```azurecli-interactive
az provider register --namespace Microsoft.NetApp --wait ```
The following considerations apply when you use Azure NetApp Files:
2. When you create an Azure NetApp account for use with AKS, you can create the account in an existing resource group or create a new one in the same region as the AKS cluster. The following command creates an account named *myaccount1* in the *myResourceGroup* resource group and *eastus* region:
- ```azurecli
+ ```azurecli-interactive
    az netappfiles account create \
        --resource-group myResourceGroup \
        --location eastus \
The following command creates an account named *myaccount1* in the *myResourceGr
3. Create a new capacity pool by using [az netappfiles pool create][az-netappfiles-pool-create]. The following example creates a new capacity pool named *mypool1* with 4 TB in size and *Premium* service level:
- ```azurecli
+ ```azurecli-interactive
    az netappfiles pool create \
        --resource-group myResourceGroup \
        --location eastus \
The following command creates an account named *myaccount1* in the *myResourceGr
> [!NOTE] > This subnet must be in the same virtual network as your AKS cluster.
- ```azurecli
+ ```azurecli-interactive
    RESOURCE_GROUP=myResourceGroup
    VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
    VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
The following command creates an account named *myaccount1* in the *myResourceGr
1. Create a volume using the [az netappfiles volume create][az-netappfiles-volume-create] command. Update `RESOURCE_GROUP`, `LOCATION`, `ANF_ACCOUNT_NAME` (Azure NetApp account name), `POOL_NAME`, and `SERVICE_LEVEL` with the correct values.
- ```azurecli
+ ```azurecli-interactive
    RESOURCE_GROUP=myResourceGroup
    LOCATION=eastus
    ANF_ACCOUNT_NAME=myaccount1
The following command creates an account named *myaccount1* in the *myResourceGr
1. List the details of your volume using [az netappfiles volume show][az-netappfiles-volume-show]
- ```azurecli
+ ```azurecli-interactive
    az netappfiles volume show \
        --resource-group $RESOURCE_GROUP \
        --account-name $ANF_ACCOUNT_NAME \
The following command creates an account named *myaccount1* in the *myResourceGr
The following output resembles the output of the previous command:
- ```console
+ ```output
    {
      ...
      "creationToken": "myfilepath2",
The following command creates an account named *myaccount1* in the *myResourceGr
    kubectl exec -it nginx-nfs -- sh
    ```
- ```console
+ ```output
    / # df -h
    Filesystem   Size   Used   Avail   Use%   Mounted on
    ...
This section walks you through the installation of Astra Trident using the opera
The output of the command resembles the following example:
- ```console
+ ```output
    namespace/trident created
    ```
This section walks you through the installation of Astra Trident using the opera
The output of the command resembles the following example:
- ```console
+ ```output
    serviceaccount/trident-operator created
    clusterrole.rbac.authorization.k8s.io/trident-operator created
    clusterrolebinding.rbac.authorization.k8s.io/trident-operator created
This section walks you through the installation of Astra Trident using the opera
The output of the command resembles the following example:
- ```console
+ ```output
tridentorchestrator.trident.netapp.io/trident created
```
This section walks you through the installation of Astra Trident using the opera
The output of the command resembles the following example:
- ```console
+ ```output
Name:       trident
Namespace:
Labels:     <none>
This section walks you through the installation of Astra Trident using the opera
The output of the command resembles the following example:
- ```console
+ ```output
secret/backend-tbc-anf-secret created
tridentbackendconfig.trident.netapp.io/backend-tbc-anf created
```
A storage class is used to define how a unit of storage is dynamically created w
The output of the command resembles the following example:
- ```console
+ ```output
storageclass/azure-netapp-files created
```
A persistent volume claim (PVC) is a request for storage by a user. Upon the cre
The output of the command resembles the following example:
- ```console
+ ```output
persistentvolumeclaim/anf-pvc created
```
A persistent volume claim (PVC) is a request for storage by a user. Upon the cre
The output of the command resembles the following example:
- ```console
+ ```bash
kubectl get pvc -n trident
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
anf-pvc   Bound    pvc-bffa315d-3f44-4770-86eb-c922f567a075   1Ti        RWO            azure-netapp-files   62s
After the PVC is created, a pod can be spun up to access the Azure NetApp Files
The output of the command resembles the following example:
- ```console
+ ```output
pod/nginx-pod created
```
After the PVC is created, a pod can be spun up to access the Azure NetApp Files
The output of the command resembles the following example:
- ```console
+ ```output
[...]
Volumes:
  volume:
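The `nginx-pod` and `anf-pvc` names in the outputs above suggest a pod manifest along the following lines; treat it as an illustrative sketch, since the image, mount path, and use of the `trident` namespace are assumptions rather than the article's actual manifest:

```yaml
# Illustrative sketch only: the image and mountPath are assumptions;
# the namespace follows the earlier `kubectl get pvc -n trident` output.
kind: Pod
apiVersion: v1
metadata:
  name: nginx-pod
  namespace: trident
spec:
  containers:
    - name: nginx
      image: nginx:stable
      volumeMounts:
        - name: volume
          mountPath: /mnt/data
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: anf-pvc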
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
Title: Use the cluster autoscaler in Azure Kubernetes Service (AKS)
-description: Learn how to use the cluster autoscaler to automatically scale your cluster to meet application demands in an Azure Kubernetes Service (AKS) cluster.
+description: Learn how to use the cluster autoscaler to automatically scale your Azure Kubernetes Service (AKS) clusters to meet application demands.
Previously updated : 10/03/2022- Last updated : 05/02/2023 # Automatically scale a cluster to meet application demands on Azure Kubernetes Service (AKS)
-To keep up with application demands in Azure Kubernetes Service (AKS), you may need to adjust the number of nodes that run your workloads. The cluster autoscaler component can watch for pods in your cluster that can't be scheduled because of resource constraints. When issues are detected, the number of nodes in a node pool is increased to meet the application demand. Nodes are also regularly checked for a lack of running pods, with the number of nodes then decreased as needed. This ability to automatically scale up or down the number of nodes in your AKS cluster lets you run an efficient, cost-effective cluster.
+To keep up with application demands in Azure Kubernetes Service (AKS), you may need to adjust the number of nodes that run your workloads. The cluster autoscaler component can watch for pods in your cluster that can't be scheduled because of resource constraints. When issues are detected, the number of nodes in a node pool increases to meet the application demand. Nodes are also regularly checked for a lack of running pods, with the number of nodes then decreased as needed. This ability to automatically scale up or down the number of nodes in your AKS cluster lets you run an efficient, cost-effective cluster.
This article shows you how to enable and manage the cluster autoscaler in an AKS cluster.
To adjust to changing application demands, such as between the workday and eveni
![The cluster autoscaler and horizontal pod autoscaler often work together to support the required application demands](media/autoscaler/cluster-autoscaler.png)
-Both the horizontal pod autoscaler and cluster autoscaler can also decrease the number of pods and nodes as needed. The cluster autoscaler decreases the number of nodes when there has been unused capacity for a period of time. Pods on a node to be removed by the cluster autoscaler are safely scheduled elsewhere in the cluster. For more information about how scaling down works, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work).
+Both the horizontal pod autoscaler and cluster autoscaler can decrease the number of pods and nodes as needed. The cluster autoscaler decreases the number of nodes when there has been unused capacity for a period of time. Pods on a node to be removed by the cluster autoscaler are safely scheduled elsewhere in the cluster.
+
+If the current node pool size is lower than the specified minimum or greater than the specified maximum when you enable autoscaling, the autoscaler waits to take effect until a new node is needed in the node pool or until a node can be safely deleted from the node pool.
+
+For more information about how scaling down works, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work).
The cluster autoscaler may be unable to scale down if pods can't move, such as in the following situations:
The cluster autoscaler may be unable to scale down if pods can't move, such as i
For more information about how the cluster autoscaler may be unable to scale down, see [What types of pods can prevent the cluster autoscaler from removing a node?][autoscaler-scaledown].
-The cluster autoscaler uses startup parameters for things like time intervals between scale events and resource thresholds. For more information on what parameters the cluster autoscaler uses, see [Using the autoscaler profile](#using-the-autoscaler-profile).
+The cluster autoscaler uses startup parameters for things like time intervals between scale events and resource thresholds. For more information on what parameters the cluster autoscaler uses, see [using the autoscaler profile](#use-the-cluster-autoscaler-profile).
-The cluster and horizontal pod autoscalers can work together, and are often both deployed in a cluster. When combined, the horizontal pod autoscaler is focused on running the number of pods required to meet application demand. The cluster autoscaler is focused on running the number of nodes required to support the scheduled pods.
+The cluster and horizontal pod autoscalers can work together and are often both deployed in a cluster. When combined, the horizontal pod autoscaler runs the number of pods required to meet application demand. The cluster autoscaler runs the number of nodes required to support the scheduled pods.
> [!NOTE]
-> Manual scaling is disabled when you use the cluster autoscaler. Let the cluster autoscaler determine the required number of nodes. If you want to manually scale your cluster, [disable the cluster autoscaler](#disable-the-cluster-autoscaler).
+> Manual scaling is disabled when you use the cluster autoscaler. Let the cluster autoscaler determine the required number of nodes. If you want to manually scale your cluster, [disable the cluster autoscaler](#disable-the-cluster-autoscaler-on-a-cluster).
-## Create an AKS cluster and enable the cluster autoscaler
+## Use the cluster autoscaler on your AKS cluster
-If you need to create an AKS cluster, use the [az aks create][az-aks-create] command. To enable and configure the cluster autoscaler on the node pool for the cluster, use the `--enable-cluster-autoscaler` parameter, and specify a node `--min-count` and `--max-count`.
+### Enable the cluster autoscaler on a new cluster
> [!IMPORTANT]
> The cluster autoscaler is a Kubernetes component. Although the AKS cluster uses a virtual machine scale set for the nodes, don't manually enable or edit settings for scale set autoscale in the Azure portal or using the Azure CLI. Let the Kubernetes cluster autoscaler manage the required scale settings. For more information, see [Can I modify the AKS resources in the node resource group?][aks-faq-node-resource-group]
-The following example creates an AKS cluster with a single node pool backed by a virtual machine scale set. It also enables the cluster autoscaler on the node pool for the cluster and sets a minimum of *1* and maximum of *3* nodes:
-
-```azurecli-interactive
-# First create a resource group
-az group create --name myResourceGroup --location eastus
-
-# Now create the AKS cluster and enable the cluster autoscaler
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 1 \
- --vm-set-type VirtualMachineScaleSets \
- --load-balancer-sku standard \
- --enable-cluster-autoscaler \
- --min-count 1 \
- --max-count 3
-```
+1. Create a resource group using the [`az group create`][az-group-create] command.
+
+ ```azurecli-interactive
+ az group create --name myResourceGroup --location eastus
+ ```
-It takes a few minutes to create the cluster and configure the cluster autoscaler settings.
+2. Create an AKS cluster using the [`az aks create`][az-aks-create] command and enable and configure the cluster autoscaler on the node pool for the cluster using the `--enable-cluster-autoscaler` parameter and specifying a node `--min-count` and `--max-count`. The following example command creates a cluster with a single node backed by a virtual machine scale set, enables the cluster autoscaler, and sets a minimum of one and a maximum of three nodes:
-## Update an existing AKS cluster to enable the cluster autoscaler
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 1 \
+ --vm-set-type VirtualMachineScaleSets \
+ --load-balancer-sku standard \
+ --enable-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 3
+ ```
-Use the [az aks update][az-aks-update] command to enable and configure the cluster autoscaler on the node pool for the existing cluster. Use the `--enable-cluster-autoscaler` parameter, and specify a node `--min-count` and `--max-count`.
+ It takes a few minutes to create the cluster and configure the cluster autoscaler settings.
+
+### Enable the cluster autoscaler on an existing cluster
> [!IMPORTANT]
> The cluster autoscaler is a Kubernetes component. Although the AKS cluster uses a virtual machine scale set for the nodes, don't manually enable or edit settings for scale set autoscale in the Azure portal or using the Azure CLI. Let the Kubernetes cluster autoscaler manage the required scale settings. For more information, see [Can I modify the AKS resources in the node resource group?][aks-faq-node-resource-group]
-The following example updates an existing AKS cluster to enable the cluster autoscaler on the node pool for the cluster and sets a minimum of *1* and maximum of *3* nodes:
+* Update an existing cluster using the [`az aks update`][az-aks-update] command and enable and configure the cluster autoscaler on the node pool using the `--enable-cluster-autoscaler` parameter and specifying a node `--min-count` and `--max-count`. The following example command updates an existing AKS cluster to enable the cluster autoscaler on the node pool for the cluster and sets a minimum of one and maximum of three nodes:
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --enable-cluster-autoscaler \
- --min-count 1 \
- --max-count 3
-```
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --enable-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 3
+ ```
+
+ It takes a few minutes to update the cluster and configure the cluster autoscaler settings.
+
+### Disable the cluster autoscaler on a cluster
+
+* Disable the cluster autoscaler using the [`az aks update`][az-aks-update-preview] command and the `--disable-cluster-autoscaler` parameter.
-It takes a few minutes to update the cluster and configure the cluster autoscaler settings.
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --disable-cluster-autoscaler
+ ```
+
+ Nodes aren't removed when the cluster autoscaler is disabled.
+
+> [!NOTE]
+> You can manually scale your cluster after disabling the cluster autoscaler using the [`az aks scale`][az-aks-scale] command. If you use the horizontal pod autoscaler, that feature continues to run with the cluster autoscaler disabled, but pods may end up unable to be scheduled if all node resources are in use.
+
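For example, returning the example cluster to a fixed three nodes after disabling the autoscaler might look like the following sketch (the node count shown is illustrative):

```azurecli-interactive
az aks scale \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 3
```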
+### Re-enable a disabled cluster autoscaler
+
+You can re-enable the cluster autoscaler on an existing cluster using the [`az aks update`][az-aks-update-preview] command and specifying the `--enable-cluster-autoscaler`, `--min-count`, and `--max-count` parameters.
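Reusing this article's example resource names, the re-enable command might look like the following (the minimum and maximum counts are illustrative):

```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 3
```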
## Change the cluster autoscaler settings

> [!IMPORTANT]
-> If you have multiple node pools in your AKS cluster, skip to the [autoscale with multiple agent pools section](#use-the-cluster-autoscaler-with-multiple-node-pools-enabled). Clusters with multiple agent pools require use of the `az aks nodepool` command set to change node pool specific properties instead of `az aks`.
+> If you have multiple node pools in your AKS cluster, skip to the [autoscale with multiple agent pools section](#use-the-cluster-autoscaler-with-multiple-node-pools-enabled). Clusters with multiple agent pools require the `az aks nodepool` command instead of `az aks`.
-In the previous step to create an AKS cluster or update an existing node pool, the cluster autoscaler minimum node count was set to *1*, and the maximum node count was set to *3*. As your application demands change, you may need to adjust the cluster autoscaler node count.
+In the previous step to create an AKS cluster or update an existing node pool, the cluster autoscaler minimum node count was set to one and the maximum node count was set to three. As your application demands change, you may need to adjust the cluster autoscaler node count.
-To change the node count, use the [az aks update][az-aks-update] command.
+* Change the node count using the [`az aks update`][az-aks-update] command and update the cluster autoscaler using the `--update-cluster-autoscaler` parameter and specifying your updated node `--min-count` and `--max-count`.
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --update-cluster-autoscaler \
- --min-count 1 \
- --max-count 5
-```
-
-The above example updates cluster autoscaler on the single node pool in *myAKSCluster* to a minimum of *1* and maximum of *5* nodes.
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --update-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 5
+ ```
> [!NOTE]
-> The cluster autoscaler will enforce the minimum count in cases where the actual count drops below the minimum due to external factors, such as during a spot eviction or when changing the minimum count value from the AKS API.
+> The cluster autoscaler enforces the minimum count in cases where the actual count drops below the minimum due to external factors, such as during a spot eviction or when changing the minimum count value from the AKS API.
Monitor the performance of your applications and services, and adjust the cluster autoscaler node counts to match the required performance.
-## Using the autoscaler profile
+## Use the cluster autoscaler profile
-You can also configure more granular details of the cluster autoscaler by changing the default values in the cluster-wide autoscaler profile. For example, a scale down event happens after nodes are under-utilized after 10 minutes. If you had workloads that ran every 15 minutes, you may want to change the autoscaler profile to scale down under utilized nodes after 15 or 20 minutes. When you enable the cluster autoscaler, a default profile is used unless you specify different settings. The cluster autoscaler profile has the following settings that you can update:
+You can also configure more granular details of the cluster autoscaler by changing the default values in the cluster-wide autoscaler profile. For example, a scale-down event happens after nodes have been under-utilized for 10 minutes. If you have workloads that run every 15 minutes, you may want to change the autoscaler profile to scale down under-utilized nodes after 15 or 20 minutes. When you enable the cluster autoscaler, a default profile is used unless you specify different settings. The cluster autoscaler profile has the following settings you can update:
| Setting | Description | Default value |
|-|-|-|
You can also configure more granular details of the cluster autoscaler by changi
| scale-down-utilization-threshold | Node utilization level, defined as sum of requested resources divided by capacity, below which a node can be considered for scale down | 0.5 |
| max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | 600 seconds |
| balance-similar-node-groups | Detects similar node pools and balances the number of nodes between them | false |
-| expander | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to be used in scale up. Possible values: `most-pods`, `random`, `least-waste`, `priority` | random |
+| expander | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to be used in scale up. Possible values: `most-pods`, `random`, `least-waste`, `priority` | random |
| skip-nodes-with-local-storage | If true cluster autoscaler will never delete nodes with pods with local storage, for example, EmptyDir or HostPath | false |
-| skip-nodes-with-system-pods | If true cluster autoscaler will never delete nodes with pods from kube-system (except for DaemonSet or mirror pods) | true |
+| skip-nodes-with-system-pods | If true cluster autoscaler will never delete nodes with pods from kube-system (except for DaemonSet or mirror pods) | true |
| max-empty-bulk-delete | Maximum number of empty nodes that can be deleted at the same time | 10 nodes |
| new-pod-scale-up-delay | For scenarios like burst/batch scale where you don't want CA to act before the kubernetes scheduler could schedule all the pods, you can tell CA to ignore unscheduled pods before they're a certain age. | 0 seconds |
| max-total-unready-percentage | Maximum percentage of unready nodes in the cluster. After this percentage is exceeded, CA halts operations | 45% |
-| max-node-provision-time | Maximum time the autoscaler waits for a node to be provisioned | 15 minutes |
+| max-node-provision-time | Maximum time the autoscaler waits for a node to be provisioned | 15 minutes |
| ok-total-unready-count | Number of allowed unready nodes, irrespective of max-total-unready-percentage | 3 nodes |

> [!IMPORTANT]
-> The cluster autoscaler profile affects all node pools that use the cluster autoscaler. You can't set an autoscaler profile per node pool.
+> When using the autoscaler profile, keep the following information in mind:
>
-> The cluster autoscaler profile requires version *2.11.1* or greater of the Azure CLI. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-
-### Set the cluster autoscaler profile on an existing AKS cluster
-
-Use the [az aks update][az-aks-update-preview] command with the *cluster-autoscaler-profile* parameter to set the cluster autoscaler profile on your cluster. The following example configures the scan interval setting as 30s in the profile.
+> * The cluster autoscaler profile affects **all node pools** that use the cluster autoscaler. You can't set an autoscaler profile per node pool. When you set the profile, any existing node pools with the cluster autoscaler enabled immediately start using the profile.
+> * The cluster autoscaler profile requires version *2.11.1* or greater of the Azure CLI. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
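As a rough illustration of how the `scale-down-utilization-threshold` setting in the table above is evaluated, the sketch below computes utilization as requested resources divided by capacity; the function and numbers are hypothetical and are not the autoscaler's actual implementation:

```python
# Hypothetical sketch: mirrors the documented rule that a node whose
# utilization (requested resources / capacity) falls below the
# scale-down-utilization-threshold (default 0.5) may be considered
# for scale down.
def is_scale_down_candidate(requested_millicores: int,
                            capacity_millicores: int,
                            threshold: float = 0.5) -> bool:
    utilization = requested_millicores / capacity_millicores
    return utilization < threshold

# A node with 800m requested of 4000m capacity (20% utilized) qualifies;
# one with 2400m of 4000m (60% utilized) does not.
print(is_scale_down_candidate(800, 4000))   # True
print(is_scale_down_candidate(2400, 4000))  # False
```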
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --cluster-autoscaler-profile scan-interval=30s
-```
-
-When you enable the cluster autoscaler on node pools in the cluster, these node pools with CA enabled will also use the cluster autoscaler profile. For example:
+### Set the cluster autoscaler profile on a new cluster
-```azurecli-interactive
-az aks nodepool update \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name mynodepool \
- --enable-cluster-autoscaler \
- --min-count 1 \
- --max-count 3
-```
-
-> [!IMPORTANT]
-> When you set the cluster autoscaler profile, any existing node pools with the cluster autoscaler enabled will start using the profile immediately.
+* Create an AKS cluster using the [`az aks create`][az-aks-create] command and set the cluster autoscaler profile using the `--cluster-autoscaler-profile` parameter.
-### Set the cluster autoscaler profile when creating an AKS cluster
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 1 \
+ --enable-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 3 \
+ --cluster-autoscaler-profile scan-interval=30s
+ ```
-You can also use the *cluster-autoscaler-profile* parameter when you create your cluster. For example:
+### Set the cluster autoscaler profile on an existing cluster
-```azurecli-interactive
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 1 \
- --enable-cluster-autoscaler \
- --min-count 1 \
- --max-count 3 \
- --cluster-autoscaler-profile scan-interval=30s
-```
+* Set the cluster autoscaler profile on an existing cluster using the [`az aks update`][az-aks-update-preview] command and the `--cluster-autoscaler-profile` parameter. The following example configures the scan interval setting as *30s*:
-The above command creates an AKS cluster and defines the scan interval as 30 seconds for the cluster-wide autoscaler profile. The command also enables the cluster autoscaler on the initial node pool, sets the minimum node count to 1 and the maximum node count to 3.
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --cluster-autoscaler-profile scan-interval=30s
+ ```
### Reset cluster autoscaler profile to default values
-Use the [az aks update][az-aks-update-preview] command to reset the cluster autoscaler profile on your cluster.
-
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --cluster-autoscaler-profile ""
-```
+* Reset the cluster autoscaler profile using the [`az aks update`][az-aks-update-preview] command.
-## Disable the cluster autoscaler
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --cluster-autoscaler-profile ""
+ ```
-If you no longer wish to use the cluster autoscaler, you can disable it using the [az aks update][az-aks-update-preview] command, specifying the `--disable-cluster-autoscaler` parameter. Nodes aren't removed when the cluster autoscaler is disabled.
-
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --disable-cluster-autoscaler
-```
+## Retrieve cluster autoscaler logs and status updates
-You can manually scale your cluster after disabling the cluster autoscaler by using the [az aks scale][az-aks-scale] command. If you use the horizontal pod autoscaler, that feature continues to run with the cluster autoscaler disabled, but pods may end up unable to be scheduled if all node resources are in use.
+You can retrieve logs and status updates from the cluster autoscaler to help diagnose and debug autoscaler events. AKS manages the cluster autoscaler on your behalf and runs it in the managed control plane. You can enable control plane resource logs to view the logs and operations from the cluster autoscaler.
-## Re-enable a disabled cluster autoscaler
+Use the following steps to configure logs to be pushed from the cluster autoscaler into Log Analytics:
-If you wish to re-enable the cluster autoscaler on an existing cluster, you can re-enable it using the [az aks update][az-aks-update-preview] command, specifying the `--enable-cluster-autoscaler`, `--min-count`, and `--max-count` parameters.
+1. Set up a rule for resource logs to push cluster autoscaler logs to Log Analytics. Follow the [instructions here][aks-view-master-logs], and make sure you check the box for `cluster-autoscaler` when selecting options for "Logs".
+2. Select the "Logs" section on your cluster via the Azure portal.
+3. Input the following example query into Log Analytics:
-## Retrieve cluster autoscaler logs and status
+ ```kusto
+ AzureDiagnostics
+ | where Category == "cluster-autoscaler"
+ ```
-To diagnose and debug autoscaler events, logs and status can be retrieved from the cluster autoscaler.
+ As long as there are logs to retrieve, you should see logs similar to the following:
-AKS manages the cluster autoscaler on your behalf and runs it in the managed control plane. You can enable control plane node to see the logs and operations from CA.
+ ![Log Analytics logs](media/autoscaler/autoscaler-logs.png)
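The example query can be narrowed further; `TimeGenerated` and `Category` are standard columns of the `AzureDiagnostics` table, so a sketch like the following limits results to the last hour:

```kusto
AzureDiagnostics
| where Category == "cluster-autoscaler"
| where TimeGenerated > ago(1h)
| sort by TimeGenerated desc
```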
-To configure logs to be pushed from the cluster autoscaler into Log Analytics, follow these steps.
+The cluster autoscaler also writes out the health status to a `configmap` named `cluster-autoscaler-status`. You can retrieve these logs using the following `kubectl` command:
-1. Set up a rule for resource logs to push cluster-autoscaler logs to Log Analytics. [Instructions are detailed here][aks-view-master-logs], ensure you check the box for `cluster-autoscaler` when selecting options for "Logs".
-1. Select the "Logs" section on your cluster via the Azure portal.
-1. Input the following example query into Log Analytics:
-
-```kusto
-AzureDiagnostics
-| where Category == "cluster-autoscaler"
+```bash
+kubectl get configmap -n kube-system cluster-autoscaler-status -o yaml
```
-You should see logs similar to the following example as long as there are logs to retrieve.
+To learn more about the autoscaler logs, read the FAQ on the [Kubernetes/autoscaler GitHub project][kubernetes-faq].
-![Log Analytics logs](media/autoscaler/autoscaler-logs.png)
+## Use the cluster autoscaler with node pools
-The cluster autoscaler will also write out health status to a `configmap` named `cluster-autoscaler-status`. To retrieve these logs, execute the following `kubectl` command. A health status will be reported for each node pool configured with the cluster autoscaler.
+### Use the cluster autoscaler with multiple node pools enabled
-```bash
-kubectl get configmap -n kube-system cluster-autoscaler-status -o yaml
-```
-
-To learn more about what is logged from the autoscaler, read the FAQ on the [Kubernetes/autoscaler GitHub project][kubernetes-faq].
+You can use the cluster autoscaler with [multiple node pools][aks-multiple-node-pools] enabled. When using both features together, you enable the cluster autoscaler on each individual node pool in the cluster and can pass unique autoscaling rules to each.
-## Use the cluster autoscaler with multiple node pools enabled
+* Update an existing node pool's settings using the [`az aks nodepool update`][az-aks-nodepool-update] command. The following command continues from the [previous steps](#enable-the-cluster-autoscaler-on-a-new-cluster) in this article:
-The cluster autoscaler can be used together with [multiple node pools][aks-multiple-node-pools] enabled. Follow that document to learn how to enable multiple node pools and add additional node pools to an existing cluster. When using both features together, you enable the cluster autoscaler on each individual node pool in the cluster and can pass unique autoscaling rules to each.
+ ```azurecli-interactive
+ az aks nodepool update \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name nodepool1 \
+ --update-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 5
+ ```
-The below command assumes you followed the [initial instructions](#create-an-aks-cluster-and-enable-the-cluster-autoscaler) earlier in this document and you want to update an existing node pool's max-count from *3* to *5*. Use the [az aks nodepool update][az-aks-nodepool-update] command to update an existing node pool's settings.
+### Disable the cluster autoscaler on a node pool
-```azurecli-interactive
-az aks nodepool update \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name nodepool1 \
- --update-cluster-autoscaler \
- --min-count 1 \
- --max-count 5
-```
+* Disable the cluster autoscaler on a node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command and the `--disable-cluster-autoscaler` parameter.
-The cluster autoscaler can be disabled with [az aks nodepool update][az-aks-nodepool-update] and passing the `--disable-cluster-autoscaler` parameter.
+ ```azurecli-interactive
+ az aks nodepool update \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name nodepool1 \
+ --disable-cluster-autoscaler
+ ```
-```azurecli-interactive
-az aks nodepool update \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name nodepool1 \
- --disable-cluster-autoscaler
-```
+### Re-enable the cluster autoscaler on a node pool
-If you wish to re-enable the cluster autoscaler on an existing cluster, you can re-enable it using the [az aks nodepool update][az-aks-nodepool-update] command, specifying the `--enable-cluster-autoscaler`, `--min-count`, and `--max-count` parameters.
+Re-enable the cluster autoscaler on a node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command and specifying the `--enable-cluster-autoscaler`, `--min-count`, and `--max-count` parameters.
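Reusing the example names from this article, re-enabling the autoscaler on a node pool might look like the following sketch (the minimum and maximum counts are illustrative):

```azurecli-interactive
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 3
```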
> [!NOTE]
-> If you are planning on using the cluster autoscaler with nodepools that span multiple zones and leverage scheduling features related to zones such as volume topological scheduling, the recommendation is to have one nodepool per zone and enable the `--balance-similar-node-groups` through the autoscaler profile. This will ensure that the autoscaler will scale up succesfully and try and keep the sizes of the nodepools balanced.
+> If you plan on using the cluster autoscaler with node pools that span multiple zones and leverage scheduling features related to zones, such as volume topological scheduling, we recommend that you have one node pool per zone and enable the `balance-similar-node-groups` setting through the autoscaler profile. This ensures the autoscaler can successfully scale up and keep the sizes of the node pools balanced.
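Assuming the profile key matches the settings table earlier in this article, the recommendation above might be applied as follows:

```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --cluster-autoscaler-profile balance-similar-node-groups=true
```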
## Configure the horizontal pod autoscaler
-Kubernetes supports [horizontal pod autoscaling][kubernetes-hpa] to adjust the number of pods in a deployment depending on CPU utilization or other select metrics. The [Metrics Server][metrics-server] is used to provide resource utilization to Kubernetes. You can configure horizontal pod autoscaling through the `kubectl autoscale` command or through a manifest. For more details on using the horizontal pod autoscaler, see [HorizontalPodAutoscaler Walkthrough][kubernetes-hpa-walkthrough].
+Kubernetes supports [horizontal pod autoscaling][kubernetes-hpa] to adjust the number of pods in a deployment depending on CPU utilization or other select metrics. The [Metrics Server][metrics-server] is used to provide resource utilization to Kubernetes. You can configure horizontal pod autoscaling through the `kubectl autoscale` command or through a manifest. For more details on using the horizontal pod autoscaler, see the [HorizontalPodAutoscaler Walkthrough][kubernetes-hpa-walkthrough].
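As a minimal sketch, a hypothetical deployment named `my-app` could be autoscaled on CPU like this (the deployment name and limits are illustrative):

```bash
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=10
```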
## Next steps
To further help improve cluster resource utilization and free up CPU and memory
[az-aks-update]: /cli/azure/aks#az-aks-update
[az-aks-scale]: /cli/azure/aks#az-aks-scale
[vertical-pod-autoscaler]: vertical-pod-autoscaler.md
+[az-group-create]: /cli/azure/group#az_group_create
<!-- LINKS - external --> [az-aks-update-preview]: https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
To remove Node Restriction from a cluster:
az aks update -n aks -g myResourceGroup --disable-node-restriction ```
+## Fully managed resource group (Preview)
+
+AKS deploys infrastructure into your subscription for connecting to and running your applications. Changes made directly to resources in the [node resource group][whatis-nrg] can affect cluster operations or cause issues later. For example, scaling, storage, or network configuration changes should be made through the Kubernetes API, not directly on these resources.
+
+To prevent changes from being made to the Node Resource Group, you can apply a deny assignment and block users from modifying resources created as part of the AKS cluster.
++
+### Before you begin
+
+You must have the following resources installed:
+
+* The Azure CLI version 2.44.0 or later. Run `az --version` to find the current version, and if you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* The `aks-preview` extension version 0.5.126 or later
+
+#### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+#### Register the 'NRGLockdownPreview' feature flag
+
+Register the `NRGLockdownPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "NRGLockdownPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "NRGLockdownPreview"
+```
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+### Create an AKS cluster with node resource group lockdown
+
+To create a cluster using node resource group lockdown, set the `--nrg-lockdown-restriction-level` to **ReadOnly**. This allows you to view the resources, but not modify them.
+
+```azurecli-interactive
+az aks create -n aksTest -g aksTest --nrg-lockdown-restriction-level ReadOnly
+```
+
+### Update an existing cluster with node resource group lockdown
+
+```azurecli-interactive
+az aks update -n aksTest -g aksTest --nrg-lockdown-restriction-level ReadOnly
+```
+
+### Remove node resource group lockdown from a cluster
+
+```azurecli-interactive
+az aks update -n aksTest -g aksTest --nrg-lockdown-restriction-level Unrestricted
+```
+ ## Next steps
az aks update -n aks -g myResourceGroup --disable-node-restriction
[az-aks-create]: /cli/azure/aks#az-aks-create [az-aks-update]: /cli/azure/aks#az-aks-update [baseline-reference-architecture-aks]: /azure/architecture/reference-architectures/containers/aks/baseline-aks
+[whatis-nrg]: ./concepts-clusters-workloads.md#node-resource-group
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
spec:
For more information on how to control where pods are scheduled, see [Best practices for advanced scheduler features in AKS][operator-best-practices-advanced-scheduler].
+### Node resource group
+
+When you create an AKS cluster, you need to specify a resource group to create the cluster resource in. In addition to this resource group, the AKS resource provider also creates and manages a separate resource group called the node resource group. The *node resource group* contains the following infrastructure resources:
+
+- The virtual machine scale sets and VMs for every node in the node pools
+- The virtual network for the cluster
+- The storage for the cluster
+
+The node resource group is assigned a name by default, such as *MC_myResourceGroup_myAKSCluster_eastus*. During cluster creation, you also have the option to specify the name assigned to your node resource group. When you delete your AKS cluster, the AKS resource provider automatically deletes the node resource group.
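For reference, you can read the generated name back from an existing cluster; this sketch assumes a cluster named *myAKSCluster* in *myResourceGroup*:

```azurecli-interactive
# Query the node resource group name of an existing cluster (names are illustrative)
az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
```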
+
+The node resource group has the following limitations:
+
+* You can't specify an existing resource group for the node resource group.
+* You can't specify a different subscription for the node resource group.
+* You can't change the node resource group name after the cluster has been created.
+* You can't specify names for the managed resources within the node resource group.
+* You can't modify or delete Azure-created tags of managed resources within the node resource group.
+
+If you modify or delete Azure-created tags and other resource properties in the node resource group, you could get unexpected results, such as scaling and upgrading errors. Because AKS manages the lifecycle of the infrastructure in the node resource group, any changes move your cluster into an [unsupported state][aks-support].
+
+A common scenario where customers want to modify resources is through tags. AKS allows you to create and modify tags that are propagated to resources in the node resource group, and you can add those tags when [creating or updating][aks-tags] the cluster. You might want to create or modify custom tags, for example, to assign a business unit or cost center. You can also achieve this by creating Azure Policies with a scope on the managed resource group.
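As a sketch, custom tags can be set at cluster update time and are then propagated to the node resource group resources (the tag names and values here are illustrative):

```azurecli-interactive
# Apply custom tags to the cluster; AKS propagates them to the node resource group resources
az aks update --resource-group myResourceGroup --name myAKSCluster --tags dept=IT costcenter=9999
```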
+
+Modifying any **Azure-created tags** on resources under the node resource group in the AKS cluster is an unsupported action, which breaks the service-level objective (SLO). For more information, see [Does AKS offer a service-level agreement?][aks-service-level-agreement]
+
+To reduce the chance of changes in the node resource group affecting your clusters, you can enable node resource group lockdown to apply a deny assignment to your AKS resources. For more information, see [Cluster configuration in AKS][configure-nrg].
+
+> [!WARNING]
+> If you don't have node resource group lockdown enabled, you can directly modify any resource in the node resource group. Directly modifying resources in the node resource group can cause your cluster to become unstable or unresponsive.
+ ## Pods Kubernetes uses *pods* to run an instance of your application. A pod represents a single instance of your application.
This article covers some of the core Kubernetes components and how they apply to
[use-multiple-node-pools]: use-multiple-node-pools.md [operator-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md [reservation-discounts]:../cost-management-billing/reservations/save-compute-costs-reservations.md
+[configure-nrg]: ./cluster-configuration.md#fully-managed-resource-group-preview
+[aks-service-level-agreement]: faq.md#does-aks-offer-a-service-level-agreement
+[aks-tags]: use-tags.md
+[aks-support]: support-policies.md#user-customization-of-agent-nodes
aks Configure Kube Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kube-proxy.md
The AKS managed `kube-proxy` DaemonSet can also be disabled entirely if that is
To install the aks-preview extension, run the following command:
-```azurecli
+```azurecli-interactive
az extension add --name aks-preview ``` Run the following command to update to the latest version of the extension released:
-```azurecli
+```azurecli-interactive
az extension update --name aks-preview ```
aks Configure Kubenet Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md
kubectl get nodes -o=custom-columns="NAME:.metadata.name,ADDRESSES:.status.addre
The output from the `kubectl get nodes` command will show that the nodes have addresses and pod IP assignment space from both IPv4 and IPv6.
-```
+```output
NAME ADDRESSES PODCIDRS aks-nodepool1-14508455-vmss000000 10.240.0.4,2001:1234:5678:9abc::4 10.244.0.0/24,fd12:3456:789a::/80 aks-nodepool1-14508455-vmss000001 10.240.0.5,2001:1234:5678:9abc::5 10.244.1.0/24,fd12:3456:789a:0:1::/80
Once the deployment has been exposed and the `LoadBalancer` services have been f
kubectl get services ```
-```
+```output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-ipv4 LoadBalancer 10.0.88.78 20.46.24.24 80:30652/TCP 97s nginx-ipv6 LoadBalancer fd12:3456:789a:1::981a 2603:1030:8:5::2d 80:32002/TCP 63s
SERVICE_IP=$(kubectl get services nginx-ipv6 -o jsonpath='{.status.loadBalancer.
curl -s "http://[${SERVICE_IP}]" | head -n5 ```
-```
+```html
<!DOCTYPE html> <html> <head>
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
Azure AD workload identity (preview) is supported on both Windows and Linux clus
1. Use the Azure CLI `az account set` command to set a specific subscription to be the current active subscription. Then use the `az identity create` command to create a managed identity.
- ```azurecli
+ ```azurecli-interactive
export SUBSCRIPTION_ID=<subscription id> export RESOURCE_GROUP=<resource group name> export UAMI=<name for user assigned identity>
Azure AD workload identity (preview) is supported on both Windows and Linux clus
2. You need to set an access policy that grants the workload identity permission to access the Key Vault secrets, access keys, and certificates. The rights are assigned using the `az keyvault set-policy` command shown below.
- ```azurecli
+ ```azurecli-interactive
az keyvault set-policy -n $KEYVAULT_NAME --key-permissions get --spn $USER_ASSIGNED_CLIENT_ID az keyvault set-policy -n $KEYVAULT_NAME --secret-permissions get --spn $USER_ASSIGNED_CLIENT_ID az keyvault set-policy -n $KEYVAULT_NAME --certificate-permissions get --spn $USER_ASSIGNED_CLIENT_ID
Azure AD workload identity (preview) is supported on both Windows and Linux clus
apiVersion: v1 metadata: name: busybox-secrets-store-inline-user-msi
+ labels:
+ azure.workload.identity/use: "true"
spec: serviceAccountName: ${SERVICE_ACCOUNT_NAME} containers:
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
Currently, you can't modify the list of admission controllers in AKS.
Yes, you may use admission controller webhooks on AKS. It's recommended you exclude internal AKS namespaces, which are marked with the **control-plane label.** For example:
-```
+```output
namespaceSelector: matchExpressions: - key: control-plane
As the name suggests, bridge mode Azure CNI, in a "just in time" fashion, will c
The following example shows what the ip route setup looks like in bridge mode. Regardless of how many pods the node has, there will only ever be two routes. The first says that all traffic, excluding local traffic on azure0, goes to the default gateway of the subnet through the interface with IP "src 10.240.0.4" (the node's primary IP). The second says that the "10.20.x.x" pod address space is handed to the kernel for the kernel to decide.
-```bash
+```output
default via 10.240.0.1 dev azure0 proto dhcp src 10.240.0.4 metric 100 10.240.0.0/12 dev azure0 proto kernel scope link src 10.240.0.4 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
Transparent mode takes a straight forward approach to setting up Linux networkin
The following example shows the ip route setup of transparent mode. Each pod's interface gets a static route attached so that traffic with the pod's IP as destination is sent directly to the pod's host-side `veth` pair interface.
-```bash
+```output
10.240.0.216 dev azv79d05038592 proto static 10.240.0.218 dev azv8184320e2bf proto static 10.240.0.219 dev azvc0339d223b9 proto static
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
The following steps deploy and test the application.
You should see output similar to the following to indicate that all the pods are running.
- ```bash
+ ```output
NAME READY STATUS RESTARTS AGE javaee-cafe-cluster-agic-67cdc95bc-2j2gr 1/1 Running 0 29s javaee-cafe-cluster-agic-67cdc95bc-fgtt8 1/1 Running 0 29s
aks Http Application Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-application-routing.md
The add-on deploys two components: a [Kubernetes ingress controller][ingress] an
1. Create a new AKS cluster and enable the HTTP application routing add-on using the [`az aks create`][az-aks-create] command with the `--enable-addons` parameter.
- ```azurecli
+ ```azurecli-interactive
az aks create --resource-group myResourceGroup --name myAKSCluster --enable-addons http_application_routing ``` You can also enable HTTP routing on an existing AKS cluster using the [`az aks enable-addons`][az-aks-enable-addons] command with the `--addons` parameter.
- ```azurecli
+ ```azurecli-interactive
az aks enable-addons --resource-group myResourceGroup --name myAKSCluster --addons http_application_routing ``` 2. Retrieve the DNS zone name using the [`az aks show`][az-aks-show] command. You need the DNS zone name to deploy applications to the cluster.
- ```azurecli
+ ```azurecli-interactive
az aks show --resource-group myResourceGroup --name myAKSCluster --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName -o table ```
The add-on deploys two components: a [Kubernetes ingress controller][ingress] an
* Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. The following example gets credentials for the AKS cluster named *myAKSCluster* in the *myResourceGroup*:
- ```azurecli
+ ```azurecli-interactive
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
The add-on deploys two components: a [Kubernetes ingress controller][ingress] an
1. Remove the HTTP application routing add-on using the [`az aks disable-addons][az-aks-disable-addons] command with the `addons` parameter.
- ```azurecli
+ ```azurecli-interactive
az aks disable-addons --addons http_application_routing --name myAKSCluster --resource-group myResourceGroup --no-wait ```
aks Http Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-proxy.md
The schema for the config file looks like this:
> [!IMPORTANT] > For compatibility with Go-based components that are part of the Kubernetes system, the certificate **must** support `Subject Alternative Names(SANs)` instead of the deprecated Common Name certs.
+>
+> Applications differ in how they honor the `http_proxy`, `https_proxy`, and `no_proxy` environment variables. For example, curl and Python don't support CIDR notation in `no_proxy`, but Ruby does.
Example input:
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
The following example creates a Kubernetes namespace for the ingress resources n
```console # Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+helm repo update
# Set variable for ACR location to use for pulling images ACR_URL=<REGISTRY_URL>
helm install ingress-nginx ingress-nginx/ingress-nginx \
```azurepowershell-interactive # Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+helm repo update
# Set variable for ACR location to use for pulling images $AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
Use the `--set controller.service.loadBalancerIP` and `--set controller.service.
```console # Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+helm repo update
# Set variable for ACR location to use for pulling images ACR_URL=<REGISTRY_URL>
helm install ingress-nginx ingress-nginx/ingress-nginx \
```azurepowershell-interactive # Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+helm repo update
# Set variable for ACR location to use for pulling images $AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
aks Kubernetes Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-service-principal.md
Check the expiration date of your service principal credentials using the [`az a
az ad app credential list --id <app-id> --query "[].endDateTime" -o tsv ```
-The default expiration time for the service principal credentials is one year. If your credentials are older than one year, you can [reset the existing credentials](/update-credentials#reset-the-existing-service-principal-credentials) or [create a new service principal](/update-credentials#create-a-new-service-principal).
+The default expiration time for the service principal credentials is one year. If your credentials are older than one year, you can [reset the existing credentials](update-credentials.md#reset-the-existing-service-principal-credentials) or [create a new service principal](update-credentials.md#create-a-new-service-principal).
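As a sketch of the reset path, `az ad sp credential reset` generates a new password for the service principal (the ID is a placeholder):

```azurecli-interactive
# Reset the service principal credentials (replace the placeholder ID)
az ad sp credential reset --id <app-id>
```

After resetting, the cluster must be updated to use the new credentials, for example with `az aks update-credentials --reset-service-principal`; see the linked article for the full procedure.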
**General Azure CLI troubleshooting**
Check the expiration date of your service principal credentials using the [Get-A
Get-AzADAppCredential -ApplicationId <ApplicationId> ```
-The default expiration time for the service principal credentials is one year. If your credentials are older than one year, you can [reset the existing credentials](/update-credentials#reset-the-existing-service-principal-credentials) or [create a new service principal](/update-credentials#create-a-new-service-principal).
+The default expiration time for the service principal credentials is one year. If your credentials are older than one year, you can [reset the existing credentials](update-credentials.md#reset-the-existing-service-principal-credentials) or [create a new service principal](update-credentials.md#create-a-new-service-principal).
aks Quickstart Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-dapr.md
kubectl apply -f ./deploy/redis.yaml
And verify that your state store was successfully configured in the output:
-```bash
+```output
component.dapr.io/statestore created ```
kubectl logs --selector=app=node -c node --tail=-1
If the deployments were successful, you should see logs like this:
-```bash
+```output
Got a new order! Order ID: 1 Successfully persisted state Got a new order! Order ID: 2
You should see the latest JSON in the response.
Use the [az group delete][az-group-delete] command to remove the resource group, the cluster, the namespace, and all related resources.
-```azurecli
+```azurecli-interactive
az group delete --name MyResourceGroup ```
az group delete --name MyResourceGroup
Use the [Remove-AzResourceGroup][remove-azresourcegroup] command to remove the resource group, the cluster, the namespace, and all related resources.
-```azurepowershell
+```azurepowershell-interactive
Remove-AzResourceGroup -Name MyResourceGroup ```
aks Resize Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/resize-node-pool.md
Next, using `kubectl cordon <node-names>`, specify the desired nodes in a space-
kubectl cordon aks-nodepool1-31721111-vmss000000 aks-nodepool1-31721111-vmss000001 aks-nodepool1-31721111-vmss000002 ```
-```bash
+```output
node/aks-nodepool1-31721111-vmss000000 cordoned node/aks-nodepool1-31721111-vmss000001 cordoned node/aks-nodepool1-31721111-vmss000002 cordoned
aks Upgrade Windows 2019 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-windows-2019-2022.md
Once you update the nodeSelector on the YAML file, you should also update the co
If you have an application deployed already, follow the recommended steps to deploy a new node pool with Windows Server 2022 nodes. Once deployed, your environment will show Windows Server 2019 and 2022 nodes, with the workloads running on the 2019 nodes:
-```console
+```bash
kubectl get nodes -o wide ``` This command shows all nodes on your AKS cluster with extra details on the output:
akswspool000002 Ready agent 5h37m v1.23.8 10.240.0.
With the Windows Server 2022 node pool deployed and the YAML file configured, you can now deploy the new version of the YAML:
-```console
+```bash
kubectl apply -f <filename> ```
service/sample unchanged
``` At this point, AKS starts the process of terminating the existing pods and deploying new pods to the Windows Server 2022 nodes. You can check the status of your deployment by running:
-```console
+```bash
kubectl get pods -o wide ``` This command returns the status of the pods on the default namespace. You might need to change the command above to list the pods on specific namespaces.
sample-7794bfcc4c-sh78c 1/1 Running 0 2m49s 10.240.0.228 ak
If you're using Group Managed Service Accounts (gMSA), update the Managed Identity configuration for the new node pool. gMSA uses a secret (user account and password) so the node on which the Windows pod is running can authenticate the container against Active Directory. To access that secret on Azure Key Vault, the node uses a Managed Identity that allows the node to access the resource. Since Managed Identities are configured per node pool, and the pod now resides on a new node pool, you need to update that configuration. Check out [Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster](./use-group-managed-service-accounts.md) for more information.
-The same principle applies to Managed Identities used for any other pod/node pool when accessing other Azure resources. Any access provided via Managed Identity needs to be updated to reflect the new node pool. To view update and sign-in activities, see [How to view Managed Identity activity](../active-directory/managed-identities-azure-resources/how-to-view-managed-identity-activity.md).
+The same principle applies to Managed Identities used for any other pod/node pool when accessing other Azure resources. Any access provided via Managed Identity needs to be updated to reflect the new node pool. To view update and sign-in activities, see [How to view Managed Identity activity](../active-directory/managed-identities-azure-resources/how-to-view-managed-identity-activity.md).
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
The ARM64 processor provides low power compute for your Kubernetes workloads. To
Use `az aks nodepool add` command to add an ARM64 node pool.
-```azurecli
+```azurecli-interactive
az aks nodepool add \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \
Mariner is an open-source Linux distribution available as an AKS container host.
You can add a Mariner node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku mariner`.
-```azurecli
+```azurecli-interactive
az aks nodepool add \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \
Use the following instructions to migrate your Ubuntu nodes to Mariner nodes.
3. [Drain the existing Ubuntu nodes][drain-nodes]. 4. Remove the existing Ubuntu nodes using the `az aks delete` command.
-```azurecli
+```azurecli-interactive
az aks nodepool delete \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \
az aks nodepool scale \
List the status of your node pools again using the [`az aks node pool list`][az-aks-nodepool-list] command. The following example shows that *mynodepool* is in the *Scaling* state with a new count of *5* nodes:
-```azurecli
+```azurecli-interactive
az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster ```
az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster --name myn
The following example output from the [`az aks node pool list`][az-aks-nodepool-list] command shows that *mynodepool* is in the *Deleting* state:
-```azurecli
+```azurecli-interactive
az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster ```
For more information on the capacity reservation groups, please refer to [Capaci
To install the aks-preview extension, run the following command:
-```azurecli
+```azurecli-interactive
az extension add --name aks-preview ``` Run the following command to update to the latest version of the extension released:
-```azurecli
+```azurecli-interactive
az extension update --name aks-preview ```
az aks nodepool add \
The following example output from the [`az aks node pool list`][az-aks-nodepool-list] command shows that *gpunodepool* is *Creating* nodes with the specified *VmSize*:
-```azurecli
+```azurecli-interactive
az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster ```
az aks nodepool add \
The following example output from the [`az aks nodepool list`][az-aks-nodepool-list] command shows that *taintnp* is *Creating* nodes with the specified *nodeTaints*:
-```azurecli
+```azurecli-interactive
az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster ```
spec:
Schedule the pod using the `kubectl apply -f nginx-toleration.yaml` command:
-```console
+```bash
kubectl apply -f nginx-toleration.yaml ``` It takes a few seconds to schedule the pod and pull the NGINX image. Use the [kubectl describe pod][kubectl-describe] command to view the pod status. The following condensed example output shows the *sku=gpu:NoSchedule* toleration is applied. In the events section, the scheduler has assigned the pod to the *aks-taintnp-28993262-vmss000000* node:
-```console
+```bash
kubectl describe pod mypod ```
aks Use Pod Sandboxing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-sandboxing.md
This article helps you understand this new feature, and how to implement it.
To install the aks-preview extension, run the following command:
-```azurecli
+```azurecli-interactive
az extension add --name aks-preview ``` Run the following command to update to the latest version of the extension released:
-```azurecli
+```azurecli-interactive
az extension update --name aks-preview ```
Perform the following steps to deploy an AKS Mariner cluster using the Azure CLI
The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup*:
- ```azurecli
+ ```azurecli-interactive
az aks create --name myAKSCluster --resource-group myResourceGroup --os-sku mariner --workload-runtime KataMshvVmIsolation --node-vm-size Standard_D4s_v3 --node-count 1 2. Run the following command to get access credentials for the Kubernetes cluster. Use the [az aks get-credentials][aks-get-credentials] command and replace the values for the cluster name and the resource group name.
- ```azurecli
+ ```azurecli-interactive
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
Use the following command to enable Pod Sandboxing (preview) by creating a node
The following example adds a node pool to *myAKSCluster* with one node in *nodepool2* in the *myResourceGroup*:
- ```azurecli
+ ```azurecli-interactive
az aks nodepool add --cluster-name myAKSCluster --resource-group myResourceGroup --name nodepool2 --os-sku mariner --workload-runtime KataMshvVmIsolation --node-vm-size Standard_D4s_v3 ``` 2. Run the [az aks update][az-aks-update] command to enable pod sandboxing (preview) on the cluster.
- ```bash
+ ```azurecli-interactive
az aks update --name myAKSCluster --resource-group myResourceGroup ```
To demonstrate the deployment of an untrusted application into the pod sandbox o
When you're finished evaluating this feature, to avoid Azure charges, clean up your unnecessary resources. If you deployed a new cluster as part of your evaluation or testing, you can delete the cluster using the [az aks delete][az-aks-delete] command.
-```azurecli
+```azurecli-interactive
az aks delete --resource-group myResourceGroup --name myAKSCluster ```
aks Use System Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-system-pools.md
In this article, you learned how to create and manage system node pools in an AK
[use-multiple-node-pools]: use-multiple-node-pools.md [maximum-pods]: configure-azure-cni.md#maximum-pods-per-node [update-node-pool-mode]: use-system-pools.md#update-existing-cluster-system-and-user-node-pools
-[start-stop-nodepools]: /start-stop-nodepools.md
+[start-stop-nodepools]: start-stop-nodepools.md
[node-affinity]: operator-best-practices-advanced-scheduler.md#node-affinity
aks Use Wasi Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-wasi-node-pools.md
You must have the latest version of Azure CLI installed.
To install the aks-preview extension, run the following command:
-```azurecli
+```azurecli-interactive
az extension add --name aks-preview ``` Run the following command to update to the latest version of the extension released:
-```azurecli
+```azurecli-interactive
az extension update --name aks-preview ```
az aks nodepool show -g myResourceGroup --cluster-name myAKSCluster -n mywasipoo
The following example output shows the *mywasipool* has the *workloadRuntime* type of *WasmWasi*.
+```azurecli-interactive
+az aks nodepool show -g myResourceGroup --cluster-name myAKSCluster -n mywasipool --query workloadRuntime
+```
```output
-$ az aks nodepool show -g myResourceGroup --cluster-name myAKSCluster -n mywasipool --query workloadRuntime
"WasmWasi" ``` Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command:
-```azurecli
+```azurecli-interactive
az aks get-credentials -n myakscluster -g myresourcegroup ``` Use `kubectl get nodes` to display the nodes in your cluster.
+```bash
+kubectl get nodes -o wide
+```
```output
-$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME aks-mywasipool-12456878-vmss000000 Ready agent 123m v1.23.12 <WASINODE_IP> <none> Ubuntu 22.04.1 LTS 5.15.0-1020-azure containerd://1.5.11+azure-2 aks-nodepool1-12456878-vmss000000 Ready agent 133m v1.23.12 <NODE_IP> <none> Ubuntu 22.04.1 LTS 5.15.0-1020-azure containerd://1.5.11+azure-2
aks-nodepool1-12456878-vmss000000 Ready agent 133m v1.23.12 <NODE_IP
Use `kubectl describe node` to show the labels on a node in the WASI node pool. The following example shows the details of *aks-mywasipool-12456878-vmss000000*.
+```bash
+kubectl describe node aks-mywasipool-12456878-vmss000000
+```
```output
-$ kubectl describe node aks-mywasipool-12456878-vmss000000
- Name: aks-mywasipool-12456878-vmss000000 Roles: agent Labels: agentpool=mywasipool
scheduling:
Use `kubectl` to create the `RuntimeClass` objects.
-```azurecli-interactive
+```bash
kubectl apply -f wasm-runtimeclass.yaml ```
spec:
Use `kubectl` to run your example deployment:
-```azurecli-interactive
+```bash
kubectl apply -f slight.yaml ``` Use `kubectl get svc` to get the external IP address of the service.
+```bash
+kubectl get svc
+```
```output
-$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10m wasm-slight LoadBalancer 10.0.133.247 <EXTERNAL-IP> 80:30725/TCP 2m47s
wasm-slight LoadBalancer 10.0.133.247 <EXTERNAL-IP> 80:30725/TCP 2m47s
Access the example application at `http://EXTERNAL-IP/hello`. The following example uses `curl`. ```output
-$ curl http://EXTERNAL-IP/hello
+curl http://EXTERNAL-IP/hello
hello ```
hello
To remove the example deployment, use `kubectl delete`.
-```azurecli-interactive
+```bash
kubectl delete -f slight.yaml ```
aks Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler.md
The Vertical Pod Autoscaler is an API resource in the Kubernetes autoscaling API
To install the aks-preview extension, run the following command:
-```azurecli
+```azurecli-interactive
az extension add --name aks-preview ``` Run the following command to update to the latest version of the extension released:
-```azurecli
+```azurecli-interactive
az extension update --name aks-preview ```
In this section, you deploy, upgrade, or disable the Vertical Pod Autoscaler on
1. To enable VPA on a new cluster, use `--enable-vpa` parameter with the [az aks create][az-aks-create] command.
- ```azurecli
+ ```azurecli-interactive
az aks create -n myAKSCluster -g myResourceGroup --enable-vpa ```
In this section, you deploy, upgrade, or disable the Vertical Pod Autoscaler on
2. Optionally, to enable VPA on an existing cluster, use the `--enable-vpa` with the [az aks upgrade][az-aks-upgrade] command.
- ```azurecli
+ ```azurecli-interactive
az aks update -n myAKSCluster -g myResourceGroup --enable-vpa ```
In this section, you deploy, upgrade, or disable the Vertical Pod Autoscaler on
3. Optionally, to disable VPA on an existing cluster, use the `--disable-vpa` with the [az aks upgrade][az-aks-upgrade] command.
- ```azurecli
+ ```azurecli-interactive
az aks update -n myAKSCluster -g myResourceGroup --disable-vpa ```
The following steps create a deployment with two pods, each running a single con
The example output resembles the following:
- ```bash
+ ```output
hamster-78f9dcdd4c-hf7gk 1/1 Running 0 24s hamster-78f9dcdd4c-j9mc7 1/1 Running 0 24s ```
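The Running check above can also be scripted. A small sketch that counts pods in the Running state, assuming the default `kubectl get pods` columns (STATUS is the third field):

```shell
# Count pods whose STATUS column reads Running.
count_running() {
  awk '$3 == "Running" { n++ } END { print n + 0 }'
}

# Usage: kubectl get pods | count_running
# For the two-pod hamster deployment above, this should report 2.
```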
The following steps create a deployment with two pods, each running a single con
The example output is a snippet of the information about the cluster:
- ```bash
+ ```output
hamster: Container ID: containerd:// Image: k8s.gcr.io/ubuntu-slim:0.1
The following steps create a deployment with two pods, each running a single con
The example output is a snippet of the information describing the pod:
- ```bash
+ ```output
State: Running Started: Wed, 28 Sep 2022 15:09:51 -0400 Ready: True
The following steps create a deployment with two pods, each running a single con
The example output is a snippet of the information about the resource utilization:
- ```bash
+ ```output
State: Running Started: Wed, 28 Sep 2022 15:09:51 -0400 Ready: True
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
1. Enable VPA for your cluster by running the following command. Replace cluster name `myAKSCluster` with the name of your AKS cluster and replace `myResourceGroup` with the name of the resource group the cluster is hosted in.
- ```azurecli
+ ```azurecli-interactive
az aks update -n myAKSCluster -g myResourceGroup --enable-vpa ```
aks Web App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md
kubectl apply -f ingress.yaml -n hello-web-app-routing
The following example output shows the created resources:
-```bash
+```output
deployment.apps/aks-helloworld created service/aks-helloworld created ingress.networking.k8s.io/aks-helloworld created
kubectl apply -f ingressbackend.yaml -n hello-web-app-routing
The following example output shows the created resources:
-```bash
+```output
deployment.apps/aks-helloworld created service/aks-helloworld created ingress.networking.k8s.io/aks-helloworld created
kubectl apply -f service.yaml -n hello-web-app-routing
The following example output shows the created resources:
-```bash
+```output
deployment.apps/aks-helloworld created service/aks-helloworld created ```
kubectl delete namespace hello-web-app-routing
You can remove the Web Application Routing add-on using the Azure CLI. To do so run the following command, substituting your AKS cluster and resource group name. Be careful if you already have some of the other add-ons (open-service-mesh or azure-keyvault-secrets-provider) enabled on your cluster so that you don't accidentally disable them.
-```azurecli
+```azurecli-interactive
az aks disable-addons --addons web_application_routing --name myAKSCluster --resource-group myResourceGroup ```
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g "${RESOURCE_GROUP}" --q
Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity.
-```azurecli
+```azurecli-interactive
az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --location "${LOCATION}" --subscription "${SUBSCRIPTION}" ```
export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_
Create a Kubernetes service account and annotate it with the client ID of the managed identity created in the previous step. Use the [az aks get-credentials][az-aks-get-credentials] command and replace the values for the cluster name and the resource group name.
-```azurecli
+```azurecli-interactive
az aks get-credentials -n myAKSCluster -g "${RESOURCE_GROUP}" ```
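The annotated service account the step above describes can be generated inline. A sketch, not the article's exact manifest — the `workload-identity-sa` name and `default` namespace are placeholders, while `azure.workload.identity/client-id` is the annotation key the workload identity webhook reads:

```shell
# Emit a ServiceAccount manifest annotated with a managed identity's
# client ID. Arguments: <client-id> <name> <namespace>
sa_manifest() {
  cat <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: "$1"
  name: "$2"
  namespace: "$3"
EOF
}

# Usage: sa_manifest "${USER_ASSIGNED_CLIENT_ID}" workload-identity-sa default | kubectl apply -f -
```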
Serviceaccount/workload-identity-sa created
Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject.
-```azurecli
+```azurecli-interactive
az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"${SERVICE_ACCOUNT_NAMESPACE}":"${SERVICE_ACCOUNT_NAME}" --audience api://AzureADTokenExchange ```
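The `--subject` value must follow the `system:serviceaccount:<namespace>:<name>` form exactly. A tiny helper makes that composition explicit (the variable names mirror this article's examples):

```shell
# Build the federated-credential subject for a Kubernetes service account.
fic_subject() {
  echo "system:serviceaccount:$1:$2"
}

# Usage: fic_subject "${SERVICE_ACCOUNT_NAMESPACE}" "${SERVICE_ACCOUNT_NAME}"
```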
EOF
> [!IMPORTANT] > Ensure that application pods using workload identity include the label `azure.workload.identity/use: "true"` in their pod spec or deployment template; otherwise the pods fail once they're restarted.
-```azurecli-interactive
+```bash
kubectl apply -f <your application> ```
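Before applying, it can help to confirm the manifest carries the `azure.workload.identity/use: "true"` label called out in the note above. A minimal sketch:

```shell
# Succeeds (exit 0) only if the manifest on stdin contains the
# required workload identity label.
has_wi_label() {
  grep -q 'azure.workload.identity/use: "true"'
}

# Usage: has_wi_label < deployment.yaml && kubectl apply -f deployment.yaml
```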
You can retrieve this information using the Azure CLI command: [az keyvault list
1. Set an access policy for the managed identity to access secrets in your Key Vault by running the following commands:
- ```azurecli
+ ```azurecli-interactive
export RESOURCE_GROUP="myResourceGroup" export USER_ASSIGNED_IDENTITY_NAME="myIdentity" export KEYVAULT_NAME="myKeyVault"
You can retrieve this information using the Azure CLI command: [az keyvault list
To disable the Azure AD workload identity on the AKS cluster where it's been enabled and configured, you can run the following command:
-```azurecli
+```azurecli-interactive
az aks update --resource-group myResourceGroup --name myAKSCluster --disable-workload-identity ```
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
If you don't have a managed identity created and assigned to your pod, perform t
1. Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity.
- ```azurecli
+ ```azurecli-interactive
az account set --subscription "subscriptionID" ```
- ```azurecli
+ ```azurecli-interactive
az identity create --name "userAssignedIdentityName" --resource-group "resourceGroupName" --location "location" --subscription "subscriptionID" ```
If you don't have a managed identity created and assigned to your pod, perform t
If you don't have a dedicated Kubernetes service account created for this application, perform the following steps to create and then annotate it with the client ID of the managed identity created in the previous step. Use the [az aks get-credentials][az-aks-get-credentials] command and replace the values for the cluster name and the resource group name.
-```azurecli
+```azurecli-interactive
az aks get-credentials -n myAKSCluster -g "${RESOURCE_GROUP}" ```
Serviceaccount/workload-identity-sa created
Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject. Replace the values `resourceGroupName`, `userAssignedIdentityName`, `federatedIdentityName`, `serviceAccountNamespace`, and `serviceAccountName`.
-```azurecli
+```azurecli-interactive
az identity federated-credential create --name federatedIdentityName --identity-name userAssignedIdentityName --resource-group resourceGroupName --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME} --audience api://AzureADTokenExchange ```
After you've completed your testing and the application is successfully able to
1. Run the [az aks pod-identity delete][az-aks-pod-identity-delete] command to remove the identity from your pod. This should only be done after all pods in the namespace using the pod-managed identity mapping have migrated to use the sidecar.
- ```azurecli
+ ```azurecli-interactive
az aks pod-identity delete --name podIdentityName --namespace podIdentityNamespace --resource-group myResourceGroup --cluster-name myAKSCluster ```
app-service Quickstart Golang https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-golang.md
> Go on App Service on Linux is _experimental_. >
-In this quickstart, you'll deploy a Go web app to Azure App Service. Azure App Service is a fully managed web hosting service that supports Go 1.18 and higher apps hosted in a Linux server environment.
+In this quickstart, you'll deploy a Go web app to Azure App Service. Azure App Service is a fully managed web hosting service that supports Go 1.19 and higher apps hosted in a Linux server environment.
To complete this quickstart, you need: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs).-- [Go 1.18](https://go.dev/dl/) or higher installed locally.
+- [Go 1.19](https://go.dev/dl/) or higher installed locally.
## 1 - Sample application
az login
Create the webapp and other resources, then deploy your code to Azure using [az webapp up](/cli/azure/webapp#az-webapp-up). ```azurecli
-az webapp up --runtime GO:1.18 --os linux --sku B1
+az webapp up --runtime GO:1.19 --os linux --sku B1
``` * The `--runtime` parameter specifies what version of Go your app is running. This example uses Go 1.19. To list all available runtimes, use the command `az webapp list-runtimes --os linux --output table`.
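To narrow that runtime listing to Go, the table output can be filtered. A sketch (runtime identifiers follow the `GO:<version>` pattern passed to `--runtime` above):

```shell
# Keep only the GO:<version> rows from a runtime listing on stdin.
go_runtimes() {
  grep -i '^GO:'
}

# Usage: az webapp list-runtimes --os linux --output table | go_runtimes
```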
You can launch the app at http://&lt;app-name>.azurewebsites.net
"name": "&lt;app-name>", "os": "&lt;os-type>", "resourcegroup": "&lt;group-name>",
- "runtime_version": "go|1.18",
+ "runtime_version": "go|1.19",
"runtime_version_detected": "0.0", "sku": "FREE", "src_path": "&lt;your-folder-location>"
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
# Create a WordPress site
-[WordPress](https://www.wordpress.org) is an open source content management system (CMS) used by over 40% of the web to create websites, blogs, and other applications. WordPress can be run on a few different Azure
+[WordPress](https://www.wordpress.org) is an open source content management system (CMS) used by over 40% of the web to create websites, blogs, and other applications. WordPress can be run on a few different Azure
In this quickstart, you'll learn how to create and deploy your first [WordPress](https://www.wordpress.org/) site to [Azure App Service on Linux](overview.md#app-service-on-linux) with [Azure Database for MySQL - Flexible Server](../mysql/flexible-server/index.yml) using the [WordPress Azure Marketplace item by App Service](https://azuremarketplace.microsoft.com/marketplace/apps/WordPress.WordPress?tab=Overview). This quickstart uses the **Basic** tier for your app and a **Burstable, B1ms** tier for your database, and incurs a cost for your Azure Subscription. For pricing, visit [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) and [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
application-gateway Migrate V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md
Last updated 03/31/2020
-# Migrate Azure Application Gateway and Web Application Firewall from v1 to v2
+# Migrate Azure Application Gateway and Web Application Firewall from V1 to V2
-[Azure Application Gateway and Web Application Firewall (WAF) v2](application-gateway-autoscaling-zone-redundant.md) is now available, offering additional features such as autoscaling and availability-zone redundancy. However, existing v1 gateways aren't automatically upgraded to v2. If you want to migrate from v1 to v2, follow the steps in this article.
+We announced the deprecation of Application Gateway V1 SKU (Standard and WAF) on April 28, 2023, and the SKU retires on April 28, 2026. Learn more about the V1 retirement [here](./v1-retirement.md).
+
+[Azure Application Gateway and Web Application Firewall (WAF) V2](application-gateway-autoscaling-zone-redundant.md) now offer additional features such as autoscaling, availability zone redundancy, higher performance, faster operations, and improved throughput compared to V1. All new features are released for the V2 SKU only, so we highly recommend that you start planning your migration now.
+
+Existing V1 gateways aren't automatically upgraded to V2. This guide helps you plan and carry out the migration yourself.
There are two stages in a migration: 1. Migrate the configuration 2. Migrate the client traffic
-This article covers configuration migration. Client traffic migration varies depending on your specific environment. However, some high-level, general recommendations [are provided](#migrate-client-traffic).
+This article primarily covers configuration migration. Traffic migration varies depending on your needs and environment, but some general recommendations are included later in this [article](#traffic-migration).
-## Migration overview
+## Configuration migration
-An Azure PowerShell script is available that does the following:
+An Azure PowerShell script is provided in this document. It performs the following operations to help you with the configuration migration:
* Creates a new Standard_v2 or WAF_v2 gateway in a virtual network subnet that you specify. * Seamlessly copies the configuration associated with the v1 Standard or WAF gateway to the newly created Standard_V2 or WAF_V2 gateway.
-### Caveats\Limitations
-
-* The new v2 gateway has new public and private IP addresses. It isn't possible to move the IP addresses associated with the existing v1 gateway seamlessly to v2. However, you can allocate an existing (unallocated) public or private IP address to the new v2 gateway.
-* You must provide an IP address space for another subnet within your virtual network where your v1 gateway is located. The script can't create the v2 gateway in any existing subnets that already have a v1 gateway. However, if the existing subnet already has a v2 gateway, that may still work provided there's enough IP address space.
-* If you have a network security group or user defined routes associated to the v2 gateway subnet, make sure they adhere to the [NSG requirements](../application-gateway/configuration-infrastructure.md#network-security-groups) and [UDR requirements](../application-gateway/configuration-infrastructure.md#supported-user-defined-routes) for a successful migration
-* [Virtual network service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) are currently not supported in an Application Gateway subnet.
-* To migrate a TLS/SSL configuration, you must specify all the TLS/SSL certs used in your v1 gateway.
-* If you have FIPS mode enabled for your V1 gateway, it won't be migrated to your new v2 gateway. FIPS mode isn't supported in v2.
-* In case of Private IP only V1 gateway, the script will generate a private and public IP address for the new V2 gateway. The Private IP only V2 gateway is currently in public preview. Once it becomes generally available, customers can utilize the script to transfer their private IP only V1 gateway to a private IP only V2 gateway.
-* Headers with names containing anything other than letters, digits, and hyphens are not passed to your application. This only applies to header names, not header values. This is a breaking change from v1.
-* NTLM and Kerberos authentication is not supported by Application Gateway v2. The script is unable to detect if the gateway is serving this type of traffic and may pose as a breaking change from v1 to v2 gateways if run.
-
-## Download the script
+## Downloading the script
-Download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureAppGWMigration).
+You can download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureAppGWMigration).
-## Use the script
+## Using the script
There are two options for you depending on your local PowerShell environment setup and preferences:
There are two options for you depending on your local PowerShell environment set
To determine if you have the Azure Az modules installed, run `Get-InstalledModule -Name az`. If you don't see any installed Az modules, then you can use the `Install-Script` method.
-### Install using the Install-Script method
+#### Install using the Install-Script method
To use this option, you must not have the Azure Az modules installed on your computer. If they're installed, the following command displays an error. You can either uninstall the Azure Az modules, or use the other option to download the script manually and run it.
Run the script with the following command to get the latest version:
This command also installs the required Az modules.
-### Install using the script directly
+#### Install using the script directly
If you do have some Azure Az modules installed and can't uninstall them (or don't want to uninstall them), you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw nupkg file. To install the script from this nupkg file, see [Manual Package Download](/powershell/gallery/how-to/working-with-packages/manual-download).
To run the script:
-validateMigration -enableAutoScale ```
-## Migrate client traffic
+### Caveats\Limitations
+
+* The new v2 gateway has new public and private IP addresses. It isn't possible to move the IP addresses associated with the existing v1 gateway seamlessly to v2. However, you can allocate an existing (unallocated) public or private IP address to the new v2 gateway.
+* You must provide an IP address space for another subnet within your virtual network where your v1 gateway is located. The script can't create the v2 gateway in any existing subnets that already have a v1 gateway. However, if the existing subnet already has a v2 gateway, that may still work provided there's enough IP address space.
+* If you have a network security group or user defined routes associated to the v2 gateway subnet, make sure they adhere to the [NSG requirements](../application-gateway/configuration-infrastructure.md#network-security-groups) and [UDR requirements](../application-gateway/configuration-infrastructure.md#supported-user-defined-routes) for a successful migration
+* [Virtual network service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) are currently not supported in an Application Gateway subnet.
+* To migrate a TLS/SSL configuration, you must specify all the TLS/SSL certs used in your v1 gateway.
+* If you have FIPS mode enabled for your V1 gateway, it won't be migrated to your new v2 gateway. FIPS mode isn't supported in v2.
+* In case of Private IP only V1 gateway, the script generates a private and public IP address for the new V2 gateway. The Private IP only V2 gateway is currently in public preview. Once it becomes generally available, customers can utilize the script to transfer their private IP only V1 gateway to a private IP only V2 gateway.
+* Headers with names containing anything other than letters, digits, and hyphens are not passed to your application. This only applies to header names, not header values. This is a breaking change from v1.
+* NTLM and Kerberos authentication is not supported by Application Gateway v2. The script is unable to detect if the gateway is serving this type of traffic and may pose as a breaking change from v1 to v2 gateways if run.
+
+## Traffic migration
First, double check that the script successfully created a new v2 gateway with the exact configuration migrated over from your v1 gateway. You can verify this from the Azure portal.
Here are a few scenarios where your current application gateway (Standard) may r
Update your clients to use the IP address(es) associated with the newly created v2 application gateway. We recommend that you don't use IP addresses directly. Consider using the DNS name label (for example, yourgateway.eastus.cloudapp.azure.com) associated with your application gateway that you can CNAME to your own custom DNS zone (for example, contoso.com).
-## ApplicationGateway V2 pricing
-
-The pricing models are different for the Application Gateway v1 and v2 SKUs. Please review the pricing at [Application Gateway pricing](https://azure.microsoft.com/pricing/details/application-gateway/) page before migrating from V1 to V2.
-
-## Common questions
-
-### Are there any limitations with the Azure PowerShell script to migrate the configuration from v1 to v2?
-
-Yes. See [Caveats/Limitations](#caveatslimitations).
-
-### Is this article and the Azure PowerShell script applicable for Application Gateway WAF product as well?
-
-Yes.
-
-### Does the Azure PowerShell script also switch over the traffic from my v1 gateway to the newly created v2 gateway?
+## Pricing considerations
-No. The Azure PowerShell script only migrates the configuration. Actual traffic migration is your responsibility and in your control.
+The pricing models are different for the Application Gateway v1 and v2 SKUs. V2 is charged based on consumption. See [Application Gateway pricing](https://azure.microsoft.com/pricing/details/application-gateway/) before migrating for pricing information.
-### Is the new v2 gateway created by the Azure PowerShell script sized appropriately to handle all of the traffic that is currently served by my v1 gateway?
+### Cost efficiency guidance
-The Azure PowerShell script creates a new v2 gateway with an appropriate size to handle the traffic on your existing v1 gateway. Autoscaling is disabled by default, but you can enable AutoScaling when you run the script.
+The V2 SKU comes with a range of advantages such as a performance boost of 5x, improved security with Key Vault integration, faster updates of security rules in WAF_V2, WAF Custom rules, Policy associations, and Bot protection. It also offers high scalability, optimized traffic routing, and seamless integration with Azure services. These features can improve the overall user experience, prevent slowdowns during times of heavy traffic, and avoid expensive data breaches.
-### I configured my v1 gateway to send logs to Azure storage. Does the script replicate this configuration for v2 as well?
+There are five variants available in the V1 SKU, based on tier and size: Standard_Small, Standard_Medium, Standard_Large, WAF_Medium, and WAF_Large.
-No. The script doesn't replicate this configuration for v2. You must add the log configuration separately to the migrated v2 gateway.
-### Does this script support certificates uploaded to Azure KeyVault ?
+| SKU | v1 Fixed Price/mo | v2 Fixed Price/mo | Recommendation|
+| - |:-:|:--:|:--: |
+|Standard Medium | 102.2 | 179.8|V2 SKU can handle a larger number of requests than a V1 gateway, so we recommend consolidating multiple V1 gateways into a single V2 gateway, to optimize the cost. Ensure that consolidation doesn't exceed the Application Gateway [limits](../azure-resource-manager/management/azure-subscription-service-limits.md#application-gateway-limits). We recommend 3:1 consolidation. |
+| WAF Medium | 183.96 | 262.8 |Same as for Standard Medium |
+| Standard Large | 467.2 | 179.58 | For these variants, in most cases, moving to a V2 gateway can provide you with a better price benefit compared to V1.|
+| WAF Large | 654.08 | 262.8 |Same as for Standard Large |
-No. Currently the script doesn't support certificates in KeyVault. However, this is being considered for a future version.
+> [!NOTE]
+> The calculations shown here are based on East US and a gateway with 2 instances in V1. The variable cost in V2 is based on whichever of the three dimensions has the highest usage: new connections (50/sec per CU), persistent connections (2500/min per CU), or throughput (2.22 Mbps per CU). <br>
+> <br>
+> The scenarios described here are examples and are for illustration purposes only. For pricing information according to your region, see the [Pricing page](https://azure.microsoft.com/pricing/details/application-gateway/).
+
+For further concerns regarding pricing, work with your CSAM or contact our support team for assistance.
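The note's three dimensions can be turned into a rough capacity-unit estimate: a V2 gateway bills on whichever dimension needs the most CUs. An illustrative sketch only, using integer arithmetic and the per-CU figures quoted in the note (not an official pricing formula):

```shell
# estimate_cu <new-conn/sec> <persistent-conn/min> <throughput-Mbps>
# Per CU: 50 new conn/sec, 2500 persistent conn/min, 2.22 Mbps.
estimate_cu() {
  new_cu=$(( ($1 + 49) / 50 ))              # ceil(new / 50)
  persist_cu=$(( ($2 + 2499) / 2500 ))      # ceil(persistent / 2500)
  tput_cu=$(( ($3 * 100 + 221) / 222 ))     # ceil(Mbps / 2.22)
  max=$new_cu
  if [ "$persist_cu" -gt "$max" ]; then max=$persist_cu; fi
  if [ "$tput_cu" -gt "$max" ]; then max=$tput_cu; fi
  echo "$max"
}

# Example: estimate_cu 100 5000 10 reports 5 CUs (throughput dominates).
```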
-### I ran into some issues with using this script. How can I get help?
+## Common questions
-You can contact Azure Support under the topic "Configuration and Setup/Migrate to V2 SKU". Learn more about [Azure support here](https://azure.microsoft.com/support/options/).
+Common questions on migration can be found in the [migration FAQ](./retirement-faq.md#faq-on-v1-to-v2-migration).
## Next steps
application-gateway Retirement Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/retirement-faq.md
+
+ Title: FAQ on V1 retirement
+
+description: This article lists commonly asked questions about the retirement of Application Gateway V1 SKUs and migration
++++ Last updated : 04/19/2023++
+# FAQs
+On April 28, 2023, we announced the retirement of Application Gateway V1, effective April 28, 2026. This article lists commonly asked questions about V1 retirement and V1-to-V2 migration.
+
+## Common questions on V1 retirement
+
+### What is the official date Application Gateway V1 is cut off from creation?
+
+New customers can't create V1 gateways starting July 1, 2023. However, existing V1 customers can continue to create resources until August 28, 2024 and manage V1 resources until the retirement date of April 28, 2026.
+
+### What happens to existing Application Gateway V1 after 28 April 2026?
+
+After the retirement date, V1 gateways are no longer supported. Any V1 SKU resources that are still active are stopped and force deleted.
+
+### What is the definition of a new customer on Application Gateway V1 SKU?
+
+Customers who didn't have the Application Gateway V1 SKU in their subscriptions in June 2023 are considered new customers. These customers won't be able to create new V1 gateways from July 1, 2023.
+
+### What is the definition of an existing customer on Application Gateway V1 SKU?
+
+Customers who had active, or stopped but allocated, Application Gateway V1 SKU resources in their subscriptions in June 2023 are considered existing customers. These customers have until August 28, 2024 to create new V1 application gateways and until April 28, 2026 to migrate their V1 gateways to V2.
+
+### Does this migration plan affect any of my existing workloads that run on Application Gateway V1 SKU?
+
+Until April 28, 2026, existing Application Gateway V1 deployments are supported. After April 28, 2026, any V1 SKU resources that are still active are stopped, and force deleted.
+
+### What happens to my V1 application gateways if I don't plan on migrating soon?
+
+On April 28, 2026, the V1 gateways are fully retired, and all active Application Gateway V1 resources are stopped and deleted. To prevent business impact, we highly recommend that you start planning your migration now and complete it before April 28, 2026.
+
+### How do I migrate my application gateway V1 to V2 SKU?
+
+If you have an Application Gateway V1, [migration from V1 to V2](./migrate-v1-v2.md) is currently done in two stages:
+- Stage 1: Migrate the configuration. Detailed instructions are provided in the migration article.
+- Stage 2: Migrate the client traffic. Client traffic migration varies depending on your specific environment; the migration article provides high-level guidelines.
+
+### Can Microsoft migrate this data for me?
+
+No. Microsoft can't migrate your data on your behalf. You must do the migration yourself by using the self-serve options provided.
+
+### What is the time required for migration?
+
+Planning and execution of the migration depends greatly on the complexity of the deployment and could take a couple of months.
+
+### How do I report an issue?
+
+Post your issues and questions about migration to our [Microsoft Q&A](https://aka.ms/ApplicationGatewayQA) for AppGateway, with the keyword V1Migration. We recommend posting all your questions on this forum. If you have a support contract, you're welcome to log a [support ticket](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/NewSupportRequestV3Blade) as well.
+
+## FAQ on V1 to V2 migration
+
+### Are there any limitations with the Azure PowerShell script to migrate the configuration from v1 to v2?
+
+Yes. See [Caveats/Limitations](./migrate-v1-v2.md#caveatslimitations).
+
+### Is this article and the Azure PowerShell script applicable for Application Gateway WAF product as well?
+
+Yes.
+
+### Does the Azure PowerShell script also switch over the traffic from my v1 gateway to the newly created v2 gateway?
+
+No. The Azure PowerShell script only migrates the configuration. Actual traffic migration is your responsibility and in your control.
+
+### Is the new v2 gateway created by the Azure PowerShell script sized appropriately to handle all of the traffic that is currently served by my v1 gateway?
+
+The Azure PowerShell script creates a new v2 gateway with an appropriate size to handle the traffic on your existing v1 gateway. Autoscaling is disabled by default, but you can enable AutoScaling when you run the script.
+
+### I configured my v1 gateway to send logs to Azure storage. Does the script replicate this configuration for v2 as well?
+
+No. The script doesn't replicate this configuration for v2. You must add the log configuration separately to the migrated v2 gateway.
+
+### Does this script support certificates uploaded to Azure Key Vault ?
+
+No. Currently the script doesn't support certificates in Key Vault.
+
+### I ran into some issues with using this script. How can I get help?
+
+You can contact Azure Support under the topic "Configuration and Setup/Migrate to V2 SKU". Learn more about [Azure support here](https://azure.microsoft.com/support/options/).
application-gateway V1 Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/v1-retirement.md
+
+ Title: We're retiring Application Gateway V1 SKU in April 2026
+
+description: This article provides a high-level overview of the retirement of Application gateway V1 SKUs
++++ Last updated : 04/19/2023+++
+# Migrate your Application Gateways from V1 SKU to V2 SKU by April 28, 2026
+
+**Applies to:** :heavy_check_mark: Application Gateway v1 deployments
+
+We announced the deprecation of Application Gateway V1 on **April 28, 2023**. The Application Gateway V1 SKU retires on **April 28, 2026** and isn't supported after that date. If you use the Application Gateway V1 SKU, start planning your migration to V2 now and complete it by April 28, 2026, to take advantage of [Application Gateway V2](./overview-v2.md).
+
+## Retirement Timelines
+
+- Deprecation announcement: April 28, 2023
+
+- No new subscriptions for V1 deployments: July 1, 2023 - Application Gateway V1 is no longer available for deployment on new subscriptions from July 1, 2023.
+
+- No new V1 deployments: August 28, 2024 - V1 creation is stopped completely for all customers from August 28, 2024 onward.
+
+- SKU retirement: April 28, 2026 - Any Application Gateway V1 gateways that are in Running status will be stopped. Customers with Application Gateway V1 gateways that aren't migrated to Application Gateway V2 will be informed of the timelines for deleting them, and the gateways will subsequently be force deleted.
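As an illustration only, the milestone dates in the timeline above can be checked with a short script. This is a hypothetical helper using only the Python standard library, not part of any Microsoft migration tooling:

```python
from datetime import date

# Milestones from the retirement timeline above.
milestones = {
    "Deprecation announcement": date(2023, 4, 28),
    "No new subscriptions for V1 deployments": date(2023, 7, 1),
    "No new V1 deployments": date(2024, 8, 28),
    "SKU retirement": date(2026, 4, 28),
}

def days_between(start: str, end: str) -> int:
    """Whole days between two named milestones."""
    return (milestones[end] - milestones[start]).days

# Customers have roughly three years from the announcement to retirement.
print(days_between("Deprecation announcement", "SKU retirement"))
```

Running this against today's date instead of a fixed milestone gives the remaining runway for planning a migration.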
+
+## Resources available for migration
+
+- Follow the steps outlined in the [migration script](./migrate-v1-v2.md) to migrate from Application Gateway v1 to v2. Please review [pricing](./understanding-pricing.md) before making the transition.
+
+- If your company/organization has partnered with Microsoft or works with Microsoft representatives (like cloud solution architects (CSAs) or customer success account managers (CSAMs)), please work with them for migration.
+
+## Required action
+
+Start planning your migration to Application Gateway V2 today.
+
+- Make a list of all Application Gateway V1 SKU gateways: On April 28, 2023, we sent out emails with the subject "Retirement Notice: Transition to Application Gateway V2 by 28 April 2026" to V1 subscription owners. The email provides the subscription, gateway name, and Application Gateway V1 resource details. Please use these details to build this list.
+
+- [Learn more](./migrate-v1-v2.md) about migrating your application gateway V1 to V2 SKU. For more information, see [Frequently asked questions about V1 to V2 migration.](./retirement-faq.md#faq-on-v1-to-v2-migration)
+
+- For technical questions, issues, and help, get answers from community experts in [Microsoft Q&A](https://aka.ms/ApplicationGatewayQA) or reach out to us at [AppGatewayMigrationTeam](mailto:appgatewaymigration@microsoft.com). If you have a support question and you need technical help, please create a [support request](https://portal.azure.com/#view/Microsoft_Azure_Support/NewSupportRequestV3Blade).
+
+- Complete the migration as soon as possible to prevent business impact and to take advantage of the improved performance, security, and new features of Application Gateway V2.
+
+Blog: [Take advantage of Application Gateway V2](https://azure.microsoft.com/blog/taking-advantage-of-the-new-azure-application-gateway-v2/)
+
+## Next steps
+
+ * [Migration Guidance](./migrate-v1-v2.md)
+ * [Common questions on Migration](./retirement-faq.md)
+
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
The following table lists the supported languages for print text by the most rec
|Kazakh (Latin) | `kk-latn`|Zhuang | `za` | |Khaling | `klr`|Zulu | `zu` |
-### Print text in preview (API version 2022-06-30-preview)
+### Print text in preview (API version 2023-02-28-preview)
Use the parameter `api-version=2023-02-28-preview` when using the REST API or the corresponding SDK to support these languages in your applications.
Use the parameter `api-version=2022-06-30-preview` when using the REST API or th
|Guarani| `gn`| Shambala | `ksb`| |Gusii | `guz`|Shona | `sn`| |Greek | `el`|Siksika | `bla`|
-|Herero | `hz` |Soga | `xog`|
-|Hiligaynon | `hil` |Somali (Latin) | `so-latn` |
-|Iban | `iba`|Songhai | `son` |
-|Igbo |`ig`|South Ndebele | `nr`|
-|Iloko | `ilo`|Southern Altai | `alt`|
-|Ingush |`inh`|Southern Sotho | `st` |
-|Jola-Fonyi |`dyo`|Sundanese | `su` |
-|Kabardian | `kbd` | Swati | `ss` |
-|Kalenjin | `kln` |Tabassaran| `tab` |
-|Kalmyk | `xal` | Tachelhit| `shi` |
-|Kanuri | `kr`|Tahitian | `ty`|
-|Khakas | `kjh` |Taita | `dav` |
+|Hebrew | `he` | Soga | `xog`|
+|Herero | `hz` |Somali (Latin) | `so-latn` |
+|Hiligaynon | `hil` |Songhai | `son` |
+|Iban | `iba`|South Ndebele | `nr`|
+|Igbo |`ig`|Southern Altai | `alt`|
+|Iloko | `ilo`|Southern Sotho | `st` |
+|Ingush |`inh`|Sundanese | `su` |
+|Jola-Fonyi |`dyo`|Swati | `ss` |
+|Kabardian | `kbd` | Tabassaran| `tab` |
+|Kalenjin | `kln` |Tachelhit| `shi` |
+|Kalmyk | `xal` | Tahitian | `ty`|
+|Kanuri | `kr`|Taita | `dav` |
+|Khakas | `kjh` | Tamil | `ta`|
|Kikuyu | `ki` | Tatar (Cyrillic) | `tt-cyrl` | |Kildin Sami | `sjd` | Teso | `teo` | |Kinyarwanda| `rw`| Thai | `th`|
This technology is currently available for US driver licenses and the biographic
::: moniker range="form-recog-2.1.0" > [!div class="nextstepaction"] > [Try Form Recognizer Sample Labeling tool](https://aka.ms/fott-2.1-ga)
attestation Custom Tcb Baseline Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/custom-tcb-baseline-enforcement.md
Microsoft Azure Attestation is a unified solution for attesting different types of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves. While attesting SGX enclaves, Azure Attestation validates the evidence against Azure default Trusted Computing Base (TCB) baseline. The default TCB baseline is provided by an Azure service named [Trusted Hardware Identity Management](../security/fundamentals/trusted-hardware-identity-management.md) (THIM) and includes collateral fetched from Intel like certificate revocation lists (CRLs), Intel certificates, Trusted Computing Base (TCB) information and Quoting Enclave identity (QEID). The default TCB baseline from THIM might lag the latest baseline offered by Intel. This is to prevent any attestation failure scenarios for ACC customers who require more time for patching platform software (PSW) updates.
-The custom TCB baseline enforcement feature in Azure Attestation will empower you to perform SGX attestation against a desired TCB baseline. It is always recommended for [Azure Confidential Computing](../confidential-computing/overview.md) (ACC) SGX customers to install the latest PSW version supported by Intel and configure their SGX attestation policy with the latest TCB baseline supported by Azure.
+Azure Attestation offers the custom TCB baseline enforcement feature (preview), which empowers you to perform SGX attestation against a desired TCB baseline. It's always recommended for [Azure Confidential Computing](../confidential-computing/overview.md) (ACC) SGX customers to install the latest PSW version supported by Intel and configure their SGX attestation policy with the latest TCB baseline supported by Azure.
## Why use custom TCB baseline enforcement feature?
c:[type=="x-ms-attestation-type"] => issue(type="tee", value=c.value);
- If the PSW version of ACC node is lower than the minimum PSW version of the TCB baseline configured in SGX attestation policy, attestation scenarios will fail - If the PSW version of ACC node is greater than or equal to the minimum PSW version of the TCB baseline configured in SGX attestation policy, attestation scenarios will pass - For customers who do not configure a custom TCB baseline in attestation policy, attestation will be performed against the Azure default TCB baseline-- For customers using an attestation policy without configurationrules section, attestation will be performed against the Azure default TCB baseline
+- For customers using an attestation policy without a configurationrules section, attestation will be performed against the Azure default TCB baseline
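The pass/fail rules in the list above reduce to a simple version comparison. A minimal sketch follows; representing PSW versions as tuples is an assumption for illustration only, since real attestation policies express the baseline differently:

```python
def sgx_attestation_passes(node_psw: tuple, min_baseline_psw: tuple) -> bool:
    """Attestation passes only when the ACC node's PSW version is greater
    than or equal to the minimum PSW version of the configured TCB baseline."""
    return node_psw >= min_baseline_psw

# A node patched beyond the baseline passes; an older node fails.
print(sgx_attestation_passes((2, 17), (2, 15)))
print(sgx_attestation_passes((2, 9), (2, 15)))
```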
automation Update Agent Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues.md
For more information, seeΓÇ»[Configure reboot settings](../update-management/con
### WSUS server configuration
-If the environment is set to get updates from WSUS, ensure that it is approved in WSUS before the update deployment. For more information, see [WSUS configuration settings](../update-management/configure-wuagent.md#make-wsus-configuration-settings). If your environment is not using WSUS, ensure that you remove the WSUS server settings and [reset Windows update component](https://learn.microsoft.com/windows/deployment/update/windows-update-resources#how-do-i-reset-windows-update-components).
+If the environment is set to get updates from WSUS, ensure that it is approved in WSUS before the update deployment. For more information, see [WSUS configuration settings](../update-management/configure-wuagent.md#make-wsus-configuration-settings). If your environment is not using WSUS, ensure that you remove the WSUS server settings and [reset Windows update component](/windows/deployment/update/windows-update-resources#how-do-i-reset-windows-update-components).
### Automatically download and install
-To fix the issue, disable the **AutoUpdate** feature. Set it to Disabled in the local group policy Configure Automatic Updates. For more information, see [Configure automatic updates](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates).
+To fix the issue, disable the **AutoUpdate** feature. Set it to Disabled in the local group policy Configure Automatic Updates. For more information, see [Configure automatic updates](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates).
## <a name="troubleshoot-offline"></a>Troubleshoot offline
CheckResultMessageArguments : {}
RuleId : DotNetFrameworkInstalledCheck RuleGroupId : prerequisites
-RuleName : .Net Framework 4.6.2+
+RuleName : .NET Framework 4.6.2+
RuleGroupName : Prerequisite Checks RuleDescription : .NET Framework version 4.6.2 or higher is required CheckResult : Passed
azure-app-configuration Use Feature Flags Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-dotnet-core.md
The Feature Management libraries also manage feature flag lifecycles behind the
The [Add feature flags to an ASP.NET Core app Quickstart](./quickstart-feature-flag-aspnet-core.md) shows a simple example of how to use feature flags in an ASP.NET Core application. This tutorial shows additional setup options and capabilities of the Feature Management libraries. You can use the sample app created in the quickstart to try out the sample code shown in this tutorial.
-For the ASP.NET Core feature management API reference documentation, see [Microsoft.FeatureManagement Namespace](https://www.nuget.org/packages/Microsoft.FeatureManagement/).
+For the ASP.NET Core feature management API reference documentation, see [Microsoft.FeatureManagement Namespace](/dotnet/api/microsoft.featuremanagement).
In this tutorial, you will learn how to:
To access the .NET Core feature manager, your app must have references to the `M
The .NET Core feature manager is configured from the framework's native configuration system. As a result, you can define your application's feature flag settings by using any configuration source that .NET Core supports, including the local *appsettings.json* file or environment variables.
-By default, the feature manager retrieves feature flag configuration from the `"FeatureManagement"` section of the .NET Core configuration data. To use the default configuration location, call the AddFeatureManagement method of the **IServiceCollection** passed into the **ConfigureServices** method of the **Startup** class.
+By default, the feature manager retrieves feature flag configuration from the `"FeatureManagement"` section of the .NET Core configuration data. To use the default configuration location, call the [AddFeatureManagement](/dotnet/api/microsoft.featuremanagement.servicecollectionextensions.addfeaturemanagement) method of the **IServiceCollection** passed into the **ConfigureServices** method of the **Startup** class.
```csharp
public class Startup
```
-If you use filters in your feature flags, you must include the [Microsoft.FeatureManagement.FeatureFilters](/dotnet/api/microsoft.azure.management.storsimple8000series.models.featurefilter) namespace and add a call to AddFeatureFilters specifying the type name of the filter you want to use as the generic type of the method. For more information on using feature filters to dynamically enable and disable functionality, see [Enable staged rollout of features for targeted audiences](./howto-targetingfilter-aspnet-core.md).
+If you use filters in your feature flags, you must include the [Microsoft.FeatureManagement.FeatureFilters](/dotnet/api/microsoft.featuremanagement.featurefilters) namespace and add a call to [AddFeatureFilter](/dotnet/api/microsoft.featuremanagement.ifeaturemanagementbuilder.addfeaturefilter) specifying the type name of the filter you want to use as the generic type of the method. For more information on using feature filters to dynamically enable and disable functionality, see [Enable staged rollout of features for targeted audiences](./howto-targetingfilter-aspnet-core.md).
The following example shows how to use a built-in feature filter called `PercentageFilter`:
By convention, the `FeatureManagement` section of this JSON document is used for
## Use dependency injection to access IFeatureManager
-For some operations, such as manually checking feature flag values, you need to get an instance of IFeatureManager. In ASP.NET Core MVC, you can access the feature manager `IFeatureManager` through dependency injection. In the following example, an argument of type `IFeatureManager` is added to the signature of the constructor for a controller. The runtime automatically resolves the reference and provides an of the interface when calling the constructor. If you're using an application template in which the controller already has one or more dependency injection arguments in the constructor, such as `ILogger`, you can just add `IFeatureManager` as an additional argument:
+For some operations, such as manually checking feature flag values, you need to get an instance of [IFeatureManager](/dotnet/api/microsoft.featuremanagement.ifeaturemanager). In ASP.NET Core MVC, you can access the feature manager `IFeatureManager` through dependency injection. In the following example, an argument of type `IFeatureManager` is added to the signature of the constructor for a controller. The runtime automatically resolves the reference and provides an of the interface when calling the constructor. If you're using an application template in which the controller already has one or more dependency injection arguments in the constructor, such as `ILogger`, you can just add `IFeatureManager` as an additional argument:
### [.NET 5.x](#tab/core5x)
public IActionResult Index()
} ```
-When an MVC controller or action is blocked because the controlling feature flag is *off*, a registered IDisabledFeaturesHandler interface is called. The default `IDisabledFeaturesHandler` interface returns a 404 status code to the client with no response body.
+When an MVC controller or action is blocked because the controlling feature flag is *off*, a registered [IDisabledFeaturesHandler](/dotnet/api/microsoft.featuremanagement.mvc.idisabledfeatureshandler) interface is called. The default `IDisabledFeaturesHandler` interface returns a 404 status code to the client with no response body.
## MVC views
app.UseForFeature(featureName, appBuilder => {
In this tutorial, you learned how to implement feature flags in your ASP.NET Core application by using the `Microsoft.FeatureManagement` libraries. For more information about feature management support in ASP.NET Core and App Configuration, see the following resources: * [ASP.NET Core feature flag sample code](./quickstart-feature-flag-aspnet-core.md)
-* [Microsoft.FeatureManagement documentation](https://www.nuget.org/packages/Microsoft.FeatureManagement/)
+* [Microsoft.FeatureManagement documentation](/dotnet/api/microsoft.featuremanagement)
* [Manage feature flags](./manage-feature-flags.md)
azure-arc Troubleshoot Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-managed-instance.md
You can use any client like `SqlCmd`, SQL Server Management Studio (SSMS), or Az
If the previous steps all succeeded without any problem and you still can't log in, collect the logs and contact support
+### Connection between Failover groups is lost
+If the failover group between the primary and geo-secondary Arc-enabled SQL Managed Instances is configured in `sync` mode and the connection is lost for any reason for an extended period of time, the logs on the primary Arc-enabled SQL Managed Instance can't be truncated until the transactions are sent to the geo-secondary. This could lead to the logs filling up and potentially running out of space on the primary site. To recover from this situation, remove the failover groups and reconfigure them when the connection between the sites is re-established.
+
+The failover groups can be removed on both the primary and secondary sites as follows:
+
+If the data controller is deployed in `indirect` mode:
+`kubectl delete fog <failovergroup name>`
+
+If the data controller is deployed in `direct` mode, provide the shared name, and the failover group is deleted on both sites:
+`az sql instance-failover-group-arc delete --name fogcr --mi <arcsqlmi> --resource-group <resource group>`
++
+Once the failover group on the primary site is deleted, logs can be truncated to free up space.
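The mode-dependent choice of deletion command above can be sketched as a small helper. This builds command strings only and executes nothing; `fogcr` and the placeholders match the examples:

```python
def fog_delete_command(mode: str, name: str,
                       mi: str = "", resource_group: str = "") -> str:
    """Return the failover-group deletion command for the data controller's
    connectivity mode: 'indirect' uses kubectl, 'direct' uses the Azure CLI."""
    if mode == "indirect":
        return f"kubectl delete fog {name}"
    if mode == "direct":
        return (f"az sql instance-failover-group-arc delete "
                f"--name {name} --mi {mi} --resource-group {resource_group}")
    raise ValueError(f"unknown connectivity mode: {mode}")

print(fog_delete_command("indirect", "fogcr"))
```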
+ ### Collection controller logs ```console
kubectl -n $nameSpace cp $sqlmiName-ha-0:/var/log $localFolder/$sqlmiName-ha-0/
## Next steps
-[Get logs to troubleshoot Azure Arc-enabled data services](troubleshooting-get-logs.md)
+[Get logs to troubleshoot Azure Arc-enabled data services](troubleshooting-get-logs.md)
azure-arc Identity Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/identity-access-overview.md
This topic provides an overview of these two RBAC systems and how you can use th
## Kubernetes RBAC
-[Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) provides granular filtering of user actions. With Kubernetes RBAC, You assign users or groups permission to create and modify resources or view logs from running application workloads. You can create roles to define permissions, and then assign those roles to users with role bindings. Permissions may be scoped to a single namespace or across the entire cluster.
+[Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) provides granular filtering of user actions. With Kubernetes RBAC, you assign users or groups permission to create and modify resources or view logs from running application workloads. You can create roles to define permissions, and then assign those roles to users with role bindings. Permissions may be scoped to a single namespace or across the entire cluster.
The Azure Arc-enabled Kubernetes cluster connect feature uses Kubernetes RBAC to provide connectivity to the `apiserver` of the cluster. This connectivity doesn't require any inbound port to be enabled on the firewall. A reverse proxy agent running on the cluster can securely start a session with the Azure Arc service in an outbound manner. Using the cluster connect feature helps enable interactive debugging and troubleshooting scenarios. It can also be used to provide cluster access to Azure services for [custom locations](conceptual-custom-locations.md).
For more information, see [Azure RBAC on Azure Arc-enabled Kubernetes](conceptua
- Learn about [access and identity options for Azure Kubernetes Service (AKS) clusters](../../aks/concepts-identity.md). - Learn about [Cluster connect access to Azure Arc-enabled Kubernetes clusters](conceptual-cluster-connect.md).-- Learn about [Azure RBAC on Azure Arc-enabled Kubernetes](conceptual-azure-rbac.md)
+- Learn about [Azure RBAC on Azure Arc-enabled Kubernetes](conceptual-azure-rbac.md)
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
async def get_name(
"name": name,} def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
- return AsgiMiddleware(app).handle(req, context)
+ return func.AsgiMiddleware(app).handle(req, context)
```
+For a full example, see [Using FastAPI Framework with Azure Functions](/samples/azure-samples/fastapi-on-azure-functions/azure-functions-python-create-fastapi-app/).
# [WSGI](#tab/wsgi)
def main(req: func.HttpRequest, context) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request.') return func.WsgiMiddleware(app).handle(req, context) ```
-For a full example, see [Use Flask Framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/).
+For a full example, see [Using Flask Framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/).
azure-maps Authentication Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/authentication-best-practices.md
For apps that run on servers (such as web services and service/daemon apps), if
[Azure Active Directory (Azure AD) authentication]: ../active-directory/fundamentals/active-directory-whatis.md [Shared Access Signature (SAS) Token authentication]: azure-maps-authentication.md#shared-access-signature-token-authentication [role-based access control (RBAC)]: azure-maps-authentication.md#authorization-with-role-based-access-control
-[Configurable token lifetimes in the Microsoft identity platform (preview)]: ../active-directory/develop/active-directory-configurable-token-lifetimes.md
+[Configurable token lifetimes in the Microsoft identity platform (preview)]: ../active-directory/develop/configurable-token-lifetimes.md
[Create SAS tokens]: azure-maps-authentication.md#create-sas-tokens [Public client and confidential client applications]: ../active-directory/develop/msal-client-applications.md [Cross origin resource sharing (CORS)]: azure-maps-authentication.md#cross-origin-resource-sharing-cors
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
## v3 (preview)
+### [3.0.0-preview.7] (May 2nd, 2023)
+
+#### New features (3.0.0-preview.7)
+
+- In addition to map configuration, [Map.setServiceOptions()] now supports changing `domain`, `styleAPIVersion`, and `styleDefinitionsVersion` at runtime.
+
+#### Bug fixes (3.0.0-preview.7)
+
+- Fixed token expired exception on relaunches when using AAD / shared token / anonymous authentication by making sure authentication is resolved prior to any style definition request
+
+- Fixed redundant style definition and thumbnail requests
+
+- Fixed incorrect `aria-label` applied to zoom out control button element
+
+- Fixed the possibility of undefined copyright element container when `withRuleBasedAttribution` is set to false
+
+- Fixed the possibility of event listener removal called on undefined target in `EventManager.remove()`
+
+#### Installation (3.0.0-preview.7)
+
+The preview is available on [npm][3.0.0-preview.7] and CDN.
+
+- **NPM:** Refer to the instructions at [azure-maps-control@3.0.0-preview.7][3.0.0-preview.7]
+
+- **CDN:** Reference the following CSS and JavaScript in the `<head>` element of an HTML file:
+
+ ```html
+ <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.7/atlas.min.css" rel="stylesheet" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.7/atlas.min.js"></script>
+ ```
+ ### [3.0.0-preview.6] (March 31, 2023) #### Installation (3.0.0-preview.6)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
## v2 (latest)
+### [2.2.7] (May 2nd, 2023)
+
+#### New features (2.2.7)
+
+- In addition to map configuration, [Map.setServiceOptions()] now supports changing `domain`, `styleAPIVersion`, and `styleDefinitionsVersion` at runtime.
+
+#### Bug fixes (2.2.7)
+
+- Fixed token expired exception on relaunches when using AAD / shared token / anonymous authentication by making sure authentication is resolved prior to any style definition request
+
+- Fixed redundant style definition and thumbnail requests
+
+- Fixed incorrect `aria-label` applied to zoom out control button element
+
+- Fixed the possibility of undefined copyright element container when `withRuleBasedAttribution` is set to false
+
+- Fixed the possibility of event listener removal called on undefined target in `EventManager.remove()`
+ ### [2.2.6] #### Bug fixes (2.2.6)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.0.0-preview.7]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.7
[3.0.0-preview.6]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.6 [3.0.0-preview.5]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.5 [3.0.0-preview.4]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.4 [3.0.0-preview.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.3 [3.0.0-preview.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.2 [3.0.0-preview.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.1
+[2.2.7]: https://www.npmjs.com/package/azure-maps-control/v/2.2.7
[2.2.6]: https://www.npmjs.com/package/azure-maps-control/v/2.2.6 [2.2.5]: https://www.npmjs.com/package/azure-maps-control/v/2.2.5 [2.2.4]: https://www.npmjs.com/package/azure-maps-control/v/2.2.4
Stay up to date on Azure Maps:
[language mapping]: https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/azure-maps/supported-languages.md#azure-maps-supported-languages [user regions (view)]: /javascript/api/azure-maps-control/atlas.styleoptions?view=azure-maps-typescript-latest#azure-maps-control-atlas-styleoptions-view [ImageSpriteManager.add()]: /javascript/api/azure-maps-control/atlas.imagespritemanager?view=azure-maps-typescript-latest#azure-maps-control-atlas-imagespritemanager-add
[Map.setServiceOptions()]: /javascript/api/azure-maps-control/atlas.map?view=azure-maps-typescript-latest#azure-maps-control-atlas-map-setserviceoptions
[azure-maps-control]: https://www.npmjs.com/package/azure-maps-control [maplibre-gl]: https://www.npmjs.com/package/maplibre-gl [SourceManager]: /javascript/api/azure-maps-control/atlas.sourcemanager
azure-maps Tutorial Prioritized Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md
The next tutorial demonstrates the process of creating a simple store locator us
[setCamera]: /javascript/api/azure-maps-control/atlas.map#setCamera_CameraOptions___CameraBoundsOptions___AnimationOptions_ [MapControlCredential]: /javascript/api/azure-maps-rest/atlas.service.mapcontrolcredential [Azure Maps Route Directions API]: /javascript/api/azure-maps-rest/atlas.service.routeurl#calculateroutedirections-aborter--geojson-position-calculateroutedirectionsoptions-
-[Truck Route]: https://samples.azuremaps.com/?sample=car-vs-truck-route
+[Truck Route]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/main/Samples/Tutorials/Truck%20Route
[Multiple routes by mode of travel]: https://samples.azuremaps.com/?sample=multiple-routes-by-mode-of-travel
-[Data-driven style expressions]: data-driven-style-expressions-web-sdk.md
[URI Parameters for Post Route Directions]: /rest/api/maps/route/postroutedirections#uri-parameters
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
Starting from agent version 1.13.27, the Linux agent will support both Python 2
If you're using an older version of the agent, you must have the virtual machine use Python 2 by default. If your virtual machine is using a distro that doesn't include Python 2 by default, then you must install it. The following sample commands will install Python 2 on different distros:
+ - **Red Hat, CentOS, Oracle**:
+
+ ```bash
+ sudo yum install -y python2
+ ```
+ - **Ubuntu, Debian**:
+
+ ```bash
+ sudo apt-get update
+ sudo apt-get install -y python2
+ ```
+ - **SUSE**:
+
+ ```bash
+ sudo zypper install -y python2
+ ```
Again, only if you're using an older version of the agent, the python2 executable must be aliased to *python*. Use the following method to set this alias: 1. Run the following command to remove any existing aliases:
- ```
+ ```bash
sudo update-alternatives --remove-all python ``` 1. Run the following command to create the alias:
- ```
- sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 1
+ ```bash
+ sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 1
``` ### Supported Linux hardening
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Using Azure Monitor agent, you get immediate benefits as shown below:
## Consolidating legacy agents
-Deploy Azure Monitor Agent on all new virtual machines, scale sets, and on-premises servers to collect data for [supported services and features](#supported-services-and-features).
+Deploy Azure Monitor Agent on all new virtual machines, scale sets, and on-premises servers to collect data for [supported services and features](./azure-monitor-agent-migration.md#migrate-additional-services-and-features).
If you have machines already deployed with legacy Log Analytics agents, we recommend you [migrate to Azure Monitor Agent](./azure-monitor-agent-migration.md) as soon as possible. The legacy Log Analytics agent will not be supported after August 2024.
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r
## Supported services and features
-In addition to the generally available data collection listed above, Azure Monitor Agent also supports these Azure Monitor features in preview:
-
-| Azure Monitor feature | Current support | Other extensions installed | More information |
-| : | : | : | : |
-| [VM insights](../vm/vminsights-overview.md) | Public preview | Dependency Agent extension, if youΓÇÖre using the Map Services feature | [Enable VM Insights](../vm/vminsights-enable-overview.md) |
-| [Container insights](../containers/container-insights-overview.md) | Public preview | Containerized Azure Monitor agent | [Enable Container Insights](../containers/container-insights-onboard.md) |
-
-In addition to the generally available data collection listed above, Azure Monitor Agent also supports these Azure services in preview:
-
-| Azure service | Current support | Other extensions installed | More information |
-| : | : | : | : |
-| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors/windows-forwarded-events.md)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if youΓÇÖre collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | - |
-| [Change Tracking and Inventory Management](../../automation/change-tracking/overview.md) | Public preview | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) |
-| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
-| Azure Stack HCI Insights | private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) |
-| Azure Virtual Desktop (AVD) Insights | private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) |
-
-> [!NOTE]
-> Features and services listed above in preview **may not be available in Azure Government and China clouds**. They will be available typically within a month *after* the features/services become generally available.
-
+For a list of features and services that use Azure Monitor Agent for data collection, see [Migrate to Azure Monitor Agent from Log Analytics agent](../agents/azure-monitor-agent-migration.md#migrate-additional-services-and-features).
## Supported regions
-Azure Monitor Agent is available in all public regions, Azure Government anmd China clouds, for generally available features. It's not yet supported in air-gapped clouds. For more information, see [Product availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&rar=true&regions=all).
+Azure Monitor Agent is available in all public regions, Azure Government and China clouds, for generally available features. It's not yet supported in air-gapped clouds. For more information, see [Product availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&rar=true&regions=all).
## Costs
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | Azure Storage | | | X |
| | Event Hub | | | X |
| **Services and features supported** | | | | |
-| | Microsoft Sentinel | X ([View scope](#supported-services-and-features)) | X | |
+| | Microsoft Sentinel | X ([View scope](./azure-monitor-agent-migration.md#migrate-additional-services-and-features)) | X | |
| | VM Insights | X (Public preview) | X | |
| | Microsoft Defender for Cloud | X (Public preview) | X | |
-| | Update Management | X (Public preview, independent of monitoring agents) | X | |
+| | Automation Update Management | | X | |
+| | Update Management | X (Public preview, independent of monitoring agents) | | |
| | Change Tracking | X (Public preview) | X | |
| | SQL Best Practices Assessment | X | | |
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | Azure Storage | | | X | |
| | Event Hub | | | X | |
| **Services and features supported** | | | | | |
-| | Microsoft Sentinel | X ([View scope](#supported-services-and-features)) | X | |
+| | Microsoft Sentinel | X ([View scope](./azure-monitor-agent-migration.md#migrate-additional-services-and-features)) | X | |
| | VM Insights | X (Public preview) | X | |
| | Microsoft Defender for Cloud | X (Public preview) | X | |
| | Update Management | X (Public preview, independent of monitoring agents) | X | |
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
# Tools for migrating from Log Analytics Agent to Azure Monitor Agent
-Azure Monitor Agent (AMA) replaces the Log Analytics Agent (MM) include enhanced security, cost-effectiveness, performance, manageability and reliability. This article explains how to use the AMA Migration Helper and DCR Config Generator tools to help automate and track the migration from Log Analytics Agent to Azure Monitor Agent.
+[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for Windows and Linux machines, in Azure and non-Azure environments, including on-premises and third-party clouds. The [benefits of migrating to Azure Monitor Agent](../agents/azure-monitor-agent-migration.md) include enhanced security, cost-effectiveness, performance, manageability and reliability. This article explains how to use the AMA Migration Helper and DCR Config Generator tools to help automate and track the migration from Log Analytics Agent to Azure Monitor Agent.
![Flow diagram that shows the steps involved in agent migration and how the migration tools help in generating DCRs and tracking the entire migration process.](media/azure-monitor-agent-migration/mma-to-ama-migration-steps.png)

> [!IMPORTANT]
-> Do not remove the legacy agents if being used by other [Azure solutions or services](./azure-monitor-agent-overview.md#supported-services-and-features). Use the migration helper to discover which solutions/services you use today.
+> Do not remove legacy agents being used by other [Azure solutions or services](./azure-monitor-agent-migration.md#migrate-additional-services-and-features). Use the migration helper to discover which solutions and services you use today.
[!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]
Azure Monitor Agent relies only on [data collection rules (DCRs)](../essentials/
Use the DCR Config Generator tool to parse Log Analytics Agent configuration from your workspaces and generate/deploy corresponding data collection rules automatically. You can then associate the rules to machines running the new agent using built-in association policies.

> [!NOTE]
-> DCR Config Generator does not currently support additional configuration for [Azure solutions or services](./azure-monitor-agent-overview.md#supported-services-and-features) dependent on Log Analytics Agent.
+> DCR Config Generator does not currently support additional configuration for [Azure solutions or services](./azure-monitor-agent-migration.md#migrate-additional-services-and-features) dependent on Log Analytics Agent.
### Prerequisites

To install DCR Config Generator, you need:
To install DCR Config Generator:
1. Run the script:
- Option 1: Outputs **ready-to-deploy ARM template files** only that will create the generated DCR in the specified subscription and resource group, when deployed.
+    Option 1: Outputs **ready-to-deploy ARM template files** only, which, when deployed, create the generated DCR in the specified subscription and resource group.
   ```powershell
   .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath
   ```
To install DCR Config Generator:
- Windows ARM template and parameter files - if the target workspace contains Windows performance counters or Windows events.
- Linux ARM template and parameter files - if the target workspace contains Linux performance counters or Linux Syslog events.
- If the Log Analytics workspace was not [configured to collect data](./log-analytics-agent.md#data-collected) from connected agents, the generated files will be empty. This is a scenario in which the agent was connected to a Log Analytics workspace, but was not configured to send any data from the host machine.
-
-1. [Deploy the generated ARM template](../../azure-resource-manager/templates/deployment-tutorial-local-template.md) to associate the generated data collection rules with virtual machines running the new agent.
+ If the Log Analytics workspace wasn't [configured to collect data](./log-analytics-agent.md#data-collected) from connected agents, the generated files will be empty. This is a scenario in which the agent was connected to a Log Analytics workspace, but wasn't configured to send any data from the host machine.
+
+1. Deploy the generated ARM templates:
+
+
+ ### [Portal](#tab/portal-1)
+ 1. In the portal's search box, type in *template* and then select **Deploy a custom template**.
+
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot of the Deploy custom template screen.":::
+
+ 1. Select **Build your own template in the editor**.
+
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot of the template editor.":::
+
+ 1. Paste the generated template into the editor and select **Save**.
+ 1. On the **Custom deployment** screen, specify a **Subscription**, **Resource group**, and **Region**.
+ 1. Select **Review + create** > **Create**.
+
+ ### [PowerShell](#tab/azure-powershell)
+
+ ```powershell-interactive
+ New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateFile <path-to-template>
+ ```
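    The generator also emits parameter files alongside the templates. Assuming you keep the generated files together, the same cmdlet can reference both (the paths below are placeholders, not generator output):

    ```powershell
    # Placeholder paths; substitute the template and parameter files emitted by the generator
    New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateFile <path-to-template> -TemplateParameterFile <path-to-parameter-file>
    ```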
+
+
+ > [!NOTE]
+ > You can include up to 100 'counterSpecifiers' in a data collection rule. 'samplingFrequencyInSeconds' must be between 1 and 300, inclusive.
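    For context, those limits apply to the `performanceCounters` data source in the generated data collection rule. A minimal illustrative fragment (the counter paths are examples, not generator output):

    ```json
    "dataSources": {
      "performanceCounters": [
        {
          "name": "perfCounterDataSource60",
          "streams": [ "Microsoft-Perf" ],
          "samplingFrequencyInSeconds": 60,
          "counterSpecifiers": [
            "\\Processor(_Total)\\% Processor Time",
            "\\Memory\\Available Bytes"
          ]
        }
      ]
    }
    ```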
+
+1. Associate machines to your data collection rules:
+
+ 1. From the **Monitor** menu, select **Data Collection Rules**.
+ 1. From the **Data Collection Rules** screen, select your data collection rule.
+ 1. Select **View resources** > **Add**.
+ 1. Select your machines > **Apply**.
+
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
# Migrate to Azure Monitor Agent from Log Analytics agent
-[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines, in both Azure and non-Azure (on-premises and third-party clouds) environments. It introduces a simplified, flexible method of configuring collection configuration called [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). This article outlines the benefits of migrating to Azure Monitor Agent and provides guidance on how to implement a successful migration.
+[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for Windows and Linux machines, in Azure and non-Azure environments, including on-premises and third-party clouds. The agent introduces a simplified, flexible method of configuring data collection using [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). This article provides guidance on how to implement a successful migration from the Log Analytics agent to Azure Monitor Agent.
> [!IMPORTANT]
-> The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you're currently using the Log Analytics agent with Azure Monitor or [other supported features and services](./agents-overview.md#supported-services-and-features), you should start planning your migration to Azure Monitor Agent by using the information in this article and the availability of other solutions/services.
+> The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). After this date, Microsoft will no longer provide any support for the Log Analytics agent. If you're currently using the Log Analytics agent with Azure Monitor or [other supported features and services](#migrate-additional-services-and-features), start planning your migration to Azure Monitor Agent by using the information in this article.
## Benefits
-In addition to consolidating and improving upon legacy Log Analytics agents, Azure Monitor Agent provides additional immediate benefits for **cost savings, simplified management experience, security and performance.** [Learn more about these benefits](./azure-monitor-agent-overview.md#benefits)
-
+In addition to consolidating and improving on the legacy Log Analytics agents, Azure Monitor Agent provides [a variety of immediate benefits](./azure-monitor-agent-overview.md#benefits), including **cost savings, a simplified management experience, and enhanced security and performance.**
## Migration guidance
-### Before you begin
-1. Review and follow the **[prerequisites](./azure-monitor-agent-manage.md#prerequisites)** for use with Azure Monitor Agent.
- - For non-Azure and on premises servers, [installing the Azure Arc agent](../../azure-arc/servers/agent-overview.md) is required though it's not mandatory to use Azure Arc for management overall. As such, this should incur no additional cost for Arc.
-2. Service (legacy Solutions) requirements - The legacy Log Analytics agents are used by various Azure services to collect required data. If you're not using any additional Azure service, you may skip this step altogether.
- - Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to **discover solutions enabled** on your workspace(s) that use the legacy agents, including the **per-solution migration recommendation<sup>1</sup>** shown under `Workspace overview` tab.
- - If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel.
-3. **Agent coexistence:** If you're setting up a *new environment* with resources, such as deployment scripts and onboarding templates, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort later.
- - Be careful when you collect **duplicate data** from the same machine, as this could skew query results, affect downstream features like alerts, dashboards, workbooks and generate **additional cost/charges** for data ingestion and retention. Here are some things that can help
- - If possible, configure the agents to *send the data to different destinations*, i.e. either different workspaces or different tables in same workspace
- - If not used, disable any duplicate data collection from legacy agents by [removing the workspace configurations](./agent-data-sources.md#configure-data-sources)
- - For **Defender for Cloud**, the experiences natively deduplicate data if using both agents. Also you will only be [billed once per machine](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) when running both agents
- - For **Sentinel**, you can easily [disable the legacy connector](../../sentinel/ama-migrate.md#recommended-migration-plan) to stop ingestion of logs from legacy agents.
- - Running two telemetry agents on the same machine **consumes double the resources**, including but not limited to CPU, memory, storage space, and network bandwidth.
-
-<sup>1</sup> Start testing your scenarios during the preview phase. This will save time, avoid surprises later and ensure you're ready to deploy to production as soon as the service becomes generally available. Moreover you benefit from added security and reduced cost immediately.
+Before you begin migrating from the Log Analytics agent to Azure Monitor Agent, review the checklist below.
+
+### Before you begin
+
+> [!div class="checklist"]
+> - **Check the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for installing Azure Monitor Agent.**<br>To monitor non-Azure and on-premises servers, you must [install the Azure Arc agent](../../azure-arc/servers/agent-overview.md). You won't incur an additional cost for installing the Azure Arc agent and you don't necessarily need to use Azure Arc to manage your non-Azure virtual machines.
+> - **Understand your current needs.**<br>Use the **Workspace overview** tab of the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to see connected agents and discover solutions enabled on your Log Analytics workspaces that use legacy agents, including per-solution migration recommendations.
+> - **Verify that Azure Monitor Agent can address all of your needs.**<br>Azure Monitor Agent is generally available for data collection and is used for data collection by various Azure Monitor features and other Azure services. For details, see [Supported services and features](#migrate-additional-services-and-features).
+> - **Consider installing Azure Monitor Agent together with a legacy agent for a transition period.**<br>Run Azure Monitor Agent alongside the legacy Log Analytics agent on the same machine to continue using existing functionality during evaluation or migration. Keep in mind that running two agents on the same machine doubles resource consumption, including but not limited to CPU, memory, storage space, and network bandwidth.<br>
+> - If you're setting up a new environment with resources, such as deployment scripts and onboarding templates, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort later.
+> - If you have two agents on the same machine, avoid collecting duplicate data.<br> Collecting duplicate data from the same machine can skew query results, affect downstream features like alerts, dashboards, and workbooks, and generate extra charges for data ingestion and retention.<br>
+> **To avoid data duplication:**
+ > - Configure the agents to send the data to different workspaces or different tables in the same workspace.
+ > - Disable duplicate data collection from legacy agents by [removing the workspace configurations](./agent-data-sources.md#configure-data-sources).
+ > - Defender for Cloud natively deduplicates data when you use both agents, and [you'll be billed once per machine](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) when you run the agents side by side.
+ > - For Sentinel, you can easily [disable the legacy connector](../../sentinel/ama-migrate.md#recommended-migration-plan) to stop ingestion of logs from legacy agents.
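
While both agents are running, one quick way to find machines reporting through more than one agent is a query along these lines against the standard `Heartbeat` table (a sketch; `Category` distinguishes 'Azure Monitor Agent' from 'Direct Agent'):

```kusto
Heartbeat
| where TimeGenerated > ago(24h)
| summarize Categories = make_set(Category) by Computer
| where array_length(Categories) > 1
```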
### Migration steps

![Flow diagram that shows the steps involved in agent migration and how the migration tools help in generating DCRs and tracking the entire migration process.](media/azure-monitor-agent-migration/mma-to-ama-migration-steps.png)
-1. **[Create data collection rules](./data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule)**. You can use the [DCR generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator)<sup>1</sup> to **automatically convert your legacy agent configuration into data collection rule templates**. Review the generated rules before you create them, to leverage benefits like filtering, granular targeting (per machine), and other optimizations.
+1. Use the [DCR generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator) to convert your legacy agent configuration into [data collection rules](./data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule) automatically.<sup>1</sup>
+
+ Review the generated rules before you create them, to leverage benefits like [filtering](../essentials/data-collection-transformations.md), granular targeting (per machine), and other optimizations.
+
+1. Test the new agent and data collection rules on a few nonproduction machines:
+
+ 1. Deploy the generated data collection rules and associate them with a few machines, as described in [Installing and using DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator).
+
+ To avoid double ingestion, you can disable data collection from legacy agents during the testing phase without uninstalling the agents yet, by [removing the workspace configurations for legacy agents](./agent-data-sources.md#configure-data-sources).
-2. Deploy extensions and DCR-associations:
- 1. **Test first** by deploying extensions<sup>2</sup> and DCR-Associations on a few non-production machines. You can also deploy side-by-side on machines running legacy agents (see [agent coexistence](#before-you-begin) section above)
- 2. Once data starts flowing via Azure Monitor agent, **compare it with legacy agent data** to ensure there are no gaps. You can do this by joining with the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table which indicates 'Azure Monitor Agent' for the new data collection.
- 3. If you are required to run both agents and wish to **avoid double ingestion**, you can disable data collection from legacy agents without uninstalling them yet, by simply [removing the workspace configurations for legacy agents](./agent-data-sources.md#configure-data-sources)
- 4. Post testing, you can **roll out broadly** using [built-in policies](./azure-monitor-agent-manage.md#use-azure-policy) for at-scale deployment of extensions and DCR-associations. **Using policy will also ensure automatic deployment of extensions and DCR-associations for any new machines in future.**
- 5. Throughout this process, use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to **monitor the at-scale migration** across your machines
+ 1. Compare the data ingested by Azure Monitor Agent with legacy agent data to ensure there are no gaps. You can do this on any table by using the [join operator](/azure/data-explorer/kusto/query/joinoperator?pivots=azuremonitor) to add the `Category` column from the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table, which indicates `Azure Monitor Agent` for data collected by the Azure Monitor Agent.
+
+ For example, this query adds the `Category` column from the `Heartbeat` table to data retrieved from the `Event` table:
+
+ ```kusto
+ Heartbeat
+ | distinct Computer, SourceComputerId, Category
+ | join kind=inner (
+ Event
+ | extend d=parse_xml(EventData)
+ | extend sourceHealthServiceId = tostring(d.DataItem.["@sourceHealthServiceId"])
+ | project-reorder TimeGenerated, Computer, EventID, sourceHealthServiceId, ParameterXml, EventData
+ ) on $left.SourceComputerId==$right.sourceHealthServiceId
+ | project TimeGenerated, Computer, Category, EventID, sourceHealthServiceId, ParameterXml, EventData
+ ```
+
+1. Use [built-in policies](../agents/azure-monitor-agent-manage.md#built-in-policies) to deploy extensions and DCR associations at scale after successful small-scale testing. Using policy also ensures automatic deployment of extensions and DCR associations for new machines.<sup>2</sup>
-3. **Validate** that Azure Monitor Agent is collecting data as expected and all **downstream dependencies**, such as dashboards, alerts, and workbooks, function properly. You can do this by joining with/looking at the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table which indicates 'Azure Monitor Agent' vs 'Direct Agent' (for legacy).
+ Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to **monitor the at-scale migration** across your machines.
+
+1. **Verify** that Azure Monitor Agent is collecting data as expected and all **downstream dependencies**, such as dashboards, alerts, and workbooks, function properly:
+ 1. Look at the **Overview** and **Usage** tabs of [Log Analytics Workspace Insights](../logs/log-analytics-workspace-overview.md) for spikes or dips in ingestion rates following the migration. Check both the overall workspace ingestion and the table-level ingestion rates.
+ 1. Check your workbooks, dashboards, and alerts for variances from typical behavior following the migration.
+
+1. Clean up: After you confirm that Azure Monitor Agent is collecting data properly, **disable or uninstall the legacy Log Analytics agents**.
+   - If you need to continue using both agents, [disable data collection with the Log Analytics agent](./agent-data-sources.md#configure-data-sources).
+   - If you've migrated to Azure Monitor Agent for all your requirements, [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that the Log Analytics agent previously used. Continue using the legacy Log Analytics agent for features and solutions that Azure Monitor Agent doesn't support.
+ - Don't uninstall the legacy agent if you need to use it to upload data to System Center Operations Manager.
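
To spot-check ingestion volumes during the verification step above, a query along these lines charts daily ingestion by data type (a sketch using the standard `Usage` table, where `Quantity` is in MB; adjust the time range to span your migration window):

```kusto
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedMB = sum(Quantity) by bin(TimeGenerated, 1d), DataType
| render timechart
```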
-4. Clean up: After you confirm that Azure Monitor Agent is collecting data properly, you may **choose to either disable or uninstall the legacy Log Analytics agents**
- 1. If you have need to continue using both agents, skip uninstallation and only [disable the legacy data collection](./agent-data-sources.md#configure-data-sources), also described above.
- 2. If you've migrated to Azure Monitor agent for all your requirements, you may [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent.
- 3. Don't uninstall the legacy agent if you need to use it for uploading data to System Center Operations Manager.
+<sup>1</sup> The DCR generator only converts the configurations for Windows event logs, Linux Syslog, and performance counters. Support for more features and solutions will be available soon.
+<sup>2</sup> You might need to deploy [extensions required for specific solutions](#migrate-additional-services-and-features) in addition to the Azure Monitor Agent extension.
+## Migrate additional services and features
-<sup>1</sup> The DCR generator only converts the configurations for Windows event logs, Linux syslog and performance counters. Support for additional features and solutions will be available soon
-<sup>2</sup> In addition to the Azure Monitor agent extension, you need to deploy additional extensions required for specific solutions. See [other extensions to be installed here](./agents-overview.md#supported-services-and-features)
+Azure Monitor Agent is generally available for data collection.
+Most services that used Log Analytics agent for data collection are migrating to Azure Monitor Agent.
+The following features and services now use Azure Monitor Agent in preview. This means you can already choose to use Azure Monitor Agent to collect data when you enable the feature or service; otherwise, the Log Analytics agent is still enabled by default.
+
+| Service or feature | Migration recommendation | Other extensions installed | More information |
+| : | : | : | : |
+| [VM insights](../vm/vminsights-overview.md) | Public preview with Azure Monitor Agent | Dependency Agent extension, if you're using the Map Services feature | [Enable VM Insights](../vm/vminsights-enable-overview.md) |
+| [Container insights](../containers/container-insights-overview.md) | Public preview with Azure Monitor Agent | Containerized Azure Monitor agent | [Enable Container Insights](../containers/container-insights-onboard.md) |
+| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview with Azure Monitor Agent | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md) |
+| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview with Azure Monitor Agent](../../sentinel/data-connectors/windows-forwarded-events.md)</li><li>Windows DNS logs: [Public preview with Azure Monitor Agent](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview with Azure Monitor Agent](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | See [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel. |
+| [Change Tracking and Inventory Management](../../automation/change-tracking/overview.md) | Public preview with Azure Monitor Agent | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) |
+| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview with Azure Monitor Agent | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
+| Azure Stack HCI Insights | Private preview | No other extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) |
+| Azure Virtual Desktop (AVD) Insights | Private preview | No other extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) |
+
+> [!NOTE]
+> Features and services listed above in preview **may not be available in Azure Government and China clouds**. They will be available typically within a month *after* the features/services become generally available.
+
+When you migrate the following services, which currently use Log Analytics agent, to their respective replacements (v2), you no longer need either of the monitoring agents:
+
+| Service | Migration recommendation | Other extensions installed | More information |
+| : | : | : | : |
+| [Update Management](../../automation/update-management/overview.md) | Update Management Center - Public preview (no dependency on Log Analytics agents or Azure Monitor Agent) | None | [Update management center (Public preview) documentation](../../update-center/index.yml) |
+| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) | Automation Hybrid Worker Extension - Generally available (no dependency on Log Analytics agents or Azure Monitor Agent) | None | [Migrate an existing Agent based to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) |
## Next steps
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Here is a comparison between client installer and VM extension for Azure Monitor
## Install the agent

1. Download the Windows MSI installer for the agent using [this link](https://go.microsoft.com/fwlink/?linkid=2192409). You can also download it from **Monitor** > **Data Collection Rules** > **Create** experience on Azure portal (shown below):

   [![Diagram shows download agent link on Azure portal.](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal.png)](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal-focus.png#lightbox)
-2. Open an elevated admin command prompt window and update path to the location where you downloaded the installer.
+2. Open an elevated admin command prompt window and change directory to the location where you downloaded the installer.
3. To install with **default settings**, run the following command:

   ```cli
   msiexec /i AzureMonitorAgentClientSetup.msi /qn
   ```
azure-monitor Alerts Log Api Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-api-switch.md
+
+ Title: Upgrade legacy rules management to the current Azure Monitor Log Alerts API
+description: Learn how to switch log alerts management to the ScheduledQueryRules API
+Last updated : 2/23/2022
+# Upgrade to the Log Alerts API from the legacy Log Analytics alerts API
+
+> [!IMPORTANT]
+> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), the Log Analytics alert API will be retired on October 1, 2025. You must transition to using the Scheduled Query Rules API for log alerts by that date.
+> Log Analytics workspaces created after June 1, 2019 use the [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) to manage alert rules. [Switch to the current API](./alerts-log-api-switch.md) in older workspaces to take advantage of Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits).
+> Once you migrate rules to the [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules), you can't revert to the older [legacy Log Analytics Alert API](/azure/azure-monitor/alerts/api-alerts).
+
+In the past, users used the [legacy Log Analytics Alert API](/azure/azure-monitor/alerts/api-alerts) to manage log alert rules. Workspaces now use the [ScheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) for new rules. This article describes the benefits of the current API and the process of switching alert rule management from the legacy API to the current one.
+
+## Benefits
+
+- Manage all log rules in one API.
+- Single template for creating alert rules (previously, three separate templates were needed).
+- Single API for all Azure resources log alerting.
+- Support for stateful (preview) and 1-minute log alerts.
+- [PowerShell cmdlets](/azure/azure-monitor/alerts/alerts-manage-alerts-previous-version#manage-log-alerts-by-using-powershell) and [Azure CLI](/azure/azure-monitor/alerts/alerts-log#manage-log-alerts-using-cli) support for switched rules.
+- Alignment of severities with all other alert types and newer rules.
+- Ability to create a [cross workspace log alert](/azure/azure-monitor/logs/cross-workspace-query) that spans several external resources like Log Analytics workspaces or Application Insights resources for switched rules.
+- Users can specify dimensions to split the alerts for switched rules.
+- Log alerts have an extended period of up to two days of data (previously limited to one day) for switched rules.
+
+## Impact
+
+- All switched rules must be created/edited with the current API. See [sample use via Azure Resource Template](/azure/azure-monitor/alerts/alerts-log-create-templates) and [sample use via PowerShell](/azure/azure-monitor/alerts/alerts-manage-alerts-previous-version#manage-log-alerts-by-using-powershell).
+- Rules become Azure Resource Manager tracked resources in the current API and must be unique, so each rule's resource ID changes to this structure: `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Display names of the alert rules remain unchanged.
+
+## Process
+
+To view the workspaces that need to be switched, open this [Azure Resource Graph Explorer query](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/resources%0A%7C%20where%20type%20%3D~%20%22microsoft.insights%2Fscheduledqueryrules%22%0A%7C%20where%20properties.isLegacyLogAnalyticsRule%20%3D%3D%20true%0A%7C%20distinct%20tolower%28properties.scopes%5B0%5D%29), select all available subscriptions, and run the query.
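
For reference, the Resource Graph query encoded in that link is:

```kusto
resources
| where type =~ "microsoft.insights/scheduledqueryrules"
| where properties.isLegacyLogAnalyticsRule == true
| distinct tolower(properties.scopes[0])
```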
+
+In most cases, the switch isn't interactive and doesn't require manual steps. Your alert rules aren't stopped or stalled during or after the switch.
+Make this call to switch all alert rules associated with each of the Log Analytics workspaces:
+
+```http
+PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview
+```
+
+The request body must contain the following JSON:
+
+```json
+{
+ "scheduledQueryRulesEnabled" : true
+}
+```
+
+Here's an example of using [ARMClient](https://github.com/projectkudu/ARMClient), an open-source command-line tool that simplifies invoking the API call:
+
+```powershell
+$switchJSON = '{"scheduledQueryRulesEnabled": true}'
+armclient PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview $switchJSON
+```
+
+You can also use the [Azure CLI](/cli/azure/reference-index#az-rest):
+
+```bash
+az rest --method put --url /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview --body "{\"scheduledQueryRulesEnabled\" : true}"
+```
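
If you have several workspaces to switch, the call can be scripted. A minimal POSIX-shell sketch (the subscription ID, resource group, and workspace names below are placeholders; the `az rest` call only runs if the Azure CLI is installed):

```shell
#!/bin/sh
# Build the alertsversion URL for one workspace.
alertsversion_url() {
  printf '/subscriptions/%s/resourceGroups/%s/providers/Microsoft.OperationalInsights/workspaces/%s/alertsversion?api-version=2017-04-26-preview' "$1" "$2" "$3"
}

SUBSCRIPTION="00000000-0000-0000-0000-000000000000"   # placeholder

# Each input line is "<resourceGroup> <workspaceName>" (placeholders).
if command -v az >/dev/null 2>&1; then
  while read -r rg ws; do
    az rest --method put \
      --url "$(alertsversion_url "$SUBSCRIPTION" "$rg" "$ws")" \
      --body '{"scheduledQueryRulesEnabled": true}'
  done <<'EOF'
my-resource-group my-workspace
EOF
fi
```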
+
+If the switch is successful, the response is:
+
+```json
+{
+ "version": 2,
+ "scheduledQueryRulesEnabled" : true
+}
+```
+
+## Check switching status of workspace
+
+You can use this API call to check the switch status of a workspace:
+
+```http
+GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview
+```
+
+You can also use the [ARMClient](https://github.com/projectkudu/ARMClient) tool:
+
+```powershell
+armclient GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview
+```
+
+You can also use the [Azure CLI](/cli/azure/reference-index#az-rest):
+
+```bash
+az rest --method get --url /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview
+```
+
+If the Log Analytics workspace was switched to [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules), the response is:
+
+```json
+{
+ "version": 2,
+ "scheduledQueryRulesEnabled" : true
+}
+```
+If the Log Analytics workspace wasn't switched, the response is:
+
+```json
+{
+ "version": 2,
+ "scheduledQueryRulesEnabled" : false
+}
+```
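
When scripting the status check, you can branch on the response body. A minimal shell sketch using the sample responses above (a real run would capture `az rest` output instead; the naive string match is tied to the sample's formatting, and a JSON parser is more robust):

```shell
#!/bin/sh
# Classify an alertsversion response body as switched or not.
switch_status() {
  case "$1" in
    *'"scheduledQueryRulesEnabled" : true'*) echo "switched" ;;
    *)                                       echo "not switched" ;;
  esac
}

# Sample responses from the GET call above:
switch_status '{ "version": 2, "scheduledQueryRulesEnabled" : true }'    # prints "switched"
switch_status '{ "version": 2, "scheduledQueryRulesEnabled" : false }'   # prints "not switched"
```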
+
+## Next steps
+
+- Learn about the [Azure Monitor - Log Alerts](/azure/azure-monitor/alerts/alerts-unified-log).
+- Learn how to [manage your log alerts using the API](/azure/azure-monitor/alerts/alerts-log-create-templates).
+- Learn how to [manage log alerts using PowerShell](/azure/azure-monitor/alerts/alerts-manage-alerts-previous-version#manage-log-alerts-by-using-powershell).
+- Learn more about the [Azure Alerts experience](/azure/azure-monitor/alerts/alerts-overview).
azure-monitor Api Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/api-alerts.md
Title: Use the Log Analytics Alert REST API
+ Title: Legacy Log Analytics Alert REST API
description: The Log Analytics Alert REST API allows you to create and manage alerts in Log Analytics. This article provides details about the API and examples for performing different operations. Last updated 2/23/2022
-# Create and manage alert rules in Log Analytics with REST API
+# Legacy Log Analytics alerts REST API
+
+This article describes how to manage alert rules using the legacy API.
> [!IMPORTANT]
-> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), Log Analytics workspaces created after *June 1, 2019* manage alert rules by using the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules). Customers are encouraged to [switch to the current API](./alerts-log-api-switch.md) in older workspaces to take advantage of Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits). This article describes management of alert rules by using the legacy API.
+> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), the Log Analytics alert API will be retired on October 1, 2025. You must transition to using the Scheduled Query Rules API for log alerts by that date.
+> Log Analytics workspaces created after June 1, 2019 use the [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) to manage alert rules. [Switch to the current API](./alerts-log-api-switch.md) in older workspaces to take advantage of Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits).
The Log Analytics Alert REST API allows you to create and manage alerts in Log Analytics. This article provides details about the API and several examples for performing different operations.
azure-monitor Application Insights Asp Net Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/application-insights-asp-net-agent.md
PS C:\> Enable-ApplicationInsightsMonitoring -InstrumentationKeyMap
``` > [!NOTE]
-> The naming of AppFilter in this context can be confusing, `AppFilter` sets the application name regex filter (HostingEnvironment.SiteName in the case of .Net on IIS). `VirtualPathFilter` sets the virtual path regex filter (HostingEnvironment.ApplicationVirtualPath in the case of .Net on IIS). To instrument a single app you would use the VirtualPathFilter as follows: `Enable-ApplicationInsightsMonitoring -InstrumentationKeyMap @(@{VirtualPathFilter="^/MyAppName$"; InstrumentationSettings=@{InstrumentationKey='<your ikey>'}})`
+> The naming of AppFilter in this context can be confusing: `AppFilter` sets the application name regex filter (HostingEnvironment.SiteName in the case of .NET on IIS), while `VirtualPathFilter` sets the virtual path regex filter (HostingEnvironment.ApplicationVirtualPath in the case of .NET on IIS). To instrument a single app you would use the VirtualPathFilter as follows: `Enable-ApplicationInsightsMonitoring -InstrumentationKeyMap @(@{VirtualPathFilter="^/MyAppName$"; InstrumentationSettings=@{InstrumentationKey='<your ikey>'}})`
#### Parameters
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Micro
* **Deployment method**: Framework dependent or self-contained * **Web server**: Internet Information Server (IIS) or Kestrel * **Hosting platform**: The Web Apps feature of Azure App Service, Azure Virtual Machines, Docker, and Azure Kubernetes Service (AKS)
-* **.NET Core version**: All officially [supported .NET Core versions](https://dotnet.microsoft.com/download/dotnet-core) that aren't in preview
+* **.NET version**: All officially [supported .NET versions](https://dotnet.microsoft.com/download/dotnet) that aren't in preview
* **IDE**: Visual Studio, Visual Studio Code, or command line
-> [!NOTE]
-> ASP.NET Core 6.0 requires [Application Insights 2.19.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.19.0) or later.
- ## Prerequisites You need: - A functioning ASP.NET Core application. If you need to create an ASP.NET Core application, follow this [ASP.NET Core tutorial](/aspnet/core/getting-started/).
+- A reference to a supported version of the [Application Insights](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) NuGet package.
- A valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Create an Application Insights resource](./create-new-resource.md). ## Enable Application Insights server-side telemetry (Visual Studio)
For Visual Studio for Mac, use the [manual guidance](#enable-application-insight
1. Add `AddApplicationInsightsTelemetry()` to your `startup.cs` or `program.cs` class. The choice depends on your .NET Core version. ### [ASP.NET Core 6 and later](#tab/netcorenew)
-
+ Add `builder.Services.AddApplicationInsightsTelemetry();` after the `WebApplication.CreateBuilder()` method in your `Program` class, as in this example: ```csharp
For Visual Studio for Mac, use the [manual guidance](#enable-application-insight
var app = builder.Build(); ```
-
+ ### [ASP.NET Core 5 and earlier](#tab/netcoreold)
-
+ Add `services.AddApplicationInsightsTelemetry();` to the `ConfigureServices()` method in your `Startup` class, as in this example:
-
+ ```csharp // This method gets called by the runtime. Use this method to add services to the container. public void ConfigureServices(IServiceCollection services)
For Visual Studio for Mac, use the [manual guidance](#enable-application-insight
services.AddMvc(); } ```
-
-
+ > [!NOTE]
+ > This .NET version is no longer supported.
+
+
+
1. Set up the connection string. Although you can provide a connection string as part of the `ApplicationInsightsServiceOptions` argument to `AddApplicationInsightsTelemetry`, we recommend that you specify the connection string in configuration. The following code sample shows how to specify a connection string in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing.
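
A minimal `appsettings.json` sketch (the connection string value is a placeholder; real values come from your Application Insights resource):

```json
{
  "ApplicationInsights": {
    "ConnectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000"
  }
}
```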
Dependency collection is enabled by default. [Dependency tracking in Application
Support for [performance counters](./performance-counters.md) in ASP.NET Core is limited: * SDK versions 2.4.1 and later collect performance counters if the application is running in Web Apps (Windows).
-* SDK versions 2.7.1 and later collect performance counters if the application is running in Windows and targets `NETSTANDARD2.0` or later.
+* SDK versions 2.7.1 and later collect performance counters if the application is running in Windows and targets `netstandard2.0` or later.
* For applications that target the .NET Framework, all versions of the SDK support performance counters. * SDK versions 2.8.0 and later support the CPU/memory counter in Linux. No other counter is supported in Linux. To get system counters in Linux and other non-Windows environments, use [EventCounters](#eventcounter).
public void ConfigureServices(IServiceCollection services)
} ```
+> [!NOTE]
+> This .NET version is no longer supported.
+ This table has the full list of `ApplicationInsightsServiceOptions` settings:
This table has the full list of `ApplicationInsightsServiceOptions` settings:
|EnableAdaptiveSampling | Enable/Disable Adaptive Sampling. | True |EnableHeartbeat | Enable/Disable the heartbeats feature. It periodically (15-min default) sends a custom metric named `HeartbeatState` with information about the runtime like .NET version and Azure environment information, if applicable. | True |AddAutoCollectedMetricExtractor | Enable/Disable the `AutoCollectedMetrics extractor`. This telemetry processor sends preaggregated metrics about requests/dependencies before sampling takes place. | True
-|RequestCollectionOptions.TrackExceptions | Enable/Disable reporting of unhandled exception tracking by the request collection module. | False in NETSTANDARD2.0 (because exceptions are tracked with `ApplicationInsightsLoggerProvider`). True otherwise.
+|RequestCollectionOptions.TrackExceptions | Enable/Disable reporting of unhandled exception tracking by the request collection module. | False in `netstandard2.0` (because exceptions are tracked with `ApplicationInsightsLoggerProvider`). True otherwise.
|EnableDiagnosticsTelemetryModule | Enable/Disable `DiagnosticsTelemetryModule`. Disabling causes the following settings to be ignored: `EnableHeartbeat`, `EnableAzureInstanceMetadataTelemetryModule`, and `EnableAppServicesHeartbeatTelemetryModule`. | True For the most current list, see the [configurable settings in `ApplicationInsightsServiceOptions`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs).
public void ConfigureServices(IServiceCollection services)
> [!NOTE] > `services.AddSingleton<ITelemetryInitializer, MyCustomTelemetryInitializer>();` works for simple initializers. For others, `services.AddSingleton(new MyCustomTelemetryInitializer() { fieldName = "myfieldName" });` is required.
+> This .NET version is no longer supported.
public void ConfigureServices(IServiceCollection services)
} ```
+> [!NOTE]
+> This .NET version is no longer supported.
+ ### Add telemetry processors
public void ConfigureServices(IServiceCollection services)
} ```
+> [!NOTE]
+> This .NET version is no longer supported.
+ ### Configure or remove default TelemetryModules
public void ConfigureServices(IServiceCollection services)
} ```
+> [!NOTE]
+> This .NET version is no longer supported.
+ In versions 2.12.2 and later, [`ApplicationInsightsServiceOptions`](#use-applicationinsightsserviceoptions) includes an easy option to disable any of the default modules.
public void ConfigureServices(IServiceCollection services)
} ```
+> [!NOTE]
+> This .NET version is no longer supported.
+ > [!NOTE]
public void Configure(IApplicationBuilder app, IHostingEnvironment env, Telemetr
} ```
+> [!NOTE]
+> This .NET version is no longer supported.
+ The preceding code sample prevents the sending of telemetry to Application Insights. It doesn't prevent any automatic collection modules from collecting telemetry. If you want to remove a particular autocollection module, see [Remove the telemetry module](#configure-or-remove-default-telemetrymodules).
The preceding code sample prevents the sending of telemetry to Application Insig
This section provides answers to common questions.
-### Does Application Insights support ASP.NET Core 3.X?
+### Does Application Insights support ASP.NET Core 3.1?
-Yes. Update to [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) version 2.8.0 or later. Earlier versions of the SDK don't support ASP.NET Core 3.X.
+ASP.NET Core 3.1 is no longer supported by Microsoft.
-Also, if you're [enabling server-side telemetry based on Visual Studio](#enable-application-insights-server-side-telemetry-visual-studio), update to the latest version of Visual Studio 2019 (16.3.0) to onboard. Earlier versions of Visual Studio don't support automatic onboarding for ASP.NET Core 3.X apps.
+[Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) version 2.8.0 and Visual Studio 2019 or later can be used with ASP.NET Core 3.1 applications.
### How can I track telemetry that's not automatically collected?
public void ConfigureServices(IServiceCollection services)
} ```
+> [!NOTE]
+> This .NET version is no longer supported.
+ This limitation isn't applicable from version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) and later.
azure-monitor Eventcounters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/eventcounters.md
[`EventCounter`](/dotnet/core/diagnostics/event-counters) is .NET/.NET Core mechanism to publish and consume counters or statistics. EventCounters are supported in all OS platforms - Windows, Linux, and macOS. It can be thought of as a cross-platform equivalent for the [PerformanceCounters](/dotnet/api/system.diagnostics.performancecounter) that is only supported in Windows systems.
-While users can publish any custom `EventCounters` to meet their needs, .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) and higher runtime publishes a set of these counters by default. This document will walk through the steps required to collect and view `EventCounters` (system defined or user defined) in Azure Application Insights.
+While users can publish any custom `EventCounters` to meet their needs, [.NET](/dotnet/fundamentals/) publishes a set of these counters by default. This document will walk through the steps required to collect and view `EventCounters` (system defined or user defined) in Azure Application Insights.
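
As a sketch of publishing a custom counter (the names here are illustrative, not from the article), an `EventCounter` is created on an `EventSource` and values are reported with `WriteMetric`:

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "Demo-RequestMetrics")]
public sealed class RequestMetricsEventSource : EventSource
{
    public static readonly RequestMetricsEventSource Log = new RequestMetricsEventSource();

    private readonly EventCounter _requestTime;

    private RequestMetricsEventSource()
    {
        // Values are aggregated by the runtime and published periodically.
        _requestTime = new EventCounter("request-processing-time", this)
        {
            DisplayName = "Request processing time",
            DisplayUnits = "ms"
        };
    }

    public void RecordRequestTime(float elapsedMs) => _requestTime.WriteMetric(elapsedMs);
}

// Usage: RequestMetricsEventSource.Log.RecordRequestTime(42.0f);
```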
## Using Application Insights to collect EventCounters
azure-monitor Get Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md
In summary, we recommend `GetMetric()` because it does pre-aggregation, it accum
## Get started with GetMetric
-For our examples, we're going to use a basic .NET Core 3.1 worker service application. If you want to replicate the test environment used with these examples, follow steps 1-6 in the [Monitoring worker service article](worker-service.md#net-core-lts-worker-service-application). These steps add Application Insights to a basic worker service project template. The concepts apply to any general application where the SDK can be used, including web apps and console apps.
+For our examples, we're going to use a basic .NET Core 3.1 worker service application. If you want to replicate the test environment used with these examples, follow steps 1-6 in the [Monitoring worker service article](worker-service.md#net-core-worker-service-application). These steps add Application Insights to a basic worker service project template. The concepts apply to any general application where the SDK can be used, including web apps and console apps.
### Send metrics
SeverityLevel.Error);
## Next steps
-* [Metrics - Get - REST API](https://learn.microsoft.com/rest/api/application-insights/metrics/get)
+* [Metrics - Get - REST API](/rest/api/application-insights/metrics/get)
* [Application Insights API for custom events and metrics](api-custom-events-metrics.md) * [Learn more](./worker-service.md) about monitoring worker service applications. * Use [log-based and pre-aggregated metrics](./pre-aggregated-metrics-log-metrics.md).
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
For more information, see the following link: https://github.com/MicrosoftDocs/a
## Confirm data is flowing
-Check the data flow by going to the Azure portal and navigating to the Application Insights resource that you've enabled the SDK for. From there, you can view the data in the "Live Metrics Stream" or "Metrics" sections.
+Check the data flow by going to the Azure portal and navigating to the Application Insights resource that you've enabled the SDK for. From there, you can view the data in the "Transaction search" or "Metrics" sections.
Additionally, you can use the SDK's trackPageView() method to manually send a page view event and verify that it appears in the portal.
Detailed release notes regarding updates and bug fixes can be found on [GitHub](
* [Track usage](usage-overview.md) * [Custom events and metrics](api-custom-events-metrics.md) * [Build-measure-learn](usage-overview.md)
-* [JavaScript SDK advanced topics](javascript-sdk-advanced.md)
+* [JavaScript SDK advanced topics](javascript-sdk-advanced.md)
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Live Metrics is currently supported for ASP.NET, ASP.NET Core, Azure Functions,
## Get started > [!IMPORTANT]
-> Monitoring ASP.NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) applications requires Application Insights version 2.8.0 or above. To enable Application Insights, ensure that it's activated in the Azure portal and that the Application Insights NuGet package is included. Without the NuGet package, some telemetry is sent to Application Insights, but that telemetry won't show in Live Metrics.
+> To enable Application Insights, ensure that it's activated in the Azure portal and your app is using a recent version of the [Application Insights](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) NuGet package. Without the NuGet package, some telemetry is sent to Application Insights, but that telemetry won't show in Live Metrics.
1. Follow language-specific guidelines to enable Live Metrics: * [ASP.NET](./asp-net.md): Live Metrics is enabled by default.
namespace LiveMetricsDemo
} } }- ```
+> [!NOTE]
+> This .NET version is no longer supported.
+ # [.NET Framework](#tab/dotnet-framework) ```csharp
public void ConfigureServices(IServiceCollection services)
} ```
+> [!NOTE]
+> This .NET version is no longer supported.
+ # [.NET Framework](#tab/dotnet-framework) In the *applicationinsights.config* file, add `AuthenticationApiKey` to `QuickPulseTelemetryModule`:
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
You must have a valid Application Insights connection string. This string is req
Specific instructions for each type of application are described in the following sections.
-## .NET Core LTS Worker Service application
+## .NET Core Worker Service application
The full example is shared at the [NuGet website](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService).
-1. Download and install .NET Core [Long Term Support (LTS)](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+1. [Download and install the .NET SDK](https://dotnet.microsoft.com/download).
1. Create a new Worker Service project either by using a Visual Studio new project template or the command line `dotnet new worker`.
-1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.
+1. Add the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.
1. Add `services.AddApplicationInsightsTelemetryWorkerService();` to the `CreateHostBuilder()` method in your `Program.cs` class, as in this example:
The full example is shared at this [GitHub page](https://github.com/microsoft/Ap
``` 1. Set up the connection string.
- Use the same `appsettings.json` from the preceding .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) Worker Service example.
+ Use the same `appsettings.json` from the preceding [.NET](/dotnet/fundamentals/) Worker Service example.
## .NET Core/.NET Framework console application
-As mentioned in the beginning of this article, the new package can be used to enable Application Insights telemetry from even a regular console application. This package targets [`NetStandard2.0`](/dotnet/standard/net-standard), so it can be used for console apps in .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher, and .NET Framework [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
+As mentioned in the beginning of this article, the new package can be used to enable Application Insights telemetry from even a regular console application. This package targets [`netstandard2.0`](/dotnet/standard/net-standard), so it can be used for console apps in [.NET Core](/dotnet/fundamentals/) or higher, and [.NET Framework](/dotnet/framework/) or higher.
-The full example is shared at this [GitHub page](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/ConsoleApp).
+The full example is shared at this [GitHub page](https://github.com/microsoft/ApplicationInsights-dotnet/tree/main/examples/ConsoleApp).
1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.
Dependency collection is enabled by default. The article [Dependency tracking in
### EventCounter
-`EventCounterCollectionModule` is enabled by default, and it will collect a default set of counters from .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) apps. The [EventCounter](eventcounters.md) tutorial lists the default set of counters collected. It also has instructions on how to customize the list.
+`EventCounterCollectionModule` is enabled by default, and it will collect a default set of counters from [.NET](/dotnet/fundamentals/) apps. The [EventCounter](eventcounters.md) tutorial lists the default set of counters collected. It also has instructions on how to customize the list.
### Manually track other telemetry
Visual Studio IDE onboarding is currently supported only for ASP.NET/ASP.NET Cor
### Can I enable Application Insights monitoring by using tools like Azure Monitor Application Insights Agent (formerly Status Monitor v2)?
-No. [Azure Monitor Application Insights Agent](./application-insights-asp-net-agent.md) currently supports .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) only.
+No. [Azure Monitor Application Insights Agent](./application-insights-asp-net-agent.md) currently supports [.NET](/dotnet/fundamentals/) only.
### Are all features supported if I run my application in Linux?
Use this sample if you're using a console application written in either .NET Cor
Use this sample if you're in ASP.NET Core and creating background tasks in accordance with [official guidance](/aspnet/core/fundamentals/host/hosted-services). [.NET Core Worker Service](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService):
-Use this sample if you have a .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) Worker Service application in accordance with [official guidance](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio#worker-service-template).
+Use this sample if you have a [.NET](/dotnet/fundamentals/) Worker Service application in accordance with [official guidance](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio#worker-service-template).
## Open-source SDK
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
If the majority of your data comes from one of these following tables:
- KubeServices - KubeEvents
-You can adjust your ingestion using the [cost optimization settings](../containers/container-insights-cost-config.md) and/or migrating to the Prometheus metrics addon (../essentials/prometheus-metrics-overview.md)
+You can adjust your ingestion by using the [cost optimization settings](../containers/container-insights-cost-config.md) and/or by migrating to the [Prometheus metrics addon](container-insights-prometheus.md).
Otherwise, the majority of your data belongs to the ContainerLog table, and you can follow the steps below to reduce your ContainerLog costs.
azure-monitor Logs Export Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-export-logic-app.md
description: This article describes a method to use Azure Logic Apps to query da
Previously updated : 03/01/2022 Last updated : 05/01/2023
Log Analytics workspace and log queries in Azure Monitor are multitenancy servic
- Log queries can't return more than 500,000 rows. - Log queries can't return more than 64,000,000 bytes.-- Log queries can't run longer than 10 minutes by default.
+- Log queries can't run longer than 10 minutes.
- Log Analytics connector is limited to 100 calls per minute. ## Logic Apps procedure
Go to the **Storage accounts** menu in the Azure portal and select your storage
[![Screenshot that shows blob data.](media/logs-export-logic-app/blob-data.png "Screenshot that shows sample data exported to a blob.")](media/logs-export-logic-app/blob-data.png#lightbox)
+### Logic App template
+
+The optional **Parse JSON** step isn't included in the template.
+
+```json
+{
+ "definition": {
+ "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
+ "actions": {
+ "Compose": {
+ "inputs": "@body('Run_query_and_list_results')",
+ "runAfter": {
+ "Run_query_and_list_results": [
+ "Succeeded"
+ ]
+ },
+ "type": "Compose"
+ },
+ "Create_blob_(V2)": {
+ "inputs": {
+ "body": "@outputs('Compose')",
+ "headers": {
+ "ReadFileMetadataFromServer": true
+ },
+ "host": {
+ "connection": {
+ "name": "@parameters('$connections')['azureblob']['connectionId']"
+ }
+ },
+ "method": "post",
+ "path": "/v2/datasets/@{encodeURIComponent(encodeURIComponent('AccountNameFromSettings'))}/files",
+ "queries": {
+ "folderPath": "/logicappexport",
+ "name": "@{utcNow()}",
+ "queryParametersSingleEncoded": true
+ }
+ },
+ "runAfter": {
+ "Compose": [
+ "Succeeded"
+ ]
+ },
+ "runtimeConfiguration": {
+ "contentTransfer": {
+ "transferMode": "Chunked"
+ }
+ },
+ "type": "ApiConnection"
+ },
+ "Run_query_and_list_results": {
+ "inputs": {
+ "body": "let dt = now();\nlet year = datetime_part('year', dt);\nlet month = datetime_part('month', dt);\nlet day = datetime_part('day', dt);\n let hour = datetime_part('hour', dt);\nlet startTime = make_datetime(year,month,day,hour,0)-1h;\nlet endTime = startTime + 1h - 1tick;\nAzureActivity\n| where ingestion_time() between(startTime .. endTime)\n| project \n TimeGenerated,\n BlobTime = startTime, \n OperationName ,\n OperationNameValue ,\n Level ,\n ActivityStatus ,\n ResourceGroup ,\n SubscriptionId ,\n Category ,\n EventSubmissionTimestamp ,\n ClientIpAddress = parse_json(HTTPRequest).clientIpAddress ,\n ResourceId = _ResourceId ",
+ "host": {
+ "connection": {
+ "name": "@parameters('$connections')['azuremonitorlogs']['connectionId']"
+ }
+ },
+ "method": "post",
+ "path": "/queryData",
+ "queries": {
+ "resourcegroups": "resource-group-name",
+ "resourcename": "workspace-name",
+ "resourcetype": "Log Analytics Workspace",
+ "subscriptions": "workspace-subscription-id",
+ "timerange": "Set in query"
+ }
+ },
+ "runAfter": {},
+ "type": "ApiConnection"
+ }
+ },
+ "contentVersion": "1.0.0.0",
+ "outputs": {},
+ "parameters": {
+ "$connections": {
+ "defaultValue": {},
+ "type": "Object"
+ }
+ },
+ "triggers": {
+ "Recurrence": {
+ "evaluatedRecurrence": {
+ "frequency": "Day",
+ "interval": 1
+ },
+ "recurrence": {
+ "frequency": "Day",
+ "interval": 1
+ },
+ "type": "Recurrence"
+ }
+ }
+ },
+ "parameters": {
+ "$connections": {
+ "value": {
+ "azureblob": {
+ "connectionId": "/subscriptions/logic-app-subscription-id/resourceGroups/logic-app-resource-group-name/providers/Microsoft.Web/connections/blob-connection-name",
+ "connectionName": "blob-connection-name",
+ "id": "/subscriptions/logic-app-subscription-id/providers/Microsoft.Web/locations/canadacentral/managedApis/azureblob"
+ },
+ "azuremonitorlogs": {
+ "connectionId": "/subscriptions/blob-connection-name/resourceGroups/logic-app-resource-group-name/providers/Microsoft.Web/connections/azure-monitor-logs-connection-name",
+ "connectionName": "azure-monitor-logs-connection-name",
+ "id": "/subscriptions/blob-connection-name/providers/Microsoft.Web/locations/canadacentral/managedApis/azuremonitorlogs"
+ }
+ }
+ }
+ }
+}
+```
+ ## Next steps - Learn more about [log queries in Azure Monitor](./log-query-overview.md).
azure-monitor Profiler Servicefabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-servicefabric.md
In this article, you:
## Prerequisites -- Profiler supports .NET Framework, .NET Core, and .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) and newer applications.
- - Verify you're using [.NET Framework 4.6.1](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or later.
+- Profiler supports .NET Framework and .NET applications.
+ - Verify you're using [.NET Framework 4.6.2](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or later.
- Confirm that the deployed OS is `Windows Server 2012 R2` or later. - [An Azure Service Fabric managed cluster](../../service-fabric/quickstart-managed-cluster-portal.md).
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-vm.md
The following example shows a configuration equivalent to the default configurat
Snapshots are collected _only_ on exceptions reported to Application Insights. In some cases (for example, older versions of the .NET platform), you might need to [configure exception collection](../app/asp-net-exceptions.md#exceptions) to see exceptions with snapshots in the portal.
-## Configure snapshot collection for apps by using ASP.NET Core LTS or above
+
+## Configure snapshot collection for applications using ASP.NET Core
### Prerequisites
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
This section lists the applications and environments that are supported.
Snapshot collection is available for: -- .NET Framework and ASP.NET applications running .NET Framework 4.6.2 and newer versions.-- .NET and ASP.NET applications running .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) and newer versions on Windows.-- .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) (and newer versions) applications on Windows.-
-.NET Core versions prior to LTS are out of support and we don't recommend their use.
+- .NET Framework 4.6.2 and newer versions.
+- [.NET 6.0 or later](https://dotnet.microsoft.com/download) on Windows.
### Environments
azure-netapp-files Troubleshoot Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-volumes.md
This article describes error messages and resolutions that can help you troubles
|`Error allocating volume - Export policy rules does not match kerberosEnabled flag` | Azure NetApp Files does not support Kerberos for NFSv3 volumes. Kerberos is supported only for the NFSv4.1 protocol. | |`This NetApp account has no configured Active Directory connections` | Configure Active Directory for the NetApp account with fields **KDC IP** and **AD Server Name**. See [Configure the Azure portal](configure-kerberos-encryption.md#configure-the-azure-portal) for instructions. | |`Mismatch between KerberosEnabled flag value and ExportPolicyRule's access type parameter values.` | Azure NetApp Files does not support converting a plain NFSv4.1 volume to Kerberos NFSv4.1 volume, and vice-versa. |
-|`mount.nfs: access denied by server when mounting volume <SMB_SERVER_NAME-XXX.DOMAIN_NAME>/<VOLUME_NAME>` <br> Example: `smb-test-64d9.contoso.com:/nfs41-vol101` | <ol><li> Ensure that the A/PTR records are properly set up and exist in the Active Directory for the server name `smb-test-64d9.contoso.com`. <br> In the NFS client, if `nslookup` of `smb-test-64d9.contoso.com` resolves to IP address IP1 (that is, `10.1.1.68`), then `nslookup` of IP1 must resolve to only one record (that is, `smb-test-64d9.contoso.com`). `nslookup` of IP1 *must* not resolve to multiple names. </li> <li>Set AES-256 for the NFS machine account of type `NFS-<Smb NETBIOS NAME>-<few random characters>` on AD using either PowerShell or the UI. <br> Example commands: <ul><li>`Set-ADComputer <NFS_MACHINE_ACCOUNT_NAME> -KerberosEncryptionType AES256` </li><li>`Set-ADComputer NFS-SMB-TEST-64 -KerberosEncryptionType AES256` </li></ul> </li> <li>Ensure that the time of the NFS client, AD, and Azure NetApp Files storage software is synchronized with each other and is within a five-minute skew range. </li> <li>Get the Kerberos ticket on the NFS client using the command `kinit <administrator>`.</li> <li>Reduce the NFS client hostname to fewer than 15 characters and perform the realm join again. </li><li>Restart the NFS client and the `rpcgssd` service as follows. The command might vary depending on the OS.<br> RHEL 7: <br> `service nfs restart` <br> `service rpcgssd restart` <br> CentOS 8: <br> `systemctl enable nfs-client.target && systemctl start nfs-client.target` <br> Ubuntu: <br> (Restart the `rpc-gssd` service.) <br> `sudo systemctl start rpc-gssd.service` </ul>|
+|`mount.nfs: access denied by server when mounting volume <SMB_SERVER_NAME-XXX.DOMAIN_NAME>/<VOLUME_NAME>` <br> Example: `smb-test-64d9.contoso.com:/nfs41-vol101` | <ol><li> Ensure that the A/PTR records are properly set up and exist in the Active Directory for the server name `smb-test-64d9.contoso.com`. <br> In the NFS client, if `nslookup` of `smb-test-64d9.contoso.com` resolves to IP address IP1 (that is, `10.1.1.68`), then `nslookup` of IP1 must resolve to only one record (that is, `smb-test-64d9.contoso.com`). `nslookup` of IP1 *must* not resolve to multiple names. </li> <li>Set AES-256 for the NFS machine account of type `NFS-<Smb NETBIOS NAME>-<few random characters>` on AD using either PowerShell or the UI. <br> Example commands: <ul><li>`Set-ADComputer <NFS_MACHINE_ACCOUNT_NAME> -KerberosEncryptionType AES256` </li><li>`Set-ADComputer NFS-SMB-TEST-64 -KerberosEncryptionType AES256` </li></ul> </li> <li>Ensure that the time of the NFS client, AD, and Azure NetApp Files storage software is synchronized with each other and is within a five-minute skew range. </li> <li>Get the Kerberos ticket on the NFS client using the command `kinit <administrator>`.</li> <li>Reduce the NFS client hostname to fewer than 15 characters and perform the realm join again. </li><li>Restart the NFS client and the `rpc-gssd` service as follows. The exact service names may vary on some Linux distributions.<br>Most current distributions use the same service names. Perform the following as root or with `sudo`<br> `systemctl enable nfs-client.target && systemctl start nfs-client.target`<br>(Restart the `rpc-gssd` service.) <br> `systemctl restart rpc-gssd.service` </ul>|
|`mount.nfs: an incorrect mount option was specified` | The issue might be related to the NFS client issue. Reboot the NFS client. | |`Hostname lookup failed` | You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `10.1.1.4`, the hostname of the AD machine (as found by using the hostname command) is `AD1`, and the domain name is `contoso.com`. The PTR record added to the reverse lookup zone should be `10.1.1.4 -> AD1.contoso.com`. | |`Volume creation fails due to unreachable DNS server` | Two possible solutions are available: <br> <ul><li> This error indicates that DNS is not reachable. The reason might be an incorrect DNS IP or a networking issue. Check the DNS IP entered in AD connection and make sure that the IP is correct. </li> <li> Make sure that the AD and the volume are in same region and in same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets. </li></ul> |
azure-portal Capture Browser Trace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/capture-browser-trace.md
Title: Capture a browser trace for troubleshooting description: Capture network information from a browser trace to help troubleshoot issues with the Azure portal. Previously updated : 02/24/2023 Last updated : 05/01/2023
You can capture this information any [supported browser](azure-portal-supported-
The following steps show how to use the developer tools in Microsoft Edge. For more information, see [Microsoft Edge DevTools](/microsoft-edge/devtools-guide-chromium).
+> [!NOTE]
+> The screenshots below show the DevTools in Focus Mode with a vertical **Activity Bar**. Depending on your settings, your configuration may look different. For more information, see [Simplify DevTools using Focus Mode](/microsoft-edge/devtools-guide-chromium/experimental-features/focus-mode).
+ 1. Sign in to the [Azure portal](https://portal.azure.com). It's important to sign in _before_ you start the trace so that the trace doesn't contain sensitive information related to your account. 1. Start recording the steps you take in the portal, using [Steps Recorder](https://support.microsoft.com/windows/record-steps-to-reproduce-a-problem-46582a9b-620f-2e36-00c9-04e25d784e47). 1. In the portal, navigate to the step prior to where the issue occurs.
-1. Press F12 to launch the developer tools. You can also launch the tools from the toolbar menu under **More tools** > **Developer tools**.
+1. Press F12 to launch Microsoft Edge DevTools. You can also launch the tools from the toolbar menu under **More tools** > **Developer tools**.
-1. By default, the browser keeps trace information only for the page that's currently loaded. Set the following options so the browser keeps all trace information, even if your repro steps require going to more than one page:
+1. By default, the browser keeps trace information only for the page that's currently loaded. Set the following options so the browser keeps all trace information, even if your repro steps require going to more than one page.
1. Select the **Console** tab, select **Console settings**, then select **Preserve Log**. :::image type="content" source="media/capture-browser-trace/edge-console-preserve-log.png" alt-text="Screenshot that highlights the Preserve log option on the Console tab in Edge.":::
- 1. Select the **Network** tab, then select **Preserve log**.
+ 1. Select the **Network** tab. If that tab isn't visible, click the **More tools** (+) button and select **Network**. Then, from the **Network** tab, select **Preserve log**.
:::image type="content" source="media/capture-browser-trace/edge-network-preserve-log.png" alt-text="Screenshot that highlights the Preserve log option on the Network tab in Edge.":::
azure-resource-manager Bicep Functions Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-files.md
Title: Bicep functions - files
description: Describes the functions to use in a Bicep file to load content from a file. Previously updated : 10/10/2022 Last updated : 04/21/2023 # File functions for Bicep
The following example creates a JSON file that contains values for a network sec
You load that file and convert it to a JSON object. You use the object to assign values to the resource. +
+You can reuse the file of values in other Bicep files that deploy a network security group.
+
+## loadYamlContent
+
+`loadYamlContent(filePath, [pathFilter], [encoding])`
+
+Loads the specified YAML file as an Any object.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+| --- | --- | --- | --- |
+| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file. It can't include variables. |
+| pathFilter | No | string | The path filter is a JSONPath expression to specify that only part of the file is loaded. |
+| encoding | No | string | The file encoding. The default value is `utf-8`. The available options are: `iso-8859-1`, `us-ascii`, `utf-16`, `utf-16BE`, or `utf-8`. |
+
+### Remarks
+
+Use this function when you have YAML content or minified YAML content that is stored in a separate file. Rather than duplicating the YAML content in your Bicep file, load the content with this function. You can load a part of a YAML file by specifying a path filter. The file is loaded when the Bicep file is compiled to the JSON template. You can't include variables in the file path because they haven't been resolved when compiling to the template. During deployment, the JSON template contains the contents of the file as a hard-coded string.
+
+In VS Code, the properties of the loaded object are available through IntelliSense. For example, you can create a file with values to share across many Bicep files. An example is shown in this article.
+
+This function requires **Bicep version >0.16.2**.
+
+The maximum allowed size of the file is **1,048,576 characters**, including line endings.
+
+### Return value
+
+The contents of the file as an Any object.
+
+### Examples
+
+The following example creates a YAML file that contains values for a network security group.
++
+You load that file and convert it to a JSON object. You use the object to assign values to the resource.
+ You can reuse the file of values in other Bicep files that deploy a network security group.
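The include that carries the article's own sample isn't reproduced in this digest, so here's a minimal sketch of the pattern instead; the file name `rules.yaml`, the resource name, and all property values are hypothetical, not the article's sample:

```bicep
// Hypothetical contents of rules.yaml:
//   description: Allow HTTPS inbound
//   protocol: Tcp
//   sourcePortRange: '*'
//   destinationPortRange: '443'
//   sourceAddressPrefix: '*'
//   destinationAddressPrefix: '*'
//   access: Allow
//   priority: 100
//   direction: Inbound

// Load the YAML file as an object when the Bicep file is compiled.
var rule = loadYamlContent('rules.yaml')

resource nsg 'Microsoft.Network/networkSecurityGroups@2022-11-01' = {
  name: 'example-nsg'
  location: resourceGroup().location
  properties: {
    securityRules: [
      {
        name: 'AllowHttps'
        properties: {
          description: rule.description
          protocol: rule.protocol
          sourcePortRange: rule.sourcePortRange
          destinationPortRange: rule.destinationPortRange
          sourceAddressPrefix: rule.sourceAddressPrefix
          destinationAddressPrefix: rule.destinationAddressPrefix
          access: rule.access
          priority: rule.priority
          direction: rule.direction
        }
      }
    ]
  }
}
```

Per the parameters table above, a JSONPath `pathFilter` can load a single property instead of the whole file, for example `loadYamlContent('rules.yaml', '$.priority')`.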
azure-resource-manager Bicep Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions.md
Title: Bicep functions
description: Describes the functions to use in a Bicep file to retrieve values, work with strings and numerics, and retrieve deployment information. Previously updated : 07/05/2022 Last updated : 04/21/2023 # Bicep functions
The following functions are available for loading the content from external file
* [loadFileAsBase64](bicep-functions-files.md#loadfileasbase64) * [loadJsonContent](bicep-functions-files.md#loadjsoncontent)
+* [loadYamlContent](bicep-functions-files.md#loadyamlcontent)
* [loadTextContent](bicep-functions-files.md#loadtextcontent) ## Lambda functions
azure-resource-manager Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md
Title: Iterative loops in Bicep
description: Use loops to iterate over collections in Bicep Previously updated : 12/09/2022 Last updated : 05/02/2023 # Iterative loops in Bicep
The next example creates the number of storage accounts specified in the `storag
param location string = resourceGroup().location param storageCount int = 2
-resource storageAcct 'Microsoft.Storage/storageAccounts@2021-06-01' = [for i in range(0, storageCount): {
+resource storageAcct 'Microsoft.Storage/storageAccounts@2022-09-01' = [for i in range(0, storageCount): {
name: '${i}storage${uniqueString(resourceGroup().id)}' location: location sku: {
module stgModule './storageAccount.bicep' = [for i in range(0, storageCount): {
location: location } }]+
+output storageAccountEndpoints array = [for i in range(0, storageCount): {
+ endpoint: stgModule[i].outputs.storageEndpoint
+}]
``` ## Array elements
param storageNames array = [
'coho' ]
-resource storageAcct 'Microsoft.Storage/storageAccounts@2021-06-01' = [for name in storageNames: {
+resource storageAcct 'Microsoft.Storage/storageAccounts@2022-09-01' = [for name in storageNames: {
name: '${name}${uniqueString(resourceGroup().id)}' location: location sku: {
var storageConfigurations = [
} ]
-resource storageAccountResources 'Microsoft.Storage/storageAccounts@2021-06-01' = [for (config, i) in storageConfigurations: {
+resource storageAccountResources 'Microsoft.Storage/storageAccounts@2022-09-01' = [for (config, i) in storageConfigurations: {
name: '${storageAccountNamePrefix}${config.suffix}${i}' location: resourceGroup().location sku: {
To serially deploy instances of a resource, add the [batchSize decorator](./file
param location string = resourceGroup().location @batchSize(2)
-resource storageAcct 'Microsoft.Storage/storageAccounts@2021-06-01' = [for i in range(0, 4): {
+resource storageAcct 'Microsoft.Storage/storageAccounts@2022-09-01' = [for i in range(0, 4): {
name: '${i}storage${uniqueString(resourceGroup().id)}' location: location sku: {
You can't use a loop for a nested child resource. To create more than one instan
For example, suppose you typically define a file service and file share as nested resources for a storage account. ```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2021-06-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' = {
name: 'examplestorage' location: resourceGroup().location kind: 'StorageV2'
To create more than one file share, move it outside of the storage account. You
The following example shows how to create a storage account, file service, and more than one file share: ```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2021-06-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' = {
name: 'examplestorage' location: resourceGroup().location kind: 'StorageV2'
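  sku: {
    name: 'Standard_LRS' // assumed value; the original listing is truncated above this point
  }
}

// Hedged sketch of the rest of the pattern described above: declare the file
// service and file shares outside the storage account and link them back with
// the parent property, so the shares can be created in a loop. The share names
// here are hypothetical.
resource service 'Microsoft.Storage/storageAccounts/fileServices@2022-09-01' = {
  parent: stg
  name: 'default'
}

resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2022-09-01' = [for shareName in ['exampleshare1', 'exampleshare2']: {
  parent: service
  name: shareName
}]
```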
azure-resource-manager Publish Service Catalog Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-bring-your-own-storage.md
To publish a managed application definition to your service catalog, do the foll
- Create a storage account where you store the managed application definition. - Deploy the managed application definition to your own storage account so it's available in your service catalog.
-If you're managed application definition is less than 120 MB and you don't want to use your own storage account, go to [Quickstart: Create and publish an Azure Managed Application definition](publish-service-catalog-app.md).
+If your managed application definition is less than 120 MB and you don't want to use your own storage account, go to [Quickstart: Create and publish an Azure Managed Application definition](publish-service-catalog-app.md).
> [!NOTE] > You can use Bicep to develop a managed application definition but it must be converted to ARM template JSON before you can publish the definition in Azure. To convert Bicep to JSON, use the Bicep [build](../bicep/bicep-cli.md#build) command. After the file is converted to JSON it's recommended to verify the code for accuracy.
azure-vmware Attach Disk Pools To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md
- Title: Attach Azure disk pools to Azure VMware Solution hosts
-description: Learn how to attach an Azure disk pool surfaced through an iSCSI target as the VMware vSphere datastore of an Azure VMware Solution private cloud. Once the datastore is configured, you can create volumes on it and consume them from your Azure VMware Solution private cloud.
-- Previously updated : 11/02/2021
-#Customer intent: As an Azure service administrator, I want to scale my AVS hosts using disk pools instead of scaling clusters. So that I can use block storage for active working sets and tier less frequently accessed data from vSAN to disks. I can also replicate data from on-premises or primary VMware vSphere environment to disk storage for the secondary site.
---
-# Attach disk pools to Azure VMware Solution hosts
-
-[Azure disk pools](../virtual-machines/disks-pools.md) offer persistent block storage to applications and workloads backed by Azure Disks. You can use disks as the persistent storage for Azure VMware Solution for optimal cost and performance. For example, you can scale up by using disk pools instead of scaling clusters if you host storage-intensive workloads. You can also use disks to replicate data from on-premises or primary VMware vSphere environments to disk storage for the secondary site. To scale storage independent of the Azure VMware Solution hosts, we support surfacing [ultra disks](../virtual-machines/disks-types.md#ultra-disks), [premium SSD](../virtual-machines/disks-types.md#premium-ssds) and [standard SSD](../virtual-machines/disks-types.md#standard-ssds) as the datastores.
-
->[!IMPORTANT]
->We are officially halting the preview of Azure Disk Pools, and it will not be made generally available.
->New customers will not be able to register the `Microsoft.StoragePool` resource provider on their subscription and deploy new Disk Pools.
-> Existing subscriptions registered with Microsoft.StoragePool may continue to deploy and manage disk pools for the time being.
-
-Azure managed disks are attached to one iSCSI controller virtual machine deployed under the Azure VMware Solution resource group. Disks get deployed as storage targets to a disk pool, and each storage target shows as an iSCSI LUN under the iSCSI target. You can expose a disk pool as an iSCSI target connected to Azure VMware Solution hosts as a datastore. A disk pool surfaces as a single endpoint for all underlying disks added as storage targets. Each disk pool can have only one iSCSI controller.
-
-The diagram shows how disk pools work with Azure VMware Solution hosts. Each iSCSI controller accesses managed disk using a standard Azure protocol, and the Azure VMware Solution hosts can access the iSCSI controller over iSCSI.
----
-## Supported regions
-
-You can only connect the disk pool to an Azure VMware Solution private cloud in the same region. For a list of supported regions, see [Regional availability](../virtual-machines/disks-pools.md#regional-availability). If your private cloud is deployed in a non-supported region, you can redeploy it in a supported region. Azure VMware Solution private cloud and disk pool colocation provide the best performance with minimal network latency.
--
-## Prerequisites
--- Scalability and performance requirements of your workloads are identified. For details, see [Planning for Azure disk pools](../virtual-machines/disks-pools-planning.md).--- [Azure VMware Solution private cloud](deploy-azure-vmware-solution.md) deployed with a [virtual network configured](deploy-azure-vmware-solution.md#connect-to-azure-virtual-network-with-expressroute). For more information, see [Network planning checklist](tutorial-network-checklist.md) and [Configure networking for your VMware private cloud](tutorial-configure-networking.md). -
- - If you select ultra disks, use an Ultra Performance ExpressRoute virtual network gateway for the disk pool network connection to your Azure VMware Solution private cloud and then [enable ExpressRoute FastPath](../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath).
-
- - If you select premium SSDs or standard SSDs, use a Standard (1 Gbps) or High Performance (2 Gbps) ExpressRoute virtual network gateway for the disk pool network connection to your Azure VMware Solution private cloud.
--- You must use Standard\_DS##\_v3 to host iSCSI. If you encounter quota issues, request an increase in [vCPU quota limits](../azure-portal/supportability/per-vm-quota-requests.md) per Azure VM series for Dsv3 series.--- Disk pool as the backing storage deployed and exposed as an iSCSI target with each disk as an individual LUN. For details, see [Deploy an Azure disk pool](../virtual-machines/disks-pools-deploy.md).-
- >[!IMPORTANT]
- > The disk pool must be deployed in the same subscription as the VMware cluster, and it must be attached to the same VNET as the VMware cluster.
-
-## Add a disk pool to your private cloud
-You'll attach to a disk pool surfaced through an iSCSI target as the VMware datastore of an Azure VMware Solution private cloud.
-
->[!IMPORTANT]
->While in **Public Preview**, only attach a disk pool to a test or non-production cluster.
-
-# [Azure CLI](#tab/azure-cli)
-
-Check if the subscription is registered to `Microsoft.AVS`.
-
-```azurecli
-az provider show -n "Microsoft.AVS" --query registrationState
-```
-
-If it's not already registered, then register it.
-
-```azurecli
-az provider register -n "Microsoft.AVS"
-```
-
-Check if the subscription is registered to `CloudSanExperience` AFEC in Microsoft.AVS.
-
-```azurecli
-az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS"
-```
-
-If it's not already registered, then register it.
-
-```azurecli
-az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"
-```
-
-The registration may take approximately 15 minutes to complete, you can use the following command to check status:
-
-```azurecli
-az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS" --query properties.state
-```
-
->[!TIP]
->If the registration is stuck in an intermediate state for longer than 15 minutes to complete, unregister and then re-register the flag.
->
->```azurecli
->az feature unregister --name "CloudSanExperience" --namespace "Microsoft.AVS"
->az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"
->```
-
-Check if the `vmware `extension is installed.
-
-```azurecli
-az extension show --name vmware
-```
-
-If the extension is already installed, check if the version is **3.0.0**. If an older version is installed, update the extension.
-
-```azurecli
-az extension update --name vmware
-```
-
-If it's not already installed, install it.
-
-```azurecli
-az extension add --name vmware
-```
-
-### Attach the iSCSI LUN
-
-Create and attach an iSCSI datastore in the Azure VMware Solution private cloud cluster using `Microsoft.StoragePool` provided iSCSI target. The disk pool attaches to a virtual network through a delegated subnet, which is done with the Microsoft.StoragePool/diskPools resource provider. If the subnet isn't delegated, the deployment fails.
-
-```azurecli
-#Initialize input parameters
-resourceGroupName='<yourRGName>'
-name='<desiredDataStoreName>'
-cluster='<desiredCluster>'
-privateCloud='<privateCloud>'
-lunName='<desiredLunName>'
-
-az vmware datastore disk-pool-volume create --name $name --resource-group $resourceGroupName --cluster $cluster --private-cloud $privateCloud --target-id /subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/ResourceGroup1/providers/Microsoft.StoragePool/diskPools/mpio-diskpool/iscsiTargets/mpio-iscsi-target --lun-name $lunName
-```
-
->[!TIP]
->You can display the help on the datastores.
->
-> ```azurecli
-> az vmware datastore -h
-> ```
--
-To confirm that the attach succeeded, you can use the following commands:
-
-Show the details of an iSCSI datastore in a private cloud cluster.
-
-```azurecli
-az vmware datastore show --name MyCloudSANDatastore1 --resource-group MyResourceGroup --cluster -Cluster-1 --private-cloud MyPrivateCloud
-```
-
-List all the datastores in a private cloud cluster.
-
-```azurecli
-az vmware datastore list --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud
-```
-
-# [Portal](#tab/azure-portal)
-
-### Preview registration
-
-First, register your subscription to the Microsoft.AVS and CloudSanExperience.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Search for and select **Subscriptions**.
-1. Select the subscription you want to use and select **Resource providers** under **Settings**.
-1. Search for **Microsoft.AVS**, select it, and select **Register**.
-1. Select **Preview features** under **Settings**.
-1. Search for and register **CloudSanExperience**.
-
-### Connect your disk pool
-
-Now that your subscription has been properly registered, you can connect your disk pool to your Azure VMware Solution private cloud cluster.
-
-> [!IMPORTANT]
-> Your disk pool attaches to a virtual network through a delegated subnet, which is done with the Microsoft.StoragePool resource provider. If the subnet isn't delegated, the deployment fails. See [Delegate subnet permission](../virtual-machines/disks-pools-deploy.md#delegate-subnet-permission) for details.
-
-1. Navigate to your Azure VMware Solution.
-1. Select **Storage (preview)** under **Manage**.
-1. Select **Connect a disk pool**.
-1. Select the subscription you'd like to use.
-1. Select your disk pool, and the client cluster you'd like to connect it to.
-1. Enable your LUNs (if any), provide a datastore name (by default, the LUN is used), and select **Connect**.
--
-When the connection succeeds, you will see the datastores added in vCenter.
----
-## Disconnect a disk pool from your private cloud
-
-When you disconnect a disk pool, the disk pool resources aren't deleted. There's no maintenance window required for this operation. But, be careful when you do it.
-
-First, power off the VMs and remove all objects associated with the disk pool datastores, which includes:
-
- - VMs (remove from inventory)
-
- - Templates
-
- - Snapshots
-
-Then, delete the private cloud datastore.
-
-1. Navigate to your Azure VMware Solution in the Azure portal.
-1. Select **Storage** under **Manage**.
-1. Select the disk pool you want to disconnect from and select **Disconnect**.
--
-## Next steps
-
-Now that you've attached a disk pool to your Azure VMware Solution hosts, you may want to learn about:
-
-- [Managing an Azure disk pool](../virtual-machines/disks-pools-manage.md). Once you've deployed a disk pool, there are various management actions available to you. You can add or remove a disk to or from a disk pool, update iSCSI LUN mapping, or add ACLs.
-
-- [Deleting a disk pool](../virtual-machines/disks-pools-deprovision.md#delete-a-disk-pool). When you delete a disk pool, all the resources in the managed resource group are also deleted.
-
-- [Disabling iSCSI support on a disk](../virtual-machines/disks-pools-deprovision.md#disable-iscsi-support). If you disable iSCSI support on a disk pool, you effectively can no longer use a disk pool.
-
-- [Moving disk pools to a different subscription](../virtual-machines/disks-pools-move-resource.md). Move an Azure disk pool to a different subscription, which involves moving the disk pool itself, contained disks, managed resource group, and all the resources.
-
-- [Troubleshooting disk pools](../virtual-machines/disks-pools-troubleshoot.md). Review the common failure codes related to Azure disk pools (preview). It also provides possible resolutions and some clarity on disk pool statuses.
backup Backup Azure System State Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-system-state-troubleshoot.md
We recommend you perform the following validation steps, before you start troubl
- Ensure your OS has the latest updates
- [Ensure unsupported drives and files with unsupported attributes are excluded from backup](backup-support-matrix-mars-agent.md#supported-drives-or-volumes-for-backup)
- Ensure **System Clock** on the protected system is configured to correct time zone <br>
-- [Ensure that the server has at least .Net Framework version 4.5.2 and higher](https://www.microsoft.com/download/details.aspx?id=30653)<br>
+- [Ensure that the server has .NET Framework version 4.6.2 or later](https://dotnet.microsoft.com/download/dotnet-framework)
- If you're trying to **reregister your server** to a vault, then: <br> - Ensure the agent is uninstalled on the server and it's deleted from the portal <br> - Use the same passphrase that was initially used for registering the server <br>
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
Title: Back up Azure VMs with Enhanced policy description: Learn how to configure Enhanced policy to back up VMs. Previously updated : 04/05/2023 Last updated : 05/02/2023
The following screenshot shows _Multiple Backups_ occurred in a day.
## Create an Enhanced policy and configure VM backup
+**Choose a client**
+
+# [Azure portal](#tab/azure-portal)
++ Follow these steps: 1. In the Azure portal, select a Recovery Services vault to back up the VM.
Follow these steps:
6. Select **Create**. +
+# [PowerShell](#tab/powershell)
+
+To create an enhanced backup policy or update the policy, run the following cmdlets:
+
+**Step 1: Create the backup policy**
+
+```azurepowershell
+$SchPol = Get-AzRecoveryServicesBackupSchedulePolicyObject -PolicySubType "Enhanced" -WorkloadType "AzureVM" -ScheduleRunFrequency "Hourly"
+```
+
+The parameter `ScheduleRunFrequency:Hourly` is now also an acceptable value for the Azure VM workload.
+
+Also, the output object for this cmdlet contains the following additional fields for the Azure VM workload if you're creating an hourly policy:
+
+- `[-ScheduleWindowStartTime <DateTime>]`
+- `[-ScheduleRunTimezone <String>]`
+- `[-ScheduleInterval <Int>]`
+- `[-ScheduleWindowDuration <Int>]`
+
+**Step 2: Set the backup schedule objects**
+
+```azurepowershell
+$startTime = Get-Date -Date "2021-12-22T06:10:00.00+00:00"
+$SchPol.ScheduleRunStartTime = $startTime
+$SchPol.ScheduleInterval = 6
+$SchPol.ScheduleWindowDuration = 12
+$SchPol.ScheduleRunTimezone = "PST"
+
+```
+
+This sample cmdlet contains the following parameters:
+
+- `$ScheduleInterval`: Defines the difference (in hours) between two successive backups per day. Currently, the acceptable values are *4*, *6*, *8*, and *12*.
+
+- `$ScheduleWindowStartTime`: The time at which the first backup job is triggered in the case of *hourly backups*. The current limits (in the policy's timezone) are:
+ - `Minimum: 00:00`
+ - `Maximum: 19:30`
+
+- `$ScheduleRunTimezone`: Specifies the timezone in which backups are scheduled. The default is *UTC*.
+
+- `$ScheduleWindowDuration`: The time span (in hours, measured from the schedule window start time) beyond which backup jobs shouldn't be triggered. The current limits are:
+ - `Minimum: 4`
+ - `Maximum: 23`
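Taken together, the schedule limits above can be sketched as a small validation routine. This is an illustrative Python sketch, not part of the Az PowerShell module; the function name and structure are assumptions, and the service performs the real validation server-side.

```python
# Illustrative sketch of the documented limits for an hourly Enhanced policy.
from datetime import time

ALLOWED_INTERVALS = {4, 6, 8, 12}                   # hours between successive backups
WINDOW_START_MIN, WINDOW_START_MAX = time(0, 0), time(19, 30)
WINDOW_DURATION_MIN, WINDOW_DURATION_MAX = 4, 23    # hours

def validate_hourly_schedule(interval, window_start, window_duration):
    """Return True if the values satisfy the documented limits."""
    return (
        interval in ALLOWED_INTERVALS
        and WINDOW_START_MIN <= window_start <= WINDOW_START_MAX
        and WINDOW_DURATION_MIN <= window_duration <= WINDOW_DURATION_MAX
    )
```

For example, the values used in the sample above (`ScheduleInterval` 6, start time 06:10, window duration 12) pass this check, while an interval of 5 hours or a start time after 19:30 would not.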
+
+**Step 3: Create the backup retention policy**
+
+```azurepowershell
+$RetPol = Get-AzRecoveryServicesBackupRetentionPolicyObject -WorkloadType AzureVM -ScheduleRunFrequency "Hourly"
+```
+
+- The parameter `ScheduleRunFrequency:Hourly` is also an acceptable value for Azure VM workload.
+- If `ScheduleRunFrequency` is hourly, you don't need to enter a value for `RetentionTimes` to the policy object.
+
+**Step 4: Set the backup retention policy object**
+
+```azurepowershell
+$RetPol.DailySchedule.DurationCountInDays = 365
+
+```
+
+**Step 5: Save the policy configuration**
+
+```azurepowershell
+New-AzRecoveryServicesBackupProtectionPolicy -Name "NewPolicy" -WorkloadType AzureVM -RetentionPolicy $RetPol -SchedulePolicy $SchPol
+
+```
+
+For Enhanced policy, the allowed values for snapshot retention are from *1* day to *30* days.
+
+>[!Note]
+>The specific value depends on the hourly frequency. For example, when the hourly frequency is *4 hours*, the maximum retention allowed is *17 days*; for *6 hours*, it's *22 days*.
+++
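The frequency-dependent caps in the note can be expressed as a small lookup. This is an illustrative Python sketch; only the 4-hour and 6-hour caps are documented above, so other frequencies fall back to the general 1-30 day range, which is an assumption.

```python
# Hypothetical helper illustrating the documented snapshot-retention caps.
# Only the 4-hour (17 days) and 6-hour (22 days) caps appear in this article;
# other frequencies fall back to the general 30-day maximum (assumed).
DOCUMENTED_CAPS = {4: 17, 6: 22}

def max_snapshot_retention_days(hourly_frequency):
    return DOCUMENTED_CAPS.get(hourly_frequency, 30)

def is_valid_snapshot_retention(days, hourly_frequency):
    return 1 <= days <= max_snapshot_retention_days(hourly_frequency)
```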
+**Step 6: Update snapshot retention duration**
+
+```azurepowershell
+$bkpPol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "NewPolicy"
+$bkpPol.SnapshotRetentionInDays=10
+Set-AzRecoveryServicesBackupProtectionPolicy -Policy $bkpPol -VaultId <VaultId>
+
+```
+### List enhanced backup policies
+
+To view the existing enhanced policies, run the following cmdlet:
+
+```azurepowershell
+Get-AzRecoveryServicesBackupProtectionPolicy -PolicySubType "Enhanced"
+
+```
++
+For `Get-AzRecoveryServicesBackupProtectionPolicy`:
+- Use the new `PolicySubType` parameter. The allowed values are `Enhanced` and `Standard`. If you don't specify a value for this parameter, all policies (standard and enhanced) are listed.
+- The applicable parameter sets are `NoParamSet`, `WorkloadParamSet`, `WorkloadBackupManagementTypeParamSet`.
+- For non-VM workloads, allowed value is `Standard` only.
+
+>[!Note]
+>You can retrieve the sub type of policies. To list Standard backup policies, specify `Standard` as the value of this parameter. To list Enhanced backup policies for Azure VMs, specify `Enhanced` as the value of this parameter.
+++++
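The listing behavior described above (filter when a sub type is given, return everything otherwise) can be sketched as a simple filter. This is an illustrative Python sketch; the policy dictionaries are hypothetical stand-ins for cmdlet output, not an Azure SDK API.

```python
# Hypothetical sketch of the documented filtering behavior:
# no sub type -> all policies; "Standard" or "Enhanced" -> only that sub type.
def filter_policies(policies, policy_sub_type=None):
    if policy_sub_type is None:
        return list(policies)
    return [p for p in policies if p["PolicySubType"] == policy_sub_type]

policies = [
    {"Name": "DefaultPolicy", "PolicySubType": "Standard"},
    {"Name": "NewPolicy", "PolicySubType": "Enhanced"},
]
```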
+### Configure backup
+
+To configure backup of a Trusted launch VM or assign a new policy to the VM, run the following cmdlet:
+
+```azurepowershell
+$targetVault = Get-AzRecoveryServicesVault -ResourceGroupName "Contoso-docs-rg" -Name "testvault"
+$pol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "NewPolicy" -VaultId $targetVault.ID
+Enable-AzRecoveryServicesBackupProtection -Policy $pol -Name "V2VM" -ResourceGroupName "RGName1" -VaultId $targetVault.ID
+
+```
++++
+# [CLI](#tab/cli)
+
+To create an enhanced backup policy, run the following command:
+
+```azurecli
+az backup policy create --policy {policy} --resource-group MyResourceGroup --vault-name MyVault --name MyPolicy --backup-management-type AzureIaaSVM --policy-sub-type Enhanced
+```
+
+The policy is passed in JSON format to the create command.
+
+### Update an enhanced backup policy
+
+To update an enhanced backup policy, run the following command:
+
+```azurecli
+az backup policy set --policy {policy} --resource-group MyResourceGroup --vault-name MyVault --policy-sub-type Enhanced
+
+```
+
+### List enhanced backup policies
+
+To list all existing enhanced policies, run the following command:
+
+```azurecli
+az backup policy list --resource-group MyResourceGroup --vault-name MyVault --policy-sub-type Enhanced --workload-type VM
+
+```
+
+For the parameter `--policy-sub-type`, the allowed values are `Enhanced` and `Standard`. If you don't specify a value for this parameter, all policies (standard and enhanced) are listed.
+
+For non-VM workloads, the only allowed value is `Standard`.
++
+### Configure backup for a VM or assign a new policy to a VM
+
+To configure backup for a VM or assign a new policy to the VM, run the following command:
+
+```azurecli
+az backup protection enable-for-vm \
+ --resource-group myResourceGroup \
+ --vault-name myRecoveryServicesVault \
+ --vm $(az vm show -g VMResourceGroup -n MyVm --query id | tr -d '"') \
+ --policy-name DefaultPolicy
+
+```
+
+Trusted Launch VMs can only be backed up using Enhanced policies.
+
+>[!Note]
+>- Currently, a non-Trusted Launch VM that was earlier using Standard policy can't start using Enhanced policy.
+>- A VM that is using Enhanced policy can't be updated to use Standard policy.
+++
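The policy-assignment constraints in the note can be sketched as a simple transition check. This is an illustrative Python sketch; the function and its arguments are assumptions for illustration, not an Azure SDK API.

```python
# Hypothetical sketch of the documented policy-assignment rules:
# - Trusted Launch VMs can only use an Enhanced policy.
# - A VM already on an Enhanced policy can't move back to Standard.
# - A non-Trusted Launch VM currently on Standard can't move to Enhanced.
def can_assign_policy(vm_is_trusted_launch, current_policy, new_policy):
    if vm_is_trusted_launch:
        return new_policy == "Enhanced"
    if current_policy == "Enhanced" and new_policy == "Standard":
        return False
    if current_policy == "Standard" and new_policy == "Enhanced":
        return False
    return True
```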
+...
+++ >[!Note] >- The support for Enhanced policy is available in all Azure Public and US Government regions. >- We support Enhanced policy configuration through [Recovery Services vault](./backup-azure-arm-vms-prepare.md) and [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm) only. Configuration through Backup center is currently not supported.
backup Blob Backup Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-configure-manage.md
Title: Configure and manage backup for Azure Blobs using Azure Backup description: Learn how to configure and manage operational and vaulted backups for Azure Blobs. Previously updated : 03/10/2023 Last updated : 05/02/2023
To create a backup policy, follow these steps:
1. Go to **Backup center**, and then select **+ Policy**. This takes you to the create policy experience.
+ :::image type="content" source="./media/blob-backup-configure-manage/add-policy-inline.png" alt-text="Screenshot shows how to initiate adding backup policy for vaulted blob backup." lightbox="./media/blob-backup-configure-manage/add-policy-expanded.png":::
+ 2. Select the *data source type* as **Azure Blobs (Azure Storage)**, and then select **Continue**.
+ :::image type="content" source="./media/blob-backup-configure-manage/datasource-type-selection-for-vaulted-blob-backup.png" alt-text="Screenshot shows how to select datasource type for vaulted blob backup.":::
+ 3. On the **Basics** tab, enter a name for the policy and select the vault you want this policy to be associated with.
+ :::image type="content" source="./media/blob-backup-configure-manage/add-vaulted-backup-policy-name.png" alt-text="Screenshot shows how to add vaulted blob backup policy name.":::
+
   You can view the details of the selected vault in this tab, and then select **Continue**.

4. On the **Schedule + retention** tab, enter the *backup details* of the data store, schedule, and retention for these data stores, as applicable.
To create a backup policy, follow these steps:
- **Vaulted backups**: Choose the frequency of backups between *daily* and *weekly*, specify the schedule when the backup recovery points need to be created, and then edit the default retention rule (selecting **Edit**) or add new rules to specify the retention of recovery points using a *grandparent-parent-child* notation. - **Operational backups**: These are continuous and don't require a schedule. Edit the default rule for operational backups to specify the required retention.
+ :::image type="content" source="./media/blob-backup-configure-manage/define-vaulted-backup-schedule-and-retention-inline.png" alt-text="Screenshot shows how to configure vaulted blob backup schedule and retention." lightbox="./media/blob-backup-configure-manage/define-vaulted-backup-schedule-and-retention-expanded.png":::
+ 5. Go to **Review and create**. 6. Once the review is complete, select **Create**.
To configure backup for storage accounts, follow these steps:
1. Go to **Backup center** > **Overview**, and then select **+ Backup**.
+ :::image type="content" source="./media/blob-backup-configure-manage/start-vaulted-backup.png" alt-text="Screenshot shows how to initiate vaulted blob backup.":::
+ 2. On the **Initiate: Configure Backup** tab, choose **Azure Blobs (Azure Storage)** as the **Datasource type**.
+ :::image type="content" source="./media/blob-backup-configure-manage/choose-datasource-for-vaulted-backup.png" alt-text="Screenshot shows how to initiate configuring vaulted blob backup.":::
+ 3. On the **Basics** tab, specify **Azure Blobs (Azure Storage)** as the **Datasource type**, and then select the *Backup vault* that you want to associate with your storage accounts. You can view details of the selected vault on this tab, and then select **Next**.+
+ :::image type="content" source="./media/blob-backup-configure-manage/select-datasource-type-for-vaulted-backup.png" alt-text="Screenshot shows how to select datasource type to initiate vaulted blob backup.":::
4. Select the *backup policy* that you want to use for retention. You can view the details of the selected policy. You can also create a new backup policy, if needed. Once done, select **Next**.
+ :::image type="content" source="./media/blob-backup-configure-manage/select-policy-for-vaulted-backup.png" alt-text="Screenshot shows how to select policy for vaulted blob backup.":::
+ 5. On the **Datasources** tab, select the *storage accounts* you want to back up.
+ :::image type="content" source="./media/blob-backup-configure-manage/select-storage-account-for-vaulted-backup.png" alt-text="Screenshot shows how to select storage account for vaulted blob backup." lightbox="./media/blob-backup-configure-manage/select-storage-account-for-vaulted-backup.png":::
+ You can select multiple storage accounts in the region to back up using the selected policy. Search or filter the storage accounts, if required.
- If you have chosen the vaulted backup policy in step 4, you can also select specific containers to backup. Click "Change" under the "Selected containers" column. In the context blade, choose "browse containers to backup" and unselect the ones you don't want to backup.
+ If you've chosen the vaulted backup policy in step 4, you can also select specific containers to back up. Select **Change** under the **Selected containers** column. In the context blade, choose **Browse containers to back up** and unselect the ones you don't want to back up.
6. When you select the storage accounts and containers to protect, Azure Backup performs the following validations to ensure all prerequisites are met. The **Backup readiness** column shows if the Backup vault has enough permissions to configure backups for each storage account.
To configure backup for storage accounts, follow these steps:
To do this, select the storage accounts, and then select **Download role assignment template** to download the template. Once the role assignments are complete, select **Revalidate** to validate the permissions again, and then configure backup.
+ :::image type="content" source="./media/blob-backup-configure-manage/vaulted-backup-role-assignment-success.png" alt-text="Screenshot shows that the role assignment is successful.":::
+ >[!Note] >The template contains details for selected storage accounts only. So, if there are multiple users that need to assign roles for different storage accounts, you can select and download different templates accordingly.
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
Title: Container workloads on Azure Batch description: Learn how to run and scale apps from container images on Azure Batch. Create a pool of compute nodes that support running container tasks. Previously updated : 04/05/2023 Last updated : 05/01/2023 ms.devlang: csharp, python
If the container image for a Batch task is configured with an [ENTRYPOINT](https
- To use the default ENTRYPOINT of the container image, set the task command line to the empty string `""`. -- To override the default ENTRYPOINT, or if the image doesn't have an ENTRYPOINT, set a command line appropriate for the container, for example, `/app/myapp` or `/bin/sh -c python myscript.py`.
+- To override the default ENTRYPOINT, add the `--entrypoint` argument, for example: `--entrypoint "/bin/sh"`.
+
+- If the image doesn't have an ENTRYPOINT, set a command line appropriate for the container, for example, `/app/myapp` or `/bin/sh -c python myscript.py`
Optional [ContainerRunOptions](/dotnet/api/microsoft.azure.batch.taskcontainersettings.containerrunoptions) are additional arguments you provide to the `docker create` command that Batch uses to create and run the container. For example, to set a working directory for the container, set the `--workdir <directory>` option. See the [docker create](https://docs.docker.com/engine/reference/commandline/create/) reference for additional options.
batch Batch Mpi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-mpi.md
myCloudPool.TaskSlotsPerNode = 1;
> [!NOTE] > If you try to run a multi-instance task in a pool with internode communication disabled, or with a *taskSlotsPerNode* value greater than 1, the task is never scheduled--it remains indefinitely in the "active" state.
+>
+> Pools with `InterComputeNodeCommunication` enabled don't automatically allow nodes to be deprovisioned.
### Use a StartTask to install MPI
Sample complete, hit ENTER to exit...
## Next steps - Read more about [MPI support for Linux on Azure Batch](/archive/blogs/windowshpc/introducing-mpi-support-for-linux-on-azure-batch).-- Learn how to [create pools of Linux compute nodes](batch-linux-nodes.md) for use in your Azure Batch MPI solutions.
+- Learn how to [create pools of Linux compute nodes](batch-linux-nodes.md) for use in your Azure Batch MPI solutions.
batch Batch Pool Compute Intensive Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-compute-intensive-sizes.md
Title: Use compute-intensive Azure VMs with Batch description: How to take advantage of HPC and GPU virtual machine sizes in Azure Batch pools. Learn about OS dependencies and see several scenario examples. Previously updated : 04/26/2023 Last updated : 05/01/2023 # Use RDMA or GPU instances in Batch pools
The RDMA or GPU capabilities of compute-intensive sizes in Batch are supported o
| Size | Capability | Operating systems | Required software | Pool settings | | -- | -- | -- | -- | -- |
-| [H16r, H16mr, A8, A9](../virtual-machines/sizes-hpc.md)<br/>[NC24r, NC24rs_v2, NC24rs_v3, ND24rs<sup>*</sup>](../virtual-machines/linux/n-series-driver-setup.md#rdma-network-connectivity) | RDMA | Ubuntu 22.04 LTS, or<br/>CentOS-based HPC<br/>(Azure Marketplace) | Intel MPI 5<br/><br/>Linux RDMA drivers | Enable inter-node communication, disable concurrent task execution |
-| [NC, NCv2, NCv3, NDv2 series](../virtual-machines/linux/n-series-driver-setup.md) | NVIDIA Tesla GPU (varies by series) | Ubuntu 22.04 LTS, or<br/>CentOS 7.3 or 7.4<br/>(Azure Marketplace) | NVIDIA CUDA or CUDA Toolkit drivers | N/A |
-| [NV, NVv2 series](../virtual-machines/linux/n-series-driver-setup.md) | NVIDIA Tesla M60 GPU | Ubuntu 22.04 LTS, or<br/>CentOS 7.3<br/>(Azure Marketplace) | NVIDIA GRID drivers | N/A |
+| [H16r, H16mr](../virtual-machines/sizes-hpc.md)<br/>[NC24r, NC24rs_v2, NC24rs_v3, ND24rs<sup>*</sup>](../virtual-machines/linux/n-series-driver-setup.md#rdma-network-connectivity) | RDMA | Ubuntu 22.04 LTS, or<br/>CentOS-based HPC<br/>(Azure Marketplace) | Intel MPI 5<br/><br/>Linux RDMA drivers | Enable inter-node communication, disable concurrent task execution |
+| [NC, NCv2, NCv3, NDv2 series](../virtual-machines/linux/n-series-driver-setup.md) | NVIDIA Tesla GPU (varies by series) | Ubuntu 22.04 LTS, or<br/>CentOS 8.1<br/>(Azure Marketplace) | NVIDIA CUDA or CUDA Toolkit drivers | N/A |
+| [NV, NVv2, NVv4 series](../virtual-machines/linux/n-series-driver-setup.md) | NVIDIA Tesla M60 GPU | Ubuntu 22.04 LTS, or<br/>CentOS 8.1<br/>(Azure Marketplace) | NVIDIA GRID drivers | N/A |
<sup>*</sup>RDMA-capable N-series sizes also include NVIDIA Tesla GPUs
The RDMA or GPU capabilities of compute-intensive sizes in Batch are supported o
| Size | Capability | Operating systems | Required software | Pool settings | | -- | | -- | -- | -- |
-| [H16r, H16mr, A8, A9](../virtual-machines/sizes-hpc.md)<br/>[NC24r, NC24rs_v2, NC24rs_v3, ND24rs<sup>*</sup>](../virtual-machines/windows/n-series-driver-setup.md#rdma-network-connectivity) | RDMA | Windows Server 2016, 2012 R2, or<br/>2012 (Azure Marketplace) | Microsoft MPI 2012 R2 or later, or<br/> Intel MPI 5<br/><br/>Windows RDMA drivers | Enable inter-node communication, disable concurrent task execution |
+| [H16r, H16mr](../virtual-machines/sizes-hpc.md)<br/>[NC24r, NC24rs_v2, NC24rs_v3, ND24rs<sup>*</sup>](../virtual-machines/windows/n-series-driver-setup.md#rdma-network-connectivity) | RDMA | Windows Server 2016, 2012 R2, or<br/>2012 (Azure Marketplace) | Microsoft MPI 2012 R2 or later, or<br/> Intel MPI 5<br/><br/>Windows RDMA drivers | Enable inter-node communication, disable concurrent task execution |
| [NC, NCv2, NCv3, ND, NDv2 series](../virtual-machines/windows/n-series-driver-setup.md) | NVIDIA Tesla GPU (varies by series) | Windows Server 2016 or <br/>2012 R2 (Azure Marketplace) | NVIDIA CUDA or CUDA Toolkit drivers| N/A |
-| [NV, NVv2 series](../virtual-machines/windows/n-series-driver-setup.md) | NVIDIA Tesla M60 GPU | Windows Server 2016 or<br/>2012 R2 (Azure Marketplace) | NVIDIA GRID drivers | N/A |
+| [NV, NVv2, NVv4 series](../virtual-machines/windows/n-series-driver-setup.md) | NVIDIA Tesla M60 GPU | Windows Server 2016 or<br/>2012 R2 (Azure Marketplace) | NVIDIA GRID drivers | N/A |
<sup>*</sup>RDMA-capable N-series sizes also include NVIDIA Tesla GPUs
The RDMA or GPU capabilities of compute-intensive sizes in Batch are supported o
| Size | Capability | Operating systems | Required software | Pool settings | | -- | - | -- | -- | -- |
-| [H16r, H16mr, A8, A9](../virtual-machines/sizes-hpc.md) | RDMA | Windows Server 2016, 2012 R2, 2012, or<br/>2008 R2 (Guest OS family) | Microsoft MPI 2012 R2 or later, or<br/>Intel MPI 5<br/><br/>Windows RDMA drivers | Enable inter-node communication,<br/> disable concurrent task execution |
+| [H16r, H16mr](../virtual-machines/sizes-hpc.md) | RDMA | Windows Server 2016, 2012 R2, 2012, or<br/>2008 R2 (Guest OS family) | Microsoft MPI 2012 R2 or later, or<br/>Intel MPI 5<br/><br/>Windows RDMA drivers | Enable inter-node communication,<br/> disable concurrent task execution |
> [!NOTE] > N-series sizes are not supported in Cloud Services Configuration pools.
To configure a specialized VM size for your Batch pool, you have several options
* For pools in the virtual machine configuration, choose a preconfigured [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/) VM image that has drivers and software preinstalled. Examples:
- * [CentOS-based 7.4 HPC](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos-hpc?tab=Overview) - includes RDMA drivers and Intel MPI 5.1
+ * [CentOS-based 8.1 HPC](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos-hpc?tab=Overview) - includes RDMA drivers and Intel MPI 5.1
* [Data Science Virtual Machine](../machine-learning/data-science-virtual-machine/overview.md) for Linux or Windows - includes NVIDIA CUDA drivers
To run CUDA applications on a pool of Windows NC nodes, you need to install NVDI
## Example: NVIDIA GPU drivers on a Linux NC VM pool
-To run CUDA applications on a pool of Linux NC nodes, you need to install necessary NVIDIA Tesla GPU drivers from the CUDA Toolkit. The following sample steps create and deploy a custom Ubuntu 16.04 LTS image with the GPU drivers:
+To run CUDA applications on a pool of Linux NC nodes, you need to install necessary NVIDIA Tesla GPU drivers from the CUDA Toolkit. The following sample steps create and deploy a custom Ubuntu 22.04 LTS image with the GPU drivers:
1. Deploy an Azure NC-series VM running Ubuntu 22.04 LTS. For example, create the VM in the US South Central region. 2. Add the [NVIDIA GPU Drivers extension](../virtual-machines/extensions/hpccompute-gpu-linux.md) to the VM by using the Azure portal, a client computer that connects to the Azure subscription, or Azure Cloud Shell. Alternatively, follow the steps to connect to the VM and [install CUDA drivers](../virtual-machines/linux/n-series-driver-setup.md) manually.
To run Windows MPI applications on a pool of Azure H16r VM nodes, you need to co
## Example: Intel MPI on a Linux H16r VM pool
-To run MPI applications on a pool of Linux H-series nodes, one option is to use the [CentOS-based 7.4 HPC](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos-hpc?tab=Overview) image from the Azure Marketplace. Linux RDMA drivers and Intel MPI are preinstalled. This image also supports Docker container workloads.
+To run MPI applications on a pool of Linux HB-series nodes, one option is to use the [CentOS-based 8.1 HPC](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos-hpc?tab=Overview) image from the Azure Marketplace. Linux RDMA drivers and Intel MPI are preinstalled. This image also supports Docker container workloads.
Using the Batch APIs or Azure portal, create a pool using this image and with the desired number of nodes and scale. The following table shows sample pool settings:
Using the Batch APIs or Azure portal, create a pool using this image and with th
| **Image Type** | Marketplace (Linux/Windows) | | **Publisher** | OpenLogic | | **Offer** | CentOS-HPC |
-| **Sku** | 7.4 |
+| **Sku** | 8.1 |
| **Node size** | H16r Standard | | **Internode communication enabled** | True | | **Max tasks per node** | 1 |
batch Create Pool Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-extensions.md
Request Body
"resizeTimeout": "PT15M" } }
+ }
+}
```

## Get extension data from a pool
The example below retrieves data from the Azure Key Vault extension.
REST API URI

```http
- GET https://<accountname>.<region>.batch.azure.com/pools/test3/nodes/tvmps_a3ce79db285d6c124399c5bd3f3cf308d652c89675d9f1f14bfc184476525278_d/extensions/secretext?api-version=2010-01-01
+ GET https://<accountName>.<region>.batch.azure.com/pools/<poolName>/nodes/<tvmNodeName>/extensions/secretext?api-version=2010-01-01
```

Response Body
cognitive-services Call Analyze Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image.md
Specify which visual features you'd like to extract in your analysis. See the [V
#### [JavaScript](#tab/javascript)
-Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/javascript/api/@azure/cognitiveservices-computervision/visualfeaturetypes?view=azure-node-latest&preserve-view=true) enum for a complete list.
+Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/javascript/api/@azure/cognitiveservices-computervision/computervisionmodels.visualfeaturetypes) enum for a complete list.
[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_features_remote)]
ImageAnalysis analysis = compVisClient.computerVision().analyzeImage().withUrl(p
#### [JavaScript](#tab/javascript)
-Use the **language** property of the [ComputerVisionClientAnalyzeImageOptionalParams](/javascript/api/@azure/cognitiveservices-computervision/computervisionclientanalyzeimageoptionalparams) input in your Analyze call to specify a language. A method call that specifies a language might look like the following.
+Use the **language** property of the [ComputerVisionClientAnalyzeImageOptionalParams](/javascript/api/@azure/cognitiveservices-computervision/computervisionmodels.computervisionclientanalyzeimageoptionalparams) input in your Analyze call to specify a language. A method call that specifies a language might look like the following.
```javascript
const result = (await computerVisionClient.analyzeImage(imageURL, {visualFeatures: features, language: 'en'}));
```
The following code calls the Image Analysis API and prints the results to the co
## Next steps * Explore the [concept articles](../concept-object-detection.md) to learn more about each feature.
-* See the [API reference](https://aka.ms/vision-4-0-ref) to learn more about the API functionality.
+* See the [API reference](https://aka.ms/vision-4-0-ref) to learn more about the API functionality.
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
Azure Cognitive Service for Speech is updated on an ongoing basis. To stay up-to
## Recent highlights
-* Speech SDK 1.27.0 was released in April 2023.
+* Speech SDK 1.28.0 was released in May 2023.
* Speech-to-text and text-to-speech container versions were updated in March 2023. * Some Speech Studio [scenarios](speech-studio-overview.md#speech-studio-scenarios) are available to try without an Azure subscription. * Custom Speech-to-Text container disconnected mode was released in January 2023.
cognitive-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md
Previously updated : 09/25/2022 Last updated : 05/02/2023 ms.devlang: csharp
Use cases for the speech-to-text REST API for short audio are limited. Use it on
Before you use the speech-to-text REST API for short audio, consider the following limitations:
-* Requests that use the REST API for short audio and transmit audio directly can contain no more than 60 seconds of audio. The input [audio formats](#audio-formats) are more limited compared to the [Speech SDK](speech-sdk.md).
+* Requests that use the REST API for short audio and transmit audio directly can contain no more than 30 seconds of audio. The input [audio formats](#audio-formats) are more limited compared to the [Speech SDK](speech-sdk.md).
* The REST API for short audio returns only final results. It doesn't provide partial results.
* [Speech translation](speech-translation.md) is not supported via REST API for short audio. You need to use [Speech SDK](speech-sdk.md).
* [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md) are not supported via REST API for short audio. You should always use the [Speech to Text REST API](rest-speech-to-text.md) for batch transcription and Custom Speech.
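Given the 30-second cap, it can help to check a WAV file's duration locally before calling the endpoint. The sketch below uses Python's standard `wave` module; the helper names are assumptions for illustration, and the 30-second constant comes from the limit above.

```python
import io
import wave

MAX_SECONDS = 30  # documented limit for the REST API for short audio

def wav_duration_seconds(data: bytes) -> float:
    """Return the duration of an in-memory PCM WAV payload."""
    with wave.open(io.BytesIO(data)) as wav:
        return wav.getnframes() / wav.getframerate()

def fits_short_audio_limit(data: bytes) -> bool:
    return wav_duration_seconds(data) <= MAX_SECONDS

# Build a 2-second silent 16 kHz mono clip to demonstrate.
buf = io.BytesIO()
with wave.open(buf, "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)       # 16-bit PCM
    wav.setframerate(16000)
    wav.writeframes(b"\x00\x00" * 16000 * 2)
clip = buf.getvalue()
```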
cognitive-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/containers/translator-how-to-install-container.md
The following table describes the minimum and recommended CPU cores and memory t
| Container | Minimum |Recommended | Language Pair | |--|||-|
-| Translator connected |`2` cores, 2-GB memory |`4` cores, 8-GB memory | 4 |
+| Translator |`2` cores, 2-GB memory |`4` cores, 8-GB memory | 4 |
* Each core must be 2.6 gigahertz (GHz) or faster.
Application for Gated Services**](https://aka.ms/csgate-translator) to request a
## Translator container image
-The Translator container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/translator` repository and is named `text-translation`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/translator/text-translation:1.0.019410001-amd64-preview`.
+The Translator container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/translator` repository and is named `text-translation`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest`.
-To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/translator/text-translation/tags).
+To use the latest version of the container, you can use the `latest` tag. You can find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/translator/text-translation/tags).
## Get container images with **docker commands**
docker run --rm -it -p 5000:5000 --memory 12g --cpus 4 \
-e eula=accept \
-e billing={ENDPOINT_URI} \
-e Languages=en,fr,es,ar,ru \
-mcr.microsoft.com/azure-cognitive-services/translator/text-translation:1.0.019410001-amd64-preview
+mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
```

The above command:
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
| ```n``` | integer | Optional | 1 | How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. |
| ```stream``` | boolean | Optional | False | Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. |
| ```logprobs``` | integer | Optional | null | Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 10, the API will return a list of the 10 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. This parameter cannot be used with `gpt-35-turbo`. |
-| ```suffix```| string | Optional | null | he suffix that comes after a completion of inserted text. |
+| ```suffix```| string | Optional | null | The suffix that comes after a completion of inserted text. |
| ```echo``` | boolean | Optional | False | Echo back the prompt in addition to the completion. This parameter cannot be used with `gpt-35-turbo`. |
| ```stop``` | string or array | Optional | null | Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence. |
| ```presence_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
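To make these parameters concrete, here's a minimal sketch of a completions request body that combines several of them. The resource and deployment names are placeholders, and the `api-version` value is an assumption; check your resource for the correct one.

```python
import json

# Placeholder names; substitute your own resource and deployment.
resource = "my-resource"
deployment = "my-deployment"
url = (f"https://{resource}.openai.azure.com/openai/deployments/"
       f"{deployment}/completions?api-version=2023-05-15")

# Request body combining several optional parameters from the table.
body = {
    "prompt": "Write a haiku about the sea.",
    "n": 2,                   # generate two completions for the prompt
    "logprobs": 5,            # log probabilities of the 5 most likely tokens
    "echo": False,            # don't repeat the prompt in the completion
    "stop": ["###"],          # stop generating at this sequence
    "presence_penalty": 0.5,  # nudge the model toward new topics
}
payload = json.dumps(body)
```

You would POST `payload` to `url` with an `api-key` header, similar to the `requests` calls shown in the embeddings tutorial below.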
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/tutorials/embeddings.md
setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
# [Bash](#tab/bash)

```Bash
-echo export AZURE_OPENAI_API_KEY="REPLACE_WITH_YOUR_KEY_VALUE_HERE" >> /etc/environment && source /etc/environment
-```
+echo export AZURE_OPENAI_API_KEY="REPLACE_WITH_YOUR_KEY_VALUE_HERE" >> /etc/environment
+echo export AZURE_OPENAI_ENDPOINT="REPLACE_WITH_YOUR_ENDPOINT_HERE" >> /etc/environment
-```Bash
-echo export AZURE_OPENAI_ENDPOINT="REPLACE_WITH_YOUR_ENDPOINT_HERE" >> /etc/environment && source /etc/environment
+source /etc/environment
```
r = requests.get(url, headers={"api-key": API_KEY})
print(r.text)
```
-**Output:**
-
-```cmd
+```output
{
  "data": [
    {
df_bills = df_bills[df_bills.n_tokens<8192]
len(df_bills)
```
-**Output:**
-
-```cmd
+```output
20 ```
decode
For our docs we're intentionally truncating the output, but running this command in your environment will return the full text from index zero tokenized into chunks. You can see that in some cases an entire word is represented with a single token whereas in others parts of words are split across multiple tokens.
-**Output:**
-
-```cmd
+```output
[b'SECTION', b' ', b'1',
If you then check the length of the `decode` variable, you'll find it matches th
len(decode)
```
-**Output:**
-
-```cmd
+```output
1466 ```
Finally, we'll show the top result from document search based on user query agai
res["summary"][9]
```
-**Output:**
-
-```cmd
+```output
"Taxpayer's Right to View Act of 1993 - Amends the Communications Act of 1934 to prohibit a cable operator from assessing separate charges for any video programming of a sporting, theatrical, or other entertainment event if that event is performed at a facility constructed, renovated, or maintained with tax revenues or by an organization that receives public financial support. Authorizes the Federal Communications Commission and local franchising authorities to make determinations concerning the applicability of such prohibition. Sets forth conditions under which a facility is considered to have been constructed, maintained, or renovated with tax revenues. Considers events performed by nonprofit or public organizations that receive tax subsidies to be subject to this Act if the event is sponsored by, or includes the participation of a team that is part of, a tax exempt organization." ```
communication-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/whats-new.md
Previously updated : 03/12/2023
Last updated : 05/01/2023

# What's new in Azure Communication Services
-We're adding new capabilities to Azure Communication Services all the time, so we created this page to share the latest developments in the platform. Bookmark this page and make it your go-to resource to find out all the latest capabilities of Azure Communication Services.
+We've created this page to keep you updated on new features, blog posts, and other useful information related to Azure Communication Services. Be sure to check back monthly for all the newest and latest information!
+
+<br>
+<br>
+<br>
+
+## From the community
+See examples and get inspired by what's being done in the community of Azure Communication Services users.
++
+### Extend Azure Communication Services with Power Platform Connectors
+
+Listen to Azure Communication Services PMs Tomas Chladek and David de Matheu talk about how to connect your Azure Communication Services app to Microsoft Teams, and extend it with the Microsoft Power Platform.
+
+[Watch the video](https://www.youtube.com/watch?v=-TPI293h0mY&t=3s&pp=ygUcYXp1cmUgY29tbXVuaWNhdGlvbiBzZXJ2aWNlcw%3D%3D)
+
+[Read the Power Pages documentation](https://learn.microsoft.com/power-pages/configure/component-framework)
+
+[Read the tutorial on integrating with Teams](https://aka.ms/mscloud-acs-teams-tutorial)
++
+<br>
+<br>
+
+### Integrate Azure Communication Services calling into a React App
+
+Learn how to create an app using Azure Communication services front-end components in React.
+
+[Watch the video](https://www.youtube.com/watch?v=ZyBNYblzISs&pp=ygUcYXp1cmUgY29tbXVuaWNhdGlvbiBzZXJ2aWNlcw%3D%3D)
+[View the Microsoft Cloud Integrations repo](https://github.com/microsoft/microsoftcloud)
-## Updated documentation
-We heard your feedback and made it easier to find the documentation you need as quickly and easily as possible. We're making our docs more readable, easier to understand, and more up-to-date. There's a new landing page design and an updated, better organized table of contents. We've added some of the content you've told us you need and will be continuing to do so, and we're editing existing documentation as well. Don't hesitate to use the feedback link at the top of each page to tell us if a page needs refreshing. Thanks!
+[Read the tutorial on integrating with Teams](https://aka.ms/mscloud-acs-teams-tutorial)
-## Teams interoperability (General Availability)
-Azure Communication Services can be used to build custom applications and experiences that enable interaction with Microsoft Teams users over voice, video, chat, and screen sharing. The [Communication Services UI Library](./concepts/ui-library/ui-library-overview.md) provides customizable, production-ready UI components that can be easily added to these applications. The following video demonstrates some of the capabilities of Teams interoperability:
+[Read more about the UI Library](https://aka.ms/acs-ui-library)
+
+<br>
<br>
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWGTqQ]
+### Dynamically create an Azure Communication Services identity and token
-To learn more about teams interoperability, visit the [teams interop overview page](./concepts/teams-interop.md)
+Learn how an external developer can get a token that allows your app to join a teams meeting through Azure Communication Services.
-## New calling features
-Our calling team has been working hard to expand and improve our feature set in response to your requests. Some of the new features we've enabled include:
+[Watch the video](https://www.youtube.com/watch?v=OgE72PGq6TM&pp=ygUcYXp1cmUgY29tbXVuaWNhdGlvbiBzZXJ2aWNlcw%3D%3D)
-### Background blur and custom backgrounds (Public Preview)
-Background blur gives users a way to remove visual distractions behind a participant so that callers can engage in a conversation without disruptive activity or confidential information appearing in the background. This feature is especially useful in a context such as telehealth, where a provider or patient might want to obscure their surroundings to protect sensitive information. Background blur can be applied across all virtual appointment scenarios to protect user privacy, including telebanking and virtual hearings. In addition to enhanced confidentiality, the custom backgrounds capability allows for more creativity of expression, allowing users to upload custom backgrounds to host a more fun, personalized calling experience. This feature is currently available on Web Desktop and will be expanding to other platforms in the future.
+[Read more about Microsoft Cloud Integrations](https://aka.ms/microsoft-cloud)
-*Figure 1: Custom background*
+[View the Microsoft Cloud Integrations repo](https://github.com/microsoft/microsoftcloud)
-To learn more about custom backgrounds and background blur, visit the overview on [adding visual effects to your call](./concepts/voice-video-calling/video-effects.md).
+[Read the tutorial on integrating with Teams](https://aka.ms/mscloud-acs-teams-tutorial)
-### Raw media access (Public Preview)
-The video media access API provides support for developers to get real-time access to video streams so that they can capture, analyze, and process video content during active calls. Developers can access the incoming call video stream directly on the call object and send custom outgoing video stream during the call. This feature sets the foreground services to support different kinds of video and audio manipulation. Outgoing video access can be captured and implemented with screen sharing, background blur, and video filters before being published to the recipient, allowing viewers to build privacy into their calling experience. In more complex scenarios, video access can be fitted with a virtual environment to support augmented reality. Spatial audio can be injected into remote incoming audio to add music to enhance a waiting room lobby.
+[Read more about the UI Library](https://aka.ms/acs-ui-library)
-To learn more about raw media access visit the [media access overview](./concepts/voice-video-calling/media-access.md)
+[Read the documentation on Azure Functions](https://aka.ms/msazure--functions)
-Other new calling features include:
-- Webview support for iOS and Android
-- Early media support in call flows
-- Chat composite for mobile native development
-- Added browser support for JS Calling SDK
-- Call readiness tools
-- Simulcast
+[View the Graph Explorer](https://aka.ms/ge)
-Take a look at our feature update blog posts from [January](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-calling-features-update/ba-p/3735073) and [February](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-february-2023-feature-updates/ba-p/3737486) for more detailed information and links to numerous quickstarts.
-## Rooms (Public Preview)
-Azure Communication Services provides a concept of a room for developers who are building structured conversations such as virtual appointments or virtual events. Rooms currently allow voice and video calling.
+<br>
+<br>
-To learn more about rooms, visit the [overview page](./concepts/rooms/room-concept.md)
+### Deploy an Azure Communication Services app to Azure
-## Sample Builder Rooms integration
+Learn how to quickly and easily deploy your Azure Communication Services app to Azure.
-**We are excited to announce that we have integrated Rooms into our Virtual Appointment Sample.**
+[Watch the video](https://www.youtube.com/watch?v=JYs5CPyu2Io&pp=ygUcYXp1cmUgY29tbXVuaWNhdGlvbiBzZXJ2aWNlcw%3D%3D)
-Azure Communication Services (ACS) provides the concept of a room. Rooms allow developers to build structured conversations such as scheduled virtual appointments or virtual events. Rooms allow control through roles and permissions and enable invite-only experiences. Rooms currently allow voice and video calling.
+[Read more about Microsoft Cloud Integrations](https://aka.ms/microsoft-cloud)
-## Enabling a faster sample building experience
+[View the Microsoft Cloud Integrations repo](https://github.com/microsoft/microsoftcloud)
-Data indicates that ~40% of customers abandon the Sample Builder due to the challenging nature of the configuration process, particularly during the Microsoft Bookings setup. To address this issue, we've implemented a solution that streamlines the deployment process by using Rooms for direct virtual appointment creation within the Sample Builder. This change results in a significant reduction of deployment time, as the configuration of Microsoft Bookings isn't enforced, but rather transformed into an optional feature that can be configured in the deployed Sample. Additionally, we've incorporated a feedback button into the Sample Builder and made various enhancements to its accessibility. With Sample Builder, customers can effortlessly customize and deploy their applications to Azure or their Git repository, without the need for any coding expertise.
+[Read the tutorial on integrating with Teams](https://aka.ms/mscloud-acs-teams-tutorial)
+[Read more about the UI Library](https://aka.ms/acs-ui-library)
-*Figure 2: Scheduling experience options.*
+[Read the documentation on Azure Functions](https://aka.ms/msazure--functions)
+[View the Graph Explorer](https://aka.ms/ge)
-*Figure 3:  Feedback form.*
+<br>
+<br>
+
+<br>
+## New features
+Get detailed information on the latest Azure Communication Services feature launches.
+### Email service now generally available
-Sample Builder is already in General Availability and can be accessed [on Azure portal](https://ms.portal.azure.com/#@microsoft.onmicrosoft.com/resource/subscriptions/50ad1522-5c2c-4d9a-a6c8-67c11ecb75b8/resourceGroups/serooney-tests/providers/Microsoft.Communication/CommunicationServices/email-tests/sample_applications).
+Azure Communication Services announces the general availability of our Email service. Email is powered by Exchange Online and meets the security and privacy requirements of enterprises.
+[Read about ACS Email](https://techcommunity.microsoft.com/t5/azure-communication-services/simpler-faster-azure-communication-services-email-now-generally/ba-p/3788541)
+
+<br>
+<br>
-## Call Automation (Public Preview)
-Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows, and call recording for voice and PSTN channels. The SDKs, available for .NET and Java, uses an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, start recording, etc.) to steer and control calls based on your business logic.
-ACS Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the following high-level architecture. You can answer inbound calls or make outbound calls. Execute actions like playing a welcome message, connecting the customer to a live agent on an ACS Calling SDK client app to answer the incoming call request. With support for ACS PSTN or Direct Routing, you can then connect this workflow back to your contact center.
+### View of April's new features
-*Figure 4: Call Automation Architecture*
+In April, we launched a host of new features, including:
+* Troubleshooting capability in UI library for native
+* Toll-free verification
+* SMS insights dashboard
+* and others...
-To learn more, visit our [Call Automation overview article](./concepts/call-automation/call-automation.md).
+[View the complete list](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-april-2023-feature-updates/ba-p/3786509) of all new features added to Azure Communication Services in April.
-## Phone number expansion now in General Availability (Generally Available)
- We're excited to announce that we have launched Phone numbers in Canada, United Kingdom, Italy, Ireland and Sweden from Public Preview into General Availability. ACS Direct Offers are now generally available in the following countries and regions: **United States, Puerto Rico, Canada, United Kingdom, Italy, Ireland** and **Sweden**.
+<br>
+<br>
+
+<br>
+
+## Blog posts and case studies
+Go deeper on common scenarios and learn more about how customers are using advanced Azure Communication
+Services features.
++
+### ABN AMRO case study
+
+ABN AMRO used Azure Communication Services to make it easier for customers to get financial advice from anywhere. And they boosted their NPS in the process!
+
+[Read the full story](https://customers.microsoft.com/story/1607768338625418317-abnamro-bankingandcapitalmarkets-microsofteams)
+
+<br>
+<br>
++
+### Get insights from customer interactions with Azure Communication Services and OpenAI
+
+Use the gold mine of customer conversations to automatically generate customer insights and create better customer experiences.
+
+[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/get-insights-from-customer-interactions-with-azure-communication/ba-p/3783858)
+
+[Read about the Azure OpenAI service](https://azure.microsoft.com/products/cognitive-services/openai-service/)
+
+<br>
+<br>
++
+### Latest updates to the UI library
+
+Get up-to-date on the latest additions to the Azure Communication Services UI library. UI library makes it easier to create custom applications with only a few lines of code.
+
+[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/build-communication-apps-for-microsoft-teams-users-with-azure/ba-p/3775688)
+
+[View the UI Library documentation](https://azure.github.io/communication-ui-library/)
++
+<br>
+<br>
-To learn more about the different ways you can acquire a phone number in these regions, visit the [article on how to get and manage phone numbers](./quickstarts/telephony/get-phone-number.md), or [reaching out to the IC3 Service Desk](https://github.com/Azure/Communication/blob/master/special-order-numbers.md).
Enjoy all of these new features. Be sure to check back here periodically for more news and updates on all of the new capabilities we've added to our platform! For a complete list of new features and bug fixes, visit our [releases page](https://github.com/Azure/Communication/releases) on GitHub.
connectors Connectors Create Api Office365 Outlook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-office365-outlook.md
A [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts) is an event
> [!NOTE] > Your connection doesn't expire until revoked, even if you change your sign-in credentials.
- > For more information, see [Configurable token lifetimes in Azure Active Directory](../active-directory/develop/active-directory-configurable-token-lifetimes.md).
+ > For more information, see [Configurable token lifetimes in Azure Active Directory](../active-directory/develop/configurable-token-lifetimes.md).
This example selects the calendar that the trigger checks, for example:
An [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) is an opera
> [!NOTE] > Your connection doesn't expire until revoked, even if you change your sign-in credentials.
- > For more information, see [Configurable token lifetimes in Azure Active Directory](../active-directory/develop/active-directory-configurable-token-lifetimes.md).
+ > For more information, see [Configurable token lifetimes in Azure Active Directory](../active-directory/develop/configurable-token-lifetimes.md).
This example selects the contacts folder where the action creates the new contact, for example:
container-registry Container Registry Import Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-import-images.md
Image import into an Azure container registry has the following benefits over us
* Access to the target registry doesn't have to use the registry's public endpoint.
+> [!IMPORTANT]
+>* Importing images requires that the external registry support [RFC 7233](https://www.rfc-editor.org/rfc/rfc7233#section-2.3). We recommend using a registry that supports RFC 7233 ranges when using the az acr import command with the registry URI to avoid failures.
+
## Limitations

* The maximum number of manifests for an imported image is 50.
cosmos-db Secondary Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/secondary-indexing.md
It's not advised to create an index on a frequently updated column. It is pruden
> [!NOTE]
-> Secondary index is not supported on the following objects:
+> Secondary indexes can only be created by using the CQL commands mentioned in this article, and not through the Resource Provider utilities (ARM templates, Azure CLI, PowerShell, or Terraform). Secondary indexes are not supported on the following objects:
> - Data types such as frozen collection types, decimal, and variant types
> - Static columns
> - Clustering keys
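For context, a secondary index on a supported column (one that isn't a frozen collection, static column, or clustering key) is created with the standard CQL `CREATE INDEX` statement; the keyspace, table, column, and index names below are hypothetical:

```sql
CREATE INDEX lastname_idx ON mykeyspace.users (lastname);
```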
cosmos-db How To Migrate Desktop Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-migrate-desktop-tool.md
The [Azure Cosmos DB desktop data migration tool](https://github.com/azurecosmos
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit.
- Latest version of [Azure CLI](/cli/azure/install-azure-cli).
+- [.NET 6.0](https://dotnet.microsoft.com/download/dotnet/6.0) or later.
## Install the desktop data migration tool

First, install the latest version of the desktop data migration tool from the GitHub repository.
+> [!NOTE]
+> The desktop data migration tool requires [.NET 6.0](https://dotnet.microsoft.com/download/dotnet/6.0) or later on your local machine.
+
1. In your browser, navigate to the **Releases** section of the repository: [azurecosmosdb/data-migration-desktop-tool/releases](https://github.com/azurecosmosdb/data-migration-desktop-tool/releases).
1. Download the latest compressed folder for your platform. There are compressed folders for the **win-x64**, **mac-x64**, and **linux-x64** platforms.
data-factory Concepts Integration Runtime Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime-performance.md
If your data flow has many joins and lookups, you may want to use a **memory opt
Data flows distribute the data processing over different nodes in a Spark cluster to perform operations in parallel. A Spark cluster with more cores increases the number of nodes in the compute environment. More nodes increase the processing power of the data flow. Increasing the size of the cluster is often an easy way to reduce the processing time.
-The default cluster size is four driver nodes and four worker nodes (small). As you process more data, larger clusters are recommended. Below are the possible sizing options:
+The default cluster size is four driver nodes and four worker nodes (small). As you process more data, larger clusters are recommended. Below are the possible sizing options:
| Worker Nodes | Driver Nodes | Total Nodes | Notes |
| -- | -- | -- | -- |
A best practice is to start small and scale up to meet your performance needs.
## Custom shuffle partition
-Dataflow divides the data into partitions and transforms it using different processes. If the data size in a partition is more than the process can hold in memory, the process fails with OOM(out of memory) errors. If dataflow contains huge amounts of data having joins/aggregations, you may want to try changing shuffle partitions in incremental way. You can set it from 50 up to 2000, to avoid OOM errors. **Compute Custom properties** in dataflow runtime, is a way to control your compute requirements. Property name is **Shuffle partitions** and it's integer type. This customization should only be used in known scenarios, otherwise it can cause unnecessary dataflow failures.
+Dataflow divides the data into partitions and transforms it using different processes. If the data size in a partition is more than the process can hold in memory, the process fails with OOM (out of memory) errors. If the dataflow contains huge amounts of data with joins or aggregations, try changing the shuffle partitions incrementally. You can set it from 50 up to 2000 to avoid OOM errors. **Compute Custom properties** in the dataflow runtime is a way to control your compute requirements. The property name is **Shuffle partitions** and it's an integer type. This customization should only be used in known scenarios; otherwise it can cause unnecessary dataflow failures.
-While increasing the shuffle partitions, make sure data is spread across well. A rough number is to have approximately 1.5 GB of data per partition. If data is skewed, increasing the "Shuffle partitions" won't be helpful. For example, if you have 500 GB of data, having a value between 400 to 500 should work. Default limit for shuffle partitions is 200 that works well for approximately 300 GB of data.
+While increasing the shuffle partitions, make sure the data is spread well across them. A rough guideline is to have approximately 1.5 GB of data per partition. If the data is skewed, increasing the "Shuffle partitions" won't be helpful. For example, if you have 500 GB of data, a value between 400 and 500 should work. The default limit for shuffle partitions is 200, which works well for approximately 300 GB of data.
-Here are the steps on how it's set in a custom integration runtime. You can't set it for autoresolve integrtaion runtime.
+Here are the steps on how it's set in a custom integration runtime. You can't set it for autoresolve integration runtime.
1. From the ADF portal, under **Manage**, select a custom integration runtime and go to edit mode.
2. Under the dataflow runtime tab, go to the **Compute Custom Properties** section.
3. Select **Shuffle Partitions** under Property name, and input a value of your choice, such as 250 or 500.
-You can do same by editing JSON file of runtime by adding an array with property name and value after an existing property like *cleanup* property.
+You can do the same by editing the runtime's JSON file, adding an array with the property name and value after an existing property like the *cleanup* property.
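As a sketch of that JSON edit (the property name and value entries follow the portal steps above; the surrounding structure is an assumption and may differ in your runtime definition), the custom property is an array of name/value objects:

```json
"dataFlowProperties": {
    "computeType": "General",
    "coreCount": 8,
    "timeToLive": 0,
    "cleanup": true,
    "customProperties": [
        {
            "name": "Shuffle partitions",
            "value": "250"
        }
    ]
}
```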
## Time to live
data-factory Data Factory Build Your First Pipeline Using Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-editor.md
Last updated 04/12/2023
> This article applies to version 1 of Azure Data Factory, which is generally available. If you use the current version of the Data Factory service, see [Quickstart: Create a data factory by using Data Factory](../quickstart-create-data-factory-dot-net.md). > [!WARNING]
-> The JSON editor in Azure Portal for authoring & deploying ADF v1 pipelines will be turned OFF on 31st July 2019. After 31st July 2019, you can continue to use [ADF v1 PowerShell cmdlets](/powershell/module/az.datafactory/), [ADF v1 .Net SDK](/dotnet/api/microsoft.azure.management.datafactories.models), [ADF v1 REST APIs](/rest/api/datafactory/) to author & deploy your ADF v1 pipelines.
+> The JSON editor in Azure Portal for authoring & deploying ADF v1 pipelines will be turned OFF on 31st July 2019. After 31st July 2019, you can continue to use [ADF v1 PowerShell cmdlets](/powershell/module/az.datafactory/), [ADF v1 .NET SDK](/dotnet/api/microsoft.azure.management.datafactories.models), [ADF v1 REST APIs](/rest/api/datafactory/) to author & deploy your ADF v1 pipelines.
In this article, you learn how to use the [Azure portal](https://portal.azure.com/) to create your first data factory. To do the tutorial by using other tools/SDKs, select one of the options from the drop-down list.
data-manager-for-agri Concepts Understanding Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-understanding-throttling.md
Title: API throttling guidance for customers using Azure Data Manager for Agriculture.
-description: Provides information on API throttling limits to plan usage.
+ Title: APIs throttling guidance for customers using Azure Data Manager for Agriculture.
+description: Provides information on APIs throttling limits to plan usage.
Last updated 04/18/2023
-# API throttling guidance for Azure Data Manager for Agriculture.
+# APIs throttling guidance for Azure Data Manager for Agriculture.
-The API throttling in Azure Data Manager for Agriculture allows more consistent performance within a time span for customers calling our service APIs. Throttling limits, the number of requests to our service in a time span to prevent overuse of resources. Azure Data Manager for Agriculture is designed to handle a high volume of requests, if an overwhelming number of requests occur by few customers, throttling helps maintain optimal performance and reliability for all customers.
+The APIs throttling in Azure Data Manager for Agriculture allows more consistent performance within a time span for customers calling our service APIs. Throttling limits the number of requests to our service in a time span to prevent overuse of resources. Azure Data Manager for Agriculture is designed to handle a high volume of requests; if an overwhelming number of requests is made by a few customers, throttling helps maintain optimal performance and reliability for all customers.
Throttling limits vary based on product type and capabilities being used. Currently we have two versions, standard and basic (for your POC needs).
-## DPS API limits
+## Data Plane Service API limits
Throttling category | Units available per Standard version | Units available per Basic version |
|:--|:--|:--|
The maximum queue size for each job type is 10,000.
Throttling category| Units available per Standard version| Units available per Basic version|
|:--|:--|:--|
Per 5 Minutes |1,000 |1,000
-Per Month |1,000,000 |200,000
+Per Month |500,000 |100,000
### Maximum create job requests allowed for standard version

Job Type| Per 5 mins| Per month|
|:--|:--|:--|
-Cascade delete| 1,000| 500,000
+Cascade delete| 500| 250,000
Satellite| 1,000| 500,000
Model inference| 200| 100,000
Farm Operation| 200| 100,000
Rasterize| 500| 250,000
-Weather| 500| 250,000
+Weather| 1,000| 250,000
### Maximum create job requests allowed for basic version

Job Type| Per 5 mins| Per month
|:|:|:|
-Cascade delete| 1,000| 100,000
+Cascade delete| 500| 50,000
Satellite| 1,000| 100,000
Model inference| 200| 20,000
Farm Operation| 200| 20,000
Rasterize| 500| 50,000
-Weather| 500| 50,000
+Weather| 1,000| 100,000
### Sensor events limits

100,000 events ingested per hour by our sensor job.
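When a client exceeds these limits, the service stops accepting requests until the window resets, typically signaled with an HTTP 429 (Too Many Requests) response. A minimal client-side sketch of retry with exponential backoff, assuming your client call surfaces the status code (`call_api` here is a hypothetical stand-in, not part of any service SDK):

```python
import random
import time

def call_with_backoff(call_api, max_retries=5, base_delay=1.0):
    """Retry call_api with exponential backoff plus jitter while throttled (HTTP 429)."""
    for attempt in range(max_retries + 1):
        status, body = call_api()
        if status != 429:
            return status, body
        if attempt == max_retries:
            break
        # Double the wait each attempt; jitter avoids synchronized retries.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    return status, body

# Stub that is throttled twice, then succeeds:
responses = iter([(429, None), (429, None), (200, "ok")])
print(call_with_backoff(lambda: next(responses), base_delay=0.01))  # (200, 'ok')
```

The jitter spreads retries from concurrent callers so they don't all hit the service in lockstep when a window resets.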
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
The following tables include the Defender for Servers security alerts [to be dep
| **Alert Type** | **Alert Display Name** | **Severity** |
|--|--|--|
-VM.Windows_KnownCredentialAccessTools | Suspicious process executed | High
-VM.Windows_SuspiciousAccountCreation | Suspicious Account Creation Detected | Medium
VM_AbnormalDaemonTermination | Abnormal Termination | Low
VM_BinaryGeneratedFromCommandLine | Suspicious binary detected | Medium
VM_CommandlineSuspectDomain | Suspicious domain name reference | Low
VM.Windows_ExecutableDecodedUsingCertutil | Detected decoding of an executable u
VM.Windows_FileDeletionIsSospisiousLocation | Suspicious file deletion detected | Medium
VM.Windows_KerberosGoldenTicketAttack | Suspected Kerberos Golden Ticket attack parameters observed | Medium
VM.Windows_KeygenToolKnownProcessName | Detected possible execution of keygen executable <br/> Suspicious process executed | Medium
+VM.Windows_KnownCredentialAccessTools | Suspicious process executed | High
VM.Windows_KnownSuspiciousPowerShellScript | Suspicious use of PowerShell detected | High
VM.Windows_KnownSuspiciousSoftwareInstallation | High risk software detected | Medium
VM.Windows_MsHtaAndPowerShellCombination | Detected suspicious combination of HTA and PowerShell | Medium
VM.Windows_RansomwareIndication | Ransomware indicators detected | High
VM.Windows_SqlDumperUsedSuspiciously | Possible credential dumping detected [seen multiple times] | Medium
VM.Windows_StopCriticalServices | Detected the disabling of critical services | Medium
VM.Windows_SubvertingAccessibilityBinary | Sticky keys attack detected <br/> Suspicious account creation detected | Medium
+VM.Windows_SuspiciousAccountCreation | Suspicious Account Creation Detected | Medium
VM.Windows_SuspiciousFirewallRuleAdded | Detected suspicious new firewall rule | Medium
VM.Windows_SuspiciousFTPSSwitchUsage | Detected suspicious use of FTP -s switch | Medium
VM.Windows_SuspiciousSQLActivity | Suspicious SQL activity | Medium
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
If you don't have access to install the extension, you must request access from
> [!Note]
> The MicrosoftSecurityDevOps build task depends on .NET 6. The CredScan analyzer depends on .NET 3.1. See more [here](https://marketplace.visualstudio.com/items?itemName=ms-securitydevops.microsoft-security-devops-azdevops).
-1. Select **Save and run**.
-
-1. To commit the pipeline, select **Save and run**.
+9. To commit the pipeline, select **Save and run**.
The pipeline will run for a few minutes and save the results.
defender-for-cloud Defender For Devops Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-devops-introduction.md
On this part of the screen you see:
> [!NOTE]
> Currently, this information is available only for GitHub repositories.
-- **Code scanning findings** – Shows the number of code vulnerabilities and misconfigurations identified in the repositories.
+- **IaC scanning findings** – Shows the number of infrastructure as code misconfigurations identified in the repositories.
- > [!NOTE]
- > Currently, this information is available only for GitHub repositories.
+- **Code scanning findings** – Shows the number of code vulnerabilities and misconfigurations identified in the repositories.
## Learn more
defender-for-cloud Devops Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-faq.md
If you don't see the SARIF file in the expected path, you may have chosen a differ
### I don't see the results for my ADO projects in Microsoft Defender for Cloud
-Currently, OSS vulnerabilities, IaC scanning vulnerabilities, and Total code scanning vulnerabilities are only available for GitHub repositories.
+Currently, OSS vulnerability findings are only available for GitHub repositories.
-Azure DevOps repositories only have the total exposed secrets available and will show `N/A` for all other fields. You can learn more about how to [Review your findings](defender-for-devops-introduction.md).
+Azure DevOps repositories will have the total exposed secrets, IaC misconfigurations, and code security findings available. They'll show `N/A` for OSS vulnerabilities. You can learn more about how to [Review your findings](defender-for-devops-introduction.md).
### Why is my Azure DevOps repository not refreshing to healthy?
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
# Add and configure a catalog from GitHub or Azure DevOps
-Learn how to add and configure a [catalog](./concept-environments-key-concepts.md#catalogs) in your Azure Deployment Environments Preview dev center. You can use a catalog to provide your development teams with a curated set of infrastructure as code (IaC) templates called [catalog items](./concept-environments-key-concepts.md#catalog-items).
+Learn how to add and configure a [catalog](./concept-environments-key-concepts.md#catalogs) in your Azure Deployment Environments Preview dev center. You can use a catalog to provide your development teams with a curated set of infrastructure as code (IaC) templates called [catalog items](./concept-environments-key-concepts.md#catalog-items). Your catalog is encrypted; Azure Deployment Environments supports encryption at rest with platform-managed encryption keys, which are managed by Microsoft for Azure Services.
For more information about catalog items, see [Add and configure a catalog item](./configure-catalog-item.md).
deployment-environments How To Create Configure Dev Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-configure-dev-center.md
+
+ Title: Create and configure a dev center for Azure Deployment Environments by using the Azure CLI
+
+description: Learn how to create and access an environment in an Azure Deployment Environments project using Azure CLI.
+Last updated: 04/28/2023
+# Create and configure a dev center for Azure Deployment Environments by using the Azure CLI
+
+This quickstart shows you how to create and configure a dev center in Azure Deployment Environments Preview.
+
+An enterprise development infrastructure team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [catalog items](concept-environments-key-concepts.md#catalog-items), connect to individual resources, and deploy applications.
+
+> [!IMPORTANT]
+> Azure Deployment Environments currently is in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, review the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+- [Install the Azure CLI](/cli/azure/install-azure-cli).
+- [Install dev center CLI extension](how-to-install-devcenter-cli-extension.md)
+- A GitHub Account and a [Personal Access Token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) with Repo Access.
+
+## Create a dev center
+To create and configure a dev center in Azure Deployment Environments by using the Azure CLI:
+
+1. Sign in to the Azure CLI:
+
+ ```azurecli
+ az login
+ ```
+
+1. Install the Azure Dev Center extension for the CLI.
+
+ ```azurecli
+ az extension add --name devcenter --upgrade
+ ```
+
+1. Configure the default subscription as the subscription in which you want to create the dev center:
+
+ ```azurecli
+ az account set --subscription <name>
+ ```
+
+1. Configure the default location as the location in which you want to create the dev center. Make sure to choose an [available regions for Azure Deployment Environments](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=deployment-environments&regions=all):
+
+ ```azurecli
+ az configure --defaults location=eastus
+ ```
+
+1. Create the resource group in which you want to create the dev center:
+
+ ```azurecli
+ az group create -n <group name>
+ ```
+
+1. Configure the default resource group as the resource group you created:
+
+ ```azurecli
+ az config set defaults.group=<group name>
+ ```
+
+1. Create the dev center:
+
+ ```azurecli
+ az devcenter admin devcenter create -n <devcenter name>
+ ```
+
+    After a few minutes, you'll get output confirming that the dev center was created:
+
+ ```output
+ {
+ "devCenterUri": "https://...",
+ "id": "/subscriptions/.../<devcenter name>",
+ "location": "eastus",
+ "name": "<devcenter name>",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "<group name>",
+ "systemData": {
+ "createdAt": "...",
+ "createdBy": "...",
+ ...
+ },
+ "type": "microsoft.devcenter/devcenters"
+ }
+ ```
+
+> [!NOTE]
+> You can use `--help` to view more details about any command, accepted arguments, and examples. For example, use `az devcenter admin devcenter create --help` to view more details about creating a dev center.
+
+## Add a personal access token to Key Vault
+You need an Azure Key Vault to store the GitHub personal access token (PAT) that is used to grant Azure access to your GitHub repository.
+
+1. Create a Key Vault:
+
+ ```azurecli
+    # Change the name to something globally unique
+ az keyvault create -n <kv name>
+ ```
+
+ > [!NOTE]
+    > You may get the following error:
+    > `Code: VaultAlreadyExists Message: The vault name 'kv-devcenter-unique' is already in use. Vault names are globally unique so it is possible that the name is already taken.`
+    > You must use a globally unique key vault name.
+
+1. Add GitHub personal access token (PAT) to Key Vault as a secret:
+
+ ```azurecli
+ az keyvault secret set --vault-name <kv name> --name GHPAT --value <PAT>
+ ```
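Key vault names must be globally unique, so a fixed name in a script is likely to collide. One way to sidestep this, sketched below under the assumption that a short random lowercase alphanumeric suffix is acceptable for your naming convention, is to generate the name before calling `az keyvault create` (the `kv-devcenter` prefix is illustrative):

```python
import secrets
import string

def unique_vault_name(prefix="kv-devcenter", suffix_len=6):
    """Append a random lowercase alphanumeric suffix and enforce the 24-character limit."""
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(secrets.choice(alphabet) for _ in range(suffix_len))
    name = f"{prefix}-{suffix}"
    if len(name) > 24:
        raise ValueError("key vault names are limited to 24 characters")
    return name

print(unique_vault_name())  # e.g. kv-devcenter-x7k2q9
```

You'd then pass the generated name as `<kv name>` in the commands above.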
+
+## Attach an identity to the dev center
+
+After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. You can attach either a system-assigned managed identity or a user-assigned managed identity. Learn about the two [types of identities](how-to-configure-managed-identity.md#add-a-managed-identity).
+
+In this quickstart, you configure a system-assigned managed identity for your dev center.
+
+## Attach a system-assigned managed identity
+
+To attach a system-assigned managed identity to your dev center:
+
+ ```azurecli
+ az devcenter admin devcenter update -n <devcenter name> --identity-type SystemAssigned
+ ```
+
+### Assign the system-assigned managed identity access to the key vault secret
+Make sure that the identity has access to the key vault secret that contains the personal access token to access your repository. Key vaults support two access methods: Azure role-based access control and vault access policy. In this quickstart, you use a vault access policy.
+
+1. Retrieve Object ID of your dev center's identity:
+
+ ```azurecli
+ OID=$(az ad sp list --display-name <devcenter name> --query [].id -o tsv)
+ echo $OID
+ ```
+
+1. Add a Key Vault Policy to allow dev center to get secrets from Key Vault:
+
+ ```azurecli
+ az keyvault set-policy -n <kv name> --secret-permissions get --object-id $OID
+ ```
+
+## Add a catalog to the dev center
+Azure Deployment Environments Preview supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
+
+In this quickstart, you attach a GitHub repository that contains samples created and maintained by the Azure Deployment Environments team.
+
+To add a catalog to your dev center, you first need to gather some information.
+
+### Gather GitHub repo information
+To add a catalog, you must specify the GitHub repo URL, the branch, and the folder that contains your catalog items. You can gather this information before you begin the process of adding the catalog to the dev center.
+
+> [!TIP]
+> If you are attaching an Azure DevOps repository, use these steps: [Get the clone URL of an Azure DevOps repository](how-to-configure-catalog.md#get-the-clone-url-of-an-azure-devops-repository).
+
+1. On your [GitHub](https://github.com) account page, select **<> Code**, and then select the copy icon to copy the repository clone URL.
+1. Take a note of the branch that you're working in.
+1. Take a note of the folder that contains your catalog items.
+
+ :::image type="content" source="media/how-to-create-configure-dev-center/github-info.png" alt-text="Screenshot that shows the GitHub repo with Code, branch, and folder highlighted.":::
+
+### Add a catalog to your dev center
+
+1. Retrieve the secret identifier:
+
+ ```azurecli
+ SECRETID=$(az keyvault secret show --vault-name <kv name> --name GHPAT --query id -o tsv)
+ echo $SECRETID
+ ```
+
+1. Add Catalog:
+
+ ```azurecli
+ # Sample Catalog example
+ REPO_URL="https://github.com/Azure/deployment-environments.git"
+ az devcenter admin catalog create --git-hub path="/Environments" branch="main" secret-identifier=$SECRETID uri=$REPO_URL -n <catalog name> -d <devcenter name>
+ ```
+
+1. Confirm that the catalog is successfully added and synced:
+
+ ```azurecli
+ az devcenter admin catalog list -d <devcenter name> -o table
+ ```
+
+## Create an environment type
+
+Use an environment type to help you define the different types of environments your development teams can deploy. You can apply different settings for each environment type.
+
+1. Create an Environment Type:
+
+ ```azurecli
+ az devcenter admin environment-type create -d <devcenter name> -n <environment type name>
+ ```
+
+1. Confirm that the Environment type is created:
+
+ ```azurecli
+ az devcenter admin environment-type list -d <devcenter name> -o table
+ ```
+
+## Next steps
+
+In this quickstart, you created a dev center and configured it with an identity, a catalog, and an environment type. To learn how to create and configure a project, advance to the next quickstart.
+
+> [!div class="nextstepaction"]
+> [Create and configure a project with Azure CLI](how-to-create-configure-projects.md)
deployment-environments How To Create Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-configure-projects.md
+
+ Title: Create and configure a project by using the Azure CLI
+
+description: Learn how to create a project in Azure Deployment Environments and associate the project with a dev center using Azure CLI.
+Last updated: 04/28/2023
+# Create and configure a project by using the Azure CLI
+
+This quickstart shows you how to create a project in Azure Deployment Environments Preview. Then, you associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md).
+
+An enterprise development infrastructure team typically creates projects and provides project access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [catalog items](concept-environments-key-concepts.md#catalog-items), connect to individual resources, and deploy applications.
+
+> [!IMPORTANT]
+> Azure Deployment Environments currently is in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+
+## Create a project
+
+To create a project in your dev center:
+
+1. Sign in to the Azure CLI:
+
+ ```azurecli
+ az login
+ ```
+
+1. Install the Azure Dev Center extension for the CLI.
+
+ ```azurecli
+ az extension add --name devcenter --upgrade
+ ```
+
+1. Configure the default subscription as the subscription where your dev center resides:
+
+ ```azurecli
+ az account set --subscription <name>
+ ```
+
+1. Configure the default resource group as the resource group where your dev center resides:
+
+ ```azurecli
+ az configure --defaults group=<name>
+ ```
+
+1. Configure the default location as the location where your dev center resides. The project's location must match the dev center's location:
+
+ ```azurecli
+ az configure --defaults location=eastus
+ ```
+
+1. Retrieve dev center resource ID:
+
+ ```azurecli
+ DEVCID=$(az devcenter admin devcenter show -n <devcenter name> --query id -o tsv)
+ echo $DEVCID
+ ```
+
+1. Create project in dev center:
+
+ ```azurecli
+ az devcenter admin project create -n <project name> \
+ --description "My first project." \
+ --dev-center-id $DEVCID
+ ```
+
+1. Confirm that the project was successfully created:
+
+ ```azurecli
+ az devcenter admin project show -n <project name>
+ ```
+
+### Assign the managed identity the Owner role on the subscription
+Before you can create environment types, you must give the managed identity that represents your dev center access to the subscriptions where you configure the [project environment types](concept-environments-key-concepts.md#project-environment-types).
+
+In this quickstart, you assign the Owner role to the system-assigned managed identity that you configured previously: [Attach a system-assigned managed identity](quickstart-create-and-configure-devcenter.md#attach-a-system-assigned-managed-identity).
+
+1. Retrieve Subscription ID:
+
+ ```azurecli
+ SUBID=$(az account show -n <name> --query id -o tsv)
+ echo $SUBID
+ ```
+
+1. Retrieve the object ID of the dev center's identity by using the name of the dev center resource:
+
+ ```azurecli
+ OID=$(az ad sp list --display-name <devcenter name> --query [].id -o tsv)
+    echo $OID
+ ```
+
+1. Assign the dev center's identity the Owner role on the subscription:
+
+ ```azurecli
+ az role assignment create --assignee $OID \
+ --role "Owner" \
+ --scope "/subscriptions/$SUBID"
+ ```
+
+## Configure a project
+
+To configure a project, add a [project environment type](how-to-configure-project-environment-types.md):
+
+1. Retrieve the role definition ID for the Owner role on the subscription:
+
+ ```azurecli
+ # Remove group default scope for next command. Leave blank for group.
+ az configure --defaults group=
+
+ ROID=$(az role definition list -n "Owner" --scope /subscriptions/$SUBID --query [].name -o tsv)
+ echo $ROID
+
+ # Set default resource group again
+ az configure --defaults group=<group name>
+ ```
+
+1. Show the allowed environment types for the project:
+
+ ```azurecli
+ az devcenter admin project-allowed-environment-type list --project <project name> --query [].name
+ ```
+
+1. Choose an environment type and create it for the project:
+
+ ```azurecli
+ az devcenter admin project-environment-type create -n <available env type> \
+ --project <project name> \
+ --identity-type "SystemAssigned" \
+ --roles "{\"${ROID}\":{}}" \
+ --deployment-target-id "/subscriptions/${SUBID}" \
+ --status Enabled
+ ```
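The `--roles` argument in the previous command is a JSON object that maps each role definition GUID to an empty object, which is why the inline shell escaping looks dense. If that's hard to read, the payload can be built separately; a small sketch (the GUID shown is a placeholder standing in for the value of `$ROID`):

```python
import json

def roles_argument(role_definition_ids):
    """Build the JSON object --roles expects: {"<role definition GUID>": {}, ...}."""
    return json.dumps({rid: {} for rid in role_definition_ids})

# Placeholder GUID standing in for $ROID:
print(roles_argument(["8e3af657-a8ff-443c-a75c-2fe8c4bcb635"]))
# {"8e3af657-a8ff-443c-a75c-2fe8c4bcb635": {}}
```

Building the string first and passing it through a variable keeps the `az` invocation free of nested escapes.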
+
+> [!NOTE]
+> At least one identity (system-assigned or user-assigned) must be enabled for deployment identity. The identity is used to perform the environment deployment on behalf of the developer. Additionally, the identity attached to the dev center should be [assigned the Owner role](how-to-configure-managed-identity.md) for access to the deployment subscription for each environment type.
+
+## Assign environment access
+
+In this quickstart, you give access to your own ID. Optionally, you can replace the value of `--assignee` for the following commands with another member's object ID.
+
+1. Retrieve your own Object ID:
+
+ ```azurecli
+ MYOID=$(az ad signed-in-user show --query id -o tsv)
+ echo $MYOID
+ ```
+
+1. Assign admin access:
+
+ ```azurecli
+ az role assignment create --assignee $MYOID \
+ --role "DevCenter Project Admin" \
+ --scope "/subscriptions/$SUBID"
+ ```
+
+1. Optionally, you can assign Dev Environment User:
+
+ ```azurecli
+ az role assignment create --assignee $MYOID \
+ --role "Deployment Environments User" \
+ --scope "/subscriptions/$SUBID"
+ ```
++
+> [!NOTE]
+> Only a user who has the [Deployment Environments User](how-to-configure-deployment-environments-user.md) role, the [DevCenter Project Admin](how-to-configure-project-admin.md) role, or a built-in role that has appropriate permissions can create an environment.
+
+## Next steps
+
+In this quickstart, you created a project and granted project access to your development team. To learn about how your development team members can create environments, advance to the next quickstart.
+
+> [!div class="nextstepaction"]
+> [Create and access an environment with Azure CLI](how-to-create-access-environments.md)
dev-box How To Configure Dev Box Azure Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-dev-box-azure-diagnostic-logs.md
+
+ Title: Configure Azure Diagnostic Logs
+
+description: Learn how to use Azure diagnostic logs to see an audit history for your dev center.
+Last updated: 04/28/2023
+# Configure Azure diagnostic logs for a dev center
+
+With Azure diagnostic logs for DevCenter, you can view audit logs for dataplane operations in your dev center. These logs can be routed to any of the following destinations:
+
+* Azure Storage account
+* Log Analytics workspace
+
+This feature is available on all dev centers.
+
+Diagnostic logs allow you to export basic usage information from your dev center to different kinds of destinations so that you can consume it in a customized way. The dataplane audit logs expose information about CRUD operations for dev boxes within your dev center, including, for example, start and stop commands executed on dev boxes. Here are some ways you can export this data:
+
+* Export data to blob storage and export it to CSV.
+* Export data to Azure Monitor logs, and view and query the data in your own Log Analytics workspace.
+
+A dev center is required for the following steps.
+
+## Enable logging with the Azure portal
+
+Follow these steps to enable logging for your Azure DevCenter resource:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the Azure portal, navigate to **All resources** -> **your-devcenter**
+
+3. Select **Diagnostics settings** in the **Monitoring** section.
+
+4. On the page that opens, select **Add diagnostic setting**.
++
+### Enable logging with Azure Storage
+
+To use a storage account to store the logs, follow these steps:
+
+ >[!NOTE]
+ >A storage account in the same region as your dev center is required to complete these steps. Refer to: **[Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json)** for more information.
+
+1. For **Diagnostic setting name**, enter a name for your diagnostic log settings.
+
+2. Select **Archive to a storage account**, then select **Dataplane audit logs**.
+
+3. For **Retention (days)**, choose the number of retention days. A retention of zero days stores the logs indefinitely.
+
+4. Select the subscription and storage account for the logs.
+
+5. Select **Save**.
+
+### Send to Log Analytics
+
+To use Log Analytics for the logs, follow these steps:
+
+>[!NOTE]
+>A log analytics workspace is required to complete these steps. Refer to: **[Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md)** for more information.
+
+1. For **Diagnostic setting name**, enter a name for your diagnostic log settings.
+
+2. Select **Send to Log Analytics**, then select **Dataplane audit logs**.
+
+3. Select the subscription and Log Analytics workspace for the logs.
+
+4. Select **Save**.
+
+## Enable logging with PowerShell
+
+The following example shows how to enable diagnostic logs via the Azure PowerShell Cmdlets.
++
+### Enable diagnostic logs in a storage account
+
+1. Sign in to Azure PowerShell:
+
+ ```azurepowershell-interactive
+ Connect-AzAccount
+ ```
+
+2. To enable Diagnostic Logs in a storage account, enter these commands. Replace the variables with your values:
+
+ ```azurepowershell-interactive
+ $rg = <your-resource-group-name>
+ $devcenterid = <your-devcenter-ARM-resource-id>
+ $storageacctid = <your-storage-account-resource-id>
+ $diagname = <your-diagnostic-setting-name>
+
+ $log = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category DataplaneAuditEvent -RetentionPolicyDay 7 -RetentionPolicyEnabled $true
+
+ New-AzDiagnosticSetting -Name $diagname -ResourceId $devcenterid -StorageAccountId $storageacctid -Log $log
+ ```
+
+### Enable diagnostics logs for Log Analytics workspace
+
+1. Sign in to Azure PowerShell:
+
+ ```azurepowershell-interactive
+ Connect-AzAccount
+ ```
+2. To enable Diagnostic Logs for a Log Analytics workspace, enter these commands. Replace the variables with your values:
+
+ ```azurepowershell-interactive
+ $rg = <your-resource-group-name>
+ $devcenterid = <your-devcenter-ARM-resource-id>
+ $workspaceid = <your-log-analytics-workspace-resource-id>
+ $diagname = <your-diagnostic-setting-name>
+
+ $log = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category DataplaneAuditEvent -RetentionPolicyDay 7 -RetentionPolicyEnabled $true
+
+ New-AzDiagnosticSetting -Name $diagname -ResourceId $devcenterid -WorkspaceId $workspaceid -Log $log
+ ```
+
+## Analyzing Logs
+This section describes existing tables for DevCenter diagnostic logs and how to query them.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Common and service-specific schemas for Azure resource logs](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema).
+
+DevCenter stores data in the following tables.
+
+| Table | Description |
+|:|:|
+| DevCenterDiagnosticLogs | Table used to store dataplane request/response information on dev box or environments within the dev center. |
++
+### Sample Kusto Queries
+After enabling diagnostic settings on your dev center, you should be able to view audit logs for the tables within a Log Analytics workspace.
+
+Here are some queries that you can enter into Log search to help you monitor your dev boxes.
+
+To query for all data-plane logs from DevCenter:
+
+```kusto
+DevCenterDiagnosticLogs
+```
+
+To query for a filtered list of data-plane logs, specific to a single devbox:
+
+```kusto
+DevCenterDiagnosticLogs
+| where TargetResourceId contains "<devbox-name>"
+```
+
+To generate a chart for data-plane logs, grouped by operation result status:
+
+```kusto
+DevCenterDiagnosticLogs
+| summarize count() by OperationResult
+| render piechart
+```
+
+These examples are just a small sample of the rich queries that can be performed in Monitor using the Kusto Query Language. For more information, see [samples for Kusto queries](/azure/data-explorer/kusto/query/samples?pivots=azuremonitor).
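If you export the same records out of Log Analytics, the `summarize count() by OperationResult` aggregation above maps to a simple client-side count; a sketch over hand-made illustrative records (not captured output):

```python
from collections import Counter

# Hand-made illustrative records; only OperationResult matters for this aggregation.
records = [
    {"OperationResult": "Succeeded"},
    {"OperationResult": "Succeeded"},
    {"OperationResult": "Failed"},
]

# Equivalent of: DevCenterDiagnosticLogs | summarize count() by OperationResult
by_result = Counter(r["OperationResult"] for r in records)
print(dict(by_result))  # {'Succeeded': 2, 'Failed': 1}
```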
+
+## Next steps
+
+To learn more about Azure logs, see the following articles:
+
+* [Azure Diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md)
+* [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md)
+* [Azure Log Analytics REST API](/rest/api/loganalytics)
dev-box Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/monitor-reference.md
+
+ Title: DevCenter Diagnostic Logs Reference
+
+description: Reference for the schema for Dev Center Diagnostic logs
+Last updated: 04/28/2023
+# Monitoring Microsoft DevCenter data reference
+
+This article provides a reference for log and metric data collected to analyze the performance and availability of resources within your dev center. See the [How To Monitor DevCenter Diagnostic Logs](how-to-configure-dev-box-azure-diagnostic-logs.md) article for details on collecting and analyzing monitoring data for a dev center.
++
+## Resource logs
+
+The following table lists the properties of resource logs in DevCenter. The resource logs are collected into Azure Monitor Logs or Azure Storage. In Azure Monitor, logs are collected in the **DevCenterDiagnosticLogs** table under the resource provider name of `MICROSOFT.DEVCENTER`.
+
+| Azure Storage field or property | Azure Monitor Logs property | Description |
+| | | |
+| **time** | **TimeGenerated** | The date and time (UTC) when the operation occurred. |
+| **resourceId** | **ResourceId** | The DevCenter resource for which logs are enabled.|
+| **operationName** | **OperationName** | Name of the operation. If the event represents an Azure role-based access control (RBAC) operation, specify the Azure RBAC operation name (for example, `Microsoft.DevCenter/projects/users/devboxes/write`). This name is typically modeled in the form of an Azure Resource Manager operation, even if it's not a documented Resource Manager operation: (`Microsoft.<providerName>/<resourceType>/<subtype>/<Write/Read/Delete/Action>`)|
+| **identity** | **CallerIdentity** | The OID of the caller of the event. |
+| **TargetResourceId** | **ResourceId** | The subresource that pertains to the request. Depending on the operation performed, this value may point to a `devbox` or `environment`.|
+| **resultSignature** | **ResponseCode** | The HTTP status code returned for the operation. |
+| **resultType** | **OperationResult** | Whether the operation failed or succeeded. |
+| **correlationId** | **CorrelationId** | The unique correlation ID for the operation that can be shared with the app team if investigations are necessary.|
+
+For a list of all Azure Monitor log categories and links to associated schemas, see [Azure Monitor Logs categories and schemas](../azure-monitor/essentials/resource-logs-schema.md).
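When a record lands in Azure Storage, it's serialized as JSON using the field names in the first column of the table above. A sketch of remapping such a record to the Azure Monitor Logs column names (the record itself is a hand-made illustration, not captured output):

```python
import json

# Hand-made illustration of a stored record, using the storage field names.
raw = """{
  "time": "2023-04-28T10:00:00Z",
  "operationName": "Microsoft.DevCenter/projects/users/devboxes/write",
  "resultType": "Succeeded",
  "correlationId": "00000000-0000-0000-0000-000000000000"
}"""

sample = json.loads(raw)

# Remap to the Azure Monitor Logs column names from the table.
record = {
    "TimeGenerated": sample["time"],
    "OperationName": sample["operationName"],
    "OperationResult": sample["resultType"],
    "CorrelationId": sample["correlationId"],
}
print(record["OperationResult"])  # Succeeded
```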
+
+## Azure Monitor Logs tables
+
+DevCenter uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of Kusto tables DevCenter uses, see the [Azure Monitor Logs table reference](how-to-configure-dev-box-azure-diagnostic-logs.md) article.
+
+## Next steps
+
+For more information on monitoring DevCenter resources, see the following articles:
+
+- To learn how to configure Azure diagnostic logs for a dev center, see [Configure Azure diagnostic logs for a DevCenter](how-to-configure-dev-box-azure-diagnostic-logs.md).
+- For details on monitoring Azure resources, see [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
digital-twins Resources Migrate From Preview Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/resources-migrate-from-preview-apis.md
description: Migrate from preview API versions of the control plane to the stable GA version Previously updated : 02/08/2023 Last updated : 05/02/2023
# Migrate from Azure Digital Twins preview control plane APIs to the stable GA version
-As of May 2, 2023, the following Azure Digital Twins preview control plane APIs will be retired:
+As of May 2, 2023, the following Azure Digital Twins preview control plane APIs have been retired and are no longer maintained:
* [2020-03-01-preview](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins/preview/2020-03-01-preview) * [2021-06-30-preview](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins/preview/2021-06-30-preview)
event-hubs Schema Registry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-overview.md
Title: Azure Schema Registry in Azure Event Hubs
-description: This article provides an overview of Schema Registry support by Azure Event Hubs.
+ Title: Use Azure Schema Registry from Apache Kafka and other apps
+description: This article provides an overview of Schema Registry support by Azure Event Hubs and how it can be used from your Apache Kafka and other apps.
Last updated 05/04/2022
event-hubs Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/sdks.md
Here's a list of currently available management-specific libraries. None of thes
### Client libraries -- **Azure.Messaging.EventHubs**: It's the current version of the library, conforming to the unified Azure SDK design guidelines and under active development for new features. It supports the NetStandard platform, allowing it to be used by both the full .NET Framework and .NET Core. There's feature parity at a high level with Microsoft.Azure.EventHubs, with details and the client hierarchy taking a different form. This library is the one that we recommend you to use. -- **Microsoft.Azure.EventHubs**: It was the initial library to break out Event Hubs into a dedicated client that isn't bundled with Service Bus. It supports the NetStandard platform, allowing it to be used by both the full .NET Framework and .NET Core. It's still the dominant version of the library with respect to usage and third-party blog entries, extensions, and such. The baseline functionality is the same as the current library, though there are some minor bits that one offers and the other doesn't. It's currently receiving bug fixes and critical updates but is no longer receiving new features.
+- **Azure.Messaging.EventHubs**: It's the current version of the library, conforming to the unified Azure SDK design guidelines and under active development for new features. It supports the .NET Standard platform, allowing it to be used by both the full .NET Framework and .NET Core. There's feature parity at a high level with Microsoft.Azure.EventHubs, with details and the client hierarchy taking a different form. This library is the one that we recommend you use.
+- **Microsoft.Azure.EventHubs**: It was the initial library to break out Event Hubs into a dedicated client that isn't bundled with Service Bus. It supports the .NET Standard 2.0 platform, allowing it to be used by both the full .NET Framework and .NET Core. It's still the dominant version of the library with respect to usage and third-party blog entries, extensions, and such. The baseline functionality is the same as the current library, though there are some minor bits that one offers and the other doesn't. It's currently receiving bug fixes and critical updates but is no longer receiving new features.
- **Windows.Azure.ServiceBus**: It was the original library, back when Event Hubs was still more entangled with Service Bus. It supports only the full .NET Framework, because it predates .NET Core. This library offers some corollary functionality that isn't supported by the newer libraries. ### Management libraries -- **Microsoft.Azure.Management.EventHub**: It's the current GA version of the management library for Event Hubs. It supports the NetStandard platform, allowing it to be used by both the full .NET Framework and .NET Core.
+- **Microsoft.Azure.Management.EventHub**: It's the current GA version of the management library for Event Hubs. It supports the .NET Standard 2.0 platform, allowing it to be used by both the full .NET Framework and .NET Core.
## Next steps
firewall-manager Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/deployment-overview.md
Previously updated : 08/28/2020 Last updated : 05/02/2023 # Azure Firewall Manager deployment overview
-There's more than one way to deploy Azure Firewall Manager, but the following general process is recommended.
+There's more than one way to use Azure Firewall Manager to deploy Azure Firewall, but the following general process is recommended.
+
+To review network architecture options, see [What are the Azure Firewall Manager architecture options?](vhubs-and-vnets.md)
## General deployment process
governance Guest Configuration Baseline Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-linux.md
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Ensure auditd service is enabled<br /><sub>(162)</sub> |Description: The capturing of system events provides system administrators with information to allow them to determine if unauthorized access to their system is occurring. |Install audit package (systemctl enable auditd) | |Run AuditD service<br /><sub>(163)</sub> |Description: The capturing of system events provides system administrators with information to allow them to determine if unauthorized access to their system is occurring. |Run AuditD service (systemctl start auditd) | |Ensure SNMP Server is not enabled<br /><sub>(179)</sub> |Description: The SNMP server can communicate using SNMP v1, which transmits data in the clear and does not require authentication to execute commands. Unless absolutely necessary, it's recommended that the SNMP service not be used. If SNMP is required the server should be configured to disallow SNMP v1. |Run one of the following commands to disable `snmpd`: ``` # chkconfig snmpd off ``` ``` # systemctl disable snmpd ``` ``` # update-rc.d snmpd disable ``` |
-|Ensure rsync service is not enabled<br /><sub>(181)</sub> |Description: The `rsyncd` service presents a security risk as it uses unencrypted protocols for communication. |Run one of the following commands to disable `rsyncd` : `chkconfig rsyncd off`, `systemctl disable rsyncd`, `update-rc.d rsyncd disable` or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-rsysnc' |
+|Ensure rsync service is not enabled<br /><sub>(181)</sub> |Description: The `rsyncd` service presents a security risk as it uses unencrypted protocols for communication. |Run one of the following commands to disable `rsyncd` : `chkconfig rsyncd off`, `systemctl disable rsyncd`, `update-rc.d rsyncd disable` or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-rsync' |
|Ensure NIS server is not enabled<br /><sub>(182)</sub> |Description: The NIS service is an inherently insecure system that has been vulnerable to DOS attacks, buffer overflows and has poor authentication for querying NIS maps. NIS is generally replaced by protocols like Lightweight Directory Access Protocol (LDAP). It's recommended that the service be disabled and more secure services be used |Run one of the following commands to disable `ypserv` : ``` # chkconfig ypserv off ``` ``` # systemctl disable ypserv ``` ``` # update-rc.d ypserv disable ``` | |Ensure rsh client is not installed<br /><sub>(183)</sub> |Description: These legacy clients contain numerous security exposures and have been replaced with the more secure SSH package. Even if the server is removed, it's best to ensure the clients are also removed to prevent users from inadvertently attempting to use these commands and therefore exposing their credentials. Note that removing the `rsh `package removes the clients for `rsh`, `rcp `and `rlogin`. |Uninstall `rsh` using the appropriate package manager or manual installation: ``` yum remove rsh ``` ``` apt-get remove rsh ``` ``` zypper remove rsh ``` | |Disable SMB V1 with Samba<br /><sub>(185)</sub> |Description: SMB v1 has well-known, serious vulnerabilities and does not encrypt data in transit. If it must be used for business reasons, it's strongly recommended that additional steps be taken to mitigate the risks inherent to this protocol. |If Samba is not running, remove package, otherwise there should be a line in the [global] section of /etc/samba/smb.conf: min protocol = SMB2 or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-smb-min-version |
healthcare-apis Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/authentication-authorization.md
Azure Health Data Services typically expect a [JSON Web Token](https://en.wikipe
* Header * Payload (the claims)
-* Signature, as shown in the image below. For more information, see [Azure access tokens](../active-directory/develop/active-directory-configurable-token-lifetimes.md).
+* Signature, as shown in the image below. For more information, see [Azure access tokens](../active-directory/develop/configurable-token-lifetimes.md).
[ ![JSON web token signature.](media/azure-access-token.png) ](media/azure-access-token.png#lightbox)
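The three dot-separated parts of a JWT can be illustrated with a short sketch. This is a toy example using a hand-made token, not a real Azure access token; a real token must also be validated (signature, issuer, audience) before its claims are trusted:

```python
import base64
import json

# Illustrative only: decode the base64url-encoded header and payload
# segments of a JWT. Padding is restored before decoding because JWT
# segments are emitted without trailing '=' characters.
def decode_segment(segment: str) -> dict:
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a minimal hand-made token (header.payload.signature):
header = base64.urlsafe_b64encode(json.dumps({"alg": "RS256", "typ": "JWT"}).encode()).decode().rstrip("=")
payload = base64.urlsafe_b64encode(json.dumps({"aud": "https://myhealthservice"}).encode()).decode().rstrip("=")
token = f"{header}.{payload}.fake-signature"

h, p, s = token.split(".")
print(decode_segment(h)["alg"])   # RS256
print(decode_segment(p)["aud"])   # https://myhealthservice
```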
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/security-features.md
Azure Private Link Service enables you to access Azure Key Vault and Azure hoste
- Despite known vulnerabilities in TLS protocol, there is no known attack that would allow a malicious agent to extract any information from your key vault when the attacker initiates a connection with a TLS version that has vulnerabilities. The attacker would still need to authenticate and authorize itself, and as long as legitimate clients always connect with recent TLS versions, there is no way that credentials could have been leaked from vulnerabilities at old TLS versions. > [!NOTE]
-> For Azure Key Vault, ensure that the application accessing the Keyvault service should be running on a platform that supports TLS 1.2 or recent version. If the application is dependent on .Net framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at OS level and for .Net framework. To meet with compliance obligations and to improve security posture, Key Vault connections via TLS 1.0 & 1.1 are considered a security risk, and any connections using old TLS protocols will be disallowed in 2023. You can monitor TLS version used by clients by monitoring Key Vault logs with sample Kusto query [here](monitor-key-vault.md#sample-kusto-queries).
+> For Azure Key Vault, ensure that the application accessing the Key Vault service runs on a platform that supports TLS 1.2 or a more recent version. If the application depends on .NET Framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at the OS level and for .NET Framework. To meet compliance obligations and to improve your security posture, Key Vault connections via TLS 1.0 & 1.1 are considered a security risk, and any connections using old TLS protocols will be disallowed in 2023. You can monitor the TLS version used by clients by monitoring Key Vault logs with the sample Kusto query [here](monitor-key-vault.md#sample-kusto-queries).
> [!WARNING] > TLS 1.0 and 1.1 are deprecated by Azure Active Directory, and tokens to access key vault may no longer be issued for users or services requesting them with deprecated protocols. This may lead to loss of access to key vaults. More information on Azure AD TLS support can be found in [Azure AD TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment/#why-this-change-is-being-made)
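As a sketch of the client-side requirement above, an application can pin its TLS floor to 1.2 so connections over deprecated protocol versions are refused (shown here with Python's standard `ssl` module as an example platform):

```python
import ssl

# Illustrative only: create a client-side TLS context that refuses
# anything older than TLS 1.2, matching the Key Vault requirement.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# TLS 1.0/1.1 handshakes will now fail; only 1.2+ is negotiated.
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```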
key-vault Tls Offload Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/tls-offload-library.md
The TLS Offload Library includes a key creation tool, mhsm_p11_create_key. Runni
The key creation tool requires a service principal, which is assigned to the "Managed HSM Crypto User" role at the "/keys" scope.
-The key creation tool reads the service principal credentials from the environment variables MHSM_CLIENT_ID and MHSM_CLIENT_SECRET.
+The key creation tool reads the service principal credentials from the environment variables MHSM_CLIENT_ID and MHSM_CLIENT_SECRET:
- MHSM_CLIENT_ID – must be set to the service principal's application (client) ID
- MHSM_CLIENT_SECRET – must be set to the service principal's password (client secret)
+For Managed Identities, the environment variables above are not needed.
+- Use the `--identity` argument to enable managed identity with the mhsm_p11_create_key tool.
+- The `client_id` of a user-assigned managed identity should be specified in the MHSM configuration file (mhsm-pkcs11.conf). If the `client_id` of a user-assigned managed identity isn't provided, the tool assumes a system-assigned managed identity.
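The credential lookup described above can be sketched as follows. This is an illustrative model, not the tool's actual implementation; the variable values are hypothetical placeholders:

```python
import os

# Illustrative only: resolve service principal credentials from the
# environment, falling back to managed identity when either is absent.
os.environ["MHSM_CLIENT_ID"] = "00000000-0000-0000-0000-000000000000"  # hypothetical value
os.environ["MHSM_CLIENT_SECRET"] = "example-secret"                    # hypothetical value

client_id = os.environ.get("MHSM_CLIENT_ID")
client_secret = os.environ.get("MHSM_CLIENT_SECRET")
use_managed_identity = not (client_id and client_secret)
print(use_managed_identity)  # False: service principal credentials are set
```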
+ The key creation tool randomly generates a name for the key at the time of creation. The full Azure Key Vault key ID and the key name are printed to the console for your convenience. ```azurepowershell
For more information on Azure Managed HSM local RBAC, see:
- [Azure Managed HSM local RBAC built-in roles](built-in-roles.md) - [Azure Managed HSM role management](role-management.md)
-The following section describes different approaches to implement access control for the TLS Offload Library service principal.
+The following section describes different approaches to implement access control for the TLS Offload Library service principal and Managed Identity.
#### TLS Offload service principal
az keyvault role assignment create --hsm-name ContosoMHSM \
--assignee TLSOffloadServicePrincipal@contoso.com \ --scope /keys ```
+For Managed Identities, specify the command arguments as follows:
+
+```azurecli
+az keyvault role assignment create --hsm-name ContosoMHSM \
+ --role "Managed HSM Crypto User" \
+ --assignee-object-id <object_id> \
+ --assignee-principal-type MSI \
+ --scope /keys
+```
### Granular approach
az keyvault role assignment create --hsm-name ContosoMHSM \
--assignee TLSOffloadServicePrincipal@contoso.com \ --scope /keys/p11-6a2155dc40c94367a0f97ab452dc216f ```
+## Connection Caching
+
+To improve the performance of Sign calls to the Managed HSM service, the TLS Offload Library caches its TLS connections to the Managed HSM servers. By default, it caches up to 20 TLS connections.
+Connection Caching can be controlled through the MHSM configuration file (mhsm-pkcs11.conf).
+
+```json
+"ConnectionCache": {
+ "Disable": false,
+ "MaxConnections": 20
+}
+```
+
+**Disable**
+
+If this value is `true`, Connection Caching is disabled. Caching is enabled by default.
+
+**MaxConnections**
+
+Specifies the maximum number of connections to cache. The maximum connection limit should be configured based on the number of concurrent PKCS11 sessions the application uses. Applications typically create a pool of PKCS11 sessions and use them from a pool of threads to generate signing requests in parallel. MaxConnections should match the number of concurrent signing requests generated by the application.
+
+The signing requests per second (RPS) depend on the number of concurrent requests and the number of connections cached. Specifying a higher number, or even the default limit, won't improve the signing RPS if the number of concurrent PKCS11 signing requests is lower than this limit.
+The maximum number of concurrent connections needed to achieve burst mode on a Standard B1 HSM pool is about 30, depending on the instance type, but you should try different numbers to find the optimal number of concurrent connections.
+
+Refer to your application documentation or contact your application vendor to learn more about how the application uses the PKCS11 library.
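The sizing guidance above can be sketched numerically: the effective signing parallelism is bounded by both the application's concurrent requests and the cached connection limit. This is an illustrative model only, not part of the library:

```python
# Illustrative only: effective parallelism is the smaller of the app's
# concurrent PKCS11 signing requests and the cached connection limit.
MAX_CONNECTIONS = 20  # mirrors "MaxConnections" in mhsm-pkcs11.conf

def effective_parallelism(concurrent_requests: int) -> int:
    return min(concurrent_requests, MAX_CONNECTIONS)

print(effective_parallelism(8))   # 8: raising MaxConnections wouldn't help here
print(effective_parallelism(40))  # 20: capped by MaxConnections
```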
## Using the TLS Offload Library
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide.md
For information on VM sizes and their cost, see the [Azure Lab Services Pricing]
## RBAC roles
-By using [Azure role-based access control (RBAC)](../role-based-access-control/overview.md) for access to lab plans and labs, you can assign the following roles:
+Azure Lab Services provides built-in Azure role-based access control (Azure RBAC) for common management scenarios. An individual who has a profile in Azure Active Directory can assign these Azure roles to users, groups, service principals, or managed identities to grant or deny access to resources and operations on Azure Lab Services resources.
-- **Owner**-
- An administrator who creates a lab plan is automatically assigned the lab plan Owner role. The Owner role can:
-
- - Change the lab plan settings.
- - Grant other administrators access to the lab plan as an Owner or Contributor.
- - Grant educators access to labs as a Creator, Owner, or Contributor.
- - Create and manage all labs in the lab plan.
--- **Contributor**-
- An administrator who is assigned the Contributor role can:
-
- - Change the lab plan settings.
- - Create and manage all labs in the lab plan.
-
- However, the Contributor *can't* grant other users access to either lab plans or labs.
--- **Lab Creator**-
- When set on the lab plan, this role enables the user account to create labs from the lab plan. The user account can also see existing labs that are in the same resource group as the lab plan. When applied to a resource group, this role enables the user to view existing lab and create new labs. They'll have full control over any labs they create as they're assigned as Owner to those created labs. For more information, see [Add a user to the Lab Creator role](./quick-create-resources.md#add-a-user-to-the-lab-creator-role).
--- **Lab Contributor**-
- When applied to an existing lab, this role enables the user to fully manage the lab. When applied to a resource group, this role enables the user account to fully manage existing labs and create new labs in that resource group.
-
- A key difference between the lab Owner and Contributor roles is that only an Owner can grant other users access to manage a lab. A Contributor *can't* grant other users access to manage a lab.
--- **Lab Operator**-
- When applied to a resource group or a lab, this role enables the user to have limited ability to manage existing labs. This role won't give the user the ability to create new labs. In an existing lab, the user can manage users, adjust individual users' quota, manage schedules, and start/stop VMs. The user account will be able to publish a lab. The user won't have the ability to change lab capacity or change quota at the lab level. The user won't be able to change the template title or description.
--- **Lab Assistant**-
- When applied to a resource group or a lab, this role enables the user to view an existing lab. Lab assistants can only perform actions on the lab VMs (reset, start, stop, connect) and send invitations to the lab. They don't have the ability to change a lab, create a lab, publish a lab, change lab capacity, or manage lab quota, individual quota nor schedules.
--- **Lab Services Contributor**-
- When applied to a resource group, enables the user to fully control all Lab Services scenarios in that resource group.
--- **Lab Services Reader**-
- When applied to a resource group, enables the user to view, but not change, all lab plans and lab resources. External resources like image galleries and virtual networks that may be connected to a lab plan aren't included.
-
-When you're assigning roles, it helps to follow these tips:
--- Ordinarily, only administrators should be members of a lab plan Owner or Contributor role. The lab plan might have more than one Owner or Contributor.-- To give educators the ability to create new labs and manage the labs that they create, you need only assign them the Lab Creator role.-- To give educators the ability to manage specific labs, but *not* the ability to create new labs, assign them either the Owner or Contributor role for each lab that they'll manage. For example, you might want to allow a professor and a teaching assistant to co-own a lab.--
-For more detail about the permissions assigned to each role, see [Azure built-in roles](../role-based-access-control/built-in-roles.md#lab-assistant)
+Learn more about [Azure role-based access control in Azure Lab Services](./concept-lab-services-role-based-access-control.md).
## Content filtering
lab-services Concept Lab Services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-lab-services-role-based-access-control.md
+
+ Title: Azure role-based access control
+
+description: Learn how Azure Lab Services provides protection with Azure role-based access control (Azure RBAC) integration.
+++++ Last updated : 04/20/2023++
+# Azure role-based access control in Azure Lab Services
+
+Azure Lab Services provides built-in Azure role-based access control (Azure RBAC) for common management scenarios in Azure Lab Services. An individual who has a profile in Azure Active Directory can assign these Azure roles to users, groups, service principals, or managed identities to grant or deny access to resources and operations on Azure Lab Services resources. This article describes the different built-in roles that Azure Lab Services supports.
+
+Azure role-based access control (RBAC) is an authorization system built on [Azure Resource Manager](/azure/azure-resource-manager/management/overview) that provides fine-grained access management of Azure resources.
+
+Azure RBAC specifies built-in role definitions that outline the permissions to be applied. You assign a user or group this role definition via a role assignment for a particular scope. The scope can be an individual resource, a resource group, or across the subscription. In the next section, you learn which [built-in roles](#built-in-roles) Azure Lab Services supports.
+
+For more information, see [What is Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview)?
+
+> [!NOTE]
+> When you make role assignment changes, it can take a few minutes for these updates to propagate.
+
+## Built-in roles
+
+In this article, the Azure built-in roles are logically grouped into two role types, based on their scope of influence:
+
+- Administrator roles: influence permissions for lab plans and labs
+- Lab management roles: influence permissions for labs
+
+The following are the built-in roles supported by Azure Lab Services:
+
+| Role type | Built-in role | Description |
+| | - | -- |
+| Administrator | Owner | Grant full control to create/manage lab plans and labs, and grant permissions to other users. Learn more about the [Owner role](#owner-role). |
+| Administrator | Contributor | Grant full control to create/manage lab plans and labs, except for assigning roles to other users. Learn more about the [Contributor role](#contributor-role). |
+| Administrator | Lab Services Contributor | Grant the same permissions as the Owner role, except for assigning roles and changing or deleting other users' labs. Learn more about the [Lab Services Contributor role](#lab-services-contributor-role). |
+| Lab management | Lab Creator | Grant permission to create labs and have full control over the labs that they create. Learn more about the [Lab Creator role](#lab-creator-role). |
+| Lab management | Lab Contributor | Grant permission to help manage an existing lab, but not create new labs. Learn more about the [Lab Contributor role](#lab-contributor-role). |
+| Lab management | Lab Assistant | Grant permission to view an existing lab. Can also start, stop, or reset any VM in the lab. Learn more about the [Lab Assistant role](#lab-assistant-role). |
+| Lab management | Lab Services Reader | Grant permission to view existing labs. Learn more about the [Lab Services Reader role](#lab-services-reader-role). |
+
+## Role assignment scope
+
+In Azure RBAC, *scope* is the set of resources that access applies to. When you assign a role, it's important to understand scope so that you grant just the access that is needed.
+
+In Azure, you can specify a scope at four levels: management group, subscription, resource group, and resource. Scopes are structured in a parent-child relationship. Each level of hierarchy makes the scope more specific. You can assign roles at any of these levels of scope. The level you select determines how widely the role is applied. Lower levels inherit role permissions from higher levels. Learn more about [scope for Azure RBAC](/azure/role-based-access-control/scope-overview).
+
+For Azure Lab Services, consider the following scopes:
+
+| Scope | Description |
+| -- | -- |
+| Subscription | Used to manage billing and security for all Azure resources and services. Typically, only administrators have subscription-level access because this role assignment grants access to all resources in the subscription. |
+| Resource group | A logical container for grouping together resources. Role assignment for the resource group grants permission to the resource group and all resources within it, such as labs and lab plans. |
+| Lab plan | An Azure resource used to apply common configuration settings when you create a lab. Role assignment for the lab plan grants permission only to a specific lab plan. |
+| Lab | An Azure resource used to apply common configuration settings for creating and running lab virtual machines. Role assignment for the lab grants permission only to a specific lab. |
++
+> [!IMPORTANT]
+> In Azure Lab Services, lab plans and labs are *sibling* resources to each other. As a result, labs don't inherit any role assignments from the *lab plan*. However, role assignments from the *resource group* are inherited by lab plans and labs in that resource group.
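The inheritance rule above can be sketched with a toy model in which each scope inherits role assignments from its parent, and a lab's parent is the resource group rather than the lab plan. The names and helper function are illustrative assumptions, not an Azure API:

```python
# Illustrative toy model of role inheritance in Azure Lab Services.
# Lab plans and labs are siblings: both inherit from the resource group,
# but a lab inherits nothing from a lab plan.
parents = {
    "lab": "resource group",
    "lab plan": "resource group",
    "resource group": "subscription",
    "subscription": None,
}

def effective_roles(assignments: dict, scope: str) -> set:
    roles = set()
    while scope is not None:
        roles |= assignments.get(scope, set())
        scope = parents[scope]
    return roles

assignments = {"resource group": {"Lab Creator"}, "lab plan": {"Owner"}}
print(sorted(effective_roles(assignments, "lab")))       # ['Lab Creator']
print(sorted(effective_roles(assignments, "lab plan")))  # ['Lab Creator', 'Owner']
```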
+
+## Roles for common lab activities
+
+The following table shows common lab activities and the role that's needed for a user to perform that activity.
+
+| Activity | Role type | Role | Scope |
+| -- | | - | -- |
+| Grant permission to create a resource group. A resource group is a logical container in Azure to hold the lab plans and labs. *Before* you can create a lab plan or lab, this resource group needs to exist. | Administrator | [Owner](#owner-role) or [Contributor](#contributor-role) | Subscription |
+| Grant permission to submit a Microsoft support ticket, including to [request capacity](./capacity-limits.md). | Administrator | [Owner](#owner-role), [Contributor](#contributor-role), [Support Request Contributor](/azure/role-based-access-control/built-in-roles#support-request-contributor) | Subscription |
+| Grant permission to: <br/>- Assign roles to other users.<br/>- Create/manage lab plans, labs, and other resources within the resource group.<br/>- Enable/disable marketplace and custom images on a lab plan.<br/>- Attach/detach compute gallery on a lab plan. | Administrator | [Owner](#owner-role) | Resource group |
+| Grant permission to: <br/>- Create/manage lab plans, labs, and other resources within the resource group.<br/>- Enable or disable Azure Marketplace and custom images on a lab plan.<br/><br/>However, *not* the ability to assign roles to other users. | Administrator | [Contributor](#contributor-role) | Resource group |
+| Grant permission to create or manage your own labs for *all* lab plans within a resource group. | Lab management | [Lab Creator](#lab-creator-role) | Resource group |
+| Grant permission to create or manage your own labs for a specific lab plan. | Lab management | [Lab Creator](#lab-creator-role) | Lab plan |
+| Grant permission to co-manage a lab, but *not* the ability to create labs. | Lab management | [Lab Contributor](#lab-contributor-role) | Lab |
+| Grant permission to only start/stop/reset VMs for *all* labs within a resource group. | Lab management | [Lab Assistant](#lab-assistant-role) | Resource group |
+| Grant permission to only start/stop/reset VMs for a specific lab. | Lab management | [Lab Assistant](#lab-assistant-role) | Lab |
+
+> [!IMPORTANT]
+> An organization's subscription is used to manage billing and security for all Azure resources and services. You can assign the Owner or Contributor role on the [subscription](./administrator-guide.md#subscription). Typically, only administrators have subscription-level access because this includes full access to all resources in the subscription.
+
+## Administrator roles
+
+To grant users permission to manage Azure Lab Services within your organization's subscription, you should assign them the [Owner](#owner-role), [Contributor](#contributor-role), or the [Lab Services Contributor](#lab-services-contributor-role) role.
+
+Assign these roles on the *resource group*. The lab plans and labs within the resource group inherit these role assignments.
++
+The following table compares the different administrator roles when they're assigned on the resource group.
+
+| Lab plan/Lab | Activity | Owner | Contributor | Lab Services Contributor |
+| | -- | :--: | :--: | :: |
+| Lab plan | View all lab plans within the resource group | Yes | Yes | Yes |
+| Lab plan | Create, change or delete all lab plans within the resource group | Yes | Yes | Yes |
+| Lab plan | Assign roles to lab plans within the resource group | Yes | No | No |
+| Lab | Create labs within the resource group** | Yes | Yes | Yes |
+| Lab | View other users' labs within the resource group | Yes | Yes | Yes |
+| Lab | Change or delete other users' labs within the resource group | Yes | Yes | No |
+| Lab | Assign roles to other users' labs within the resource group | Yes | No | No |
+
+** Users are automatically granted permission to view, change settings, delete, and assign roles for the labs that they create.
+
+### Owner role
+
+Assign the Owner role to give a user full control to create or manage lab plans and labs, and grant permissions to other users. When a user has the Owner role on the resource group, they can do the following activities across all resources within the resource group:
+
+- Assign roles to administrators, so they can manage lab-related resources.
+- Assign roles to lab managers, so they can create and manage labs.
+- Create lab plans and labs.
+- View, delete, and change settings for all lab plans, including attaching or detaching the compute gallery and enabling or disabling Azure Marketplace and custom images on lab plans.
+- View, delete, and change settings for all labs.
+
+> [!CAUTION]
+> When you assign the Owner or Contributor role on the resource group, then these permissions also apply to non-lab related resources that exist in the resource group. For example, resources such as virtual networks, storage accounts, compute galleries, and more.
+
+### Contributor role
+
+Assign the Contributor role to give a user full control to create or manage lab plans and labs within a resource group. The Contributor role has the same permissions as the Owner role, *except* for:
+
+- Performing role assignments
+
+### Lab Services Contributor role
+
+The Lab Services Contributor role is the most restrictive of the administrator roles. Assign the Lab Services Contributor role to enable the same activities as the Owner role, *except* for:
+
+- Performing role assignments
+- Changing or deleting other users' labs
+
+> [!NOTE]
+> The Lab Services Contributor role doesn't allow changes to resources that are unrelated to Azure Lab Services. In contrast, the *Contributor* role allows changes to all Azure resources within the resource group.
+
+## Lab management roles
+
+Use the following roles to grant users permissions to create and manage labs:
+
+- Lab Creator
+- Lab Contributor
+- Lab Assistant
+- Lab Services Reader
+
+These lab management roles only grant permission to view lab plans. These roles don't allow creating, changing, deleting, or assigning roles to lab plans. In addition, users with these roles can't attach or detach a compute gallery or enable or disable virtual machine images.
+
+### Lab Creator role
+
+Assign the Lab Creator role to give a user permission to create labs and have full control over the labs that they create. For example, they can change their labs' settings, delete their labs, and even grant other users permission to their labs.
+
+Assign the Lab Creator role on either the *resource group or lab plan*.
++
+The following table compares the Lab Creator role assignment for the resource group or lab plan.
+
+| Activity | Resource group | Lab plan |
+| -- | :--: | :--: |
+| Create labs within the resource group** | Yes | Yes |
+| View labs they created | Yes | Yes |
+| View other users' labs within the resource group | Yes | No |
+| Change or delete labs the user created | Yes | Yes |
+| Change or delete other users' labs within the resource group | No | No |
+| Assign roles to other users' labs within the resource group | No | No |
+
+** Users are automatically granted permission to view, change settings, delete, and assign roles for the labs that they create.
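
The scope comparison above can be sketched as a small lookup table. This is an illustrative model only (the names `LAB_CREATOR_PERMISSIONS` and `is_allowed` are hypothetical, not part of any Azure SDK):

```python
# Hypothetical sketch of the Lab Creator comparison table above.
# Each activity maps to (allowed at resource group scope, allowed at lab plan scope).
LAB_CREATOR_PERMISSIONS = {
    "create labs": (True, True),
    "view own labs": (True, True),
    "view other users' labs": (True, False),
    "change or delete own labs": (True, True),
    "change or delete other users' labs": (False, False),
    "assign roles to other users' labs": (False, False),
}

def is_allowed(activity: str, scope: str) -> bool:
    """Return whether a Lab Creator may perform the activity at the given scope."""
    rg_allowed, plan_allowed = LAB_CREATOR_PERMISSIONS[activity]
    return rg_allowed if scope == "resource group" else plan_allowed

# A Lab Creator assigned on a lab plan can't see other users' labs:
assert is_allowed("view other users' labs", "resource group")
assert not is_allowed("view other users' labs", "lab plan")
```

This highlights the practical difference between the two scopes: only the resource group assignment grants visibility into labs created by other users.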
+
+### Lab Contributor role
+
+Assign the Lab Contributor role to give a user permission to help manage an existing lab.
+
+Assign the Lab Contributor role on the *lab*.
++
+When you assign the Lab Contributor role on the lab, the user can manage the assigned lab. Specifically, the user:
+
+- Can view, change all settings for, or delete the assigned lab.
+- Can't view other users' labs.
+- Can't create new labs.
+
+### Lab Assistant role
+
+Assign the Lab Assistant role to grant a user permission to view a lab, and start, stop, and reset lab virtual machines for the lab.
+
+Assign the Lab Assistant role on the *resource group or lab*.
++
+When you assign the Lab Assistant role on the resource group, the user:
+
+- Can view all labs within the resource group and start, stop, or reset lab virtual machines for each lab.
+- Can't delete or make any other changes to the labs.
+
+When you assign the Lab Assistant role on the lab, the user:
+
+- Can view the assigned lab and start, stop, or reset lab virtual machines.
+- Can't delete or make any other changes to the lab.
+- Can't create new labs.
+
+When you have the Lab Assistant role, make sure to choose the **All labs** filter in the Azure Lab Services website to view the other labs you're granted access to.
+
+### Lab Services Reader role
+
+Assign the Lab Services Reader role to grant a user permission to view existing labs. The user can't make any changes to existing labs.
+
+Assign the Lab Services Reader role on the *resource group or lab*.
++
+When you assign the Lab Services Reader role on the resource group, the user can:
+
+- View all labs within the resource group.
+
+When you assign the Lab Services Reader role on the lab, the user can:
+
+- Only view the specific lab.
+
+## Identity and access management (IAM)
+
+The **Access control (IAM)** page in the Azure portal is used to configure Azure role-based access control on Azure Lab Services resources. You can use built-in roles for individuals and groups in Active Directory. The following screenshot shows Active Directory integration (Azure RBAC) using access control (IAM) in the Azure portal:
++
+For detailed steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
+
+## Resource group and lab plan structure
+
+Your organization should invest time up front to plan the structure of resource groups and lab plans. This is especially important when you assign roles on the resource group because it also applies permissions to all resources in the resource group.
+
+To ensure that users are only granted permission to the appropriate resources:
+
+- Create resource groups that only contain lab-related resources.
+
+- Organize lab plans and labs into separate resource groups according to the users that should have access.
+
+For example, you might create separate resource groups for different departments to isolate each department's lab resources. Lab creators in one department can then be granted permissions on the resource group, which only grants them access to the lab resources of their department.
+
+> [!IMPORTANT]
+> Plan the resource group and lab plan structure upfront because it's not possible to move lab plans or labs to a different resource group after they're created.
+
+### Access to multiple resource groups
+
+You can grant users access to multiple resource groups. In the [Azure Lab Services website](https://labs.azure.com), the user can then choose from the list of resource groups to view their labs.
++
+### Access to multiple lab plans
+
+You can grant users access to multiple lab plans. For example, you can assign the Lab Creator role to a user on a resource group that contains more than one lab plan. The user can then choose from the list of lab plans when creating a new lab.
++
+## Next steps
+
+- [What is Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview)
+- [Move role assignments from lab accounts to lab plans](./concept-migrate-from-lab-accounts-roles.md)
+- [Understand scope for Azure RBAC](/azure/role-based-access-control/scope-overview)
lab-services Concept Migrate From Lab Accounts Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-migrate-from-lab-accounts-roles.md
+
+ Title: Migrate lab account role assignments
+
+description: Learn how role assignments differ when migrating from lab accounts to lab plans in Azure Lab Services.
+++++ Last updated : 04/20/2023++
+# Migrate lab account role assignments to lab plans in Azure Lab Services
+
+Lab plans have a different Azure management hierarchy than lab accounts in Azure Lab Services. Role assignments on lab accounts behave differently and influence lab permissions. This article discusses the authorization differences between lab accounts and lab plans. Learn how you should update role assignments when transitioning from lab accounts to lab plans.
+
+## Differences between lab accounts and lab plans
+
+Lab accounts are a parent Azure resource to labs. When you assign a role on a lab account, the associated labs automatically inherit this role and permissions.
+
+On the other hand, lab plans and labs are sibling resources in Azure, which means that labs *don't inherit* roles and permissions from the associated lab plan.
+
+For example, assume that you assign the Contributor role for users on the lab account. To achieve the same permissions with a lab plan, you should instead assign the Contributor role on the lab plan's *resource group*. When you assign the role on the resource group, all labs within that resource group are also assigned this role.
+
+## Recommendations
+
+The following table shows recommendations for mapping role assignments from lab accounts to lab plans in Azure Lab Services.
+
+| Role type | Lab account role | Lab account assignment | Lab plan role | Lab plan assignment |
+| | - | - | - | - |
+| Administrator | Owner | Lab account | Owner | Resource group |
+| Administrator | Contributor | Lab account | Contributor | Resource group |
+| Lab management | Lab Creator | Lab account | Lab Creator | Lab plan |
+| Lab management | Owner** | Lab | Owner | Resource group or lab |
+| Lab management | Contributor** | Lab | Lab Contributor | Lab |
+
+** For lab accounts, the lab's Contributor and Owner roles require that you also assign the Reader role on the lab account. For lab plans, you don't have to assign the Reader role on the lab plan or resource group.
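
The recommendations table above can be expressed as a simple mapping. This is an illustrative sketch only (the names `ROLE_MIGRATION_MAP` and `migrate_assignment` are hypothetical, not part of any Azure tool):

```python
# Hypothetical sketch of the recommendations table: map a lab account role
# assignment (role, assigned on) to the suggested lab plan equivalent
# (role, assign on). Purely illustrative; not Azure code.
ROLE_MIGRATION_MAP = {
    ("Owner", "lab account"): ("Owner", "resource group"),
    ("Contributor", "lab account"): ("Contributor", "resource group"),
    ("Lab Creator", "lab account"): ("Lab Creator", "lab plan"),
    ("Owner", "lab"): ("Owner", "resource group or lab"),
    ("Contributor", "lab"): ("Lab Contributor", "lab"),
}

def migrate_assignment(role: str, scope: str) -> tuple:
    """Look up the recommended lab plan role and assignment scope."""
    return ROLE_MIGRATION_MAP[(role, scope)]

assert migrate_assignment("Lab Creator", "lab account") == ("Lab Creator", "lab plan")
assert migrate_assignment("Contributor", "lab") == ("Lab Contributor", "lab")
```

Note how administrator roles move up to the resource group (to compensate for the lost inheritance), while the lab-scoped Contributor role becomes the narrower Lab Contributor role.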
+
+## Next steps
+
+- Learn more about [Azure role-based access control for Azure Lab Services](./concept-lab-services-role-based-access-control.md)
+
+- Learn more about [moving from lab accounts to lab plans](./migrate-to-2022-update.md)
lab-services How To Manage Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-labs.md
Title: Manage labs in Azure Lab Services | Microsoft Docs
+ Title: View and manage labs
description: Learn how to create a lab, configure a lab, view all the labs, or delete a lab. ++++ Previously updated : 01/21/2022 Last updated : 01/21/2023 # Manage labs in Azure Lab Services
This article describes how to create and delete labs. It also shows you how to v
1. Select **Sign in**. Select or enter a **user ID** that is a member of the **Lab Creator** role in the lab plan, and enter password. Azure Lab Services supports organizational accounts and Microsoft accounts. [!INCLUDE [Select a tenant](./includes/multi-tenant-support.md)]
-1. Confirm that you see all the labs in the selected resource group. On the lab's tile, you see the number of virtual machines in the lab and the quota for each user.
- ![All labs](./media/how-to-manage-labs/all-labs.png)
+1. Confirm that you see all the labs in the selected resource group.
+
+ On the lab's tile, you can see the number of virtual machines in the lab and the quota for each user.
+
+ :::image type="content" source="./media/how-to-manage-labs/all-labs.png" alt-text="Screenshot that shows the list of labs in the Azure Lab Services website.":::
+ 1. Use the drop-down list at the top to select a different lab plan. You see labs in the selected lab plan.
+> [!NOTE]
+> If you're granted access but can't view other people's labs, make sure to select **All labs** instead of **My labs** in the **Show** filter.
+ ## Delete a lab 1. On the tile for the lab, select three dots (...) in the corner, and then select **Delete**.
- ![Delete button](./media/how-to-manage-labs/delete-button.png)
+ :::image type="content" source="./media/how-to-manage-labs/delete-button.png" alt-text="Screenshot that shows the list of labs in the Azure Lab Services website, highlighting the Delete button.":::
+ 1. On the **Delete lab** dialog box, select **Delete** to continue with the deletion. ## Switch to another lab To switch to another lab from the current, select the drop-down list of labs at the top.
-![Select the lab from drop-down list at the top](./media/how-to-manage-labs/switch-lab.png)
To switch to a different group, select the left drop-down and choose the lab plan's resource group. To switch to a different lab account, select the left drop-down and choose the lab account name. The Azure Lab Services portal organizes labs by lab plan's resource group/lab account, then by lab name.
load-balancer Howto Load Balancer Imds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/howto-load-balancer-imds.md
Title: Retrieve load balancer metadata using Azure Instance Metadata Service (IMDS)
+ Title: Retrieve load balancer and virtual machine IP metadata using Azure Instance Metadata Service (IMDS)
description: Get started learning how to retrieve load balancer metadata using Azure Instance Metadata Service.
load-balancer Instance Metadata Service Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/instance-metadata-service-load-balancer.md
Title: Retrieve load balancer information by using Azure Instance Metadata Service
+ Title: Retrieve load balancer and virtual machine IP information by using Azure Instance Metadata Service
description: Get started learning about using Azure Instance Metadata Service to retrieve load balancer information.
When you place virtual machine or virtual machine set instances behind an Azure
The metadata includes the following information for the virtual machines or virtual machine scale sets:
-* Standard SKU public IP.
+* The instance-level public or private IP address of the specific virtual machine instance.
* Inbound rule configurations of the load balancer of each private IP of the network interface. * Outbound rule configurations of the load balancer of each private IP of the network interface.
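
From inside a VM, this metadata is retrieved from the Instance Metadata Service. The sketch below builds such a request: the `169.254.169.254` endpoint and the required `Metadata: true` header are documented IMDS conventions, but the `api-version` value shown is an assumption and should be checked against the current IMDS documentation:

```python
import urllib.request

# Sketch: build a request against the IMDS load balancer endpoint.
# The api-version below is an assumption; verify it in the IMDS docs.
IMDS_LB_URL = "http://169.254.169.254/metadata/loadbalancer?api-version=2020-10-01"

def build_imds_request() -> urllib.request.Request:
    """IMDS rejects requests that don't carry the 'Metadata: true' header."""
    return urllib.request.Request(IMDS_LB_URL, headers={"Metadata": "true"})

req = build_imds_request()
assert req.get_header("Metadata") == "true"
# The actual call only succeeds on an Azure VM behind a load balancer:
# response = urllib.request.urlopen(req).read()
```

The JSON response contains the load balancer frontend and rule information described above; parse it with `json.loads` once retrieved on the VM.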
load-balancer Load Balancer Ipv6 For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-for-linux.md
keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile,
Previously updated : 12/02/2022 Last updated : 04/21/2023 -+ # Configure DHCPv6 for Linux VMs
This document describes how to enable DHCPv6 so that your Linux virtual machine
> [!WARNING] > By improperly editing network configuration files, you can lose network access to your VM. We recommended that you test your configuration changes on non-production systems. The instructions in this article have been tested on the latest versions of the Linux images in the Azure Marketplace. For more detailed instructions, consult the documentation for your own version of Linux.
-## Ubuntu (17.10 or higher)
+# [RHEL/CentOS/Oracle](#tab/redhat)
-1. Edit the **`/etc/dhcp/dhclient.conf`** file, and add the following line:
+For RHEL, CentOS, and Oracle Linux versions 7.4 or higher, follow these steps:
+
+1. Edit the */etc/sysconfig/network* file, and add the following parameter:
```config
- timeout 10;
+ NETWORKING_IPV6=yes
```
-2. Create a new file in the cloud.cfg.d folder that will retain your configuration through reboots. **The information in this file will override the default [NETPLAN]( https://netplan.io) config (in YAML configuration files at this location: /etc/netplan/*.yaml)**.
-
- Create a */etc/cloud/cloud.config.d/91-azure-network.cfg* file. Ensure that **`dhcp6: true`** is reflected under the required interface, as shown by the sample below:
+2. Edit the */etc/sysconfig/network-scripts/ifcfg-eth0* file, and add the following two parameters:
- ```config
- network:
- version: 2
- ethernets:
- eth0:
- dhcp4: true
- dhcp6: true
- match:
- driver: hv_netvsc
- set-name: eth0
+ ```config
+ IPV6INIT=yes
+ DHCPV6C=yes
```
-3. Save the file and reboot.
-
-4. Use **`ifconfig`** to verify virtual machine received IPv6 address.
-
- If **`ifconfig`** isn't installed, run the following commands:
+3. Renew the IPv6 address:
```bash
- sudo apt update
- sudo apt install net-tools
+ sudo ifdown eth0 && sudo ifup eth0
```
+
+# [openSUSE/SLES](#tab/suse)
- :::image type="content" source="./media/load-balancer-ipv6-for-linux/ipv6-ip-address-ifconfig.png" alt-text="Screenshot of ifconfig showing IPv6 IP address.":::
+Recent SUSE Linux Enterprise Server (SLES) and openSUSE images in Azure have been preconfigured with DHCPv6. No other changes are required when you use these images. If you have a VM that's based on an older or custom SUSE image, use one of the following procedures to configure DHCPv6.
-## Debian
+## openSUSE 13 and SLES 11
-1. Edit the */etc/dhcp/dhclient6.conf* file, and add the following line:
+1. Install the `dhcp-client` package, if needed:
- ```config
- timeout 10;
+ ```bash
+ sudo zypper install dhcp-client
```
-2. Edit the */etc/network/interfaces* file, and add the following configuration:
+2. Edit the */etc/sysconfig/network/ifcfg-eth0* file, and add the following parameter:
```config
- iface eth0 inet6 auto
- up sleep 5
- up dhclient -1 -6 -cf /etc/dhcp/dhclient6.conf -lf /var/lib/dhcp/dhclient6.eth0.leases -v eth0 || true
- ```
+ DHCLIENT6_MODE='managed'
+   ```
+
3. Renew the IPv6 address: ```bash sudo ifdown eth0 && sudo ifup eth0
- ```
+ ```
+## openSUSE Leap and SLES 12
-## RHEL, CentOS, and Oracle Linux
+For openSUSE Leap and SLES 12, follow these steps:
-1. Edit the */etc/sysconfig/network* file, and add the following parameter:
+1. Edit the */etc/sysconfig/network/ifcfg-eth0* file, and replace the `#BOOTPROTO='dhcp4'` parameter with the following value:
```config
- NETWORKING_IPV6=yes
+ BOOTPROTO='dhcp'
```
-2. Edit the */etc/sysconfig/network-scripts/ifcfg-eth0* file, and add the following two parameters:
+2. To the */etc/sysconfig/network/ifcfg-eth0* file, add the following parameter:
```config
- IPV6INIT=yes
- DHCPV6C=yes
+ DHCLIENT6_MODE='managed'
``` 3. Renew the IPv6 address: ```bash sudo ifdown eth0 && sudo ifup eth0
- ```
+ ```
-## SLES 11 and openSUSE 13
+# [Ubuntu](#tab/ubuntu)
-Recent SUSE Linux Enterprise Server (SLES) and openSUSE images in Azure have been pre-configured with DHCPv6. No other changes are required when you use these images. If you have a VM that's based on an older or custom SUSE image, follow the steps below:
+For Ubuntu versions 17.10 or higher, follow these steps:
-1. Install the `dhcp-client` package, if needed:
+1. Edit the **`/etc/dhcp/dhclient.conf`** file, and add the following line:
- ```bash
- sudo zypper install dhcp-client
+ ```config
+ timeout 10;
```
-2. Edit the */etc/sysconfig/network/ifcfg-eth0* file, and add the following parameter:
+1. Create a new file in the cloud.cfg.d folder that retains your configuration through reboots. **The information in this file will override the default [NETPLAN]( https://netplan.io) config (in YAML configuration files at this location: /etc/netplan/*.yaml)**.
- ```config
- DHCLIENT6_MODE='managed'
-
+ Create a */etc/cloud/cloud.cfg.d/91-azure-network.cfg* file. Ensure that **`dhcp6: true`** is reflected under the required interface, as shown in the following sample:
-3. Renew the IPv6 address:
+ ```config
+ network:
+ version: 2
+ ethernets:
+ eth0:
+ dhcp4: true
+ dhcp6: true
+ match:
+ driver: hv_netvsc
+ set-name: eth0
+ ```
+
+1. Save the file and reboot.
+1. Use **`ifconfig`** to verify that the virtual machine received an IPv6 address.
+
+ If **`ifconfig`** isn't installed, run the following commands:
```bash
- sudo ifdown eth0 && sudo ifup eth0
+ sudo apt update
+ sudo apt install net-tools
```
-## SLES 12 and openSUSE Leap
+ :::image type="content" source="./media/load-balancer-ipv6-for-linux/ipv6-ip-address-ifconfig.png" alt-text="Screenshot of ifconfig showing IPv6 IP address.":::
-Recent SLES and openSUSE images in Azure have been pre-configured with DHCPv6. No other changes are required when you use these images. If you have a VM that's based on an older or custom SUSE image, follow the steps below:
+# [Debian](#tab/debian)
-1. Edit the */etc/sysconfig/network/ifcfg-eth0* file, and replace the `#BOOTPROTO='dhcp4'` parameter with the following value:
+1. Edit the */etc/dhcp/dhclient6.conf* file, and add the following line:
```config
- BOOTPROTO='dhcp'
+ timeout 10;
```
-2. To the */etc/sysconfig/network/ifcfg-eth0* file, add the following parameter:
+1. Edit the */etc/network/interfaces* file, and add the following configuration:
```config
- DHCLIENT6_MODE='managed'
+ iface eth0 inet6 auto
+ up sleep 5
+ up dhclient -1 -6 -cf /etc/dhcp/dhclient6.conf -lf /var/lib/dhcp/dhclient6.eth0.leases -v eth0 || true
```
-3. Renew the IPv6 address:
+1. Renew the IPv6 address:
```bash sudo ifdown eth0 && sudo ifup eth0 ```
-## CoreOS
+# [CoreOS](#tab/coreos)
-Recent CoreOS images in Azure have been pre-configured with DHCPv6. No other changes are required when you use these images. If you have a VM based on an older or custom CoreOS image, follow the steps below:
+Recent CoreOS images in Azure have been preconfigured with DHCPv6. No other changes are required when you use these images. If you have a VM based on an older or custom CoreOS image, follow these steps:
1. Edit the */etc/systemd/network/10_dhcp.network* file:
Recent CoreOS images in Azure have been pre-configured with DHCPv6. No other cha
```bash sudo systemctl restart systemd-networkd ```++
load-balancer Load Balancer Nat Pool Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-nat-pool-migration.md
+
+ Title: Azure Load Balancer NAT Pool to NAT Rule Migration
+description: Process for migrating NAT Pools to NAT Rules on Azure Load Balancer.
+++++ Last updated : 05/01/2023++++
+# Tutorial: Migrate from Inbound NAT Pools to NAT Rules
+
+Azure Load Balancer NAT Pools are the legacy approach for automatically assigning Load Balancer front end ports to each instance in a Virtual Machine Scale Set. [NAT Rules](inbound-nat-rules.md) on Standard SKU Load Balancers have replaced this functionality with an approach that is both easier to manage and faster to configure.
+
+## Why Migrate to NAT Rules?
+
+NAT Rules provide the same functionality as NAT Pools, but have the following advantages:
+* NAT Rules can be managed by using the Azure portal
+* NAT Rules can use backend pools, which simplifies configuration
+* NAT Rules configuration changes apply more quickly than NAT Pools
+* NAT Pools can't be used in combination with user-configured NAT Rules
+
+## Migration Process
+
+The migration process creates a new Backend Pool for each Inbound NAT Pool existing on the target Load Balancer. A corresponding NAT Rule is created for each NAT Pool and associated with the new Backend Pool. Existing Backend Pool membership is retained.
+
+> [!IMPORTANT]
+> The migration process removes the Virtual Machine Scale Set(s) from the NAT Pools before associating the Virtual Machine Scale Set(s) with the new NAT Rules. This requires an update to the Virtual Machine Scale Set(s) model, which may cause a brief downtime while instances are upgraded with the model.
+
+> [!NOTE]
+> Frontend port mapping to Virtual Machine Scale Set instances may change with the move to NAT Rules, especially in situations where a single NAT Pool has multiple associated Virtual Machine Scale Sets. The new port assignment aligns sequentially with instance ID numbers; when there are multiple Virtual Machine Scale Sets, ports are assigned to all instances in one scale set, then to the next, and so on.
+
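The sequential assignment described in the note above can be sketched as follows. This is an illustrative model of the behavior, not Azure code; the function name and data shapes are hypothetical:

```python
# Illustrative sketch: frontend ports are handed out sequentially, covering
# all instances of one scale set (in instance-ID order) before the next.
def assign_frontend_ports(scale_sets, start_port):
    """Map (scale set name, instance ID) -> frontend port, sequentially.

    scale_sets: dict of scale set name -> list of instance IDs,
                in the order the scale sets are processed.
    """
    mapping = {}
    port = start_port
    for name, instance_ids in scale_sets.items():
        for instance_id in sorted(instance_ids):
            mapping[(name, instance_id)] = port
            port += 1
    return mapping

ports = assign_frontend_ports({"vmss-a": [0, 1], "vmss-b": [0]}, start_port=50000)
assert ports == {("vmss-a", 0): 50000, ("vmss-a", 1): 50001, ("vmss-b", 0): 50002}
```

This is why instances that previously shared a NAT Pool across multiple scale sets can end up with different frontend ports after migration.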
+> [!NOTE]
+> Service Fabric Clusters take significantly longer to update the Virtual Machine Scale Set model (up to an hour).
+
+### Prerequisites
+
+* In order to migrate a Load Balancer's NAT Pools to NAT Rules, the Load Balancer SKU must be 'Standard'. To automate this upgrade process, see the steps provided in [Upgrade a basic load balancer used with Virtual Machine Scale Sets](upgrade-basic-standard-virtual-machine-scale-sets.md).
+* Virtual Machine Scale Sets associated with the target Load Balancer must use either a 'Manual' or 'Automatic' upgrade policy--'Rolling' upgrade policy is not supported. For more information, see [Virtual Machine Scale Sets Upgrade Policies](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model)
+* Install the latest version of [PowerShell](/powershell/scripting/install/installing-powershell)
+* Install the [Azure PowerShell modules](/powershell/azure/install-az-ps)
+
+### Install the 'AzureLoadBalancerNATPoolMigration' module
+
+Install the module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureLoadBalancerNATPoolMigration)
+
+```azurepowershell
+Install-Module -Name AzureLoadBalancerNATPoolMigration -Scope CurrentUser -Repository PSGallery -Force
+```
+
+### Use the module to upgrade NAT Pools to NAT Rules
+
+1. Connect to Azure with `Connect-AzAccount`
+1. Find the target Load Balancer for the NAT Rules upgrade and note its name and Resource Group name
+1. Run the migration command
+
+#### Example: specify the Load Balancer name and Resource Group name
+ ```azurepowershell
+ Start-AzNATPoolMigration -ResourceGroupName <loadBalancerResourceGroupName> -LoadBalancerName <LoadBalancerName>
+ ```
+
+#### Example: pass a Load Balancer from the pipeline
+ ```azurepowershell
+ Get-AzLoadBalancer -ResourceGroupName <loadBalancerResourceGroupName> -Name <LoadBalancerName> | Start-AzNATPoolMigration
+ ```
+
+## Common Questions
+
+### Will migration cause downtime to my NAT ports?
+
+Yes. Because the NAT Pools must be removed before the NAT Rules can be created, there's a brief period during which no frontend port is mapped to a backend port.
+
+> [!NOTE]
+> Downtime for NAT'ed ports on Service Fabric clusters is significantly longer--up to an hour for a Silver cluster in testing.
+
+### Do I need to keep both the new Backend Pools created during the migration and my existing Backend Pools if the membership is the same?
+
+No, following the migration, you can review the new backend pools. If the membership is the same between backend pools, you can replace the new backend pool in the NAT Rule with an existing backend pool, then remove the new backend pool.
+
+## Next steps
+
+- Learn about [Managing Inbound NAT Rules](./manage-inbound-nat-rules.md)
+- Learn about [Azure Load Balancer NAT Pools and NAT Rules](https://azure.microsoft.com/blog/manage-port-forwarding-for-backend-pool-with-azure-load-balancer/)
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
As your needs change or requirements for automation increase you can also manage
[!notebook-python[](~/azureml-examples-main/sdk/python/resources/workspace/workspace.ipynb?name=subscription_id)]
- 1. Get a handle to the subscription. `ml_client` will be used in all the Python code in this article.
+ 1. Get a handle to the subscription. `ml_client` is used in all the Python code in this article.
[!notebook-python[](~/azureml-examples-main/sdk/python/resources/workspace/workspace.ipynb?name=ml_client)]
As your needs change or requirements for automation increase you can also manage
DefaultAzureCredential(interactive_browser_tenant_id="<TENANT_ID>") ```
- * (Optional) If you're working on a [sovereign cloud](reference-machine-learning-cloud-parity.md)**, specify the sovereign cloud to authenticate with into the `DefaultAzureCredential`..
+ * (Optional) If you're working on a [sovereign cloud](reference-machine-learning-cloud-parity.md), specify the sovereign cloud to authenticate with in the `DefaultAzureCredential`.
```python from azure.identity import AzureAuthorityHosts
As your needs change or requirements for automation increase you can also manage
[!INCLUDE [register-namespace](../../includes/machine-learning-register-namespace.md)]
-* If you're using Azure Container Registry (ACR), Storage Account, Key Vault, or Application Insights in the different subscription than the workspace, you cannot use network isolation with managed online endpoints. If you want to use network isolation with managed online endpoints, you must have ACR, Storage Account, Key Vault, and Application Insights in the same subscription with the workspace. For limitations that apply to network isolation with managed online endpoints, see [How to secure online endpoint](how-to-secure-online-endpoint.md#limitations).
+* If you're using Azure Container Registry (ACR), Storage Account, Key Vault, or Application Insights in the different subscription than the workspace, you can't use network isolation with managed online endpoints. If you want to use network isolation with managed online endpoints, you must have ACR, Storage Account, Key Vault, and Application Insights in the same subscription with the workspace. For limitations that apply to network isolation with managed online endpoints, see [How to secure online endpoint](how-to-secure-online-endpoint.md#limitations).
* By default, creating a workspace also creates an Azure Container Registry (ACR). Since ACR doesn't currently support unicode characters in resource group names, use a resource group that doesn't contain these characters.
As your needs change or requirements for automation increase you can also manage
## Create a workspace
-You can create a workspace [directly in Azure Machine Learning studio](./quickstart-create-resources.md#create-the-workspace), with limited options available. Or use one of the methods below for more control of options.
+You can create a workspace [directly in Azure Machine Learning studio](./quickstart-create-resources.md#create-the-workspace), with limited options available. Or use one of the following methods for more control of options.
# [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-* **Default specification.** By default, dependent resources and the resource group will be created automatically. This code creates a workspace named `myworkspace` and a resource group named `myresourcegroup` in `eastus2`.
+* **Default specification.** By default, dependent resources and the resource group are created automatically. This code creates a workspace named `myworkspace` and a resource group named `myresourcegroup` in `eastus2`.
[!notebook-python[](~/azureml-examples-main/sdk/python/resources/workspace/workspace.ipynb?name=basic_workspace_name)]
If you have problems in accessing your subscription, see [Set up authentication
:::image type="content" source="media/how-to-manage-workspace/create-workspace-form.png" alt-text="Configure your workspace.":::
-1. When you're finished configuring the workspace, select **Review + Create**. Optionally, use the [Networking](#networking) and [Advanced](#advanced) sections to configure more settings for the workspace.
+1. When you're finished configuring the workspace, select **Review + Create**. Optionally, use the [Networking](#networking), [Advanced](#advanced), and [Tags](#tags) sections to configure more settings for the workspace.
1. Review the settings and make any other changes or corrections. When you're satisfied with the settings, select **Create**.
By default, metadata for the workspace is stored in an Azure Cosmos DB instance
To limit the data that Microsoft collects on your workspace, select __High business impact workspace__ in the portal, or set `hbi_workspace=true ` in Python. For more information on this setting, see [Encryption at rest](concept-data-encryption.md#encryption-at-rest). > [!IMPORTANT]
-> Selecting high business impact can only be done when creating a workspace. You cannot change this setting after workspace creation.
+> Selecting high business impact can only be done when creating a workspace. You cannot change this setting after workspace creation.
#### Use your own data encryption key You can provide your own key for data encryption. Doing so creates the Azure Cosmos DB instance that stores metadata in your Azure subscription. For more information, see [Customer-managed keys](concept-customer-managed-keys.md). - Use the following steps to provide your own key: > [!IMPORTANT]
-> Before following these steps, you must first perform the following actions:
+> Before following these steps, you must first perform the following actions:
> > Follow the steps in [Configure customer-managed keys](how-to-setup-customer-managed-keys.md) to:
+>
> * Register the Azure Cosmos DB provider > * Create and configure an Azure Key Vault > * Generate a key
ml_client.workspaces.begin_create(ws)
+### Tags
+
+While using a workspace, you have opportunities to provide feedback about Azure Machine Learning. You provide feedback by using:
+
+* Occasional in-product surveys
+* The smile-frown feedback tool in the banner of the workspace
+
+You can turn off all feedback opportunities for a workspace. When off, users of the workspace won't see any surveys, and the smile-frown feedback tool is no longer visible. Use the Azure portal to turn off feedback.
+
+* When creating the workspace, turn off feedback from the **Tags** section:
+
+ 1. Select the **Tags** section
+ 1. Add the key-value pair "ADMIN_HIDE_SURVEY: TRUE"
+
+* Turn off feedback on an existing workspace:
+
+ 1. Go to the workspace resource in the Azure portal
+ 1. Open **Tags** from the left navigation panel
+ 1. Add the key-value pair "ADMIN_HIDE_SURVEY: TRUE"
+ 1. Select **Apply**.
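As a sketch of the result, the portal steps above simply add an entry to the workspace resource's ARM `tags` object, for example:

```json
{
  "tags": {
    "ADMIN_HIDE_SURVEY": "TRUE"
  }
}
```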
++ ### Download a configuration file
-If you'll be running your code on a [compute instance](quickstart-create-resources.md), skip this step. The compute instance will create and store copy of this file for you.
+If you'll be running your code on a [compute instance](quickstart-create-resources.md), skip this step. The compute instance creates and stores a copy of this file for you.
If you plan to use code on your local environment that references this workspace, download the file:

1. Select your workspace in [Azure Machine Learning studio](https://ml.azure.com)
When running machine learning tasks using the SDK, you require an MLClient object
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-* **With a configuration file:** This code will read the contents of the configuration file to find your workspace. You'll get a prompt to sign in if you aren't already authenticated.
+* **With a configuration file:** This code reads the contents of the configuration file to find your workspace. You'll get a prompt to sign in if you aren't already authenticated.
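As a sketch of what that configuration file contains — the downloaded `config.json` is a small JSON document with three fields identifying the workspace (the field names follow the documented `config.json` format; all values below are placeholders):

```python
import json
from pathlib import Path

# Sketch of the config.json that the studio's download step produces.
# The three field names are what MLClient.from_config() looks for;
# the values here are placeholders.
config_dir = Path(".azureml")
config_dir.mkdir(exist_ok=True)
config = {
    "subscription_id": "<subscription-id>",
    "resource_group": "<resource-group>",
    "workspace_name": "<workspace-name>",
}
(config_dir / "config.json").write_text(json.dumps(config, indent=2))

# MLClient.from_config(credential=...) searches the working directory and
# its parents for .azureml/config.json and connects to that workspace.
loaded = json.loads((config_dir / "config.json").read_text())
print(loaded["workspace_name"])  # <workspace-name>
```

Keeping the file under an `.azureml` folder lets the SDK find it automatically from any subdirectory of your project.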
```python from azure.ai.ml import MLClient
In the [Azure portal](https://portal.azure.com/), select **Delete** at the top
### Deleting the Azure Container Registry
-The Azure Machine Learning workspace uses Azure Container Registry (ACR) for some operations. It will automatically create an ACR instance when it first needs one.
+The Azure Machine Learning workspace uses Azure Container Registry (ACR) for some operations. It automatically creates an ACR instance when it first needs one.
[!INCLUDE [machine-learning-delete-acr](../../includes/machine-learning-delete-acr.md)]
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. The flexibl
| China East 3 | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | China North 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | China North 3 | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| East Asia (Hong Kong) | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| East Asia (Hong Kong) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | East US 2 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | France Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
networking Architecture Guides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/architecture-guides.md
The following table includes articles that provide a networking overview of a vi
||| |[Virtual Datacenters](/azure/architecture/vdc/networking-virtual-datacenter) | Provides a networking perspective of a virtual datacenter in Azure. | |[Hub-spoke topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) |Provides an overview of the hub and spoke network topology in Azure along with information about subscription limits and multiple hubs. |
+|[Hub-spoke network topology with Azure Virtual WAN](/azure/architecture/networking/hub-spoke-vwan-architecture) | Provides an alternative solution to the hub-spoke network topology that uses Azure Virtual WAN. Azure Virtual WAN is used to replace hubs as a managed service. |
+ ## Connect to Azure resources
The following table includes articles about Azure Networking services that provi
|[Choose between virtual network peering and VPN gateways](/azure/architecture/reference-architectures/hybrid-networking/vnet-peering) | Compares two ways to connect virtual networks in Azure: virtual network peering and VPN gateways. | |[Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking/) | Compares options for connecting an on-premises network to an Azure Virtual Network (VNet). For each option, a more detailed reference architecture is available. | |[SD-WAN connectivity architecture with Azure Virtual WAN](../../virtual-wan/sd-wan-connectivity-architecture.md)|Describes the different connectivity options for interconnecting a private Software Defined WAN (SD-WAN) with Azure Virtual WAN.|-
+|[Cross-cloud scaling with Traffic Manager](/azure/architecture/example-scenario/hybrid/hybrid-cross-cloud-scaling) | Describes how to use Traffic Manager to extend your on-premises application with your application running in the public cloud. |
+|[Hybrid geo-distributed architecture](/azure/architecture/example-scenario/hybrid/hybrid-geo-distributed) | Describes how to use Azure Traffic Manager to route traffic to endpoints to meet regional requirements, corporate policies, and international regulations. |
+| [Design a hybrid Domain Name System solution with Azure](/azure/architecture/hybrid/hybrid-dns-infra) | Describes how to design a hybrid Domain Name System (DNS) solution to resolve names for workloads deployed across on-premises and Azure. The solution uses Azure DNS Public for internet users, Azure DNS Private zones for resolution between virtual networks, and DNS servers for on-premises users and Azure systems. |
+| [Choosing between virtual network peering and VPN gateways](/azure/architecture/reference-architectures/hybrid-networking/vnet-peering) | Describes how to choose between virtual network peering and VPN gateways to connect virtual networks in Azure. |
+| [Guide to Private Link and DNS in Azure Virtual WAN](/azure/architecture/guide/networking/private-link-virtual-wan-dns-guide) | This guide describes how to use Azure Private Link and Azure Private DNS to connect to Azure PaaS services over a private endpoint in Azure Virtual WAN. |
+
## Deploy highly available applications

The following table includes articles that describe how to deploy your applications for high availability using a combination of Azure Networking services.
The following table includes articles that describe how to deploy your applicati
||| |[Multi-region N-tier application](/azure/architecture/reference-architectures/n-tier/multi-region-sql-server) | Describes a multi-region N-tier application that uses Traffic Manager to route incoming requests to a primary region and if that region becomes unavailable, Traffic Manager fails over to the secondary region. | | [Multitenant SaaS on Azure](/azure/architecture/example-scenario/multi-saas/multitenant-saas) | Uses a multi-tenant solution that includes a combination of Front Door and Application Gateway. Front Door helps load balance traffic across regions and Application Gateway routes and load-balances traffic internally in the application to the various services that satisfy client business needs. |
-| [Multi-tier web application built for high availability and disaster recovery ](/azure/architecture/example-scenario/infrastructure/multi-tier-app-disaster-recovery) | Deploys resilient multi-tier applications built for high availability and disaster recovery. If the primary region becomes unavailable, Traffic Manager fails over to the secondary region. |
+| [Multi-tier web application built for high availability and disaster recovery](/azure/architecture/example-scenario/infrastructure/multi-tier-app-disaster-recovery) | Deploys resilient multi-tier applications built for high availability and disaster recovery. If the primary region becomes unavailable, Traffic Manager fails over to the secondary region. |
+| [Application Gateway Ingress Controller (AGIC) with a multitenant Azure Kubernetes Service](/azure/architecture/example-scenario/aks-agic/aks-agic) | Protect your Azure Kubernetes Service (AKS) cluster with Application Gateway Ingress Controller (AGIC) and Web Application Firewall (WAF). AGIC is a Kubernetes application that makes it easy to manage an Application Gateway instance in your AKS cluster. |
|[IaaS: Web application with relational database](/azure/architecture/high-availability/ref-arch-iaas-web-and-db) | Describes how to use resources spread across multiple zones to provide a high availability architecture for hosting an Infrastructure as a Service (IaaS) web application and SQL Server database. | |[Sharing location in real time using low-cost serverless Azure services](/azure/architecture/example-scenario/signalr/#azure-front-door) | Uses Azure Front Door to provide higher availability for your applications than deploying to a single region. If a regional outage affects the primary region, you can use Front Door to fail over to the secondary region. | |[Highly available network virtual appliances](/azure/architecture/reference-architectures/dmz/nva-ha) | Shows how to deploy a set of network virtual appliances (NVAs) for high availability in Azure. | |[Multi-region load balancing with Traffic Manager and Application Gateway](/azure/architecture/high-availability/reference-architecture-traffic-manager-application-gateway) | Describes how to deploy resilient multi-tier applications in multiple Azure regions, in order to achieve availability and a robust disaster recovery infrastructure. |
+| [Scalable and secure WordPress on Azure](/azure/architecture/example-scenario/infrastructure/wordpress) | Describes how to deploy a scalable and secure WordPress application on Azure. The architecture uses Azure Content Delivery Network (CDN) to cache static content. The CDN uses an Azure Load Balancer as the origin to retrieve content from the WordPress application. |
+| [Azure Firewall - Well-Architected Framework](/azure/well-architected/services/networking/azure-firewall) | Architectural best practices for Azure Firewall. This guide covers the five pillars of architecture excellence: cost optimization, operational excellence, performance efficiency, reliability, and security. |
+| [Expose Azure Spring Apps through a reverse proxy](/azure/architecture/reference-architectures/microservices/spring-cloud-reverse-proxy) | Describes how to use a reverse proxy service such as Azure Application Gateway or Azure Front Door to expose Azure Spring Cloud applications to the internet. By placing a service in front of Azure Spring Apps, you can protect, load balance, route, and filter requests based on your business needs. |
+| [Oracle database migration to Azure](/azure/architecture/solution-ideas/articles/reference-architecture-for-oracle-database-migration-to-azure) | Describes how to migrate an Oracle database to Azure using Oracle Active Data Guard. The architecture uses Azure Load Balancer to load balance traffic to the Oracle database. |
+ ## Secure your network resources
-The following table includes articles that describe how protect your network resources using Azure Networking services.
+The following table includes articles that describe how to protect your network resources using Azure Networking services.
|Title |Description | |||
The following table includes articles that describe how protect your network res
|[Implement a secure hybrid network](/azure/architecture/reference-architectures/dmz/secure-vnet-dmz) | Describes an architecture that implements a DMZ, also called a perimeter network, between the on-premises network and an Azure virtual network. All inbound and outbound traffic passes through Azure Firewall. | |[Secure and govern workloads with network level segmentation](/azure/architecture/reference-architectures/hybrid-networking/network-level-segmentation) | Describes the three common patterns used for organizing workloads in Azure from a networking perspective. Each of these patterns provides a different type of isolation and connectivity. | |[Firewall and Application Gateway for virtual networks](/azure/architecture/example-scenario/gateway/firewall-application-gateway) | Describes Azure Virtual Network security services like Azure Firewall and Azure Application Gateway, when to use each service, and network design options that combine both. |
+|[Secure managed web applications](/azure/architecture/example-scenario/apps/fully-managed-secure-apps) | Overview of deploying secure applications using Azure App Service Environment (ASE), Azure Application Gateway, and Azure Web Application Firewall (WAF). |
+| [Secure virtual network applications](/azure/architecture/example-scenario/gateway/firewall-application-gateway) | Describes Azure Virtual Network security services like Azure Firewall, Azure DDoS Protection, Azure Application Gateway, and Azure Web Application Firewall (WAF), when to use each service, and network design options that combine both. |
+| [Open-source jump server solution on Azure](/azure/architecture/example-scenario/infrastructure/apache-guacamole) | Describes how to deploy a jump server solution on Azure using Apache Guacamole. Apache Guacamole is a clientless remote desktop gateway. It supports standard protocols like VNC, RDP, and SSH. |
+| [Secure your Microsoft Teams channel bot and web app behind a firewall](/azure/architecture/example-scenario/teams/securing-bot-teams-channel) | Describes how to use Azure Firewall, Azure Private Link and Azure Private Endpoint to secure connectivity to Microsoft Teams channel bot web app. |
+| [Secure access to an Azure Kubernetes Service (AKS) API server](/azure/architecture/guide/security/access-azure-kubernetes-service-cluster-api-server) | Describes various options to secure access to an Azure Kubernetes Service (AKS) API server. |
+| [Enterprise cloud file share with Azure sharing solution](/azure/architecture/hybrid/azure-files-private) | Describes how to use Azure Files, Azure File Sync, Azure DNS, and Azure Private Link to create a secure enterprise cloud file share solution. This solution saves cost by outsourcing the management of file server and infrastructure while retaining control of the data. |
+| [Protect APIs with Application Gateway and API Management](/azure/architecture/reference-architectures/apis/protect-apis) | Describes how to use Azure Application Gateway to restrict and protect access to APIs hosted in Azure API Management. |
+| [Deploy AD DS in an Azure virtual network](/azure/architecture/example-scenario/identity/adds-extend-domain) | Describes how to extend an on-premises Active Directory Domain Services (AD DS) environment to Azure by deploying domain controllers in an Azure virtual network. |
+| [Securing PaaS deployments](/azure/security/fundamentals/paas-deployments) | Provides general guidance for securing PaaS deployments in Azure. |
## Next steps
partner-solutions Palo Alto Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/palo-alto/palo-alto-create.md
+
+ Title: Create a Cloud NGFW by Palo Alto Networks Preview resource
+description: This article describes how to use the Azure portal to create a Cloud NGFW by Palo Alto Networks Preview resource.
+++ Last updated : 04/26/2023+++
+# QuickStart: Get started with Cloud NGFW by Palo Alto Networks Preview
+
+In this quickstart, you use the Azure Marketplace to find and create an instance of **Cloud NGFW by Palo Alto Networks Preview**.
+
+## Create a new Cloud NGFW by Palo Alto Networks resource
+
+### Basics
+
+1. In the Azure portal, create a Cloud NGFW by Palo Alto Networks resource using the Marketplace. Use search to find _Cloud NGFW by Palo Alto Networks_. Then select **Subscribe**, and then select **Create**.
+
+1. Set the following values in the Basics tab.
+
+ :::image type="content" source="media/palo-alto-create/palo-alto-basics.png" alt-text="Screenshot of Basics tab of the Palo Alto Networks create experience.":::
+
+ | Property | Description |
+ |||
+ | **Subscription** | From the drop-down, select your Azure subscription where you have owner access. |
+ | **Resource group** | Specify whether you want to create a new resource group or use an existing one. A resource group is a container that holds related resources for an Azure solution. For more information, see Azure Resource Group overview. |
+ | **Name** | Enter the name of the Palo Alto Networks account that you want to create. |
+ | **Region** | Select an appropriate region. |
+ | **Pricing Plan** | Specified based on the selected Palo Alto Networks plan. |
+
+### Networking
+
+1. After completing the **Basics** tab, select **Next: Networking** to see the **Networking** tab.
+
+1. Select either **Virtual Network** or **Virtual WAN Hub**.
+
+1. Use the dropdowns to set the **Virtual Network**, **Private Subnet**, and **Public Subnet** associated with the Palo Alto Networks deployment.
+
+ :::image type="content" source="media/palo-alto-create/palo-alto-networking.png" alt-text="Screenshot of the networking pane in the Palo Alto Networks create experience.":::
+
+1. For **Public IP Address Configuration**, select either **Create New** or **Use Existing** and type in a name for **Public IP Address Name(s)**.
+
+1. Select the checkbox **Enable Source NAT** to indicate your preferred NAT settings.
+
+### Security Policy
+
+1. After completing the **Networking** values, select **Next: Security Policy** to see the **Security Policies** tab. You can set the policies for the firewall using this tab.
+
+ :::image type="content" source="media/palo-alto-create/palo-alto-rulestack.png" alt-text="Screenshot of the Rulestack in the Palo Alto Networks create experience.":::
+
+1. For **Managed By**, select either **Azure Portal** or **Palo Alto Networks Panorama**.
+
+1. For **Choose Local Rulestack**, select either **Create New** or **Use Existing** options.
+
+1. Input an existing rulestack in the **Local Rulestack** option.
+
+1. Select the checkbox **Best practice rule** to indicate Firewall mode or IDS mode options.
+
+### DNS Proxy
+
+1. After completing the **Security Policies** values, select **Next: DNS Proxy** to see the **DNS Proxy** screen.
+
+ :::image type="content" source="media/palo-alto-create/palo-alto-dns-proxy.png" alt-text="Screenshot of the DNS Proxy in the Palo Alto Networks create experience.":::
+
+1. Select the checkbox **DNS Proxy** to indicate **Disabled** or **Enabled**.
+
+### Tags
+
+You can specify custom tags for the new Palo Alto Networks resource in Azure by adding custom key-value pairs.
+
+1. Select **Tags**.
+
+ :::image type="content" source="media/palo-alto-create/palo-alto-tags.png" alt-text="Screenshot showing the tags pane in the Palo Alto Networks create experience.":::
+
+1. Type in the **Name** and **Value** properties that you need.
+
+ | Property | Description |
+ |-| -|
+ |**Name** | Name of the tag corresponding to the Azure Palo Alto Networks resource. |
+ | **Value** | Value of the tag corresponding to the Azure Palo Alto Networks resource. |
+
+### Terms
+
+Next, you must accept the Terms of Use for the new Palo Alto Networks resource.
+
+1. Select **Terms**.
+
+ :::image type="content" source="media/palo-alto-create/palo-alto-terms.png" alt-text="Screenshot showing the terms pane in the Palo Alto create experience.":::
+
+1. Select the checkbox **I Agree** to indicate approval.
+
+### Review and create
+
+1. Select **Next: Review + Create** to navigate to the final step for resource creation. When you get to the **Review + Create** page, all validations are run. At this point, review all the selections made in the Basics, Networking, and optionally Tags panes. You can also review the Palo Alto and Azure Marketplace terms and conditions.
+
+ :::image type="content" source="media/palo-alto-create/palo-alto-review-create.png" alt-text="Screenshot of Review and Create resource tab.":::
+
+1. When you've reviewed all the information, select **Create**. Azure now deploys the Cloud NGFW by Palo Alto Networks.
+
+ :::image type="content" source="media/palo-alto-create/palo-alto-deploying.png" alt-text="Screenshot showing Palo Alto Networks deployment in process.":::
+
+## Deployment completed
+
+1. Once the create process is completed, select **Go to Resource** to navigate to the specific Cloud NGFW by Palo Alto Networks resource.
+
+ :::image type="content" source="media/palo-alto-create/palo-alto-deploy-complete.png" alt-text="Screenshot of a completed Palo Alto Networks deployment.":::
+
+1. Select **Overview** in the Resource menu to see information on the deployed resources.
+
+ :::image type="content" source="media/palo-alto-create/palo-alto-overview-essentials.png" alt-text="Screenshot of information on the Palo Alto Networks resource overview.":::
+
+## Next steps
+
+- [Manage the Palo Alto Networks resource](palo-alto-manage.md)
partner-solutions Palo Alto Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/palo-alto/palo-alto-manage.md
+
+ Title: Manage Cloud NGFW by Palo Alto Networks resource through the Azure portal
+description: This article describes management functions for Cloud NGFW by Palo Alto Networks on the Azure portal.
++ Last updated : 04/25/2023+++
+# Manage your Cloud NGFW by Palo Alto Networks Preview through the portal
+
+Once your Cloud NGFW by Palo Alto Networks Preview resource is created in the Azure portal, you might need to get information about it or change it. Here's a list of ways to manage your Palo Alto resource.
+
+- [Networking and NAT](#networking-and-nat)
+- [Rulestack](#rulestack)
+- [Log settings](#log-settings)
+- [DNS Proxy](#dns-proxy)
+- [Rules](#rules)
+- [Delete a Cloud NGFW by Palo Alto Networks resource](#delete-a-cloud-ngfw-by-palo-alto-networks-resource)
+
+From the Resource menu, select your Cloud NGFW by Palo Alto Networks deployment. Use the Resource menu to move through the settings for your Cloud NGFW by Palo Alto Networks.
++
+## Networking and NAT
+
+1. Select **Networking & NAT** in the Resource menu.
+
+1. Select the **Type** by checking the **Virtual Network** or **Virtual WAN** options.
+
+1. You can see the **Virtual Network**, **Private Subnet**, and **Public Subnet** details.
+
+1. From **Source Network Address Translation (SNAT)**, you can select **Enable Source NAT**.
+
+1. From **Destination Network Address Translation (DNAT)**, you can search in the table for the settings that you want.
+
+## Rulestack
+
+1. Select **Rulestack** in the Resource menu.
+
+1. For **Managed by**, select either **Azure Portal** or **Palo Alto Networks Panorama** to determine the mechanism for managing the Rulestack. You must have Palo Alto Networks Panorama set up in order to select it.
+
+1. For the **Local Rulestack**, select an existing Rulestack from the dropdown.
+
+## Log settings
+
+1. Select **Log Settings** in the Resource menu.
+
+1. Select **edit** to enable **Log Settings**.
+
+1. Select the **Enable Log Settings** checkbox.
+
+1. Select **Log Setting** from the dropdown list.
+
+## DNS Proxy
+
+1. Select **DNS Proxy** in the Resource menu.
+
+1. Select either **Enable** or **Disable**.
+
+1. Select **Save** to enable DNS Proxy.
+
+## Rules
+
+Search for the Local rules under the **Search** option.
+
+## Delete a Cloud NGFW by Palo Alto Networks resource
+
+To delete a Cloud NGFW by Palo Alto Networks resource:
+
+1. Select **Overview** in the Resource menu.
+
+1. Select **Delete**.
+
+1. Confirm that you want to delete the Cloud NGFW by Palo Alto Networks resource.
+
+1. Select **Delete**.
+
+After the account is deleted, logs are no longer sent to Cloud NGFW by Palo Alto Networks. Also, all billing stops for Cloud NGFW by Palo Alto Networks through Azure Marketplace.
+
+> [!NOTE]
+> The delete button on the main account is only activated if all the sub-accounts mapped to the main account are already deleted. Refer to the section on deleting sub-accounts.
+
+## Next steps
+
+- For help with troubleshooting, see [Troubleshooting Palo Alto integration with Azure](palo-alto-troubleshoot.md).
partner-solutions Palo Alto Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/palo-alto/palo-alto-overview.md
+
+ Title: What is Cloud NGFW by Palo Alto Networks
+description: Learn about using the Cloud NGFW by Palo Alto Networks from the Marketplace.
++ Last updated : 04/26/2023++++
+# What is Cloud NGFW by Palo Alto Networks Preview?
+
+In this article, you learn how to use the integration of the Palo Alto Networks NGFW (Next Generation Firewall) service with Azure.
+
+With the integration of Cloud NGFW for Azure into the Azure ecosystem, we are delivering an integrated platform and empowering a growing ecosystem of developers and customers to help protect their organizations on Azure.
+
+The Palo Alto Networks offering in the Azure Marketplace allows you to manage the Cloud NGFW by Palo Alto Networks in the Azure portal as an integrated service. You can set up the Cloud NGFW by Palo Alto Networks resources through a resource provider named `PAN.NGFW`.
+
+You can create and manage Palo Alto Networks resources through the Azure portal. Palo Alto Networks owns and runs the software as a service (SaaS) application including the accounts created.
+
+Here are the key capabilities provided by the Palo Alto integration:
+
+- **Seamless onboarding** of Palo Alto software as an integrated service on Azure.
+- **Unified billing** of Palo Alto through Azure monthly billing.
+- **Single sign-on to Palo Alto** - No separate sign-up needed from the Palo Alto Networks portal.
+- **Manage VNET and VWAN traffic** to use existing configuration (.conf) files for Palo Alto deployment.
+
+## Prerequisites for Cloud NGFW by Palo Alto Networks
+
+- Subscription owner
+ - A Cloud NGFW by Palo Alto Networks resource can only be set up by users who have Owner access on the Azure subscription. Ensure you have the appropriate Owner access before starting to set up this integration.
+
+## Find the Palo Alto Network offerings in the Azure Marketplace
+
+1. Navigate to the Azure Marketplace page.
+
+1. Search for _Palo Alto_. Select **Cloud Next-Generation Firewall by Palo Alto**.
+
+1. In the Marketplace, you see the offer for **Cloud Next-Generation Firewall by Palo Alto Networks - an Azure Native ISV Service**. Select **Subscribe**.
+
+ :::image type="content" source="media/palo-alto-overview/palo-alto-marketplace.png" alt-text="Screenshot of Cloud NGFW by Palo Alto Networks in the Azure Marketplace.":::
+
+1. In the working pane, you see the options from Palo Alto Networks. Select **Create** in Cloud NGFW by Palo Alto Networks.
+
+ :::image type="content" source="media/palo-alto-overview/palo-alto-offerings.png" alt-text="Screenshot showing the two offerings from Palo Alto Networks.":::
+
+1. The **Create** form opens in the working pane.
+
+## Next steps
+
+To create an instance of Palo Alto, see [QuickStart: Get started with Palo Alto](palo-alto-create.md).
partner-solutions Palo Alto Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/palo-alto/palo-alto-troubleshoot.md
+
+ Title: Troubleshooting your Cloud NGFW by Palo Alto Networks
+description: This article provides information about getting support and troubleshooting a Cloud NGFW by Palo Alto Networks.
++ Last updated : 04/25/2023+++
+# Troubleshooting Cloud NGFW by Palo Alto Networks
+
+You can get support for your Palo Alto deployment through a **New Support request**. This article includes the procedure for creating the request, along with troubleshooting for problems you might experience in creating and using a Palo Alto deployment.
+
+## Getting support
+
+1. To contact support about a Cloud NGFW by Palo Alto Networks resource, select your Cloud NGFW by Palo Alto Networks resource in the Resource menu.
+
+1. Select the **New Support request** in Resource menu on the left.
+
+1. Select **Raise a support ticket** and fill out the details.
+
+## Troubleshooting
+
+### Unable to create a Cloud NGFW by Palo Alto Networks resource when you're not a subscription owner
+
+Only users who have Owner access can set up a Palo Alto resource on the Azure subscription. Ensure you have the appropriate Owner access before starting to create a Palo Alto resource.
+
+## Next steps
+
+- Learn about [managing your instance](palo-alto-manage.md) of Palo Alto.
partner-solutions Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/partners.md
Title: Partner services description: Learn about services offered by partners on Azure. -- Previously updated : 01/18/2023 Last updated : 04/25/2023
Azure Native ISV Services are available through the Marketplace.
|[Azure Native Dynatrace Service](dynatrace/dynatrace-overview.md) | Provides deep cloud observability, advanced AIOps, and continuous runtime application security. | |[Azure Native New Relic Service Preview](new-relic/new-relic-overview.md) | A cloud-based end-to-end observability platform for analyzing and troubleshooting the performance of applications, infrastructure, logs, real-user monitoring, and more. | - ## Data and storage |Partner |Description |
Azure Native ISV Services are available through the Marketplace.
|Partner |Description | ||-| |[NGINXaaS - Azure Native ISV Service](nginx/nginx-overview.md) | Use NGINXaaS as a reverse proxy within your Azure environment. |
+|[Cloud NGFW by Palo Alto Networks Preview](palo-alto/palo-alto-overview.md) | Use Palo Alto Networks as a firewall in the Azure environment. |
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
Azure Database for PostgreSQL provides various metrics that give insight into th
The following metrics are available for a flexible server instance of Azure Database for PostgreSQL:
-|Display name|Metric ID|Unit|Description|
-|||||
-|**Active Connections**|`active_connections`|Count|Number of connections to your server.|
-|**Backup Storage Used**|`backup_storage_used`|Bytes|Amount of backup storage used. This metric represents the sum of storage that's consumed by all the full backups, differential backups, and log backups that are retained based on the backup retention period that's set for the server. The frequency of the backups is service managed. For geo-redundant storage, backup storage usage is twice the usage for locally redundant storage.|
-|**Failed Connections**|`connections_failed`|Count|Number of failed connections.|
-|**Succeeded Connections** |`connections_succeeded`|Count |Number of succeeded connections.|
-|**CPU Credits Consumed**|`cpu_credits_consumed` |Count |Number of credits used by the flexible server. Applies to the Burstable tier.|
-|**CPU Credits Remaining** |`cpu_credits_remaining`|Count |Number of credits available to burst. Applies to the Burstable tier. |
-|**CPU percent** |`cpu_percent`|Percent |Percentage of CPU in use.|
-|**Disk Queue Depth**|`disk_queue_depth` |Count |Number of outstanding I/O operations to the data disk.|
-|**IOPS**|`iops` |Count |Number of I/O operations to disk per second.|
-|**Maximum Used Transaction IDs**|`maximum_used_transactionIDs`|Count |Maximum number of transaction IDs in use. |
-|**Memory percent**|`memory_percent` |Percent |Percentage of memory in use. |
-|**Network Out** |`network_bytes_egress` |Bytes |Amount of outgoing network traffic.|
-|**Network In**|`network_bytes_ingress`|Bytes |Amount of incoming network traffic.|
-|**Read IOPS** |`read_iops`|Count |Number of data disk I/O read operations per second. |
-|**Read Throughput** |`read_throughput`|Bytes |Bytes read per second from disk. |
-|**Storage Free**|`storage_free` |Bytes |Amount of storage space that's available.|
-|**Storage percent** |`storage_percent`|Percentage|Percent of storage space that's used. The storage that's used by the service can include database files, transaction logs, and server logs.|
-|**Storage Used**|`storage_used` |Bytes |Amount of storage space that's used. The storage that's used by the service can include the database files, transaction logs, and the server logs.|
-|**Transaction Log Storage Used**|`txlogs_storage_used`|Bytes |Amount of storage space that's used by the transaction logs. |
-|**Write Throughput**|`write_throughput` |Bytes |Bytes written to disk per second.|
-|**Write IOPS**|`write_iops` |Count |Number of data disk I/O write operations per second.|
+|Display name |Metric ID |Unit |Description |Default enabled|
+|--|--|--|--|--|
+|**Active Connections** |`active_connections` |Count |Number of connections to your server. |Yes |
+|**Backup Storage Used** |`backup_storage_used` |Bytes |Amount of backup storage used. This metric represents the sum of storage that's consumed by all the full backups, differential backups, and log backups that are retained based on the backup retention period that's set for the server. The frequency of the backups is service managed. For geo-redundant storage, backup storage usage is twice the usage for locally redundant storage.|Yes |
+|**Failed Connections** |`connections_failed` |Count |Number of failed connections. |Yes |
+|**Succeeded Connections** |`connections_succeeded` |Count |Number of succeeded connections. |Yes |
+|**CPU Credits Consumed** |`cpu_credits_consumed` |Count |Number of credits used by the flexible server. Applies to the Burstable tier. |Yes |
+|**CPU Credits Remaining** |`cpu_credits_remaining` |Count |Number of credits available to burst. Applies to the Burstable tier. |Yes |
+|**CPU percent** |`cpu_percent` |Percent |Percentage of CPU in use. |Yes |
+|**Disk Queue Depth** |`disk_queue_depth` |Count |Number of outstanding I/O operations to the data disk. |Yes |
+|**IOPS** |`iops` |Count |Number of I/O operations to disk per second. |Yes |
+|**Maximum Used Transaction IDs**|`maximum_used_transactionIDs`|Count |Maximum number of transaction IDs in use. |Yes |
+|**Memory percent** |`memory_percent` |Percent |Percentage of memory in use. |Yes |
+|**Network Out** |`network_bytes_egress` |Bytes |Amount of outgoing network traffic. |Yes |
+|**Network In** |`network_bytes_ingress` |Bytes |Amount of incoming network traffic. |Yes |
+|**Read IOPS** |`read_iops` |Count |Number of data disk I/O read operations per second. |Yes |
+|**Read Throughput** |`read_throughput` |Bytes |Bytes read per second from disk. |Yes |
+|**Storage Free** |`storage_free` |Bytes |Amount of storage space that's available. |Yes |
+|**Storage percent** |`storage_percent` |Percentage|Percent of storage space that's used. The storage that's used by the service can include database files, transaction logs, and server logs. |Yes |
+|**Storage Used** |`storage_used` |Bytes |Amount of storage space that's used. The storage that's used by the service can include the database files, transaction logs, and the server logs. |Yes |
+|**Transaction Log Storage Used**|`txlogs_storage_used` |Bytes |Amount of storage space that's used by the transaction logs. |Yes |
+|**Write Throughput** |`write_throughput` |Bytes |Bytes written to disk per second. |Yes |
+|**Write IOPS** |`write_iops` |Count |Number of data disk I/O write operations per second. |Yes |
+ ## Enhanced metrics
You can use PgBouncer metrics to monitor the performance of the PgBouncer proces
## Database availability metric
-Is-db-alive is an database server availability metric for Azure Postgres Flexible Server, that returns boolean `[1 for available]` and `[0 for not-available]`. Each metric is emitted at a *1 minute* frequency, and has up to *93 days* of retention. Customers can configure alerts on the metric.
+Is-db-alive is a database server availability metric for Azure Postgres Flexible Server that returns `[1 for available]` and `[0 for not-available]`. Each metric is emitted at a *1-minute* frequency and has up to *93 days* of retention. Customers can configure alerts on the metric.
|Display Name |Metric ID |Unit |Description |Dimension |Default enabled|
|-|-|-|--|-|-|
-|**Database Is Alive** (Preview) |is-db-alive |Boolean|Indicates if the database is up or not |N/a |Yes |
+|**Database Is Alive** (Preview) |`is-db-alive` |Count |Indicates if the database is up or not |N/a |Yes |
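Because the metric emits a `1` or `0` sample each minute, an alert threshold is effectively a windowed average over those samples. A minimal sketch (illustrative Python, not part of the article) of how the samples aggregate into an availability percentage:

```python
def availability_percent(samples):
    """Percentage of 1-minute is-db-alive samples that reported available (1)."""
    if not samples:
        raise ValueError("no samples in the evaluation window")
    return 100.0 * sum(samples) / len(samples)

# Four 1-minute samples: three available, one not available.
print(availability_percent([1, 1, 0, 1]))  # 75.0
```

An alert rule would then fire when this windowed value drops below a chosen threshold, for example 100 percent over a 5-minute window.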
#### Considerations when using the Database availability metrics
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
The following tables list the Private Link services and the regions where they'r
|:-|:--|:-|:--|
|Azure Event Grid| All public regions<br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Event Grid.](../event-grid/network-security.md) |
|Azure Service Bus | All public regions<br/>All Government regions | Supported with premium tier of Azure Service Bus. [Select for tiers](../service-bus-messaging/service-bus-premium-messaging.md) | GA <br/> [Learn how to create a private endpoint for Azure Service Bus.](../service-bus-messaging/private-link-service.md) |
-| Azure API Management | All public regions | | Preview <br/> [Connect privately to API Management using a private endpoint.](../api-management/private-endpoint.md) |
+| Azure API Management | All public regions | | GA <br/> [Connect privately to API Management using a private endpoint.](../api-management/private-endpoint.md) |
| Azure Logic Apps | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Logic Apps.](../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md) |

### Internet of Things (IoT)
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
[Azure Disk Encryption](../virtual-machines/disks-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure DNS - Azure DNS Private Zones](../dns/private-dns-getstarted-portal.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure DNS - Azure DNS Private Resolver](../dns/dns-private-resolver-get-started-portal.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Energy Data Services](reliability-energy-data-services.md )|
[Azure Event Grid](../event-grid/availability-zones-disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure Firewall](../firewall/deploy-availability-zone-powershell.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure Firewall Manager](../firewall-manager/quick-firewall-policy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
route-server Route Injection In Spokes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-injection-in-spokes.md
If the NVA is used to provide connectivity to on-premises network via IPsec VPNs
The previous sections depict the traffic being inspected by the network virtual appliance (NVA) by injecting a `0.0.0.0/0` default route from the NVA to the Route Server. However, if you wish to only inspect spoke-to-spoke and spoke-to-on-premises traffic through the NVA, you should consider that Azure Route Server doesn't advertise a route that is the same or longer prefix than the virtual network address space learned from the NVA. In other words, Azure Route Server won't inject these prefixes into the virtual network and they won't be programmed on the NICs of virtual machines in the hub or spoke VNets.
-Azure Route Server, however, will advertise a larger subnet than the VNet address space that is learned from the NVA. It's possible to advertise from the NVA a supernet of what you have in your virtual network. For example, if your virtual network uses the RFC 1918 address space `10.0.0.0/16`, your NVA can advertise `10.0.0.0/8` to the Azure Route Server and these prefixes will be injected into the hub and spoke VNets. This VNet behavior is referenced in [About BGP with VPN Gateway](../vpn-gateway/vpn-gateway-bgp-overview.md#can-i-advertise-the-exact-prefixes-as-my-virtual-network-prefixes).
+Azure Route Server, however, will advertise a larger subnet than the VNet address space that is learned from the NVA. It's possible to advertise from the NVA a supernet of what you have in your virtual network. For example, if your virtual network uses the RFC 1918 address space `10.0.0.0/16`, your NVA can advertise `10.0.0.0/8` to the Azure Route Server and these prefixes will be injected into the hub and spoke VNets. This VNet behavior is referenced in [About BGP with VPN Gateway](../vpn-gateway/vpn-gateway-vpn-faq.md#can-i-advertise-the-exact-prefixes-as-my-virtual-network-prefixes).
:::image type="content" source="./media/scenarios/influencing-private-traffic-nva.png" alt-text="Diagram showing the injection of private prefixes through Azure Route Server and NVA.":::
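The supernet rule described above can be checked mechanically; here is a quick sketch with Python's `ipaddress` module (illustrative only, not part of the article):

```python
import ipaddress

vnet = ipaddress.ip_network("10.0.0.0/16")       # VNet address space
advertised = ipaddress.ip_network("10.0.0.0/8")  # prefix advertised by the NVA

# Route Server only injects prefixes that are less specific than the VNet
# address space, i.e. a supernet of it: the advertised prefix must be
# shorter and must contain the VNet range.
print(vnet.subnet_of(advertised))             # True: /16 sits inside the /8
print(advertised.prefixlen < vnet.prefixlen)  # True: /8 is shorter than /16
```

A prefix equal to or longer than `10.0.0.0/16` would fail this check and would not be injected into the hub and spoke VNets.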
sap Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/plan-deployment.md
When planning the DNS configuration for the automation framework, consider the f
- Is there an existing Private DNS that the solutions can integrate with or do you need to use a custom Private DNS zone for the deployment environment?
- Are you going to use predefined IP addresses for the Virtual Machines or let Azure assign them dynamically?
-You can integrate with an exiting Private DNS Zone by providing the following values in your tfvars files:
+You can integrate with an existing Private DNS Zone by providing the following values in your tfvars files:
```tfvars management_dns_subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
sap Run Ansible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/run-ansible.md
The following tasks are executed on Linux virtual machines:
# [Windows](#tab/windows)
+- Add local groups and permissions
- Connects to the Windows file shares
+### Local software download
+
+This playbook downloads the installation media from the control plane to the installation media source. The installation media can be shared out from the Central Services instance or from Azure Files or Azure NetApp Files.
+
+# [Linux](#tab/linux)
+
+The following tasks are executed on the Central services instance virtual machine:
+- Download the software
+
+# [Windows](#tab/windows)
+
+The following tasks are executed on the Central services instance virtual machine:
+- Download the software
++++
security Threat Modeling Tool Sensitive Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-sensitive-data.md
If the application is not an enterprise application, then use platform provided
| **SDL Phase** | Build |
| **Applicable Technologies** | Generic |
| **Attributes** | N/A |
-| **References** | [Crypto Obfuscation For .Net](https://www.ssware.com/cryptoobfuscator/obfuscator-net.htm) |
+| **References** | [Crypto Obfuscation For .NET](https://www.ssware.com/cryptoobfuscator/obfuscator-net.htm) |
| **Steps** | Generated binaries (assemblies within apk) should be obfuscated to stop reverse engineering of assemblies. Tools like `CryptoObfuscator` may be used for this purpose. |

## <a id="cert"></a>Set clientCredentialType to Certificate or Windows
sentinel Forward Syslog Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/forward-syslog-monitor-agent.md
Title: Forward syslog data to Microsoft Sentinel and Azure Monitor by using the Azure Monitor agent
-description: Monitor linux-based devices by forwarding syslog data to a Log Analytics workspace.
+ Title: Tutorial - Forward syslog data to Microsoft Sentinel and Azure Monitor by using the Azure Monitor agent
+description: In this tutorial, you'll learn how to monitor linux-based devices by forwarding syslog data to a Log Analytics workspace.
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-schemas.md
Schema references outline the fields that comprise each schema. ASIM currently d
| | - | |
| [Audit Event](normalization-schema-audit.md) | 0.1 | Preview |
| [Authentication Event](normalization-schema-authentication.md) | 0.1.3 | Preview |
-| [DNS Activity](normalization-schema-dns.md) | 0.1.6 | Preview |
+| [DNS Activity](normalization-schema-dns.md) | 0.1.7 | Preview |
| [DHCP Activity](normalization-schema-dhcp.md) | 0.1 | Preview |
-| [File Activity](normalization-schema-file-event.md) | 0.2 | Preview |
-| [Network Session](normalization-schema.md) | 0.2.5 | Preview |
+| [File Activity](normalization-schema-file-event.md) | 0.2.1 | Preview |
+| [Network Session](normalization-schema.md) | 0.2.6 | Preview |
| [Process Event](normalization-schema-process-event.md) | 0.1.4 | Preview |
| [Registry Event](normalization-schema-registry-event.md) | 0.1.2 | Preview |
| [User Management](normalization-schema-user-management.md) | 0.1 | Preview |
The allowed values for a username type are:
| Field | Class | Type | Description |
|-|-|-|-|
-| <a name="usertype"></a>**UserType** | Optional | UserType | The type of source user. Supported values include: `Regular`, `Machine`, `Admin`, `System`, `Application`, `Service Principal`, and `Other`. The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [OriginalUserType](#originalusertype) field. |
+| <a name="usertype"></a>**UserType** | Optional | UserType | The type of source user. Supported values include:<br> - `Regular`<br> - `Machine`<br> - `Admin`<br> - `System`<br> - `Application`<br> - `Service Principal`<br> - `Service`<br> - `Anonymous`<br> - `Other`.<br><br> The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [OriginalUserType](#originalusertype) field. |
| <a name="originalusertype"></a>**OriginalUserType** | Optional | String | The original destination user type, if provided by the reporting device. |
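The normalization rule in the table — map source-specific terms onto the supported `UserType` values and preserve the original in `OriginalUserType` — can be sketched as follows (the source-term mapping shown here is hypothetical, not from the schema):

```python
# Hypothetical mapping from source-specific terms to ASIM UserType values;
# the keys are assumptions for illustration, not defined by the schema.
USER_TYPE_MAP = {
    "user": "Regular",
    "computer": "Machine",
    "administrator": "Admin",
    "svc": "Service",
}

def normalize_user_type(original: str):
    """Return (UserType, OriginalUserType) per the guidance above."""
    user_type = USER_TYPE_MAP.get(original.lower(), "Other")
    return user_type, original

print(normalize_user_type("Administrator"))  # ('Admin', 'Administrator')
```

Unrecognized terms fall through to `Other`, and the raw value always survives in the second element for storage in `OriginalUserType`.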
sentinel Tutorial Log4j Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-log4j-detection.md
Title: Tutorial - Detect Log4j vulnerability exploits with Microsoft Sentinel
-description: In this tutorial, learn how to detect exploits of the Apache Log4j vulnerability in any of your susceptible systems with Microsoft Sentinel analytics rules, taking advantage of alert enrichment capabilities to surface as much information as possible to benefit an investigation.
+ Title: Tutorial - Detect threats by using analytics rules in Microsoft Sentinel
+description: In this tutorial, learn how to use analytics rules in Microsoft Sentinel to detect exploits of the Apache Log4j vulnerability in any of your susceptible systems. Take advantage of the alert enrichment capabilities to surface as much information as possible for your investigation.
Last updated 01/08/2023
-# Tutorial: Detect Log4j vulnerability exploits in your systems and produce enriched alerts
+# Tutorial: Detect threats by using analytics rules in Microsoft Sentinel
As a Security Information and Event Management (SIEM) service, Microsoft Sentinel is responsible for detecting security threats to your organization. It does this by analyzing the massive volumes of data generated by all of your systems' logs.
sentinel Tutorial Respond Threats Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-respond-threats-playbook.md
Title: Use playbooks with automation rules in Microsoft Sentinel
+ Title: Tutorial - Automate threat response in Microsoft Sentinel
description: Use this tutorial to help you use playbooks together with automation rules in Microsoft Sentinel to automate your incident response and remediate security threats.
Last updated 01/17/2023
-# Tutorial: Use playbooks with automation rules in Microsoft Sentinel
+# Tutorial: Respond to threats by using playbooks with automation rules in Microsoft Sentinel
This tutorial shows you how to use playbooks together with automation rules to automate your incident response and remediate security threats detected by Microsoft Sentinel. When you complete this tutorial, you'll be able to:
sentinel Ueba Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ueba-reference.md
These are the data sources from which the UEBA engine collects and analyzes data
| **Azure Active Directory**<br>Sign-in logs | All |
| **Azure Active Directory**<br>Audit logs | ApplicationManagement<br>DirectoryManagement<br>GroupManagement<br>Device<br>RoleManagement<br>UserManagementCategory |
| **Azure Activity logs** | Authorization<br>AzureActiveDirectory<br>Billing<br>Compute<br>Consumption<br>KeyVault<br>Devices<br>Network<br>Resources<br>Intune<br>Logic<br>Sql<br>Storage |
-| **Windows Security events** | 4624: An account was successfully logged on<br>4625: An account failed to log on<br>4648: A logon was attempted using explicit credentials<br>4672: Special privileges assigned to new logon<br>4688: A new process has been created |
+| **Windows Security events**<br>*WindowsEvent* or<br>*SecurityEvent* | 4624: An account was successfully logged on<br>4625: An account failed to log on<br>4648: A logon was attempted using explicit credentials<br>4672: Special privileges assigned to new logon<br>4688: A new process has been created |
## UEBA enrichments
The following table describes the behavior analytics data displayed on each [ent
> - The first, in **bold**, is the "friendly name" of the enrichment.
> - The second *(in italics and parentheses)* is the field name of the enrichment as stored in the [**Behavior Analytics table**](#behavioranalytics-table).

#### UsersInsights field

The following table describes the enrichments featured in the **UsersInsights** dynamic field in the BehaviorAnalytics table:
This document described the Microsoft Sentinel entity behavior analytics table s
- Learn more about [entity behavior analytics](identify-threats-with-entity-behavior-analytics.md). - [Enable UEBA in Microsoft Sentinel](enable-entity-behavior-analytics.md). - [Put UEBA to use](investigate-with-ueba.md) in your investigations.++
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
To give you more flexibility in scheduling your analytics rule execution times a
## Announcements

-- [Microsoft Defender for Identity alerts now available in Government Community Cloud](#microsoft-defender-for-identity-alerts-now-available-in-government-community-cloud)
+- [When disconnecting and connecting the MDI alerts connector - UniqueExternalId field is not populated (use the AlertName field)](#when-disconnecting-and-connecting-the-mdi-alerts-connectoruniqueexternalid-field-is-not-populated-use-the-alertname-field)
- [Microsoft Defender for Identity alerts will no longer refer to the MDA policies in the Alert ExternalLinks properties](#microsoft-defender-for-identity-alerts-will-no-longer-refer-to-the-mda-policies-in-the-alert-externallinks-properties)
- [WindowsEvent table enhancements](#windowsevent-table-enhancements)
- [Out-of-the-box content centralization changes](#out-of-the-box-content-centralization-changes)
To give you more flexibility in scheduling your analytics rule execution times a
- [Account enrichment fields removed from Azure AD Identity Protection connector](#account-enrichment-fields-removed-from-azure-ad-identity-protection-connector)
- [Name fields removed from UEBA UserPeerAnalytics table](#name-fields-removed-from-ueba-userpeeranalytics-table)
-### Microsoft Defender for Identity alerts now available in Government Community Cloud
+### When disconnecting and connecting the MDI alerts connector - UniqueExternalId field is not populated (use the AlertName field) 
-Microsoft Defender for Identity alerts are now available in Government Community Cloud (GCC).
+The Microsoft Defender for Identity alerts now support the Government Community Cloud (GCC). To enable this support, there is a change to the way alerts are sent to Microsoft Sentinel. 
-If you previously used the MDI alerts connector, with the introduction of the new alerts, the `UniqueExternalId` field is no longer populated. The ID represents the alert, and was formerly located in the `ExternalProperties` field. You can now be obtain the ID through the `AlertName` field, which contains the alert’s name.
+For customers connecting and disconnecting the MDI alerts connector, the `UniqueExternalId` field is no longer populated. The `UniqueExternalId` represents the alert, and was formerly located in the `ExternalProperties` field. You can now obtain the ID through the `AlertName` field, which contains the alert's name.
-If you've used this ID in your custom queries, we recommend that you adjust your queries accordingly. Review the [Security alert name mapping and unique external IDs](/defender-for-identity/alerts-overview#security-alert-name-mapping-and-unique-external-ids).
+Review the [complete mapping between the alert names and unique external IDs](/defender-for-identity/alerts-overview#security-alert-name-mapping-and-unique-external-ids).
### Microsoft Defender for Identity alerts will no longer refer to the MDA policies in the Alert ExternalLinks properties
service-connector How To Integrate Web Pubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-web-pubsub.md
Supported authentication and clients for App Service, Container Apps and Azure S
| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
service-fabric How To Deploy Service Fabric Application System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-deploy-service-fabric-application-system-assigned-managed-identity.md
Last updated 07/11/2022
In order to access the managed identity feature for Azure Service Fabric applications, you must first enable the Managed Identity Token Service on the cluster. This service is responsible for the authentication of Service Fabric applications using their managed identities, and for obtaining access tokens on their behalf. Once the service is enabled, you can see it in Service Fabric Explorer under the **System** section in the left pane, running under the name **fabric:/System/ManagedIdentityTokenService** next to other system services.

> [!NOTE]
-> Deployment of Service Fabric applications with managed identities are supported starting with API version `"2019-06-01-preview"`. You can also use the same API version for application type, application type version and service resources. The minimum supported Service Fabric runtime is 6.5 CU2. In addition, the build / package environment should also have the SF .Net SDK at CU2 or higher
+> Deployment of Service Fabric applications with managed identities is supported starting with API version `"2019-06-01-preview"`. You can also use the same API version for application type, application type version and service resources. The minimum supported Service Fabric runtime is 6.5 CU2. In addition, the build / package environment should also have the Service Fabric .NET SDK at CU2 or higher.
## System-assigned managed identity
site-recovery Hyper V Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-tutorial.md
If you're running a Hyper-V Core server, download the setup file and complete th
1. Register the server by running this command:
- ```bash
+ ```cmd
cd "C:\Program Files\Microsoft Azure Site Recovery Provider"
"C:\Program Files\Microsoft Azure Site Recovery Provider\DRConfigurator.exe" /r /Friendlyname "FriendlyName of the Server" /Credentials "path to where the credential file is saved"
```
site-recovery Vmware Physical Manage Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md
Uninstall from the UI or from a command prompt.
1. On the Linux machine, sign in as a **root** user.
2. In a terminal, go to /usr/local/ASR.
3. Run the following command:
- ```
- ./uninstall.sh -Y
+ ```bash
+ ./uninstall.sh -Y
```

## Install Site Recovery VSS provider on source machine
storage Blob Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-powershell.md
Previously updated : 01/03/2022 Last updated : 05/02/2023 ms.devlang: powershell
Connect-AzAccount
After the connection has been established, create the Azure context. Authenticating with Azure AD automatically creates an Azure context for your default subscription. In some cases, you may need to access resources in a different subscription after authenticating. You can change the subscription associated with your current Azure session by modifying the active session context.
-To use your default subscription, create the context by calling the `New-AzStorageContext` cmdlet. Include the `-UseConnectedAccount` parameter so that data operations will be performed using your Azure AD credentials.
+To use your default subscription, create the context by calling the `New-AzStorageContext` cmdlet. Include the `-UseConnectedAccount` parameter so that data operations are performed using your Azure AD credentials.
```azurepowershell
#Create a context object using Azure AD credentials
To change subscriptions, retrieve the context object with the [Get-AzSubscriptio
### Create a container
-All blob data is stored within containers, so you'll need at least one container resource before you can upload data. If needed, use the following example to create a storage container. For more information, see [Managing blob containers using PowerShell](blob-containers-powershell.md).
+All blob data is stored within containers, so you need at least one container resource before you can upload data. If needed, use the following example to create a storage container. For more information, see [Managing blob containers using PowerShell](blob-containers-powershell.md).
```azurepowershell
#Create a container object
$container = New-AzStorageContainer -Name "mycontainer" -Context $ctx
```
-When you use the following examples, you'll need to replace the placeholder values in brackets with your own values. For more information about signing into Azure with PowerShell, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+When you use the following examples, you need to replace the placeholder values in brackets with your own values. For more information about signing into Azure with PowerShell, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
## Upload a blob
-To upload a file to a block blob, pass the required parameter values to the `Set-AzStorageBlobContent` cmdlet. Supply the path and file name with the `-File` parameter, and the name of the container with the `-Container` parameter. You'll also need to provide a reference to the context object with the `-Context` parameter.
+To upload a file to a block blob, pass the required parameter values to the `Set-AzStorageBlobContent` cmdlet. Supply the path and file name with the `-File` parameter, and the name of the container with the `-Container` parameter. You also need to provide a reference to the context object with the `-Context` parameter.
This command creates the blob if it doesn't exist, or prompts for overwrite confirmation if it exists. You can overwrite the file without confirmation if you pass the `-Force` parameter to the cmdlet.
Processed 5257 blobs in demo-container.
Depending on your use case, the `Get-AzStorageBlobContent` cmdlet can be used to download either single or multiple blobs. As with most operations, both approaches require a context object.
-To download a single named blob, you can call the cmdlet directly and supply values for the `-Blob` and `-Container` parameters. The blob will be downloaded to the working PowerShell directory by default, but an alternate location can be specified. To change the target location, a valid, existing path must be passed with the `-Destination` parameter. Because the operation can't create a destination, it will fail with an error if your specified path doesn't exist.
+To download a single named blob, you can call the cmdlet directly and supply values for the `-Blob` and `-Container` parameters. The blob is downloaded to the working PowerShell directory by default, but an alternate location can be specified. To change the target location, a valid, existing path must be passed with the `-Destination` parameter. Because the operation can't create a destination, it fails with an error if your specified path doesn't exist.
Multiple blobs can be downloaded by combining the `Get-AzStorageBlob` cmdlet and the PowerShell pipeline operator. First, create a list of blobs with the `Get-AzStorageBlob` cmdlet. Next, use the pipeline operator and the `Get-AzStorageBlobContent` cmdlet to retrieve the blobs from the container.
$properties = $blob.BlobClient.GetProperties()
Echo $properties.Value
```
-The result displays a list of the blob's properties as shown below.
+The result displays a list of the blob's properties as shown in the following example.
```Result
LastModified : 11/16/2021 3:42:07 PM +00:00
HasLegalHold : False
### Read and write blob metadata
-Blob metadata is an optional set of name/value pairs associated with a blob. As shown in the previous example, there's no metadata associated with a blob initially, though it can be added when necessary. To update blob metadata, you'll use the `BlobClient.UpdateMetadata` method. This method only accepts key-value pairs stored in a generic `IDictionary` object. For more information, see the [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) class definition.
+Blob metadata is an optional set of name/value pairs associated with a blob. As shown in the previous example, there's no metadata associated with a blob initially, though it can be added when necessary. To update blob metadata, use the `BlobClient.UpdateMetadata` method. This method only accepts key-value pairs stored in a generic `IDictionary` object. For more information, see the [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) class definition.
The example below first updates and then commits a blob's metadata, and then retrieves it. The sample blob is flushed from memory to ensure the metadata isn't being read from the in-memory object.
$properties = $blob.BlobClient.GetProperties()
Echo $properties.Value.Metadata
```
-The result returns the blob's newly updated metadata as shown below.
+The result returns the blob's newly updated metadata as shown in the following example.
```Result
Key Value
You can use the `-Force` parameter to overwrite an existing blob with the same n
The resulting destination blob is a writeable blob and not a snapshot.
-The source blob for a copy operation may be a block blob, an append blob, a page blob, or a snapshot. If the destination blob already exists, it must be of the same blob type as the source blob. An existing destination blob will be overwritten.
+The source blob for a copy operation may be a block blob, an append blob, a page blob, or a snapshot. If the destination blob already exists, it must be of the same blob type as the source blob. An existing destination blob is overwritten.
The destination blob can't be modified while a copy operation is in progress. A destination blob can only have one outstanding copy operation. In other words, a blob can't be the destination for multiple pending copy operations.
$blob.BlobClient.CreateSnapshot()
When you change a blob's tier, you move the blob and all of its data to the target tier. To make the change, retrieve a blob with the `Get-AzStorageBlob` cmdlet, and call the `BlobClient.SetAccessTier` method. This approach can be used to change the tier between **Hot**, **Cool**, and **Archive**.
-Changing tiers from **Cool** or **Hot** to **Archive** take place almost immediately. After a blob is moved to the **Archive** tier, it's considered to be offline, and can't be read or modified. Before you can read or modify an archived blob's data, you'll need to rehydrate it to an online tier. Read more about [Blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+Changing tiers from **Cool** or **Hot** to **Archive** takes place almost immediately. After a blob is moved to the **Archive** tier, it's considered to be offline, and can't be read or modified. Before you can read or modify an archived blob's data, you need to rehydrate it to an online tier. Read more about [Blob rehydration from the Archive tier](archive-rehydrate-overview.md).
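A minimal sketch of these tiering rules, assuming an in-memory model rather than the real `SetAccessTier` API: **Hot** and **Cool** stay online, while **Archive** rejects reads until the blob is rehydrated.

```javascript
// Illustrative model of access tiers (not the SDK):
// Hot and Cool are online tiers; Archive is offline until rehydrated.
const ONLINE_TIERS = new Set(["Hot", "Cool"]);

class TieredBlob {
  constructor(tier = "Hot") {
    this.tier = tier;
  }
  setAccessTier(tier) {           // analogous to BlobClient.SetAccessTier
    this.tier = tier;
  }
  read() {
    if (!ONLINE_TIERS.has(this.tier)) {
      throw new Error("blob is archived; rehydrate to an online tier first");
    }
    return "blob data";
  }
  rehydrate(targetTier = "Hot") { // move the blob back to an online tier
    this.tier = targetTier;
  }
}

const blob = new TieredBlob("Cool");
blob.setAccessTier("Archive");    // near-immediate, but the blob goes offline
let readable = true;
try { blob.read(); } catch { readable = false; }
blob.rehydrate("Hot");            // after rehydration, reads succeed again
```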
The following sample code sets the tier to **Hot** for all blobs within the `archive` container.
Foreach($blob in $blobs) {
Blob index tags make data management and discovery easier. Blob index tags are user-defined key-value index attributes that you can apply to your blobs. Once configured, you can categorize and find objects within an individual container or across all containers. Blob resources can be dynamically categorized by updating their index tags without requiring a change in container organization. Index tags offer a flexible way to cope with changing data requirements. You can use both metadata and index tags simultaneously. For more information on index tags, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
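To illustrate how index tags enable discovery across containers, here's a hypothetical in-memory lookup. The blob names, containers, and tag values are invented for the sketch; the real service evaluates tag filters server-side.

```javascript
// Hypothetical in-memory sketch of finding blobs by index tags
// across containers, mirroring the categorization described above.
const blobs = [
  { container: "menus",  name: "dinner.txt", tags: { Type: "Restaurant" } },
  { container: "photos", name: "stage.jpg",  tags: { Type: "Venue" } },
  { container: "menus",  name: "wine.txt",   tags: { Type: "Restaurant" } },
];

// Return blobs whose tags match every requested key/value pair,
// regardless of which container they live in.
function findByTags(all, wanted) {
  return all.filter((b) =>
    Object.entries(wanted).every(([k, v]) => b.tags[k] === v)
  );
}

const restaurants = findByTags(blobs, { Type: "Restaurant" });
console.log(restaurants.map((b) => b.name)); // [ 'dinner.txt', 'wine.txt' ]
```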
-The following example illustrates how to add blob index tags to a series of blobs. The example reads data from an XML file and uses it to create index tags on several blobs. To use the sample code, create a local *blob-list.xml* file in your *C:\temp* directory. The XML data is provided below.
+The following example illustrates how to add blob index tags to a series of blobs. The example reads data from an XML file and uses it to create index tags on several blobs. To use the sample code, create a local *blob-list.xml* file in your *C:\temp* directory. The XML data is provided in the following example.
```xml <Venue Name="House of Prime Rib" Type="Restaurant">
$data.Venue.Files.ChildNodes | ForEach-Object {
## Delete blobs
-You can delete either a single blob or series of blobs with the `Remove-AzStorageBlob` cmdlet. When deleting multiple blobs, you can utilize conditional operations, loops, or the PowerShell pipeline as shown in the examples below.
+You can delete either a single blob or series of blobs with the `Remove-AzStorageBlob` cmdlet. When deleting multiple blobs, you can utilize conditional operations, loops, or the PowerShell pipeline as shown in the following examples.
> [!WARNING] > Running the following examples may permanently delete blobs. Microsoft recommends enabling container soft delete to protect containers and blobs from accidental deletion. For more info, see [Soft delete for containers](soft-delete-blob-overview.md).
for ($i = 1; $i -le 3; $i++) {
Get-AzStorageBlob -Prefix $prefixName -Container $containerName -Context $ctx | Remove-AzStorageBlob ```
-In some cases, it's possible to retrieve blobs that have been deleted. If your storage account's soft delete data protection option is enabled, the `-IncludeDeleted` parameter will return blobs deleted within the associated retention period. To learn more about soft delete, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article.
+In some cases, it's possible to retrieve blobs that have been deleted. If your storage account's soft delete data protection option is enabled, the `-IncludeDeleted` parameter returns blobs deleted within the associated retention period. To learn more about soft delete, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article.
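The retention behavior behind `-IncludeDeleted` can be modeled with a small sketch. The seven-day retention period and the store class are illustrative assumptions, not actual service code; the point is that a deleted blob surfaces only when deleted items are requested and only within the retention window.

```javascript
// Toy model of soft delete: a deleted blob remains recoverable only
// within the retention period (seven days here, purely illustrative).
const RETENTION_DAYS = 7;
const DAY_MS = 24 * 60 * 60 * 1000;

class SoftDeleteStore {
  constructor() {
    this.blobs = new Map(); // name -> { deletedAt: number | null }
  }
  put(name) { this.blobs.set(name, { deletedAt: null }); }
  remove(name, now) { this.blobs.get(name).deletedAt = now; }
  // Analogous to listing blobs; includeDeleted mirrors -IncludeDeleted.
  list(now, { includeDeleted = false } = {}) {
    return [...this.blobs.entries()]
      .filter(([, b]) => {
        if (b.deletedAt === null) return true;             // live blob
        if (!includeDeleted) return false;                 // hidden by default
        return now - b.deletedAt < RETENTION_DAYS * DAY_MS; // still retained
      })
      .map(([name]) => name);
  }
}

const store = new SoftDeleteStore();
store.put("file4.txt");
store.remove("file4.txt", 0);
store.list(1 * DAY_MS);                            // hidden without the flag
store.list(1 * DAY_MS, { includeDeleted: true });  // visible within retention
store.list(10 * DAY_MS, { includeDeleted: true }); // gone past retention
```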
Use the following example to retrieve a list of blobs deleted within the container's associated retention period. The result displays a list of recently deleted blobs.
file4.txt BlockBlob 22 application/octet-stream 2021-12-17 00:14:25Z C
As mentioned in the [List blobs](#list-blobs) section, you can configure the soft delete data protection option on your storage account. When enabled, it's possible to restore blobs deleted within the associated retention period. You can also use versioning to maintain previous versions of your blobs for recovery and restoration.
-If blob versioning and blob soft delete are both enabled, then modifying, overwriting, deleting, or restoring a blob automatically creates a new version. The method you'll use to restore a deleted blob will depend upon whether versioning is enabled on your storage account.
+If blob versioning and blob soft delete are both enabled, then modifying, overwriting, deleting, or restoring a blob automatically creates a new version. The method you use to restore a deleted blob depends upon whether versioning is enabled on your storage account.
The following code sample restores all soft-deleted blobs or, if versioning is enabled, restores the latest version of a blob. It first determines whether versioning is enabled with the `Get-AzStorageBlobServiceProperty` cmdlet.
If versioning is enabled, the `Get-AzStorageBlob` cmdlet retrieves a list of all
If versioning is disabled, the `BlobBaseClient.Undelete` method is used to restore each soft-deleted blob in the container.
-Before you can follow this example, you'll need to enable soft delete or versioning on at least one of your storage accounts.
+Before you can follow this example, you need to enable soft delete or versioning on at least one of your storage accounts.
+
+> [!IMPORTANT]
+> The following example enumerates a group of blobs and stores them in memory before processing them. If versioning is enabled, the blobs are also sorted. Using the `-ContinuationToken` parameter together with the `$maxCount` variable limits the number of blobs within each group to conserve resources. If a container holds millions of blobs, enumerating them is expensive. You can adjust the value of the `$maxCount` variable, but the script will still take a long time to process a container of that size.
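The bounded enumeration this note describes follows a standard continuation-token loop. Sketched here in plain JavaScript with an integer standing in for the opaque token (an assumption for illustration; a real token is a string returned by the service):

```javascript
// Generic sketch of continuation-token paging: fetch at most maxCount
// items per round trip, then resume from the token until exhausted.
function listPage(allNames, token, maxCount) {
  const start = token ?? 0; // integer stands in for the opaque token here
  const names = allNames.slice(start, start + maxCount);
  const next =
    start + names.length < allNames.length ? start + names.length : null;
  return { names, continuationToken: next };
}

const all = Array.from({ length: 2500 }, (_, i) => `blob-${i}`);
const maxCount = 1000;
let token = null;
const processed = [];
do {
  const page = listPage(all, token, maxCount);
  processed.push(...page.names); // process one bounded batch at a time
  token = page.continuationToken;
} while (token !== null);
// 2500 blobs processed in three pages of at most 1000 each
```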
To learn more about the soft delete data protection option, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article.
To learn more about the soft delete data protection option, refer to the [Soft d
$accountName ="myStorageAccount" $groupName ="myResourceGroup" $containerName ="mycontainer"
+$maxCount = 1000
+$token = $null
$blobSvc = Get-AzStorageBlobServiceProperty `
if($blobSvc.DeleteRetentionPolicy.Enabled)
$ctx = New-AzStorageContext ` -StorageAccountName $accountName ` -UseConnectedAccount-
- # Get all blobs and versions using -Unique
- # to avoid processing duplicates/versions
- $blobs = Get-AzStorageBlob `
- -Container $containerName `
- -Context $ctx -IncludeVersion | `
- Where-Object {$_.VersionId -ne $null} | `
- Sort-Object -Property Name -Unique
-
- # Iterate the collection
- foreach ($blob in $blobs)
+ do
{-
- # Process versions
- if($blob.VersionId -ne $null)
+ # Get all blobs and versions using -Unique
+ # to avoid processing duplicates/versions
+ $blobs = Get-AzStorageBlob `
+ -Container $containerName `
+ -Context $ctx -IncludeVersion | `
+ Where-Object {$_.VersionId -ne $null} | `
+ Sort-Object -Property Name -Unique
+
+ # Iterate the collection
+ foreach ($blob in $blobs)
{
-
- # Get all versions of the blob, newest to oldest
- $delBlob = Get-AzStorageBlob `
- -Container $containerName `
- -Context $ctx `
- -Prefix $blob.Name `
- -IncludeDeleted -IncludeVersion | `
- Sort-Object -Property VersionId -Descending
-
- # Verify that the newest version is NOT the latest (that the version is "deleted")
- if (-Not $delBlob[0].IsLatestVersion)
+ # Process versions
+ if($blob.VersionId -ne $null)
{
- $delBlob[0] | Copy-AzStorageBlob `
- -DestContainer $containerName `
- -DestBlob $delBlob[0].Name
- }
+
+ # Get all versions of the blob, newest to oldest
+ $delBlob = Get-AzStorageBlob `
+ -Container $containerName `
+ -Context $ctx `
+ -Prefix $blob.Name `
+ -IncludeDeleted -IncludeVersion | `
+ Sort-Object -Property VersionId -Descending
+
+ # Verify that the newest version is NOT the latest (that the version is "deleted")
+ if (-Not $delBlob[0].IsLatestVersion)
+ {
+ $delBlob[0] | Copy-AzStorageBlob `
+ -DestContainer $containerName `
+ -DestBlob $delBlob[0].Name
+ }
- #Dispose the temporary object
- $delBlob = $null
-
+ #Dispose the temporary object
+ $delBlob = $null
+ }
}-
+ $token = $blobs[$blobs.Count -1].ContinuationToken;
}-
+ while ($null -ne $token)
} # Otherwise (if versioning is disabled)
storage Storage Blob Container Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-javascript.md
+
+ Title: Create and manage container leases with JavaScript
+
+description: Learn how to manage a lock on a container in your Azure Storage account using the JavaScript client library.
++++++ Last updated : 05/01/2023+
+ms.devlang: javascript
+++
+# Create and manage container leases with JavaScript
+
+This article shows how to create and manage container leases using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break container leases.
+
+## About container leases
++
+Lease operations are handled by the [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) class, which provides a client containing all lease operations for blobs and containers. To learn more about blob leases using the client library, see [Create and manage blob leases with JavaScript](storage-blob-lease-javascript.md).
+
+## Acquire a lease
+
+When you acquire a container lease, you obtain a lease ID that your code can use to operate on the container. If the container already has an active lease, you can only request a new lease by using the active lease ID. However, you can specify a new lease duration.
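This acquire rule can be modeled with a toy class (illustrative only; the real `acquireLease` call goes through the service, and real lease IDs are GUIDs rather than the names used here):

```javascript
// Toy model of the acquire rule: a container with an active lease only
// grants a "new" lease when the request carries the active lease ID,
// though a new duration may be specified.
class LeasedContainer {
  constructor() {
    this.leaseId = null;
    this.duration = null;
  }
  acquireLease(duration, proposedId, requestLeaseId = null) {
    if (this.leaseId !== null && requestLeaseId !== this.leaseId) {
      throw new Error("409: there is already a lease present");
    }
    this.leaseId = this.leaseId ?? proposedId;
    this.duration = duration; // a new lease duration may be specified
    return this.leaseId;
  }
}

const container = new LeasedContainer();
const id = container.acquireLease(30, "lease-1"); // 30-second lease
let conflict = false;
try { container.acquireLease(30, "lease-2"); } catch { conflict = true; }
container.acquireLease(60, null, id); // OK: active lease ID, new duration
```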
+
+To acquire a lease, create an instance of the [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) class, and then use one of the following methods:
+
+- [acquireLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-acquirelease)
+
+The following example acquires a 30-second lease for a container:
++
+## Renew a lease
+
+You can renew a container lease if the lease ID specified on the request matches the lease ID associated with the container. The lease can be renewed even if it has expired, as long as the container hasn't been leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
+
+To renew a lease, use one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
+
+- [renewLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-renewlease)
+
+The following example renews a container lease:
++
+## Release a lease
+
+You can release a container lease if the lease ID specified on the request matches the lease ID associated with the container. Releasing a lease allows another client to acquire a lease for the container immediately after the release is complete.
+
+You can release a lease using one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
+
+- [releaseLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-releaselease)
+
+The following example releases a lease on a container:
++
+## Break a lease
+
+You can break a container lease if the container has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired until the original lease expires or is released.
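An illustrative state machine for these break rules (not SDK code; the break period is collapsed into an explicit release here for brevity):

```javascript
// Toy model of breaking a lease: any authorized caller may break it
// without the lease ID; afterwards the lease can't be renewed, and no
// new lease is granted until the old one expires or is released.
class BreakableLease {
  constructor() { this.state = "leased"; }
  breakLease() { this.state = "broken"; }          // no lease ID required
  renewLease() {
    if (this.state !== "leased") throw new Error("409: cannot renew");
  }
  release() { this.state = "available"; }
  acquire() {
    if (this.state !== "available" && this.state !== "expired") {
      throw new Error("409: lease not yet available");
    }
    this.state = "leased";
  }
}

const lease = new BreakableLease();
lease.breakLease();
let renewed = true;
try { lease.renewLease(); } catch { renewed = false; } // renewal refused
lease.release();   // (or wait for the original lease to expire)
lease.acquire();   // a new lease may now be taken
```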
+
+You can break a lease using one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
+
+- [breakLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-breaklease)
+
+The following example breaks a lease on a container:
+++
+## Resources
+
+To learn more about managing container leases using the Azure Blob Storage client library for JavaScript, see the following resources.
+
+### REST API operations
+
+The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing container leases use the following REST API operation:
+
+- [Lease Container](/rest/api/storageservices/lease-container)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-container.js)
++
+### See also
+
+- [Managing Concurrency in Blob storage](concurrency-manage.md)
storage Storage Blob Container Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-typescript.md
+
+ Title: Create and manage container leases with TypeScript
+
+description: Learn how to manage a lock on a container in your Azure Storage account with TypeScript using the JavaScript client library.
++++++ Last updated : 05/01/2023+
+ms.devlang: typescript
+++
+# Create and manage container leases with TypeScript
+
+This article shows how to create and manage container leases using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break container leases.
+
+## About container leases
++
+Lease operations are handled by the [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) class, which provides a client containing all lease operations for blobs and containers. To learn more about blob leases using the client library, see [Create and manage blob leases with TypeScript](storage-blob-lease-typescript.md).
+
+## Acquire a lease
+
+When you acquire a container lease, you obtain a lease ID that your code can use to operate on the container. If the container already has an active lease, you can only request a new lease by using the active lease ID. However, you can specify a new lease duration.
+
+To acquire a lease, create an instance of the [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) class, and then use one of the following methods:
+
+- [acquireLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-acquirelease)
+
+The following example acquires a 30-second lease for a container:
++
+## Renew a lease
+
+You can renew a container lease if the lease ID specified on the request matches the lease ID associated with the container. The lease can be renewed even if it has expired, as long as the container hasn't been leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
+
+To renew a lease, use one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
+
+- [renewLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-renewlease)
+
+The following example renews a container lease:
++
+## Release a lease
+
+You can release a container lease if the lease ID specified on the request matches the lease ID associated with the container. Releasing a lease allows another client to acquire a lease for the container immediately after the release is complete.
+
+You can release a lease using one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
+
+- [releaseLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-releaselease)
+
+The following example releases a lease on a container:
++
+## Break a lease
+
+You can break a container lease if the container has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired until the original lease expires or is released.
+
+You can break a lease using one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
+
+- [breakLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-breaklease)
+
+The following example breaks a lease on a container:
+++
+## Resources
+
+To learn more about managing container leases using the Azure Blob Storage client library for JavaScript, see the following resources.
+
+### REST API operations
+
+The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing container leases use the following REST API operation:
+
+- [Lease Container](/rest/api/storageservices/lease-container)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/lease-container.ts)
++
+### See also
+
+- [Managing Concurrency in Blob storage](concurrency-manage.md)
storage Storage Blob Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-javascript.md
+
+ Title: Create and manage blob leases with JavaScript
+
+description: Learn how to manage a lock on a blob in your Azure Storage account using the JavaScript client library.
++++++ Last updated : 05/01/2023+
+ms.devlang: javascript
+++
+# Create and manage blob leases with JavaScript
+
+This article shows how to create and manage blob leases using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break blob leases.
+
+## About blob leases
++
+Lease operations are handled by the [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) class, which provides a client containing all lease operations for blobs and containers. To learn more about container leases using the client library, see [Create and manage container leases with JavaScript](storage-blob-container-lease-javascript.md).
+
+## Acquire a lease
+
+When you acquire a blob lease, you obtain a lease ID that your code can use to operate on the blob. If the blob already has an active lease, you can only request a new lease by using the active lease ID. However, you can specify a new lease duration.
+
+To acquire a lease, create an instance of the [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) class, and then use one of the following methods:
+
+- [acquireLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-acquirelease)
+
+The following example acquires a 30-second lease for a blob:
++
+## Renew a lease
+
+You can renew a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. The lease can be renewed even if it has expired, as long as the blob hasn't been modified or leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
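A toy model of this renewal rule (illustrative, not the SDK): a matching lease ID can renew even an expired lease, unless the blob was modified or leased again in the meantime.

```javascript
// Toy model of blob lease renewal semantics.
class BlobLease {
  constructor(leaseId) {
    this.leaseId = leaseId;
    this.expired = false;
    this.modifiedSinceExpiry = false;
  }
  expire() { this.expired = true; }
  modifyBlob() { if (this.expired) this.modifiedSinceExpiry = true; }
  renew(requestLeaseId) {
    if (requestLeaseId !== this.leaseId) {
      throw new Error("409: lease ID mismatch");
    }
    if (this.modifiedSinceExpiry) {
      throw new Error("409: blob changed since the lease expired");
    }
    this.expired = false; // renewing resets the lease duration
  }
}

const lease = new BlobLease("lease-1");
lease.expire();
lease.renew("lease-1"); // fine: expired, but the blob is untouched
lease.expire();
lease.modifyBlob();
let ok = true;
try { lease.renew("lease-1"); } catch { ok = false; } // refused now
```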
+
+To renew a lease, use one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
+
+- [renewLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-renewlease)
+
+The following example renews a lease for a blob:
++
+## Release a lease
+
+You can release a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. Releasing a lease allows another client to acquire a lease for the blob immediately after the release is complete.
+
+You can release a lease using one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
+
+- [releaseLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-releaselease)
+
+The following example releases a lease on a blob:
++
+## Break a lease
+
+You can break a blob lease if the blob has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired until the original lease expires or is released.
+
+You can break a lease using one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
+
+- [breakLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-breaklease)
+
+The following example breaks a lease on a blob:
+++
+## Resources
+
+To learn more about managing blob leases using the Azure Blob Storage client library for JavaScript, see the following resources.
+
+### REST API operations
+
+The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing blob leases use the following REST API operation:
+
+- [Lease Blob](/rest/api/storageservices/lease-blob)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-blob.js)
++
+### See also
+
+- [Managing Concurrency in Blob storage](concurrency-manage.md)
storage Storage Blob Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-typescript.md
+
+ Title: Create and manage blob leases with TypeScript
+
+description: Learn how to manage a lock on a blob in your Azure Storage account with TypeScript using the JavaScript client library.
++++++ Last updated : 05/01/2023+
+ms.devlang: typescript
+++
+# Create and manage blob leases with TypeScript
+
+This article shows how to create and manage blob leases using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break blob leases.
+
+## About blob leases
++
+Lease operations are handled by the [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) class, which provides a client containing all lease operations for blobs and containers. To learn more about container leases using the client library, see [Create and manage container leases with TypeScript](storage-blob-container-lease-typescript.md).
+
+## Acquire a lease
+
+When you acquire a blob lease, you obtain a lease ID that your code can use to operate on the blob. If the blob already has an active lease, you can only request a new lease by using the active lease ID. However, you can specify a new lease duration.
+
+To acquire a lease, create an instance of the [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) class, and then use one of the following methods:
+
+- [acquireLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-acquirelease)
+
+The following example acquires a 30-second lease for a blob:
++
+## Renew a lease
+
+You can renew a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. The lease can be renewed even if it has expired, as long as the blob hasn't been modified or leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
+
+To renew a lease, use one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
+
+- [renewLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-renewlease)
+
+The following example renews a lease for a blob:
++
+## Release a lease
+
+You can release a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. Releasing a lease allows another client to acquire a lease for the blob immediately after the release is complete.
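A minimal sketch of this release rule, shown in plain JavaScript since the TypeScript typings are incidental to the behavior (illustrative only; the lease IDs are invented):

```javascript
// Toy model of lease release: only the matching lease ID may release,
// and once released, another client can acquire immediately.
class ReleasableLease {
  constructor() { this.leaseId = null; }
  acquire(id) {
    if (this.leaseId !== null) throw new Error("409: already leased");
    this.leaseId = id;
  }
  release(id) {
    if (id !== this.leaseId) throw new Error("409: lease ID mismatch");
    this.leaseId = null;
  }
}

const blobLease = new ReleasableLease();
blobLease.acquire("client-a");
let mismatch = false;
try { blobLease.release("client-b"); } catch { mismatch = true; }
blobLease.release("client-a");
blobLease.acquire("client-b"); // another client acquires immediately
```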
+
+You can release a lease using one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
+
+- [releaseLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-releaselease)
+
+The following example releases a lease on a blob:
++
+## Break a lease
+
+You can break a blob lease if the blob has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired until the original lease expires or is released.
+
+You can break a lease using one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
+
+- [breakLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-breaklease)
+
+The following example breaks a lease on a blob:
+++
+## Resources
+
+To learn more about managing blob leases using the Azure Blob Storage client library for JavaScript, see the following resources.
+
+### REST API operations
+
+The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing blob leases use the following REST API operation:
+
+- [Lease Blob](/rest/api/storageservices/lease-blob)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/lease-blob.ts)
++
+### See also
+
+- [Managing Concurrency in Blob storage](concurrency-manage.md)
storage Classic Account Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migrate.md
Previously updated : 04/28/2023 Last updated : 05/02/2023
To migrate a classic storage account to the Azure Resource Manager deployment mo
1. After a successful validation, select **Prepare** button to simulate the migration.
- > [!IMPORTANT]
- > There may be a delay of a few minutes after validation is complete before the Prepare button is enabled.
- 1. If the Prepare step completes successfully, you'll see a link to the new resource group. Select that link to navigate to the new resource group. The migrated storage account appears under the **Resources** tab in the **Overview** page for the new resource group. At this point, you can compare the configuration and data in the classic storage account to the newly migrated storage account. You'll see both in the list of storage accounts in the portal. Both the classic account and the migrated account have the same name.
To migrate a classic storage account to the Azure Resource Manager deployment mo
1. If you're not satisfied with the results of the migration, select **Abort** to delete the new storage account and resource group. You can then address any problems and try again. 1. When you're ready to commit, type **yes** to confirm, then select **Commit** to complete the migration.
+> [!IMPORTANT]
+> There may be a delay of a few minutes after validation is complete before the Prepare button is enabled.
+ # [PowerShell](#tab/azure-powershell) To migrate a classic storage account to the Azure Resource Manager deployment model with PowerShell, you must use the Azure PowerShell Service Management module. To learn how to install this module, see [Install and configure the Azure PowerShell Service Management module](/powershell/azure/servicemanagement/install-azure-ps#checking-the-version-of-azure-powershell). The key steps are included here for convenience.
storage Classic Account Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-overview.md
Previously updated : 05/01/2023 Last updated : 05/02/2023
To migrate your classic storage accounts, you should:
1. Search for **Help + support** in the [Azure portal](https://portal.azure.com#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview). 1. Select **Create a support request**.
- 1. Under **Issue type**, select **Technical**.
- 1. Under **Subscription**, select your subscription.
- 1. Under **Service**, select **My services**.
- 1. Under **Service type**, search for and select **Storage Account Management**.
- 1. Under **Resource**, select the resource you want to migrate.
- 1. Under **Summary**, type a description of your issue.
- 1. Under **Problem type**, select **Migrate a classic storage account to Azure Resource Manager**.
+ 1. For **Summary**, type a description of your issue.
+ 1. For **Issue type**, select **Technical**.
+ 1. For **Subscription**, select your subscription.
+ 1. For **Service**, select **My services**.
+ 1. For **Service type**, select **Storage Account Management**.
+ 1. For **Resource**, select the resource you want to migrate.
+ 1. For **Problem type**, select **Data Migration**.
+ 1. For **Problem subtype**, select **Migrate account to new resource group/subscription/region/tenant**.
1. Select **Next**, then follow the instructions to submit your support request. ## FAQ
storage Elastic San Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-introduction.md
description: An overview of Azure Elastic SAN Preview, a service that enables yo
Previously updated : 02/22/2023 Last updated : 05/02/2023
storage Elastic San Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md
description: Understand planning for an Azure Elastic SAN deployment. Learn abou
Previously updated : 02/22/2023 Last updated : 05/02/2023
Using the same example of a 100 TiB SAN that has 250,000 IOPS and 4,000 MB/s. Sa
In Preview, Elastic SAN supports public endpoints from selected virtual networks, restricting access to specified virtual networks. You configure volume groups to allow network access only from specific vnet subnets. Once a volume group is configured to allow access from a subnet, this configuration is inherited by all volumes belonging to the volume group. You can then mount volumes from any clients in the subnet, with the [Internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol. You must enable the [service endpoint for Azure Storage](../../virtual-network/virtual-network-service-endpoints-overview.md) in your virtual network before setting up the network rule on the volume group.
+If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection retries for 90 seconds before terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart.
+ ## Redundancy To protect the data in your Elastic SAN against data loss or corruption, all SANs store multiple copies of each file as they're written. Depending on the requirements of your workload, you can select additional degrees of redundancy. The following data redundancy options are currently supported:
The following iSCSI features aren't currently supported:
- iSCSI Error Recovery Levels 1 and 2
- ESXi iSCSI flow control
- More than one LUN per iSCSI target
- Multiple connections per session (MC/S)

## Next steps
storage Elastic San Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-scale-targets.md
description: Learn about the capacity, IOPS, and throughput rates for Azure Elas
Previously updated : 12/09/2022 Last updated : 05/02/2023
storage Storage How To Use Files Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-windows.md
description: Learn to use Azure file shares with Windows and Windows Server. Use
Previously updated : 05/01/2023 Last updated : 05/02/2023
In order to use an Azure file share via the public endpoint outside of the Azure
| Windows version | SMB version | Azure Files SMB Multichannel | Maximum SMB channel encryption |
|-|-|-|-|
| Windows 11, version 22H2 | SMB 3.1.1 | Yes | AES-256-GCM |
-| Windows 10, version 22H2 | SMB 3.1.1 | Yes | AES-256-GCM |
+| Windows 10, version 22H2 | SMB 3.1.1 | Yes | AES-128-GCM |
| Windows Server 2022 | SMB 3.1.1 | Yes | AES-256-GCM |
| Windows 11, version 21H2 | SMB 3.1.1 | Yes | AES-256-GCM |
| Windows 10, version 21H2 | SMB 3.1.1 | Yes | AES-128-GCM |
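The table above can be expressed as a simple lookup, for example when auditing a fleet of clients (a sketch: the tuple keys and the `max_channel_encryption` helper are illustrative, not an Azure API; values follow the corrected table, including the Windows 10 22H2 fix):

```python
# Maximum SMB channel encryption per Windows version, per the table above.
MAX_SMB_ENCRYPTION = {
    ("Windows 11", "22H2"): "AES-256-GCM",
    ("Windows 10", "22H2"): "AES-128-GCM",   # corrected value from the table
    ("Windows Server 2022", None): "AES-256-GCM",
    ("Windows 11", "21H2"): "AES-256-GCM",
    ("Windows 10", "21H2"): "AES-128-GCM",
}

def max_channel_encryption(os_name, version=None):
    """Look up the strongest SMB channel encryption listed for a Windows version."""
    return MAX_SMB_ENCRYPTION.get((os_name, version), "unknown")

print(max_channel_encryption("Windows 10", "22H2"))  # AES-128-GCM
```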
stream-analytics Stream Analytics Clean Up Your Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-clean-up-your-job.md
# Stop or delete your Azure Stream Analytics job
-Azure Stream Analytics jobs can be easily stopped or deleted through the Azure portal, Azure PowerShell, Azure SDK for .Net, or REST API. A Stream Analytics job cannot be recovered once it has been deleted.
+Azure Stream Analytics jobs can be easily stopped or deleted through the Azure portal, Azure PowerShell, Azure SDK for .NET, or REST API. A Stream Analytics job cannot be recovered once it has been deleted.
>[!NOTE]
>When you stop your Stream Analytics job, the data persists only in the input and output storage, such as Event Hubs or Azure SQL Database. If you are required to remove data from Azure, be sure to follow the removal process for the input and output resources of your Stream Analytics job.
synapse-analytics Synapse Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-service-identity.md
You can create, delete, manage user-assigned managed identities in Azure Active
In order to use a user-assigned managed identity, you must first [create credentials](../data-factory/credentials.md) in your service instance for the UAMI.
+>[!NOTE]
+> User-assigned Managed Identity is not currently supported in Synapse notebooks and Spark job definitions.
+ ## Next steps

- [Create credentials](../data-factory/credentials.md).
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
description: Learn about the new features and documentation improvements for Azu
Previously updated : 04/04/2023 Last updated : 05/02/2023
The following table lists the features of Azure Synapse Analytics that are curre
| **Feature** | **Learn more**|
|:-- |:-- |
-| **Spark Advisor for Azure Synapse Notebook** | The [Spark Advisor for Azure Synapse Notebook](monitoring/apache-spark-advisor.md) analyzes code run by Spark and displays real-time advice for Notebooks. The Spark advisor offers recommendations for code optimization based on built-in common patterns, performs error analysis, and locates the root cause of failures.ΓÇ» |
| **Apache Spark Delta Lake tables in serverless SQL pools** | The ability for serverless SQL pools to access Delta Lake tables created in Spark databases is in preview. For more information, see [Azure Synapse Analytics shared metadata tables](metadat).|
| **Apache Spark elastic pool storage** | Azure Synapse Analytics Spark pools now support elastic pool storage in preview. Elastic pool storage allows the Spark engine to monitor worker node temporary storage and attach more disks if needed. No action is required, and you should see fewer job failures as a result. For more information, see [Blog: Azure Synapse Analytics Spark elastic pool storage is available for public preview](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_8).|
| **Apache Spark Optimized Write** | [Optimize Write](spark/optimize-write-for-apache-spark.md) is a Delta Lake on Azure Synapse feature that reduces the number of files written by Apache Spark 3 (3.1 and 3.2) and aims to increase the individual file size of the written data.|
| **Apache Spark R language support** | Built-in [R support for Apache Spark](spark/apache-spark-r-language.md) is now in preview. |
| **Azure Synapse Data Explorer** | The [Azure Synapse Data Explorer](./data-explorer/data-explorer-overview.md) provides an interactive query experience to unlock insights from log and telemetry data. Connectors for Azure Data Explorer are available for Synapse Data Explorer. For more news, see [Azure Synapse Data Explorer (preview)](#azure-synapse-data-explorer-preview).|
| **Browse ADLS Gen2 folders in the Azure Synapse Analytics workspace** | You can now browse an Azure Data Lake Storage Gen2 (ADLS Gen2) container or folder in your Azure Synapse Analytics workspace in Synapse Studio. To learn more, see [Browse an ADLS Gen2 folder with ACLs in Azure Synapse Analytics](how-to-access-container-with-access-control-lists.md).|
+| **Capture changed data from Cosmos DB analytical store** | Azure Cosmos DB analytical store now supports change data capture (CDC) for Azure Cosmos DB API for NoSQL and Azure Cosmos DB API for MongoDB. For more information, see [Capture Changed Data from your Cosmos DB analytical store](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/capture-changed-data-from-your-cosmos-db-analytical-store/ba-p/3783530) and [DevBlog: Change Data Capture (CDC) with Azure Cosmos DB analytical store](https://devblogs.microsoft.com/cosmosdb/now-in-preview-change-data-capture-cdc-with-azure-cosmos-db-analytical-store/).|
| **Data flow improvements to Data Preview** | To learn more, see [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). |
| **Distribution Advisor** | The Distribution Advisor is a new preview feature in Azure Synapse dedicated SQL pools Gen2 that analyzes queries and recommends the best distribution strategies for tables to improve query performance. For more information, see [Distribution Advisor in Azure Synapse SQL](sql/distribution-advisor.md).|
| **Distributed Deep Neural Network Training** | Learn more about new distributed training libraries like Horovod, Petastorm, TensorFlow, and PyTorch in [Deep learning tutorials](./machine-learning/concept-deep-learning.md). |
| **Embed ADX dashboards** | Azure Data Explorer dashboards can be [embedded in an IFrame and hosted in third party apps](/azure/data-explorer/kusto/api/monaco/host-web-ux-in-iframe). |
| **Reject options for delimited text files** | Reject options for CREATE EXTERNAL TABLE on delimited files are in preview. |
+| **Spark Advisor for Azure Synapse Notebook** | The [Spark Advisor for Azure Synapse Notebook](monitoring/apache-spark-advisor.md) analyzes code run by Spark and displays real-time advice for Notebooks. The Spark advisor offers recommendations for code optimization based on built-in common patterns, performs error analysis, and locates the root cause of failures.|
| **Time-To-Live in managed virtual network (VNet)** | Reserve compute for the time-to-live (TTL) in managed virtual network TTL period, saving time and improving efficiency. For more information on this preview, see [Announcing public preview of Time-To-Live (TTL) in managed virtual network](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-time-to-live-ttl-in-managed-virtual/ba-p/3552879).|
| **User-Assigned managed identities** | Now you can use user-assigned managed identities in linked services for authentication in Synapse Pipelines and Dataflows. To learn more, see [Credentials in Azure Data Factory and Azure Synapse](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory).|
This section summarizes recent new features and capabilities of [Apache Spark fo
|**Month** | **Feature** | **Learn more**|
|:-- |:-- | :-- |
+| April 2023 | **Delta Lake - Low Shuffle Merge** | [Low Shuffle Merge optimization for Delta tables](spark/low-shuffle-merge-for-apache-spark.md) is now available in Apache Spark 3.2 and 3.3 pools. You can now update a Delta table with advanced conditions using the Delta Lake MERGE command. |
| March 2023 | **Library management new ability: in-line installation** | `%pip` and `%conda` are now available in Apache Spark for Synapse! `%pip` and `%conda` are commands that can be used on Notebooks to install Python packages. For more information, see [Manage session-scoped Python packages through %pip and %conda commands](spark/apache-spark-manage-session-packages.md#manage-session-scoped-python-packages-through-pip-and-conda-commands). |
| March 2023 | **Increasing Azure Synapse Analytics Spark performance up to 77%** | More regions are receiving the [performance increase for Azure Synapse Spark workloads](https://azure.microsoft.com/updates/increasing-azure-synapse-analytics-spark-performance-by-up-to-77/), including most recently Korea Central, Central India, and Australia Southeast. |
| March 2023 | **Azure Synapse Spark Notebook – Unit Testing** | Learn how to [test and create unit test cases for Spark jobs developed using Synapse Notebook](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-spark-notebook-unit-testing/ba-p/3725137). |
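The MERGE semantics mentioned in the Delta Lake Low Shuffle Merge row above can be sketched as a plain-Python upsert (conceptual only; real workloads use Delta Lake's `MERGE INTO` command on Spark 3.2/3.3 pools, not this helper):

```python
def delta_merge(target, updates, key="id"):
    """Simulate MERGE semantics: update matched rows, insert unmatched ones.
    A conceptual stand-in for Delta Lake MERGE, not the real API."""
    merged = {row[key]: dict(row) for row in target}
    for row in updates:
        if row[key] in merged:
            merged[row[key]].update(row)   # WHEN MATCHED THEN UPDATE
        else:
            merged[row[key]] = dict(row)   # WHEN NOT MATCHED THEN INSERT
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"id": 1, "qty": 10}, {"id": 2, "qty": 5}]
updates = [{"id": 2, "qty": 7}, {"id": 3, "qty": 1}]
print(delta_merge(target, updates))
# [{'id': 1, 'qty': 10}, {'id': 2, 'qty': 7}, {'id': 3, 'qty': 1}]
```

The Low Shuffle Merge optimization changes how Delta executes this operation (rewriting fewer unmodified rows), not the result it produces.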
This section summarizes recent new features and capabilities of Azure Synapse An
|**Month** | **Feature** | **Learn more**|
|:-- |:-- | :-- |
+| April 2023 | **Capture changed data from Cosmos DB analytical store (Public Preview)** | Azure Cosmos DB analytical store now supports change data capture (CDC) for Azure Cosmos DB API for NoSQL and Azure Cosmos DB API for MongoDB. For more information, see [Capture Changed Data from your Cosmos DB analytical store](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/capture-changed-data-from-your-cosmos-db-analytical-store/ba-p/3783530) and [DevBlog: Change Data Capture (CDC) with Azure Cosmos DB analytical store](https://devblogs.microsoft.com/cosmosdb/now-in-preview-change-data-capture-cdc-with-azure-cosmos-db-analytical-store/).|
| March 2023 | **Deep dive: Synapse pipelines storage event trigger security** | This Customer Success Engineering blog post is a deep dive into [Azure Synapse pipelines storage event trigger security](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-pipelines-storage-event-trigger-security-deep-dive/ba-p/3778250). ADF and Synapse Pipelines offer a feature that allows pipeline execution to be triggered based on various events, such as storage blob creation or deletion. This can be used by customers to implement event-driven pipeline orchestration.|
| January 2023 | **SQL CDC incremental extract now supports numeric columns** | Enabling incremental [extract from SQL Server CDC in dataflows](../data-factory/connector-sql-server.md?tabs=data-factory#native-change-data-capture) allows you to only process rows that have changed since the last time that pipeline was executed. Supported incremental column types now include date/time and numeric columns. |
| December 2022 | **Express virtual network injection** | Both the standard and express methods to [inject your SSIS Integration Runtime (IR) into a VNet](https://techcommunity.microsoft.com/t5/sql-server-integration-services/vnet-or-no-vnet-secure-data-access-from-ssis-in-azure-data/ba-p/1062056) are generally available now. For more information, see [General Availability of Express Virtual Network injection for SSIS in Azure Data Factory](https://techcommunity.microsoft.com/t5/sql-server-integration-services/general-availability-of-express-virtual-network-injection-for/ba-p/3699993).|
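The incremental-extract idea from the SQL CDC row above - processing only rows changed since the last run - can be sketched as a watermark check (an illustrative helper and column name, not the connector's API):

```python
def incremental_extract(rows, last_watermark, column="modified_at"):
    """Return only rows changed since the last watermark, plus the new watermark.
    Conceptual sketch of CDC-style incremental extraction."""
    changed = [r for r in rows if r[column] > last_watermark]
    new_watermark = max((r[column] for r in changed), default=last_watermark)
    return changed, new_watermark

rows = [
    {"id": 1, "modified_at": 100},
    {"id": 2, "modified_at": 205},
    {"id": 3, "modified_at": 310},
]
changed, wm = incremental_extract(rows, last_watermark=200)
print([r["id"] for r in changed], wm)  # [2, 3] 310
```

The new watermark is persisted between runs so the next execution skips everything already processed; supporting numeric columns means the watermark can be a number as well as a date/time.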
Azure Data Explorer (ADX) is a fast and highly scalable data exploration service
|**Month** | **Feature** | **Learn more**|
|:-- |:-- | :-- |
+| April 2023 | **ARM template to deploy Azure Data Explorer DB with Cosmos DB connection** | An [ARM template is now available to quickly deploy an Azure Data Explorer cluster](/samples/azure/azure-quickstart-templates/kusto-cosmos-db/) with System Assigned Identity, a database, an Azure Cosmos DB account (NoSql), an Azure Cosmos DB database, an Azure Cosmos DB container, and a data connection between the Cosmos DB container and the Kusto database (using the system assigned identity). |
+| April 2023 | **Ingest data from Azure Events Hub to ADX free tier** | Azure Data Explorer now supports integration with Events Hub in ADX free tier. For more information, see [Free Event Hub data analysis with Azure Data Explorer](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/free-event-hub-data-analysis-with-azure-data-explorer/ba-p/3775034). |
| March 2023 | **View cluster history in Kusto Data Explorer** | It is now easier to track the history of queries and commands run on a Kusto cluster using [`.show queries`](/azure/data-explorer/kusto/management/queries) and [`.show commands-and-queries`](/azure/data-explorer/kusto/management/commands-and-queries). |
| March 2023 | **Amazon S3 support in Kusto Web Explorer** | You can now [ingest data from Amazon S3](/azure/data-explorer/kusto/api/connection-strings/storage-connection-strings) seamlessly via the Ingestion Hub in Kusto Web Explorer (KWE). |
| March 2023 | **Plotly visuals support** | Use the [Plotly graphing library](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/plotly-visualizations-in-azure-data-explorer/ba-p/3717768) to create visualizations for [a KQL query using 'render' operator](/azure/data-explorer/kusto/query/renderoperator?pivots=azuredataexplorer) or interactively when [building ADX dashboards](/azure/data-explorer/azure-data-explorer-dashboards). |
update-center Manage Multiple Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-multiple-machines.md
Title: Manage multiple machines in update management center (preview) description: The article details how to use Update management center (preview) in Azure to manage multiple supported machines and view their compliance state in the Azure portal. Previously updated : 04/26/2023 Last updated : 05/02/2023
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.

> [!IMPORTANT]
-> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch mode to *Azure orchestrated with user managed schedules (preview)*. If you fail to update the patch mode, you can experience a disruption in business continuity because the schedules will fail to patch the VMs.[Learn more](prerequsite-for-schedule-patching.md).
-> - To update the patch mode, go to **Update management center (Preview)** home page > **Update Settings**. In **Change update settings**, add the machines and under **Patch orchestration**, select *Azure Managed - Safe Deployment*.
+> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch orchestration to **Customer Managed Schedules (Preview)**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md).
+ This article describes the various features that update management center (Preview) offers to manage the system updates on your machines. Using the update management center (preview), you can:
Instead of performing these actions from a selected Azure VM or Arc-enabled serv
- **Patch orchestration configuration of Azure virtual machines** — all the Azure machines inventoried in the subscription are summarized by each update orchestration method. Values are:
- **Azure orchestrated**—this mode enables automatic VM guest patching for the Azure virtual machine. Subsequent patch installation is orchestrated by Azure.
+ - **Customer Managed Schedules (Preview)**—enables schedule patching on your existing VMs.
+ - **Azure Managed - Safe Deployment**—this mode enables automatic VM guest patching for the Azure virtual machine. Subsequent patch installation is orchestrated by Azure.
- **Image Default**—for Linux machines, it uses the default patching configuration.
- **OS orchestrated**—the OS automatically updates the machine.
- **Manual updates**—you control the application of patches to a machine by applying patches manually inside the machine. In this mode, automatic updates are disabled for Windows OS.
+
+
+
For more information about each orchestration method, see [automatic VM guest patching for Azure VMs](../virtual-machines/automatic-vm-guest-patching.md#patch-orchestration-modes).

- **Update installation status**—by default, the tile shows the status for the last 30 days. Using the **Time** picker, you can choose a different range. The values are:
Update management center (preview) in Azure enables you to browse information ab
The column **Patch Orchestration**, in the machine's patch mode has the following values:
+ * **Customer Managed Schedules (Preview)**—enables schedule patching on your existing VMs. The new patch orchestration option enables the two VM properties - **Patch mode = Azure-orchestrated** and **BypassPlatformSafetyChecksOnUserSchedule = TRUE** - on your behalf after receiving your consent.
+ * **Azure Managed - Safe Deployment**—for a group of virtual machines undergoing an update, the Azure platform orchestrates updates. The VM is set to [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), that is, the patch mode is **AutomaticByPlatform**.
* **Automatic by OS**—the machine is automatically updated by the OS.
- * **Azure orchestrated**—for a group of virtual machines undergoing an update, the Azure platform will orchestrate updates. The VM is set to [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), and for an Azure virtual machine scale set, it's set to [automatic OS image upgrade](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md).
* **Image Default**—for Linux machines, its default patching configuration is used.
* **Manual**—you control the application of patches to a machine by applying patches manually inside the machine. In this mode, automatic updates are disabled for Windows OS.
+
The machine's status—for an Azure VM, it shows its [power state](../virtual-machines/states-billing.md#power-states-and-billing), and for an Arc-enabled server, it shows whether it's connected or not.
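Assuming the property names described in this article, the mapping from each **Patch Orchestration** choice to the underlying VM properties can be sketched as a conceptual lookup (not an Azure SDK call; property names and values follow the text above):

```python
# Hypothetical sketch: how each "Patch orchestration" choice maps to VM properties
# as described in this article.
def vm_properties(patch_orchestration):
    mapping = {
        "Customer Managed Schedules (Preview)": {
            "patchMode": "AutomaticByPlatform",
            "bypassPlatformSafetyChecksOnUserSchedule": True,
        },
        "Azure Managed - Safe Deployment": {
            "patchMode": "AutomaticByPlatform",
            "bypassPlatformSafetyChecksOnUserSchedule": False,
        },
        "Automatic by OS": {"patchMode": "AutomaticByOS"},
        "Image Default": {"patchMode": "ImageDefault"},
        "Manual": {"patchMode": "Manual"},
    }
    return mapping[patch_orchestration]

print(vm_properties("Customer Managed Schedules (Preview)")["bypassPlatformSafetyChecksOnUserSchedule"])  # True
```

The key distinction is that both preview options share the same patch mode; only the bypass flag determines whether customer schedules take effect.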
update-center Manage Update Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-update-settings.md
To configure update settings on your machines on a single VM, follow these steps
- **Hot patch** - You can enable [hot patching](../automanage/automanage-hotpatch.md) for Windows Server Azure Edition Virtual Machines (VMs). Hot patching is a new way to install updates on supported *Windows Server Azure Edition* virtual machines that doesn't require a reboot after installation. You can use update management center (preview) to install other patches by scheduling patch installation or triggering immediate patch deployment. You can enable, disable or reset this setting.
- **Patch orchestration** option provides the following:
- - **Automatic by OS (Windows Automatic Updates)** - When the workload running on the VM doesn't have to meet availability targets, the operating system updates are automatically downloaded and installed. Machines are rebooted as needed.
- - **Azure-orchestrated** - Patch orchestration set to Azure-orchestrated for an Azure VM (not applicable for Arc-enabled server) has two different implications depending on whether customer [schedule](../update-center/scheduled-patching.md#) is attached to it or not.
-
- | Patch orchestration type | Description
- |-|-|
- |Azure-orchestrated with no schedule attached | Machine is enabled for [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). It implies that the available Critical and Security patches are downloaded and applied automatically on the Azure VM. This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.|
- |Azure-orchestrated with schedule attached | Patching will happen according to the schedule and [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) will not take effect on the machine. Patch orchestration set to Azure-orchestrated is a necessary pre-condition for enabling schedules. You cannot enable a machine for custom schedule unless you set Patch orchestration to Azure-orchestrated. |
-
- - Available *Critical* and *Security* patches are downloaded and applied automatically on the Azure VM using [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
+
+ - **Customer Managed Schedules (Preview)**—enables schedule patching on your existing VMs. The new patch orchestration option enables the two VM properties - **Patch mode = Azure-orchestrated** and **BypassPlatformSafetyChecksOnUserSchedule = TRUE** - on your behalf after receiving your consent.
+ - **Azure Managed - Safe Deployment**—for a group of virtual machines undergoing an update, the Azure platform orchestrates updates (not applicable to Arc-enabled servers). The VM is set to [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), that is, the patch mode is **AutomaticByPlatform**. There are different implications depending on whether a customer schedule is attached to it or not. For more information, see the [user scenarios](prerequsite-for-schedule-patching.md#user-scenarios).
+ - Available *Critical* and *Security* patches are downloaded and applied automatically on the Azure VM using [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
+ - **Windows Automatic Updates** (AutomaticByOS) - When the workload running on the VM doesn't have to meet availability targets, the operating system updates are automatically downloaded and installed. Machines are rebooted as needed.
- **Manual updates** - This mode disables Windows automatic updates on VMs. Patches are installed manually or using a different solution.
- **Image Default** - Only supported for Linux Virtual Machines, this mode uses the default patching configuration in the image used to create the VM.
update-center Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md
Title: Scheduling recurring updates in Update management center (preview) description: The article details how to use update management center (preview) in Azure to set update schedules that install recurring updates on your machines. Previously updated : 04/26/2023 Last updated : 05/02/2023
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.

> [!IMPORTANT]
-> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch mode to *Azure orchestrated with user managed schedules (preview)*. If you fail to update the patch mode, you can experience a disruption in business continuity because the schedules will fail to patch the VMs.[Learn more](prerequsite-for-schedule-patching.md).
-> - To update the patch mode, go to **Update management center (Preview)** home page > **Update Settings**. In **Change update settings**, add the machines and under **Patch orchestration**, select *Azure Managed - Safe Deployment*.
+> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch orchestration to **Customer Managed Schedules (Preview)**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md).
+ You can use update management center (preview) in Azure to create and save recurring deployment schedules. You can create a schedule on a daily, weekly or hourly cadence, specify the machines that must be updated as part of the schedule, and the updates to be installed. This schedule will then automatically install the updates as per the created schedule for a single VM and at scale.
Update management center (preview) uses maintenance control schedule instead of
## Prerequisites for scheduled patching

1. See [Prerequisites for Update management center (preview)](./overview.md#prerequisites)
-1. Patch orchestration of the Azure machines should be set to **Azure Orchestrated (Automatic By Platform)**. For Azure Arc-enabled machines, it isn't a requirement.
+1. Patch orchestration of the Azure machines should be set to **Customer Managed Schedules (Preview)**. For more information, see [how to enable schedule patching on existing VMs](prerequsite-for-schedule-patching.md#enable-schedule-patching-on-azure-vms). For Azure Arc-enabled machines, it isn't a requirement.
> [!Note]
- > If you set the patch orchestration mode to Azure orchestrated (AutomaticByPlatform) but don't attach a maintenance configuration to an Azure machine, it is treated as [Automatic Guest patching](../virtual-machines/automatic-vm-guest-patching.md) enabled machine and Azure platform will automatically install updates as per its own schedule.
+ > If you set the patch mode to Azure orchestrated (AutomaticByPlatform) but do not enable the **BypassPlatformSafetyChecksOnUserSchedule** flag and do not attach a maintenance configuration to an Azure machine, it is treated as an [automatic guest patching](../virtual-machines/automatic-vm-guest-patching.md) enabled machine, and the Azure platform will automatically install updates as per its own schedule. [Learn more](./overview.md#prerequisites).
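The note above can be summarized as a small decision sketch (hypothetical helper; the outcome strings are paraphrased from this article, and combinations it doesn't describe are deferred to the prerequisites article):

```python
# Conceptual sketch of the scheduled-patching behavior described above (not an Azure API).
def patching_behavior(patch_mode, bypass_on_user_schedule, has_maintenance_config):
    """Summarize the outcome for an Azure VM given the two properties and
    whether a maintenance configuration (schedule) is attached."""
    if patch_mode == "AutomaticByPlatform" and bypass_on_user_schedule and has_maintenance_config:
        return "patched according to the attached customer schedule"
    if patch_mode == "AutomaticByPlatform" and not bypass_on_user_schedule and not has_maintenance_config:
        return "treated as automatic VM guest patching; Azure patches on its own schedule"
    return "see the prerequisites article for other combinations"

print(patching_behavior("AutomaticByPlatform", True, True))
# patched according to the attached customer schedule
```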
## Schedule recurring updates on single VM
You can create a new Guest OS update maintenance configuration or modify an exis
:::image type="content" source="./media/scheduled-updates/change-update-selection-criteria-of-maintenance-configuration-inline.png" alt-text="Change update selection criteria of Maintenance configuration." lightbox="./media/scheduled-updates/change-update-selection-criteria-of-maintenance-configuration-expanded.png":::
-## Grouping using policy
+## Onboarding to Schedule using Policy
The update management center (preview) allows you to target a group of Azure or non-Azure VMs for update deployment via Azure Policy. Grouping using policy keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags or regions to define the scope, and use this feature for the built-in policies, which you can customize as per your use-case.

> [!NOTE]
-> This policy also ensures that the patch orchestration property for Azure machines is set to **Azure-orchestrated (Automatic by Platform)** as it is a prerequisite for scheduled patching.
+> This policy also ensures that the patch orchestration property for Azure machines is set to **Customer Managed Schedules (Preview)** as it is a prerequisite for scheduled patching.
### Assign a policy
update-center Updates Maintenance Schedules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/updates-maintenance-schedules.md
Title: Updates and maintenance in update management center (preview). description: The article describes the updates and maintenance options available in Update management center (preview). Previously updated : 04/26/2023 Last updated : 05/02/2023
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.

> [!IMPORTANT]
-> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch mode to *Azure orchestrated with user managed schedules (preview)*. If you fail to update the patch mode, you can experience a disruption in business continuity because the schedules will fail to patch the VMs.[Learn more](prerequsite-for-schedule-patching.md).
-> - To update the patch mode, go to **Update management center (Preview)** home page > **Update Settings**. In **Change update settings**, add the machines and under **Patch orchestration**, select *Azure Managed - Safe Deployment*.
+> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch orchestration to **Customer Managed Schedules (Preview)**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md).
This article provides an overview of the various update and maintenance options available by update management center (preview).
Update management center (preview) uses maintenance control schedule instead of
Start using [scheduled patching](scheduled-patching.md) to create and save recurring deployment schedules.

> [!NOTE]
-> Patch orchestration set to Azure-orchestrated is a pre-condition to enable schedule patching on Azure VM. For more information, see the [list of prerequisites](../update-center/scheduled-patching.md#prerequisites-for-scheduled-patching)
+> The patch orchestration property for Azure machines must be set to **Customer Managed Schedules (Preview)** as it is a prerequisite for scheduled patching. For more information, see the [list of prerequisites](../update-center/scheduled-patching.md#prerequisites-for-scheduled-patching).
+ ## Automatic VM Guest patching in Azure

This mode of patching lets the Azure platform automatically download and install all the security and critical updates on your machines every month and apply them on your machines following the availability-first principles. For more information, see [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md).
-This VM property can be enabled by setting the value of Patch orchestration update setting to **Azure Orchestrated/Automatic by Platform** value.
+On the **Update management center** home page, go to the **Update Settings** blade and set **Patch orchestration** to **Azure Managed - Safe Deployment** to enable this VM property.
## Windows automatic updates
update-center Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-new.md
Last updated 03/03/2023
### New prerequisite for scheduled patching
-A new patch mode - **Azure orchestrated with user managed schedules (Preview)** is introduced as a prerequisite to enable scheduled patching on Azure VMs. The new patch enables the *Azure-orchestrated using Automatic guest patching* and *BypassPlatformSafteyChecksOnUserSchedule* VM properties on your behalf after receiving the consent. [Learn more](prerequsite-for-schedule-patching.md).
+A new patch orchestration - **Customer Managed Schedules (Preview)** is introduced as a prerequisite to enable scheduled patching on Azure VMs. The new patch orchestration enables the *Azure-orchestrated* and *BypassPlatformSafetyChecksOnUserSchedule* VM properties on your behalf after receiving your consent. [Learn more](prerequsite-for-schedule-patching.md).
> [!IMPORTANT]
-> For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch mode to *Azure orchestrated with user managed schedules (preview)*. If you fail to update the patch mode, you can experience a disruption in business continuity because the schedules will fail to patch the VMs.
+> For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch orchestration to **Customer Managed Schedules (Preview)**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules will fail to patch the VMs.
## November 2022
virtual-machines Disks Pools Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-pools-deploy.md
- Title: Deploy an Azure disk pool (preview)
-description: Learn how to deploy an Azure disk pool.
--- Previously updated : 02/28/2023----
-# Deploy an Azure disk pool (preview)
-
-> [!IMPORTANT]
-> Disk pools are being retired soon. If you're looking for an alternative solution, see either [Azure Elastic SAN (preview)](../storage/elastic-san/elastic-san-introduction.md) or [Azure NetApp Files](../aks/azure-netapp-files.md).
-
-This article covers how to deploy and configure an Azure disk pool (preview). Before deploying a disk pool, read the [conceptual](disks-pools.md) and [planning](disks-pools-planning.md) articles.
-
-For a disk pool to work correctly, you must complete the following steps:
-- Register your subscription for the preview.
-- Delegate a subnet to your disk pool.
-- Assign the disk pool resource provider role-based access control (RBAC) permissions for managing your disk resources.
-- Create the disk pool.
- - Add disks to your disk pool.
--
-## Prerequisites
-
-To successfully deploy a disk pool, you must have:
-
-- A set of managed disks you want to add to a disk pool.
-- A virtual network with a dedicated subnet deployed for your disk pool.
- - Outbound ports 53, 443, and 5671 must be open.
 - Ensure that your network settings don't block any of your disk pool's required outbound dependencies. You can use either the [Azure PowerShell module](/powershell/module/az.diskpool/get-azdiskpooloutboundnetworkdependencyendpoint) or [Azure CLI](/cli/azure/disk-pool#az-disk-pool-list-outbound-network-dependency-endpoint) to get the complete list of all outbound dependencies.
-
-If you're going to use the Azure PowerShell module, install [version 6.1.0 or newer](/powershell/module/az.diskpool/).
-
-If you're going to use the Azure CLI, install [the latest version](/cli/azure/disk-pool).
-
-## Register your subscription for the preview
-
-Register your subscription to the **Microsoft.StoragePool** provider, to be able to create and use disk pools.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. On the Azure portal menu, search for and select **Subscriptions**.
-1. Select the subscription you want to use for disk pools.
-1. On the left menu, under **Settings**, select **Resource providers**.
-1. Find the resource provider **Microsoft.StoragePool** and select **Register**.
-
-Once your subscription has been registered, you can deploy a disk pool.
-
-## Delegate subnet permission
-
-For your disk pool to work with your client machines, you must delegate a subnet to your Azure disk pool. When creating a disk pool, you specify a virtual network and the delegated subnet. You can either create a new subnet or use an existing one, and delegate it to the **Microsoft.StoragePool/diskPools** resource provider.
-
-1. Go to the virtual networks pane in the Azure portal and select the virtual network to use for the disk pool.
-1. Select **Subnets** from the virtual network pane and select **+Subnet**.
-1. Create a new subnet by completing the following required fields in the **Add subnet** pane:
- - Subnet delegation: Select Microsoft.StoragePool/diskPools
-
-For more information on subnet delegation, see [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md).
-
-## Assign StoragePool resource provider permissions
-
-For a disk to be able to be used in a disk pool, it must meet the following requirements:
-
-- The **StoragePool** resource provider must have been assigned an RBAC role that contains **Read** and **Write** permissions for every managed disk in the disk pool.
-- Must be either a premium SSD, standard SSD, or an ultra disk in the same availability zone as the disk pool.
- - For ultra disks, it must have a disk sector size of 512 bytes.
-- Disk pools can't be configured to contain both premium/standard SSDs and ultra disks. A disk pool configured for ultra disks can only contain ultra disks. Likewise, a disk pool configured for premium or standard SSDs can only contain premium and standard SSDs.
-- Must be a shared disk with a maxShares value of two or greater.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Search for and select either the resource group that contains the disks, or each disk itself.
-1. Select **Access control (IAM)**.
-1. Select **Add role assignment (Preview)**, and select **Disk Pool Operator** in the role list.
-
- If you prefer, you may create your own custom role instead. A custom role for disk pools must have the following RBAC permissions to function: **Microsoft.Compute/disks/write** and **Microsoft.Compute/disks/read**.
-
-1. For **Assign access to**, select **User, group, or service principal**.
-1. Select **+ Select members** and then search for **StoragePool Resource Provider**, select it, and save.
-
-## Create a disk pool
-
-For optimal performance, deploy the disk pool in the same availability zone as your clients. If you're deploying a disk pool for an Azure VMware Solution cloud and need guidance on identifying the availability zone, fill in this [form](https://aka.ms/DiskPoolCollocate).
-
-# [Portal](#tab/azure-portal)
-
-1. Search for and select **Disk pool**.
-1. Select **+Add** to create a new disk pool.
-1. Fill in the requested details, selecting the same region and availability zone as the clients that will use the disk pool.
-1. Select the subnet that has been delegated to the **StoragePool** resource provider, and its associated virtual network.
-1. Select **Next** to add disks to your disk pool.
-
- :::image type="content" source="media/disks-pools-deploy/create-a-disk-pool.png" alt-text="Screenshot of the basic pane for create a disk pool.":::
-
-### Add disks
-
-#### Prerequisites
-
-To add a disk, it must meet the following requirements:
-
-- Must be either a premium SSD, standard SSD, or an ultra disk in the same availability zone as the disk pool.
- - Currently, you can only add premium SSDs and Standard SSDs in the portal. Ultra disks must be added with either the Azure PowerShell module or the Azure CLI.
- - For ultra disks, it must have a disk sector size of 512 bytes.
-- Must be a shared disk with a maxShares value of two or greater.
-- Disk pools can't be configured to contain both premium/standard SSDs and ultra disks. A disk pool configured for ultra disks can only contain ultra disks. Likewise, a disk pool configured for premium or standard SSDs can only contain premium and standard SSDs.
-- You must grant RBAC permissions to the disk pool resource provider to manage the disks you plan to add.
-
-If your disk meets these requirements, you can add it to a disk pool by selecting **+Add disk** in the disk pool pane.
--
-### Enable iSCSI
-
-1. Select the **iSCSI** pane.
-1. Select **Enable iSCSI**.
-1. Enter the name of the iSCSI target; the iSCSI target IQN is generated based on this name.
- - The ACL mode is set to **Dynamic** by default. To use your disk pool as a storage solution for Azure VMware Solution, the ACL mode must be set to **Dynamic**.
-1. Select **Review + create**.
-
- :::image type="content" source="media/disks-pools-deploy/create-a-disk-pool-iscsi-blade.png" alt-text="Screenshot of the iscsi pane for create a disk pool.":::
-
-# [PowerShell](#tab/azure-powershell)
-
-The provided script performs the following:
-- Installs the necessary module for creating and using disk pools.
-- Creates a disk and assigns RBAC permissions to it. If you already did this, you can comment out these sections of the script.
-- Creates a disk pool and adds the disk to it.
-- Creates and enables an iSCSI target.
-
-Replace the variables in this script with your own variables before running the script. You'll also need to modify it to use an existing ultra disk if you've filled out the ultra disk form.
-
-```azurepowershell
-# Install the required module for Disk Pool
-Install-Module -Name Az.DiskPool -RequiredVersion 0.3.0 -Repository PSGallery
-
-# Sign in to the Azure account and setup the variables
-$subscriptionID = "<yourSubID>"
-Set-AzContext -Subscription $subscriptionID
-$resourceGroupName= "<yourResourceGroupName>"
-$location = "<desiredRegion>"
-$diskName = "<desiredDiskName>"
-$availabilityZone = "<desiredAvailabilityZone>"
-$subnetId='<yourSubnetID>'
-$diskPoolName = "<desiredDiskPoolName>"
-$iscsiTargetName = "<desirediSCSITargetName>" # This will be used to generate the iSCSI target IQN name
-$lunName = "<desiredLunName>"
-
-# You can skip this step if you have already created the disk and assigned proper RBAC permission to the resource group the disk is deployed to
-$diskconfig = New-AzDiskConfig -Location $location -DiskSizeGB 1024 -AccountType Premium_LRS -CreateOption Empty -zone $availabilityZone -MaxSharesCount 2
-$disk = New-AzDisk -ResourceGroupName $resourceGroupName -DiskName $diskName -Disk $diskconfig
-$diskId = $disk.Id
-$scopeDef = "/subscriptions/" + $subscriptionId + "/resourceGroups/" + $resourceGroupName
-$rpId = (Get-AzADServicePrincipal -SearchString "StoragePool Resource Provider").id
-
-New-AzRoleAssignment -ObjectId $rpId -RoleDefinitionName "Virtual Machine Contributor" -Scope $scopeDef
-
-# Create a Disk Pool
-# If you want to create a disk pool configured for ultra disks, add -AdditionalCapability "DiskPool.Disk.Sku.UltraSSD_LRS" to the command
-New-AzDiskPool -Name $diskPoolName -ResourceGroupName $resourceGroupName -Location $location -SubnetId $subnetId -AvailabilityZone $availabilityZone -SkuName Standard_S1
-$diskpool = Get-AzDiskPool -ResourceGroupName $resourceGroupName -Name $DiskPoolName
-
-# Add disks to the Disk Pool
-Update-AzDiskPool -ResourceGroupName $resourceGroupName -Name $diskPoolName -DiskId $diskId
-$lun = New-AzDiskPoolIscsiLunObject -ManagedDiskAzureResourceId $diskId -Name $lunName
-
-# Create an iSCSI Target and expose the disks as iSCSI LUNs
-New-AzDiskPoolIscsiTarget -DiskPoolName $diskPoolName -Name $iscsiTargetName -ResourceGroupName $resourceGroupName -Lun $lun -AclMode Dynamic
-
-Write-Output "Print details of the iSCSI target exposed on Disk Pool"
-
-Get-AzDiskPoolIscsiTarget -name $iscsiTargetName -DiskPoolName $diskPoolName -ResourceGroupName $resourceGroupName | fl
-```
--
-# [Azure CLI](#tab/azure-cli)
-
-The provided script performs the following:
-- Installs the necessary extension for creating and using disk pools.
-- Creates a disk and assigns RBAC permissions to it. If you already did this, you can comment out these sections of the script.
-- Creates a disk pool and adds the disk to it.
-- Creates and enables an iSCSI target.
-
-Replace the variables in this script with your own variables before running the script. You'll also need to modify it to use an existing ultra disk if you've filled out the ultra disk form.
-
-```azurecli
-# Add disk pool CLI extension
-az extension add -n diskpool
-
-#az extension add -s https://azcliprod.blob.core.windows.net/cli-extensions/diskpool-0.2.0-py3-none-any.whl
-
-#Select subscription
-az account set --subscription "<yourSubscription>"
-
-##Initialize input parameters
-resourceGroupName='<yourRGName>'
-location='<desiredRegion>'
-zone=<desiredZone>
-subnetId='<yourSubnetID>'
-diskName='<desiredDiskName>'
-diskPoolName='<desiredDiskPoolName>'
-targetName='<desirediSCSITargetName>'
-lunName='<desiredLunName>'
-
-#You can skip this step if you have already created the disk and assigned permission in the prerequisite step. Below is an example for premium disks.
-az disk create --name $diskName --resource-group $resourceGroupName --zone $zone --location $location --sku Premium-LRS --max-shares 2 --size-gb 1024
-
-#You can deploy all your disks into one resource group and assign StoragePool Resource Provider permission to the group
-storagePoolObjectId=$(az ad sp list --filter "displayName eq 'StoragePool Resource Provider'" --query "[0].objectId" -o json)
-storagePoolObjectId="${storagePoolObjectId%\"}"
-storagePoolObjectId="${storagePoolObjectId#\"}"
-
-az role assignment create --assignee-object-id $storagePoolObjectId --role "Virtual Machine Contributor" --scope /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName --resource-group $resourceGroupName
-
-#Create a disk pool
-#To create a disk pool configured for ultra disks, add --additional-capabilities "DiskPool.Disk.Sku.UltraSSD_LRS" to your command
-az disk-pool create --name $diskPoolName \
- --resource-group $resourceGroupName \
- --location $location \
- --availability-zones $zone \
- --subnet-id $subnetId \
- --sku name="Standard_S1"
-
-#Initialize an iSCSI target. You can have 1 iSCSI target per disk pool
-az disk-pool iscsi-target create --name $targetName \
- --disk-pool-name $diskPoolName \
- --resource-group $resourceGroupName \
- --acl-mode Dynamic
-
-#Add the disk to disk pool
-diskId=$(az disk show --name $diskName --resource-group $resourceGroupName --query "id" -o json)
-diskId="${diskId%\"}"
-diskId="${diskId#\"}"
-
-az disk-pool update --name $diskPoolName --resource-group $resourceGroupName --disks $diskId
-
-#Expose disks added in the Disk Pool as iSCSI Lun
-az disk-pool iscsi-target update --name $targetName \
- --disk-pool-name $diskPoolName \
- --resource-group $resourceGroupName \
- --luns name=$lunName managed-disk-azure-resource-id=$diskId
-```
--
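The sed and parameter-expansion steps in the script above exist only to strip the double quotes that `az ... --query ... -o json` leaves around a single string value. A minimal sketch of that idiom follows, using a hypothetical quoted value; in practice `az ... -o tsv` returns the bare string and avoids the trimming entirely:

```shell
#!/usr/bin/env bash
# Hypothetical quoted value, as an `az ... --query ... -o json` call would return it
diskId='"/subscriptions/0000/resourceGroups/rg/providers/Microsoft.Compute/disks/disk1"'

# Trim the trailing, then the leading, double quote with parameter expansion
diskId="${diskId%\"}"
diskId="${diskId#\"}"

echo "$diskId"   # /subscriptions/0000/resourceGroups/rg/providers/Microsoft.Compute/disks/disk1
```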
-## Next steps
-
-- If you encounter any issues deploying a disk pool, see [Troubleshoot Azure disk pools (preview)](disks-pools-troubleshoot.md).
-- [Attach disk pools to Azure VMware Solution hosts (Preview)](../azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md).
-- [Manage a disk pool](disks-pools-manage.md).
virtual-machines Disks Pools Deprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-pools-deprovision.md
- Title: Deprovision an Azure disk pool (preview)
-description: Learn how to deprovision, stop, and delete an Azure disk pool.
- Previously updated : 02/28/2023-------
-# Deprovision an Azure disk pool (preview)
-
-> [!IMPORTANT]
-> Disk pools are being retired soon. If you're looking for an alternative solution, see either [Azure Elastic SAN (preview)](../storage/elastic-san/elastic-san-introduction.md) or [Azure NetApp Files](../aks/azure-netapp-files.md).
-
-This article covers the deletion process for an Azure disk pool (preview) and how to disable iSCSI support.
-
-## Stop a disk pool
-
-You can stop a disk pool to save costs and preserve all configurations. When a disk pool is stopped, you can no longer connect to it over iSCSI. The managed resources deployed to support the disk pool will not be deleted. You must disconnect all clients with iSCSI connections to the disk pool first before stopping a disk pool. You can start a disk pool at any time. This will reactivate the iSCSI target exposed on this disk pool.
-# [Portal](#tab/azure-portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Navigate to your disk pool, and select **Stop**.
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-Stop-AzDiskPool -Name '<yourDiskPool>' -ResourceGroupName '<yourResourceGroup>'
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az disk-pool stop --name "<yourDiskPool>" --resource-group "<yourResourceGroup>"
-```
--
-## Disable iSCSI support
-
-If you disable iSCSI support on a disk pool, you can no longer connect to a disk pool.
-
-When you first enable iSCSI support on a disk pool, an iSCSI target is created as the endpoint for the iSCSI connection. You can disable iSCSI support on the disk pool by deleting the iSCSI target. Each disk pool can only have one iSCSI target configured.
-
-You can re-enable iSCSI support on an existing disk pool. iSCSI support cannot be disabled on the disk pool if there are outstanding iSCSI connections to the disk pool.
-
-# [Portal](#tab/azure-portal)
-
-1. Search for **Disk pool** and select your disk pool.
-1. Select **iSCSI** under **Settings**.
-1. Clear the **Enable iSCSI** checkbox and select **Save**.
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-Remove-AzDiskPoolIscsiTarget -DiskPoolName "<yourDiskpoolName>" -Name "<youriSCSITargetName>" -ResourceGroupName "<yourResourceGroup>"
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az disk-pool iscsi-target delete --disk-pool-name "<yourDiskPool>" --name "<yourIscsiTarget>" --resource-group "<yourResourceGroup>"
-```
--
-## Delete a disk pool
-
-When you delete a disk pool, all the resources in the managed resource group are also deleted. If there are outstanding iSCSI connections to the disk pool, you cannot delete the disk pool. You must disconnect all clients with iSCSI connections to the disk pool first. Disks that have been added to the disk pool will not be deleted.
-
-# [Portal](#tab/azure-portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Search for **Disk pool** and select it, then select the disk pool you want to delete.
-1. Select **Delete** at the top of the pane.
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-Stop-AzDiskPool -Name "<yourDiskPoolName>" -ResourceGroupName "<yourResourceGroup>"
-
-Remove-AzDiskPool -Name "<yourDiskPoolName>" -ResourceGroupName "<yourResourceGroup>"
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az disk-pool delete --name "<yourDiskPool>" --resource-group "<yourResourceGroup>"
-```
---
-## Next steps
-
-Learn about [Azure managed disks](managed-disks-overview.md).
virtual-machines Disks Pools Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-pools-manage.md
- Title: Manage an Azure disk pool (preview)
-description: Learn how to add managed disks to an Azure disk pool or disable iSCSI support on a disk.
--- Previously updated : 02/28/2023-----
-# Manage an Azure disk pool (preview)
-
-> [!IMPORTANT]
-> Disk pools are being retired soon. If you're looking for an alternative solution, see either [Azure Elastic SAN (preview)](../storage/elastic-san/elastic-san-introduction.md) or [Azure NetApp Files](../aks/azure-netapp-files.md).
-
-This article covers how to add a managed disk to an Azure disk pool (preview) and how to disable iSCSI support on a disk that has been added to a disk pool.
-
-## Add a disk to a pool
-
-Your disk must meet the following requirements in order to be added to the disk pool:
-- Must be either a premium SSD, standard SSD, or an ultra disk in the same region and availability zone as the disk pool.
- - Ultra disks must have a disk sector size of 512 bytes.
-- Must be a shared disk, with a maxShares value of two or greater.
-- You must [provide the StoragePool resource provider RBAC permissions to the disks that will be added to the disk pool](disks-pools-deploy.md#assign-storagepool-resource-provider-permissions).
-
-# [Portal](#tab/azure-portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Navigate to your disk pool, and select **Disks** under **Settings**.
-1. Select **Attach existing disk** and select your disks.
-1. When you have chosen all the disks you'd like to attach, select **Save**.
-
- :::image type="content" source="media/disk-pools-manage/manage-disk-pool-add.png" alt-text="Screenshot of the disks blade for your disk pool.":::
-
- Now that you've attached your disks, you must enable their LUNs.
-
-1. Select **iSCSI** under **Settings**.
-1. Select **Add LUN** under **Disks enabled for iSCSI**.
-1. Select the disk you attached earlier.
-1. Select **Save**.
-
- :::image type="content" source="media/disk-pools-manage/enable-disk-luns.png" alt-text="Screenshot of iSCSI blade, disk luns added and enabled.":::
-
-Now that you've attached your disk and enabled the LUN, you must create and attach it as an iSCSI datastore to your Azure VMware Solution private cloud. See [Attach the iSCSI LUN](../azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md#attach-the-iscsi-lun) for details.
-
-# [PowerShell](#tab/azure-powershell)
-
-### Prerequisites
-
-Install [version 6.1.0 or newer](/powershell/module/az.diskpool/) of the Azure PowerShell module.
-
-Install the disk pool module using the following command:
-
-```azurepowershell
-Install-Module -Name Az.DiskPool -RequiredVersion 0.3.0 -Repository PSGallery
-```
-
-### Add a disk to the disk pool
-
-The following script adds an additional disk to the disk pool and exposes it over iSCSI. It keeps the existing disks in the disk pool without any change.
-
-```azurepowershell
-#Initialize input parameters
-$resourceGroupName ="<yourResourceGroupName>"
-$diskPoolName = "<yourDiskPoolName>"
-$iscsiTargetName = "<youriSCSITargetName>"
-$diskName ="<yourDiskName>" #Provide the name of the disk you want to add
-$lunName ='<LunName>' #Provide the Lun name of the added disk
-$diskIds = @()
-
-#Add the disk to disk pool
-$DiskPool = Get-AzDiskPool -Name $diskPoolName -ResourceGroupName $resourceGroupName
-$DiskPoolDiskIDs = $DiskPool.Disk.Id
-foreach ($Id in $DiskPoolDiskIDs)
-{
-$diskIds += ($Id)
-}
-
-$disk = Get-AzDisk -ResourceGroupName $resourceGroupName -DiskName $diskName
-$diskIds += ,($disk.Id)
-Update-AzDiskPool -ResourceGroupName $resourceGroupName -Name $diskPoolName -DiskId $diskIds
-
-#Get the existing iSCSI LUNs and add the new disk
-$target = Get-AzDiskPoolIscsiTarget -name $iscsiTargetName -DiskPoolName $diskPoolName -ResourceGroupName $resourceGroupName
-$existingLuns = $target.Lun
-$luns = @()
-foreach ($lun in $existingLuns)
-{
-$tmpLunName = $lun.Name
-$tmpId = $lun.ManagedDiskAzureResourceId
-$tmplun = New-AzDiskPoolIscsiLunObject -ManagedDiskAzureResourceId $tmpId -Name $tmpLunName
-$luns += ,($tmplun)
-}
-
-$newlun = New-AzDiskPoolIscsiLunObject -ManagedDiskAzureResourceId $disk.Id -Name $lunName
-$luns += ,($newlun)
-Update-AzDiskPoolIscsiTarget -Name $iscsiTargetName -DiskPoolName $diskPoolName -ResourceGroupName $resourceGroupName -Lun $luns
-```
-
-Now that you've attached your disk and enabled the LUN, you must create and attach it as an iSCSI datastore to your Azure VMware Solution private cloud. See [Attach the iSCSI LUN](../azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md#attach-the-iscsi-lun) for details.
-
-# [Azure CLI](#tab/azure-cli)
-
-### Prerequisites
-
-Install [the latest version](/cli/azure/disk-pool) of the Azure CLI.
-
-If you haven't already, install the disk pool extension using the following command:
-
-```azurecli
-az extension add -n diskpool
-```
-
-### Add a disk to the disk pool - CLI
-
-The following script adds an additional disk to the disk pool and exposes it over iSCSI. It keeps the existing disks in the disk pool without any change.
-
-```azurecli
-# Add a disk to a disk pool
-
-# Initialize parameters
-resourceGroupName="<yourResourceGroupName>"
-diskPoolName="<yourDiskPoolName>"
-iscsiTargetName="<youriSCSITargetName>"
-diskName="<yourDiskName>"
-lunName="<LunName>"
-
-diskPoolUpdateArgs=("$@")
-diskPoolUpdateArgs+=(--resource-group $resourceGroupName --name $diskPoolName)
-
-diskIds=$(echo $(az disk-pool show --name $diskPoolName --resource-group $resourceGroupName --query disks[].id -o json) | sed -e 's/\[ //g' -e 's/\ ]//g' -e 's/\,//g')
-for disk in $diskIds; do
-    diskPoolUpdateArgs+=(--disks $(echo $disk | sed 's/"//g'))
-done
-
-diskId=$(az disk show --resource-group $resourceGroupName --name $diskName --query id | sed 's/"//g')
-diskPoolUpdateArgs+=(--disks $diskId)
-
-az disk-pool update "${diskPoolUpdateArgs[@]}"
-
-# Get existing iSCSI LUNs and expose added disk as a new LUN
-targetUpdateArgs=("$@")
-targetUpdateArgs+=(--resource-group $resourceGroupName --disk-pool-name $diskPoolName --name $iscsiTargetName)
-
-luns=$(az disk-pool iscsi-target show --name $iscsiTargetName --disk-pool-name $diskPoolName --resource-group $resourceGroupName --query luns)
-lunCounts=$(echo $luns | jq length)
-
-for (( i=0; i < $lunCounts; i++ )); do
-    tmpLunName=$(echo $luns | jq .[$i].name | sed 's/"//g')
-    tmpLunId=$(echo $luns | jq .[$i].managedDiskAzureResourceId | sed 's/"//g')
-    targetUpdateArgs+=(--luns name=$tmpLunName managed-disk-azure-resource-id=$tmpLunId)
-done
-
-targetUpdateArgs+=(--luns name=$lunName managed-disk-azure-resource-id=$diskId)
-
-az disk-pool iscsi-target update "${targetUpdateArgs[@]}"
-```
-
-Now that you've attached your disk and enabled the LUN, you must create and attach it as an iSCSI datastore to your Azure VMware Solution private cloud. See [Attach the iSCSI LUN](../azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md#attach-the-iscsi-lun) for details.
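The CLI script above re-reads every existing LUN and passes it again because `az disk-pool iscsi-target update` replaces the whole LUN list rather than appending to it. The bash argument-array pattern it relies on can be sketched without `az` or `jq`, using hypothetical LUN names and disk IDs:

```shell
#!/usr/bin/env bash
# Hypothetical existing LUNs: parallel arrays of names and managed-disk IDs
lunNames=("lun0" "lun1")
lunIds=("/subscriptions/0000/disks/disk0" "/subscriptions/0000/disks/disk1")

# Rebuild the full --luns argument list from the existing LUNs, then append
# the new one, mirroring how the script feeds "${targetUpdateArgs[@]}" to az
targetUpdateArgs=()
for (( i=0; i < ${#lunNames[@]}; i++ )); do
    targetUpdateArgs+=(--luns "name=${lunNames[$i]}" "managed-disk-azure-resource-id=${lunIds[$i]}")
done
targetUpdateArgs+=(--luns "name=lun2" "managed-disk-azure-resource-id=/subscriptions/0000/disks/disk2")

# The array now holds three --luns triples, ready to splice into the az call
printf '%s\n' "${targetUpdateArgs[@]}"
```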
---
-## Disable iSCSI on a disk and remove it from the pool
-
-Before you disable iSCSI support on a disk, confirm there are no outstanding iSCSI connections to the iSCSI LUN the disk is exposed as. When a disk is removed from the disk pool, it isn't automatically deleted. This prevents data loss, but you're still billed for storing the data. If you don't need the data stored on a disk, you can manually delete the disk. This deletes the disk and all data stored on it, and prevents further charges.
-
-# [Portal](#tab/azure-portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Navigate to your disk pool and select **iSCSI** under **Settings**.
-1. Under **Disks enabled for iSCSI** select the disks you'd like to remove and select **Remove LUN**.
-1. Select **Save** and wait for the operation to complete.
-
- :::image type="content" source="media/disk-pools-manage/remove-disk-lun.png" alt-text="Screenshot of the disk pool iSCSI blade, removing disk LUNs.":::
-
- Now that you've disabled the LUN, you can remove your disks from the disk pool.
-
-1. Select **Disks** under **Settings**.
-1. Select **Remove disk from disk pool** and select your disks.
-1. Select **Save**.
-
-When the operation completes, your disk will have been completely removed from the disk pool.
--
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-#Initialize input parameters
-$resourceGroupName ="<yourResourceGroupName>"
-$diskPoolName = "<yourDiskPoolName>"
-$iscsiTargetName = "<youriSCSITargetName>"
-$diskName ="<NameOfDiskYouWantToRemove>" #Provide the name of the disk you want to remove
-$lunName ='<LunForDiskYouWantToRemove>' #Provide the Lun name of the disk you want to remove
-$diskIds = @()
-
-#Get the existing iSCSI LUNs and remove it from iSCS target
-$target = Get-AzDiskPoolIscsiTarget -name $iscsiTargetName -DiskPoolName $diskPoolName -ResourceGroupName $resourceGroupName
-$existingLuns = $target.Lun
-$luns = @()
-foreach ($lun in $existingLuns)
-{
-if ($lun.Name -notlike $lunName)
-{
-$tmpLunName = $lun.Name
-$tmpId = $lun.ManagedDiskAzureResourceId
-$tmplun = New-AzDiskPoolIscsiLunObject -ManagedDiskAzureResourceId $tmpId -Name $tmpLunName
-$luns += ,($tmplun)
-}
-}
-
-Update-AzDiskPoolIscsiTarget -Name $iscsiTargetName -DiskPoolName $diskPoolName -ResourceGroupName $resourceGroupName -Lun $luns
-
-#Remove the disk from disk pool
-$disk = Get-AzDisk -ResourceGroupName $resourceGroupName -DiskName $diskName
-$DiskPool = Get-AzDiskPool -Name $diskPoolName -ResourceGroupName $resourceGroupName
-$DiskPoolDiskIDs = $DiskPool.Disk.Id
-foreach ($Id in $DiskPoolDiskIDs)
-{
-if ($Id -notlike $disk.Id)
-{
-$diskIds += ($Id)
-}
-}
-
-Update-AzDiskPool -ResourceGroupName $resourceGroupName -Name $diskPoolName -DiskId $diskIds
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-# Disable iSCSI on a disk and remove it from the pool
-
-# Initialize parameters
-resourceGroupName="<yourResourceGroupName>"
-diskPoolName="<yourDiskPoolName>"
-iscsiTargetName="<youriSCSITargetName>"
-diskName="<yourDiskName>"
-lunName="<LunName>"
-
-# Get existing iSCSI LUNs and remove it from iSCSI target
-targetUpdateArgs=("$@")
-targetUpdateArgs+=(--resource-group $resourceGroupName --disk-pool-name $diskPoolName --name $iscsiTargetName)
-
-luns=$(az disk-pool iscsi-target show --name $iscsiTargetName --disk-pool-name $diskPoolName --resource-group $resourceGroupName --query luns)
-lunCounts=$(echo $luns | jq length)
-
-for (( i=0; i < $lunCounts; i++ )); do
-    tmpLunName=$(echo $luns | jq .[$i].name | sed 's/"//g')
-    if [ $tmpLunName != $lunName ]; then
-        tmpLunId=$(echo $luns | jq .[$i].managedDiskAzureResourceId | sed 's/"//g')
-        targetUpdateArgs+=(--luns name=$tmpLunName managed-disk-azure-resource-id=$tmpLunId)
-    fi
-done
-
-az disk-pool iscsi-target update "${targetUpdateArgs[@]}"
-
-# Remove disk from pool
-diskId=$(az disk show --resource-group $resourceGroupName --name $diskName --query id | sed 's/"//g')
-
-diskPoolUpdateArgs=("$@")
-diskPoolUpdateArgs+=(--resource-group $resourceGroupName --name $diskPoolName)
-
-diskIds=$(az disk-pool show --name $diskPoolName --resource-group $resourceGroupName --query disks[].id -o json)
-diskLength=$(echo $diskIds | jq length)
-
-for (( i=0; i < $diskLength; i++ )); do
-    tmpDiskId=$(echo $diskIds | jq .[$i] | sed 's/"//g')
-
-    if [ $tmpDiskId != $diskId ]; then
-        diskPoolUpdateArgs+=(--disks $tmpDiskId)
-    fi
-done
-
-az disk-pool update "${diskPoolUpdateArgs[@]}"
-```
--
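The removal script above works by rebuilding the `--disks` argument list with every disk ID except the one being removed; `az disk-pool update` then replaces the pool's disk set with that list. The filtering step can be sketched in plain bash with hypothetical disk IDs:

```shell
#!/usr/bin/env bash
# Hypothetical disk IDs currently in the pool, and the one to remove
diskIds=("/subs/0000/disks/disk0" "/subs/0000/disks/disk1" "/subs/0000/disks/disk2")
removeId="/subs/0000/disks/disk1"

# Keep a --disks argument for every disk except the one being removed
diskPoolUpdateArgs=()
for tmpDiskId in "${diskIds[@]}"; do
    if [ "$tmpDiskId" != "$removeId" ]; then
        diskPoolUpdateArgs+=(--disks "$tmpDiskId")
    fi
done

# disk1 is absent; the array is ready to splice into `az disk-pool update`
printf '%s\n' "${diskPoolUpdateArgs[@]}"
```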
-## Next steps
-
-- To learn how to move a disk pool to another subscription, see [Move a disk pool to a different subscription](disks-pools-move-resource.md).
-- To learn how to deprovision a disk pool, see [Deprovision an Azure disk pool](disks-pools-deprovision.md).
virtual-machines Disks Pools Move Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-pools-move-resource.md
- Title: Move an Azure disk pool (preview) to a different subscription
-description: Learn how to move an Azure disk pool to a different subscription.
--- Previously updated : 02/28/2023-----
-# Move a disk pool (preview) to a different subscription
-
-> [!IMPORTANT]
-> Disk pools are being retired soon. If you're looking for an alternative solution, see either [Azure Elastic SAN (preview)](../storage/elastic-san/elastic-san-introduction.md) or [Azure NetApp Files](../aks/azure-netapp-files.md).
-
-Moving an Azure disk pool (preview) involves moving the disk pool itself, the disks contained in the disk pool, the disk pool's managed resource group, and all the resources contained in the managed resource group. Currently, Azure doesn't support moving multiple resource groups to another subscription at once.
-
-- Export the template of your existing disk pool.
-- Delete the old disk pool.
-- Move the Azure resources necessary to create a disk pool.
-- Redeploy the disk pool.
-
-## Export your existing disk pool template
-
-To make the redeployment process simpler, export the template from your existing disk pool. You can use this template to redeploy the disk pool in a subscription of your choice, with the same configuration. See [this article](../azure-resource-manager/templates/export-template-portal.md#export-template-from-a-resource) to learn how to export a template from a resource.
-
-## Delete the old disk pool
-
-Now that you've exported the template, delete the old disk pool. Deleting the disk pool removes the disk pool resource and its managed resource group. See [this article](disks-pools-deprovision.md) for guidance on how to delete a disk pool.
-
-## Move your disks and virtual network
-
-Now that the disk pool is deleted, you can move the virtual network and your disks, and potentially your clients, to the subscription you want to change to. See [this article](../azure-resource-manager/management/move-resource-group-and-subscription.md) to learn how to move Azure resources to another subscription.
-
-## Redeploy your disk pool
-
-Once you've moved your other resources into the subscription, update the template of your old disk pool so that all the references to your disks, virtual network, subnet, and clients, all now point to their new resource URIs. Once you've done that, redeploy the template to the new subscription. To learn how to edit and deploy a template, see [this article](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md#edit-and-deploy-the-template).
-
-## Next steps
-
-To learn how to manage your disk pool, see [Manage a disk pool](disks-pools-manage.md).
virtual-machines Disks Pools Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-pools-planning.md
- Title: Optimize your Azure disk pools (preview) performance
-description: Learn how to get the most performance out of an Azure disk pool.
- Previously updated: 02/28/2023
-# Azure disk pools (preview) planning guide
-
-> [!IMPORTANT]
-> Disk pools are being retired soon. If you're looking for an alternative solution, see either [Azure Elastic SAN (preview)](../storage/elastic-san/elastic-san-introduction.md) or [Azure NetApp Files](../aks/azure-netapp-files.md).
-
-It's important to understand the performance requirements of your workload before you deploy an Azure disk pool (preview). Determining your requirements in advance allows you to get the most performance out of your disk pool. The performance of a disk pool is determined by three main factors; adjusting any of them changes your disk pool's performance:
-
-- The disk pool's scalability target
-- The scalability targets of individual disks contained in the disk pool
-- The networking connection between the client machines and the disk pool
-
-## Optimize for low latency
-
-If you're prioritizing for low latency, add ultra disks to your disk pool. Ultra disks provide sub-millisecond disk latency. To get the lowest latency possible, you must also evaluate your network configuration and ensure it's using the most optimal path. Consider using [ExpressRoute FastPath](../expressroute/about-fastpath.md) to minimize network latency.
-
-## Optimize for high throughput
-
-If you're prioritizing throughput, begin by evaluating the performance targets of the different disk pool SKUs, as well as the number of disk pools required to deliver your throughput targets. If your performance needs exceed what a premium disk pool can provide, you can split your deployment across multiple disk pools. Then, you can decide how to best utilize the performance offered on a disk pool amongst each individual disk and their types. For a disk pool, you can either mix and match between premium and standard SSDs, or use ultra disks only. Ultra disks can't be used with premium or standard SSDs. Select the disk type that best fits your needs. Also, confirm the network connectivity from your clients to the disk pool is not a bottleneck, especially the throughput.
--
-## Use cases
-
-The following table lists some typical use cases for disk pools with Azure VMware Solution and a recommended configuration.
--
-|Azure VMware Solution use cases |Suggested disk type |Suggested disk pool SKU |Suggested network configuration |
-|||||
-|Block storage for active working sets, like an extension of Azure VMware Solution vSAN. |Ultra disks |Premium |Use ExpressRoute virtual network gateway: Ultra Performance or ErGw3AZ (10 Gbps) to connect the disk pool virtual network to the Azure VMware Solution cloud and enable FastPath to minimize network latency. |
-|Tiering - tier infrequently accessed data from the Azure VMware Solution vSAN to the disk pool. |Premium SSD, standard SSD |Standard |Use ExpressRoute virtual network gateway: Standard (1 Gbps) or High Performance (2 Gbps) to connect the disk pool virtual network to the Azure VMware Solution cloud. |
-|Data storage for disaster recovery site on Azure VMware Solution: replicate data from on-premises or primary VMware environment to the disk pool as a secondary site. |Premium SSD, standard SSD |Standard, Basic |Use ExpressRoute virtual network gateway: Standard (1 Gbps) or High Performance (2 Gbps) to connect the disk pool virtual network to the Azure VMware Solution cloud. |
--
-Refer to the [Networking planning checklist for Azure VMware Solution](../azure-vmware/tutorial-network-checklist.md) to plan for your networking setup, along with other Azure VMware Solution considerations.
-
-## Disk pool scalability and performance targets
-
-|Resource |Basic disk pool |Standard disk pool |Premium disk pool |
-|||||
-|Maximum number of disks per disk pool |16 |32 |32 |
-|Maximum IOPS per disk pool |12,800 |25,600 |51,200 |
-|Maximum MBps per disk pool |192 |384 |768 |
-
-The following example should give you an idea of how the different performance factors work together:
-
-If you add two 1-TiB premium SSDs (P30, with a provisioned target of 5,000 IOPS and 200 MBps each) to a standard disk pool, you can achieve 2 x 5,000 = 10,000 IOPS. However, throughput would be capped at 384 MBps by the disk pool. To exceed this 384-MBps limit, you can deploy more disk pools to scale out for extra throughput. Your network throughput will limit the effectiveness of scaling out.
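The arithmetic in the example can be sketched as follows, using the numbers from the example and the standard disk pool caps from the targets table above:

```shell
# Two P30 disks in a standard disk pool; caps taken from the targets table.
disk_iops=5000; disk_mbps=200; disk_count=2
pool_iops_cap=25600; pool_mbps_cap=384

total_iops=$(( disk_iops * disk_count ))   # 10,000 -- under the IOPS cap
total_mbps=$(( disk_mbps * disk_count ))   # 400 -- over the throughput cap

# The effective figure is whichever is lower: the disk total or the pool cap.
effective_iops=$(( total_iops < pool_iops_cap ? total_iops : pool_iops_cap ))
effective_mbps=$(( total_mbps < pool_mbps_cap ? total_mbps : pool_mbps_cap ))
echo "effective: $effective_iops IOPS, $effective_mbps MBps"
```

The pool's IOPS cap isn't reached here, but its throughput cap is, which is why scaling out to more disk pools is the way to gain extra throughput.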
-
-Disk pools created without specifying a SKU in the REST API default to the standard disk pool SKU.
-
-## Availability
-
-Disk pools are currently in preview, and shouldn't be used for production workloads. By default, a disk pool only supports premium and standard SSDs. You can enable support for ultra disks on a disk pool instead, but a disk pool with ultra disks isn't compatible with premium or standard SSDs.
-
-Disk pools with support for premium and standard SSDs are based on a highly available architecture, with multiple instances hosting the iSCSI endpoint. Disk pools with support for ultra disks are hosted on a single instance deployment.
-
-If your disk pool becomes inaccessible to your Azure VMware Solution cloud for any reason, you'll experience the following:
-
-- All datastores associated with the disk pool will no longer be accessible.
-- All VMware VMs hosted in the Azure VMware Solution cloud that use the impacted datastores will be in an unhealthy state.
-- The health of clusters in the Azure VMware Solution cloud won't be impacted, except for one operation: you won't be able to place a host into maintenance mode. Azure VMware Solution will handle this failure and attempt recovery by disconnecting the impacted datastores.
-
-## Next steps
-
-- [Deploy a disk pool](disks-pools-deploy.md).
-- To learn how Azure VMware Solution integrates disk pools, see [Attach disk pools to Azure VMware Solution hosts (Preview)](../azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md).
virtual-machines Disks Pools Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-pools-troubleshoot.md
- Title: Troubleshoot Azure disk pools (preview) overview
-description: Troubleshoot issues with Azure disk pools. Learn about common failure codes and how to resolve them.
- Previously updated: 02/28/2023
-# Troubleshoot Azure disk pools (preview)
-
-> [!IMPORTANT]
-> Disk pools are being retired soon. If you're looking for an alternative solution, see either [Azure Elastic SAN (preview)](../storage/elastic-san/elastic-san-introduction.md) or [Azure NetApp Files](../aks/azure-netapp-files.md).
-
-This article lists some common failure codes related to Azure disk pools (preview). It also provides possible resolutions and some clarity on disk pool statuses.
-
-## Disk pool status
-
-Disk pools and iSCSI targets each have four states: **Unknown**, **Running**, **Updating**, and **Stopped (deallocated)**.
-
-**Unknown** means that the resource is in a bad or unknown state. To attempt recovery, perform an update operation on the resource (such as adding or removing disks/LUNs) or delete and redeploy your disk pool.
-
-**Running** means the resource is running and in a healthy state.
-
-**Updating** means that the resource is going through an update. This usually happens during deployment or when applying an update like adding disks or LUNs.
-
-**Stopped (deallocated)** means that the resource is stopped and its underlying resources have been deallocated. You can restart the resource to recover your disk pool or iSCSI target.
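As a quick, unofficial summary of the recovery actions described above, the state-to-action mapping can be sketched as:

```shell
# Illustrative only: map each disk pool / iSCSI target state to the
# recovery action described in the paragraphs above.
next_action() {
    case "$1" in
        Running)                 echo "none; resource is healthy" ;;
        Updating)                echo "wait for the update to complete" ;;
        "Stopped (deallocated)") echo "restart the resource" ;;
        Unknown)                 echo "update the resource, or delete and redeploy" ;;
        *)                       echo "unrecognized state"; return 1 ;;
    esac
}
next_action "Stopped (deallocated)"   # restart the resource
```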
-
-## Common failure codes when deploying a disk pool
-
-|Code |Description |
-|||
-|UnexpectedError |Usually occurs when a backend unrecoverable error occurs. Retry the deployment. If the issue isn't resolved, contact Azure Support and provide the tracking ID of the error message. |
-|DeploymentFailureZonalAllocationFailed |This occurs when Azure runs out of capacity to provision a VM in the specified region/zone. Retry the deployment at another time. |
-|DeploymentFailureQuotaExceeded |The subscription used to deploy the disk pool is out of VM core quota in this region. You can [request an increase in vCPU quota limits per Azure VM series](../azure-portal/supportability/per-vm-quota-requests.md) for Dsv3 series. |
-|DeploymentFailurePolicyViolation |A policy on the subscription prevented the deployment of Azure resources that are required to support a disk pool. See the error for more details. |
-|DeploymentTimeout |This occurs when the deployment of the disk pool infrastructure gets stuck and doesn't complete in the allotted time. Retry the deployment. If the issue persists, contact Azure support and provide the tracking ID of the error message. |
-|GoalStateApplicationTimeoutError |Occurs when the disk pool infrastructure stops responding to the resource provider. Confirm you meet the [networking prerequisites](disks-pools-deploy.md#prerequisites) and then retry the deployment. If the issue persists, contact Azure support and provide the tracking ID of the error. |
-|OngoingOperationInProgress |An ongoing operation is in-progress on the disk pool. Wait until that operation completes, then retry deployment. |
-
-## Common failure codes when enabling iSCSI on disk pools
-
-|Code |Description |
-|||
-|GoalStateApplicationError |Occurs when the iSCSI target configuration is invalid and cannot be applied to the disk pool. Retry the deployment. If the issue persists, contact Azure support and provide the tracking ID of the error. |
-|GoalStateApplicationTimeoutError |Occurs when the disk pool infrastructure stops responding to the resource provider. Confirm you meet the [networking prerequisites](disks-pools-deploy.md#prerequisites) and then retry the deployment. If the issue persists, contact Azure support and provide the tracking ID of the error. |
-|OngoingOperationInProgress |An ongoing operation is in-progress on the disk pool. Wait until that operation completes, then retry deployment. |
-
-## Next steps
-
-[Manage a disk pool (preview)](disks-pools-manage.md)
virtual-machines Disks Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-pools.md
- Title: Azure disk pools (preview) overview
-description: Learn about Azure disk pools (preview).
- Previously updated: 02/28/2023
-# Azure disk pools (preview)
-
-> [!IMPORTANT]
-> Disk pools are being retired soon. If you're looking for an alternative solution, see either [Azure Elastic SAN (preview)](../storage/elastic-san/elastic-san-introduction.md) or [Azure NetApp Files](../aks/azure-netapp-files.md).
-
-An Azure disk pool (preview) is an Azure resource that allows your applications and workloads to access a group of managed disks from a single endpoint. A disk pool can expose an Internet Small Computer Systems Interface (iSCSI) target to enable data access to disks inside this pool over iSCSI. Each disk pool can have one iSCSI target and each disk can be exposed as an iSCSI LUN. You can connect disks under the disk pool to Azure VMware Solution hosts as datastores. This allows you to scale your storage independent of your Azure VMware Solution hosts. Once a datastore is configured, you can create volumes on it and attach them to your VMware instances.
-
-## How it works
-
-When a disk pool is deployed, a managed resource group is automatically created for you. This managed resource group contains all Azure resources necessary for the operation of a disk pool. The naming convention for these resource groups is: MSP_(resource-group-name)_(diskpool-name)_(region-name).
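For example, the naming convention can be applied like this (hypothetical resource names, substitute your own):

```shell
# Hypothetical names; the convention is MSP_<resource-group>_<diskpool>_<region>.
resource_group="myResourceGroup"
diskpool_name="myDiskPool"
region="eastus"
managed_rg="MSP_${resource_group}_${diskpool_name}_${region}"
echo "$managed_rg"
```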
-
-When you add a managed disk to the disk pool, the disk is attached to managed iSCSI controllers. Multiple managed disks can be added as storage targets to a disk pool; each storage target is presented as an iSCSI LUN under the disk pool's iSCSI target. Disk pools offer native support for Azure VMware Solution. An Azure VMware Solution cluster can connect to a disk pool, which would encompass all Azure VMware Solution hosts in that environment. The following diagram shows how you can use disk pools with Azure VMware Solution.
--
-## Restrictions
-
-In preview, disk pools have the following restrictions:
-
-- Only premium SSD managed disks and standard SSDs, or ultra disks can be added to a disk pool.
- - A disk pool can't be configured to contain both ultra disks and premium/standard SSDs. If a disk pool is configured to use ultra disks, it can only contain ultra disks. Likewise, a disk pool configured to use premium and standard SSDs can only contain premium and standard SSDs.
-- Disks using [zone-redundant storage (ZRS)](disks-redundancy.md#zone-redundant-storage-for-managed-disks) aren't currently supported.
-
-### Regional availability
-
-Disk pools are currently available in the following regions:
-
-- Australia East
-- Canada Central
-- Central US
-- East US
-- East US 2
-- West US 2
-- Japan East
-- North Europe
-- West Europe
-- Southeast Asia
-- UK South
-- Korea Central
-- Sweden Central
-- Central India
-
-## Billing
-
-When you deploy a disk pool, two areas incur billing costs: the disk pool service fee itself, and the price of each individual disk added to the pool. For example, if you have a disk pool with one P30 disk added, you're billed for the P30 disk and the disk pool. Other than the disk pool and your disks, there are no extra service charges, and you aren't billed for the resources deployed in the managed resource group: MSP_(resource-group-name)_(diskpool-name)_(region-name).
-
-See the [Azure managed disk pricing page](https://azure.microsoft.com/pricing/details/managed-disks/) for regional pricing on disk pools and disks to evaluate the cost of a disk pool for you.
-
-## Next steps
-
-See the [disk pools planning guide](disks-pools-planning.md).
virtual-machines Diagnostics Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux.md
This extension works with both Azure deployment models: Azure Resource Manager a
### Supported Linux distributions
-The following distributions and versions include only Azure-endorsed Linux vendor images. The extension generally doesn't support third-party BYOL and BYOS images, like appliances.
-
-- Ubuntu 18.04, 16.04, 14.04
-- CentOS 8, 7, 6.5+
-- Oracle Linux 7, 6.4+
-- OpenSUSE 13.1+
-- SUSE Linux Enterprise Server 12 SP5
-- Debian 9, 8, 7
-- Red Hat Enterprise Linux (RHEL) 7.9
-- Alma Linux 8
-- Rocky Linux 8
-
-A distribution that lists only major versions, like Debian 7, is also supported for all minor versions. If a specific minor version is specified, only that version is supported. If a plus sign (+) is appended, minor versions equal to or later than the specified version are supported.
+See [Supported agent operating systems](../../azure-monitor/agents/agents-overview.md#linux).
### Python requirement
virtual-machines Vm Naming Conventions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-naming-conventions.md
subservice: sizes Previously updated : 7/22/2020 Last updated : 05/01/2023 +
This page outlines the naming conventions used for Azure VMs. VMs use these nami
|Value | Explanation|
|||
| Family | Indicates the VM Family Series|
-| *Sub-family | Used for specialized VM differentiations only|
+| *Subfamily | Used for specialized VM differentiations only|
| # of vCPUs| Denotes the number of vCPUs of the VM |
| *Constrained vCPUs| Used for certain VM sizes only. Denotes the number of vCPUs for the [constrained vCPU capable size](./constrained-vcpu.md) |
-| Additive Features | One or more lower case letters denote additive features, such as: <br> a = AMD-based processor <br> b = Block Storage performance <br> d = diskful (i.e., a local temp disk is present); this is for newer Azure VMs, see [Ddv4 and Ddsv4-series](./ddv4-ddsv4-series.md) <br> i = isolated size <br> l = low memory; a lower amount of memory than the memory intensive size <br> m = memory intensive; the most amount of memory in a particular size <br>p = ARM Cpu <br> t = tiny memory; the smallest amount of memory in a particular size <br> s = Premium Storage capable, including possible use of [Ultra SSD](./disks-types.md#ultra-disks) (Note: some newer sizes without the attribute of s can still support Premium Storage e.g. M128, M64, etc.)<br> C = Confidential <br>NP = node packing <br>
-| *Accelerator Type | Denotes the type of hardware accelerator in the specialized/GPU SKUs. Only the new specialized/GPU SKUs launched from Q3 2020 will have the hardware accelerator in the name. |
+| Additive Features | Lower case letters denote additive features, such as: <br> a = AMD-based processor <br> b = Block Storage performance <br> d = diskful (that is, a local temp disk is present); this feature is for newer Azure VMs, see [Ddv4 and Ddsv4-series](./ddv4-ddsv4-series.md) <br> i = isolated size <br> l = low memory; a lower amount of memory than the memory intensive size <br> m = memory intensive; the most amount of memory in a particular size <br>p = ARM Cpu <br> t = tiny memory; the smallest amount of memory in a particular size <br> s = Premium Storage capable, including possible use of [Ultra SSD](./disks-types.md#ultra-disks) (Note: some newer sizes without the attribute of s can still support Premium Storage, such as M128, M64, etc.)<br> C = Confidential <br>NP = node packing <br>
+| *Accelerator Type | Denotes the type of hardware accelerator in the specialized/GPU SKUs. Only the new specialized/GPU SKUs launched from Q3 2020 have the hardware accelerator in the name. |
| Version | Denotes the version of the VM Family Series |

## Example breakdown
-**[Family]** + **[Sub-family*]** + **[# of vCPUs]** + **[Additive Features]** + **[Accelerator Type*]** + **[Version]**
+**[Family]** + **[Subfamily*]** + **[# of vCPUs]** + **[Additive Features]** + **[Accelerator Type*]** + **[Version]**
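The breakdown above can be illustrated with a small, unofficial parser sketch; the regex is a simplification (it doesn't handle constrained-vCPU names, and treats the accelerator as a single optional token):

```shell
# Illustrative sketch only; not an exhaustive parser of Azure VM size names.
parse_size() {
    local re='^([A-Z])([A-Z]?)([0-9]+)([a-z]*)(_[A-Z][0-9A-Za-z]*)?(_v[0-9]+)?$'
    [[ $1 =~ $re ]] || { echo "no match: $1"; return 1; }
    echo "family=${BASH_REMATCH[1]} subfamily=${BASH_REMATCH[2]}" \
         "vcpus=${BASH_REMATCH[3]} features=${BASH_REMATCH[4]}" \
         "accelerator=${BASH_REMATCH[5]#_} version=${BASH_REMATCH[6]#_}"
}
parse_size "M416ms_v2"
parse_size "NC4as_T4_v3"
```

For `M416ms_v2` this yields family `M`, no subfamily, 416 vCPUs, features `ms`, version `v2`; for `NC4as_T4_v3` it yields family `N`, subfamily `C`, accelerator `T4`.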
### Example 1: M416ms_v2
This page outlines the naming conventions used for Azure VMs. VMs use these nami
|Value | Explanation|
|||
| Family | N |
-| Sub-family | V |
+| Subfamily | V |
| # of vCPUs | 16 |
| Additive Features | a = AMD-based processor <br> s = Premium Storage capable |
| Version | v4 |
This page outlines the naming conventions used for Azure VMs. VMs use these nami
|Value | Explanation|
|||
| Family | N |
-| Sub-family | C |
+| Subfamily | C |
| # of vCPUs | 4 |
| Additive Features | a = AMD-based processor <br> s = Premium Storage capable |
| Accelerator Type | T4 |
virtual-machines Configure Oracle Asm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-oracle-asm.md
To install Oracle ASM, complete the following steps.
For more information about installing Oracle ASM, see [Oracle ASMLib Downloads for Oracle Linux 6](https://www.oracle.com/technetwork/server-storage/linux/asmlib/ol6-1709075.html).
+> [!IMPORTANT]
+> Keep in mind that Oracle Linux 6.x is already EOL. Oracle Linux version 6.10 has [ELS support](https://www.oracle.com/a/ocom/docs/linux/oracle-linux-extended-support-ds.pdf) available, which [ends in July 2024](https://www.oracle.com/a/ocom/docs/elsp-lifetime-069338.pdf).
+ 1. You need to log in as root to continue with the ASM installation:

   ```bash
For more information about installing Oracle ASM, see [Oracle ASMLib Downloads f
The output of this command should list the following users and groups:
- ```bash
+ ```output
   uid=3000(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54345(asmadmin),54346(asmdba),54347(asmoper)
   ```
virtual-wan How To Palo Alto Cloud Ngfw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-palo-alto-cloud-ngfw.md
+
+ Title: 'Install Palo Alto Networks Cloud NGFW in a Virtual WAN hub'
+
+description: Learn how to configure Palo Alto Networks Cloud NGFW in a Virtual WAN hub.
+ Last updated: 05/02/2023
+ms.custom : references_regions
++
+# Configure Palo Alto Networks Cloud NGFW in Virtual WAN (preview)
+
+> [!IMPORTANT]
+> Portal capabilities for Palo Alto Networks Cloud NGFW in Virtual WAN are currently rolling out and may not be available in all geographical regions. If you can't see Palo Alto Networks Cloud NGFW as an option in the portal, use the following [portal link](https://ms.portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridNetworking=mpacflight).
+
+> [!IMPORTANT]
+> Palo Alto Cloud NGFW for Virtual WAN is currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+[Palo Alto Networks Cloud Next Generation Firewall (NGFW)](https://aka.ms/pancloudngfwdocs) is a cloud-native software-as-a-service (SaaS) security offering that can be deployed into the Virtual WAN hub as a bump-in-the-wire solution to inspect network traffic. The following document describes some of the key features, critical use cases, and how-tos associated with using Palo Alto Networks Cloud NGFW in Virtual WAN.
+
+## Background
+
+Palo Alto Networks Cloud NGFW integration with Virtual WAN provides the following benefits to customers:
+
+* **Protect critical workloads** using a highly scalable SaaS security offering that can be injected as a bump-in-the-wire solution in Virtual WAN.
+* **Fully managed infrastructure and software lifecycle** under software-as-a-service model.
+* **Consumption-based pay-as-you-go** billing.
+* **Cloud-native experience** that has a tight integration with Azure to provide end-to-end Firewall management using Azure portal or Azure APIs. Rule and policy management is also optionally configurable through Palo Alto Network management solution Panorama.
+* **Dedicated and streamlined support channel** between Azure and Palo Alto Networks to troubleshoot issues.
+* **One-click routing** to configure Virtual WAN to inspect on-premises, Virtual Network and Internet-outbound traffic using Palo Alto Networks Cloud NGFW.
++
+## Use cases
+
+The following section describes the common security use cases for Palo Alto Networks Cloud NGFW in Virtual WAN.
+
+### Private (on-premises and virtual network) traffic
+
+>[!NOTE]
+> Traffic between connections to Virtual Hubs in **different** Azure regions will be dropped. Support for inter-region traffic flows is coming soon; such flows are delineated with dotted lines.
+
+#### East-west traffic inspection
+
+Virtual WAN routes Virtual Network-to-Virtual Network traffic and on-premises-to-on-premises traffic (Site-to-site VPN, ExpressRoute, Point-to-site VPN) to the Cloud NGFW deployed in the hub for inspection.
++
+#### North-south traffic inspection
+
+Virtual WAN also routes traffic between Virtual Networks and on-premises (Site-to-site VPN, ExpressRoute, Point-to-site VPN) to the Cloud NGFW deployed in the hub for inspection.
++
+### Internet edge
+
+>[!NOTE]
+> The 0.0.0.0/0 default route does not propagate across hubs. On-premises and Virtual Networks can only use local Cloud NGFW resources to access the Internet. Additionally, for Destination NAT use cases, Cloud NGFW can only forward incoming traffic to local Virtual Networks and on-premises.
+
+#### Internet egress
+
+Virtual WAN can be configured to route internet-bound traffic from Virtual Networks or on-premises to Cloud NGFW for inspection and internet breakout. You can selectively choose which Virtual Networks or on-premises sites learn the default route (0.0.0.0/0) and use Palo Alto Cloud NGFW for internet egress. In this use case, Azure automatically NATs the source IP of your internet-bound packet to the public IPs associated with the Cloud NGFW.
+
+For more information on internet-outbound capabilities and available settings, see [Palo Alto Networks documentation](https://aka.ms/pancloudngfwdocs).
++
+#### Internet ingress (DNAT)
+You can also configure Palo Alto Networks for Destination-NAT (DNAT). Destination NAT allows a user to access and communicate with an application hosted on-premises or in an Azure Virtual Network via the public IPs associated with the Cloud NGFW.
+
+For more information on internet-inbound (DNAT) capabilities and available settings, see [Palo Alto Networks documentation](https://aka.ms/pancloudngfwdocs).
++
+## Before you begin
+
+The steps in this article assume you have already created a Virtual WAN.
+
+To create a new virtual WAN, use the steps in the following article:
+
+* [Create a Virtual WAN](virtual-wan-site-to-site-portal.md#openvwan)
+
+## Known limitations
+
+* Palo Alto Networks Cloud NGFW is only available in the following Azure regions: Central US, East US, East US 2, West Europe and Australia East. Other Azure regions are on the roadmap.
+* Palo Alto Networks Cloud NGFW can only be deployed in new Virtual WAN hubs deployed with Azure resource tag **"hubSaaSPreview : true"**. Using existing Virtual Hubs with Palo Alto Networks Cloud NGFW is on the roadmap.
+* Palo Alto Networks Cloud NGFW can't be deployed with Network Virtual Appliances in the Virtual WAN hub.
+* For routing between Virtual WAN and Palo Alto Networks Cloud NGFW to work properly, your entire network (on-premises and Virtual Networks) must be within RFC-1918 (subnets within 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12). For example, you may not use a subnet such as 40.0.0.0/24 within your Virtual Network or on-premises. Traffic to 40.0.0.0/24 may not be routed properly.
+* All other limitations in the [Routing Intent and Routing policies documentation limitations section](how-to-routing-policies.md) apply to Palo Alto Networks Cloud NGFW deployments in Virtual WAN.
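The RFC 1918 requirement above can be sanity-checked with a short sketch (illustrative only; it tests individual IPv4 addresses rather than whole subnets):

```shell
# Illustrative only: check whether an IPv4 address falls in RFC 1918 space
# (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16).
ip_to_int() {
    local IFS=.
    set -- $1
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

in_rfc1918() {
    local n
    n=$(ip_to_int "$1")
    (( n >> 24 == 10 )) && return 0       # 10.0.0.0/8
    (( n >> 20 == 2753 )) && return 0     # 172.16.0.0/12 (2753 = 0xAC1)
    (( n >> 16 == 49320 )) && return 0    # 192.168.0.0/16 (49320 = 0xC0A8)
    return 1
}

in_rfc1918 "10.1.2.3" && echo "10.1.2.3 is private"
in_rfc1918 "40.0.0.1" || echo "40.0.0.1 is outside RFC 1918 -- don't use it here"
```

A subnet like 40.0.0.0/24 fails this check, which is why traffic to it may not be routed properly per the limitation above.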
+
+## Register resource provider
+
+To use Palo Alto Networks Cloud NGFW, you must register the **PaloAltoNetworks.Cloudngfw** resource provider to your subscription with an API version that is at minimum **2022-08-29-preview**.
+
+For more information on how to register a Resource Provider to an Azure subscription, see [Azure resource providers and types documentation](../azure-resource-manager/management/resource-providers-and-types.md).
+## Deploy virtual hub
+The following steps describe how to deploy a Virtual Hub that can be used with Palo Alto Networks Cloud NGFW.
+
+1. Navigate to your Virtual WAN resource.
+1. On the left hand menu, select **Hubs** under **Connectivity**.
+1. Click on **New Hub**.
+1. Under **Basics** specify a region for your Virtual Hub. Make sure the region is Central US, East US, East US 2, West Europe or Australia East. Additionally, specify a name, address space, Virtual hub capacity and Hub routing preference for your hub.
+ :::image type="content" source="./media/how-to-palo-alto-cloudngfw/create-hub.png" alt-text="Screenshot showing hub creation page. Region selector box is highlighted." lightbox="./media/how-to-palo-alto-cloudngfw/create-hub.png":::
+1. Select and configure the Gateways (Site-to-site VPN, Point-to-site VPN, ExpressRoute) you want to deploy in the Virtual Hub. You can deploy Gateways later if you wish.
+1. Apply an Azure Resource tag to your Virtual Hub **"hubSaaSPreview":"true"**. This tag must be specified at hub deployment time to use Palo Alto Networks Cloud NGFW.
+ :::image type="content" source="./media/how-to-palo-alto-cloudngfw/apply-tags.png" alt-text="Screenshot showing hub tag creation page." lightbox="./media/how-to-palo-alto-cloudngfw/apply-tags.png":::
+1. Click **Review + create**.
+1. Click **Create**.
+1. Navigate to your newly created hub and wait for the **Routing Status** to be **Provisioned**. This step can take up to 30 minutes.
+
+## Deploy Palo Alto Networks Cloud NGFW
+
+>[!NOTE]
+> You must wait for the routing status of the hub to be "Provisioned" before deploying Cloud NGFW.
+
+1. Navigate to your Virtual Hub and click on **SaaS solutions** under **Third-party providers**.
+1. Click **Create SaaS** and select **Palo Alto Networks Cloud NGFW (preview)**.
+1. Click **Create**.
+ :::image type="content" source="./media/how-to-palo-alto-cloudngfw/create-saas.png" alt-text="Screenshot showing SaaS creation page." lightbox="./media/how-to-palo-alto-cloudngfw/create-saas.png":::
+1. Provide a name for your Firewall. Make sure the region of the Firewall is the same as the region of your Virtual Hub. For more information on the available configuration options for Palo Alto Networks Cloud NGFW, see [Palo Alto Networks documentation for Cloud NGFW](https://aka.ms/pancloudngfwdocs).
+
+## Configure Routing
+
+>[!NOTE]
+> You can't configure routing intent until the Cloud NGFW is successfully provisioned.
+
+1. Navigate to your Virtual Hub and click on **Routing intent and policies** under **Routing**.
+1. If you want to use Palo Alto Networks Cloud NGFW to inspect outbound Internet traffic (traffic between Virtual Networks or on-premises and the Internet), under **Internet traffic** select **SaaS solution**. For the **Next Hop resource**, select your Cloud NGFW resource.
+ :::image type="content" source="./media/how-to-palo-alto-cloudngfw/internet-routing-policy.png" alt-text="Screenshot showing internet routing policy creation." lightbox="./media/how-to-palo-alto-cloudngfw/internet-routing-policy.png":::
+1. If you want to use Palo Alto Networks Cloud NGFW to inspect private traffic (traffic between all Virtual Networks and on-premises in your Virtual WAN), under **Private traffic** select **SaaS solution**. For the **Next Hop resource**, select your Cloud NGFW resource.
+ :::image type="content" source="./media/how-to-palo-alto-cloudngfw/private-routing-policy.png" alt-text="Screenshot showing private routing policy creation." lightbox="./media/how-to-palo-alto-cloudngfw/private-routing-policy.png":::
+
+## Manage Palo Alto Networks Cloud NGFW
+
+The following section describes how you can manage your Palo Alto Networks Cloud NGFW (rules, IP addresses, security configurations etc.)
+
+1. Navigate to your Virtual Hub and click on **SaaS solutions**.
+1. Click on **Click here** under **Manage SaaS**.
+ :::image type="content" source="./media/how-to-palo-alto-cloudngfw/manage-saas.png" alt-text="Screenshot showing how to manage your SaaS solution." lightbox="./media/how-to-palo-alto-cloudngfw/manage-saas.png":::
+1. For more information on the available configuration options for Palo Alto Networks Cloud NGFW, see [Palo Alto Networks documentation for Cloud NGFW](https://aka.ms/pancloudngfwdocs).
+
+## Delete Palo Alto Networks Cloud NGFW
+
+>[!NOTE]
+> You can't delete your Virtual Hub until both the Cloud NGFW and Virtual WAN SaaS solution are deleted.
+
+The following steps describe how to delete a Cloud NGFW offer:
+
+1. Navigate to your Virtual Hub and click on **SaaS solutions**.
+1. Click on **Click here** under **Manage SaaS**.
+ :::image type="content" source="./media/how-to-palo-alto-cloudngfw/manage-saas.png" alt-text="Screenshot showing how to manage your SaaS solution." lightbox="./media/how-to-palo-alto-cloudngfw/manage-saas.png":::
+1. Click on **Delete** in the upper left-hand corner of the page.
+ :::image type="content" source="./media/how-to-palo-alto-cloudngfw/delete-ngfw.png" alt-text="Screenshot showing delete Cloud NGFW options." lightbox="./media/how-to-palo-alto-cloudngfw/delete-ngfw.png":::
+1. After the delete operation is successful, navigate back to your Virtual Hub's **SaaS solutions** page.
+1. Click on the line that corresponds to your Cloud NGFW and click **Delete SaaS** on the upper left-hand corner of the page. This option won't be available until Step 3 runs to completion.
+
+## Troubleshooting
+
+The following section describes common issues seen when using Palo Alto Networks Cloud NGFW in Virtual WAN.
+
+### Troubleshooting Cloud NGFW creation
+
+* Ensure your Virtual Hubs are deployed in one of the following regions: Central US, East US, East US 2, West Europe, or Australia East. Cloud NGFW deployment fails in other regions.
+* Ensure your Virtual Hub was created with the Azure Resource Tag **"hubSaaSPreview" : "true"**. Hubs created without this tag aren't eligible to be used with Cloud NGFW. These tags must be specified at hub creation time and can't be provided after hub deployment. To use Cloud NGFW, you need to create a new Virtual Hub.
+* Ensure the Routing status of the Virtual Hub is "Provisioned." Attempts to create Cloud NGFW prior to routing being provisioned will fail.
+* Ensure registration to the **PaloAltoNetworks.Cloudngfw** resource provider is successful.
+
+### Troubleshooting deletion
+
+* A SaaS solution can't be deleted until the linked Cloud NGFW resource is deleted. Therefore, delete the Cloud NGFW resource before deleting the SaaS solution resource.
+* A SaaS solution resource that is currently the next hop resource for routing intent can't be deleted. Routing intent must be deleted before the SaaS solution resource can be deleted.
+* Similarly, a Virtual Hub resource that has a SaaS solution can't be deleted. The SaaS solution must be deleted before the Virtual Hub is deleted.
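The constraints above form a simple dependency order: the Cloud NGFW and routing intent must be removed before the SaaS solution, and the SaaS solution before the Virtual Hub. As an informal sketch (Python here is purely illustrative, and `valid_deletion_plan` is a hypothetical helper, not an Azure API), the ordering can be encoded like this:

```python
# Deletion dependencies from the constraints above: each key can be
# deleted only after all of its listed blockers are already gone.
MUST_PRECEDE = {
    "SaaS solution": {"Cloud NGFW", "routing intent"},
    "Virtual Hub": {"SaaS solution"},
}

def valid_deletion_plan(plan):
    """True if the resources in `plan` are deleted in an order
    consistent with the dependency map above."""
    seen = set()
    for resource in plan:
        if not MUST_PRECEDE.get(resource, set()) <= seen:
            return False  # a blocker of this resource still exists
        seen.add(resource)
    return True

print(valid_deletion_plan(
    ["Cloud NGFW", "routing intent", "SaaS solution", "Virtual Hub"]))  # True
print(valid_deletion_plan(["SaaS solution", "Cloud NGFW"]))             # False
```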
+
+### Troubleshooting Routing intent and policies
+
+* Ensure Cloud NGFW deployment is completed successfully before attempting to configure Routing Intent.
+* For more information about troubleshooting routing intent, see [Routing Intent documentation](how-to-routing-policies.md). This document describes pre-requisites, common errors associated with configuring routing intent and troubleshooting tips.
+
+### Troubleshooting Palo Alto Networks Cloud NGFW configuration
+
+* Reference [Palo Alto Networks documentation](https://aka.ms/pancloudngfwdocs).
+
+## Next steps
+
+* For more information about Virtual WAN, see the [FAQ](virtual-wan-faq.md).
+* For more information about routing intent, see the [Routing Intent documentation](how-to-routing-policies.md).
+* For more information about Palo Alto Networks Cloud NGFW, see [Palo Alto Networks Cloud NGFW documentation](https://aka.ms/pancloudngfwdocs).
virtual-wan Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md
You can also find the latest Azure Virtual WAN updates and subscribe to the RSS
| Type |Area |Name |Description | Date added | Limitations |
|---|---|---|---|---|---|
+|Feature|Software-as-a-service|Palo Alto Networks Cloud NGFW|Public preview of [Palo Alto Networks Cloud NGFW](https://aka.ms/pancloudngfwdocs), the first software-as-a-service security offering deployable within the Virtual WAN hub.|May 2023|Palo Alto Networks Cloud NGFW is only deployable in newly created Virtual WAN hubs in some Azure regions. See [Limitations of Palo Alto Networks Cloud NGFW](how-to-palo-alto-cloud-ngfw.md) for a full list of limitations.|
| Feature| Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs| [Fortinet SD-WAN](https://docs.fortinet.com/document/fortigate-public-cloud/7.2.2/azure-vwan-sd-wan-deployment-guide/12818/deployment-overview)| General availability of Fortinet SD-WAN solution in Virtual WAN. Next-Generation Firewall use cases in preview.| October 2022| SD-WAN solution generally available. Next Generation Firewall use cases in preview.|
|Feature |Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs| [Versa SD-WAN](about-nva-hub.md#partners)|Preview of Versa SD-WAN.|November 2021| |
|Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs|[Cisco Viptela, Barracuda and VMware (Velocloud) SD-WAN](about-nva-hub.md#partners) |General Availability of SD-WAN solutions in Virtual WAN.|June/July 2021| |
vpn-gateway Nat Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nat-howto.md
description: Learn how to configure NAT for Azure VPN Gateway.
Previously updated : 03/30/2023 Last updated : 05/02/2023
Verify that you have an Azure subscription. If you don't already have an Azure s
## <a name ="vnet"></a>Part 1: Create VNet and gateways
-In this section, you create a virtual network, a VPN gateway, and the local network gateway resources to correspond to the resources shown in [Diagram 1](#diagram).
-
-To create these resources, use the steps in the [Site-to-Site Tutorial](tutorial-site-to-site-portal.md) article. Complete the following sections of the article, but don't create any connections.
+In this section, you create a virtual network, a VPN gateway, and the local network gateway resources to correspond to the resources shown in [Diagram 1](#diagram). To create these resources, you can use the steps in the [Site-to-Site Tutorial](tutorial-site-to-site-portal.md) article. Complete the following sections of the article, but don't create any connections.
* [VNet](tutorial-site-to-site-portal.md#CreatVNet)
* [VPN gateway](tutorial-site-to-site-portal.md#VNetGateway)
* [Local network gateway](tutorial-site-to-site-portal.md#LocalNetworkGateway)
* [Configure your VPN device](tutorial-site-to-site-portal.md#VPNDevice)
->[!IMPORTANT]
-> When using the steps in the following articles, do not create the **connection** resources in the articles. The operation will fail because the IP address spaces are the same between the VNet, Branch 1, and Branch 2. Use the steps in the following section to create the NAT rules, then create the connections with the NAT rules.
->
+> [!IMPORTANT]
+> Don't create any connections. If you try to create connection resources, the operation fails because the IP address spaces are the same between the VNet, Branch1, and Branch2. You'll create connection resources later in this article.
The following screenshots show examples of the resources to create.
The following screenshots show examples of the resources to create.
* **VPN gateway** :::image type="content" source="./media/nat-howto/vpn-gateway.png" alt-text="Screenshot showing the gateway." lightbox="./media/nat-howto/vpn-gateway.png":::
-* **Branch 1 local network gateway**
- :::image type="content" source="./media/nat-howto/branch-1.png" alt-text="Screenshot showing Branch 1 local network gateway." lightbox="./media/nat-howto/branch-1.png" :::
+* **Branch1 local network gateway**
+
+ :::image type="content" source="./media/nat-howto/branch-1.png" alt-text="Screenshot showing Branch1 local network gateway." lightbox="./media/nat-howto/branch-1.png" :::
-* **Branch 2 local network gateway**
+* **Branch2 local network gateway**
- :::image type="content" source="./media/nat-howto/branch-2.png" alt-text="Screenshot showing Branch 2 local network gateway." lightbox="./media/nat-howto/branch-2.png":::
+ :::image type="content" source="./media/nat-howto/branch-2.png" alt-text="Screenshot showing Branch2 local network gateway." lightbox="./media/nat-howto/branch-2.png":::
## <a name ="nat-rules"></a>Part 2: Create NAT rules
Before you create connections, you must create and save NAT rules on the VPN gat
| Name | Type | Mode | Internal | External | Connection |
| --- | --- | --- | --- | --- | --- |
| VNet | Static | EgressSNAT | 10.0.1.0/24 | 100.0.1.0/24 | Both connections |
-| Branch_1 | Static | IngressSNAT | 10.0.1.0/24 | 100.0.2.0/24 | Branch 1 connection |
-| Branch_2 | Static | IngressSNAT | 10.0.1.0/24 | 100.0.3.0/24 | Branch 2 connection |
+| Branch1 | Static | IngressSNAT | 10.0.1.0/24 | 100.0.2.0/24 | Branch1 connection |
+| Branch2 | Static | IngressSNAT | 10.0.1.0/24 | 100.0.3.0/24 | Branch2 connection |
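The static rules in this table are one-to-one prefix mappings: each internal address maps to the address at the same host offset in the external prefix. As an informal illustration only (Python and the `translate` helper are hypothetical, not part of the gateway configuration), a minimal sketch using the standard `ipaddress` module:

```python
from ipaddress import ip_address, ip_network

def translate(addr, internal, external):
    """Statically map an address from the internal prefix to the
    address at the same host offset in the external prefix (1:1 NAT)."""
    internal, external = ip_network(internal), ip_network(external)
    offset = int(ip_address(addr)) - int(internal.network_address)
    return str(ip_address(int(external.network_address) + offset))

# Branch1 IngressSNAT rule from the table: 10.0.1.0/24 -> 100.0.2.0/24.
# A packet arriving from Branch1 host 10.0.1.4 is seen by the VNet as:
print(translate("10.0.1.4", "10.0.1.0/24", "100.0.2.0/24"))  # 100.0.2.4
```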
-Use the following steps to create all the NAT rules on the VPN gateway.
+Use the following steps to create all the NAT rules on the VPN gateway.
-1. In the Azure portal, navigate to the **Virtual Network Gateway** resource page and select **NAT Rules**.
-1. Using the **NAT rules table**, fill in the values.
+1. In the Azure portal, navigate to the **Virtual Network Gateway** resource page and select **NAT Rules** from the left pane.
+1. Using the **NAT rules table**, fill in the values. If you're using BGP, select **Enable** for the **Enable BGP Route Translation** setting.
   :::image type="content" source="./media/nat-howto/disable-bgp.png" alt-text="Screenshot showing NAT rules." lightbox="./media/nat-howto/disable-bgp.png":::
1. Click **Save** to save the NAT rules to the VPN gateway resource. This operation can take up to 10 minutes to complete.

## <a name ="connections"></a>Part 3: Create connections and link NAT rules
-In this section, you create the connections, and then associate the NAT rules with the connections to implement the sample topology in [Diagram 1](#diagram).
-
-### 1. Create connections
-
-Follow the steps in [Create a site-to-site connection](tutorial-site-to-site-portal.md) article to create the two connections as shown in the following screenshot:
-
- :::image type="content" source="./media/nat-howto/connections.png" alt-text="Screenshot showing the Connections page." lightbox="./media/nat-howto/connections.png":::
-
-### 2. Associate NAT rules with the connections
-
-In this step, you associate the NAT rules with each connection resource.
-
-1. In the Azure portal, navigate to the connection resources, and select **Configuration**.
+In this section, you create the connections and associate the NAT rules in the same step. Note that if you create the connection objects first, without linking the NAT rules at the same time, the operation fails because the IP address spaces are the same between the VNet, Branch1, and Branch2.
-1. Under Ingress NAT Rules, select the NAT rules created previously.
+The connections and the NAT rules are specified in the sample topology shown in [Diagram 1](#diagram).
- :::image type="content" source="./media/nat-howto/config-nat.png" alt-text="Screenshot showing the configured NAT rules." lightbox="./media/nat-howto/config-nat.png":::
+1. Go to the VPN gateway.
+1. On the **Connections** page, select **+Add** to open the **Add connection** page.
+1. On the **Add connection** page, fill in the values for the VNet-Branch1 connection, specifying the associated NAT rules, as shown in the following screenshot. For Ingress NAT rules, select Branch1. For Egress NAT rules, select VNet. If you are using BGP, you can select **Enable BGP**.
-1. Click **Save** to apply the configurations to the connection resource.
+ :::image type="content" source="./media/nat-howto/branch-1-connection.png" alt-text="Screenshot showing the VNet-Branch1 connection." lightbox="./media/nat-howto/branch-1-connection.png":::
+1. Click **OK** to create the connection.
+1. Repeat the steps to create the VNet-Branch2 connection. For Ingress NAT rules, select Branch2. For Egress NAT rules, select VNet.
+1. After configuring both connections, your configuration should look similar to the following screenshot. The status changes to **Connected** when the connection is established.
-1. Repeat the steps to apply the NAT rules for other connection resources.
+ :::image type="content" source="./media/nat-howto/all-connections.png" alt-text="Screenshot showing all connections." lightbox="./media/nat-howto/all-connections.png":::
-1. If BGP is used, select **Enable BGP Route Translation** in the NAT rules page and click **Save**. Notice that the table now shows the connections linked with each NAT rule.
+1. When you have completed the configuration, the NAT rules look similar to the following screenshot, and you'll have a topology that matches the topology shown in [Diagram 1](#diagram). Notice that the table now shows the connections that are linked with each NAT rule.
- :::image type="content" source="./media/nat-howto/enable-bgp.png" alt-text="Screenshot showing Enable BGP." lightbox="./media/nat-howto/enable-bgp.png":::
+ If you want to enable BGP Route Translation for your connections, select **Enable** then click **Save**.
-After completing these steps, you'll have a setup that matches the topology shown in [Diagram 1](#diagram).
+ :::image type="content" source="./media/nat-howto/all-nat-rules.png" alt-text="Screenshot showing the NAT rules." lightbox="./media/nat-howto/all-nat-rules.png":::
## NAT limitations
vpn-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nat-overview.md
description: Learn about NAT (Network Address Translation) in Azure VPN to conne
Previously updated : 05/11/2022 Last updated : 05/02/2023
The diagram shows an Azure VNet and two on-premises networks, all with address s
In the diagram, each connection resource has the following rules:

* Connection 1 (VNet-Branch1):
- * IngressSNAT rule 1
- * EgressSNAT rule 1
+ * IngressSNAT rule 1
+ * EgressSNAT rule 1
* Connection 2 (VNet-Branch2):
- * IngressSNAT rule 2
- * EgressSNAT rule 1
+ * IngressSNAT rule 2
+ * EgressSNAT rule 1
Based on the rules associated with the connections, here are the address spaces for each network:
Based on the rules associated with the connections, here are the address spaces
| Branch 1 | 10.0.1.0/24 | 100.0.2.0/24 |
| Branch 2 | 10.0.1.0/24 | 100.0.3.0/24 |
-The diagram below shows an IP packet from Branch 1 to VNet, before and after the NAT translation:
+The following diagram shows an IP packet from Branch 1 to VNet, before and after the NAT translation:
:::image type="content" source="./media/nat-overview/nat-packet.png" alt-text="Diagram showing before and after NAT translation." lightbox="./media/nat-overview/nat-packet.png" border="false":::
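All three networks in this topology share the same 10.0.1.0/24 address space, which is exactly why the connections need NAT. As an informal check (Python here is purely illustrative, and `has_overlap` is a hypothetical helper), the pre-NAT spaces overlap while the post-NAT spaces are disjoint:

```python
from ipaddress import ip_network

# Address spaces before and after NAT, per the table above.
pre_nat  = {"VNet": "10.0.1.0/24", "Branch1": "10.0.1.0/24", "Branch2": "10.0.1.0/24"}
post_nat = {"VNet": "100.0.1.0/24", "Branch1": "100.0.2.0/24", "Branch2": "100.0.3.0/24"}

def has_overlap(spaces):
    """True if any two prefixes in the mapping overlap."""
    nets = [ip_network(p) for p in spaces.values()]
    return any(a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])

print(has_overlap(pre_nat))   # True  -- identical spaces, can't route directly
print(has_overlap(post_nat))  # False -- translated spaces are disjoint
```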
The diagram below shows an IP packet from Branch 1 to VNet, before and after the
## <a name="config"></a>NAT configuration
-To implement the NAT configuration as shown above, first create the NAT rules in your Azure VPN gateway, then create the connections with the corresponding NAT rules associated. See [Configure NAT on Azure VPN gateways](nat-howto.md) for steps to configure NAT for your cross-premises connections.
+To implement the NAT configuration shown in the previous section, first create the NAT rules in your Azure VPN gateway, then create the connections with the corresponding NAT rules associated. See [Configure NAT on Azure VPN gateways](nat-howto.md) for steps to configure NAT for your cross-premises connections.
-## NAT limitations
+## NAT limitations and considerations
[!INCLUDE [NAT limitations](../../includes/vpn-gateway-nat-limitations.md)]
vpn-gateway Vpn Gateway About Compliance Crypto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-compliance-crypto.md
description: Learn how to configure Azure VPN gateways to satisfy cryptographic
Previously updated : 02/13/2023 Last updated : 05/02/2023
This article discusses how you can configure Azure VPN gateways to satisfy your
## About IKEv1 and IKEv2 for Azure VPN connections
-Traditionally we allowed IKEv1 connections for Basic SKUs only and allowed IKEv2 connections for all VPN gateway SKUs other than Basic SKUs. The Basic SKUs allow only 1 connection and along with other limitations such as performance, customers using legacy devices that support only IKEv1 protocols were having limited experience. In order to enhance the experience of customers using IKEv1 protocols, we are now allowing IKEv1 connections for all of the VPN gateway SKUs, except Basic SKU. For more information, see [VPN Gateway SKUs](./vpn-gateway-about-vpn-gateway-settings.md#gwsku). Note that VPN gateways using IKEv1 might experience up [tunnel reconnects](./vpn-gateway-vpn-faq.md#why-is-my-ikev1-connection-frequently-reconnecting) during Main mode rekeys.
+Traditionally, we allowed IKEv1 connections for Basic SKUs only, and allowed IKEv2 connections for all VPN gateway SKUs other than Basic SKUs. Because the Basic SKUs allow only one connection and have other limitations, such as performance, customers using legacy devices that support only IKEv1 protocols had a limited experience. To enhance the experience of customers using IKEv1 protocols, we're now allowing IKEv1 connections for all of the VPN gateway SKUs, except the Basic SKU. For more information, see [VPN Gateway SKUs](./vpn-gateway-about-vpn-gateway-settings.md#gwsku). Note that VPN gateways using IKEv1 might experience [tunnel reconnects](./vpn-gateway-vpn-faq.md#why-is-my-ikev1-connection-frequently-reconnecting) during Main mode rekeys.
-![Azure VPN Gateway IKEv1 and IKEv2 connections](./media/vpn-gateway-about-compliance-crypto/ikev1-ikev2-connections.png)
-When IKEv1 and IKEv2 connections are applied to the same VPN gateway, the transit between these two connections is auto-enabled.
+When IKEv1 and IKEv2 connections are applied to the same VPN gateway, the transit between these two connections is autoenabled.
## About IPsec and IKE policy parameters for Azure VPN gateways
-IPsec and IKE protocol standard supports a wide range of cryptographic algorithms in various combinations. If you do not request a specific combination of cryptographic algorithms and parameters, Azure VPN gateways use a set of default proposals. The default policy sets were chosen to maximize interoperability with a wide range of third-party VPN devices in default configurations. As a result, the policies and the number of proposals cannot cover all possible combinations of available cryptographic algorithms and key strengths.
+The IPsec and IKE protocol standards support a wide range of cryptographic algorithms in various combinations. If you don't request a specific combination of cryptographic algorithms and parameters, Azure VPN gateways use a set of default proposals. The default policy sets were chosen to maximize interoperability with a wide range of third-party VPN devices in default configurations. As a result, the policies and the number of proposals can't cover all possible combinations of available cryptographic algorithms and key strengths.
### Default policy
For example, the IKEv2 main mode policies for Azure VPN gateways utilize only Di
Azure VPN gateways now support per-connection, custom IPsec/IKE policy. For a Site-to-Site or VNet-to-VNet connection, you can choose a specific combination of cryptographic algorithms for IPsec and IKE with the desired key strength, as shown in the following example:
-![ipsec-ike-policy](./media/vpn-gateway-about-compliance-crypto/ipsecikepolicy.png)
You can create an IPsec/IKE policy and apply it to a new or existing connection.

### Workflow
-1. Create the virtual networks, VPN gateways, or local network gateways for your connectivity topology as described in other how-to documents
-2. Create an IPsec/IKE policy
-3. You can apply the policy when you create a S2S or VNet-to-VNet connection
-4. If the connection is already created, you can apply or update the policy to an existing connection
+1. Create the virtual networks, VPN gateways, or local network gateways for your connectivity topology as described in other how-to documents.
+2. Create an IPsec/IKE policy.
+3. You can apply the policy when you create an S2S or VNet-to-VNet connection.
+4. If the connection is already created, you can apply or update the policy to an existing connection.
## IPsec/IKE policy FAQ
vpn-gateway Vpn Gateway Bgp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-bgp-overview.md
description: Learn about Border Gateway Protocol (BGP) in Azure VPN, the standar
Previously updated : 05/18/2022 Last updated : 05/02/2023
-# About BGP and Azure VPN Gateway
+# About BGP and VPN Gateway
This article provides an overview of BGP (Border Gateway Protocol) support in Azure VPN Gateway.
BGP is the standard routing protocol commonly used in the Internet to exchange r
## <a name="why"></a>Why use BGP?
-BGP is an optional feature you can use with Azure Route-Based VPN gateways. You should also make sure your on-premises VPN devices support BGP before you enable the feature. You can continue to use Azure VPN gateways and your on-premises VPN devices without BGP. It is the equivalent of using static routes (without BGP) *vs.* using dynamic routing with BGP between your networks and Azure.
+BGP is an optional feature you can use with Azure Route-Based VPN gateways. You should also make sure your on-premises VPN devices support BGP before you enable the feature. You can continue to use Azure VPN gateways and your on-premises VPN devices without BGP. It's the equivalent of using static routes (without BGP) *vs.* using dynamic routing with BGP between your networks and Azure.
There are several advantages and new capabilities with BGP:
There are several advantages and new capabilities with BGP:
With BGP, you only need to declare a minimum prefix to a specific BGP peer over the IPsec S2S VPN tunnel. It can be as small as a host prefix (/32) of the BGP peer IP address of your on-premises VPN device. You can control which on-premises network prefixes you want to advertise to Azure to allow your Azure Virtual Network to access.
-You can also advertise larger prefixes that may include some of your VNet address prefixes, such as a large private IP address space (for example, 10.0.0.0/8). Note though the prefixes cannot be identical with any one of your VNet prefixes. Those routes identical to your VNet prefixes will be rejected.
+You can also advertise larger prefixes that may include some of your VNet address prefixes, such as a large private IP address space (for example, 10.0.0.0/8). Note, though, that the prefixes can't be identical to any one of your VNet prefixes. Routes identical to your VNet prefixes will be rejected.
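The acceptance rule above hinges on exact equality, not containment: a supernet that merely contains a VNet prefix is fine, while an identical prefix is rejected. As an informal sketch (Python and the `advertisement_accepted` helper are hypothetical, not part of any Azure API), using the standard `ipaddress` module:

```python
from ipaddress import ip_network

def advertisement_accepted(advertised, vnet_prefixes):
    """Reject a route only when it is identical to a VNet prefix;
    larger (containing) prefixes are allowed."""
    adv = ip_network(advertised)
    return all(adv != ip_network(p) for p in vnet_prefixes)

vnet = ["10.0.1.0/24"]
print(advertisement_accepted("10.0.0.0/8", vnet))   # True  -- supernet, accepted
print(advertisement_accepted("10.0.1.0/24", vnet))  # False -- identical, rejected
```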
### <a name="multitunnel"></a>Support multiple tunnels between a VNet and an on-premises site with automatic failover based on BGP
-You can establish multiple connections between your Azure VNet and your on-premises VPN devices in the same location. This capability provides multiple tunnels (paths) between the two networks in an active-active configuration. If one of the tunnels is disconnected, the corresponding routes will be withdrawn via BGP and the traffic automatically shifts to the remaining tunnels.
+You can establish multiple connections between your Azure VNet and your on-premises VPN devices in the same location. This capability provides multiple tunnels (paths) between the two networks in an active-active configuration. If one of the tunnels is disconnected, the corresponding routes will be withdrawn via BGP, and the traffic automatically shifts to the remaining tunnels.
The following diagram shows a simple example of this highly available setup:
-![Multiple active paths](./media/vpn-gateway-bgp-overview/multiple-active-tunnels.png)
### <a name="transitrouting"></a>Support transit routing between your on-premises networks and multiple Azure VNets
-BGP enables multiple gateways to learn and propagate prefixes from different networks, whether they are directly or indirectly connected. This can enable transit routing with Azure VPN gateways between your on-premises sites or across multiple Azure Virtual Networks.
+BGP enables multiple gateways to learn and propagate prefixes from different networks, whether they're directly or indirectly connected. This can enable transit routing with Azure VPN gateways between your on-premises sites or across multiple Azure Virtual Networks.
The following diagram shows an example of a multi-hop topology with multiple paths that can transit traffic between the two on-premises networks through Azure VPN gateways within the Microsoft Networks:
-![Multi-hop transit](./media/vpn-gateway-bgp-overview/full-mesh-transit.png)
## <a name="faq"></a>BGP FAQ
+See the VPN Gateway [BGP FAQ](vpn-gateway-vpn-faq.md#bgp) for frequently asked questions.
## Next steps