Updates from: 06/23/2023 01:16:02
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Saml Service Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-service-provider.md
Previously updated : 04/24/2023 Last updated : 06/24/2023
Now that your policy can create SAML responses, you must configure the policy to
1. Open the *SignUpOrSigninSAML.xml* file in your preferred editor.
-1. Change the `PolicyId` and `PublicPolicyUri` values of the policy to `B2C_1A_signup_signin_saml` and `http://<tenant-name>.onmicrosoft.com/B2C_1A_signup_signin_saml`.
+1. Change the value of:
+
+ 1. `PolicyId` to `B2C_1A_signup_signin_saml`
+
+ 1. `PublicPolicyUri` to `http://<tenant-name>.onmicrosoft.com/B2C_1A_signup_signin_saml`. Replace the `<tenant-name>` placeholder with the subdomain of your Azure AD B2C tenant's domain name. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, learn [how to read your tenant details](tenant-management-read-tenant-name.md#get-your-tenant-name).
```xml
<TrustFrameworkPolicy
  xmlns:xsd="http://www.w3.org/2001/XMLSchema"
  xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06"
  PolicySchemaVersion="0.3.0.0"
- TenantId="tenant-name.onmicrosoft.com"
+ TenantId="<tenant-name>.onmicrosoft.com"
  PolicyId="B2C_1A_signup_signin_saml"
  PublicPolicyUri="http://<tenant-name>.onmicrosoft.com/B2C_1A_signup_signin_saml">
```
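The `<tenant-name>` value is simply the first DNS label of the tenant's primary domain. As an illustrative sketch (the `tenant_subdomain` helper is hypothetical, not part of any B2C tooling), it can be extracted in shell:

```shell
# Hypothetical helper: extract the subdomain (first DNS label) from a
# tenant's primary domain, e.g. "contoso.onmicrosoft.com" -> "contoso".
tenant_subdomain() {
  printf '%s\n' "${1%%.*}"   # strip everything from the first dot onward
}

tenant_subdomain "contoso.onmicrosoft.com"   # → contoso
```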
If you started from a different folder in the starter pack or you customized the
The relying party element determines which protocol your application uses. The default is `OpenId`. The `Protocol` element must be changed to `SAML`. The output claims define the claims mapping to the SAML assertion.
-Replace the entire `<TechnicalProfile>` element in the `<RelyingParty>` element with the following technical profile XML. Update `tenant-name` with the name of your Azure AD B2C tenant.
+Replace the entire `<TechnicalProfile>` element in the `<RelyingParty>` element with the following technical profile XML.
```xml
<TechnicalProfile Id="PolicyProfile">
```
active-directory-b2c Technicalprofiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/technicalprofiles.md
Previously updated : 11/30/2021 Last updated : 06/22/2023
The **Protocol** element specifies the protocol to be used for the communication
| Attribute | Required | Description |
| --------- | -------- | ----------- |
| Name | Yes | The name of a valid protocol supported by Azure AD B2C that's used as part of the technical profile. Possible values are `OAuth1`, `OAuth2`, `SAML2`, `OpenIdConnect`, `Proprietary`, or `None`. |
-| Handler | No | When the protocol name is set to `Proprietary`, specifies the name of the assembly that's used by Azure AD B2C to determine the protocol handler. |
+| Handler | No | When the protocol name is set to `Proprietary`, specifies the name of the assembly that's used by Azure AD B2C to determine the protocol handler. If you set the protocol *Name* attribute to `None`, do not include the *Handler* attribute. |
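For illustration, here's a sketch of a technical profile that sets both attributes for a proprietary protocol. The profile `Id` and display name are made up; the handler string follows the assembly-qualified form used by B2C's built-in REST handler, so verify it against the current reference before use:

```xml
<TechnicalProfile Id="Example-REST-API">
  <DisplayName>Illustrative proprietary protocol</DisplayName>
  <Protocol Name="Proprietary"
            Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
</TechnicalProfile>
```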
## Metadata
active-directory-domain-services Join Rhel Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-rhel-linux-vm.md
To verify that the VM has been successfully joined to the managed domain, start
1. Create a new SSH connection from your console. Use a domain account that belongs to the managed domain with the `ssh -l` command, such as `contosoadmin@aaddscontoso.com`, and then enter the address of your VM, such as *rhel.aaddscontoso.com*. If you use the Azure Cloud Shell, use the public IP address of the VM rather than the internal DNS name.

   ```bash
   - sudo ssh -l contosoadmin@AADDSCONTOSO.com rhel.aaddscontoso.com
   + ssh -l contosoadmin@AADDSCONTOSO.com rhel.aaddscontoso.com
   ```

1. When you've successfully connected to the VM, verify that the home directory was initialized correctly:

   ```bash
   - sudo pwd
   + pwd
   ```

   You should be in the */home* directory, with your own directory that matches the user account.
To verify that the VM has been successfully joined to the managed domain, start
1. Now check that the group memberships are being resolved correctly:

   ```bash
   - sudo id
   + id
   ```

   You should see your group memberships from the managed domain.
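The `id` check above can also be scripted. A minimal sketch, assuming only standard `id` behavior — the `check_domain_user` helper is hypothetical, and a local account is used here for illustration since a managed-domain account only resolves on the joined VM:

```shell
# Hypothetical helper: report whether a user account resolves on this host.
# On a domain-joined VM, SSSD makes managed-domain accounts resolvable too.
check_domain_user() {
  local user="$1"
  if id "$user" >/dev/null 2>&1; then
    echo "resolved: $user"
  else
    echo "not resolved: $user"
    return 1
  fi
}

check_domain_user "root"   # a local account, so it resolves on any host
```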
active-directory Concept Authentication Default Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md
Previously updated : 03/12/2023 Last updated : 06/22/2023
# Protecting authentication methods in Azure Active Directory

>[!NOTE]
->The Microsoft managed value for Authenticator Lite will move from disabled to enabled on June 9th, 2023. All tenants left in the default state 'Microsoft managed' will be enabled for the feature on June 9th.
+>The Microsoft managed value for Authenticator Lite will move from disabled to enabled on June 26th, 2023. All tenants left in the default state **Microsoft managed** will be enabled for the feature on June 26th.
Azure Active Directory (Azure AD) adds and improves security features to better protect customers against increasing attacks. As new attack vectors become known, Azure AD may respond by enabling protection by default to help customers stay ahead of emerging security threats.
Number matching is a good example of protection for an authentication method tha
As MFA fatigue attacks rise, number matching becomes more critical to sign-in security. As a result, Microsoft will change the default behavior for push notifications in Microsoft Authenticator.
->[!NOTE]
->Number matching will begin to be enabled for all users of Microsoft Authenticator starting May 08, 2023.
-
## Microsoft managed settings

In addition to configuring Authentication methods policy settings to be either **Enabled** or **Disabled**, IT admins can configure some settings in the Authentication methods policy to be **Microsoft managed**. A setting that is configured as **Microsoft managed** allows Azure AD to enable or disable the setting.
The following table lists each setting that can be set to Microsoft managed and
| Setting | Configuration |
| ------- | ------------- |
-| [Registration campaign](how-to-mfa-registration-campaign.md) | Disabled |
+| [Registration campaign](how-to-mfa-registration-campaign.md) | Beginning in July, 2023, enabled for SMS and voice call users with free and trial subscriptions. |
| [Location in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
| [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
| [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Disabled |
| [Authenticator Lite](how-to-mfa-authenticator-lite.md) | Disabled |
-As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/).
+As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/). For example, see our blog post [It's Time to Hang Up on Phone Transports for Authentication](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752) for more information about the need to move away from using SMS and voice calls, which led to default enablement for the registration campaign to help users to set up Authenticator for modern authentication.
## Next steps
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
Previously updated : 06/10/2023 Last updated : 06/22/2023
You can nudge users to set up Microsoft Authenticator during sign-in. Users will go through their regular sign-in, perform multifactor authentication as usual, and then be prompted to set up Microsoft Authenticator. You can include or exclude users or groups to control who gets nudged to set up the app. This allows targeted campaigns to move users from less secure authentication methods to the Authenticator app.
-In addition to choosing who can be nudged, you can define how many days a user can postpone, or "snooze", the nudge. If a user taps **Not now** to snooze the app setup, they'll be nudged again on the next MFA attempt after the snooze duration has elapsed.
+In addition to choosing who can be nudged, you can define how many days a user can postpone, or "snooze", the nudge. If a user taps **Not now** to postpone the app setup, they'll be nudged again on the next MFA attempt after the snooze duration has elapsed. Users with free and trial subscriptions can postpone the app setup up to three times.
>[!NOTE]
>As users go through their regular sign-in, Conditional Access policies that govern security info registration apply before the user is prompted to set up Authenticator. For example, if a Conditional Access policy requires that security info updates occur only on an internal network, then users won't be prompted to set up Authenticator unless they're on the internal network.
In addition to choosing who can be nudged, you can define how many days a user c
![Installation complete](./media/how-to-nudge-authenticator-app/finish.png)
-1. If a user wishes to not install the Authenticator app, they can tap **Not now** to snooze the prompt for up to 14 days, which can be set by an admin.
+1. If a user doesn't want to install the Authenticator app, they can tap **Not now** to snooze the prompt for up to 14 days; an admin sets the snooze duration. Users with free and trial subscriptions can snooze the prompt up to three times.
![Snooze installation](./media/how-to-nudge-authenticator-app/snooze.png)
In addition to choosing who can be nudged, you can define how many days a user c
To enable a registration campaign in the Azure portal, complete the following steps:

1. In the Azure portal, click **Security** > **Authentication methods** > **Registration campaign**.
-1. For **State**, click **Enabled**, select any users or groups to exclude from the registration campaign, and then click **Save**.
+1. For **State**, click **Microsoft managed** or **Enabled**. In the following screenshot, the registration campaign is **Microsoft managed**. That setting allows Microsoft to set the default value to be either enabled or disabled. For the registration campaign, the Microsoft managed value is Enabled for voice call and SMS users with free and trial subscriptions. For more information, see [Protecting authentication methods in Azure Active Directory](concept-authentication-default-enablement.md).
![Screenshot of enabling a registration campaign.](./media/how-to-nudge-authenticator-app/registration-campaign.png)
+1. Select any users or groups to exclude from the registration campaign, and then click **Save**.
+
## Enable the registration campaign policy using Graph Explorer

In addition to using the Azure portal, you can also enable the registration campaign policy by using Graph Explorer. To enable the registration campaign policy, use the Authentication Methods Policy Graph APIs. **Global administrators** and **Authentication Method Policy administrators** can update the policy.
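As a sketch of what such a Graph Explorer call might look like — property names are taken from the Graph `authenticationMethodsPolicy` resource as we understand it, so verify them against the current API reference before use:

```http
PATCH https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy
Content-Type: application/json

{
  "registrationEnforcement": {
    "authenticationMethodsRegistrationCampaign": {
      "snoozeDurationInDays": 14,
      "state": "enabled",
      "excludeTargets": [],
      "includeTargets": [
        {
          "id": "all_users",
          "targetType": "group",
          "targetedAuthenticationMethod": "microsoftAuthenticator"
        }
      ]
    }
  }
}
```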
active-directory Howto Authentication Passwordless Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-faqs.md
For a full list of endpoints needed to use Microsoft online products, see [Offic
To check if the Windows 10 client device has the right domain join type, use the following command:

```console
-Dsregcmd/status
+Dsregcmd /status
```

The following sample output shows that the device is Azure AD joined, as *AzureADJoined* is set to *YES*:
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Users can register for passwordless phone sign-in directly within the Microsoft
6. Once signed in, continue following the additional steps to set up phone sign-in.

### Guided registration with My Sign-ins
+> [!NOTE]
+> Users can register Microsoft Authenticator via combined registration only if the Microsoft Authenticator authentication mode is set to **Any** or **Push**.
+
To register the Microsoft Authenticator app, follow these steps:

1. Browse to [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo).
To register the Microsoft Authenticator app, follow these steps:
1. Follow the instructions to install and configure the Microsoft Authenticator app on your device.
1. Select **Done** to complete Microsoft Authenticator configuration.
-### Enable phone sign-in
+#### Enable phone sign-in
After users have registered themselves for the Microsoft Authenticator app, they need to enable phone sign-in:
active-directory Howto Authentication Sms Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-sms-signin.md
Previously updated : 01/29/2023 Last updated : 06/22/2023
# Configure and enable users for SMS-based authentication using Azure Active Directory
-To simplify and secure sign-in to applications and services, Azure Active Directory (Azure AD) provides multiple authentication options. SMS-based authentication lets users sign-in without providing, or even knowing, their user name and password. After their account is created by an identity administrator, they can enter their phone number at the sign-in prompt. They receive an authentication code via text message that they can provide to complete the sign-in. This authentication method simplifies access to applications and services, especially for Frontline workers.
+To simplify and secure sign-in to applications and services, Azure Active Directory (Azure AD) provides multiple authentication options. SMS-based authentication lets users sign in without providing, or even knowing, their user name and password. After their account is created by an identity administrator, they can enter their phone number at the sign-in prompt. They receive an SMS authentication code that they can provide to complete the sign-in. This authentication method simplifies access to applications and services, especially for frontline workers.
This article shows you how to enable SMS-based authentication for select users or groups in Azure AD. For a list of apps that support using SMS-based sign-in, see [App support for SMS-based authentication](how-to-authentication-sms-supported-apps.md).
To complete this article, you need the following resources and privileges:
* An Azure Active Directory tenant associated with your subscription.
    * If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* You need *global administrator* privileges in your Azure AD tenant to enable SMS-based authentication.
-* Each user that's enabled in the text message authentication method policy must be licensed, even if they don't use it. Each enabled user must have one of the following Azure AD, EMS, Microsoft 365 licenses:
+* Each user that's enabled in the SMS authentication method policy must be licensed, even if they don't use it. Each enabled user must have one of the following Azure AD, EMS, Microsoft 365 licenses:
    * [Microsoft 365 F1 or F3][m365-firstline-workers-licensing]
    * [Azure Active Directory Premium P1 or P2][azure-ad-pricing]
    * [Enterprise Mobility + Security (EMS) E3 or E5][ems-licensing] or [Microsoft 365 E3 or E5][m365-licensing]
First, let's enable SMS-based authentication for your Azure AD tenant.
1. Sign in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
1. Search for and select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
1. Under the **Manage** menu header, select **Authentication methods** > **Policies**.
-1. From the list of available authentication methods, select **Text message**.
+1. From the list of available authentication methods, select **SMS**.
- ![Screenshot that shows how to select the text message authentication method.](./media/howto-authentication-sms-signin/select-text-message-policy.png)
+ ![Screenshot that shows how to select the SMS authentication method.](./media/howto-authentication-sms-signin/authentication-methods-policy.png)
-1. Set **Enable** to *Yes*. Then select the **Target users**.
+1. Click **Enable** and select **Target users**. You can choose to enable SMS-based authentication for *All users* or *Select users* and groups.
- ![Enable text authentication in the authentication method policy window](./media/howto-authentication-sms-signin/enable-text-authentication-method.png)
-
- You can choose to enable SMS-based authentication for *All users* or *Select users* and groups. In the next section, you enable SMS-based authentication for a test user.
+ ![Enable SMS authentication in the authentication method policy window](./media/howto-authentication-sms-signin/enable-sms-authentication-method.png)
## Assign the authentication method to users and groups

With SMS-based authentication enabled in your Azure AD tenant, now select some users or groups to be allowed to use this authentication method.
-1. In the text message authentication policy window, set **Target** to *Select users*.
+1. In the SMS authentication policy window, set **Target** to *Select users*.
1. Choose **Add users or groups**, then select a test user or group, such as *Contoso User* or *Contoso SMS Users*.
1. When you've selected your users or groups, choose **Select**, then **Save** the updated authentication method policy.
-Each user that's enabled in the text message authentication method policy must be licensed, even if they don't use it. Make sure you have the appropriate licenses for the users you enable in the authentication method policy, especially when you enable the feature for large groups of users.
+Each user that's enabled in the SMS authentication method policy must be licensed, even if they don't use it. Make sure you have the appropriate licenses for the users you enable in the authentication method policy, especially when you enable the feature for large groups of users.
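The same assignment can be made programmatically. A hedged sketch — the resource path and property names are based on the Graph `smsAuthenticationMethodConfiguration` resource as we understand it, and `<group-object-id>` is a placeholder:

```http
PATCH https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/sms
Content-Type: application/json

{
  "@odata.type": "#microsoft.graph.smsAuthenticationMethodConfiguration",
  "state": "enabled",
  "includeTargets": [
    {
      "id": "<group-object-id>",
      "targetType": "group",
      "isUsableForSignIn": true
    }
  ]
}
```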
## Set a phone number for user accounts
To test the user account that's now enabled for SMS-based sign-in, complete the
![Enter a phone number at the sign-in prompt for the test user](./media/howto-authentication-sms-signin/sign-in-with-phone-number.png)
-1. A text message is sent to the phone number provided. To complete the sign-in process, enter the 6-digit code provided in the text message at the sign-in prompt.
+1. An SMS message is sent to the phone number provided. To complete the sign-in process, enter the 6-digit code provided in the SMS message at the sign-in prompt.
- ![Enter the confirmation code sent via text message to the user's phone number](./media/howto-authentication-sms-signin/sign-in-with-phone-number-confirmation-code.png)
+ ![Enter the SMS confirmation code sent to the user's phone number](./media/howto-authentication-sms-signin/sign-in-with-phone-number-confirmation-code.png)
1. The user is now signed in without the need to provide a username or password.
For more information on the end-user experience, see [SMS sign-in user experienc
If you receive an error when you try to set a phone number for a user account in the Azure portal, review the following troubleshooting steps:

1. Make sure that you're enabled for SMS-based sign-in.
-1. Confirm that the user account is enabled in the *Text message* authentication method policy.
+1. Confirm that the user account is enabled in the **SMS** authentication method policy.
1. Make sure you set the phone number with the proper formatting, as validated in the Azure portal (such as *+1 4251234567*).
1. Make sure that the phone number isn't used elsewhere in your tenant.
1. Check that there's no voice number set on the account. If a voice number is set, delete it and then try to set the phone number again.
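The formatting check above can be approximated with a simple pattern. This is only an illustration of the *+&lt;country code&gt; &lt;number&gt;* shape shown in the example; the pattern is an assumption, not the portal's actual validation rule:

```shell
# Illustrative check only: accepts "+<1-3 digit country code> <digits>",
# e.g. "+1 4251234567". The Azure portal's real validation may differ.
valid_phone_format() {
  local re='^\+[0-9]{1,3} [0-9]{4,14}$'
  [[ "$1" =~ $re ]]
}

valid_phone_format "+1 4251234567" && echo "ok"           # → ok
valid_phone_format "4251234567" || echo "bad format"      # → bad format
```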
active-directory Concept Token Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-token-protection.md
description: Learn how to use token protection in Conditional Access policies.
Previously updated : 06/05/2023 Last updated : 06/21/2023
Token protection creates a cryptographically secure tie between the token and th
With this preview, we're giving you the ability to create a Conditional Access policy to require token protection for sign-in tokens (refresh tokens) for specific services. We support token protection for sign-in tokens in Conditional Access for desktop applications accessing Exchange Online and SharePoint Online on Windows devices.
+> [!IMPORTANT]
+> The following changes have been made to Token Protection since the initial public preview release:
+> * **Sign In logs output:** The value of the string used in "enforcedSessionControls" and "sessionControlsNotSatisfied" changed from "Binding" to "SignInTokenProtection" in late June 2023. Queries on Sign In Log data should be updated to reflect this change.
> [!NOTE]
> We may use the terms sign-in tokens and refresh tokens interchangeably in this content. This preview doesn't currently support access tokens or web cookies.
This preview supports the following configurations:
- PowerQuery extension for Excel
- Extensions to Visual Studio Code that access Exchange or SharePoint
- Visual Studio
+ - The new Teams 2.1 preview client gets blocked after sign out due to a bug. This bug should be fixed in an August release.
- The following Windows client devices aren't supported:
   - Windows Server
   - Surface Hub
You can also use [Log Analytics](../reports-monitoring/tutorial-log-analytics-wi
Here's a sample Log Analytics query searching the non-interactive sign-in logs for the last seven days, highlighting **Blocked** versus **Allowed** requests by **Application**. These queries are only samples and are subject to change.
+> [!NOTE]
+> **Sign In logs output:** The value of the string used in "enforcedSessionControls" and "sessionControlsNotSatisfied" changed from "Binding" to "SignInTokenProtection" in late June 2023. Queries on Sign In Log data should be updated to reflect this change.
+
```kusto
//Per Apps query
// Select the log you want to query (SigninLogs or AADNonInteractiveUserSignInLogs)
AADNonInteractiveUserSignInLogs
//Add userPrincipalName if you want to filter
// | where UserPrincipalName =="<user_principal_Name>"
| mv-expand todynamic(ConditionalAccessPolicies)
-| where ConditionalAccessPolicies ["enforcedSessionControls"] contains '["Binding"]'
+| where ConditionalAccessPolicies ["enforcedSessionControls"] contains '["SignInTokenProtection"]'
| where ConditionalAccessPolicies.result !="reportOnlyNotApplied" and ConditionalAccessPolicies.result !="notApplied"
| extend SessionNotSatisfyResult = ConditionalAccessPolicies["sessionControlsNotSatisfied"]
-| extend Result = case (SessionNotSatisfyResult contains 'Binding', 'Block','Allow')
+| extend Result = case (SessionNotSatisfyResult contains 'SignInTokenProtection', 'Block','Allow')
| summarize by Id, UserPrincipalName, AppDisplayName, Result
| summarize Requests = count(), Users = dcount(UserPrincipalName), Block = countif(Result == "Block"), Allow = countif(Result == "Allow"), BlockedUsers = dcountif(UserPrincipalName, Result == "Block") by AppDisplayName
| extend PctAllowed = round(100.0 * Allow/(Allow+Block), 2)
```
AADNonInteractiveUserSignInLogs
//Add userPrincipalName if you want to filter
// | where UserPrincipalName =="<user_principal_Name>"
| mv-expand todynamic(ConditionalAccessPolicies)
-| where ConditionalAccessPolicies.enforcedSessionControls contains '["Binding"]'
+| where ConditionalAccessPolicies.enforcedSessionControls contains '["SignInTokenProtection"]'
| where ConditionalAccessPolicies.result !="reportOnlyNotApplied" and ConditionalAccessPolicies.result !="notApplied"
| extend SessionNotSatisfyResult = ConditionalAccessPolicies.sessionControlsNotSatisfied
-| extend Result = case (SessionNotSatisfyResult contains 'Binding', 'Block','Allow')
+| extend Result = case (SessionNotSatisfyResult contains 'SignInTokenProtection', 'Block','Allow')
| summarize by Id, UserPrincipalName, AppDisplayName, ResourceDisplayName, Result
| summarize Requests = count(), Block = countif(Result == "Block"), Allow = countif(Result == "Allow") by UserPrincipalName, AppDisplayName, ResourceDisplayName
| extend PctAllowed = round(100.0 * Allow/(Allow+Block), 2)
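The `PctAllowed` arithmetic in these queries is plain ratio math. A quick shell sketch of the same computation — the `pct_allowed` helper is illustrative, not part of the Log Analytics tooling:

```shell
# Mirror of the query's: PctAllowed = round(100.0 * Allow/(Allow+Block), 2)
pct_allowed() {
  local allow="$1" block="$2"
  # awk handles the floating-point division and 2-decimal rounding
  awk -v a="$allow" -v b="$block" 'BEGIN { printf "%.2f\n", 100.0 * a / (a + b) }'
}

pct_allowed 75 25   # → 75.00
```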
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/overview.md
Previously updated : 02/13/2023 Last updated : 06/20/2023

# What is Conditional Access?
-The modern security perimeter now extends beyond an organization's network to include user and device identity. Organizations can use identity-driven signals as part of their access control decisions.
+Microsoft is providing Conditional Access templates to organizations in report-only mode starting in January of 2023. We may add more policies as new threats emerge.
+
+The modern security perimeter extends beyond an organization's network perimeter to include user and device identity. Organizations now use identity-driven signals as part of their access control decisions.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4MwZs]
-Conditional Access brings signals together, to make decisions, and enforce organizational policies. Azure AD Conditional Access is at the heart of the new identity-driven control plane.
+Azure AD Conditional Access brings signals together, to make decisions, and enforce organizational policies. Conditional Access is Microsoft's [Zero Trust policy engine](/security/zero-trust/deploy/identity) taking signals from various sources into account when enforcing policy decisions.
-![Conceptual Conditional signal plus decision to get enforcement](./media/overview/conditional-access-signal-decision-enforcement.png)
Conditional Access policies at their simplest are if-then statements: if a user wants to access a resource, then they must complete an action. Example: A payroll manager wants to access the payroll application and is required to do multifactor authentication to access it.
Administrators are faced with two primary goals:
Use Conditional Access policies to apply the right access controls when needed to keep your organization secure.
-![Conceptual Conditional Access process flow](./media/overview/conditional-access-overview-how-it-works.png)
> [!IMPORTANT]
> Conditional Access policies are enforced after first-factor authentication is completed. Conditional Access isn't intended to be an organization's first line of defense for scenarios like denial-of-service (DoS) attacks, but it can use signals from these events to determine access.

## Common signals
-Common signals that Conditional Access can take in to account when making a policy decision include the following signals:
+Conditional Access takes signals from various sources into account when making access decisions.
+These signals include:
- User or group membership - Policies can be targeted to specific users and groups giving administrators fine-grained control over access.
Common signals that Conditional Access can take in to account when making a poli
- Application
   - Users attempting to access specific applications can trigger different Conditional Access policies.
- Real-time and calculated risk detection
- - Signals integration with [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) allows Conditional Access policies to identify risky sign-in behavior. Policies can then force users to change their password, do multifactor authentication to reduce their risk level, or block access until an administrator takes manual action.
+ - Signals integration with [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) allows Conditional Access policies to identify and remediate risky users and sign-in behavior.
- [Microsoft Defender for Cloud Apps](/defender-cloud-apps/what-is-defender-for-cloud-apps)
- - Enables user application access and sessions to be monitored and controlled in real time, increasing visibility and control over access to and activities done within your cloud environment.
+ - Enables user application access and sessions to be monitored and controlled in real time. This integration increases visibility and control over access to and activities done within your cloud environment.
## Common decisions

- Block access
   - Most restrictive decision
- Grant access
- - Least restrictive decision, can still require one or more of the following options:
+ - Less restrictive decision, can require one or more of the following options:
- Require multifactor authentication
+ - Require authentication strength
- Require device to be marked as compliant - Require Hybrid Azure AD joined device - Require approved client app
- - Require app protection policy (preview)
+ - Require app protection policy
+ - Require password change
+ - Require terms of use
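Putting a common decision together with common signals, a policy for the earlier payroll example might be sketched as a Graph `conditionalAccessPolicy` fragment. The group and application IDs are placeholders, and property names should be verified against the current API reference:

```http
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
Content-Type: application/json

{
  "displayName": "Require MFA for the payroll app",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "users": { "includeGroups": [ "<payroll-managers-group-id>" ] },
    "applications": { "includeApplications": [ "<payroll-app-id>" ] }
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": [ "mfa" ]
  }
}
```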
## Commonly applied policies
Many organizations have [common access concerns that Conditional Access policies
- Blocking risky sign-in behaviors
- Requiring organization-managed devices for specific applications
+Administrators can create policies from scratch or start from a template policy in the portal or using the Microsoft Graph API.
+
+## Administrator experience
+
+Administrators with the [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) role can manage policies in Azure AD.
+
+Conditional Access is found in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access**.
+- The **Overview** page provides a summary of policy state, users, devices, and applications as well as general and security alerts with suggestions.
+- The **Coverage** page provides a synopsis of applications with and without Conditional Access policy coverage over the last seven days.
+- The **Monitoring** page allows administrators to see a graph of sign-ins that can be filtered to see potential gaps in policy coverage.
## License requirements

[!INCLUDE [Active Directory P1 license](../../../includes/active-directory-p1-license.md)]
Risk-based policies require access to [Identity Protection](../identity-protecti
Other products and features that may interact with Conditional Access policies require appropriate licensing for those products and features.
-When licenses required for Conditional Access expire, policies aren't automatically disabled or deleted so customers can migrate away from Conditional Access policies without a sudden change in their security posture. Remaining policies can be viewed and deleted, but no longer updated.
+When licenses required for Conditional Access expire, policies aren't automatically disabled or deleted. This grants customers the ability to migrate away from Conditional Access policies without a sudden change in their security posture. Remaining policies can be viewed and deleted, but no longer updated.
[Security defaults](../fundamentals/concept-fundamentals-security-defaults.md) help protect against identity-related attacks and are available for all customers.
When licenses required for Conditional Access expire, policies aren't automatica
- [Building a Conditional Access policy piece by piece](concept-conditional-access-policies.md)
- [Plan your Conditional Access deployment](plan-conditional-access.md)
-- [Learn about Identity Protection](../identity-protection/overview-identity-protection.md)
-- [Learn about Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)
-- [Learn about Microsoft Intune](/intune/index)
active-directory Configurable Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configurable-token-lifetimes.md
Previously updated : 04/04/2023 Last updated : 06/21/2023

# Configurable token lifetimes in the Microsoft identity platform (preview)
-You can specify the lifetime of an access, ID, or SAML token issued by the Microsoft identity platform. You can set token lifetimes for all apps in your organization or for a multi-tenant (multi-organization) application. We currently don't support configuring the token lifetimes for service principals or [managed identity service principals](../managed-identities-azure-resources/overview.md).
+You can specify the lifetime of an access, ID, or SAML token issued by the Microsoft identity platform. You can set token lifetimes for all apps in your organization or for a multi-tenant (multi-organization) application. We currently don't support configuring the token lifetimes for [managed identity service principals](../managed-identities-azure-resources/overview.md).
In Azure AD, a policy object represents a set of rules that are enforced on individual applications or on all applications in an organization. Each policy type has a unique structure, with a set of properties that are applied to objects to which they're assigned.
Refresh and session token configuration are affected by the following properties
|Refresh Token Max Inactive Time |MaxInactiveTime |Refresh tokens |90 days | |Single-Factor Refresh Token Max Age |MaxAgeSingleFactor |Refresh tokens (for any users) |Until-revoked | |Multi-Factor Refresh Token Max Age |MaxAgeMultiFactor |Refresh tokens (for any users) |Until-revoked |
-|Single-Factor Session Token Max Age |MaxAgeSessionSingleFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
-|Multi-Factor Session Token Max Age |MaxAgeSessionMultiFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
+|Single-Factor Session Token Max Age |MaxAgeSessionSingleFactor |Session tokens (persistent and non-persistent) |Until-revoked |
+|Multi-Factor Session Token Max Age |MaxAgeSessionMultiFactor |Session tokens (persistent and non-persistent) |Until-revoked |
Non-persistent session tokens have a Max Inactive Time of 24 hours, whereas persistent session tokens have a Max Inactive Time of 90 days. Anytime the SSO session token is used within its validity period, the validity period is extended by another 24 hours or 90 days. If the SSO session token isn't used within its Max Inactive Time period, it's considered expired and is no longer accepted. Any changes to this default period should be made using [Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
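The sliding-window expiry described above can be sketched as follows. This is an illustrative model only; the actual enforcement happens inside the Microsoft identity platform.

```python
from datetime import datetime, timedelta

def is_session_token_valid(last_used: datetime, now: datetime, persistent: bool) -> bool:
    """Sliding-window check: a non-persistent session token is accepted for
    24 hours after its last use, a persistent one for 90 days (illustrative
    model only; real enforcement happens in the Microsoft identity platform)."""
    max_inactive = timedelta(days=90) if persistent else timedelta(hours=24)
    return now - last_used <= max_inactive
```

Each accepted use resets the window, so a token stays valid indefinitely as long as it's used at least once per window.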
All timespans used here are formatted according to the C# [TimeSpan](/dotnet/api
You can configure token lifetime policies and assign them to apps using Microsoft Graph. For more information, see the [tokenLifetimePolicy resource type](/graph/api/resources/tokenlifetimepolicy) and its associated methods.
+### Service principal policies
+
+You can use the following Microsoft Graph REST API commands for service principal policies.
+
+| Command | Description |
+| --- | --- |
+| [Assign tokenLifetimePolicy](/graph/api/application-post-tokenlifetimepolicies) | Specify the service principal object ID to link the specified policy to a service principal. |
+| [List assigned tokenLifetimePolicy](/graph/api/application-list-tokenlifetimepolicies) | Specify the service principal object ID to get the policies that are assigned to a service principal. |
+| [Remove tokenLifetimePolicy](/graph/api/application-delete-tokenlifetimepolicies) | Specify the service principal object ID to remove a policy from the service principal. |
+ ## Cmdlet reference

These are the cmdlets in the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation).

### Manage policies
-You can use the following cmdlets to manage policies.
+You can use the following commands to manage policies.
-| Cmdlet | Description |
+| Cmdlet | Description |
| --- | --- |
| [New-MgPolicyTokenLifetimePolicy](/powershell/module/microsoft.graph.identity.signins/new-mgpolicytokenlifetimepolicy) | Creates a new policy. |
| [Get-MgPolicyTokenLifetimePolicy](/powershell/module/microsoft.graph.identity.signins/get-mgpolicytokenlifetimepolicy) | Gets all token lifetime policies or a specified policy. |
You can use the following cmdlets to manage policies.
### Application policies

You can use the following cmdlets for application policies.
-| Cmdlet | Description |
+| Cmdlet | Description |
| --- | --- |
| [New-MgApplicationTokenLifetimePolicyByRef](/powershell/module/microsoft.graph.applications/new-mgapplicationtokenlifetimepolicybyref) | Links the specified policy to an application. |
| [Get-MgApplicationTokenLifetimePolicyByRef](/powershell/module/microsoft.graph.applications/get-mgapplicationtokenlifetimepolicybyref) | Gets the policies that are assigned to an application. |
| [Remove-MgApplicationTokenLifetimePolicyByRef](/powershell/module/microsoft.graph.applications/remove-mgapplicationtokenlifetimepolicybyref) | Removes a policy from an application. |
-### Service principal policies
-Service principal policies are not supported.
- ## Next steps

To learn more, read [examples of how to configure token lifetimes](configure-token-lifetimes.md).
active-directory Configure Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configure-token-lifetimes.md
Previously updated : 05/01/2023 Last updated : 06/21/2023

# Configure token lifetime policies (preview)
-In the following steps, you'll implement a common policy scenario that imposes new rules for token lifetime. It's possible to specify the lifetime of an access, SAML, or ID token issued by the Microsoft identity platform. This can be set for all apps in your organization or for a specific service principal. They can also be set for multi-organizations (multi-tenant application).
+In the following steps, you'll implement a common policy scenario that imposes new rules for token lifetime. It's possible to specify the lifetime of an access, SAML, or ID token issued by the Microsoft identity platform. This lifetime can be set for all apps in your organization, for a specific app or service principal, or for a multi-tenant (multi-organization) application.
For more information, see [configurable token lifetimes](configurable-token-lifetimes.md).
For more information, see [configurable token lifetimes](configurable-token-life
To get started, download the latest [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation).
-## Create a policy for web sign-in
+## Create a policy and assign it to an app
-In the following steps, you'll create a policy that requires users to authenticate less frequently in your web app. This policy sets the lifetime of the access/ID tokens for your web app.
+In the following steps, you'll create a policy that requires users to authenticate less frequently in your web app. Assign the policy to an app, which sets the lifetime of the access/ID tokens for your web app.
```powershell
Install-Module Microsoft.Graph
Remove-MgApplicationTokenLifetimePolicyByRef -ApplicationId $applicationObjectId
Remove-MgPolicyTokenLifetimePolicy -TokenLifetimePolicyId $tokenLifetimePolicyId
```
+## Create a policy and assign it to a service principal
+
+In the following steps, you'll create a policy that requires users to authenticate less frequently in your web app. Assign the policy to a service principal, which sets the lifetime of the access/ID tokens for your web app.
+
+Create a token lifetime policy.
+
+```http
+POST https://graph.microsoft.com/v1.0/policies/tokenLifetimePolicies
+Content-Type: application/json
+
+{
+ "definition": [
+ "{\"TokenLifetimePolicy\":{\"Version\":1,\"AccessTokenLifetime\":\"8:00:00\"}}"
+ ],
+ "displayName": "Contoso token lifetime policy",
+ "isOrganizationDefault": false
+}
+```
+
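Note that the `definition` property is an array of JSON-encoded strings embedded inside the JSON body, which is why the inner quotes are escaped. A sketch of building that payload programmatically (a hypothetical helper, not part of any Microsoft SDK):

```python
import json

def build_token_lifetime_policy(display_name: str, access_token_lifetime: str = "8:00:00") -> dict:
    """Build a request body for POST /policies/tokenLifetimePolicies.
    Each 'definition' entry is itself a JSON-encoded string, so the inner
    policy object must be serialized separately."""
    inner = json.dumps({
        "TokenLifetimePolicy": {
            "Version": 1,
            "AccessTokenLifetime": access_token_lifetime,
        }
    })
    return {
        "definition": [inner],
        "displayName": display_name,
        "isOrganizationDefault": False,
    }
```

Serializing the inner object separately produces exactly the escaped string shown in the request above.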
+Assign the policy to a service principal.
+
+```http
+POST https://graph.microsoft.com/v1.0/servicePrincipals/11111111-1111-1111-1111-111111111111/tokenLifetimePolicies/$ref
+Content-Type: application/json
+
+{
+ "@odata.id":"https://graph.microsoft.com/v1.0/policies/tokenLifetimePolicies/22222222-2222-2222-2222-222222222222"
+}
+```
+
+List the policies on the service principal.
+
+```http
+GET https://graph.microsoft.com/v1.0/servicePrincipals/11111111-1111-1111-1111-111111111111/tokenLifetimePolicies
+```
+
+Remove the policy from the service principal.
+
+```http
+DELETE https://graph.microsoft.com/v1.0/servicePrincipals/11111111-1111-1111-1111-111111111111/tokenLifetimePolicies/22222222-2222-2222-2222-222222222222/$ref
+```
+ ## View existing policies in a tenant

To see all policies that have been created in your organization, run the [Get-MgPolicyTokenLifetimePolicy](/powershell/module/microsoft.graph.identity.signins/get-mgpolicytokenlifetimepolicy) cmdlet. Any results with defined property values that differ from the defaults listed above are in scope of the retirement.
active-directory Msal Net B2c Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-b2c-considerations.md
+
+ Title: Azure AD B2C and MSAL.NET
+description: Considerations when using Azure AD B2C with the Microsoft Authentication Library for .NET (MSAL.NET).
++++++++ Last updated : 02/21/2023+++
+# Customer intent: As an application developer, I want to learn about specific considerations when using Azure AD B2C and MSAL.NET so I can decide if this platform meets my application development needs and requirements.
++
+# Use MSAL.NET to sign in users with social identities
+
+You can use MSAL.NET to sign in users with social identities by using [Azure Active Directory B2C (Azure AD B2C)](../../active-directory-b2c/overview.md). Azure AD B2C is built around the notion of policies. In MSAL.NET, specifying a policy translates to providing an authority.
+
+- When you instantiate the public client application, specify the policy as part of the authority.
+- When you want to apply a policy, call an override of `AcquireTokenInteractive` that accepts the `authority` parameter.
+
+This article applies to MSAL.NET 3.x. For MSAL.NET 2.x, see [Azure AD B2C specifics in MSAL 2.x](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/AAD-B2C-Specifics-MSAL-2.x) in the MSAL.NET Wiki on GitHub.
+
+## Authority for an Azure AD B2C tenant and policy
+
+The authority format for Azure AD B2C is: `https://{azureADB2CHostname}/tfp/{tenant}/{policyName}`
+
+- `azureADB2CHostname` - The name of the Azure AD B2C tenant plus the host. For example, _contosob2c.b2clogin.com_.
+- `tenant` - The domain name or the directory (tenant) ID of the Azure AD B2C tenant. For example, _contosob2c.onmicrosoft.com_ or a GUID, respectively.
+- `policyName` - The name of the user flow or custom policy to apply. For example, a sign-up/sign-in policy like _b2c_1_susi_.
+
+For more information about Azure AD B2C authorities, see [Set redirect URLs to b2clogin.com](../../active-directory-b2c/b2clogin.md).
+
+## Instantiating the application
+
+Provide the authority by calling `WithB2CAuthority()` when you create the application object:
+
+```csharp
+// Azure AD B2C Coordinates
+public static string Tenant = "fabrikamb2c.onmicrosoft.com";
+public static string AzureADB2CHostname = "fabrikamb2c.b2clogin.com";
+public static string ClientID = "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6";
+public static string PolicySignUpSignIn = "b2c_1_susi";
+public static string PolicyEditProfile = "b2c_1_edit_profile";
+public static string PolicyResetPassword = "b2c_1_reset";
+
+public static string AuthorityBase = $"https://{AzureADB2CHostname}/tfp/{Tenant}/";
+public static string Authority = $"{AuthorityBase}{PolicySignUpSignIn}";
+public static string AuthorityEditProfile = $"{AuthorityBase}{PolicyEditProfile}";
+public static string AuthorityPasswordReset = $"{AuthorityBase}{PolicyResetPassword}";
+
+application = PublicClientApplicationBuilder.Create(ClientID)
+ .WithB2CAuthority(Authority)
+ .Build();
+```
+
+## Acquire a token to apply a policy
+
+Acquiring a token for an Azure AD B2C-protected API in a public client application requires you to use the overrides with an authority:
+
+```csharp
+AuthenticationResult authResult = null;
+IEnumerable<IAccount> accounts = await application.GetAccountsAsync(policy);
+IAccount account = accounts.FirstOrDefault();
+try
+{
+ authResult = await application.AcquireTokenSilent(scopes, account)
+ .ExecuteAsync();
+}
+catch (MsalUiRequiredException ex)
+{
+ authResult = await application.AcquireTokenInteractive(scopes)
+ .WithAccount(account)
+ .WithParentActivityOrWindow(ParentActivityOrWindow)
+ .ExecuteAsync();
+}
+```
+
+In the preceding code snippet:
+
+- `policy` is a string containing the name of your Azure AD B2C user flow or custom policy (for example, `PolicySignUpSignIn`).
+- `ParentActivityOrWindow` is required for Android (the Activity) and is optional for other platforms that support a parent UI, such as windows on Microsoft Windows and `UIViewController` in iOS. For more information on the UI dialog, see [WithParentActivityOrWindow](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Acquiring-tokens-interactively#withparentactivityorwindow) on the MSAL Wiki.
+
+Applying a user flow or custom policy (for example, letting the user edit their profile or reset their password) is currently done by calling `AcquireTokenInteractive`. For these two policies, you don't use the returned token/authentication result.
+
+## Profile edit policies
+
+To enable your users to sign in with a social identity and then edit their profile, apply the Azure AD B2C edit profile policy.
+
+Do so by calling `AcquireTokenInteractive` with the authority for that policy. Because the user is already signed in and has an active cookie session, use `Prompt.NoPrompt` to prevent the account selection dialog from being displayed.
+
+```csharp
+private async void EditProfileButton_Click(object sender, RoutedEventArgs e)
+{
+ IEnumerable<IAccount> accounts = await application.GetAccountsAsync(PolicyEditProfile);
+ IAccount account = accounts.FirstOrDefault();
+ try
+ {
+ var authResult = await application.AcquireTokenInteractive(scopes)
+            .WithPrompt(Prompt.NoPrompt)
+ .WithAccount(account)
+ .WithB2CAuthority(AuthorityEditProfile)
+ .ExecuteAsync();
+ }
+ catch
+    {
+        // Errors from the edit-profile flow are intentionally ignored here;
+        // log or handle them appropriately in production code.
+    }
+}
+```
+
+## Resource owner password credentials (ROPC)
+
+For more information on the ROPC flow, see [Sign in with resource owner password credentials grant](v2-oauth-ropc.md).
+
+The ROPC flow is **not recommended** because asking a user for their password in your application isn't secure. For more information about this problem, see [What's the solution to the growing problem of passwords?](https://news.microsoft.com/features/whats-solution-growing-problem-passwords-says-microsoft/).
+
+By using username/password in an ROPC flow, you sacrifice several things:
+
+- Core tenets of modern identity: The password can be phished or replayed because the shared secret can be intercepted. By definition, ROPC is incompatible with passwordless flows.
+- Users who use multi-factor authentication (MFA) won't be able to sign in as there's no interaction.
+- Users won't be able to use single sign-on (SSO).
+
+### Configure the ROPC flow in Azure AD B2C
+
+In your Azure AD B2C tenant, create a new user flow and select **Sign in using ROPC** to enable ROPC for the user flow. For more information, see [Configure the resource owner password credentials flow](../../active-directory-b2c/add-ropc-policy.md).
+
+`IPublicClientApplication` contains the `AcquireTokenByUsernamePassword` method:
+
+```csharp
+AcquireTokenByUsernamePassword(
+ IEnumerable<string> scopes,
+ string username,
+ SecureString password)
+```
+
+The `AcquireTokenByUsernamePassword` method takes the following parameters:
+
+- The _scopes_ for which to obtain an access token.
+- A _username_.
+- A `SecureString` _password_ for the user.
+
+### Limitations of the ROPC flow
+
+The ROPC flow **only works for local accounts**, where your users have registered with Azure AD B2C using an email address or username. This flow doesn't work when federating to an external identity provider supported by Azure AD B2C (Facebook, Google, etc.).
+
+## Google auth and embedded webview
+
+If you're using Google as an identity provider, we recommend you use the system browser as Google doesn't allow [authentication from embedded webviews](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). Currently, `login.microsoftonline.com` is a trusted authority with Google and will work with embedded webview. However, `b2clogin.com` isn't a trusted authority with Google, so users won't be able to authenticate.
+
+## Token caching in MSAL.NET
+
+### Known issue with Azure AD B2C
+
+MSAL.NET supports a [token cache](/dotnet/api/microsoft.identity.client.tokencache). The token caching key is based on the claims returned by the identity provider (IdP).
+
+Currently, MSAL.NET needs two claims to build a token cache key:
+
+- `tid` (the tenant ID)
+- `preferred_username`
+
+Both of these claims may be missing in Azure AD B2C scenarios because not all social identity providers (Facebook, Google, and others) include them in the tokens they return to Azure AD B2C.
+
+A symptom of such a scenario is that MSAL.NET returns `Missing from the token response` when you access the `preferred_username` claim value in tokens issued by Azure AD B2C. MSAL uses the `Missing from the token response` value for `preferred_username` to maintain cache cross-compatibility between libraries.
+
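To illustrate the caching issue, here's a simplified model of a claims-based cache key. This is a sketch only, not MSAL.NET's actual internal key format, which is more involved.

```python
def cache_key(claims: dict) -> str:
    """Simplified model of a claims-based token cache key (not MSAL.NET's
    actual format). When preferred_username is absent, a fixed placeholder
    value keeps the key stable and cross-compatible between libraries."""
    tenant_id = claims.get("tid", "")
    username = claims.get("preferred_username") or "Missing from the token response"
    return f"{username}.{tenant_id}"
```

If the identity provider omits either claim, every token would otherwise hash to an ambiguous key, which is why the placeholder exists.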
+### Workarounds
+
+#### Mitigation for missing tenant ID
+
+The suggested workaround is to use [caching by policy](#acquire-a-token-to-apply-a-policy) described earlier.
+
+Alternatively, you can use the `tid` claim if you're using [custom policies](../../active-directory-b2c/user-flow-overview.md) in Azure AD B2C. Custom policies can return additional claims to your application by using [claims transformation](../../active-directory-b2c/claims-transformation-technical-profile.md).
+
+#### Mitigation for "Missing from the token response"
+
+One option is to use the `name` claim instead of `preferred_username`. To include the `name` claim in ID tokens issued by Azure AD B2C, select **Display Name** when you configure your user flow.
+
+For more information about specifying which claims are returned by your user flows, see [Tutorial: Create user flows in Azure AD B2C](../../active-directory-b2c/tutorial-create-user-flows.md).
+
+## Next steps
+
+More details about acquiring tokens interactively with MSAL.NET for Azure AD B2C applications are provided in the following sample.
+
+| Sample | Platform | Description |
+| --- | --- | --- |
+| [active-directory-b2c-xamarin-native](https://github.com/Azure-Samples/active-directory-b2c-xamarin-native) | Xamarin iOS, Xamarin Android, UWP | A Xamarin Forms app that uses MSAL.NET to authenticate users via Azure AD B2C and then access a web API with the tokens returned. |
active-directory V2 Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols.md
+
+ Title: OAuth 2.0 and OpenID Connect protocols on the Microsoft identity platform
+description: A guide to OAuth 2.0 and OpenID Connect protocols as supported by the Microsoft identity platform.
+++++++ Last updated : 02/27/2023+++++
+# OAuth 2.0 and OpenID Connect (OIDC) in the Microsoft identity platform
+
+Knowing about OAuth or OpenID Connect (OIDC) at the protocol level isn't required to use the Microsoft identity platform. However, you'll encounter protocol terms and concepts as you use the identity platform to add authentication to your apps. As you work with the Azure portal, our documentation, and authentication libraries, knowing some fundamentals can assist your integration and overall experience.
+
+## Roles in OAuth 2.0
+
+Four parties are generally involved in an OAuth 2.0 and OpenID Connect authentication and authorization exchange. These exchanges are often called *authentication flows* or *auth flows*.
+
+![Diagram showing the OAuth 2.0 roles](./media/v2-flows/protocols-roles.svg)
+
+* **Authorization server** - The Microsoft identity platform is the authorization server. Also called an *identity provider* or *IdP*, it securely handles the end-user's information, their access, and the trust relationships between the parties in the auth flow. The authorization server issues the security tokens your apps and APIs use for granting, denying, or revoking access to resources (authorization) after the user has signed in (authenticated).
+
+* **Client** - The client in an OAuth exchange is the application requesting access to a protected resource. The client could be a web app running on a server, a single-page web app running in a user's web browser, or a web API that calls another web API. You'll often see the client referred to as *client application*, *application*, or *app*.
+
+* **Resource owner** - The resource owner in an auth flow is usually the application user, or *end-user* in OAuth terminology. The end-user "owns" the protected resource (their data) which your app accesses on their behalf. The resource owner can grant or deny your app (the client) access to the resources they own. For example, your app might call an external system's API to get a user's email address from their profile on that system. Their profile data is a resource the end-user owns on the external system, and the end-user can consent to or deny your app's request to access their data.
+
+* **Resource server** - The resource server hosts or provides access to a resource owner's data. Most often, the resource server is a web API fronting a data store. The resource server relies on the authorization server to perform authentication and uses information in bearer tokens issued by the authorization server to grant or deny access to resources.
+
+## Tokens
+
+The parties in an authentication flow use **bearer tokens** to assure, verify, and authenticate a principal (user, host, or service) and to grant or deny access to protected resources (authorization). Bearer tokens in the Microsoft identity platform are formatted as [JSON Web Tokens](https://tools.ietf.org/html/rfc7519) (JWT).
+
+Three types of bearer tokens are used by the identity platform as *security tokens*:
+
+* [Access tokens](access-tokens.md) - Access tokens are issued by the authorization server to the client application. The client passes access tokens to the resource server. Access tokens contain the permissions the client has been granted by the authorization server.
+
+* [ID tokens](id-tokens.md) - ID tokens are issued by the authorization server to the client application. Clients use ID tokens when signing in users and to get basic information about them.
+
+* [Refresh tokens](refresh-tokens.md) - The client uses a refresh token, or *RT*, to request new access and ID tokens from the authorization server. Your code should treat refresh tokens and their string content as sensitive data because they're intended for use only by authorization server.
+
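Because these bearer tokens are JWTs, each one is three base64url-encoded segments separated by dots. A minimal sketch of inspecting a token's claims without validating its signature (for debugging only; never skip signature validation when authorizing requests):

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the claims (payload) segment of a JWT without verifying the
    signature -- for inspection and debugging only, never for authorization."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))
```

The padding step matters because JWT segments strip trailing `=` characters, which `base64` decoding otherwise rejects.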
+## App registration
+
+Your client app needs a way to trust the security tokens issued to it by the Microsoft identity platform. The first step in establishing trust is to [register your app](quickstart-register-app.md). When you register your app, the identity platform automatically assigns it some values, while others you configure based on the application's type.
+
+Two of the most commonly referenced app registration settings are:
+
+* **Application (client) ID** - Also called *application ID* and *client ID*, this value is assigned to your app by the identity platform. The client ID uniquely identifies your app in the identity platform and is included in the security tokens the platform issues.
+* **Redirect URI** - The authorization server uses a redirect URI to direct the resource owner's *user-agent* (web browser, mobile app) to another destination after completing their interaction, for example, after the end-user authenticates with the authorization server. Not all client types use redirect URIs.
+
+Your app's registration also holds information about the authentication and authorization *endpoints* you'll use in your code to get ID and access tokens.
+
+## Endpoints
+
+The Microsoft identity platform offers authentication and authorization services using standards-compliant implementations of OAuth 2.0 and OpenID Connect (OIDC) 1.0. Standards-compliant authorization servers like the identity platform provide a set of HTTP endpoints for use by the parties in an auth flow to execute the flow.
+
+The endpoint URIs for your app are generated automatically when you register or configure your app. The endpoints you use in your app's code depend on the application's type and the identities (account types) it should support.
+
+Two commonly used endpoints are the [authorization endpoint](v2-oauth2-auth-code-flow.md#request-an-authorization-code) and [token endpoint](v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token). Here are examples of the `authorize` and `token` endpoints:
+
+```
+# Authorization endpoint - used by client to obtain authorization from the resource owner.
+https://login.microsoftonline.com/<issuer>/oauth2/v2.0/authorize
+# Token endpoint - used by client to exchange an authorization grant or refresh token for an access token.
+https://login.microsoftonline.com/<issuer>/oauth2/v2.0/token
+
+# NOTE: These are examples. Endpoint URI format may vary based on application type,
+# sign-in audience, and Azure cloud instance (global or national cloud).
+
+# The {issuer} value in the path of the request can be used to control who can sign into the application.
+# The allowed values are **common** for both Microsoft accounts and work or school accounts,
+# **organizations** for work or school accounts only, **consumers** for Microsoft accounts only,
+# and **tenant identifiers** such as the tenant ID or domain name.
+```
+
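As a sketch of how the authorization endpoint is used, the following composes an authorization-code request URL with the core OAuth 2.0 query parameters. All values shown are placeholders.

```python
from urllib.parse import urlencode

def build_authorize_url(issuer: str, client_id: str, redirect_uri: str, scopes: list) -> str:
    """Compose an OAuth 2.0 authorization-code request URL for the Microsoft
    identity platform (illustrative sketch; all values are placeholders)."""
    base = f"https://login.microsoftonline.com/{issuer}/oauth2/v2.0/authorize"
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),  # space-delimited scopes, URL-encoded below
    }
    return f"{base}?{urlencode(params)}"
```

In practice an authentication library builds and sends this request for you, including additional parameters such as `state` and PKCE values.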
+To find the endpoints for an application you've registered, in the [Azure portal](https://portal.azure.com) navigate to:
+
+**Azure Active Directory** > **App registrations** > \<YOUR-APPLICATION\> > **Endpoints**
+
+## Next steps
+
+Next, learn about the OAuth 2.0 authentication flows used by each application type and the libraries you can use in your apps to perform them:
+
+* [Authentication flows and application scenarios](authentication-flows-app-scenarios.md)
+* [Microsoft Authentication Library (MSAL)](msal-overview.md)
+
+**We strongly advise against crafting your own library or raw HTTP calls to execute authentication flows.** A [Microsoft Authentication Library](reference-v2-libraries.md) is safer and easier. However, if your scenario prevents you from using our libraries or you'd just like to learn more about the Microsoft identity platform's implementation, we have protocol reference documentation:
+
+* [Authorization code grant flow](v2-oauth2-auth-code-flow.md) - Single-page apps (SPA), mobile apps, native (desktop) applications
+* [Client credentials flow](v2-oauth2-client-creds-grant-flow.md) - Server-side processes, scripts, daemons
+* [On-behalf-of (OBO) flow](v2-oauth2-on-behalf-of-flow.md) - Web APIs that call another web API on a user's behalf
+* [OpenID Connect](v2-protocols-oidc.md) - User sign-in, sign out, and single sign-on (SSO)
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 06/5/2023 Last updated : 06/22/2023
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on June 5th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on June 22nd, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
| --- | --- | --- | --- | --- |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Exchange Online (Plan 1) for Students | EXCHANGESTANDARD_STUDENT | ad2fe44a-915d-4e2b-ade1-6766d50a9d9c | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122) | Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122) | | Exchange Online (Plan 1) for Alumni with Yammer | EXCHANGESTANDARD_ALUMNI | aa0f9eb7-eff2-4943-8424-226fb137fcad | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Exchange Online (PLAN 2) | EXCHANGEENTERPRISE | 19ec0d23-8335-4cbd-94ac-6050e30712fa | EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0) | EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0) |
+| Exchange Online (Plan 2) for Faculty | EXCHANGEENTERPRISE_FACULTY | 0b7b15a8-7fd2-4964-bb96-5a566d4e3c15 | EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122) | Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122) |
| Exchange Online (Plan 2) for GCC | EXCHANGEENTERPRISE_GOV | 7be8dc28-4da4-4e6d-b9b9-c60f2806df8a | EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/> INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117) | Exchange Online (Plan 2) for Government (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117) | | Exchange Online Archiving for Exchange Online | EXCHANGEARCHIVE_ADDON | ee02fd1b-340e-4a4b-b355-4a514e4c8943 | EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793) | EXCHANGE ONLINE ARCHIVING FOR EXCHANGE ONLINE (176a09a6-7ec5-4039-ac02-b2791c6ba793) | | Exchange Online Archiving for Exchange Server | EXCHANGEARCHIVE | 90b5e015-709a-4b8b-b08e-3200f994494c | EXCHANGE_S_ARCHIVE (da040e0a-b393-4bea-bb76-928b3fa1cf5a) | EXCHANGE ONLINE ARCHIVING FOR EXCHANGE SERVER (da040e0a-b393-4bea-bb76-928b3fa1cf5a) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft 365 A3 for faculty | M365EDU_A3_FACULTY | 4b590615-0888-425a-a965-b3bf7789848d | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN2 
(94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner 
(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education 
(da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee) | | Microsoft 365 A3 for students | M365EDU_A3_STUDENT | 7cfd9a2b-e110-4c39-bf20-c6a3f36a3121 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY 
(a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan 2) 
(9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education 
(da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee) | | Microsoft 365 A3 student use benefits | M365EDU_A3_STUUSEBNFT | 18250162-5d87-4436-a834-d795c15c80f3 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>UNIVERSAL_PRINT_NO_SEEDING 
(b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project for Office (Plan E3) 
(31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9) |
| Microsoft 365 A3 Suite features for faculty | Microsoft_365_A3_Suite_features_for_faculty | 32a0e471-8a27-4167-b24f-941559912425 | MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>REMOTE_HELP (a4c6cf29-1168-4076-ba5c-e8fe0e62b17e) | Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Intune Plan 1 for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Remote help (a4c6cf29-1168-4076-ba5c-e8fe0e62b17e) |
| Microsoft 365 A3 - Unattended License for students use benefit | M365EDU_A3_STUUSEBNFT_RPA1 | 1aa94593-ca12-4254-a738-81a5972958e8 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>OFFICESUBSCRIPTION_unattended (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 
(4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Apps for Enterprise (Unattended) (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) 
(0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9) | | Microsoft 365 A5 for Faculty | M365EDU_A5_FACULTY | e97c048c-37a4-45fb-ab50-922fbf07a370 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION 
(43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>BI_AZURE_P2 
(70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams 
(afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft Kaizala Pro 
(0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Viva Learning Seeded 
(b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power Virtual Agents for Office 365 (ded3d325-1bdc-453e-8432-5bac26d7a014) | | Microsoft 365 A5 for students | M365EDU_A5_STUDENT | 46c119d4-0379-4a9d-85e4-97c66d3f909e | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS 
(c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 
(8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>POWER_VIRTUAL_AGENTS_O365_P3 
(ded3d325-1bdc-453e-8432-5bac26d7a014) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics ΓÇô Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan 3) 
(96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft Kaizala Pro (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>RETIRED - Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>RETIRED - Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) 
(0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power Virtual Agents for Office 365 (ded3d325-1bdc-453e-8432-5bac26d7a014) | | Microsoft 365 A5 student use benefits | M365EDU_A5_STUUSEBNFT | 31d57bc7-3a05-4867-ab53-97a17835a411 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer 
(d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE 
(2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 SafeDocs 
(bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9 |
+| Microsoft 365 A5 Suite features for faculty | M365_A5_SUITE_COMPONENTS_FACULTY | 9b8fe788-6174-4c4e-983b-3330c93ec278 | Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693) | Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft Intune Plan 1 for Education (da24caf9-af8e-485c-b7c8-e73336da2693) |
| Microsoft 365 A5 without Audio Conferencing for students use benefit | M365EDU_A5_NOPSTNCONF_STUUSEBNFT | 81441ae1-0b31-4185-a6c0-32b6b84d419f| AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 
(76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics - Premium) (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 ΓÇô Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) 
(8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Microsoft 365 Apps for Business | O365_BUSINESS | cdd28e44-67e3-425e-be4c-737fab2899d3 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD 
(13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | Microsoft 365 Apps for Business | SMB_BUSINESS | b214fe43-f5a3-4703-beeb-fa97188220fc | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Windows 10/11 Enterprise E5 (Original) | WIN_ENT_E5 | 1e7e1070-8ccb-4aca-b470-d7cb538cb07e | DATAVERSE_FOR_POWERAUTOMATE_DESKTOP (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>POWERAUTOMATE_DESKTOP_FOR_WIN (2d589a15-b171-4e61-9b5f-31d15eeb2872)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Dataverse for PAD (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>PAD for Windows (2d589a15-b171-4e61-9b5f-31d15eeb2872)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/> Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) | | Windows 10/11 Enterprise A3 for faculty | WIN10_ENT_A3_FAC | 8efbe2f6-106e-442f-97d4-a59aa6037e06 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) | | Windows 10/11 Enterprise A3 for students | WIN10_ENT_A3_STU | d4ef921e-840b-4b48-9a90-ab6698bc7b31 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for 
Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) |
+| Windows 10/11 Enterprise A5 for faculty | WIN10_ENT_A5_FAC | 7b1a89a9-5eb9-4cf8-9467-20c943f1122c | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) |
| WINDOWS 10/11 ENTERPRISE E3 | WIN10_PRO_ENT_SUB | cb10e6cd-9da4-4992-867b-67546b1db821 | WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111) | WINDOWS 10 ENTERPRISE (21b439ba-a0ca-424f-a6cc-52f954a5b111) | | WINDOWS 10/11 ENTERPRISE E3 | WIN10_VDA_E3 | 6a0f6da5-0b87-4190-a6ae-9bb5a2b9546a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL PRINT (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WINDOWS 10 ENTERPRISE (NEW) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWS UPDATE FOR BUSINESS DEPLOYMENT SERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | | Windows 10/11 Enterprise E5 | WIN10_VDA_E5 | 488ba24a-39a9-4473-8ee5-19291e71b002 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) |
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-administrator.md
You can assign external users to a group, or Azure AD role when the account is c
The final tab captures several key details from the user creation process. Review the details and select the **Invite** button if everything looks good. An email invitation is automatically sent to the user. After you send the invitation, the user account is automatically added to the directory as a guest.
- ![Screenshot showing the user list including the new Guest user.](media/add-users-administrator//guest-user-type.png)
+ ![Screenshot showing the user list including the new Guest user.](media/add-users-administrator/guest-user-type.png)
### External user invitations <a name="resend-invitations-to-guest-users"></a>
active-directory How To Create Customer Tenant Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-create-customer-tenant-portal.md
In this article, you learn how to:
:::image type="content" source="media/how-to-create-customer-tenant-portal/create-tenant.png" alt-text="Screenshot of the create tenant option.":::
-1. Select **Customer**, and then **Continue**. If you filtered the list of tenants by **Tenant type**: **Customer** in the previous step, this step will be skipped.
+1. Select **Customer**, and then **Continue**.
:::image type="content" source="media/how-to-create-customer-tenant-portal/select-tenant-type.png" alt-text="Screenshot of the select tenant type screen.":::
active-directory How To Customize Languages Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-languages-customers.md
You can modify any or all of these attributes in the downloaded file. For exampl
:::image type="content" source="media/how-to-customize-languages-customers/customized-attributes.png" alt-text="Screenshot of the modified sign-up page attributes.":::

> [!IMPORTANT]
-> In the customer tenant, we have two options to add custom text to the sign-up and sign-in experience. The function is available under each user flow during language customization and under [Company Branding](https://github.com/csmulligan/entra-previews/blob/PP3/docs/PP3_Customize%20CIAM%20neutral%20branding.md#customize-the-neutral-default-authentication-experience-for-the-ciam-tenant). Although we have to ways to customize strings (via Company branding and via User flows), both ways modify the same JSON file. The most recent change made either via User flows or via Company branding will always override the previous one.
+> In the customer tenant, we have two options to add custom text to the sign-up and sign-in experience. The function is available under each user flow during language customization and under [Company Branding](/azure/active-directory/external-identities/customers/how-to-customize-branding-customers). Although we have two ways to customize strings (via Company branding and via User flows), both ways modify the same JSON file. The most recent change made either via User flows or via Company branding will always override the previous one.
## Right-to-left language support
active-directory How To Facebook Federation Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-facebook-federation-customers.md
Previously updated : 05/24/2023 Last updated : 06/20/2023
If you don't already have a Facebook account, sign up at [https://www.facebook.c
1. Sign in to [Facebook for developers](https://developers.facebook.com/apps) with your Facebook developer account credentials.
1. If you haven't already done so, register as a Facebook developer: Select **Get Started** in the upper-right corner of the page, accept Facebook's policies, and complete the registration steps.
-1. Select **Create App**.
-1. For **Select an app type**, select **customers**, then select **Next**.
-1. Enter an **App Display Name** and a valid **App Contact Email**.
-1. Select **Create App**. This step may require you to accept Facebook platform policies and complete an online security check.
+1. Select **Create App**. Select **Set up Facebook Login**, and then select **Next**.
+1. For **Select an app type**, select **Consumer**, then select **Next**.
+1. Add an app name and a valid app contact email.
+1. Select **Create app**. This step may require you to accept Facebook platform policies and complete an online security check.
1. Select **Settings** > **Basic**.
- 1. Copy the value of **App ID**.
- 1. Select **Show** and copy the value of **App Secret**. You use both of them to configure Facebook as an identity provider in your tenant. **App Secret** is an important security credential.
+ 1. Copy the value of **App ID**. Then select **Show** and copy the value of **App Secret**. You use both of these values to configure Facebook as an identity provider in your tenant. **App Secret** is an important security credential.
1. Enter a URL for the **Privacy Policy URL**, for example `https://www.contoso.com/privacy`. The policy URL is a page you maintain to provide privacy information for your application.
1. Enter a URL for the **Terms of Service URL**, for example `https://www.contoso.com/tos`. The policy URL is a page you maintain to provide terms and conditions for your application.
1. Enter a URL for the **User Data Deletion**, for example `https://www.contoso.com/delete_my_data`. The User Data Deletion URL is a page you maintain to provide a way for users to request that their data be deleted.
- 1. Choose a **Category**, for example `Business and Pages`. Facebook requires this value, but it's not used for Azure AD.
-1. At the bottom of the page, select **Add Platform**, and then select **Website**.
+ 1. Choose a **Category**, for example `Business and pages`. Facebook requires this value, but it's not used by Azure AD.
+1. At the bottom of the page, select **Add platform**, select **Website**, and then select **Next**.
1. In **Site URL**, enter the address of your website, for example `https://contoso.com`.
-1. Select **Save Changes**.
-1. From the menu, select the **plus** sign or **Add Product** link next to **PRODUCTS**. Under the **Add Products to Your App**, select **Set up** under **Facebook Login**.
-1. From the menu, select **Facebook Login**, select **Settings**.
-1. In **Valid OAuth redirect URIs**, enter the following URIs, replacing `<tenant-ID>` with your customer tenant ID and `<tenant-name>` with your customer tenant name:
+1. Select **Save changes**.
+1. From the menu, select **Products**. Next to **Facebook Login**, select **Configure** > **Settings**.
+1. In **Valid OAuth Redirect URIs**, enter the following URIs, replacing `<tenant-ID>` with your customer tenant ID and `<tenant-name>` with your customer tenant name:
   - `https://login.microsoftonline.com/te/<tenant-ID>/oauth2/authresp`
   - `https://<tenant-name>.ciamlogin.com/<tenant-ID>/federation/oidc/www.facebook.com`
   - `https://<tenant-name>.ciamlogin.com/<tenant-name>.onmicrosoft.com/federation/oidc/www.facebook.com`
If you don't already have a Facebook account, sign up at [https://www.facebook.c
   - `https://<tenant-name>.ciamlogin.com/<tenant-name>.onmicrosoft.com/federation/oauth2`

   > [!NOTE]
   > To find your customer tenant ID, go to the [Microsoft Entra admin center](https://entra.microsoft.com). Under **Azure Active Directory**, select **Overview**. Then select the **Overview** tab and copy the **Tenant ID**.
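The redirect URIs above follow a fixed pattern, so a small helper can generate them for a given tenant. A minimal sketch; the tenant name and ID passed in below are hypothetical placeholders, not real values:

```python
# Hypothetical helper: build the Facebook OAuth redirect URIs for a customer
# tenant. "contoso" and the tenant ID below are placeholders.
def facebook_redirect_uris(tenant_name: str, tenant_id: str) -> list[str]:
    ciam = f"https://{tenant_name}.ciamlogin.com"
    return [
        f"https://login.microsoftonline.com/te/{tenant_id}/oauth2/authresp",
        f"{ciam}/{tenant_id}/federation/oidc/www.facebook.com",
        f"{ciam}/{tenant_name}.onmicrosoft.com/federation/oidc/www.facebook.com",
        f"{ciam}/{tenant_name}.onmicrosoft.com/federation/oauth2",
    ]

uris = facebook_redirect_uris("contoso", "aaaabbbb-0000-cccc-1111-dddd2222eeee")
```

Pasting generated values rather than hand-typing each URI avoids the most common federation failure: a redirect URI that differs from the expected pattern by one character.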
-1. Select **Save Changes** at the bottom of the page.
-1. To make your Facebook application available to Azure AD, select the Status selector at the top right of the page and turn it **On** to make the Application public, and then select **Switch Mode**. At this point, the Status should change from **Development** to **Live**. For more information, see [Facebook App Development](https://developers.facebook.com/docs/development/release).
+1. Select **Save changes** at the bottom of the page.
+1. At this point, only Facebook application owners can sign in. Because you registered the app, you can sign in with your Facebook account. To make your Facebook application available to your users, from the menu, select **Go live**, and then complete each of the listed requirements. You'll likely need to complete business verification to verify your identity as a business entity or organization. For more information, see [Meta App Development](https://developers.facebook.com/docs/development/release).
## Configure Facebook federation in Azure AD for customers
active-directory How To Protect Web Api Dotnet Core Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-protect-web-api-dotnet-core-overview.md
Web APIs may contain sensitive information that requires user authentication and
## Prerequisites

-- [An API registration](how-to-register-ciam-app.md?tabs=webapi&preserve-view=true) that exposes scopes (permissions) such as *ToDoList.Read*. If you haven't already, register an API in the Microsoft Entra admin center by following the registration steps.
+- [An API registration](how-to-register-ciam-app.md?tabs=webapi&preserve-view=true) that exposes at least one scope (delegated permission) and one app role (application permission), such as *ToDoList.Read*. If you haven't already, register an API in the Microsoft Entra admin center by following the registration steps.
## Protecting a web API

The following are the steps you complete to protect your web API:
-1. [Register your web API](how-to-register-ciam-app.md?tabs=webapi) in the Microsoft Entra admin center.
+1. [Register your web API](how-to-register-ciam-app.md?tabs=webapi&preserve-view=true) in the Microsoft Entra admin center.
1. [Configure your web API](how-to-protect-web-api-dotnet-core-prepare-api.md).
1. [Protect your web API endpoints](how-to-protect-web-api-dotnet-core-protect-endpoints.md).
1. [Test your protected web API](how-to-protect-web-api-dotnet-core-test-api.md).
active-directory How To Protect Web Api Dotnet Core Prepare Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-protect-web-api-dotnet-core-prepare-api.md
We specify these permissions in the *appsettings.json* file as configuration par
```json
{
- "AzureAd": {...},
+ "AzureAd": {
+ "Instance": "https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/",
+ "TenantId": "Enter_the_Tenant_Id_Here",
+ "ClientId": "Enter_the_Application_Id_Here",
    "Scopes": {
      "Read": ["ToDoList.Read", "ToDoList.ReadWrite"],
      "Write": ["ToDoList.ReadWrite"]
Add the following code in the *Program.cs* file.
using ToDoListAPI.Context;
using Microsoft.EntityFrameworkCore;
-builder.Services.AddDbContext<TodoContext>(opt =>
+builder.Services.AddDbContext<ToDoContext>(opt =>
    opt.UseInMemoryDatabase("ToDos"));
```
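The `Scopes` section in the *appsettings.json* snippet above implies that either scope permits reads, but only `ToDoList.ReadWrite` permits writes. A minimal sketch of that check, illustrative only; in the real API, Microsoft.Identity.Web enforces this for you:

```python
# Sketch of the authorization implied by the "Scopes" configuration:
# either scope satisfies a read, only ToDoList.ReadWrite satisfies a write.
SCOPES = {
    "Read": ["ToDoList.Read", "ToDoList.ReadWrite"],
    "Write": ["ToDoList.ReadWrite"],
}

def is_authorized(operation: str, token_scopes: list[str]) -> bool:
    """True if any scope carried by the token satisfies the operation."""
    return any(s in SCOPES[operation] for s in token_scopes)
```

Listing `ToDoList.ReadWrite` under both keys is what lets a single higher-privilege scope cover read endpoints without the client requesting two scopes.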
active-directory How To Protect Web Api Dotnet Core Protect Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-protect-web-api-dotnet-core-protect-endpoints.md
using Microsoft.EntityFrameworkCore;
using Microsoft.Identity.Web; using Microsoft.Identity.Web.Resource; using ToDoListAPI.Models;
+using ToDoListAPI.Context;
namespace ToDoListAPI.Controllers;
active-directory Quickstart Tenant Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/quickstart-tenant-setup.md
+
+ Title: Quickstart - Set up a tenant
+description: In this quickstart, you learn how to create a tenant with customer configurations.
+++++++ Last updated : 06/19/2023+++
+#Customer intent: As a dev, devops, or IT admin, I want to create a tenant with customer configurations.
+
+# Quickstart: Create a tenant (preview)
+
+Azure Active Directory (Azure AD) offers a customer identity access management (CIAM) solution that lets you create secure, customized sign-in experiences for your customer-facing apps and services. You'll need to create a tenant with customer configurations in the Microsoft Entra admin center to get started. Once the tenant with customer configurations is created, you can access it in both the Microsoft Entra admin center and the Azure portal.
+
+In this quickstart, you'll learn how to create a tenant with customer configurations if you already have an Azure subscription. If you don't have an Azure subscription, you can create a customer tenant free trial. For more information about the free trial, see [Set up a free trial](quickstart-trial-setup.md).
+
+## Prerequisites
+
+- An Azure subscription.
+- An Azure account that's been assigned at least the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role scoped to the subscription or to a resource group within the subscription.
++
+## Create a new tenant with customer configurations
+
+1. Sign in to your organization's [Microsoft Entra admin center](https://entra.microsoft.com/).
+1. From the left menu, select **Azure Active Directory** > **Overview**.
+1. Select **Manage tenants** at the top of the page.
+1. Select **Create**.
+
+ :::image type="content" source="media/how-to-create-customer-tenant-portal/create-tenant.png" alt-text="Screenshot of the create tenant option.":::
+
+1. Select **Customer**, and then select **Continue**.
+
+ :::image type="content" source="media/how-to-create-customer-tenant-portal/select-tenant-type.png" alt-text="Screenshot of the select tenant type screen.":::
+
+1. Select **Use an Azure Subscription**. If you're creating a tenant with customer configurations for the first time, you have the option to create a trial tenant that doesn't require an Azure subscription. For more information about the free trial, see [Set up a free trial](quickstart-trial-setup.md).
+
+ :::image type="content" source="media/how-to-create-customer-tenant-portal/create-first-customer-tenant.png" alt-text="Screenshot of the two tenants with customer configurations options available during the initial tenant creation.":::
+
+1. On the **Basics** tab of the **Create a tenant for customers** page, enter the following information:
+
+ :::image type="content" source="media/how-to-create-customer-tenant-portal/add-basics-to-customer-tenant.png" alt-text="Screenshot of the Basics tab.":::
+
+ - Type your desired **Tenant Name** (for example *Contoso Customers*).
+
+ - Type your desired **Domain Name** (for example *Contosocustomers*).
+
+ - Select your desired **Location**. This selection can't be changed later.
+
+1. Select **Next: Add a subscription**.
+
+1. On the **Add a subscription** tab, enter the following information:
+
+ - Next to **Subscription**, select your subscription from the menu.
+
+ - Next to **Resource group**, select a resource group from the menu. If there are no available resource groups, select **Create new**, add a name, and then select **OK**.
+
+ - If **Resource group location** appears, select the geographic location of the resource group from the menu.
+
+ :::image type="content" source="media/how-to-create-customer-tenant-portal/add-subscription.png" alt-text="Screenshot that shows the subscription settings.":::
+
+1. Select **Next: Review + Create**. If the information that you entered is correct, select **Create**. The tenant creation process can take up to 30 minutes. You can monitor the progress of the tenant creation process in the **Notifications** pane. Once the tenant is created, you can access it in both the Microsoft Entra admin center and the Azure portal.
+
+ :::image type="content" source="media/how-to-create-customer-tenant-portal/tenant-successfully-created.png" alt-text="Screenshot that shows the link to the new tenant.":::
+
+## Clean up resources
+
+If you're not going to continue to use this tenant, you can delete it using the following steps:
+
+1. Ensure that you're signed in to the directory that you want to delete through the **Directory + subscription** filter in the Azure portal. Switch to the target directory if needed.
+1. From the left menu, select **Azure Active Directory** > **Overview**.
+1. Select **Manage tenants** at the top of the page.
+1. Select the tenant you want to delete, and then select **Delete**.
+
+ :::image type="content" source="media/how-to-create-customer-tenant-portal/delete-tenant.png" alt-text="Screenshot that shows how to delete the tenant.":::
+
+1. You might need to complete required actions before you can delete the tenant. For example, you might need to delete all user flows in the tenant. If you're ready to delete the tenant, select **Delete**.
+
+The tenant and its associated information are deleted.
++
+## Next steps
+- [Customize the sign-in experience](how-to-customize-branding-customers.md)
+- [Register an app](how-to-register-ciam-app.md)
+- [Create user flows](how-to-user-flow-sign-up-sign-in-customers.md)
active-directory Quickstart Trial Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/quickstart-trial-setup.md
# Quickstart: Get started with Azure AD for customers (preview)
-Get started with Azure AD for customers (Preview) that lets you create secure, customized sign-in experiences for your customer-facing apps and services. With these built-in customer tenant features, Azure AD for customers can serve as the identity provider and access management service for your customers.
+Get started with Azure AD for customers (Preview) that lets you create secure, customized sign-in experiences for your customer-facing apps and services. With these built-in customer configuration features, Azure AD for customers can serve as the identity provider and access management service for your customers.
-Your free trial of a customer tenant provides you with the opportunity to try new features and build applications and processes during the free trial period. Organization (tenant) admins can invite other users. Each user account can only have one active free trial tenant at a time. The free trial isn't designed for scale testing. Trial tenant will support up to 10K resources, learn more about Azure AD service limits [here](/azure/active-directory/enterprise-users/directory-service-limits-restrictions). During your free trial, you'll have the option to unlock the full set of features by upgrading to [Azure free account](https://azure.microsoft.com/free/).
+In this quickstart, you'll learn how to set up a customer tenant free trial. If you already have an Azure subscription, you can create a tenant with customer configurations in the Microsoft Entra admin center. For more information about how to create a tenant, see [Set up a tenant](quickstart-tenant-setup.md).
+
+Your free trial of a tenant with customer configurations provides you with the opportunity to try new features and build applications and processes during the free trial period. Organization (tenant) admins can invite other users. Each user account can only have one active free trial tenant at a time. The free trial isn't designed for scale testing: a trial tenant supports up to 10,000 resources. Learn more about [Azure AD service limits](/azure/active-directory/enterprise-users/directory-service-limits-restrictions). During your free trial, you'll have the option to unlock the full set of features by upgrading to an [Azure free account](https://azure.microsoft.com/free/).
> [!NOTE] > At the end of the free trial period, your free trial tenant will be disabled and deleted.
During the free trial period, you'll have access to all product features with fe
You can customize your customer's sign-in and sign-up experience in the Azure AD for customers tenant. Follow the guide that will help you set up the tenant in three easy steps. First, you must specify how you would like your customers to sign in. At this step you can choose between two options: **Email and password** or **Email and one-time passcode**. You can configure social accounts later, which would allow your customers to sign in using their [Google](how-to-google-federation-customers.md) or [Facebook](how-to-facebook-federation-customers.md) account. You can also [define custom attributes](how-to-define-custom-attributes.md) to collect from the user during sign-up.
-If you prefer, you can add your company logo, change the background color or adjust the sign-in layout. These optional changes will apply to the look and feel of all your apps in this customer tenant. After you have the created customer tenant, additional branding options are available. You can [customize the default branding](how-to-customize-branding-customers.md) and [add languages](how-to-customize-languages-customers.md). Once you're finished with the customization, select **Continue**.
+If you prefer, you can add your company logo, change the background color, or adjust the sign-in layout. These optional changes will apply to the look and feel of all your apps in this tenant with customer configurations. After you have created the tenant, additional branding options are available. You can [customize the default branding](how-to-customize-branding-customers.md) and [add languages](how-to-customize-languages-customers.md). Once you're finished with the customization, select **Continue**.
:::image type="content" source="media/quickstart-trial-setup/customize-branding-in-trial-wizard.png" alt-text="Screenshot of customizing the sign-in experience in the guide.":::

## Try out the sign-up experience and create your first user

1. The guide will configure your tenant with the options you have selected. Once the configuration is complete, the button will change its text from **Setting up...** to **Run it now**.
-1. Select the **Run it now** button. A new browser tab will open with the sign-in page for your customer tenant that can be used to create and sign in users.
-1. Select **No account? Create one** to create a new user in the customer tenant.
+1. Select the **Run it now** button. A new browser tab will open with the sign-in page for your tenant that can be used to create and sign in users.
+1. Select **No account? Create one** to create a new user in the tenant.
1. Add your new user's email address and select **Next**. Don't use the same email you used to create your trial.
1. Complete the sign-up steps on the screen. Typically, once the user has signed in, they're redirected back to your app. However, since you haven't set up an app at this step, you'll be redirected to JWT.ms instead, where you can view the contents of the token issued during the sign-in process.
1. Go back to the guide tab. At this stage, you can either exit the guide and go to the admin center to explore the full range of configuration options for your tenant, or you can select **Continue** and set up a sample app. We recommend setting up the sample app, so that you can use it to test any further configuration changes you make.
active-directory Parallel Identity Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/parallel-identity-options.md
Then when a Litware employee wishes to access a Contoso app, they can do so by a
### Option 8 - Configure B2B but with a common HR feed for both directories
-In some situations, after acquisition the organization may converge on a single HR platform, but still run existing identity management systems. In this scenario, MIM could provision users into multiple Active Directory systems, depending on with part of the organization the user is affiliated with. They could continue to use B2B so that users authenticate their existing directory, and have a unified GAL.
+In some situations, after an acquisition the organization may converge on a single HR platform but still run existing identity management systems. In this scenario, MIM could provision users into multiple Active Directory systems, depending on which part of the organization the user is affiliated with. They could continue to use B2B so that users authenticate against their existing directory, and have a unified GAL.
![Configure B2B users but with a common HR system feed](media/parallel-identity-options/identity-combined-8.png)
active-directory Check Status Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-status-workflow.md
Previously updated : 05/31/2023 Last updated : 06/22/2023
-# Check the status of a workflow (Preview)
+# Check the status of a workflow
When a workflow is created, it's important to check its status and run history to make sure it ran properly for the users it processed, both on a schedule and on demand. To get information about the status of workflows, Lifecycle Workflows allows you to check run and user processing history. This history also gives you summaries to see how often a workflow has run, and who it ran successfully for. You're also able to check the status of both the workflow and its tasks. Checking the status of workflows and their tasks allows you to troubleshoot potential problems that could come up during their execution.
You're able to retrieve run information of a workflow using Lifecycle Workflows.
1. Select **Azure Active Directory** and then select **Identity Governance**.
-1. On the left menu, select **Lifecycle Workflows (Preview)**.
+1. On the left menu, select **Lifecycle Workflows**.
-1. On the Lifecycle Workflows overview page, select **Workflows (Preview)**.
+1. On the Lifecycle Workflows overview page, select **Workflows**.
1. Select the workflow that you want to see the run history of.
You're able to retrieve run information of a workflow using Lifecycle Workflows.
To get more than just the run summary for a workflow, you can also get information about the users a workflow has processed. To check the status of users a workflow has processed by using the Azure portal, follow these steps:
-1. In the left menu, select **Lifecycle Workflows (Preview)**.
+1. In the left menu, select **Lifecycle Workflows**.
-1. select **Workflows (Preview)**.
+1. Select **Workflows**.
1. Select the workflow you want to see user processing information for.
-1. On the workflow overview screen, select **Workflow history (Preview)**.
+1. On the workflow overview screen, select **Workflow history**.
   :::image type="content" source="media/check-status-workflow/workflow-history.png" alt-text="Screenshot of a workflow overview history.":::

1. On the workflow history page, you're presented with a summary of every user processed by the workflow, along with counts of successful and failed users and tasks.

   :::image type="content" source="media/check-status-workflow/workflow-history-list.png" alt-text="Screenshot of a list of workflow summaries.":::
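The summary counts on the workflow history page are simple tallies over per-user processing results. A hedged sketch; the result records below are made-up examples, not real service output:

```python
# Sketch: tally successful vs. failed users from per-user processing results.
# The records here are hypothetical examples for illustration.
from collections import Counter

results = [
    {"user": "alice@contoso.com", "status": "completed"},
    {"user": "bob@contoso.com", "status": "failed"},
    {"user": "carol@contoso.com", "status": "completed"},
]

counts = Counter(r["status"] for r in results)
```

A workflow that shows any failed users in this summary is the usual starting point for the task-level troubleshooting described above.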
active-directory Check Workflow Execution Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-workflow-execution-scope.md
-# Check execution user scope of a workflow (Preview)
+# Check execution user scope of a workflow
Workflow scheduling will automatically process the workflow for users meeting the workflow's execution conditions. This article walks you through the steps to check the users who fall into the execution scope of a workflow. For more information about execution conditions, see [workflow basics](../governance/understanding-lifecycle-workflows.md#workflow-basics).
To check the users who fall under the execution scope of a workflow, you'd follo
1. Type in **Identity Governance** on the search bar near the top of the page and select it.
-1. In the left menu, select **Lifecycle workflows (Preview)**.
+1. In the left menu, select **Lifecycle workflows**.
1. From the list of workflows, select the workflow you want to check the execution scope of.
-1. On the workflow overview page, select **Execution conditions (Preview)**.
+1. On the workflow overview page, select **Execution conditions**.
1. On the Execution conditions page, select the **Execution User Scope** tab.
active-directory Configure Logic App Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/configure-logic-app-lifecycle-workflows.md
Previously updated : 05/31/2023 Last updated : 06/22/2023
-# Configure a Logic App for Lifecycle Workflow use (Preview)
+# Configure a Logic App for Lifecycle Workflow use
-Before you can use an existing Azure Logic App with the custom task extension feature of Lifecycle Workflows, it must first be made compatible. This reference guide provides a list of steps that must be taken to make the Azure Logic App compatible. For a guide on creating a new compatible Logic App via the Lifecycle Workflows portal, see [Trigger Logic Apps based on custom task extensions (preview)](trigger-custom-task.md).
+Before you can use an existing Azure Logic App with the custom task extension feature of Lifecycle Workflows, it must first be made compatible. This reference guide provides a list of steps that must be taken to make the Azure Logic App compatible. For a guide on creating a new compatible Logic App via the Lifecycle Workflows portal, see [Trigger Logic Apps based on custom task extensions](trigger-custom-task.md).
## Determine type of token security of your custom task extension
Before configuring your Azure Logic App custom extension for use with Lifecycle
- Proof of Possession (POP)
-To determine the security token type of your custom task extension, you'd check the **Custom extensions (Preview)** page:
+To determine the security token type of your custom task extension, you'd check the **Custom extensions** page:
:::image type="content" source="media/configure-logic-app-lifecycle-workflows/custom-task-extension-token-type.png" alt-text="Screenshot of custom task extension and token type.":::
If the security token type is **Proof of Possession (POP)** for your custom task
Policy name: POP-Policy
- Policy type: (Preview) AADPOP
+ Policy type: AADPOP
|Claim |Value |
|------|------|
Now that your Logic app is configured for use with Lifecycle Workflows, you can
## Next steps

-- [Lifecycle workflow extensibility (Preview)](lifecycle-workflow-extensibility.md)
+- [Lifecycle workflow extensibility](lifecycle-workflow-extensibility.md)
- [Manage Workflow Versions](manage-workflow-tasks.md)
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
Title: Create a lifecycle workflow (preview) - Azure AD
+ Title: Create a lifecycle workflow - Azure AD
description: This article guides you in creating a lifecycle workflow.
Previously updated : 05/31/2023 Last updated : 06/22/2023
-# Create a lifecycle workflow (preview)
+# Create a lifecycle workflow
-Lifecycle workflows (preview) allow for tasks associated with the lifecycle process to be run automatically for users as they move through their lifecycle in your organization. Workflows consist of:
+Lifecycle workflows allow for tasks associated with the lifecycle process to be run automatically for users as they move through their lifecycle in your organization. Workflows consist of:
- **Tasks**: Actions taken when a workflow is triggered.
- **Execution conditions**: The who and when of a workflow. These conditions define which users (scope) this workflow should run against, and when (trigger) the workflow should run.
To create a workflow based on a template:
1. Select **Azure Active Directory** > **Identity Governance**.
-1. On the left menu, select **Lifecycle Workflows (Preview)**.
+1. On the left menu, select **Lifecycle Workflows**.
-1. Select **Workflows (Preview)**.
+1. Select **Workflows**.
1. On the **Choose a workflow** page, select the workflow template that you want to use.
active-directory Customize Workflow Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-email.md
Previously updated : 05/31/2023 Last updated : 06/22/2023
-# Customize emails sent from workflow tasks (preview)
+# Customize emails sent from workflow tasks
-Lifecycle workflows provide several tasks that send email notifications. You can customize email notifications to suit the needs of a specific workflow. For a list of these tasks, see [Lifecycle workflow built-in tasks (preview)](lifecycle-workflow-tasks.md).
+Lifecycle workflows provide several tasks that send email notifications. You can customize email notifications to suit the needs of a specific workflow. For a list of these tasks, see [Lifecycle workflow built-in tasks](lifecycle-workflow-tasks.md).
Email tasks allow for the customization of:
When you're customizing an email sent via lifecycle workflows, you can choose to
1. On the search bar near the top of the page, enter **Identity Governance** and select the result.
-1. On the left menu, select **Lifecycle workflows (Preview)**.
+1. On the left menu, select **Lifecycle workflows**.
-1. On the left menu, select **Workflows (Preview)**.
+1. On the left menu, select **Workflows**.
-1. Select **Tasks (Preview)**.
+1. Select **Tasks**.
1. On the pane that lists tasks, select the task for which you want to customize the email.
You can customize emails that you send via lifecycle workflows to have your own
To enable these features, you need the following prerequisites:

- A verified domain. To add a custom domain, see [Managing custom domain names in Azure Active Directory](../enterprise-users/domains-manage.md).
-- Custom branding set within Azure AD if you want to use your custom branding in emails. To set organizational branding within your Azure tenant, see [Configure your company branding (preview)](../fundamentals/how-to-customize-branding.md).
+- Custom branding set within Azure AD if you want to use your custom branding in emails. To set organizational branding within your Azure tenant, see [Configure your company branding](../fundamentals/how-to-customize-branding.md).
> [!NOTE]
> For compliance with the [RFC for sending and receiving email](https://www.ietf.org/rfc/rfc2142.txt), we recommend using a domain that has the appropriate DNS records to facilitate email validation, like SPF, DKIM, DMARC, and MX. [Learn more about Exchange Online email routing](/exchange/mail-flow-best-practices/mail-flow-best-practices).

After you meet the prerequisites, follow these steps:
-1. On the page for lifecycle workflows, select **Workflow settings (Preview)**.
+1. On the page for lifecycle workflows, select **Workflow settings**.
-1. On the **Workflow settings (Preview)** pane, for **Email domain**, select your domain from the drop-down list of verified domains.
+1. On the **Workflow settings** pane, for **Email domain**, select your domain from the drop-down list of verified domains.
   :::image type="content" source="media/customize-workflow-email/workflow-email-settings.png" alt-text="Screenshot of workflow domain settings.":::

1. Turn on the **Use company branding banner logo** toggle if you want to use company branding in emails.
active-directory Customize Workflow Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-schedule.md
-# Customize the schedule of workflows (preview)
+# Customize the schedule of workflows
When you create workflows by using lifecycle workflows, you can fully customize them to match the schedule that fits your organization's needs. By default, workflows are scheduled to run every 3 hours. But you can set the interval to be as frequent as 1 hour or as infrequent as 24 hours.

## Customize the schedule of workflows by using the Azure portal
-Workflows that you create within lifecycle workflows follow the same schedule that you define on the **Workflow settings (Preview)** pane. To adjust the schedule, follow these steps:
+Workflows that you create within lifecycle workflows follow the same schedule that you define on the **Workflow settings** pane. To adjust the schedule, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. On the search bar near the top of the page, enter **Identity Governance** and select the result.
-1. On the left menu, select **Lifecycle workflows (Preview)**.
+1. On the left menu, select **Lifecycle workflows**.
-1. On the **Lifecycle workflows** overview page, select **Workflow settings (Preview)**.
+1. On the **Lifecycle workflows** overview page, select **Workflow settings**.
-1. On the **Workflow settings (Preview)** pane, set the schedule of workflows as an interval of 1 to 24.
+1. On the **Workflow settings** pane, set the schedule of workflows as an interval of 1 to 24.
   :::image type="content" source="media/customize-workflow-schedule/workflow-schedule-settings.png" alt-text="Screenshot of the settings for a workflow schedule.":::

1. Select **Save**.
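The interval setting accepts whole hours from 1 through 24. A small sketch of validating the value and projecting the next few run times; the fixed-interval behavior here is assumed from the description above, not taken from the service's implementation:

```python
# Sketch: validate a workflow schedule interval (whole hours, 1-24) and
# project upcoming run times. Assumed behavior for illustration only.
from datetime import datetime, timedelta

def next_runs(start: datetime, interval_hours: int, count: int = 3) -> list[datetime]:
    if not 1 <= interval_hours <= 24:
        raise ValueError("interval must be between 1 and 24 hours")
    return [start + timedelta(hours=interval_hours * i) for i in range(1, count + 1)]
```

With the default 3-hour interval, a workflow evaluated at midnight would next be evaluated at 03:00, 06:00, and 09:00.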
active-directory Delete Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md
-# Delete a lifecycle workflow (preview)
+# Delete a lifecycle workflow
You can remove workflows that you no longer need. Deleting these workflows helps keep your lifecycle strategy up to date.
The preview of lifecycle workflows requires Azure Active Directory (Azure AD) Pr
1. On the search bar near the top of the page, enter **Identity Governance**. Then select **Identity Governance** in the results.
-1. On the left menu, select **Lifecycle Workflows (Preview)**.
+1. On the left menu, select **Lifecycle Workflows**.
-1. Select **Workflows (Preview)**.
+1. Select **Workflows**.
1. On the **Workflows** page, select the workflow that you want to delete. Then select **Delete**.
The preview of lifecycle workflows requires Azure Active Directory (Azure AD) Pr
After you delete workflows, you can view them on the **Deleted workflows** page.
-1. On the left pane, select **Deleted workflows (Preview)**.
+1. On the left pane, select **Deleted workflows**.
1. On the **Deleted workflows** page, check the list of deleted workflows. Each workflow has a description, the date of deletion, and a permanent delete date. By default, the permanent delete date for a workflow is 30 days after it was originally deleted.
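The default retention described above can be sketched as a small date calculation (the 30-day window is from the article; the helper name is hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical helper: compute when a soft-deleted workflow is
# permanently removed, using the default 30-day retention window
# described in the article.
def permanent_delete_date(deleted_on: datetime, retention_days: int = 30) -> datetime:
    return deleted_on + timedelta(days=retention_days)

deleted = datetime(2023, 6, 1)
print(permanent_delete_date(deleted).date())  # 2023-07-01
```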
active-directory Lifecycle Workflow Audits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-audits.md
After filtering this information, you're also able to see other information in t
## Next steps
- [Lifecycle Workflow History](lifecycle-workflow-history.md)
-- [Check the status of a workflow (Preview)](check-status-workflow.md)
+- [Check the status of a workflow](check-status-workflow.md)
- [Azure AD audit activity reference](../reports-monitoring/reference-audit-activities.md)
active-directory Lifecycle Workflow Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-extensibility.md
The high-level steps for the Azure Logic Apps integration are as follows:
## Next steps
- [customTaskExtension resource type](/graph/api/resources/identitygovernance-customtaskextension?view=graph-rest-beta&preserve-view=true)
-- [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md)
-- [Configure a Logic App for Lifecycle Workflow use (Preview)](configure-logic-app-lifecycle-workflows.md)
+- [Trigger Logic Apps based on custom task extensions](trigger-custom-task.md)
+- [Configure a Logic App for Lifecycle Workflow use](configure-logic-app-lifecycle-workflows.md)
active-directory Lifecycle Workflow History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-history.md
-# Lifecycle Workflows history (Preview)
+# Lifecycle Workflows history
Lifecycle Workflows introduce a history feature based on summaries and details.
- **Runs summary**: Shows a summary of workflow runs, including the successful, failed, and total task counts for each run.
- **Tasks summary**: Shows a summary of the tasks processed by a workflow, including how many tasks succeeded, failed, and ran in total.
-Summaries allow you to quickly gain details about how a workflow ran for itself, or users, without going into further details in logs. For a step by step guide on getting this information, see [Check the status of a workflow (Preview)](check-status-workflow.md).
+Summaries allow you to quickly gain details about how a workflow ran for itself, or users, without going into further details in logs. For a step by step guide on getting this information, see [Check the status of a workflow](check-status-workflow.md).
## Users Summary information
Task detailed history information allows you to filter for specific information
- **Completed date**: You can filter a specific range, from as short as 24 hours up to 30 days, of when the workflow ran.
- **Tasks**: You can filter based on specific task names.
-Separating processing of the workflow from the tasks is important because, in a workflow, processing a user certain tasks could be successful, while others could fail. Whether or not a task runs after a failed task in a workflow depends on parameters such as enabling continue On Error, and their placement within the workflow. For more information, see [Common task parameters (preview)](lifecycle-workflow-tasks.md#common-task-parameters).
+Separating the processing of the workflow from its tasks is important because, when a workflow processes a user, certain tasks could succeed while others fail. Whether a task runs after a failed task in a workflow depends on parameters such as enabling **continueOnError**, and on the tasks' placement within the workflow. For more information, see [Common task parameters](lifecycle-workflow-tasks.md#common-task-parameters).
## Next steps
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
Last updated 01/26/2023
-# Lifecycle Workflow built-in tasks (Preview)
+# Lifecycle Workflow built-in tasks
Lifecycle Workflows come with many pre-configured tasks that are designed to automate common lifecycle management scenarios. These built-in tasks can be utilized to make customized workflows to suit your organization's needs. These tasks can be configured within seconds to create new workflows. These tasks also have categories based on the Joiner-Mover-Leaver model so that they can be easily placed into workflows based on need. In this article you get the complete list of tasks, information on common parameters each task has, and a list of unique parameters needed for each specific task.
active-directory Lifecycle Workflow Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-templates.md
Last updated 05/31/2023
-# Lifecycle Workflows templates (Preview)
+# Lifecycle Workflows templates
Lifecycle Workflows allows you to automate the lifecycle management process for your organization by creating workflows that contain both built-in tasks, and custom task extensions. These workflows, and the tasks within them, all fall into categories based on the Joiner-Mover-Leaver (JML) model of lifecycle management. To make this process even more efficient, Lifecycle Workflows also provide you with templates, which you can use to accelerate the setup, creation, and configuration of common lifecycle management processes. You can create workflows based on these templates as is, or you can customize them even further to match the requirements for users within your organization. In this article you get the complete list of workflow templates, common template parameters, default template parameters for specific templates, and the list of compatible tasks for each template. For full task definitions, see [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md).
active-directory Lifecycle Workflow Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-versioning.md
Detailed **Version information** are as follows:
## Next steps
- [workflowVersion resource type](/graph/api/resources/identitygovernance-workflowversion?view=graph-rest-beta&preserve-view=true)
-- [Manage workflow Properties (Preview)](manage-workflow-properties.md)
-- [Manage workflow versions (Preview)](manage-workflow-tasks.md)
+- [Manage workflow Properties](manage-workflow-properties.md)
+- [Manage workflow versions](manage-workflow-tasks.md)
active-directory Lifecycle Workflows Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflows-deployment.md
When creating custom task extensions, the scenarios for how it interacts with Li
- **Fire-and-forget scenario** - The Logic App is started, and the sequential task execution immediately continues with no response expected from the Logic App.
- **Sequential task execution waiting for response from the Logic App** - The Logic App is started, and the sequential task execution waits on the response from the Logic App.
- **Sequential task execution waiting for the response of a 3rd party system** - The Logic App is started, and the sequential task execution waits on the response from a 3rd party system that triggers the Logic App to tell the custom task extension whether or not it ran successfully.
-- For more information on custom extensions, see [Lifecycle Workflow extensibility (Preview)](lifecycle-workflow-extensibility.md)
+- For more information on custom extensions, see [Lifecycle Workflow extensibility](lifecycle-workflow-extensibility.md)
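The three interaction patterns differ mainly in whether the extension declares a callback. A hedged sketch of a `customTaskExtension` creation body, assuming the Microsoft Graph beta shape (`endpointConfiguration` pointing at a Logic App, and an optional `callbackConfiguration` for the wait-for-response scenarios); all field names and `@odata.type` strings should be checked against the current API reference:

```python
# Sketch: build a customTaskExtension request body. The property names
# and @odata.type values are assumptions based on the Microsoft Graph
# beta schema; verify before use.

def custom_task_extension_payload(name: str, subscription_id: str,
                                  resource_group: str, logic_app: str,
                                  wait_for_callback: bool = False) -> dict:
    body = {
        "displayName": name,
        "endpointConfiguration": {
            "@odata.type": "#microsoft.graph.logicAppTriggerEndpointConfiguration",
            "subscriptionId": subscription_id,
            "resourceGroupName": resource_group,
            "logicAppWorkflowName": logic_app,
        },
    }
    if wait_for_callback:
        # Wait-for-response scenarios: sequential execution pauses until
        # the Logic App (or a third party it calls) reports back, up to
        # the timeout.
        body["callbackConfiguration"] = {
            "@odata.type": "#microsoft.graph.identityGovernance.customTaskExtensionCallbackConfiguration",
            "timeoutDuration": "PT1H",
        }
    return body

# Fire-and-forget: no callbackConfiguration, so execution continues
# immediately after the Logic App is started.
fire_and_forget = custom_task_extension_payload(
    "Offboard ticket", "sub-id", "rg-lcw", "offboard-app")
```

Omitting `callbackConfiguration` corresponds to the fire-and-forget scenario; including it covers both wait-for-response variants, since the difference there lies in who sends the callback, not in the extension's shape.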
## Create your workflow

Now that you have designed and planned your workflow, you can create it in the portal. For detailed information on creating a workflow, see [Create a Lifecycle workflow](create-lifecycle-workflow.md).
active-directory Manage Workflow Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-properties.md
-# Manage workflow properties (preview)
+# Manage workflow properties
Managing workflows can be accomplished in one of two ways:

- Updating the basic properties of a workflow without creating a new version of it
To edit the properties of a workflow using the Azure portal, you do the followin
1. Type in **Identity Governance** on the search bar near the top of the page and select it.
-1. On the left menu, select **Lifecycle workflows (Preview)**.
+1. On the left menu, select **Lifecycle workflows**.
-1. On the left menu, select **Workflows (Preview)**.
+1. On the left menu, select **Workflows**.
1. Here you see a list of all of your current workflows. Select the workflow that you want to edit.

:::image type="content" source="media/manage-workflow-properties/manage-list.png" alt-text="Screenshot of the manage workflow list.":::
-6. To change the display name or description, select **Properties (Preview)**.
+6. To change the display name or description, select **Properties**.
:::image type="content" source="media/manage-workflow-properties/manage-properties.png" alt-text="Screenshot of the manage basic properties screen.":::
active-directory Manage Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-tasks.md
-# Manage workflow versions (Preview)
+# Manage workflow versions
Workflows created with Lifecycle Workflows are able to grow and change with the needs of your organization. Workflows exist as versions from creation. When you make changes other than to basic information, you create a new version of the workflow. For more information, see [Manage a workflow's properties](manage-workflow-properties.md).
Tasks within workflows can be added, edited, reordered, and removed at will. To
1. Type in **Identity Governance** on the search bar near the top of the page and select it.
-1. In the left menu, select **Lifecycle workflows (Preview)**.
+1. In the left menu, select **Lifecycle workflows**.
-1. In the left menu, select **workflows (Preview)**.
+1. In the left menu, select **workflows**.
-1. On the left side of the screen, select **Tasks (Preview)**.
+1. On the left side of the screen, select **Tasks**.
1. You can add a task to the workflow by selecting the **Add task** button.
Tasks within workflows can be added, edited, reordered, and removed at will. To
To edit the execution conditions of a workflow using the Azure portal, you do the following steps:
-1. On the left menu of Lifecycle Workflows, select **Workflows (Preview)**.
+1. On the left menu of Lifecycle Workflows, select **Workflows**.
-1. On the left side of the screen, select **Execution conditions (Preview)**.
+1. On the left side of the screen, select **Execution conditions**.
:::image type="content" source="media/manage-workflow-tasks/execution-conditions-details.png" alt-text="Screenshot of the execution condition details of a workflow." lightbox="media/manage-workflow-tasks/execution-conditions-details.png":::

1. On this screen, you're presented with **Trigger details**. Here you see a trigger type and attribute details. In the template, you can edit the attribute details to define when a workflow runs in relation to the attribute value, measured in days. This attribute value can be from 0 to 60 days.
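The attribute-based trigger edited above can be sketched as an execution-conditions fragment. This assumes the Microsoft Graph `timeBasedAttributeTrigger` shape; the 0-to-60-day bound comes from the article, and the `@odata.type` string should be verified against the current reference:

```python
# Sketch: a time-based attribute trigger for a workflow's execution
# conditions. The @odata.type value is an assumption based on the
# Microsoft Graph schema; verify before use.

def time_based_trigger(attribute: str, offset_days: int) -> dict:
    # The portal limits the attribute value to 0-60 days.
    if not 0 <= offset_days <= 60:
        raise ValueError("offsetInDays must be between 0 and 60")
    return {
        "@odata.type": "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
        "timeBasedAttribute": attribute,
        "offsetInDays": offset_days,
    }

# Example: run the workflow 7 days after the employee hire date.
trigger = time_based_trigger("employeeHireDate", 7)
```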
To edit the execution conditions of a workflow using the Azure portal, you do th
## See versions of a workflow using the Azure portal
-1. On the left menu of Lifecycle Workflows, select **Workflows (Preview)**.
+1. On the left menu of Lifecycle Workflows, select **Workflows**.
1. On this page, you see a list of all of your current workflows. Select the workflow that you want to see versions of.
-1. On the left side of the screen, select **Versions (Preview)**.
+1. On the left side of the screen, select **Versions**.
:::image type="content" source="media/manage-workflow-tasks/manage-versions.png" alt-text="Screenshot of versions of a workflow." lightbox="media/manage-workflow-tasks/manage-versions.png":::
active-directory On Demand Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/on-demand-workflow.md
-# Run a workflow on-demand (Preview)
+# Run a workflow on-demand
Scheduled workflows run every 3 hours by default, but they can also run on-demand so that you can apply them to specific users whenever you see fit. A workflow can be run on demand for any user, and doesn't take into account whether or not the user meets the workflow's execution conditions. Running a workflow on-demand allows you to test workflows before their scheduled run. This testing, on a set of up to 10 users at a time, allows you to see how a workflow will run before it processes a larger set of users. Testing your workflows before their scheduled runs helps you proactively solve potential lifecycle issues more quickly.
Use the following steps to run a workflow on-demand.
1. Type in **Identity Governance** on the search bar near the top of the page and select it.
-1. On the left menu, select **Lifecycle workflows (Preview)**.
+1. On the left menu, select **Lifecycle workflows**.
-1. select **Workflows (Preview)**
+1. Select **Workflows**.
1. On the workflow screen, select the specific workflow you want to run.
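A hedged sketch of the equivalent Microsoft Graph call body, assuming the `workflows/{id}/activate` action that takes a `subjects` collection of user IDs (the 10-user limit is from the article); the request itself is left as a comment:

```python
# Sketch: body for running a workflow on demand for specific users.
# Assumes the Graph action
# POST /identityGovernance/lifecycleWorkflows/workflows/{id}/activate
# with a "subjects" collection; verify against the current reference.

def activate_payload(user_ids: list[str]) -> dict:
    # On-demand runs are limited to a set of up to 10 users at a time.
    if not 1 <= len(user_ids) <= 10:
        raise ValueError("On-demand runs accept 1 to 10 users at a time")
    return {"subjects": [{"id": uid} for uid in user_ids]}

payload = activate_payload(["00aa00aa-bb11-cc22-dd33-44ee44ee44ee"])
# A real call would then be:
# requests.post(f"{graph_base}/workflows/{workflow_id}/activate",
#               json=payload, headers=auth_headers)
```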
active-directory Trigger Custom Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/trigger-custom-task.md
Previously updated : 05/31/2023 Last updated : 06/22/2023
-# Trigger Logic Apps based on custom task extensions (preview)
+# Trigger Logic Apps based on custom task extensions
Lifecycle Workflows can trigger custom tasks via an extension to Azure Logic Apps, which extends the capabilities of Lifecycle Workflows beyond the built-in tasks. The steps for triggering a Logic App based on a custom task extension are as follows:
To use a custom task extension in your workflow, first a custom task extension m
1. Select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, select **Lifecycle Workflows (Preview)**.
+1. In the left menu, select **Lifecycle Workflows**.
1. On the Lifecycle workflows screen, select **Custom task extension**.
After you've created your custom task extension, you can now add it to a workflo
To add a custom task extension to a workflow, do the following steps:
-1. In the left menu, select **Lifecycle workflows (Preview)**.
+1. In the left menu, select **Lifecycle workflows**.
-1. In the left menu, select **Workflows (Preview)**.
+1. In the left menu, select **Workflows**.
1. Select the workflow that you want to add the custom task extension to.
To Add a custom task extension to a workflow, you'd do the following steps:
## Next steps
-- [Lifecycle workflow extensibility (Preview)](lifecycle-workflow-extensibility.md)
+- [Lifecycle workflow extensibility](lifecycle-workflow-extensibility.md)
- [Manage Workflow Versions](manage-workflow-tasks.md)
active-directory Tutorial Offboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-portal.md
Title: Execute employee termination tasks by using lifecycle workflows (preview)
-description: Learn how to remove users from an organization in real time on their last day of work by using lifecycle workflows (preview) in the Azure portal.
+ Title: Execute employee termination tasks by using lifecycle workflows
+description: Learn how to remove users from an organization in real time on their last day of work by using lifecycle workflows in the Azure portal.
Previously updated : 03/18/2023 Last updated : 06/22/2023
-# Execute employee termination tasks by using lifecycle workflows (preview)
+# Execute employee termination tasks by using lifecycle workflows
-This tutorial provides a step-by-step guide on how to execute a real-time employee termination by using lifecycle workflows (preview) in the Azure portal.
+This tutorial provides a step-by-step guide on how to execute a real-time employee termination by using lifecycle workflows in the Azure portal.
This *leaver* scenario runs a workflow on demand and accomplishes the following tasks:
Use the following steps to create a leaver on-demand workflow that will execute
1. Sign in to the [Azure portal](https://portal.azure.com).
2. On the right, select **Azure Active Directory**.
3. Select **Identity Governance**.
-4. Select **Lifecycle workflows (Preview)**.
+4. Select **Lifecycle workflows**.
5. On the **Overview** tab, select **New workflow**.

:::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of the Overview tab and the button for creating a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png":::
To run a workflow on demand for users by using the Azure portal:
## Check tasks and workflow status
-At any time, you can monitor the status of workflows and tasks. Three data pivots, users runs, and tasks are currently available in public preview. You can learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In this tutorial, you check the status by using the user-focused reports.
+At any time, you can monitor the status of workflows and tasks. Three data pivots are currently available: users, runs, and tasks. You can learn more in the how-to guide [Check the status of a workflow](check-status-workflow.md). In this tutorial, you check the status by using the user-focused reports.
-1. On the **Overview** page for the workflow, select **Workflow history (Preview)**.
+1. On the **Overview** page for the workflow, select **Workflow history**.
:::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history-real-time.png" alt-text="Screenshot of the overview page for a workflow." lightbox="media/tutorial-lifecycle-workflows/workflow-history-real-time.png":::
At any time, you can monitor the status of workflows and tasks. Three data pivot
## Next steps
-- [Prepare user accounts for lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
+- [Prepare user accounts for lifecycle workflows](tutorial-prepare-azure-ad-user-accounts.md)
- [Complete tasks in real time on an employee's last day of work by using lifecycle workflow APIs](/graph/tutorial-lifecycle-workflows-offboard-custom-workflow)
active-directory Tutorial Onboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md
Title: 'Automate employee onboarding tasks before their first day of work with Azure portal (preview)'
-description: Tutorial for onboarding users to an organization using Lifecycle workflows with Azure portal (preview).
+ Title: 'Automate employee onboarding tasks before their first day of work with Azure portal'
+description: Tutorial for onboarding users to an organization using Lifecycle workflows with Azure portal.
Previously updated : 03/18/2023 Last updated : 06/22/2023
-# Automate employee onboarding tasks before their first day of work with Azure portal (preview)
+# Automate employee onboarding tasks before their first day of work with Azure portal
This tutorial provides a step-by-step guide on how to automate prehire tasks with Lifecycle workflows using the Azure portal.
Use the following steps to create a pre-hire workflow that generates a TAP and s
1. Sign in to Azure portal.
2. On the right, select **Azure Active Directory**.
3. Select **Identity Governance**.
- 4. Select **Lifecycle workflows (Preview)**.
- 5. On the **Overview (Preview)** page, select **New workflow**.
+ 4. Select **Lifecycle workflows**.
+ 5. On the **Overview** page, select **New workflow**.
:::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png":::

6. From the templates, select **Select** under **Onboard pre-hire employee**.
To run a workflow on-demand, for users using the Azure portal, do the following
## Check tasks and workflow status
-At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots, users runs, and tasks that are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we look at the status using the user focused reports.
+At any time, you may monitor the status of the workflows and the tasks. As a reminder, three different data pivots are currently available: users, runs, and tasks. You may learn more in the how-to guide [Check the status of a workflow](check-status-workflow.md). In the course of this tutorial, we look at the status using the user-focused reports.
- 1. To begin, select the **Workflow history (Preview)** tab to view the user summary and associated workflow tasks and statuses.
+ 1. To begin, select the **Workflow history** tab to view the user summary and associated workflow tasks and statuses.
:::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history.png" alt-text="Screenshot of workflow History status." lightbox="media/tutorial-lifecycle-workflows/workflow-history.png":::
-1. Once the **Workflow history (Preview)** tab has been selected, you land on the workflow history page as shown.
+1. Once the **Workflow history** tab has been selected, you land on the workflow history page as shown.
:::image type="content" source="media/tutorial-lifecycle-workflows/user-summary.png" alt-text="Screenshot of workflow history overview" lightbox="media/tutorial-lifecycle-workflows/user-summary.png":::

1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith.
At any time, you may monitor the status of the workflows and the tasks. As a rem
## Enable the workflow schedule
-After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may select the **Enable Schedule** checkbox on the Properties (Preview) page.
+After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may select the **Enable Schedule** checkbox on the Properties page.
:::image type="content" source="media/tutorial-lifecycle-workflows/enable-schedule.png" alt-text="Screenshot of enabling workflow schedule." lightbox="media/tutorial-lifecycle-workflows/enable-schedule.png":::

## Next steps
-- [Tutorial: Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
+- [Tutorial: Preparing user accounts for Lifecycle workflows](tutorial-prepare-azure-ad-user-accounts.md)
- [Automate employee onboarding tasks before their first day of work using Lifecycle Workflows APIs](/graph/tutorial-lifecycle-workflows-onboard-custom-workflow)
active-directory Tutorial Prepare Azure Ad User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-azure-ad-user-accounts.md
Title: 'Tutorial: Preparing user accounts for Lifecycle workflows (preview)'
-description: Tutorial for preparing user accounts for Lifecycle workflows (preview).
+ Title: 'Tutorial: Preparing user accounts for Lifecycle workflows'
+description: Tutorial for preparing user accounts for Lifecycle workflows.
-# Preparing user accounts for Lifecycle workflows tutorials (Preview)
+# Preparing user accounts for Lifecycle workflows tutorials
For the on-boarding and off-boarding tutorials, you need accounts for which the workflows are executed. This section helps you prepare these accounts. If you already have test accounts that meet the following requirements, you can proceed directly to the on-boarding and off-boarding tutorials. Two accounts are required for the on-boarding tutorials: one account for the new hire and another account that acts as the manager of the new hire. The new hire account must have the following attributes set:
Once your user(s) has been successfully created in Azure AD, you may proceed to
## Additional steps for pre-hire scenario
-There are some additional steps that you should be aware of when testing either the [On-boarding users to your organization using Lifecycle workflows with Azure portal (preview)](tutorial-onboard-custom-workflow-portal.md) tutorial or the [On-boarding users to your organization using Lifecycle workflows with Microsoft Graph (preview)](tutorial-onboard-custom-workflow-graph.md) tutorial.
+There are some additional steps that you should be aware of when testing either the [On-boarding users to your organization using Lifecycle workflows with Azure portal](tutorial-onboard-custom-workflow-portal.md) tutorial or the [On-boarding users to your organization using Lifecycle workflows with Microsoft Graph](tutorial-onboard-custom-workflow-graph.md) tutorial.
### Edit the user's attributes using the Azure portal

Some of the attributes required for the pre-hire onboarding tutorial are exposed through the Azure portal and can be set there.
To use this feature, it must be enabled on our Azure AD tenant. To do this, use
## Additional steps for leaver scenario
-There are some additional steps that you should be aware of when testing either the Off-boarding users from your organization using Lifecycle workflows with Azure portal (preview) tutorial or the Off-boarding users from your organization using Lifecycle workflows with Microsoft Graph (preview) tutorial.
+There are some additional steps that you should be aware of when testing either the Off-boarding users from your organization using Lifecycle workflows with Azure portal tutorial or the Off-boarding users from your organization using Lifecycle workflows with Microsoft Graph tutorial.
### Set up user with groups and Teams with team membership
A user with groups and Teams memberships is required before you begin the tutori
## Next steps
-- [On-boarding users to your organization using Lifecycle workflows with Azure portal (preview)](tutorial-onboard-custom-workflow-portal.md)
-- [On-boarding users to your organization using Lifecycle workflows with Microsoft Graph (preview)](tutorial-onboard-custom-workflow-graph.md)
-- [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Azure portal (preview)](tutorial-offboard-custom-workflow-portal.md)
-- [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Microsoft Graph (preview)](tutorial-offboard-custom-workflow-graph.md)
+- [On-boarding users to your organization using Lifecycle workflows with Azure portal](tutorial-onboard-custom-workflow-portal.md)
+- [On-boarding users to your organization using Lifecycle workflows with Microsoft Graph](tutorial-onboard-custom-workflow-graph.md)
+- [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Azure portal](tutorial-offboard-custom-workflow-portal.md)
+- [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Microsoft Graph](tutorial-offboard-custom-workflow-graph.md)
active-directory Tutorial Scheduled Leaver Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md
Title: Automate employee offboarding tasks after their last day of work with Azure portal (preview)
-description: Tutorial for post off-boarding users from an organization using Lifecycle workflows with Azure portal (preview).
+ Title: Automate employee offboarding tasks after their last day of work with Azure portal
+description: Tutorial for post off-boarding users from an organization using Lifecycle workflows with Azure portal.
-# Automate employee offboarding tasks after their last day of work with Azure portal (preview)
+# Automate employee offboarding tasks after their last day of work with Azure portal
This tutorial provides a step-by-step guide on how to configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Azure portal.
Use the following steps to create a scheduled leaver workflow that will configur
1. Sign in to Azure portal.
2. On the right, select **Azure Active Directory**.
3. Select **Identity Governance**.
- 4. Select **Lifecycle workflows (Preview)**.
- 5. On the **Overview (Preview)** page, select **New workflow**.
+ 4. Select **Lifecycle workflows**.
+ 5. On the **Overview** page, select **New workflow**.
:::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png":::

6. From the templates, select **Select** under **Post-offboarding of an employee**.
To run a workflow on-demand, for users using the Azure portal, do the following
## Check tasks and workflow status
-At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots, users runs, and tasks that are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we'll look at the status using the user focused reports.
+At any time, you may monitor the status of the workflows and the tasks. As a reminder, three different data pivots are currently available: users, runs, and tasks. You may learn more in the how-to guide [Check the status of a workflow](check-status-workflow.md). In the course of this tutorial, we'll look at the status using the user-focused reports.
- 1. To begin, select the **Workflow history (Preview)** tab on the left to view the user summary and associated workflow tasks and statuses.
+ 1. To begin, select the **Workflow history** tab on the left to view the user summary and associated workflow tasks and statuses.
:::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history-post-offboard.png" alt-text="Screenshot of the workflow history summary." lightbox="media/tutorial-lifecycle-workflows/workflow-history-post-offboard.png":::
-1. Once the **Workflow history (Preview)** tab has been selected, you'll land on the workflow history page as shown.
+1. Once the **Workflow history** tab has been selected, you'll land on the workflow history page as shown.
:::image type="content" source="media/tutorial-lifecycle-workflows/user-summary-post-offboard.png" alt-text="Screenshot of the workflow history overview." lightbox="media/tutorial-lifecycle-workflows/user-summary-post-offboard.png":::

1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith.
At any time, you may monitor the status of the workflows and the tasks. As a rem
## Enable the workflow schedule
-After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may select the **Enable Schedule** checkbox on the Properties (Preview) page.
+After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may select the **Enable Schedule** checkbox on the Properties page.
 :::image type="content" source="media/tutorial-lifecycle-workflows/enable-schedule.png" alt-text="Screenshot of workflow enabled schedule." lightbox="media/tutorial-lifecycle-workflows/enable-schedule.png":::

## Next steps

-- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
+- [Preparing user accounts for Lifecycle workflows](tutorial-prepare-azure-ad-user-accounts.md)
- [Automate employee offboarding tasks after their last day of work using Lifecycle Workflows APIs](/graph/tutorial-lifecycle-workflows-scheduled-leaver)
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
Previously updated : 01/25/2023 Last updated : 06/22/2023
active-directory What Are Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-are-lifecycle-workflows.md
Previously updated : 05/31/2023 Last updated : 06/22/2023
-# What are lifecycle workflows (preview)?
+# What are lifecycle workflows?
-Lifecycle workflows (preview) are a new identity governance feature that enables organizations to manage Azure Active Directory (Azure AD) users by automating these three basic lifecycle processes:
+Lifecycle workflows are a new identity governance feature that enables organizations to manage Azure Active Directory (Azure AD) users by automating these three basic lifecycle processes:
- **Joiner**: When an individual enters the scope of needing access. An example is a new employee joining a company or organization. - **Mover**: When an individual moves between boundaries within an organization. This movement might require more access or authorization. An example is a user who was in marketing and is now a member of the sales organization.
active-directory Workflows Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/workflows-faqs.md
- Title: 'Lifecycle workflows FAQs (preview)'
-description: Frequently asked questions about Lifecycle workflows (preview).
+ Title: 'Lifecycle workflows FAQs'
+description: Frequently asked questions about Lifecycle workflows.
-# Lifecycle workflows - FAQs (preview)
+# Lifecycle workflows - FAQs
In this article, you'll find answers to commonly asked questions about [Lifecycle Workflows](what-are-lifecycle-workflows.md). Check back to this page frequently; changes happen often, and answers are continually being added.
Some tasks do update existing attributes; however, we don't currently share th
### Is it possible for me to create new tasks and how? For example, triggering other graph APIs/web hooks?
-We currently don't support the ability to create new tasks outside of the set of tasks supported in the task templates. As an alternative, you may accomplish this by setting up a logic app and then creating a logic apps task in Lifecycle Workflows with the URL. For more information, see [Trigger Logic Apps based on custom task extensions (preview)](trigger-custom-task.md)
+We currently don't support the ability to create new tasks outside of the set of tasks supported in the task templates. As an alternative, you may accomplish this by setting up a logic app and then creating a logic apps task in Lifecycle Workflows with the URL. For more information, see [Trigger Logic Apps based on custom task extensions](trigger-custom-task.md).
## Next steps

-- [What are Lifecycle workflows? (Preview)](what-are-lifecycle-workflows.md)
+- [What are Lifecycle workflows?](what-are-lifecycle-workflows.md)
active-directory Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/accounts.md
na Previously updated : 04/04/2023 Last updated : 06/21/2023
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history.md
Required permissions | For permissions required to apply an update, see [Azure A
## Retiring Azure AD Connect 1.x versions > [!IMPORTANT]
-> *As of August 31, 2022, all 1.x versions of Azure AD Connect are retired because they include SQL Server 2012 components that will no longer be supported.* Upgrade to the most recent version of Azure AD Connect (2.x version) by that date or [evaluate and switch to Azure AD cloud sync](../cloud-sync/what-is-cloud-sync.md).
-> AADConnect V1.x may stop working on December 31st, due to the retirement of the ADAL library service on that date.
+> Action required: Synchronization will stop working on October 1, 2023, for any customers still running Azure AD Connect Sync V1. Customers using cloud sync or Azure AD Connect V2 will remain fully operational with no action required. If an upgrade is required, see [Decommission Azure AD Connect V1](https://aka.ms/DecommissionAADConnectV1) for more information and next-step guidance.
## Retiring Azure AD Connect 2.x versions > [!IMPORTANT]
To read more about autoupgrade, see [Azure AD Connect: Automatic upgrade](how-to
## 2.2.1.0 ### Release status
-5/23/2023: Released for autoupgrade only
+6/19/2023: Released for download and autoupgrade.
### Functional Changes - We have enabled Auto Upgrade for tenants with custom synchronization rules. Note that deleted (not disabled) default rules will be re-created and enabled upon Auto Upgrade.
 + - We have added the Microsoft Azure AD Connect Agent Updater service to the install. This new service will be used for future auto upgrades.
- We have removed the Synchronization Service WebService Connector Config program from the install.

### Bug Fixes
active-directory Grant Consent Single User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-consent-single-user.md
In the example, the resource enterprise application is Microsoft Graph of object
1. Confirm that you've granted consent to the user by running the following request.

   ```http
- GET https://graph.microsoft.com/v1.0/oauth2PermissionGrants?$filter=clientId eq 'b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94' and consentType eq 'principal'
+ GET https://graph.microsoft.com/v1.0/oauth2PermissionGrants?$filter=clientId eq 'b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94' and consentType eq 'Principal'
   ```

1. Assign the app to the user. This ensures that the user can sign in if assignment is required, and ensures that the app is available through the user's My Apps portal. In the following example, `resourceId` represents the client app to which the user is being assigned. The user will be assigned the default app role, which is `00000000-0000-0000-0000-000000000000`.
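The two Graph calls above can be sketched as follows. This is a minimal, illustrative helper (not an official SDK sample): it only builds the filter URL for the consent check and the request body for the app role assignment. The GUID is the article's example client ID; the user and service principal IDs are placeholders you would replace with your own. Note the `consentType` value is the case-sensitive string `Principal`.

```python
# Illustrative sketch: build the Microsoft Graph request pieces described above.
# The client ID is the article's example; other IDs are placeholders.
from urllib.parse import quote

GRAPH = "https://graph.microsoft.com/v1.0"

def grants_filter_url(client_id: str) -> str:
    """URL listing per-user ('Principal') consent grants for a client app.
    consentType values are case-sensitive: 'Principal' or 'AllPrincipals'."""
    flt = f"clientId eq '{client_id}' and consentType eq 'Principal'"
    return f"{GRAPH}/oauth2PermissionGrants?$filter={quote(flt)}"

def app_role_assignment_body(user_id: str, client_sp_id: str) -> dict:
    """POST body for /users/{id}/appRoleAssignments assigning the default
    app role (the all-zero GUID) so the app appears in the user's My Apps."""
    return {
        "principalId": user_id,       # the user being assigned
        "resourceId": client_sp_id,   # service principal of the client app
        "appRoleId": "00000000-0000-0000-0000-000000000000",  # default role
    }

url = grants_filter_url("b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94")
```

You would send `url` as a GET and the body as a POST with an appropriately scoped access token; the sketch deliberately stops short of issuing the HTTP calls.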
active-directory Admin Units Restricted Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-restricted-management.md
Previously updated : 06/09/2023 Last updated : 06/22/2023
> Restricted management administrative units are currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Restricted management administrative units allow you to protect specific objects in your tenant from access by anyone other than a specific set of administrators that you designate. This allows you to meet security or compliance requirements without having to remove tenant-level role assignments from your administrators.
+Restricted management administrative units allow you to protect specific objects in your tenant from modification by anyone other than a specific set of administrators that you designate. This allows you to meet security or compliance requirements without having to remove tenant-level role assignments from your administrators.
## Why use restricted management administrative units?
active-directory Alert Enterprise Guardian Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alert-enterprise-guardian-tutorial.md
+
+ Title: Azure Active Directory SSO integration with AlertEnterprise-Guardian
+description: Learn how to configure single sign-on between Azure Active Directory and AlertEnterprise-Guardian.
++++++++ Last updated : 06/16/2023++++
+# Azure Active Directory SSO integration with AlertEnterprise-Guardian
+
+In this article, you'll learn how to integrate AlertEnterprise-Guardian with Azure Active Directory (Azure AD). The application automates the identity management lifecycle, and its built-in Regulatory Compliance ensures controls are in place before granting access to identities. When you integrate AlertEnterprise-Guardian with Azure AD, you can:
+
+* Control in Azure AD who has access to AlertEnterprise-Guardian.
+* Enable your users to be automatically signed-in to AlertEnterprise-Guardian with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for AlertEnterprise-Guardian in a test environment. AlertEnterprise-Guardian supports **IDP** initiated single sign-on.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with AlertEnterprise-Guardian, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* AlertEnterprise-Guardian single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the AlertEnterprise-Guardian application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add AlertEnterprise-Guardian from the Azure AD gallery
+
+Add AlertEnterprise-Guardian from the Azure AD application gallery to configure single sign-on with AlertEnterprise-Guardian. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **AlertEnterprise-Guardian** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the value:
+ `urn:mace:saml:pac4j.org`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.alerthsc.com/api/auth/sso/callback?client_name=<Client_Name>`
+
+ > [!Note]
+ > The Reply URL is not real. Update this value with the actual Reply URL. Contact [AlertEnterprise-Guardian support team](mailto:info@alertenterprise.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. AlertEnterprise-Guardian application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the AlertEnterprise-Guardian application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | tenant | <Share_By_ALERT_Team> |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
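The Reply URL pattern in the procedure above is a simple placeholder substitution. The hypothetical helper below (not part of any Azure AD or AlertEnterprise API) shows how the final value is assembled once the support team supplies the subdomain and client name; the example inputs are illustrative only.

```python
# Hypothetical helper: fill in the AlertEnterprise-Guardian Reply URL pattern
#   https://<SUBDOMAIN>.alerthsc.com/api/auth/sso/callback?client_name=<Client_Name>
# Both values come from the AlertEnterprise-Guardian support team.
def guardian_reply_url(subdomain: str, client_name: str) -> str:
    if not subdomain or not client_name:
        raise ValueError("both subdomain and client_name are required")
    return (
        f"https://{subdomain}.alerthsc.com/api/auth/sso/callback"
        f"?client_name={client_name}"
    )
```

For example, `guardian_reply_url("contoso", "azuread")` yields `https://contoso.alerthsc.com/api/auth/sso/callback?client_name=azuread`; paste the resulting value into the **Reply URL** textbox.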
+
+## Configure AlertEnterprise-Guardian SSO
+
+To configure single sign-on on **AlertEnterprise-Guardian** side, you need to send the **App Federation Metadata Url** to [AlertEnterprise-Guardian support team](mailto:info@alertenterprise.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create AlertEnterprise-Guardian test user
+
+In this section, you create a user called Britta Simon at AlertEnterprise-Guardian. Work with [AlertEnterprise-Guardian support team](mailto:info@alertenterprise.com) to add the users in the AlertEnterprise-Guardian platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on Test this application in Azure portal and you should be automatically signed in to the AlertEnterprise-Guardian for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the AlertEnterprise-Guardian tile in the My Apps, you should be automatically signed in to the AlertEnterprise-Guardian for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure AlertEnterprise-Guardian you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Isg Governx Federation Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/isg-governx-federation-tutorial.md
+
+ Title: Azure Active Directory SSO integration with ISG GovernX Federation
+description: Learn how to configure single sign-on between Azure Active Directory and ISG GovernX Federation.
++++++++ Last updated : 06/16/2023++++
+# Azure Active Directory SSO integration with ISG GovernX Federation
+
+In this article, you'll learn how to integrate ISG GovernX Federation with Azure Active Directory (Azure AD). ISG GovernX Federation provides a template for federation between ISG and clients' identity providers (IdPs). When you integrate ISG GovernX Federation with Azure AD, you can:
+
+* Control in Azure AD who has access to ISG GovernX Federation.
+* Enable your users to be automatically signed-in to ISG GovernX Federation with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for ISG GovernX Federation in a test environment. ISG GovernX Federation supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with ISG GovernX Federation, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ISG GovernX Federation single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the ISG GovernX Federation application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add ISG GovernX Federation from the Azure AD gallery
+
+Add ISG GovernX Federation from the Azure AD application gallery to configure single sign-on with ISG GovernX Federation. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **ISG GovernX Federation** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://www.okta.com/saml2/service-provider/<GovernX_UniqueID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://isg-one.okta.com/sso/saml2/<ID>`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://isg-one.okta.com/sso/saml2/<ID>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [ISG GovernX Federation support team](mailto:infrastructureteam@isg-one.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up ISG GovernX Federation** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure ISG GovernX Federation SSO
+
+To configure single sign-on on **ISG GovernX Federation** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [ISG GovernX Federation support team](mailto:infrastructureteam@isg-one.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create ISG GovernX Federation test user
+
+In this section, a user called B.Simon is created in ISG GovernX Federation. ISG GovernX Federation supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in ISG GovernX Federation, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to ISG GovernX Federation Sign-on URL where you can initiate the login flow.
+
+* Go to ISG GovernX Federation Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the ISG GovernX Federation for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the ISG GovernX Federation tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the ISG GovernX Federation for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure ISG GovernX Federation you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory It Conductor Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/it-conductor-tutorial.md
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure IT-Conductor SSO
-To configure single sign-on on **IT-Conductor** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [IT-Conductor support team](mailto:support@itconductor.com). They set this setting to have the SAML SSO connection set properly on both sides. For more information, please refer [this](https://docs.itconductor.com/wiki/start-here/sso-setup) link.
+To configure single sign-on on **IT-Conductor** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [IT-Conductor support team](mailto:support@itconductor.com). They set this setting to have the SAML SSO connection set properly on both sides. For more information, please refer [this](https://docs.itconductor.com/start-here/sso-setup) link.
### Create IT-Conductor test user
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure IT-Conductor you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure IT-Conductor you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Threatq Platform Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/threatq-platform-tutorial.md
+
+ Title: Azure Active Directory SSO integration with ThreatQ Platform
+description: Learn how to configure single sign-on between Azure Active Directory and ThreatQ Platform.
++++++++ Last updated : 06/16/2023++++
+# Azure Active Directory SSO integration with ThreatQ Platform
+
+In this article, you'll learn how to integrate ThreatQ Platform with Azure Active Directory (Azure AD). ThreatQ improves the efficiency and effectiveness of security operations by fusing disparate data sources, tools and teams to accelerate and automate threat detection, investigation and response. When you integrate ThreatQ Platform with Azure AD, you can:
+
+* Control in Azure AD who has access to ThreatQ Platform.
+* Enable your users to be automatically signed-in to ThreatQ Platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for ThreatQ Platform in a test environment. ThreatQ Platform supports **SP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with ThreatQ Platform, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ThreatQ Platform single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the ThreatQ Platform application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add ThreatQ Platform from the Azure AD gallery
+
+Add ThreatQ Platform from the Azure AD application gallery to configure single sign-on with ThreatQ Platform. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **ThreatQ Platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<Customer_Environment>.threatq.online/api/saml/metadata`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<Customer_Environment>.threatq.online/api/saml/acs`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<Customer_Environment>.threatq.online/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [ThreatQ Platform support team](mailto:support@threatq.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Your ThreatQ Platform application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example of this. The default value of **Unique User Identifier** is **user.userprincipalname**, but ThreatQ Platform expects this to be mapped to the user's email address. For that, you can use the **user.mail** attribute from the list, or use the appropriate attribute value based on your organization's configuration.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the ThreatQ Platform application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | uid | user.mail |
+ | Groups | user.groups |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificate-base64-download.png)
+
+1. On the **Set up ThreatQ Platform** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure ThreatQ Platform SSO
+
+To configure single sign-on on **ThreatQ Platform** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from Azure portal to [ThreatQ Platform support team](mailto:support@threatq.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create ThreatQ Platform test user
+
+In this section, a user called B.Simon is created in ThreatQ Platform. ThreatQ Platform supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in ThreatQ Platform, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to ThreatQ Platform Sign-on URL where you can initiate the login flow.
+
+* Go to ThreatQ Platform Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the ThreatQ Platform tile in the My Apps, this will redirect to ThreatQ Platform Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md)
+
+## Next steps
+
+Once you configure ThreatQ Platform, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Veracode Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/veracode-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
Notes:
-* These instructions assume you are using the new [Single Sign On/Just-in-Time Provisioning feature from Veracode](https://docs.veracode.com/r/Signing_On). To activate this feature if it is not already active, please contact Veracode Support.
+* These instructions assume you are using the new [Single Sign On/Just-in-Time Provisioning feature from Veracode](https://docs.veracode.com/r/about_saml). To activate this feature if it is not already active, please contact Veracode Support.
* These instructions are valid for all [Veracode regions](https://docs.veracode.com/r/Region_Domains_for_Veracode_APIs). 1. In a different web browser window, sign in to your Veracode company site as an administrator.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Veracode you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Veracode, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Worksmobile Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/worksmobile-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure LINE WORKS SSO
-To configure single sign-on on **LINE WORKS** side, please read the [LINE WORKS SSO documents](https://developers.worksmobile.com/jp/document/1001080101) and configure a LINE WORKS setting.
+To configure single sign-on on **LINE WORKS** side, please read the [LINE WORKS SSO documents](https://jp1-developers.worksmobile.com/jp/docs/?lang=en) and configure a LINE WORKS setting.
> [!NOTE] > You need to convert the downloaded Certificate file from .cert to .pem
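The note above doesn't spell out the conversion. One common approach uses `openssl`; the following is a sketch with hypothetical file names (the sample certificate generation exists only so the commands are runnable end to end). If the downloaded file is already Base64-encoded text, renaming it to `.pem` may be sufficient.

```shell
# For illustration only: generate a sample DER-encoded certificate.
# In practice, start from the .cert file downloaded from the Azure portal.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example" \
  -keyout sample.key -outform der -out sample.cert

# Convert the DER-encoded .cert file to PEM format.
openssl x509 -inform der -in sample.cert -out sample.pem

# A PEM certificate starts with the BEGIN CERTIFICATE header.
head -1 sample.pem
# -----BEGIN CERTIFICATE-----
```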
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
Learn more about [Kubernetes service - UseAzurePolicyForKubernetes (Disable the
This cluster is not using ephemeral OS disks which can provide lower read/write latency, along with faster node scaling and cluster upgrades
-Learn more about [Kubernetes service - UseEphemeralOSdisk (Use Ephemeral OS disk)](../aks/cluster-configuration.md#ephemeral-os).
+Learn more about [Kubernetes service - UseEphemeralOSdisk (Use Ephemeral OS disk)](../aks/concepts-storage.md#ephemeral-os-disk).
### Free and Standard tiers for AKS control plane management
aks Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-cost.md
Explore the following table of recommendations to optimize your AKS configuratio
| Recommendation | Benefit | |-|--| |**Cluster architecture**: Utilize AKS cluster pre-set configurations. |From the Azure portal, the **cluster preset configurations** option helps offload this initial challenge by providing a set of recommended configurations that are cost-conscious and performant regardless of environment. Mission critical applications may require more sophisticated VM instances, while small development and test clusters may benefit from the lighter-weight, preset options where availability, Azure Monitor, Azure Policy, and other features are turned off by default. The **Dev/Test** and **Cost-optimized** pre-sets help remove unnecessary added costs.|
-|**Cluster architecture:** Consider using [ephemeral OS disks](cluster-configuration.md#ephemeral-os).|Ephemeral OS disks provide lower read/write latency, along with faster node scaling and cluster upgrades. Containers aren't designed to have local state persisted to the managed OS disk, and this behavior offers limited value to AKS. AKS defaults to an ephemeral OS disk if you chose the right VM series and the OS disk can fit in the VM cache or temporary storage SSD.|
+|**Cluster architecture:** Consider using [ephemeral OS disks](concepts-storage.md#ephemeral-os-disk).|Ephemeral OS disks provide lower read/write latency, along with faster node scaling and cluster upgrades. Containers aren't designed to have local state persisted to the managed OS disk, and this behavior offers limited value to AKS. AKS defaults to an ephemeral OS disk if you chose the right VM series and the OS disk can fit in the VM cache or temporary storage SSD.|
|**Cluster and workload architectures:** Use the [Start and Stop feature](start-stop-cluster.md) in Azure Kubernetes Services (AKS).|The AKS Stop and Start cluster feature allows AKS customers to pause an AKS cluster, saving time and cost. The stop and start feature keeps cluster configurations in place and customers can pick up where they left off without reconfiguring the clusters.| |**Workload architecture:** Consider using [Azure Spot VMs](spot-node-pool.md) for workloads that can handle interruptions, early terminations, and evictions.|For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads may be good candidates for you to schedule on a spot node pool. Using spot VMs for nodes with your AKS cluster allows you to take advantage of unused capacity in Azure at a significant cost savings.| |**Cluster architecture:** Enforce [resource quotas](operator-best-practices-scheduler.md) at the namespace level.|Resource quotas provide a way to reserve and limit resources across a development team or project. These quotas are defined on a namespace and can be used to set quotas on compute resources, storage resources, and object counts. When you define resource quotas, all pods created in the namespace must provide limits or requests in their pod specifications.| |**Cluster architecture:** Sign up for [Azure Reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md). | If you properly planned for capacity, your workload is predictable and exists for an extended period of time, sign up for [Azure Reserved Instances](../virtual-machines/prepay-reserved-vm-instances.md) to further reduce your resource costs.| |**Cluster architecture:** Use Kubernetes [Resource Quotas](operator-best-practices-scheduler.md#enforce-resource-quotas). 
| Resource quotas can be used to limit resource consumption for each namespace in your cluster, and by extension resource utilization for the Azure service.|
-|**Cluster and workload architectures:** Cost management using monitoring and observability tools. | OpenCost on AKS introduces a new community-driven [specification](https://github.com/opencost/opencost/blob/develop/spec/opencost-specv01.md) and implementation to bring greater visibility into current and historic Kubernetes spend and resource allocation. OpenCost, born out of [Kubecost](https://www.kubecost.com/), is an open-source, vendor-neutral [CNCF sandbox project](https://www.cncf.io/sandbox-projects/) that recently became a [FinOps Certified Solution](https://www.finops.org/certifications/finops-certified-solution/). Customer specific prices are now included using the [Azure Consumption Price Sheet API](/rest/api/consumption/price-sheet), ensuring accurate cost reporting that accounts for consumption and savings plan discounts. For out-of-cluster analysis or to ingest allocation data into an existing BI pipeline, you can export a CSV with daily infrastructure cost breakdown by Kubernetes constructs (namespace, controller, service, pod, job and more) to your Azure Storage Account or local storage with minimal configuration. CSV also includes resource utilization metrics for CPU, GPU, memory, load balancers, and persistent volumes. For in-cluster visualization, OpenCost UI enables real-time cost drill down by Kubernetes constructs. Alternatively, directly query the OpenCost API to access cost allocation data. For more information on Azure specific integration, see [OpenCost docs](https://www.opencost.io/docs).|
+|**Cluster and workload architectures:** Cost management using monitoring and observability tools. | OpenCost on AKS introduces a new community-driven [specification](https://github.com/opencost/opencost/blob/develop/spec/opencost-specv01.md) and implementation to bring greater visibility into current and historic Kubernetes spend and resource allocation. OpenCost, born out of [Kubecost](https://www.kubecost.com/), is an open-source, vendor-neutral [CNCF sandbox project](https://www.cncf.io/sandbox-projects/) that recently became a [FinOps Certified Solution](https://www.finops.org/partner-certifications/#finops-certified-solution). Customer specific prices are now included using the [Azure Consumption Price Sheet API](/rest/api/consumption/price-sheet), ensuring accurate cost reporting that accounts for consumption and savings plan discounts. For out-of-cluster analysis or to ingest allocation data into an existing BI pipeline, you can export a CSV with daily infrastructure cost breakdown by Kubernetes constructs (namespace, controller, service, pod, job and more) to your Azure Storage Account or local storage with minimal configuration. CSV also includes resource utilization metrics for CPU, GPU, memory, load balancers, and persistent volumes. For in-cluster visualization, OpenCost UI enables real-time cost drill down by Kubernetes constructs. Alternatively, directly query the OpenCost API to access cost allocation data. For more information on Azure specific integration, see [OpenCost docs](https://www.opencost.io/docs).|
|**Cluster architecture:** Improve cluster operations efficiency.|Managing multiple clusters increases operational overhead for engineers. [AKS auto upgrade](auto-upgrade-cluster.md) and [AKS Node Auto-Repair](node-auto-repair.md) helps improve day-2 operations. Learn more about [best practices for AKS Operators](operator-best-practices-cluster-isolation.md).| ## Next steps
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
Title: Cluster configuration in Azure Kubernetes Services (AKS)
description: Learn how to configure a cluster in Azure Kubernetes Service (AKS) Previously updated : 02/16/2023 Last updated : 06/20/2023 # Configure an AKS cluster
When you create a new cluster or add a new node pool to an existing cluster, by
> [!IMPORTANT] > Default OS disk sizing is only used on new clusters or node pools when ephemeral OS disks are not supported and a default OS disk size isn't specified. The default OS disk size may impact the performance or cost of your cluster, and you cannot change the OS disk size after cluster or node pool creation. This default disk sizing affects clusters or node pools created on July 2022 or later.
-## Ephemeral OS
+## Use Ephemeral OS on new clusters
-By default, Azure automatically replicates the operating system disk for a virtual machine to Azure storage to avoid data loss when the VM is relocated to another host. However, since containers aren't designed to have local state persisted, this behavior offers limited value while providing some drawbacks. These drawbacks include, but aren't limited to, slower node provisioning and higher read/write latency.
-
-By contrast, ephemeral OS disks are stored only on the host machine, just like a temporary disk. This configuration provides lower read/write latency, along with faster node scaling and cluster upgrades.
-
-Like the temporary disk, included in the price of the VM is an ephemeral OS disk.
-
-> [!IMPORTANT]
-> When you don't explicitly request managed disks for the OS, AKS defaults to ephemeral OS if possible for a given node pool configuration.
-
-If you chose to use an ephemeral OS, the OS disk must fit in the VM cache. Size requirements and recommendations for VM cache are available in the [Azure VM documentation](../virtual-machines/ephemeral-os-disks.md).
-
-If you chose to use the AKS default VM size [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with the default OS disk size of 100 GB. The default VM size supports ephemeral OS, but only has 86 GiB of cache size. This configuration would default to managed disks if you don't explicitly specify it. If you do request an ephemeral OS, you receive a validation error.
-
-If you request the same [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with a 60 GiB OS disk, this configuration would default to ephemeral OS. The requested size of 60 GiB is smaller than the maximum cache size of 86 GiB.
-
-If you select the [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) SKU with 100 GB OS disk, this VM size supports ephemeral OS and has 200 GiB of cache space. If you don't specify the OS disk type, the node pool would receive ephemeral OS by default.
-
-The latest generation of VM series doesn't have a dedicated cache, but only temporary storage. Let's assume to use the [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with the default OS disk size of 100 GiB as an example. This VM size supports ephemeral OS disks, but only has 75 GB of temporary storage. This configuration would default to managed OS disks if you don't explicitly specify it. If you do request an ephemeral OS disk, you receive a validation error.
-
-If you request the same [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with a 60 GiB OS disk, this configuration defaults to ephemeral OS disks. The requested size of 60 GiB is smaller than the maximum temporary storage of 75 GiB.
-
-If you chose to use [Standard_E4bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) SKU with 100 GiB OS disk, this VM size supports ephemeral OS
-and has 150 GiB of temporary storage. If you don't specify the OS disk type, by default Azure provisions an ephemeral OS disk to the node pool.
-
-Ephemeral OS requires at least version 2.15.0 of the Azure CLI.
-
-### Use Ephemeral OS on new clusters
-
-Configure the cluster to use ephemeral OS disks when the cluster is created. Use the `--node-osdisk-type` flag to set Ephemeral OS as the OS disk type for the new cluster.
+Configure the cluster to use ephemeral OS disks when the cluster is created. Use the `--node-osdisk-type` argument to set Ephemeral OS as the OS disk type for the new cluster.
```azurecli az aks create --name myAKSCluster --resource-group myResourceGroup -s Standard_DS3_v2 --node-osdisk-type Ephemeral ```
-If you want to create a regular cluster using network-attached OS disks, you can do so by specifying `--node-osdisk-type=Managed`. You can also choose to add other ephemeral OS node pools as described below.
+If you want to create a regular cluster using network-attached OS disks, you can do so by specifying the `--node-osdisk-type=Managed` argument. You can also choose to add other ephemeral OS node pools as described below.
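For comparison, a cluster that keeps network-attached managed OS disks can be created as in the following sketch (the cluster and resource group names are the same placeholders used above; this requires an Azure subscription):

```azurecli
az aks create --name myAKSCluster --resource-group myResourceGroup -s Standard_DS3_v2 --node-osdisk-type Managed
```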
-### Use Ephemeral OS on existing clusters
+## Use Ephemeral OS on existing clusters
-Configure a new node pool to use Ephemeral OS disks. Use the `--node-osdisk-type` flag to set as the OS disk type as the OS disk type for that node pool.
+Configure a new node pool to use Ephemeral OS disks. Use the `--node-osdisk-type` argument to set Ephemeral as the OS disk type for that node pool.
```azurecli az aks nodepool add --name ephemeral --cluster-name myAKSCluster --resource-group myResourceGroup -s Standard_DS3_v2 --node-osdisk-type Ephemeral
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
Title: Concepts - Storage in Azure Kubernetes Services (AKS) description: Learn about storage in Azure Kubernetes Service (AKS), including volumes, persistent volumes, storage classes, and claims. Previously updated : 04/26/2023 Last updated : 06/20/2023
This article introduces the core concepts that provide storage to your applicati
![Storage options for applications in an Azure Kubernetes Services (AKS) cluster](media/concepts-storage/aks-storage-options.png)
+## Ephemeral OS disk
+
+By default, Azure automatically replicates the operating system disk for a virtual machine to Azure storage to avoid data loss when the VM is relocated to another host. However, since containers aren't designed to have local state persisted, this behavior offers limited value while providing some drawbacks. These drawbacks include, but aren't limited to, slower node provisioning and higher read/write latency.
+
+By contrast, ephemeral OS disks are stored only on the host machine, just like a temporary disk. With this configuration, you get lower read/write latency, together with faster node scaling and cluster upgrades.
+
+> [!NOTE]
+> When you don't explicitly request [Azure managed disks][azure-managed-disks] for the OS, AKS defaults to ephemeral OS if possible for a given node pool configuration.
+
+Size requirements and recommendations for ephemeral OS disks are available in the [Azure VM documentation][azure-vm-ephemeral-os-disks]. The following are some general sizing considerations:
+
+* If you choose the AKS default VM size [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with the default OS disk size of 100 GB, this VM size supports ephemeral OS but only has 86 GiB of cache size. This configuration defaults to managed disks if you don't explicitly specify otherwise. If you do request an ephemeral OS, you receive a validation error.
+
+* If you request the same [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with a 60 GiB OS disk, this configuration would default to ephemeral OS. The requested size of 60 GiB is smaller than the maximum cache size of 86 GiB.
+
+* If you select the [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) SKU with 100 GB OS disk, this VM size supports ephemeral OS and has 200 GiB of cache space. If you don't specify the OS disk type, the node pool would receive ephemeral OS by default.
+
+The latest generation of VM series doesn't have a dedicated cache, but only temporary storage. For example, if you selected the [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with the default OS disk size of 100 GiB, it supports ephemeral OS disks but only has 75 GiB of temporary storage. This configuration defaults to managed OS disks if you don't explicitly specify otherwise. If you do request an ephemeral OS disk, you receive a validation error.
+
+* If you request the same [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with a 60 GiB OS disk, this configuration defaults to ephemeral OS disks. The requested size of 60 GiB is smaller than the maximum temporary storage of 75 GiB.
+
+* If you select the [Standard_E4bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) SKU with a 100 GiB OS disk, this VM size supports ephemeral OS and has 150 GiB of temporary storage. If you don't specify the OS disk type, Azure provisions an ephemeral OS disk to the node pool by default.
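The sizing rules above can also be satisfied explicitly instead of relying on defaults. As a sketch (names are placeholders; this requires an Azure subscription), the following requests an ephemeral OS disk sized to fit within the Standard_DS2_v2 cache:

```azurecli
az aks create --name myAKSCluster --resource-group myResourceGroup -s Standard_DS2_v2 --node-osdisk-type Ephemeral --node-osdisk-size 60
```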
+
+### Customer Managed key
+
+You can manage encryption for your ephemeral OS disk with your own keys on an AKS cluster. For more information, see [Azure ephemeral OS disks Customer Managed key][azure-disk-customer-managed-key].
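As a sketch of what this looks like at cluster creation time (the disk encryption set and resource names are hypothetical; verify the exact parameters against the linked article before use):

```azurecli
# Look up the resource ID of an existing disk encryption set (hypothetical names).
diskEncryptionSetId=$(az disk-encryption-set show --name myDiskEncryptionSet --resource-group myResourceGroup --query id --output tsv)

# Create the cluster with ephemeral OS disks encrypted with your own key.
az aks create --name myAKSCluster --resource-group myResourceGroup --node-osdisk-type Ephemeral --node-osdisk-diskencryptionset-id $diskEncryptionSetId
```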
+ ## Volumes Kubernetes typically treats individual pods as ephemeral, disposable resources. Applications have different approaches available to them for using and persisting data. A *volume* represents a way to store, retrieve, and persist data across pods and through the application lifecycle.
Kubernetes typically treats individual pods as ephemeral, disposable resources.
Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly or have Kubernetes automatically create them. Data volumes can use: [Azure Disk][disks-types], [Azure Files][storage-files-planning], [Azure NetApp Files][azure-netapp-files-service-levels], or [Azure Blobs][storage-account-overview]. > [!NOTE]
-> Depending on the VM SKU you're using, the Azure Disk CSI driver might have a per-node volume limit. For some powerful VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes].
+> Depending on the VM SKU you're using, the Azure Disk CSI driver might have a per-node volume limit. For some high-performance VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes].
To help determine best fit for your workload between Azure Files and Azure NetApp Files, review the information provided in the article [Azure Files and Azure NetApp Files comparison][azure-files-azure-netapp-comparison].
For more information on core Kubernetes and AKS concepts, see the following arti
<!-- INTERNAL LINKS --> [disks-types]: ../virtual-machines/disks-types.md
+[azure-managed-disks]: ../virtual-machines/managed-disks-overview.md
+[azure-vm-ephemeral-os-disks]: ../virtual-machines/ephemeral-os-disks.md
[storage-files-planning]: ../storage/files/storage-files-planning.md [azure-netapp-files-service-levels]: ../azure-netapp-files/azure-netapp-files-service-levels.md [storage-account-overview]: ../storage/common/storage-account-overview.md
For more information on core Kubernetes and AKS concepts, see the following arti
[csi-storage-drivers]: csi-storage-drivers.md [azure-blob-csi]: azure-blob-csi.md [general-purpose-machine-sizes]: ../virtual-machines/sizes-general.md
-[azure-files-azure-netapp-comparison]: ../storage/files/storage-files-netapp-comparison.md
+[azure-files-azure-netapp-comparison]: ../storage/files/storage-files-netapp-comparison.md
+[azure-disk-customer-managed-key]: ../virtual-machines/ephemeral-os-disks.md#customer-managed-key
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
Title: Concepts - Sustainable software engineering in Azure Kubernetes Services (AKS) description: Learn about sustainable software engineering in Azure Kubernetes Service (AKS). Previously updated : 10/25/2022 Last updated : 06/20/2023 # Sustainable software engineering practices in Azure Kubernetes Service (AKS)
-The sustainable software engineering principles are a set of competencies to help you define, build, and run sustainable applications. The overall goal is to reduce the carbon footprint in every aspect of your application. The Azure Well-Architected Framework guidance for sustainability aligns with the [The Principles of Sustainable Software Engineering](https://principles.green/) from the [Green Software Foundation](https://greensoftware.foundation/), and provides an overview of the principles of sustainable software engineering.
+The sustainable software engineering principles are a set of competencies to help you define, build, and run sustainable applications. The overall goal is to reduce the carbon footprint in every aspect of your application. The Azure Well-Architected Framework guidance for sustainability aligns with [The Principles of Sustainable Software Engineering](https://principles.green/) from the [Green Software Foundation](https://greensoftware.foundation/) and provides an overview of the principles of sustainable software engineering.
-Sustainable software engineering is a shift in priorities and focus. In many cases, the way most software is designed and run highlights fast performance and low latency. Meanwhile, sustainable software engineering focuses on reducing as much carbon emission as possible. Consider the following:
+Sustainable software engineering is a shift in priorities and focus. In many cases, the way most software is designed and run highlights fast performance and low latency. Sustainable software engineering focuses on reducing as much carbon emission as possible. Consider the following trade-offs:
-* Applying sustainable software engineering principles can give you faster performance or lower latency, such as by lowering total network traversal.
+* Applying sustainable software engineering principles can give you faster performance or lower latency, such as lowering total network traversal.
* Reducing carbon emissions may cause slower performance or increased latency, such as delaying low-priority workloads.
-The guidance found in this article is focused on Azure Kubernetes Services you are building or operating on Azure and includes design and configuration checklists, recommended design practices, and configuration options. Before applying sustainable software engineering principles to your application, review the priorities, needs, and trade-offs of your application.
+The following guidance focuses on services you're building or operating on Azure with Azure Kubernetes Service (AKS). This article includes design and configuration checklists, recommended design practices, and configuration options. Before applying sustainable software engineering principles to your application, review the priorities, needs, and trade-offs of your application.
## Prerequisites * Understanding the Well-Architected Framework sustainability guidance can help you produce a high quality, stable, and efficient cloud architecture. We recommend that you start by reading more about [sustainable workloads](/azure/architecture/framework/sustainability/sustainability-get-started) and reviewing your workload using the [Microsoft Azure Well-Architected Review](https://aka.ms/assessments) assessment.
-* Having clearly defined business requirements is crucial when building applications, as they might have a direct impact on both cluster and workload architectures and configurations. When building or updating existing applications, review the Well-Architected Framework sustainability design areas, alongside your application's holistic lifecycle.
+* It's crucial you have clearly defined business requirements when building applications, as they might have a direct impact on both cluster and workload architectures and configurations. When building or updating existing applications, review the Well-Architected Framework sustainability design areas, alongside your application's holistic lifecycle.
## Understanding the shared responsibility model
-Sustainability – just like security – is a shared responsibility between the cloud provider and the customer or partner designing and deploying AKS clusters on the platform. Deploying AKS does not automatically make it sustainable, even if the [data centers are optimized for sustainability](https://infrastructuremap.microsoft.com/fact-sheets). Applications that are not properly optimized may still emit more carbon than necessary.
+Sustainability is a shared responsibility between the cloud provider and the customer or partner designing and deploying AKS clusters on the platform. Deploying AKS doesn't automatically make it sustainable, even if the [data centers are optimized for sustainability](https://infrastructuremap.microsoft.com/fact-sheets). Applications that aren't properly optimized may still emit more carbon than necessary.
Learn more about the [shared responsibility model for sustainability](/azure/architecture/framework/sustainability/sustainability-design-methodology#a-shared-responsibility). ## Design principles
-**[Carbon Efficiency](https://learn.greensoftware.foundation/practitioner/carbon-efficiency)**: Emit the least amount of carbon possible.
+* **[Carbon Efficiency](https://learn.greensoftware.foundation/practitioner/carbon-efficiency)**: Emit the least amount of carbon possible.
-A carbon efficient cloud application is one that is optimized, and the starting point is the cost optimization.
+ A carbon-efficient cloud application is one that's optimized, and the starting point is cost optimization.
-**[Energy Efficiency](https://learn.greensoftware.foundation/practitioner/energy-efficiency/)**: Use the least amount of energy possible.
+* **[Energy Efficiency](https://learn.greensoftware.foundation/practitioner/energy-efficiency/)**: Use the least amount of energy possible.
-One way to increase energy efficiency, is to run the application on as few servers as possible, with the servers running at the highest utilization rate; thereby increasing hardware efficiency as well.
+ One way to increase energy efficiency is to run the application on as few servers as possible, with the servers running at the highest utilization rate, which also increases hardware efficiency.
-**[Hardware Efficiency](https://learn.greensoftware.foundation/practitioner/hardware-efficiency)**: Use the least amount of embodied carbon possible.
+* **[Hardware Efficiency](https://learn.greensoftware.foundation/practitioner/hardware-efficiency)**: Use the least amount of embodied carbon possible.
-There are two main approaches to hardware efficiency:
+ There are two main approaches to hardware efficiency:
-* For end-user devices, it's extending the lifespan of the hardware.
-* For cloud computing, it's increasing the utilization of the resource.
+ - For end-user devices, it's extending hardware lifespan.
+ - For cloud computing, it's increasing resource utilization.
-**[Carbon Awareness](https://learn.greensoftware.foundation/practitioner/carbon-awareness)**: Do more when the electricity is cleaner and do less when the electricity is dirtier.
+* **[Carbon Awareness](https://learn.greensoftware.foundation/practitioner/carbon-awareness)**: Do more when the electricity is cleaner and do less when the electricity is dirtier.
-Being carbon aware means responding to shifts in carbon intensity by increasing or decreasing your demand.
+ Being carbon aware means responding to shifts in carbon intensity by increasing or decreasing your demand.
## Design patterns and practices
-We recommend careful consideration of these design patterns for building a sustainable workload on Azure Kubernetes Service, before reviewing the detailed recommendations in each of the design areas.
+Before reviewing the detailed recommendations in each of the design areas, we recommend you carefully consider the following design patterns for building sustainable workloads on AKS:
| Design pattern | Applies to workload | Applies to cluster | | | | | | [Design for independent scaling of logical components](#design-for-independent-scaling-of-logical-components) | ✔️ | | | [Design for event-driven scaling](#design-for-event-driven-scaling) | ✔️ | | | [Aim for stateless design](#aim-for-stateless-design) | ✔️ | |
-| [Enable cluster and node auto-updates](#enable-cluster-and-node-auto-updates) | | ✔️ |
+| [Enable cluster and node autoupdates](#enable-cluster-and-node-autoupdates) | | ✔️ |
| [Install supported add-ons and extensions](#install-supported-add-ons-and-extensions) | ✔️ | ✔️ |
| [Containerize your workload where applicable](#containerize-your-workload-where-applicable) | ✔️ | |
| [Use energy efficient hardware](#use-energy-efficient-hardware) | | ✔️ |
-| [Match the scalability needs and utilize auto-scaling and bursting capabilities](#match-the-scalability-needs-and-utilize-auto-scaling-and-bursting-capabilities) | | ✔️ |
+| [Match the scalability needs and utilize autoscaling and bursting capabilities](#match-the-scalability-needs-and-utilize-autoscaling-and-bursting-capabilities) | | ✔️ |
| [Turn off workloads and node pools outside of business hours](#turn-off-workloads-and-node-pools-outside-of-business-hours) | ✔️ | ✔️ |
| [Delete unused resources](#delete-unused-resources) | ✔️ | ✔️ |
| [Tag your resources](#tag-your-resources) | ✔️ | ✔️ |
Explore this section to learn more about how to optimize your applications for a
### Design for independent scaling of logical components
-A microservice architecture may reduce the compute resources required, as it allows for independent scaling of its logical components and ensures they are scaled according to the demand.
+A microservice architecture may reduce the compute resources required, as it allows for independent scaling of its logical components and ensures they're scaled according to demand.
-* Consider using [Dapr Framework](https://dapr.io/) or [other CNCF projects](/azure/architecture/example-scenario/apps/build-cncf-incubated-graduated-projects-aks) to help you separate your application functionality into different microservices, to allow independent scaling of its logical components.
+* Consider using the [Dapr Framework](https://dapr.io/) or [other CNCF projects](/azure/architecture/example-scenario/apps/build-cncf-incubated-graduated-projects-aks) to help you separate your application functionality into different microservices and to allow independent scaling of its logical components.
### Design for event-driven scaling
-Scaling your workload based on relevant business metrics such as HTTP requests, queue length, and cloud events can help reduce its resource utilization, hence its carbon emissions.
+When you scale your workload based on relevant business metrics, such as HTTP requests, queue length, and cloud events, you can help reduce resource utilization and carbon emissions.
-* Use [Keda](https://keda.sh/) when building event-driven applications to allow scaling down to zero when there is no demand.
+* Use [Keda](https://keda.sh/) when building event-driven applications to allow scaling down to zero when there's no demand.
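As an illustration, a KEDA `ScaledObject` along these lines scales a deployment on Azure Storage queue length and permits scale-to-zero. The deployment, queue, account, and authentication names below are hypothetical placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-processor-scaler      # hypothetical name
spec:
  scaleTargetRef:
    name: orders-processor           # Deployment to scale
  minReplicaCount: 0                 # scale to zero when there's no demand
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders            # hypothetical queue name
        queueLength: "20"            # target messages per replica
        accountName: mystorageaccount
      authenticationRef:
        name: azure-queue-auth       # TriggerAuthentication defined separately
```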
### Aim for stateless design
Removing state from your design reduces the in-memory or on-disk data required b
Explore this section to learn how to make better informed platform-related decisions around sustainability.
-### Enable cluster and node auto-updates
+### Enable cluster and node autoupdates
An up-to-date cluster avoids unnecessary performance issues and ensures you benefit from the latest performance improvements and compute optimizations.
-* Enable [cluster auto-upgrade](./auto-upgrade-cluster.md) and [apply security updates to nodes automatically using GitHub Actions](./node-upgrade-github-actions.md), to ensure your cluster has the latest improvements.
+* Enable [cluster autoupgrade](./auto-upgrade-cluster.md) and [apply security updates to nodes automatically using GitHub Actions](./node-upgrade-github-actions.md) to ensure your cluster has the latest improvements.
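As a sketch (the resource group and cluster names are placeholders), enabling an auto-upgrade channel on an existing cluster looks like the following:

```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --auto-upgrade-channel stable
```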
### Install supported add-ons and extensions
-Add-ons and extensions covered by the [AKS support policy](./support-policies.md) provide additional and supported functionality to your cluster while allowing you to benefit from the latest performance improvements and energy optimizations throughout your cluster lifecycle.
+Add-ons and extensions covered by the [AKS support policy](./support-policies.md) provide additional supported functionality to your cluster while allowing you to benefit from the latest performance improvements and energy optimizations throughout your cluster lifecycle.
-* Ensure you install [KEDA](./integrations.md#available-add-ons) as an add-on and [GitOps & Dapr](./cluster-extensions.md?tabs=azure-cli#currently-available-extensions) as extensions.
+* Install [KEDA](./integrations.md#available-add-ons) as an add-on.
+* Install [GitOps & Dapr](./cluster-extensions.md?tabs=azure-cli#currently-available-extensions) as extensions.
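As a sketch, with placeholder resource group and cluster names, the KEDA add-on and Dapr extension can be installed from the Azure CLI:

```azurecli-interactive
# Enable the KEDA add-on on an existing cluster
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-keda

# Install the Dapr cluster extension
az k8s-extension create \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --cluster-type managedClusters \
    --name dapr \
    --extension-type Microsoft.Dapr
```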
### Containerize your workload where applicable
Ampere's Cloud Native Processors are uniquely designed to meet both the high per
* Evaluate if nodes with [Ampere Altra Arm-based processors](https://azure.microsoft.com/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/) are a good option for your workloads.
-### Match the scalability needs and utilize auto-scaling and bursting capabilities
+### Match the scalability needs and utilize autoscaling and bursting capabilities
-An oversized cluster does not maximize utilization of compute resources and can lead to a waste of energy. Separate your applications into different node pools to allow for cluster right sizing and independent scaling according to the application requirements. As you run out of capacity in your AKS cluster, grow from AKS to ACI to scale out additional pods to serverless nodes and ensure your workload uses all the allocated resources efficiently.
+An oversized cluster doesn't maximize utilization of compute resources and can lead to a waste of energy. Separate your applications into different node pools to allow for cluster right-sizing and independent scaling according to the application requirements. As you run out of capacity in your AKS cluster, grow from AKS to Azure Container Instances (ACI) to scale out extra pods to serverless nodes and ensure your workload uses all the allocated resources efficiently.
-* Size your cluster to match the scalability needs of your application and [use cluster autoscaler](./cluster-autoscaler.md) in combination with [virtual nodes](./virtual-nodes.md) to rapidly scale and maximize compute resource utilization. Additionally, [enforce resource quotas](./operator-best-practices-scheduler.md#enforce-resource-quotas) at the namespace level and [scale user node pools to 0](./scale-cluster.md?tabs=azure-cli#scale-user-node-pools-to-0) when there is no demand.
+* Size your cluster to match the scalability needs of your application. Use the [cluster autoscaler](./cluster-autoscaler.md) with [virtual nodes](./virtual-nodes.md) to rapidly scale and maximize compute resource utilization.
+* You can also [enforce resource quotas](./operator-best-practices-scheduler.md#enforce-resource-quotas) at the namespace level and [scale user node pools to zero](./scale-cluster.md?tabs=azure-cli#scale-user-node-pools-to-0) when there's no demand.
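For example, a user node pool (here named `userpool`, a placeholder) can be scaled to zero from the Azure CLI when demand drops:

```azurecli-interactive
az aks nodepool scale \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name userpool \
    --node-count 0
```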
### Turn off workloads and node pools outside of business hours
-Workloads may not need to run continuously and could be turned off to reduce energy waste, hence carbon emissions. You can completely turn off (stop) your node pools in your AKS cluster, allowing you to also save on compute costs.
+Workloads may not need to run continuously and could be turned off to reduce energy waste and carbon emissions. You can completely turn off (stop) your node pools in your AKS cluster, allowing you to also save on compute costs.
-* Use the [node pool stop / start](./start-stop-nodepools.md) to turn off your node pools outside of business hours, and [KEDA CRON scaler](https://keda.sh/docs/2.7/scalers/cron/) to scale down your workloads (pods) based on time.
+* Use the [node pool stop/start](./start-stop-nodepools.md) to turn off your node pools outside of business hours.
+* Use the [KEDA CRON scaler](https://keda.sh/docs/2.7/scalers/cron/) to scale down your workloads (pods) based on time.
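As a sketch with placeholder names, stopping a user node pool outside of business hours and starting it again looks like this:

```azurecli-interactive
# Stop a user node pool outside of business hours
az aks nodepool stop \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --nodepool-name userpool

# Start it again when needed
az aks nodepool start \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --nodepool-name userpool
```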
## Operational procedures
Explore this section to set up your environment for measuring and continuously i
### Delete unused resources
-Unused resources such as unreferenced images and storage resources should be identified and deleted as they have a direct impact on hardware and energy efficiency. Identifying and deleting unused resources must be treated as a process, rather than a point-in-time activity to ensure continuous energy optimization.
+You should identify and delete any unused resources, such as unreferenced images and storage resources, as they have a direct impact on hardware and energy efficiency. To ensure continuous energy optimization, you must treat identifying and deleting unused resources as a process rather than a point-in-time activity.
-* Use [Azure Advisor](../advisor/advisor-cost-recommendations.md) to identify unused resources and [ImageCleaner](./image-cleaner.md?tabs=azure-cli) to clean up stale images and remove an area of risk in your cluster.
+* Use [Azure Advisor](../advisor/advisor-cost-recommendations.md) to identify unused resources.
+* Use [ImageCleaner](./image-cleaner.md?tabs=azure-cli) to clean up stale images and remove an area of risk in your cluster.
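As an illustration (resource group and cluster names are placeholders), ImageCleaner can be enabled with a custom cleanup interval:

```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-image-cleaner \
    --image-cleaner-interval-hours 48
```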
### Tag your resources
Explore this section to learn how to design a more sustainable data storage arch
The data retrieval and data storage operations can have a significant impact on both energy and hardware efficiency. Designing solutions with the correct data access pattern can reduce energy consumption and embodied carbon.
-* Understand the needs of your application to [choose the appropriate storage](./operator-best-practices-storage.md#choose-the-appropriate-storage-type) and define it using [storage classes](./operator-best-practices-storage.md#create-and-use-storage-classes-to-define-application-needs) to avoid storage underutilization. Additionally, consider [provisioning volumes dynamically](./operator-best-practices-storage.md#dynamically-provision-volumes) to automatically scale the number of storage resources.
+* Understand the needs of your application to [choose the appropriate storage](./operator-best-practices-storage.md#choose-the-appropriate-storage-type) and define it using [storage classes](./operator-best-practices-storage.md#create-and-use-storage-classes-to-define-application-needs) to avoid storage underutilization.
+* Consider [provisioning volumes dynamically](./operator-best-practices-storage.md#dynamically-provision-volumes) to automatically scale the number of storage resources.
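To illustrate, a storage class like the following (the class name is hypothetical) lets volumes be provisioned dynamically on a right-sized disk SKU:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-standard-ssd        # hypothetical class name
provisioner: disk.csi.azure.com     # Azure Disk CSI driver
parameters:
  skuName: StandardSSD_LRS          # match the disk SKU to the workload's needs
reclaimPolicy: Delete               # release the disk when the claim is deleted
allowVolumeExpansion: true          # grow volumes instead of overprovisioning
```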
## Network and connectivity
Explore this section to learn how to enhance and optimize network efficiency to
### Choose a region that is closest to users
-The distance from a data center to the users has a significant impact on energy consumption and carbon emissions. Shortening the distance a network packet travels improves both your energy and carbon efficiency.
+The distance from a data center to users has a significant impact on energy consumption and carbon emissions. Shortening the distance a network packet travels improves both your energy and carbon efficiency.
-* Review your application requirements and [Azure geographies](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview) to choose a region that is the closest to the majority of where the network packets are going.
+* Review your application requirements and [Azure geographies](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview) to choose a region closest to where most network packets are going.
### Reduce network traversal between nodes
-Placing nodes in a single region or a single availability zone reduces the physical distance between the instances. However, for business critical workloads, you need to ensure your cluster is spread across multiple availability-zones, which may result in more network traversal and increase in your carbon footprint.
+Placing nodes in a single region or a single availability zone reduces the physical distance between the instances. However, for business-critical workloads, you need to ensure your cluster is spread across multiple availability zones, which may result in more network traversal and an increase in your carbon footprint.
-* Consider deploying your nodes within a [proximity placement group](../virtual-machines/co-location.md) to reduce the network traversal by ensuring your compute resources are physically located close to each other. For critical workloads configure [proximity placement groups with availability zones](./reduce-latency-ppg.md#configure-proximity-placement-groups-with-availability-zones).
+* Consider deploying your nodes within a [proximity placement group](../virtual-machines/co-location.md) to reduce the network traversal by ensuring your compute resources are physically located close to each other.
+* For critical workloads, configure [proximity placement groups with availability zones](./reduce-latency-ppg.md#configure-proximity-placement-groups-with-availability-zones).
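As a sketch with placeholder names, a proximity placement group can be created and attached to a new node pool:

```azurecli-interactive
# Create a proximity placement group (names are placeholders)
az ppg create --resource-group myResourceGroup --name myPPG --type standard

# Add a node pool whose VMs are placed physically close together
PPG_ID=$(az ppg show --resource-group myResourceGroup --name myPPG --query id --output tsv)
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name ppgpool \
    --ppg $PPG_ID
```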
### Evaluate using a service mesh
-A service mesh deploys additional containers for communication, typically in a [sidecar pattern](/azure/architecture/patterns/sidecar), to provide more operational capabilities leading to an increase in CPU usage and network traffic. Nevertheless, it allows you to decouple your application from these capabilities as it moves them out from the application layer, and down to the infrastructure layer.
+A service mesh deploys extra containers for communication, typically in a [sidecar pattern](/azure/architecture/patterns/sidecar), to provide more operational capabilities, which leads to an increase in CPU usage and network traffic. Nevertheless, it allows you to decouple your application from these capabilities as it moves them out from the application layer and down to the infrastructure layer.
* Carefully consider the increase in CPU usage and network traffic generated by [service mesh](./servicemesh-about.md) communication components before making the decision to use one.

### Optimize log collection
-Sending and storing all logs from all possible sources (workloads, services, diagnostics and platform activity) can considerably increase storage and network traffic, which would impact higher costs and carbon emissions.
+Sending and storing all logs from all possible sources (workloads, services, diagnostics, and platform activity) can increase storage and network traffic, which impacts costs and carbon emissions.
-* Make sure you are collecting and retaining only the log data necessary to support your requirements. [Configure data collection rules for your AKS workloads](../azure-monitor/containers/container-insights-agent-config.md#data-collection-settings) and implement design considerations for [optimizing your Log Analytics costs](/azure/architecture/framework/services/monitoring/log-analytics/cost-optimization).
+* Make sure you're collecting and retaining only the necessary log data to support your requirements. [Configure data collection rules for your AKS workloads](../azure-monitor/containers/container-insights-agent-config.md#data-collection-settings) and implement design considerations for [optimizing your Log Analytics costs](/azure/architecture/framework/services/monitoring/log-analytics/cost-optimization).
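For example, an excerpt from the Container insights agent ConfigMap might restrict stdout/stderr log collection to exclude noisy system namespaces (the exact namespaces to exclude depend on your requirements):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  log-data-collection-settings: |-
    [log_collection_settings]
      [log_collection_settings.stdout]
        enabled = true
        exclude_namespaces = ["kube-system"]
      [log_collection_settings.stderr]
        enabled = true
        exclude_namespaces = ["kube-system"]
```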
### Cache static data

Using a Content Delivery Network (CDN) is a sustainable approach to optimizing network traffic because it reduces data movement across a network. It minimizes latency by storing frequently read static data closer to users, which helps reduce network traffic and server load.
-* Ensure you [follow best practices](/azure/architecture/best-practices/cdn) for CDN and consider using [Azure CDN](../cdn/cdn-how-caching-works.md?toc=%2fazure%2ffrontdoor%2fTOC.json) to lower the consumed bandwidth and keep costs down.
+* Ensure you [follow best practices](/azure/architecture/best-practices/cdn) for CDN.
+* Consider using [Azure CDN](../cdn/cdn-how-caching-works.md?toc=%2fazure%2ffrontdoor%2fTOC.json) to lower the consumed bandwidth and keep costs down.
## Security
Explore this section to learn more about the recommendations leading to a sustai
Transport Layer Security (TLS) ensures that all data passed between the web server and web browsers remains private and encrypted. However, terminating and re-establishing TLS increases CPU utilization and might be unnecessary in certain architectures. A balanced level of security can offer a more sustainable and energy-efficient workload, while a higher level of security may increase the compute resource requirements.
-* Review the information on TLS termination when using [Application Gateway](../application-gateway/ssl-overview.md) or [Azure Front Door](../application-gateway/ssl-overview.md). Consider if you can terminate TLS at your border gateway and continue with non-TLS to your workload load balancer and onwards to your workload.
+* Review the information on TLS termination when using [Application Gateway](../application-gateway/ssl-overview.md) or [Azure Front Door](../application-gateway/ssl-overview.md). Determine whether you can terminate TLS at your border gateway, and continue with non-TLS to your workload load balancer and workload.
### Use cloud native network security tools and controls
-Azure Font Door and Application Gateway help manage traffic from web applications while Azure Web Application Firewall provides protection against OWASP top 10 attacks and load shedding bad bots at the network edge. Using these capabilities helps remove unnecessary data transmission and reduces the burden on the cloud infrastructure, with lower bandwidth and less infrastructure requirements.
+Azure Front Door and Application Gateway help manage traffic from web applications, while Azure Web Application Firewall provides protection against OWASP top 10 attacks and load shedding bad bots at the network edge. These capabilities help remove unnecessary data transmission and reduce the burden on the cloud infrastructure with lower bandwidth and fewer infrastructure requirements.
* Use [Application Gateway Ingress Controller (AGIC) in AKS](/azure/architecture/example-scenario/aks-agic/aks-agic) to filter and offload traffic at the network edge from reaching your origin to reduce energy consumption and carbon emissions.
Azure Font Door and Application Gateway help manage traffic from web application
Many attacks on cloud infrastructure seek to misuse deployed resources for the attacker's direct gain, leading to an unnecessary spike in usage and cost. Vulnerability scanning tools help minimize the window of opportunity for attackers and mitigate any potential malicious usage of resources.
-* Follow recommendations from [Microsoft Defender for Cloud](/security/benchmark/azure/security-control-vulnerability-management) and run automated vulnerability scanning tools such as [Defender for Containers](../defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md) to avoid unnecessary resource usage by identifying vulnerabilities in your images and minimizing the window of opportunity for attackers.
+* Follow recommendations from [Microsoft Defender for Cloud](/security/benchmark/azure/security-control-vulnerability-management).
+* Run automated vulnerability scanning tools, such as [Defender for Containers](../defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md), to avoid unnecessary resource usage. These tools help identify vulnerabilities in your images and minimize the window of opportunity for attackers.
## Next steps
aks Draft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/draft.md
Title: Draft extension for Azure Kubernetes Service (AKS) (preview)
-description: Install and use Draft on your Azure Kubernetes Service (AKS) cluster using the Draft extension.
+description: How to install and use Draft on your Azure Kubernetes Service (AKS) cluster using the Draft extension.
Previously updated : 5/02/2022 Last updated : 06/22/2023
Draft has the following commands to help ease your development on Kubernetes:
-- **draft create**: Creates the Dockerfile and the proper manifest files.
-- **draft setup-gh**: Sets up your GitHub OIDC.
-- **draft generate-workflow**: Generates the GitHub Action workflow file for deployment onto your cluster.
-- **draft up**: Sets up your GitHub OIDC and generates a GitHub Action workflow file, combining the previous two commands.
+- `draft create`: Creates the Dockerfile and the proper manifest files.
+- `draft setup-gh`: Sets up your GitHub OIDC.
+- `draft generate-workflow`: Generates the GitHub Action workflow file for deployment onto your cluster.
+- `draft up`: Sets up your GitHub OIDC and generates a GitHub Action workflow file, combining the previous two commands.
## Prerequisites
Draft has the following commands to help ease your development on Kubernetes:
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
+1. Install the `aks-preview` extension using the [`az extension add`][az-extension-add] command.
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
+ ```azurecli-interactive
+ az extension add --name aks-preview
+ ```
+
+2. Update the extension to make sure you have the latest version using the [`az extension update`][az-extension-update] command.
+
+ ```azurecli-interactive
+ az extension update --name aks-preview
+ ```
## Create artifacts using `draft create`
-To create a Dockerfile, Helm chart, Kubernetes manifest, or Kustomize files needed to deploy your application onto an AKS cluster, use the `draft create` command:
+You can use `draft create` to create Dockerfiles, Helm charts, Kubernetes manifests, or Kustomize files needed to deploy your application onto an AKS cluster.
+
+- Create an artifact using the [`az aks draft create`][az-aks-draft-create] command.
-```azure-cli-interactive
-az aks draft create
-```
+ ```azurecli-interactive
+ az aks draft create
+ ```
-You can also run the command on a specific directory using the `--destination` flag:
+ - You can also run the command on a specific directory using the `--destination` flag, as shown in the following example:
-```azure-cli-interactive
-az aks draft create --destination /Workspaces/ContosoAir
-```
+ ```azurecli-interactive
+ az aks draft create --destination /Workspaces/ContosoAir
+ ```
## Set up GitHub OIDC using `draft setup-gh`

To use Draft, you have to register your application with GitHub using `draft setup-gh`. This step only needs to be done once per repository.
-```azure-cli-interactive
-az aks draft setup-gh
-```
+- Register your application with GitHub using the [`az aks draft setup-gh`][az-aks-draft-setup-gh] command.
+
+ ```azurecli-interactive
+ az aks draft setup-gh
+ ```
## Generate a GitHub Action workflow file for deployment using `draft generate-workflow`
-After you create your artifacts and set up GitHub OIDC, you can generate a GitHub Action workflow file, creating an action that deploys your application onto your AKS cluster. Once your workflow file is generated, you must commit it into your repository in order to initiate the GitHub Action.
+After you create your artifacts and set up GitHub OIDC, you can use `draft generate-workflow` to generate a GitHub Action workflow file, creating an action that deploys your application onto your AKS cluster. Once your workflow file is generated, you must commit it into your repository in order to initiate the GitHub Action.
+
+- Generate a GitHub Action workflow file using the [`az aks draft generate-workflow`][az-aks-draft-generate-workflow] command.
-```azure-cli-interactive
-az aks draft generate-workflow
-```
+ ```azurecli-interactive
+ az aks draft generate-workflow
+ ```
-You can also run the command on a specific directory using the `--destination` flag:
+ - You can also run the command on a specific directory using the `--destination` flag, as shown in the following example:
-```azure-cli-interactive
-az aks draft generate-workflow --destination /Workspaces/ContosoAir
-```
+ ```azurecli-interactive
+ az aks draft generate-workflow --destination /Workspaces/ContosoAir
+ ```
## Set up GitHub OpenID Connect (OIDC) and generate a GitHub Action workflow file using `draft up`

`draft up` is a single command to accomplish GitHub OIDC setup and generate a GitHub Action workflow file for deployment. It effectively combines the `draft setup-gh` and `draft generate-workflow` commands, meaning it's most commonly used when getting started in a new repository for the first time, and only needs to be run once. Subsequent updates to the GitHub Action workflow file can be made using `draft generate-workflow`.
-```azure-cli-interactive
-az aks draft up
-```
+- Set up GitHub OIDC and generate a GitHub Action workflow file using the [`az aks draft up`][az-aks-draft-up] command.
+
+ ```azurecli-interactive
+ az aks draft up
+ ```
-You can also run the command on a specific directory using the `--destination` flag:
+ - You can also run the command on a specific directory using the `--destination` flag, as shown in the following example:
-```azure-cli-interactive
-az aks draft up --destination /Workspaces/ContosoAir
-```
+ ```azurecli-interactive
+ az aks draft up --destination /Workspaces/ContosoAir
+ ```
## Use Web Application Routing with Draft to make your application accessible over the internet
-[Web Application Routing][web-app-routing] is the easiest way to get your web application up and running in Kubernetes securely, removing the complexity of ingress controllers and certificate and DNS management while offering configuration for enterprises looking to bring their own. Web Application Routing offers a managed ingress controller based on nginx that you can use without restrictions and integrates out of the box with Open Service Mesh to secure intra-cluster communications.
+[Web Application Routing][web-app-routing] is the easiest way to get your web application up and running in Kubernetes securely. Web Application Routing removes the complexity of ingress controllers and certificate and DNS management, and it offers configuration for enterprises looking to bring their own. Web Application Routing offers a managed ingress controller based on nginx that you can use without restrictions and integrates out of the box with Open Service Mesh to secure intra-cluster communications.
-To set up Draft with Web Application Routing, use `az aks draft update` and pass in the DNS name and Azure Key Vault-stored certificate when prompted:
+- Set up Draft with Web Application Routing using the [`az aks draft update`][az-aks-draft-update] command and pass in the DNS name and Azure Key Vault-stored certificate when prompted.
-```azure-cli-interactive
-az aks draft update
-```
+ ```azurecli-interactive
+ az aks draft update
+ ```
-You can also run the command on a specific directory using the `--destination` flag:
+ - You can also run the command on a specific directory using the `--destination` flag, as shown in the following example:
-```azure-cli-interactive
-az aks draft update --destination /Workspaces/ContosoAir
-```
+ ```azurecli-interactive
+ az aks draft update --destination /Workspaces/ContosoAir
+ ```
<!-- LINKS INTERNAL -->
[deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md
-[az-feature-register]: /cli/azure/feature#az-feature-register
-[az-feature-list]: /cli/azure/feature#az-feature-list
-[az-provider-register]: /cli/azure/provider#az-provider-register
-[sample-application]: ./quickstart-dapr.md
-[k8s-version-support-policy]: ./supported-kubernetes-versions.md?tabs=azure-cli#kubernetes-version-support-policy
[web-app-routing]: web-app-routing.md
[az-extension-add]: /cli/azure/extension#az-extension-add
[az-extension-update]: /cli/azure/extension#az-extension-update
+[az-aks-draft-update]: /cli/azure/aks/draft#az-aks-draft-update
+[az-aks-draft-up]: /cli/azure/aks/draft#az-aks-draft-up
+[az-aks-draft-create]: /cli/azure/aks/draft#az-aks-draft-create
+[az-aks-draft-setup-gh]: /cli/azure/aks/draft#az-aks-draft-setup-gh
+[az-aks-draft-generate-workflow]: /cli/azure/aks/draft#az-aks-draft-generate-workflow
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
This article uses the Azure Marketplace offer for Open/WebSphere Liberty to acce
[!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]

* This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+> [!NOTE]
+> This guidance can also be executed from a local developer command line with Azure CLI installed. To learn how to install the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+ * If running the commands in this guide locally (instead of Azure Cloud Shell):
  * Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, Azure Linux, macOS, Windows Subsystem for Linux).
  * Install a Java SE implementation (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
aks Howto Deploy Java Wls App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-wls-app.md
+
+ Title: "Deploy WebLogic Server on Azure Kubernetes Service using the Azure portal"
+description: Shows how to quickly stand up WebLogic Server on Azure Kubernetes Service.
+++ Last updated : 06/22/2023+++
+# Deploy a Java application with WebLogic Server on an Azure Kubernetes Service (AKS) cluster
+
+This article shows you how to quickly deploy WebLogic Server (WLS) on Azure Kubernetes Service (AKS) with the simplest possible set of configuration choices using the Azure portal. For a more full-featured tutorial, including the use of Azure Application Gateway to make WLS on AKS securely visible on the public Internet, see [Tutorial: Migrate a WebLogic Server cluster to Azure with Azure Application Gateway as a load balancer](/azure/developer/java/migration/migrate-weblogic-with-app-gateway).
+
+For step-by-step guidance in setting up WebLogic Server on Azure Kubernetes Service, see the official documentation from Oracle at [Azure Kubernetes Service](https://oracle.github.io/weblogic-kubernetes-operator/samples/azure-kubernetes-service/).
+
+## Prerequisites
+
+- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- Ensure the Azure identity you use to sign in and complete this article has either the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription or the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) and [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) roles in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview) For details on the specific roles required by WLS on AKS, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles).
+- Have the credentials for an Oracle single sign-on (SSO) account. To create one, see [Create Your Oracle Account](https://aka.ms/wls-aks-create-sso-account).
+- Accept the license terms for WLS.
+ - Visit the [Oracle Container Registry](https://container-registry.oracle.com/) and sign in.
+ - If you have a support entitlement, select **Middleware**, then search for and select **weblogic_cpu**.
+ - If you don't have a support entitlement from Oracle, select **Middleware**, then search for and select **weblogic**.
+ > [!NOTE]
+ > Get a support entitlement from Oracle before going to production. Failure to do so results in running insecure images that are not patched for critical security flaws. For more information on Oracle's critical patch updates, see [Critical Patch Updates, Security Alerts and Bulletins](https://www.oracle.com/security-alerts/) from Oracle.
+ - Accept the license agreement.
+
+## Create a storage account and storage container to hold the sample application
+
+Use the following steps to create a storage account and container. Some of these steps direct you to other guides. After completing the steps, you can upload a sample application to run on WLS on AKS.
+
+1. Download a sample application as a *.war* or *.ear* file. The sample app should be self-contained and not have any database, messaging, or other external connection requirements. The sample app from the WLS Kubernetes Operator documentation is a good choice. You can download [testwebapp.war](https://aka.ms/wls-aks-testwebapp) from Oracle. Save the file to your local filesystem.
+1. Sign in to the [Azure portal](https://aka.ms/publicportal).
+1. Create a storage account by following the steps in [Create a storage account](/azure/storage/common/storage-account-create). You don't need to perform all the steps in the article. Just fill out the fields as shown on the **Basics** pane, then select **Review + create** to accept the default options. Proceed to validate and create the account, then return to this article.
+1. Create a storage container within the account. Then, upload the sample application you downloaded in step 1 by following the steps in [Quickstart: Upload, download, and list blobs with the Azure portal](/azure/storage/blobs/storage-quickstart-blobs-portal). Upload the sample application as the blob, then return to this article.
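+
+If you prefer the command line, the portal steps above can be sketched with Azure CLI. This is a minimal sketch, not part of the official guide; the resource group name `myResourceGroup`, storage account name `mywlsstorage123`, and container name `mycontainer` are placeholder assumptions you should replace with your own values.
+
+```azurecli
+# Create a resource group and a storage account (names are placeholders).
+az group create --name myResourceGroup --location eastus
+az storage account create --name mywlsstorage123 --resource-group myResourceGroup --sku Standard_LRS
+
+# Create a blob container in the account.
+az storage container create --account-name mywlsstorage123 --name mycontainer --auth-mode login
+
+# Upload the sample application downloaded in step 1 as a blob.
+az storage blob upload --account-name mywlsstorage123 --container-name mycontainer --name testwebapp.war --file ./testwebapp.war --auth-mode login
+```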
+
+## Deploy WLS on AKS
+
+The steps in this section direct you to deploy WLS on AKS in the simplest possible way. WLS on AKS offers a broad and deep selection of Azure integrations. For more information, see [What are solutions for running Oracle WebLogic Server on the Azure Kubernetes Service?](/azure/virtual-machines/workloads/oracle/weblogic-aks)
+
+The following steps show you how to find the WLS on AKS offer and fill out the **Basics** pane.
+
+1. In the search bar at the top of the Azure portal, enter *weblogic*. In the auto-suggested search results, in the **Marketplace** section, select **Oracle WebLogic Server on Azure Kubernetes Service**.
+
+ :::image type="content" source="media/howto-deploy-java-wls-app/marketplace-search-results.png" alt-text="Screenshot of Azure portal showing WLS in search results." lightbox="media/howto-deploy-java-wls-app/marketplace-search-results.png":::
+
+ You can also go directly to the [Oracle WebLogic Server on Azure Kubernetes Service](https://aka.ms/wlsaks) offer.
+
+1. On the offer page, select **Create**.
+1. On the **Basics** pane, ensure the value shown in the **Subscription** field is the same one that has the roles listed in the prerequisites section.
+1. You must deploy the offer in an empty resource group. In the **Resource group** field, select **Create new** and fill in a value for the resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, `ejb0723wls`.
+1. Under **Instance details**, select the region for the deployment. For a list of Azure regions where AKS is available, see [AKS region availability](https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service).
+1. Under **Credentials for WebLogic**, leave the default value for **Username for WebLogic Administrator**.
+1. Fill in `wlsAksCluster2022` for the **Password for WebLogic Administrator**. Use the same value for the confirmation and **Password for WebLogic Model encryption** fields.
+1. Scroll to the bottom of the **Basics** pane and notice the helpful links for documentation, community support, and how to report problems.
+1. Select **Next: Configure AKS cluster**.
+
+The following steps show you how to start the deployment process.
+
+1. Scroll to the section labeled **Provide an Oracle Single Sign-On (SSO) account**. Fill in your Oracle SSO credentials from the preconditions.
+1. Accurately answer the question **Is the specified SSO account associated with an active Oracle support contract?** by selecting **Yes** or **No** accordingly. If you answer this question incorrectly, the steps in this quickstart won't work. If in doubt, select **No**.
+1. In the section **Java EE Application**, next to **Deploy your application package**, select **Yes**.
+1. Next to **Application package (.war,.ear,.jar)**, select **Browse**.
+1. Start typing the name of the storage account from the preceding section. When the desired storage account appears, select it.
+1. Select the storage container from the preceding section.
+1. Select the checkbox next to the sample app uploaded from the preceding section. Select **Select**.
+
+The following steps expose the WLS admin console and the sample app to the public Internet with a built-in Kubernetes `LoadBalancer` service. For a more secure and scalable way to expose functionality to the public Internet, see [Tutorial: Migrate a WebLogic Server cluster to Azure with Azure Application Gateway as a load balancer](/azure/developer/java/migration/migrate-weblogic-with-app-gateway).
+
+1. Select the **Networking** pane.
+1. Next to the question **Create Standard Load Balancer services for Oracle WebLogic Server?**, select **Yes**.
+1. In the table that appears, under **Service name prefix**, fill in the values as shown in the following table. The port values of *7001* for the admin server and *8001* for the cluster must be filled in exactly as shown.
+
+ | Service name prefix | Target | Port |
+ | --- | --- | --- |
+ | console | admin-server | 7001 |
+ | app | cluster-1 | 8001 |
+
+ :::image type="content" source="media/howto-deploy-java-wls-app/load-balancer-minimal-config.png" alt-text="Screenshot of Azure portal showing the simplest possible load balancer configuration on the Create Oracle WebLogic Server on Azure Kubernetes Service page." lightbox="media/howto-deploy-java-wls-app/load-balancer-minimal-config.png":::
+
+1. Select **Review + create**. Ensure the green **Validation Passed** message appears at the top. If it doesn't, fix any validation problems, then select **Review + create** again.
+1. Select **Create**.
+1. Track the progress of the deployment on the **Deployment is in progress** page.
+
+Depending on network conditions and other activity in your selected region, the deployment may take up to 30 minutes to complete.
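+
+While you wait, you can optionally poll the deployment state from the command line instead of watching the portal. This is a hedged sketch assuming the placeholder resource group name `ejb0723wls` from the earlier example; substitute your own resource group name.
+
+```azurecli
+# List deployments in the resource group together with their provisioning states.
+az deployment group list \
+    --resource-group ejb0723wls \
+    --query "[].{name:name, state:properties.provisioningState}" \
+    --output table
+```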
+
+## Examine the deployment output
+
+The steps in this section show you how to verify that the deployment has successfully completed.
+
+If you navigated away from the **Deployment is in progress** page, the following steps show you how to get back to it. If you're still on the page that shows **Your deployment is complete**, you can skip to the steps after the following screenshot.
+
+1. In the upper left of any portal page, select the hamburger menu and select **Resource groups**.
+1. In the box with the text **Filter for any field**, enter the first few characters of the resource group you created previously. If you followed the recommended convention, enter your initials, then select the appropriate resource group.
+1. In the left navigation pane, in the **Settings** section, select **Deployments**. You'll see an ordered list of the deployments to this resource group, with the most recent one first.
+1. Scroll to the oldest entry in this list. This entry corresponds to the deployment you started in the preceding section. Select the oldest deployment, as shown in the following screenshot.
+
+ :::image type="content" source="media/howto-deploy-java-wls-app/resource-group-deployments.png" alt-text="Screenshot of Azure portal showing the resource group deployments list." lightbox="media/howto-deploy-java-wls-app/resource-group-deployments.png":::
+
+1. In the left panel, select **Outputs**. This list shows the output values from the deployment. Useful information is included in the outputs.
+1. The **adminConsoleExternalUrl** value is the fully qualified, public Internet visible link to the WLS admin console for this AKS cluster. Select the copy icon next to the field value to copy the link to your clipboard. Save this value aside for later.
+1. The **clusterExternalUrl** value is the fully qualified, public Internet visible link to the sample app deployed in WLS on this AKS cluster. Select the copy icon next to the field value to copy the link to your clipboard. Save this value aside for later.
+
+The other values in the outputs are beyond the scope of this article, but are explained in detail in the [WebLogic on AKS user guide](https://aka.ms/wls-aks-docs).
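+
+You can also read the same outputs from the command line instead of the portal. This is a sketch under the assumption that the resource group is the placeholder `ejb0723wls`; replace `<deployment-name>` with the name of the oldest deployment from the list in the preceding steps.
+
+```azurecli
+# Show the output values of a named deployment (names are placeholders).
+az deployment group show \
+    --resource-group ejb0723wls \
+    --name <deployment-name> \
+    --query properties.outputs
+```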
+
+## Verify the functionality of the deployment
+
+The following steps show you how to verify the functionality of the deployment by viewing the WLS admin console and the sample app.
+
+1. Paste the value for **adminConsoleExternalUrl** in an Internet-connected web browser. You should see the familiar WLS admin console login screen as shown in the following screenshot.
+
+ :::image type="content" source="media/howto-deploy-java-wls-app/wls-admin-login.png" alt-text="Screenshot of WLS admin login screen." border="false":::
+
+ > [!NOTE]
+ > This article shows the WLS admin console merely by way of demonstration. Don't use the WLS admin console for any durable configuration changes when running WLS on AKS. The cloud-native design of WLS on AKS requires that any durable configuration must be represented in the initial Docker images or applied to the running AKS cluster using CI/CD techniques such as updating the model, as described in the [Oracle documentation](https://aka.ms/wls-aks-docs-update-model).
+
+1. Understand the `context-path` of the sample app you deployed. If you deployed the recommended sample app, the `context-path` is `testwebapp`.
+1. Construct a fully qualified URL for the sample app by appending the `context-path` to the value of **clusterExternalUrl**. If you deployed the recommended sample app, the fully qualified URL is something like `http://123.456.789.012:8001/testwebapp/`.
+1. Paste the fully qualified URL in an Internet-connected web browser. If you deployed the recommended sample app, you should see results similar to the following screenshot.
+
+ :::image type="content" source="media/howto-deploy-java-wls-app/test-web-app.png" alt-text="Screenshot of test web app." border="false":::
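+
+The URL construction in the steps above can be sketched in shell. The `clusterExternalUrl` value here is a hypothetical example; substitute the value you copied from the deployment outputs.
+
+```bash
+# Hypothetical output value copied from the deployment outputs pane.
+CLUSTER_EXTERNAL_URL="http://123.456.789.012:8001/"
+CONTEXT_PATH="testwebapp"
+
+# Strip any trailing slash from the cluster URL, then append the context path.
+APP_URL="${CLUSTER_EXTERNAL_URL%/}/${CONTEXT_PATH}/"
+echo "$APP_URL"
+```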
+
+## Clean up resources
+
+To avoid Azure charges, you should clean up unnecessary resources. When you no longer need the cluster, use the [az group delete](/cli/azure/group#az-group-delete) command. The following command removes the resource group, container service, container registry, and all related resources.
+
+```azurecli
+az group delete --name <resource-group-name> --yes --no-wait
+```
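+
+Deletion runs in the background because of `--no-wait`. If you want to confirm from the command line that the group is gone, the following sketch polls for it:
+
+```azurecli
+# Returns "true" while the resource group still exists, "false" once deletion completes.
+az group exists --name <resource-group-name>
+```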
+
+## Next steps
+
+Learn more about running WLS on AKS or virtual machines by following these links:
+
+> [!div class="nextstepaction"]
+> [WLS on AKS](/azure/virtual-machines/workloads/oracle/weblogic-aks)
+
+> [!div class="nextstepaction"]
+> [WLS on virtual machines](/azure/virtual-machines/workloads/oracle/oracle-weblogic)
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
Title: Add-ons, extensions, and other integrations with Azure Kubernetes Service
-description: Learn about the add-ons, extensions, and open-source integrations you can use with Azure Kubernetes Service.
+ Title: Add-ons, extensions, and other integrations with Azure Kubernetes Service (AKS)
+description: Learn about the add-ons, extensions, and open-source integrations you can use with Azure Kubernetes Service (AKS).
Previously updated : 02/22/2022 Last updated : 05/22/2023
-# Add-ons, extensions, and other integrations with Azure Kubernetes Service
+# Add-ons, extensions, and other integrations with Azure Kubernetes Service (AKS)
-Azure Kubernetes Service (AKS) provides additional, supported functionality for your cluster using add-ons and extensions. There are also many more integrations provided by open-source projects and third parties that are commonly used with AKS. These open-source and third-party integrations are not covered by the [AKS support policy][aks-support-policy].
+Azure Kubernetes Service (AKS) provides extra functionality for your clusters using add-ons and extensions. Open-source projects and third parties provide many more integrations that are commonly used with AKS. The [AKS support policy][aks-support-policy] doesn't cover open-source and third-party integrations.
## Add-ons
-Add-ons are a fully supported way to provide extra capabilities for your AKS cluster. Add-ons' installation, configuration, and lifecycle is managed by AKS. Use `az aks enable-addons` to install an add-on or manage the add-ons for your cluster.
+Add-ons are a fully supported way to provide extra capabilities for your AKS cluster. The installation, configuration, and lifecycle of add-ons is managed by AKS. You can use the [`az aks enable-addons`][az-aks-enable-addons] command to install an add-on or manage the add-ons for your cluster.
-The following rules are used by AKS for applying updates to installed add-ons:
+AKS uses the following rules for applying updates to installed add-ons:
-- Only an add-on's patch version can be upgraded within a Kubernetes minor version. The add-on's major/minor version will not be upgraded within the same Kubernetes minor version.
-- The major/minor version of the add-on will only be upgraded when moving to a later Kubernetes minor version.
-- Any breaking or behavior changes to the add-on will be announced well before, usually 60 days, for a GA minor version of Kubernetes on AKS.
-- Add-ons can be patched weekly with every new release of AKS which will be announced in the release notes. AKS releases can be controlled using [maintenance windows][maintenance-windows] and followed using [release tracker][release-tracker].
+- Only an add-on's patch version can be upgraded within a Kubernetes minor version. The add-on's major/minor version isn't upgraded within the same Kubernetes minor version.
+- The major/minor version of the add-on is only upgraded when moving to a later Kubernetes minor version.
+- Any breaking or behavior changes to the add-on are announced well before, usually 60 days, for a GA minor version of Kubernetes on AKS.
+- You can patch add-ons weekly with every new release of AKS, which is announced in the release notes. You can control AKS releases using the [maintenance windows][maintenance-windows] and [release tracker][release-tracker].
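+
+As an illustration of the command mentioned above, the following sketch enables the Container insights (monitoring) add-on on an existing cluster. The cluster and resource group names are placeholder assumptions.
+
+```azurecli
+# Enable the monitoring add-on on an existing AKS cluster (names are placeholders).
+az aks enable-addons \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --addons monitoring
+```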
### Exceptions

-- Add-ons will be upgraded to a new major/minor version (or breaking change) within a Kubernetes minor version if either the cluster's Kubernetes version or the add-on version are in preview.
-- It is also possible, in unavoidable circumstances such as CVE security patches or critical bug fixes, that there may be times when an add-on needs to be updated within a GA minor version.
+- Add-ons are upgraded to a new major/minor version (or breaking change) within a Kubernetes minor version if either the cluster's Kubernetes version or the add-on version are in preview.
+- There may be unavoidable circumstances, such as CVE security patches or critical bug fixes, when you need to update an add-on within a GA minor version.
### Available add-ons
-The below table shows the available add-ons.
- | Name | Description | More details | |||| | http_application_routing | Configure ingress with automatic public DNS name creation for your AKS cluster. | [HTTP application routing add-on on Azure Kubernetes Service (AKS)][http-app-routing] |
The below table shows the available add-ons.
| ingress-appgw | Use Application Gateway Ingress Controller with your AKS cluster. | [What is Application Gateway Ingress Controller?][agic] | | open-service-mesh | Use Open Service Mesh with your AKS cluster. | [Open Service Mesh AKS add-on][osm] | | azure-keyvault-secrets-provider | Use Azure Keyvault Secrets Provider addon.| [Use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster][keyvault-secret-provider] |
-| web_application_routing | Use a managed NGINX ingress Controller with your AKS cluster.| [Web Application Routing Overview][web-app-routing] |
-| keda | Event-driven autoscaling for the applications on your AKS cluster. | [Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on][keda]|
+| web_application_routing | Use a managed NGINX ingress controller with your AKS cluster.| [Web Application Routing Overview][web-app-routing] |
+| keda | Use event-driven autoscaling for the applications on your AKS cluster. | [Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on][keda]|
## Extensions
-Cluster extensions build on top of certain Helm charts and provide an Azure Resource Manager-driven experience for installation and lifecycle management of different Azure capabilities on top of your Kubernetes cluster. For more details on the specific cluster extensions for AKS, see [Deploy and manage cluster extensions for Azure Kubernetes Service (AKS)][cluster-extensions]. For more details on the currently available cluster extensions, see [Currently available extensions][cluster-extensions-current].
+Cluster extensions build on top of certain Helm charts and provide an Azure Resource Manager-driven experience for installation and lifecycle management of different Azure capabilities on top of your Kubernetes cluster.
+
+- For more information on the specific cluster extensions for AKS, see [Deploy and manage cluster extensions for Azure Kubernetes Service (AKS)][cluster-extensions].
+- For more information on available cluster extensions, see [Currently available extensions][cluster-extensions-current].
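+
+As an illustration, cluster extensions are typically installed with the `az k8s-extension` command group. This is a sketch with placeholder cluster and resource group names; the extension type shown (`microsoft.flux`, for GitOps) is one commonly available example, not a requirement.
+
+```azurecli
+# Install the Flux (GitOps) cluster extension on an AKS cluster (names are placeholders).
+az k8s-extension create \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --cluster-type managedClusters \
+    --name flux \
+    --extension-type microsoft.flux
+```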
-## Difference between extensions and add-ons
+### Difference between extensions and add-ons
-Both extensions and add-ons are supported ways to add functionality to your AKS cluster. When you install an add-on, the functionality is added as part of the AKS resource provider in the Azure API. When you install an extension, the functionality is added as part of a separate resource provider in the Azure API.
+Extensions and add-ons are both supported ways to add functionality to your AKS cluster. When you install an add-on, the functionality is added as part of the AKS resource provider in the Azure API. When you install an extension, the functionality is added as part of a separate resource provider in the Azure API.
## GitHub Actions
-GitHub Actions helps you automate your software development workflows from within GitHub. For more details on using GitHub Actions with Azure, see [What is GitHub Actions for Azure][github-actions]. For an example of using GitHub Actions with an AKS cluster, see [Build, test, and deploy containers to Azure Kubernetes Service using GitHub Actions][github-actions-aks].
+GitHub Actions helps you automate your software development workflows from within GitHub.
-## Open source and third-party integrations
+- For more information on using GitHub Actions with Azure, see [GitHub Actions for Azure][github-actions].
+- For an example of using GitHub Actions with an AKS cluster, see [Build, test, and deploy containers to Azure Kubernetes Service using GitHub Actions][github-actions-aks].
-You can install many open source and third-party integrations on your AKS cluster, but these open-source and third-party integrations are not covered by the [AKS support policy][aks-support-policy].
+## Open-source and third-party integrations
-The below table shows a few examples of open-source and third-party integrations.
+There are many open-source and third-party integrations you can install on your AKS cluster. The [AKS support policy][aks-support-policy] doesn't cover these open-source and third-party integrations.
| Name | Description | More details | |||| | [Helm][helm] | An open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. | [Quickstart: Develop on Azure Kubernetes Service (AKS) with Helm][helm-qs] |
-| [Prometheus][prometheus] | An open source monitoring and alerting toolkit. | [Container insights with metrics in Prometheus format][prometheus-az-monitor], [Prometheus Helm chart][prometheus-helm-chart] |
+| [Prometheus][prometheus] | An open-source monitoring and alerting toolkit. | [Container insights with metrics in Prometheus format][prometheus-az-monitor], [Prometheus Helm chart][prometheus-helm-chart] |
| [Grafana][grafana] | An open-source dashboard for observability. | [Deploy Grafana on Kubernetes][grafana-install] or use [Managed Grafana][managed-grafana]| | [Couchbase][couchdb] | A distributed NoSQL cloud database. | [Install Couchbase and the Operator on AKS][couchdb-install] | | [OpenFaaS][open-faas]| An open-source framework for building serverless functions by using containers. | [Use OpenFaaS with AKS][open-faas-aks] |
-| [Apache Spark][apache-spark] | An open source, fast engine for large-scale data processing. | Running Apache Spark jobs requires a minimum node size of *Standard_D3_v2*. See [running Spark on Kubernetes][spark-kubernetes] for more details on running Spark jobs on Kubernetes. |
+| [Apache Spark][apache-spark] | An open-source, fast engine for large-scale data processing. | Running Apache Spark jobs requires a minimum node size of *Standard_D3_v2*. See [running Spark on Kubernetes][spark-kubernetes] for more details on running Spark jobs on Kubernetes. |
| [Istio][istio] | An open-source service mesh. | [Istio Installation Guides][istio-install] | | [Linkerd][linkerd] | An open-source service mesh. | [Linkerd Getting Started][linkerd-install] |
-| [Consul][consul] | An open source, identity-based networking solution. | [Getting Started with Consul Service Mesh for Kubernetes][consul-install] |
-
+| [Consul][consul] | An open-source, identity-based networking solution. | [Getting Started with Consul Service Mesh for Kubernetes][consul-install] |
+<!-- LINKS -->
[http-app-routing]: http-application-routing.md [container-insights]: ../azure-monitor/containers/container-insights-overview.md [virtual-nodes]: virtual-nodes.md
The below table shows a few examples of open-source and third-party integrations
[open-faas]: https://www.openfaas.com/ [open-faas-aks]: openfaas.md [apache-spark]: https://spark.apache.org/
-[azure-ml-overview]: ../machine-learning/how-to-attach-kubernetes-anywhere.md
[spark-kubernetes]: https://spark.apache.org/docs/latest/running-on-kubernetes.html
-[dapr-overview]: ./dapr.md
-[gitops-overview]: ../azure-arc/kubernetes/conceptual-gitops-flux2.md
[managed-grafana]: ../managed-grafan [keda]: keda-about.md [web-app-routing]: web-app-routing.md [maintenance-windows]: planned-maintenance.md [release-tracker]: release-tracker.md [github-actions]: /azure/developer/github/github-actions
-[azure/aks-set-context]: https://github.com/Azure/aks-set-context
-[azure/k8s-set-context]: https://github.com/Azure/k8s-set-context
-[azure/k8s-bake]: https://github.com/Azure/k8s-bake
-[azure/k8s-create-secret]: https://github.com/Azure/k8s-create-secret
-[azure/k8s-deploy]: https://github.com/Azure/k8s-deploy
-[azure/k8s-lint]: https://github.com/Azure/k8s-lint
-[azure/setup-helm]: https://github.com/Azure/setup-helm
-[azure/setup-kubectl]: https://github.com/Azure/setup-kubectl
-[azure/k8s-artifact-substitute]: https://github.com/Azure/k8s-artifact-substitute
-[azure/aks-create-action]: https://github.com/Azure/aks-create-action
-[azure/aks-github-runner]: https://github.com/Azure/aks-github-runner
[github-actions-aks]: kubernetes-action.md
+[az-aks-enable-addons]: /cli/azure/aks#az-aks-enable-addons
aks Network Observability Byo Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-byo-cli.md
az aks create \
## Get cluster credentials ```azurecli-interactive
-az aks get-credentials -name myAKSCluster --resource-group myResourceGroup
+az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
``` ## Enable Visualization on Grafana
aks Network Observability Managed Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-managed-cli.md
Use [az aks update](/cli/azure/aks#az-aks-update) to link the Azure Monitor and
az aks update \ --name myAKSCluster \ --resource-group myResourceGroup \
- --enable-azuremonitormetrics \
+ --enable-azure-monitor-metrics \
--azure-monitor-workspace-resource-id $azuremonitorId \ --grafana-resource-id $grafanaId ```
az aks update \
## Get cluster credentials ```azurecli-interactive
-az aks get-credentials -name myAKSCluster --resource-group myResourceGroup
+az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
```
aks Operator Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-network.md
Title: Best practices for network resources
+ Title: Best practices for network resources in Azure Kubernetes Service (AKS)
-description: Learn the cluster operator best practices for virtual network resources and connectivity in Azure Kubernetes Service (AKS)
+description: Learn the cluster operator best practices for virtual network resources and connectivity in Azure Kubernetes Service (AKS).
Previously updated : 03/10/2021 Last updated : 06/22/2023 # Best practices for network connectivity and security in Azure Kubernetes Service (AKS)
-As you create and manage clusters in Azure Kubernetes Service (AKS), you provide network connectivity for your nodes and applications. These network resources include IP address ranges, load balancers, and ingress controllers. To maintain a high quality of service for your applications, you need to strategize and configure these resources.
+As you create and manage clusters in Azure Kubernetes Service (AKS), you provide network connectivity for your nodes and applications. These network resources include IP address ranges, load balancers, and ingress controllers.
This best practices article focuses on network connectivity and security for cluster operators. In this article, you learn how to: > [!div class="checklist"]
+>
> * Compare the kubenet and Azure Container Networking Interface (CNI) network modes in AKS. > * Plan for required IP addressing and connectivity. > * Distribute traffic using load balancers, ingress controllers, or a web application firewall (WAF).
This best practices article focuses on network connectivity and security for clu
## Choose the appropriate network model
-> **Best practice guidance**
->
+> **Best practice guidance**
+>
> Use Azure CNI networking in AKS for integration with existing virtual networks or on-premises networks. This network model allows greater separation of resources and controls in an enterprise environment. Virtual networks provide the basic connectivity for AKS nodes and customers to access your applications. There are two different ways to deploy AKS clusters into virtual networks:
-* **Azure CNI networking**
-
- Deploys into a virtual network and uses the [Azure CNI][cni-networking] Kubernetes plugin. Pods receive individual IPs that can route to other network services or on-premises resources.
-* **Kubenet networking**
+* **Azure CNI networking**: Deploys into a virtual network and uses the [Azure CNI][cni-networking] Kubernetes plugin. Pods receive individual IPs that can route to other network services or on-premises resources.
+* **Kubenet networking**: Azure manages the virtual network resources as the cluster is deployed and uses the [kubenet][kubenet] Kubernetes plugin.
- Azure manages the virtual network resources as the cluster is deployed and uses the [kubenet][kubenet] Kubernetes plugin.
--
-For production deployments, both kubenet and Azure CNI are valid options.
+Azure CNI and kubenet are both valid options for production deployments.
### CNI Networking
-Azure CNI is a vendor-neutral protocol that lets the container runtime make requests to a network provider. It assigns IP addresses to pods and nodes, and provides IP address management (IPAM) features as you connect to existing Azure virtual networks. Each node and pod resource receives an IP address in the Azure virtual network - no need for extra routing to communicate with other resources or services.
+Azure CNI is a vendor-neutral protocol that lets the container runtime make requests to a network provider. It assigns IP addresses to pods and nodes, and provides IP address management (IPAM) features as you connect to existing Azure virtual networks. Each node and pod resource receives an IP address in the Azure virtual network. There's no need for extra routing to communicate with other resources or services.
![Diagram showing two nodes with bridges connecting each to a single Azure VNet](media/operator-best-practices-network/advanced-networking-diagram.png) Notably, Azure CNI networking for production allows for separation of control and management of resources. From a security perspective, you often want different teams to manage and secure those resources. With Azure CNI networking, you connect to existing Azure resources, on-premises resources, or other services directly via IP addresses assigned to each pod.
-When you use Azure CNI networking, the virtual network resource is in a separate resource group to the AKS cluster. Delegate permissions for the AKS cluster identity to access and manage these resources. The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network.
+When you use Azure CNI networking, the virtual network resource is in a separate resource group to the AKS cluster. Delegate permissions for the AKS cluster identity to access and manage these resources. The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network.
If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
- * `Microsoft.Network/virtualNetworks/subnets/join/action`
- * `Microsoft.Network/virtualNetworks/subnets/read`
-By default, AKS uses a managed identity for its cluster identity. However, you are able to use a service principal instead. For more information about:
-* AKS service principal delegation, see [Delegate access to other Azure resources][sp-delegation].
-* Managed identities, see [Use managed identities](use-managed-identity.md).
+* `Microsoft.Network/virtualNetworks/subnets/join/action`
+* `Microsoft.Network/virtualNetworks/subnets/read`
+
+By default, AKS uses a managed identity for its cluster identity. However, you can use a service principal instead.
+
+* For more information about AKS service principal delegation, see [Delegate access to other Azure resources][sp-delegation].
+* For more information about managed identities, see [Use managed identities](use-managed-identity.md).
-As each node and pod receives its own IP address, plan out the address ranges for the AKS subnets. Keep in mind:
-* The subnet must be large enough to provide IP addresses for every node, pods, and network resource that you deploy.
- * With both kubenet and Azure CNI networking, each node running has default limits to the number of pods.
-* Avoid using IP address ranges that overlap with existing network resources.
- * Necessary to allow connectivity to on-premises or peered networks in Azure.
-* To handle scale out events or cluster upgrades, you need extra IP addresses available in the assigned subnet.
- * This extra address space is especially important if you use Windows Server containers, as those node pools require an upgrade to apply the latest security patches. For more information on Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade].
+As each node and pod receives its own IP address, plan out the address ranges for the AKS subnets. Keep the following criteria in mind:
+
+* The subnet must be large enough to provide IP addresses for every node, pod, and network resource you deploy.
+ * With both kubenet and Azure CNI networking, each running node has a default limit on the number of pods.
+* Avoid using IP address ranges that overlap with existing network resources.
+ * Avoiding overlap is necessary to allow connectivity to on-premises or peered networks in Azure.
+* To handle scale out events or cluster upgrades, you need extra IP addresses available in the assigned subnet.
+ * This extra address space is especially important if you use Windows Server containers, as those node pools require an upgrade to apply the latest security patches. For more information on Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade].
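The sizing criteria above can be sketched as a back-of-the-envelope calculation (the node count, pods-per-node limit, and surge headroom below are illustrative assumptions, not values read from any cluster). With Azure CNI, each node consumes one IP address for itself plus one per potential pod:

```shell
# Illustrative estimate of IP addresses needed in an Azure CNI subnet.
# Assumptions: 3 nodes, 30 max pods per node, 2 extra nodes of headroom
# for scale-out events or upgrade surge.
nodes=3
max_pods=30
extra_nodes=2

# Each node consumes one IP plus one IP per potential pod.
required=$(( (nodes + extra_nodes) * (max_pods + 1) ))
echo "Plan for at least $required IP addresses"
```

With these assumed values, the subnet needs at least 155 usable addresses, so a /24 would suffice while a /26 would not.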
To calculate the number of IP addresses required, see [Configure Azure CNI networking in AKS][advanced-networking].
-When creating a cluster with Azure CNI networking, you specify other address ranges for the cluster, such as the Docker bridge address, DNS service IP, and service address range. In general, make sure these address ranges:
-* Don't overlap each other.
-* Don't overlap with any networks associated with the cluster, including any virtual networks, subnets, on-premises and peered networks.
+When creating a cluster with Azure CNI networking, you specify other address ranges for the cluster, such as the Docker bridge address, DNS service IP, and service address range. In general, make sure these address ranges don't overlap each other or any networks associated with the cluster, including any virtual networks, subnets, on-premises and peered networks.
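To sanity-check the no-overlap requirement, a small helper can compare two IPv4 CIDR ranges before you hand them to the cluster. This is a hedged sketch in plain shell, and the example ranges are illustrative:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed (exit 0) if the two CIDR ranges overlap, fail otherwise.
cidr_overlap() {
  local a_ip=${1%/*} a_len=${1#*/} b_ip=${2%/*} b_len=${2#*/}
  local a=$(ip_to_int "$a_ip") b=$(ip_to_int "$b_ip")
  local a_size=$(( 1 << (32 - a_len) )) b_size=$(( 1 << (32 - b_len) ))
  [ $a -lt $(( b + b_size )) ] && [ $b -lt $(( a + a_size )) ]
}

# A subnet carved from the same /16 collides, as expected:
cidr_overlap 10.0.0.0/16 10.0.240.0/24 && echo overlap || echo ok
```

Running the same check against a disjoint range such as `10.1.0.0/16` fails, confirming the ranges can coexist.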
For the specific details around limits and sizing for these address ranges, see [Configure Azure CNI networking in AKS][advanced-networking].

### Kubenet networking
-Although kubenet doesn't require you to set up the virtual networks before the cluster is deployed, there are disadvantages to waiting:
+Although kubenet doesn't require you to set up the virtual networks before deploying the cluster, there are disadvantages to waiting, such as:
* Since nodes and pods are placed on different IP subnets, User Defined Routing (UDR) and IP forwarding routes traffic between pods and nodes. This extra routing may reduce network performance.
* Connections to existing on-premises networks or peering to other Azure virtual networks can be complex.
-Since you don't create the virtual network and subnets separately from the AKS cluster, Kubenet is ideal for:
-* Small development or test workloads.
+Since you don't create the virtual network and subnets separately from the AKS cluster, Kubenet is ideal for the following scenarios:
+
+* Small development or test workloads.
* Simple websites with low traffic.
* Lifting and shifting workloads into containers.

For most production deployments, you should plan for and use Azure CNI networking.
-You can also [configure your own IP address ranges and virtual networks using kubenet][aks-configure-kubenet-networking]. Like Azure CNI networking, these address ranges shouldn't overlap each other and shouldn't overlap with any networks associated with the cluster (virtual networks, subnets, on-premises and peered networks).
+You can also [configure your own IP address ranges and virtual networks using kubenet][aks-configure-kubenet-networking]. Like Azure CNI networking, these address ranges shouldn't overlap each other or any networks associated with the cluster (virtual networks, subnets, on-premises and peered networks).
For the specific details around limits and sizing for these address ranges, see [Use kubenet networking with your own IP address ranges in AKS][aks-configure-kubenet-networking].

## Distribute ingress traffic
-> **Best practice guidance**
->
+> **Best practice guidance**
+>
> To distribute HTTP or HTTPS traffic to your applications, use ingress resources and controllers. Compared to an Azure load balancer, ingress controllers provide extra features and can be managed as native Kubernetes resources.
-While an Azure load balancer can distribute customer traffic to applications in your AKS cluster, it's limited in understanding that traffic. A load balancer resource works at layer 4, and distributes traffic based on protocol or ports.
+While an Azure load balancer can distribute customer traffic to applications in your AKS cluster, it's limited in understanding that traffic. A load balancer resource works at *layer 4* and distributes traffic based on protocol or ports.
-Most web applications using HTTP or HTTPS should use Kubernetes ingress resources and controllers, which work at layer 7. Ingress can distribute traffic based on the URL of the application and handle TLS/SSL termination. Ingress also reduces the number of IP addresses you expose and map.
+Most web applications using HTTP or HTTPS should use Kubernetes ingress resources and controllers, which work at *layer 7*. Ingress can distribute traffic based on the URL of the application and handle TLS/SSL termination. Ingress also reduces the number of IP addresses you expose and map.
With a load balancer, each application typically needs a public IP address assigned and mapped to the service in the AKS cluster. With an ingress resource, a single IP address can distribute traffic to multiple applications.

![Diagram showing Ingress traffic flow in an AKS cluster](media/operator-best-practices-network/aks-ingress.png)
- There are two components for ingress:
+There are two components for ingress:
- * An ingress *resource*
- * An ingress *controller*
+1. An ingress *resource*
+2. An ingress *controller*
### Ingress resource
-The *ingress resource* is a YAML manifest of `kind: Ingress`. It defines the host, certificates, and rules to route traffic to services running in your AKS cluster.
+The *ingress resource* is a YAML manifest of `kind: Ingress`. It defines the host, certificates, and rules to route traffic to services running in your AKS cluster.
-The following example YAML manifest would distribute traffic for *myapp.com* to one of two services, *blogservice* or *storeservice*. The customer is directed to one service or the other based on the URL they access.
+The following example YAML manifest distributes traffic for *myapp.com* to one of two services, *blogservice* or *storeservice*, and directs the customer to one service or the other based on the URL they access.
```yaml
apiVersion: networking.k8s.io/v1
There are many scenarios for ingress, including the following how-to guides:
> > To scan incoming traffic for potential attacks, use a web application firewall (WAF) such as [Barracuda WAF for Azure][barracuda-waf] or Azure Application Gateway. These more advanced network resources can also route traffic beyond just HTTP and HTTPS connections or basic TLS termination.
-Typically, an ingress controller is a Kubernetes resource in your AKS cluster that distributes traffic to services and applications. The controller runs as a daemon on an AKS node, and consumes some of the node's resources, like CPU, memory, and network bandwidth. In larger environments, you'll want to:
+Typically, an ingress controller is a Kubernetes resource in your AKS cluster that distributes traffic to services and applications. The controller runs as a daemon on an AKS node, and consumes some of the node's resources, like CPU, memory, and network bandwidth. In larger environments, you may want to consider the following:
+
* Offload some of this traffic routing or TLS termination to a network resource outside of the AKS cluster.
* Scan incoming traffic for potential attacks.

![A web application firewall (WAF) such as Azure App Gateway can protect and distribute traffic for your AKS cluster](media/operator-best-practices-network/web-application-firewall-app-gateway.png)
-For that extra layer of security, a web application firewall (WAF) filters the incoming traffic. With a set of rules, the Open Web Application Security Project (OWASP) watches for attacks like cross-site scripting or cookie poisoning. [Azure Application Gateway][app-gateway] (currently in preview in AKS) is a WAF that integrates with AKS clusters, locking in these security features before the traffic reaches your AKS cluster and applications.
+For that extra layer of security, a web application firewall (WAF) filters the incoming traffic. Using rule sets from the Open Web Application Security Project (OWASP), a WAF watches for attacks like cross-site scripting or cookie poisoning. [Azure Application Gateway][app-gateway] (currently in preview in AKS) is a WAF that integrates with AKS clusters, locking in these security features before the traffic reaches your AKS cluster and applications.
Since other third-party solutions also perform these functions, you can continue to use existing investments or expertise in your preferred product.
Load balancer or ingress resources continually run in your AKS cluster and refin
## Control traffic flow with network policies
-> **Best practice guidance**
+> **Best practice guidance**
>
> Use network policies to allow or deny traffic to pods. By default, all traffic is allowed between pods within a cluster. For improved security, define rules that limit pod communication.

Network policy is a Kubernetes feature available in AKS that lets you control the traffic flow between pods. You allow or deny traffic to the pod based on settings such as assigned labels, namespace, or traffic port. Network policies are a cloud-native way to control the flow of traffic for pods. As pods are dynamically created in an AKS cluster, required network policies can be automatically applied.
-To use network policy, enable the feature when you create a new AKS cluster. You can't enable network policy on an existing AKS cluster. Plan ahead to enable network policy on the necessary clusters.
+To use network policy, enable the feature when you create a new AKS cluster. You can't enable network policy on an existing AKS cluster. Plan ahead to enable network policy on the necessary clusters.
>[!NOTE]
->Network policy should only be used for Linux-based nodes and pods in AKS.
+> Network policy should only be used for Linux-based nodes and pods in AKS.
-You create a network policy as a Kubernetes resource using a YAML manifest. Policies are applied to defined pods, with ingress or egress rules defining traffic flow.
+You create a network policy as a Kubernetes resource using a YAML manifest. Policies are applied to defined pods, with ingress or egress rules defining traffic flow.
-The following example applies a network policy to pods with the *app: backend* label applied to them. The ingress rule only allows traffic from pods with the *app: frontend* label:
+The following example applies a network policy to pods with the *app: backend* label applied to them. The ingress rule only allows traffic from pods with the *app: frontend* label.
```yaml
kind: NetworkPolicy
To get started with policies, see [Secure traffic between pods using network pol
## Securely connect to nodes through a bastion host
-> **Best practice guidance**
+> **Best practice guidance**
> > Don't expose remote connectivity to your AKS nodes. Create a bastion host, or jump box, in a management virtual network. Use the bastion host to securely route traffic into your AKS cluster to remote management tasks.
You can complete most operations in AKS using the Azure management tools or thro
![Connect to AKS nodes using a bastion host, or jump box](media/operator-best-practices-network/connect-using-bastion-host-simplified.png)
-The management network for the bastion host should be secured, too. Use an [Azure ExpressRoute][expressroute] or [VPN gateway][vpn-gateway] to connect to an on-premises network, and control access using network security groups.
+You should also secure the management network for the bastion host. Use an [Azure ExpressRoute][expressroute] or [VPN gateway][vpn-gateway] to connect to an on-premises network and control access using network security groups.
## Next steps
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
To configure Planned Maintenance using pre-created configurations, see [Use Plan
This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+Be sure to upgrade Azure CLI to the latest version using [`az upgrade`](/cli/azure/update-azure-cli#manual-update).
+ [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]

### Limitations
aks Quickstart Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-event-grid.md
Title: Subscribe to Azure Kubernetes Service (AKS) events with Azure Event Grid
-description: Learn how to use Azure Event Grid to subscribe to Azure Kubernetes Service (AKS) events.
+ Title: Subscribe to Azure Kubernetes Service events with Azure Event Grid
+description: Use Azure Event Grid to subscribe to Azure Kubernetes Service events
Previously updated : 06/16/2023 Last updated : 06/22/2023

# Quickstart: Subscribe to Azure Kubernetes Service (AKS) events with Azure Event Grid

Azure Event Grid is a fully managed event routing service that provides uniform event consumption using a publish-subscribe model.
-In this quickstart, you create an Azure Kubernetes Service (AKS) cluster and subscribe to AKS events with Azure Event Grid.
+In this quickstart, you create an AKS cluster and subscribe to AKS events.
## Prerequisites

* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
* [Azure CLI][azure-cli-install] or [Azure PowerShell][azure-powershell-install] installed.
+> [!NOTE]
+> If Event Grid notifications are affected by a service issue (see [Service Outages](https://azure.status.microsoft/status)), AKS operations aren't impacted, as they're independent of Event Grid outages.
+
## Create an AKS cluster

### [Azure CLI](#tab/azure-cli)
-1. Create an Azure resource group using the [`az group create`][az-group-create] command.
-
- ```azurecli-interactive
- az group create --name myResourceGroup --location eastus
- ```
-
-2. Create an AKS cluster using the [`az aks create`][az-aks-create] command.
+Create an AKS cluster using the [`az aks create`][az-aks-create] command. The following example creates a resource group named *MyResourceGroup* and a cluster named *MyAKS* with one node in that resource group:
- ```azurecli-interactive
- az aks create -g myResourceGroup -n myManagedCluster --location eastus --node-count 1 --generate-ssh-keys
- ```
+```azurecli-interactive
+az group create --name MyResourceGroup --location eastus
+az aks create -g MyResourceGroup -n MyAKS --location eastus --node-count 1 --generate-ssh-keys
+```
### [Azure PowerShell](#tab/azure-powershell)
-1. Create an Azure resource group using the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet.
-
- ```azurepowershell-interactive
- New-AzResourceGroup -Name myResourceGroup -Location eastus
- ```
-
-2. Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet.
+Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet. The following example creates a resource group named *MyResourceGroup* and a cluster named *MyAKS* with one node in that resource group:
- ```azurepowershell-interactive
- New-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS -Location eastus -NodeCount 1 -GenerateSshKey
- ```
+```azurepowershell-interactive
+New-AzResourceGroup -Name MyResourceGroup -Location eastus
+New-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS -Location eastus -NodeCount 1 -GenerateSshKey
+```
In this quickstart, you create an Azure Kubernetes Service (AKS) cluster and sub
### [Azure CLI](#tab/azure-cli)
-1. Create a namespace using the [`az eventhubs namespace create`][az-eventhubs-namespace-create] command. Your namespace name must be unique.
-
- ```azurecli-interactive
- az eventhubs namespace create --location eastus --name myNamespace -g myResourceGroup
- ```
-
-2. Create an event hub using the [`az eventhubs eventhub create`][az-eventhubs-eventhub-create] command.
-
- ```azurecli-interactive
- az eventhubs eventhub create --name myEventGridHub --namespace-name myNamespace -g myResourceGroup
- ```
-
-3. Subscribe to the AKS events using the [`az eventgrid event-subscription create`][az-eventgrid-event-subscription-create] command.
-
- ```azurecli-interactive
- SOURCE_RESOURCE_ID=$(az aks show -g MyResourceGroup -n MyAKS --query id --output tsv)
-
- ENDPOINT=$(az eventhubs eventhub show -g MyResourceGroup -n MyEventGridHub --namespace-name MyNamespace --query id --output tsv)
-
- az eventgrid event-subscription create --name MyEventGridSubscription \
- --source-resource-id $SOURCE_RESOURCE_ID \
- --endpoint-type eventhub \
- --endpoint $ENDPOINT
- ```
-
-4. Verify your subscription to AKS events using the [`az eventgrid event-subscription list`][az-eventgrid-event-subscription-list] command.
-
- ```azurecli-interactive
- az eventgrid event-subscription list --source-resource-id $SOURCE_RESOURCE_ID
- ```
-
- The following example output shows you're subscribed to events from the `myManagedCluster` cluster and those events are delivered to the `myEventGridHub` event hub:
-
- ```output
- [
- {
- "deadLetterDestination": null,
- "deadLetterWithResourceIdentity": null,
- "deliveryWithResourceIdentity": null,
- "destination": {
- "deliveryAttributeMappings": null,
- "endpointType": "EventHub",
- "resourceId": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.EventHub/namespaces/myNamespace/eventhubs/myEventGridHub"
- },
- "eventDeliverySchema": "EventGridSchema",
- "expirationTimeUtc": null,
- "filter": {
- "advancedFilters": null,
- "enableAdvancedFilteringOnArrays": null,
- "includedEventTypes": [
- "Microsoft.ContainerService.NewKubernetesVersionAvailable"
- ],
- "isSubjectCaseSensitive": null,
- "subjectBeginsWith": "",
- "subjectEndsWith": ""
- },
- "id": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myManagedCluster/providers/Microsoft.EventGrid/eventSubscriptions/myEventGridSubscription",
- "labels": null,
- "name": "myEventGridSubscription",
- "provisioningState": "Succeeded",
- "resourceGroup": "myResourceGroup",
- "retryPolicy": {
- "eventTimeToLiveInMinutes": 1440,
- "maxDeliveryAttempts": 30
- },
- "systemData": null,
- "topic": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/microsoft.containerservice/managedclusters/myManagedCluster",
- "type": "Microsoft.EventGrid/eventSubscriptions"
- }
- ]
- ```
+Create a namespace and event hub using [`az eventhubs namespace create`][az-eventhubs-namespace-create] and [`az eventhubs eventhub create`][az-eventhubs-eventhub-create]. The following example creates a namespace *MyNamespace* and an event hub *MyEventGridHub* in *MyNamespace*, both in the *MyResourceGroup* resource group.
+
+```azurecli-interactive
+az eventhubs namespace create --location eastus --name MyNamespace -g MyResourceGroup
+az eventhubs eventhub create --name MyEventGridHub --namespace-name MyNamespace -g MyResourceGroup
+```
+
+> [!NOTE]
+> The *name* of your namespace must be unique.
+
+Subscribe to the AKS events using [`az eventgrid event-subscription create`][az-eventgrid-event-subscription-create]:
+
+```azurecli-interactive
+SOURCE_RESOURCE_ID=$(az aks show -g MyResourceGroup -n MyAKS --query id --output tsv)
+ENDPOINT=$(az eventhubs eventhub show -g MyResourceGroup -n MyEventGridHub --namespace-name MyNamespace --query id --output tsv)
+az eventgrid event-subscription create --name MyEventGridSubscription \
+--source-resource-id $SOURCE_RESOURCE_ID \
+--endpoint-type eventhub \
+--endpoint $ENDPOINT
+```
+
+Verify your subscription to AKS events using `az eventgrid event-subscription list`:
+
+```azurecli-interactive
+az eventgrid event-subscription list --source-resource-id $SOURCE_RESOURCE_ID
+```
+
+The following example output shows you're subscribed to events from the *MyAKS* cluster and those events are delivered to the *MyEventGridHub* event hub:
+
+```output
+[
+ {
+ "deadLetterDestination": null,
+ "deadLetterWithResourceIdentity": null,
+ "deliveryWithResourceIdentity": null,
+ "destination": {
+ "deliveryAttributeMappings": null,
+ "endpointType": "EventHub",
+ "resourceId": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNamespace/eventhubs/MyEventGridHub"
+ },
+ "eventDeliverySchema": "EventGridSchema",
+ "expirationTimeUtc": null,
+ "filter": {
+ "advancedFilters": null,
+ "enableAdvancedFilteringOnArrays": null,
+ "includedEventTypes": [
+ "Microsoft.ContainerService.NewKubernetesVersionAvailable","Microsoft.ContainerService.ClusterSupportEnded","Microsoft.ContainerService.ClusterSupportEnding","Microsoft.ContainerService.NodePoolRollingFailed","Microsoft.ContainerService.NodePoolRollingStarted","Microsoft.ContainerService.NodePoolRollingSucceeded"
+ ],
+ "isSubjectCaseSensitive": null,
+ "subjectBeginsWith": "",
+ "subjectEndsWith": ""
+ },
+ "id": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/MyAKS/providers/Microsoft.EventGrid/eventSubscriptions/MyEventGridSubscription",
+ "labels": null,
+ "name": "MyEventGridSubscription",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "MyResourceGroup",
+ "retryPolicy": {
+ "eventTimeToLiveInMinutes": 1440,
+ "maxDeliveryAttempts": 30
+ },
+ "systemData": null,
+ "topic": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/microsoft.containerservice/managedclusters/MyAKS",
+ "type": "Microsoft.EventGrid/eventSubscriptions"
+ }
+]
+```
### [Azure PowerShell](#tab/azure-powershell)
-1. Create a namespace using the [`New-AzEventHubNamespace`][new-azeventhubnamespace] cmdlet. Your namespace name must be unique.
-
- ```azurepowershell-interactive
- New-AzEventHubNamespace -Location eastus -Name MyNamespace -ResourceGroupName MyResourceGroup
- ```
-
-2. Create an event hub using the [`New-AzEventHub`][new-azeventhub] cmdlet.
-
- ```azurepowershell-interactive
- New-AzEventHub -Name MyEventGridHub -Namespace MyNamespace -ResourceGroupName MyResourceGroup
- ```
-
-3. Subscribe to the AKS events using the [`New-AzEventGridSubscription`][new-azeventgridsubscription] cmdlet.
-
- ```azurepowershell-interactive
- $SOURCE_RESOURCE_ID = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myManagedCluster).Id
-
- $ENDPOINT = (Get-AzEventHub -ResourceGroupName myResourceGroup -EventHubName myEventGridHub -Namespace myNamespace).Id
-
- $params = @{
- EventSubscriptionName = 'myEventGridSubscription'
- ResourceId = $SOURCE_RESOURCE_ID
- EndpointType = 'eventhub'
- Endpoint = $ENDPOINT
- }
-
- New-AzEventGridSubscription @params
- ```
-
-4. Verify your subscription to AKS events using the [`Get-AzEventGridSubscription`][get-azeventgridsubscription] cmdlet.
-
- ```azurepowershell-interactive
- Get-AzEventGridSubscription -ResourceId $SOURCE_RESOURCE_ID | Select-Object -ExpandProperty PSEventSubscriptionsList
- ```
-
- The following example output shows you're subscribed to events from the `myManagedCluster` cluster and those events are delivered to the `myEventGridHub` event hub:
-
- ```Output
- EventSubscriptionName : myEventGridSubscription
- Id : /subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myManagedCluster/providers/Microsoft.EventGrid/eventSubscriptions/myEventGridSubscription
- Type : Microsoft.EventGrid/eventSubscriptions
- Topic : /subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/microsoft.containerservice/managedclusters/myManagedCluster
- Filter : Microsoft.Azure.Management.EventGrid.Models.EventSubscriptionFilter
- Destination : Microsoft.Azure.Management.EventGrid.Models.EventHubEventSubscriptionDestination
- ProvisioningState : Succeeded
- Labels :
- EventTtl : 1440
- MaxDeliveryAttempt : 30
- EventDeliverySchema : EventGridSchema
- ExpirationDate :
- DeadLetterEndpoint :
- Endpoint : /subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.EventHub/namespaces/myNamespace/eventhubs/myEventGridHub
- ```
+Create a namespace and event hub using [New-AzEventHubNamespace][new-azeventhubnamespace] and [New-AzEventHub][new-azeventhub]. The following example creates a namespace *MyNamespace* and an event hub *MyEventGridHub* in *MyNamespace*, both in the *MyResourceGroup* resource group.
+
+```azurepowershell-interactive
+New-AzEventHubNamespace -Location eastus -Name MyNamespace -ResourceGroupName MyResourceGroup
+New-AzEventHub -Name MyEventGridHub -Namespace MyNamespace -ResourceGroupName MyResourceGroup
+```
+
+> [!NOTE]
+> The *name* of your namespace must be unique.
+
+Subscribe to the AKS events using [New-AzEventGridSubscription][new-azeventgridsubscription]:
+
+```azurepowershell-interactive
+$SOURCE_RESOURCE_ID = (Get-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS).Id
+$ENDPOINT = (Get-AzEventHub -ResourceGroupName MyResourceGroup -EventHubName MyEventGridHub -Namespace MyNamespace).Id
+$params = @{
+ EventSubscriptionName = 'MyEventGridSubscription'
+ ResourceId = $SOURCE_RESOURCE_ID
+ EndpointType = 'eventhub'
+ Endpoint = $ENDPOINT
+}
+New-AzEventGridSubscription @params
+```
+
+Verify your subscription to AKS events using `Get-AzEventGridSubscription`:
+
+```azurepowershell-interactive
+Get-AzEventGridSubscription -ResourceId $SOURCE_RESOURCE_ID | Select-Object -ExpandProperty PSEventSubscriptionsList
+```
+
+The following example output shows you're subscribed to events from the *MyAKS* cluster and those events are delivered to the *MyEventGridHub* event hub:
+
+```Output
+EventSubscriptionName : MyEventGridSubscription
+Id : /subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/MyAKS/providers/Microsoft.EventGrid/eventSubscriptions/MyEventGridSubscription
+Type : Microsoft.EventGrid/eventSubscriptions
+Topic : /subscriptions/SUBSCRIPTION_ID/resourceGroups/myresourcegroup/providers/microsoft.containerservice/managedclusters/myaks
+Filter : Microsoft.Azure.Management.EventGrid.Models.EventSubscriptionFilter
+Destination : Microsoft.Azure.Management.EventGrid.Models.EventHubEventSubscriptionDestination
+ProvisioningState : Succeeded
+Labels :
+EventTtl : 1440
+MaxDeliveryAttempt : 30
+EventDeliverySchema : EventGridSchema
+ExpirationDate :
+DeadLetterEndpoint :
+Endpoint : /subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNamespace/eventhubs/MyEventGridHub
+```
-When AKS events occur, the events appear in your event hub. For example, when the list of available Kubernetes versions for your clusters changes, you see a `Microsoft.ContainerService.NewKubernetesVersionAvailable` event. For more information on the events AKS emits, see [Azure Kubernetes Service (AKS) as an Event Grid source][aks-events].
+When AKS events occur, the events appear in your event hub. For example, when the list of available Kubernetes versions for your clusters changes, you see a `Microsoft.ContainerService.NewKubernetesVersionAvailable` event. New events are also available for node pool upgrades and for cluster support status. For more information on the events AKS emits, see [Azure Kubernetes Service (AKS) as an Event Grid source][aks-events].
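When delivered with the Event Grid schema, each event arrives as a JSON envelope similar to the following hedged sketch. The field values, and the exact shape of the `data` payload, are illustrative assumptions rather than captured output:

```json
{
  "id": "00000000-0000-0000-0000-000000000000",
  "topic": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/MyAKS",
  "subject": "",
  "eventType": "Microsoft.ContainerService.NewKubernetesVersionAvailable",
  "eventTime": "2023-06-22T00:00:00.000Z",
  "data": {
    "latestSupportedKubernetesVersion": "1.26.3",
    "latestStableKubernetesVersion": "1.25.8",
    "lowestMinorKubernetesVersion": "1.24.9"
  },
  "dataVersion": "1",
  "metadataVersion": "1"
}
```

A consumer reading from the event hub can filter on `eventType` to react only to the AKS events it cares about.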
## Delete the cluster and subscriptions

### [Azure CLI](#tab/azure-cli)
-* Remove the resource group, AKS cluster, namespace, event hub, and all related resources using the [`az group delete`][az-group-delete] command.
Use the [`az group delete`][az-group-delete] command to remove the resource group, AKS cluster, namespace, event hub, and all related resources.
- ```azurecli-interactive
- az group delete --name myResourceGroup --yes --no-wait
- ```
+```azurecli-interactive
+az group delete --name MyResourceGroup --yes --no-wait
+```
### [Azure PowerShell](#tab/azure-powershell)
-* Remove the resource group, AKS cluster, namespace, event hub, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet.
Use the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet to remove the resource group, AKS cluster, namespace, event hub, and all related resources.
- ```azurepowershell-interactive
- Remove-AzResourceGroup -Name myResourceGroup
- ```
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name MyResourceGroup
+```
- > [!NOTE]
- > When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster isn't removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
- >
- > If you used a managed identity, the identity is managed by the platform and doesn't require removal.
+> [!NOTE]
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
+>
+> If you used a managed identity, the identity is managed by the platform and does not require removal.
## Next steps

In this quickstart, you deployed a Kubernetes cluster and then subscribed to AKS events in Azure Event Hubs.
-To learn more about AKS, and walk through a complete code to deployment example, continue to the following Kubernetes cluster tutorial.
+To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
> [!div class="nextstepaction"]
> [AKS tutorial][aks-tutorial]
To learn more about AKS, and walk through a complete code to deployment example,
[az-group-delete]: /cli/azure/group#az_group_delete
[sp-delete]: kubernetes-service-principal.md#other-considerations
[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
-[az-group-create]: /cli/azure/group#az_group_create
-[az-eventgrid-event-subscription-list]: /cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-list
-[get-azeventgridsubscription]: /powershell/module/az.eventgrid/get-azeventgridsubscription
-[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup
aks Scale Down Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-down-mode.md
az aks nodepool add --enable-cluster-autoscaler --min-count 1 --max-count 10 --m
[az-provider-register]: /cli/azure/provider#az_provider_register
[aks-upgrade]: upgrade-cluster.md
[cluster-autoscaler]: cluster-autoscaler.md
-[ephemeral-os]: cluster-configuration.md#ephemeral-os
+[ephemeral-os]: concepts-storage.md#ephemeral-os-disk
[state-billing-azure-vm]: ../virtual-machines/states-billing.md
[spot-node-pool]: spot-node-pool.md
aks Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-policy.md
Title: Use Azure Policy to secure your cluster
-description: Use Azure Policy to secure an Azure Kubernetes Service (AKS) cluster.
+ Title: Use Azure Policy to secure your Azure Kubernetes Service (AKS) clusters
+description: Learn how to use Azure Policy to secure your Azure Kubernetes Service (AKS) clusters.
Previously updated : 09/12/2022 Last updated : 06/20/2023
-# Secure your cluster with Azure Policy
+# Secure your Azure Kubernetes Service (AKS) clusters with Azure Policy
-To improve the security of your Azure Kubernetes Service (AKS) cluster, you can apply and enforce built-in security policies on your cluster using Azure Policy. [Azure Policy][azure-policy] helps to enforce organizational standards and to assess compliance at-scale. After installing the [Azure Policy Add-on for AKS][kubernetes-policy-reference], you can apply individual policy definitions or groups of policy definitions called initiatives (sometimes called policysets) to your cluster. See [Azure Policy built-in definitions for AKS][aks-policies] for a complete list of AKS policy and initiative definitions.
+You can apply and enforce built-in security policies on your Azure Kubernetes Service (AKS) clusters using [Azure Policy][azure-policy]. Azure Policy helps enforce organizational standards and assess compliance at-scale. After you install the [Azure Policy add-on for AKS][kubernetes-policy-reference], you can apply individual policy definitions or groups of policy definitions called initiatives (sometimes called policysets) to your cluster. See [Azure Policy built-in definitions for AKS][aks-policies] for a complete list of AKS policy and initiative definitions.
This article shows you how to apply policy definitions to your cluster and verify those assignments are being enforced.

## Prerequisites

-- This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-- The Azure Policy Add-on for AKS installed on an AKS cluster. Follow these [steps to install the Azure Policy Add-on][azure-policy-addon].
+- This article assumes you have an existing AKS cluster. If you need an AKS cluster, you can create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [Azure portal][aks-quickstart-portal].
+- You need the [Azure Policy add-on for AKS installed on your AKS cluster][azure-policy-addon].
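
If the add-on isn't enabled on your cluster yet, it can be turned on with a single Azure CLI call. A sketch, assuming hypothetical resource group and cluster names (`myResourceGroup`, `myAKSCluster`):

```azurecli-interactive
# Enable the Azure Policy add-on on an existing AKS cluster
az aks enable-addons --addons azure-policy --resource-group myResourceGroup --name myAKSCluster

# Verify the add-on and gatekeeper pods are running
kubectl get pods -n kube-system -l app=azure-policy
kubectl get pods -n gatekeeper-system
```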
## Assign a built-in policy definition or initiative
-To apply a policy definition or initiative, use the Azure portal.
+You can apply a policy definition or initiative in the Azure portal using the following steps:
-1. Navigate to the Azure Policy service in Azure portal.
+1. In the Azure portal, search for and select **Policy** to open the Azure Policy service.
1. In the left pane of the Azure Policy page, select **Definitions**.
-1. Under **Categories** select `Kubernetes`.
-1. Choose the policy definition or initiative you want to apply. For this example, select the `Kubernetes cluster pod security baseline standards for Linux-based workloads` initiative.
+1. Under **Categories**, select `Kubernetes`.
+1. Choose the policy definition or initiative you want to apply. For this example, select the **Kubernetes cluster pod security baseline standards for Linux-based workloads** initiative.
1. Select **Assign**.
-1. Set the **Scope** to the resource group of the AKS cluster with the Azure Policy Add-on enabled.
-1. Select the **Parameters** page and update the **Effect** from `audit` to `deny` to block new deployments violating the baseline initiative. You can also add additional namespaces to exclude from evaluation. For this example, keep the default values.
-1. Select **Review + create** then **Create** to submit the policy assignment.
+1. Set the **Scope** to the resource group of the AKS cluster with the Azure Policy add-on enabled.
+1. Select the **Parameters** page and update the **Effect** from `audit` to `deny` to block new deployments violating the baseline initiative. You can also add extra namespaces to exclude from evaluation. For this example, keep the default values.
+1. Select **Review + create** > **Create** to submit the policy assignment.
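
The portal steps above can also be scripted with Azure CLI. A minimal sketch — the assignment name and resource group are hypothetical, and you first look up the initiative definition's name (a GUID) before assigning it; the `effect` parameter name is an assumption based on the initiative's parameters:

```azurecli-interactive
# Find the built-in pod security baseline initiative (its name is a GUID)
az policy set-definition list --query "[?contains(displayName, 'pod security baseline')].{name:name, displayName:displayName}" -o table

# Assign it at the resource group scope with the deny effect
az policy assignment create \
  --name aks-pod-security-baseline \
  --policy-set-definition <initiative-name-guid> \
  --scope /subscriptions/<subscription-id>/resourceGroups/myResourceGroup \
  --params '{ "effect": { "value": "deny" } }'
```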
## Create and assign a custom policy definition
-Custom policies allow you to define rules for using Azure. For example, you can enforce:
+Custom policies allow you to define rules for using Azure. For example, you can enforce the following types of rules:
+
- Security practices
- Cost management
- Organization-specific rules (like naming or locations)
Before creating a custom policy, check the [list of common patterns and samples]
Custom policy definitions are written in JSON. To learn more about creating a custom policy, see [Azure Policy definition structure][azure-policy-definition-structure] and [Create a custom policy definition][custom-policy-tutorial-create].

> [!NOTE]
-> Azure Policy now utilizes a new property known as *templateInfo* that allows users to define the source type for the constraint template. By defining *templateInfo* in policy definitions, users don't have to define *constraintTemplate* or *constraint* properties. Users still need to define *apiGroups* and *kinds*. For more information on this, see [Understanding Azure Policy effects][azure-policy-effects-audit].
+> Azure Policy now utilizes a new property known as *templateInfo* that allows you to define the source type for the constraint template. When you define *templateInfo* in policy definitions, you don't have to define *constraintTemplate* or *constraint* properties. You still need to define *apiGroups* and *kinds*. For more information on this, see [Understanding Azure Policy effects][azure-policy-effects-audit].
-Once your custom policy definition has been created, see [Assign a policy definition][custom-policy-tutorial-assign] for a step-by-step walkthrough of assigning the policy to your Kubernetes cluster.
+Once you create your custom policy definition, see [Assign a policy definition][custom-policy-tutorial-assign] for a step-by-step walkthrough of assigning the policy to your Kubernetes cluster.
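
As a minimal sketch of the JSON shape a Kubernetes custom policy definition takes with the *templateInfo*, *apiGroups*, and *kinds* properties described above — the constraint-template URL and resource selectors here are hypothetical placeholders, not a working policy:

```json
{
  "properties": {
    "policyRule": {
      "if": {
        "field": "type",
        "in": [ "Microsoft.ContainerService/managedClusters" ]
      },
      "then": {
        "effect": "[parameters('effect')]",
        "details": {
          "templateInfo": {
            "sourceType": "PublicURL",
            "url": "https://example.com/constraint-template.yaml"
          },
          "apiGroups": [ "" ],
          "kinds": [ "Pod" ]
        }
      }
    },
    "parameters": {
      "effect": {
        "type": "String",
        "defaultValue": "audit",
        "allowedValues": [ "audit", "deny", "disabled" ]
      }
    }
  }
}
```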
-## Validate a Azure Policy is running
+## Validate an Azure Policy is running
-Confirm the policy assignments are applied to your cluster by running the following:
+- Confirm the policy assignments are applied to your cluster using the following `kubectl get` command.
-```azurecli-interactive
-kubectl get constrainttemplates
-```
+ ```azurecli-interactive
+ kubectl get constrainttemplates
+ ```
-> [!NOTE]
-> Policy assignments can take [up to 20 minutes to sync][azure-policy-assign-policy] into each cluster.
-
-The output should be similar to:
-
-```console
-$ kubectl get constrainttemplate
-NAME AGE
-k8sazureallowedcapabilities 23m
-k8sazureallowedusersgroups 23m
-k8sazureblockhostnamespace 23m
-k8sazurecontainerallowedimages 23m
-k8sazurecontainerallowedports 23m
-k8sazurecontainerlimits 23m
-k8sazurecontainernoprivilege 23m
-k8sazurecontainernoprivilegeescalation 23m
-k8sazureenforceapparmor 23m
-k8sazurehostfilesystem 23m
-k8sazurehostnetworkingports 23m
-k8sazurereadonlyrootfilesystem 23m
-k8sazureserviceallowedports 23m
-```
+ > [!NOTE]
+ > Policy assignments can take [up to 20 minutes to sync][azure-policy-assign-policy] into each cluster.
-### Validate rejection of a privileged pod
+ Your output should be similar to the following example output:
-Let's first test what happens when you schedule a pod with the security context of `privileged: true`. This security context escalates the pod's privileges. The initiative disallows privileged pods, so the request will be denied resulting in the deployment being rejected.
+ ```output
+ NAME AGE
+ k8sazureallowedcapabilities 23m
+ k8sazureallowedusersgroups 23m
+ k8sazureblockhostnamespace 23m
+ k8sazurecontainerallowedimages 23m
+ k8sazurecontainerallowedports 23m
+ k8sazurecontainerlimits 23m
+ k8sazurecontainernoprivilege 23m
+ k8sazurecontainernoprivilegeescalation 23m
+ k8sazureenforceapparmor 23m
+ k8sazurehostfilesystem 23m
+ k8sazurehostnetworkingports 23m
+ k8sazurereadonlyrootfilesystem 23m
+ k8sazureserviceallowedports 23m
+ ```
-Create a file named `nginx-privileged.yaml` and paste the following YAML manifest:
+### Validate rejection of a privileged pod
+
+Let's first test what happens when you schedule a pod with the security context of `privileged: true`. This security context escalates the pod's privileges. The initiative disallows privileged pods, so the request is denied, which results in the deployment being rejected.
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: nginx-privileged
-spec:
- containers:
- - name: nginx-privileged
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- securityContext:
- privileged: true
-```
+1. Create a file named `nginx-privileged.yaml` and paste in the following YAML manifest.
-Create the pod with [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: nginx-privileged
+ spec:
+ containers:
+ - name: nginx-privileged
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ securityContext:
+ privileged: true
+ ```
-```console
-kubectl apply -f nginx-privileged.yaml
-```
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
-As expected the pod fails to be scheduled, as shown in the following example output:
+ ```azurecli-interactive
+ kubectl apply -f nginx-privileged.yaml
+ ```
-```console
-$ kubectl apply -f nginx-privileged.yaml
+ As expected, the pod fails to be scheduled, as shown in the following example output:
-Error from server ([denied by azurepolicy-container-no-privilege-00edd87bf80f443fa51d10910255adbc4013d590bec3d290b4f48725d4dfbdf9] Privileged container is not allowed: nginx-privileged, securityContext: {"privileged": true}): error when creating "privileged.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [denied by azurepolicy-container-no-privilege-00edd87bf80f443fa51d10910255adbc4013d590bec3d290b4f48725d4dfbdf9] Privileged container is not allowed: nginx-privileged, securityContext: {"privileged": true}
-```
+ ```output
+ Error from server ([denied by azurepolicy-container-no-privilege-00edd87bf80f443fa51d10910255adbc4013d590bec3d290b4f48725d4dfbdf9] Privileged container is not allowed: nginx-privileged, securityContext: {"privileged": true}): error when creating "privileged.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [denied by azurepolicy-container-no-privilege-00edd87bf80f443fa51d10910255adbc4013d590bec3d290b4f48725d4dfbdf9] Privileged container is not allowed: nginx-privileged, securityContext: {"privileged": true}
+ ```
-The pod doesn't reach the scheduling stage, so there are no resources to delete before you move on.
+ The pod doesn't reach the scheduling stage, so there are no resources to delete before you move on.
### Test creation of an unprivileged pod
-In the previous example, the container image automatically tried to use root to bind NGINX to port 80. This request was denied by the policy initiative, so the pod fails to start. Let's try now running that same NGINX pod without privileged access.
+In the previous example, the container image automatically tried to use root to bind NGINX to port 80. The policy initiative denies this request, so the pod fails to start. Now, let's try running that same NGINX pod without privileged access.
+
+1. Create a file named `nginx-unprivileged.yaml` and paste in the following YAML manifest.
-Create a file named `nginx-unprivileged.yaml` and paste the following YAML manifest:
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: nginx-unprivileged
+ spec:
+ containers:
+ - name: nginx-unprivileged
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ ```
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: nginx-unprivileged
-spec:
- containers:
- - name: nginx-unprivileged
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
-```
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
-Create the pod using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+ ```azurecli-interactive
+ kubectl apply -f nginx-unprivileged.yaml
+ ```
-```console
-kubectl apply -f nginx-unprivileged.yaml
-```
+3. Check the status of the pod using the [`kubectl get pods`][kubectl-get] command.
-The pod is successfully scheduled. When you check the status of the pod using the [kubectl get pods][kubectl-get] command, the pod is *Running*:
+ ```azurecli-interactive
+ kubectl get pods
+ ```
-```console
-$ kubectl get pods
+ Your output should be similar to the following example output, which shows the pod is successfully scheduled and has a status of *Running*:
-NAME READY STATUS RESTARTS AGE
-nginx-unprivileged 1/1 Running 0 18s
-```
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ nginx-unprivileged 1/1 Running 0 18s
+ ```
-This example shows the baseline initiative affecting only deployments which violate policies in the collection. Allowed deployments continue to function.
+ This example shows the baseline initiative affecting only the deployments that violate policies in the collection. Allowed deployments continue to function.
-Delete the NGINX unprivileged pod using the [kubectl delete][kubectl-delete] command and specify the name of your YAML manifest:
+4. Delete the NGINX unprivileged pod using the [`kubectl delete`][kubectl-delete] command and specify the name of your YAML manifest.
-```console
-kubectl delete -f nginx-unprivileged.yaml
-```
+ ```azurecli-interactive
+ kubectl delete -f nginx-unprivileged.yaml
+ ```
## Disable a policy or initiative
-To remove the baseline initiative:
+You can remove the baseline initiative in the Azure portal using the following steps:
-1. Navigate to the Policy pane on the Azure portal.
-1. Select **Assignments** from the left pane.
-1. Click the **...** button next to the `Kubernetes cluster pod security baseline standards for Linux-based workloads` initiative.
-1. Select **Delete assignment**.
+1. Navigate to the **Policy** pane on the Azure portal.
+2. Select **Assignments**.
+3. Select the **...** button next to the **Kubernetes cluster pod security baseline standards for Linux-based workloads** initiative.
+4. Select **Delete assignment**.
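
If the initiative was assigned from the command line instead, the equivalent cleanup is a single call. A sketch, assuming a hypothetical assignment name `aks-pod-security-baseline` at the resource group scope:

```azurecli-interactive
az policy assignment delete --name aks-pod-security-baseline --scope /subscriptions/<subscription-id>/resourceGroups/myResourceGroup
```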
## Next steps
-For more information about how Azure Policy works:
+For more information about how Azure Policy works, see the following articles:
-- [Azure Policy Overview][azure-policy]-- [Azure Policy initiatives and polices for AKS][aks-policies]-- Remove the [Azure Policy Add-on][azure-policy-addon-remove].
+- [Azure Policy overview][azure-policy]
+- [Azure Policy initiatives and policies for AKS][aks-policies]
+- Remove the [Azure Policy add-on][azure-policy-addon-remove].
<!-- LINKS - external -->
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
[kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
-[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
-[kubectl-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
<!-- LINKS - internal -->
[aks-policies]: policy-reference.md
[azure-policy-addon]: ../governance/policy/concepts/policy-for-kubernetes.md#install-azure-policy-add-on-for-aks
[azure-policy-addon-remove]: ../governance/policy/concepts/policy-for-kubernetes.md#remove-the-add-on-from-aks
[azure-policy-assign-policy]: ../governance/policy/concepts/policy-for-kubernetes.md#assign-a-policy-definition
-[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
[kubernetes-policy-reference]: ../governance/policy/concepts/policy-for-kubernetes.md
[azure-policy-effects-audit]: ../governance/policy/concepts/effects.md#audit-properties
[custom-policy-tutorial-create]: ../governance/policy/tutorials/create-custom-policy-definition.md
aks Use Byo Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-byo-cni.md
Title: Bring your own Container Network Interface (CNI) plugin
+ Title: Bring your own Container Network Interface (CNI) plugin with Azure Kubernetes Service (AKS)
-description: Learn how to utilize Azure Kubernetes Service with your own Container Network Interface (CNI) plugin
+description: Learn how to bring your own Container Network Interface (CNI) plugin with Azure Kubernetes Service (AKS).
Previously updated : 8/12/2022 Last updated : 06/20/2023

# Bring your own Container Network Interface (CNI) plugin with Azure Kubernetes Service (AKS)
-Kubernetes does not provide a network interface system by default; this functionality is provided by [network plugins][kubernetes-cni]. Azure Kubernetes Service provides several supported CNI plugins. Documentation for supported plugins can be found from the [networking concepts page][aks-network-concepts].
+Kubernetes doesn't provide a network interface system by default. Instead, [network plugins][kubernetes-cni] provide this functionality. Azure Kubernetes Service (AKS) provides several supported CNI plugins. For information on supported plugins, see the [AKS networking concepts][aks-network-concepts].
-While the supported plugins meet most networking needs in Kubernetes, advanced users of AKS may desire to utilize the same CNI plugin used in on-premises Kubernetes environments or to make use of specific advanced functionality available in other CNI plugins.
+The supported plugins meet most networking needs in Kubernetes. However, advanced AKS users might want the same CNI plugin used in on-premises Kubernetes environments or to use advanced functionalities available in other CNI plugins.
-This article shows how to deploy an AKS cluster with no CNI plugin pre-installed, which allows for installation of any third-party CNI plugin that works in Azure.
+This article shows how to deploy an AKS cluster with no CNI plugin preinstalled. From there, you can then install any third-party CNI plugin that works in Azure.
## Support
-BYOCNI has support implications - Microsoft support will not be able to assist with CNI-related issues in clusters deployed with BYOCNI. For example, CNI-related issues would cover most east/west (pod to pod) traffic, along with `kubectl proxy` and similar commands. If CNI-related support is desired, a supported AKS network plugin can be used or support could be procured for the BYOCNI plugin from a third-party vendor.
+Microsoft support can't assist with CNI-related issues in clusters deployed with Bring your own Container Network Interface (BYOCNI). For example, CNI-related issues would cover most east/west (pod to pod) traffic, along with `kubectl proxy` and similar commands. If you want CNI-related support, use a supported AKS network plugin or seek support from the BYOCNI plugin third-party vendor.
-Support will still be provided for non-CNI-related issues.
+Support is still provided for non-CNI-related issues.
## Prerequisites
-* For ARM/Bicep, use at least template version 2022-01-02-preview or 2022-06-01
-* For Azure CLI, use at least version 2.39.0
+* For Azure Resource Manager (ARM) or Bicep, use at least template version 2022-01-02-preview or 2022-06-01.
+* For Azure CLI, use at least version 2.39.0.
* The virtual network for the AKS cluster must allow outbound internet connectivity.
* AKS clusters may not use `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range, pod address range, or cluster virtual network address range.
* The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
  * `Microsoft.Network/virtualNetworks/subnets/join/action`
  * `Microsoft.Network/virtualNetworks/subnets/read`
-* The subnet assigned to the AKS node pool cannot be a [delegated subnet](../virtual-network/subnet-delegation-overview.md).
-* AKS doesn't apply Network Security Groups (NSGs) to its subnet and will not modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with that subnet, you must ensure the security rules in the NSGs allow traffic within the node CIDR range. For more details, see [Network security groups][aks-network-nsg].
+* The subnet assigned to the AKS node pool can't be a [delegated subnet](../virtual-network/subnet-delegation-overview.md).
+* AKS doesn't apply Network Security Groups (NSGs) to its subnet or modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with that subnet, you must ensure the security rules in the NSGs allow traffic within the node CIDR range. For more information, see [Network security groups][aks-network-nsg].
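
The custom-role alternative mentioned in the prerequisites can be expressed as a role-definition JSON. A sketch — the role name and subscription scope are hypothetical placeholders:

```json
{
  "Name": "AKS BYOCNI Subnet Joiner",
  "Description": "Minimal subnet permissions for the AKS cluster identity",
  "Actions": [
    "Microsoft.Network/virtualNetworks/subnets/join/action",
    "Microsoft.Network/virtualNetworks/subnets/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```

Saved as a file, such a definition can be registered with `az role definition create --role-definition @role.json` and then assigned to the cluster identity.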
-## Cluster creation steps
-
-### Deploy a cluster
+## Create an AKS cluster with no CNI plugin preinstalled
# [Azure CLI](#tab/azure-cli)
-Deploying a BYOCNI cluster requires passing the `--network-plugin` parameter with the parameter value of `none`.
+1. Create an Azure resource group for your AKS cluster using the [`az group create`][az-group-create] command.
-1. First, create a resource group to create the cluster in:
```azurecli-interactive
- az group create -l <Region> -n <ResourceGroupName>
+ az group create -l eastus -n myResourceGroup
```
-1. Then create the cluster itself:
+2. Create an AKS cluster using the [`az aks create`][az-aks-create] command. Pass the `--network-plugin` parameter with the parameter value of `none`.
+
+    ```azurecli-interactive
- az aks create -l <Region> -g <ResourceGroupName> -n <ClusterName> --network-plugin none
+ az aks create -l eastus -g myResourceGroup -n myAKSCluster --network-plugin none
```

# [Azure Resource Manager](#tab/azure-resource-manager)
-When using an Azure Resource Manager template to deploy, pass `none` to the `networkPlugin` parameter to the `networkProfile` object. See the [Azure Resource Manager template documentation][deploy-arm-template] for help with deploying this template, if needed.
+> [!NOTE]
+> For information on how to deploy this template, see the [ARM template documentation][deploy-arm-template].
```json
{
When using an Azure Resource Manager template to deploy, pass `none` to the `net
# [Bicep](#tab/bicep)
-When using a Bicep template to deploy, pass `none` to the `networkPlugin` parameter to the `networkProfile` object. See the [Bicep template documentation][deploy-bicep-template] for help with deploying this template, if needed.
+> [!NOTE]
+> For information on how to deploy this template, see the [Bicep template documentation][deploy-bicep-template].
```bicep
param clusterName string = 'aksbyocni'
resource aksCluster 'Microsoft.ContainerService/managedClusters@2022-06-01' = {
-### Deploy a CNI plugin
+## Deploy a CNI plugin
-When AKS provisioning completes, the cluster will be online, but all of the nodes will be in a `NotReady` state:
+Once the AKS provisioning completes, the cluster is online, but all the nodes are in a `NotReady` state, as shown in the following example:
-```bash
-$ kubectl get nodes
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-23902496-vmss000000 NotReady agent 6m9s v1.21.9
+ ```bash
+ $ kubectl get nodes
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-23902496-vmss000000 NotReady agent 6m9s v1.21.9
-$ kubectl get node -o custom-columns='NAME:.metadata.name,STATUS:.status.conditions[?(@.type=="Ready")].message'
-NAME STATUS
-aks-nodepool1-23902496-vmss000000 container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
-```
+ $ kubectl get node -o custom-columns='NAME:.metadata.name,STATUS:.status.conditions[?(@.type=="Ready")].message'
+ NAME STATUS
+ aks-nodepool1-23902496-vmss000000 container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
+ ```
At this point, the cluster is ready for installation of a CNI plugin.
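
Any CNI plugin that works in Azure can be installed now. As one illustrative example only (not a recommendation — chart names and values are taken from Cilium's own BYOCNI guidance and may change between versions; check the vendor documentation), Cilium can typically be installed with Helm:

```azurecli-interactive
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --namespace kube-system \
  --set aksbyocni.enabled=true \
  --set nodeinit.enabled=true

# Nodes transition to Ready once the CNI pods are up
kubectl get nodes -w
```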
Learn more about networking in AKS in the following articles:
* [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](static-ip.md)
* [Use an internal load balancer with Azure Container Service (AKS)](internal-lb.md)
-
* [Create a basic ingress controller with external network connectivity][aks-ingress-basic]
* [Enable the HTTP application routing add-on][aks-http-app-routing]
* [Create an ingress controller that uses an internal, private network and IP address][aks-ingress-internal]
<!-- LINKS - External -->
[kubernetes-cni]: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
-[cni-networking]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md
-[kubenet]: https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#kubenet
-
<!-- LINKS - Internal -->
[az-aks-create]: /cli/azure/aks#az_aks_create
-[aks-ssh]: ssh.md
-[ManagedClusterAgentPoolProfile]: /azure/templates/microsoft.containerservice/managedclusters#managedclusteragentpoolprofile-object
[aks-network-concepts]: concepts-network.md
[aks-network-nsg]: concepts-network.md#network-security-groups
[aks-ingress-basic]: ingress-basic.md
[aks-ingress-static-tls]: ingress-static-ip.md
[aks-http-app-routing]: http-application-routing.md
[aks-ingress-internal]: ingress-internal-ip.md
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[network-policy]: use-network-policies.md
-[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
-[network-comparisons]: concepts-network.md#compare-network-models
-[system-node-pools]: use-system-pools.md
-[prerequisites]: configure-azure-cni.md#prerequisites
[deploy-bicep-template]: ../azure-resource-manager/bicep/deploy-cli.md
+[az-group-create]: /cli/azure/group#az_group_create
+[deploy-arm-template]: ../azure-resource-manager/templates/deploy-cli.md
aks Web App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md
OSM issues a certificate that Nginx uses as the client certificate to proxy HTTP
kubectl apply -f deployment.yaml -n hello-web-app-routing
kubectl apply -f service.yaml -n hello-web-app-routing
kubectl apply -f ingress.yaml -n hello-web-app-routing
- kubectl apply -f ingressbackend.yaml -n hello-web-app-routing
```

The following example output shows the created resources:
deployment.apps/aks-helloworld created
service/aks-helloworld created
ingress.networking.k8s.io/aks-helloworld created
- ingressbackend.policy.openservicemesh.io/aks-helloworld created
```

# [With service annotations (retired)](#tab/service-annotations)
app-service App Service Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-best-practices.md
Note that applications which rely on certificate pinning should also not have a
If an application needs to rely on certificate pinning behavior, it is recommended to add a custom domain to a web app and provide a custom TLS certificate for the domain which can then be relied on for certificate pinning.

## <a name="memoryresources"></a>When apps consume more memory than expected
-When you notice an app consumes more memory than expected as indicated via monitoring or service recommendations, consider the [App Service Auto-Healing feature](https://azure.microsoft.com/blog/auto-healing-windows-azure-web-sites). One of the options for the Auto-Healing feature is taking custom actions based on a memory threshold. Actions span the spectrum from email notifications to investigation via memory dump to on-the-spot mitigation by recycling the worker process. Auto-healing can be configured via web.config and via a friendly user interface as described in this blog post for the [App Service Support Site Extension](https://azure.microsoft.com/blog/additional-updates-to-support-site-extension-for-azure-app-service-web-apps).
+When you notice an app consumes more memory than expected as indicated via monitoring or service recommendations, consider the [App Service Auto-Healing feature](https://azure.microsoft.com/blog/auto-healing-windows-azure-web-sites). One of the options for the Auto-Healing feature is taking custom actions based on a memory threshold. Actions span the spectrum from email notifications to investigation via memory dump to on-the-spot mitigation by recycling the worker process. Auto-healing can be configured via web.config and via a friendly user interface as described in this blog post for the App Service Support Site Extension.
## <a name="CPUresources"></a>When apps consume more CPU than expected

When you notice an app consumes more CPU than expected or experiences repeated CPU spikes as indicated via monitoring or service recommendations, consider scaling up or scaling out the App Service plan. If your application is stateful, scaling up is the only option, while if your application is stateless, scaling out gives you more flexibility and higher scale potential.
When backup failures happen, review most recent results to understand which type
Azure App Service default configuration for Node.js apps is intended to best suit the needs of most common apps. If configuration for your Node.js app would benefit from personalized tuning to improve performance or optimize resource usage for CPU/memory/network resources, see [Best practices and troubleshooting guide for Node applications on Azure App Service](app-service-web-nodejs-best-practices-and-troubleshoot-guide.md). This article describes the iisnode settings you may need to configure for your Node.js app, describes the various scenarios or issues that your app may be facing, and shows how to address these issues.

## <a name=""></a>When Internet of Things (IoT) devices are connected to apps on App Service
-There are a few scenarios where you can improve your environment when running Internet of Things (IoT) devices that are connected to App Service. One very common practice with IoT devices is "certificate pinning". To avoid any unforseen downtime due to changes in the service's managed certificates, you should never pin certificates to the default \*.azurewebsites.net certificate nor to an App Service Managed Certificate. If your system needs to rely on certificate pinning behavior, it is recommended to add a custom domain to a web app and provide a custom TLS certificate for the domain which can then be relied on for certificate pinning. You can refer to the [certificate pinning](#certificatepinning) section of this article for more information.
+There are a few scenarios where you can improve your environment when running Internet of Things (IoT) devices that are connected to App Service. One very common practice with IoT devices is "certificate pinning". To avoid any unforeseen downtime due to changes in the service's managed certificates, you should never pin certificates to the default \*.azurewebsites.net certificate nor to an App Service Managed Certificate. If your system needs to rely on certificate pinning behavior, it is recommended to add a custom domain to a web app and provide a custom TLS certificate for the domain which can then be relied on for certificate pinning. You can refer to the [certificate pinning](#certificatepinning) section of this article for more information.
-To increase resiliency in your environment, you should not rely on a single endpoint for all your devices. You should at least host your web apps in two different regions to avoid a single point of failure and be ready to failover traffic. On App Service, you can add identical custom domain to different web apps as long as these web apps are hosted in different regions. This ensures that if you need to pin certificates, you can also pin on the custom TLS certificate that you provided. Another option would be to use a load balancer in front of the web apps, such as Azure Front Door or Traffic Manager, to ensure high availabilty for your web apps. You can refer to [Quickstart: Create a Front Door for a highly available global web application](../frontdoor/quickstart-create-front-door.md) or [Controlling Azure App Service traffic with Azure Traffic Manager](./web-sites-traffic-manager.md) for more information.
+To increase resiliency in your environment, you should not rely on a single endpoint for all your devices. You should at least host your web apps in two different regions to avoid a single point of failure and be ready to failover traffic. On App Service, you can add identical custom domain to different web apps as long as these web apps are hosted in different regions. This ensures that if you need to pin certificates, you can also pin on the custom TLS certificate that you provided. Another option would be to use a load balancer in front of the web apps, such as Azure Front Door or Traffic Manager, to ensure high availability for your web apps. You can refer to [Quickstart: Create a Front Door for a highly available global web application](../frontdoor/quickstart-create-front-door.md) or [Controlling Azure App Service traffic with Azure Traffic Manager](./web-sites-traffic-manager.md) for more information.
## Next Steps

For more information on best practices, visit [App Service Diagnostics](./overview-diagnostics.md) to find out actionable best practices specific to your resource.
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md
Regardless of the configuration you use to set up authentication, the following
- Give each App Service app its own permissions and consent. - Avoid permission sharing between environments by using separate app registrations for separate deployment slots. When testing new code, this practice can help prevent issues from affecting the production app.
+### Migrate to the Microsoft Graph
+
+Some older apps may also have been set up with a dependency on the [deprecated Azure AD Graph][aad-graph], which is scheduled for full retirement. For example, your app code may have called Azure AD Graph to check group membership as part of an authorization filter in a middleware pipeline. Apps should move to the [Microsoft Graph](/graph/overview) by following the [guidance provided by AAD as part of the Azure AD Graph deprecation process][aad-graph]. In following those instructions, you may need to make some changes to your configuration of App Service authentication. Once you have added Microsoft Graph permissions to your app registration, you can:
+
+1. Update the **Issuer URL** to include the "/v2.0" suffix if it doesn't already. See [Enable Azure Active Directory in your App Service app](#-step-2-enable-azure-active-directory-in-your-app-service-app) for general expectations around this value.
+1. Remove requests for Azure AD Graph permissions from your login configuration. The properties to change depend on [which version of the management API you're using](./configure-authentication-api-version.md):
+ - If you're using the V1 API (`/authsettings`), this would be in the `additionalLoginParams` array.
+ - If you're using the V2 API (`/authsettingsV2`), this would be in the `loginParameters` array.
+
+ You would need to remove any reference to "https://graph.windows.net", for example. This includes the `resource` parameter (which isn't supported by the "/v2.0" endpoint) or any scopes you're specifically requesting that are from the Azure AD Graph.
+
+ You would also need to update the configuration to request the new Microsoft Graph permissions you set up for the application registration. You can use the [.default scope](../active-directory/develop/scopes-oidc.md#the-default-scope) to simplify this setup in many cases. To do so, add a new login parameter `scope=openid profile email https://graph.microsoft.com/.default`.
+
+With these changes, when App Service Authentication attempts to log in, it will no longer request permissions to the Azure AD Graph, and instead it will get a token for the Microsoft Graph. Any use of that token from your application code would also need to be updated, as per the [guidance provided by AAD][aad-graph].
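The parameter cleanup described above can be sketched as a small script. This is an illustrative, hypothetical helper (not part of any App Service tooling): it takes the `loginParameters` array from an `/authsettingsV2` payload, drops entries that reference the deprecated Azure AD Graph, and requests the Microsoft Graph `.default` scope instead.

```python
# Hypothetical helper: rewrite an /authsettingsV2 loginParameters array so it
# no longer references the deprecated Azure AD Graph.
AAD_GRAPH = "https://graph.windows.net"
MS_GRAPH_SCOPE = "scope=openid profile email https://graph.microsoft.com/.default"

def migrate_login_parameters(login_parameters):
    # Drop any parameter that references Azure AD Graph, including the
    # `resource` parameter, which the "/v2.0" endpoint doesn't support.
    kept = [p for p in login_parameters if AAD_GRAPH not in p]
    # Replace any remaining scope request with one targeting Microsoft Graph.
    kept = [p for p in kept if not p.startswith("scope=")]
    kept.append(MS_GRAPH_SCOPE)
    return kept

print(migrate_login_parameters([
    "resource=https://graph.windows.net",
    "scope=openid profile email https://graph.windows.net/Directory.Read.All",
]))
# → ['scope=openid profile email https://graph.microsoft.com/.default']
```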
+
+[aad-graph]: /graph/migrate-azure-ad-graph-overview
+ ## <a name="related-content"> </a>Next steps [!INCLUDE [app-service-mobile-related-content-get-started-users](../../includes/app-service-mobile-related-content-get-started-users.md)]
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
You can use either the [Azure portal](https://portal.azure.com) or the Azure CLI
- Set the Python version with [az webapp config set](/cli/azure/webapp/config#az-webapp-config-set) ```azurecli
- az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "PYTHON|3.7"
+ az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "PYTHON|3.11"
``` - Show all Python versions that are supported in Azure App Service with [az webapp list-runtimes](/cli/azure/webapp#az-webapp-list-runtimes):
app-service Identity Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/identity-scenarios.md
+
+ Title: 'App Service authentication recommendations'
+description: There are several different authentication solutions available for web apps or web APIs hosted on App Service. This article provides recommendations on which auth solution(s) can be used for specific scenarios such as quickly and simply limiting access to your web app, custom authorization, and incremental consent.
++++ Last updated : 06/20/2023+
+# Authentication scenarios and recommendations
+
+You can add authentication to your web app or API running in Azure App Service to limit the users who can access it. There are several different authentication solutions available. This article describes which authentication solution to use for specific scenarios.
+
+## Authentication solutions
+
+- **Azure App Service built-in authentication** - Allows you to sign users in and access data by writing minimal or no code in your web app, RESTful API, or mobile back end. It's built directly into the platform and doesn't require any particular language, library, security expertise, or even any code to use.
+- **Microsoft Authentication Library (MSAL)** - Enables developers to acquire security tokens from the Microsoft identity platform to authenticate users and access secured web APIs. Available for multiple supported platforms and frameworks, these are general purpose libraries that can be used in various hosted environments. Developers can also integrate with multiple sign-in providers, like Azure AD, Facebook, Google, Twitter.
+- **Microsoft.Identity.Web** - A higher-level library wrapping MSAL.NET, it provides a set of ASP.NET Core abstractions that simplify adding authentication support to web apps and web APIs integrating with the Microsoft identity platform. It provides a single-surface API convenience layer that ties together ASP.NET Core, its authentication middleware, and MSAL.NET. This library can be used in apps in various hosted environments. You can integrate with multiple sign-in providers, like Azure AD, Facebook, Google, Twitter.
+
+## Scenario recommendations
+
+The following table lists each authentication solution and some important factors for when you would use it.
+
+|Authentication method|When to use|
+|--|--|
+|Built-in App Service authentication |* You want less code to own and manage.<br>* Your app's language and SDKs don't provide user sign-in or authorization.<br>* You don't have the ability to modify your app code (for example, when migrating legacy apps).<br>* You need to handle authentication through configuration and not code.<br>* You need to sign in external or social users.|
+|Microsoft Authentication Library (MSAL)|* You need a code solution in one of several different languages.<br>* You need to add custom authorization logic.<br>* You need to support incremental consent.<br>* You need information about the signed-in user in your code.<br>* You need to sign in external or social users.<br>* Your app needs to handle the access token expiring without making the user sign in again.|
+|Microsoft.Identity.Web |* You have an ASP.NET Core app. <br>* You need single sign-on support in your IDE during local development.<br>* You need to add custom authorization logic.<br>* You need to support incremental consent.<br>* You need conditional access in your web app.<br>* You need information about the signed-in user in your code.<br>* You need to sign in external or social users.<br>* Your app needs to handle the access token expiring without making the user sign in again.|
+
+The following table lists authentication scenarios and the authentication solution(s) you would use.
+
+|Scenario |App Service built-in auth| Microsoft Authentication Library | Microsoft.Identity.Web |
+|:--|:--:|:--:|:--:|
+| Need a fast and simple way to limit access to users in your organization? | ✅ | ❌ | ❌ |
+| Unable to modify the application code (app migration scenario)? | ✅ | ❌ | ❌ |
+| Your app's language and libraries support user sign-in/authorization? | ❌ | ✅ | ✅ |
+| Even if you can use a code solution, would you rather *not* use libraries? Don't want the maintenance burden? | ✅ | ❌ | ❌ |
+| Does your web app need to provide incremental consent? | ❌ | ✅ | ✅ |
+| Do you need conditional access in your web app? | ❌ | ❌ | ✅ |
+| Your app needs to handle the access token expiring without making the user sign in again (use a refresh token)? | ❌ | ✅ | ✅ |
+| Need custom authorization logic or info about the signed-in user? | ❌ | ✅ | ✅ |
+| Need to sign in users from external or social identity providers? | ✅ | ✅ | ✅ |
+| You have an ASP.NET Core app? | ✅ | ❌ | ✅ |
+| You have a single page app or static web app? | ✅ | ✅ | ✅ |
+| Want Visual Studio integration? | ❌ | ❌ | ✅ |
+| Need single sign-on support in your IDE during local development? | ❌ | ❌ | ✅ |
+
+## Next steps
+
+To get started with built-in App Service authentication, read:
+- [Enable App Service built-in authentication](scenario-secure-app-authentication-app-service.md)
+
+To get started with [Microsoft Authentication Library (MSAL)](/entra/msal/), read:
+- [Add sign-in with Microsoft to a web app](/azure/active-directory/develop/web-app-quickstart)
+- [Only allow authenticated user to access a web API](/azure/active-directory/develop/scenario-protected-web-api-overview)
+- [Sign in users to a single-page application (SPA)](/azure/active-directory/develop/scenario-spa-overview)
+
+To get started with [Microsoft.Identity.Web](/entra/msal/dotnet/microsoft-identity-web/), read:
+
+- [Sign in users to a web app](/azure/active-directory/develop/web-app-quickstart?pivots=devlang-aspnet-core)
+- [Protect a web API](/azure/active-directory/develop/web-api-quickstart?pivots=devlang-aspnet-core)
+- [Sign in users to a Blazor Server app](/azure/active-directory/develop/tutorial-blazor-server)
+
+Learn more about [App Service built-in authentication and authorization](overview-authentication-authorization.md).
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md
Implementing a secure solution for authentication (signing-in users) and authori
- It's built directly into the platform and doesn't require any particular language, SDK, security expertise, or even any code to utilize. - You can integrate with multiple login providers. For example, Azure AD, Facebook, Google, Twitter.
+Your app might need to support more complex scenarios such as Visual Studio integration or incremental consent. There are several different authentication solutions available to support these scenarios. To learn more, read [Identity scenarios](identity-scenarios.md).
+ ## Identity providers App Service uses [federated identity](https://en.wikipedia.org/wiki/Federated_identity), in which a third-party identity provider manages the user identities and authentication flow for you. The following identity providers are available by default:
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
Azure App Service is a fully managed platform as a service (PaaS) offering for d
* **DevOps optimization** - Set up [continuous integration and deployment](deploy-continuous-deployment.md) with Azure DevOps, GitHub, BitBucket, Docker Hub, or Azure Container Registry. Promote updates through [test and staging environments](deploy-staging-slots.md). Manage your apps in App Service by using [Azure PowerShell](/powershell/azure/) or the [cross-platform command-line interface (CLI)](/cli/azure/install-azure-cli). * **Global scale with high availability** - Scale [up](manage-scale-up.md) or [out](../azure-monitor/autoscale/autoscale-get-started.md) manually or automatically. Host your apps anywhere in Microsoft's global datacenter infrastructure, and the App Service [SLA](https://azure.microsoft.com/support/legal/sla/app-service/) promises high availability. * **Connections to SaaS platforms and on-premises data** - Choose from [many hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for enterprise systems (such as SAP), SaaS services (such as Salesforce), and internet services (such as Facebook). Access on-premises data using [Hybrid Connections](app-service-hybrid-connections.md) and [Azure Virtual Networks](./overview-vnet-integration.md).
-* **Security and compliance** - App Service is [ISO, SOC, and PCI compliant](https://www.microsoft.com/trustcenter). Authenticate users with [Azure Active Directory](configure-authentication-provider-aad.md), [Google](configure-authentication-provider-google.md), [Facebook](configure-authentication-provider-facebook.md), [Twitter](configure-authentication-provider-twitter.md), or [Microsoft account](configure-authentication-provider-microsoft.md). Create [IP address restrictions](app-service-ip-restrictions.md) and [manage service identities](overview-managed-identity.md). [Prevent subdomain takeovers](reference-dangling-subdomain-prevention.md).
+* **Security and compliance** - App Service is [ISO, SOC, and PCI compliant](https://www.microsoft.com/trustcenter). Create [IP address restrictions](app-service-ip-restrictions.md) and [managed service identities](overview-managed-identity.md). [Prevent subdomain takeovers](reference-dangling-subdomain-prevention.md).
+* **Authentication** - [Authenticate users](overview-authentication-authorization.md) using the built-in authentication component. Authenticate users with [Azure Active Directory](configure-authentication-provider-aad.md), [Google](configure-authentication-provider-google.md), [Facebook](configure-authentication-provider-facebook.md), [Twitter](configure-authentication-provider-twitter.md), or [Microsoft account](configure-authentication-provider-microsoft.md).
* **Application templates** - Choose from an extensive list of application templates in the [Azure Marketplace](https://azure.microsoft.com/marketplace/), such as WordPress, Joomla, and Drupal. * **Visual Studio and Visual Studio Code integration** - Dedicated tools in Visual Studio and Visual Studio Code streamline the work of creating, deploying, and debugging. * **API and mobile features** - App Service provides turn-key CORS support for RESTful API scenarios, and simplifies mobile app scenarios by enabling authentication, offline data sync, push notifications, and more.
app-service Quickstart Python 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python-1.md
In this quickstart, you deploy a Python web app to [App Service on Linux](overvi
## Set up your initial environment 1. Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-1. Install <a href="https://www.python.org/downloads/" target="_blank">Python 3.6 or higher</a>.
+1. Install <a href="https://www.python.org/downloads/" target="_blank">Python</a>.
1. Install the <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> 2.0.80 or higher, with which you run commands in any shell to provision and configure Azure resources. Open a terminal window and check your Python version is 3.6 or higher:
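As a quick sanity check, the interpreter itself can report whether it meets the quickstart's minimum version. This is just an illustrative sketch; the quickstart itself assumes you run `python --version` (or `python3 --version`) in a terminal.

```python
import sys

# Sketch: verify the local interpreter meets the quickstart's 3.6 minimum.
print(sys.version_info[:2] >= (3, 6))
```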
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
# Quickstart: Deploy a Python (Django or Flask) web app to Azure App Service
-In this quickstart, you'll deploy a Python web app (Django or Flask) to [Azure App Service](./overview.md#app-service-on-linux). Azure App Service is a fully managed web hosting service that supports Python 3.7 and higher apps hosted in a Linux server environment.
+In this quickstart, you'll deploy a Python web app (Django or Flask) to [Azure App Service](./overview.md#app-service-on-linux). Azure App Service is a fully managed web hosting service that supports Python apps hosted in a Linux server environment.
To complete this quickstart, you need: 1. An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
First, enable Azure Active Directory authentication to SQL Database by assigning
1. Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az-ad-user-list) and replace *\<user-principal-name>*. The result is saved to a variable. ```azurecli-interactive
- azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query '[].id' --output tsv)
+ $azureaduser=(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query '[].id' --output tsv)
``` > [!TIP]
Here's an example of the output:
> If you want, you can add the identity to an [Azure AD group](../active-directory/fundamentals/active-directory-manage-groups.md), then grant SQL Database access to the Azure AD group instead of the identity. For example, the following commands add the managed identity from the previous step to a new group called _myAzureSQLDBAccessGroup_: > > ```azurecli-interactive
-> groupid=$(az ad group create --display-name myAzureSQLDBAccessGroup --mail-nickname myAzureSQLDBAccessGroup --query objectId --output tsv)
-> msiobjectid=$(az webapp identity show --resource-group myResourceGroup --name <app-name> --query principalId --output tsv)
+> $groupid=(az ad group create --display-name myAzureSQLDBAccessGroup --mail-nickname myAzureSQLDBAccessGroup --query objectId --output tsv)
+> $msiobjectid=(az webapp identity show --resource-group myResourceGroup --name <app-name> --query principalId --output tsv)
> az ad group member add --group $groupid --member-id $msiobjectid > az ad group member list -g $groupid > ```
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
zone_pivot_groups: deploy-python-web-app-postgressql
# Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
-In this tutorial, you'll deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) to **[Azure App Service](./overview.md#app-service-on-linux)** with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. Azure App Service supports [Python 3.7 or higher](https://www.python.org/downloads/) in a Linux server environment.
+In this tutorial, you'll deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) to **[Azure App Service](./overview.md#app-service-on-linux)** with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. Azure App Service supports [Python](https://www.python.org/downloads/) in a Linux server environment.
:::image type="content" border="False" source="./media/tutorial-python-postgresql-app/python-postgresql-app-architecture-240px.png" lightbox="./media/tutorial-python-postgresql-app/python-postgresql-app-architecture.png" alt-text="An architecture diagram showing an App Service with a PostgreSQL database in Azure.":::
application-gateway Ingress Controller Install Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md
looks something like this: `/subscriptions/A/resourceGroups/B/providers/Microsof
``` >[!Note]
->If the virtual network Application Gateway is deployed into doesn't reside in the same resource group as the AKS nodes, please ensure the identity used by AGIC has _Network Contributor_ role assigned to the subnet the Application Gateway is deployed into.
+> If the virtual network Application Gateway is deployed into doesn't reside in the same resource group as the AKS nodes, please ensure the identity used by AGIC has the **Microsoft.Network/virtualNetworks/subnets/join/action** permission delegated to the subnet Application Gateway is deployed into. If a custom role is not defined with this permission, you may use the built-in _Network Contributor_ role, which contains the _Microsoft.Network/virtualNetworks/subnets/join/action_ permission.
## Using a Service Principal It's also possible to provide AGIC access to ARM via a Kubernetes secret.
application-gateway Tutorial Ingress Controller Add On New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-new.md
Deploying a new AKS cluster with the AGIC add-on enabled without specifying an e
az aks create -n myCluster -g myResourceGroup --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name myApplicationGateway --appgw-subnet-cidr "10.225.0.0/16" --generate-ssh-keys ```
-If the virtual network Application Gateway is deployed into doesn't reside in the same resource group as the AKS nodes, please ensure the identity used by AGIC has Network Contributor role assigned to the subnet the Application Gateway is deployed into.
+If the virtual network Application Gateway is deployed into doesn't reside in the same resource group as the AKS nodes, please ensure the identity used by AGIC has the **Microsoft.Network/virtualNetworks/subnets/join/action** permission delegated to the subnet Application Gateway is deployed into. If a custom role is not defined with this permission, you may use the built-in _Network Contributor_ role, which contains the _Microsoft.Network/virtualNetworks/subnets/join/action_ permission.
```azurecli-interactive # Get application gateway id from AKS addon profile
azure-arc Troubleshoot Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-managed-instance.md
kubectl -n $nameSpace get sqlmi $sqlmiName -o jsonpath-as-json='{.status}'
**Results**
-The state should be `Ready`. If the value isn't `Ready`, you need to wait. If state is error, get the message field, collect logs, and contact support. See [Collecting the logs](#collecting-the-logs).
+The state should be `Ready`. If the value isn't `Ready`, you need to wait. If state is error, get the message field, collect logs, and contact support. See [Collect the logs](#collect-the-logs).
### Check the routing label for stateful set The routing label for stateful set is used to route external endpoint to a matched pod. The name of the label is `role.ag.mssql.microsoft.com`.
kubectl -n $nameSpace get pods $sqlmiName-2 -o jsonpath-as-json='{.metadata.labe
**Results**
-If you didn't find primary, kill the pod that doesn't have any `role.ag.mssql.microsoft.com` label. If this doesn't resolve the issue, collect logs and contact support. See [Collecting the logs](#collecting-the-logs).
+If you didn't find primary, kill the pod that doesn't have any `role.ag.mssql.microsoft.com` label. If this doesn't resolve the issue, collect logs and contact support. See [Collect the logs](#collect-the-logs).
### Get Replica state from local container connection
kubectl exec -ti -n $nameSpace $sqlmiName-2 -c arc-sqlmi -- /opt/mssql-tools/bin
All replicas should be connected & healthy. Here is the detailed description of the query results [sys.dm_hadr_availability_replica_states](/sql/relational-databases/system-dynamic-management-views/sys-dm-hadr-availability-replica-states-transact-sql).
-If you find it isn't synchronized or not connected unexpectedly, try to kill the pod which has the problem. If problem persists, collect logs and contact support. See [Collecting the logs](#collecting-the-logs).
+If you find it isn't synchronized or not connected unexpectedly, try to kill the pod which has the problem. If problem persists, collect logs and contact support. See [Collect the logs](#collect-the-logs).
> [!NOTE] > If there are large databases in the instance, the seeding process to the secondary could take a while. If this happens, wait for seeding to complete.
kubectl exec -ti -n $nameSpace $sqlmiName-2 -c arc-sqlmi -- /opt/mssql-tools/bin
**Results**
-You should get `ServerName` from `Listener` of each replica. If you can't get `ServerName`, kill the pods which have the problem. If the problem persists after recovery, collect logs and contact support. See [Collecting the logs](#collecting-the-logs).
+You should get `ServerName` from `Listener` of each replica. If you can't get `ServerName`, kill the pods which have the problem. If the problem persists after recovery, collect logs and contact support. See [Collect the logs](#collect-the-logs).
### Check Kubernetes network connection
You should be able to connect to exposed external port (which has been confirmed
You can use any client like `SqlCmd`, SQL Server Management Studio (SSMS), or Azure Data Studio (ADS) to test this out.
-## Collecting the logs
+## Connection between failover groups is lost
-If the previous steps all succeeded without any problem and you still can't log in, collect the logs and contact support
-
-### Connection between Failover groups is lost
-If the Failover groups between primary and geo-secondary Arc SQL Managed instances is configured to be in `sync` mode and the connection is lost for whatever reason for an extended period of time, then the logs on the primary Arc SQL managed instance cannot be truncated until the transactions are sent to the geo-secondary. This could lead to the logs filling up and potentially running out of space on the primary site. To break out of this situation, remove the failover groups and re-configure when the connection between the sites is re-established.
+If the failover group between the primary and geo-secondary Arc SQL Managed Instances is configured to be in `sync` mode and the connection is lost for any reason for an extended period of time, the logs on the primary Arc SQL Managed Instance can't be truncated until the transactions are sent to the geo-secondary. This could lead to the logs filling up and potentially running out of space on the primary site. To break out of this situation, remove the failover groups and reconfigure them when the connection between the sites is re-established.
The failover groups can be removed on both primary as well as secondary site as follows:
and if the data controller is deployed in `direct` mode, provide the `sharedname
Once the failover group on the primary site is deleted, logs can be truncated to free up space.
+## Collect the logs
+
+If the previous steps all succeeded without any problem and you still can't log in, collect the logs and contact support.
+ ### Collection controller logs ```console
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
The following private cloud environments and their versions are officially suppo
### Networking
-Arc resource bridge communicates outbound securely to Azure Arc over TCP port 443. If the appliance needs to connect through a firewall or proxy server to communicate over the internet, it communicates outbound using the HTTPS protocol. You may need to allow specific URLs to [ensure outbound connectivity is not blocked](troubleshoot-resource-bridge.md#restricted-outbound-connectivity) by your firewall or proxy server. For more information, see [Azure Arc resource bridge (preview) network requirements](network-requirements.md).
+Arc resource bridge communicates outbound securely to Azure Arc over TCP port 443. If the appliance needs to connect through a firewall or proxy server to communicate over the internet, it communicates outbound using the HTTPS protocol. You may need to allow specific URLs to [ensure outbound connectivity is not blocked](troubleshoot-resource-bridge.md#not-able-to-connect-to-url) by your firewall or proxy server. For more information, see [Azure Arc resource bridge (preview) network requirements](network-requirements.md).
## Next steps
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md
Arc resource bridge has the following minimum resource requirements:
These minimum requirements enable most scenarios. However, a partner product may support a higher resource connection count to Arc resource bridge, which requires the bridge to have higher resource requirements. Failure to provide sufficient resources may cause errors during deployment, such as disk copy errors. Review the partner product's documentation for specific resource requirements. > [!NOTE]
-> To [use Arc resource bridge with Azure Kubernetes Service (AKS) on Azure Stack HCI](#aks-and-arc-resource-bridge-on-azure-stack-hci), the AKS clusters must be deployed prior to deploying Arc resource bridge. If Arc resource bridge has already been deployed, AKS clusters can't be installed unless you delete Arc resource bridge first. Once your AKS clusters are deployed to Azure Stack HCI, you can deploy Arc resource bridge again.
+> To use Azure Kubernetes Service (AKS) on Azure Stack HCI with Arc resource bridge, AKS must be deployed prior to deploying Arc resource bridge. If Arc resource bridge has already been deployed, AKS can't be installed unless you delete Arc resource bridge first. Once AKS is deployed to Azure Stack HCI, you can deploy Arc resource bridge again.
-## Management machine requirements
+## IP address prefix (subnet) requirements
+
+The IP address prefix (subnet) where Arc resource bridge will be deployed requires a minimum prefix of /29. The IP address prefix must have enough available IP addresses for the gateway IP, control plane IP, appliance VM IP, and reserved appliance VM IP.
+
+The IP address prefix is the subnet's IP address range for the virtual network and subnet mask (IP Mask) in CIDR notation, for example `192.168.7.1/24`. You provide the IP address prefix (in CIDR notation) during the creation of the configuration files for Arc resource bridge.
+
+Consult your system or network administrator to obtain the IP address prefix in CIDR notation. An IP Subnet CIDR calculator may be used to obtain this value.
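The /29 minimum can be checked mechanically with Python's standard `ipaddress` module. This is an illustrative sketch (the function name is ours, not part of any Arc tooling): a /29 leaves six usable host addresses after excluding the network and broadcast addresses, enough for the gateway, control plane, appliance VM, and reserved appliance VM IPs.

```python
import ipaddress

def enough_addresses(prefix, required=4):
    """Return True if the prefix has at least `required` usable host IPs."""
    subnet = ipaddress.ip_network(prefix, strict=False)
    usable = subnet.num_addresses - 2  # exclude network and broadcast addresses
    return usable >= required

print(enough_addresses("192.168.7.0/29"))  # → True (6 usable addresses)
print(enough_addresses("192.168.7.0/30"))  # → False (only 2 usable addresses)
```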
+
+## Static configuration
-The machine used to run the commands to deploy Arc resource bridge, and maintain it, is called the *management machine*. The management machine should be considered part of the Arc resource bridge ecosystem, as it has specific requirements and is necessary to manage the appliance VM.
+Static IP configuration is recommended for Arc resource bridge, because the resource bridge needs three static IPs in the same subnet for the control plane, appliance VM, and reserved appliance VM.
-Because the management machine needs these specific requirements to manage Arc resource bridge, once the machine is set up, it should continue to be the primary machine used to maintain Arc resource bridge.
+If using DHCP, reserve those IP addresses, ensuring the IPs are outside of the assignable DHCP range of IPs (that is, the control plane IP should be treated as a reserved/static IP that no other machine on the network will use or receive from DHCP). DHCP is generally not recommended because a change in IP address (for example, due to an outage) impacts the resource bridge availability.
-The management machine has the following requirements:
+## Management machine requirements
+
+The machine used to run the commands to deploy and maintain Arc resource bridge is called the *management machine*.
+
+Management machine requirements:
- [Azure CLI x64](/cli/azure/install-azure-cli-windows?tabs=azure-cli) installed.
- Open communication to Control Plane IP (`controlplaneendpoint` parameter in `createconfig` command).
-- Open communication to Appliance VM IP (`k8snodeippoolstart` parameter in `createconfig` command. May be referred to in partner products as Start Range IP, RB IP Start or VM IP 1).
-- Open communication to the reserved Appliance VM IP for upgrade (`k8snodeippoolend` parameter in `createconfig` command. May be referred to as End Range IP, RB IP End or VM IP 2).
+- Open communication to Appliance VM IP.
+- Open communication to the reserved Appliance VM IP.
- Internal and external DNS resolution. The DNS server must resolve internal names, such as the vCenter endpoint for vSphere or cloud agent service endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses that are [required URLs](network-requirements.md#outbound-connectivity) for deployment.
-- If using a proxy, the proxy server configuration on the management machine must allow the machine to have internet access and to connect to [required URLs](network-requirements.md#outbound-connectivity) needed for deployment, such as the URL to download OS images.
+- Internet access.
+
+## Appliance VM IP address requirements
+
+Arc resource bridge consists of an appliance VM that is deployed on-premises. The appliance VM has visibility into the on-premises infrastructure and can tag on-premises resources (guest management) for projection into Azure Resource Manager (ARM).
+
+The appliance VM is assigned an IP address from the `k8snodeippoolstart` parameter in the `createconfig` command; it may be referred to in partner products as Start Range IP, RB IP Start or VM IP 1.
-## Appliance VM requirements
+The appliance VM IP is the starting IP address for the appliance VM IP pool range. The VM IP pool range requires a minimum of 2 IP addresses.
-Arc resource bridge consists of an appliance VM that is deployed on-premises. The appliance VM has visibility into the on-premises infrastructure and can tag on-premises resources (guest management) for projection into Azure Resource Manager (ARM). The appliance VM is assigned an IP address from the `k8snodeippoolstart` parameter in the `createconfig` command (May be referred to in partner products as Start Range IP, RB IP Start or VM IP 1).
+Appliance VM IP address requirements:
-The appliance VM has the following requirements:
+- Open communication with the management machine and management endpoint (such as vCenter for VMware or MOC cloud agent service endpoint for Azure Stack HCI).
+- Internet connectivity to [required URLs](network-requirements.md#outbound-connectivity) enabled in proxy/firewall.
+- Static IP assigned (strongly recommended)
+ - If using DHCP, then the address must be reserved and outside of the assignable DHCP range of IPs. No other machine on the network will use or receive this IP from DHCP. DHCP is generally not recommended because a change in IP address (for example, due to an outage) impacts the resource bridge availability.
-- Open communication with the management machine, vCenter endpoint (for VMware), MOC cloud agent service endpoint (for Azure Stack HCI), or other control center for the on-premises environment.
-- The appliance VM needs to be able to resolve the management machine and vice versa.
-- Internet access.
-- Connectivity to [required URLs](network-requirements.md#outbound-connectivity) enabled in proxy and firewall.
-- Static IP assigned (strongly recommended), used for the `k8snodeippoolstart` in configuration command. This IP address should only be used for the appliance VM and not in-use anywhere else on the network. (If using DHCP, then the address must be reserved.)
-- Appliance VM IP address must be from within the IP address prefix provided during configuration creation command.
-- Ability to reach a DNS server that can resolve internal names, such as the vCenter endpoint for vSphere or cloud agent service endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses, such as Azure service addresses, container registry names, and other [required URLs](network-requirements.md#outbound-connectivity).
-- If using a proxy, the proxy server configuration is provided when creating the configuration files for Arc resource bridge. The proxy should allow internet access on the appliance VM to connect to [required URLs](network-requirements.md#outbound-connectivity) needed for deployment, such as the URL to download OS images. The proxy server has to also be reachable from IPs within the IP prefix, including the appliance VM IP.
+- Must be from within the IP address prefix.
+- Internal and external DNS resolution.
+- If using a proxy, the proxy server has to be reachable from this IP and all IPs within the VM IP pool.
## Reserved appliance VM IP requirements
-Arc resource bridge reserves an additional IP address to be used for the appliance VM upgrade. During upgrade, a new appliance VM is created with the reserved appliance VM IP. Once the new appliance VM is created, the old appliance VM is deleted, and its IP address becomes reserved for a future upgrade. The reserved appliance VM IP is assigned an IP address from the `k8snodeippoolend` parameter in the `az arcappliance createconfig` command (May be referred to as End Range IP, RB IP End or VM IP 2).
+Arc resource bridge reserves an additional IP address to be used for the appliance VM upgrade.
-The reserved appliance VM IP has the following requirements:
+The reserved appliance VM IP is assigned an IP address via the `k8snodeippoolend` parameter in the `az arcappliance createconfig` command. This IP address may be referred to as End Range IP, RB IP End, or VM IP 2.
-- Open communication with the management machine, vCenter endpoint (for VMware), MOC cloud agent service endpoint (for Azure Stack HCI), or other control center for the on-premises environment.
-- The appliance VM needs to be able to resolve the management machine and vice versa.
-- Internet access.
-- Connectivity to [required URLs](network-requirements.md#outbound-connectivity) enabled in proxy and firewall.
-- Static IP assigned, used for the `k8snodeippoolend` in configuration command. (If using DHCP, then the address must be reserved.)
-
+The reserved appliance VM IP is the ending IP address for the appliance VM IP pool range. If specifying an IP pool range larger than two IP addresses, the additional IPs are reserved.
-- If using a proxy, the proxy server has to also be reachable from IPs within the IP prefix, including the reserved appliance VM IP.
+Reserved appliance VM IP requirements:
+
+- Open communication with the management machine and management endpoint (such as vCenter for VMware or MOC cloud agent service endpoint for Azure Stack HCI).
+
+- Internet connectivity to [required URLs](network-requirements.md#outbound-connectivity) enabled in proxy/firewall.
+
+- Static IP assigned (strongly recommended)
+
+ - If using DHCP, then the address must be reserved and outside of the assignable DHCP range of IPs. No other machine on the network will use or receive this IP from DHCP. DHCP is generally not recommended because a change in IP address (for example, due to an outage) impacts the resource bridge availability.
+
+ - Must be from within the IP address prefix.
+
+ - Internal and external DNS resolution.
+
+ - If using a proxy, the proxy server has to be reachable from this IP and all IPs within the VM IP pool.
## Control plane IP requirements
-The appliance VM hosts a management Kubernetes cluster with a control plane that should be given a single, static IP address. This IP is assigned from the `controlplaneendpoint` parameter in the `createconfig` command.
+The appliance VM hosts a management Kubernetes cluster with a control plane that requires a single, static IP address. This IP is assigned from the `controlplaneendpoint` parameter in the `createconfig` command or equivalent configuration files creation command.
+
+Control plane IP requirements:
+
+ - Open communication with the management machine.
+ - Static IP address assigned; the IP address should be outside the DHCP range but still available on the network segment. This IP address can't be assigned to any other machine on the network.
+ - If using DHCP, the control plane IP should be a single reserved IP that is outside of the assignable DHCP range of IPs. No other machine on the network will use or receive this IP from DHCP. DHCP is generally not recommended because a change in IP address (for example, due to an outage) impacts the resource bridge availability.
+ - If using Azure Kubernetes Service on Azure Stack HCI (AKS hybrid) and installing Arc resource bridge, then the control plane IP for the resource bridge can't be used by the AKS hybrid cluster. For specific instructions on deploying Arc resource bridge with AKS on Azure Stack HCI, see [AKS on HCI (AKS hybrid) - Arc resource bridge deployment](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server).
-The control plane IP has the following requirements:
+ - If using a proxy, the proxy server has to be reachable from IPs within the IP address prefix, including the reserved appliance VM IP.
-- Open communication with the management machine.
-- The control plane needs to be able to resolve the management machine and vice versa.
-- Static IP address assigned; the IP should be outside the DHCP range but still available on the network segment. This IP address can't be assigned to any other machine on the network. If you're using Azure Kubernetes Service on Azure Stack HCI (AKS hybrid) and installing resource bridge, then the control plane IP for the resource bridge can't be used by the AKS hybrid cluster. For specific instructions on deploying Arc resource bridge with AKS on Azure Stack HCI, see [AKS on HCI (AKS hybrid) - Arc resource bridge deployment](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server).
+## DNS server
+
+DNS server(s) must have internal and external endpoint resolution. The appliance VM and control plane need to resolve the management machine and vice versa. All three IPs must be able to reach the required URLs for deployment.
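As a rough way to check this requirement from a given machine (a sketch using Python's standard library; the internal endpoint name below is a made-up placeholder, not a name from the article):

```python
# Sketch: verify that the local resolver can map a hostname to an IP address,
# similar to the nslookup checks used when troubleshooting Arc resource bridge.
import socket

def resolves(hostname: str) -> bool:
    """Return True if the local DNS resolver can resolve the hostname."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

# "vcenter.contoso.internal" is a hypothetical internal endpoint for illustration.
for name in ["localhost", "vcenter.contoso.internal"]:
    print(name, "->", "resolves" if resolves(name) else "FAILS to resolve")
```

Run the equivalent check for each internal endpoint (such as vCenter or the MOC cloud agent) and for the required external URLs to confirm both internal and external resolution.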
-- If using a proxy, the proxy server has to also be reachable from IPs within the IP prefix, including the reserved appliance VM IP.
+## Gateway
+
+The gateway IP should be an IP from within the subnet designated in the IP address prefix.
+
+## Example minimum configuration for static IP deployment
+
+The following example shows valid configuration values that can be passed during configuration file creation for Arc resource bridge. It is strongly recommended to use static IP addresses when deploying Arc resource bridge.
+
+Notice that the IP addresses for the gateway, control plane, appliance VM and DNS server (for internal resolution) are within the IP address prefix. This key detail helps ensure successful deployment of the appliance VM.
+
+ IP Address Prefix (CIDR format): 192.168.0.0/29
+
+ Gateway (IP format): 192.168.0.1
+
+ VM IP Pool Start (IP format): 192.168.0.2
+
+ VM IP Pool End (IP format): 192.168.0.3
+
+ Control Plane IP (IP format): 192.168.0.4
+
+ DNS servers (IP list format): 192.168.0.1, 10.0.0.5, 10.0.0.6
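The relationship between these values can be sanity-checked with a short sketch (Python's standard `ipaddress` module; this is an illustration, not a tool the article provides):

```python
# Sketch: confirm the example static IP configuration is internally consistent --
# gateway, VM IP pool, and control plane IP must all fall inside the IP address
# prefix, and the VM IP pool needs at least 2 addresses.
import ipaddress

prefix = ipaddress.ip_network("192.168.0.0/29")
gateway = ipaddress.ip_address("192.168.0.1")
vm_pool_start = ipaddress.ip_address("192.168.0.2")  # appliance VM IP
vm_pool_end = ipaddress.ip_address("192.168.0.3")    # reserved appliance VM IP
control_plane = ipaddress.ip_address("192.168.0.4")

for ip in (gateway, vm_pool_start, vm_pool_end, control_plane):
    assert ip in prefix, f"{ip} is outside {prefix}"

# Pool size: appliance VM IP + reserved upgrade IP at minimum.
pool_size = int(vm_pool_end) - int(vm_pool_start) + 1
assert pool_size >= 2
print("configuration OK")
```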
## User account and credentials
-Arc resource bridge may require a separate user account with the necessary roles to view and manage resources in the on-premises infrastructure (ex: Arc-enabled VMware vSphere). If so, during creation of the configuration files, the `username` and `password` parameters will be required. The account credentials are then stored in a configuration file locally within the appliance VM.
+Arc resource bridge may require a separate user account with the necessary roles to view and manage resources in the on-premises infrastructure (such as Arc-enabled VMware vSphere). If so, during creation of the configuration files, the `username` and `password` parameters will be required. The account credentials are then stored in a configuration file locally within the appliance VM.
If the user account is set to periodically change passwords, [the credentials must be immediately updated on the resource bridge](maintenance.md#update-credentials-in-the-appliance-vm). This user account may also be set with a lockout policy to protect the on-premises infrastructure, in case the credentials aren't updated and the resource bridge makes multiple attempts to use expired credentials to access the on-premises control center.
There are several different types of configuration files, based on the on-premis
### Appliance configuration files
-Three configuration files are created when the `createconfig` command completes (or the equivalent commands used by Azure Stack HCI and AKS hybrid): \<resourcename\>-resource.yaml, \<resourcename\>-appliance.yaml and \<resourcename\>-infra.yaml.
+Three configuration files are created when the `createconfig` command completes (or the equivalent commands used by Azure Stack HCI and AKS hybrid): `<appliance-name>-resource.yaml`, `<appliance-name>-appliance.yaml` and `<appliance-name>-infra.yaml`.
+
+By default, these files are generated in the current CLI directory when `createconfig` completes. These files should be saved in a secure location on the management machine, because they're required for maintaining the appliance VM. Because the configuration files reference each other, all three files must be stored in the same location. If the files are moved from their original location at deployment, open the files to check that the reference paths to the configuration files are accurate.
The appliance VM hosts a management Kubernetes cluster. The kubeconfig is a low-
Arc resource bridge uses a MOC login credential called [KVA token](/azure-stack/hci/manage/deploy-arc-resource-bridge-using-command-line#set-up-arc-vm-management) (kvatoken.tok) to interact with Azure Stack HCI. The KVA token is generated with the appliance configuration files when deploying Arc resource bridge. This token is also used when collecting logs for Arc resource bridge, so it should be saved in a secure location with the rest of the appliance configuration files. This file is saved in the directory provided during configuration file creation or the default CLI directory.
-## AKS and Arc Resource Bridge on Azure Stack HCI
+## AKS on Azure Stack HCI with Arc resource bridge
+
+To use AKS and Arc resource bridge together on Azure Stack HCI, AKS must be deployed prior to deploying Arc resource bridge. If Arc resource bridge has already been deployed, AKS can't be deployed unless you delete Arc resource bridge first. Once AKS is deployed to Azure Stack HCI, you can deploy Arc resource bridge.
-To use AKS and Arc resource bridge together on Azure Stack HCI, the AKS cluster must be deployed prior to deploying Arc resource bridge. If Arc resource bridge has already been deployed, AKS can't be deployed unless you delete Arc resource bridge first. Once your AKS cluster is deployed to Azure Stack HCI, you can deploy Arc resource bridge.
+When you deploy Arc resource bridge with AKS on Azure Stack HCI (AKS Hybrid), the following configurations must be applied:
-When deploying Arc resource bridge with AKS on Azure Stack HCI (AKS Hybrid), the resource bridge should share the same 'vswitchname' and `ipaddressprefix`, but require different IP addresses for `vippoolstart/end` and `k8snodeippoolstart/end`. Arc resource bridge should be given a unique 'vnetname' that is different from the one used for AKS Hybrid. For full instructions to deploy Arc resource bridge on AKS Hybrid, see [How to install Azure Arc Resource Bridge on Windows Server - AKS hybrid](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server).
+- Arc resource bridge and AKS-HCI should share the same `vswitchname` and be in the same subnet, sharing the same value for the `ipaddressprefix` parameter.
+
+- The IP address prefix (subnet) must contain enough IP addresses for both the Arc resource bridge and AKS-HCI.
+
+- Arc resource bridge should be given a unique `vnetname` that is different from the one used for AKS Hybrid.
+
+- The Arc resource bridge requires different IP addresses for `vippoolstart/end` and `k8snodeippoolstart/end`. These IPs can't be shared between the two.
+
+- Arc resource bridge and AKS-HCI must each have a unique control plane IP.
+
+For instructions to deploy Arc resource bridge on AKS Hybrid, see [How to install Azure Arc Resource Bridge on Windows Server - AKS hybrid](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server).
## Next steps

- Understand [network requirements for Azure Arc resource bridge (preview)](network-requirements.md).
-- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about features and benefits.
-- Learn about [security configuration and considerations for Azure Arc resource bridge (preview)](security-overview.md).
+- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about features and benefits.
+- Learn about [security configuration and considerations for Azure Arc resource bridge (preview)](security-overview.md).
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
To resolve this problem, delete the resource bridge, register the providers, the
### Expired credentials in the appliance VM
-Arc resource bridge consists of an appliance VM that is deployed to the on-premise infrastructure. The appliance VM maintains a connection to the control center of the fabric using locally stored credentials. If these credentials are not updated, the resource bridge is no longer able to communicate with the control center. This may cause problems when trying to upgrade the resource bridge or manage VMs through Azure.
+Arc resource bridge consists of an appliance VM that is deployed to the on-premises infrastructure. The appliance VM maintains a connection to the management endpoint of the on-premises infrastructure using locally stored credentials. If these credentials are not updated, the resource bridge is no longer able to communicate with the management endpoint. This may cause problems when trying to upgrade the resource bridge or manage VMs through Azure.
To fix this, the credentials in the appliance VM need to be updated. For more information, see [Update credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm).
+## Networking issues
+ ### Back-off pulling image error
-When trying to deploy Arc resource bridge, you may see an error that contains `back-off pulling image \\\"url"\\\: FailFastPodCondition`. This error is caused when the appliance VM is unable to reach the URL specified in the error. To resolve this issue, make sure the appliance VM [meets all requirements](system-requirements.md#appliance-vm-requirements), including internet access, and is able to reach the [required allowlist URLs](network-requirements.md).
+When trying to deploy Arc resource bridge, you may see an error that contains `back-off pulling image \\\"url"\\\: FailFastPodCondition`. This error is caused when the appliance VM can't reach the URL specified in the error. To resolve this issue, make sure the appliance VM meets system requirements, including internet access connectivity to [required allowlist URLs](network-requirements.md).
-## Networking issues
+### Not able to connect to URL
+
+If you receive an error that contains `Not able to connect to https://example.url.com`, check with your network administrator to ensure your network allows all of the required firewall and proxy URLs to deploy Arc resource bridge. For more information, see [Azure Arc resource bridge (preview) network requirements](network-requirements.md).
-### Restricted outbound connectivity
+### .local not supported
+
+When trying to set the configuration for Arc resource bridge, you may receive an error message similar to:
+
+`"message": "Post \"https://esx.lab.local/52b-bcbc707ce02c/disk-0.vmdk\": dial tcp: lookup esx.lab.local: no such host"`
+
+This occurs when a `.local` path is provided for a configuration setting, such as proxy, DNS, datastore, or management endpoint (such as vCenter). The Arc resource bridge appliance VM uses Azure Linux OS, which doesn't support `.local` by default. A workaround could be to provide the IP address where applicable.
-If you are experiencing connectivity, check to make sure your network allows all of the firewall and proxy URLs that are required to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. For more information, see [Azure Arc resource bridge (preview) network requirements](network-requirements.md).
### Azure Arc resource bridge is unreachable
To resolve the error, one or more network misconfigurations may need to be addre
1. Appliance VM IP and Control Plane IP must be able to communicate with the management machine and vCenter endpoint (for VMware) or MOC cloud agent endpoint (for HCI). Work with your network administrator to ensure the network is configured to permit this. This may require adding a firewall rule to open port 443 from the Appliance VM IP and Control Plane IP to vCenter or port 65000 and 55000 for Azure Stack HCI MOC cloud agent. Review [network requirements for Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites#network-port-requirements) and [VMware](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md) for Arc resource bridge.
-1. Appliance VM IP and Control Plane IP need internet access to [these required URLs](#restricted-outbound-connectivity). Azure Stack HCI requires [additional URLs](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites). Work with your network administrator to ensure that the IPs can access the required URLs.
+1. Appliance VM IP and Control Plane IP need internet access to [these required URLs](#not-able-to-connect-to-url). Azure Stack HCI requires [additional URLs](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites). Work with your network administrator to ensure that the IPs can access the required URLs.
-1. In a non-proxy environment, the management machine must have external and internal DNS resolution. The management machine must be able to reach a DNS server that can resolve internal names such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server also needs to be able to [resolve external addresses](#restricted-outbound-connectivity), such as Azure URLs and OS image download URLs. Work with your system administrator to ensure that the management machine has internal and external DNS resolution. In a proxy environment, the DNS resolution on the proxy server should resolve internal endpoints and [required external addresses](#restricted-outbound-connectivity).
+1. In a non-proxy environment, the management machine must have external and internal DNS resolution. The management machine must be able to reach a DNS server that can resolve internal names such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server also needs to be able to [resolve external addresses](#not-able-to-connect-to-url), such as Azure URLs and OS image download URLs. Work with your system administrator to ensure that the management machine has internal and external DNS resolution. In a proxy environment, the DNS resolution on the proxy server should resolve internal endpoints and [required external addresses](#not-able-to-connect-to-url).
To test DNS resolution to an internal address from the management machine in a non-proxy scenario, open a command prompt and run `nslookup <vCenter endpoint or HCI MOC cloud agent IP>`. You should receive an answer if the management machine has internal DNS resolution in a non-proxy scenario.
To resolve the error, one or more network misconfigurations may need to be addre
Verify that the DNS server IP used to create the configuration files has internal and external address resolution. If not, [delete the appliance](/cli/azure/arcappliance/delete), recreate the Arc resource bridge configuration files with the correct DNS server settings, and then deploy Arc resource bridge using the new configuration files.
+## Move Arc resource bridge location
+Resource move of Arc resource bridge isn't currently supported. You'll need to delete the Arc resource bridge, then re-deploy it to the desired location.
+
## Azure Arc-enabled VMs on Azure Stack HCI issues

For general help resolving issues related to Azure Arc-enabled VMs on Azure Stack HCI, see [Troubleshoot Azure Arc-enabled virtual machines](/azure-stack/hci/manage/troubleshoot-arc-enabled-vms).
azure-arc Ssh Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md
This operation can take 2-5 minutes to complete. Before moving on, check that t
Create the default endpoint in PowerShell:

```powershell
- az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview --body '{\"properties\": {\"type\": \"default\"}}'
+ az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview --body '{"properties": {"type": "default"}}'
```

Create the default endpoint in Bash: ```bash
azure-cache-for-redis Cache Azure Active Directory For Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-azure-active-directory-for-authentication.md
Because most Azure Cache for Redis clients assume that a password/access key is
### Azure AD Client Workflow
-1. Configure your client application to acquire an Azure AD token for scope `acca5fbb-b7e4-4009-81f1-37e38fd66d78/.default` using the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview).
+1. Configure your client application to acquire an Azure AD token for scope `acca5fbb-b7e4-4009-81f1-37e38fd66d78/.default` using the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview).
<!-- (ADD code snippet) -->
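The comment above marks a placeholder for a snippet. As a rough sketch (not the article's official sample), acquiring such a token with the Python MSAL library's client-credentials flow could look like the following; the tenant ID, client ID, and secret parameters are hypothetical placeholders you supply:

```python
# Sketch (assumption: a confidential client app registered in Azure AD):
# acquire an Azure AD access token for the Azure Cache for Redis scope
# using MSAL's client-credentials flow.
REDIS_SCOPE = "acca5fbb-b7e4-4009-81f1-37e38fd66d78/.default"

def acquire_redis_token(tenant_id: str, client_id: str, client_secret: str) -> str:
    """Return an Azure AD access token to use as the Redis password."""
    import msal  # pip install msal

    app = msal.ConfidentialClientApplication(
        client_id,
        authority=f"https://login.microsoftonline.com/{tenant_id}",
        client_credential=client_secret,
    )
    result = app.acquire_token_for_client(scopes=[REDIS_SCOPE])
    if "access_token" not in result:
        raise RuntimeError(result.get("error_description", "token acquisition failed"))
    return result["access_token"]
```

The returned token is then presented as the password (with the principal's object ID as the username) when authenticating to the cache, and must be refreshed before it expires.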
The library [`Microsoft.Azure.StackExchangeRedis`](https://www.nuget.org/package
This [code sample](https://github.com/Azure/Microsoft.Azure.StackExchangeRedis) demonstrates how to use the `Microsoft.Azure.StackExchangeRedis` NuGet package to connect to your Azure Cache for Redis instance using Azure Active Directory.
-<!--
The following table includes links to code samples, which demonstrate how to connect to your Azure Cache for Redis instance using an Azure AD token. A wide variety of client libraries are included in multiple languages.

| **Client library** | **Language** | **Link to sample code**|
|-|-|-|
-| StackExchange.Redis | C#/.NET | StackExchange.Redis extension as a NuGet package |
+| StackExchange.Redis | C#/.NET | [.NET code sample](https://github.com/Azure/Microsoft.Azure.StackExchangeRedis) |
| Python | Python | [Python code Sample](https://aka.ms/redis/aad/sample-code/python) |
| Jedis | Java | [Jedis code sample](https://aka.ms/redis/aad/sample-code/java-jedis) |
| Lettuce | Java | [Lettuce code sample](https://aka.ms/redis/aad/sample-code/java-lettuce) |
| Redisson | Java | [Redisson code sample](https://aka.ms/redis/aad/sample-code/java-redisson) |
| ioredis | Node.js | [ioredis code sample](https://aka.ms/redis/aad/sample-code/js-ioredis) |
-| Node-redis | Node.js | [noredis code sample](https://aka.ms/redis/aad/sample-code/js-noderedis) |
->
+| Node-redis | Node.js | [node-redis code sample](https://aka.ms/redis/aad/sample-code/js-noderedis) |
### Best practices for Azure AD authentication
azure-cache-for-redis Cache Best Practices Enterprise Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-enterprise-tiers.md
The _Enterprise clustering policy_ is a simpler configuration that utilizes a si
Because the Enterprise tiers use a clustered configuration, you might see `CROSSSLOT` exceptions on commands that operate on multiple keys. Behavior varies depending on the clustering policy used. If you use the OSS clustering policy, multi-key commands require all keys to be mapped to [the same hash slot](https://docs.redis.com/latest/rs/databases/configure/oss-cluster-api/#multi-key-command-support).
-You might also see `CROSSSLOT` errors with Enterprise clustering policy. Only the following multi-key commands are allowed across slots with Enterprise clustering: `DEL`, `MSET`, `MGET`, `EXISTS`, `UNLINK`, and `TOUCH`. For more information, see [Database clustering](https://docs.redis.com/latest/rs/databases/durability-ha/clustering/#multikey-operations).
+You might also see `CROSSSLOT` errors with Enterprise clustering policy. Only the following multi-key commands are allowed across slots with Enterprise clustering: `DEL`, `MSET`, `MGET`, `EXISTS`, `UNLINK`, and `TOUCH`.
+
+In Active-Active databases, multi-key write commands (`DEL`, `MSET`, `UNLINK`) can only be run on keys that are in the same slot. However, the following multi-key commands are allowed across slots in Active-Active databases: `MGET`, `EXISTS`, and `TOUCH`. For more information, see [Database clustering](https://docs.redis.com/latest/rs/databases/durability-ha/clustering/#multikey-operations).
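The common way to keep multi-key commands working is to use hash tags, so the affected keys land in the same slot. The slot assignment can be reproduced with the standard Redis key-hashing algorithm (CRC16/XMODEM modulo 16384); this is a generic sketch of Redis cluster behavior, not Azure-specific code:

```python
# Sketch: keys that share a "{...}" hash tag map to the same hash slot, so
# multi-key commands such as DEL or MSET on them avoid CROSSSLOT errors.
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of 16384 slots; only the first {...} section is hashed."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # ignore empty tags like "{}"
            key = key[start + 1 : end]
    return crc16(key.encode()) % 16384

# Both keys hash only on "user:1", so they land in the same slot:
print(hash_slot("{user:1}:name") == hash_slot("{user:1}:email"))  # True
```

Without the shared tag, `{user:1}:name` and `user:2:name` would typically map to different slots, and a `MSET` covering both would fail with `CROSSSLOT` under the OSS clustering policy.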
## Handling Region Down Scenarios with Active Geo-Replication
Many customers want to use persistence to take periodic backups of the data on t
- [Development](cache-best-practices-development.md)
azure-cache-for-redis Cache Best Practices Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-performance.md
The following tables show the GET requests per second at different capacities, u
| Instance | Capacity 2 | Capacity 4 | Capacity 6 |
|:---:|---:|---:|---:|
-| E10 | 1,00,000 | 1,900,000 | 2,500,000 |
+| E10 | 1,000,000 | 1,900,000 | 2,500,000 |
| E20 | 900,000 | 1,700,000 | 2,300,000 |
| E50 | 1,700,000 | 3,000,000 | 3,900,000 |
| E100 | 2,500,000 | 4,400,000 | 4,900,000|
azure-cache-for-redis Cache How To Premium Clustering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-clustering.md
The following list contains answers to commonly asked questions about Azure Cach
* [Can I configure clustering for a basic or standard cache?](#can-i-configure-clustering-for-a-basic-or-standard-cache) * [Can I use clustering with the Redis ASP.NET Session State and Output Caching providers?](#can-i-use-clustering-with-the-redis-aspnet-session-state-and-output-caching-providers) * [I'm getting MOVE exceptions when using StackExchange.Redis and clustering, what should I do?](#im-getting-move-exceptions-when-using-stackexchangeredis-and-clustering-what-should-i-do)
-* [Does scaling out using clustering help to increase the number of supported client connections?](#Does scaling out using clustering help to increase the number of supported client connections?)
+* [Does scaling out using clustering help to increase the number of supported client connections?](#does-scaling-out-using-clustering-help-to-increase-the-number-of-supported-client-connections)
### Do I need to make any changes to my client application to use clustering?
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Persistence features are intended to be used to restore data to the same cache a
- RDB/AOF persisted data files can't be imported to a new cache. Use the [Import/Export](cache-how-to-import-export-data.md) feature instead. - Persistence isn't supported with caches using [passive geo-replication](cache-how-to-geo-replication.md) or [active geo-replication](cache-how-to-active-geo-replication.md).-- On the _Premium_ tier, AOF persistence isn't supported with [multiple replicas](cache-how-to-multi-replicas.md).
+- On the _Premium_ tier, AOF persistence isn't supported with [multiple replicas](cache-how-to-multi-replicas.md).
- On the _Premium_ tier, data must be persisted to a storage account in the same region as the cache instance. - On the _Premium_ tier, storage accounts in different subscriptions can be used to persist data if [managed identity](cache-managed-identity.md) is used to connect to the storage account.
On the **Premium** tier, data is persisted directly to an [Azure Storage](../sto
> If you are using persistence on the Premium tier, check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes very high storage costs. For more information, see [should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete). >
-On the **Enterprise** and **Enterprise Flash** tiers, data is persisted to a managed disk attached directly to the cache instance. The location isn't configurable nor accessible to the user. Using a managed disk increases the performance of persistence. The disk is encrypted using Microsoft managed keys (MMK) by default, but customer managed keys (CMK) can also be used. For more information, see [managing data encryption](#managing-data-encryption).
+On the **Enterprise** and **Enterprise Flash** tiers, data is persisted to a managed disk attached directly to the cache instance. The location isn't configurable nor accessible to the user. Using a managed disk increases the performance of persistence. The disk is encrypted using Microsoft managed keys (MMK) by default, but customer managed keys (CMK) can also be used. For more information, see [managing data encryption](#managing-data-encryption).
## How to set up data persistence using the Azure portal
On the **Enterprise** and **Enterprise Flash** tiers, data is persisted to a man
| Setting | Suggested value | Description | | | - | -- |
- | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contain only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. The *host name* for your cache instance's is `\<DNS name>.redis.cache.windows.net`. |
+ | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. The _host name_ for your cache instance is `<DNS name>.redis.cache.windows.net`. |
| **Subscription** | Drop-down and select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. | | **Resource group** | Drop-down and select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. | | **Location** | Drop-down and select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that use your cache. |
On the **Enterprise** and **Enterprise Flash** tiers, data is persisted to a man
| Setting | Suggested value | Description | | | - | -- |
- | **Authentication Method** | Drop-down and select an authentication method. Choices are **Managed Identity** or **Storage Key**| Choose your prefered authentication method. Using [managed identity](cache-managed-identity.md) allows you to use a storage account in a different subscription than the one in which your cache is located. |
- | **Subscription** | Drop-down and select an subscription. | You can choose a storage account in a different subscription if you are using managed identity as the authentication method. |
+ | **Authentication Method** | Drop-down and select an authentication method. Choices are **Managed Identity** or **Storage Key**| Choose your preferred authentication method. Using [managed identity](cache-managed-identity.md) allows you to use a storage account in a different subscription than the one in which your cache is located. |
+ | **Subscription** | Drop-down and select a subscription. | You can choose a storage account in a different subscription if you're using managed identity as the authentication method. |
| **Backup Frequency** | Drop-down and select a backup interval. Choices include **15 Minutes**, **30 minutes**, **60 minutes**, **6 hours**, **12 hours**, and **24 hours**. | This interval starts counting down after the previous backup operation successfully completes. When it elapses, a new backup starts. | | **Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, we _strongly_ recommend that you disable the soft delete feature on the storage account as it leads to increased storage costs. For more information, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). | | **Storage Key** | Drop-down and choose either the **Primary key** or **Secondary key** to use. | If the storage key for your persistence account is regenerated, you must reconfigure the key from the **Storage Key** drop-down. |
On the **Enterprise** and **Enterprise Flash** tiers, data is persisted to a man
The first backup starts once the backup frequency interval elapses. > [!NOTE]
- > When RDB files are backed up to storage, they are stored in the form of page blobs. If you're using a storage account with HNS enabled, persistence will tend to fail because page blobs aren't supported in storage accounts with HNS enabled (ADLS Gen2).
+ > When RDB files are backed up to storage, they are stored in the form of page blobs. If you're using a storage account with HNS enabled, persistence will tend to fail because page blobs aren't supported in storage accounts with HNS enabled (ADLS Gen2).
9. To enable AOF persistence, select **AOF** and configure the settings. | Setting | Suggested value | Description | | | - | -- |
- | **Authentication Method** | Drop-down and select an authentication method. Choices are **Managed Identity** or **Storage Key**| Choose your prefered authentication method. Using [managed identity](cache-managed-identity.md) allows you to use a storage account in a different subscription than the one in which your cache is located. |
- | **Subscription** | Drop-down and select an subscription. | You can choose a storage account in a different subscription if you are using managed identity as the authentication method. |
+ | **Authentication Method** | Drop-down and select an authentication method. Choices are **Managed Identity** or **Storage Key**| Choose your preferred authentication method. Using [managed identity](cache-managed-identity.md) allows you to use a storage account in a different subscription than the one in which your cache is located. |
+ | **Subscription** | Drop-down and select a subscription. | You can choose a storage account in a different subscription if you're using managed identity as the authentication method. |
| **First Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, we _strongly_ recommend that you disable the soft delete feature on the storage account as it leads to increased storage costs. For more information, see [Pricing and billing](/azure/storage/blobs/soft-delete-blob-overview). | | **First Storage Key** | Drop-down and choose either the **Primary key** or **Secondary key** to use. | If the storage key for your persistence account is regenerated, you must reconfigure the key from the **Storage Key** drop-down. | | **Second Storage Account** | (Optional) Drop-down and select your secondary storage account. | You can optionally configure another storage account. If a second storage account is configured, the writes to the replica cache are written to this second storage account. |
It takes a while for the cache to create. You can monitor progress on the Azure
| Setting | Suggested value | Description | | | - | -- | | **Backup Frequency** | Drop-down and select a backup interval. Choices include **Write every second** and **Always write**. | The _Always write_ option appends new entries to the AOF file after every write to the cache. This choice offers the best durability but does lower cache performance. |
-
-1. Finish creating the cache by following the rest of the instructions in the [Enterprise tier quickstart guide](quickstart-create-redis-enterprise.md).
+
+1. Finish creating the cache by following the rest of the instructions in the [Enterprise tier quickstart guide](quickstart-create-redis-enterprise.md).
> [!NOTE] > You can add persistence to a previously created Enterprise tier cache at any time by navigating to the **Advanced settings** in the Resource menu.
The [New-AzRedisCache](/powershell/module/az.rediscache/new-azrediscache) comman
Existing caches can be updated using the [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache) command. See examples of [adding persistence to an existing cache](/powershell/module/az.rediscache/set-azrediscache#example-3-modify-azure-cache-for-redis-if-you-want-to-add-data-persistence-after-azure-redis-cache-created). - ### [Using PowerShell (Enterprise tier)](#tab/enterprise) The [New-AzRedisEnterpriseCache](/powershell/module/az.redisenterprisecache/new-azredisenterprisecache) command can be used to create a new Enterprise-tier cache using data persistence. Use the `RdbPersistenceEnabled`, `RdbPersistenceFrequency`, `AofPersistenceEnabled`, and `AofPersistenceFrequency` parameters to configure the persistence setup. This example creates a new E10 Enterprise tier cache using RDB persistence with one hour frequency:
az redisenterprise database update --cluster-name "cache1" --resource-group "rg1
## Managing data encryption
-Because Redis persistence creates data at rest, encrypting this data is an important concern for many users. Encryption options vary based on the tier of Azure Cache for Redis being used.
-With the **Premium** tier, data is streamed directly from the cache instance to Azure Storage when persistence is initiated. Various encryption methods can be used with Azure Storage, including Microsoft-managed keys, customer-managed keys, and customer-provided keys. For information on encryption methods, see [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md).
+Because Redis persistence creates data at rest, encrypting this data is an important concern for many users. Encryption options vary based on the tier of Azure Cache for Redis being used.
+
+With the **Premium** tier, data is streamed directly from the cache instance to Azure Storage when persistence is initiated. Various encryption methods can be used with Azure Storage, including Microsoft-managed keys, customer-managed keys, and customer-provided keys. For information on encryption methods, see [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md).
With the **Enterprise** and **Enterprise Flash** tiers, data is stored on a managed disk mounted to the cache instance. By default, the disk holding the persistence data, and the OS disk are encrypted using Microsoft-managed keys. A customer-managed key (CMK) can also be used to control data encryption. See [Encryption on Enterprise tier caches](cache-how-to-encryption.md) for instructions.
For more information on performance when using AOF persistence, see [Does AOF pe
AOF persistence does affect throughput. AOF runs on both the primary and replica processes, so you see higher CPU and Server Load for a cache with AOF persistence than an identical cache without AOF persistence. AOF offers the best consistency with the data in memory because each write and delete is persisted with only a few seconds of delay. The trade-off is that AOF is more compute intensive.
-As long as CPU and Server Load are both less than 90%, there is a penalty on throughput, but the cache operates normally, otherwise. Above 90% CPU and Server Load, the throughput penalty can get much higher, and the latency of all commands processed by the cache increases. This is because AOF persistence runs on both the primary and replica process, increasing the load on the node in use, and putting persistence on the critical path of data.
+As long as CPU and Server Load are both less than 90%, there's a penalty on throughput, but the cache otherwise operates normally. Above 90% CPU and Server Load, the throughput penalty can get much higher, and the latency of all commands processed by the cache increases. Latency increases because AOF persistence runs on both the primary and replica processes, increasing the load on the node in use, and putting persistence on the critical path of data.
### What happens if I've scaled to a different size and a backup is restored that was made before the scaling operation?
For both RDB and AOF persistence:
### Can I use the same storage account for persistence across two different caches?
-Yes, you can use the same storage account for persistence across two different caches. The [limitations on subscriptions and regions](#prerequisites-and-limitations) will still apply.
+Yes, you can use the same storage account for persistence across two different caches. The [limitations on subscriptions and regions](#prerequisites-and-limitations) still apply.
### Will I be charged for the storage being used in data persistence?
Yes, you can use the same storage account for persistence across two different c
### How frequently does RDB and AOF persistence write to my blobs, and should I enable soft delete?
-We recommend that you avoid enabling soft delete on storage accounts when used with Azure Cache for Redis data persistence with the Premium tier. RDB and AOF persistence can write to your blobs as frequently as every hour, every few minutes, or every second. Also, enabling soft delete on a storage account means Azure Cache for Redis can't minimize storage costs by deleting the old backup data.
+We recommend that you avoid enabling soft delete on storage accounts when used with Azure Cache for Redis data persistence with the Premium tier. RDB and AOF persistence can write to your blobs as frequently as every hour, every few minutes, or every second. Also, enabling soft delete on a storage account means Azure Cache for Redis can't minimize storage costs by deleting the old backup data.
Soft delete quickly becomes expensive with the typical data sizes of a cache that also performs write operations every second. For more information on soft delete costs, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). ### Can I change the RDB backup frequency after I create the cache?
-Yes, you can change the backup frequency for RDB persistence using the Azure portal, CLI, or PowerShell.
+Yes, you can change the backup frequency for RDB persistence using the Azure portal, CLI, or PowerShell.
### Why is there more than 60 minutes between backups when I have an RDB backup frequency of 60 minutes?
All RDB persistence backups, except for the most recent one, are automatically d
Use a second storage account for AOF persistence when you have a higher than expected rate of set operations on the cache. Setting up the secondary storage account helps ensure your cache doesn't reach storage bandwidth limits. This option is only available for Premium tier caches. - ### How can I remove the second storage account? You can remove the AOF persistence secondary storage account by setting the second storage account to be the same as the first storage account. For existing caches, access **Data persistence** from the **Resource menu** for your cache. To disable AOF persistence, select **Disabled**.
For more information on scaling, see [What happens if I've scaled to a different
### How is my AOF data organized in storage?
-When you use the Premium tier, data stored in AOF files is divided into multiple page blobs per node to increase performance of saving the data to storage. The following table displays how many page blobs are used for each pricing tier:
+When you use the Premium tier, data stored in AOF files is divided into multiple page blobs per shard. By default, half of the blobs are saved in the primary storage account and half are saved in the secondary storage account. Splitting the data across multiple page blobs and two different storage accounts increases performance.
+
+If the peak rate of writes to the cache isn't very high, then this extra performance might not be needed. In that case, the secondary storage account configuration can be removed. All of the AOF files are instead stored in just the single primary storage account. The following table displays how many total page blobs are used for each pricing tier:
| Premium tier | Blobs |
-|--|-|
-| P1 | 4 per shard |
-| P2 | 8 per shard |
-| P3 | 16 per shard |
-| P4 | 20 per shard |
+|:---:|:---|
+| P1 | 8 per shard |
+| P2 | 16 per shard |
+| P3 | 32 per shard |
+| P4 | 40 per shard |
-When clustering is enabled, each shard in the cache has its own set of page blobs, as indicated in the previous table. For example, a P2 cache with three shards distributes its AOF file across 24 page blobs (eight blobs per shard, with three shards).
+When clustering is enabled, each shard in the cache has its own set of page blobs, as indicated in the previous table. For example, a P2 cache with three shards distributes its AOF file across 48 page blobs: 16 blobs per shard, with three shards.
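The blob-count arithmetic above can be sketched with the per-shard values from the table (a minimal illustration; the values assume the default two-storage-account configuration):

```python
# Page blobs used per shard for AOF persistence, per Premium tier.
# Values mirror the table above (default secondary storage account in use).
BLOBS_PER_SHARD = {"P1": 8, "P2": 16, "P3": 32, "P4": 40}

def total_aof_page_blobs(tier: str, shard_count: int = 1) -> int:
    """Total page blobs for a clustered cache: per-shard blobs times shards."""
    return BLOBS_PER_SHARD[tier] * shard_count

print(total_aof_page_blobs("P2", shard_count=3))  # 48
```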
After a rewrite, two sets of AOF files exist in storage. Rewrites occur in the background and append to the first set of files. Set operations, sent to the cache during the rewrite, append to the second set. A backup is temporarily stored during rewrites if there's a failure. The backup is promptly deleted after a rewrite finishes. If soft delete is turned on for your storage account, the soft delete setting applies and existing backups continue to stay in the soft delete state. ### Will having firewall exceptions on the storage account affect persistence?
-Using managed identity adds the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to carry out. If you aren't using managed identity and instead authorizing to a storage account using a key, then having firewall exceptions on the storage account tends to break the persistence process. This only applies to persistence in the Premium tier.
-
+Using managed identity adds the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to carry out. If you aren't using managed identity and instead authorizing to a storage account using a key, then having firewall exceptions on the storage account tends to break the persistence process. This only applies to persistence in the Premium tier.
### Can I have AOF persistence enabled if I have more than one replica?
With the Premium tier, you can't use Append-only File (AOF) persistence with mul
### How do I check if soft delete is enabled on my storage account?
-Select the storage account that your cache is using for persistence. Select **Data Protection** from the Resource menu. In the working pane, check the state of *Enable soft delete for blobs*. For more information on soft delete in Azure storage accounts, see [Enable soft delete for blobs](/azure/storage/blobs/soft-delete-blob-enable?tabs=azure-portal).
+Select the storage account that your cache is using for persistence. Select **Data Protection** from the Resource menu. In the working pane, check the state of _Enable soft delete for blobs_. For more information on soft delete in Azure storage accounts, see [Enable soft delete for blobs](/azure/storage/blobs/soft-delete-blob-enable?tabs=azure-portal).
## Next steps
azure-cache-for-redis Cache Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-network-isolation.md
Previously updated : 03/10/2023 Last updated : 06/21/2023
Azure Private Link provides private connectivity from a virtual network to Azure
### Advantages of Private Link - Supported on Basic, Standard, Premium, Enterprise, and Enterprise Flash tiers of Azure Cache for Redis instances.-- By using [Azure Private Link](../private-link/private-link-overview.md), you can connect to an Azure Cache instance from your virtual network via a private endpoint. The endpoint is assigned a private IP address in a subnet within the virtual network. With this private link, cache instances are available from both within the VNet and publicly. -- Once a private endpoint is created, access to the public network can be restricted through the `publicNetworkAccess` flag. This flag is set to `Disabled` by default, which will only allow private link access. You can set the value to `Enabled` or `Disabled` with a PATCH request. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md).
+- By using [Azure Private Link](../private-link/private-link-overview.md), you can connect to an Azure Cache instance from your virtual network via a private endpoint. The endpoint is assigned a private IP address in a subnet within the virtual network. With this private link, cache instances are available from both within the VNet and publicly.
+ > [!IMPORTANT]
+ > Enterprise/Enterprise Flash caches with private link cannot be accessed publicly.
+- Once a private endpoint is created on Basic/Standard/Premium tier caches, access to the public network can be restricted through the `publicNetworkAccess` flag. This flag is set to `Disabled` by default, which will only allow private link access. You can set the value to `Enabled` or `Disabled` with a PATCH request. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md).
+ > [!IMPORTANT]
+ > Enterprise/Enterprise Flash tier does not support the `publicNetworkAccess` flag.
- All external cache dependencies won't affect the VNet's NSG rules. ### Limitations of Private Link
VNet is the fundamental building block for your private network in Azure. VNet e
- VNet injected caches are only available for Premium Azure Cache for Redis. - When using a VNet injected cache, you must configure your VNet to allow access to cache dependencies such as CRLs/PKI, AKV, Azure Storage, Azure Monitor, and more.-- You can't inject an Azure Cache for Redis instance into a Virtual Network. You can only select this option when you _create_ the cache.
+- You can't inject an Azure Cache for Redis instance into a Virtual Network. You can only select this option when you _create_ the cache.
## Azure Firewall rules
azure-functions Event Driven Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/event-driven-scaling.md
Title: Event-driven scaling in Azure Functions description: Explains the scaling behaviors of Consumption plan and Premium plan function apps. Previously updated : 04/04/2023 Last updated : 06/16/2023
Scaling can vary based on several factors, and apps scale differently based on t
## Limit scale-out
-You may wish to restrict the maximum number of instances an app used to scale out. This is most common for cases where a downstream component like a database has limited throughput. By default, Consumption plan functions scale out to as many as 200 instances, and Premium plan functions will scale out to as many as 100 instances. You can specify a lower maximum for a specific app by modifying the `functionAppScaleLimit` value. The `functionAppScaleLimit` can be set to `0` or `null` for unrestricted, or a valid value between `1` and the app maximum.
+You may wish to restrict the maximum number of instances an app uses to scale out. This restriction is most common when a downstream component like a database has limited throughput. By default, Consumption plan functions scale out to as many as 200 instances, and Premium plan functions scale out to as many as 100 instances. You can specify a lower maximum for a specific app by modifying the `functionAppScaleLimit` value. The `functionAppScaleLimit` can be set to `0` or `null` for unrestricted, or a valid value between `1` and the app maximum.
+
+# [Azure CLI](#tab/azure-cli)
```azurecli az resource update --resource-type Microsoft.Web/sites -g <RESOURCE_GROUP> -n <FUNCTION_APP-NAME>/config/web --set properties.functionAppScaleLimit=<SCALE_LIMIT> ```
+# [Azure PowerShell](#tab/azure-powershell)
+ ```azurepowershell $resource = Get-AzResource -ResourceType Microsoft.Web/sites -ResourceGroupName <RESOURCE_GROUP> -Name <FUNCTION_APP-NAME>/config/web $resource.Properties.functionAppScaleLimit = <SCALE_LIMIT> $resource | Set-AzResource -Force ``` ++ ## Scale-in behaviors Event-driven scaling automatically reduces capacity when demand for your functions is reduced. It does this by shutting down worker instances of your function app. Before an instance is shut down, new events stop being sent to the instance. Also, functions that are currently executing are given time to finish executing. This behavior is logged as drain mode. This shut-down period can extend up to 10 minutes for Consumption plan apps and up to 60 minutes for Premium plan apps. Event-driven scaling and this behavior don't apply to Dedicated plan apps.
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
namespace AzureSQL.ToDo
# [C# Script](#tab/csharp-script)
-More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharpscript).
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx).
This section contains the following examples:
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
namespace AzureSQL.ToDo
} ``` --- # [C# Script](#tab/csharp-script) -
-More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharpscript).
+More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx).
This section contains the following examples:
azure-functions Functions Target Based Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-target-based-scaling.md
Title: Target-based scaling in Azure Functions description: Explains target-based scaling behaviors of Consumption plan and Premium plan function apps. Previously updated : 04/04/2023 Last updated : 06/16/2023
Target-based scaling provides a fast and intuitive scaling model for customers a
- Event Hubs - Azure Cosmos DB
-Target-based scaling replaces the previous Azure Functions incremental scaling model as the default for these extension types. Incremental scaling added or removed a maximum of one worker at [each new instance rate](event-driven-scaling.md#understanding-scaling-behaviors), with complex decisions for when to scale. In contrast, target-based scaling allows scale up of four instances at a time, and the scaling decision is based on a simple target-based equation:
+Target-based scaling replaces the previous Azure Functions incremental scaling model as the default for these extension types. Incremental scaling added or removed a maximum of one worker at [each new instance rate](event-driven-scaling.md#understanding-scaling-behaviors), with complex decisions for when to scale. In contrast, target-based scaling allows scaling out by up to four instances at a time, and the scaling decision is based on a simple target-based equation:
![Illustration of the equation: desired instances = event source length / target executions per instance.](./media/functions-target-based-scaling/target-based-scaling-formula.png) The default _target executions per instance_ values come from the SDKs used by the Azure Functions extensions. You don't need to make any changes for target-based scaling to work.
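As a rough illustration, the target-based equation combined with the four-instances-at-a-time scale-out step can be modeled as follows. This is a simplified sketch, not the actual Azure scale controller logic:

```python
import math

def desired_instances(event_source_length: int,
                      target_executions_per_instance: int,
                      current_instances: int,
                      max_scale_out_step: int = 4) -> int:
    # Target-based equation:
    # desired = ceil(event source length / target executions per instance)
    desired = math.ceil(event_source_length / target_executions_per_instance)
    # Scale out is limited to four instances per scaling decision.
    if desired > current_instances:
        desired = min(desired, current_instances + max_scale_out_step)
    return desired

# A queue of 1,000 messages with a target of 100 executions per instance
# wants 10 instances, but a 2-instance app scales out to at most 6 this round.
print(desired_instances(1000, 100, current_instances=2))  # 6
```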
-> [!NOTE]
-> To determine the change in _desired instances_ if multiple functions in the same function app are voting to scale out, a sum across them is used to determine the change in desired instances. Scale out requests override scale in. If there are no scale out request but there are scale in requests, then the max scale in value is used. In order to achieve the most accurate scaling based on metrics, we recommend one target-based triggered function per function app.
+## Considerations
-## Prerequisites
+The following considerations apply when using target-based scaling:
-Target-based scaling is supported for the [Consumption](consumption-plan.md) and [Premium](functions-premium-plan.md) plans. Your function app runtime must be 4.3.0 or higher.
++ Target-based scaling is enabled by default for function apps on the Consumption plan or for Premium plans, but you can [opt-out](#opting-out). Event-driven scaling isn't supported when running on Dedicated (App Service) plans. ++ Your [function app runtime version](set-runtime-version.md) must be 4.3.0 or a later version.++ When using target-based scaling, the `functionAppScaleLimit` site setting is still honored. For more information, see [Limit scale out](event-driven-scaling.md#limit-scale-out).++ To achieve the most accurate scaling based on metrics, use only one target-based triggered function per function app.++ When multiple functions in the same function app are all requesting to scale out at the same time, a sum across those functions is used to determine the change in desired instances. Functions requesting to scale-out override functions requesting to scale-in.++ When there are scale-in requests without any scale-out requests, the max scale in value is used. ## Opting out
-Target-based scaling is enabled by default for function apps on the Consumption plan or Premium plans without runtime scale monitoring. If you wish to disable target-based scaling and revert to incremental scaling, add the following app setting to your function app:
+Target-based scaling is enabled by default for function apps hosted on a Consumption plan or on a Premium plan. To disable target-based scaling and fall back to incremental scaling, add the following app setting to your function app:
| App Setting | Value |
| -- | -- |
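For example, as an app-settings JSON fragment (a sketch; the `TARGET_BASED_SCALING_ENABLED` setting name and `0` value are assumptions based on the Azure Functions event-driven scaling documentation):

```json
[
  {
    "name": "TARGET_BASED_SCALING_ENABLED",
    "value": "0",
    "slotSetting": false
  }
]
```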
Examples for the Python v2 programming model and the JavaScript v4 programming m
To learn more, see the following articles:

+ [Improve the performance and reliability of Azure Functions](./performance-reliability.md)
-+ [Azure Functions reliable event processing](./functions-reliable-event-processing.md)
++ [Azure Functions reliable event processing](./functions-reliable-event-processing.md)
azure-functions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/deploy.md
In an environment that includes two or more components on multiple Azure Resourc
:::image type="content" source="media/deploy/schedule-recurrence-property.png" alt-text="Configure the recurrence frequency for logic app":::
-1. In the designer pane, select **Function-Try** to configure the target settings. In the request body, if you want to manage VMs across all resource groups in the subscription, modify the request body as shown in the following example.
+1. In the designer pane, select **Function-Try** to configure the target settings and then select the **</> Code view** button in the top menu to edit the code for the **Function-Try** element. In the request body, if you want to manage VMs across all resource groups in the subscription, modify the request body as shown in the following example.
```json
{
azure-maps Drawing Conversion Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-conversion-error-codes.md
Title: Azure Maps Drawing Conversion errors and warnings+ description: Learn about the Conversion errors and warnings you may meet while you're using the Azure Maps Conversion service. Read the recommendations on how to resolve the errors and the warnings, with some examples.--++ Last updated 05/21/2021
azure-maps Drawing Error Visualizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-error-visualizer.md
Title: Use Azure Maps Drawing Error Visualizer+ description: This article demonstrates how to visualize warnings and errors returned by the Creator Conversion API.--++ Last updated 02/17/2023
azure-maps How To Manage Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-creator.md
Title: Manage Microsoft Azure Maps Creator+ description: This article demonstrates how to manage Microsoft Azure Maps Creator.--++ Last updated 01/20/2022
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
Title: Use the Azure Maps Indoor Maps module with Microsoft Creator services with custom styles (preview)+ description: Learn how to use the Microsoft Azure Maps Indoor Maps module to render maps by embedding the module's JavaScript libraries.--++ Last updated 09/23/2022
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md
Title: Implement dynamic styling for Azure Maps Creator indoor maps+ description: Learn how to Implement dynamic styling for Creator indoor maps
azure-maps Render Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/render-coverage.md
The render coverage tables below list the countries/regions that support Azure M
| Brunei | ✓ |
| Cambodia | ✓ |
| Guam | ✓ |
-| Hong Kong | ✓ |
+| Hong Kong Special Administrative Region | ✓ |
| India | ✓ |
| Indonesia | ✓ |
| Laos | ✓ |
-| Macao | ✓ |
+| Macao Special Administrative Region | ✓ |
| Malaysia | ✓ |
| Myanmar | ✓ |
| New Zealand | ✓ |
azure-maps Schema Stateset Stylesobject https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/schema-stateset-stylesobject.md
Title: StylesObject Schema reference guide for Dynamic Azure Maps description: Reference guide to the dynamic Azure Maps StylesObject schema and syntax.--++ Last updated 02/17/2023 - # StylesObject Schema reference guide for dynamic Maps
A `StyleObject` is one of the following style rules:
- * [`BooleanTypeStyleRule`](#booleantypestylerule)
- * [`NumericTypeStyleRule`](#numerictypestylerule)
- * [`StringTypeStyleRule`](#stringtypestylerule)
+* [`BooleanTypeStyleRule`](#booleantypestylerule)
+* [`NumericTypeStyleRule`](#numerictypestylerule)
+* [`StringTypeStyleRule`](#stringtypestylerule)
The JSON below shows example usage of each of the three style types. The `BooleanTypeStyleRule` is used to determine the dynamic style for features whose `occupied` property is true and false. The `NumericTypeStyleRule` is used to determine the style for features whose `temperature` property falls within a certain range. Finally, the `StringTypeStyleRule` is used to match specific styles to `meetingType`.

```json
"styles": [
  {
azure-maps Set Drawing Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-drawing-options.md
Title: Drawing tools module | Microsoft Azure Maps description: In this article, you'll learn how to set drawing options data using the Microsoft Azure Maps Web SDK-- Previously updated : 01/29/2020-++ Last updated : 06/15/2023+ - # Use the drawing tools module
Once the drawing tools module is loaded in your application, you can enable draw
### Set the drawing mode
-The following code creates an instance of the drawing manager and sets the drawing **mode** option.
+The following code creates an instance of the drawing manager and sets the drawing **mode** option.
```javascript
//Create an instance of the drawing manager and set drawing mode.
drawingManager = new atlas.drawing.DrawingManager(map,{
});
```
-The code below is a complete running example of how to set a drawing mode of the drawing manager. Click the map to start drawing a polygon.
+The following image is an example of the drawing mode of the `DrawingManager`. Select any place on the map to start drawing a polygon.
-<br/>
+<!--
<iframe height="500" scrolling="no" title="Draw a polygon" src="//codepen.io/azuremaps/embed/YzKVKRa/?height=265&theme-id=0&default-tab=js,result&editable=true" frameborder="no" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/YzKVKRa/'>Draw a polygon</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
-
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
### Set the interaction type
drawingManager = new atlas.drawing.DrawingManager(map,{
});
```
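As a sketch, the options used here can be built as a plain object (the `mode` and `interactionType` names assume the drawing tools module API, where `interactionType` may be `'click'`, `'freehand'`, or `'hybrid'`):

```javascript
// Sketch of a drawing manager options object for free-hand polygon drawing.
function freehandPolygonOptions() {
  return {
    mode: 'draw-polygon',        // start drawing a polygon immediately
    interactionType: 'freehand'  // draw while holding the mouse button down
  };
}

// Usage with the Web SDK (assumes the map and drawing tools module are loaded):
// drawingManager = new atlas.drawing.DrawingManager(map, freehandPolygonOptions());

console.log(freehandPolygonOptions());
```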
+<!--
This code sample implements the functionality of drawing a polygon on the map. Just hold down the left mouse button and dragging it around, freely. <br/> <iframe height="500" scrolling="no" title="Free-hand drawing" src="//codepen.io/azuremaps/embed/ZEzKoaj/?height=265&theme-id=0&default-tab=js,result&editable=true" frameborder="no" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/ZEzKoaj/'>Free-hand drawing</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
-
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
### Customizing drawing options
-The previous examples demonstrated how to customize drawing options while instantiating the Drawing Manager. You can also set the Drawing Manager options by using the `drawingManager.setOptions()` function. Below is a tool to test out customization of all options for the drawing manager using the setOptions function.
+The previous examples demonstrated how to customize drawing options while instantiating the Drawing Manager. You can also set the Drawing Manager options by using the `drawingManager.setOptions()` function.
-<br/>
+The [Drawing manager options] sample can be used to test out customization of all options for the drawing manager using the `setOptions` function.
-<iframe height="685" title="Customize drawing manager" src="//codepen.io/azuremaps/embed/LYPyrxR/?height=600&theme-id=0&default-tab=result" frameborder="no" allowtransparency="true" allowfullscreen="true">See the Pen <a href='https://codepen.io/azuremaps/pen/LYPyrxR/'>Get shape data</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+<!--
+<iframe height="685" title="Customize drawing manager" src="//codepen.io/azuremaps/embed/LYPyrxR/?height=600&theme-id=0&default-tab=result" frameborder="no" allowtransparency="true" allowfullscreen="true">See the Pen <a href='https://codepen.io/azuremaps/pen/LYPyrxR/'>Get shape data</a> by Azure Maps
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
### Put a shape into edit mode
Learn more about the classes and methods used in this article:
> [!div class="nextstepaction"]
> [Drawing toolbar](/javascript/api/azure-maps-drawing-tools/atlas.control.drawingtoolbar)
+
+[Drawing manager options]: https://samples.azuremaps.com/drawing-tools-module/drawing-manager-options
azure-maps Spatial Io Add Ogc Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-ogc-map-layer.md
Title: Add an Open Geospatial Consortium (OGC) map layer | Microsoft Azure Maps
+ Title: Add an Open Geospatial Consortium (OGC) map layer
+ description: Learn how to overlay an OGC map layer on the map, and how to use the different options in the OgcMapLayer class.-- Previously updated : 03/02/2020-++ Last updated : 06/16/2023+
The following sections outline the web map service features that are supported b
- Supported versions: `1.0.0`, `1.1.0`, `1.1.1`, and `1.3.0`
- The service must support the `EPSG:3857` projection system, or handle reprojections.
-- GetFeatureInfo requires the service to support `EPSG:4326` or handle reprojections.
+- GetFeatureInfo requires the service to support `EPSG:4326` or handle reprojections.
- Supported operations:

| Operation | Description |
The following sections outline the web map service features that are supported b
The `url` can be the base URL for the service or a full URL with the query for getting the capabilities of the service. Depending on the details provided, the WFS client may try several standard URL formats to determine how to initially access the service.
-The following code shows how to overlay an OGC map layer on the map.
+The [OGC map layer] sample shows how to overlay an OGC map layer on the map.
-<br/>
-
-<iframe height='700' scrolling='no' title='OGC Map layer example' src='//codepen.io/azuremaps/embed/xxGLZWB/?height=700&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/xxGLZWB/'>OGC Map layer example</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+<!--
+<iframe height='700' scrolling='no' title='OGC Map layer example' src='//codepen.io/azuremaps/embed/xxGLZWB/?height=700&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/xxGLZWB/'>OGC Map layer example</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
## OGC map layer options
-The below sample demonstrates the different OGC map layer options. You may click on the code pen button at the top-right corner to edit the code pen.
+The [OGC map layer options] sample demonstrates the different OGC map layer options.
-<br/>
-<iframe height='700' scrolling='no' title='OGC map layer options' src='//codepen.io/azuremaps/embed/abOyEVQ/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/abOyEVQ/'>OGC map layer options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+<!--
+<iframe height='700' scrolling='no' title='OGC map layer options' src='//codepen.io/azuremaps/embed/abOyEVQ/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/abOyEVQ/'>OGC map layer options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
## OGC Web Map Service explorer
-The following tool overlays imagery from the Web Map Services (WMS) and Web Map Tile Services (WMTS) as layers. You may select which layers in the service are rendered on the map. You may also view the associated legends for these layers.
+The [OGC Web Map Service explorer] sample overlays imagery from the Web Map Services (WMS) and Web Map Tile Services (WMTS) as layers. You may select which layers in the service are rendered on the map. You may also view the associated legends for these layers.
-<br/>
-<iframe height='750' scrolling='no' title='OGC Web Map Service explorer' src='//codepen.io/azuremaps/embed/YzXxYdX/?height=750&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/YzXxYdX/'>OGC Web Map Service explorer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+<!--
+<iframe height='750' scrolling='no' title='OGC Web Map Service explorer' src='//codepen.io/azuremaps/embed/YzXxYdX/?height=750&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/YzXxYdX/'>OGC Web Map Service explorer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
You may also specify the map settings to use a proxy service. The proxy service lets you load resources that are hosted on domains that don't have CORS enabled.
See the following articles, which contain code samples you could add to your map
> [Leverage core operations](spatial-io-core-operations.md)

> [!div class="nextstepaction"]
-> [Supported data format details](spatial-io-supported-data-format-details.md)
+> [Supported data format details](spatial-io-supported-data-format-details.md)
+
+[OGC map layer]: https://samples.azuremaps.com/spatial-io-module/ogc-map-layer-example
+[OGC map layer options]: https://samples.azuremaps.com/spatial-io-module/ogc-map-layer-options
+[OGC Web Map Service explorer]: https://samples.azuremaps.com/spatial-io-module/ogc-web-map-service-explorer
azure-maps Spatial Io Add Simple Data Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-simple-data-layer.md
Title: Add a simple data layer | Microsoft Azure Maps
+ Title: Add a simple data layer
+ description: Learn how to add a simple data layer using the Spatial IO module, provided by Azure Maps Web SDK.-- Previously updated : 02/29/2020-++ Last updated : 06/19/2023+ - #Customer intent: As an Azure Maps web sdk user, I want to add simple data layer so that I can render styled features on the map.
var layer = new atlas.layer.SimpleDataLayer(datasource);
map.layers.add(layer);
```
-Add features to the data source. Then, the simple data layer will figure out how best to render the features. Styles for individual features can be set as properties on the feature. The following code shows a GeoJSON point feature with a `color` property set to `red`.
+The following code snippet demonstrates using a simple data layer, referencing the data from an online source.
+
+```javascript
+function InitMap()
+{
+ var map = new atlas.Map('myMap', {
+ center: [-73.967605, 40.780452],
+ zoom: 12,
+ view: "Auto",
+
+ //Add authentication details for connecting to Azure Maps.
+ authOptions: {
+ // Get an Azure Maps key at https://azuremaps.com/.
+ authType: 'subscriptionKey',
+ subscriptionKey: '{Your-Azure-Maps-Subscription-key}'
+ },
+ });
+
+ //Wait until the map resources are ready.
+ map.events.add('ready', function () {
+
+ //Create a data source and add it to the map.
+ var datasource = new atlas.source.DataSource();
+ map.sources.add(datasource);
+
+ //Add a simple data layer for rendering data.
+ var layer = new atlas.layer.SimpleDataLayer(datasource);
+ map.layers.add(layer);
+
+ //Load an initial data set.
+ loadDataSet('https://s3-us-west-2.amazonaws.com/s.cdpn.io/1717245/use-simple-data-layer.json');
+
+ function loadDataSet(url) {
+ //Read the spatial data and add it to the map.
+ atlas.io.read(url).then(r => {
+ if (r) {
+ //Update the features in the data source.
+ datasource.setShapes(r);
+
+ //If bounding box information is known for data, set the map view to it.
+ if (r.bbox) {
+ map.setCamera({
+ bounds: r.bbox,
+ padding: 50
+ });
+ }
+ }
+ });
+ }
+ });
+}
+```
+
+The URL passed to the `loadDataSet` function points to the following JSON:
```json
{
Add features to the data source. Then, the simple data layer will figure out how
}
```
-The following code renders the above point feature using the simple data layer.
+Once you add features to the data source, the simple data layer figures out how best to render them. Styles for individual features can be set as properties on the feature.
+
+The above sample code shows a GeoJSON point feature with a `color` property set to `red`.
-<br/>
+This sample code renders the point feature using the simple data layer, and appears as follows:
-<iframe height="500" scrolling="no" title="Use the Simple data layer" src="//codepen.io/azuremaps/embed/zYGzpQV/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/zYGzpQV/'>Use the simple data layer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+
+> [!NOTE]
+> Notice that the coordinates set when the map was initialized:
+>
+> &emsp; center: [-73.967605, 40.780452]
+>
+> Are overwritten by the value from the datasource:
+>
+> &emsp; "coordinates": [0, 0]
+
+<!--
+<iframe height="500" scrolling="no" title="Use the Simple data layer" src="//codepen.io/azuremaps/embed/zYGzpQV/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/zYGzpQV/'>Use the simple data layer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
The real power of the simple data layer comes when:
The real power of the simple data layer comes when:
- Features in the data set have several style properties individually set on them; or
- You're not sure what the data set exactly contains.
-For example when parsing XML data feeds, you may not know the exact styles and geometry types of the features. The following sample shows the power of the simple data layer by rendering the features of a KML file. It also demonstrates various options that the simple data layer class provides.
-
-<br/>
+For example, when parsing XML data feeds, you may not know the exact styles and geometry types of the features. The [Simple data layer options] sample shows the power of the simple data layer by rendering the features of a KML file. It also demonstrates various options that the simple data layer class provides.
-<iframe height="700" scrolling="no" title="Simple data layer options" src="//codepen.io/azuremaps/embed/gOpRXgy/?height=700&theme-id=0&default-tab=result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/gOpRXgy/'>Simple data layer options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+<!--
+<iframe height="700" scrolling="no" title="Simple data layer options" src="//codepen.io/azuremaps/embed/gOpRXgy/?height=700&theme-id=0&default-tab=result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/gOpRXgy/'>Simple data layer options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
> [!NOTE]
> This simple data layer uses the [popup template](map-add-popup.md#add-popup-templates-to-the-map) class to display KML balloons or feature properties as a table. By default, all content rendered in the popup will be sandboxed inside of an iframe as a security feature. However, there are limitations:
>
-> - All scripts, forms, pointer lock and top navigation functionality is disabled. Links are allowed to open up in a new tab when clicked.
+> - All scripts, forms, pointer lock and top navigation functionality is disabled. Links are allowed to open up in a new tab when clicked.
> - Older browsers that don't support the `srcdoc` parameter on iframes will be limited to rendering a small amount of content.
->
-> If you trust the data being loaded into the popups and potentially want these scripts loaded into popups be able to access your application, you can disable this by setting the popup templates `sandboxContent` option to false.
+>
+> If you trust the data being loaded into the popups and potentially want these scripts loaded into popups be able to access your application, you can disable this by setting the popup templates `sandboxContent` option to false.
## Default supported style properties
See the following articles for more code samples to add to your maps:
> [!div class="nextstepaction"]
> [Supported data format details](spatial-io-supported-data-format-details.md)
+
+[Simple data layer options]: https://samples.azuremaps.com/spatial-io-module/simple-data-layer-options
azure-maps Spatial Io Read Write Spatial Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-read-write-spatial-data.md
Title: Read and write spatial data | Microsoft Azure Maps
+ Title: Read and write spatial data
+ description: Learn how to read and write data using the Spatial IO module, provided by Azure Maps Web SDK.-- Previously updated : 03/01/2020-++ Last updated : 06/21/2023+ -
-#Customer intent: As an Azure Maps web sdk user, I want to read and write spatial data so that I can use data for map rendering.
# Read and write spatial data
When reading a compressed file, either as a zip or a KMZ, it will be unzipped an
The result from the read function is a `SpatialDataSet` object. This object extends the GeoJSON FeatureCollection class. It can easily be passed into a `DataSource` as-is to render its features on a map. The `SpatialDataSet` not only contains feature information, but it may also include KML ground overlays, processing metrics, and other details as outlined in the following table.
-| Property name | Type | Description |
+| Property name | Type | Description |
| -- | -- | -- |
| `bbox` | `BoundingBox` | Bounding box of all the data in the data set. |
| `features` | `Feature[]` | GeoJSON features within the data set. |
The result from the read function is a `SpatialDataSet` object. This object exte
## Examples of reading spatial data
-The following code shows how to read a spatial data set, and render it on the map using the `SimpleDataLayer` class. The code uses a GPX file pointed to by a URL.
+The [Load spatial data] sample shows how to read a spatial data set, and render it on the map using the `SimpleDataLayer` class. The code uses a GPX file pointed to by a URL. For the source code of this sample, see [Load spatial data source].
-<br/>
-<iframe height='500' scrolling='no' title='Load Spatial Data Simple' src='//codepen.io/azuremaps/embed/yLNXrZx/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/yLNXrZx/'>Load Spatial Data Simple</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+<!--
+<iframe height='500' scrolling='no' title='Load Spatial Data Simple' src='//codepen.io/azuremaps/embed/yLNXrZx/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/yLNXrZx/'>Load Spatial Data Simple</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
The next code demo shows how to read and load KML, or KMZ, to the map. KML can contain ground overlays, which will be in the form of an `ImageLayer` or `OgcMapLayer`. These overlays must be added on the map separately from the features. Additionally, if the data set has custom icons, those icons need to be loaded to the map's resources before the features are loaded.
-<br/>
+The [Load KML onto map] sample shows how to load KML or KMZ files onto the map. For the source code of this sample, see [Load KML onto map source].
-<iframe height='500' scrolling='no' title='Load KML Onto Map' src='//codepen.io/azuremaps/embed/XWbgwxX/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/XWbgwxX/'>Load KML Onto Map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+
+<!--
+<iframe height='500' scrolling='no' title='Load KML Onto Map' src='//codepen.io/azuremaps/embed/XWbgwxX/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/XWbgwxX/'>Load KML Onto Map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
You may optionally provide a proxy service for accessing cross domain assets that may not have CORS enabled. The read function will try to access files on another domain using CORS first. After the first time it fails to access any resource on another domain using CORS it will only request additional files if a proxy service has been provided. The read function appends the file URL to the end of the proxy URL provided. This snippet of code shows how to pass a proxy service into the read function:
atlas.io.read('https://nonCorsDomain.example.com/mySuperCoolData.xml', {
```
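The URL construction described here can be sketched as follows (an illustration of the appending rule, not the module's implementation; the proxy endpoint shown is a placeholder):

```javascript
// The read function appends the file URL to the end of the proxy URL provided.
function proxiedUrl(proxyServiceUrl, fileUrl) {
  return proxyServiceUrl + fileUrl;
}

console.log(proxiedUrl(
  'https://example.com/cors-proxy?url=',
  'https://nonCorsDomain.example.com/mySuperCoolData.xml'
));
```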
-The demo below shows how to read a delimited file and render it on the map. In this case, the code uses a CSV file that has spatial data columns.
+The following code snippet shows how to read a delimited file and render it on the map. In this case, the code uses a CSV file that has spatial data columns. Note that you must add a reference to the Azure Maps Spatial IO module.
+
+```html
-<br/>
+<!-- Add reference to the Azure Maps Spatial IO module. -->
+<script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.min.js"></script>
+
+<script type="text/javascript">
+var map, datasource, layer;
+
+//a URL pointing to the CSV file
+var delimitedFileUrl = "https://s3-us-west-2.amazonaws.com/s.cdpn.io/1717245/earthquakes_gt7_alltime.csv";
+
+function InitMap()
+{
+ map = new atlas.Map('myMap', {
+ center: [-73.985708, 40.75773],
+ zoom: 12,
+ view: "Auto",
+
+ //Add authentication details for connecting to Azure Maps.
+ authOptions: {
+ // Get an Azure Maps key at https://azuremaps.com/.
+ authType: 'subscriptionKey',
+ subscriptionKey: '{Your-Azure-Maps-Subscription-key}'
+ },
+ });
+
+ //Wait until the map resources are ready.
+ map.events.add('ready', function () {
+ //Create a data source and add it to the map.
+ datasource = new atlas.source.DataSource();
+ map.sources.add(datasource);
+
+ //Add a simple data layer for rendering the data.
+ layer = new atlas.layer.SimpleDataLayer(datasource);
+ map.layers.add(layer);
+
+ //Read a CSV file from a URL or pass in a raw string.
+ atlas.io.read(delimitedFileUrl).then(r => {
+ if (r) {
+ //Add the feature data to the data source.
+ datasource.add(r);
+
+ //If bounding box information is known for data, set the map view to it.
+ if (r.bbox) {
+ map.setCamera({
+ bounds: r.bbox,
+ padding: 50
+ });
+ }
+ }
+ });
+ });
+}
+</script>
+```
-<iframe height='500' scrolling='no' title='Add a Delimited File' src='//codepen.io/azuremaps/embed/ExjXBEb/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ExjXBEb/'>Add a Delimited File</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+
+<!--
+<iframe height='500' scrolling='no' title='Add a Delimited File' src='//codepen.io/azuremaps/embed/ExjXBEb/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ExjXBEb/'>Add a Delimited File</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
## Write spatial data

There are two main write functions in the spatial IO module. The `atlas.io.write` function generates a string, while the `atlas.io.writeCompressed` function generates a compressed zip file. The compressed zip file contains a text-based file with the spatial data in it. Both of these functions return a promise to add the data to the file, and both can write any of the following data: `SpatialDataSet`, `DataSource`, `ImageLayer`, `OgcMapLayer`, feature collection, feature, geometry, or an array of any combination of these data types. When writing using either function, you can specify the desired file format. If the file format isn't specified, the data is written as KML.
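The format-defaulting rule can be sketched as follows (illustration only, not the module's code; assumes a `format` write option):

```javascript
// If no format is specified in the write options, the data is written as KML.
function resolveWriteFormat(options) {
  return (options && options.format) || 'KML';
}

console.log(resolveWriteFormat({ format: 'GeoJSON' })); // GeoJSON
console.log(resolveWriteFormat(undefined));             // KML
```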
-The tool below demonstrates the majority of the write options that can be used with the `atlas.io.write` function.
+The [Spatial data write options] sample is a tool that demonstrates the majority of the write options that can be used with the `atlas.io.write` function. For the source code of this sample, see [Spatial data write options source].
-<br/>
-<iframe height='700' scrolling='no' title='Spatial data write options' src='//codepen.io/azuremaps/embed/YzXxXPG/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/YzXxXPG/'>Spatial data write options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+<!--
+<iframe height='700' scrolling='no' title='Spatial data write options' src='//codepen.io/azuremaps/embed/YzXxXPG/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/YzXxXPG/'>Spatial data write options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
## Example of writing spatial data
-The following sample allows you to drag and drop and then load spatial files on the map. You can export GeoJSON data from the map and write it in one of the supported spatial data formats as a string or as a compressed file.
+The [Drag and drop spatial files onto map] sample allows you to drag and drop one or more KML, KMZ, GeoRSS, GPX, GML, GeoJSON or CSV files onto the map. For the source code of this sample, see [Drag and drop spatial files onto map source].
-<br/>
-<iframe height='700' scrolling='no' title='Drag and drop spatial files onto map' src='//codepen.io/azuremaps/embed/zYGdGoO/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/zYGdGoO/'>Drag and drop spatial files onto map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+<!--
+<iframe height='700' scrolling='no' title='Drag and drop spatial files onto map' src='//codepen.io/azuremaps/embed/zYGdGoO/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/zYGdGoO/'>Drag and drop spatial files onto map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
You may optionally provide a proxy service for accessing cross-domain assets that may not have CORS enabled. This snippet of code shows how you could incorporate a proxy service:
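A minimal sketch of such a proxy-service function (the endpoint shown is hypothetical; host your own CORS-enabled proxy in production) simply rewrites the asset URL so the request goes through the proxy:

```javascript
// Hypothetical proxy endpoint; replace with your own CORS-enabled service.
var PROXY_ENDPOINT = "https://example.com/proxy?url=";

// Rewrites a cross-domain asset URL so the request goes through the proxy.
function proxyServiceUrl(assetUrl) {
    return PROXY_ENDPOINT + encodeURIComponent(assetUrl);
}
```

The resulting URL can then be used wherever the cross-domain asset would otherwise be fetched directly.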
Well-known text can be read using the `atlas.io.ogc.WKT.read` function, and written using the `atlas.io.ogc.WKT.write` function.
## Examples of reading and writing Well-Known Text (WKT)
-The following code shows how to read the well-known text string `POINT(-122.34009 47.60995)` and render it on the map using a bubble layer.
+The [Read Well Known Text] sample shows how to read the well-known text string `POINT(-122.34009 47.60995)` and render it on the map using a bubble layer. For the source code of this sample, see [Read Well Known Text source].
-<br/>
-<iframe height='500' scrolling='no' title='Read Well-Known Text' src='//codepen.io/azuremaps/embed/XWbabLd/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/XWbabLd/'>Read Well-Known Text</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+<!--
+<iframe height='500' scrolling='no' title='Read Well-Known Text' src='//codepen.io/azuremaps/embed/XWbabLd/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/XWbabLd/'>Read Well-Known Text</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
-The following code demonstrates reading and writing well-known text back and forth.
+The [Read and write Well Known Text] sample demonstrates how to read and write Well Known Text (WKT) strings as GeoJSON. For the source code of this sample, see [Read and write Well Known Text source].
-<br/>
-<iframe height='700' scrolling='no' title='Read and write Well-Known Text' src='//codepen.io/azuremaps/embed/JjdyYav/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/JjdyYav/'>Read and write Well-Known Text</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+<!--
+<iframe height='700' scrolling='no' title='Read and write Well-Known Text' src='//codepen.io/azuremaps/embed/JjdyYav/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/JjdyYav/'>Read and write Well-Known Text</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
## Read and write GML
Learn more about the classes and methods used in this article:
[Supported data format details](spatial-io-supported-data-format-details.md)

## Next steps

See the following articles for more code samples to add to your maps:

[Add an OGC map layer](spatial-io-add-ogc-map-layer.md)
+[Load spatial data]: https://samples.azuremaps.com/spatial-io-module/load-spatial-data-(simple)
+[Load spatial data source]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Load%20spatial%20data%20(simple)/Load%20spatial%20data%20(simple).html
+[Load KML onto map]: https://samples.azuremaps.com/spatial-io-module/load-kml-onto-map
+[Load KML onto map source]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Load%20KML%20onto%20map/Load%20KML%20onto%20map.html
+[Spatial data write options]: https://samples.azuremaps.com/spatial-io-module/spatial-data-write-options
+[Spatial data write options source]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Spatial%20data%20write%20options/Spatial%20data%20write%20options.html
+[Drag and drop spatial files onto map]: https://samples.azuremaps.com/spatial-io-module/drag-and-drop-spatial-files-onto-map
+[Drag and drop spatial files onto map source]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Drag%20and%20drop%20spatial%20files%20onto%20map/Drag%20and%20drop%20spatial%20files%20onto%20map.html
+[Read Well Known Text]: https://samples.azuremaps.com/spatial-io-module/read-well-known-text
+[Read Well Known Text source]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Read%20Well%20Known%20Text/Read%20Well%20Known%20Text.html
+[Read and write Well Known Text]: https://samples.azuremaps.com/spatial-io-module/read-and-write-well-known-text
+[Read and write Well Known Text source]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Read%20and%20write%20Well%20Known%20Text/Read%20and%20write%20Well%20Known%20Text.html
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
You might have a limited number of Azure app actions per action group.
| 358 | Finland |
| 33 | France |
| 49 | Germany |
-| 852 | Hong Kong |
+| 852 | Hong Kong Special Administrative Region |
| 91 | India |
| 353 | Ireland |
| 972 | Israel |
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
HttpContext.Features.Get<RequestTelemetry>().Properties["myProp"] = someData
## Enable client-side telemetry for web applications
-The preceding steps are enough to help you start collecting server-side telemetry. If your application has client-side components, follow the next steps to start collecting [usage telemetry](./usage-overview.md) JavaScript (Web) SDK Loader Script injection by configuration.
+The preceding steps are enough to help you start collecting server-side telemetry. If your application has client-side components, follow the next steps to start collecting [usage telemetry](./usage-overview.md) using JavaScript (Web) SDK Loader Script injection by configuration.
1. In `_ViewImports.cshtml`, add injection:
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
builder.Services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceO
var app = builder.Build();
```
+You can customize additional sampling settings using the [SamplingPercentageEstimatorSettings](https://github.com/microsoft/ApplicationInsights-dotnet/blob/main/BASE/src/ServerTelemetryChannel/Implementation/SamplingPercentageEstimatorSettings.cs) class:
+
+```csharp
+using Microsoft.ApplicationInsights.WindowsServer.Channel.Implementation;
+
+telemetryProcessorChainBuilder.UseAdaptiveSampling(new SamplingPercentageEstimatorSettings
+{
+ MinSamplingPercentage = 0.01,
+ MaxSamplingPercentage = 100,
+ MaxTelemetryItemsPerSecond = 5
+ }, null, excludedTypes: "Dependency");
+```
+
### Configuring adaptive sampling for Azure Functions

Follow instructions from [this page](../../azure-functions/configure-monitoring.md#configure-sampling) to configure adaptive sampling for apps running in Azure Functions.
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Connection string: `APPLICATIONINSIGHTS_CONNECTION_STRING`
# [.NET 5.0+](#tab/dotnet5)
-1. Set the instrumentation key in the `appsettings.json` file:
+1. Set the connection string in the `appsettings.json` file:
```json
{
  "ApplicationInsights": {
- "InstrumentationKey" : "InstrumentationKey=00000000-0000-0000-0000-000000000000;"
+ "ConnectionString" : "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://{region}.in.applicationinsights.azure.com/;LiveEndpoint=https://{region}.livediagnostics.monitor.azure.com/"
  }
}
```
-2. Retrieve the instrumentation key in `Program.cs` when registering the `ApplicationInsightsTelemetry` service:
+2. Retrieve the connection string in `Program.cs` when registering the `ApplicationInsightsTelemetry` service:
```csharp
- var options = new ApplicationInsightsServiceOptions { ConnectionString = app.Configuration["ApplicationInsights:InstrumentationKey"] };
+ var options = new ApplicationInsightsServiceOptions { ConnectionString = app.Configuration["ApplicationInsights:ConnectionString"] };
builder.Services.AddApplicationInsightsTelemetry(options: options);
```

> [!NOTE]
-> When deploying applications to Azure in production scenarios, consider placing instrumentation keys or other configuration secrets in secure locations such as App Service configuration settings or Azure Key Vault. Avoid including secrets in your application code or checking them into source control where they might be exposed or misused. The preceding code example will also work if the instrumentation key is stored in App Service configuration settings. Learn more about [configuring App Service settings](/azure/app-service/configure-common).
+> When deploying applications to Azure in production scenarios, consider placing connection strings or other configuration secrets in secure locations such as App Service configuration settings or Azure Key Vault. Avoid including secrets in your application code or checking them into source control where they might be exposed or misused. The preceding code example will also work if the connection string is stored in App Service configuration settings. Learn more about [configuring App Service settings](/azure/app-service/configure-common).
# [.NET Framework](#tab/dotnet-framework)

Set the property [TelemetryConfiguration.ConnectionString](https://github.com/microsoft/ApplicationInsights-dotnet/blob/add45ceed35a817dc7202ec07d3df1672d1f610d/BASE/src/Microsoft.ApplicationInsights/Extensibility/TelemetryConfiguration.cs#L271-L274) or [ApplicationInsightsServiceOptions.ConnectionString](https://github.com/microsoft/ApplicationInsights-dotnet/blob/81288f26921df1e8e713d31e7e9c2187ac9e6590/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs#L66-L69).
-Explicitly set the instrumentation key in code:
+Explicitly set the connection string in code:
```csharp
var configuration = new TelemetryConfiguration
{
- ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000;"
+ ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://{region}.in.applicationinsights.azure.com/;LiveEndpoint=https://{region}.livediagnostics.monitor.azure.com/"
};
```
-Set the instrumentation key using a configuration file:
+Set the connection string using a configuration file:
```xml
<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
- <ConnectionString>InstrumentationKey=00000000-0000-0000-0000-000000000000</ConnectionString>
+ <ConnectionString>InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://{region}.in.applicationinsights.azure.com/;LiveEndpoint=https://{region}.livediagnostics.monitor.azure.com/</ConnectionString>
</ApplicationInsights>
```
You can set the connection string in the `applicationinsights.json` configuratio
```json
{
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000"
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://{region}.in.applicationinsights.azure.com/;LiveEndpoint=https://{region}.livediagnostics.monitor.azure.com/"
} ```
JavaScript doesn't support the use of environment variables. You have two option
```javascript
const appInsights = require("applicationinsights");
-appInsights.setup("InstrumentationKey=00000000-0000-0000-0000-000000000000;");
+appInsights.setup("InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://{region}.in.applicationinsights.azure.com/;LiveEndpoint=https://{region}.livediagnostics.monitor.azure.com/");
appInsights.start();
```
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer
-tracer = Tracer(exporter=AzureExporter(connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000'), sampler=ProbabilitySampler(1.0))
+tracer = Tracer(exporter=AzureExporter(connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://{region}.in.applicationinsights.azure.com/;LiveEndpoint=https://{region}.livediagnostics.monitor.azure.com/'), sampler=ProbabilitySampler(1.0))
```
azure-monitor Autoscale Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-diagnostics.md
The following examples are the general formats for autoscale resource logs with
Use these logs to troubleshoot issues in autoscale. For more information, see [Troubleshooting autoscale problems](autoscale-troubleshoot.md).
+> [!NOTE]
+> Although the logs may refer to "scale up" and "scale down" actions, the actual action taken is scale in or scale out.
+
## Autoscale evaluations log

The following schemas appear in the autoscale evaluations log.
azure-monitor Autoscale Webhook Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-webhook-email.md
description: Learn how to use autoscale actions to call web URLs or send email n
Previously updated : 04/03/2017 Last updated : 06/21/2023 # Use autoscale actions to send email and webhook alert notifications in Azure Monitor
-This article shows you how to set up triggers so that you can call specific web URLs or send emails based on autoscale actions in Azure.
+This article shows you how to set up notifications so that you can call specific web URLs or send emails based on autoscale actions in Azure.
## Webhooks
-Webhooks allow you to route Azure alert notifications to other systems for post-processing or custom notifications. Examples include routing the alert to services that can handle an incoming web request to send an SMS, log bugs, or notify a team by using chat or messaging services. The webhook URI must be a valid HTTP or HTTPS endpoint.
+Webhooks allow you to send HTTP requests to a specific URL endpoint (callback URL) when a certain event or trigger occurs. Using webhooks, you can automate and streamline processes by enabling the automatic exchange of information between different systems or applications. Use webhooks to trigger custom code, notifications, or other actions to run when an autoscale event occurs.
## Email
-You can send email to any valid email address. Administrators and co-administrators of the subscription where the rule is running are also notified.
+You can send email to any valid email address when an autoscale event occurs. Administrators and co-administrators of the subscription where the rule is running are also notified.
-## Cloud Services and App Service
-You can opt in from the Azure portal for Azure Cloud Services and server farms (Azure App Service).
+## Configure notifications
-* Choose the **scale by** metric.
+Use the Azure portal, CLI, PowerShell, or Resource Manager templates to configure notifications.
- ![Screenshot that shows the Autoscale setting pane.](./media/autoscale-webhook-email/insights-autoscale-notify.png)
+### [Portal](#tab/portal)
-## Virtual machine scale sets
-For newer virtual machines created with Azure Resource Manager (virtual machine scale sets), you can use the REST API, Resource Manager templates, PowerShell, and the Azure CLI for configuration. An Azure portal interface isn't yet available.
+### Set up notifications using the Azure portal
-When you use the REST API or Resource Manager templates, include the notifications element in your [autoscale settings](/azure/templates/microsoft.insights/2015-04-01/autoscalesettings) with the following options:
+Select the **Notify** tab on the autoscale settings page to configure notifications.
+Select the check boxes to send an email to the subscription administrator or co-administrators. You can also enter a list of email addresses to send notifications to.
+
+Enter a webhook URI to send a notification to a web service. You can also add custom headers to the webhook request. For example, you can add an authentication token in a header or query parameter, or a custom header that identifies the source of the request.
+++
+### [CLI](#tab/cli)
+
+### Use CLI to configure notifications
+
+Use the `az monitor autoscale update` or the `az monitor autoscale create` command to configure notifications using Azure CLI.
+
+The following parameters are used to configure notifications:
+
++ `--add-action` - The action to take when the autoscale rule is triggered. The value must be `email` or `webhook`.
++ `--email-administrator {false, true}` - Send email to the subscription administrator.
++ `--email-coadministrators {false, true}` - Send email to the subscription co-administrators.
++ `--remove-action` - Remove an action previously added by `--add-action`. The value must be `email` or `webhook`. The parameter is only relevant for the `az monitor autoscale update` command.
+
+For example, the following command adds an email notification and a webhook notification to an existing autoscale setting. The command also sends email to the subscription administrator.
+
+```azurecli
+az monitor autoscale update \
+    --resource-group <resource group name> \
+    --name <autoscale setting name> \
+    --email-administrator true \
+    --add-action email pdavis@contoso.com \
+    --add-action webhook http://myservice.com/webhook-listener-123
```
+
+> [!NOTE]
+> You can add more than one email or webhook notification by using the `--add-action` parameter multiple times. While multiple webhook notifications are supported and can be seen in the JSON, the portal only shows the first webhook.
++
+For more information, see [az monitor autoscale](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest).
+++
+### [PowerShell](#tab/powershell)
+
+### Use PowerShell to configure notifications
+
+The following example shows how to configure a webhook and email notification.
+
+1. Create the webhook object.
+
+1. Create the notification object.
+1. Add the notification object to the autoscale setting using `New-AzAutoscaleSetting` or `Update-AzAutoscaleSetting` cmdlets.
+
+```powershell
+# Assuming you have already created a profile object and have a vmssName, resourceGroup, and subscriptionId
+
+ $webhook=New-AzAutoscaleWebhookNotificationObject `
+-Property @{"method"='GET'; "headers"= '"Authorization", "tokenvalue-12345678abcdef"'} `
+-ServiceUri "http://myservice.com/webhook-listener-123"
+
+$notification=New-AzAutoscaleNotificationObject `
+-EmailCustomEmail "pdavis@contoso.com" `
+-EmailSendToSubscriptionAdministrator $true `
+-EmailSendToSubscriptionCoAdministrator $true `
+-Webhook $webhook
++
+New-AzAutoscaleSetting -Name autoscalesetting2 `
+-ResourceGroupName $resourceGroup `
+-Location eastus `
+-Profile $profile `
+-Enabled -Notification $notification `
+-PropertiesName "autoscalesetting" `
+-TargetResourceUri "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssName"
+```
++
+### [Resource Manager](#tab/resourcemanager)
+
+### Use Resource Manager templates to configure notifications
+
+When you use Resource Manager templates or the REST API, include the `notifications` element in your [autoscale settings](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings?pivots=deployment-language-arm-template#resource-format-1), for example:
+
+```JSON
  "notifications": [
    {
      "operation": "Scale",
When you use the REST API or Resource Manager templates, include the notificatio
      },
      "webhooks": [
        {
- "serviceUri": "https://foo.webhook.example.com?token=abcd1234",
+ "serviceUri": "https://my.webhook.example.com?token=abcd1234",
          "properties": {
            "optional_key1": "optional_value1",
            "optional_key2": "optional_value2"
When you use the REST API or Resource Manager templates, include the notificatio
| serviceUri | Yes | Valid HTTPS URI. |
| properties | Yes | Value must be empty {} or can contain key-value pairs. |
+
+
## Authentication in webhooks
-The webhook can authenticate by using token-based authentication, where you save the webhook URI with a token ID as a query parameter. An example is https:\//mysamplealert/webcallback?tokenid=sometokenid&someparameter=somevalue.
+The webhook can authenticate by using token-based authentication, where you save the webhook URI with a token ID as a query parameter. For example, `https://mysamplealert/webcallback?tokenid=123-abc456-7890&myparameter2=value123`.
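A minimal sketch of how a webhook receiver could validate such a token (the function name and token value are illustrative):

```javascript
// Illustrative token value; real tokens belong in secure configuration.
var EXPECTED_TOKEN = "123-abc456-7890";

// Checks the token ID carried as a query parameter on the callback URL.
function isAuthorizedCallback(callbackUrl) {
    var tokenId = new URL(callbackUrl).searchParams.get("tokenid");
    return tokenId === EXPECTED_TOKEN;
}
```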
## Autoscale notification webhook payload schema

When the autoscale notification is generated, the following metadata is included in the webhook payload:
-```
+```JSON
{
- "version": "1.0",
- "status": "Activated",
- "operation": "Scale In",
- "context": {
- "timestamp": "2016-03-11T07:31:04.5834118Z",
- "id": "/subscriptions/s1/resourceGroups/rg1/providers/microsoft.insights/autoscalesettings/myautoscaleSetting",
- "name": "myautoscaleSetting",
- "details": "Autoscale successfully started scale operation for resource 'MyCSRole' from capacity '3' to capacity '2'",
- "subscriptionId": "s1",
- "resourceGroupName": "rg1",
- "resourceName": "MyCSRole",
- "resourceType": "microsoft.classiccompute/domainnames/slots/roles",
- "resourceId": "/subscriptions/s1/resourceGroups/rg1/providers/microsoft.classicCompute/domainNames/myCloudService/slots/Production/roles/MyCSRole",
- "portalLink": "https://portal.azure.com/#resource/subscriptions/s1/resourceGroups/rg1/providers/microsoft.classicCompute/domainNames/myCloudService",
- "oldCapacity": "3",
- "newCapacity": "2"
- },
- "properties": {
- "key1": "value1",
- "key2": "value2"
- }
+ "version": "1.0",
+ "status": "Activated",
+ "operation": "Scale Out",
+ "context": {
+ "timestamp": "2023-06-22T07:01:47.8926726Z",
+ "id": "/subscriptions/123456ab-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/rg-001/providers/microsoft.insights/autoscalesettings/AutoscaleSettings-002",
+ "name": "AutoscaleSettings-002",
+ "details": "Autoscale successfully started scale operation for resource 'ScaleableAppServicePlan' from capacity '1' to capacity '2'",
+ "subscriptionId": "123456ab-9876-a1b2-a2b1-123a567b9f8767",
+ "resourceGroupName": "rg-001",
+ "resourceName": "ScaleableAppServicePlan",
+ "resourceType": "microsoft.web/serverfarms",
+ "resourceId": "/subscriptions/123456ab-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/rg-001/providers/Microsoft.Web/serverfarms/ScaleableAppServicePlan",
+ "portalLink": "https://portal.azure.com/#resource/subscriptions/123456ab-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/rg-001/providers/Microsoft.Web/serverfarms/ScaleableAppServicePlan",
+ "resourceRegion": "West Central US",
+ "oldCapacity": "1",
+ "newCapacity": "2"
+ },
+ "properties": {
+ "key1": "value1",
+ "key2": "value2"
+ }
}
```
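For instance, a receiver might pull the scale direction and capacities out of this payload as follows (the helper name is illustrative):

```javascript
// Summarizes an autoscale webhook payload such as the example above.
function summarizeScaleEvent(payload) {
    var ctx = payload.context;
    return ctx.resourceName + ": " + payload.operation.toLowerCase() +
        " from capacity " + ctx.oldCapacity + " to " + ctx.newCapacity;
}
```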
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
Title: Query logs from Container insights
description: Container insights collects metrics and log data, and this article describes the records and includes sample queries. Previously updated : 08/29/2022 Last updated : 06/06/2023
KubePodInventory
ContainerStatus = strcat("Container Status: ", ContainerStatus)
```
+## Container logs
+
+Container logs for AKS are stored in [the ContainerLogV2 table](./container-insights-logging-v2.md). You can run the following sample queries to look for the stderr/stdout log output from target pods, deployments, or namespaces.
+
+### Container logs for a specific pod, namespace, and container
+
+```kusto
+ContainerLogV2
+| where _ResourceId =~ "clusterResourceID" //update with resource ID
+| where PodNamespace == "podNameSpace" //update with target namespace
+| where PodName == "podName" //update with target pod
+| where ContainerName == "containerName" //update with target container
+| project TimeGenerated, Computer, ContainerId, LogMessage, LogSource
+```
+
+### Container logs for a specific deployment
+
+``` kusto
+let KubePodInv = KubePodInventory
+| where _ResourceId =~ "clusterResourceID" //update with resource ID
+| where Namespace == "deploymentNamespace" //update with target namespace
+| where ControllerKind == "ReplicaSet"
+| extend deployment = reverse(substring(reverse(ControllerName), indexof(reverse(ControllerName), "-") + 1))
+| where deployment == "deploymentName" //update with target deployment
+| extend ContainerId = ContainerID
+| summarize arg_max(TimeGenerated, *) by deployment, ContainerId, PodStatus, ContainerStatus
+| project deployment, ContainerId, PodStatus, ContainerStatus;
+
+KubePodInv
+| join
+(
+ ContainerLogV2
+ | where TimeGenerated >= startTime and TimeGenerated < endTime
+ | where PodNamespace == "deploymentNamespace" //update with target namespace
+ | where PodName startswith "deploymentName" //update with target deployment
+) on ContainerId
+| project TimeGenerated, deployment, PodName, PodStatus, ContainerName, ContainerId, ContainerStatus, LogMessage, LogSource
+
+```
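The `reverse`/`substring`/`indexof` expression in the query above recovers the deployment name by stripping the ReplicaSet's trailing pod-template hash; the equivalent transformation in plain JavaScript (useful for checking expected values) is:

```javascript
// Strips the trailing "-<pod-template-hash>" segment from a ReplicaSet
// name (e.g. "myapp-5d8f9c7b6d") to recover the owning deployment's name.
function deploymentFromReplicaSet(controllerName) {
    var lastDash = controllerName.lastIndexOf("-");
    return lastDash < 0 ? controllerName : controllerName.substring(0, lastDash);
}
```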
+
+### Container logs for any failed pod in a specific namespace
+
+``` kusto
+ let KubePodInv = KubePodInventory
+ | where TimeGenerated >= startTime and TimeGenerated < endTime
+ | where _ResourceId =~ "clusterResourceID" //update with resource ID
+ | where Namespace == "podNamespace" //update with target namespace
+ | where PodStatus == "Failed"
+ | extend ContainerId = ContainerID
+ | summarize arg_max(TimeGenerated, *) by ContainerId, PodStatus, ContainerStatus
+ | project ContainerId, PodStatus, ContainerStatus;
+
+ KubePodInv
+ | join
+ (
+ ContainerLogV2
+ | where TimeGenerated >= startTime and TimeGenerated < endTime
+ | where PodNamespace == "podNamespace" //update with target namespace
+ ) on ContainerId
+ | project TimeGenerated, PodName, PodStatus, ContainerName, ContainerId, ContainerStatus, LogMessage, LogSource
+
+```
+
+## Container insights default visualization queries
+
+These queries are generated from the [out of the box visualizations](./container-insights-analyze.md) in container insights. If you've enabled custom [cost optimization settings](./container-insights-cost-config.md), you can use these queries in lieu of the default charts.
+
+### Node CPU and memory utilization
+
+The required tables for this chart include Perf and KubeNodeInventory.
+
+```kusto
+ let trendBinSize = 5m;
+ let MaxListSize = 1000;
+ let clusterId = 'clusterResourceID'; //update with resource ID
+ let clusterIdToken = strcat(clusterId, "/");
+
+ let materializedPerfData = materialize(Perf
+| where InstanceName startswith clusterIdToken
+| where ObjectName == 'K8SNode'
+| summarize arg_max(TimeGenerated, *) by CounterName, Computer, bin(TimeGenerated, trendBinSize)
+| where CounterName == 'cpuCapacityNanoCores' or CounterName == 'memoryCapacityBytes' or CounterName == 'cpuUsageNanoCores' or CounterName == 'memoryRssBytes'
+| project TimeGenerated, Computer, CounterName, CounterValue
+| summarize StoredValue = max(CounterValue) by Computer, CounterName, bin(TimeGenerated, trendBinSize));
+
+ let rawData = KubeNodeInventory
+| where ClusterId =~ clusterId
+| summarize arg_max(TimeGenerated, *) by Computer, bin(TimeGenerated, trendBinSize)
+| join( materializedPerfData
+| where CounterName == 'cpuCapacityNanoCores' or CounterName == 'memoryCapacityBytes'
+| project Computer, CounterName = iif(CounterName == 'cpuCapacityNanoCores', 'cpu', 'memory'), CapacityValue = StoredValue, TimeGenerated ) on Computer, TimeGenerated
+| join kind=inner( materializedPerfData
+| where CounterName == 'cpuUsageNanoCores' or CounterName == 'memoryRssBytes'
+| project Computer, CounterName = iif(CounterName == 'cpuUsageNanoCores', 'cpu', 'memory'), UsageValue = StoredValue, TimeGenerated ) on Computer, CounterName, TimeGenerated
+| project Computer, CounterName, TimeGenerated, UsagePercent = UsageValue * 100.0 / CapacityValue;
+
+ rawData
+| summarize Min = min(UsagePercent), Avg = avg(UsagePercent), Max = max(UsagePercent), percentiles(UsagePercent, 50, 90, 95) by bin(TimeGenerated, trendBinSize), CounterName
+| sort by TimeGenerated asc
+| project CounterName, TimeGenerated, Min, Avg, Max, P50 = percentile_UsagePercent_50, P90 = percentile_UsagePercent_90, P95 = percentile_UsagePercent_95
+| summarize makelist(TimeGenerated, MaxListSize), makelist(Min, MaxListSize), makelist(Avg, MaxListSize), makelist(Max, MaxListSize), makelist(P50, MaxListSize), makelist(P90, MaxListSize), makelist(P95, MaxListSize) by CounterName
+| join ( rawData
+| summarize Min = min(UsagePercent), Avg = avg(UsagePercent), Max = max(UsagePercent), percentiles(UsagePercent, 50, 90, 95) by CounterName ) on CounterName
+| project ClusterId = clusterId, CounterName, Min, Avg, Max, P50 = percentile_UsagePercent_50, P90 = percentile_UsagePercent_90, P95 = percentile_UsagePercent_95, list_TimeGenerated, list_Min, list_Avg, list_Max, list_P50, list_P90, list_P95
+```
+### Node count by status
+
+The required tables for this chart include KubeNodeInventory.
+
+```kusto
+ let trendBinSize = 5m;
+ let maxListSize = 1000;
+ let clusterId = 'clusterResourceID'; //update with resource ID
+
+ let rawData = KubeNodeInventory
+| where ClusterId =~ clusterId
+| distinct ClusterId, TimeGenerated
+| summarize ClusterSnapshotCount = count() by Timestamp = bin(TimeGenerated, trendBinSize), ClusterId
+| join hint.strategy=broadcast ( KubeNodeInventory
+| where ClusterId =~ clusterId
+| summarize TotalCount = count(), ReadyCount = sumif(1, Status contains ('Ready')) by ClusterId, Timestamp = bin(TimeGenerated, trendBinSize)
+| extend NotReadyCount = TotalCount - ReadyCount ) on ClusterId, Timestamp
+| project ClusterId, Timestamp, TotalCount = todouble(TotalCount) / ClusterSnapshotCount, ReadyCount = todouble(ReadyCount) / ClusterSnapshotCount, NotReadyCount = todouble(NotReadyCount) / ClusterSnapshotCount;
+
+ rawData
+| order by Timestamp asc
+| summarize makelist(Timestamp, maxListSize), makelist(TotalCount, maxListSize), makelist(ReadyCount, maxListSize), makelist(NotReadyCount, maxListSize) by ClusterId
+| join ( rawData
+| summarize Avg_TotalCount = avg(TotalCount), Avg_ReadyCount = avg(ReadyCount), Avg_NotReadyCount = avg(NotReadyCount) by ClusterId ) on ClusterId
+| project ClusterId, Avg_TotalCount, Avg_ReadyCount, Avg_NotReadyCount, list_Timestamp, list_TotalCount, list_ReadyCount, list_NotReadyCount
+```
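The query above averages node counts over the inventory snapshots in each time bin — that's what the division by `ClusterSnapshotCount` does. A minimal Python sketch of that normalization, using made-up records (note the sketch matches status by exact equality, so `NotReady` rows don't count as ready):

```python
from collections import defaultdict

def normalized_counts(records):
    """records: (time_bin, time_generated, status) tuples from a node inventory.
    The snapshot count per bin is the number of distinct TimeGenerated values
    in that bin; raw counts are divided by it to yield an average node count."""
    snapshots = defaultdict(set)
    total = defaultdict(int)
    ready = defaultdict(int)
    for b, tg, status in records:
        snapshots[b].add(tg)
        total[b] += 1
        if status == 'Ready':
            ready[b] += 1
    return {b: (total[b] / len(snapshots[b]), ready[b] / len(snapshots[b]))
            for b in total}

# Two snapshots in one bin, three nodes each; all Ready in the first
# snapshot, two Ready in the second:
recs = [(0, 1, 'Ready'), (0, 1, 'Ready'), (0, 1, 'Ready'),
        (0, 2, 'Ready'), (0, 2, 'Ready'), (0, 2, 'NotReady')]
print(normalized_counts(recs))  # {0: (3.0, 2.5)}
```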
+
+### Pod count by status
+
+The required tables for this chart include KubePodInventory.
+
+```kusto
+ let trendBinSize = 5m;
+ let maxListSize = 1000;
+ let clusterId = 'clusterResourceID'; //update with resource ID
+
+ let rawData = KubePodInventory
+| where ClusterId =~ clusterId
+| distinct ClusterId, TimeGenerated
+| summarize ClusterSnapshotCount = count() by bin(TimeGenerated, trendBinSize), ClusterId
+| join hint.strategy=broadcast ( KubePodInventory
+| where ClusterId =~ clusterId
+| summarize PodStatus=any(PodStatus) by TimeGenerated, PodUid, ClusterId
+| summarize TotalCount = count(), PendingCount = sumif(1, PodStatus =~ 'Pending'), RunningCount = sumif(1, PodStatus =~ 'Running'), SucceededCount = sumif(1, PodStatus =~ 'Succeeded'), FailedCount = sumif(1, PodStatus =~ 'Failed'), TerminatingCount = sumif(1, PodStatus =~ 'Terminating') by ClusterId, bin(TimeGenerated, trendBinSize) ) on ClusterId, TimeGenerated
+| extend UnknownCount = TotalCount - PendingCount - RunningCount - SucceededCount - FailedCount - TerminatingCount
+| project ClusterId, Timestamp = TimeGenerated, TotalCount = todouble(TotalCount) / ClusterSnapshotCount, PendingCount = todouble(PendingCount) / ClusterSnapshotCount, RunningCount = todouble(RunningCount) / ClusterSnapshotCount, SucceededCount = todouble(SucceededCount) / ClusterSnapshotCount, FailedCount = todouble(FailedCount) / ClusterSnapshotCount, TerminatingCount = todouble(TerminatingCount) / ClusterSnapshotCount, UnknownCount = todouble(UnknownCount) / ClusterSnapshotCount;
+
+ let rawDataCached = rawData;
+
+ rawDataCached
+| order by Timestamp asc
+| summarize makelist(Timestamp, maxListSize), makelist(TotalCount, maxListSize), makelist(PendingCount, maxListSize), makelist(RunningCount, maxListSize), makelist(SucceededCount, maxListSize), makelist(FailedCount, maxListSize), makelist(TerminatingCount, maxListSize), makelist(UnknownCount, maxListSize) by ClusterId
+| join ( rawDataCached
+| summarize Avg_TotalCount = avg(TotalCount), Avg_PendingCount = avg(PendingCount), Avg_RunningCount = avg(RunningCount), Avg_SucceededCount = avg(SucceededCount), Avg_FailedCount = avg(FailedCount), Avg_TerminatingCount = avg(TerminatingCount), Avg_UnknownCount = avg(UnknownCount) by ClusterId ) on ClusterId
+| project ClusterId, Avg_TotalCount, Avg_PendingCount, Avg_RunningCount, Avg_SucceededCount, Avg_FailedCount, Avg_TerminatingCount, Avg_UnknownCount, list_Timestamp, list_TotalCount, list_PendingCount, list_RunningCount, list_SucceededCount, list_FailedCount, list_TerminatingCount, list_UnknownCount
+```
+
+### List of containers by status
+
+The required tables for this chart include KubePodInventory and Perf.
+
+```kusto
+ let startDateTime = datetime('start time');
+ let endDateTime = datetime('end time');
+ let trendBinSize = 15m;
+ let maxResultCount = 10000;
+ let metricUsageCounterName = 'cpuUsageNanoCores';
+ let metricLimitCounterName = 'cpuLimitNanoCores';
+
+ let KubePodInventoryTable = KubePodInventory
+| where TimeGenerated >= startDateTime
+| where TimeGenerated < endDateTime
+| where isnotempty(ClusterName)
+| where isnotempty(Namespace)
+| where isnotempty(Computer)
+| project TimeGenerated, ClusterId, ClusterName, Namespace, ServiceName, ControllerName, Node = Computer, Pod = Name, ContainerInstance = ContainerName, ContainerID, ReadySinceNow = format_timespan(endDateTime - ContainerCreationTimeStamp , 'ddd.hh:mm:ss.fff'), Restarts = ContainerRestartCount, Status = ContainerStatus, ContainerStatusReason = columnifexists('ContainerStatusReason', ''), ControllerKind = ControllerKind, PodStatus;
+
+ let startRestart = KubePodInventoryTable
+| summarize arg_min(TimeGenerated, *) by Node, ContainerInstance
+| where ClusterId =~ 'clusterResourceID' //update with resource ID
+| project Node, ContainerInstance, InstanceName = strcat(ClusterId, '/', ContainerInstance), StartRestart = Restarts;
+
+ let IdentityTable = KubePodInventoryTable
+| summarize arg_max(TimeGenerated, *) by Node, ContainerInstance
+| where ClusterId =~ 'clusterResourceID' //update with resource ID
+| project ClusterName, Namespace, ServiceName, ControllerName, Node, Pod, ContainerInstance, InstanceName = strcat(ClusterId, '/', ContainerInstance), ContainerID, ReadySinceNow, Restarts, Status = iff(Status =~ 'running', 0, iff(Status=~'waiting', 1, iff(Status =~'terminated', 2, 3))), ContainerStatusReason, ControllerKind, Containers = 1, ContainerName = tostring(split(ContainerInstance, '/')[1]), PodStatus, LastPodInventoryTimeGenerated = TimeGenerated, ClusterId;
+
+ let CachedIdentityTable = IdentityTable;
+
+ let FilteredPerfTable = Perf
+| where TimeGenerated >= startDateTime
+| where TimeGenerated < endDateTime
+| where ObjectName == 'K8SContainer'
+| where InstanceName startswith 'clusterResourceID' //update with resource ID
+| project Node = Computer, TimeGenerated, CounterName, CounterValue, InstanceName ;
+
+ let CachedFilteredPerfTable = FilteredPerfTable;
+
+ let LimitsTable = CachedFilteredPerfTable
+| where CounterName =~ metricLimitCounterName
+| summarize arg_max(TimeGenerated, *) by Node, InstanceName
+| project Node, InstanceName, LimitsValue = iff(CounterName =~ 'cpuLimitNanoCores', CounterValue/1000000, CounterValue), TimeGenerated;
+ let MetaDataTable = CachedIdentityTable
+| join kind=leftouter ( LimitsTable ) on Node, InstanceName
+| join kind= leftouter ( startRestart ) on Node, InstanceName
+| project ClusterName, Namespace, ServiceName, ControllerName, Node, Pod, InstanceName, ContainerID, ReadySinceNow, Restarts, LimitsValue, Status, ContainerStatusReason = columnifexists('ContainerStatusReason', ''), ControllerKind, Containers, ContainerName, ContainerInstance, StartRestart, PodStatus, LastPodInventoryTimeGenerated, ClusterId;
+
+ let UsagePerfTable = CachedFilteredPerfTable
+| where CounterName =~ metricUsageCounterName
+| project TimeGenerated, Node, InstanceName, CounterValue = iff(CounterName =~ 'cpuUsageNanoCores', CounterValue/1000000, CounterValue);
+
+ let LastRestartPerfTable = CachedFilteredPerfTable
+| where CounterName =~ 'restartTimeEpoch'
+| summarize arg_max(TimeGenerated, *) by Node, InstanceName
+| project Node, InstanceName, UpTime = CounterValue, LastReported = TimeGenerated;
+
+ let AggregationTable = UsagePerfTable
+| summarize Aggregation = max(CounterValue) by Node, InstanceName
+| project Node, InstanceName, Aggregation;
+
+ let TrendTable = UsagePerfTable
+| summarize TrendAggregation = max(CounterValue) by bin(TimeGenerated, trendBinSize), Node, InstanceName
+| project TrendTimeGenerated = TimeGenerated, Node, InstanceName , TrendAggregation
+| summarize TrendList = makelist(pack("timestamp", TrendTimeGenerated, "value", TrendAggregation)) by Node, InstanceName;
+
+ let containerFinalTable = MetaDataTable
+| join kind= leftouter( AggregationTable ) on Node, InstanceName
+| join kind = leftouter (LastRestartPerfTable) on Node, InstanceName
+| order by Aggregation desc, ContainerName
+| join kind = leftouter ( TrendTable) on Node, InstanceName
+| extend ContainerIdentity = strcat(ContainerName, ' ', Pod)
+| project ContainerIdentity, Status, ContainerStatusReason = columnifexists('ContainerStatusReason', ''), Aggregation, Node, Restarts, ReadySinceNow, TrendList = iif(isempty(TrendList), parse_json('[]'), TrendList), LimitsValue, ControllerName, ControllerKind, ContainerID, Containers, UpTimeNow = datetime_diff('Millisecond', endDateTime, datetime_add('second', toint(UpTime), make_datetime(1970,1,1))), ContainerInstance, StartRestart, LastReportedDelta = datetime_diff('Millisecond', endDateTime, LastReported), PodStatus, InstanceName, Namespace, LastPodInventoryTimeGenerated, ClusterId;
+containerFinalTable
+| limit 200
+```
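The `UpTimeNow` expression in the query above converts a `restartTimeEpoch` counter value (seconds since 1970-01-01) into elapsed milliseconds relative to the query end time. The same arithmetic in Python, with assumed example values:

```python
from datetime import datetime, timezone

def uptime_ms(restart_epoch_seconds: int, end: datetime) -> int:
    """Milliseconds from the container's restart time to the query end time,
    mirroring datetime_diff('Millisecond', endDateTime,
    datetime_add('second', toint(UpTime), make_datetime(1970,1,1)))."""
    start = datetime.fromtimestamp(restart_epoch_seconds, tz=timezone.utc)
    return int((end - start).total_seconds() * 1000)

# Assumed values: restart at 2023-06-22 00:00 UTC, query end one day later.
end = datetime(2023, 6, 23, tzinfo=timezone.utc)
print(uptime_ms(1687392000, end))  # 86400000 (one day in milliseconds)
```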
+
+### List of Controllers by status
+
+The required tables for this chart include KubePodInventory and Perf.
+
+```kusto
+ let endDateTime = datetime('end time');
+ let startDateTime = datetime('start time');
+ let trendBinSize = 15m;
+ let metricLimitCounterName = 'cpuLimitNanoCores';
+ let metricUsageCounterName = 'cpuUsageNanoCores';
+
+ let primaryInventory = KubePodInventory
+| where TimeGenerated >= startDateTime
+| where TimeGenerated < endDateTime
+| where isnotempty(ClusterName)
+| where isnotempty(Namespace)
+| extend Node = Computer
+| where ClusterId =~ 'clusterResourceID' //update with resource ID
+| project TimeGenerated, ClusterId, ClusterName, Namespace, ServiceName, Node = Computer, ControllerName, Pod = Name, ContainerInstance = ContainerName, ContainerID, InstanceName, PerfJoinKey = strcat(ClusterId, '/', ContainerName), ReadySinceNow = format_timespan(endDateTime - ContainerCreationTimeStamp, 'ddd.hh:mm:ss.fff'), Restarts = ContainerRestartCount, Status = ContainerStatus, ContainerStatusReason = columnifexists('ContainerStatusReason', ''), ControllerKind = ControllerKind, PodStatus, ControllerId = strcat(ClusterId, '/', Namespace, '/', ControllerName);
+
+let podStatusRollup = primaryInventory
+| summarize arg_max(TimeGenerated, *) by Pod
+| project ControllerId, PodStatus, TimeGenerated
+| summarize count() by ControllerId, PodStatus = iif(TimeGenerated < ago(30m), 'Unknown', PodStatus)
+| summarize PodStatusList = makelist(pack('Status', PodStatus, 'Count', count_)) by ControllerId;
+
+let latestContainersByController = primaryInventory
+| where isnotempty(Node)
+| summarize arg_max(TimeGenerated, *) by PerfJoinKey
+| project ControllerId, PerfJoinKey;
+
+let filteredPerformance = Perf
+| where TimeGenerated >= startDateTime
+| where TimeGenerated < endDateTime
+| where ObjectName == 'K8SContainer'
+| where InstanceName startswith 'clusterResourceID' //update with resource ID
+| project TimeGenerated, CounterName, CounterValue, InstanceName, Node = Computer ;
+
+let metricByController = filteredPerformance
+| where CounterName =~ metricUsageCounterName
+| extend PerfJoinKey = InstanceName
+| summarize Value = percentile(CounterValue, 95) by PerfJoinKey, CounterName
+| join (latestContainersByController) on PerfJoinKey
+| summarize Value = sum(Value) by ControllerId, CounterName
+| project ControllerId, CounterName, AggregationValue = iff(CounterName =~ 'cpuUsageNanoCores', Value/1000000, Value);
+
+let containerCountByController = latestContainersByController
+| summarize ContainerCount = count() by ControllerId;
+
+let restartCountsByController = primaryInventory
+| summarize Restarts = max(Restarts) by ControllerId;
+
+let oldestRestart = primaryInventory
+| summarize ReadySinceNow = min(ReadySinceNow) by ControllerId;
+
+let trendLineByController = filteredPerformance
+| where CounterName =~ metricUsageCounterName
+| extend PerfJoinKey = InstanceName
+| summarize Value = percentile(CounterValue, 95) by bin(TimeGenerated, trendBinSize), PerfJoinKey, CounterName
+| order by TimeGenerated asc
+| join kind=leftouter (latestContainersByController) on PerfJoinKey
+| summarize Value=sum(Value) by ControllerId, TimeGenerated, CounterName
+| project TimeGenerated, Value = iff(CounterName =~ 'cpuUsageNanoCores', Value/1000000, Value), ControllerId
+| summarize TrendList = makelist(pack("timestamp", TimeGenerated, "value", Value)) by ControllerId;
+
+let latestLimit = filteredPerformance
+| where CounterName =~ metricLimitCounterName
+| extend PerfJoinKey = InstanceName
+| summarize arg_max(TimeGenerated, *) by PerfJoinKey
+| join kind=leftouter (latestContainersByController) on PerfJoinKey
+| summarize Value = sum(CounterValue) by ControllerId, CounterName
+| project ControllerId, LimitValue = iff(CounterName =~ 'cpuLimitNanoCores', Value/1000000, Value);
+
+let latestTimeGeneratedByController = primaryInventory
+| summarize arg_max(TimeGenerated, *) by ControllerId
+| project ControllerId, LastTimeGenerated = TimeGenerated;
+
+primaryInventory
+| distinct ControllerId, ControllerName, ControllerKind, Namespace
+| join kind=leftouter (podStatusRollup) on ControllerId
+| join kind=leftouter (metricByController) on ControllerId
+| join kind=leftouter (containerCountByController) on ControllerId
+| join kind=leftouter (restartCountsByController) on ControllerId
+| join kind=leftouter (oldestRestart) on ControllerId
+| join kind=leftouter (trendLineByController) on ControllerId
+| join kind=leftouter (latestLimit) on ControllerId
+| join kind=leftouter (latestTimeGeneratedByController) on ControllerId
+| project ControllerId, ControllerName, ControllerKind, PodStatusList, AggregationValue, ContainerCount = iif(isempty(ContainerCount), 0, ContainerCount), Restarts, ReadySinceNow, Node = '-', TrendList, LimitValue, LastTimeGenerated, Namespace
+| limit 250;
+```
+
+### List of Nodes by status
+
+The required tables for this chart include KubeNodeInventory, KubePodInventory, and Perf.
+
+```kusto
+ let endDateTime = datetime('end time');
+ let startDateTime = datetime('start time');
+ let binSize = 15m;
+ let limitMetricName = 'cpuCapacityNanoCores';
+ let usedMetricName = 'cpuUsageNanoCores';
+
+ let materializedNodeInventory = KubeNodeInventory
+| where TimeGenerated < endDateTime
+| where TimeGenerated >= startDateTime
+| project ClusterName, ClusterId, Node = Computer, TimeGenerated, Status, NodeName = Computer, NodeId = strcat(ClusterId, '/', Computer), Labels
+| where ClusterId =~ 'clusterResourceID'; //update with resource ID
+
+ let materializedPerf = Perf
+| where TimeGenerated < endDateTime
+| where TimeGenerated >= startDateTime
+| where ObjectName == 'K8SNode'
+| extend NodeId = InstanceName;
+
+ let materializedPodInventory = KubePodInventory
+| where TimeGenerated < endDateTime
+| where TimeGenerated >= startDateTime
+| where isnotempty(ClusterName)
+| where isnotempty(Namespace)
+| where ClusterId =~ 'clusterResourceID'; //update with resource ID
+
+ let inventoryOfCluster = materializedNodeInventory
+| summarize arg_max(TimeGenerated, Status) by ClusterName, ClusterId, NodeName, NodeId;
+
+ let labelsByNode = materializedNodeInventory
+| summarize arg_max(TimeGenerated, Labels) by ClusterName, ClusterId, NodeName, NodeId;
+
+ let countainerCountByNode = materializedPodInventory
+| project ContainerName, NodeId = strcat(ClusterId, '/', Computer)
+| distinct NodeId, ContainerName
+| summarize ContainerCount = count() by NodeId;
+
+ let latestUptime = materializedPerf
+| where CounterName == 'restartTimeEpoch'
+| summarize arg_max(TimeGenerated, CounterValue) by NodeId
+| extend UpTimeMs = datetime_diff('Millisecond', endDateTime, datetime_add('second', toint(CounterValue), make_datetime(1970,1,1)))
+| project NodeId, UpTimeMs;
+
+ let latestLimitOfNodes = materializedPerf
+| where CounterName == limitMetricName
+| summarize CounterValue = max(CounterValue) by NodeId
+| project NodeId, LimitValue = CounterValue;
+
+ let actualUsageAggregated = materializedPerf
+| where CounterName == usedMetricName
+| summarize Aggregation = percentile(CounterValue, 95) by NodeId //This line updates to the desired aggregation
+| project NodeId, Aggregation;
+
+ let aggregateTrendsOverTime = materializedPerf
+| where CounterName == usedMetricName
+| summarize TrendAggregation = percentile(CounterValue, 95) by NodeId, bin(TimeGenerated, binSize) //This line updates to the desired aggregation
+| project NodeId, TrendAggregation, TrendDateTime = TimeGenerated;
+
+ let unscheduledPods = materializedPodInventory
+| where isempty(Computer)
+| extend Node = Computer
+| where isempty(ContainerStatus)
+| where PodStatus == 'Pending'
+| order by TimeGenerated desc
+| take 1
+| project ClusterName, NodeName = 'unscheduled', LastReceivedDateTime = TimeGenerated, Status = 'unscheduled', ContainerCount = 0, UpTimeMs = '0', Aggregation = '0', LimitValue = '0', ClusterId;
+
+ let scheduledPods = inventoryOfCluster
+| join kind=leftouter (aggregateTrendsOverTime) on NodeId
+| extend TrendPoint = pack("TrendTime", TrendDateTime, "TrendAggregation", TrendAggregation)
+| summarize make_list(TrendPoint) by NodeId, NodeName, Status
+| join kind=leftouter (labelsByNode) on NodeId
+| join kind=leftouter (countainerCountByNode) on NodeId
+| join kind=leftouter (latestUptime) on NodeId
+| join kind=leftouter (latestLimitOfNodes) on NodeId
+| join kind=leftouter (actualUsageAggregated) on NodeId
+| project ClusterName, NodeName, ClusterId, list_TrendPoint, LastReceivedDateTime = TimeGenerated, Status, ContainerCount, UpTimeMs, Aggregation, LimitValue, Labels
+| limit 250;
+
+ union (scheduledPods), (unscheduledPods)
+| project ClusterName, NodeName, LastReceivedDateTime, Status, ContainerCount, UpTimeMs = UpTimeMs_long, Aggregation = Aggregation_real, LimitValue = LimitValue_real, list_TrendPoint, Labels, ClusterId
+```
 ## Resource logs

Resource logs for AKS are stored in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. You can distinguish different logs with the **Category** column. For a description of each category, see [AKS reference resource logs](../../aks/monitor-aks-reference.md). The following examples require a diagnostic extension to send resource logs for an AKS cluster to a Log Analytics workspace. For more information, see [Configure monitoring](../../aks/monitor-aks.md#configure-monitoring).
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
Previously updated : 05/11/2022 Last updated : 06/06/2023
This applies to the scenario where you have already enabled container insights f
>* The configuration change can take a few minutes to complete before it takes effect. All ama-logs pods in the cluster will restart. >* The restart is a rolling restart for all ama-logs pods. It won't restart all of them at the same time.
-## Multi-line logging in Container Insights
+## Multi-line logging in Container Insights (preview)
Azure Monitor Container insights now supports multiline logging. With this feature enabled, previously split container logs are stitched together and sent as single entries to the ContainerLogV2 table. Customers can see container log lines up to 64 KB (up from the existing 16 KB limit). If the stitched log line is larger than 64 KB, it gets truncated due to Log Analytics limits. The feature also adds support for .NET and Go stack traces, which appear as single entries instead of being split into multiple entries in the ContainerLogV2 table.

### Prerequisites
-Customers must enable *ContainerLogV2* for multi-line logging to work. Go here to [enable ContainerLogV2](/containers/container-insights-logging-v2#enable-the-containerlogv2-schema) in Container Insights.
+Customers must enable *ContainerLogV2* for multi-line logging to work. To do so, see how to [enable ContainerLogV2](./container-insights-logging-v2.md#enable-the-containerlogv2-schema) in Container insights.
### How to enable

This is currently a preview feature. Enable multi-line logging by setting the *enable_multiline_logs* flag to "true" in [the config map](https://github.com/microsoft/Docker-Provider/blob/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml#L49)
Multi-line logging can be enabled by setting *enable_multiline_logs* flag to "
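As a rough illustration (a hypothetical sketch, not the agent's actual implementation), the stitch-and-truncate behavior described above amounts to joining split fragments of one log line and capping the result at the 64 KB limit:

```python
# Hypothetical sketch of the multi-line stitching described above: previously
# split fragments of one log line are joined into a single entry, and the
# result is truncated at the 64 KB Log Analytics limit.
MAX_STITCHED_BYTES = 64 * 1024

def stitch(fragments: list) -> str:
    """Join split log fragments into one entry, truncating at 64 KB."""
    joined = "".join(fragments)
    return joined[:MAX_STITCHED_BYTES]

# A .NET-style stack trace split across fragments becomes one entry:
parts = ['Unhandled exception. System.InvalidOperationException: boom\n',
         '   at Example.Program.Main(String[] args)\n']
entry = stitch(parts)
print(len(entry) <= MAX_STITCHED_BYTES)  # True
```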
 ## Next steps

* Configure [Basic Logs](../logs/basic-logs-configure.md) for ContainerLogV2.
+* Learn how to [query data](./container-insights-log-query.md#container-logs) from ContainerLogV2.
azure-monitor Prometheus Metrics From Arc Enabled Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-from-arc-enabled-cluster.md
AzureMonitorMetrics.KubeStateMetrics.MetricAnnotationsAllowList="pods=[k8s-annot
AzureMonitorMetrics.KubeStateMetrics.MetricsLabelsAllowlist "namespaces=[k8s-label-1,k8s-label-n]" ```
-### Delete the extension instance
-The following command only deletes the extension instance. The Azure Monitor workspace and its data are not deleted.
-
-```azurecli
-az k8s-extension delete --name azuremonitor-metrics -g <cluster_resource_group> -c<cluster_name> -t connectedClusters
-```
 ### [Resource Manager](#tab/resource-manager)

### Prerequisites
az k8s-extension show \
```
+### Delete the extension instance
+
+To delete the extension instance, use the following CLI command:
+
+```azurecli
+az k8s-extension delete --name azuremonitor-metrics -g <cluster_resource_group> -c <cluster_name> -t connectedClusters
+```
+
+The command only deletes the extension instance. The Azure Monitor workspace and its data are not deleted.
 ## Disconnected clusters

If your cluster is disconnected from Azure for more than 48 hours, Azure Resource Graph won't have information about your cluster. As a result, your Azure Monitor Workspace may have incorrect information about your cluster state.
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Configure a table for Basic logs if:
| Event Hubs | [AZMSArchiveLogs](/azure/azure-monitor/reference/tables/AZMSArchiveLogs)<br>[AZMSAutoscaleLogs](/azure/azure-monitor/reference/tables/AZMSAutoscaleLogs)<br>[AZMSCustomerManagedKeyUserLogs](/azure/azure-monitor/reference/tables/AZMSCustomerManagedKeyUserLogs)<br>[AZMSKafkaCoordinatorLogs](/azure/azure-monitor/reference/tables/AZMSKafkaCoordinatorLogs)<br>[AZMSKafkaUserErrorLogs](/azure/azure-monitor/reference/tables/AZMSKafkaUserErrorLogs) | | Firewalls | [AZFWFlowTrace](/azure/azure-monitor/reference/tables/AZFWFlowTrace) | | Health Care APIs | [AHDSMedTechDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSMedTechDiagnosticLogs)<br>[AHDSDicomDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSDicomDiagnosticLogs)<br>[AHDSDicomAuditLogs](/azure/azure-monitor/reference/tables/AHDSDicomAuditLogs) |
+ | Key Vault | [AZKVAuditLogs](/azure/azure-monitor/reference/tables/AZKVAuditLogs)<br>[AZKVPolicyEvaluationDetailsLogs](/azure/azure-monitor/reference/tables/AZKVPolicyEvaluationDetailsLogs) |
| Kubernetes services | [AKSAudit](/azure/azure-monitor/reference/tables/AKSAudit)<br>[AKSAuditAdmin](/azure/azure-monitor/reference/tables/AKSAuditAdmin)<br>[AKSControlPlane](/azure/azure-monitor/reference/tables/AKSControlPlane) | | Media Services | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations)<br>[AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests)<br>[AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth)<br>[AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) | | Redis Cache Enterprise | [REDConnectionEvents](/azure/azure-monitor/reference/tables/REDConnectionEvents) |
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
description: Log Analytics workspace data export in Azure Monitor lets you conti
Previously updated : 02/09/2022 Last updated : 06/21/2023
Log Analytics workspace data export continuously exports data that's sent to you
 - Currently, data export isn't supported in China.

## Data completeness
-Data export is optimized to move large data volumes to your destinations. The export operation might fail if the destination doesn't have sufficient capacity or is unavailable. In the event of failure, the retry process continues for up to 12 hours. For more information about destination limits and recommended alerts, see [Create or update a data export rule](#create-or-update-a-data-export-rule). If the destinations are still unavailable after the retry period, the data is discarded. In certain cases, retry can cause duplication of a fraction of the exported records.
+Data export is optimized to move large volumes of data to your destinations. The export operation might fail if the destination doesn't have sufficient capacity or is unavailable. In the event of failure, the retry process continues for up to 12 hours. For more information about destination limits and recommended alerts, see [Create or update a data export rule](#create-or-update-a-data-export-rule). If the destinations are still unavailable after the retry period, the data is discarded. In certain cases, retry can cause duplication of a fraction of the exported records.
## Pricing model
-Data export charges are based on the volume of data exported measured in bytes. The size of data exported by Log Analytics Data Export is the number of bytes in the exported JSON-formatted data. Data volume is measured in GB (10^9 bytes).
+Data export charges are based on the number of bytes exported to destinations in JSON-formatted data, measured in GB (10^9 bytes). Size calculations in workspace queries don't correspond to export charges because they don't include the JSON formatting. You can use PowerShell to [calculate the total billing size of a blob container](../../storage/scripts/storage-blobs-container-calculate-billing-size-powershell.md).
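A minimal sketch of the billing arithmetic above, with made-up numbers: export volume is the byte count of the JSON-formatted data, converted to decimal gigabytes.

```python
# Sketch of the pricing arithmetic described above: export charges are
# measured in decimal gigabytes (10^9 bytes) of JSON-formatted exported data.
GB = 10**9  # decimal gigabyte, per the pricing model above

def exported_gb(json_bytes: int) -> float:
    """Convert exported JSON byte count to billable GB."""
    return json_bytes / GB

# Hypothetical month: 250 billion bytes of JSON-formatted data exported.
print(exported_gb(250_000_000_000))  # 250.0
```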
For more information, including the data export billing timeline, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Set Up Logs Ingestion Api Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/set-up-logs-ingestion-api-prerequisites.md
+
+ Title: Set up resources required to send data to Azure Monitor Logs using the Logs Ingestion API
+description: Run a PowerShell script to set up all resources required to send data to Azure Monitor using the Logs Ingestion API.
++++ Last updated : 06/12/2023+++
+# Set up resources required to send data to Azure Monitor Logs using the Logs Ingestion API
+
+This article provides a PowerShell script that sets up all of the resources you need before you can send data to Azure Monitor Logs using the [Logs ingestion API](logs-ingestion-api-overview.md).
+
+> [!NOTE]
+> As a Microsoft MVP, [Morten Waltorp Knudsen](https://mortenknudsen.net/) contributed to and provided material feedback for this article. For an example of how you can automate the setup and ongoing use of the Log Ingestion API, see Morten's [AzLogDcrIngestPS PowerShell module](https://github.com/KnudsenMorten/AzLogDcrIngestPS).
+
+## Create resources and permissions
+The script creates these resources, if they don't already exist:
+
+- A Log Analytics workspace and a resource group for the Log Analytics workspace.
+
+ You probably already have a Log Analytics workspace, in which case, provide the workspace details so the script sets up the other resources in the same region as the workspace.
+
+- An Azure AD application to authenticate against the API and:
+ - A service principal on the Azure AD application
+ - A secret for the Azure AD application
+- A data collection endpoint (DCE) and a resource group for the data collection endpoint, in same region as Log Analytics workspace, to receive data.
+- A resource group for data collection rules (DCR) in the same region as the Log Analytics workspace.
+
+The script also grants the app `Contributor` permissions to:
+
+- The Log Analytics workspace
+- The resource group for data collection rules
+- The resource group for data collection endpoints
+
+## PowerShell script
++
+```powershell
+#
+# Prerequisite functions
+#
+
+Write-Output "Checking needed functions ... Please Wait !"
+$ModuleCheck = Get-Module -Name Az.Resources -ListAvailable -ErrorAction SilentlyContinue
+If (!($ModuleCheck))
+ {
+ Write-Output "Installing Az-module in CurrentUser scope ... Please Wait !"
+ Install-module -Name Az -Force -Scope CurrentUser
+ }
+
+$ModuleCheck = Get-Module -Name Microsoft.Graph -ListAvailable -ErrorAction SilentlyContinue
+If (!($ModuleCheck))
+ {
+ Write-Output "Installing Microsoft.Graph in CurrentUser scope ... Please Wait !"
+ Install-module -Name Microsoft.Graph -Force -Scope CurrentUser
+ }
+
+<#
+ Install-module Az -Scope CurrentUser
+ Install-module Microsoft.Graph -Scope CurrentUser
+ install-module Az.portal -Scope CurrentUser
+
+ Import-module Az -Scope CurrentUser
+ Import-module Az.Accounts -Scope CurrentUser
+ Import-module Az.Resources -Scope CurrentUser
+ Import-module Microsoft.Graph.Applications -Scope CurrentUser
+ Import-Module Microsoft.Graph.DeviceManagement.Enrolment -Scope CurrentUser
+#>
++
+#-
+# (1) Variables (Prerequisites, environment setup)
+#-
+$TenantId = "<your tenant ID>"
+
+# Azure app registration
+$AzureAppName = "Log-Ingestion-App"
+$AzAppSecretName = "Log-Ingestion-App secret"
+
+# Log Analytics workspace
+$LogAnalyticsSubscription = "<Log Analytics workspace ID>"
+$LogAnalyticsResourceGroup = "<Log Analytics workspace resource group>"
+$LoganalyticsWorkspaceName = "<Log Analytics workspace name>"
+$LoganalyticsLocation = "<Log Analytics workspace location>"
+
+# Data collection endpoint
+$AzDceName = "dce-log-ingestion-demo"
+$AzDceResourceGroup = "rg-dce-log-ingestion-demo"
+
+# Data collection rule
+$AzDcrResourceGroup = "rg-dcr-log-ingestion-demo"
+$AzDcrPrefix = "demo"
+
+$VerbosePreference = "SilentlyContinue" # "Continue"
+
+#-
+# (2) Connectivity
+#-
+ # Connect to Azure
+ Connect-AzAccount -Tenant $TenantId -WarningAction SilentlyContinue
+
+ # Get access token
+ $AccessToken = Get-AzAccessToken -ResourceUrl https://management.azure.com/
+ $AccessToken = $AccessToken.Token
+
+ # Build headers for Azure REST API with access token
+ $Headers = @{
+ "Authorization"="Bearer $($AccessToken)"
+ "Content-Type"="application/json"
+ }
++
+ # Connect to Microsoft Graph
+ $MgScope = @(
+ "Application.ReadWrite.All",`
+ "Directory.Read.All",`
+ "Directory.AccessAsUser.All",
+ "RoleManagement.ReadWrite.Directory"
+ )
+ Connect-MgGraph -TenantId $TenantId -ForceRefresh -Scopes $MgScope
+
+#-
+# (3) Prerequisites - deployment of environment (if missing)
+#-
+
+ <#
+ This section deploys all resources needed for ingesting logs using the Log Ingestion API.
+
+ The deployment includes the following steps:
+
+ (1) Create a resource group for the Log Analytics workspace
+ (2) Create the Log Analytics workspace
+ (3) Create an Azure App registration to send data to Azure Monitor Logs
+ (4) Create a service principal on the app
+ (5) Create a secret for the app
+ (6) Create a resource group for the data collection endpoint (DCE) in the same region as the Log Analytics workspace
+ (7) Create a resource group for data collection rules (DCR) in the same region as the Log Analytics workspace
+ (8) Create data collection endpoint (DCE) in same region as Log Analytics workspace
+ (9) Grant the Azure app permissions to the Log Analytics workspace
+ (10) Grant the Azure app permissions to the resource group for data collection rules (DCR)
+ (11) Grant the Azure app permissions to the resource group for data collection endpoints (DCE)
+ #>
+
+ #-
+ # Azure context
+ #-
+
+ Write-Output ""
+ Write-Output "Validating Azure context is subscription [ $($LogAnalyticsSubscription) ]"
+ $AzContext = Get-AzContext
+ If ($AzContext.Subscription -ne $LogAnalyticsSubscription )
+ {
+ Write-Output ""
+ Write-Output "Switching Azure context to subscription [ $($LogAnalyticsSubscription) ]"
+ $AzContext = Set-AzContext -Subscription $LogAnalyticsSubscription -Tenant $TenantId
+ }
+
+ #-
+ # Create the resource group for Log Analytics workspace
+ #-
+
+ Write-Output ""
+ Write-Output "Validating Azure resource group exist [ $($LogAnalyticsResourceGroup) ]"
+ try {
+ Get-AzResourceGroup -Name $LogAnalyticsResourceGroup -ErrorAction Stop
+ } catch {
+ Write-Output ""
+ Write-Output "Creating Azure resource group [ $($LogAnalyticsResourceGroup) ]"
+ New-AzResourceGroup -Name $LogAnalyticsResourceGroup -Location $LoganalyticsLocation
+ }
+
+ #-
+ # Create the workspace
+ #-
+
+ Write-Output ""
+ Write-Output "Validating Log Analytics workspace exist [ $($LoganalyticsWorkspaceName) ]"
+ try {
+ $LogWorkspaceInfo = Get-AzOperationalInsightsWorkspace -Name $LoganalyticsWorkspaceName -ResourceGroupName $LogAnalyticsResourceGroup -ErrorAction Stop
+ } catch {
+ Write-Output ""
+ Write-Output "Creating Log Analytics workspace [ $($LoganalyticsWorkspaceName) ] in $LogAnalyticsResourceGroup"
+ New-AzOperationalInsightsWorkspace -Location $LoganalyticsLocation -Name $LoganalyticsWorkspaceName -Sku PerGB2018 -ResourceGroupName $LogAnalyticsResourceGroup
+ }
+
+ #-
+ # Get workspace details
+ #-
+
+ $LogWorkspaceInfo = Get-AzOperationalInsightsWorkspace -Name $LoganalyticsWorkspaceName -ResourceGroupName $LogAnalyticsResourceGroup
+
+ $LogAnalyticsWorkspaceResourceId = $LogWorkspaceInfo.ResourceId
+
+ #-
+ # Create Azure app registration
+ #-
+
+ Write-Output ""
+ Write-Output "Validating Azure App [ $($AzureAppName) ]"
+ $AppCheck = Get-MgApplication -Filter "DisplayName eq '$AzureAppName'" -ErrorAction Stop
+ If ($AppCheck -eq $null)
+ {
+ Write-Output ""
+ Write-Output "Creating Azure App [ $($AzureAppName) ]"
+ $AzureApp = New-MgApplication -DisplayName $AzureAppName
+ }
+
+ #-
+ # Create service principal on Azure app
+ #-
+
+ Write-Output ""
+ Write-Output "Validating Azure Service Principal on App [ $($AzureAppName) ]"
+ $AppInfo = Get-MgApplication -Filter "DisplayName eq '$AzureAppName'"
+
+ $AppId = $AppInfo.AppId
+ $ObjectId = $AppInfo.Id
+
+ $ServicePrincipalCheck = Get-MgServicePrincipal -Filter "AppId eq '$AppId'"
+ If ($ServicePrincipalCheck -eq $null)
+ {
+ Write-Output ""
+ Write-Output "Creating Azure Service Principal on App [ $($AzureAppName) ]"
+ $ServicePrincipal = New-MgServicePrincipal -AppId $AppId
+ }
+
+ #-
+ # Create secret on Azure app
+ #-
+
+ Write-Output ""
+ Write-Output "Validating Azure Secret on App [ $($AzureAppName) ]"
+ $AppInfo = Get-MgApplication -Filter "AppId eq '$AppId'"
+
+ $AppId = $AppInfo.AppId
+ $ObjectId = $AppInfo.Id
+
+ If ($AzAppSecretName -notin $AppInfo.PasswordCredentials.DisplayName)
+ {
+ Write-Output ""
+ Write-Output "Creating Azure Secret on App [ $($AzureAppName) ]"
+
+ $passwordCred = @{
+ displayName = $AzAppSecretName
+ endDateTime = (Get-Date).AddYears(1)
+ }
+
+ $AzAppSecret = (Add-MgApplicationPassword -applicationId $ObjectId -PasswordCredential $passwordCred).SecretText
+ Write-Output ""
+ Write-Output "Secret with name [ $($AzAppSecretName) ] created on app [ $($AzureAppName) ]"
+ Write-Output $AzAppSecret
+ Write-Output ""
+ Write-Output "AppId for app [ $($AzureAppName) ] is"
+ Write-Output $AppId
+ }
+
+ #-
+ # Create a resource group for data collection endpoints (DCE) in the same region as the Log Analytics workspace
+ #-
+
+ Write-Output ""
+ Write-Output "Validating Azure resource group exists [ $($AzDceResourceGroup) ]"
+ try {
+ Get-AzResourceGroup -Name $AzDceResourceGroup -ErrorAction Stop
+ } catch {
+ Write-Output ""
+ Write-Output "Creating Azure resource group [ $($AzDceResourceGroup) ]"
+ New-AzResourceGroup -Name $AzDceResourceGroup -Location $LoganalyticsLocation
+ }
+
+ #-
+ # Create a resource group for data collection rules (DCR) in the same region as the Log Analytics workspace
+ #-
+
+ Write-Output ""
+ Write-Output "Validating Azure resource group exists [ $($AzDcrResourceGroup) ]"
+ try {
+ Get-AzResourceGroup -Name $AzDcrResourceGroup -ErrorAction Stop
+ } catch {
+ Write-Output ""
+ Write-Output "Creating Azure resource group [ $($AzDcrResourceGroup) ]"
+ New-AzResourceGroup -Name $AzDcrResourceGroup -Location $LoganalyticsLocation
+ }
+
+ #-
+ # Create data collection endpoint
+ #-
+
+ Write-Output ""
+ Write-Output "Validating data collection endpoint exists [ $($AzDceName) ]"
+
+ $DceUri = "https://management.azure.com" + "/subscriptions/" + $LogAnalyticsSubscription + "/resourceGroups/" + $AzDceResourceGroup + "/providers/Microsoft.Insights/dataCollectionEndpoints/" + $AzDceName + "?api-version=2022-06-01"
+ Try
+ {
+ Invoke-RestMethod -Uri $DceUri -Method GET -Headers $Headers
+ }
+ Catch
+ {
+ Write-Output ""
+ Write-Output "Creating/updating DCE [ $($AzDceName) ]"
+
+ $DceObject = [pscustomobject][ordered]@{
+ properties = @{
+ description = "DCE for LogIngest to LogAnalytics $LoganalyticsWorkspaceName"
+ networkAcls = @{
+ publicNetworkAccess = "Enabled"
+
+ }
+ }
+ location = $LogAnalyticsLocation
+ name = $AzDceName
+ type = "Microsoft.Insights/dataCollectionEndpoints"
+ }
+
+ $DcePayload = $DceObject | ConvertTo-Json -Depth 20
+
+ $DceUri = "https://management.azure.com" + "/subscriptions/" + $LogAnalyticsSubscription + "/resourceGroups/" + $AzDceResourceGroup + "/providers/Microsoft.Insights/dataCollectionEndpoints/" + $AzDceName + "?api-version=2022-06-01"
+
+ Try
+ {
+ Invoke-WebRequest -Uri $DceUri -Method PUT -Body $DcePayload -Headers $Headers
+ }
+ Catch
+ {
+ # Intentionally ignored - errors from the create/update request are not fatal here
+ }
+ }
+
+ #-
+ # Sleeping 1 min to let Azure AD replicate before delegation
+ #-
+
+ # Write-Output "Sleeping 1 min to let Azure AD replicate before doing delegation"
+ # Start-Sleep -s 60
+
+ #-
+ # Grant the Azure app permissions to the Log Analytics workspace
+ # Needed for table management - not needed for log ingestion. For simplicity, it's set up here since a single app is used
+ #-
+
+ Write-Output ""
+ Write-Output "Setting Contributor permissions for app [ $($AzureAppName) ] on the Log Analytics workspace [ $($LoganalyticsWorkspaceName) ]"
+
+ $LogWorkspaceInfo = Get-AzOperationalInsightsWorkspace -Name $LoganalyticsWorkspaceName -ResourceGroupName $LogAnalyticsResourceGroup
+
+ $LogAnalyticsWorkspaceResourceId = $LogWorkspaceInfo.ResourceId
+
+ $ServicePrincipalObjectId = (Get-MgServicePrincipal -Filter "AppId eq '$AppId'").Id
+ $DcrRgResourceId = (Get-AzResourceGroup -Name $AzDcrResourceGroup).ResourceId
+
+ # Contributor on Log Analytics workspace
+ $guid = (new-guid).guid
+ $ContributorRoleId = "b24988ac-6180-42a0-ab88-20f7382dd24c" # Contributor
+ $roleDefinitionId = "/subscriptions/$($LogAnalyticsSubscription)/providers/Microsoft.Authorization/roleDefinitions/$($ContributorRoleId)"
+ $roleUrl = "https://management.azure.com" + $LogAnalyticsWorkspaceResourceId + "/providers/Microsoft.Authorization/roleAssignments/$($Guid)?api-version=2018-07-01"
+ $roleBody = @{
+ properties = @{
+ roleDefinitionId = $roleDefinitionId
+ principalId = $ServicePrincipalObjectId
+ scope = $LogAnalyticsWorkspaceResourceId
+ }
+ }
+ $jsonRoleBody = $roleBody | ConvertTo-Json -Depth 6
+
+ $result = try
+ {
+ Invoke-RestMethod -Uri $roleUrl -Method PUT -Body $jsonRoleBody -headers $Headers -ErrorAction SilentlyContinue
+ }
+ catch
+ {
+ # Intentionally ignored - the role assignment may already exist
+ }
+
+ #-
+ # Grant the Azure app permissions to the DCR resource group
+ #-
+
+ Write-Output ""
+ Write-Output "Setting Contributor permissions for app [ $($AzureAppName) ] on resource group [ $($AzDcrResourceGroup) ]"
+
+ $ServicePrincipalObjectId = (Get-MgServicePrincipal -Filter "AppId eq '$AppId'").Id
+ $AzDcrRgResourceId = (Get-AzResourceGroup -Name $AzDcrResourceGroup).ResourceId
+
+ # Contributor
+ $guid = (new-guid).guid
+ $ContributorRoleId = "b24988ac-6180-42a0-ab88-20f7382dd24c" # Contributor
+ $roleDefinitionId = "/subscriptions/$($LogAnalyticsSubscription)/providers/Microsoft.Authorization/roleDefinitions/$($ContributorRoleId)"
+ $roleUrl = "https://management.azure.com" + $AzDcrRgResourceId + "/providers/Microsoft.Authorization/roleAssignments/$($Guid)?api-version=2018-07-01"
+ $roleBody = @{
+ properties = @{
+ roleDefinitionId = $roleDefinitionId
+ principalId = $ServicePrincipalObjectId
+ scope = $AzDcrRgResourceId
+ }
+ }
+ $jsonRoleBody = $roleBody | ConvertTo-Json -Depth 6
+
+ $result = try
+ {
+ Invoke-RestMethod -Uri $roleUrl -Method PUT -Body $jsonRoleBody -headers $Headers -ErrorAction SilentlyContinue
+ }
+ catch
+ {
+ }
+
+ Write-Output ""
+ Write-Output "Setting Monitoring Metrics Publisher permissions for app [ $($AzureAppName) ] on resource group [ $($AzDcrResourceGroup) ]"
+
+ # Monitoring Metrics Publisher
+ $guid = (new-guid).guid
+ $monitorMetricsPublisherRoleId = "3913510d-42f4-4e42-8a64-420c390055eb"
+ $roleDefinitionId = "/subscriptions/$($LogAnalyticsSubscription)/providers/Microsoft.Authorization/roleDefinitions/$($monitorMetricsPublisherRoleId)"
+ $roleUrl = "https://management.azure.com" + $AzDcrRgResourceId + "/providers/Microsoft.Authorization/roleAssignments/$($Guid)?api-version=2018-07-01"
+ $roleBody = @{
+ properties = @{
+ roleDefinitionId = $roleDefinitionId
+ principalId = $ServicePrincipalObjectId
+ scope = $AzDcrRgResourceId
+ }
+ }
+ $jsonRoleBody = $roleBody | ConvertTo-Json -Depth 6
+
+ $result = try
+ {
+ Invoke-RestMethod -Uri $roleUrl -Method PUT -Body $jsonRoleBody -headers $Headers -ErrorAction SilentlyContinue
+ }
+ catch
+ {
+ }
+
+ #-
+ # Grant the Azure app permissions to the DCE resource group
+ #-
+
+ Write-Output ""
+ Write-Output "Setting Contributor permissions for app [ $($AzureAppName) ] on resource group [ $($AzDceResourceGroup) ]"
+
+ $ServicePrincipalObjectId = (Get-MgServicePrincipal -Filter "AppId eq '$AppId'").Id
+ $AzDceRgResourceId = (Get-AzResourceGroup -Name $AzDceResourceGroup).ResourceId
+
+ # Contributor
+ $guid = (new-guid).guid
+ $ContributorRoleId = "b24988ac-6180-42a0-ab88-20f7382dd24c" # Contributor
+ $roleDefinitionId = "/subscriptions/$($LogAnalyticsSubscription)/providers/Microsoft.Authorization/roleDefinitions/$($ContributorRoleId)"
+ $roleUrl = "https://management.azure.com" + $AzDceRgResourceId + "/providers/Microsoft.Authorization/roleAssignments/$($Guid)?api-version=2018-07-01"
+ $roleBody = @{
+ properties = @{
+ roleDefinitionId = $roleDefinitionId
+ principalId = $ServicePrincipalObjectId
+ scope = $AzDceRgResourceId
+ }
+ }
+ $jsonRoleBody = $roleBody | ConvertTo-Json -Depth 6
+
+ $result = try
+ {
+ Invoke-RestMethod -Uri $roleUrl -Method PUT -Body $jsonRoleBody -headers $Headers -ErrorAction SilentlyContinue
+ }
+ catch
+ {
+ }
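+
+ #-
+ # Alternative (sketch): the three role assignments above can also be created with the
+ # built-in New-AzRoleAssignment cmdlet instead of raw REST calls against the
+ # roleAssignments API. Uncomment to use; assumes the variables set earlier
+ # ($ServicePrincipalObjectId, $AzDcrRgResourceId, $AzDceRgResourceId) are in scope.
+ #-
+
+ # New-AzRoleAssignment -ObjectId $ServicePrincipalObjectId -RoleDefinitionName "Contributor" -Scope $AzDcrRgResourceId
+ # New-AzRoleAssignment -ObjectId $ServicePrincipalObjectId -RoleDefinitionName "Monitoring Metrics Publisher" -Scope $AzDcrRgResourceId
+ # New-AzRoleAssignment -ObjectId $ServicePrincipalObjectId -RoleDefinitionName "Contributor" -Scope $AzDceRgResourceId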
+
+ #--
+ # Summarize environment details
+ #--
+
+ # Azure App
+ Write-Output ""
+ Write-Output "Tenant Id:"
+ Write-Output $TenantId
+
+ # Azure App
+ $AppInfo = Get-MgApplication -Filter "DisplayName eq '$AzureAppName'"
+ $AppId = $AppInfo.AppId
+ $ObjectId = $AppInfo.Id
+
+ Write-Output ""
+ Write-Output "Log Ingestion Azure App name:"
+ Write-Output $AzureAppName
+
+ Write-Output ""
+ Write-Output "Log Ingestion Azure App ID:"
+ Write-Output $AppId
+ Write-Output ""
+
+ If ($AzAppSecret)
+ {
+ Write-Output "Log Ingestion Azure App secret:"
+ Write-Output $AzAppSecret
+ }
+ Else
+ {
+ Write-Output "Log Ingestion Azure App secret:"
+ Write-Output "N/A (a new secret must be created)"
+ }
+
+ # Azure Service Principal for App
+ $ServicePrincipalObjectId = (Get-MgServicePrincipal -Filter "AppId eq '$AppId'").Id
+ Write-Output ""
+ Write-Output "Log Ingestion service principal Object ID for app:"
+ Write-Output $ServicePrincipalObjectId
+
+ # Azure Loganalytics
+ Write-Output ""
+ $LogWorkspaceInfo = Get-AzOperationalInsightsWorkspace -Name $LoganalyticsWorkspaceName -ResourceGroupName $LogAnalyticsResourceGroup
+ $LogAnalyticsWorkspaceResourceId = $LogWorkspaceInfo.ResourceId
+
+ Write-Output ""
+ Write-Output "Log Analytics workspace resource ID:"
+ Write-Output $LogAnalyticsWorkspaceResourceId
+
+ # DCE
+ $DceUri = "https://management.azure.com" + "/subscriptions/" + $LogAnalyticsSubscription + "/resourceGroups/" + $AzDceResourceGroup + "/providers/Microsoft.Insights/dataCollectionEndpoints/" + $AzDceName + "?api-version=2022-06-01"
+ $DceObj = Invoke-RestMethod -Uri $DceUri -Method GET -Headers $Headers
+
+ $AzDceLogIngestionUri = $DceObj.properties.logsIngestion[0].endpoint
+
+ Write-Output ""
+ Write-Output "Data collection endpoint name:"
+ Write-Output $AzDceName
+
+ Write-Output ""
+ Write-Output "Data collection endpoint Log Ingestion URI:"
+ Write-Output $AzDceLogIngestionUri
+ Write-Output ""
+ Write-Output "-"
+```
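+
+The `Invoke-RestMethod` and `Invoke-WebRequest` calls in the script rely on a `$Headers` variable defined earlier in the full script. A minimal sketch, assuming the signed-in Az context should be used, of how such an authorization header table can be built (variable names illustrative):
+
+```azurepowershell
+# Acquire an Azure Resource Manager access token for the current context and
+# build the Authorization header used by the REST calls in the script.
+$Token = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token
+$Headers = @{
+    "Authorization" = "Bearer $Token"
+    "Content-Type"  = "application/json"
+}
+```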
+
+## Next steps
+
+- [Learn more about data collection rules](../essentials/data-collection-rule-overview.md)
+- [Learn more about writing transformation queries](../essentials//data-collection-transformations.md)
+
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
Several other features don't have a direct cost, but instead you pay for the ing
| Logs | Ingestion, retention, and export of data in Log Analytics workspaces and legacy Application Insights resources. For most customers, this category typically incurs the bulk of Azure Monitor charges. There's no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for logs can vary significantly on the configuration that you choose. For information on how charges for logs data are calculated and the different pricing tiers available, see [Azure Monitor logs pricing details](logs/cost-logs.md). | | Platform logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there's a charge for the workspace data ingestion and collection. | | Metrics | There's no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There's a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
+| Prometheus Metrics | The service is currently free to use, with billing set to begin on 8/1/2023. Pricing for Azure Monitor managed service for Prometheus consists of data ingestion priced at $0.16/10 million samples ingested and metric queries priced at $0.001/10 million samples processed. Data is retained for 18 months at no extra charge. |
| Alerts | Charges are based on the type and number of signals used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [Log alerts](alerts/alerts-types.md#log-alerts) configured for [at-scale monitoring](alerts/alerts-types.md#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. | | Web tests | There's a cost for [standard web tests](app/availability-standard-tests.md) and [multistep web tests](/previous-versions/azure/azure-monitor/app/availability-multistep) in Application Insights. Multistep web tests have been deprecated.
azure-monitor Workbook Templates Move Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbook-templates-move-region.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 #Customer intent: As an Azure service administrator, I want to move my resources to another Azure region
azure-monitor Workbooks Access Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-access-troubleshooting-guide.md
ibiza Previously updated : 09/08/2022 Last updated : 06/21/2023 # Access deprecated Troubleshooting guides in Azure Workbooks
azure-monitor Workbooks Automate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-automate.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Programmatically manage workbooks
azure-monitor Workbooks Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-bring-your-own-storage.md
Title: Azure Monitor workbooks bring your own storage description: Learn how to secure your workbook by saving the workbook content to your storage. - ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Bring your own storage to save workbooks
azure-monitor Workbooks Chart Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-chart-visualizations.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Chart visualizations
azure-monitor Workbooks Commonly Used Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-commonly-used-components.md
Previously updated : 07/05/2022 Last updated : 06/21/2023
azure-monitor Workbooks Composite Bar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-composite-bar.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Composite bar renderer
azure-monitor Workbooks Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-configurations.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Workbook configuration options
azure-monitor Workbooks Create Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-create-workbook.md
Previously updated : 05/30/2022 Last updated : 06/21/2023
azure-monitor Workbooks Criteria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-criteria.md
Previously updated : 07/05/2022 Last updated : 06/21/2023
azure-monitor Workbooks Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-data-sources.md
Previously updated : 07/05/2022 Last updated : 06/21/2023
azure-monitor Workbooks Dropdowns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-dropdowns.md
Title: Azure Monitor workbook dropdown parameters
description: Simplify complex reporting with prebuilt and custom parameterized workbooks containing dropdown parameters. - ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Workbook dropdown parameters
azure-monitor Workbooks Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-getting-started.md
description: Learn how to perform the commonly used tasks in workbooks.
Previously updated : 05/30/2022 Last updated : 06/21/2023
azure-monitor Workbooks Graph Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-graph-visualizations.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Graph visualizations
azure-monitor Workbooks Grid Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-grid-visualizations.md
Title: Azure Monitor workbook grid visualizations
description: Learn about all the Azure Monitor workbook grid visualizations. Previously updated : 07/05/2022 Last updated : 06/21/2023
azure-monitor Workbooks Honey Comb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-honey-comb.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Honeycomb visualizations
azure-monitor Workbooks Jsonpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-jsonpath.md
ibiza Previously updated : 02/19/2023 Last updated : 06/21/2023 # Use JSONPath to transform JSON data in workbooks
azure-monitor Workbooks Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-limits.md
Previously updated : 07/05/2022 Last updated : 06/21/2023
azure-monitor Workbooks Link Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-link-actions.md
Title: Azure Workbooks link actions
description: This article explains how to use link actions in Azure Workbooks. Previously updated : 07/05/2022 Last updated : 06/21/2023
azure-monitor Workbooks Map Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-map-visualizations.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Map visualization
azure-monitor Workbooks Move Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-move-region.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 #Customer intent: As an Azure service administrator, I want to move my resources to another Azure region
azure-monitor Workbooks Multi Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-multi-value.md
Previously updated : 07/05/2022 Last updated : 06/21/2023
azure-monitor Workbooks Options Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-options-group.md
Previously updated : 07/05/2022 Last updated : 06/21/2023
azure-monitor Workbooks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-overview.md
Previously updated : 07/05/2022 Last updated : 06/21/2023
azure-monitor Workbooks Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-parameters.md
Title: Create workbook parameters
description: Learn how to add parameters to your workbook to collect input from the consumers and reference it in other parts of the workbook. - ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Workbook parameters
azure-monitor Workbooks Renderers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-renderers.md
Title: Azure Workbooks rendering options
description: Learn about all the Azure Monitor workbook rendering options. Previously updated : 07/05/2022 Last updated : 06/21/2023
azure-monitor Workbooks Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-resources.md
description: Learn how to use resource parameters to allow picking of resources
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Workbook resource parameters
azure-monitor Workbooks Retrieve Legacy Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-retrieve-legacy-workbooks.md
ibiza Previously updated : 09/08/2022 Last updated : 06/21/2023
azure-monitor Workbooks Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-samples.md
Previously updated : 07/05/2022 Last updated : 06/21/2023
azure-monitor Workbooks Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-templates.md
Previously updated : 07/05/2022 Last updated : 06/21/2023
azure-monitor Workbooks Text Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-text-visualizations.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Text Visualization
azure-monitor Workbooks Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-text.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Workbook text parameters
azure-monitor Workbooks Tile Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-tile-visualizations.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Tile visualizations
azure-monitor Workbooks Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-time.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Workbook time parameters
azure-monitor Workbooks Tree Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-tree-visualizations.md
ibiza Previously updated : 07/05/2022 Last updated : 06/21/2023 # Tree visualizations
azure-monitor Workbooks View Designer Conversion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-view-designer-conversion-overview.md
Previously updated : 07/22/2022 Last updated : 06/21/2023
azure-monitor Workbooks View Designer Conversions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-view-designer-conversions.md
Previously updated : 07/22/2022 Last updated : 06/21/2023
azure-monitor Workbooks Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-visualizations.md
Previously updated : 07/05/2022 Last updated : 06/21/2023
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
Before creating an SMB volume, you need to create an Active Directory connection
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
+ * **Encryption key source**
+ You can select `Microsoft Managed Key` or `Customer Managed Key`. See [Configure customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md) and [Azure NetApp Files double encryption at rest](double-encryption-at-rest.md) about using this field.
+ * **Availability zone** This option lets you deploy the new volume in the logical availability zone that you specify. Select an availability zone where Azure NetApp Files resources are present. For details, see [Manage availability zone volume placement](manage-availability-zone-volume-placement.md).
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
+ * **Encryption key source**
+ You can select `Microsoft Managed Key` or `Customer Managed Key`. See [Configure customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md) and [Azure NetApp Files double encryption at rest](double-encryption-at-rest.md) about using this field.
+ * **Availability zone** This option lets you deploy the new volume in the logical availability zone that you specify. Select an availability zone where Azure NetApp Files resources are present. For details, see [Manage availability zone volume placement](manage-availability-zone-volume-placement.md).
azure-netapp-files Azure Netapp Files Set Up Capacity Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md
na Previously updated : 02/21/2023 Last updated : 06/15/2023 # Create a capacity pool for Azure NetApp Files
You must have already [created a NetApp account](azure-netapp-files-create-netap
* **Size** Specify the size of the capacity pool that you are purchasing. The minimum capacity pool size is 2 TiB. You can change the size of a capacity pool in 1-TiB increments.-
+
>[!NOTE] >[!INCLUDE [Limitations for capacity pool minimum of 2 TiB](includes/2-tib-capacity-pool.md)]
You must have already [created a NetApp account](azure-netapp-files-create-netap
> [!IMPORTANT] > Setting **QoS type** to **Manual** is permanent. You cannot convert a manual QoS capacity pool to use auto QoS. However, you can convert an auto QoS capacity pool to use manual QoS. See [Change a capacity pool to use manual QoS](manage-manual-qos-capacity-pool.md#change-to-qos).
- :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-new-capacity-pool.png" alt-text="Screenshot of new capacity pool options.":::
+ * **Encryption type** <a name="encryption_type"></a>
+ Specify whether you want the volumes in this capacity pool to use **single** or **double** encryption. See [Azure NetApp Files double encryption at rest](double-encryption-at-rest.md) for details.
+ > [!IMPORTANT]
+ > Azure NetApp Files double encryption at rest supports [Standard network features](azure-netapp-files-network-topologies.md#configurable-network-features), but not Basic network features. See [considerations](double-encryption-at-rest.md#considerations) for using Azure NetApp Files double encryption at rest.
+ >
+ > After the capacity pool is created, you can't modify the setting (switching between `single` or `double`) for the encryption type.
+
+ Azure NetApp Files double encryption at rest is currently in preview. If you're using this feature for the first time, you must first register the feature.
+ 1. Register the feature:
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFDoubleEncryption
+ ```
+ 2. Check the status of the feature registration. `RegistrationState` may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing.
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFDoubleEncryption
+ ```
+ You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
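+
+   As a sketch, the equivalent Azure CLI calls (using the documented `az feature` flags) are:
+
+   ```azurecli-interactive
+   az feature register --namespace Microsoft.NetApp --name ANFDoubleEncryption
+   az feature show --namespace Microsoft.NetApp --name ANFDoubleEncryption
+   ```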
+
+ :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-new-capacity-pool.png" alt-text="Screenshot showing the New Capacity Pool window.":::
4. Select **Create**.
+ The **Capacity pools** page shows the configurations for the capacity pool.
+
## Next steps - [Storage Hierarchy](azure-netapp-files-understand-storage-hierarchy.md)
You must have already [created a NetApp account](azure-netapp-files-create-netap
- [Azure NetApp Files pricing page](https://azure.microsoft.com/pricing/details/storage/netapp/) - [Manage a manual QoS capacity pool](manage-manual-qos-capacity-pool.md) - [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md)
+- [Azure NetApp Files double encryption at rest](double-encryption-at-rest.md)
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md
na Previously updated : 02/28/2023 Last updated : 06/22/2023 # Create a dual-protocol volume for Azure NetApp Files
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
* The **Allow local NFS users with LDAP** option in Active Directory connections intends to provide occasional and temporary access to local users. When this option is enabled, user authentication and lookup from the LDAP server stop working, and the number of group memberships that Azure NetApp Files will support will be limited to 16. As such, you should keep this option *disabled* on Active Directory connections, except for the occasion when a local user needs to access LDAP-enabled volumes. In that case, you should disable this option as soon as local user access is no longer required for the volume. See [Allow local NFS users with LDAP to access a dual-protocol volume](#allow-local-nfs-users-with-ldap-to-access-a-dual-protocol-volume) about managing local user access. * Ensure that the NFS client is up to date and running the latest updates for the operating system. * Dual-protocol volumes support both Active Directory Domain Services (AD DS) and Azure Active Directory Domain Services (AADDS).
-* Dual-protocol volumes do not support the use of LDAP over TLS with AADDS. See [LDAP over TLS considerations](configure-ldap-over-tls.md#considerations).
+* Dual-protocol volumes do not support the use of LDAP over TLS with Azure Active Directory Domain Services ([Azure AD DS](../active-directory-domain-services/overview.md)). LDAP over TLS is supported with Active Directory Domain Services (AD DS). See [LDAP over TLS considerations](configure-ldap-over-tls.md#considerations).
* The NFS version used by a dual-protocol volume can be NFSv3 or NFSv4.1. The following considerations apply: * Dual protocol does not support the Windows ACLS extended attributes `set/get` from NFS clients. * NFS clients cannot change permissions for the NTFS security style, and Windows clients cannot change permissions for UNIX-style dual-protocol volumes.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
+ * **Encryption key source**
+ You can select `Microsoft Managed Key` or `Customer Managed Key`. See [Configure customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md) and [Azure NetApp Files double encryption at rest](double-encryption-at-rest.md) for information about using this field.
+ * **Availability zone** This option lets you deploy the new volume in the logical availability zone that you specify. Select an availability zone where Azure NetApp Files resources are present. For details, see [Manage availability zone volume placement](manage-availability-zone-volume-placement.md).
azure-netapp-files Double Encryption At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/double-encryption-at-rest.md
+
+ Title: Azure NetApp Files double encryption at rest | Microsoft Docs
+description: Explains Azure NetApp Files double encryption at rest to help you use this feature.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 06/21/2023+++
+# Azure NetApp Files double encryption at rest
+
+By default, Azure NetApp Files capacity pools use single encryption at rest. When you [create a capacity pool](azure-netapp-files-set-up-capacity-pool.md#encryption_type), you have the option to use double encryption at rest for the volumes in the capacity pool. You can do so by selecting `double` as the **encryption type** for the capacity pool that you are creating.
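In an infrastructure-as-code deployment, the same choice surfaces as the capacity pool's `encryptionType` property. The following Bicep sketch is illustrative only — the account name, pool name, and API version are assumptions; verify them against the current `Microsoft.NetApp` schema:

```bicep
// Hypothetical names; encryptionType accepts 'Single' (the default) or 'Double'
// and can only be set when the capacity pool is created.
resource netAppAccount 'Microsoft.NetApp/netAppAccounts@2022-11-01' existing = {
  name: 'myNetAppAccount'
}

resource capacityPool 'Microsoft.NetApp/netAppAccounts/capacityPools@2022-11-01' = {
  parent: netAppAccount
  name: 'myDoubleEncryptedPool'
  location: resourceGroup().location
  properties: {
    serviceLevel: 'Premium'
    size: 4398046511104 // 4 TiB
    encryptionType: 'Double'
  }
}
```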
+
+Critical data is often found in financial institutions, military organizations, government records, health care medical records, and business customer data. While single encryption at rest may be considered sufficient for some data, you should use double encryption at rest for data where a breach of confidentiality would be catastrophic. Leaks of sensitive information such as customer names, addresses, and government identification can result in extremely high liability; protecting data confidentiality with double encryption at rest helps mitigate that risk.
+
+When data is transported over networks, additional encryption such as Transport Layer Security (TLS) can help to protect the transit of data. But once the data has arrived, protection of that data at rest helps to address the vulnerability. Using Azure NetApp Files double encryption at rest complements the security that's inherent with the physically secure cloud storage in Azure data centers.
+
+Azure NetApp Files double encryption at rest provides two levels of encryption protection: a hardware-based encryption layer (encrypted SSD drives) and a software-based encryption layer. The hardware-based encryption layer resides at the physical storage level, using FIPS 140-2 certified drives. The software-based encryption layer is at the volume level, completing the second level of encryption protection.
+
+If you are using this feature for the first time, you need to [register for the feature](azure-netapp-files-set-up-capacity-pool.md#encryption_type) and then create a double-encryption capacity pool. For details, see [Create a capacity pool for Azure NetApp Files](azure-netapp-files-set-up-capacity-pool.md).
+
+When you create a volume in a double-encryption capacity pool, the default key management (the **Encryption key source** field) is `Microsoft Managed Key`, and the other choice is `Customer Managed Key`. Using customer-managed keys requires additional preparation of an Azure Key Vault and other details. For more information about using volume encryption with customer managed keys, see [Configure customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md).
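As a sketch of how the key-source choice might look in Bicep (the property names are drawn from the `Microsoft.NetApp` volume schema, but the parameters, resource names, and API version here are assumptions to verify before use):

```bicep
param delegatedSubnetId string      // subnet delegated to Microsoft.NetApp/volumes
param keyVaultPrivateEndpointId string

// Sketch only: setting encryptionKeySource to 'Microsoft.KeyVault' selects
// customer-managed keys; the default is 'Microsoft.NetApp' (Microsoft-managed).
resource volume 'Microsoft.NetApp/netAppAccounts/capacityPools/volumes@2022-11-01' = {
  name: 'myAccount/myDoubleEncryptedPool/myVolume'
  location: resourceGroup().location
  properties: {
    creationToken: 'myvolume'
    usageThreshold: 107374182400 // 100 GiB
    subnetId: delegatedSubnetId
    networkFeatures: 'Standard'  // double encryption requires Standard network features
    encryptionKeySource: 'Microsoft.KeyVault'
    keyVaultPrivateEndpointResourceId: keyVaultPrivateEndpointId
  }
}
```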
++
+## Supported regions
+
+Azure NetApp Files double encryption at rest is supported for the following regions:
+
+* West Europe
+* East US 2
+* East Asia
+
+## Considerations
+
+* Azure NetApp Files double encryption at rest supports [Standard network features](azure-netapp-files-network-topologies.md#configurable-network-features), but not Basic network features.
+* For the cost of using Azure NetApp Files double encryption at rest, see the [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/) page.
+* You can't convert volumes in a single-encryption capacity pool to use double encryption at rest. However, you can copy data in a single-encryption volume to a volume created in a capacity pool that is configured with double encryption.
+* For capacity pools created with double encryption at rest, volume names in the capacity pool are visible only to volume owners for maximum security.
++
+## Next steps
+
+* [Create a capacity pool for Azure NetApp Files](azure-netapp-files-set-up-capacity-pool.md)
azure-netapp-files Troubleshoot Capacity Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-capacity-pools.md
This article describes resolutions to issues you might have when managing capaci
| Cannot change QoS type from manual to auto | Once the QoS type is changed to manual, you cannot change it to auto. Given this, there are three options: <ul><li> Do not move the volume if it must be in a capacity pool with QoS type auto.</li><li> Create a new capacity pool with QoS type manual enabled, then you can move the volume to the new capacity pool. </li><li> Change the destination pool to QoS type manual from auto. Then perform the move. </li></ul> For information about QoS, see [Storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md#qos_types). | | Cannot change a volume from a Double Encrypted Pool to a Single Encrypted Pool or from a Single Encrypted Pool to a Double Encrypted Pool | The destination pool must be of the same encryption type as the source pool. |
+## Issues for double-encryption capacity pools
+
+| Error condition | Resolution |
+|-|-|
+| Out of storage capacity when creating or resizing volumes under double-encryption capacity pools: `There are currently insufficient resources available to create [or extend] a volume in this region. Please retry the operation. If the problem persists, contact Support.` | The error indicates insufficient resources in the region to support hardware-level data encryption. Retry the operation after some time. Resources may have been freed in the cluster, region, or zone in the interim. |
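The "retry the operation" guidance in the table above can be wrapped in a small helper. This is a generic sketch, not an Azure SDK feature: it retries a callable a bounded number of times with a fixed delay, which is a reasonable pattern for transient capacity errors like the one shown.

```python
import time

def retry(operation, max_attempts=5, delay=30.0):
    """Run `operation` until it succeeds, retrying on failure.

    Raises the last exception if all attempts fail. The delay is fixed here;
    real deployments may prefer exponential backoff.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except RuntimeError:
            if attempt == max_attempts:
                raise
            time.sleep(delay)
```

In practice, `operation` would be the volume create or resize call made through the Azure SDK or CLI.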
+ ## Next steps * [Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 06/01/2023 Last updated : 06/15/2023 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
## June 2023
+* [Azure NetApp Files double encryption at rest](double-encryption-at-rest.md) (Preview)
+
+ We are excited to announce the addition of double encryption at rest for Azure NetApp Files volumes. This new feature provides an extra layer of protection for your critical data, ensuring maximum confidentiality and mitigating potential liabilities. Double encryption at rest is ideal for industries such as finance, military, healthcare, and government, where breaches of confidentiality can have catastrophic consequences. By combining hardware-based encryption with encrypted SSD drives and software-based encryption at the volume level, your data remains secure throughout its lifecycle. You can select **double** as the encryption type during capacity pool creation to easily enable this advanced security layer.
+ * Availability zone volume placement enhancement - [Populate existing volumes](manage-availability-zone-volume-placement.md#populate-an-existing-volume-with-availability-zone-information) (preview) The Azure NetApp Files [availability zone volume placement](manage-availability-zone-volume-placement.md) feature lets you deploy *new volumes* in the availability zone of your choice, in alignment with Azure compute and other services in the same zone. With this "Populate existing volume" enhancement, you can now obtain and, if desired, populate *previously deployed, existing volumes* with the logical availability zone information. This capability automatically maps the physical zone each volume was deployed in to the logical zone for your subscription. This feature doesn't move any volumes between zones.
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md
Title: CI/CD with Azure Pipelines and Bicep files
description: In this quickstart, you learn how to configure continuous integration in Azure Pipelines by using Bicep files. It shows how to use an Azure CLI task to deploy a Bicep file. Previously updated : 05/05/2023 Last updated : 06/21/2023 # Quickstart: Integrate Bicep with Azure Pipelines
You need a [Bicep file](./quickstart-create-bicep-use-visual-studio-code.md) tha
## Create pipeline
-1. From your Azure DevOps organization, select **Pipelines** and **New pipeline**.
+1. From your Azure DevOps organization, select **Pipelines** and **Create pipeline**.
- ![Add new pipeline](./media/add-template-to-azure-pipelines/new-pipeline.png)
+ :::image type="content" source="./media/add-template-to-azure-pipelines/new-pipeline.png" alt-text="Screenshot of creating new pipeline.":::
-1. Specify where your code is stored.
+1. Specify where your code is stored. This quickstart uses Azure Repos Git.
- ![Select code source](./media/add-template-to-azure-pipelines/select-source.png)
+ :::image type="content" source="./media/add-template-to-azure-pipelines/select-source.png" alt-text="Screenshot of selecting code source.":::
1. Select the repository that has the code for your project.
- ![Select repository](./media/add-template-to-azure-pipelines/select-repo.png)
+ :::image type="content" source="./media/add-template-to-azure-pipelines/select-repo.png" alt-text="Screenshot of selecting repository.":::
1. Select **Starter pipeline** for the type of pipeline to create.
- ![Select pipeline](./media/add-template-to-azure-pipelines/select-pipeline.png)
+ :::image type="content" source="./media/add-template-to-azure-pipelines/select-pipeline.png" alt-text="Screenshot of selecting pipeline.":::
## Deploy Bicep files
You can use Azure Resource Group Deployment task or Azure CLI task to deploy a B
### Use Azure Resource Manager Template Deployment task
-Replace your starter pipeline with the following YAML. It creates a resource group and deploys a Bicep file by using an [Azure Resource Manager Template Deployment task](/azure/devops/pipelines/tasks/reference/azure-resource-manager-template-deployment-v3).
-
-```yml
-trigger:
-- master-
-name: Deploy Bicep files
-
-variables:
- vmImageName: 'ubuntu-latest'
-
- azureServiceConnection: '<your-connection-name>'
- resourceGroupName: 'exampleRG'
- location: '<your-resource-group-location>'
- templateFile: './main.bicep'
-pool:
- vmImage: $(vmImageName)
-
-steps:
-- task: AzureResourceManagerTemplateDeployment@3
- inputs:
- deploymentScope: 'Resource Group'
- azureResourceManagerConnection: '$(azureServiceConnection)'
- action: 'Create Or Update Resource Group'
- resourceGroupName: '$(resourceGroupName)'
- location: '$(location)'
- templateLocation: 'Linked artifact'
- csmFile: '$(templateFile)'
- overrideParameters: '-storageAccountType Standard_LRS'
- deploymentMode: 'Incremental'
- deploymentName: 'DeployPipelineTemplate'
-```
+1. Replace your starter pipeline with the following YAML. It creates a resource group and deploys a Bicep file by using an [Azure Resource Manager Template Deployment task](/azure/devops/pipelines/tasks/reference/azure-resource-manager-template-deployment-v3).
+
+ ```yml
+ trigger:
+ - main
+
+ name: Deploy Bicep files
+
+ variables:
+ vmImageName: 'ubuntu-latest'
+
+ azureServiceConnection: '<your-connection-name>'
+ resourceGroupName: 'exampleRG'
+ location: '<your-resource-group-location>'
+ templateFile: './main.bicep'
+ pool:
+ vmImage: $(vmImageName)
+
+ steps:
+ - task: AzureResourceManagerTemplateDeployment@3
+ inputs:
+ deploymentScope: 'Resource Group'
+ azureResourceManagerConnection: '$(azureServiceConnection)'
+ action: 'Create Or Update Resource Group'
+ resourceGroupName: '$(resourceGroupName)'
+ location: '$(location)'
+ templateLocation: 'Linked artifact'
+ csmFile: '$(templateFile)'
+ overrideParameters: '-storageAccountType Standard_LRS'
+ deploymentMode: 'Incremental'
+ deploymentName: 'DeployPipelineTemplate'
+ ```
+
+1. Update the values of `azureServiceConnection` and `location`.
+1. Verify that a `main.bicep` file exists in your repo and that it has the content you intend to deploy.
+1. Select **Save**. The build pipeline automatically runs. Go back to the summary for your build pipeline, and watch the status.
-For the descriptions of the task inputs, see [Azure Resource Manager Template Deployment task](/azure/devops/pipelines/tasks/reference/azure-resource-manager-template-deployment-v3).
+### Use Azure CLI task
-Select **Save**. The build pipeline automatically runs. Go back to the summary for your build pipeline, and watch the status.
+1. Replace your starter pipeline with the following YAML. It creates a resource group and deploys a Bicep file by using an [Azure CLI task](/azure/devops/pipelines/tasks/reference/azure-cli-v2):
-### Use Azure CLI task
+ ```yml
+ trigger:
+ - main
-Replace your starter pipeline with the following YAML. It creates a resource group and deploys a Bicep file by using an [Azure CLI task](/azure/devops/pipelines/tasks/reference/azure-cli-v2):
-
-```yml
-trigger:
-- master-
-name: Deploy Bicep files
-
-variables:
- vmImageName: 'ubuntu-latest'
-
- azureServiceConnection: '<your-connection-name>'
- resourceGroupName: 'exampleRG'
- location: '<your-resource-group-location>'
- templateFile: 'main.bicep'
-pool:
- vmImage: $(vmImageName)
-
-steps:
-- task: AzureCLI@2
- inputs:
- azureSubscription: $(azureServiceConnection)
- scriptType: bash
- scriptLocation: inlineScript
- useGlobalConfig: false
- inlineScript: |
- az --version
- az group create --name $(resourceGroupName) --location $(location)
- az deployment group create --resource-group $(resourceGroupName) --template-file $(templateFile)
-```
+ name: Deploy Bicep files
-To override the parameters, update the last line of `inlineScript` to:
+ variables:
+ vmImageName: 'ubuntu-latest'
-```bicep
-az deployment group create --resource-group $(resourceGroupName) --template-file $(templateFile) --parameters storageAccountType='Standard_GRS' location='eastus'
-```
+ azureServiceConnection: '<your-connection-name>'
+ resourceGroupName: 'exampleRG'
+ location: '<your-resource-group-location>'
+ templateFile: 'main.bicep'
+ pool:
+ vmImage: $(vmImageName)
+
+ steps:
+ - task: AzureCLI@2
+ inputs:
+ azureSubscription: $(azureServiceConnection)
+ scriptType: bash
+ scriptLocation: inlineScript
+ useGlobalConfig: false
+ inlineScript: |
+ az --version
+ az group create --name $(resourceGroupName) --location $(location)
+ az deployment group create --resource-group $(resourceGroupName) --template-file $(templateFile)
+ ```
+
+ To override the parameters, update the last line of `inlineScript` to:
+
+ ```azurecli
+ az deployment group create --resource-group $(resourceGroupName) --template-file $(templateFile) --parameters storageAccountType='Standard_GRS' location='eastus'
+ ```
-For the descriptions of the task inputs, see [Azure CLI task](/azure/devops/pipelines/tasks/reference/azure-cli-v2). When using the task on air-gapped cloud, you must set the `useGlobalConfig` property of the task to `true`. The default value is `false`.
+ For the descriptions of the task inputs, see [Azure CLI task](/azure/devops/pipelines/tasks/reference/azure-cli-v2). When using the task on air-gapped cloud, you must set the `useGlobalConfig` property of the task to `true`. The default value is `false`.
-Select **Save**. The build pipeline automatically runs. Go back to the summary for your build pipeline, and watch the status.
+1. Update the values of `azureServiceConnection` and `location`.
+1. Verify that a `main.bicep` file exists in your repo and that it has the content you intend to deploy.
+1. Select **Save**. The build pipeline automatically runs. Go back to the summary for your build pipeline, and watch the status.
## Clean up resources
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
Previously updated : 09/09/2022 Last updated : 06/22/2023 # Resource functions for Bicep
param allowedLocations array = [
'australiacentral' ]
-resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2019-09-01' = {
+resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2021-06-01' = {
name: 'locationRestriction' properties: { policyType: 'Custom'
resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2019-09-01'
} }
-resource policyAssignment 'Microsoft.Authorization/policyAssignments@2019-09-01' = {
+resource policyAssignment 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
name: 'locationAssignment' properties: { policyDefinitionId: policyDefinition.id
param adminLogin string
@secure() param adminPassword string
-resource sqlServer 'Microsoft.Sql/servers@2020-11-01-preview' = {
+resource sqlServer 'Microsoft.Sql/servers@2022-08-01-preview' = {
... } ```
param subscriptionId string
param kvResourceGroup string param kvName string
-resource keyVault 'Microsoft.KeyVault/vaults@2019-09-01' existing = {
+resource keyVault 'Microsoft.KeyVault/vaults@2023-02-01' existing = {
name: kvName scope: resourceGroup(subscriptionId, kvResourceGroup ) }
Other `list` functions have different return formats. To see the format of a fun
The following example deploys a storage account and then calls `listKeys` on that storage account. The key is used when setting a value for [deployment scripts](../templates/deployment-script-template.md). ```bicep
-resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
name: 'dscript${uniqueString(resourceGroup().id)}' location: location kind: 'StorageV2'
resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' = {
} }
-resource dScript 'Microsoft.Resources/deploymentScripts@2019-10-01-preview' = {
+resource dScript 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
name: 'scriptWithStorage' location: location ...
The possible uses of `list*` are shown in the following table.
| Microsoft.DevTestLab/labs/schedules | [ListApplicable](/rest/api/dtl/schedules/listapplicable) | | Microsoft.DevTestLab/labs/users/serviceFabrics | [ListApplicableSchedules](/rest/api/dtl/servicefabrics/listapplicableschedules) | | Microsoft.DevTestLab/labs/virtualMachines | [ListApplicableSchedules](/rest/api/dtl/virtualmachines/listapplicableschedules) |
-| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2022-05-15/database-accounts/list-connection-strings) |
-| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2022-05-15/database-accounts/list-keys) |
-| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2022-05-15/notebook-workspaces/list-connection-info) |
+| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2022-11-15/database-accounts/list-connection-strings) |
+| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2022-11-15/database-accounts/list-keys) |
+| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2022-11-15/notebook-workspaces/list-connection-info) |
| Microsoft.DomainRegistration | [listDomainRecommendations](/rest/api/appservice/domains/listrecommendations) | | Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) | | Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/domains/list-shared-access-keys) |
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) | | Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) | | Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-10-01/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-10-01/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-10-01/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2023-04-01/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2023-04-01/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2023-04-01/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) | | Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) | | Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
Returns an object representing a resource's runtime state.
Namespace: [az](bicep-functions.md#namespaces-for-functions).
-The reference function is available in Bicep files, but typically you don't need it. Instead, use the symbolic name for the resource.
+The reference function is available in Bicep files, but you typically don't need it; use the resource's symbolic name instead. The reference function can only be used within the `properties` object of a resource, not for top-level properties like `name` or `location`, and the same generally applies to references through the symbolic name. However, for properties such as `name`, Bicep can emit the value directly without a reference, because enough information about the resource name is known at compile time; these are called compile-time properties. Bicep validation identifies any incorrect use of the symbolic name.
-The following example deploys a storage account. It uses the symbolic name `storageAccount` for the storage account to return a property.
+The following example deploys a storage account. The first two outputs give you the same results.
```bicep
-param storageAccountName string
+param storageAccountName string = uniqueString(resourceGroup().id)
+param location string = resourceGroup().location
-resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
name: storageAccountName
- location: 'eastus'
+ location: location
kind: 'Storage' sku: { name: 'Standard_LRS' } }
-output storageEndpoint object = storageAccount.properties.primaryEndpoints
+output storageObjectSymbolic object = storageAccount.properties
+output storageObjectReference object = reference('storageAccount')
+output storageName string = storageAccount.name
+output storageLocation string = storageAccount.location
``` To get a property from an existing resource that isn't deployed in the template, use the `existing` keyword:
To get a property from an existing resource that isn't deployed in the template,
```bicep param storageAccountName string
-resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
name: storageAccountName }
For example:
```bicep param storageAccountName string
+param location string = resourceGroup().location
-resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
name: storageAccountName
- location: 'eastus'
+ location: location
kind: 'Storage' sku: { name: 'Standard_LRS'
To get the resource ID for a resource that isn't deployed in the Bicep file, use
```bicep param storageAccountName string
-resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
name: storageAccountName }
var roleDefinitionId = {
} }
-resource roleAssignment 'Microsoft.Authorization/roleAssignments@2018-09-01-preview' = {
+resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
name: guid(resourceGroup().id, principalId, roleDefinitionId[builtInRoleType].id) properties: { roleDefinitionId: roleDefinitionId[builtInRoleType].id
param allowedLocations array = [
var mgScope = tenantResourceId('Microsoft.Management/managementGroups', targetMG) var policyDefinitionName = 'LocationRestriction'
-resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2020-03-01' = {
+resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2021-06-01' = {
name: policyDefinitionName properties: { policyType: 'Custom'
resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2020-03-01'
} }
-resource location_lock 'Microsoft.Authorization/policyAssignments@2020-03-01' = {
+resource location_lock 'Microsoft.Authorization/policyAssignments@2021-06-01' = {
name: 'location-lock' properties: { scope: mgScope
param policyDefinitionID string = '0a914e76-4921-4c19-b460-a2d36003525a'
@description('Specifies the name of the policy assignment, can be used defined or an idempotent name as the defaultValue provides.') param policyAssignmentName string = guid(policyDefinitionID, resourceGroup().name)
-resource policyAssignment 'Microsoft.Authorization/policyAssignments@2019-09-01' = {
+resource policyAssignment 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
name: policyAssignmentName properties: { scope: subscriptionResourceId('Microsoft.Resources/resourceGroups', resourceGroup().name)
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md
Title: Create parameters files for Bicep deployment
description: Create parameters file for passing in values during deployment of a Bicep file Previously updated : 06/20/2023 Last updated : 06/22/2023 # Create parameters files for Bicep deployment
az deployment group create \
--name ExampleDeployment \ --resource-group ExampleGroup \ --template-file storage.bicep \
- --parameters @storage.bicepparam
+ --parameters storage.bicepparam
``` For more information, see [Deploy resources with Bicep and Azure CLI](./deploy-cli.md#parameters). To deploy _.bicep_ files you need Azure CLI version 2.20 or higher.
For more information, see [Deploy resources with Bicep and Azure PowerShell](./d
## Parameter precedence
-You can use inline parameters and a local parameters file in the same deployment operation. For example, you can specify some values in the local parameters file and add other values inline during deployment. If you provide values for a parameter in both the local parameters file and inline, the inline value takes precedence. This feature hasn't been implemented for Bicep parameters file.
+You can use inline parameters and a local parameters file in the same deployment operation. For example, you can specify some values in the local parameters file and add other values inline during deployment. If you provide values for a parameter in both the local parameters file and inline, the inline value takes precedence. This feature hasn't been implemented for Bicep parameters files.
It's possible to use an external parameters file, by providing the URI to the file. When you use an external parameters file, you can't pass other values either inline or from a local file. All inline parameters are ignored. Provide all parameter values in the external file.
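For example, a deployment that points at an external parameters file might look like the following; the URIs here are hypothetical placeholders, and the files must be reachable by Azure Resource Manager:

```azurecli
az deployment group create \
  --name ExampleDeployment \
  --resource-group ExampleGroup \
  --template-uri "https://example.com/storage.json" \
  --parameters "https://example.com/storage.parameters.json"
```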
azure-resource-manager Manage Resource Groups Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-python.md
Title: Manage resource groups - Python
description: Use Python to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups. -+ Last updated 02/27/2023
+content_well_notification:
+ - AI-contribution
# Manage Azure resource groups by using Python Learn how to use Python with [Azure Resource Manager](overview.md) to manage your Azure resource groups.
+<!--[!INCLUDE [AI attribution](../../../includes/ai-generated-attribution.md)]-->
## Prerequisites
azure-resource-manager Cloud Services Extended Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/cloud-services-extended-support.md
# Move guidance for Cloud Services (extended support) deployment model resources
+> [!IMPORTANT]
+> The move feature is under development for Cloud Services (extended support) and isn't yet available for production use. The guidance will be updated once the feature is deployed for all customers and regions.
+ The steps to move resources deployed through the Cloud Services (extended support) model differ based on whether you're moving the resources within a subscription or to a new subscription. ## Move in the same subscription
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/conditional-resource-deployment.md
Title: Conditional deployment with templates
description: Describes how to conditionally deploy a resource in an Azure Resource Manager template (ARM template). Previously updated : 05/22/2023 Last updated : 06/22/2023 # Conditional deployment in ARM templates
You can use conditional deployment to create a new resource or use an existing o
] } },
- "resources": {
- "saNew": {
+ "resources": [
+ {
"condition": "[equals(parameters('newOrExisting'), 'new')]", "type": "Microsoft.Storage/storageAccounts", "apiVersion": "2022-09-01",
You can use conditional deployment to create a new resource or use an existing o
}, "kind": "StorageV2" },
- "saExisting": {
+ {
"condition": "[equals(parameters('newOrExisting'), 'existing')]",
- "existing": true,
"type": "Microsoft.Storage/storageAccounts", "apiVersion": "2022-09-01", "name": "[parameters('storageAccountName')]" }
- },
+ ],
"outputs": { "storageAccountId": { "type": "string",
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/key-vault-parameter.md
Title: Key Vault secret with template description: Shows how to pass a secret from a key vault as a parameter during deployment. Previously updated : 05/22/2023 Last updated : 06/22/2023
The following template deploys a SQL server that includes an administrator passw
"type": "securestring" } },
- "resources": {
- "sqlServer": {
+ "resources": [
+ {
"type": "Microsoft.Sql/servers", "apiVersion": "2021-11-01", "name": "[parameters('sqlServerName')]",
The following template deploys a SQL server that includes an administrator passw
"version": "12.0" } }
- }
+ ]
} ```
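To supply the secret at deployment time, the parameter file references the key vault rather than embedding the value. A sketch of such a file (the subscription, resource group, vault, and secret names are placeholders):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "sqlServerName": { "value": "examplesqlserver" },
    "adminPassword": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "ExamplePassword"
      }
    }
  }
}
```

Because the secret is resolved by Resource Manager during deployment, the plain-text value never appears in the template or the deployment history.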
azure-resource-manager Template Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-expressions.md
Title: Template syntax and expressions
description: Describes the declarative JSON syntax for Azure Resource Manager templates (ARM templates). Previously updated : 02/22/2023 Last updated : 06/22/2023 # Syntax and expressions in ARM templates
To totally remove an element, you can use the [filter() function](./template-fun
} ] },
- "resources": {},
+ "resources": [],
"outputs": { "backendAddressPools": { "type": "array",
azure-resource-manager Template Functions Cidr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-cidr.md
The following example parses an IPv4 CIDR string:
{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0",
- "resources": {},
+ "resources": [],
"outputs": { "v4info": { "type": "object",
The following example parses an IPv6 CIDR string:
{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0",
- "resources": {},
+ "resources": [],
"outputs": { "v6info": { "type": "object",
The following example calculates the first five /24 subnet ranges from the speci
{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0",
- "resources": {},
+ "resources": [],
"outputs": { "v4subnets": { "type": "array",
The following example calculates the first five /52 subnet ranges from the speci
{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0",
- "resources": {},
+ "resources": [],
"outputs": { "v6subnets": { "type": "array",
The following example calculates the first five usable host IP addresses from th
{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0",
- "resources": {},
+ "resources": [],
"outputs": { "v4hosts": { "type": "array",
The following example calculates the first five usable host IP addresses from th
{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0",
- "resources": {},
+ "resources": [],
"outputs": { "v6hosts": { "type": "array",
azure-resource-manager Template Functions Lambda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-lambda.md
Title: Template functions - lambda
description: Describes the lambda functions to use in an Azure Resource Manager template (ARM template) Previously updated : 05/22/2023 Last updated : 06/22/2023 # Lambda functions for ARM templates
The following examples show how to use the `filter` function.
} ] },
- "resources": {},
+ "resources": [],
"outputs": { "oldDogs": { "type": "array",
The output from the preceding example shows the dogs that are five or older:
} ] },
- "resources": {},
+ "resources": [],
"outputs": { "filteredLoop": { "type": "array",
The following example shows how to use the `map` function.
} ] },
- "resources": {},
+ "resources": [],
"outputs": { "dogNames": { "type": "array",
The following examples show how to use the `reduce` function.
], "ages": "[map(variables('dogs'), lambda('dog', lambdaVariables('dog').age))]" },
- "resources": {},
+ "resources": [],
"outputs": { "totalAge": { "type": "int",
The output from the preceding example is:
{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0",
- "resources": {},
+ "resources": [],
"outputs": { "reduceObjectUnion": { "type": "object",
The following example shows how to use the `sort` function.
} ] },
- "resources": {},
+ "resources": [],
"outputs": { "dogsByAge": { "type": "array",
The following example shows how to use the `toObject` function with the two requ
} ] },
- "resources": {},
+ "resources": [],
"outputs": { "dogsObject": { "type": "object",
The following example shows how to use the `toObject` function with three parame
} ] },
- "resources": {},
+ "resources": [],
"outputs": { "dogsObject": { "type": "object",
azure-video-indexer Compare Video Indexer With Media Services Presets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/compare-video-indexer-with-media-services-presets.md
- Title: Comparison of Azure Video Indexer and Azure Media Services v3 presets
-description: This article compares Azure Video Indexer capabilities and Azure Media Services v3 presets.
- Previously updated : 11/10/2022----
-# Compare Azure Media Services v3 presets and Azure Video Indexer
-
-This article compares the capabilities of **Azure Video Indexer(AVI) APIs** and **Media Services v3 APIs**.
-
-Currently, there is an overlap between features offered by the [Azure Video Indexer APIs](https://api-portal.videoindexer.ai/) and the [Media Services v3 APIs](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2018-07-01/Encoding.json). Azure Media Services have [announced deprecation](https://learn.microsoft.com/azure/media-services/latest/release-notes#retirement-of-the-azure-media-redactor-video-analyzer-and-face-detector-on-september-14-2023) of their Video Analysis preset starting September 2023. It is advised to use Azure Video Indexer Video Analysis going forward, which is general available and offers more functionality.
-
-The following table offers the current guideline for understanding the differences and similarities.
-
-## Compare
-
-|Feature|Azure Video Indexer APIs |Video Analyzer and Audio Analyzer Presets<br/>in Media Services v3 APIs|
-||||
-|Media Insights|[Enhanced](video-indexer-output-json-v2.md) |[Fundamentals](/azure/media-services/latest/analyze-video-audio-files-concept)|
-|Experiences|See the full list of supported features: <br/> [Overview](video-indexer-overview.md)|Returns video insights only|
-|Pricing|[AVI pricing](https://azure.microsoft.com/pricing/details/video-indexer/) |[Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/#analytics) |
-|Compliance|For the most current compliance updates, visit [Azure Compliance Offerings.pdf](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942/file/178110/23/Microsoft%20Azure%20Compliance%20Offerings.pdf) and search for "Azure Video Indexer" to see if it complies with a certificate of interest.|For the most current compliance updates, visit [Azure Compliance Offerings.pdf](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942/file/178110/23/Microsoft%20Azure%20Compliance%20Offerings.pdf) and search for "Media Services" to see if it complies with a certificate of interest.|
-|Trial|East US|Not available|
-|Region availability|See [Cognitive Services availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)|See [Media Services availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=media-services).|
-
-## Next steps
-
-[Azure Video Indexer overview](video-indexer-overview.md)
-
-[Media Services v3 overview](/azure/media-services/latest/media-services-overview)
cloud-services Cloud Services Diagnostics Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-diagnostics-powershell.md
You can collect diagnostic data like application logs, performance counters etc. from a Cloud Service using the Azure Diagnostics extension. This article describes how to enable the Azure Diagnostics extension for a Cloud Service using PowerShell. See [How to install and configure Azure PowerShell](/powershell/azure/) for the prerequisites needed for this article. ## Enable diagnostics extension as part of deploying a Cloud Service
-This approach is applicable to continuous integration type of scenarios, where the diagnostics extension can be enabled as part of deploying the cloud service. When creating a new Cloud Service deployment, you can enable the diagnostics extension by passing in the *ExtensionConfiguration* parameter to the [New-AzureDeployment](/powershell/module/servicemanagement/azure.service/new-azuredeployment) cmdlet. The *ExtensionConfiguration* parameter takes an array of diagnostics configurations that can be created using the [New-AzureServiceDiagnosticsExtensionConfig](/powershell/module/servicemanagement/azure.service/new-azureservicediagnosticsextensionconfig) cmdlet.
+This approach is applicable to continuous integration type of scenarios, where the diagnostics extension can be enabled as part of deploying the cloud service. When creating a new Cloud Service deployment, you can enable the diagnostics extension by passing in the *ExtensionConfiguration* parameter to the [New-AzureDeployment](/powershell/module/servicemanagement/azure/new-azuredeployment) cmdlet. The *ExtensionConfiguration* parameter takes an array of diagnostics configurations that can be created using the [New-AzureServiceDiagnosticsExtensionConfig](/powershell/module/servicemanagement/azure/new-azureservicediagnosticsextensionconfig) cmdlet.
The following example shows how you can enable diagnostics for a cloud service with a WebRole and WorkerRole, each having a different diagnostics configuration.
$workerrole_diagconfig = New-AzureServiceDiagnosticsExtensionConfig -Role "Worke
``` ## Enable diagnostics extension on an existing Cloud Service
-You can use the [Set-AzureServiceDiagnosticsExtension](/powershell/module/servicemanagement/azure.service/set-azureservicediagnosticsextension) cmdlet to enable or update diagnostics configuration on a Cloud Service that is already running.
+You can use the [Set-AzureServiceDiagnosticsExtension](/powershell/module/servicemanagement/azure/set-azureservicediagnosticsextension) cmdlet to enable or update diagnostics configuration on a Cloud Service that is already running.
[!INCLUDE [cloud-services-wad-warning](../../includes/cloud-services-wad-warning.md)]
Set-AzureServiceDiagnosticsExtension -DiagnosticsConfiguration @($webrole_diagco
``` ## Get current diagnostics extension configuration
-Use the [Get-AzureServiceDiagnosticsExtension](/powershell/module/servicemanagement/azure.service/get-azureservicediagnosticsextension) cmdlet to get the current diagnostics configuration for a cloud service.
+Use the [Get-AzureServiceDiagnosticsExtension](/powershell/module/servicemanagement/azure/get-azureservicediagnosticsextension) cmdlet to get the current diagnostics configuration for a cloud service.
```powershell Get-AzureServiceDiagnosticsExtension -ServiceName "MyService" ``` ## Remove diagnostics extension
-To turn off diagnostics on a cloud service, you can use the [Remove-AzureServiceDiagnosticsExtension](/powershell/module/servicemanagement/azure.service/remove-azureservicediagnosticsextension) cmdlet.
+To turn off diagnostics on a cloud service, you can use the [Remove-AzureServiceDiagnosticsExtension](/powershell/module/servicemanagement/azure/remove-azureservicediagnosticsextension) cmdlet.
```powershell Remove-AzureServiceDiagnosticsExtension -ServiceName "MyService"
Remove-AzureServiceDiagnosticsExtension -ServiceName "MyService" -Role "WebRole"
## Next Steps * For additional guidance on using Azure diagnostics and other techniques to troubleshoot problems, see [Enabling Diagnostics in Azure Cloud Services and Virtual Machines](cloud-services-dotnet-diagnostics.md). * The [Diagnostics Configuration Schema](../azure-monitor/agents/diagnostics-extension-schema-windows.md) explains the various xml configurations options for the diagnostics extension.
-* To learn how to enable the diagnostics extension for Virtual Machines, see [Create a Windows Virtual machine with monitoring and diagnostics using Azure Resource Manager Template](../virtual-machines/extensions/diagnostics-template.md)
+* To learn how to enable the diagnostics extension for Virtual Machines, see [Create a Windows Virtual machine with monitoring and diagnostics using Azure Resource Manager Template](../virtual-machines/extensions/diagnostics-template.md)
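Taken together, the cmdlets referenced in this article fit a lifecycle like the following sketch (the service name, role name, storage context, and configuration file path are placeholders):

```powershell
# Build a per-role diagnostics configuration (placeholder names and paths).
$webrole_diagconfig = New-AzureServiceDiagnosticsExtensionConfig `
    -Role "WebRole" -StorageContext $storageContext `
    -DiagnosticsConfigurationPath "WebRoleDiagConfig.xml"

# Apply it to a cloud service that is already running.
Set-AzureServiceDiagnosticsExtension -ServiceName "MyService" `
    -DiagnosticsConfiguration @($webrole_diagconfig)

# Inspect the current configuration, and remove it when no longer needed.
Get-AzureServiceDiagnosticsExtension -ServiceName "MyService"
Remove-AzureServiceDiagnosticsExtension -ServiceName "MyService"
```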
cloud-services Cloud Services How To Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-manage-portal.md
There are two key prerequisites for a successful deployment swap:
- If you want to use a static IP address for your production slot, you must reserve one for your staging slot as well. Otherwise, the swap fails. -- All instances of your roles must be running before you can perform the swap. You can check the status of your instances on the **Overview** blade of the Azure portal. Alternatively, you can use the [Get-AzureRole](/powershell/module/servicemanagement/azure.service/get-azurerole) command in Windows PowerShell.
+- All instances of your roles must be running before you can perform the swap. You can check the status of your instances on the **Overview** blade of the Azure portal. Alternatively, you can use the [Get-AzureRole](/powershell/module/servicemanagement/azure/get-azurerole) command in Windows PowerShell.
Note that guest OS updates and service healing operations also can cause deployment swaps to fail. For more information, see [Troubleshoot cloud service deployment problems](cloud-services-troubleshoot-deployment-problems.md).
The **Overview** blade has a status bar at the top. When you select the bar, a n
* [General configuration of your cloud service](cloud-services-how-to-configure-portal.md). * Learn how to [deploy a cloud service](cloud-services-how-to-create-deploy-portal.md). * Configure a [custom domain name](cloud-services-custom-domain-name-portal.md).
-* Configure [TLS/SSL certificates](cloud-services-configure-ssl-certificate-portal.md).
+* Configure [TLS/SSL certificates](cloud-services-configure-ssl-certificate-portal.md).
cloud-services Cloud Services Powershell Create Cloud Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-powershell-create-cloud-container.md
This article explains how to quickly create a Cloud Services container using Azu
1. Install the Microsoft Azure PowerShell cmdlet from the [Azure PowerShell downloads](https://aka.ms/webpi-azps) page. 2. Open the PowerShell command prompt.
-3. Use the [Add-AzureAccount](/powershell/module/servicemanagement/azure.service/add-azureaccount) to sign in.
+3. Use the [Add-AzureAccount](/powershell/module/servicemanagement/azure/add-azureaccount) to sign in.
> [!NOTE] > For further instruction on installing the Azure PowerShell cmdlet and connecting to your Azure subscription, refer to [How to install and configure Azure PowerShell](/powershell/azure/).
Get-help New-AzureService
### Next steps
-* To manage the cloud service deployment, refer to the [Get-AzureService](/powershell/module/servicemanagement/azure.service/Get-AzureService), [Remove-AzureService](/powershell/module/servicemanagement/azure.service/Remove-AzureService), and [Set-AzureService](/powershell/module/servicemanagement/azure.service/set-azureservice) commands. You may also refer to [How to configure cloud services](cloud-services-how-to-configure-portal.md) for further information.
+* To manage the cloud service deployment, refer to the [Get-AzureService](/powershell/module/servicemanagement/azure/Get-AzureService), [Remove-AzureService](/powershell/module/servicemanagement/azure/Remove-AzureService), and [Set-AzureService](/powershell/module/servicemanagement/azure/set-azureservice) commands. You may also refer to [How to configure cloud services](cloud-services-how-to-configure-portal.md) for further information.
* To publish your cloud service project to Azure, refer to the **PublishCloudService.ps1** code sample from [archived cloud services repository](https://github.com/MicrosoftDocs/azure-cloud-services-files/tree/master/Scripts/cloud-services-continuous-delivery).
cloud-services Cloud Services Role Enable Remote Desktop Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-powershell.md
Remote Desktop enables you to access the desktop of a role running in Azure. You
This article describes how to enable remote desktop on your Cloud Service Roles using PowerShell. See [How to install and configure Azure PowerShell](/powershell/azure/) for the prerequisites needed for this article. PowerShell utilizes the Remote Desktop Extension so you can enable Remote Desktop after the application is deployed. ## Configure Remote Desktop from PowerShell
-The [Set-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure.service/set-azureserviceremotedesktopextension) cmdlet allows you to enable Remote Desktop on specified roles or all roles of your cloud service deployment. The cmdlet lets you specify the Username and Password for the remote desktop user through the *Credential* parameter that accepts a PSCredential object.
+The [Set-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure/set-azureserviceremotedesktopextension) cmdlet allows you to enable Remote Desktop on specified roles or all roles of your cloud service deployment. The cmdlet lets you specify the Username and Password for the remote desktop user through the *Credential* parameter that accepts a PSCredential object.
If you are using PowerShell interactively, you can easily set the PSCredential object by calling the [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential) cmdlet.
ConvertTo-SecureString -String "Password123" -AsPlainText -Force | ConvertFrom-S
To create the credential object from the secure password file, you must read the file contents and convert them back to a secure string using [ConvertTo-SecureString](/powershell/module/microsoft.powershell.security/convertto-securestring).
-The [Set-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure.service/set-azureserviceremotedesktopextension) cmdlet also accepts an *Expiration* parameter, which specifies a **DateTime** at which the user account expires. For example, you could set the account to expire a few days from the current date and time.
+The [Set-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure/set-azureserviceremotedesktopextension) cmdlet also accepts an *Expiration* parameter, which specifies a **DateTime** at which the user account expires. For example, you could set the account to expire a few days from the current date and time.
This PowerShell example shows you how to set the Remote Desktop Extension on a cloud service:
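A sketch of such a call, using the *Credential* and *Expiration* parameters described above (the service name, user name, and password are placeholders):

```powershell
# Build a credential for the remote desktop user (placeholder values).
$securePassword = ConvertTo-SecureString -String "P@ssw0rd123!" -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ("RemoteAdmin", $securePassword)

# Enable Remote Desktop on the service; the account expires in 7 days.
Set-AzureServiceRemoteDesktopExtension -ServiceName "MyService" `
    -Credential $credential -Expiration (Get-Date).AddDays(7)
```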
The Remote Desktop extension is associated with a deployment. If you create a ne
## Remote Desktop into a role instance
-The [Get-AzureRemoteDesktopFile](/powershell/module/servicemanagement/azure.service/get-azureremotedesktopfile) cmdlet is used to remote desktop into a specific role instance of your cloud service. You can use the *LocalPath* parameter to download the RDP file locally. Or you can use the *Launch* parameter to directly launch the Remote Desktop Connection dialog to access the cloud service role instance.
+The [Get-AzureRemoteDesktopFile](/powershell/module/servicemanagement/azure/get-azureremotedesktopfile) cmdlet is used to remote desktop into a specific role instance of your cloud service. You can use the *LocalPath* parameter to download the RDP file locally. Or you can use the *Launch* parameter to directly launch the Remote Desktop Connection dialog to access the cloud service role instance.
```powershell Get-AzureRemoteDesktopFile -ServiceName $servicename -Name "WorkerRole1_IN_0" -Launch
Get-AzureRemoteDesktopFile -ServiceName $servicename -Name "WorkerRole1_IN_0" -L
## Check if Remote Desktop extension is enabled on a service
-The [Get-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure.service/get-azureremotedesktopfile) cmdlet displays that remote desktop is enabled or disabled on a service deployment. The cmdlet returns the username for the remote desktop user and the roles that the remote desktop extension is enabled for. By default, this happens on the deployment slot and you can choose to use the staging slot instead.
+The [Get-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure/get-azureserviceremotedesktopextension) cmdlet shows whether remote desktop is enabled or disabled on a service deployment. The cmdlet returns the username for the remote desktop user and the roles that the remote desktop extension is enabled for. By default, the cmdlet operates on the production deployment slot, but you can choose the staging slot instead.
```powershell Get-AzureServiceRemoteDesktopExtension -ServiceName $servicename
Get-AzureServiceRemoteDesktopExtension -ServiceName $servicename
If you have already enabled the remote desktop extension on a deployment and need to update the remote desktop settings, first remove the extension, and then enable it again with the new settings. For example, you might need to set a new password for the remote user account, or the account might have expired. This is required on existing deployments that have the remote desktop extension enabled. For new deployments, you can simply apply the extension directly.
-To remove the remote desktop extension from the deployment, you can use the [Remove-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure.service/remove-azureserviceremotedesktopextension) cmdlet. You can also optionally specify the deployment slot and role from which you want to remove the remote desktop extension.
+To remove the remote desktop extension from the deployment, you can use the [Remove-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure/remove-azureserviceremotedesktopextension) cmdlet. You can also optionally specify the deployment slot and role from which you want to remove the remote desktop extension.
```powershell Remove-AzureServiceRemoteDesktopExtension -ServiceName $servicename -UninstallConfiguration
Remove-AzureServiceRemoteDesktopExtension -ServiceName $servicename -UninstallCo
## Additional resources
-[How to Configure Cloud Services](cloud-services-how-to-configure-portal.md)
+[How to Configure Cloud Services](cloud-services-how-to-configure-portal.md)
cloud-services Cloud Services Sizes Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-sizes-specs.md
This topic describes the available sizes and options for Cloud Service role inst
## Sizes for web and worker role instances There are multiple standard sizes to choose from on Azure. Considerations for some of these sizes include:
-* D-series VMs are designed to run applications that demand higher compute power and temporary disk performance. D-series VMs provide faster processors, a higher memory-to-core ratio, and a solid-state drive (SSD) for the temporary disk. For details, see the announcement on the Azure blog, [New D-Series Virtual Machine Sizes](https://azure.microsoft.com/blog/2014/09/22/new-d-series-virtual-machine-sizes/).
+* D-series VMs are designed to run applications that demand higher compute power and temporary disk performance. D-series VMs provide faster processors, a higher memory-to-core ratio, and a solid-state drive (SSD) for the temporary disk. For details, see the announcement on the Azure blog, [New D-Series Virtual Machine Sizes](https://azure.microsoft.com/updates/d-series-virtual-machine-sizes).
* Dv3-series and Dv2-series, follow-ons to the original D-series, feature a more powerful CPU. The Dv2-series CPU is about 35% faster than the D-series CPU. It is based on the latest generation 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, and with Intel Turbo Boost Technology 2.0, can go up to 3.1 GHz. The Dv2-series has the same memory and disk configurations as the D-series. * G-series VMs offer the most memory and run on hosts that have Intel Xeon E5 V3 family processors. * The A-series VMs can be deployed on various hardware types and processors. The size is throttled, based on the hardware, to offer consistent processor performance for the running instance, regardless of the hardware it is deployed on. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the Virtual Machine.
cognitive-services Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/region-support.md
Some summarization features are only available in limited regions. More regions
## Next steps
-* [Summarization overview](overview.md)
+* [Summarization overview](overview.md)
cognitive-services Business Continuity Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/business-continuity-disaster-recovery.md
Previously updated : 6/24/2022-- Last updated : 6/21/2023++ recommendations: false keywords:
keywords:
Azure OpenAI is available in multiple regions. Since subscription keys are region bound, when a customer acquires a key, they select the region in which their deployments will reside and from then on, all operations stay associated with that Azure server region.
-It's rare, but not impossible, to encounter a network issue that hits an entire region. If your service needs to always be available, then you should design it to either fail-over into another region or split the workload between two or more regions. Both approaches require at least two OpenAI resources in different regions. This article provides general recommendations for how to implement Business Continuity and Disaster Recovery (BCDR) for your Azure OpenAI applications.
+It's rare, but not impossible, to encounter a network issue that hits an entire region. If your service needs to always be available, then you should design it to either fail-over into another region or split the workload between two or more regions. Both approaches require at least two Azure OpenAI resources in different regions. This article provides general recommendations for how to implement Business Continuity and Disaster Recovery (BCDR) for your Azure OpenAI applications.
## Best practices
If you're using a default endpoint, you should configure your client code to mon
Follow these steps to configure your client to monitor errors:
-1. Use this page to identify the list of available regions for the OpenAI service.
+1. Use the [models page](../concepts/models.md) to identify the list of available regions for Azure OpenAI.
2. Select a primary and one secondary/backup regions from the list.
-3. Create OpenAI Service resources for each region selected
+3. Create Azure OpenAI resources for each region selected.
4. For the primary region and any backup regions your code will need to know:
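The failover pattern these steps describe can be expressed independently of any SDK. The helper below is an illustrative sketch, not an Azure OpenAI client API (the endpoint URLs and the `send` callable are assumptions): it tries the primary region first and falls back to the backup regions on connection errors.

```python
def call_with_failover(endpoints, send):
    """Try each regional endpoint in order; return the first successful result.

    endpoints: endpoint identifiers, primary first, then backups.
    send: callable that performs the request against one endpoint and
          raises ConnectionError on a regional outage.
    """
    last_error = None
    for endpoint in endpoints:
        try:
            return send(endpoint)
        except ConnectionError as exc:
            last_error = exc  # remember the failure, try the next region
    raise last_error

# Example: the primary region is down, the backup answers.
def fake_send(endpoint):
    if endpoint == "https://primary.openai.azure.com":
        raise ConnectionError("region unavailable")
    return f"handled by {endpoint}"

result = call_with_failover(
    ["https://primary.openai.azure.com", "https://backup.openai.azure.com"],
    fake_send,
)
print(result)  # handled by https://backup.openai.azure.com
```

The same helper covers the load-splitting variant: rotate which endpoint comes first in the list per request.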
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the default quotas and
|--|--| | OpenAI resources per region per Azure subscription | 30 | | Default quota per model and region (in tokens-per-minute)<sup>1</sup> |Text-Davinci-003: 120 K <br> GPT-4: 20 K <br> GPT-4-32K: 60 K <br> All others: 240 K |
+| Default DALL-E quota limits | 2 concurrent requests |
| Maximum prompt tokens per request | Varies per model. For more information, see [Azure OpenAI Service models](./concepts/models.md)| | Max fine-tuned model deployments | 2 | | Total number of training jobs per resource | 100 |
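When a deployment's tokens-per-minute quota is exceeded, requests are throttled. A common client-side response is exponential backoff; the helper below is an illustrative sketch (the retry policy is an assumption, not an SDK feature), using `RuntimeError` to stand in for an HTTP 429 response.

```python
import time

def with_backoff(operation, max_retries=4, base_delay=1.0, sleep=time.sleep):
    """Retry `operation` with exponential backoff when it signals throttling.

    `operation` should raise RuntimeError (standing in for HTTP 429)
    when the quota is exceeded.
    """
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except RuntimeError:
            if attempt == max_retries:
                raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Example: fail twice, then succeed; record the delays instead of sleeping.
delays = []
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "completion"

print(with_backoff(flaky, sleep=delays.append))  # completion
print(delays)  # [1.0, 2.0]
```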
communication-services Call Automation Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/call-automation-logs.md
Title: Azure Communication Services Call Automation logs-+ description: Learn about logging for Azure Communication Services Call Automation.
-# Azure Communication Services Call Automation Logs
+# Azure Communication Services Call Automation logs
-Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. You configure these capabilities through the Azure portal.
## Prerequisites
-Azure Communication Services provides monitoring and analytics features via [Azure Monitor Logs overview](../../../../azure-monitor/logs/data-platform-logs.md) and [Azure Monitor Metrics](../../../../azure-monitor/essentials/data-platform-metrics.md). Each Azure resource requires its own diagnostic setting, which defines the following criteria:
- * Categories of logs and metric data sent to the destinations defined in the setting. The available categories will vary for different resource types.
- * One or more destinations to send the logs. Current destinations include Log Analytics workspace, Event Hubs, and Azure Storage.
- * A single diagnostic setting can define no more than one of each of the destinations. If you want to send data to more than one of a particular destination type (for example, two different Log Analytics workspaces), then create multiple settings. Each resource can have up to five diagnostic settings.
+Azure Communication Services provides monitoring and analytics features via [Azure Monitor Logs](../../../../azure-monitor/logs/data-platform-logs.md) and [Azure Monitor Metrics](../../../../azure-monitor/essentials/data-platform-metrics.md). Each Azure resource requires its own diagnostic setting, which defines the following criteria:
+
+* Categories of log and metric data sent to the destinations that the setting defines. The available categories vary by resource type.
+* One or more destinations to send the logs. Current destinations include Log Analytics workspace, Azure Event Hubs, and Azure Storage.
+
+ A single diagnostic setting can define no more than one of each destination type. If you want to send data to more than one destination type (for example, two Log Analytics workspaces), create multiple settings. Each resource can have up to five diagnostic settings.
> [!IMPORTANT]
-> You must enable a Diagnostic Setting in Azure Monitor to send the log data of your surveys to a Log Analytics workspace, Event Hubs, or an Azure storage account to receive and analyze your survey data. If you do not send call automation data to one of these options your survey data will not be stored and will be lost.
-The following are instructions for configuring your Azure Monitor resource to start creating logs and metrics for your Communications Services. For detailed documentation about using Diagnostic Settings across all Azure resources, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+> You must enable a diagnostic setting in Azure Monitor to send the log data of your surveys to a Log Analytics workspace, an event hub, or an Azure storage account to receive and analyze your survey data. If you don't send Call Automation data to one of these options, your survey data won't be stored and will be lost.
+
+The following instructions configure your Azure Monitor resource to start creating logs and metrics for your Communication Services instance. For detailed documentation about using diagnostic settings across all Azure resources, see [Enable logging in diagnostic settings](../enable-logging.md).
-> [!NOTE]
-> Under the diagnostic setting name please select ΓÇ£Operation call automation logsΓÇ¥ and ΓÇ£Call Automation Events summary logsΓÇ¥ to enable the logs for call automation logs.
-
- :::image type="content" source="..\media\log-analytics\call-automation-log.png" alt-text="Screenshot of diagnostic settings for call automation.":::
+Under the diagnostic setting name, select **Operation Call Automation Logs** and **Call Automation Events Summary Logs** to enable the logs for Call Automation.
## Resource log categories

Communication Services offers the following types of logs that you can enable:
-* **Usage logs** - provides usage data associated with each billed service offering.
-* **Call Automation operational logs** - provides operational information on Call Automation API requests. These logs can be used to identify failure points and query all requests made in a call (using Correlation ID or Server Call ID).
-* **Call Automation media summary logs** - Provides information about outcome of media operations. These come to the user asynchronously when making media requests using Call Automation APIs. These can be used to help identify failure points and possible patterns on how end users are interacting with your application.
+* **Usage logs**: Provide usage data associated with each billed service offering.
+* **Call Automation operational logs**: Provide operational information on Call Automation API requests. You can use these logs to identify failure points and query all requests made in a call (by using the correlation ID or server call ID).
+* **Call Automation media summary logs**: Provide information about the outcome of media operations. These logs come to you asynchronously when you're making media requests by using Call Automation APIs. You can use these logs to help identify failure points and possible patterns on how users interact with your application.
-## Usage logs schema
+## Usage log schema
| Property | Description |
| --- | --- |
-| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Timestamp` | The time stamp (UTC) of when the log was generated. |
| `OperationName` | The operation associated with the log record. |
-| `OperationVersion` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| `Category` | The log category of the event. The category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
-| `Properties` | Other data applicable to various modes of Communication Services. |
-| `RecordID` | The unique ID for a given usage record. |
-| `UsageType` | The mode of usage. (for example, Chat, PSTN, NAT, etc.) |
-| `UnitType` | The type of unit that usage is based on for a given mode of usage. (for example, minutes, megabytes, messages, etc.). |
+| `OperationVersion` | The `api-version` value associated with the operation, if the `OperationName` operation was performed through an API. If no API corresponds to this operation, the version represents the version of the operation, in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. The category is the granularity at which you can enable or disable logs on a resource. The properties that appear within the `properties` blob of an event are the same within a log category and resource type. |
+| `CorrelationID` | The ID for correlated events. You can use it to identify correlated events between multiple tables. |
+| `Properties` | Other data that's applicable to various modes of Communication Services. |
+| `RecordID` | The unique ID for a usage record. |
+| `UsageType` | The mode of usage (for example, Chat, PSTN, or NAT). |
+| `UnitType` | The type of unit that usage is based on for a mode of usage (for example, minutes, megabytes, or messages). |
| `Quantity` | The number of units used or consumed for this record. |

## Call Automation operational logs

| Property | Description |
| --- | --- |
-| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `TimeGenerated` | The time stamp (UTC) of when the log was generated. |
| `OperationName` | The operation associated with the log record. |
| `CorrelationID` | The ID used to identify a call and correlate events for a unique call. |
-| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| `Category` | The log category of the event. The category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `OperationVersion` | The `api-version` value associated with the operation, if the `operationName` operation was performed through an API. If no API corresponds to this operation, the version represents the version of the operation, in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. The category is the granularity at which you can enable or disable logs on a resource. The properties that appear within the `properties` blob of an event are the same within a log category and resource type. |
| `ResultType` | The status of the operation. |
-| `ResultSignature` | The sub-status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `ResultSignature` | The substatus of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
| `DurationMs` | The duration of the operation in milliseconds. |
-| `CallerIpAddress` | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
+| `CallerIpAddress` | The caller IP address, if the operation corresponds to an API call that comes from an entity with a publicly available IP address. |
| `Level` | The severity level of the event. |
| `URI` | The URI of the request. |
-| `CallConnectionId` | ID representing the call connection, if available. This ID is different for each participant and is used to identify their connection to the call. |
+| `CallConnectionId` | The ID that represents the call connection, if available. This ID is different for each participant and is used to identify their connection to the call. |
| `ServerCallId` | A unique ID to identify a call. |
-| `SDKVersion` | SDK version used for the request. |
+| `SDKVersion` | The SDK version used for the request. |
| `SDKType` | The SDK type used for the request. |
-| `ParticipantId` | ID to identify the call participant that made the request. |
-| `SubOperationName` | Used to identify the subtype of media operation (play, recognize) |
-|`operationID`| It represents the operation ID used to correlate asynchronous events|
+| `ParticipantId` | The ID to identify the call participant that made the request. |
+| `SubOperationName` | The name that's used to identify the subtype of media operation (play or recognize). |
+|`operationID`| The ID that's used to correlate asynchronous events.|
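Because `operationID` correlates asynchronous events, a request row can be paired with its later outcome. Here's a minimal sketch, assuming illustrative records shaped after the fields above (these are not real exported rows):

```python
# Illustrative sketch: pair a Call Automation API request with its
# asynchronous outcome by matching on the operation ID field.
requests = [
    {"OperationName": "Play", "operationID": "op-1", "CorrelationID": "call-9"},
    {"OperationName": "Recognize", "operationID": "op-2", "CorrelationID": "call-9"},
]
outcomes = [
    {"operationId": "op-1", "resultType": "Completed"},
    {"operationId": "op-2", "resultType": "Failed"},
]

by_op = {o["operationId"]: o for o in outcomes}
paired = [(r["OperationName"], by_op[r["operationID"]]["resultType"])
          for r in requests]
# paired -> [('Play', 'Completed'), ('Recognize', 'Failed')]
```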
-**Examples**
+Here's an example of a Call Automation operational log:
```json
[
Communication Services offers the following types of logs that you can enable:
| Property | Description |
| --- | --- |
-| `TimeGenerated` | It represents the timestamp (UTC) of the event|
-|`level`| It represents the severity level of the event. Must be one of Informational, Warning, Error, or Critical.ΓÇ» |
-|`resourceId`| Represents the resource ID of the resource that emitted the event |
-|`durationMs`| Represents the duration of the operation in milliseconds |
-|`callerIpAddress`| |
-|`correlationId`| Skype Chain IDΓÇ» |
-|`operationName`| The name of the operation represented by this event|
-|`operationVersion`
-| `resultType`| The status of the event. Typical values include Completed, Canceled, Failed|
-| `resultSignature`| The sub-status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call|
-|`operationId`| It represents the operation ID used to correlate asynchronous events|
-|`recognizePromptSubOperationName`|A subtype of the operation. Potential values: File, TextToSpeech, SSML, etc.|
-| `playInLoop`| True if looping was requested for the Play operation, else otherwise|
-|`playToParticipant`| True if the Play operation had a target. False if it was a play to all operation|
-| `interrupted`| True in case of the prompt being interrupted, false otherwise|
-|`resultCode`|Operation Result Code |
-|`resultSubcode`| Operation Result Subcode |
-|`resultMessage`| Operation result message |
--
-**Examples**
+| `TimeGenerated` | The time stamp (UTC) of the event.|
+| `level`| The severity level of the event. It must be one of `Informational`, `Warning`, `Error`, or `Critical`. |
+| `resourceId` | The ID of the resource that emitted the event. |
+| `durationMs` | The duration of the operation in milliseconds. |
+| `callerIpAddress` | The caller IP address, if the operation corresponds to an API call that comes from an entity with a publicly available IP address. |
+| `correlationId` | The Skype chain ID. |
+| `operationName` | The name of the operation that this event represents.|
+| `operationVersion` | The API version associated with the operation, or the version of the operation (if there is no API version). |
+| `resultType` | The status of the event. Typical values include `Completed`, `Canceled`, and `Failed`.|
+| `resultSignature` | The substatus of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call.|
+| `operationId` | The operation ID that's used to correlate asynchronous events.|
+| `recognizePromptSubOperationName` | A subtype of the operation. Potential values include `File`, `TextToSpeech`, and `SSML`.|
+| `playInLoop` | `True` if looping was requested for the play operation; `False` otherwise.|
+| `playToParticipant` | `True` if the play operation had a target. `False` if it was a play-to-all operation.|
+| `interrupted` | `True` if the prompt was interrupted; `False` otherwise.|
+| `resultCode` | The result code of the operation. |
+| `resultSubcode` | The result subcode of the operation. |
+| `resultMessage` | The result message of the operation. |
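Fields like `resultType` and `interrupted` make the summary rows easy to aggregate. Here's a minimal sketch computing an interruption rate for completed prompts, assuming illustrative sample rows (not real exported data):

```python
# Illustrative sketch: summarize media outcomes from Call Automation media
# summary rows using the resultType and interrupted fields above.
rows = [
    {"operationId": "op-1", "resultType": "Completed", "interrupted": False},
    {"operationId": "op-2", "resultType": "Completed", "interrupted": True},
    {"operationId": "op-3", "resultType": "Failed", "interrupted": False},
]

completed = [r for r in rows if r["resultType"] == "Completed"]
# Fraction of completed prompts that were interrupted by the end user
interrupt_rate = sum(r["interrupted"] for r in completed) / len(completed)
```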
+
+Here's an example of a Call Automation media summary log:
+ ```json
[
{
Communication Services offers the following types of logs that you can enable:
}
```
+## Next steps
+
+- Learn about the [insights dashboard to monitor Call Automation logs and metrics](/azure/communication-services/concepts/analytics/insights/call-automation-insights).
communication-services Recording Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/recording-logs.md
-# Azure Communication Services Call Recording Logs
+# Azure Communication Services Call Recording logs
-Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. You configure these capabilities through the Azure portal.
-> [!IMPORTANT]
-> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+The content in this article refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for Communication Services, see [Enable logging in diagnostic settings](../enable-logging.md).
## Resource log categories

Communication Services offers the following types of logs that you can enable:
-* **Usage logs** - provides usage data associated with each billed service offering
-* **Call Recording Summary Logs** - provides summary information for call recordings like:
- - Call duration.
- - Media content (for example, audio/video, unmixed, or transcription).
- - Format types used for the recording (for example, WAV or MP4).
- - The reason why the recording ended.
-* **Recording incoming operations logs** - provides information regarding incoming requests for Call Recording operations. Every entry corresponds to the result of a call to the Call Recording APIs, e.g. StartRecording, StopRecording, PauseRecording, ResumeRecording, etc.
+* **Usage logs**: Provide usage data associated with each billed service offering.
+* **Call Recording summary logs**: Provide summary information for call recordings, like:
+ * Call duration.
+ * Media content (for example, audio/video, unmixed, or transcription).
+ * Format types used for the recording (for example, WAV or MP4).
+ * The reason why the recording ended.
+* **Recording incoming operations logs**: Provide information about incoming requests for Call Recording operations. Every entry corresponds to the result of a call to the Call Recording APIs, such as StartRecording, StopRecording, PauseRecording, and ResumeRecording.
--
-A recording file is generated at the end of a call or meeting. The recording can be initiated and stopped by either a user or an app (bot). It can also end because of a system failure.
+A recording file is generated at the end of a call or meeting. Either a user or an app (bot) can start and stop the recording. The recording can also end because of a system failure.
Summary logs are published after a recording is ready to be downloaded. The logs are published within the standard latency time for Azure Monitor resource logs. See [Log data ingestion time in Azure Monitor](../../../../azure-monitor/logs/data-ingestion-time.md#azure-metrics-resource-logs-activity-log).
-### Usage logs schema
+### Usage log schema
| Property | Description |
| --- | --- |
| `timestamp` | The timestamp (UTC) of when the log was generated. |
-| `operationName` | The operation associated with log record. |
-| `operationVersion` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| `category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| `correlationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
-| `Properties` | Other data applicable to various modes of Communication Services. |
-| `recordID` | The unique ID for a given usage record. |
-| `usageType` | The mode of usage. (for example, Chat, PSTN, NAT, etc.) |
-| `unitType` | The type of unit that usage is based off for a given mode of usage. (for example, minutes, megabytes, messages, etc.). |
+| `operationName` | The operation associated with the log record. |
+| `operationVersion` | The `api-version` value associated with the operation, if the `operationName` operation was performed through an API. If no API corresponds to this operation, the version represents the version of the operation, in case the properties associated with the operation change in the future. |
+| `category` | The log category of the event. The category is the granularity at which you can enable or disable logs on a resource. The properties that appear within the `properties` blob of an event are the same within a log category and resource type. |
+| `correlationID` | The ID for correlated events. You can use it to identify correlated events between multiple tables. |
+| `Properties` | Other data that's applicable to various modes of Communication Services. |
+| `recordID` | The unique ID for a usage record. |
+| `usageType` | The mode of usage (for example, Chat, PSTN, or NAT). |
+| `unitType` | The type of unit that usage is based on for a mode of usage (for example, minutes, megabytes, or messages). |
| `quantity` | The number of units used or consumed for this record. |
-### Call Recording summary logs schema
+### Call Recording summary log schema
| Property name | Data type | Description |
| --- | --- | --- |
-|`timeGenerated`|DateTime|Time stamp (UTC) of when the log was generated.|
-|`operationName`|String|Operation associated with a log record.|
-|`correlationId`|String|ID that's used to correlate events between tables.|
-|`recordingID`|String|ID for the recording that this log refers to.|
-|`category`|String|Log category of the event. Logs with the same log category and resource type have the same property fields.|
-|`resultType`|String| Status of the operation.|
-|`level`|String |Severity level of the operation.|
-|`chunkCount`|Integer|Total number of chunks created for the recording.|
-|`channelType`|String|Channel type of the recording, such as mixed or unmixed.|
-|`recordingStartTime`|DateTime|Time that the recording started.|
-|`contentType`|String|Content of the recording, such as audio only, audio/video, or transcription.|
-|`formatType`|String|File format of the recording.|
-|`recordingLength`|Double|Duration of the recording in seconds.|
-|`audioChannelsCount`|Integer|Total number of audio channels in the recording.|
-|`recordingEndReason`|String|Reason why the recording ended.|
+| `timeGenerated` | DateTime | The time stamp (UTC) of when the log was generated. |
+| `operationName` | String | The operation associated with a log record. |
+| `correlationId` | String | The ID that's used to correlate events between tables. |
+| `recordingID` | String | The ID for the recording that this log refers to. |
+| `category` | String | The log category of the event. Logs with the same log category and resource type have the same property fields. |
+| `resultType` | String | The status of the operation. |
+| `level` |String| The severity level of the operation. |
+| `chunkCount` | Integer | The total number of chunks created for the recording. |
+| `channelType` | String | The channel type of the recording, such as mixed or unmixed. |
+| `recordingStartTime` | DateTime| The time that the recording started.|
+| `contentType` | String | The content of the recording, such as audio only, audio/video, or transcription. |
+| `formatType` | String | The file format of the recording. |
+| `recordingLength` | Double | The duration of the recording in seconds.|
+| `audioChannelsCount` | Integer | The total number of audio channels in the recording. |
+| `recordingEndReason` | String | The reason why the recording ended. |
### Call Recording and example data
Summary logs are published after a recording is ready to be downloaded. The logs
"category": "RecordingSummary", ```
-A call can have one recording or many recordings, depending on how many times a recording event is triggered.
-For example, if an agent initiates an outbound call on a recorded line and the call drops because of a poor network signal, `callID` will have one `recordingID` value. If the agent calls back the customer, the system generates a new `callID` instance and a new `recordingID` value.
+A call can have one recording or many recordings, depending on how many times a recording event is triggered.
+For example, if an agent starts an outbound call on a recorded line and the call drops because of a poor network signal, `callID` will have one `recordingID` value. If the agent calls back the customer, the system generates a new `callID` instance and a new `recordingID` value.
#### Example: Call Recording for one call to one recording
For example, if an agent initiates an outbound call on a recorded line and the c
} ```
-If the agent initiates a recording and then stops and restarts the recording multiple times while the call is still on, `callID` will have many `recordingID` values, depending on how many times the recording events were triggered.
+If the agent starts a recording and then stops and restarts the recording multiple times while the call is still on, `callID` will have many `recordingID` values. The number of values depends on how many times the recording events were triggered.
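The stop/restart behavior can be sketched as a grouping of summary rows by call, assuming illustrative field names based on the schema in this article (not real exported rows):

```python
from collections import defaultdict

# Illustrative sketch: group recording summary rows by call to see how many
# recording IDs one call produced (e.g. after stop/restart cycles).
rows = [
    {"callID": "call-1", "recordingID": "rec-1"},
    {"callID": "call-1", "recordingID": "rec-2"},  # recording stopped/restarted
    {"callID": "call-2", "recordingID": "rec-3"},
]

recordings_per_call = defaultdict(set)
for r in rows:
    recordings_per_call[r["callID"]].add(r["recordingID"])
# call-1 has two recordings; call-2 has one
```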
#### Example: Call Recording for one call to many recordings
-```json
+```json
{ "TimeGenerated": "2022-08-17T23:55:46.6304762Z",
If the agent initiates a recording and then stops and restarts the recording mul
"AudioChannelsCount": 1 } ```+ ### ACSCallRecordingIncomingOperations logs
-Properties
+Here are the properties:
| Property | Description | | -- | |
-|` timeGenerated`| Represents the timestamp (UTC) of when the log was generated. |
-|` callConnectionId`| Represents the ID of the call connection/leg, if available. |
-|` callerIpAddress`| Represents the caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
-|` correlationId`| Represents the ID for correlated events. Can be used to identify correlated events between multiple tables. |
-|` durationMs`|Represents the duration of the operation in milliseconds. |
-|` level`| Represents the severity level of the operation. |
-|` operationName`| Represents the operation associated with log records. |
-|` operationVersion`| Represents the API version associated with the operation or version of the operation (if there is no API version). |
-|` resourceId`| Represents a unique identifier for the resource that the record is associated with. |
-|` resultSignature`| Represents the sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
-|` resultType`| Represents the status of the operation. |
-|` sdkType`| Represents the SDK type used in the request. |
-|` sdkVersion`| Represents the SDK version. |
-|` serverCallId`| Represents the server Call ID. |
-|` URI`| Represents the URI of the request. |
-
- Sample
+| `timeGenerated` | The time stamp (UTC) of when the log was generated. |
+| `callConnectionId` | The ID of the call connection or leg, if available. |
+| `callerIpAddress` | The caller IP address, if the operation corresponds to an API call that comes from an entity with a publicly available IP address. |
+| `correlationId` | The ID for correlated events. You can use it to identify correlated events between multiple tables. |
+| `durationMs` | The duration of the operation in milliseconds. |
+| `level` | The severity level of the operation. |
+| `operationName` | The operation associated with log records. |
+| `operationVersion` | The API version associated with the operation or version of the operation (if there is no API version). |
+| `resourceId` | A unique identifier for the resource that the record is associated with. |
+| `resultSignature` | The substatus of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `resultType` | The status of the operation. |
+| `sdkType` | The SDK type used in the request. |
+| `sdkVersion` | The SDK version. |
+| `serverCallId` | The server call ID. |
+| `URI` | The URI of the request. |
+
+Here's an example:
```json "properties"
Properties
}
```

## Next steps

-- Get [Call Recording insights](../insights/call-recording-insights.md)
-- Learn more about [Call Recording](../../voice-video-calling/call-recording.md).
+- Get [Call Recording insights](../insights/call-recording-insights.md).
+- Learn more about [Call Recording](../../voice-video-calling/call-recording.md).
communication-services Voice And Video Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/voice-and-video-logs.md
Title: Azure Communication Services - voice and video logs -
-description: Learn about logging for Azure Communication Services Voice and Video.
+ Title: Azure Communication Services Voice Calling and Video Calling logs
+
+description: Learn about logging for Azure Communication Services Voice Calling and Video Calling.
-# Azure Communication Services voice and video Logs
+# Azure Communication Services Voice Calling and Video Calling logs
-Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. You configure these capabilities through the Azure portal.
-> [!IMPORTANT]
-> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+The content in this article refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for Communication Services, see [Enable logging in diagnostic settings](../enable-logging.md).
+
+## Data concepts
-## Data Concepts
-The following are high level descriptions of data concepts specific to Voice and Video calling. These concepts are important to review in order to understand the meaning of the data captured in the logs.
+The following high-level descriptions of data concepts are specific to Voice Calling and Video Calling. These concepts are important to review so that you can understand the meaning of the data captured in the logs.
### Entities and IDs
-A *Call*, as represented in the data, is an abstraction depicted by the `correlationId`. `CorrelationId`s are unique per Call, and are time-bound by `callStartTime` and `callDuration`. Every Call is an event that contains data from two or more *Endpoints*, which represent the various human, bot, or server participants in the Call.
+Become familiar with the following terms:
+
+- **Call**: As represented in the data, a call is an abstraction that's depicted by `correlationId`. Values for `correlationId` are unique for each call, and they're time-bound by `callStartTime` and `callDuration`.
+
+- **Participant**: This entity represents the connection between an endpoint and the server. A participant (`participantId`) is present only when the call is a group call.
-A *Participant* (`participantId`) is present only when the Call is a *Group* Call, as it represents the connection between an Endpoint and the server.
+- **Endpoint**: This is the most unique entity, represented by `endpointId`. Every call is an event that contains data from two or more endpoints. Endpoints represent the participants in the call.
-An *Endpoint* is the most unique entity, represented by `endpointId`. `EndpointType` tells you whether the Endpoint represents a human user (PSTN, VoIP), a Bot (Bot), or the server that is managing multiple Participants within a Call. When an `endpointType` is `"Server"`, the Endpoint is not assigned a unique ID. By analyzing endpointType and the number of `endpointIds`, you can determine how many users and other non-human Participants (bots, servers) join a Call. Our native SDKs (Android, iOS) reuse the same `endpointId` for a user across multiple Calls, thus enabling an understanding of experience across sessions. This differs from web-based Endpoints, which always generates a new `endpointId` for each new Call.
+ `EndpointType` tells you whether the endpoint is a human user (PSTN or VoIP), a bot, or the server that's managing multiple participants within a call. When an `endpointType` value is `"Server"`, the endpoint is not assigned a unique ID. You can analyze `endpointType` and the number of `endpointId` values to determine how many users and other nonhuman participants (bots and servers) join a call.
-A *Stream* is the most granular entity, as there is one Stream per direction (inbound/outbound) and `mediaType` (for example, audio and video).
+ Native SDKs for Android and iOS reuse the same `endpointId` value for a user across multiple calls, so you can get an understanding of experiences across sessions. This process differs from web-based endpoints, which always generate a new `endpointId` value for each new call.
-## Data Definitions
+- **Stream**: This is the most granular entity. There's one stream for each direction (inbound or outbound) and `mediaType` value (for example, `Audio` or `Video`).
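The entity model above (call → participants → endpoints) can be sketched with a small aggregation, assuming illustrative sample rows using the field names described in this section:

```python
# Illustrative sketch: count human/bot endpoints in one call from summary
# rows, using correlationId (the call), endpointType, and endpointId.
rows = [
    {"correlationId": "call-1", "endpointId": "ep-a", "endpointType": "VoIP"},
    {"correlationId": "call-1", "endpointId": "ep-b", "endpointType": "PSTN"},
    # Server endpoints manage participants and carry no unique ID:
    {"correlationId": "call-1", "endpointId": None, "endpointType": "Server"},
]

participants = {r["endpointId"] for r in rows
                if r["correlationId"] == "call-1"
                and r["endpointType"] != "Server"}
# Two non-server endpoints joined call-1
```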
-### Usage logs schema
+## Data definitions
+
+### Usage log schema
| Property | Description | | -- | |
-| `Timestamp` | The timestamp (UTC) of when the log was generated. |
-| `Operation Name` | The operation associated with log record. |
-| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties change in the future. |
-| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
-| `Properties` | Other data applicable to various modes of Communication Services. |
-| `Record ID` | The unique ID for a given usage record. |
-| `Usage Type` | The mode of usage. (for example, Chat, PSTN, NAT, etc.) |
-| `Unit Type` | The type of unit that usage is based off for a given mode of usage. (for example, minutes, megabytes, messages, etc.). |
+| `Timestamp` | The time stamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with the log record. |
+| `Operation Version` | The `api-version` value associated with the operation, if the `Operation Name` operation was performed through an API. If no API corresponds to this operation, the version represents the version of the operation, in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. The category is the granularity at which you can enable or disable logs on a resource. The properties that appear within the `properties` blob of an event are the same within a log category and resource type. |
+| `Correlation ID` | The ID for correlated events. You can use it to identify correlated events between multiple tables. |
+| `Properties` | Other data that's applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a usage record. |
+| `Usage Type` | The mode of usage (for example, Chat, PSTN, or NAT). |
+| `Unit Type` | The type of unit that usage is based on for a mode of usage (for example, minutes, megabytes, or messages). |
| `Quantity` | The number of units used or consumed for this record. |
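Because each usage record carries a usage type, a unit type, and a quantity, totals per mode are a straightforward aggregation. Here's a minimal sketch over illustrative sample rows (not real exported data):

```python
from collections import Counter

# Illustrative sketch: total consumed units per usage mode from usage-log
# rows shaped after the schema above.
rows = [
    {"UsageType": "PSTN", "UnitType": "minutes", "Quantity": 12},
    {"UsageType": "PSTN", "UnitType": "minutes", "Quantity": 3},
    {"UsageType": "Chat", "UnitType": "messages", "Quantity": 40},
]

totals = Counter()
for r in rows:
    totals[r["UsageType"]] += r["Quantity"]
# totals: 15 PSTN minutes, 40 Chat messages
```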
-### Call Summary log schema
-The Call Summary Log contains data to help you identify key properties of all Calls. A different Call Summary Log is created per each `participantId` (`endpointId` in the case of P2P calls) in the Call.
+### Call summary log schema
+
+The call summary log contains data to help you identify key properties of all calls. A different call summary log is created for each `participantId` (`endpointId` in the case of peer-to-peer [P2P] calls) value in the call.
> [!IMPORTANT]
-> Participant information in the call summary log vary based on the participant tenant. The SDK and OS version is redacted if the participant is not within the same tenant (also referred to as cross-tenant) as the ACS resource. Cross-tenants' participants are classified as external users invited by a resource tenant to join and collaborate during a call.
+> Participant information in the call summary log varies based on the participant tenant. The SDK version and OS version are redacted if the participant is not within the same tenant (also called *cross-tenant*) as the Communication Services resource. Cross-tenant participants are classified as external users invited by a resource tenant to join and collaborate during a call.
| Property | Description |
|-|-|
-| `time` | The timestamp (UTC) of when the log was generated. |
-| `operationName` | The operation associated with log record. |
-| `operationVersion` | The api-version associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| `category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
-| `correlationId` | `correlationId` is the unique ID for a Call. The `correlationId` identifies correlated events from all of the participants and endpoints that connect during a single Call, and it can be used to join data from different logs. If you ever need to open a support case with Microsoft, the `correlationId` is used to easily identify the Call you're troubleshooting. |
-| `identifier` | This value is the unique ID for the user. The identity can be an Azure Communications Services user, Azure AD user ID, Teams anonymous user ID or Teams bot ID. You can use this ID to correlate user events across different logs. |
-| `callStartTime` | A timestamp for the start of the call, based on the first attempted connection from any Endpoint. |
-| `callDuration` | The duration of the Call expressed in seconds, based on the first attempted connection and end of the last connection between two endpoints. |
-| `callType` | Contains either `"P2P"` or `"Group"`. A `"P2P"` Call is a direct 1:1 connection between only two, non-server endpoints. A `"Group"` Call is a Call that has more than two endpoints or is created as `"Group"` Call prior to the connection. |
-| `teamsThreadId` | This ID is only relevant when the Call is organized as a Microsoft Teams meeting, representing the Microsoft Teams - Azure Communication Services interoperability use-case. This ID is exposed in operational logs. You can also get this ID through the Chat APIs. |
-| `participantId` | This ID is generated to represent the two-way connection between a `"Participant"` Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
-| `participantStartTime` | Timestamp for beginning of the first connection attempt by the participant. |
-| `participantDuration` | The duration of each Participant connection in seconds, from `participantStartTime` to the timestamp when the connection is ended. |
-| `participantEndReason` | Contains Calling SDK error codes emitted by the SDK when relevant for each `participantId`. See Calling SDK error codes. |
-| `endpointId` | Unique ID that represents each Endpoint connected to the call, where the Endpoint type is defined by `endpointType`. When the value is `null`, the connected entity is the Communication Services server (`endpointType`= `"Server"`). `EndpointId` can sometimes persist for the same user across multiple calls (`correlationId`) for native clients. The number of `endpointId`s determine the number of Call Summary Logs. A distinct Summary Log is created for each `endpointId`. |
-| `endpointType` | This value describes the properties of each Endpoint connected to the Call. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, or `"Unknown"`. |
-| `sdkVersion` | Version string for the Communication Services Calling SDK version used by each relevant Endpoint. (Example: `"1.1.00.20212500"`) |
-| `osVersion` | String that represents the operating system and version of each Endpoint device. |
+| `time` | The time stamp (UTC) of when the log was generated. |
+| `operationName` | The operation associated with the log record. |
+| `operationVersion` | The `api-version` value associated with the operation, if the `operationName` operation was performed through an API. If no API corresponds to this operation, the version represents the version of the operation, in case the properties associated with the operation change in the future. |
+| `category` | The log category of the event. This property is the granularity at which you can enable or disable logs on a resource. The properties that appear within the `properties` blob of an event are the same within a log category and resource type. |
+| `correlationId` | The unique ID for a call. It identifies correlated events from all of the participants and endpoints that connect during a single call, and you can use it to join data from different logs. If you ever need to open a support case with Microsoft, you can use the `correlationId` value to easily identify the call that you're troubleshooting. |
+| `identifier` | The unique ID for the user. The identity can be an Azure Communication Services user, an Azure Active Directory (Azure AD) user ID, a Teams anonymous user ID, or a Teams bot ID. You can use this ID to correlate user events across logs. |
+| `callStartTime` | A time stamp for the start of the call, based on the first attempted connection from any endpoint. |
+| `callDuration` | The duration of the call, expressed in seconds. It's based on the first attempted connection and the end of the last connection between two endpoints. |
+| `callType` | The type of the call. It contains either `"P2P"` or `"Group"`. A `"P2P"` call is a direct 1:1 connection between only two non-server endpoints. A `"Group"` call is a call that has more than two endpoints or is created as a `"Group"` call before the connection. |
+| `teamsThreadId` | The Teams thread ID. This ID is relevant only when the call is organized as a Teams meeting. It then represents the use case of interoperability between Microsoft Teams and Azure Communication Services. <br><br>This ID is exposed in operational logs. You can also get this ID through the Chat APIs. |
+| `participantId` | The ID that's generated to represent the two-way connection between a `"Participant"` endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there's a direct connection between two endpoints, and no `participantId` value is generated. |
+| `participantStartTime` | The time stamp for the beginning of the participant's first connection attempt. |
+| `participantDuration` | The duration of each participant connection in seconds, from `participantStartTime` to the time stamp when the connection ended. |
+| `participantEndReason` | The reason for the end of a participant connection. It contains Calling SDK error codes that the SDK emits (when relevant) for each `participantId` value. |
+| `endpointId` | The unique ID that represents each endpoint connected to the call, where `endpointType` defines the endpoint type. When the value is `null`, the connected entity is the Communication Services server (`endpointType` = `"Server"`). <br><br>The `endpointId` value can sometimes persist for the same user across multiple calls (`correlationId`) for native clients. The number of `endpointId` values determines the number of call summary logs. A distinct summary log is created for each `endpointId` value. |
+| `endpointType` | This value describes the properties of each endpoint that's connected to the call. It can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, or `"Unknown"`. |
+| `sdkVersion` | The version string for the Communication Services Calling SDK version that each relevant endpoint uses (for example, `"1.1.00.20212500"`). |
+| `osVersion` | A string that represents the operating system and version of each endpoint device. |
| `participantTenantId` | The ID of the Microsoft tenant associated with the participant. This field guides cross-tenant redaction. |
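
Because `correlationId` is shared by every participant in a call, summary records can be grouped by it to reconstruct who was on each call. The following Python sketch is purely illustrative; the field names come from the schema above, but the `participants_by_call` function and the sample records are hypothetical:

```python
from collections import defaultdict

def participants_by_call(summary_records):
    """Group call summary records by call, using correlationId as the key."""
    calls = defaultdict(list)
    for record in summary_records:
        # P2P calls have no participantId, so the value may be None.
        calls[record["correlationId"]].append(record.get("participantId"))
    return dict(calls)

# Hypothetical sample records shaped like the schema above.
records = [
    {"correlationId": "call-1", "participantId": "p-1"},
    {"correlationId": "call-1", "participantId": "p-2"},
    {"correlationId": "call-2", "participantId": None},
]
print(participants_by_call(records))
```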
+### Call diagnostic log schema
+
+Call diagnostic logs provide important information about the endpoints and the media transfers for each participant. They also provide measurements that help you understand quality problems.
-### Call Diagnostic log schema
-Call Diagnostic Logs provide important information about the Endpoints and the media transfers for each Participant, and as measurements that help to understand quality issues.
-For each Endpoint within a Call, a distinct Call Diagnostic Log is created for outbound media streams (audio, video, etc.) between Endpoints.
-In a P2P Call, each log contains data relating to each of the outbound stream(s) associated with each Endpoint. In Group Calls the participantId serves as key identifier to join the related outbound logs into a distinct Participant connection. Note that Call diagnostic logs remain intact and are the same regardless of the participant tenant.
+For each endpoint within a call, a distinct call diagnostic log is created for outbound media streams (audio or video, for example) between endpoints. In a P2P call, each log contains data that relates to each of the outbound streams associated with each endpoint. In group calls, `participantId` serves as a key identifier to join the related outbound logs into a distinct participant connection. Call diagnostic logs remain intact and are the same regardless of the participant tenant.
> [!NOTE]
-> In this document, P2P and group calls are by default within the same tenant, for all call scenarios that are cross-tenant they are specified accordingly throughout the document.
+> In this article, P2P and group calls are within the same tenant by default. All cross-tenant call scenarios are specified accordingly throughout the article.
| Property | Description |
|-|-|
-| `operationName` | The operation associated with log record. |
-| `operationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| `category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
-| `correlationId` | The `correlationId` identifies correlated events from all of the participants and endpoints that connect during a single Call. `correlationId` is the unique ID for a Call. If you ever need to open a support case with Microsoft, the `correlationId` can used to easily identify the Call you're troubleshooting. |
-| `participantId` | This ID is generated to represent the two-way connection between a "Participant" Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
-| `identifier` | This value is the unique ID for the user. The identity can be an Azure Communications Services user, Azure AD user ID, Teams object ID or Teams bot ID. You can use this ID to correlate user events across different logs. |
-| `endpointId` | Unique ID that represents each Endpoint connected to the call, with Endpoint type defined by `endpointType`. When the value is `null`, it means that the connected entity is the Communication Services server. `EndpointId` can persist for the same user across multiple calls (`correlationId`) for native clients but are unique for every Call when the client is a web browser. |
-| `endpointType` | This value describes the properties of each `endpointId`. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, `"Voicemail"`, `"Anonymous"`, or `"Unknown"`. |
-| `mediaType` | This string value describes the type of media being transmitted between endpoints within each stream. Possible values include `"Audio"`, `"Video"`, `"VBSS"` (Video-Based Screen Sharing), and `"AppSharing"`. |
-| `streamId` | Non-unique integer which, together with `mediaType`, can be used to uniquely identify streams of the same `participantId`.|
-| `transportType` | String value which describes the network transport protocol per `participantId`. Can contain `"UDP"`, `"TCP"`, or `"Unrecognized"`. `"Unrecognized"` indicates that the system could not determine if the `transportType` was TCP or UDP. |
-| `roundTripTimeAvg` | This metric is the average time it takes to get an IP packet from one Endpoint to another within a `participantDuration`. This network propagation delay is related to the physical distance between the two points, the speed of light, and any overhead taken by the various routers in between. The latency is measured as one-way or Round-trip Time (RTT). Its value expressed in milliseconds, and an RTT greater than 500ms should be considered as negatively impacting the Call quality. |
-| `roundTripTimeMax` | The maximum RTT (ms) measured per media stream during a `participantDuration` in a group Call or `callDuration` in a P2P Call. |
-| `jitterAvg` | This metric is the average change in delay between successive packets. Azure Communication Services can adapt to some levels of jitter through buffering. It's only when the jitter exceeds the buffering, which is approximately at `jitterAvg` >30 ms, that a negative quality impact is likely occurring. The packets arriving at different speeds cause a speaker's voice to sound robotic. This metric is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. |
-| `jitterMax` | This metric is the maximum jitter value measured between packets per media stream. Bursts in network conditions can cause issues in the audio/video traffic flow. |
-| `packetLossRateAvg` | This metric is the average percentage of packets that are lost. Packet loss directly affects audio quality: from small, individual lost packets that have almost no impact to back-to-back burst losses that cause audio to cut out completely. The packets being dropped and not arriving at their intended destination cause gaps in the media, resulting in missed syllables and words, and choppy video and sharing. A packet loss rate of greater than 10% (0.1) should be considered a rate that's likely having a negative quality impact. This metric is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. |
-| `packetLossRateMax` | This value represents the maximum packet loss rate (%) per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. Bursts in network conditions can cause issues in the audio/video traffic flow.
-### P2P vs. Group Calls
+| `operationName` | The operation associated with the log record. |
+| `operationVersion` | The `api-version` value associated with the operation, if the `operationName` operation was performed through an API. If no API corresponds to this operation, the version represents the version of the operation, in case the properties associated with the operation change in the future. |
+| `category` | The log category of the event. This property is the granularity at which you can enable or disable logs on a resource. The properties that appear within the `properties` blob of an event are the same within a log category and resource type. |
+| `correlationId` | The unique ID for a call. It identifies correlated events from all of the participants and endpoints that connect during a single call. If you ever need to open a support case with Microsoft, you can use the `correlationId` value to easily identify the call that you're troubleshooting. |
+| `participantId` | The ID that's generated to represent the two-way connection between a `"Participant"` endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there's a direct connection between two endpoints, and no `participantId` value is generated. |
+| `identifier` | The unique ID for the user. The identity can be an Azure Communication Services user, an Azure AD user ID, a Teams object ID, or a Teams bot ID. You can use this ID to correlate user events across logs. |
+| `endpointId` | The unique ID that represents each endpoint that's connected to the call, where `endpointType` defines the endpoint type. When the value is `null`, the connected entity is the Communication Services server. `EndpointId` can persist for the same user across multiple calls (`correlationId`) for native clients but is unique for every call when the client is a web browser. |
+| `endpointType` | The value that describes the properties of each `endpointId` instance. It can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, `"Voicemail"`, `"Anonymous"`, or `"Unknown"`. |
+| `mediaType` | The string value that describes the type of media that's being transmitted between endpoints within each stream. Possible values include `"Audio"`, `"Video"`, `"VBSS"` (video-based screen sharing), and `"AppSharing"`. |
+| `streamId` | A non-unique integer that, together with `mediaType`, you can use to uniquely identify streams of the same `participantId` value.|
+| `transportType` | The string value that describes the network transport protocol for each `participantId` value. It can contain `"UDP"`, `"TCP"`, or `"Unrecognized"`. `"Unrecognized"` indicates that the system could not determine if the transport type was TCP or UDP. |
+| `roundTripTimeAvg` | The average time that it takes to get an IP packet from one endpoint to another within a `participantDuration` period. This network propagation delay is related to the physical distance between the two points, the speed of light, and any overhead that the various routers take in between. <br><br>The latency is measured as one-way time or round-trip time (RTT). Its value is expressed in milliseconds. An RTT greater than 500 ms negatively affects call quality. |
+| `roundTripTimeMax` | The maximum RTT (in milliseconds) measured for each media stream during a `participantDuration` period in a group call or during a `callDuration` period in a P2P call. |
+| `jitterAvg` | The average change in delay between successive packets. Azure Communication Services can adapt to some levels of jitter through buffering. When the jitter exceeds the buffering, which is approximately at a `jitterAvg` time greater than 30 ms, a negative quality impact is likely occurring. The packets arriving at different speeds cause a speaker's voice to sound robotic. <br><br>This metric is measured for each media stream over the `participantDuration` period in a group call or over the `callDuration` period in a P2P call. |
+| `jitterMax` | The maximum jitter value measured between packets for each media stream. Bursts in network conditions can cause problems in the audio/video traffic flow. |
+| `packetLossRateAvg` | The average percentage of packets that are lost. Packet loss directly affects audio quality. Small, individual lost packets have almost no impact, whereas back-to-back burst losses cause audio to cut out completely. The packets being dropped and not arriving at their intended destination cause gaps in the media. This situation results in missed syllables and words, along with choppy video and sharing. <br><br>A packet loss rate of greater than 10% (0.1) is likely having a negative quality impact. This metric is measured for each media stream over the `participantDuration` period in a group call or over the `callDuration` period in a P2P call. |
+| `packetLossRateMax` | This value represents the maximum packet loss rate (percentage) for each media stream over the `participantDuration` period in a group call or over the `callDuration` period in a P2P call. Bursts in network conditions can cause problems in the audio/video traffic flow.
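
The thresholds called out in the table (RTT over 500 ms, average jitter over 30 ms, average packet loss over 10 percent) can be turned into a simple triage check. The following Python helper is a hypothetical sketch, not part of any Microsoft SDK; it only reads the field names defined in the schema above:

```python
def quality_flags(diagnostic_record):
    """Return the metrics in a call diagnostic record that exceed the
    documented quality thresholds."""
    thresholds = {
        "roundTripTimeAvg": 500,   # milliseconds
        "jitterAvg": 30,           # milliseconds
        "packetLossRateAvg": 0.1,  # 10 percent
    }
    return [name for name, limit in thresholds.items()
            if diagnostic_record.get(name, 0) > limit]

# A stream with high average RTT but acceptable jitter and packet loss.
print(quality_flags({"roundTripTimeAvg": 620, "jitterAvg": 12, "packetLossRateAvg": 0.02}))
```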
-There are two types of Calls (represented by `callType`): P2P and Group.
+### P2P vs. group calls
-**P2P** calls are a connection between only two Endpoints, with no server Endpoint. P2P calls are initiated as a Call between those Endpoints and are not created as a group Call event prior to the connection.
+There are two types of calls, as represented by `callType`:
- :::image type="content" source="../media/call-logs-azure-monitor/p2p-diagram.png" alt-text="Screenshot displays P2P call across 2 endpoints.":::
+- **P2P call**: A connection between only two endpoints, with no server endpoint. P2P calls are initiated as a call between those endpoints and are not created as a group call event before the connection.
-**Group** Calls include any Call that has more than 2 Endpoints connected. Group Calls include a server Endpoint, and the connection between each Endpoint and the server. P2P Calls that add an additional Endpoint during the Call cease to be P2P, and they become a Group Call. You can determine the timeline of when each endpoints joined the call by using the `participantStartTime` and `participantDuration` metrics.
+ :::image type="content" source="../media/call-logs-azure-monitor/p2p-diagram.png" alt-text="Diagram that shows a P2P call across two endpoints.":::
+- **Group call**: Any call that has more than two endpoints connected. Group calls include a server endpoint and the connection between each endpoint and the server. P2P calls that add another endpoint during the call cease to be P2P, and they become a group call. You can determine the timeline of when each endpoint joined the call by using the `participantStartTime` and `participantDuration` metrics.
- :::image type="content" source="../media/call-logs-azure-monitor/group-call-version-a.png" alt-text="Screenshot displays group call across multiple endpoints.":::
+ :::image type="content" source="../media/call-logs-azure-monitor/group-call-version-a.png" alt-text="Diagram that shows a group call across multiple endpoints.":::
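
The distinction between the two call types can be summarized in code. This Python sketch is hypothetical (the `classify_call` function is not from any SDK); it applies the rule above that a call is P2P only while exactly two non-server endpoints, and no server endpoint, are connected:

```python
def classify_call(endpoint_types):
    """Classify a call as "P2P" or "Group" from its connected endpoint types."""
    non_server = [e for e in endpoint_types if e != "Server"]
    # P2P: exactly two endpoints, neither of which is the server.
    if len(non_server) == 2 and len(non_server) == len(endpoint_types):
        return "P2P"
    return "Group"

print(classify_call(["VOIP", "VOIP"]))            # a direct 1:1 call
print(classify_call(["VOIP", "VOIP", "Server"]))  # another endpoint joined, so it's now a group call
```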
+## Log structure
-## Log Structure
+Azure Communication Services creates two types of logs:
-Two types of logs are created: **Call Summary** logs and **Call Diagnostic** logs.
+- **Call summary logs**: Contain basic information about the call, including all the relevant IDs, time stamps, endpoints, and SDK information. For each participant within a call, Communication Services creates a distinct call summary log.
-Call Summary Logs contain basic information about the Call, including all the relevant IDs, timestamps, Endpoint and SDK information. For each participant within a call, a distinct call summary log is created (if someone rejoins a call, they have the same EndpointId, but a different ParticipantId, so there can be two Call Summary logs for that endpoint).
+ If someone rejoins a call, that participant has the same `EndpointId` value but a different `ParticipantId` value. That endpoint can then have two call summary logs.
-Call Diagnostic Logs contain information about the Stream as well as a set of metrics that indicate quality of experience measurements. For each Endpoint within a Call (including the server), a distinct Call Diagnostic Log is created for each media stream (audio, video, etc.) between Endpoints. In a P2P Call, each log contains data relating to each of the outbound stream(s) associated with each Endpoint. In a Group Call, each stream associated with `endpointType`= `"Server"` creates a log containing data for the inbound streams, and all other streams creates logs containing data for the outbound streams for all non-sever endpoints. In Group Calls, use the `participantId` as the key to join the related inbound/outbound logs into a distinct Participant connection.
+- **Call diagnostic logs**: Contain information about the stream, along with a set of metrics that indicate quality of experience measurements. For each endpoint within a call (including the server), Communication Services creates a distinct call diagnostic log for each media stream (audio or video, for example) between endpoints.
-### Example 1: P2P Call
+In a P2P call, each log contains data that relates to each of the outbound streams associated with each endpoint. In a group call, each stream associated with `endpointType` = `"Server"` creates a log that contains data for the inbound streams. All other streams create logs that contain data for the outbound streams for all non-server endpoints. In group calls, use the `participantId` value as the key to join the related inbound and outbound logs into a distinct participant connection.
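
As a hypothetical illustration of that join (the field names follow the schema above; the function itself is not from any Microsoft library), inbound records from the server endpoint and outbound records from client endpoints can be folded into one view per participant connection:

```python
from collections import defaultdict

def join_streams(diagnostic_records):
    """Group diagnostic records by participantId, splitting inbound
    (server) streams from outbound (client) streams."""
    connections = defaultdict(lambda: {"inbound": [], "outbound": []})
    for rec in diagnostic_records:
        direction = "inbound" if rec["endpointType"] == "Server" else "outbound"
        connections[rec["participantId"]][direction].append(rec["streamId"])
    return dict(connections)
```

For example, one participant with a server-side audio record and a client-side audio record yields a single connection with one inbound and one outbound stream.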
-The below diagram represents two endpoints connected directly in a P2P Call. In this example, 2 Call Summary Logs would be created (one per `participantID`) and four Call Diagnostic Logs would be created (one per media stream). Each log contains data relating to the outbound stream of the `participantID`.
+### Example: P2P call
+The following diagram represents two endpoints connected directly in a P2P call. In this example, Communication Services creates two call summary logs (one for each `participantID` value) and four call diagnostic logs (one for each media stream). Each log contains data that relates to the outbound stream of `participantID`.
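
The counts in this example follow directly from the structure: one summary log per participant and one diagnostic log per outbound media stream. Here's a hypothetical back-of-the-envelope helper, assuming every endpoint sends the same number of outbound streams:

```python
def expected_log_counts(participants, streams_per_endpoint):
    """Return (summary_logs, diagnostic_logs) for a call, assuming a
    uniform number of outbound media streams per endpoint."""
    summary = participants                            # one summary log per participant
    diagnostic = participants * streams_per_endpoint  # one diagnostic log per outbound stream
    return summary, diagnostic

# Two endpoints, each sending audio and video: 2 summary logs, 4 diagnostic logs.
print(expected_log_counts(2, 2))
```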
-### Example 2: Group Call
+### Example: Group call
-The below diagram represents a Group Call example with three `participantIDs`, which means three `participantIDs` (`endpointIds` can potentially appear in multiple Participants, e.g. when rejoining a Call from the same device) and a Server Endpoint. One Call Summary Logs would be created per `participantID`, and four Call Diagnostic Logs would be created relating to each `participantID`, one for each media stream.
+The following diagram represents a group call example with three `participantID` values (which means three participants) and a server endpoint. Values for `endpointId` can potentially appear in multiple participants (for example, when they rejoin a call from the same device). Communication Services creates one call summary log for each `participantID` value. It creates four call diagnostic logs: one for each media stream per `participantID`.
-
-### Example 3: P2P Call cross-tenant
-The below diagram represents two participants across multiple tenants that are connected directly in a P2P Call. In this example, one Call Summary Logs would be created (one per participant) with redacted OS and SDK versioning and four Call Diagnostic Logs would be created (one per media stream). Each log contains data relating to the outbound stream of the `participantID`.
-
+### Example: Cross-tenant P2P call
-### Example 4: Group Call cross-tenant
-The below diagram represents a Group Call example with three `participantIds` across multiple tenants. One Call Summary Logs would be created per participant with redacted OS and SDK versioning, and four Call Diagnostic Logs would be created relating to each `participantId` , one for each media stream.
+The following diagram represents two participants across multiple tenants that are connected directly in a P2P call. In this example, Communication Services creates one call summary log (one for each participant) with redacted OS and SDK versions. Communication Services also creates four call diagnostic logs (one for each media stream). Each log contains data that relates to the outbound stream of `participantID`.
+### Example: Cross-tenant group call
+
+The following diagram represents a group call example with three `participantId` values across multiple tenants. Communication Services creates one call summary log for each participant with redacted OS and SDK versions. Communication Services also creates four call diagnostic logs that relate to each `participantId` value (one for each media stream).
+ > [!NOTE]
-> Only outbound diagnostic logs can be supported in this release.
-> Please note that participants and bots identity are treated the same way, as a result OS and SDK versioning associated to the bot and the participant can be redacted
+> This release supports only outbound diagnostic logs.
+> OS and SDK versions associated with the bot and the participant can be redacted because Communication Services treats identities of participants and bots the same way.
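
The redaction rule can be pictured as follows. This Python function is purely hypothetical (Communication Services performs the redaction service-side; nothing like this exists in a public SDK), but it captures the behavior described above: version fields are redacted whenever the participant's tenant differs from the resource's tenant:

```python
def redact_cross_tenant(record, resource_tenant_id):
    """Redact SDK and OS versions when the participant's tenant differs
    from the tenant that owns the Communication Services resource."""
    if record.get("participantTenantId") != resource_tenant_id:
        # Return a copy with the version fields replaced, leaving the input intact.
        record = dict(record, sdkVersion="Redacted", osVersion="Redacted")
    return record
```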
-## Sample Data
+## Sample data
-### P2P Call
+### P2P call
-Shared fields for all logs in the call:
+Here are shared fields for all logs in a P2P call:
```json "time": "2021-07-19T18:46:50.188Z",
Shared fields for all logs in the call:
"correlationId": "8d1a8374-344d-4502-b54b-ba2d6daaf0ae", ```
-#### Call Summary Logs
-Call Summary Logs have shared operation and category information:
+#### Call summary logs
+
+Call summary logs have shared operation and category information:
```json "operationName": "CallSummary",
Call Summary Logs have shared operation and category information:
"category": "CallSummary", ```
-Call Summary for VoIP user 1
+
+Here's a call summary for VoIP user 1:
+ ```json "properties": { "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
Call Summary for VoIP user 1
} ```
-Call summary for VoIP user 2
+Here's a call summary for VoIP user 2:
+ ```json "properties": { "identifier": "acs:7af14122-9ac7-4b81-80a8-4bf3582b42d0_06f9276d-8efe-4bdd-8c22-ebc5434903f0",
Call summary for VoIP user 2
"osVersion": "null" } ```
-Call Summary Logs crossed tenants: Call summary for VoIP user 1
+
+Here's a cross-tenant call summary log for VoIP user 1:
+ ```json "properties": { "identifier": "1e4c59e1-r1rr-49bc-893d-990dsds8f9f5",
Call Summary Logs crossed tenants: Call summary for VoIP user 1
"osVersion": "Redacted" } ```
-Call summary for PSTN call
+
+Here's a call summary for a PSTN call:
> [!NOTE]