Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | How To Create Group Based Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md | This article describes how you can create and manage group-based permissions. 1. Select **Next**. 1. If you selected **Admin for all Authorization System Types**: select Identities to add for each Authorization System. Added Identities will have access to submit requests from the **Remediation** tab. 1. If you selected **Admin for selected Authorization System Types**: select **Viewer**, **Controller**, or **Approver** for the **Authorization System Types** you want, then select **Next** and select Identities to add for each Authorization System. Added Identities will have access to submit requests from the **Remediation** tab. 1. If you selected **Custom**: select the **Authorization System Types** you want, select **Viewer**, **Controller**, or **Approver** for the **Authorization Systems** you want, then select **Next** and select Identities to add for each Authorization System. Added Identities will have access to submit requests from the **Remediation** tab. 1. Select **Save**. The following message appears: **New Group Has been Created Successfully.** 1. To see the group you created in the **Groups** table, refresh the page. |
active-directory | Troubleshoot Mac Sso Extension Plugin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-mac-sso-extension-plugin.md | Title: Troubleshooting the Microsoft Enterprise SSO Extension plugin on Apple devices. Description: This article helps to troubleshoot deploying the Microsoft Enterprise SSO plug-in on Apple devices. Last updated: 02/02/2023. Customer intent: As an IT admin, I want to learn how to discover and fix issues related to the Microsoft Enterprise SSO plug-in on macOS and iOS.

# Troubleshooting the Microsoft Enterprise SSO Extension plugin on Apple devices

This article provides troubleshooting guidance that administrators can use to resolve issues with deploying and using the [Enterprise SSO plugin](../develop/apple-sso-plugin.md). The Apple SSO extension can be deployed to iOS/iPadOS and macOS.

Organizations may opt to deploy SSO to their corporate devices to provide a better experience for their end users. On Apple platforms, this process involves implementing Single Sign-On (SSO) via [Primary Refresh Tokens](concept-primary-refresh-token.md). SSO relieves end users of the burden of excessive authentication prompts.

Microsoft has implemented a plugin built on top of Apple's SSO framework that provides brokered authentication for applications integrated with Azure Active Directory (Azure AD). For more information, see the article [Microsoft Enterprise SSO plug-in for Apple devices](../develop/apple-sso-plugin.md).

## Extension types

Apple supports two types of SSO extensions as part of its framework: **Redirect** and **Credential**. The Microsoft Enterprise SSO plugin is implemented as a Redirect type, and is best suited for brokering authentication to Azure AD. The following table compares the two types of extensions.
| Extension type | Best suited for | How it works | Key differences |
|---|---|---|---|
| Redirect | Modern authentication methods such as OpenID Connect, OAuth2, and SAML (Azure Active Directory) | The operating system intercepts the authentication request from the application to the identity provider URLs defined in the extension MDM configuration profile. Redirect extensions receive URLs, headers, and body. | Requests credentials before requesting data. Uses URLs in the MDM configuration profile. |
| Credential | Challenge-and-response authentication types like **Kerberos** (on-premises Active Directory Domain Services) | The request is sent from the application to the authentication server (AD domain controller). Credential extensions are configured with HOSTS in the MDM configuration profile. If the authentication server returns a challenge that matches a host listed in the profile, the operating system routes the challenge to the extension. The extension can handle or reject the challenge. If handled, the extension returns the authorization headers to complete the request, and the authentication server returns a response to the caller. | Requests data, then gets challenged for authentication. Uses HOSTS in the MDM configuration profile. |

Microsoft has implementations of brokered authentication for the following client operating systems:

| OS | Authentication broker |
|---|---|
| Windows | Web Account Manager (WAM) |
| iOS/iPadOS | Microsoft Authenticator |
| Android | Microsoft Authenticator or Microsoft Intune Company Portal |
| macOS | Microsoft Intune Company Portal (via SSO Extension) |

All Microsoft broker applications use a key artifact known as a Primary Refresh Token (PRT), which is a JSON Web Token (JWT) used to acquire access tokens for applications and web resources secured with Azure AD.
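Because the PRT is a JWT, it shares the standard three-part structure of header, payload, and signature, each base64url-encoded and separated by dots. The following is a minimal sketch of inspecting a JWT header with standard tools; the token shown is fabricated purely for illustration, and a real PRT should never be pasted into shared tooling, since it is a powerful credential:

```shell
# A JWT has three dot-separated base64url segments: header.payload.signature.
# The token below is a made-up example, NOT a real PRT.
# (Real tokens use base64url and may need '=' padding before decoding;
# this example's header happens to decode as-is.)
jwt='eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJkZW1vIn0.c2ln'
printf '%s' "$jwt" | cut -d. -f1 | base64 -d; echo   # -> {"alg":"HS256"}
```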
When deployed through an MDM, the Enterprise SSO extension for macOS or iOS obtains a PRT that is similar to the PRTs used on Windows devices by the Web Account Manager (WAM). For more information, see the article [What is a Primary Refresh Token](concept-primary-refresh-token.md).

## Troubleshooting model

The following flowchart outlines a logical flow for troubleshooting the SSO extension. The rest of this article goes into detail on the steps depicted in this flowchart. The troubleshooting can be broken down into two separate focus areas: [Deployment](#deployment-troubleshooting) and [Application Auth Flow](#application-auth-flow-troubleshooting).

## Deployment troubleshooting

Most issues that customers encounter stem either from an improper Mobile Device Management (MDM) configuration of the SSO extension profile, or from the Apple device failing to receive the configuration profile from the MDM. This section covers the steps you can take to ensure that the MDM profile has been deployed to a Mac and that it has the correct configuration.

### Deployment requirements

- macOS operating system: **version 10.15 (Catalina)** or greater.
- iOS operating system: **version 13** or greater.
- Device is managed by any MDM vendor that supports [Apple macOS and/or iOS](https://support.apple.com/guide/deployment/dep1d7afa557/web) (MDM enrollment).
- Authentication broker software installed: [**Microsoft Intune Company Portal**](/mem/intune/apps/apps-company-portal-macos) or [**Microsoft Authenticator for iOS**](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a).

#### Check macOS operating system version

Use the following steps to check the operating system (OS) version on the macOS device. Apple SSO extension profiles are only deployed to devices running **macOS 10.15 (Catalina)** or greater.
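Besides the manual checks below, the 10.15 version gate can be evaluated in a script, which is handy when auditing a fleet. The following is a minimal sketch; the `supports_sso` helper name is ours, not part of any Microsoft tooling, and it parses a version string of the form produced by `sw_vers -productVersion`:

```shell
# supports_sso VERSION -> prints "yes" if VERSION meets the macOS 10.15 floor.
supports_sso() {
  major="${1%%.*}"                      # "13.0.1" -> "13"
  rest="${1#*.}"; minor="${rest%%.*}"   # "13.0.1" -> "0"
  if [ "$major" -gt 10 ] || { [ "$major" -eq 10 ] && [ "$minor" -ge 15 ]; }; then
    echo yes
  else
    echo no
  fi
}

# On a real Mac you would feed it the live value:
#   supports_sso "$(sw_vers -productVersion)"
supports_sso "13.0.1"    # yes
supports_sso "10.14.6"   # no
```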
You can check the macOS version from either the [User Interface](#user-interface) or from the [Terminal](#terminal).

##### User interface

1. From the macOS device, click on the Apple icon in the top-left corner and select **About This Mac**.
1. The operating system version is listed beside **macOS**.

##### Terminal

1. From the macOS device, open Terminal from the **Applications** -> **Utilities** folder.
1. When the Terminal opens, type **sw_vers** at the prompt and look for a result like the following:

   ```bash
   % sw_vers
   ProductName: macOS
   ProductVersion: 13.0.1
   BuildVersion: 22A400
   ```

#### MDM deployment of SSO extension configuration profile

Work with your MDM administrator (or Device Management team) to ensure that the extension configuration profile is deployed to the Apple devices. The extension profile can be deployed from any MDM that supports macOS or iOS devices.

> [!IMPORTANT]
> Apple requires that devices are enrolled into an MDM for the SSO extension to be deployed.

The following articles provide specific MDM installation guidance depending on which OS you're deploying the extension to:

- [**iOS/iPadOS**: Deploy the Microsoft Enterprise SSO plug-in](/mem/intune/configuration/use-enterprise-sso-plug-in-ios-ipados-with-intune)
- [**macOS**: Deploy the Microsoft Enterprise SSO plug-in](/mem/intune/configuration/use-enterprise-sso-plug-in-macos-with-intune)

> [!IMPORTANT]
> Although any MDM is supported for deploying the SSO extension, many organizations implement [**device-based Conditional Access policies**](../conditional-access/concept-conditional-access-grant.md#require-device-to-be-marked-as-compliant) by way of evaluating MDM compliance policies. If a third-party MDM is being used, ensure that the MDM vendor supports [**Intune Partner Compliance**](/mem/intune/protect/device-compliance-partners) if you would like to use device-based Conditional Access policies.
When the SSO extension is deployed via Intune, or via an MDM provider that supports Intune Partner Compliance, the extension can pass the device certificate to Azure AD so that device authentication can be completed.

#### Validate SSO configuration profile on macOS device

Assuming the MDM administrator has followed the steps in the previous section, [MDM deployment of SSO extension configuration profile](#mdm-deployment-of-sso-extension-configuration-profile), the next step is to verify that the profile has been deployed successfully to the device.

##### Locate SSO extension MDM configuration profile

1. From the macOS device, click on the **Spotlight icon**.
1. When the **Spotlight Search** appears, type **Profiles** and hit **return**.
1. This action should bring up the **Profiles** panel within **System Settings**.

   :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/profiles-within-system-settings.png" alt-text="Screenshot showing configuration profiles.":::

   | Screenshot callout | Description |
   |:---:|---|
   | **1** | Indicates that the device is under **MDM** management. |
   | **2** | There will likely be multiple profiles to choose from. In this example, the Microsoft Enterprise SSO extension profile is called **Extensible Single Sign On Profile-32f37be3-302e-4549-a3e3-854d300e117a**. |

   > [!NOTE]
   > Depending on the type of MDM being used, there could be several profiles listed, and their naming scheme is arbitrary depending on the MDM configuration. Select each one and inspect that the **Settings** row indicates that it is a **Single Sign On Extension**.

1. Double-click on the configuration profile that matches a **Settings** value of **Single Sign On Extension**.
   :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/sso-extension-config-profile.png" alt-text="Screenshot showing the SSO extension configuration profile.":::

   | Screenshot callout | Configuration profile setting | Description |
   |:---:|:---|---|
   | **1** | **Signed** | Signing authority of the MDM provider. |
   | **2** | **Installed** | Date/timestamp showing when the extension was installed (or updated). |
   | **3** | **Settings: Single Sign On Extension** | Indicates that this configuration profile is an **Apple SSO Extension** type. |
   | **4** | **Extension** | Identifier that maps to the **bundle ID** of the application that runs the **Microsoft Enterprise Extension Plugin**. The identifier must **always** be set to **`com.microsoft.CompanyPortalMac.ssoextension`**, and the Team Identifier must appear as **(UBF8T346G9)** if the profile is installed on a macOS device. *Note: if any values differ, the MDM won't invoke the extension correctly.* |
   | **5** | **Type** | The **Microsoft Enterprise SSO Extension** must **always** be set to a **Redirect** extension type. For more information, see [Extension types](#extension-types). |
   | **6** | **URLs** | The login URLs belonging to the identity provider **(Azure AD)**. See the list of [supported URLs](../develop/apple-sso-plugin.md#manual-configuration-for-other-mdm-services). |

   All Apple SSO Redirect extensions must have the following MDM payload components in the configuration profile:

   | MDM payload component | Description |
   |---|---|
   | **Extension Identifier** | Includes both the Bundle Identifier and Team Identifier of the application on the macOS device that runs the extension. Note: The Microsoft Enterprise SSO Extension should always be set to **com.microsoft.CompanyPortalMac.ssoextension (UBF8T346G9)** to inform the macOS operating system that the extension client code is part of the **Intune Company Portal application**. |
   | **Type** | Must be set to **Redirect** to indicate a **Redirect Extension** type. |
   | **URLs** | Endpoint URLs of the identity provider (Azure AD), where the operating system routes authentication requests to the extension. |
   | **Optional Extension Specific Configuration** | Dictionary values that may act as configuration parameters. In the context of the Microsoft Enterprise SSO Extension, these configuration parameters are called feature flags. See [feature flag definitions](../develop/apple-sso-plugin.md#more-configuration-options). |

   > [!NOTE]
   > The MDM definitions for Apple's SSO extension profile can be referenced in the article [Extensible Single Sign-on MDM payload settings for Apple devices](https://support.apple.com/guide/deployment/depfd9cdf845/web). Microsoft has implemented its extension based on this schema. See [Microsoft Enterprise SSO plug-in for Apple devices](../develop/apple-sso-plugin.md#manual-configuration-for-other-mdm-services).

1. To verify that the correct profile for the Microsoft Enterprise SSO Extension is installed, check that the **Extension** field matches **com.microsoft.CompanyPortalMac.ssoextension (UBF8T346G9)**.
1. Take note of the **Installed** field in the configuration profile, as it can be a useful troubleshooting indicator when changes are made to the configuration.

If the correct configuration profile has been verified, proceed to the [Application auth flow troubleshooting](#application-auth-flow-troubleshooting) section.

##### MDM configuration profile is missing

If the SSO extension configuration profile doesn't appear in the **Profiles** list after following the [previous section](#locate-sso-extension-mdm-configuration-profile), it could be that the MDM configuration has User/Device targeting enabled, which is effectively **filtering out** the user or device from receiving the configuration profile.
Check with your MDM administrator, and collect the **Console** logs as described in the [next section](#collect-mdm-specific-console-logs).

###### Collect MDM specific console logs

1. From the macOS device, click on the **Spotlight icon**.
1. When the **Spotlight Search** appears, type **Console** and hit **return**.
1. Click the **Start** button to enable Console trace logging.

   :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/console-window-start-button.png" alt-text="Screenshot showing the Console app and the start button being clicked.":::

1. Have the MDM administrator try to redeploy the configuration profile to this macOS device/user and force a sync cycle.
1. Type **subsystem:com.apple.ManagedClient** into the **search bar** and hit **return**.

   :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/console-subsystem-filter.png" alt-text="Screenshot showing the Console app with the subsystem filter.":::

1. Where the cursor is flashing in the **search bar**, type **message:Extensible**.

   :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/filter-console-message-extensible.png" alt-text="Screenshot showing the console being further filtered on the message field.":::

1. You should now see the MDM Console logs filtered on **Extensible SSO** configuration profile activities. The following screenshot shows a log entry **Installed configuration profile**, showing that the configuration profile was installed.

## Application auth flow troubleshooting

The guidance in this section assumes that the macOS device has a correctly deployed configuration profile. See [Validate SSO configuration profile on macOS device](#validate-sso-configuration-profile-on-macos-device) for the steps.

Once deployed, the **Microsoft Enterprise SSO Extension for Apple devices** supports two types of application authentication flows for each application type.
When troubleshooting, it's important to understand which type of application is being used.

### Application types

| Application type | Interactive auth | Silent auth | Description | Examples |
|---|:---:|:---:|---|:---:|
| [**Native MSAL app**](../develop/apple-sso-plugin.md#applications-that-use-msal) | X | X | MSAL (Microsoft Authentication Library) is an application developer framework tailored for building applications with the Microsoft identity platform (Azure AD).<br>Apps built on **MSAL version 1.1 or greater** are able to integrate with the Microsoft Enterprise SSO Extension.<br>*If the application is SSO extension (broker) aware, it utilizes the extension without any further configuration.* For more information, see the [MSAL developer sample documentation](https://github.com/AzureAD/microsoft-authentication-library-for-objc). | Microsoft To Do |
| [**Non-MSAL Native/Browser SSO**](../develop/apple-sso-plugin.md#applications-that-dont-use-msal) | | X | Applications that use Apple networking technologies or webviews can be configured to obtain a shared credential from the SSO Extension.<br>Feature flags must be configured to ensure that the bundle ID for each app is allowed to obtain the shared credential (PRT). | Microsoft Word<br>Safari<br>Microsoft Edge<br>Visual Studio |

> [!IMPORTANT]
> Not all Microsoft first-party native applications use the MSAL framework. At the time of this article's publication, most of the Microsoft Office macOS applications still rely on the older ADAL library framework, and thus use the Browser SSO flow.

#### How to find the bundle ID for an application on macOS

1. From the macOS device, click on the **Spotlight icon**.
1. When the **Spotlight Search** appears, type **Terminal** and hit **return**.
1. When the Terminal opens, type **`osascript -e 'id of app "<appname>"'`** at the prompt.
   Some examples follow:

   ```bash
   % osascript -e 'id of app "Safari"'
   com.apple.Safari

   % osascript -e 'id of app "OneDrive"'
   com.microsoft.OneDrive

   % osascript -e 'id of app "Microsoft Edge"'
   com.microsoft.edgemac
   ```

1. Now that the bundle ID(s) have been gathered, follow the [guidance to configure the feature flags](../develop/apple-sso-plugin.md#enable-sso-for-all-apps-with-a-specific-bundle-id-prefix) to ensure that **Non-MSAL Native/Browser SSO apps** can utilize the SSO Extension. **Note: All bundle IDs are case sensitive in the feature flag configuration.**

> [!CAUTION]
> Applications that don't use Apple networking technologies (**like WKWebview and NSURLSession**) can't use the shared credential (PRT) from the SSO Extension. Both **Google Chrome** and **Mozilla Firefox** fall into this category. Even if they're configured in the MDM configuration profile, the result will be a regular authentication prompt in the browser.

### Bootstrapping

By default, only MSAL apps invoke the SSO Extension, which in turn acquires a shared credential (PRT) from Azure AD. However, the **Safari** browser application or other **Non-MSAL** applications can be configured to acquire the PRT. See [Allow users to sign in from unknown applications and the Safari browser](../develop/apple-sso-plugin.md#allow-users-to-sign-in-from-unknown-applications-and-the-safari-browser). After the SSO extension acquires a PRT, it stores the credential in the user's login Keychain. Next, check that the PRT is present in the user's Keychain:

#### Checking keychain access for PRT

1. From the macOS device, click on the **Spotlight icon**.
1. When the **Spotlight Search** appears, type **Keychain Access** and hit **return**.
1. Under **Default Keychains**, select **Local Items (or iCloud)**.

   - Ensure that **All Items** is selected.
   - In the search bar on the right-hand side, type **primaryrefresh** (to filter).
   :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/prt-located-in-keychain-access.png" alt-text="Screenshot showing how to find the PRT in the Keychain Access app.":::

   | Screenshot callout | Keychain credential component | Description |
   |:---:|:---|---|
   | **1** | **All Items** | Shows all types of credentials across Keychain Access. |
   | **2** | **Keychain search bar** | Allows filtering by credential. To filter for the Azure AD PRT, type **`primaryrefresh`**. |
   | **3** | **Kind** | Refers to the type of credential. The Azure AD PRT credential is an **Application Password** credential type. |
   | **4** | **Account** | Displays the Azure AD user account that owns the PRT, in the format **`UserObjectId.TenantId-login.windows.net`**. |
   | **5** | **Where** | Displays the full name of the credential. The Azure AD PRT credential begins with the following format: **`primaryrefreshtoken-29d9ed98-a469-4536-ade2-f981bc1d605`**. The **29d9ed98-a469-4536-ade2-f981bc1d605** is the Application ID of the **Microsoft Authentication Broker** service, which is responsible for handling PRT acquisition requests. |
   | **6** | **Modified** | Shows when the credential was last updated. Any time the credential is bootstrapped or updated by an interactive sign-on event, the date/timestamp is updated. |
   | **7** | **Keychain** | Indicates which Keychain the selected credential resides in. The Azure AD PRT credential resides in either the **Local Items** or **iCloud** Keychain. *Note: When iCloud is enabled on the macOS device, the **Local Items** Keychain becomes the **iCloud** Keychain.* |

1. If the PRT isn't found in Keychain Access, do the following based on the application type:

   - **Native MSAL**: Check with the application developer that the app was built with **MSAL version 1.1 or greater** and has been enabled to be broker aware. Also check the [**deployment troubleshooting steps**](#deployment-troubleshooting) to rule out any deployment issues.
   - **Non-MSAL (Safari)**: Check that the feature flag **`browser_sso_interaction_enabled`** is set to 1, not 0, in the MDM configuration profile.

#### Authentication flow after bootstrapping a PRT

Now that the PRT (shared credential) has been verified, before doing any deeper troubleshooting, it's helpful to understand the high-level steps for each application type and how it interacts with the Microsoft Enterprise SSO Extension plugin (broker app). The following animations and descriptions should help macOS administrators understand the scenario before looking at any logging data.

##### Native MSAL application

Scenario: An application developed to use MSAL (example: the **Microsoft To Do** client) that is running on an Apple device needs to sign the user in with their Azure AD account in order to access an Azure AD protected service (example: the **Microsoft To Do service**).

1. MSAL-developed applications invoke the SSO extension directly, and send the PRT to the Azure AD token endpoint along with the application's request for a token for an Azure AD protected resource.
1. Azure AD validates the PRT credential, and returns an application-specific token back to the SSO extension broker.
1. The SSO extension broker passes the token to the MSAL client application, which then sends it to the Azure AD protected resource.
1. The user is now signed into the app and the authentication process is complete.

##### Non-MSAL/Browser SSO

Scenario: A user on an Apple device opens the Safari web browser (or any Non-MSAL native app that supports the Apple networking stack) to sign into an Azure AD protected resource (example: `https://office.com`).

1. Using a Non-MSAL application (example: **Safari**), the user attempts to sign into an Azure AD integrated application (example: office.com) and is redirected to obtain a token from Azure AD.
1. As long as the Non-MSAL application is allow-listed in the MDM payload configuration, the Apple network stack intercepts the authentication request and redirects the request to the SSO Extension broker.
1. Once the SSO extension receives the intercepted request, the PRT is sent to the Azure AD token endpoint.
1. Azure AD validates the PRT, and returns an application-specific token back to the SSO Extension.
1. The application-specific token is given to the Non-MSAL client application, and the client application sends the token to access the Azure AD protected service.
1. The user has now completed sign-in and the authentication process is complete.

### Obtaining the SSO extension logs

One of the most useful tools for troubleshooting various issues with the SSO extension is the client logs from the Apple device.

#### Save SSO extension logs from Company Portal app

1. From the macOS device, click on the **Spotlight icon**.
1. When the **Spotlight Search** appears, type **Company Portal** and hit **return**.
1. When the **Company Portal** loads (note: there's no need to sign into the app), navigate to the top menu bar: **Help** -> **Save diagnostic report**.

   :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/company-portal-help-save-diagnostic.png" alt-text="Screenshot showing how to navigate the Help top menu to save the diagnostic report.":::

1. Save the Company Portal log archive to a place of your choice (for example, the Desktop).
1. Open the **CompanyPortal.zip** archive and open the **SSOExtension.log** file with any text editor.

> [!TIP]
> A handy way to view the logs is to use [**Visual Studio Code**](https://code.visualstudio.com/download) with the [**Log Viewer**](https://marketplace.visualstudio.com/items?itemName=berublan.vscode-log-viewer) extension installed.

#### Tailing SSO extension logs with terminal

During troubleshooting, it may be useful to reproduce a problem while tailing the SSOExtension logs in real time:

1. From the macOS device, click on the **Spotlight icon**.
1. When the **Spotlight Search** appears, type **Terminal** and hit **return**.
1. When the Terminal opens, type:

   ```bash
   tail -F ~/Library/Containers/com.microsoft.CompanyPortalMac.ssoextension/Data/Library/Caches/Logs/Microsoft/SSOExtension/*
   ```

   > [!NOTE]
   > The trailing /* indicates that multiple logs will be tailed, should any exist.

   ```
   % tail -F ~/Library/Containers/com.microsoft.CompanyPortalMac.ssoextension/Data/Library/Caches/Logs/Microsoft/SSOExtension/*
   ==> /Users/<username>/Library/Containers/com.microsoft.CompanyPortalMac.ssoextension/Data/Library/Caches/Logs/Microsoft/SSOExtension/SSOExtension 2022-12-25--13-11-52-855.log <==
   2022-12-29 14:49:59:281 | I | TID=783491 MSAL 1.2.4 Mac 13.0.1 [2022-12-29 19:49:59] Handling SSO request, requested operation:
   2022-12-29 14:49:59:281 | I | TID=783491 MSAL 1.2.4 Mac 13.0.1 [2022-12-29 19:49:59] Ignoring this SSO request...
   2022-12-29 14:49:59:282 | I | TID=783491 MSAL 1.2.4 Mac 13.0.1 [2022-12-29 19:49:59] Finished SSO request.
   2022-12-29 14:49:59:599 | I | Beginning authorization request
   2022-12-29 14:49:59:599 | I | TID=783491 MSAL 1.2.4 Mac 13.0.1 [2022-12-29 19:49:59] Checking for feature flag browser_sso_interaction_enabled, value in config 1, value type __NSCFNumber
   2022-12-29 14:49:59:599 | I | TID=783491 MSAL 1.2.4 Mac 13.0.1 [2022-12-29 19:49:59] Feature flag browser_sso_interaction_enabled is enabled
   2022-12-29 14:49:59:599 | I | TID=783491 MSAL 1.2.4 Mac 13.0.1 [2022-12-29 19:49:59] Checking for feature flag browser_sso_disable_mfa, value in config (null), value type (null)
   2022-12-29 14:49:59:599 | I | TID=783491 MSAL 1.2.4 Mac 13.0.1 [2022-12-29 19:49:59] Checking for feature flag disable_browser_sso_intercept_all, value in config (null), value type (null)
   2022-12-29 14:49:59:600 | I | Request does not need UI
   2022-12-29 14:49:59:600 | I | TID=783491 MSAL 1.2.4 Mac 13.0.1 [2022-12-29 19:49:59] Checking for feature flag admin_debug_mode_enabled, value in config (null), value type (null)
   ```

1. As you reproduce the issue, keep the **Terminal** window open to observe the output from the tailed **SSOExtension** logs.

### Understanding the SSO extension logs

Analyzing the SSO extension logs is an excellent way to troubleshoot the authentication flow from applications sending authentication requests to Azure AD. Any time the SSO extension broker is invoked, a series of logging activities results; these activities are known as **Authorization Requests**. The logs contain the following useful information for troubleshooting:

- Feature flag configuration
- Authorization request types
  - Native MSAL
  - Non-MSAL/Browser SSO
- Interaction with the macOS Keychain for credential retrieval/storage operations
- Correlation IDs for Azure AD sign-in events
  - PRT acquisition
  - Device registration

> [!CAUTION]
> The SSO extension logs are extremely verbose, especially when looking at Keychain credential operations.
For this reason, it's always best to understand the scenario before looking at the logs during troubleshooting.

#### Log structure

The SSO extension logs are broken down into columns. The following screenshot shows the column breakdown of the logs:

| Column | Column name | Description |
|:---:|:---|---|
| **1** | **Local Date/Time** | The **local** date and time. |
| **2** | **I-Information<br>W-Warning<br>E-Error** | Displays information, warnings, or errors. |
| **3** | **Thread ID (TID)** | Displays the thread ID of the SSO extension broker app's execution. |
| **4** | **MSAL version number** | The Microsoft Enterprise SSO extension broker plugin is built as an MSAL app. This column denotes the version of MSAL that the broker app is running. |
| **5** | **macOS version** | Shows the version of the macOS operating system. |
| **6** | **UTC Date/Time** | The **UTC** date and time. |
| **7** | **Correlation ID** | Lines in the logs that have to do with Azure AD or Keychain operations extend the UTC Date/Time column with a correlation ID. |
| **8** | **Message** | Shows the detailed messaging of the logs. Most of the troubleshooting information can be found by examining this column. |

#### Feature flag configuration

During the MDM configuration of the Microsoft Enterprise SSO Extension, optional extension-specific data can be sent as instructions to change how the SSO extension behaves. These configuration-specific instructions are known as **feature flags**. The feature flag configuration is especially important for Non-MSAL/Browser SSO authorization request types, as the bundle ID can determine whether the extension is invoked or not. See the [feature flag documentation](../develop/apple-sso-plugin.md#more-configuration-options). Every authorization request begins with a feature flag configuration report.
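Because each request logs its feature-flag checks in a predictable `Checking for feature flag <name>, value in config <value>` shape, the flags seen in a saved SSOExtension.log can be summarized with standard text tools. The following is a minimal sketch; the sample lines are copied from the tail output earlier in this article, and the `/tmp` file path is illustrative:

```shell
# Summarize which feature flags the SSO extension checked, and their values.
# Sample lines copied from the SSOExtension log output shown earlier.
cat > /tmp/ssoextension-sample.log <<'EOF'
2022-12-29 14:49:59:599 | I | TID=783491 MSAL 1.2.4 Mac 13.0.1 [2022-12-29 19:49:59] Checking for feature flag browser_sso_interaction_enabled, value in config 1, value type __NSCFNumber
2022-12-29 14:49:59:599 | I | TID=783491 MSAL 1.2.4 Mac 13.0.1 [2022-12-29 19:49:59] Checking for feature flag browser_sso_disable_mfa, value in config (null), value type (null)
2022-12-29 14:49:59:600 | I | TID=783491 MSAL 1.2.4 Mac 13.0.1 [2022-12-29 19:49:59] Checking for feature flag admin_debug_mode_enabled, value in config (null), value type (null)
EOF

# Pull out "<flag> = <configured value>" pairs; (null) means the default applies.
sed -n 's/.*Checking for feature flag \([a-z_]*\), value in config \([^,]*\),.*/\1 = \2/p' \
  /tmp/ssoextension-sample.log | sort -u
```

Against a real diagnostic report, pointing the same `sed` filter at the extracted SSOExtension.log gives a quick view of which flags were explicitly configured versus left at their defaults.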
The following screenshot will walk through an example feature flag configuration: +++| Callout | Feature flag | Description | +|::|:|:| +|**1**|**[browser_sso_interaction_enabled](../develop/apple-sso-plugin.md#allow-users-to-sign-in-from-unknown-applications-and-the-safari-browser)**|Non-MSAL or Safari browser can bootstrap a PRT | +|**2**|**[browser_sso_disable_mfa](../develop/apple-sso-plugin.md#disable-asking-for-mfa-during-initial-bootstrapping)**|During bootstrapping of the PRT credential, by default MFA is required. Notice this configuration is set to **null** which means that the default configuration will be enforced| +|**3**|**[disable_explicit_app_prompt](../develop/apple-sso-plugin.md#disable-oauth-2-application-prompts)**|Replaces **prompt=login** authentication requests from applications to reduce prompting| +|**4**|**[AppPrefixAllowList](../develop/apple-sso-plugin.md#enable-sso-for-all-apps-with-a-specific-bundle-id-prefix)**|Any Non-MSAL application that has a Bundle ID that starts with **`com.micorosoft.`** can be intercepted and handled by the SSO extension broker | ++> [!IMPORTANT] +> Feature flags set to **null** means that their **default** configuration is in place. Check **[Feature Flag documentation](../develop/apple-sso-plugin.md#more-configuration-options)** for more details ++#### MSAL native application sign-in flow ++The following section will walk through how to examine the SSO extension logs for the Native MSAL Application auth flow. For this example, we're using the [MSAL macOS/iOS sample application](https://github.com/AzureAD/microsoft-authentication-library-for-objc) as the client application, and the application is making a call to the Microsoft Graph API to display the sign-in user's information. ++##### MSAL native: Interactive flow walkthrough ++The following actions should take place for a successful interactive sign-on: ++1. The User will sign-in to the MSAL macOS sample app. +1. 
The Microsoft SSO Extension Broker is invoked and handles the request. +1. The Microsoft SSO Extension Broker undergoes the bootstrapping process to acquire a PRT for the signed-in user. +1. Store the PRT in the Keychain. +1. Check for the presence of a Device Registration object in Azure AD (WPJ). +1. Return an access token to the client application to access the Microsoft Graph with a scope of User.Read. ++> [!IMPORTANT] +> The sample log snippets that follow have been annotated with comment headers (//) that aren't seen in the logs. They're used to help illustrate a specific action being undertaken. We've documented the log snippets this way to assist with copy and paste operations. In addition, the log examples have been trimmed to only show lines of significance for troubleshooting. ++The user selects the **Call Microsoft Graph API** button to invoke the sign-in process. +++```SSOExtensionLogs +////////////////////////// +//get_accounts_operation// +////////////////////////// +Handling SSO request, requested operation: get_accounts_operation +(Default accessor) Get accounts. +(MSIDAccountCredentialCache) retrieving cached credentials using credential query +(Default accessor) Looking for token with aliases (null), tenant (null), clientId 08dc26ab-e050-465e-beb4-d3f2d66647a5, scopes (null) +(Default accessor) No accounts found in default accessor. +(Default accessor) No accounts found in other accessors. +Completed get accounts SSO request with a personal device mode. +Request complete +Request needs UI +ADB 3.1.40 -[ADBrokerAccountManager allBrokerAccounts:] +ADB 3.1.40 -[ADBrokerAccountManager allMSIDBrokerAccounts:] +(Default accessor) Get accounts. +No existing accounts found, showing webview ++///////// +//login// +///////// +Handling SSO request, requested operation: login +Handling interactive SSO request...
+Starting SSO broker request with payload: { + authority = "https://login.microsoftonline.com/common"; + "client_app_name" = MSALMacOS; + "client_app_version" = "1.0"; + "client_id" = "08dc26ab-e050-465e-beb4-d3f2d66647a5"; + "client_version" = "1.1.7"; + "correlation_id" = "3506307A-E90F-4916-9ED5-25CF81AE97FC"; + "extra_oidc_scopes" = "openid profile offline_access"; + "instance_aware" = 0; + "msg_protocol_ver" = 4; + prompt = "select_account"; + "provider_type" = "provider_aad_v2"; + "redirect_uri" = "msauth.com.microsoft.idnaace.MSALMacOS://auth"; + scope = "user.read"; +} ++//////////////////////////////////////////////////////////// +//Request PRT from Microsoft Authentication Broker Service// +//////////////////////////////////////////////////////////// +Using request handler <ADInteractiveDevicelessPRTBrokerRequestHandler: 0x117ea50b0> +(Default accessor) Looking for token with aliases (null), tenant (null), clientId 29d9ed98-a469-4536-ade2-f981bc1d605e, scopes (null) +Attempting to get Deviceless Primary Refresh Token interactively. 
+Caching AAD Environements +networkHost: login.microsoftonline.com, cacheHost: login.windows.net, aliases: login.microsoftonline.com, login.windows.net, login.microsoft.com, sts.windows.net +networkHost: login.partner.microsoftonline.cn, cacheHost: login.partner.microsoftonline.cn, aliases: login.partner.microsoftonline.cn, login.chinacloudapi.cn +networkHost: login.microsoftonline.de, cacheHost: login.microsoftonline.de, aliases: login.microsoftonline.de +networkHost: login.microsoftonline.us, cacheHost: login.microsoftonline.us, aliases: login.microsoftonline.us, login.usgovcloudapi.net +networkHost: login-us.microsoftonline.com, cacheHost: login-us.microsoftonline.com, aliases: login-us.microsoftonline.com +Resolved authority, validated: YES, error: 0 +[MSAL] Resolving authority: Masked(not-null), upn: Masked(null) +[MSAL] Resolved authority, validated: YES, error: 0 +[MSAL] Start webview authorization session with webview controller class MSIDAADOAuthEmbeddedWebviewController: +[MSAL] Presenting web view contoller. 
+``` ++The logging sample can be broken down into three segments: ++|Segment |Description | +||| +| **`get_accounts_operation`** |Checks to see if there are any existing accounts in the cache<br> - **ClientID**: The application ID registered in Azure AD for this MSAL app<br>**ADB 3.1.40** indicates the version of the Microsoft Enterprise SSO Extension Broker plugin | +|**`login`** |Broker handles the request for Azure AD:<br> - **Handling interactive SSO request...**: Denotes an interactive request<br> - **correlation_id**: Useful for cross referencing with the Azure AD server-side sign-in logs <br> - **scope**: **User.Read** API permission scope being requested from the Microsoft Graph<br> - **client_version**: version of MSAL that the application is running<br> - **redirect_uri**: MSAL apps use the format **`msauth.com.<Bundle ID>://auth`** | +|**PRT Request**|Bootstrapping process to acquire a PRT interactively has been initiated and renders the Webview SSO Session<br><br>**Microsoft Authentication Broker Service**<br> - **clientId: 29d9ed98-a469-4536-ade2-f981bc1d605e**<br> - All PRT requests are made to Microsoft Authentication Broker Service| ++The SSO Webview Controller appears, and the user is prompted to enter their Azure AD login (UPN/email). +++> [!NOTE] +> Clicking on the ***i*** in the bottom left corner of the webview controller displays more information about the SSO extension and the specifics about the app that has invoked it. ++After the user successfully enters their Azure AD credentials, the following log entries are written to the SSO extension logs: ++``` +SSOExtensionLogs +/////////////// +//Acquire PRT// +/////////////// +[MSAL] -completeWebAuthWithURL: msauth://microsoft.aad.brokerplugin/?code=(not-null)&client_info=(not-null)&state=(not-null)&session_state=(not-null) +[MSAL] Dismissed web view contoller.
+[MSAL] Result from authorization session callbackURL host: microsoft.aad.brokerplugin , has error: NO +[MSAL] (Default accessor) Looking for token with aliases ( + "login.windows.net", + "login.microsoftonline.com", + "login.windows.net", + "login.microsoft.com", + "sts.windows.net" +), tenant (null), clientId 29d9ed98-a469-4536-ade2-f981bc1d605e, scopes (null) +Saving PRT response in cache since no other PRT was found +[MSAL] Saving keychain item, item info Masked(not-null) +[MSAL] Keychain find status: 0 +Acquired PRT. ++/////////////////////////////////////////////////////////////////////// +//Discover if there is an Azure AD Device Registration (WPJ) present // +//and if so re-acquire a PRT and associate with Device ID // +/////////////////////////////////////////////////////////////////////// +WPJ Discovery: do discovery in environment 0 +Attempt WPJ discovery using tenantId. +WPJ discovery succeeded. +Using cloud authority from WPJ discovery: https://login.microsoftonline.com/common +ADBrokerDiscoveryAction completed. Continuing Broker Flow. +PRT needs upgrade as device registration state has changed. Device is joined 1, prt is joined 0 +Beginning ADBrokerAcquirePRTInteractivelyAction +Attempting to get Primary Refresh Token interactively. +Acquiring broker tokens for broker client id. +Resolving authority: Masked(not-null), upn: auth.placeholder-61945244__domainname.com +Resolved authority, validated: YES, error: 0 +Enrollment id read from intune cache : (null). +Handle silent PRT response Masked(not-null), error Masked(null) +Acquired broker tokens. +Acquiring PRT. +Acquiring PRT using broker refresh token. 
+Requesting PRT from authority https://login.microsoftonline.com/<TenantID>/oauth2/v2.0/token +[MSAL] (Default accessor) Looking for token with aliases ( + "login.windows.net", + "login.microsoftonline.com", + "login.windows.net", + "login.microsoft.com", + "sts.windows.net" +), tenant (null), clientId (null), scopes (null) +[MSAL] Acquired PRT successfully! +Acquired PRT. +ADBrokerAcquirePRTInteractivelyAction completed. Continuing Broker Flow. +Beginning ADBrokerAcquireTokenWithPRTAction +Resolving authority: Masked(not-null), upn: auth.placeholder-61945244__domainname.com +Resolved authority, validated: YES, error: 0 +Handle silent PRT response Masked(not-null), error Masked(null) ++////////////////////////////////////////////////////////////////////////// +//Provide Access Token received from Azure AD back to Client Application// +//and complete authorization request // +////////////////////////////////////////////////////////////////////////// +[MSAL] (Default cache) Removing credentials with type AccessToken, environment login.windows.net, realm TenantID, clientID 08dc26ab-e050-465e-beb4-d3f2d66647a5, unique user ID dbb22b2f, target User.Read profile openid email +ADBrokerAcquireTokenWithPRTAction succeeded. +Composing broker response. +Sending broker response. +Returning to app (msauth.com.microsoft.idnaace.MSALMacOS://auth) - protocol version: 3 +hash: 4A07DFC2796FD75A27005238287F2505A86BA7BB9E6A00E16A8F077D47D6D879 +payload: Masked(not-null) +Completed interactive SSO request. +Completed interactive SSO request. +Request complete +Completing SSO request... +Finished SSO request. +``` ++At this point in the authentication/authorization flow, the PRT has been bootstrapped and it should be visible in the macOS keychain access. See [Checking Keychain Access for PRT](#checking-keychain-access-for-prt). The **MSAL macOS sample** application uses the access token received from the Microsoft SSO Extension Broker to display the user's information. 
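When moving from client-side to server-side troubleshooting, it can help to pull every correlation ID out of an exported SSO extension log programmatically rather than by eye. A minimal illustrative sketch (this helper is ours, not part of the broker's tooling):

```python
import re

# GUID pattern used for correlation IDs in the SSO extension log text.
GUID = re.compile(
    r"\b[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}\b"
)

def correlation_ids(log_text: str) -> list[str]:
    """Return unique correlation IDs in order of first appearance."""
    seen: dict[str, None] = {}
    for line in log_text.splitlines():
        # Correlation IDs appear on payload lines such as:
        #   "correlation_id" = "3506307A-E90F-4916-9ED5-25CF81AE97FC";
        if "correlation_id" in line.lower():
            for match in GUID.findall(line):
                seen.setdefault(match, None)
    return list(seen)
```

Running this over the interactive request payload above would surface `3506307A-E90F-4916-9ED5-25CF81AE97FC`, ready to paste into the sign-in log filter described next.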
Next, examine the server-side [Azure AD Sign-in logs](../reports-monitoring/reference-basic-info-sign-in-logs.md#correlation-id) based on the correlation ID collected from the client-side SSO extension logs. For more information, see [Sign-in logs in Azure Active Directory](../reports-monitoring/concept-sign-ins.md). ++###### View Azure AD Sign-in logs by correlation ID filter ++1. Open the Azure AD Sign-in logs for the tenant where the application is registered. +1. Select **User sign-ins (interactive)**. +1. Select **Add Filters**, and then select the **Correlation Id** radio button. +1. Copy and paste the Correlation ID obtained from the SSO extension logs, and select **Apply**. ++For the MSAL interactive login flow, we expect to see an interactive sign-in for the resource **Microsoft Authentication Broker** service. This event is where the user entered their password to bootstrap the PRT. +++There are also non-interactive sign-in events, because the PRT is used to acquire the access token for the client application's request. Follow the steps in [View Azure AD Sign-in logs by correlation ID filter](#view-azure-ad-sign-in-logs-by-correlation-id-filter), but in step 2 select **User sign-ins (non-interactive)**. +++|Sign-in log attribute |Description | +||| +|**Application**| Display name of the application registration in the Azure AD tenant where the client application authenticates. | +|**Application Id**| Also referred to as the ClientID of the application registration in the Azure AD tenant. | +|**Resource**| The API resource that the client application is trying to obtain access to. In this example, the resource is the **Microsoft Graph API**. | +|**Incoming Token Type**| An incoming token type of **Primary Refresh Token (PRT)** shows the input token being used to obtain an access token for the resource. | +|**User Agent**| The user agent string in this example shows that the **Microsoft SSO Extension** is the application processing this request.
This is a useful indicator that the SSO extension is being used and a brokered auth request is taking place. | +|**Azure AD app authentication library**| When an MSAL application is used, the details of the library and the platform are written here. | +|**OAuth Scope Information**| The OAuth2 scope information requested for the access token (**User.Read**, **profile**, **openid**, **email**). | ++##### MSAL Native: Silent flow walkthrough ++After a period of time, the access token is no longer valid. If the user selects the **Call Microsoft Graph API** button again, the SSO extension attempts to refresh the access token with the already acquired PRT. ++``` +SSOExtensionLogs +///////////////////////////////////////////////////////////////////////// +//refresh operation: Assemble Request based on User information in PRT / +///////////////////////////////////////////////////////////////////////// +Beginning authorization request +Request does not need UI +Handling SSO request, requested operation: refresh +Handling silent SSO request...
+Looking account up by home account ID dbb22b2f, displayable ID auth.placeholder-61945244__domainname.com +Account identifier used for request: Masked(not-null), auth.placeholder-61945244__domainname.com +Starting SSO broker request with payload: { + authority = "https://login.microsoftonline.com/<TenantID>"; + "client_app_name" = MSALMacOS; + "client_app_version" = "1.0"; + "client_id" = "08dc26ab-e050-465e-beb4-d3f2d66647a5"; + "client_version" = "1.1.7"; + "correlation_id" = "45418AF5-0901-4D2F-8C7D-E7C5838A977E"; + "extra_oidc_scopes" = "openid profile offline_access"; + "home_account_id" = "<UserObjectId>.<TenantID>"; + "instance_aware" = 0; + "msg_protocol_ver" = 4; + "provider_type" = "provider_aad_v2"; + "redirect_uri" = "msauth.com.microsoft.idnaace.MSALMacOS://auth"; + scope = "user.read"; + username = "auth.placeholder-61945244__domainname.com"; +} +////////////////////////////////////////// +//Acquire Access Token with PRT silently// +////////////////////////////////////////// +Using request handler <ADSSOSilentBrokerRequestHandler: 0x127226a10> +Executing new request +Beginning ADBrokerAcquireTokenSilentAction +Beginning silent flow. +[MSAL] Resolving authority: Masked(not-null), upn: auth.placeholder-61945244__domainname.com +[MSAL] (Default cache) Removing credentials with type AccessToken, environment login.windows.net, realm <TenantID>, clientID 08dc26ab-e050-465e-beb4-d3f2d66647a5, unique user ID dbb22b2f, target User.Read profile openid email +[MSAL] (MSIDAccountCredentialCache) retrieving cached credentials using credential query +[MSAL] Silent controller with PRT finished with error Masked(null) +ADBrokerAcquireTokenWithPRTAction succeeded. +Composing broker response. +Sending broker response. +Returning to app (msauth.com.microsoft.idnaace.MSALMacOS://auth) - protocol version: 3 +hash: 292FBF0D32D7EEDEB520098E44C0236BA94DDD481FAF847F7FF6D5CD141B943C +payload: Masked(not-null) +Completed silent SSO request. 
+Request complete +Completing SSO request... +Finished SSO request. +``` ++The logging sample can be broken down into two segments: ++|Segment |Description | +|::|| +|**`refresh`** | Broker handles the request for Azure AD:<br> - **Handling silent SSO request...**: Denotes a silent request<br> - **correlation_id**: Useful for cross referencing with the Azure AD server-side sign-in logs <br> - **scope**: **User.Read** API permission scope being requested from the Microsoft Graph<br> - **client_version**: version of MSAL that the application is running<br> - **redirect_uri**: MSAL apps use the format **`msauth.com.<Bundle ID>://auth`**<br><br>**Refresh** has notable differences to the request payload:<br> - **authority**: Contains the Azure AD tenant URL endpoint as opposed to the **common** endpoint<br> - **home_account_id**: Shows the user account in the format **\<UserObjectId\>.\<TenantID\>**<br> - **username**: hashed UPN format **auth.placeholder-XXXXXXXX__domainname.com** | +|**PRT Refresh and Acquire Access Token** | This operation revalidates the PRT and refreshes it if necessary, before returning the access token to the calling client application. | ++We can again take the **correlation Id** obtained from the client-side **SSO Extension** logs and cross-reference it with the server-side Azure AD Sign-in logs. +++The Azure AD sign-in shows information for the Microsoft Graph resource identical to the **login** operation in the previous [interactive login section](#view-azure-ad-sign-in-logs-by-correlation-id-filter). ++#### Non-MSAL/Browser SSO application login flow ++The following section walks through how to examine the SSO extension logs for the Non-MSAL/Browser application auth flow. For this example, we're using the Apple Safari browser as the client application, and the application is making a call to the Office.com (OfficeHome) web application.
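As an aside, the **home_account_id** format noted in the refresh payload (**\<UserObjectId\>.\<TenantID\>**) can be split back into its two GUIDs when correlating entries across logs. A minimal illustrative sketch (the helper name is ours, not part of any SDK):

```python
def split_home_account_id(home_account_id: str) -> tuple[str, str]:
    """Split an Azure AD home_account_id of the form
    '<UserObjectId>.<TenantID>' into its two components.

    GUIDs contain no dots, so the first '.' is the separator.
    """
    user_object_id, tenant_id = home_account_id.split(".", 1)
    return user_object_id, tenant_id
```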
##### Non-MSAL/Browser SSO flow walkthrough ++The following actions should take place for a successful sign-on: ++1. Assume that the user has already undergone the bootstrapping process and has an existing PRT. +1. On a device with the **Microsoft SSO Extension Broker** deployed, the configured **feature flags** are checked to ensure that the application can be handled by the SSO extension. +1. Since the Safari browser adheres to the **Apple Networking Stack**, the SSO extension tries to intercept the Azure AD auth request. +1. The PRT is used to acquire a token for the resource being requested. +1. If the device is Azure AD Registered, the SSO extension passes the Device ID along with the request. +1. The SSO extension populates the header of the browser request to sign in to the resource. ++The following client-side **SSO Extension** logs show the request being handled transparently by the SSO extension broker to fulfill the request. ++``` +SSOExtensionLogs +Created Browser SSO request for bundle identifier com.apple.Safari, cookie SSO include-list ( +), use cookie sso for this app 0, initiating origin https://www.office.com +Init MSIDKeychainTokenCache with keychainGroup: Masked(not-null) +[Browser SSO] Starting Browser SSO request for authority https://login.microsoftonline.com/common +[MSAL] (Default accessor) Found 1 tokens +[Browser SSO] Checking PRTs for deviceId 73796663 +[MSAL] [Browser SSO] Executing without UI for authority https://login.microsoftonline.com/common, number of PRTs 1, device registered 1 +[MSAL] [Browser SSO] Processing request with PRTs and correlation ID in headers (null), query 67b6a62f-6c5d-40f1-8440-a8edac7a1f87 +[MSAL] Resolving authority: Masked(not-null), upn: Masked(null) +[MSAL] No cached preferred_network for authority +[MSAL] Caching AAD Environements +[MSAL] networkHost: login.microsoftonline.com, cacheHost: login.windows.net, aliases: login.microsoftonline.com, login.windows.net, login.microsoft.com, sts.windows.net
+[MSAL] networkHost: login.partner.microsoftonline.cn, cacheHost: login.partner.microsoftonline.cn, aliases: login.partner.microsoftonline.cn, login.chinacloudapi.cn +[MSAL] networkHost: login.microsoftonline.de, cacheHost: login.microsoftonline.de, aliases: login.microsoftonline.de +[MSAL] networkHost: login.microsoftonline.us, cacheHost: login.microsoftonline.us, aliases: login.microsoftonline.us, login.usgovcloudapi.net +[MSAL] networkHost: login-us.microsoftonline.com, cacheHost: login-us.microsoftonline.com, aliases: login-us.microsoftonline.com +[MSAL] Resolved authority, validated: YES, error: 0 +[MSAL] Found registration registered in login.microsoftonline.com, isSameAsRequestEnvironment: Yes +[MSAL] Passing device header in browser SSO for device id 43cfaf69-0f94-4d2e-a815-c103226c4c04 +[MSAL] Adding SSO-cookie header with PRT Masked(not-null) +SSO extension cleared cookies before handling request 1 +[Browser SSO] SSO response is successful 0 +[MSAL] Keychain find status: 0 +[MSAL] (Default accessor) Found 1 tokens +Request does not need UI +[MSAL] [Browser SSO] Checking PRTs for deviceId 73796663 +Request complete +``` ++|SSO extension log component |Description | +||| +|**Created Browser SSO request** | All Non-MSAL/Browser SSO requests begin with this line:<br> - **bundle identifier**: [Bundle ID](#how-to-find-the-bundle-id-for-an-application-on-macos): `com.apple.Safari`<br> - **initiating origin**: Web URL the browser is accessing before hitting one of the login URLs for Azure AD (https://office.com) | +|**Starting Browser SSO request for authority**|Resolves the number of PRTs and if the Device is Registered:<br>https://login.microsoftonline.com/common, number of **PRTs 1, device registered 1** | +|**Correlation ID** | [Browser SSO] Processing request with PRTs and correlation ID in headers (null), query **\<CorrelationID\>**. 
This ID is important for cross-referencing with the Azure AD server-side sign-in logs | +|**Device Registration** | Optionally, if the device is Azure AD Registered, the SSO extension can pass the device header in Browser SSO requests: <br> - Found registration registered in<br> - **login.microsoftonline.com, isSameAsRequestEnvironment: Yes** <br><br>Passing device header in browser SSO for **device id** `43cfaf69-0f94-4d2e-a815-c103226c4c04`| ++Next, use the correlation ID obtained from the Browser SSO extension logs to cross-reference the Azure AD Sign-in logs. +++|Sign-in log attribute |Description | +||| +|**Application**| Display name of the application registration in the Azure AD tenant where the client application authenticates. In this example, the display name is **OfficeHome**. | +|**Application Id**| Also referred to as the ClientID of the application registration in the Azure AD tenant. | +|**Resource**| The API resource that the client application is trying to obtain access to. In this example, the resource is the **OfficeHome** web application. | +|**Incoming Token Type**| An incoming token type of **Primary Refresh Token (PRT)** shows the input token being used to obtain an access token for the resource. | +|**Authentication method detected**| Under the **Authentication Details** tab, the value of **Azure AD SSO plug-in** is a useful indicator that the SSO extension is being used to facilitate the Browser SSO request | +|**Azure AD SSO extension version**| Under the **Additional Details** tab, this value shows the version of the Microsoft Enterprise SSO extension Broker app. | +|**Device ID**| If the device is registered, the SSO extension can pass the Device ID to handle device authentication requests. | +|**Operating System**| Shows the type of operating system. | +|**Compliant**| The SSO extension can facilitate compliance policies by passing the device header.
The requirements are:<br> - **Azure AD Device Registration**<br> - **MDM Management**<br> - **Intune or Intune Partner Compliance** | +|**Managed**| Indicates that the device is under management. | +|**Join Type**| macOS and iOS, if registered, can only be of type: **Azure AD Registered**. | ++## Next steps ++- [Microsoft Enterprise SSO plug-in for Apple devices (preview)](../develop/apple-sso-plugin.md) +- [Deploy the Microsoft Enterprise SSO plug-in for Apple Devices (preview)](/mem/intune/configuration/use-enterprise-sso-plug-in-ios-ipados-macos) |
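Filtering the server-side sign-in logs by correlation ID can also be scripted instead of done in the portal. Microsoft Graph exposes sign-in events at `auditLogs/signIns`, which accepts an OData `$filter` on `correlationId`. The sketch below only builds the request URL; acquiring a Graph token with the required permissions (for example `AuditLog.Read.All`) is out of scope here, and you should verify filter support against the Graph reference for your tenant's API version:

```python
from urllib.parse import quote

# Microsoft Graph v1.0 sign-in events endpoint.
GRAPH_SIGNINS = "https://graph.microsoft.com/v1.0/auditLogs/signIns"

def signins_by_correlation_id(correlation_id: str) -> str:
    """Build a Graph request URL filtering sign-in events by correlation ID."""
    odata_filter = f"correlationId eq '{correlation_id}'"
    return f"{GRAPH_SIGNINS}?$filter={quote(odata_filter)}"
```

The resulting URL can then be issued with any HTTP client that attaches a bearer token.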
active-directory | How To Connect Group Writeback V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md | -> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](#understand-limitations-of-public-preview) before you enable this functionality. +> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](#understand-limitations-of-public-preview) before you enable this functionality. You should not deploy the functionality to write back security groups in your production environment. We are planning to replace the AADConnect security group writeback functionality with the new Cloud Sync group writeback feature, and when this releases we will remove the AADConnect Group Writeback functionality. This does not impact M365 group writeback functionality, which will remain unchanged. There are two versions of group writeback. The original version is in general availability and is limited to writing back Microsoft 365 groups to your on-premises Active Directory instance as distribution groups. The new, expanded version of group writeback is in public preview and enables the following capabilities: |
azure-arc | Create Data Controller Using Kubernetes Native Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md | Save a copy of [bootstrapper-unified.yaml](https://raw.githubusercontent.com/mic > [!IMPORTANT] > The bootstrapper-unified.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment can't directly access the Microsoft Container Registry, you can do the following:-- Follow the steps to [pull the container images from the Microsoft Container Registry and push them to a private container registry](offline-deployment.md).-- [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) named `arc-private-registry` for your private container registry.-- Change the image URL for the bootstrapper image in the bootstrap.yaml file.-- Replace `arc-private-registry` in the bootstrap.yaml file if a different name was used for the image pull secret.+> - Follow the steps to [pull the container images from the Microsoft Container Registry and push them to a private container registry](offline-deployment.md). +> - [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) named `arc-private-registry` for your private container registry. +> - Change the image URL for the bootstrapper image in the bootstrap.yaml file. +> - Replace `arc-private-registry` in the bootstrap.yaml file if a different name was used for the image pull secret. Run the following command to create the namespace and bootstrapper service with the edited file. |
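As a companion to the private-registry steps above, the `arc-private-registry` image pull secret can also be declared as a Kubernetes manifest rather than created imperatively with `kubectl create secret`. A hedged sketch (the namespace and the base64-encoded Docker config value are placeholders you must supply):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: arc-private-registry   # must match the image pull secret name referenced in bootstrap.yaml
  namespace: arc               # placeholder: use your data controller namespace
type: kubernetes.io/dockerconfigjson
data:
  # placeholder: base64-encoded Docker config.json containing your private registry credentials
  .dockerconfigjson: <BASE64_DOCKER_CONFIG_JSON>
```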
azure-arc | Extensions Release | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md | Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 01/23/2023 Last updated : 02/03/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes." For more information, see [Introduction to Kubernetes compute target in AzureML] ## Flux (GitOps) -[GitOps on Azure Arc-enabled Kubernetes](conceptual-gitops-flux2.md) uses [Flux v2](https://fluxcd.io/docs/), a popular open-source tool set, to help manage cluster configuration and application deployment. GitOps is enabled in the cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` cluster extension resource. +[GitOps on AKS and Azure Arc-enabled Kubernetes](conceptual-gitops-flux2.md) uses [Flux v2](https://fluxcd.io/docs/), a popular open-source tool set, to help manage cluster configuration and application deployment. GitOps is enabled in the cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` cluster extension resource. For more information, see [Tutorial: Deploy applications using GitOps with Flux v2](tutorial-use-gitops-flux2.md). |
azure-functions | Functions Bindings Error Pages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md | Starting with version 3.x of the Azure Functions runtime, you can define retry p The retry policy tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached. -A retry policy is evaluated when a Timer, Kafka, or Event Hubs-triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry. Event Hubs checkpoints won't be written until the retry policy for the execution has finished. Because of this behavior, progress on the specific partition is paused until the current batch has finished. +A retry policy is evaluated when a Timer, Kafka, or Event Hubs-triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry. ++> [!IMPORTANT] +> Event Hubs checkpoints won't be written until the retry policy for the execution has finished. Because of this behavior, progress on the specific partition is paused until the current batch has finished. +> +> The Event Hubs v5 extension supports additional retry capabilities for interactions between the Functions host and the event hub. Please refer to the `clientRetryOptions` in [the Event Hubs section of the host.json](functions-bindings-event-hubs.md#host-json) file for more information. #### Retry strategies |
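The `clientRetryOptions` referenced in the note above live under the Event Hubs extension section of host.json. A hedged sketch of what such a configuration might look like for the v5 extension (property names and values are illustrative of the documented defaults; verify them against the host.json reference for your extension version):

```json
{
  "version": "2.0",
  "extensions": {
    "eventHubs": {
      "clientRetryOptions": {
        "mode": "exponential",
        "tryTimeout": "00:01:00",
        "delay": "00:00:00.80",
        "maximumDelay": "00:01:00",
        "maximumRetries": 3
      }
    }
  }
}
```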
azure-functions | Functions Bindings Triggers Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-triggers-python.md | def test_function(mytimer: func.TimerRequest) -> None: logging.info('The timer is past due!') logging.info('Python timer trigger function ran at %s', utc_timestamp) ```++## Durable Functions ++Durable Functions also provides preview support of the V2 programming model. To try it out, install the Durable Functions SDK (PyPI package `azure-functions-durable`) from version `1.2.2` or greater. You can reach us in the [Durable Functions SDK for Python repo](https://github.com/Azure/azure-functions-durable-python) with feedback and suggestions. +++> [!NOTE] +> Using [Extension Bundles](/azure-functions/functions-bindings-register#extension-bundles) is not currently supported when trying out the Python V2 programming model with Durable Functions, so you will need to manage your extensions manually. +> To do this, remove the `extensionBundles` section of your `host.json` as described [here](/azure-functions/functions-bindings-register#extension-bundles) and run `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` on your terminal. This will install the Durable Functions extension for your app and will allow you to try out the new experience. ++The Durable Functions Triggers and Bindings may be accessed from an instance `DFApp`, a subclass of `FunctionApp` that additionally exports Durable Functions-specific decorators. ++Below is a simple Durable Functions app that declares a simple sequential orchestrator, all in one file! 
++```python +import azure.functions as func +import azure.durable_functions as df ++myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS) ++# An HTTP-Triggered Function with a Durable Functions Client binding +@myApp.route(route="orchestrators/{functionName}") +@myApp.durable_client_input(client_name="client") +async def durable_trigger(req: func.HttpRequest, client): + function_name = req.route_params.get('functionName') + instance_id = await client.start_new(function_name) + response = client.create_check_status_response(req, instance_id) + return response ++# Orchestrator +@myApp.orchestration_trigger(context_name="context") +def my_orchestrator(context): + result1 = yield context.call_activity("hello", "Seattle") + result2 = yield context.call_activity("hello", "Tokyo") + result3 = yield context.call_activity("hello", "London") ++ return [result1, result2, result3] ++# Activity +@myApp.activity_trigger(input_name="myInput") +def hello(myInput: str): + return "Hello " + myInput +``` ++> [!NOTE] +> Previously, Durable Functions orchestrators needed an extra line of boilerplate, usually at the end of the file, to be indexed: +> `main = df.Orchestrator.create(<name_of_orchestrator_function>)`. +> This is no longer needed in V2 of the Python programming model. This applies to Entities as well, which required a similar boilerplate through +> `main = df.Entity.create(<name_of_entity_function>)`. 
++For reference, all Durable Functions Triggers and Bindings are listed below: ++### Orchestration Trigger ++```python +import azure.functions as func +import azure.durable_functions as df ++myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS) ++@myApp.orchestration_trigger(context_name="context") +def my_orchestrator(context): + result = yield context.call_activity("Hello", "Tokyo") + return result +``` ++### Activity Trigger ++```python +import azure.functions as func +import azure.durable_functions as df ++myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS) ++@myApp.activity_trigger(input_name="myInput") +def my_activity(myInput: str): + return "Hello " + myInput +``` ++### DF Client Binding ++```python +import azure.functions as func +import azure.durable_functions as df ++myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS) ++@myApp.route(route="orchestrators/{functionName}") +@myApp.durable_client_input(client_name="client") +async def durable_trigger(req: func.HttpRequest, client): + function_name = req.route_params.get('functionName') + instance_id = await client.start_new(function_name) + response = client.create_check_status_response(req, instance_id) + return response +``` ++### Entity Trigger ++```python +import azure.functions as func +import azure.durable_functions as df ++myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS) ++@myApp.entity_trigger(context_name="context") +def entity_function(context): + current_value = context.get_state(lambda: 0) + operation = context.operation_name + if operation == "add": + amount = context.get_input() + current_value += amount + elif operation == "reset": + current_value = 0 + elif operation == "get": + pass + + context.set_state(current_value) + context.set_result(current_value) +``` + ## Next steps + [Python developer guide](./functions-reference-python.md) |
azure-functions | Functions Create First Quarkus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-quarkus.md | To learn more about Azure Functions and Quarkus, see the following articles and * [Azure Functions Java developer guide](/azure/azure-functions/functions-reference-java) * [Quickstart: Create a Java function in Azure using Visual Studio Code](/azure/azure-functions/create-first-function-vs-code-java) * [Azure Functions documentation](/azure/azure-functions/)-* [Quarkus guide to deploying on Azure](https://quarkus.io/guides/deploying-to-azure-cloud) +* [Quarkus guide to deploying on Azure](https://quarkus.io/guides/deploying-to-azure-cloud) |
azure-monitor | Data Collection Rule Azure Monitor Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md | To complete this procedure, you need: - Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac). - [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.-- Create [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) that define which data Azure Monitor Agent sends to which destinations, as described in the next section - Associate the data collection rule to specific virtual machines. ## Create a data collection rule |
azure-monitor | Asp Net Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md | For ASP.NET applications, the full SQL query text is collected with the help of | Platform | Steps needed to get full SQL query | | | | | Web Apps in Azure App Service|In your web app control panel, [open the Application Insights pane](../../azure-monitor/app/azure-web-apps.md) and enable SQL Commands under .NET. |-| IIS Server (Azure Virtual Machines, on-premises, and so on) | Either use the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package or use the Status Monitor PowerShell Module to [install the instrumentation engine](../../azure-monitor/app/status-monitor-v2-api-reference.md#enable-instrumentationengine) and restart IIS. | +| IIS Server (Azure Virtual Machines, on-premises, and so on) | Either use the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package or use the Application Insights Agent PowerShell Module to [install the instrumentation engine](../../azure-monitor/app/status-monitor-v2-api-reference.md#enable-instrumentationengine) and restart IIS. | | Azure Cloud Services | Add a [startup task to install StatusMonitor](../../azure-monitor/app/azure-web-apps-net-core.md). <br> Your app should be onboarded to the ApplicationInsights SDK at build time by installing NuGet packages for [ASP.NET](./asp-net.md) or [ASP.NET Core applications](./asp-net-core.md). | | IIS Express | Use the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package. | WebJobs in Azure App Service| Use the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package. |
azure-monitor | Configuration With Applicationinsights Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md | -The configuration file is named `ApplicationInsights.config` or `ApplicationInsights.xml`. The name depends on the type of your application. It's automatically added to your project when you [install most versions of the SDK][start]. By default, when you use the automated experience from the Visual Studio template projects that support **Add** > **Application Insights Telemetry**, the `ApplicationInsights.config` file is created in the project root folder. When it's compiled, it's copied to the bin folder. It's also added to a web app by [Status Monitor on an IIS server][redfield]. The configuration file is ignored if the [extension for Azure websites](azure-web-apps.md) or the [extension for Azure VMs and virtual machine scale sets](azure-vm-vmss-apps.md) is used. +The configuration file is named `ApplicationInsights.config` or `ApplicationInsights.xml`. The name depends on the type of your application. It's automatically added to your project when you [install most versions of the SDK][start]. By default, when you use the automated experience from the Visual Studio template projects that support **Add** > **Application Insights Telemetry**, the `ApplicationInsights.config` file is created in the project root folder. When it's compiled, it's copied to the bin folder. It's also added to a web app by [Application Insights Agent on an IIS server][redfield]. The configuration file is ignored if the [extension for Azure websites](azure-web-apps.md) or the [extension for Azure VMs and virtual machine scale sets](azure-vm-vmss-apps.md) is used. There isn't an equivalent file to control the [SDK in a webpage][client]. There's a node in the configuration file for each module. 
To disable a module, d ### Dependency tracking -[Dependency tracking](./asp-net-dependencies.md) collects telemetry about calls your app makes to databases and external services and databases. To allow this module to work in an IIS server, you need to [install Status Monitor][redfield]. +[Dependency tracking](./asp-net-dependencies.md) collects telemetry about calls your app makes to databases and external services. To allow this module to work in an IIS server, you need to [install Application Insights Agent][redfield]. You can also write your own dependency tracking code by using the [TrackDependency API](./api-custom-events-metrics.md#trackdependency). |
azure-monitor | Data Retention Privacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md | We do not recommend explicitly setting your application to only use TLS 1.2, unl | Azure App Services | Supported, configuration might be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). | | Azure Function Apps | Supported, configuration might be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). | |.NET | Supported, Long Term Support (LTS). | For detailed configuration information, refer to [these instructions](/dotnet/framework/network-programming/tls). |-|Status Monitor | Supported, configuration required. | Status Monitor relies on [OS Configuration](/windows-server/security/tls/tls-registry-settings) + [.NET Configuration](/dotnet/framework/network-programming/tls#support-for-tls-12) to support TLS 1.2. +|Application Insights Agent| Supported, configuration required. | Application Insights Agent relies on [OS Configuration](/windows-server/security/tls/tls-registry-settings) + [.NET Configuration](/dotnet/framework/network-programming/tls#support-for-tls-12) to support TLS 1.2. |Node.js | Supported, in v10.5.0, configuration might be required. | Use the [official Node.js TLS/SSL documentation](https://nodejs.org/api/tls.html) for any application-specific configuration. | |Java | Supported, JDK support for TLS 1.2 was added in [JDK 6 update 121](https://www.oracle.com/technetwork/java/javase/overview-156328.html#R160_121) and [JDK 7](https://www.oracle.com/technetwork/java/javase/7u131-relnotes-3338543.html). 
| JDK 8 uses [TLS 1.2 by default](https://blogs.oracle.com/java-platform-group/jdk-8-will-use-tls-12-as-default). | |Linux | Linux distributions tend to rely on [OpenSSL](https://www.openssl.org) for TLS 1.2 support. | Check the [OpenSSL Changelog](https://www.openssl.org/news/changelog.html) to confirm your version of OpenSSL is supported.| The SDKs vary between platforms, and there are several components that you can i | Your action | Data classes collected (see next table) | | | | | [Add Application Insights SDK to a .NET web project][greenbrown] |ServerContext<br/>Inferred<br/>Perf counters<br/>Requests<br/>**Exceptions**<br/>Session<br/>users |-| [Install Status Monitor on IIS][redfield] |Dependencies<br/>ServerContext<br/>Inferred<br/>Perf counters | +| [Install Application Insights Agent on IIS][redfield] |Dependencies<br/>ServerContext<br/>Inferred<br/>Perf counters | | [Add Application Insights SDK to a Java web app][java] |ServerContext<br/>Inferred<br/>Request<br/>Session<br/>users | | [Add JavaScript SDK to webpage][client] |ClientContext <br/>Inferred<br/>Page<br/>ClientPerf<br/>Ajax | | [Define default properties][apiproperties] |**Properties** on all standard and custom events | For [SDKs for other platforms][platforms], see their documents. 
| Client perf |URL/page name, browser load time | | Ajax |HTTP calls from webpage to server | | Requests |URL, duration, response code |-| Dependencies |Type (SQL, HTTP, ...), connection string, or URI, sync/async, duration, success, SQL statement (with Status Monitor) | +| Dependencies |Type (SQL, HTTP, ...), connection string, or URI, sync/async, duration, success, SQL statement (with Application Insights Agent) | | Exceptions |Type, message, call stacks, source file, line number, `thread id` | | Crashes |`Process id`, `parent process id`, `crash thread id`; application patch, `id`, build; exception type, address, reason; obfuscated symbols and registers, binary start and end addresses, binary name and path, cpu type | | Trace |Message and severity level | |
azure-monitor | Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md | Alternatively, you can subscribe to this page as an RSS feed by adding https://g ## Outgoing ports -You need to open some outgoing ports in your server's firewall to allow the Application Insights SDK or Status Monitor to send data to the portal. +You need to open some outgoing ports in your server's firewall to allow the Application Insights SDK or Application Insights Agent to send data to the portal. | Purpose | URL | Type | IP | Ports | | | | | | | You need to open some outgoing ports in your server's firewall to allow the Appl > > If you're using an older version of TLS, Application Insights will not ingest any telemetry. For applications based on .NET Framework see [Transport Layer Security (TLS) best practices with the .NET Framework](/dotnet/framework/network-programming/tls) to support the newer TLS version. -## Status Monitor +## Application Insights Agent -Status Monitor configuration is needed only when you're making changes. +Application Insights Agent configuration is needed only when you're making changes. | Purpose | URL | Ports | | | | | |
azure-monitor | Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md | Like other types of sampling, the algorithm retains related telemetry items. For Data points that are discarded by sampling aren't available in any Application Insights feature such as [Continuous Export](./export-telemetry.md). -Ingestion sampling doesn't operate while adaptive or fixed-rate sampling is in operation. Adaptive sampling is enabled by default when the ASP.NET SDK or the ASP.NET Core SDK is being used, or when Application Insights is enabled in [Azure App Service ](azure-web-apps.md) or by using Status Monitor. When telemetry is received by the Application Insights service endpoint, it examines the telemetry and if the sampling rate is reported to be less than 100% (which indicates that telemetry is being sampled) then the ingestion sampling rate that you set is ignored. +Ingestion sampling doesn't operate while adaptive or fixed-rate sampling is in operation. Adaptive sampling is enabled by default when the ASP.NET SDK or the ASP.NET Core SDK is being used, or when Application Insights is enabled in [Azure App Service ](azure-web-apps.md) or by using Application Insights Agent. When telemetry is received by the Application Insights service endpoint, it examines the telemetry and if the sampling rate is reported to be less than 100% (which indicates that telemetry is being sampled) then the ingestion sampling rate that you set is ignored. > [!WARNING] > The value shown on the portal tile indicates the value that you set for ingestion sampling. It doesn't represent the actual sampling rate if any sort of SDK sampling (adaptive or fixed-rate sampling) is in operation. |
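The "sampling retains related telemetry items" behavior described in the row above comes from making the keep/drop decision a deterministic function of the operation ID rather than a random draw. A minimal illustration, assuming a CRC32 bucket function (the Application Insights SDKs use their own hashing algorithm; `should_sample` is a hypothetical helper):

```python
import zlib

def should_sample(operation_id: str, sampling_percentage: float) -> bool:
    # Map the operation ID to a stable bucket in [0, 100). Every telemetry
    # item sharing an operation ID gets the same decision, so a sampled-in
    # request keeps its dependencies, traces, and exceptions too.
    bucket = (zlib.crc32(operation_id.encode()) % 10000) / 100.0
    return bucket < sampling_percentage

operations = [f"operation-{i}" for i in range(1000)]
kept = [op for op in operations if should_sample(op, 25.0)]
print(f"retained {len(kept)} of {len(operations)} operations")  # roughly 250
```

Because the decision is deterministic, re-evaluating it for any retained operation returns the same answer, which is also why an ingestion-side sampler can detect (via the reported sampling rate) that SDK-side sampling already happened.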
azure-monitor | Status Monitor V2 Api Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-api-reference.md | Registry: skipping non-existent 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Ser Registry: skipping non-existent 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W3SVC[Environment] Registry: skipping non-existent 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WAS[Environment] Configuring registry for instrumentation engine...-Successfully disabled Application Insights Status Monitor +Successfully disabled Application Insights Agent Installing GAC module 'C:\Program Files\WindowsPowerShell\Modules\Az.ApplicationMonitor\0.2.0\content\Runtime\Microsoft.AppInsights.IIS.ManagedHttpModuleHelper.dll' Applying transformation to 'C:\Windows\System32\inetsrv\config\applicationHost.config' Found GAC module Microsoft.AppInsights.IIS.ManagedHttpModuleHelper.ManagedHttpModuleHelper, Microsoft.AppInsights.IIS.ManagedHttpModuleHelper, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 Configuring IIS Environment for codeless attach... Configuring IIS Environment for instrumentation engine... Configuring registry for instrumentation engine... Updating app pool permissions...-Successfully enabled Application Insights Status Monitor +Successfully enabled Application Insights Agent ``` ## Disable-InstrumentationEngine Registry: skipping non-existent 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Ser Registry: skipping non-existent 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W3SVC[Environment] Registry: skipping non-existent 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WAS[Environment] Configuring registry for instrumentation engine...-Successfully disabled Application Insights Status Monitor +Successfully disabled Application Insights Agent ``` Filters: ## Get-ApplicationInsightsMonitoringStatus -This cmdlet provides troubleshooting information about Status Monitor. 
+This cmdlet provides troubleshooting information about Application Insights Agent. Use this cmdlet to investigate the monitoring status, version of the PowerShell Module, and to inspect the running process. This cmdlet will report version information and information about key files required for monitoring. In this example: - **Default Web Site** is Stopped in IIS - **DemoWebApp111** has been started in IIS, but hasn't received any requests. This report shows that there's no running process (ProcessId: not found). - **DemoWebApp222** is running and is being monitored (Instrumented: true). Based on the user configuration, Instrumentation Key xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx123 was matched for this site.- **DemoWebApp333** has been manually instrumented using the Application Insights SDK. Status Monitor detected the SDK and won't monitor this site.+- **DemoWebApp333** has been manually instrumented using the Application Insights SDK. Application Insights Agent detected the SDK and won't monitor this site. #### Example: PowerShell module information To collect these events: You have the following options when collecting events: 1. Use the switch `-CollectSdkEvents` to collect events emitted from the Application Insights SDK.-2. Use the switch `-CollectRedfieldEvents` to collect events emitted by Status Monitor and the Redfield Runtime. These logs are helpful when diagnosing IIS and application startup. +2. Use the switch `-CollectRedfieldEvents` to collect events emitted by Application Insights Agent and the Redfield Runtime. These logs are helpful when diagnosing IIS and application startup. 3. Use both switches to collect both event types. 4. By default, if no switch is specified both event types will be collected. The full path will be displayed during script execution. **Optional.** Use this switch to collect Application Insights SDK events. #### -CollectRedfieldEvents-**Optional.** Use this switch to collect events from Status Monitor and the Redfield runtime. 
+**Optional.** Use this switch to collect events from Application Insights Agent and the Redfield runtime. #### -Verbose **Common parameter.** Use this switch to output detailed logs. |
azure-monitor | Status Monitor V2 Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md | Yes. There are multiple ways to download Application Insights Agent: Each of these options is described in the [detailed instructions](status-monitor-v2-detailed-instructions.md). -### Does Status Monitor v2 support ASP.NET Core applications? +### Does Application Insights Agent support ASP.NET Core applications? Yes. Starting from [Application Insights Agent 2.0.0-beta1](https://www.powershellgallery.com/packages/Az.ApplicationMonitor/2.0.0-beta1), ASP.NET Core applications hosted in IIS are supported. |
azure-monitor | Autoscale Common Scale Patterns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-common-scale-patterns.md | For example, you have a Virtual Machine Scale Set and want to: In this example: + The weekend profile starts at 00:01, Saturday morning and ends at 04:00 on Monday morning. + The end times are left blank. The weekday profile will end when the weekend profile starts and vice-versa.-+ There's no need for a default profile as there's no time that isn't covered by the other profiles. ++ The default profile is irrelevant as there's no time that isn't covered by the other profiles. +>[!Note] +> Creating a recurring profile with no end time is only supported via the portal. +> If the end time is not included in the CLI command, a default end time of 23:59 is applied by creating a copy of the default profile with the naming convention `"name": "{\"name\": \"Auto created default scale condition\", \"for\": \"<non-default profile name>\"}"` + :::image type="content" source="./media/autoscale-common-scale-patterns/scale-differently-on-weekends.png" alt-text="A screenshot showing two autoscale profiles, one default and one for weekends." lightbox="./media/autoscale-common-scale-patterns/scale-differently-on-weekends.png"::: ## Scale differently during specific events |
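The weekend/weekday split described in the row above (weekend profile from 00:01 Saturday through 04:00 Monday, weekday profile the rest of the time) boils down to a simple schedule check. A minimal sketch in Python, for illustration only — autoscale evaluates recurrence server-side, and `active_profile` is a hypothetical helper:

```python
from datetime import datetime

def active_profile(now: datetime) -> str:
    # Weekend profile runs from 00:01 Saturday to 04:00 Monday;
    # the weekday profile covers all remaining time.
    weekday = now.weekday()                # Mon=0 ... Sun=6
    minutes = now.hour * 60 + now.minute
    if weekday == 5 and minutes >= 1:      # Saturday from 00:01
        return "weekend"
    if weekday == 6:                       # all of Sunday
        return "weekend"
    if weekday == 0 and minutes < 4 * 60:  # Monday before 04:00
        return "weekend"
    return "weekday"

print(active_profile(datetime(2023, 1, 7, 12, 0)))  # Saturday noon  -> weekend
print(active_profile(datetime(2023, 1, 9, 4, 0)))   # Monday 04:00   -> weekday
```

Note how leaving the profile end times open works: each profile simply ends where the other begins, so every instant maps to exactly one profile.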
azure-monitor | Autoscale Multiprofile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-multiprofile.md | In the portal, the end time field becomes the next start time for the default pr > [!TIP] > To set up multiple contiguous profiles using the portal, leave the end time empty. The current profile will stop being used when the next profile becomes active. Only specify an end time when you want to revert to the default profile.+> Creating a recurring profile with no end time is only supported via the portal. ## Multiple profiles using templates, CLI, and PowerShell The example below shows how to add a recurring autoscale profile, recurring on T ``` azurecli -az monitor autoscale profile create --autoscale-name VMSS1-Autoscale-607 --count 2 --max-count 10 --min-count 1 --name Thursdays --recurrence week thu --resource-group rg-vmss1 --start 06:00 --end 22:50 --timezone "Pacific Standard Time" +az monitor autoscale profile create --autoscale-name VMSS1-Autoscale --count 2 --max-count 10 --min-count 1 --name Thursdays --recurrence week thu --resource-group rg-vmss1 --start 06:00 --end 22:50 --timezone "Pacific Standard Time" -az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale-607 --scale in 1 --condition "Percentage CPU < 25 avg 5m" --profile-name Thursdays +az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale --scale in 1 --condition "Percentage CPU < 25 avg 5m" --profile-name Thursdays -az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale-607 --scale out 2 --condition "Percentage CPU > 50 avg 5m" --profile-name Thursdays +az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale --scale out 2 --condition "Percentage CPU > 50 avg 5m" --profile-name Thursdays ``` > [!NOTE] -> The JSON for your autoscale default profile is modified by adding a recurring profile. 
-> The `name` element of the default profile is changed to an object in the format: `"name": "{\"name\":\"Auto created default scale condition\",\"for\":\"recurring profile\"}"` where *recurring profile* is the profile name of your recurring profile. +> * The JSON for your autoscale default profile is modified by adding a recurring profile. +> The `name` element of the default profile is changed to an object in the format: `"name": "{\"name\":\"Auto created default scale condition\",\"for\":\"recurring profile name\"}"` where *recurring profile name* is the profile name of your recurring profile. > The default profile also has a recurrence clause added to it that starts at the end time specified for the new recurring profile.-> A new default profile is created for each recurring profile. +> * A new default profile is created for each recurring profile. +> * If the end time is not specified in the CLI command, the end time will be defaulted to 23:59. ## Updating the default profile when you have recurring profiles |
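The serialized `name` object described in the note above — a JSON object stored inside the profile's name string — can be produced and parsed with ordinary JSON tooling. A sketch under that assumption; `default_profile_name` is a hypothetical helper, not an Azure CLI or SDK function:

```python
import json

def default_profile_name(recurring_profile_name: str) -> str:
    # The auto-created default profile's name is a JSON object, serialized
    # into a string, recording which recurring profile it was created for.
    return json.dumps({
        "name": "Auto created default scale condition",
        "for": recurring_profile_name,
    })

name = default_profile_name("Thursdays")
print(name)
# When reading autoscale settings back, the recurring profile this default
# belongs to can be recovered by parsing the name string:
print(json.loads(name)["for"])  # Thursdays
```

This round-trip is why tooling that lists autoscale profiles may show the escaped-JSON name rather than a plain string for auto-created defaults.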
azure-monitor | Rest Api Walkthrough | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/rest-api-walkthrough.md | Title: Azure monitoring REST API walkthrough description: How to authenticate requests and use the Azure Monitor REST API to retrieve available metric definitions and metric values.+ Previously updated : 05/09/2022 Last updated : 01/30/2023 -+ # Azure monitoring REST API walkthrough -This article shows you how to perform authentication so your code can use the [Azure Monitor REST API reference](/rest/api/monitor/). +This article shows you how to use the [Azure Monitor REST API reference](/rest/api/monitor/). -The Azure Monitor API makes it possible to programmatically retrieve the available default metric definitions, dimension values, and metric values. The data can be saved in a separate data store such as Azure SQL Database, Azure Cosmos DB, or Azure Data Lake. From there, more analysis can be performed as needed. --Besides working with various metric data points, the Azure Monitor API also makes it possible to list alert rules, view activity logs, and do much more. For a full list of available operations, see the [Azure Monitor REST API reference](/rest/api/monitor/). +Retrieve metric definitions, dimension values, and metric values using the Azure Monitor API and use the data in your applications, or store in a database for analysis. You can also list alert rules and view activity logs using the Azure Monitor API. ## Authenticate Azure Monitor requests -All the tasks executed against the Azure Monitor API use the Azure Resource Manager authentication model. So, all requests must be authenticated with Azure Active Directory (Azure AD). One approach to authenticating the client application is to create an Azure AD service principal and retrieve the authentication (JWT) token. --The following sample script demonstrates creating an Azure AD service principal via PowerShell. 
For a more detailed walkthrough, see the documentation on [using Azure PowerShell to create a service principal to access resources](/powershell/azure/create-azure-service-principal-azureps). It's also possible to [create a service principal via the Azure portal](../../active-directory/develop/howto-create-service-principal-portal.md). +Requests submitted using the Azure Monitor API use the Azure Resource Manager authentication model. All requests are authenticated with Azure Active Directory. One approach to authenticating the client application is to create an Azure Active Directory service principal and retrieve an authentication token. You can create an Azure Active Directory service principal using the Azure portal, CLI, or PowerShell. For more information, see [Register an App to request authorization tokens and work with APIs](../logs/api/register-app-for-token.md). -```powershell -$subscriptionId = "{azure-subscription-id}" -$resourceGroupName = "{resource-group-name}" +## Retrieve a token +Once you've created a service principal, retrieve an access token using a REST call. Submit the following request using the `appId` and `password` for your service principal or app: -# Authenticate to a specific Azure subscription. 
-Connect-AzAccount -SubscriptionId $subscriptionId +```HTTP -# Password for the service principal -$pwd = "{service-principal-password}" -$secureStringPassword = ConvertTo-SecureString -String $pwd -AsPlainText -Force + POST /<tenant-id>/oauth2/token + Host: login.microsoftonline.com + Content-Type: application/x-www-form-urlencoded + + grant_type=client_credentials + &client_id=<app-client-id> + &resource=https://management.azure.com + &client_secret=<password> -# Create a new Azure AD application -$azureAdApplication = New-AzADApplication ` - -DisplayName "My Azure Monitor" ` - -HomePage "https://localhost/azure-monitor" ` - -IdentifierUris "https://localhost/azure-monitor" ` - -Password $secureStringPassword +``` -# Create a new service principal associated with the designated application -New-AzADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId +For example: -# Assign Reader role to the newly created service principal -New-AzRoleAssignment -RoleDefinitionName Reader ` - -ServicePrincipalName $azureAdApplication.ApplicationId.Guid +```bash +curl --location --request POST 'https://login.microsoftonline.com/a1234bcd-5849-4a5d-a2eb-5267eae1bbc7/oauth2/token' \ +--header 'Content-Type: application/x-www-form-urlencoded' \ +--data-urlencode 'grant_type=client_credentials' \ +--data-urlencode 'client_id=0a123b56-c987-1234-abcd-1a2b3c4d5e6f' \ +--data-urlencode 'client_secret=123456.ABCDE.~XYZ876123ABceDb0000' \ +--data-urlencode 'resource=https://management.azure.com' ```+A successful request receives an access token in the response: -To query the Azure Monitor API, the client application should use the previously created service principal to authenticate. The following example PowerShell script shows one approach that uses the [Microsoft Authentication Library (MSAL)](../../active-directory/develop/msal-overview.md) to obtain the authentication token. 
--```powershell -$ClientID = "{client_id}" -$loginURL = "https://login.microsoftonline.com" -$tenantdomain = "{tenant_id}" -$CertPassWord = "{password_for_cert}" -$certPath = "C:\temp\Certs\testCert_01.pfx" - -[string[]] $Scopes = "https://graph.microsoft.com/.default" - -Function Load-MSAL { - if ($PSVersionTable.PSVersion.Major -gt 5) - { - $core = $true - $foldername = "netcoreapp2.1" - } - else - { - $core = $false - $foldername = "net45" - } - - # Download MSAL.Net module to a local folder if it does not exist there - if ( ! (Get-ChildItem $HOME/MSAL/lib/Microsoft.Identity.Client.* -erroraction ignore) ) { - install-package -Source nuget.org -ProviderName nuget -SkipDependencies Microsoft.Identity.Client -Destination $HOME/MSAL/lib -force -forcebootstrap | out-null - } - - # Load the MSAL assembly -- needed once per PowerShell session - [System.Reflection.Assembly]::LoadFrom((Get-ChildItem $HOME/MSAL/lib/Microsoft.Identity.Client.*/lib/$foldername/Microsoft.Identity.Client.dll).fullname) | out-null - } - -Function Get-GraphAccessTokenFromMSAL { - - Load-MSAL - - $global:app = $null - - $x509cert = [System.Security.Cryptography.X509Certificates.X509Certificate2] (GetX509Certificate_FromPfx -CertificatePath $certPath -CertificatePassword $CertPassWord) - write-host "Cert = {$x509cert}" - - $ClientApplicationBuilder = [Microsoft.Identity.Client.ConfidentialClientApplicationBuilder]::Create($ClientID) - [void]$ClientApplicationBuilder.WithAuthority($("$loginURL/$tenantdomain")) - [void]$ClientApplicationBuilder.WithCertificate($x509cert) - $global:app = $ClientApplicationBuilder.Build() - - [Microsoft.Identity.Client.AuthenticationResult] $authResult = $null - $AquireTokenParameters = $global:app.AcquireTokenForClient($Scopes) - try { - $authResult = $AquireTokenParameters.ExecuteAsync().GetAwaiter().GetResult() - } - catch { - $ErrorMessage = $_.Exception.Message - Write-Host $ErrorMessage - } - - return $authResult -} - -function 
GetX509Certificate_FromPfx($CertificatePath, $CertificatePassword){ - #write-host "Path: '$CertificatePath'" - - if(![System.IO.Path]::IsPathRooted($CertificatePath)) - { - $LocalPath = Get-Location - $CertificatePath = "$LocalPath\$CertificatePath" - } - - #Write-Host "Looking for '$CertificatePath'" - - $certificate = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($CertificatePath, $CertificatePassword) - - Return $certificate +```HTTP +{ + "token_type": "Bearer", + "expires_in": "86399", + "ext_expires_in": "86399", + "access_token": "eyJ0eXAiOiJKV1QiLCJ.....Ax" }- -$myvar = Get-GraphAccessTokenFromMSAL -Write-Host "Access Token: " $myvar.AccessToken - ``` -Loading the certificate from a .pfx file in PowerShell can make it easier for an admin to manage certificates without having to install the certificate in the certificate store. However, this step shouldn't be done on a client machine because the user could potentially discover the file and the password for it and the method to authenticate. The client credentials flow is only intended to be run in a back-end service-to-service type of scenario where only admins have access to the machine. -After authenticating, queries can then be executed against the Azure Monitor REST API. There are two helpful queries: -- List the metric definitions for a resource.-- Retrieve the metric values.+After authenticating and retrieving a token, use the access token in your Azure Monitor API requests by including the header `'Authorization: Bearer <access token>'`. > [!NOTE]-> For more information on authenticating with the Azure REST API, see the [Azure REST API reference](/rest/api/azure/). 
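The client-credentials token request shown above can also be issued from Python with only the standard library. A sketch under the same assumptions as the curl example (v1.0 token endpoint with a `resource` parameter; the tenant, client, and secret values are placeholders, and `build_token_request` is a hypothetical helper):

```python
import json
from urllib import parse, request

def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    # Form-encoded client-credentials grant, mirroring the curl example.
    body = parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": "https://management.azure.com",
    }).encode()
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    return request.Request(
        url,
        data=body,  # a Request with a body is sent as POST
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

req = build_token_request("<tenant-id>", "<app-client-id>", "<password>")
# With real credentials, the POST would be sent like this:
#   token = json.load(request.urlopen(req))["access_token"]
```

The returned `access_token` then goes into the `Authorization: Bearer` header of every subsequent Azure Monitor API call.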
> ## Retrieve metric definitions Use the [Azure Monitor Metric Definitions REST API](/rest/api/monitor/metricdefinitions) to access the list of metrics that are available for a service.+Use the following request format to retrieve metric definitions. -**Method**: GET --**Request URI**: https:\/\/management.azure.com/subscriptions/*{subscriptionId}*/resourceGroups/*{resourceGroupName}*/providers/*{resourceProviderNamespace}*/*{resourceType}*/*{resourceName}*/providers/microsoft.insights/metricDefinitions?api-version=*{apiVersion}* --To retrieve the metric definitions for an Azure Storage account, the request would appear as the following example: --```powershell -$request = "https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/microsoft.insights/metricDefinitions?api-version=2018-01-01" +```HTTP +GET /subscriptions/<subscription id>/resourcegroups/<resourceGroupName>/providers/<resourceProviderNamespace>/<resourceType>/<resourceName>/providers/microsoft.insights/metricDefinitions?api-version=<apiVersion> +Host: management.azure.com +Content-Type: application/json +Authorization: Bearer <access token> +``` -Invoke-RestMethod -Uri $request ` - -Headers $authHeader ` - -Method Get ` - -OutFile ".\contosostorage-metricdef-results.json" ` - -Verbose +For example, the following request retrieves the metric definitions for an Azure Storage account: +```bash +curl --location --request GET 'https://management.azure.com/subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/microsoft.insights/metricDefinitions?api-version=2018-01-01' \ +--header 'Authorization: Bearer eyJ0eXAiOi...xYz' ```
-> --The resulting JSON response body would be similar to the following example (note that the second metric has dimensions): +The following JSON shows an example response body. +In this example, only the second metric has dimensions. ```json { "value": [ {- "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/microsoft.insights/metricdefinitions/UsedCapacity", - "resourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage", + "id": "/subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/microsoft.insights/metricdefinitions/UsedCapacity", + "resourceId": "/subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage", "namespace": "Microsoft.Storage/storageAccounts", "category": "Capacity", "name": { The resulting JSON response body would be similar to the following example (note "timeGrain": "PT1H", "retention": "P93D" },- { - "timeGrain": "PT6H", - "retention": "P93D" - }, + ... 
{ "timeGrain": "PT12H", "retention": "P93D" The resulting JSON response body would be similar to the following example (note ] }, {- "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/microsoft.insights/metricdefinitions/Transactions", - "resourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage", + "id": "/subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/microsoft.insights/metricdefinitions/Transactions", + "resourceId": "/subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage", "namespace": "Microsoft.Storage/storageAccounts", "category": "Transaction", "name": { The resulting JSON response body would be similar to the following example (note "timeGrain": "PT5M", "retention": "P93D" },- { - "timeGrain": "PT15M", - "retention": "P93D" - }, + ... { "timeGrain": "PT30M", "retention": "P93D" The resulting JSON response body would be similar to the following example (note "timeGrain": "PT1H", "retention": "P93D" },- { - "timeGrain": "PT6H", - "retention": "P93D" - }, - { - "timeGrain": "PT12H", - "retention": "P93D" - }, + ... { "timeGrain": "P1D", "retention": "P93D" The resulting JSON response body would be similar to the following example (note ] } ```+> [!NOTE] +> We recommend using API version "2018-01-01" or later. Older versions of the metric definitions API don't support dimensions. ## Retrieve dimension values -After the available metric definitions are known, there might be some metrics that have dimensions. Before you query for the metric, you might want to discover the range of values that a dimension has. 
Based on these dimension values, you can then choose to filter or segment the metrics based on dimension values while you query for metrics. Use the [Azure Monitor Metrics REST API](/rest/api/monitor/metrics) to find all the possible values for a given metric dimension. +After retrieving the available metric definitions, retrieve the range of values for the metric's dimensions. Use dimension values to filter or segment the metrics in your queries. Use the [Azure Monitor Metrics REST API](/rest/api/monitor/metrics) to find all of the values for a given metric dimension. -Use the metric's name `value` (not `localizedValue`) for any filtering requests. If no filters are specified, the default metric is returned. The use of this API only allows one dimension to have a wildcard filter. The key difference between a dimension values request and a metric data request is specifying the `"resultType=metadata"` query parameter. +Use the metric's `name.value` element in the filter definitions. If no filters are specified, the default metric is returned. The API only allows one dimension to have a wildcard filter. +Specify the request for dimension values using the `"resultType=metadata"` query parameter. The `resultType` parameter is omitted for a metric values request. > [!NOTE] > To retrieve dimension values by using the Azure Monitor REST API, use the API version "2019-07-01" or later. 
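As a rough illustration of how the query string goes together (this is not an SDK call — `dimension_values_url` is a made-up helper, and the resource path placeholders are illustrative):

```python
from urllib.parse import urlencode

def dimension_values_url(resource_id, metric, timespan, filter_expr):
    """Compose a dimension-values (resultType=metadata) request URL."""
    params = {
        "metricnames": metric,
        "timespan": timespan,
        "$filter": filter_expr,       # only one dimension may use the '*' wildcard
        "resultType": "metadata",     # omit this parameter when requesting metric values
        "api-version": "2019-07-01",
    }
    return f"https://management.azure.com{resource_id}/providers/microsoft.insights/metrics?{urlencode(params)}"

url = dimension_values_url(
    "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<name>",
    "Transactions",
    "2023-03-01T00:00:00Z/2023-03-02T00:00:00Z",
    "GeoType eq 'Primary' and ApiName eq '*'",
)
```

Dropping the `resultType` entry from `params` turns the same URL into a metric values request.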
>--**Method**: GET --**Request URI**: https\://management.azure.com/subscriptions/*{subscription-id}*/resourceGroups/*{resource-group-name}*/providers/*{resource-provider-namespace}*/*{resource-type}*/*{resource-name}*/providers/microsoft.insights/metrics?metricnames=*{metric}*&timespan=*{starttime/endtime}*&$filter=*{filter}*&resultType=metadata&api-version=*{apiVersion}* --For example, to retrieve the list of dimension values that were emitted for the `API Name` dimension for the `Transactions` metric, where the GeoType dimension = `Primary` during the specified time range, the request would be: --```powershell -$filter = "APIName eq '*' and GeoType eq 'Primary'" -$request = "https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/microsoft.insights/metrics?metricnames=Transactions&timespan=2018-03-01T00:00:00Z/2018-03-02T00:00:00Z&resultType=metadata&`$filter=GeoType eq 'Primary' and ApiName eq '*'&api-version=2019-07-01" -Invoke-RestMethod -Uri $request ` - -Headers $authHeader ` - -Method Get ` - -OutFile ".\contosostorage-dimension-values.json" ` - -Verbose +Use the following request format to retrieve dimension values. +```HTTP +GET /subscriptions/<subscription-id>/resourceGroups/ +<resource-group-name>/providers/<resource-provider-namespace>/ +<resource-type>/<resource-name>/providers/microsoft.insights/ +metrics?metricnames=<metric> +&timespan=<starttime/endtime> +&$filter=<filter> +&resultType=metadata +&api-version=<apiVersion> HTTP/1.1 +Host: management.azure.com +Content-Type: application/json +Authorization: Bearer <access token> ```--The resulting JSON response body would be similar to the following example: +The following example retrieves the list of dimension values that were emitted for the `API Name` dimension of the `Transactions` metric, where the `GeoType` dimension has a value of `Primary`, for the specified time range. 
++```curl +curl --location --request GET 'https://management.azure.com/subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/microsoft.insights/metrics?metricnames=Transactions&timespan=2023-03-01T00:00:00Z/2023-03-02T00:00:00Z&resultType=metadata&$filter=GeoType eq '\''Primary'\'' and ApiName eq '\''*'\''&api-version=2019-07-01' \ +--header 'Content-Type: application/json' \ +--header 'Authorization: Bearer eyJ0e..meG1lWm9Y' +``` +The following JSON shows an example response body. ```json {- "timespan": "2018-03-01T00:00:00Z/2018-03-02T00:00:00Z", + "timespan": "2023-03-01T00:00:00Z/2023-03-02T00:00:00Z", "value": [ {- "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/Microsoft.Insights/metrics/Transactions", + "id": "/subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/Microsoft.Insights/metrics/Transactions", "type": "Microsoft.Insights/metrics", "name": { "value": "Transactions", The resulting JSON response body would be similar to the following example: ## Retrieve metric values -After the available metric definitions and possible dimension values are known, it's then possible to retrieve the related metric values. Use the [Azure Monitor Metrics REST API](/rest/api/monitor/metrics) to retrieve the metric values. +After retrieving the metric definitions and dimension values, retrieve the metric values. Use the [Azure Monitor Metrics REST API](/rest/api/monitor/metrics) to retrieve the metric values. ++Use the metric's `name.value` element in the filter definitions. If no dimension filters are specified, the rolled up, aggregated metric is returned. 
-Use the metric's name `value` (not `localizedValue`) for any filtering requests. If no dimension filters are specified, the rolled up aggregated metric is returned. To fetch multiple time series with specific dimension values, specify a filter query parameter that specifies both dimension values such as `"&$filter=ApiName eq 'ListContainers' or ApiName eq 'GetBlobServiceProperties'"`. To return a time series for every value of a given dimension, use an `*` filter such as `"&$filter=ApiName eq '*'"`. The `Top` and `OrderBy` query parameters can be used to limit and order the number of time series returned. +To fetch multiple time series with specific dimension values, specify a filter query parameter that specifies both dimension values such as `"&$filter=ApiName eq 'ListContainers' or ApiName eq 'GetBlobServiceProperties'"`. ++To return a time series for every value of a given dimension, use an `*` filter such as `"&$filter=ApiName eq '*'"`. The `Top` and `OrderBy` query parameters can be used to limit and order the number of time series returned. > [!NOTE] > To retrieve multi-dimensional metric values using the Azure Monitor REST API, use the API version "2019-07-01" or later. > -**Method**: GET --**Request URI**: https:\//management.azure.com/subscriptions/*{subscription-id}*/resourceGroups/*{resource-group-name}*/providers/*{resource-provider-namespace}*/*{resource-type}*/*{resource-name}*/providers/microsoft.insights/metrics?metricnames=*{metric}*×pan=*{starttime/endtime}*&$filter=*{filter}*&interval=*{timeGrain}*&aggregation=*{aggreation}*&api-version=*{apiVersion}* +Use the following request format to retrieve metric values. 
-For example, to retrieve the top three APIs, in descending value, by the number of `Transactions` during a 5-minute range, where the GeoType was `Primary`, the request would be: --```powershell -$filter = "APIName eq '*' and GeoType eq 'Primary'" -$request = "https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/microsoft.insights/metrics?metricnames=Transactions&timespan=2018-03-01T02:00:00Z/2018-03-01T02:05:00Z&`$filter=apiname eq 'GetBlobProperties'&interval=PT1M&aggregation=Total&top=3&orderby=Total desc&api-version=2019-07-01" -Invoke-RestMethod -Uri $request ` - -Headers $authHeader ` - -Method Get ` - -OutFile ".\contosostorage-metric-values.json" ` - -Verbose +```HTTP +GET /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/<resource-provider-namespace>/<resource-type>/<resource-name>/providers/microsoft.insights/metrics?metricnames=<metric>&timespan=<starttime/endtime>&$filter=<filter>&interval=<timeGrain>&aggregation=<aggregation>&api-version=<apiVersion> +Host: management.azure.com +Content-Type: application/json +Authorization: Bearer <access token> ``` -The resulting JSON response body would be similar to the following example: +The following example retrieves the top three APIs, by the number of `Transactions` in descending value order, during a 5-minute range, where the `GeoType` dimension has a value of `Primary`. 
++```curl +curl --location --request GET 'https://management.azure.com/subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/microsoft.insights/metrics?metricnames=Transactions&timespan=2023-03-01T02:00:00Z/2023-03-01T02:05:00Z&$filter=apiname eq '\''GetBlobProperties'\''&interval=PT1M&aggregation=Total&top=3&orderby=Total desc&api-version=2019-07-01' \ +--header 'Content-Type: application/json' \ +--header 'Authorization: Bearer yJ0eXAiOi...g1dCI6Ii1LS' +``` +The following JSON shows an example response body. ```json { "cost": 0,- "timespan": "2018-03-01T02:00:00Z/2018-03-01T02:05:00Z", + "timespan": "2023-03-01T02:00:00Z/2023-03-01T02:05:00Z", "interval": "PT1M", "value": [ {- "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/Microsoft.Insights/metrics/Transactions", + "id": "/subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/Microsoft.Insights/metrics/Transactions", "type": "Microsoft.Insights/metrics", "name": { "value": "Transactions", The resulting JSON response body would be similar to the following example: ], "data": [ {- "timeStamp": "2017-09-19T02:00:00Z", + "timeStamp": "2023-03-01T02:00:00Z", "total": 2 }, {- "timeStamp": "2017-09-19T02:01:00Z", + "timeStamp": "2023-03-01T02:01:00Z", "total": 1 }, {- "timeStamp": "2017-09-19T02:02:00Z", + "timeStamp": "2023-03-01T02:02:00Z", "total": 3 }, {- "timeStamp": "2017-09-19T02:03:00Z", + "timeStamp": "2023-03-01T02:03:00Z", "total": 7 }, {- "timeStamp": "2017-09-19T02:04:00Z", + "timeStamp": "2023-03-01T02:04:00Z", "total": 2 } ] The resulting JSON response body would be similar to the following example: } ``` -### Use ARMClient +### Retrieve the resource ID
-Another approach is to use [ARMClient](https://github.com/projectkudu/armclient) on your Windows machine. ARMClient handles the Azure AD authentication (and resulting JWT token) automatically. The following steps outline the use of ARMClient for retrieving metric data: +Using the REST API requires the resource ID of the target Azure resource. +Resource IDs use the following pattern: -1. Install [Chocolatey](https://chocolatey.org/) and [ARMClient](https://github.com/projectkudu/armclient). -1. In a terminal window, enter **armclient.exe login**. Doing so prompts you to sign in to Azure. -1. Enter **armclient GET [your_resource_id]/providers/microsoft.insights/metricdefinitions?api-version=2016-03-01**. -1. Enter **armclient GET [your_resource_id]/providers/microsoft.insights/metrics?api-version=2016-09-01**. +`/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/<resource-provider-namespace>/<resource-type>/<resource-name>` -For example, to retrieve the metric definitions for a specific logic app, issue the following command: +For example: -```console -armclient GET /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Logic/workflows/ContosoTweets/providers/microsoft.insights/metricDefinitions?api-version=2016-03-01 -``` --## Retrieve the resource ID --Using the REST API can help you to understand the available metric definitions, granularity, and related values. That information is helpful when you use the [Azure Management Library](/previous-versions/azure/reference/mt417623(v=azure.100)). --For the preceding code, the resource ID to use is the full path to the desired Azure resource. 
For example, to query against an Azure Web App, the resource ID would be: --*/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Web/sites/{site-name}/* +* **Azure IoT Hub**: /subscriptions/\<subscription-id>/resourceGroups/\<resource-group-name>/providers/Microsoft.Devices/IotHubs/\<iot-hub-name> +* **Elastic SQL pool**: /subscriptions/\<subscription-id>/resourceGroups/\<resource-group-name>/providers/Microsoft.Sql/servers/\<pool-db>/elasticpools/\<sql-pool-name> +* **Azure SQL Database (v12)**: /subscriptions/\<subscription-id>/resourceGroups/\<resource-group-name>/providers/Microsoft.Sql/servers/\<server-name>/databases/\<database-name> +* **Azure Service Bus**: /subscriptions/\<subscription-id>/resourceGroups/\<resource-group-name>/providers/Microsoft.ServiceBus/namespaces/\<namespace-name> +* **Azure Virtual Machine Scale Sets**: /subscriptions/\<subscription-id>/resourceGroups/\<resource-group-name>/providers/Microsoft.Compute/virtualMachineScaleSets/\<scale-set-name> +* **Azure Virtual Machines**: /subscriptions/\<subscription-id>/resourceGroups/\<resource-group-name>/providers/Microsoft.Compute/virtualMachines/\<vm-name> +* **Azure Event Hubs**: /subscriptions/\<subscription-id>/resourceGroups/\<resource-group-name>/providers/Microsoft.EventHub/namespaces/\<eventhub-namespace> -The following list contains a few examples of resource ID formats for various Azure resources: +Use the Azure portal, PowerShell, or the Azure CLI to find the resource ID. 
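These patterns can also be composed programmatically; the following Python sketch (a hypothetical helper with illustrative names, not part of any SDK) builds a virtual machine's resource ID from its parts:

```python
def resource_id(subscription_id, resource_group, provider, resource_type, name):
    """Compose an Azure resource ID from its parts."""
    return (f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
            f"/providers/{provider}/{resource_type}/{name}")

# Illustrative values only.
vm_id = resource_id("12345678-abcd-98765432-abcdef012345", "azmon-rest-api-walkthrough",
                    "Microsoft.Compute", "virtualMachines", "contoso-vm-001")
```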
-* **Azure IoT Hub**: /subscriptions/*{subscription-id}*/resourceGroups/*{resource-group-name}*/providers/Microsoft.Devices/IotHubs/*{iot-hub-name}* -* **Elastic SQL pool**: /subscriptions/*{subscription-id}*/resourceGroups/*{resource-group-name}*/providers/Microsoft.Sql/servers/*{pool-db}*/elasticpools/*{sql-pool-name}* -* **Azure SQL Database (v12)**: /subscriptions/*{subscription-id}*/resourceGroups/*{resource-group-name}*/providers/Microsoft.Sql/servers/*{server-name}*/databases/*{database-name}* -* **Azure Service Bus**: /subscriptions/*{subscription-id}*/resourceGroups/*{resource-group-name}*/providers/Microsoft.ServiceBus/*{namespace}*/*{servicebus-name}* -* **Azure Virtual Machine Scale Sets**: /subscriptions/*{subscription-id}*/resourceGroups/*{resource-group-name}*/providers/Microsoft.Compute/virtualMachineScaleSets/*{vm-name}* -* **Azure Virtual Machines**: /subscriptions/*{subscription-id}*/resourceGroups/*{resource-group-name}*/providers/Microsoft.Compute/virtualMachines/*{vm-name}* -* **Azure Event Hubs**: /subscriptions/*{subscription-id}*/resourceGroups/*{resource-group-name}*/providers/Microsoft.EventHub/namespaces/*{eventhub-namespace}* -There are alternative approaches to retrieving the resource ID. You can use Azure Resource Explorer, view the desired resource in the Azure portal, and use PowerShell or the Azure CLI. +### [Azure portal](#tab/portal) -### Azure Resource Explorer +To find the resource ID in the portal, from the resource's overview page, select **JSON view**. -To find the resource ID for a desired resource, one helpful approach is to use the [Azure Resource Explorer](https://resources.azure.com) tool. Go to the desired resource and then look at the ID shown, as in the following screenshot: - +The Resource JSON page is displayed. The resource ID can be copied using the icon on the right of the ID. -### Azure portal -The resource ID can also be obtained from the Azure portal. 
To do so, go to the desired resource and then select **Properties**. The resource ID appears in the **Properties** section, as seen in the following screenshot: - --### Azure PowerShell +### [PowerShell](#tab/powershell) The resource ID can be retrieved by using Azure PowerShell cmdlets too. For example, to obtain the resource ID for an Azure logic app, execute the `Get-AzLogicApp` cmdlet, as in the following example: Get-AzLogicApp -ResourceGroupName azmon-rest-api-walkthrough -Name contosotweets The result should be similar to the following example: ```output-Id : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Logic/workflows/ContosoTweets +Id : /subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Logic/workflows/ContosoTweets Name : ContosoTweets Type : Microsoft.Logic/workflows Location : centralus PlanId : Version : 08586982649483762729 ``` -### Azure CLI +### [Azure CLI](#tab/cli) To retrieve the resource ID for an Azure Storage account by using the Azure CLI, execute the `az storage account show` command, as shown in the following example: ```azurecli-az storage account show -g azmon-rest-api-walkthrough -n contosotweets2017 +az storage account show -g azmon-rest-api-walkthrough -n azmonstorage001 ``` The result should be similar to the following example: The result should be similar to the following example: ```json { "accessTier": null,- "creationTime": "2017-08-18T19:58:41.840552+00:00", + "creationTime": "2023-08-18T19:58:41.840552+00:00", "customDomain": null, "enableHttpsTrafficOnly": false, "encryption": null,- "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/contosotweets2017", + "id": 
"/subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/azmonstorage001", "identity": null, "kind": "Storage", "lastGeoFailoverTime": null, "location": "centralus",- "name": "contosotweets2017", + "name": "azmonstorage001", "networkAcls": null, "primaryEndpoints": {- "blob": "https://contosotweets2017.blob.core.windows.net/", - "file": "https://contosotweets2017.file.core.windows.net/", - "queue": "https://contosotweets2017.queue.core.windows.net/", - "table": "https://contosotweets2017.table.core.windows.net/" + "blob": "https://azmonstorage001.blob.core.windows.net/", + "file": "https://azmonstorage001.file.core.windows.net/", + "queue": "https://azmonstorage001.queue.core.windows.net/", + "table": "https://azmonstorage001.table.core.windows.net/" }, "primaryLocation": "centralus", "provisioningState": "Succeeded", The result should be similar to the following example: > [!NOTE] > Azure logic apps aren't yet available via the Azure CLI. For this reason, an Azure Storage account is shown in the preceding example. >-+ ## Retrieve activity log data -In addition to metric definitions and related values, it's also possible to use the Azure Monitor REST API to retrieve other interesting insights related to Azure resources. As an example, it's possible to query [activity log](/rest/api/monitor/activitylogs) data. The following sample requests use the Azure Monitor REST API to query an activity log. +Use the Azure Monitor REST API to query [activity log](/rest/api/monitor/activitylogs) data. -Get activity logs with filter: +Use the following request format for activity log queries. 
-``` HTTP -GET https://management.azure.com/subscriptions/089bd33f-d4ec-47fe-8ba5-0753aa5c5b33/providers/microsoft.insights/eventtypes/management/values?api-version=2015-04-01&$filter=eventTimestamp ge '2018-01-21T20:00:00Z' and eventTimestamp le '2018-01-23T20:00:00Z' and resourceGroupName eq 'MSSupportGroup' +```HTTP +GET /subscriptions/<subscriptionId>/providers/Microsoft.Insights/eventtypes/management/values?api-version=2015-04-01&$filter=<filter>&$select=<select> +Host: management.azure.com ``` -Get activity logs with filter and select: +**$filter** reduces the set of data collected. +This argument is required and it also requires at least the start date/time. +The $filter argument accepts the following patterns: +- List events for a resource group: $filter=eventTimestamp ge '2014-07-16T04:36:37.6407898Z' and eventTimestamp le '2014-07-20T04:36:37.6407898Z' and resourceGroupName eq 'resourceGroupName'. +- List events for a resource: $filter=eventTimestamp ge '2014-07-16T04:36:37.6407898Z' and eventTimestamp le '2014-07-20T04:36:37.6407898Z' and resourceUri eq 'resourceURI'. +- List events for a subscription in a time range: $filter=eventTimestamp ge '2014-07-16T04:36:37.6407898Z' and eventTimestamp le '2014-07-20T04:36:37.6407898Z'. +- List events for a resource provider: $filter=eventTimestamp ge '2014-07-16T04:36:37.6407898Z' and eventTimestamp le '2014-07-20T04:36:37.6407898Z' and resourceProvider eq 'resourceProviderName'. +- List events for a correlation ID: $filter=eventTimestamp ge '2014-07-16T04:36:37.6407898Z' and eventTimestamp le '2014-07-20T04:36:37.6407898Z' and correlationId eq 'correlationID'. 
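The filter patterns above can be assembled in a small Python sketch (`activity_log_filter` is a hypothetical helper; the dates and resource group are illustrative):

```python
def activity_log_filter(start, end, resource_group=None):
    """Build a $filter expression for an activity-log query.

    A start/end time range is always required; the resource group scope is optional.
    """
    parts = [f"eventTimestamp ge '{start}'", f"eventTimestamp le '{end}'"]
    if resource_group:
        parts.append(f"resourceGroupName eq '{resource_group}'")
    return " and ".join(parts)

f = activity_log_filter("2023-03-21T20:00:00Z", "2023-03-24T20:00:00Z", "MSSupportGroup")
```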
-```HTTP -GET https://management.azure.com/subscriptions/089bd33f-d4ec-47fe-8ba5-0753aa5c5b33/providers/microsoft.insights/eventtypes/management/values?api-version=2015-04-01&$filter=eventTimestamp ge '2015-01-21T20:00:00Z' and eventTimestamp le '2015-01-23T20:00:00Z' and resourceGroupName eq 'MSSupportGroup'&$select=eventName,id,resourceGroupName,resourceProviderName,operationName,status,eventTimestamp,correlationId,submissionTimestamp,level -``` -Get activity logs with select: +**$select** is used to fetch a specified list of properties for the returned events. +The $select argument is a comma separated list of property names to be returned. +Valid values are: +`authorization`, `claims`, `correlationId`, `description`, `eventDataId`, `eventName`, `eventTimestamp`, `httpRequest`, `level`, `operationId`, `operationName`, `properties`, `resourceGroupName`, `resourceProviderName`, `resourceId`, `status`, `submissionTimestamp`, `subStatus`, and `subscriptionId`. -```HTTP -GET https://management.azure.com/subscriptions/089bd33f-d4ec-47fe-8ba5-0753aa5c5b33/providers/microsoft.insights/eventtypes/management/values?api-version=2015-04-01&$select=eventName,id,resourceGroupName,resourceProviderName,operationName,status,eventTimestamp,correlationId,submissionTimestamp,level -``` +The following sample requests use the Azure Monitor REST API to query an activity log. 
+### Get activity logs with filter -Get activity logs without filter or select: +The following example gets the activity logs for resource group "MSSupportGroup" between the dates 2023-03-21T20:00:00Z and 2023-03-24T20:00:00Z. +``` HTTP +GET https://management.azure.com/subscriptions/12345678-abcd-98765432-abcdef012345/providers/microsoft.insights/eventtypes/management/values?api-version=2015-04-01&$filter=eventTimestamp ge '2023-03-21T20:00:00Z' and eventTimestamp le '2023-03-24T20:00:00Z' and resourceGroupName eq 'MSSupportGroup' +``` +### Get activity logs with filter and select ++The following example gets the activity logs for resource group "MSSupportGroup", between the dates 2023-03-21T20:00:00Z and 2023-03-24T20:00:00Z, returning the elements eventName, operationName, status, eventTimestamp, correlationId, submissionTimestamp, and level. ```HTTP-GET https://management.azure.com/subscriptions/089bd33f-d4ec-47fe-8ba5-0753aa5c5b33/providers/microsoft.insights/eventtypes/management/values?api-version=2015-04-01 +GET https://management.azure.com/subscriptions/12345678-abcd-98765432-abcdef012345/providers/microsoft.insights/eventtypes/management/values?api-version=2015-04-01&$filter=eventTimestamp ge '2023-03-21T20:00:00Z' and eventTimestamp le '2023-03-24T20:00:00Z' and resourceGroupName eq 'MSSupportGroup'&$select=eventName,operationName,status,eventTimestamp,correlationId,submissionTimestamp,level ``` + ## Troubleshooting -If you receive a 429, 503, or 504 error, retry the API in one minute. +You may receive one of the following HTTP error statuses: +* 429 Too Many Requests +* 503 Service Unavailable +* 504 Gateway Timeout ++If one of these statuses is returned, wait one minute and then resend the request. ## Next steps * Review the [overview of monitoring](../overview.md). * View the [supported metrics with Azure Monitor](./metrics-supported.md). 
* Review the [Microsoft Azure Monitor REST API reference](/rest/api/monitor/).-* Review the [Azure Management Library](/previous-versions/azure/reference/mt417623(v=azure.100)). +* Review the [Azure Management Library](/previous-versions/azure/reference/mt417623(v=azure.100)). |
azure-monitor | Register App For Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/register-app-for-token.md | To access Azure REST APIs such as the Log analytics API, or to send custom metri ## Register an App +Create a service principal and register an app using the Azure portal, Azure CLI, or PowerShell. +### [Azure portal](#tab/portal) + 1. To register an app, open the Active Directory Overview page in the Azure portal. 1. Select **App registrations** from the side bar. To access Azure REST APIs such as the Log analytics API, or to send custom metri :::image type="content" source="../media/api-register-app/client-secret.png" alt-text="A screenshot showing the client secrets page."::: +### [Azure CLI](#tab/cli) +++Run the following script to create a service principal and app. ++```azurecli +az ad sp create-for-rbac -n <Service principal display name> ++``` +The response looks as follows: +```JSON +{ + "appId": "0a123b56-c987-1234-abcd-1a2b3c4d5e6f", + "displayName": "AzMonAPIApp", + "password": "123456.ABCDE.~XYZ876123ABcEdB7169", + "tenant": "a1234bcd-5849-4a5d-a2eb-5267eae1bbc7" +} ++``` +>[!Important] +> The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. 
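Assuming the standard Azure AD client credentials flow, the service principal's output can be turned into a token request. The following sketch only builds the endpoint URL and form-encoded body (the values mirror the placeholder output above and are not real credentials); POSTing it is left out:

```python
from urllib.parse import urlencode

# Placeholder values from the sample `az ad sp create-for-rbac` output above.
sp = {
    "appId": "0a123b56-c987-1234-abcd-1a2b3c4d5e6f",
    "password": "<client-secret>",
    "tenant": "a1234bcd-5849-4a5d-a2eb-5267eae1bbc7",
}

# Token endpoint for the service principal's tenant (v2.0 client credentials flow).
token_url = f"https://login.microsoftonline.com/{sp['tenant']}/oauth2/v2.0/token"

# Form-encoded request body; POST this to token_url to obtain a bearer token.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": sp["appId"],
    "client_secret": sp["password"],
    "scope": "https://management.azure.com/.default",
})
```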
++Add a role and scope for the resources that you want to access using the API. ++```azurecli +az role assignment create --assignee <appId> --role <Role> --scope <resource URI> +``` ++The following CLI example assigns the `Reader` role to the service principal for all resources in the `rg-001` resource group: ++```azurecli + az role assignment create --assignee 0a123b56-c987-1234-abcd-1a2b3c4d5e6f --role Reader --scope '/subscriptions/a1234bcd-5849-4a5d-a2eb-5267eae1bbc7/resourceGroups/rg-001' +``` +For more information on creating a service principal using Azure CLI, see [Create an Azure service principal with the Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli). ++### [PowerShell](#tab/powershell) +The following sample script demonstrates creating an Azure Active Directory service principal via PowerShell. For a more detailed walkthrough, see [using Azure PowerShell to create a service principal to access resources](../../../active-directory/develop/howto-authenticate-service-principal-powershell.md). ++```powershell +$subscriptionId = "{azure-subscription-id}" +$resourceGroupName = "{resource-group-name}" ++# Authenticate to a specific Azure subscription. 
+Connect-AzAccount -SubscriptionId $subscriptionId ++# Password for the service principal +$pwd = "{service-principal-password}" +$secureStringPassword = ConvertTo-SecureString -String $pwd -AsPlainText -Force ++# Create a new Azure Active Directory application +$azureAdApplication = New-AzADApplication ` + -DisplayName "My Azure Monitor" ` + -HomePage "https://localhost/azure-monitor" ` + -IdentifierUris "https://localhost/azure-monitor" ` + -Password $secureStringPassword ++# Create a new service principal associated with the designated application +New-AzADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId ++# Assign Reader role to the newly created service principal +New-AzRoleAssignment -RoleDefinitionName Reader ` + -ServicePrincipalName $azureAdApplication.ApplicationId.Guid ++``` ++ ## Next steps -Before you can generate a token using your app, client ID, and secret, assign the app to a role using Access control (IAM) for resource that you want to access. -The role will depend on the resource type and the API that you want to use. +Before you can generate a token using your app, client ID, and secret, assign the app to a role using Access control (IAM) for resource that you want to access. The role will depend on the resource type and the API that you want to use. For example, - To grant your app read from a Log Analytics Workspace, add your app as a member to the **Reader** role using Access control (IAM) for your Log Analytics Workspace. For more information, see [Access the API](./access-api.md) - To grant access to send custom metrics for a resource, add your app as a member to the **Monitoring Metrics Publisher** role using Access control (IAM) for your resource. 
For more information, see [Send metrics to the Azure Monitor metric database using REST API](../../essentials/metrics-store-custom-rest-api.md) -For more information see [Assign Azure roles using the Azure portal](https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal) +For more information, see [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md) -Once you have assigned a role you can use your app, client ID, and client secret to generate a bearer token to access the REST API. +Once you've assigned a role, you can use your app, client ID, and client secret to generate a bearer token to access the REST API. > [!NOTE] > When using Azure AD authentication, it may take up to 60 minutes for the Azure Application Insights REST API to recognize new role-based access control (RBAC) permissions. While permissions are propagating, REST API calls may fail with error code 403. |
azure-monitor | Basic Logs Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md | Configure a table for Basic logs if: | [AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth) | Azure Media Services account health status. | | [AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) | Azure Media Services information about requests to streaming endpoints. | | [ASCDeviceEvents](/azure/azure-monitor/reference/tables/ASCDeviceEvents) | Azure Sphere devices operations, with information about event types, event categories, event classes, event descriptions etc. |+ | [AVNMNetworkGroupMembershipChange](/azure/azure-monitor/reference/tables/AVNMNetworkGroupMembershipChange) | Azure Virtual Network Manager changes to network group membership of network resources. | + | [AZFWNetworkRule](/azure/azure-monitor/reference/tables/AZFWNetworkRule) | Azure Firewall network rule logs, including data plane packet and rule attributes. | | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | Azure Container Apps logs, generated within a Container Apps environment. | | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | Used in [Container insights](../containers/container-insights-overview.md) and includes verbose text-based log records. | | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) | Dev Center resources data plane audit logs. For example, dev boxes and environment stop, start, delete. | |
azure-monitor | Cross Workspace Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cross-workspace-query.md | There are two methods to query data that is stored in multiple workspace and app ## Cross-resource query limits * The number of Application Insights resources and Log Analytics workspaces that you can include in a single query is limited to 100.-* Cross-resource query is not supported in View Designer. You can Author a query in Log Analytics and pin it to Azure dashboard to [visualize a log query](../visualize/tutorial-logs-dashboards.md) or include in [Workbooks](../visualize/workbooks-overview.md). * Cross-resource queries in log alerts are only supported in the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). If you're using the legacy Log Analytics Alerts API, you'll need to [switch to the current API](../alerts/alerts-log-api-switch.md).+* References to a cross resource, such as another workspace, must be explicit and can't be parameterized. See [Identifying workspace resources](#identifying-workspace-resources) for examples. ## Querying across Log Analytics workspaces and from Application Insights |
azure-monitor | Data Retention Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md | You can access archived data by [running a search job](search-jobs.md) or [resto > [!NOTE] > The archive period can only be set at the table level, not at the workspace level. -When you shorten an existing retention policy, it takes 30 days for Azure Monitor to remove data, to prevent data loss in error configuration, and let you revert it. You can [purge data](#purge-retained-data) immediately when required. +When you shorten an existing retention policy, Azure Monitor waits 30 days before removing the data, so you can revert the change and prevent data loss in the event of an error in configuration. You can [purge data](#purge-retained-data) immediately when required. ## Configure the default workspace retention policy The retention can also be [set programmatically with PowerShell](../app/powershe - [Learn more about Log Analytics workspaces and data retention and archive](log-analytics-workspace-overview.md) - [Create a search job to retrieve archive data matching particular criteria](search-jobs.md)-- [Restore archive data within a particular time range](restore.md)+- [Restore archive data within a particular time range](restore.md) |
azure-monitor | Logs Data Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md | Log Analytics workspace data export continuously exports data that's sent to you ## Limitations -- Custom logs created via [HTTP Data Collector API](./data-collector-api.md), or 'dataSources' API won't be supported in export. This includes text logs consumed by MMA. Custom log created using [data collection rule](./logs-ingestion-api-overview.md) can be exported, including text based logs.-- We are support more tables in data export gradually, but currently limited to those specified in the [supported tables](#supported-tables) section.+- Custom logs created using the [HTTP Data Collector API](./data-collector-api.md) and the dataSources API can't be exported. This includes text logs consumed by the Log Analytics agent. You can export custom logs created using [data collection rules](./logs-ingestion-api-overview.md), including text-based logs. +- Data export will gradually support more tables, but is currently limited to the tables specified in the [supported tables](#supported-tables) section. - You can define up to 10 enabled rules in your workspace; each can include multiple tables. You can create more rules in the workspace in a disabled state. - Destinations must be in the same region as the Log Analytics workspace. - The storage account must be unique across rules in the workspace. Log Analytics workspace data export continuously exports data that's sent to you - Currently, data export isn't supported in China. ## Data completeness-Data export is optimized for moving large data volumes to your destinations. The export operation might fail for destinations capacity or availability, and a retry process continues for up to 12-hours. For more information, see [Create or update a data export rule](#create-or-update-a-data-export-rule) for destination limits and recommended alerts. 
If the destinations are still unavailable after the retry period, data is discarded. In certain retry conditions, retry can cause a fraction of duplicated records. +Data export is optimized to move large data volumes to your destinations. The export operation might fail if the destination doesn't have sufficient capacity or is unavailable. In the event of failure, the retry process continues for up to 12 hours. For more information about destination limits and recommended alerts, see [Create or update a data export rule](#create-or-update-a-data-export-rule). If the destinations are still unavailable after the retry period, the data is discarded. In certain cases, retry can cause duplication of a fraction of the exported records. ## Pricing model Data export charges are based on the volume of data exported measured in bytes. The size of data exported by Log Analytics Data Export is the number of bytes in the exported JSON-formatted data. Data volume is measured in GB (10^9 bytes). |
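The completeness behavior described for data export — retry on destination failure for up to 12 hours, then discard, with possible duplication of retried records — can be sketched roughly as follows. This is an illustrative model under stated assumptions (the function and parameter names are invented), not the actual export implementation:

```python
import time

RETRY_WINDOW_SECONDS = 12 * 60 * 60  # documented retry period: up to 12 hours

def export_with_retry(batch, send, now=time.monotonic, sleep=time.sleep):
    """Illustrative sketch only: retry a failed export until the retry
    window elapses, then give up (the batch is discarded). Re-sending a
    partially delivered batch is what can produce duplicate records."""
    deadline = now() + RETRY_WINDOW_SECONDS
    delay = 1.0
    while now() < deadline:
        try:
            send(batch)
            return True                  # delivered
        except ConnectionError:
            sleep(min(delay, 60.0))      # back off before the next attempt
            delay *= 2
    return False                         # window exhausted: data is discarded
```

Injecting `now` and `sleep` makes the deadline logic easy to exercise without waiting 12 hours.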
azure-monitor | Migrate Splunk To Azure Monitor Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/migrate-splunk-to-azure-monitor-logs.md | To export your historical data from Splunk: - Learn more about using [Log Analytics](../logs/log-analytics-overview.md) and the [Log Analytics Query API](../logs/api/overview.md). - [Enable Microsoft Sentinel on your Log Analytics workspace](../../sentinel/quickstart-onboard.md).-- Learn more about roles and permissions in Sentinel [Roles and permissions in Microsoft Sentinel](../../sentinel/roles.md). - Take the [Analyze logs in Azure Monitor with KQL training module](/training/modules/analyze-logs-with-kql/). |
azure-monitor | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md | +## January 2023 + +|Subservice| Article | Description | +|||| +Agents|[Tutorial: Transform text logs during ingestion in Azure Monitor Logs](agents/azure-monitor-agent-transformation.md)|New tutorial on how to write a KQL query that transforms text log data and add the transformation to a data collection rule.| +Agents|[Azure Monitor Agent overview](agents/agents-overview.md)|SQL Best Practices Assessment now available with Azure Monitor Agent.| +Alerts|[Create a new alert rule](alerts/alerts-create-new-alert-rule.md)|Streamlined alerts documentation, added the common schema definition to the common schema article, and moved sample ARM templates for alerts to the Samples section.| +Alerts|[Non-common alert schema definitions for Test Action Group (Preview)](alerts/alerts-non-common-schema-definitions.md)|Added a sample payload for the Actual Cost and Forecasted Budget schemas.| +Application-Insights|[Live Metrics: Monitor and diagnose with 1-second latency](app/live-stream.md)|Updated Live Metrics troubleshooting section.| +Application-Insights|[Application Insights for Azure VMs and Virtual Machine Scale Sets](app/azure-vm-vmss-apps.md)|Easily monitor your IIS-hosted .NET Framework and .NET Core applications running on Azure VMs and Virtual Machine Scale Sets using a new App Insights Extension.| +Application-Insights|[Sampling in Application Insights](app/sampling.md)|We've added embedded links to assist with looking up type definitions. (Dependency, Event, Exception, PageView, Request, Trace)| +Application-Insights|[Configuration options: Azure Monitor Application Insights for Java](app/java-standalone-config.md)|Instructions are now available on how to set the HTTP proxy using an environment variable, which overrides the JSON configuration. 
We've also provided a sample to configure connection string at runtime.| +Application-Insights|[Application Insights for Java 2.x](app/deprecated-java-2x.md)|The Java 2.x retirement notice is available at https://azure.microsoft.com/updates/application-insights-java-2x-retirement.| +Autoscale|[Diagnostic settings in Autoscale](autoscale/autoscale-diagnostics.md)|Updated and expanded content| +Autoscale|[Overview of common autoscale patterns](autoscale/autoscale-common-scale-patterns.md)|Clarification of weekend profiles| +Autoscale|[Autoscale with multiple profiles](autoscale/autoscale-multiprofile.md)|Added clarifications for profile end times| +Change-Analysis|[Scenarios for using Change Analysis in Azure Monitor](change/change-analysis-custom-filters.md)|Merged two low engagement docs into Visualizations article and removed from TOC| +Change-Analysis|[Scenarios for using Change Analysis in Azure Monitor](change/change-analysis-query.md)|Merged two low engagement docs into Visualizations article and removed from TOC| +Change-Analysis|[Scenarios for using Change Analysis in Azure Monitor](change/change-analysis-visualizations.md)|Merged two low engagement docs into Visualizations article and removed from TOC| +Change-Analysis|[Track a web app outage using Change Analysis](change/tutorial-outages.md)|Added new section on virtual network changes to the tutorial| +Containers|[Azure Monitor container insights for Azure Kubernetes Service (AKS) hybrid clusters (preview)](containers/container-insights-enable-provisioned-clusters.md)|New article.| +Containers|[Syslog collection with Container Insights (preview)](containers/container-insights-syslog.md)|New article.| +Essentials|[Query Prometheus metrics using Azure workbooks (preview)](essentials/prometheus-workbooks.md)|New article.| +Essentials|[Azure Workbooks data sources](visualize/workbooks-data-sources.md)|Added section for Prometheus metrics.| 
+Essentials|[Azure Monitor workspace (preview)](essentials/azure-monitor-workspace-overview.md)|Updated design considerations| +Essentials|[Supported metrics with Azure Monitor](essentials/metrics-supported.md)|Updated and refreshed the list of supported metrics| +Essentials|[Supported categories for Azure Monitor resource logs](essentials/resource-logs-categories.md)|Updated and refreshed the list of supported logs| +General|[Multicloud monitoring with Azure Monitor](best-practices-multicloud.md)|New article.| +Logs|[Set daily cap on Log Analytics workspace](logs/daily-cap.md)|Clarified special case for daily cap logic.| +Logs|[Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API](essentials/metrics-store-custom-rest-api.md)|Updated and refreshed how to send custom metrics| +Logs|[Migrate from Splunk to Azure Monitor Logs](logs/migrate-splunk-to-azure-monitor-logs.md)|New article that explains how to migrate your Splunk Observability deployment to Azure Monitor Logs for logging and log data analysis.| +Logs|[Manage access to Log Analytics workspaces](logs/manage-access.md)|Added permissions required to run a search job and restore archived data.| +Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Added information about how to modify a table schema using the API.| +Snapshot-Debugger|[Enable Snapshot Debugger for .NET apps in Azure App Service](snapshot-debugger/snapshot-debugger-app-service.md)|Per customer feedback, added new note that Consumption plan is not supported| +Virtual-Machines|[Collect IIS logs with Azure Monitor Agent](agents/data-collection-iis.md)|Added sample log queries.| +Virtual-Machines|[Collect text logs with Azure Monitor Agent](agents/data-collection-text-log.md)|Added sample log queries.| +Virtual-Machines|[Monitor virtual machines with Azure Monitor: Deploy 
agent](vm/monitor-virtual-machine-agent.md)|Rewritten for Azure Monitor agent.| +Virtual-Machines|[Monitor virtual machines with Azure Monitor: Alerts](vm/monitor-virtual-machine-alerts.md)|Rewritten for Azure Monitor agent.| +Virtual-Machines|[Monitor virtual machines with Azure Monitor: Analyze monitoring data](vm/monitor-virtual-machine-analyze.md)|Rewritten for Azure Monitor agent.| +Virtual-Machines|[Monitor virtual machines with Azure Monitor: Collect data](vm/monitor-virtual-machine-data-collection.md)|Rewritten for Azure Monitor agent.| +Virtual-Machines|[Monitor virtual machines with Azure Monitor: Migrate management pack logic](vm/monitor-virtual-machine-management-packs.md)|Rewritten for Azure Monitor agent.| +Virtual-Machines|[Monitor virtual machines with Azure Monitor](vm/monitor-virtual-machine.md)|Rewritten for Azure Monitor agent.| +Virtual-Machines|[Monitor Azure virtual machines](../../articles/virtual-machines/monitor-vm.md)|VM scenario updates for AMA| + ## December 2022 |Subservice| Article | Description | |
azure-vmware | Concepts Design Public Internet Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-design-public-internet-access.md | Title: Concept - Internet connectivity design considerations description: Options for Azure VMware Solution Internet Connectivity. Previously updated : 9/17/2022 Last updated : 2/5/2023 # Internet connectivity design considerations The option that you select depends on the following factors: - There are scale limits on how many Azure Public IPv4 addresses can be allocated to a Network Virtual Appliance running in native Azure or provisioned on Azure Firewall. The Azure Public IPv4 address to NSX-T Data Center Edge option allows for much higher allocations (1,000s versus 100s). - Use an Azure Public IPv4 address to the NSX-T Data Center Edge for a localized exit to the internet from each private cloud in its local region. Using multiple Azure VMware Solution private clouds in several Azure regions that need to communicate with each other and the internet, it can be challenging to match an Azure VMware Solution private cloud with a security service in Azure. The difficulty is due to the way a default route from Azure works. +> [!IMPORTANT] +> By design, Public IPv4 Address with NSX-T Data Center does not allow the exchange of Azure/Microsoft owned Public IP Addresses over ExpressRoute Private Peering connections. This means you cannot advertise the Public IPv4 addresses to your customer vNET or on-premises network via ExpressRoute. All Public IPv4 Addresses with NSX-T Data Center traffic must take the internet path even if the Azure VMware Solution private cloud is connected via ExpressRoute. For more information, visit [ExpressRoute Circuit Peering](../expressroute/expressroute-circuit-peerings.md). + ## Next Steps [Enable Managed SNAT for Azure VMware Solution Workloads](enable-managed-snat-for-workloads.md) |
azure-vmware | Concepts Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-networking.md | Title: Concepts - Network interconnectivity description: Learn about key aspects and use cases of networking and interconnectivity in Azure VMware Solution. Previously updated : 10/25/2022 Last updated : 2/4/2023 The diagram below shows the basic network interconnectivity established at the t - Outbound access from VMs on the private cloud to Azure services. - Inbound access of workloads running in the private cloud. +When connecting **production** Azure VMware Solution private clouds to an Azure virtual network, an ExpressRoute virtual network gateway with the Ultra Performance Gateway SKU should be used with FastPath enabled to achieve 10Gbps connectivity. Less critical environments can use the Standard or High Performance Gateway SKUs for slower network performance. :::image type="content" source="media/concepts/adjacency-overview-drawing-single.png" alt-text="Diagram showing the basic network interconnectivity established at the time of an Azure VMware Solution private cloud deployment." border="false"::: |
azure-vmware | Concepts Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md | The default storage policy is set to **RAID-1 FTT-1**, with Object Space Reserva In a three-host cluster, FTT-1 accommodates a single host's failure. Microsoft governs failures regularly and replaces the hardware when events are detected from an operations perspective. >[!NOTE]->When you log on to the vSphere Client, you may notice a VM Storage Policy called **vSAN Default Storage Policy** with **Object Space Reservation** set to **Thick** provisioning. Please note that this is not the default storage policy applied to the cluster. This policy exists for historical purposes and will eventually be modified to **Thin** provisioning. +>When you log on to the vSphere Client, you may notice a VM Storage Policy called **vSAN Default Storage Policy** with **Object Space Reservation** set to **Thin** provisioning. Please note that this is not the default storage policy applied to the cluster. This policy exists for historical purposes. >[!NOTE]->All of the software-defined data center (SDDC) management VMs (vCenter Server, NSX-T Manager, NSX-T Data Center Edges, and others) use the **Microsoft vSAN Management Storage Policy**, with **Object Space Reservation** set to **Thick** provisioning. +>All of the software-defined data center (SDDC) management VMs (vCenter Server, NSX-T Manager, NSX-T Data Center Edges, and others) use the **Microsoft vSAN Management Storage Policy**, with **Object Space Reservation** set to **Thin** provisioning. >[!TIP] >If you're unsure if the cluster will grow to four or more hosts, then deploy using the default policy. If you're sure your cluster will grow, then instead of expanding the cluster after your initial deployment, we recommend deploying the extra hosts during deployment. 
As the VMs are deployed to the cluster, change the disk's storage policy in the VM settings to either RAID-5 FTT-1 or RAID-6 FTT-2. In reference to [SLA for Azure VMware Solution](https://azure.microsoft.com/support/legal/sla/azure-vmware/v1_1/), note that more than 6 hosts should be configured in the cluster to use an FTT-2 policy (RAID-1 or RAID-6). Also note that the storage policy is not automatically updated based on cluster size. Similarly, changing the default does not automatically update the running VM policies. |
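As a rough reference for the policy guidance above, this sketch maps the storage policies discussed here to the minimum host counts they generally require under standard vSAN rules (the Azure VMware Solution SLA may call for more hosts than these minimums; the helper name is illustrative):

```python
# Minimum host counts for common vSAN storage policies (standard vSAN
# requirements; SLA guidance for Azure VMware Solution may require more).
VSAN_POLICY_MIN_HOSTS = {
    "RAID-1 FTT-1": 3,   # 2 * FTT + 1 components (mirror copies plus witness)
    "RAID-5 FTT-1": 4,   # 3 data + 1 parity components
    "RAID-1 FTT-2": 5,   # 2 * FTT + 1 components
    "RAID-6 FTT-2": 6,   # 4 data + 2 parity components
}

def policies_available(host_count):
    """Return the policies a cluster of `host_count` hosts can support."""
    return sorted(p for p, n in VSAN_POLICY_MIN_HOSTS.items()
                  if host_count >= n)
```

For example, a three-host cluster can only use RAID-1 FTT-1, which is why the article recommends deploying extra hosts up front if the cluster is expected to grow.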
azure-vmware | Configure Storage Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-storage-policy.md | Title: Configure storage policy description: Learn how to configure storage policy for your Azure VMware Solution virtual machines. Previously updated : 04/11/2022 Last updated : 2/5/2023 #Customer intent: As an Azure service administrator, I want set the VMware vSAN storage policies to determine how storage is allocated to the VM. Now that you've learned how to configure VMware vSAN storage policies, you can l - [How to attach disk pools to Azure VMware Solution hosts (Preview)](attach-disk-pools-to-azure-vmware-solution-hosts.md) - You can use disks as the persistent storage for Azure VMware Solution for optimal cost and performance. -- [How to configure external identity for vCenter](configure-identity-source-vcenter.md) - vCenter Server has a built-in local user called cloudadmin and assigned to the CloudAdmin role. The local cloudadmin user is used to set up users in Active Directory (AD). With the Run command feature, you can configure Active Directory over LDAP or LDAPS for vCenter as an external identity source.+- [How to configure external identity for vCenter Server](configure-identity-source-vcenter.md) - vCenter Server has a built-in local user called cloudadmin and assigned to the CloudAdmin role. The local cloudadmin user is used to set up users in Active Directory (AD). With the Run command feature, you can configure Active Directory over LDAP or LDAPS for vCenter as an external identity source. |
cosmos-db | Sdk Connection Modes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-connection-modes.md | As detailed in the [introduction](#available-connectivity-modes), Direct mode cl ### Routing -When an Azure Cosmos DB SDK on Direct mode is performing an operation, it needs to resolve which backend replica to connect to. The first step is knowing which physical partition should the operation go to, and for that, the SDK obtains the container information that includes the [partition key definition](../partitioning-overview.md#choose-partitionkey) from a Gateway node and considered [metadata](../concepts-limits.md#metadata-request-limits). It also needs the routing information that contains the replicas' TCP addresses. The routing information is available also from Gateway nodes. Once the SDK obtains the routing information, it can proceed to open the TCP connections to the replicas belonging to the target physical partition and execute the operations. +When an Azure Cosmos DB SDK on Direct mode is performing an operation, it needs to resolve which backend replica to connect to. The first step is to determine which physical partition the operation should go to. To do so, the SDK obtains the container information that includes the [partition key definition](../partitioning-overview.md#choose-partitionkey) from a Gateway node. It also needs the routing information that contains the replicas' TCP addresses. The routing information is also available from Gateway nodes, and both are considered [metadata](../concepts-limits.md#metadata-request-limits). Once the SDK obtains the routing information, it can proceed to open the TCP connections to the replicas belonging to the target physical partition and execute the operations. Each replica set contains one primary replica and three secondaries. Write operations are always routed to primary replica nodes while read operations can be served from primary or secondary nodes. |
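The routing steps described for Direct mode can be sketched conceptually: resolve the partition key to a physical partition using partition metadata, then look up that partition's replica endpoints from routing metadata. The partition ranges, replica addresses, and hashing below are illustrative stand-ins, not the SDK's actual metadata formats:

```python
import hashlib

# Illustrative metadata a Gateway node would supply: which hash bucket each
# physical partition owns, and the replica TCP endpoints per partition.
PARTITION_KEY_RANGES = {
    "partition-0": range(0, 128),
    "partition-1": range(128, 256),
}
REPLICA_ADDRESSES = {  # one primary + three secondaries per replica set
    "partition-0": ["tcp://replica-0a:14331", "tcp://replica-0b:14331",
                    "tcp://replica-0c:14331", "tcp://replica-0d:14331"],
    "partition-1": ["tcp://replica-1a:14331", "tcp://replica-1b:14331",
                    "tcp://replica-1c:14331", "tcp://replica-1d:14331"],
}

def resolve_replicas(partition_key_value):
    """Map a partition key to its physical partition, then return the
    replica endpoints the client would open TCP connections to."""
    bucket = hashlib.sha256(partition_key_value.encode()).digest()[0]  # 0..255
    for partition, key_range in PARTITION_KEY_RANGES.items():
        if bucket in key_range:
            return partition, REPLICA_ADDRESSES[partition]
    raise LookupError("no partition owns this key range")
```

In the real service both lookups count against the metadata request limits linked above, which is why SDKs cache this information rather than resolving it per operation.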
cost-management-billing | Buy Savings Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/buy-savings-plan.md | -Azure savings plans help you save money by committing to an hourly spend for one-year or three-years plans for Azure compute resources. Saving plans discounts apply to usage from virtual machines, Dedicated Hosts, Container Instances, App Services and Azure Premium Functions. The hourly commitment is priced in USD for Microsoft Customer Agreement customers and local currency for Enterprise customers. --Before you enter a commitment to buy a savings plan, review the following sections to prepare for your purchase. +Azure savings plans help you save money by committing to an hourly spend for one-year or three-year plans for Azure compute resources. Before you enter a commitment to buy a savings plan, review the following sections to prepare for your purchase. ## Who can buy a savings plan -You can buy a savings plan for an Azure subscription that's of type Enterprise Agreement (EA) offer code MS-AZR-0017P or MS-AZR-0148P, Microsoft Customer Agreement (MCA), or Microsoft Partner Agreement (MPA). If don't know what subscription type you have, see [check your billing type](../manage/view-all-accounts.md#check-the-type-of-your-account). +Savings plan discounts only apply to resources associated with subscriptions purchased through an Enterprise Agreement (EA), Microsoft Customer Agreement (MCA), or Microsoft Partner Agreement (MPA). You can buy a savings plan for an Azure subscription that's of type EA (MS-AZR-0017P or MS-AZR-0148P), MCA or MPA. To determine if you're eligible to buy a plan, [check your billing type](../manage/view-all-accounts.md#check-the-type-of-your-account). 
-## Change agreement type to one supported by savings plan +### Enterprise Agreement customers -If your current agreement type isn't supported by a savings plan, you might be able to transfer or migrate it to one that's supported. For more information, see the following articles. +- EA admins with write permissions can directly purchase savings plans from **Cost Management + Billing** > **Savings plan**. No subscription-specific permissions are needed. +- Subscription owners for one of the subscriptions in the enrollment account can purchase savings plans from **Home** > **Savings plan**. -- [Transfer Azure products between different billing agreements](../manage/subscription-transfer.md)-- [Product transfer support](../manage/subscription-transfer.md#product-transfer-support)-- [From MOSA to the Microsoft Customer Agreement](https://www.microsoft.com/licensing/news/from-mosa-to-microsoft-customer-agreement)+Enterprise Agreement (EA) customers can limit purchases to only EA admins by disabling the Add Savings Plan option in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to the **Policies** menu to change settings. -## Required permission and how to buy +### Microsoft Customer Agreement (MCA) customers -You can buy a savings plan using the Azure portal or with the [Savings Plan Order Alias - Create](/rest/api/billingbenefits/savings-plan-order-alias/create) REST API. +- Customers with billing profile contributor permissions or higher can purchase savings plans from **Cost Management + Billing** > **Savings plan** experience. No subscription-specific permissions are needed. +- Subscription owners for one of the subscriptions in the billing profile can purchase savings plans from **Home** > **Savings plan**. 
-### Purchase in the Azure portal +To disallow savings plan purchases on a billing profile, billing profile contributors can navigate to the **Policies** menu under the billing profile and adjust the Azure Savings Plan option. -Required permission and the steps to buy vary, depending on your agreement type. +### Microsoft Partner Agreement partners -#### Enterprise Agreement customers +- Partners can use **Home** > **Savings plan** in the [Azure portal](https://portal.azure.com/) to purchase savings plans on behalf of their customers. -- EA admins with write permissions can directly purchase savings plans from **Cost Management + Billing** > **Savings plan**. No specific permission for a subscription is needed.-- Subscription owners for one of the subscriptions in the EA enrollment can purchase savings plans from **Home** > **Savings plan**.-- Enterprise Agreement (EA) customers can limit purchases to EA admins only by disabling the **Add Savings Plan** option in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to the **Policies** menu to change settings. +## Change agreement type to one supported by savings plan ++If your current agreement type isn't supported by a savings plan, you might be able to transfer or migrate it to one that's supported. For more information, see the following articles. -#### Microsoft Customer Agreement (MCA) customers +- [Transfer Azure products between different billing agreements](../manage/subscription-transfer.md) +- [Product transfer support](../manage/subscription-transfer.md#product-transfer-support) +- [From MOSA to the Microsoft Customer Agreement](https://www.microsoft.com/licensing/news/from-mosa-to-microsoft-customer-agreement) -- Customers with billing profile contributor permissions and above can purchase savings plans from **Cost Management + Billing** > **Savings plan** experience. 
No specific permissions on a subscription needed.-- Subscription owners for one of the subscriptions in the billing profile can purchase savings plans from **Home** > **Savings plan**.-- To disallow savings plan purchases on a billing profile, billing profile contributors can navigate to the Policies menu under the billing profile and adjust **Azure Savings Plan** option.+## Purchase savings plan -#### Microsoft Partner Agreement partners +You can purchase a savings plan using the [Azure portal](https://portal.azure.com/) or with the [Savings Plan Order Alias - Create](/rest/api/billingbenefits/savings-plan-order-alias/create) REST API. -- Partners can use **Home** > **Savings plan** in the Azure portal to purchase savings plans for their customers.+### Buy a savings plan in the Azure portal ++1. Sign in to the Azure portal. +2. Select **All services** > **Savings plans**. +3. Select **Add** to purchase a new savings plan. +4. Complete all required fields: + - **Name** - Friendly name for the new savings plan. + - **Billing subscription** - Subscription used to pay for the savings plan. For more information about permissions and roles required to purchase a savings plan, see [Who can buy a savings plan](#who-can-buy-a-savings-plan). + - **Apply to any eligible resource** - Scope of resources that are eligible for savings plan benefits. For more information, see [Savings plan scopes](scope-savings-plan.md). + - **Term length** - One year or three years. + - **Hourly commitment** - Amount available through the savings plan each hour. In the Azure portal, up to 10 recommendations may be presented. Recommendations are scope-specific. Azure doesn't currently provide recommendations for management groups. Each recommendation includes: + - An hourly commitment. + - The potential savings percentage compared to on-demand costs for the commitment. + - The percentage of the selected scope's compute usage that would be covered by the new savings plan. 
It includes the commitment amount plus any other previously purchased savings plan or reservation. + - **Billing frequency** - **All upfront** or **Monthly**. The total cost of the savings plan will be the same regardless of the selected frequency. ### Purchase with the Savings Plan Order Alias - Create API -Buy savings plans by using RBAC permissions or with permissions on your billing scope. When using the [Savings Plan Order Alias - Create](/rest/api/billingbenefits/savings-plan-order-alias/create) REST API, the format of the `billingScopeId` in the request body is used to control the permissions that are checked. +Buy savings plans by using Azure RBAC permissions or with permissions on your billing scope. When using the [Savings Plan Order Alias - Create](/rest/api/billingbenefits/savings-plan-order-alias/create) REST API, the format of the `billingScopeId` in the request body is used to control the permissions that are checked. -To purchase using RBAC permissions: +#### To purchase using Azure RBAC permissions -- You must be an Owner of the subscription which you plan to use, specified as `billingScopeId`.+- You must be an Owner of the subscription that you plan to use, specified as `billingScopeId`. - The `billingScopeId` property in the request body must use the `/subscriptions/10000000-0000-0000-0000-000000000000` format. -To purchase using billing permissions: +#### To purchase using billing permissions Permission needed to purchase varies by the type of account that you have. - For Enterprise agreement customers, you must be an EA admin with write permissions.-- For Microsoft Customer Agreement (MCA) customers, you must be a billing profile contributor or above.-- For Microsoft Partner Agreement partners, only RBAC permissions are currently supported 
+- For Microsoft Partner Agreement partners, only Azure RBAC permissions are currently supported. The `billingScopeId` property in the request body must use the `/providers/Microsoft.Billing/billingAccounts/{accountId}/billingSubscriptions/10000000-0000-0000-0000-000000000000` format. -## Scope savings plans --You can scope a savings plan to a shared scope, management group, subscription, or resource group scopes. Setting the scope for a savings plan selects where the savings plan savings apply. When you scope the savings plan to a resource group, savings plan discounts apply only to the resource group, not the entire subscription. --### Savings plan scoping options --You have the following options to scope a savings plan, depending on your needs: --- **Shared scope** - Applies the savings plan discounts to matching resources in eligible subscriptions that are in the billing scope. If a subscription was moved to a different billing scope, the benefit no longer applies to the subscription. It does continue to apply to other subscriptions in the billing scope.- - For Enterprise Agreement customers, the billing scope is the enrollment. The savings plan shared scope would include multiple Active Directory tenants in an enrollment. - - For Microsoft Customer Agreement customers, the billing scope is the billing profile. - - For Microsoft Partner Agreement, the billing scope is a customer. -- **Single subscription scope** - Applies the savings plan discounts to the matching resources in the selected subscription.-- **Management group** - Applies the savings plan discounts to the matching resource in the list of subscriptions that are a part of both the management group and billing scope.
To scope a savings plan to a management group, you must have at least read permission on the management group.-- **Single resource group scope** - Applies the savings plan discounts to the matching resources in the selected resource group only.--When savings plan discounts are applied to your usage, Azure processes the savings plan in the following order: --1. Savings plans with a single resource group scope -2. Savings plans with a single subscription scope -3. Savings plans scoped to a management group -4. Savings plans with a shared scope (multiple subscriptions), described previously --You can always update the scope after you buy a savings plan. To do so, go to the savings plan, select **Configuration**, and rescope the savings plan. Rescoping a savings plan isn't a commercial transaction. Your savings plan term isn't changed. For more information about updating the scope, see [Update the scope after you purchase a savings plan](manage-savings-plan.md#change-the-savings-plan-scope). ---## Discounted subscription and offer types --Savings plan discounts apply to the following eligible subscriptions and offer types. +## Cancel, exchange, or refund savings plans -- Enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P)-- Microsoft Customer Agreement subscriptions.-- Microsoft Partner Agreement subscriptions.+You can't cancel, exchange, or refund savings plans. ### Buy savings plans with monthly payments If you have Azure savings plan for compute questions, contact your account team ## Next steps -- [Permissions to view and manage Azure savings plans](permission-view-manage.md)-- [Manage Azure savings plans](manage-savings-plan.md)+- To learn how to manage a savings plan, see [Manage Azure savings plans](manage-savings-plan.md). 
+- To learn more about Azure savings plans, see the following articles: + - [What are Azure savings plans?](savings-plan-compute-overview.md) + - [Manage Azure savings plans](manage-savings-plan.md) + - [How savings plan discount is applied](discount-application.md) + - [Understand savings plan costs and usage](utilization-cost-reports.md) + - [Software costs not included with Azure savings plans](software-costs-not-included.md) |
cost-management-billing | Choose Commitment Amount | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/choose-commitment-amount.md | -You should purchase savings plans based on consistent base usage. Committing to a greater spend than your historical usage could result in an underutilized commitment, which should be avoided when possible. Unused commitment doesn't carry over from one hour to next. Usage exceeding the savings plan commitment is charged using more expensive pay-as-you-go rates. +You should purchase savings plans based on consistent base usage. Committing to a greater spend than your historical usage could result in an underutilized commitment, which should be avoided when possible. Unused commitment doesn't carry over from one hour to the next. Usage exceeding the savings plan commitment is charged using more expensive pay-as-you-go rates. ## Savings plan purchase recommendations -Savings plan purchase recommendations are calculated by analyzing your hourly usage data over the last 7, 30, and 60 days. Azure calculates what your costs would have been if you had a savings plan and compares it with your actual pay-as-you-go costs incurred over the time duration. The calculation is performed for every quantity that you used during the time frame. The commitment amount that maximizes your savings is recommended. +Savings plan purchase recommendations are calculated by analyzing your hourly usage data over the last 7, 30, and 60 days. Azure simulates what your costs would have been if you had a savings plan and compares it with your actual pay-as-you-go costs incurred over the time duration. The commitment amount that maximizes your savings is recommended. To learn more about how recommendations are generated, see [How hourly commitment recommendations are generated](purchase-recommendations.md#how-hourly-commitment-recommendations-are-generated). 
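The cost comparison described here can be sketched in Python. The 30% savings plan discount rate and the hourly usage figures below are illustrative assumptions only, not actual Azure rates:

```python
# Illustrative sketch of the recommendation simulation described above.
# The 30% savings plan discount and the usage figures are assumptions for
# demonstration only; actual discounts vary by product, term, and region.
DISCOUNT = 0.30

def hourly_charge(on_demand_cost, commitment):
    """Simulated charge for one hour under a given hourly commitment.

    Usage draws down the commitment at discounted rates; usage beyond what
    the commitment covers is billed at pay-as-you-go rates.
    """
    covered_on_demand_value = commitment / (1 - DISCOUNT)
    overflow = max(0.0, on_demand_cost - covered_on_demand_value)
    return commitment + overflow

def total_cost(hourly_on_demand_costs, commitment):
    """Sum of simulated hourly charges across the usage window."""
    return sum(hourly_charge(c, commitment) for c in hourly_on_demand_costs)

# Usage that is mostly $500/hour on demand, with sporadic $700/hour spikes.
usage = [500.0] * 600 + [700.0] * 120
payg_total = sum(usage)

for candidate in (350.0, 490.0):  # candidate hourly commitments
    cost = total_cost(usage, candidate)
    print(f"commitment ${candidate:.0f}/hr: total ${cost:,.0f}, saves ${payg_total - cost:,.0f}")
```

In this sketch the spikes are sporadic, so the lower of the two candidate commitments produces the greater total savings and would be the recommended amount.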
-For example, you might use 500 VMs most of the time, but sometimes usage spikes to 700 VMs. In this example, Azure calculates your savings for both the 500 and 700 VM quantities. Since the 700 VM usage is sporadic, the recommendation calculation determines that savings are maximized for a savings plan commitment that is sufficient to cover 500 VMs and the recommendation is provided for that commitment. +For example, you might incur about $500 in hourly pay-as-you-go compute charges most of the time, but sometimes usage spikes to $700. Azure determines your total costs (hourly savings plan commitment plus pay-as-you-go charges) if you had either a $500/hour or a $700/hour savings plan. Since the $700 usage is sporadic, the recommendation calculation is likely to determine that a $500 hourly commitment provides greater total savings. As a result, the $500/hour plan would be the recommended commitment. Note the following points: -- Savings plan recommendations are calculated using the on-demand usage rates that apply to you.-- Recommendations are calculated using individual sizes, not for the instance size family.-- The recommended commitment for a scope is reduced on the same day that you purchase a commitment for the scope.- - However, an update for the commitment amount recommendation across scopes can take up to 25 days. For example, if you purchase based on shared scope recommendations, the single subscription scope recommendations can take up to 25 days to adjust down. +- Savings plan recommendations are calculated using the pay-as-you-go rates that apply to you. +- Recommendations are calculated using individual VM sizes, not for the instance size family. +- The recommended commitment for a scope is updated on the same day that you purchase a savings plan for the scope. + - However, an update for the commitment amount recommendation across scopes can take up to three days. 
For example, if you purchase based on shared scope recommendations, the single subscription scope recommendations can take up to three days to adjust down. - Currently, Azure doesn't generate recommendations for the management group scope. ## Recommendations in the Azure portal -Savings plan purchases are calculated by the recommendations engine for the selected term and scope, based on last 30-days of usage. Recommendations are shown in the savings plan purchase experience in the Azure portal. +Savings plan purchases are calculated by the recommendations engine for the selected term and scope, based on the last 30 days of usage. Recommendations are provided through [Azure Advisor](https://portal.azure.com/#view/Microsoft_Azure_Expert/AdvisorMenuBlade/~/Cost), the savings plan purchase experience in the [Azure portal](https://portal.azure.com/), and through the [savings plan benefit recommendations API](/rest/api/cost-management/benefit-recommendations/list). -## Need help? Contact us. +## Need help? Contact us -If you have Azure savings plan for compute questions, contact your account team, or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft will only provide Azure savings plan for compute expert support requests in English. +If you have Azure savings plan for compute questions, contact your account team, or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft will only provide Azure savings plan for compute expert support requests in English. ## Next steps - [Manage Azure savings plans](manage-savings-plan.md) - [View Azure savings plan cost and usage details](utilization-cost-reports.md)-- [Software costs not included in saving plan](software-costs-not-included.md)+- [Software costs not included in savings plan](software-costs-not-included.md) |
cost-management-billing | Permission View Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/permission-view-manage.md | +After you buy an Azure savings plan, with sufficient permissions, you can make the following types of changes to a savings plan: ++- Change who can access and manage a savings plan +- Update savings plan name +- Update savings plan scope +- Change auto-renewal settings ++Except for auto-renewal, none of the changes cause a new commercial transaction or change the end date of the savings plan. ++You can't make the following types of changes after purchase: ++- Hourly commitment +- Term length +- Billing frequency + ## Who can manage a savings plan by default By default, the following users can view and manage savings plans: - The person who buys a savings plan and the account administrator of the billing subscription used to buy the savings plan are added to the savings plan order. - Enterprise Agreement and Microsoft Customer Agreement billing administrators.-- Users with elevated access to manage all Azure subscriptions and management groups+- Users with elevated access to manage all Azure subscriptions and management groups. The savings plan lifecycle is independent of an Azure subscription, so the savings plan isn't a resource under the Azure subscription. Instead, it's a tenant-level resource with its own Azure RBAC permission separate from subscriptions. Savings plans don't inherit permissions from subscriptions after the purchase. -## View and manage savings plans +## Grant access to individual savings plans ++Users who have owner access on the savings plan and billing administrators can delegate access management for an individual savings plan order in the Azure portal.
++To allow other people to manage savings plans, you have two options: -If you're a billing administrator, use following steps to view, and manage all savings plans and savings plan transactions in the Azure portal. +- Delegate access management for an individual savings plan order by assigning the Owner role to a user at the resource scope of the savings plan order. If you want to give limited access, select a different role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). +- Add a user as billing administrator to an Enterprise Agreement or a Microsoft Customer Agreement: + - For an Enterprise Agreement, add users with the Enterprise Administrator role to view and manage all savings plan orders that apply to the Enterprise Agreement. Users with the Enterprise Administrator (read only) role can only view the savings plan. Department admins and account owners can't view savings plans unless they're explicitly added to them using Access control (IAM). For more information, see [Manage Azure Enterprise roles](../manage/understand-ea-roles.md). + - For a Microsoft Customer Agreement, users with the billing profile owner role or the billing profile contributor role can manage all savings plan purchases made using the billing profile. Billing profile readers and invoice managers can view all savings plans that are paid for with the billing profile. However, they can't make changes to savings plans. For more information, see [Billing profile roles and tasks](../manage/understand-mca-roles.md#billing-profile-roles-and-tasks). -1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to **Cost Management + Billing**. +## View and manage savings plans as a billing administrator ++If you're a billing administrator, use the following steps to view and manage all savings plans and savings plan transactions in the Azure portal: ++1.
Sign in to the [Azure portal](https://portal.azure.com/) and navigate to **Cost Management + Billing**. - If you're an EA admin, in the left menu, select **Billing scopes** and then in the list of billing scopes, select one. - If you're a Microsoft Customer Agreement billing profile owner, in the left menu, select **Billing profiles**. In the list of billing profiles, select one. 1. In the left menu, select **Products + services** > **Savings plans**.-1. The complete list of savings plans for your EA enrollment or billing profile is shown. + The complete list of savings plans for your EA enrollment or billing profile is shown. +1. Billing administrators can take ownership of a savings plan by selecting one or more savings plans, selecting **Grant access**, and then selecting **Grant access** in the window that appears. -## Add billing administrators +### Add billing administrators Add a user as billing administrator to an Enterprise Agreement or a Microsoft Customer Agreement in the Azure portal. -- For an Enterprise Agreement, add users with the _Enterprise Administrator_ role to view and manage all savings plan orders that apply to the Enterprise Agreement. Enterprise administrators can view and manage savings plans in **Cost Management + Billing**.- - Users with the _Enterprise Administrator (read only)_ role can only view the savings plan from **Cost Management + Billing**. - - Department admins and account owners can't view savings plans _unless_ they're explicitly added to them using Access control (IAM). For more information, see [Managing Azure Enterprise roles](../manage/understand-ea-roles.md). -- For a Microsoft Customer Agreement, users with the billing profile owner role or the billing profile contributor role can manage all savings plan purchases made using the billing profile. Billing profile readers and invoice managers can view all savings plans that are paid for with the billing profile. However, they can't make changes to savings plans.
For more information, see [Billing profile roles and tasks](../manage/understand-mca-roles.md#billing-profile-roles-and-tasks).+- For an Enterprise Agreement, add users with the Enterprise Administrator role to view and manage all savings plan orders that apply to the Enterprise Agreement. Enterprise administrators can view and manage savings plans in **Cost Management + Billing**. + - Users with the _Enterprise Administrator (read only)_ role can only view the savings plan from **Cost Management + Billing**. + - Department admins and account owners can't view savings plans unless they're explicitly added to them using Access control (IAM). For more information, see [Manage Azure Enterprise roles](../manage/understand-ea-roles.md). +- For a Microsoft Customer Agreement, users with the billing profile owner role or the billing profile contributor role can manage all savings plan purchases made using the billing profile. + - Billing profile readers and invoice managers can view all savings plans that are paid for with the billing profile. However, they can't make changes to savings plans. For more information, see [Billing profile roles and tasks](../manage/understand-mca-roles.md#billing-profile-roles-and-tasks). ## View savings plans with Azure RBAC access -If you purchased the savings plan or you're added to a savings plan, use the following steps to view and manage savings plans in the Azure portal. +If you purchased the savings plan or you're added to a savings plan, use the following steps to view and manage savings plans in the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com/).-2. Select **All Services** > **Savings plans** to list savings plans that you have access to. +2. Select **All Services** > **Savings plans** to list savings plans that you have access to.
## Manage subscriptions and management groups with elevated access -You can elevate a user's [access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md). After you have elevated access: +You can [elevate a user's access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md). -1. Navigate to **All Services** > **Savings plan** to see all savings plans that are in the tenant. -2. To make modifications to the savings plan, add yourself as an owner of the savings plan order using Access control (IAM). --## Grant access to individual savings plans +After you have elevated access: -Users who have owner access on the savings plans, and billing administrators can delegate access management for an individual savings plan order in the Azure portal. To allow other people to manage savings plans, you have two options: --- Delegate access management for an individual savings plan order by assigning the Owner role to a user at the resource scope of the savings plan order. If you want to give limited access, select a different role.- For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). -- Add a user as billing administrator to an Enterprise Agreement or a Microsoft Customer Agreement:- - For an Enterprise Agreement, add users with the _Enterprise Administrator_ role to view and manage all savings plan orders that apply to the Enterprise Agreement. Users with the _Enterprise Administrator (read only)_ role can only view the savings plan. Department admins and account owners can't view savings plans _unless_ they're explicitly added to them using Access control (IAM). For more information, see [Managing Azure Enterprise roles](../manage/understand-ea-roles.md). 
- - For a Microsoft Customer Agreement, users with the billing profile owner role or the billing profile contributor role can manage all savings plan purchases made using the billing profile. Billing profile readers and invoice managers can view all savings plans that are paid for with the billing profile. However, they can't make changes to savings plans. For more information, see [Billing profile roles and tasks](../manage/understand-mca-roles.md#billing-profile-roles-and-tasks). +1. Navigate to **All Services** > **Savings plans** to see all savings plans that are in the tenant. +2. To make modifications to the savings plan, add yourself as an owner of the savings plan order using Access control (IAM). ## Next steps -- [Manage savings plans](manage-savings-plan.md).+- [Manage Azure savings plans](manage-savings-plan.md). |
cost-management-billing | Purchase Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/purchase-recommendations.md | -Azure savings plan purchase recommendations are provided through [Azure Advisor](../../advisor/advisor-reference-cost-recommendations.md#reserved-instances), and through the savings plan purchase experience in the Azure portal. +Azure savings plan purchase recommendations are provided through [Azure Advisor](https://portal.azure.com/#view/Microsoft_Azure_Expert/AdvisorMenuBlade/~/Cost), the savings plan purchase experience in [Azure portal](https://portal.azure.com/), and through the [Savings plan benefit recommendations API](/rest/api/cost-management/benefit-recommendations/list). -## Purchase recommendations in the Azure portal +## How hourly commitment recommendations are generated -The savings plan purchase experience shows up to 10 commitment amounts. All recommendations are based on the last 30 days of usage. For each amount, we include the percentage (off your current pay-as-you-go costs) that the amount could save you. The percentage of your total compute usage that would be covered with the commitment amount is also included. +The goal of our savings plan recommendation is to help you make the most cost-effective commitment. Calculations are based on your actual on-demand costs; usage covered by existing reservations or savings plans is excluded. -By default, the recommendations are for the entire billing scope (billing account or billing profile for MCA and enrollment account for EA). You can also view subscription and resource group-level recommendations by restricting benefit application to one of those levels. +We start by looking at your hourly and total on-demand usage costs incurred from savings plan-eligible resources in the last 7, 30, and 60 days. These costs are inclusive of any negotiated discounts that you have.
We then run hundreds of simulations of what your total cost would have been if you had purchased either a one-year or three-year savings plan with an hourly commitment equivalent to your hourly costs. -Recommendations are based the selected terms, so you'll see the 1-year or 3-year recommendations at each level by toggling the term options. We don't currently support management group-level recommendations. +As we simulate each candidate recommendation, some hours will result in savings. For example, when savings plan-discounted usage plus the hourly commitment is less than that hour's historic on-demand charge. In other hours, no savings would be realized. For example, when discounted usage plus the hourly commitment is greater than or equal to on-demand charges. We sum up the simulated hourly charges for each candidate and compare it to your actual total on-demand charge. Only candidates that result in savings are eligible for consideration as recommendations. We also calculate the percentage of your compute usage costs that would be covered by the recommendation, plus any other previously purchased reservations or savings plans. -The first recommendation value is the one that is projected to result in the highest percent savings. The other values allow you to see how increasing or decreasing your commitment could affect both your savings and compute coverage. When the commitment amount is increased, your savings could be reduced because you could end up with reduced utilization. In other words, you'd pay for an hourly commitment that isn't fully used. If you lower the commitment, your savings could also be reduced. Although you'll have increased utilization, there will likely be periods when your savings plan won't fully cover your use. Usage beyond your hourly commitment will be charged at the more expensive pay-as-you-go rates. +Finally, we present a differentiated set of one-year and three-year recommendations (currently up to 10 each).
The recommendations provide the greatest savings across different compute coverage levels. The recommendations with the greatest savings for one year and three years are the highlighted options. -## How hourly commitment recommendations are generated +To account for scenarios where there were significant reductions in your usage, including recently decommissioned services, we run more simulations using only the last three days of usage. The lower of the three-day and 30-day recommendations is highlighted, even in situations where the 30-day recommendation may appear to provide greater savings. The lower recommendation is chosen to ensure that we don't encourage overcommitment based on stale data. -When Azure recommends an hourly amount for a savings plan, it tries to help you make the most cost-effective commitment. Recommendations are generated by examining your historical on-demand usage. Calculations include any discounts that you might have on your on-demand rates. Usage covered by existing reservations or savings plans is excluded. +Recommendations are refreshed several times a day. However, it may take up to five days for newly purchased savings plans and reservations to begin to be reflected in recommendations. -The first step is to determine the total on-demand charge from savings plan-eligible resources for each hour during the last 30 days. Each candidate amount is used to determine your recommended hourly commitments. +## Recommendations in Azure Advisor -Azure then runs simulations of what your total costs would have been if each candidate were your hourly commitment for both a 1-year and 3-year savings plan. Simulations are performed for usage at the billing account/profile, and at the subscription and resource group levels. +When available, a savings plan purchase recommendation can also be found in Azure Advisor.
While we may generate up to 10 recommendations, Azure Advisor only surfaces the single three-year recommendation with the greatest savings for each billing subscription. Keep the following points in mind: -With each candidate: +- If you want to see recommendations for a one-year term or for other scopes (for example, enrollment account, billing profile, or resource group), navigate to the savings plan purchase experience in the Azure portal. For more information, see [Who can buy a savings plan](buy-savings-plan.md#who-can-buy-a-savings-plan). +- Recommendations available in Advisor currently only consider your last 30 days of usage. +- Recommendations are for three-year savings plans. +- If you recently purchased a savings plan, Advisor reservation purchase and Azure savings plan recommendations can take up to five days to disappear. ++## Purchase recommendations in the Azure portal -- Some hours will result in savings.- - For example, when savings plan-discounted usage plus saving plan commitment is less than on-demand charges during that hour. -- Some hours won't result in savings.- - For example, when discounted usage plus hourly commitment is greater than or equal to on-demand charges. +When available, up to 10 savings plan commitment recommendations can be found in the savings plan purchase experience in the Azure portal. For more information, see [Who can buy a savings plan](buy-savings-plan.md#who-can-buy-a-savings-plan). Each recommendation includes the commitment amount, the estimated savings percentage (off your current pay-as-you-go costs), and the percentage of your compute usage costs that would be covered by this and any other previously purchased savings plans and reservations.
Because compute usage varies over time, the simulations are run several times a day, using a rolling 30-day usage window. +By default, the recommendations are for the entire billing scope (billing account or billing profile for MCA and billing account for EA). You can also view separate subscription and resource group-level recommendations by changing benefit application to one of those levels. -As the savings are calculated for each candidate, Azure determines the percentage of your compute usage that would be covered by the candidate savings plan plus any other previously purchased reservations or savings plan. For example, when a specific value hourly savings plan commitment results in a specific percentage of your compute usage being covered by one or more savings plans or reservations. +Recommendations are term-specific, so you'll see the one-year or three-year recommendations at each level by toggling the term options. We don't currently support management group-level recommendations. -Finally, Azure selects a set of differentiated candidates, currently up to 10 each for 1-year and 3-year commitments, that provide the greatest savings across different compute coverage levels. Azure provides them as recommendations - the amount with the greatest savings for 1-year and 3-year are the highlighted options. +The highlighted recommendation is projected to result in the greatest savings. The other values allow you to see how increasing or decreasing your commitment could affect your savings. They also show how much of your total compute usage cost would be covered by savings plans or reservation commitments. When the commitment amount is increased, your savings could be reduced because you may end up with lower utilization each hour. If you lower the commitment, your savings could also be reduced. In this case, although you'll likely have greater utilization each hour, there will be other hours where your savings plan won't fully cover your usage.
Usage beyond your hourly commitment is charged at the more expensive pay-as-you-go rates. -To account for scenarios where there was significant reductions in your usage, including recently decommissioned services, Azure runs more simulations using only the last three days of usage. The lower of the 3-day and 30-day recommendations are highlighted, even when the 30-day recommendation appears to provide greater savings. Azure tries to ensure that you don't overcommit based on stale data. +## Purchase recommendations with REST API ++For more information about retrieving savings plan commitment recommendations, see the savings plan [Benefit Recommendations API](/rest/api/cost-management/benefit-recommendations). ## Reservation trade in recommendations -When you trade one or more reservations for a savings plan, you're shifting the balance of your previous commitments to a new savings plan commitment. For example, if you have a one year reservation with a value of $500, and half way through the term you look to trade it for a savings plan, you would still have an outstanding commitment of about $250. +When you trade one or more reservations for a savings plan, you're shifting the balance of your previous commitments to a new savings plan commitment. For example, if you have a one-year reservation with a value of $500, and halfway through the term you look to trade it for a savings plan, you would still have an outstanding commitment of about $250. The minimum hourly commitment must be at least equal to the outstanding amount divided by (24 times the term length in days).
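The minimum-commitment arithmetic just described can be sketched as follows; the figures reuse the $500 reservation example from this section:

```python
# Sketch of the trade-in minimum described above: the outstanding reservation
# commitment is spread across every hour of the new savings plan term.

def min_hourly_commitment(outstanding_usd, term_days):
    """Outstanding amount divided by (24 times the term length in days)."""
    return outstanding_usd / (24 * term_days)

# A $500 one-year reservation traded in halfway through its term leaves
# about $250 outstanding.
outstanding = 500 / 2
minimum = min_hourly_commitment(outstanding, 365)  # new one-year savings plan
print(f"minimum hourly commitment: ${minimum:.3f}")  # about $0.029
```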
Using the previous example, the $250 amount would be converted into an hourly commitment of ~ $0.029 for a new one year savings plan. +As part of the trade in, the outstanding commitment is automatically included in your new savings plan. We do it by dividing the outstanding commitment by the number of hours in the term of the new savings plan (24 times the term length in days), and by making that value the minimum hourly commitment you can make as part of the trade-in. Using the previous example, the $250 amount would be converted into an hourly commitment of about $0.029 for a new one-year savings plan. If you're trading multiple reservations, the aggregate outstanding commitment is used. You may choose to increase the value, but you can't decrease it. The new savings plan will be used to cover usage of eligible resources. The minimum value doesn't necessarily represent the hourly commitment necessary to cover the resources that were covered by the exchanged reservation. If you want to cover those resources, you'll most likely have to increase the hourly commitment. To determine the appropriate hourly commitment: -1. Download your price list. -2. For each reservation order you're returning, find the product in the price sheet and determine its unit price under either a 1-year or 3-year savings plan (filter by term and price type). -3. Multiply the rate by the number of instances that are being returned. -4. Repeat for each reservation order to be returned. -5. Sum the values and enter it as the hourly commitment. --## Recommendations in Azure Advisor --When appropriate, a savings plan purchase recommendation can also be found in Azure Advisor. Keep in mind the following points: --- The savings plan recommendations are for a single-subscription scope.
If you want to see recommendations for the enrollment account or billing profile, then navigate to **Savings plans** > **Add** and then select the type that you want to see the recommendations for.-- Recommendations available in Advisor consider your past 30-day usage trend.-- The recommendation is for a three-year savings plan.-- The recommendation calculations reflect any discounted on-demand usage rates.-- If you recently purchased a savings plan, Advisor reservation purchase and Azure saving plan recommendations can take up to five days to disappear.+1. Download your price list. +2. For each reservation order you're returning, find the product in the price sheet and determine its unit price under either a one-year or three-year savings plan (filter by term and price type). +3. Multiply the unit price by the number of instances that are being returned. The result gives you the total hourly commitment required to cover the product with your savings plan. +4. Repeat for each reservation order to be returned. +5. Sum the values and enter the total as the hourly commitment. ## Next steps -- Learn about [How the Azure savings plan discount is applied](discount-application.md).-- Learn about how to [trade in reservations](reservation-trade-in.md) for a savings plan.+- Learn about [how the Azure savings plan discount is applied](discount-application.md). +- Learn about how to [trade in reservations](reservation-trade-in.md) for a savings plan. |
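The trade-in arithmetic described above (outstanding reservation value divided by 24 times the term length in days) can be sketched as follows. This is an illustrative calculation, not billing-engine code; the figures come from the article's $500 one-year reservation example:

```python
def min_hourly_commitment(outstanding_usd: float, term_days: int) -> float:
    """Minimum hourly commitment: the outstanding reservation value spread
    across every hour of the new savings plan term."""
    return outstanding_usd / (24 * term_days)

# A $500 one-year reservation traded in halfway through its term leaves
# about $250 outstanding, which becomes roughly $0.029 per hour:
print(round(min_hourly_commitment(250, 365), 3))
```

Trading in multiple reservations works the same way, except the aggregate outstanding value is used.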
cost-management-billing | Reservation Trade In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/reservation-trade-in.md | -Azure savings plans provide flexibility to help meet your evolving needs by providing discounted rates for VMs, dedicated hosts, container instances, Azure premium functions and Azure app services, across all supported regions. --If you find that your Azure VMs, Dedicated Hosts, or Azure App Services reservations, don't provide the necessary flexibility you require, you can trade these reservations for a savings plan. When you trade-in a reservation and purchase a savings plan, you can select a savings plan term of either one-year to three-year. +If you find that your [Azure Virtual Machines](https://azure.microsoft.com/pricing/details/virtual-machines/windows/), [Dedicated Hosts](https://azure.microsoft.com/pricing/details/virtual-machines/dedicated-host/), or [Azure App Service](https://azure.microsoft.com/pricing/details/app-service/windows/) reservations don't provide the flexibility you require, you can trade these reservations for a savings plan. When you trade in a reservation and purchase a savings plan, you can select a savings plan term of either one year or three years. Although you can return the above offerings for a savings plan, you can't exchange a savings plan for them or for another savings plan. -> [!NOTE] -> Exchanges will be unavailable for Azure reserved instances for compute services purchased on or after **January 1, 2024**. Azure savings plan for compute is designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy.
You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations. For a limited time you may trade-in your Azure reserved instances for compute for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration you’ll need and want additional savings. For more information, see [Self-service exchanges and refunds for Azure Reservations](../reservations/exchange-and-refund-azure-reservations.md). - The following reservations aren't eligible to be traded in for savings plans: - Azure Databricks reserved capacity The following reservations aren't eligible to be traded in for savings plans: - SUSE Linux plans > [!NOTE]-> - You must have owner access on the Reservation Order to trade in an existing reservation. You can [Add or change users who can manage a savings plan](manage-savings-plan.md#who-can-manage-a-savings-plan). -> - Microsoft is not currently charging early termination fees for reservation trade ins. We might charge the fees made in the future. We currently don't have a date for enabling the fee. +> Exchanges will be unavailable for Azure reserved instances for compute services purchased on or after **January 1, 2024**. Azure savings plan for compute is designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. 
You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations. For a limited time you may trade in your Azure reserved instances for compute for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration you’ll need and want additional savings. For more information, see [Self-service exchanges and refunds for Azure Reservations](../reservations/exchange-and-refund-azure-reservations.md). +++- You must have owner access on the Reservation Order to trade in an existing reservation. You can [Add or change users who can manage a savings plan](manage-savings-plan.md#who-can-manage-a-savings-plan). +- Microsoft isn't currently charging early termination fees for reservation trade-ins. We might charge fees in the future. We currently don't have a date for enabling the fee. ## How to trade in an existing reservation You can trade in your reservation from [Azure portal](https://portal.azure.com/# :::image type="content" source="./media/reservation-trade-in/exchange-refund-return.png" alt-text="Screenshot showing the Exchange window." lightbox="./media/reservation-trade-in/exchange-refund-return.png" ::: 1. For each reservation order selected, enter the quantity of reservation instances you want to return. The bottom of the window shows the amount that will be refunded. It also shows the value of future payments that will be canceled, if applicable. 1. Select **Compute Savings Plan** as the product that you want to purchase.-1. Enter a friendly name for the savings plan. Select the scope where the savings plan benefit will apply and select the term length. Scopes include shared, subscription, resource group, and management group. +1.
Enter the necessary information to complete the purchase. For more information, see [Buy a savings plan](buy-savings-plan.md#buy-a-savings-plan-in-the-azure-portal). ++## Determine savings plan commitment needed to replace your reservation ++During a reservation trade-in, the default hourly commitment for the savings plan is calculated using the remaining monetary value of the reservations that are being traded in. The resulting hourly commitment may not be a large enough benefit commitment to cover the virtual machines that were previously covered by the returned reservations. You can calculate the necessary savings plan hourly commitment to cover the reservations as follows: ++1. Follow the first six steps in [Estimate costs with the Azure pricing calculator](../manage/ea-pricing.md#estimate-costs-with-the-azure-pricing-calculator). +2. Search for the product that you want to return. +3. Select a savings plan term and operating system, if necessary. +4. Select **Upfront** as the payment option. You're using the annual cost only because it's easier to work with in this calculation example. +5. To determine the hourly commitment for the product, divide the upfront compute charge by: + - 8,760 for a one-year savings plan + - 26,280 for a three-year savings plan + :::image type="content" source="./media/reservation-trade-in/pricing-calculator-upfront-example.png" alt-text="Example screenshot showing the Azure pricing calculator upfront compute charge value example." lightbox="./media/reservation-trade-in/pricing-calculator-upfront-example.png" ::: +1. Multiply the product’s hourly commitment by the number of instances you're trading in. +1. Repeat steps 2-6 for all reservation products you're trading in. +1. Enter the total of the above steps as the hourly commitment, then **Add** to your cart. +1. Review and complete the transaction. -By default, the hourly commitment is derived from the remaining value of the reservations that are traded in.
Although it's the minimum hourly commitment you may make, it might not be a large enough benefit commitment to cover the VMs that were previously covered by the reservations that you're returning. +The preceding image's price is an example. -To determine the remaining commitment amount needed to cover your VMs: +The preceding process assumes 100% utilization of the savings plan. -1. Download your price sheet. For more information, see [View and download your Azure pricing](../manage/ea-pricing.md). -1. Search the price sheet for the 1-year or 3-year savings plan rate for VM products associated with the reservations that you're returning. -1. For each reservation, multiply the savings plan rate with the quantity you want to return. -1. Enter the total of the above step as the hourly commitment, then **Add** to your cart. -1. Review and complete the transaction. ++## Determine savings difference from reservations to a savings plan ++To determine the cost savings difference when switching from reservations to a savings plan, use the following steps. ++1. In the [Azure portal](https://portal.azure.com), navigate to **Reservations** to view your list of reservations. +1. Select the reservation that you want to trade in and select **Exchange**. +1. Under the Essentials section, select the **Reservation order ID**. +1. In the left menu, select **Payments**. +1. Depending on the payment schedule for the reservation, you're presented with either the monthly or full cost of the reservation. You need the monthly cost. If necessary, divide the value by either 12 or 36, depending on the reservation term. +1. Multiply the monthly cost of the reservation by the number of instances you want to return. The result is the total monthly reservation cost. +1. To determine the monthly cost of an equivalent-capable savings plan, follow the first six steps in [Estimate costs with the Azure pricing calculator](../manage/ea-pricing.md#estimate-costs-with-the-azure-pricing-calculator). +1.
Search for the compute product associated with the reservation that you want to return. +1. Select a savings plan term and operating system, if necessary. +1. Select **Monthly** as the payment option. The result is the monthly cost of a savings plan providing equivalent coverage to a resource that was previously covered by the reservation. + :::image type="content" source="./media/reservation-trade-in/pricing-calculator-monthly-example.png" alt-text="Example screenshot showing the Azure pricing calculator monthly compute charge value example." lightbox="./media/reservation-trade-in/pricing-calculator-monthly-example.png" ::: +1. Multiply the cost by the number of instances that are currently covered by the reservations to be returned. ++The preceding image's price is an example. ++The result is the total monthly savings plan cost. The total monthly savings plan cost minus the total monthly reservation cost is the extra cost incurred by moving resources covered by reservations to a savings plan. ++The preceding process assumes 100% utilization of both the reservation(s) and savings plan. ## How transactions are processed |
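The pricing-calculator steps above reduce to simple arithmetic. A minimal sketch follows; the upfront prices and instance counts are hypothetical, not from a real price sheet:

```python
HOURS_IN_TERM = {1: 8_760, 3: 26_280}  # hours in a one-year or three-year term

def hourly_commitment(upfront_usd: float, term_years: int, instances: int) -> float:
    """Upfront compute charge spread across every hour of the term,
    scaled by the number of instances being traded in."""
    return upfront_usd / HOURS_IN_TERM[term_years] * instances

# Hypothetical products being traded in: sum the per-product commitments
# and enter the total as the savings plan hourly commitment.
total = (hourly_commitment(2_628.00, 3, 4)    # 4 instances, 3-year plan
         + hourly_commitment(876.00, 1, 2))   # 2 instances, 1-year plan
print(round(total, 2))  # 0.6
```

As the article notes, this assumes 100% utilization of the savings plan.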
cost-management-billing | Savings Plan Compute Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/savings-plan-compute-overview.md | -Azure savings plans save you money when you have consistent usage of Azure compute resources. An Azure savings plan helps you save money by allowing you to commit to a fixed hourly spend on compute services for one-year or three-year terms. A savings plan can significantly reduce your resource costs by up to 65% from pay-as-you-go prices. Discount rates per meter vary by commitment term (1-year or 3-year), not commitment amount. +Azure savings plan for compute is a flexible pricing model. It provides savings up to 65% off pay-as-you-go pricing when you commit to spend a fixed hourly amount on compute services for one or three years. Committing to a savings plan allows you to get discounts, up to the hourly commitment amount, on the resources you use. Savings plan commitments are priced in USD for MCA and CSP customers, and in local currency for EA customers. Savings plan discounts vary by meter and by commitment term (1-year or 3-year), not commitment amount. Savings plans provide a billing discount and don't affect the runtime state of your resources. -Each hour with savings plan, your compute usage is discounted until you reach your commitment amount – subsequent usage afterward is priced at pay-as-you-go rates. Savings plan commitments are priced in USD for Microsoft Customer Agreement and Microsoft Partner Agreement customers, and in local currency for Enterprise Agreement customers. Usage from compute services such as VMs, dedicated hosts, container instances, Azure premium functions, and Azure app services are eligible for savings plan discounts. +You can pay for a savings plan up front or monthly. The total cost of the up-front and monthly savings plan is the same.
-You can acquire a savings plan by making a new commitment, or you can trade in one or more active reservations for a savings plan. When you acquire a savings plan with a reservation trade in, the reservation is canceled. The prorated residual value of the unused reservation benefit is converted to the equivalent hourly commitment for the savings plan. The commitment may not be sufficient for your needs, and while you may not reduce it, you can increase it to cover your needs. --Currently, you can only acquire savings plans in the Azure portal. You can pay for a savings plan up front or monthly. The total cost of up-front and monthly savings plan is the same. You don't pay any extra fees when you choose to pay monthly. After you purchase a savings plan, the discount automatically applies to matching resources. Savings plans provide a billing discount and don't affect the runtime state of your resources. +You can buy savings plans in the [Azure portal](https://portal.azure.com/) or with the [Savings Plan Order Alias API](/rest/api/billingbenefits/savings-plan-order-alias). ## Why buy a savings plan? -If you have consistent compute spend, buying a savings plan gives you the option to reduce your costs. For example, when you continuously run instances of a service without a savings plan, you're charged at pay-as-you-go rates. When you buy a savings plan, your compute usage is immediately eligible for the savings plan discount. Your discounted rates add-up to the commitment amount. Usage covered by a savings plan receives discounted rates, not the pay-as-you-go rates. If you need help to decide on which type of commitment to make, see [Decide between a savings plan and a reservation](decide-between-savings-plan-reservation.md). +If you have consistent compute spend, but your use of disparate resources makes reservations infeasible, buying a savings plan gives you the ability to reduce your costs. 
For example, if you consistently spend at least $X every hour, but your usage comes from different resources and/or different datacenter regions, you likely can't effectively cover these costs with reservations. When you buy a savings plan, your hourly usage, up to your commitment amount, is discounted. For this usage, you're no longer charged at the pay-as-you-go rates. ## How savings plan benefits are applied -Almost immediately after purchase the savings plan benefit begins to apply without other action required by you. Every hour, we apply savings plan benefit to plan-eligible meters that are within the savings plan's scope. The benefits are applied to the meter with the greatest discount percentage first. Savings plan scope selects where the savings plan benefit applies. +With Azure savings plan, hourly usage charges incurred from [savings plan-eligible resources](https://azure.microsoft.com/pricing/offers/savings-plan-compute/#how-it-works), which are within the benefit scope of the savings plan, are discounted and applied to your hourly commitment until the hourly commitment is reached. Usage charges above the commitment are billed at your on-demand rate. ++You don't need to assign a savings plan to your compute resources. The savings plan benefit is applied automatically to compute usage that matches the savings plan scope. A savings plan purchase covers only the compute part of your usage. For example, for Windows VMs, the usage meter is split into two separate meters. There's a compute meter, which is the same as the Linux meter, and a Windows IP meter. The charges that you see when you make the purchase are only for the compute costs. Charges don't include Windows software costs. For more information about software costs, see [Software costs not included with Azure savings plans](software-costs-not-included.md). -For more information about benefits are applied, see [Savings plan discount application](discount-application.md).
+For more information about how savings plan discounts are applied, see [Savings plan discount application](discount-application.md). -For more information about how savings plan scope works, see [Scope savings plans](buy-savings-plan.md#scope-savings-plans). +For more information about how savings plan scope works, see [Savings plan scopes](scope-savings-plan.md). ## Determine your savings plan commitment -On-demand usage from compute services such as VMs, dedicated hosts, container instances, Azure premium functions, and Azure app services are eligible for savings plan benefits. It's important to consider your usage when you determine your hourly commitment. Azure provides [commitment recommendations](purchase-recommendations.md) based on usage from your last 30 days. The recommendations are found in: +Pay-as-you-go usage from the following compute services is [eligible for savings plan benefits](https://azure.microsoft.com/pricing/offers/savings-plan-compute/#how-it-works). ++- [Azure Virtual Machines](https://azure.microsoft.com/pricing/details/virtual-machines/windows/) +- [Azure Dedicated Hosts](https://azure.microsoft.com/pricing/details/virtual-machines/dedicated-host/) +- [Azure Container Instances](https://azure.microsoft.com/pricing/details/container-instances/) +- [Azure Functions premium plan](https://azure.microsoft.com/pricing/details/functions/) +- [Azure App Service](https://azure.microsoft.com/pricing/details/app-service/windows/) -- [Azure Advisor](https://portal.azure.com/#view/Microsoft_Azure_Expert/AdvisorMenuBlade/~/score)-- Savings plan purchase experience in the [Azure portal](https://portal.azure.com/)-- [Benefit Recommendation APIs](/rest/api/cost-management/benefit-recommendations/list)+It's important to consider your hourly spend when you determine your hourly commitment. Azure provides commitment recommendations based on usage from your last 30 days.
The recommendations may be found in: -You can also analyze your usage data to determine a different hourly commitment. +- [Azure Advisor](https://portal.azure.com/#view/Microsoft_Azure_Expert/AdvisorMenuBlade/%7E/score) +- The savings plan purchase experience in the [Azure portal](https://portal.azure.com/) +- The [Benefit Recommendations API](/rest/api/cost-management/benefit-recommendations/list) -For more information, see [Choose an Azure saving plan commitment amount](choose-commitment-amount.md) +For more information, see [Choose an Azure saving plan commitment amount](choose-commitment-amount.md). ## Buy a savings plan -You can purchase savings plans from the Azure portal. For more information, see [Buy a savings plan](buy-savings-plan.md). +You can purchase savings plans from the [Azure portal](https://portal.azure.com/) and with APIs. For more information, see [Buy a savings plan](buy-savings-plan.md). ++## How to find products covered under a savings plan ++The complete list of savings plan eligible products is found in your price sheet, which can be downloaded from the [Azure portal](https://portal.azure.com). The EA portal price sheet doesn't include savings plan pricing. After you download the file, filter `Price Type` by `Savings Plan` to see the one-year and three-year prices. ## How is a savings plan billed? -The savings plan is charged to the payment method tied to the subscription. The savings plan cost is deducted from your Azure Prepayment (previously called monetary commitment) balance, if available. When your Azure Prepayment balance doesn't cover the cost of the savings plan, you're billed the overage. If you have a subscription from an individual plan with pay-as-you-go rates, the credit card you have on your account is billed immediately for up-front and for monthly purchases. Monthly payments that's you've made appear on your invoice. When you're billed by invoice, you see the charges on your next invoice.
+The savings plan is charged to the payment method tied to the subscription. The savings plan cost is deducted from your Azure Prepayment (previously called monetary commitment) balance, if available. When your Azure Prepayment balance doesn't cover the cost of the savings plan, you're billed the overage. If you have a subscription from an individual plan with pay-as-you-go rates, the credit card you have in your account is billed immediately for up-front and for monthly purchases. Monthly payments that you've made appear on your invoice. When you're billed by invoice, you see the charges on your next invoice. -## Who can manage a savings plan by default ++## Who can buy a savings plan? ++To determine what roles are permitted to purchase savings plans, see [Who can buy a savings plan](buy-savings-plan.md#who-can-buy-a-savings-plan). ++## Who can manage a savings plan by default? By default, the following users can view and manage savings plans: - The person who buys a savings plan, and the account administrator of the billing subscription used to buy the savings plan are added to the savings plan order.-- Enterprise Agreement and Microsoft Customer Agreement billing administrators.+- EA and MCA billing administrators. To allow other people to manage savings plans, see [Manage savings plan resources](manage-savings-plan.md). ## Get savings plan details and utilization after purchase -With sufficient permissions, you can view the savings plan and usage in the Azure portal. You can get the data using APIs, as well. --For more information about savings plan permissions in the Azure portal, see [Permissions to view and manage Azure savings plans](permission-view-manage.md) +With sufficient permissions, you can view the savings plan and usage in the Azure portal. You can get the data using APIs, as well. For more information about savings plan permissions in the Azure portal, see [Permissions to view and manage Azure savings plans](permission-view-manage.md).
## Manage savings plan after purchase -After you buy an Azure savings plan, you can update the scope to apply the savings plan to a different subscription and change who can manage the savings plan. --For more information, see [Manage Azure savings plans](manage-savings-plan.md). +After you buy an Azure savings plan, you can update the scope to apply the savings plan to a different subscription and change who can manage the savings plan. For more information, see [Manage Azure savings plans](manage-savings-plan.md). ## Cancellation and refund policy Savings plan purchases can't be canceled or refunded. ## Charges covered by savings plan -- **Virtual Machines** - A savings plan only covers the virtual machine and cloud services compute costs. It doesn't cover other software, Windows, networking, or storage charges. Virtual machines don't include BareMetal Infrastructure, A, G, and GS series. Spot VMs are also not covered by savings plans. -- **Azure Dedicated Host** - Only the compute costs are included with the Dedicated host.-- **Container Instances** -- **Azure Premium Functions**-- **Azure App Services** - The Azure savings plan for compute can only be applied to the App Service upgraded Premium v3 plan and the upgraded Isolated v2 plan.+- Virtual Machines - A savings plan only covers the virtual machine and cloud services compute costs. It doesn't cover other software, Windows, networking, or storage charges. Eligible virtual machines don't include BareMetal Infrastructure or the A, G, and GS series. Spot VMs aren't covered by savings plans. +- Azure Dedicated Hosts - Only the compute costs are included with the dedicated hosts. +- Container Instances +- Azure Premium Functions +- Azure App Services - The Azure savings plan for compute can only be applied to the App Service upgraded Premium v3 plan and the upgraded Isolated v2 plan. -Some exclusions apply to the above services. +Exclusions apply to the above services.
-For Windows virtual machines and SQL Database, the savings plan discount doesn't apply to the software costs. You can cover the licensing costs with [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/). +For Windows virtual machines and SQL Database, the savings plan discount doesn't apply to the software costs. You might be able to cover the licensing costs with [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/). ## Need help? Contact us. -If you have Azure savings plan for compute questions, contact your account team, or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft will only provide Azure savings plan for compute expert support requests in English. +If you have Azure savings plan for compute questions, contact your account team, or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft will only provide Azure savings plan for compute expert support requests in English. ## Next steps - Learn [how discounts apply to savings plans](discount-application.md). - [Trade in reservations for a savings plan](reservation-trade-in.md).-- [Buy a savings plan](buy-savings-plan.md).+- [Buy a savings plan](buy-savings-plan.md). |
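The hourly mechanics the overview describes (discounted usage draws down the commitment; excess usage is billed at pay-as-you-go rates, and unused commitment is wasted) can be illustrated with a toy model. The rates below are assumptions, not real Azure prices:

```python
def hour_summary(discounted_usage: float, commitment: float,
                 payg_markup: float) -> dict:
    """One hour of a savings plan: the commitment is always paid; eligible
    usage (priced at discounted rates) draws it down, and any excess is
    billed at the higher pay-as-you-go price, modeled here as
    discounted_usage * payg_markup."""
    covered = min(discounted_usage, commitment)
    return {
        "paid": commitment + (discounted_usage - covered) * payg_markup,
        "unused_benefit": commitment - covered,  # wasted commitment this hour
    }

# $5/hour commitment, $7 of discounted-rate usage, on-demand at twice the
# discounted rate:
print(hour_summary(7.0, 5.0, 2.0))  # {'paid': 9.0, 'unused_benefit': 0.0}
```

With only $3 of discounted-rate usage in an hour, you'd still pay the $5 commitment, wasting $2 of benefit — which is why the recommendation guidance stresses sizing the commitment from your actual usage history.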
cost-management-billing | Scope Savings Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/scope-savings-plan.md | + + Title: Savings plan scopes ++description: Learn about savings plan scopes and how they're processed. ++++++ Last updated : 02/03/2023+++# Savings plan scopes ++Setting the scope for a savings plan selects where the benefits apply. ++You have the following options to scope a savings plan, depending on your needs: ++## Scope options ++- **Single resource group scope** - Applies the savings plan benefit to the eligible resources in the selected resource group only. +- **Single subscription scope** - Applies the savings plan benefit to the eligible resources in the selected subscription. +- **Shared scope** - Applies the savings plan benefit to eligible resources within subscriptions that are in the billing context. If a subscription is moved to a different billing context, the benefit no longer applies to that subscription but continues to apply to other subscriptions in the billing context. + - For Enterprise Agreement customers, the billing context is the enrollment. + - For Microsoft Customer Agreement customers, the billing scope is the billing profile. +- **Management group** - Applies the savings plan benefit to eligible resources in the list of subscriptions that are a part of both the management group and billing scope. To buy a savings plan for a management group, you must have at least read permission on the management group and be a savings plan owner on the billing subscription. ++## Scope processing order ++While applying savings plan benefits to your usage, Azure processes savings plans in the following order: ++1. Savings plans with a single resource group scope. +2. Savings plans with a single subscription scope. +3. Savings plans scoped to a management group. +4. Savings plans with a shared scope (multiple subscriptions), described previously.
++You can always update the scope after you buy a savings plan. To do so, go to the savings plan, select **Configuration**, and rescope the savings plan. Rescoping a savings plan isn't a commercial transaction, so your savings plan term isn't changed. For more information about updating the scope, see [Update the scope](manage-savings-plan.md#change-the-savings-plan-scope) after you purchase a savings plan. ++## Next steps ++- [Change the savings plan scope](manage-savings-plan.md#change-the-savings-plan-scope). |
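The scope processing order above can be sketched as a simple priority sort. This is an illustration of the ordering rule only, not the benefit-application engine, and the plan names are hypothetical:

```python
# Narrower scopes are processed first when multiple savings plans could
# apply to the same usage.
SCOPE_PRIORITY = {"resource group": 1, "subscription": 2,
                  "management group": 3, "shared": 4}

plans = [
    {"name": "org-wide", "scope": "shared"},
    {"name": "web-rg", "scope": "resource group"},
    {"name": "prod-sub", "scope": "subscription"},
]
ordered = sorted(plans, key=lambda p: SCOPE_PRIORITY[p["scope"]])
print([p["name"] for p in ordered])  # ['web-rg', 'prod-sub', 'org-wide']
```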
cost-management-billing | Utilization Cost Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/utilization-cost-reports.md | Enhanced data for savings plan costs and usage is available for Enterprise Agree ## Savings plan charges in Azure cost data -Fields in the Azure cost data that are relevant to savings plan scenarios are listed below. +In Cost Management, cost details provide savings plan cost in two separate data sets: _Actual Cost_ and _Amortized Cost_. How these two datasets differ: ++**Actual Cost** - Provides data to reconcile with your monthly bill. The data has savings plan purchase costs and savings plan application details. With the data, you can know which subscription, resource group, or resource received the savings plan discount on a particular day. The EffectivePrice for the usage that receives the savings plan discount is zero. ++**Amortized Cost** - The dataset is like the Actual Cost dataset except that the EffectivePrice for usage that gets the savings plan discount is the prorated cost of the savings plan (instead of zero). It helps you know the monetary value of savings plan consumption by a subscription, resource group, or resource, and can help you charge back for the savings plan utilization internally. The dataset also has unused hours in the savings plan that have been charged for the hourly commitment amount. The dataset doesn't have savings plan purchase records. ++The following fields in the Azure cost data are relevant to savings plan scenarios: - `BenefitId` and `BenefitName` - They are their own fields in the data and correspond to the Savings Plan ID and Savings Plan name associated with your purchase.-- `PricingModel` - This field will be "SavingsPlan" for purchase and usage cost records that are relevant to a Savings Plan.+- `PricingModel` - The field will be `SavingsPlan` for purchase and usage cost records that are relevant to a Savings Plan.
- `ProductOrderId` - The savings plan order ID, added as its own field. - `ProductOrderName` - The product name of the purchased savings plan.-- `Term` – The time period associated with your savings plan purchase.+- `Term` – The period associated with your savings plan purchase. -In Azure Cost Management, cost details provide savings plan cost in two separate data sets: _Actual Cost_ and _Amortized Cost_. How these two datasets differ: -**Actual Cost** - Provides data to reconcile with your monthly bill. The data has savings plan purchase costs and savings plan application details. With the data, you can know which subscription or resource group or resource received the savings plan discount in a particular day. The `EffectivePrice` for the usage that receives the savings plan discount is zero. --**Amortized Cost** - This dataset is similar to the Actual Cost dataset except that - the `EffectivePrice` for the usage that gets savings plan discount is the prorated cost of the savings plan (instead of being zero). It helps you know the monetary value of savings plan consumption by a subscription, resource group or a resource, and it can help you charge back for the savings plan utilization internally. The dataset also has unused hours in the savings plan that have been charged for the hourly commitment amount. The dataset doesn't have savings plan purchase records. --Here's a comparison of the two data sets: +A comparison of the two data sets: | **Data** | **Actual Cost data set** | **Amortized Cost data set** | | | | |-| Savings plan purchases | Available in the view.<br><br>To get the data, filter on ChargeType = `Purchase`.<br><br>Refer to `BenefitID` or `BenefitName` to know which savings plan the charge is for. | Not applicable to the view.<br><br>Purchase costs aren't provided in amortized data. | +| Savings plan purchases | To get the data, filter on `ChargeType` = `Purchase`.<br><br> Refer to `BenefitID` or `BenefitName` to know which savings plan the charge is for.
| Purchase costs aren't provided in amortized data. | | `EffectivePrice` | The value is zero for usage that gets savings plan discount. | The value is per-hour prorated cost of the savings plan for usage that has the savings plan discount. |-| Unused Savings Plan (provides the number of hours the savings plan wasn't used in a day and the monetary value of the waste) | Not applicable in the view. | Available in the view.<br><br>To get the data, filter on ChargeType = `UnusedSavingsPlan`.<br><br>Refer to `BenefitID` or `BenefitName` to know which savings plan was underutilized. Indicates how much of the savings plan was wasted for the day. | -| UnitPrice (price of the resource from your price sheet) | Available | Available | +| Unused benefit (provides the number of hours the savings plan wasn't used in a day and the monetary value of the waste) | Not applicable in the view. | To get the data, filter on `ChargeType` = `UnusedBenefit`.<br><br> Refer to `BenefitID` or `BenefitName` to know which savings plan was underutilized. It's how much of the savings plan was wasted for the day. | +| `UnitPrice` (price of the resource from your price sheet) | Available | Available | ## Get Azure consumption and savings plan cost data using API -You can get the data using the API or download it from Azure portal. Call the [Cost Details API](/rest/api/cost-management/generate-cost-details-report/create-operation) to get the new data. For details about terminology, see [Usage terms](../understand/understand-usage.md). To learn more about how to call the Cost Details API, see [Get cost data on demand](../automate/get-small-usage-datasets-on-demand.md). +You can get the data using the API or download it from Azure portal. Call the [Cost Details API](/rest/api/cost-management/generate-cost-details-report/create-operation) to get the new data. For details about terminology, see [usage terms](../understand/understand-usage.md). 
For more information about how to call the Cost Details API, see [Get cost data on demand](../automate/get-small-usage-datasets-on-demand.md). -Information in the following table about metric and filter can help solve for common savings plan problems. +Information in the following table about metrics and filters can help solve common savings plan problems. -| **Type of API data** | **API call action** | -||| -| **All Charges (usage and purchases)** | Request for an ActualCost report. | -| **Usage that got savings plan discount** | Request for an ActualCost report.<br><br> Once you've ingested all of the usage, look for records with ChargeType = 'Usage' and PricingModel = 'SavingsPlan'. | -| **Usage that didn't get savings plan discount** | Request for an ActualCost report.<br><br> Once you've ingested all of the usage, filter for usage records with PricingModel = 'OnDemand'. | -| **Amortized charges (usage and purchases)** | Request for an AmortizedCost report. | -| **Unused savings plan report** | Request for an AmortizedCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'UnusedSavingsPlan' and PricingModel ='SavingsPlan'. | -| **Savings plan purchases** | Request for an ActualCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'Purchase' and PricingModel = 'SavingsPlan'. | -| **Refunds** | Request for an ActualCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'Refund'. | +| Type of API data | API call action | +| | | +| All Charges (usage and purchases) | Request for an ActualCost report. | +| Usage that got savings plan discount | Request for an ActualCost report.<br><br> Once you've ingested all the usage, look for records with `ChargeType` = `Usage` and `PricingModel` = `SavingsPlan`. | +| Usage that didn't get savings plan discount | Request for an ActualCost report. 
<br><br> Once you've ingested all the usage, filter for usage records with `PricingModel` = `OnDemand`. | +| Amortized charges (usage and purchases) | Request for an AmortizedCost report. | +| Unused savings plan report | Request for an AmortizedCost report.<br><br> Once you've ingested all of the usage, filter for usage records with `ChargeType` = `UnusedBenefit` and `PricingModel` = `SavingsPlan`. | +| Savings plan purchases | Request for an ActualCost report. <br><br> Once you've ingested all the usage, filter for usage records with `ChargeType` = `Purchase` and `PricingModel` = `SavingsPlan`. | +| Refunds | Request for an ActualCost report. <br><br> Once you've ingested all the usage, filter for usage records with `ChargeType` = `Refund`. | ## Download the cost CSV file with new data -To download your saving plan cost and usage file, using the information in the following sections. +To download your savings plan cost and usage file, use the information in the following sections. -### Download for EA customers +### EA customers -If you're an EA admin, you can download the CSV file that contains new cost data from the Azure portal. This data isn't available from the EA portal (ea.azure.com), you must download the cost file from Azure portal (portal.azure.com) to see the new data. +If you're an EA admin, you can download the CSV file that contains new cost data from the Azure portal. This data isn't available from the [EA portal](https://ea.azure.com/); you must download the cost file from Azure portal (portal.azure.com) to see the new data. In the Azure portal, navigate to [Cost Management + Billing](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/BillingAccounts). -1. Select the enrollment. -1. Select **Usage + charges**. -1. Select **Download**. -1. In **Download Usage + Charges**, under **Usage Details Version 2**, select **All Charges (usage and purchases)** and then select download. 
Repeat for **Amortized charges (usage and purchases)**. +1. Select the billing account. +2. In the left menu, select **Usage + charges**. +3. Select **Download**. + :::image type="content" source="./media/utilization-cost-reports/download-usage-file.png" alt-text="Screenshot showing the download usage file option." lightbox="./media/utilization-cost-reports/download-usage-file.png" ::: +4. In **Download Usage + Charges**, under **Usage Details Version 2**, select **All Charges (usage and purchases)** and then select **Download**. + - Repeat for **Amortized charges (usage and purchases)**. -### Download for MCA customers +### MCA customers -If you're an Owner, Contributor, or Reader on your Billing Account, you can download the CSV file that contains new usage data from the Azure portal. In the Azure portal, navigate to Cost Management + Billing. +If you're an Owner, Contributor, or Reader on your Billing Account, you can download the CSV file that contains new usage data from the [Azure portal](https://ms.portal.azure.com/#home). In the portal, navigate to **Cost management + Billing**. 1. Select the billing account. 2. Select **Invoices.** 3. Download the Actual Cost CSV file based on your scenario.- 1. To download the usage for the current month, select **Download pending usage**. - 2. To download the usage for a previous invoice, select the ellipsis symbol (**...**) and select **Prepare Azure usage file**. -4. If you want to download the Amortized Cost CSV file, you'll need to use Exports or our Cost Details API. - 1. To use Exports, see [Export data](../costs/tutorial-export-acm-data.md). - 2. To use the Cost Details API, see [Get small cost datasets on demand](../automate/get-small-usage-datasets-on-demand.md). + - To download the usage for the current month, select **Download pending usage.** - To download the usage for a prior invoice, select the ellipsis symbol ( **...** ) and select **Prepare Azure usage file.** +1. 
If you want to download the Amortized Cost CSV file, you'll need to use [Exports](../costs/tutorial-export-acm-data.md) or the [Cost Details API](../automate/get-small-usage-datasets-on-demand.md). ## Common cost and usage tasks -The following sections are common tasks that most people use to view their savings plan cost and usage data. +The following sections are common tasks that are used to view savings plan cost and usage data. ### Get savings plan purchase costs -Savings plan purchase costs are available in Actual Cost data. Filter for ChargeType = `Purchase`. Refer to `ProductOrderID` to determine which savings plan order the purchase is for. +Savings plan purchase costs are available in Actual Cost data. Filter for `ChargeType` = `Purchase`. Refer to `ProductOrderID` to determine which savings plan order the purchase is for. ### Get underutilized savings plan quantity and costs -Get amortized cost data and filter for `ChargeType` = `UnusedSavingsPlan` and `PricingModel` = `SavingsPlan`. You get the daily unused savings plan quantity and the cost. You can filter the data for a savings plan or savings plan order using `BenefitId` and `ProductOrderId` fields, respectively. If a savings plan was 100% utilized, the record has a quantity of 0. +Get Amortized Cost data and filter for `ChargeType` = `UnusedBenefit` and `PricingModel` = `SavingsPlan`. You get the daily unused savings plan quantity and the cost. You can filter the data for a savings plan or savings plan order using `BenefitId` and `ProductOrderId` fields, respectively. If a savings plan was 100% utilized, the record has a quantity of 0. -### Amortized savings plan costs +### Amortize savings plan costs -Get amortized cost data and filter for a savings plan order using `ProductOrderID` to get daily amortized costs for a savings plan. +Get Amortized Cost data and filter for a savings plan order using `ProductOrderID` to get daily amortized costs for a savings plan. 
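As an illustrative sketch (not part of the source article), the `UnusedBenefit` filter described in the sections above can be applied to a downloaded amortized-cost CSV with Python's standard library. The sample rows, the `sp-001` benefit ID, and the reduced column set are invented for the example:

```python
import csv
import io

# Hypothetical amortized-cost rows using the column names from the article
# (`ChargeType`, `PricingModel`, `BenefitId`, `Quantity`, `Cost`); values invented.
SAMPLE_CSV = """\
ChargeType,PricingModel,BenefitId,Quantity,Cost
Usage,SavingsPlan,sp-001,24,2.40
UnusedBenefit,SavingsPlan,sp-001,6,0.60
Usage,OnDemand,,10,5.00
"""

def unused_benefit_rows(reader):
    """Keep only the rows charged for unused savings plan hours."""
    return [
        row for row in reader
        if row["ChargeType"] == "UnusedBenefit" and row["PricingModel"] == "SavingsPlan"
    ]

rows = unused_benefit_rows(csv.DictReader(io.StringIO(SAMPLE_CSV)))
for row in rows:
    print(row["BenefitId"], row["Quantity"], row["Cost"])  # → sp-001 6 0.60
```

The same filter works unchanged on a file handle opened over the downloaded CSV.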
### Chargeback for a savings plan -You can charge-back savings plan use to other organizations by subscription, resource groups, or tags. Amortized cost data provides the monetary value of a savings plan's utilization at the following data types: +You can charge back savings plan use to other organizations by subscription, resource groups, or tags. Amortized cost data provides the monetary value of a savings plan's utilization for the following data types: - Resources (such as a VM) - Resource group - Tags - Subscription -### Determine savings plan savings +### Determine savings resulting from savings plan Get the Amortized costs data and filter the data for a savings plan instance. Then: -1. Get estimated pay-as-you-go costs. Multiply the _UnitPrice_ value with _Quantity_ values to get estimated pay-as-you-go costs, if the savings plan discount didn't apply to the usage. -2. Get the savings plan costs. Sum the _Cost_ values to get the monetary value of what you paid for the savings plan. It includes the used and unused costs of the savings plan. +1. Get estimated pay-as-you-go costs. Multiply the `UnitPrice` value by the `Quantity` values to get estimated pay-as-you-go costs, as if the savings plan discount hadn't applied to the usage. +2. Get the savings plan costs. Sum the `Cost` values to get the monetary value of what you paid for the savings plan. It includes the used and unused costs of the savings plan. 3. Subtract savings plan costs from estimated pay-as-you-go costs to get the estimated savings. -Keep in mind that if you have an underutilized savings plan, the _UnusedSavingsPlan_ entry for _ChargeType_ becomes a factor to consider. When you have a fully utilized savings plan, you receive the maximum savings possible. Any _UnusedSavingsPlan_ quantity reduces savings. +Keep in mind that if you have an underutilized savings plan, the `UnusedBenefit` entry for `ChargeType` becomes a factor to consider. 
When you have a fully utilized savings plan, you receive the maximum savings possible. Any `UnusedBenefit` quantity reduces savings. -## Purchase and amortization costs in cost analysis +### Savings plan purchases and amortization in cost analysis -Savings plan costs are available in [cost analysis](https://aka.ms/costanalysis). By default, cost analysis shows **Actual cost**, which is how costs are shown on your bill. To view savings plan purchases broken down and associated with the resources that used the benefit, switch to **Amortized cost**. Here's an example. +Savings plan costs are available in [cost analysis](https://aka.ms/costanalysis). By default, cost analysis shows **Actual cost**, which is how costs will be shown on your bill. To view savings plan purchases broken down and associated with the resources that used the benefit, switch to **Amortized cost**. Here's an example. :::image type="content" source="./media/utilization-cost-reports/portal-cost-analysis-amortized-view.png" alt-text="Example showing where to select amortized cost in cost analysis." lightbox="./media/utilization-cost-reports/portal-cost-analysis-amortized-view.png" ::: -Group by _Charge Type_ to see a breakdown of usage, purchases, and refunds; or by _Pricing Model_ for a breakdown of savings plan and on-demand costs. +Group by **Charge Type** to see a breakdown of usage, purchases, and refunds; or by **Pricing Model** for a breakdown of savings plan and on-demand costs. 
You can also group by **Benefit** and use the **BenefitId** and **BenefitName** associated with your savings plan to identify the costs related to specific savings plan purchases. The only savings plan costs you'll see when looking at actual cost are purchases. Costs will be allocated to the individual resources that used the benefit when looking at amortized cost. You'll also see a new **UnusedBenefit** charge type when looking at amortized cost. ## Next steps -- Learn more about how to [Charge back Azure saving plan costs](charge-back-costs.md).+- Learn more about how to [Charge back Azure saving plan costs](charge-back-costs.md). |
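The three-step savings estimate described in the "Determine savings" section above can be sketched in Python. This is a simplified illustration, assuming amortized-cost rows with the `ChargeType`, `PricingModel`, `UnitPrice`, `Quantity`, and `Cost` fields the article names; the sample values are invented:

```python
def estimate_savings(amortized_rows):
    """Estimate savings for a savings plan from amortized cost rows.

    Mirrors the steps in the article: pay-as-you-go = UnitPrice * Quantity for
    usage that received the discount; plan cost = sum of Cost (used + unused);
    savings = pay-as-you-go - plan cost.
    """
    sp_rows = [r for r in amortized_rows if r["PricingModel"] == "SavingsPlan"]
    payg = sum(r["UnitPrice"] * r["Quantity"] for r in sp_rows if r["ChargeType"] == "Usage")
    plan_cost = sum(r["Cost"] for r in sp_rows)  # includes UnusedBenefit rows
    return payg - plan_cost

# Invented example rows.
rows = [
    {"ChargeType": "Usage", "PricingModel": "SavingsPlan", "UnitPrice": 0.50, "Quantity": 100, "Cost": 30.0},
    {"ChargeType": "UnusedBenefit", "PricingModel": "SavingsPlan", "UnitPrice": 0.0, "Quantity": 8, "Cost": 5.0},
]
print(estimate_savings(rows))  # 0.5*100 - (30 + 5) = 15.0
```

Note how the `UnusedBenefit` row adds to `plan_cost` without adding any pay-as-you-go value, which is exactly how underutilization reduces the estimated savings.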
databox-online | Azure Stack Edge Gpu 2301 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2301-release-notes.md | The 2301 release has the following new features and enhancements: - Starting March 2023, Azure Stack Edge devices will be required to be on the 2301 release or later to create a Kubernetes cluster. In preparation for this requirement, it is highly recommended that you update to the latest version as soon as possible. - Beginning this release, you can deploy Azure Kubernetes service (AKS) on an Azure Stack Edge cluster. This feature is supported only for SAP and PMEC customers. For more information, see [Deploy AKS on Azure Stack Edge](azure-stack-edge-deploy-aks-on-azure-stack-edge.md). +## Issues fixed in this release ++| No. | Feature | Issue | +| | | | +|**1.**|Virtual network |In the earlier versions, virtual switches would get deleted when the virtual network was deleted, causing VM provisioning to time out. This issue was fixed and the virtual switch reference is now checked when the virtual network is deleted. | +|**2.**|Virtual network |In the previous versions, when the VM network interfaces were deleted, the IP address remained in use even after the associated network interface was removed. In this release, the IP address reference is removed after the VM network interface is deleted. | +|**3.**|VM |In earlier releases, change notifications weren't cleaned from the datastore. This resulted in the network resource provider (NRP) becoming unresponsive after the datastore was full. This release fixes this issue by adding a notification manager in the NRP to clean up change notifications. | +|**4.**|VM |In this release, reliability improvements have been made for the deployment of VM extensions. | ++## Known issues in this release ++| No. 
| Feature | Issue | Workaround/comments | +| | | | | +|**1.**|AKS on Azure Stack Edge |When you update your AKS on Azure Stack Edge deployment from a previous preview version to the 2301 release, there is an additional nodepool rollout. |The update may take longer. | +|**2.**|Azure portal |When the Arc deployment fails in this release, you will see a generic *NO PARAM* error code, as not all errors are propagated in the portal. |There is no workaround for this behavior in this release. | +|**3.**|AKS on Azure Stack Edge |In this release, you can't modify the virtual networks once the AKS cluster is deployed on your Azure Stack Edge cluster.| To modify the virtual network, you will need to delete the AKS cluster, modify the virtual networks, and then recreate the AKS cluster on your Azure Stack Edge. | +|**4.**|AKS on Azure Stack Edge |In this release, attaching the PVC takes a long time. As a result, some pods that use persistent volumes (PVs) come up slowly after the host reboots. |A workaround is to restart the nodepool VM by connecting via the Windows PowerShell interface of the device. | + ## Known issues from previous releases The following table provides a summary of known issues carried over from the previous releases. |
defender-for-iot | Respond Ot Alert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/respond-ot-alert.md | For example, a device that attempted to connect to a malicious IP, together with 1. On the **Alerts** page, select an alert to view details on the right. -1. Locate the device links in the **Entities** area, either in the details pane on the right or in the alert details page. Select an entity link to open the related device details page, for both a source and destination device. <!--no links for some alerts?--> +1. Locate the device links in the **Entities** area, either in the details pane on the right or in the alert details page. Select an entity link to open the related device details page, for both a source and destination device. 1. On the device details page, select the **Alerts** tab to view all alerts for that device. For example: |
event-hubs | Azure Event Hubs Kafka Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/azure-event-hubs-kafka-overview.md | Title: Azure Event Hubs for Apache Kafka ecosystems -description: Learn how Apache Kafka application developers can use Azure Event Hubs instead of building and using their own Kafka clusters. + Title: Use Azure Event Hubs to stream data from Apache Kafka apps +description: Learn how to use Azure Event Hubs to stream data from Apache Kafka applications without setting up a Kafka cluster on your own. Previously updated : 02/01/2023-keywords: "Kafka, Azure, topics, message-broker" Last updated : 02/03/2023 -# Azure Event Hubs for Apache Kafka ecosystems -Azure Event Hubs provides an Apache Kafka endpoint on an event hub, which enables users to connect to the event hub using the Kafka protocol. You can often use an event hub's Kafka endpoint from your applications without any code changes. You modify only the configuration, that is, update the connection string in configurations to point to the Kafka endpoint exposed by your event hub instead of pointing to a Kafka cluster. Then, you can start streaming events from your applications that use the Kafka protocol into event hubs, which are equivalent to Kafka topics. +# Use Azure Event Hubs to stream data from Apache Kafka applications ++This article explains how you can use Azure Event Hubs to stream data from [Apache Kafka](https://kafka.apache.org) applications without setting up a Kafka cluster on your own. ++## Overview ++Azure Event Hubs provides an Apache Kafka endpoint on an event hub, which enables users to connect to the event hub using the Kafka protocol. You can often use an event hub's Kafka endpoint from your applications without any code changes. You modify only the configuration, that is, update the connection string in configurations to point to the Kafka endpoint exposed by your event hub instead of pointing to a Kafka cluster. 
Then, you can start streaming events from your applications that use the Kafka protocol into event hubs, which are equivalent to Kafka topics. > [!NOTE] > Event Hubs for Kafka Ecosystems supports [Apache Kafka version 1.0](https://kafka.apache.org/10/documentation.html) and later. -This article provides detailed information on using Azure Event Hubs to stream data from [Apache Kafka](https://kafka.apache.org) applications without setting up a Kafka cluster on your own. --### Kafka and Event Hubs conceptual mapping +## Apache Kafka and Azure Event Hubs conceptual mapping Conceptually, Kafka and Event Hubs are very similar. They're both partitioned logs built for streaming data, whereby the client controls which part of the retained log it wants to read. The following table maps concepts between Kafka and Event Hubs. | Consumer Group | Consumer Group | | Offset | Offset| -### Key differences between Apache Kafka and Event Hubs +## Key differences between Apache Kafka and Azure Event Hubs While [Apache Kafka](https://kafka.apache.org/) is software you typically need to install and operate, Event Hubs is a fully managed, cloud-native service. There are no servers, disks, or networks to manage and monitor and no brokers to consider or configure, ever. You create a namespace, which is an endpoint with a fully qualified domain name, and then you create Event Hubs (topics) within that namespace. For more information about Event Hubs and namespaces, see [Event Hubs features]( Scale in Event Hubs is controlled by how many [throughput units (TUs)](event-hubs-scalability.md#throughput-units) or [processing units](event-hubs-scalability.md#processing-units) you purchase. If you enable the [Auto-Inflate](event-hubs-auto-inflate.md) feature for a standard tier namespace, Event Hubs automatically scales up TUs when you reach the throughput limit. 
This feature also works with the Apache Kafka protocol support. For a premium tier namespace, you can increase the number of processing units assigned to the namespace. -### Is Apache Kafka the right solution for your workload? +## Is Apache Kafka the right solution for your workload? Coming from building applications using Apache Kafka, it's also useful to understand that Azure Event Hubs is part of a fleet of services, which also includes [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md), and [Azure Event Grid](../event-grid/overview.md). sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule require ## Samples For a **tutorial** with step-by-step instructions to create an event hub and access it using SAS or OAuth, see [Quickstart: Data streaming with Event Hubs using the Kafka protocol](event-hubs-quickstart-kafka-enabled-event-hubs.md). -## Other Event Hubs features +## Other Azure Event Hubs features The Event Hubs for Apache Kafka feature is one of three protocols concurrently available on Azure Event Hubs, complementing HTTP and AMQP. You can write with any of these protocols and read with any other, so that your current Apache Kafka producers can continue publishing via Apache Kafka, but your reader can benefit from the native integration with Event Hubs' AMQP interface, such as Azure Stream Analytics or Azure Functions. Conversely, you can readily integrate Azure Event Hubs into AMQP routing networks as a target endpoint, and yet read data through Apache Kafka integrations. Azure Event Hubs for Apache Kafka supports both idempotent producers and idempot One of the core tenets of Azure Event Hubs is the concept of **at-least once** delivery. This approach ensures that events will always be delivered. It also means that events can be received more than once, even repeatedly, by consumers such as a function. 
For this reason, it's important that the consumer supports the [idempotent consumer](https://microservices.io/patterns/communication-style/idempotent-consumer.html) pattern. -## Apache Kafka feature differences +## Feature differences with Apache Kafka The goal of Event Hubs for Apache Kafka is to provide access to Azure Event Hubs capabilities to applications that are locked into the Apache Kafka API and would otherwise have to be backed by an Apache Kafka cluster. |
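As a sketch of the configuration-only change the overview above describes, the following builds Kafka client settings for an Event Hubs namespace. It uses `kafka-python`-style option names (an assumption; other clients spell these options differently), and the namespace and connection string are placeholders:

```python
# Hypothetical namespace and connection string; replace with your own values.
NAMESPACE = "mynamespace"
CONNECTION_STRING = "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=..."

def event_hubs_kafka_config(namespace, connection_string):
    """Kafka client settings for an Event Hubs Kafka endpoint.

    Only the endpoint and SASL settings change versus a plain Kafka cluster;
    the producing/consuming application code stays the same.
    """
    return {
        "bootstrap_servers": f"{namespace}.servicebus.windows.net:9093",
        "security_protocol": "SASL_SSL",
        "sasl_mechanism": "PLAIN",
        "sasl_plain_username": "$ConnectionString",  # literal string, not a variable
        "sasl_plain_password": connection_string,
    }

config = event_hubs_kafka_config(NAMESPACE, CONNECTION_STRING)
# e.g. producer = kafka.KafkaProducer(**config)  # requires the kafka-python package
```

The `$ConnectionString` username with SASL PLAIN matches the `sasl.jaas.config` snippet shown elsewhere in this article's diff.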
event-hubs | Dynamically Add Partitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/dynamically-add-partitions.md | You can specify the number of partitions at the time of creating an event hub. I > Dynamic additions of partitions is available only in **premium** and **dedicated** tiers of Event Hubs. > [!NOTE]-> For Apache Kafka clients, an **event hub** maps to a **Kafka topic**. For more mappings between Azure Event Hubs and Apache Kafka, see [Kafka and Event Hubs conceptual mapping](azure-event-hubs-kafka-overview.md#kafka-and-event-hubs-conceptual-mapping) +> For Apache Kafka clients, an **event hub** maps to a **Kafka topic**. For more mappings between Azure Event Hubs and Apache Kafka, see [Kafka and Event Hubs conceptual mapping](azure-event-hubs-kafka-overview.md#apache-kafka-and-azure-event-hubs-conceptual-mapping) ## Update the partition count |
event-hubs | Event Hubs Kafka Connect Debezium | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-debezium.md | Follow the latest instructions in the [Debezium documentation](https://debezium. Minimal reconfiguration is necessary when redirecting Kafka Connect throughput from Kafka to Event Hubs. The following `connect-distributed.properties` sample illustrates how to configure Connect to authenticate and communicate with the Kafka endpoint on Event Hubs: > [!IMPORTANT]-> - Debezium will auto-create a topic per table and a bunch of metadata topics. Kafka **topic** corresponds to an Event Hubs instance (event hub). For Apache Kafka to Azure Event Hubs mappings, see [Kafka and Event Hubs conceptual mapping](azure-event-hubs-kafka-overview.md#kafka-and-event-hubs-conceptual-mapping). +> - Debezium will auto-create a topic per table and a bunch of metadata topics. Kafka **topic** corresponds to an Event Hubs instance (event hub). For Apache Kafka to Azure Event Hubs mappings, see [Kafka and Event Hubs conceptual mapping](azure-event-hubs-kafka-overview.md#apache-kafka-and-azure-event-hubs-conceptual-mapping). > - There are different **limits** on number of event hubs in an Event Hubs namespace depending on the tier (Basic, Standard, Premium, or Dedicated). For these limits, See [Quotas](compare-tiers.md#quotas). ```properties |
event-hubs | Event Hubs Kafka Connect Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-tutorial.md | Last updated 11/03/2022 > [!WARNING] > Use of the Apache Kafka Connect framework and its connectors is **not eligible for product support through Microsoft Azure**.-> -> Apache Kafka Connect assumes for its dynamic configuration to be held in compacted topics with otherwise unlimited retention. Azure Event Hubs [does not implement compaction as a broker feature](event-hubs-federation-overview.md#log-projections) and always imposes a time-based retention limit on retained events, rooting from the principle that Azure Event Hubs is a real-time event streaming engine and not a long-term data or configuration store. -> -> While the Apache Kafka project might be comfortable with mixing these roles, Azure believes that such information is best managed in a proper database or configuration store. -> -> Many Apache Kafka Connect scenarios will be functional, but these conceptual differences between Apache Kafka's and Azure Event Hubs' retention models may cause certain configurations to not work as expected. +> The Kafka Connect feature relies on the Kafka log compaction feature to fully function. The [Log Compaction](./log-compaction.md) feature is currently available as a preview. Hence, Kafka Connect support is also in preview. + This tutorial walks you through integrating Kafka Connect with an event hub and deploying basic FileStreamSource and FileStreamSink connectors. This feature is currently in preview. While these connectors are not meant for production use, they demonstrate an end-to-end Kafka Connect scenario where Azure Event Hubs acts as a Kafka broker. |
firewall | Forced Tunneling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/forced-tunneling.md | +Some customers prefer not to expose a public IP address directly to the Internet. In this case, you can deploy Azure Firewall in Forced Tunneling mode without a public IP address. This configuration creates a management interface with a public IP address that is used by Azure Firewall for its operations. The public IP address is used exclusively by the Azure platform and can't be used for any other purpose. The tenant data path network can be configured without a public IP address, and Internet traffic can be forced tunneled to another Firewall or completely blocked. + Azure Firewall provides automatic SNAT for all outbound traffic to public IP addresses. Azure Firewall doesn't SNAT when the destination IP address is a private IP address range per IANA RFC 1918. This logic works perfectly when you egress directly to the Internet. However, with forced tunneling enabled, Internet-bound traffic is SNATed to one of the firewall private IP addresses in the AzureFirewallSubnet. This hides the source address from your on-premises firewall. You can configure Azure Firewall to not SNAT regardless of the destination IP address by adding *0.0.0.0/0* as your private IP address range. With this configuration, Azure Firewall can never egress directly to the Internet. For more information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md). > [!IMPORTANT] |
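The SNAT rule described above — SNAT for public destinations, no SNAT for configured private ranges (RFC 1918 by default), and `0.0.0.0/0` to disable SNAT entirely — can be modeled with Python's `ipaddress` module. This only illustrates the decision logic; it is not firewall configuration:

```python
import ipaddress

def snat_applied(destination_ip, private_ranges):
    """Return True if traffic to this destination would be SNATed, i.e. the
    destination falls outside every configured private IP range."""
    dest = ipaddress.ip_address(destination_ip)
    return not any(dest in ipaddress.ip_network(r) for r in private_ranges)

# Default private ranges per IANA RFC 1918.
RFC_1918 = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]

print(snat_applied("10.1.2.3", RFC_1918))         # False: private destination, no SNAT
print(snat_applied("52.168.0.1", RFC_1918))       # True: Internet-bound, SNATed
print(snat_applied("52.168.0.1", ["0.0.0.0/0"]))  # False: 0.0.0.0/0 disables SNAT
```

The last call shows why adding `0.0.0.0/0` means the firewall can never egress directly to the Internet: every destination is treated as private.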
firewall | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md | Azure Firewall Standard has the following known issues: |Moving a firewall to a different resource group or subscription isn't supported|Moving a firewall to a different resource group or subscription isn't supported.|Supporting this functionality is on our road map. To move a firewall to a different resource group or subscription, you must delete the current instance and recreate it in the new resource group or subscription.| |Threat intelligence alerts may get masked|Network rules with destination 80/443 for outbound filtering masks threat intelligence alerts when configured to alert only mode.|Create outbound filtering for 80/443 using application rules. Or, change the threat intelligence mode to **Alert and Deny**.| |Azure Firewall DNAT doesn't work for private IP destinations|Azure Firewall DNAT support is limited to Internet egress/ingress. DNAT doesn't currently work for private IP destinations. For example, spoke to spoke.|This is a current limitation.|-|Can't remove first public IP configuration|Each Azure Firewall public IP address is assigned to an *IP configuration*. The first IP configuration is assigned during the firewall deployment, and typically also contains a reference to the firewall subnet (unless configured explicitly differently via a template deployment). You can't delete this IP configuration because it would de-allocate the firewall. You can still change or remove the public IP address associated with this IP configuration if the firewall has at least one other public IP address available to use.|This is by design.| |Availability zones can only be configured during deployment.|Availability zones can only be configured during deployment. 
You can't configure Availability Zones after a firewall has been deployed.|This is by design.| |SNAT on inbound connections|In addition to DNAT, connections via the firewall public IP address (inbound) are SNATed to one of the firewall private IPs. This is required today (also for Active/Active NVAs) to ensure symmetric routing.|To preserve the original source for HTTP/S, consider using [XFF](https://en.wikipedia.org/wiki/X-Forwarded-For) headers. For example, use a service such as [Azure Front Door](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend) or [Azure Application Gateway](../application-gateway/rewrite-http-headers-url.md) in front of the firewall. You can also add WAF as part of Azure Front Door and chain to the firewall. |SQL FQDN filtering support only in proxy mode (port 1433)|For Azure SQL Database, Azure Synapse Analytics, and Azure SQL Managed Instance:<br><br>SQL FQDN filtering is supported in proxy-mode only (port 1433).<br><br>For Azure SQL IaaS:<br><br>If you're using non-standard ports, you can specify those ports in the application rules.|For SQL in redirect mode (the default if connecting from within Azure), you can instead filter access using the SQL service tag as part of Azure Firewall network rules. |
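As a small illustration of the XFF workaround mentioned above, a backend behind a proxy such as Azure Front Door could recover the original client address from the `X-Forwarded-For` header. This sketch and its sample addresses are hypothetical, and it only handles the simple IPv4 `ip` or `ip:port` entry forms:

```python
def original_client_ip(xff_header):
    """Recover the original client IP from an X-Forwarded-For header.

    The leftmost entry is the original client; later entries are proxies
    (here, the firewall's SNATed private IP would appear further right).
    """
    first = xff_header.split(",")[0].strip()
    # Some proxies append a port (e.g. "203.0.113.7:49152"); drop it for IPv4.
    if first.count(":") == 1:
        first = first.split(":")[0]
    return first

print(original_client_ip("203.0.113.7:49152, 10.0.0.4"))  # 203.0.113.7
```

Because the header is client-controllable, a real backend should only trust XFF values appended by its own trusted proxy tier.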
healthcare-apis | Github Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/github-projects.md | Title: Related GitHub Projects for Azure Health Data Services description: List all Open Source (GitHub) repositories -+ Last updated 06/06/2022-+ # GitHub Projects We have many open-source projects on GitHub that provide you the source code and ## Azure Health Data Services samples -* This repo contains [samples for Azure Health Data Services](https://github.com/microsoft/healthcare-apis-samples), including Fast Healthcare Interoperability Resources (FHIR®), DICOM, MedTech service, and data-related services. +* This repo contains [samples for Azure Health Data Services](https://github.com/Azure-Samples/azure-health-data-services-samples), including Fast Healthcare Interoperability Resources (FHIR®), DICOM, MedTech service, and data-related services. ++## Azure Health Data Services Toolkit ++* The [Azure Health Data Services Toolkit](https://github.com/microsoft/azure-health-data-services-toolkit) helps you extend the functionality of Azure Health Data Services by providing a consistent toolset to build custom operations to modify the core service behavior. ## FHIR Server In this article, you learned about some of Azure Health Data Services open-sourc >[!div class="nextstepaction"] >[Overview of Azure Health Data Services](healthcare-apis-overview.md) -(FHIR®) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. +(FHIR®) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
machine-learning | How To Create Image Labeling Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md | In many cases, it's fine to just upload local files. But [Azure Storage Explorer To create a dataset from data that you've already stored in Azure Blob storage: -1. Select **Create a dataset** > **From datastore**. -1. Assign a **Name** to your dataset. +1. Select **+ Create**. +1. Assign a **Name** to your dataset, and optionally a description. 1. **Dataset type** is set to file; only file dataset types are supported for images.-1. Select the datastore. +1. Select **Next**. +1. Select **From Azure storage**, then **Next**. +1. Select the datastore, then select **Next**. 1. If your data is in a subfolder within your blob storage, choose **Browse** to select the path. * Append "/**" to the path to include all the files in subfolders of the selected path. * Append "**/*.*" to include all the data in the current container and its subfolders.-1. (Optional) Provide a description for your dataset. -1. Select **Next**. -1. Confirm the details. Select **Back** to modify the settings or **Create** to create the dataset. +1. Select **Create**. +1. Now select the data asset you just created. ### Create a dataset from uploaded data To directly upload your data: -1. Select **Create a dataset** > **From local files**. -1. Assign a **Name** to your dataset. +1. Select **+ Create**. +1. Assign a **Name** to your dataset, and optionally a description. 1. **Dataset type** is set to file; only file dataset types are supported for images.-1. (Optional) Provide a description for your dataset. 1. Select **Next**.-1. (Optional) Select or create a datastore. Or keep the default to upload to the default blob store ("workspaceblobstore") of your Machine Learning workspace. -1. Select **Browse** to select the local files or folder(s) to upload. +1. Select **From local files**, then select **Next**. +1. 
(Optional) Select a datastore. Or keep the default to upload to the default blob store ("workspaceblobstore") of your Machine Learning workspace. +1. Select **Next**. +1. Select **Upload > Upload files** or **Upload > Upload folder** to select the local files or folder(s) to upload. +1. In the browser window, find your files or folder, then select **Open**. +1. Continue using **Upload** until you have specified all your files/folders. +1. Check the box **Overwrite if already exists** if you wish. Verify the list of files/folders. 1. Select **Next**. 1. Confirm the details. Select **Back** to modify the settings or **Create** to create the dataset.+1. Now select the data asset you just created. -## <a name="incremental-refresh"> </a> Configure incremental refresh ++## Configure incremental refresh [!INCLUDE [refresh](../../includes/machine-learning-data-labeling-refresh.md)] The **Dashboard** tab shows the progress of the labeling task. :::image type="content" source="./media/how-to-create-labeling-projects/labeling-dashboard.png" alt-text="Data labeling dashboard"::: -The progress chart shows how many items have been labeled, skipped, in need of review, or not yet done. Hover over the chart to see the number of items in each section. +The progress chart shows how many items have been labeled, skipped, in need of review, or not yet done. Hover over the chart to see the number of items in each section. ++Below the charts is a distribution of the labels for those tasks that are complete. Remember that in some project types, an item can have multiple labels, in which case the total number of labels can be greater than the total number of items. ++You also see a distribution of labelers and how many items they've labeled. -The middle section shows the queue of tasks yet to be assigned. When ML assisted labeling is off, this section shows the number of manual tasks to be assigned. 
When ML assisted labeling is on, this section will also show: +Finally, in the middle section, there is a table showing a queue of tasks yet to be assigned. When ML assisted labeling is off, this section shows the number of manual tasks to be assigned. When ML assisted labeling is on, this section will also show: * Tasks containing clustered items in the queue * Tasks containing prelabeled items in the queue -Additionally, when ML assisted labeling is enabled, a small progress bar shows when the next training run will occur. The Experiments sections give links for each of the machine learning runs. +Additionally, when ML assisted labeling is enabled, scroll down to see the ML assisted labeling status. The Jobs section gives links for each of the machine learning runs. * Training - trains a model to predict the labels * Validation - determines whether this model's prediction will be used for pre-labeling the items * Inference - prediction run for new items * Featurization - clusters items (only for image classification projects) -On the right side is a distribution of the labels for those tasks that are complete. Remember that in some project types, an item can have multiple labels, in which case the total number of labels can be greater than the total number items. ### Data tab |
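The "/**" and "**/*.*" patterns in the dataset steps above behave like recursive globs. A small sketch using bash's `globstar` option illustrates the difference between matching only the top level of a path and matching its subfolders as well (the folder names are illustrative):

```shell
# Build a tiny folder tree that stands in for a labeling container.
root=$(mktemp -d)
mkdir -p "$root/images/batch1"
touch "$root/images/top.jpg" "$root/images/batch1/nested.jpg"

shopt -s globstar nullglob
top_only=("$root"/images/*.jpg)      # top level only: matches 1 file
recursive=("$root"/images/**/*.jpg)  # like appending "/**": matches 2 files

echo "top level: ${#top_only[@]}, recursive: ${#recursive[@]}"
```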
machine-learning | How To Create Text Labeling Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-text-labeling-projects.md | In many cases, it's fine to just upload local files. But [Azure Storage Explorer To create a dataset from data that you've already stored in Azure Blob storage: -1. Select **Create a dataset** > **From datastore**. -1. Assign a **Name** to your dataset. +1. Select **+ Create**. +1. Assign a **Name** to your dataset, and optionally a description. 1. Choose the **Dataset type**: * Select **Tabular** if you're using a .csv or .tsv file, where each row contains a response. * Select **File** if you're using separate .txt files for each response.-1. (Optional) Provide a description for your dataset. 1. Select **Next**.-1. Select the datastore. +1. Select **From Azure storage**, then **Next**. +1. Select the datastore, then select **Next**. 1. If your data is in a subfolder within your blob storage, choose **Browse** to select the path. * Append "/**" to the path to include all the files in subfolders of the selected path. * Append "**/*.*" to include all the data in the current container and its subfolders.-1. Select **Next**. -1. Confirm the details. Select **Back** to modify the settings or **Create** to create the dataset. +1. Select **Create**. +1. Now select the data asset you just created. ### Create a dataset from uploaded data To directly upload your data: -1. Select **Create a dataset** > **From local files**. -1. Assign a **Name** to your dataset. -1. Choose the **Dataset type**. - * Select **Tabular** if you're using a .csv or .tsv file, where each row is a response. +1. Select **+ Create**. +1. Assign a **Name** to your dataset, and optionally a description. +1. Choose the **Dataset type**: + * Select **Tabular** if you're using a .csv or .tsv file, where each row contains a response. * Select **File** if you're using separate .txt files for each response.-1. 
(Optional) Provide a description of your dataset. -1. Select **Next** -1. (Optional) Select or create a datastore. Or keep the default to upload to the default blob store ("workspaceblobstore") of your Machine Learning workspace. -1. Select **Upload** to select the local file(s) or folder(s) to upload. 1. Select **Next**.-1. If uploading .csv or .tsv files: - * Confirm the settings and preview, select **Next**. - * Include all columns of text you'd like the labeler to see when classifying that row. If you'll be using ML assisted labeling, adding numeric columns may degrade the ML assist model. - * Select **Next**. -1. Confirm the details. Select **Back** to modify the settings or **Create** to create the dataset. +1. Select **From local files**, then select **Next**. +1. (Optional) Select a datastore. Or keep the default to upload to the default blob store ("workspaceblobstore") of your Machine Learning workspace. +1. Select **Next**. +1. Select **Upload > Upload files** or **Upload > Upload folder** to select the local files or folder(s) to upload. +1. In the browser window, find your files or folder, then select **Open**. +1. Continue using **Upload** until you have specified all your files/folders. +1. Check the box **Overwrite if already exists** if you wish. Verify the list of files/folders. +1. Select **Next**. +1. Confirm the details. Select **Back** to modify the settings or **Create** to create the dataset. +1. Now select the data asset you just created. + ## Configure incremental refresh The **Dashboard** tab shows the progress of the labeling task. :::image type="content" source="./media/how-to-create-text-labeling-projects/text-labeling-dashboard.png" alt-text="Text data labeling dashboard"::: +The progress chart shows how many items have been labeled, skipped, in need of review, or not yet done. Hover over the chart to see the number of items in each section. 
-The progress chart shows how many items have been labeled, skipped, in need of review, or not yet done. Hover over the chart to see the number of items in each section. +Below the charts is a distribution of the labels for those tasks that are complete. Remember that in some project types, an item can have multiple labels, in which case the total number of labels can be greater than the total number of items. -The middle section shows the queue of tasks yet to be assigned. If ML-assisted labeling is on, you'll also see the number of pre-labeled items. +You also see a distribution of labelers and how many items they've labeled. +Finally, in the middle section, there is a table showing a queue of tasks yet to be assigned. When ML assisted labeling is off, this section shows the number of manual tasks to be assigned. -On the right side is a distribution of the labels for those tasks that are complete. Remember that in some project types, an item can have multiple labels, in which case the total number of labels can be greater than the total number items. +Additionally, when ML assisted labeling is enabled, scroll down to see the ML assisted labeling status. The Jobs section gives links for each of the machine learning runs. -### Data tab +### Data On the **Data** tab, you can see your dataset and review labeled data. Scroll through the labeled data to see the labels. If you see incorrectly labeled data, select it and choose **Reject**, which will remove the labels and put the data back into the unlabeled queue. |
postgresql | Concepts Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking.md | All incoming connections that use earlier versions of the TLS protocol, such as [Certificate authentication](https://www.postgresql.org/docs/current/auth-cert.html) is performed using **SSL client certificates** for authentication. In this scenario, PostgreSQL server compares the CN (common name) attribute of the client certificate presented against the requested database user. **Azure Database for PostgreSQL - Flexible Server does not support SSL certificate based authentication at this time.** +To determine your current SSL connection status, you can load the [sslinfo extension](concepts-extensions.md) and then call the `ssl_is_used()` function to determine if SSL is being used. The function returns `t` if the connection is using SSL; otherwise, it returns `f`. ## Next steps |
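The `sslinfo` check described above amounts to two statements from any SQL client; a minimal sketch (on Flexible Server you may first need to allow-list the extension in the `azure.extensions` server parameter):

```sql
-- Load the sslinfo extension once per database, then check this session.
CREATE EXTENSION IF NOT EXISTS sslinfo;

-- Returns t when the current connection uses SSL, f otherwise.
SELECT ssl_is_used();
```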
security | Paas Deployments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-deployments.md | Web applications are increasingly targets of malicious attacks that exploit comm [Web application firewall (WAF)](../../web-application-firewall/afds/afds-overview.md) is a feature of Application Gateway that provides centralized protection of your web applications from common exploits and vulnerabilities. WAF is based on rules from the [Open Web Application Security Project (OWASP) core rule sets](https://owasp.org/www-project-modsecurity-core-rule-set/) 3.0 or 2.2.9. +## DDoS protection ++[Azure DDoS Protection Standard](../../ddos-protection/ddos-protection-overview.md), combined with application-design best practices, provides enhanced mitigation features to defend against DDoS attacks. You should enable [Azure DDoS Protection Standard](../../ddos-protection/ddos-protection-overview.md) on any perimeter virtual network. + ## Monitor the performance of your applications Monitoring is the act of collecting and analyzing data to determine the performance, health, and availability of your application. An effective monitoring strategy helps you understand the detailed operation of the components of your application. It helps you increase your uptime by notifying you of critical issues so that you can resolve them before they become problems. It also helps you detect anomalies that might be security related. |
spring-apps | How To Enterprise Application Configuration Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-application-configuration-service.md | az spring app deploy \ --config-file-pattern <config-file-pattern> ``` +## Enable/disable Application Configuration Service after service creation ++You can enable and disable Application Configuration Service after service creation using the Azure portal or Azure CLI. Before disabling Application Configuration Service, you're required to unbind all of your apps from it. ++### [Azure portal](#tab/Portal) ++Use the following steps to enable or disable Application Configuration Service using the Azure portal: ++1. Navigate to your service resource, and then select **Application Configuration Service**. +1. Select **Manage**. +1. Select or unselect **Enable Application Configuration Service**, and then select **Save**. +1. You can now view the state of Application Configuration Service on the **Application Configuration Service** page. ++### [Azure CLI](#tab/Azure-CLI) ++Use the following Azure CLI commands to enable or disable Application Configuration Service: ++```azurecli +az spring application-configuration-service create \ + --resource-group <resource-group-name> \ + --service <Azure-Spring-Apps-service-instance-name> +``` ++```azurecli +az spring application-configuration-service delete \ + --resource-group <resource-group-name> \ + --service <Azure-Spring-Apps-service-instance-name> +``` +++ ## Next steps - [Azure Spring Apps](index.yml) |
spring-apps | How To Enterprise Service Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-service-registry.md | This command produces the following output. In this way, you can obtain detailed information from the program as needed. +## Enable/disable Service Registry after service creation ++You can enable and disable Service Registry after service creation using the Azure portal or Azure CLI. Before disabling Service Registry, you're required to unbind all of your apps from it. ++### [Azure portal](#tab/Portal) ++Use the following steps to enable or disable Service Registry using the Azure portal: ++1. Navigate to your service resource, and then select **Service Registry**. +1. Select **Manage**. +1. Select or unselect **Enable Service Registry**, and then select **Save**. +1. You can now view the state of Service Registry on the **Service Registry** page. ++### [Azure CLI](#tab/Azure-CLI) ++Use the following Azure CLI commands to enable or disable Service Registry: ++```azurecli +az spring service-registry create \ + --resource-group <resource-group-name> \ + --service <Azure-Spring-Apps-service-instance-name> +``` ++```azurecli +az spring service-registry delete \ + --resource-group <resource-group-name> \ + --service <Azure-Spring-Apps-service-instance-name> +``` +++ ## Next steps - [Create Roles and Permissions](./how-to-permissions.md) |
spring-apps | How To Use Enterprise Api Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-enterprise-api-portal.md | Use the following steps to try out APIs: :::image type="content" source="media/how-to-use-enterprise-api-portal/api-portal-tryout.png" alt-text="Screenshot of API portal."::: +## Enable/disable API portal after service creation ++You can enable and disable API portal after service creation using the Azure portal or Azure CLI. Before disabling API portal, you're required to unassign its endpoint. ++### [Azure portal](#tab/Portal) ++Use the following steps to enable or disable API portal using the Azure portal: ++1. Navigate to your service resource, and then select **API portal**. +1. Select **Manage**. +1. Select or unselect **Enable API portal**, and then select **Save**. +1. You can now view the state of API portal on the **API portal** page. ++### [Azure CLI](#tab/Azure-CLI) ++Use the following Azure CLI commands to enable or disable API portal: ++```azurecli +az spring api-portal create \ + --resource-group <resource-group-name> \ + --service <Azure-Spring-Apps-service-instance-name> +``` ++```azurecli +az spring api-portal delete \ + --resource-group <resource-group-name> \ + --service <Azure-Spring-Apps-service-instance-name> +``` +++ ## Next steps - [Azure Spring Apps](index.yml) |
spring-apps | How To Use Enterprise Spring Cloud Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-enterprise-spring-cloud-gateway.md | az spring gateway route-config create \ --routes-file <json-file-with-routes> ``` +## Enable/disable Spring Cloud Gateway after service creation ++You can enable and disable Spring Cloud Gateway after service creation using the Azure portal or Azure CLI. Before disabling Spring Cloud Gateway, you're required to unassign its endpoint and remove all route configs. ++### [Azure portal](#tab/Portal) ++Use the following steps to enable or disable Spring Cloud Gateway using the Azure portal: ++1. Navigate to your service resource, and then select **Spring Cloud Gateway**. +1. Select **Manage**. +1. Select or unselect **Enable Spring Cloud Gateway**, and then select **Save**. +1. You can now view the state of Spring Cloud Gateway on the **Spring Cloud Gateway** page. ++### [Azure CLI](#tab/Azure-CLI) ++Use the following Azure CLI commands to enable or disable Spring Cloud Gateway: ++```azurecli +az spring spring-cloud-gateway create \ + --resource-group <resource-group-name> \ + --service <Azure-Spring-Apps-service-instance-name> +``` ++```azurecli +az spring spring-cloud-gateway delete \ + --resource-group <resource-group-name> \ + --service <Azure-Spring-Apps-service-instance-name> +``` +++ ## Next steps - [Azure Spring Apps](index.yml) |
storage | Storage Files Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md | Title: Frequently asked questions (FAQ) for Azure Files description: Get answers to Azure Files frequently asked questions. You can mount Azure file shares concurrently on cloud or on-premises Windows, Linux, or macOS deployments. Previously updated : 09/08/2022 Last updated : 02/03/2023 - Azure Files on-premises AD DS authentication only integrates with the forest of the domain service that the storage account is registered to. To support authentication from another forest, your environment must have a forest trust configured correctly. The way Azure Files register in AD DS almost the same as a regular file server, where it creates an identity (computer or service logon account) in AD DS for authentication. The only difference is that the registered SPN of the storage account ends with "file.core.windows.net" which does not match with the domain suffix. Consult your domain administrator to see if any update to your suffix routing policy is required to enable multiple forest authentication due to the different domain suffix. We provide an example below to configure suffix routing policy. + Azure Files on-premises AD DS authentication only integrates with the forest of the domain service that the storage account is registered to. To support authentication from another forest, your environment must have a forest trust configured correctly. ++ > [!Note] + > In a multi-forest setup, don't use Windows Explorer to configure Windows ACLs/NTFS permissions at the root, directory, or file level. [Use icacls](storage-files-identity-ad-ds-configure-permissions.md#configure-windows-acls-with-icacls) instead. ++ The way Azure Files registers in AD DS is almost the same as for a regular file server, where it creates an identity (computer or service logon account) in AD DS for authentication. 
The only difference is that the registered SPN of the storage account ends with "file.core.windows.net", which doesn't match the domain suffix. Consult your domain administrator to see if any update to your suffix routing policy is required to enable multiple forest authentication due to the different domain suffix. We provide an example below of configuring a suffix routing policy. Example: When users in forest A domain want to reach a file share with the storage account registered against a domain in forest B, this won't automatically work because the service principal of the storage account doesn't have a suffix matching the suffix of any domain in forest A. We can address this issue by manually configuring a suffix routing rule from forest A to forest B for a custom suffix of "file.core.windows.net". |
storage | Storage Files Identity Ad Ds Configure Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md | net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /use You can configure the Windows ACLs using either [icacls](#configure-windows-acls-with-icacls) or [Windows File Explorer](#configure-windows-acls-with-windows-file-explorer). You can also use the [Set-ACL](/powershell/module/microsoft.powershell.security/set-acl) PowerShell command. +> [!IMPORTANT] +> If your environment has multiple AD DS forests, don't use Windows Explorer to configure ACLs. Use icacls instead. + If you have directories or files in on-premises file servers with Windows ACLs configured against the AD DS identities, you can copy them over to Azure Files while persisting the ACLs, using traditional file copy tools like Robocopy or [Azure AzCopy v 10.4+](https://github.com/Azure/azure-storage-azcopy/releases). If your directories and files are tiered to Azure Files through Azure File Sync, your ACLs are carried over and persisted in their native format. ### Configure Windows ACLs with icacls |
storage | Storage Troubleshoot Windows File Connection Problems | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md | The cmdlet performs these checks below in sequence and provides guidance for fai ### Symptom -You may experience either symptoms described below when trying to configure Windows ACLs with File Explorer on a mounted file share: +You may experience one of the symptoms described below when trying to configure Windows ACLs with File Explorer on a mounted file share: - After you click on **Edit permission** under the Security tab, the Permission wizard doesn't load. - When you try to select a new user or group, the domain location doesn't display the right AD DS domain. +- You're using multiple AD forests and get the following error message: "The Active Directory domain controllers required to find the selected objects in the following domains are not available. Ensure the Active Directory domain controllers are available, and try to select the objects again." ### Solution |
virtual-machines | Update Linux Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/update-linux-agent.md | To update your [Azure Linux Agent](https://github.com/Azure/WALinuxAgent) on a L You should always check for a package in the Linux distro repository first. It is possible the package available may not be the latest version; however, enabling auto update will ensure the Linux Agent always gets the latest update. Should you have issues installing from the package managers, you should seek support from the distro vendor. > [!NOTE]-> For more information see [Endorsed Linux distributions on Azure](../linux/endorsed-distros.md) +> For more information, see [Endorsed Linux distributions on Azure](../linux/endorsed-distros.md) Verify the [Minimum version support for virtual machine agents in Azure](https://support.microsoft.com/help/4049215/extensions-and-virtual-machine-agent-minimum-version-support) before proceeding. Install the latest package version sudo apt-get install walinuxagent ``` -Ensure auto update is enabled. First, check to see if it is enabled: +Ensure auto update is enabled. First, check to see if it's enabled: ```bash cat /etc/waagent.conf ``` -Find 'AutoUpdate.Enabled'. 
If you see this output, it's enabled: ```bash # AutoUpdate.Enabled=y To enable run: sudo sed -i 's/# AutoUpdate.Enabled=n/AutoUpdate.Enabled=y/g' /etc/waagent.conf ``` -Restart waagengt service for 14.04 +Restart waagent service for 22.04 ```bash-initctl restart walinuxagent +sudo service walinuxagent restart ``` -Restart waagent service for 16.04 / 17.04 --```bash -systemctl restart walinuxagent.service -``` ## Red Hat / CentOS -### RHEL/CentOS 6 +### RHEL/CentOS 7.9 Check your current package version sudo yum install WALinuxAgent Ensure auto update is enabled -First, check to see if it is enabled: +First, check to see if it's enabled: ```bash cat /etc/waagent.conf ``` -Find 'AutoUpdate.Enabled'. If you see this output, it is enabled: +Find 'AutoUpdate.Enabled'. If you see this output, it's enabled: ```bash # AutoUpdate.Enabled=y Install the latest package version sudo yum install WALinuxAgent ``` -Ensure auto update is enabled. First, check to see if it is enabled: +Ensure auto update is enabled. First, check to see if it's enabled: ```bash cat /etc/waagent.conf ``` -Find 'AutoUpdate.Enabled'. If you see this output, it is enabled: +Find 'AutoUpdate.Enabled'. If you see this output, it's enabled: ```bash # AutoUpdate.Enabled=y sudo systemctl restart waagent.service ## SUSE SLES -### SUSE SLES 11 SP4 +### SUSE SLES 15 SP4 Check your current package version sudo zypper install python-azure-agent Ensure auto update is enabled -First, check to see if it is enabled: +First, check to see if it's enabled: ```bash cat /etc/waagent.conf ``` -Find 'AutoUpdate.Enabled'. If you see this output, it is enabled: +Find 'AutoUpdate.Enabled'. 
If you see this output, it's enabled: ```bash # AutoUpdate.Enabled=y Restart the waagent service sudo /etc/init.d/waagent restart ``` -### SUSE SLES 12 SP2 +### SUSE SLES 12 SP5 Check your current package version sudo zypper install python-azure-agent Ensure auto update is enabled -First, check to see if it is enabled: +First, check to see if it's enabled: ```bash cat /etc/waagent.conf ``` -Find 'AutoUpdate.Enabled'. If you see this output, it is enabled: +Find 'AutoUpdate.Enabled'. If you see this output, it's enabled: ```bash # AutoUpdate.Enabled=y sudo systemctl restart waagent.service ## Debian -### Debian 7 "Jesse"/ Debian 7 "Stretch" Check your current package version Install the latest package version sudo apt-get install waagent ``` -Enable agent auto update -This version of Debian does not have a version >= 2.0.16, therefore AutoUpdate is not available for it. The output from the above command will show you if the package is up-to-date. +Enable agent auto update. +This version of Debian doesn't have a version >= 2.0.16; therefore, AutoUpdate isn't available for it. The output from the above command will show you if the package is up-to-date. -### Debian 8 "Jessie" / Debian 9 "Stretch" +### Debian 9 "Stretch" / Debian 10 "Buster" Check your current package version sudo apt-get install waagent ``` Ensure auto update is enabled-First, check to see if it's enabled: ```bash cat /etc/waagent.conf ``` -Find 'AutoUpdate.Enabled'. 
If you see this output, it's enabled: ```bash AutoUpdate.Enabled=y Then, to install the latest version of the Azure Linux Agent, type: sudo yum install WALinuxAgent ``` -If you don't find the add-on repository you can simply add these lines at the end of your .repo file according to your Oracle Linux release: +If you don't find the add-on repository, you can simply add these lines at the end of your `.repo` file according to your Oracle Linux release: For Oracle Linux 6 virtual machines: You may need to install the package `setuptools` first--see [setuptools](https:/ sudo python setup.py install ``` -Ensure auto update is enabled. First, check to see if it is enabled: +Ensure auto update is enabled. First, check to see if it's enabled: ```bash cat /etc/waagent.conf ``` -Find 'AutoUpdate.Enabled'. If you see this output, it is enabled: +Find 'AutoUpdate.Enabled'. If you see this output, it's enabled: ```bash # AutoUpdate.Enabled=y waagent -version For CoreOS, the above command may not work. -You will see that the Azure Linux Agent version has been updated to the new version. +You'll see that the Azure Linux Agent version has been updated to the new version. For more information regarding the Azure Linux Agent, see [Azure Linux Agent README](https://github.com/Azure/WALinuxAgent). |
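The enable step shown for each distro above is the same one-line `sed` substitution. The sketch below applies it to a scratch copy of a minimal config, so you can verify the toggle before editing the real `/etc/waagent.conf` (the sample file contents are illustrative):

```shell
# Work on a scratch copy of a minimal waagent.conf with auto update off.
conf=$(mktemp)
printf '# AutoUpdate.Enabled=n\n' > "$conf"

# The same substitution the article applies to /etc/waagent.conf:
sed -i 's/# AutoUpdate.Enabled=n/AutoUpdate.Enabled=y/g' "$conf"

grep 'AutoUpdate.Enabled' "$conf"   # prints AutoUpdate.Enabled=y
```

After editing the real file, restart the agent service (walinuxagent or waagent, depending on the distro) so the setting takes effect.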
virtual-machines | Azure Hybrid Benefit Byos Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-byos-linux.md | Azure Hybrid Benefit converts BYOS billing to pay-as-you-go, so that you pay onl Azure Hybrid Benefit BYOS to PAYG capability is available to all RHEL and SLES virtual machines that come from a custom image. It's also available to all RHEL and SLES BYOS virtual machines that come from an Azure Marketplace image. -Azure dedicated host instances and SQL hybrid benefits are not eligible for Azure Hybrid Benefit if you already use Azure Hybrid Benefit with Linux virtual machines. Azure Hybrid Benefit BYOS to PAYG capability does not support virtual machine scale sets and reserved instances (RIs). +Azure dedicated host instances and SQL hybrid benefits are not eligible for Azure Hybrid Benefit if you already use Azure Hybrid Benefit with Linux virtual machines. Azure Hybrid Benefit BYOS to PAYG capability does not support virtual machine scale sets. ## Get started |
virtual-wan | Howto Openvpn Clients | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-openvpn-clients.md | |
virtual-wan | Hub Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/hub-settings.md | For pricing information, see [Azure Virtual WAN pricing](https://azure.microsoft | 49 | 49 | 49000 | | 50 | 50 | 50000 | -## <a name="routing-preference"></a>Virtual hub routing preference (Preview) +## <a name="routing-preference"></a>Virtual hub routing preference A Virtual WAN virtual hub connects to virtual networks (VNets) and on-premises sites using connectivity gateways, such as site-to-site (S2S) VPN gateway, ExpressRoute (ER) gateway, point-to-site (P2S) gateway, and SD-WAN Network Virtual Appliance (NVA). The virtual hub router provides central route management and enables advanced routing scenarios using route propagation, route association, and custom route tables. When a virtual hub router makes routing decisions, it considers the configuration of such capabilities. |
vpn-gateway | Point To Site Vpn Client Cert Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-linux.md | description: Learn how to configure a Linux VPN client solution for VPN Gateway Previously updated : 12/01/2022 Last updated : 02/03/2023 |
vpn-gateway | Point To Site Vpn Client Cert Mac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-mac.md | description: Learn how to configure the VPN client for VPN Gateway P2S configura Previously updated : 12/01/2022 Last updated : 02/03/2023 |
vpn-gateway | Point To Site Vpn Client Cert Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md | description: Learn how to configure VPN clients for P2S configurations that use Previously updated : 01/25/2023 Last updated : 02/03/2023 |