Updates from: 09/14/2023 01:15:48
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
active-directory Inbound Provisioning Api Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-issues.md
This document covers commonly encountered errors and issues with inbound provisi
**Probable causes** 1. Your API-driven provisioning app is paused.
-1. The provisioning service is yet to update the provisioning logs with the bulk request processing details.
+1. The provisioning service is yet to update the provisioning logs with the bulk request processing details.
+2. Your on-premises provisioning agent status is inactive (if you're running [API-driven inbound user provisioning to on-premises Active Directory](https://go.microsoft.com/fwlink/?linkid=2245182)).
+ **Resolution:** 1. Verify that your provisioning app is running. If it isn't running, select the menu option **Start provisioning** to process the data.
+2. Set your on-premises provisioning agent status to active by restarting the on-premises agent.
1. Expect a 5 to 10-minute delay between processing the request and writing to the provisioning logs. If your API client is sending data to the provisioning /bulkUpload API endpoint, then introduce a time delay between the request invocation and the provisioning logs query (see the sketch below). ### Forbidden 403 response code
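For the time-delay guidance above, a minimal Python sketch, assuming the Microsoft Graph synchronization /bulkUpload endpoint and the auditLogs/provisioning logs endpoint; the service principal ID, job ID, token, and the empty SCIM payload are placeholders rather than values from this article:

```python
import time
import requests

# Placeholder values for illustration only -- replace with values from your tenant.
GRAPH = "https://graph.microsoft.com/v1.0"
SP_ID = "<service-principal-id>"        # provisioning app's service principal (assumed)
JOB_ID = "<synchronization-job-id>"     # API-driven provisioning job (assumed)
TOKEN = "<access-token>"
headers = {"Authorization": f"Bearer {TOKEN}"}

# Send the bulk request to the provisioning /bulkUpload endpoint.
bulk_payload = {"schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"], "Operations": []}
resp = requests.post(
    f"{GRAPH}/servicePrincipals/{SP_ID}/synchronization/jobs/{JOB_ID}/bulkUpload",
    headers={**headers, "Content-Type": "application/scim+json"},
    json=bulk_payload,
)
resp.raise_for_status()

# Wait out the documented 5 to 10-minute lag before querying the provisioning logs.
time.sleep(10 * 60)

logs = requests.get(f"{GRAPH}/auditLogs/provisioning", headers=headers)
print(logs.json())
```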
active-directory Concept Authentication Default Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md
The following table lists each setting that can be set to Microsoft managed and
| Setting | Configuration | |-||
-| [Registration campaign](how-to-mfa-registration-campaign.md) | Beginning in July, 2023, enabled for SMS and voice call users with free and trial subscriptions. |
+| [Registration campaign](how-to-mfa-registration-campaign.md) | Beginning in July, 2023, enabled for text message and voice call users with free and trial subscriptions. |
| [Location in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled | | [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled | | [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Enabled | | [Authenticator Lite](how-to-mfa-authenticator-lite.md) | Enabled | | [Report suspicious activity](howto-mfa-mfasettings.md#report-suspicious-activity) | Disabled |
-As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/). For example, see our blog post [It's Time to Hang Up on Phone Transports for Authentication](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752) for more information about the need to move away from using SMS and voice calls, which led to default enablement for the registration campaign to help users to set up Authenticator for modern authentication.
+As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/). For example, see our blog post [It's Time to Hang Up on Phone Transports for Authentication](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752) for more information about the need to move away from using text message and voice calls, which led to default enablement for the registration campaign to help users to set up Authenticator for modern authentication.
## Next steps
active-directory Concept Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods-manage.md
To manage the legacy MFA policy, click **Security** > **Multifactor Authenticati
:::image type="content" border="true" source="./media/concept-authentication-methods-manage/service-settings.png" alt-text="Screenshot of MFA service settings.":::
-To manage authentication methods for self-service password reset (SSPR), click **Password reset** > **Authentication methods**. The **Mobile phone** option in this policy allows either voice calls or SMS to be sent to a mobile phone. The **Office phone** option allows only voice calls.
+To manage authentication methods for self-service password reset (SSPR), click **Password reset** > **Authentication methods**. The **Mobile phone** option in this policy allows either voice calls or text message to be sent to a mobile phone. The **Office phone** option allows only voice calls.
:::image type="content" border="true" source="./media/concept-authentication-methods-manage/password-reset.png" alt-text="Screenshot of password reset settings.":::
If the user can't register Microsoft Authenticator based on either of those poli
- **Mobile app notification** - **Mobile app code**
-For users who are enabled for **Mobile phone** for SSPR, the independent control between policies can impact sign-in behavior. Where the other policies have separate options for SMS and voice calls, the **Mobile phone** for SSPR enables both options. As a result, anyone who uses **Mobile phone** for SSPR can also use voice calls for password reset, even if the other policies don't allow voice calls.
+For users who are enabled for **Mobile phone** for SSPR, the independent control between policies can impact sign-in behavior. Where the other policies have separate options for text message and voice calls, the **Mobile phone** for SSPR enables both options. As a result, anyone who uses **Mobile phone** for SSPR can also use voice calls for password reset, even if the other policies don't allow voice calls.
Similarly, let's suppose you enable **Voice calls** for a group. After you enable it, you find that even users who aren't group members can sign-in with a voice call. In this case, it's likely those users are enabled for **Mobile phone** in the legacy SSPR policy or **Call to phone** in the legacy MFA policy.
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods.md
Microsoft recommends passwordless authentication methods such as Windows Hello,
:::image type="content" border="true" source="media/concept-authentication-methods/authentication-methods.png" alt-text="Illustration of the strengths and preferred authentication methods in Azure AD." :::
-Azure AD Multi-Factor Authentication (MFA) adds additional security over only using a password when a user signs in. The user can be prompted for additional forms of authentication, such as to respond to a push notification, enter a code from a software or hardware token, or respond to an SMS or phone call.
+Azure AD Multi-Factor Authentication (MFA) adds additional security over only using a password when a user signs in. The user can be prompted for additional forms of authentication, such as to respond to a push notification, enter a code from a software or hardware token, or respond to a text message or phone call.
To simplify the user on-boarding experience and register for both MFA and self-service password reset (SSPR), we recommend you [enable combined security information registration](howto-registration-mfa-sspr-combined.md). For resiliency, we recommend that you require users to register multiple authentication methods. When one method isn't available for a user during sign-in or SSPR, they can choose to authenticate with another method. For more information, see [Create a resilient access control management strategy in Azure AD](concept-resilient-controls.md).
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md
Previously updated : 07/17/2023 Last updated : 08/23/2023
# Authentication methods in Azure Active Directory - phone options
-Microsoft recommends users move away from using SMS or voice calls for multifactor authentication (MFA). Modern authentication methods like [Microsoft Authenticator](concept-authentication-authenticator-app.md) are a recommended alternative. For more information, see [It's Time to Hang Up on Phone Transports for Authentication](https://aka.ms/hangup). Users can still verify themselves using a mobile phone or office phone as secondary form of authentication used for multifactor authentication (MFA) or self-service password reset (SSPR).
+Microsoft recommends users move away from using text messages or voice calls for multifactor authentication (MFA). Modern authentication methods like [Microsoft Authenticator](concept-authentication-authenticator-app.md) are a recommended alternative. For more information, see [It's Time to Hang Up on Phone Transports for Authentication](https://aka.ms/hangup). Users can still verify themselves using a mobile phone or office phone as secondary form of authentication used for multifactor authentication (MFA) or self-service password reset (SSPR).
-You can [configure and enable users for SMS-based authentication](howto-authentication-sms-signin.md) for direct authentication using text message. SMS-based sign-in is convenient for Frontline workers. With SMS-based sign-in, users don't need to know a username and password to access applications and services. The user instead enters their registered mobile phone number, receives a text message with a verification code, and enters that in the sign-in interface.
+You can [configure and enable users for SMS-based authentication](howto-authentication-sms-signin.md) for direct authentication using text message. Text messages are convenient for Frontline workers. With text messages, users don't need to know a username and password to access applications and services. The user instead enters their registered mobile phone number, receives a text message with a verification code, and enters that in the sign-in interface.
>[!NOTE] >Phone call verification isn't available for Azure AD tenants with trial subscriptions. For example, if you sign up for a trial license for Microsoft Enterprise Mobility and Security (EMS), phone call verification isn't available. Phone numbers must be provided in the format *+CountryCode PhoneNumber*, for example, *+1 4251234567*. There must be a space between the country/region code and the phone number. ## Mobile phone verification
-For Azure AD Multi-Factor Authentication or SSPR, users can choose to receive an SMS message with a verification code to enter in the sign-in interface, or receive a phone call.
+For Azure AD Multi-Factor Authentication or SSPR, users can choose to receive a text message with a verification code to enter in the sign-in interface, or receive a phone call.
If users don't want their mobile phone number to be visible in the directory but want to use it for password reset, administrators shouldn't populate the phone number in the directory. Instead, users should populate their **Authentication Phone** at [My Sign-Ins](https://aka.ms/setupsecurityinfo). Administrators can see this information in the user's profile, but it's not published elsewhere.
If users don't want their mobile phone number to be visible in the directory but
> [!NOTE] > Phone extensions are supported only for office phones.
-Microsoft doesn't guarantee consistent SMS or voice-based Azure AD Multi-Factor Authentication prompt delivery by the same number. In the interest of our users, we may add or remove short codes at any time as we make route adjustments to improve SMS deliverability. Microsoft doesn't support short codes for countries/regions besides the United States and Canada.
+Microsoft doesn't guarantee consistent text message or voice-based Azure AD Multi-Factor Authentication prompt delivery by the same number. In the interest of our users, we may add or remove short codes at any time as we make route adjustments to improve text message deliverability. Microsoft doesn't support short codes for countries/regions besides the United States and Canada.
> [!NOTE]
-> Starting July 2023, we will apply delivery method optimizations such that tenants with a free or trial subscription may receive an SMS message or voice call.
+> Starting July 2023, we will apply delivery method optimizations such that tenants with a free or trial subscription may receive a text message or voice call.
-### SMS message verification
+### Text message verification
-With SMS message verification during SSPR or Azure AD Multi-Factor Authentication, a Short Message Service (SMS) text is sent to the mobile phone number containing a verification code. To complete the sign-in process, the verification code provided is entered into the sign-in interface.
+With text message verification during SSPR or Azure AD Multi-Factor Authentication, a text message is sent to the mobile phone number containing a verification code. To complete the sign-in process, the verification code provided is entered into the sign-in interface.
-Android users can enable Rich Communication Services (RCS) on their devices. RCS offers encryption and other improvements over SMS. For Android, MFA text messages may be sent over RCS rather than SMS. The MFA text message is similar to SMS, but RCS messages have more Microsoft branding and a verified checkmark so users know they can trust the message.
+Text messages can be sent over channels such as Short Message Service (SMS), Rich Communication Services (RCS), or WhatsApp.
+
+Android users can enable RCS on their devices. RCS offers encryption and other improvements over SMS. For Android, MFA text messages may be sent over RCS rather than SMS. The MFA text message is similar to SMS, but RCS messages have more Microsoft branding and a verified checkmark so users know they can trust the message.
:::image type="content" source="media/concept-authentication-methods/brand.png" alt-text="Screenshot of Microsoft branding in RCS messages.":::
+Some users with phone numbers that have country codes belonging to India, Indonesia, and New Zealand may receive their verification codes via WhatsApp. Like RCS, these messages are similar to SMS, but have more Microsoft branding and a verified checkmark. Only users that have WhatsApp will receive verification codes via this channel. To determine whether a user has WhatsApp, we silently attempt to deliver a message via the app using the phone number they already registered for text message verification and see if it's successfully delivered. If users don't have any internet connectivity or uninstall WhatsApp, they'll receive their verification codes via SMS. The phone number associated with Microsoft's WhatsApp Business Agent is: *+1 (217) 302 1989*.
+ ### Phone call verification With phone call verification during SSPR or Azure AD Multi-Factor Authentication, an automated voice call is made to the phone number registered by the user. To complete the sign-in process, the user is prompted to press # on their keypad.
With office phone call verification during SSPR or Azure AD Multi-Factor Authent
If you have problems with phone authentication for Azure AD, review the following troubleshooting steps:
-* “You've hit our limit on verification calls” or “You’ve hit our limit on text verification codes” error messages during sign-in
+* "You've hit our limit on verification calls" or "You've hit our limit on text verification codes" error messages during sign-in
* Microsoft may limit repeated authentication attempts that are performed by the same user or organization in a short period of time. This limitation does not apply to Microsoft Authenticator or verification codes. If you have hit these limits, you can use the Authenticator App, verification code or try to sign in again in a few minutes. * "Sorry, we're having trouble verifying your account" error message during sign-in
- * Microsoft may limit or block voice or SMS authentication attempts that are performed by the same user, phone number, or organization due to high number of voice or SMS authentication attempts. If you are experiencing this error, you can try another method, such as Authenticator App or verification code, or reach out to your admin for support.
+ * Microsoft may limit or block voice or text message authentication attempts that are performed by the same user, phone number, or organization due to high number of voice or text message authentication attempts. If you are experiencing this error, you can try another method, such as Authenticator App or verification code, or reach out to your admin for support.
* Blocked caller ID on a single device. * Review any blocked numbers configured on the device. * Wrong phone number or incorrect country/region code, or confusion between personal phone number versus work phone number.
If you have problems with phone authentication for Azure AD, review the followin
* Ensure that the user has their phone turned on and that service is available in their area, or use alternate method. * User is blocked * Have an Azure AD administrator unblock the user in the Azure portal.
-* SMS is not subscribed on the device.
- * Have the user change methods or activate SMS on the device.
-* Faulty telecom providers such as no phone input detected, missing DTMF tones issues, blocked caller ID on multiple devices, or blocked SMS across multiple devices.
- * Microsoft uses multiple telecom providers to route phone calls and SMS messages for authentication. If you see any of the above issues, have a user attempt to use the method at least five times within 5 minutes and have that user's information available when contacting Microsoft support.
+* Text messaging platforms like SMS, RCS, or WhatsApp aren't subscribed on the device.
+ * Have the user change methods or activate a text messaging platform on the device.
+* Faulty telecom providers, such as when no phone input is detected, missing DTMF tone issues, blocked caller ID on multiple devices, or blocked text messages across multiple devices.
+ * Microsoft uses multiple telecom providers to route phone calls and text messages for authentication. If you see any of these issues, have a user attempt to use the method at least five times within 5 minutes and have that user's information available when contacting Microsoft support.
* Poor signal quality. * Have the user attempt to log in using a wi-fi connection by installing the Authenticator app.
- * Or, use SMS authentication instead of phone (voice) authentication.
+ * Or use a text message instead of phone (voice) authentication.
* Phone number is blocked and unable to be used for Voice MFA
active-directory Concept Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md
Previously updated : 08/23/2023 Last updated : 08/28/2023
# Conditional Access authentication strength
-Authentication strength is a Conditional Access control that allows administrators to specify which combination of authentication methods can be used to access a resource. For example, they can make only phishing-resistant authentication methods available to access a sensitive resource. But to access a nonsensitive resource, they can allow less secure multifactor authentication (MFA) combinations, such as password + SMS.
+Authentication strength is a Conditional Access control that allows administrators to specify which combination of authentication methods can be used to access a resource. For example, they can make only phishing-resistant authentication methods available to access a sensitive resource. But to access a nonsensitive resource, they can allow less secure multifactor authentication (MFA) combinations, such as password + text message.
Authentication strength is based on the [Authentication methods policy](concept-authentication-methods.md), where administrators can scope authentication methods for specific users and groups to be used across Azure Active Directory (Azure AD) federated applications. Authentication strength allows further control over the usage of these methods based upon specific scenarios such as sensitive resource access, user risk, location, and more.
The following table lists the combinations of authentication methods for each bu
|Email One-time pass (Guest)| | | | -->
-<sup>1</sup> Something you have refers to one of the following methods: SMS, voice, push notification, software OATH token and Hardware OATH token.
+<sup>1</sup> Something you have refers to one of the following methods: text message, voice, push notification, software OATH token and Hardware OATH token.
The following API call can be used to list definitions of all the built-in authentication strengths:
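A minimal sketch of such a query, assuming the Microsoft Graph policies/authenticationStrengthPolicies collection and an access token that carries Policy.Read.All:

```python
import requests

TOKEN = "<access-token>"  # assumed to carry Policy.Read.All

resp = requests.get(
    "https://graph.microsoft.com/v1.0/policies/authenticationStrengthPolicies",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# Built-in strengths (for example MFA, Passwordless MFA, Phishing-resistant MFA)
# list their allowed method combinations in allowedCombinations.
for policy in resp.json().get("value", []):
    print(policy["displayName"], policy.get("allowedCombinations"))
```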
Users may register for authentications for which they are enabled, and in other
### How an authentication strength policy is evaluated during sign-in
-The authentication strength Conditional Access policy defines which methods can be used. Azure AD checks the policy during sign-in to determine the user’s access to the resource. For example, an administrator configures a Conditional Access policy with a custom authentication strength that requires FIDO2 Security Key or Password + SMS. The user accesses a resource protected by this policy. During sign-in, all settings are checked to determine which methods are allowed, which methods are registered, and which methods are required by the Conditional Access policy. To be used, a method must be allowed, registered by the user (either before or as part of the access request), and satisfy the authentication strength.
+The authentication strength Conditional Access policy defines which methods can be used. Azure AD checks the policy during sign-in to determine the user’s access to the resource. For example, an administrator configures a Conditional Access policy with a custom authentication strength that requires FIDO2 Security Key or Password + text message. The user accesses a resource protected by this policy. During sign-in, all settings are checked to determine which methods are allowed, which methods are registered, and which methods are required by the Conditional Access policy. To be used, a method must be allowed, registered by the user (either before or as part of the access request), and satisfy the authentication strength.
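A rough sketch of that evaluation as set logic; this is purely illustrative, not the service's actual implementation, and the method names are made up for the example:

```python
# Illustrative only: models the three checks described above as set intersection.
def methods_user_can_satisfy(allowed_by_policy: set[str],
                             registered_by_user: set[str],
                             required_by_strength: set[str]) -> set[str]:
    """A method is usable only if it is allowed, registered, and satisfies the strength."""
    return allowed_by_policy & registered_by_user & required_by_strength

usable = methods_user_can_satisfy(
    allowed_by_policy={"fido2", "password+textMessage", "microsoftAuthenticatorPush"},
    registered_by_user={"password+textMessage", "microsoftAuthenticatorPush"},
    required_by_strength={"fido2", "password+textMessage"},  # the custom strength from the example
)
print(usable)  # {'password+textMessage'} -- the user can sign in with password + text message
```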
### How multiple Conditional Access authentication strength policies are evaluated
The following factors determine if the user gains access to the resource:
- Which methods are allowed for user sign-in in the Authentication methods policy? - Is the user registered for any available method?
-When a user accesses a resource protected by an authentication strength Conditional Access policy, Azure AD evaluates if the methods they have previously used satisfy the authentication strength. If a satisfactory method was used, Azure AD grants access to the resource. For example, let's say a user signs in with password + SMS. They access a resource protected by MFA authentication strength. In this case, the user can access the resource without another authentication prompt.
+When a user accesses a resource protected by an authentication strength Conditional Access policy, Azure AD evaluates if the methods they have previously used satisfy the authentication strength. If a satisfactory method was used, Azure AD grants access to the resource. For example, let's say a user signs in with password + text message. They access a resource protected by MFA authentication strength. In this case, the user can access the resource without another authentication prompt.
Let's suppose they next access a resource protected by Phishing-resistant MFA authentication strength. At this point, they'll be prompted to provide a phishing-resistant authentication method, such as Windows Hello for Business.
In external user scenarios, the authentication methods that can satisfy authenti
|Authentication method |Home tenant | Resource tenant | ||||
-|SMS as second factor | &#x2705; | &#x2705; |
|Text message as second factor | &#x2705; | &#x2705; |
|Voice call | &#x2705; | &#x2705; | |Microsoft Authenticator push notification | &#x2705; | &#x2705; | |Microsoft Authenticator phone sign-in | &#x2705; | |
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Azure AD CBA is an MFA (Multi factor authentication) capable method, that is Azu
If a CBA-enabled user only has a Single Factor (SF) certificate and needs MFA 1. Use Password + SF certificate. 1. Issue Temporary Access Pass (TAP)
- 1. Admin adds Phone Number to user account and allows Voice/SMS method for user.
+ 1. Admin adds Phone Number to user account and allows Voice/text message method for user.
If a CBA-enabled user has not yet been issued a certificate and needs MFA 1. Issue Temporary Access Pass (TAP)
- 1. Admin adds Phone Number to user account and allows Voice/SMS method for user.
+ 1. Admin adds Phone Number to user account and allows Voice/text message method for user.
If a CBA-enabled user cannot use an MF cert (such as on a mobile device without smart card support) and needs MFA 1. Issue Temporary Access Pass (TAP) 1. User registers another MFA method (when the user can use an MF cert) 1. Use Password + MF cert (when the user can use an MF cert)
- 1. Admin adds Phone Number to user account and allows Voice/SMS method for user
+ 1. Admin adds Phone Number to user account and allows Voice/text message method for user
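As a hedged illustration of the "admin adds a phone number" step above, assuming the Microsoft Graph phoneMethods endpoint, a placeholder user, and the sample number format used elsewhere in these docs; enabling the voice and text message methods themselves is still done in the Authentication methods policy:

```python
import requests

TOKEN = "<access-token>"          # assumed to carry UserAuthenticationMethod.ReadWrite.All
USER_ID = "<user-object-id-or-upn>"  # placeholder

# Register a mobile number on the user so voice/text message MFA can target it.
resp = requests.post(
    f"https://graph.microsoft.com/v1.0/users/{USER_ID}/authentication/phoneMethods",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"phoneNumber": "+1 4251234567", "phoneType": "mobile"},  # placeholder number
)
resp.raise_for_status()
print(resp.json())
```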
## MFA with Single-factor certificate-based authentication
active-directory Concept Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication.md
The following images show how Azure AD CBA simplifies the customer environment b
The following scenarios are supported: - User sign-ins to web browser-based applications on all platforms.-- User sign-ins to Office mobile apps, including Outlook, OneDrive, and so on.
+- User sign-ins to Office mobile apps on iOS/Android platforms as well as Office native apps on Windows, including Outlook, OneDrive, and so on.
- User sign-ins on mobile native browsers. - Support for granular authentication rules for multifactor authentication by using the certificate issuer **Subject** and **policy OIDs**. - Configuring certificate-to-user account bindings by using any of the certificate fields:
active-directory Concept Mfa Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-licensing.md
The following table provides a list of the features that are available in the va
| Protect Azure AD tenant admin accounts with MFA | ● | ● (*Azure AD Global Administrator* accounts only) | ● | ● | ● | | Mobile app as a second factor | ● | ● | ● | ● | ● | | Phone call as a second factor | | | ● | ● | ● |
-| SMS as a second factor | | ● | ● | ● | ● |
| Text message as a second factor | | ● | ● | ● | ● |
| Admin control over verification methods | | ● | ● | ● | ● | | Fraud alert | | | | ● | ● | | MFA Reports | | | | ● | ● |
Our recommended approach to enforce MFA is using [Conditional Access](../conditi
| Configuration flexibility | | ● | | | **Functionality** | | Exempt users from the policy | | ● | ● |
-| Authenticate by phone call or SMS | ● | ● | ● |
+| Authenticate by phone call or text message | ● | ● | ● |
| Authenticate by Microsoft Authenticator and Software tokens | ● | ● | ● | | Authenticate by FIDO2, Windows Hello for Business, and Hardware tokens | | ● | ● | | Blocks legacy authentication protocols | ● | ● | ● |
active-directory Concept Mfa Regional Opt In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-regional-opt-in.md
Previously updated : 09/11/2023 Last updated : 09/12/2023
As a protection for our customers, Microsoft doesn't automatically support telep
In today's digital world, telecommunication services have become ingrained into our lives. But advancements come with a risk of fraudulent activities. International Revenue Share Fraud (IRSF) is a threat with severe financial implications that also makes using services more difficult. Let's look at IRSF fraud more in-depth.
-IRSF is a type of telephony fraud where criminals exploit the billing system of telecommunication services providers to make profit for themselves. Bad actors gain unauthorized access to a telecommunication network and divert traffic to those networks to skim profit for every transaction that is sent to that network. To divert traffic, bad actors steal existing usernames and passwords, create new usernames and passwords, or try a host of other things to send SMS messages and voice calls through their telecommunication network. Bad actors take advantage of multifactor authentication screens, which require an SMS or voice call before a user can access their account. This activity causes exorbitant charges and makes services unreliable for our customers, causing downtime, and system errors.
+IRSF is a type of telephony fraud where criminals exploit the billing system of telecommunication services providers to make profit for themselves. Bad actors gain unauthorized access to a telecommunication network and divert traffic to those networks to skim profit for every transaction that is sent to that network. To divert traffic, bad actors steal existing usernames and passwords, create new usernames and passwords, or try a host of other things to send text messages and voice calls through their telecommunication network. Bad actors take advantage of multifactor authentication screens, which require a text message or voice call before a user can access their account. This activity causes exorbitant charges and makes services unreliable for our customers, causing downtime and system errors.
Here's how an IRSF attack may happen: 1. A bad actor first gets premium rate phone numbers and registers them.
-1. A bad actor uses automated scripts to request voice calls or SMS messages. The bad actor is colluding with number providers and the telecommunication network to drive more traffic to those services. The bad actor skims some of the profits of the increased traffic.
+1. A bad actor uses automated scripts to request voice calls or text messages. The bad actor is colluding with number providers and the telecommunication network to drive more traffic to those services. The bad actor skims some of the profits of the increased traffic.
1. A bad actor will hop around different region codes to continue to drive traffic and make it hard for them to get caught. The most common way to conduct IRSF is through an end-user experience that requires a two-factor authentication code. Bad actors add those premium rate phone numbers and pump traffic to them by requesting two-factor authentication codes. This activity results in revenue-skimming, and can lead to billions of dollars in loss.
For SMS verification, the following region codes require an opt-in.
| 998 | Uzbek | ## Voice verification
-For Voice verification, the following region codes require an opt-in.
+For voice verification, the following region codes require an opt-in.
| Region Code | Region Name | |:-- |:- |
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
In addition to choosing who can be nudged, you can define how many days a user c
![Confirmation of approval](./media/how-to-nudge-authenticator-app/approved.png)
- 1. Authenticator app is now successfully set up as the user’s default sign-in method.
+ 1. Authenticator app is now successfully set up as the user's default sign-in method.
![Installation complete](./media/how-to-nudge-authenticator-app/finish.png)
In addition to using the Azure portal, you can also enable the registration camp
To configure the policy using Graph Explorer:
-1. Sign in to Graph Explorer and ensure you’ve consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
+1. Sign in to Graph Explorer and ensure you've consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
To open the Permissions panel:
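A hedged sketch of the kind of request Graph Explorer sends for this policy, assuming the registration campaign is exposed through the authentication methods policy's registrationEnforcement setting; the snooze value and include target are placeholders:

```python
import requests

TOKEN = "<access-token>"   # assumed to carry Policy.ReadWrite.AuthenticationMethod
headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}
url = "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy"

# Read the current policy, then nudge all users toward Microsoft Authenticator.
current = requests.get(url, headers=headers).json()
print(current.get("registrationEnforcement"))

patch_body = {
    "registrationEnforcement": {
        "authenticationMethodsRegistrationCampaign": {
            "state": "enabled",
            "snoozeDurationInDays": 1,   # placeholder: how many days a user can postpone
            "includeTargets": [{
                "id": "all_users",
                "targetType": "group",
                "targetedAuthenticationMethod": "microsoftAuthenticator",
            }],
        }
    }
}
resp = requests.patch(url, headers=headers, json=patch_body)
resp.raise_for_status()
```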
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md
The following table lists more numbers for different countries.
| Vietnam | +84 2039990161 | > [!NOTE]
-> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What SMS short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-sms-short-codes-are-used-for-sending-sms-messages-to-my-users-).
+> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-short-codes-are-used-for-sending-text-messages-to-my-users-).
To configure your own caller ID number, complete the following steps:
To configure your own caller ID number, complete the following steps:
1. Select **Save**. > [!NOTE]
-> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What SMS short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-sms-short-codes-are-used-for-sending-sms-messages-to-my-users-).
+> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-short-codes-are-used-for-sending-text-messages-to-my-users-).
### Custom voice messages
active-directory How To Add Remove User To Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-add-remove-user-to-group.md
This article describes how you can add or remove a new user for a group in Permi
## Add a user
-1. Navigate to the [Microsoft Entra admin center](https://entra.microsoft.com/#home).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/#home).
1. From the Azure Active Directory tile, select **Go to Azure Active Directory**. 1. From the navigation pane, select the **Groups** drop-down menu, then **All groups**. 1. Select the group name for the group you want to add the user to.
This article describes how you can add or remove a new user for a group in Permi
## Remove a user
-1. Navigate to the Microsoft [Entra admin center](https://entra.microsoft.com/#home).
+1. Sign in to the Microsoft [Entra admin center](https://entra.microsoft.com/#home).
1. From the Azure Active Directory tile, select **Go to Azure Active Directory**. 1. From the navigation pane, select the **Groups** drop-down menu, then **All groups**. 1. Select the group name for the group you want to remove the user from.
active-directory Onboard Add Account After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-add-account-after-onboarding.md
Previously updated : 06/16/2023 Last updated : 09/13/2023
This article describes how to add an Amazon Web Services (AWS) account, Microsof
The **Permissions Management Onboarding - AWS Member Account Details** page displays.
-1. Go to **Enter Your AWS Account IDs**, and then select **Add** (the plus **+** sign).
+1. Go to **Enter Your AWS Account IDs**, then select **Add** (the plus **+** sign).
1. Copy your account ID from AWS and paste it into the **Enter Account ID** box. The AWS account ID is automatically added to the script.
This article describes how to add an Amazon Web Services (AWS) account, Microsof
The **Permissions Management Onboarding - Summary** page displays.
-1. Go to **Azure subscription IDs**, and then select **Edit** (the pencil icon).
-1. Go to **Enter your Azure Subscription IDs**, and then select **Add subscription** (the plus **+** sign).
+1. Go to **Azure subscription IDs**, then select **Edit** (the pencil icon).
+1. Go to **Enter your Azure Subscription IDs**, then select **Add subscription** (the plus **+** sign).
1. Copy and paste your subscription ID from Azure and paste it into the subscription ID box. The subscription ID is automatically added to the subscriptions line in the script.
active-directory Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md
Previously updated : 08/24/2023 Last updated : 09/13/2023
This article describes how to onboard an Amazon Web Services (AWS) account in Microsoft Entra Permissions Management. > [!NOTE]
-> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Microsoft Entra Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
+> You must have Global Administrator permissions to perform the tasks in this article.
## Explanation
Any current or future accounts found get onboarded automatically.
To view status of onboarding after saving the configuration: -- Navigate to data collectors tab.
+- Go to **Data Collectors** tab.
- Click on the status of the data collector. -- View accounts on the In Progress page
+- View accounts on the **In Progress** page
#### Option 2: Enter authorization systems 1. In the **Permissions Management Onboarding - AWS Member Account Details** page, enter the **Member Account Role** and the **Member Account IDs**.
To view status of onboarding after saving the configuration:
You can enter up to 100 account IDs. Click the plus icon next to the text box to add more account IDs. > [!NOTE]
- > Perform the next 6 steps for each account ID you add.
+ > Do the following steps for each account ID you add:
1. Open another browser window and sign in to the AWS console for the member account.
This option detects all AWS accounts that are accessible through OIDC role acces
- If AWS SSO is enabled, organization account CFT also adds policy needed to collect AWS SSO configuration details. - Deploy Member account CFT in all the accounts that need to be monitored by Entra Permissions Management. These actions create a cross account role that trusts the OIDC role created earlier. The SecurityAudit policy is attached to the role created for data collection. - Click Verify and Save. -- Navigate to newly create Data Collector row under AWSdata collectors. -- Click on Status column when the row has “Pending” status
+- Go to the newly created Data Collector row under AWS data collectors.
+- Click on the Status column when the row has a **Pending** status
- To onboard and start collection, choose specific ones from the detected list and consent for collection. ### 6. Review and save
active-directory Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md
This article describes how to onboard a Microsoft Azure subscription or subscriptions on Permissions Management. Onboarding a subscription creates a new authorization system to represent the Azure subscription in Permissions Management. > [!NOTE]
-> A *global administrator* or *root user* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
+> You must have [Global Administrator](https://aka.ms/globaladmin) permissions to perform the tasks in this article.
## Explanation
The Permissions Management service is built on Azure, and given you're onboardin
## Prerequisites
-To add Permissions Management to your Azure AD tenant:
-- You must have an Azure AD user account and an Azure command-line interface (Azure CLI) on your system, or an Azure subscription. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
+To add Permissions Management to your Entra ID tenant:
+- You must have an Entra ID user account and an Azure command-line interface (Azure CLI) on your system, or an Azure subscription. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
- You must have **Microsoft.Authorization/roleAssignments/write** permission at the subscription or management group scope to perform these tasks. If you don't have this permission, you can ask someone who has this permission to perform these tasks for you. ## How to onboard an Azure subscription
Choose from three options to manage Azure subscriptions.
#### Option 1: Automatically manage
-This option allows subscriptions to be automatically detected and monitored without further work required. A key benefit of automatic management is that any current or future subscriptions found will be onboarded automatically. The steps to detect a list of subscriptions and onboard for collection are as follows:
+This option lets subscriptions be automatically detected and monitored without further work required. A key benefit of automatic management is that any current or future subscriptions found are onboarded automatically. The steps to detect a list of subscriptions and onboard for collection are as follows:
- Firstly, grant the Reader role to the Cloud Infrastructure Entitlement Management application at management group or subscription scope. To do this: 1. In the EPM portal, left-click the cog on the top right-hand side.
-1. Navigate to data collectors tab
-1. Ensure 'Azure' is selected
-1. Click ‘Create Configuration’
-1. For onboarding mode, select ‘Automatically Manage’
+1. Go to the data collectors tab
+1. Ensure **Azure** is selected.
+1. Click **Create Configuration.**
+1. For onboarding mode, select **Automatically Manage.**
> [!NOTE]
- > The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. This can be performed manually in the Entra console, or programmatically with PowerShell or the Azure CLI.
+ > The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. This is performed manually in the Entra console, or programmatically with PowerShell or the Azure CLI.
-- Once complete, Click ‘Verify Now & Save’
+- Once complete, Click **Verify Now & Save.**
To view status of onboarding after saving the configuration:
-1. Collectors will now be listed and change through status types. For each collector listed with a status of “Collected Inventory”, click on that status to view further information.
-1. You can then view subscriptions on the In Progress page
+1. Collectors are now listed and change through status types. For each collector listed with a status of **Collected Inventory,** click on that status to view further information.
+1. You can then view subscriptions on the In Progress page.
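For the Reader role grant at the start of this option, a minimal programmatic sketch against the Azure role assignments REST API; the subscription ID, token, and the object ID of the Cloud Infrastructure Entitlement Management service principal are placeholders, and the built-in Reader role ID is assumed:

```python
import uuid
import requests

TOKEN = "<arm-access-token>"
SUBSCRIPTION_ID = "<subscription-id>"
CIEM_APP_OBJECT_ID = "<object-id-of-the-CIEM-service-principal>"  # placeholder

scope = f"/subscriptions/{SUBSCRIPTION_ID}"
# Built-in Reader role definition (assumed well-known ID).
reader_role = f"{scope}/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7"

resp = requests.put(
    f"https://management.azure.com{scope}/providers/Microsoft.Authorization/roleAssignments/{uuid.uuid4()}",
    params={"api-version": "2022-04-01"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"properties": {
        "roleDefinitionId": reader_role,
        "principalId": CIEM_APP_OBJECT_ID,
        "principalType": "ServicePrincipal",
    }},
)
resp.raise_for_status()
print(resp.json()["id"])
```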
#### Option 2: Enter authorization systems
-You have the ability to specify only certain subscriptions to manage and monitor with MEPM (up to 100 per collector). Follow the steps below to configure these subscriptions to be monitored:
+You have the ability to specify only certain subscriptions to manage and monitor with Permissions Management (up to 100 per collector). Follow the steps below to configure these subscriptions to be monitored:
1. For each subscription you wish to manage, ensure that the ‘Reader’ role has been granted to the Cloud Infrastructure Entitlement Management application for the subscription. 1. In the EPM portal, click the cog on the top right-hand side.
-1. Navigate to data collectors tab
+1. Go to the data collectors tab
1. Ensure 'Azure' is selected 1. Click ‘Create Configuration’ 1. Select ‘Enter Authorization Systems’
You have the ability to specify only certain subscriptions to manage and monitor
To view status of onboarding after saving the configuration:
-1. Navigate to data collectors tab.
+1. Go to the **Data Collectors** tab.
1. Click on the status of the data collector.
-1. View subscriptions on the In Progress page
+1. View subscriptions on the In Progress page.
#### Option 3: Select authorization systems
This option detects all subscriptions that are accessible by the Cloud Infrastru
- Firstly, grant the Reader role to the Cloud Infrastructure Entitlement Management application at management group or subscription scope.
-1. In the EPM portal, click the cog on the top right-hand side.
-1. Navigate to data collectors tab
-1. Ensure 'Azure' is selected
-1. Click ‘Create Configuration’
-1. For onboarding mode, select ‘Automatically Manage’
+1. In the Permissions Management portal, click the cog on the top right-hand side.
+1. Go to the **Data Collectors** tab.
+1. Ensure **Azure** is selected.
+1. Click **Create Configuration.**
+1. For onboarding mode, select **Automatically Manage.**
> [!NOTE] > The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. You can do this manually in the Entra console, or programmatically with PowerShell or the Azure CLI. -- Once complete, Click ΓÇÿVerify Now & SaveΓÇÖ
+- Once complete, Click **Verify Now & Save.**
To view status of onboarding after saving the configuration:
-1. Navigate to newly create Data Collector row under Azure data collectors.
-1. Click on Status column when the row has ΓÇ£PendingΓÇ¥ status
+1. Go to the newly created Data Collector row under Azure data collectors.
+1. Click on the Status column when the row has a **Pending** status
1. To onboard and start collection, choose specific subscriptions from the detected list and consent for collection. ### 2. Review and save.
active-directory Onboard Enable Controller After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md
Previously updated : 08/24/2023 Last updated : 09/13/2023
This article also describes how to disable the controller in Microsoft Azure and
> [!NOTE] > You can enable the controller in AWS if you disabled it during onboarding. Once you enable the controller in AWS, you can’t disable it.
-1. Sign in to the AWS console of the member account in a separate browser window.
-1. Go to the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
-1. On the **Data Collectors** dashboard, select **AWS**, and then select **Create Configuration**.
+1. In a separate browser window, sign in to the AWS console of the member account.
+1. Go to the Permissions Management home page, select **Settings** (the gear icon), then select the **Data Collectors** subtab.
+1. On the **Data Collectors** dashboard, select **AWS**, then select **Create Configuration**.
1. On the **Permissions Management Onboarding - AWS Member Account Details** page, select **Launch Template**. The **AWS CloudFormation create stack** page opens, displaying the template.
This article also describes how to disable the controller in Microsoft Azure and
This AWS CloudFormation stack creates a collection role in the member account with necessary permissions (policies) for data collection. A trust policy is set on this role to allow the OIDC role created in your AWS OIDC account to access it. These entities are listed in the **Resources** tab of your CloudFormation stack. 1. Return to Permissions Management, and on the Permissions Management **Onboarding - AWS Member Account Details** page, select **Next**.
-1. On **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**.
+1. On **Permissions Management Onboarding – Summary** page, review the information you've added, then select **Verify Now & Save**.
The following message appears: **Successfully created configuration.**
You can enable or disable the controller in Azure at the Subscription level of y
- If you have read-only permission, the **Role** column displays **Reader**. - If you have administrative permission, the **Role** column displays **User Access Administrator**.
-1. To add the administrative role assignment, return to the **Access control (IAM)** page, and then select **Add role assignment**.
+1. To add the administrative role assignment, return to the **Access control (IAM)** page, then select **Add role assignment**.
1. Add or remove the role assignment for Cloud Infrastructure Entitlement Management.
-1. Go to the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
-1. On the **Data Collectors** dashboard, select **Azure**, and then select **Create Configuration**.
-1. On the **Permissions Management Onboarding - Azure Subscription Details** page, enter the **Subscription ID**, and then select **Next**.
-1. On **Permissions Management Onboarding – Summary** page, review the controller permissions, and then select **Verify Now & Save**.
+1. Go to the Permissions Management home page, select **Settings** (the gear icon), then select the **Data Collectors** subtab.
+1. On the **Data Collectors** dashboard, select **Azure**, then select **Create Configuration**.
+1. On the **Permissions Management Onboarding - Azure Subscription Details** page, enter the **Subscription ID**, then select **Next**.
+1. On **Permissions Management Onboarding – Summary** page, review the controller permissions, then select **Verify Now & Save**.
The following message appears: **Successfully Created Configuration.**
You can enable or disable the controller in Azure at the Subscription level of y
1. Optionally, execute ``mciem-enable-gcp-api.sh`` to enable all recommended GCP APIs.
-1. Go to the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+1. Go to the Permissions Management home page, select **Settings** (the gear icon), then select the **Data Collectors** subtab.
1. On the **Data Collectors** dashboard, select **GCP**, and then select **Create Configuration**. 1. On the **Permissions Management Onboarding - Azure AD OIDC App Creation** page, select **Next**. 1. On the **Permissions Management Onboarding - GCP OIDC Account Details & IDP Access** page, enter the **OIDC Project Number** and **OIDC Project ID**, and then select **Next**.
-1. On the **Permissions Management Onboarding - GCP Project IDs** page, enter the **Project IDs**, and then select **Next**.
-1. On the **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**.
+1. On the **Permissions Management Onboarding - GCP Project IDs** page, enter the **Project IDs**, then select **Next**.
+1. On the **Permissions Management Onboarding – Summary** page, review the information you've added, then select **Verify Now & Save**.
The following message appears: **Successfully Created Configuration.**
active-directory Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md
Previously updated : 07/21/2023 Last updated : 09/13/2023
This article describes how to enable Microsoft Entra Permissions Management in y
To enable Permissions Management in your organization: -- You must have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
+- You must have an Entra ID tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
- You must be eligible for or have an active assignment to the *Permissions Management Administrator* role as a user in that tenant. ## How to enable Permissions Management on your Azure AD tenant 1. In your browser:
- 1. Go to [Entra services](https://entra.microsoft.com) and use your credentials to sign in to [Azure Active Directory](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview).
- 1. If you aren't already authenticated, sign in as a *Permissions Management Administrator* user.
- 1. If needed, activate the *Permissions Management Administrator* role in your Azure AD tenant.
- 1. In the Azure portal, select **Permissions Management**, and then select the link to purchase a license or begin a trial.
+ 1. Browse to the [Microsoft Entra admin center](https://entra.microsoft.com) and sign in to [Microsoft Entra ID](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) as a [Global Administrator](https://aka.ms/globaladmin).
+ 1. If needed, activate the *Permissions Management Administrator* role in your Entra ID tenant.
+ 1. In the Azure portal, select **Entra Permissions Management**, then select the link to purchase a license or begin a trial.
## Activate a free trial or paid license There are two ways to activate a trial or a full product license. -- The first way is to go to [admin.microsoft.com](https://admin.microsoft.com).
- - Sign in with *Global Admin* or *Billing Admin* credentials for your tenant.
- - Go to Setup and sign up for an Entra Permissions Management trial.
- - For self-service, navigate to the [Microsoft 365 portal](https://aka.ms/TryPermissionsManagement) to sign up for a 45-day free trial or to purchase licenses.
-- The second way is through Volume Licensing or Enterprise agreements. If your organization falls under a volume license or enterprise agreement scenario, contact your Microsoft representative.
+- The first way is to go to the [Microsoft 365 admin center](https://admin.microsoft.com).
+ - Sign in as a *Global Administrator* for your tenant.
+ - Go to Setup and sign up for a Microsoft Entra Permissions Management trial.
+ - For self-service, go to the [Microsoft 365 portal](https://aka.ms/TryPermissionsManagement) to sign up for a 45-day free trial or to purchase licenses.
+- The second way is through Volume Licensing or Enterprise agreements.
+ - If your organization falls under a volume license or enterprise agreement scenario, contact your Microsoft representative.
Permissions Management launches with the **Data Collectors** dashboard.
Use the **Data Collectors** dashboard in Permissions Management to configure dat
1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches:
- - In the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+ - In the Permissions Management home page, select **Settings** (the gear icon), then select the **Data Collectors** subtab.
1. Select the authorization system you want: **AWS**, **Azure**, or **GCP**.
active-directory Permissions Management Quickstart Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/permissions-management-quickstart-guide.md
Previously updated : 08/24/2023 Last updated : 09/13/2023
Before you begin, you need access to these tools for the onboarding process:
- Access to a local BASH shell with the Azure CLI or Azure Cloud Shell using BASH environment (Azure CLI is included). - Access to AWS, Azure, and GCP consoles.-- A user must have *Global Administrator* or *Permissions Management Administrator* role assignments to create a new app registration in Entra ID tenant is required for AWS and GCP onboarding.
+- A user must have the *Global Administrator* role assignment to create a new app registration in the Entra ID tenant, which is required for AWS and GCP onboarding.
## Step 1: Set-up Permissions Management
If the above points are met, continue with:
[Enable Microsoft Entra Permissions Management in your organization](onboard-enable-tenant.md)
-Ensure you're a *Global Administrator* or *Permissions Management Administrator*. Learn more about [Permissions Management roles and permissions](product-roles-permissions.md).
+Ensure you're a *Global Administrator*. Learn more about [Permissions Management roles and permissions](product-roles-permissions.md).
## Step 2: Onboard your multicloud environment
Permissions Management automatically discovers all current subscriptions. Once d
> To use **Automatic** or **Select** modes, the controller must be enabled while configuring data collection. To configure data collection:
-1. In Permissions Management, navigate to the data collectors page.
-2. Select a cloud environment: AWS, Azure, or GCP.
+1. In Permissions Management, go to the **Data Collectors** page.
+2. Select a cloud environment: **AWS**, **Azure**, or **GCP**.
3. Click **Create configuration**. ### Onboard Amazon Web Services (AWS)
active-directory Product Privileged Role Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-privileged-role-insights.md
The **Azure AD Insights** tab shows you who is assigned to privileged roles in y
> Microsoft recommends that you keep two break glass accounts permanently assigned to the global administrator role. Make sure that these accounts don't require the same multi-factor authentication mechanism to sign in as other administrative accounts. This is described further in [Manage emergency access accounts in Microsoft Entra](../roles/security-emergency-access.md). > [!NOTE]
-> Keep role assignments permanent if a user has a an additional Microsoft account (for example, an account they use to sign in to Microsoft services like Skype, or Outlook.com). If you require multi-factor authentication to activate a role assignment, a user with an additional Microsoft account will be locked out.
+> Keep role assignments permanent if a user has an additional Microsoft account (for example, an account they use to sign in to Microsoft services like Skype or Outlook.com). If you require multi-factor authentication to activate a role assignment, a user with an additional Microsoft account will be locked out.
## Prerequisite To view information on the Azure AD Insights tab, you must have Permissions Management Administrator role permissions.
active-directory Product Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-roles-permissions.md
Title: Microsoft Entra Permissions Management roles and permissions description: Review roles and the level of permissions assigned in Microsoft Entra Permissions Management.
-# customerintent: As a cloud administer, I want to understand Permissions Management role assignments, so that I can effectively assign the correct permissions to users.
+# customerintent: As a cloud administrator, I want to understand Permissions Management role assignments, so that I can effectively assign the correct permissions to users.
In Microsoft Azure and Microsoft Entra Permissions Management role assignments g
- **Billing Administrator**: Performs common billing related tasks like updating payment information. - **Permissions Management Administrator**: Manages all aspects of Entra Permissions Management.
-See [Microsoft Entra ID built-in roles to learn more.](product-privileged-role-insights.md)
+See [Microsoft Entra ID built-in roles](https://go.microsoft.com/fwlink/?linkid=2247090) to learn more.
## Enabling Permissions Management-- To activate a trial or purchase a license, you must have *Global Administrator* or *Billing Administrator* permissions.
+- To activate a trial or purchase a license, you must have *Global Administrator* permissions.
## Onboarding your Amazon Web Service (AWS), Microsoft Entra, or Google Cloud Platform (GCP) environments
active-directory Quickstart Single Page App Angular Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-angular-sign-in.md
Previously updated : 07/27/2023 Last updated : 09/13/2023
# Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using Angular
-In this quickstart, you download and run a code sample that demonstrates how a JavaScript Angular single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+This quickstart uses a sample Angular single-page app (SPA) to show you how to sign in users by using the [authorization code flow](/azure/active-directory/develop/v2-oauth2-auth-code-flow) with Proof Key for Code Exchange (PKCE) and call the Microsoft Graph API. The sample uses the [Microsoft Authentication Library for JavaScript](/javascript/api/@azure/msal-react) to handle authentication.
-See [How the sample works](#how-the-sample-works) for an illustration.
-
-This quickstart uses MSAL Angular v2 with the authorization code flow.
+In this article, you'll register a SPA in the Microsoft Entra admin center and download a sample Angular SPA. Next, you'll run the sample application, sign in with your personal Microsoft account or a work or school account, and sign out.
## Prerequisites
-* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* [Node.js](https://nodejs.org/en/download/)
-* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
-
-## Register your quickstart application
+* [Visual Studio 2022](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
+## Register the application in the Microsoft Entra admin center
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer). 1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application. 1. Browse to **Identity** > **Applications** > **App registrations**. 1. Select **New registration**.
-1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
+1. When the **Register an application** page appears, enter a name for your application, such as *identity-client-app*.
1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
+1. Select **Register**.
+1. The application's Overview pane displays upon successful registration. Record the **Application (client) ID** and **Directory (tenant) ID** to be used in your application source code.
+
+## Add a redirect URI
+ 1. Under **Manage**, select **Authentication**.
-1. Under **Platform configurations**, select **Add a platform**. In the pane that opens select **Single-page application**.
+1. Under **Platform configurations**, select **Add a platform**. In the pane that opens, select **Single-page application**.
1. Set the **Redirect URIs** value to `http://localhost:4200/`. This is the default port on which Node.js listens on your local machine. The authentication response is returned to this URI after the user is successfully authenticated. 1. Select **Configure** to apply the changes. 1. Under **Platform Configurations** expand **Single-page application**.
-1. Confirm that under **Grant types** ![Already configured](media/quickstart-v2-javascript/green-check.png) Your Redirect URI is eligible for the Authorization Code Flow with PKCE.
-
-#### Step 2: Download the project
-
-To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa/archive/main.zip).
-
-#### Step 3: Configure your JavaScript app
-
-In the *src* folder, open the *app* folder then open the *app.module.ts* file and update the `clientID`, `authority`, and `redirectUri` values in the `auth` object.
-
-```javascript
-// MSAL instance to be passed to msal-angular
-export function MSALInstanceFactory(): IPublicClientApplication {
- return new PublicClientApplication({
- auth: {
- clientId: 'Enter_the_Application_Id_Here',
- authority: 'Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here',
- redirectUri: 'Enter_the_Redirect_Uri_Here'
- },
- cache: {
- cacheLocation: BrowserCacheLocation.LocalStorage,
- storeAuthStateInCookie: isIE, // set to true for IE 11 },
- });
-}
-```
-
-Modify the values in the `auth` section as described here:
+1. Confirm that for **Grant types** ![Already configured](media/quickstart-v2-javascript/green-check.png), your **Redirect URI** is eligible for the Authorization Code Flow with PKCE.
-- `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+## Clone or download the sample application
- To find the value of **Application (client) ID**, go to the app registration's **Overview** page.
-- `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).-- `Enter_the_Tenant_info_here` is set to one of the following:
- - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
+To obtain the sample application, you can either clone it from GitHub or download it as a .zip file.
- To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page.
- - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
- - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
- - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+- To clone the sample, open a command prompt and navigate to where you wish to create the project, and enter the following command:
- To find the value of **Supported account types**, go to the app registration's **Overview** page.
-- `Enter_the_Redirect_Uri_Here` is `http://localhost:4200/`.
+ ```console
+ git clone https://github.com/Azure-Samples/ms-identity-docs-code-javascript.git
+ ```
-The `authority` value in your *app.module.ts* should be similar to the following if you're using the main (global) Azure cloud:
+- [Download the .zip file](https://github.com/Azure-Samples/ms-identity-docs-code-javascript/archive/refs/heads/main.zip)
-```javascript
-authority: "https://login.microsoftonline.com/common",
-```
+## Configure the project
-Scroll down in the same file and update the `graphMeEndpoint`.
-- Replace the string `Enter_the_Graph_Endpoint_Herev1.0/me` with `https://graph.microsoft.com/v1.0/me`-- `Enter_the_Graph_Endpoint_Herev1.0/me` is the endpoint that API calls will be made against. For the main (global) Microsoft Graph API service, enter `https://graph.microsoft.com/` (include the trailing forward-slash). For more information, see the [documentation](/graph/deployments).
+1. In your IDE, open the project folder, *ms-identity-docs-code-javascript/angular-spa*, containing the sample.
+1. Open *src/app/app.module.ts* and replace the file contents with the following snippet:
-```javascript
-export function MSALInterceptorConfigFactory(): MsalInterceptorConfiguration {
- const protectedResourceMap = new Map<string, Array<string>>();
- protectedResourceMap.set('Enter_the_Graph_Endpoint_Herev1.0/me', ['user.read']);
+ :::code language="typescript" source="~/ms-identity-docs-code-javascript/angular-spa/src/app/app.module.ts":::
- return {
- interactionType: InteractionType.Redirect,
- protectedResourceMap
- };
-}
-```
+ * `TenantId` - The identifier of the tenant where the application is registered. Replace the text in quotes with the **Directory (tenant) ID** that was recorded earlier from the overview page of the registered application.
+ * `ClientId` - The identifier of the application, also referred to as the client. Replace the text in quotes with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application.
+ * `RedirectUri` - The **Redirect URI** of the application. If necessary, replace the text in quotes with the redirect URI that was recorded earlier from the overview page of the registered application.
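+
+   For reference only, the `auth` values in *app.module.ts* typically map to the recorded values as in the following sketch. The values shown are illustrative placeholders; the *app.module.ts* shipped with the sample is the source of truth.
+
+   ```javascript
+   // Illustrative sketch - defer to the app.module.ts included in the sample.
+   import { IPublicClientApplication, PublicClientApplication, BrowserCacheLocation } from '@azure/msal-browser';
+
+   export function MSALInstanceFactory(): IPublicClientApplication {
+     return new PublicClientApplication({
+       auth: {
+         clientId: '00000000-0000-0000-0000-000000000000', // Application (client) ID
+         authority: 'https://login.microsoftonline.com/<Directory (tenant) ID>',
+         redirectUri: 'http://localhost:4200/', // Must match the registered redirect URI
+       },
+       cache: {
+         cacheLocation: BrowserCacheLocation.LocalStorage,
+       },
+     });
+   }
+   ```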
- #### Step 4: Run the project
+## Run the application and sign in
Run the project with a web server by using Node.js: 1. To start the server, run the following commands from within the project directory:+ ```console npm install npm start ```
-1. Browse to `http://localhost:4200/`.
-
-1. Select **Login** to start the sign-in process and then call the Microsoft Graph API.
-
- The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, click the **Profile** button to display your user information on the page.
-
-## More information
-
-### How the sample works
-
-![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
-
-### msal.js
+1. Copy the https URL that appears in the terminal, for example, `https://localhost:4200`, and paste it into a browser. We recommend using a private or incognito browser session.
+1. Follow the steps and enter the necessary details to sign in with your Microsoft account. You're asked for an email address so a one-time passcode can be sent to you. Enter the code when prompted.
+1. The application will request permission to maintain access to data you have given it access to, and to sign you in and read your profile. Select **Accept**.
+1. The following screenshot appears, indicating that you have signed in to the application and have accessed your profile details from the Microsoft Graph API.
-The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
+ :::image type="content" source="./media/quickstarts/angular-spa/quickstart-angular-spa-sign-in.png" alt-text="Screenshot of JavaScript App depicting the results of the API call.":::
-If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+## Sign out from the application
-```console
-npm install @azure/msal-browser @azure/msal-angular@2
-```
+1. Find the **Sign out** link in the top right corner of the page, and select it.
+1. You'll be prompted to pick an account to sign out from. Select the account you used to sign in.
+1. A message appears indicating that you have signed out.
-## Next steps
+## Related content
-For a detailed step-by-step guide on building the auth code flow application using vanilla JavaScript, see the following tutorial:
+- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](./quickstart-web-api-aspnet-core-protect-api.md)
-> [!div class="nextstepaction"]
-> [Tutorial to sign in users and call Microsoft Graph](tutorial-v2-javascript-auth-code.md)
+- Learn more by building this Angular SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./tutorial-v2-angular-auth-code.md)
active-directory Quickstart Single Page App Javascript Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-javascript-sign-in.md
Previously updated : 07/27/2023 Last updated : 09/13/2023
# Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using JavaScript
-In this quickstart, you download and run a code sample that demonstrates how a JavaScript single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow with Proof Key for Code Exchange (PKCE). The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+This quickstart uses a sample JavaScript (JS) single-page app (SPA) to show you how to sign in users by using the [authorization code flow](/azure/active-directory/develop/v2-oauth2-auth-code-flow) with Proof Key for Code Exchange (PKCE) and call the Microsoft Graph API. The sample uses the [Microsoft Authentication Library for JavaScript](/javascript/api/@azure/msal-react) to handle authentication.
-See [How the sample works](#how-the-sample-works) for an illustration.
+In this article, you'll register a SPA in the Microsoft Entra admin center and download a sample JavaScript SPA. Next, you'll run the sample application, sign in with your personal Microsoft account or a work or school account, and sign out.
## Prerequisites
-* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* [Node.js](https://nodejs.org/en/download/)
-* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+* [Visual Studio 2022](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
-
-## Register and download your quickstart application
--
-### Step 1: Register your application
+## Register the application in the Microsoft Entra admin center
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer). 1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Browse to **Identity** > **Applications** > **Application registrations**.
+1. Browse to **Identity** > **Applications** > **App registrations**.
1. Select **New registration**.
-1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
+1. When the **Register an application** page appears, enter a name for your application, such as *identity-client-app*.
1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
-1. Under **Manage**, select **Authentication**.
-1. Under **Platform configurations**, select **Add a platform**. In the pane that opens select **Single-page application**.
-1. Set the **Redirect URI** value to `http://localhost:3000/`.
-1. Select **Configure**.
-
-### Step 2: Download the project
-
-To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-javascript-v2/archive/master.zip).
-
-### Step 3: Configure your JavaScript app
-
-In the *app* folder, open the *authConfig.js* file, and then update the `clientID`, `authority`, and `redirectUri` values in the `msalConfig` object.
-
-```javascript
-// Config object to be passed to MSAL on creation
-const msalConfig = {
- auth: {
- clientId: "Enter_the_Application_Id_Here",
- authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here",
- redirectUri: "Enter_the_Redirect_Uri_Here",
- },
- cache: {
- cacheLocation: "sessionStorage", // This configures where your cache will be stored
- storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge
- }
-};
-```
-
-Modify the values in the `msalConfig` section:
--- `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.-
- To find the value of **Application (client) ID**, go to the app registration's **Overview** page.
-- `Enter_the_Cloud_Instance_Id_Here` is the Azure cloud instance. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).-- `Enter_the_Tenant_info_here` is one of the following:
- - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
-
- To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page.
- - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
- - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
- - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+1. Select **Register**.
+1. The application's Overview pane displays upon successful registration. Record the **Application (client) ID** and **Directory (tenant) ID** to be used in your application source code.
- To find the value of **Supported account types**, go to the app registration's **Overview** page.
-- `Enter_the_Redirect_Uri_Here` is `http://localhost:3000/`.
+## Add a redirect URI
-The `authority` value in your *authConfig.js* should be similar to the following if you're using the main (global) Azure cloud:
-
-```javascript
-authority: "https://login.microsoftonline.com/common",
-```
+1. Under **Manage**, select **Authentication**.
+1. Under **Platform configurations**, select **Add a platform**. In the pane that opens, select **Single-page application**.
+1. Set the **Redirect URIs** value to `http://localhost:3000/`.
+1. Select **Configure** to apply the changes.
+1. Under **Platform Configurations** expand **Single-page application**.
+1. Confirm that for **Grant types** ![Already configured](media/quickstart-v2-javascript/green-check.png), your **Redirect URI** is eligible for the Authorization Code Flow with PKCE.
-Next, open the *graphConfig.js* file to update the `graphMeEndpoint` and `graphMailEndpoint` values in the `apiConfig` object.
+## Clone or download the sample application
-```javascript
- // Add here the endpoints for MS Graph API services you would like to use.
- const graphConfig = {
- graphMeEndpoint: "Enter_the_Graph_Endpoint_Herev1.0/me",
- graphMailEndpoint: "Enter_the_Graph_Endpoint_Herev1.0/me/messages"
- };
+To obtain the sample application, you can either clone it from GitHub or download it as a .zip file.
- // Add here scopes for access token to be used at MS Graph API endpoints.
- const tokenRequest = {
- scopes: ["Mail.Read"]
- };
-```
+- To clone the sample, open a command prompt and navigate to where you wish to create the project, and enter the following command:
-`Enter_the_Graph_Endpoint_Here` is the endpoint that API calls are made against. For the main (global) Microsoft Graph API service, enter `https://graph.microsoft.com/` (include the trailing forward-slash). For more information about Microsoft Graph on national clouds, see [National cloud deployment](/graph/deployments).
+ ```console
+ git clone https://github.com/Azure-Samples/ms-identity-javascript-tutorial
+ ```
-If you're using the main (global) Microsoft Graph API service, the `graphMeEndpoint` and `graphMailEndpoint` values in the *graphConfig.js* file should be similar to the following:
+- [Download the .zip file](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/archive/refs/heads/main.zip).
+
+## Configure the project
+
+1. In your IDE, open the project folder, *ms-identity-javascript-tutorial*, containing the sample.
+1. Open *1-Authentication/1-sign-in/App/authConfig.js* and replace the file contents with the following snippet:
+
+ ```javascript
+ /**
+ * Configuration object to be passed to MSAL instance on creation.
+ * For a full list of MSAL.js configuration parameters, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md
+ */
+
+ const msalConfig = {
+ auth: {
+ clientId: 'Enter_the_Application_Id_Here', // This is the ONLY mandatory field that you need to supply.
+ authority: 'https://login.microsoftonline.com/Enter_the_Tenant_Info_Here', // Defaults to "https://login.microsoftonline.com/common"
+ redirectUri: '/', // You must register this URI on Azure Portal/App Registration. Defaults to window.location.href e.g. http://localhost:3000/
+ navigateToLoginRequestUrl: true, // If "true", will navigate back to the original request location before processing the auth code response.
+ },
+ cache: {
+ cacheLocation: 'sessionStorage', // Configures cache location. "sessionStorage" is more secure, but "localStorage" gives you SSO.
+ storeAuthStateInCookie: false, // set this to true if you have to support IE
+ },
+ system: {
+ loggerOptions: {
+ loggerCallback: (level, message, containsPii) => {
+ if (containsPii) {
+ return;
+ }
+ switch (level) {
+ case msal.LogLevel.Error:
+ console.error(message);
+ return;
+ case msal.LogLevel.Info:
+ console.info(message);
+ return;
+ case msal.LogLevel.Verbose:
+ console.debug(message);
+ return;
+ case msal.LogLevel.Warning:
+ console.warn(message);
+ return;
+ }
+ },
+ },
+ },
+ };
+
+ /**
+ * Scopes you add here will be prompted for user consent during sign-in.
+ * By default, MSAL.js will add OIDC scopes (openid, profile, email) to any login request.
+ * For more information about OIDC scopes, visit:
+ * https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-permissions-and-consent#openid-connect-scopes
+ */
+ const loginRequest = {
+ scopes: ["openid", "profile"],
+ };
+
+ /**
+ * An optional silentRequest object can be used to achieve silent SSO
+ * between applications by providing a "login_hint" property.
+ */
+
+ // const silentRequest = {
+ // scopes: ["openid", "profile"],
+ // loginHint: "example@domain.net"
+ // };
+
+ // exporting config object for jest
+ if (typeof exports !== 'undefined') {
+ module.exports = {
+ msalConfig: msalConfig,
+ loginRequest: loginRequest,
+ };
+ }
+ ```
-```javascript
-graphMeEndpoint: "https://graph.microsoft.com/v1.0/me",
-graphMailEndpoint: "https://graph.microsoft.com/v1.0/me/messages"
-```
+ * `clientId` - The identifier of the application, also referred to as the client. Replace `Enter_the_Application_Id_Here` with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application.
+ * `authority` - The sign-in authority, which includes the tenant where the application is registered. Replace `Enter_the_Tenant_Info_Here` with the **Directory (tenant) ID** that was recorded earlier from the overview page of the registered application.
+ * `redirectUri` - The **Redirect URI** of the application. If necessary, replace the value with the redirect URI that was recorded earlier from the overview page of the registered application.
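+
+   The values in this file are consumed elsewhere in the sample when MSAL is initialized. As a rough, illustrative sketch (not the sample's exact code), initialization and sign-in with MSAL.js v2 loaded from the CDN look similar to the following:
+
+   ```javascript
+   // Illustrative sketch - assumes msal-browser v2 is loaded globally as `msal`.
+   const myMSALObj = new msal.PublicClientApplication(msalConfig);
+
+   function signIn() {
+       myMSALObj
+           .loginPopup(loginRequest) // Prompts the user and requests the scopes in loginRequest
+           .then((response) => {
+               console.log(`Signed in as: ${response.account.username}`);
+           })
+           .catch((error) => {
+               console.error(error);
+           });
+   }
+   ```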
-### Step 4: Run the project
+## Run the application and sign in
-Run the project with a web server by using Node.js.
+Run the project with a web server by using Node.js:
1. To start the server, run the following commands from within the project directory:
Run the project with a web server by using Node.js.
npm install npm start ```
+1. Copy the https URL that appears in the terminal, for example, `https://localhost:3000`, and paste it into a browser. We recommend using a private or incognito browser session.
+1. Follow the steps and enter the necessary details to sign in with your Microsoft account. You're asked for an email address so a one-time passcode can be sent to you. Enter the code when prompted.
+1. The application will request permission to maintain access to data you have given it access to, and to sign you in and read your profile. Select **Accept**.
+1. The following screenshot appears, indicating that you have signed in to the application and have accessed your profile details from the Microsoft Graph API.
-1. Go to `http://localhost:3000/`.
-
-1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
-
- The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, your user profile information is displayed on the page.
-
-## More information
-
-### How the sample works
-
-![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
-
-### MSAL.js
-
-The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by Microsoft identity platform. The sample's *https://docsupdatetracker.net/index.html* file contains a reference to the library:
-
-```html
-<script type="text/javascript" src="https://alcdn.msauth.net/browser/2.0.0-beta.0/js/msal-browser.js" integrity=
-"sha384-r7Qxfs6PYHyfoBR6zG62DGzptfLBxnREThAlcJyEfzJ4dq5rqExc1Xj3TPFE/9TH" crossorigin="anonymous"></script>
-```
+ :::image type="content" source="./media/quickstarts/js-spa/quickstart-js-spa-sign-in.png" alt-text="Screenshot of JavaScript App depicting the results of the API call.":::
-If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+## Sign out from the application
-```console
-npm install @azure/msal-browser
-```
+1. Find the **Sign out** link in the top right corner of the page, and select it.
+1. You'll be prompted to pick an account to sign out from. Select the account you used to sign in.
+1. A message appears indicating that you have signed out.
-## Next steps
+## Related content
-For a more detailed step-by-step guide on building the application used in this quickstart, see the following tutorial:
+- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](./quickstart-web-api-aspnet-core-protect-api.md).
-> [!div class="nextstepaction"]
-> [Tutorial to sign in users and call Microsoft Graph](tutorial-v2-javascript-auth-code.md)
+- Learn more by building this JavaScript SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./tutorial-v2-javascript-spa.md)
active-directory Quickstart Single Page App React Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-react-sign-in.md
Previously updated : 07/27/2023 Last updated : 09/13/2023
# Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using React
+This quickstart uses a sample React single-page app (SPA) to show you how to sign in users by using the [authorization code flow](/azure/active-directory/develop/v2-oauth2-auth-code-flow) with Proof Key for Code Exchange (PKCE). The sample uses the [Microsoft Authentication Library for JavaScript](/javascript/api/@azure/msal-react) to handle authentication.
-In this quickstart, you download and run a code sample that demonstrates how a JavaScript React single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
-
-See [How the sample works](#how-the-sample-works) for an illustration.
+In this article, you'll register a SPA in the Microsoft Entra admin center and download a sample React SPA. Next, you'll run the sample application, sign in with your personal Microsoft account or a work or school account, and sign out.
## Prerequisites
-* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* [Node.js](https://nodejs.org/en/download/)
-* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
--
-## Register and download your quickstart application
+* [Visual Studio 2022](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
-
-### Step 1: Register your application
+## Register the application in the Microsoft Entra admin center
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer). 1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application. 1. Browse to **Identity** > **Applications** > **App registrations**. 1. Select **New registration**.
-1. When the **Register an application** page appears, enter a name for your application.
+1. When the **Register an application** page appears, enter a name for your application, such as *identity-client-app*.
1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
+1. Select **Register**.
+1. The application's Overview pane displays upon successful registration. Record the **Application (client) ID** and **Directory (tenant) ID** to be used in your application source code.
+
+## Add a redirect URI
+ 1. Under **Manage**, select **Authentication**.
-1. Under **Platform configurations**, select **Add a platform**. In the pane that opens select **Single-page application**.
-1. Set the **Redirect URIs** value to `http://localhost:3000/`. This is the default port NodeJS will listen on your local machine. We'll return the authentication response to this URI after successfully authenticating the user.
+1. Under **Platform configurations**, select **Add a platform**. In the pane that opens, select **Single-page application**.
+1. Set the **Redirect URIs** value to `http://localhost:3000/`.
1. Select **Configure** to apply the changes. 1. Under **Platform Configurations** expand **Single-page application**.
-1. Confirm that under **Grant types** ![Already configured](media/quickstart-v2-javascript/green-check.png) Your Redirect URI is eligible for the Authorization Code Flow with PKCE.
-
-### Step 2: Download the project
-
-To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-javascript-react-spa/archive/main.zip).
-
-### Step 3: Configure your JavaScript app
-
-In the *src* folder, open the *authConfig.js* file and update the `clientID`, `authority`, and `redirectUri` values in the `msalConfig` object.
+1. Confirm that for **Grant types** ![Already configured](media/quickstart-v2-javascript/green-check.png), your **Redirect URI** is eligible for the Authorization Code Flow with PKCE.
-```javascript
-/**
-* Configuration object to be passed to MSAL instance on creation.
-* For a full list of MSAL.js configuration parameters, visit:
-* https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md
-*/
-export const msalConfig = {
- auth: {
- clientId: "Enter_the_Application_Id_Here",
- authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here",
- redirectUri: "Enter_the_Redirect_Uri_Here"
- },
- cache: {
- cacheLocation: "sessionStorage", // This configures where your cache will be stored
- storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge
- },
-```
+## Clone or download the sample application
-Modify the values in the `msalConfig` section as described here:
+To obtain the sample application, you can either clone it from GitHub or download it as a *.zip* file.
-- `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+- To clone the sample, open a command prompt and navigate to where you wish to create the project, and enter the following command:
- To find the value of **Application (client) ID**, go to the app registration's **Overview** page.
-- `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).-- `Enter_the_Tenant_info_here` is set to one of the following:
- - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
-
- To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page.
- - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
- - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
- - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+ ```console
+ git clone https://github.com/Azure-Samples/ms-identity-docs-code-javascript.git
+ ```
+- [Download the .zip file](https://github.com/Azure-Samples/ms-identity-docs-code-javascript/archive/refs/heads/main.zip)
- To find the value of **Supported account types**, go to the app registration's **Overview** page.
-- `Enter_the_Redirect_Uri_Here` is `http://localhost:3000/`.
+If you choose to download the `.zip` file, extract the sample app file to a folder where the total length of the path is 260 or fewer characters.
-The `authority` value in your *authConfig.js* should be similar to the following if you're using the main (global) Azure cloud:
+## Configure the project
-```javascript
-authority: "https://login.microsoftonline.com/common",
-```
+1. In your IDE, open the project folder, *ms-identity-docs-code-javascript/react-spa*, containing the sample.
+1. Open *src/authConfig.js* and replace the file contents with the following snippet:
-Scroll down in the same file and update the `graphMeEndpoint`.
-- Replace the string `Enter_the_Graph_Endpoint_Herev1.0/me` with `https://graph.microsoft.com/v1.0/me`-- `Enter_the_Graph_Endpoint_Herev1.0/me` is the endpoint that API calls will be made against. For the main (global) Microsoft Graph API service, enter `https://graph.microsoft.com/` (include the trailing forward-slash). For more information, see the [documentation](/graph/deployments).
+ :::code language="javascript" source="~/ms-identity-docs-code-javascript/react-spa/src/authConfig.js":::
-```javascript
- // Add here the endpoints for MS Graph API services you would like to use.
- export const graphConfig = {
- graphMeEndpoint: "Enter_the_Graph_Endpoint_Herev1.0/me"
- };
-```
+ * `TenantId` - The identifier of the tenant where the application is registered. Replace the text in quotes with the **Directory (tenant) ID** that was recorded earlier from the overview page of the registered application.
+ * `ClientId` - The identifier of the application, also referred to as the client. Replace the text in quotes with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application.
+ * `RedirectUri` - The **Redirect URI** of the application. If necessary, replace the text in quotes with the redirect URI that was recorded earlier from the overview page of the registered application.
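+
+   As a cross-check, the `auth` section of *authConfig.js* generally maps to the recorded values as in the following sketch. The values shown are illustrative; the *authConfig.js* shipped with the sample is authoritative.
+
+   ```javascript
+   // Illustrative sketch - defer to the authConfig.js included in the sample.
+   export const msalConfig = {
+     auth: {
+       clientId: "00000000-0000-0000-0000-000000000000", // Application (client) ID
+       authority: "https://login.microsoftonline.com/<Directory (tenant) ID>",
+       redirectUri: "http://localhost:3000/", // Must match the registered redirect URI
+     },
+     cache: {
+       cacheLocation: "sessionStorage",
+       storeAuthStateInCookie: false,
+     },
+   };
+   ```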
-### Step 4: Run the project
+## Run the application and sign in
Run the project with a web server by using Node.js: 1. To start the server, run the following commands from within the project directory:+ ```console npm install npm start ```
-1. Browse to `http://localhost:3000/`.
-
-1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
-
- The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, click on the **Request Profile Information** to display your profile information on the page.
-
-## More information
-
-### How the sample works
-
-![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
-
-### msal.js
+1. Copy the https URL that appears in the terminal, for example, `https://localhost:3000`, and paste it into a browser. We recommend using a private or incognito browser session.
+1. Follow the steps and enter the necessary details to sign in with your Microsoft account. You're asked for an email address so a one-time passcode can be sent to you. Enter the code when prompted.
+1. The application will request permission to maintain access to data you have given it access to, and to sign you in and read your profile. Select **Accept**.
+1. The following screenshot appears, indicating that you have signed in to the application and have accessed your profile details from the Microsoft Graph API.
-The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
+ :::image type="content" source="./media/single-page-app-tutorial-04-call-api/display-api-call-results.png" alt-text="Screenshot of React App depicting the results of the API call.":::
-If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+## Sign out from the application
-```console
-npm install @azure/msal-browser @azure/msal-react
-```
+1. Find the **Sign out** link in the top right corner of the page, and select it.
+1. You'll be prompted to pick an account to sign out from. Select the account you used to sign in.
+1. A message appears indicating that you have signed out.
-## Next steps
+## Related content
-Next, try a step-by-step tutorial to learn how to build a React SPA from scratch that signs in users and calls the Microsoft Graph API to get user profile data:
+- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](./quickstart-web-api-aspnet-core-protect-api.md)
-> [!div class="nextstepaction"]
-> [Tutorial: Sign in users and call Microsoft Graph](./single-page-app-tutorial-01-register-app.md)
+- Learn more by building this React SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./single-page-app-tutorial-01-register-app.md)
active-directory Quickstart Web App Aspnet Core Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-web-app-aspnet-core-sign-in.md
Title: "Quickstart: Sign in users and call the Microsoft Graph API from an ASP.NET Core web app"
-description: Learn how an ASP.NET Core web app leverages Microsoft.Identity.Web to implement Microsoft sign-in using OpenID Connect and call Microsoft Graph
+description: Learn how an ASP.NET Core web app uses Microsoft.Identity.Web to implement Microsoft sign-in using OpenID Connect and call Microsoft Graph
Previously updated : 04/16/2023 Last updated : 08/28/2023
# Quickstart: Sign in users and call the Microsoft Graph API from an ASP.NET Core web app
-The following quickstart uses a ASP.NET Core web app code sample to demonstrate how to sign in users from any Azure Active Directory (Azure AD) organization.
-See [How the sample works](#how-the-sample-works) for an illustration.
+This quickstart uses a sample ASP.NET Core web app to show you how to sign in users by using the [authorization code flow](./v2-oauth2-auth-code-flow.md) and call the Microsoft Graph API. The sample uses [Microsoft Authentication Library for .NET](/entra/msal/dotnet/) and [Microsoft Identity Web](/entra/msal/dotnet/microsoft-identity-web/) for ASP.NET to handle authentication.
+
+In this article, you register a web application in the Microsoft Entra admin center and download a sample ASP.NET web application. You'll run the sample application, sign in with your personal Microsoft account or a work or school account, and sign out.
## Prerequisites
-* [Visual Studio 2022](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
+* An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* [.NET Core SDK 6.0+](https://dotnet.microsoft.com/download)
+* [Visual Studio 2022](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
-## Register and download your quickstart application
-
-### Step 1: Register your application
-
+## Register the application in the Microsoft Entra admin center
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
-1. For **Name**, enter a name for the application. For example, enter **AspNetCore-Quickstart**. Users of the app will see this name, and can be changed later.
-1. Set the **Redirect URI** type to **Web** and value to `https://localhost:44321/signin-oidc`.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. On the page that appears, select **+ New registration**.
+1. When the **Register an application** page appears, enter a name for your application, such as *identity-client-app*.
+1. Under **Supported account types**, select *Accounts in this organizational directory only*.
1. Select **Register**.
-1. Under **Manage**, select **Authentication**.
-1. For **Front-channel logout URL**, enter **https://localhost:44321/signout-oidc**.
-1. Under **Implicit grant and hybrid flows**, select **ID tokens**.
-1. Select **Save**.
-1. Under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
-1. Enter a **Description**, for example `clientsecret1`.
-1. Select **In 1 year** for the secret's expiration.
-1. Select **Add** and immediately record the secret's **Value** for use in a later step. The secret value is *never displayed again* and is irretrievable by any other means. Record it in a secure location as you would any password.
-
-### Step 2: Download the ASP.NET Core project
-
-[Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1-callsgraph.zip)
-
-### Step 3: Configure your ASP.NET Core project
-
-1. Extract the *.zip* file to a local folder that's close to the root of the disk to avoid errors caused by path length limitations on Windows. For example, extract to *C:\Azure-Samples*.
-1. Open the solution in the chosen code editor.
-1. In *appsettings.json*, replace the values of `ClientId`, and `TenantId`. The value for the application (client) ID and the directory (tenant) ID, can be found in the app's **Overview** page on the Azure portal.
-
- ```json
- "Domain": "[Enter the domain of your tenant, e.g. contoso.onmicrosoft.com]",
- "ClientId": "Enter_the_Application_Id_here",
- "TenantId": "common",
- ```
-
- - `Enter_the_Application_Id_Here` is the application (client) ID for the registered application.
- - Replace `Enter_the_Tenant_Info_Here` with one of the following:
- - If the application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or tenant name (for example, `contoso.onmicrosoft.com`). The directory (tenant) ID can be found on the app's **Overview** page.
- - If the application supports **Accounts in any organizational directory**, replace this value with `organizations`.
- - If the application supports **All Microsoft account users**, leave this value as `common`.
- - Replace `Enter_the_Client_Secret_Here` with the **Client secret** that was created and recorded in an earlier step.
-
-For this quickstart, don't change any other values in the *appsettings.json* file.
-
-### Step 4: Build and run the application
+1. The application's **Overview** pane displays upon successful registration. Record the **Application (client) ID** and **Directory (tenant) ID** to be used in your application source code.
-Build and run the app in Visual Studio by selecting the **Debug** menu > **Start Debugging**, or by pressing the F5 key.
+## Add a redirect URI
-A prompt for credentials will appear, and then a request for consent to the permissions that the app requires. Select **Accept** on the consent prompt.
--
-After consenting to the requested permissions, the app displays that sign-in has been successful using correct Azure Active Directory credentials. The user's account email address will be displayed in the *API result* section of the page. This was extracted using the Microsoft Graph API.
--
-## More information
-
-This section gives an overview of the code required to sign in users and call the Microsoft Graph API on their behalf. This overview can be useful to understand how the code works, main arguments, and also if you want to add sign-in to an existing ASP.NET Core application and call Microsoft Graph. It uses [Microsoft.Identity.Web](microsoft-identity-web.md), which is a wrapper around [MSAL.NET](msal-overview.md).
-
-### How the sample works
-
-![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-core-webapp/aspnetcorewebapp-intro.svg)
-
-### Startup class
-
-The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process starts:
-
-```csharp
- // Get the scopes from the configuration (appsettings.json)
- var initialScopes = Configuration.GetValue<string>("DownstreamApi:Scopes")?.Split(' ');
-
- public void ConfigureServices(IServiceCollection services)
- {
- // Add sign-in with Microsoft
- services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
- .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"))
-
- // Add the possibility of acquiring a token to call a protected web API
- .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
+1. Under **Manage**, select **Authentication**.
+1. Under **Platform configurations**, select **Add a platform**. In the pane that opens, select **Web**.
+1. For **Redirect URIs**, enter `https://localhost:5001/signin-oidc`.
+1. Under **Front-channel logout URL**, enter `https://localhost:5001/signout-oidc`.
+1. Select **Configure** to apply the changes.
- // Enables controllers and pages to get GraphServiceClient by dependency injection
- // And use an in memory token cache
- .AddMicrosoftGraph(Configuration.GetSection("DownstreamApi"))
- .AddInMemoryTokenCaches();
+## Clone or download the sample application
- services.AddControllersWithViews(options =>
- {
- var policy = new AuthorizationPolicyBuilder()
- .RequireAuthenticatedUser()
- .Build();
- options.Filters.Add(new AuthorizeFilter(policy));
- });
+To obtain the sample application, you can either clone it from GitHub or download it as a *.zip* file.
+- [Download the .zip file](https://github.com/Azure-Samples/ms-identity-docs-code-dotnet/archive/refs/heads/main.zip). Extract it to a file path where the length of the name is fewer than 260 characters.
+- To clone the sample, open a command prompt and navigate to where you wish to create the project, and enter the following command:
+
+ ```console
+ git clone https://github.com/Azure-Samples/ms-identity-docs-code-dotnet.git
+ ```
- // Enables a UI and controller for sign in and sign out.
- services.AddRazorPages()
- .AddMicrosoftIdentityUI();
- }
-```
+## Create and upload a self-signed certificate
-The `AddAuthentication()` method configures the service to add cookie-based authentication. This authentication is used in browser scenarios and to set the challenge to OpenID Connect.
+1. In your terminal, run the following commands to navigate to the project directory and create a self-signed certificate.
-The line that contains `.AddMicrosoftIdentityWebApp` adds Microsoft identity platform authentication to the application. The application is then configured to sign in users based on the following information in the `AzureAD` section of the *appsettings.json* configuration file:
+ ```console
+ cd ms-identity-docs-code-dotnet\web-app-aspnet\
+ dotnet dev-certs https -ep ./certificate.crt --trust
+ ```
-| *appsettings.json* key | Description |
-||-|
-| `ClientId` | Application (client) ID of the application registered in the Azure portal. |
-| `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
-| `TenantId` | Name of your tenant or the tenant ID (a GUID), or `common` to sign in users with work or school accounts or Microsoft personal accounts. |
+1. Return to the Microsoft Entra admin center, and under **Manage**, select **Certificates & secrets** > **Upload certificate**.
+1. Select the **Certificates (0)** tab, then select **Upload certificate**.
+1. An **Upload certificate** pane appears. Use the icon to navigate to the certificate file you created in the previous step, and select **Open**.
+1. Enter a description for the certificate, for example *Certificate for aspnet-web-app*, and select **Add**.
+1. Record the **Thumbprint** value for use in the next step.
-The `EnableTokenAcquisitionToCallDownstreamApi` method enables the application to acquire a token to call protected web APIs. `AddMicrosoftGraph` enables the controllers or Razor pages to benefit directly the `GraphServiceClient` (by dependency injection) and the `AddInMemoryTokenCaches` methods enables your app to benefit from a token cache.
+## Configure the project
-The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality. Also in the `Configure()` method, you must register Microsoft Identity Web routes with at least one call to `endpoints.MapControllerRoute()` or a call to `endpoints.MapControllers()`:
+1. In your IDE, open the project folder, *ms-identity-docs-code-dotnet\web-app-aspnet*, containing the sample.
+1. Open *appsettings.json* and replace the file contents with the following snippet:
-```csharp
-app.UseAuthentication();
-app.UseAuthorization();
+ :::code language="csharp" source="~/ms-identity-docs-code-dotnet/web-app-aspnet/appsettings.json" :::
-app.UseEndpoints(endpoints =>
-{
- endpoints.MapControllerRoute(
- name: "default",
- pattern: "{controller=Home}/{action=Index}/{id?}");
- endpoints.MapRazorPages();
-});
-```
+ * `TenantId` - The identifier of the tenant where the application is registered. Replace the text in quotes with the `Directory (tenant) ID` that was recorded earlier from the overview page of the registered application.
+ * `ClientId` - The identifier of the application, also referred to as the client. Replace the text in quotes with the `Application (client) ID` value that was recorded earlier from the overview page of the registered application.
+ * `ClientCertificates` - A self-signed certificate is used for authentication in the application. Replace the text of the `CertificateThumbprint` with the thumbprint of the certificate that was previously recorded.
-### Protect a controller or a controller's method
+## Run the application and sign in
-The controller or its methods can be protected by applying the `[Authorize]` attribute to the controller's class or one or more of its methods. This `[Authorize]` attribute restricts access by allowing only authenticated users. If the user isn't already authenticated, an authentication challenge can be started to access the controller. In this quickstart, the scopes are read from the configuration file:
+1. In your project directory, use the terminal to enter the following command:
-```csharp
-[AuthorizeForScopes(ScopeKeySection = "DownstreamApi:Scopes")]
-public async Task<IActionResult> Index()
-{
- var user = await _graphServiceClient.Me.GetAsync();
- ViewData["ApiResult"] = user.DisplayName;
+ ```console
+ dotnet run
+ ```
- return View();
-}
-```
+1. Copy the `https` URL that appears in the terminal, for example, `https://localhost:5001`, and paste it into a browser. We recommend using a private or incognito browser session.
+1. Follow the steps and enter the necessary details to sign in with your Microsoft account. You'll be asked for an email address so that a one-time passcode can be sent to you. Enter the code when prompted.
+1. The application will request permission to maintain access to data you have given it access to, and to sign you in and read your profile. Select **Accept**.
+1. The following screenshot appears, indicating that you have signed in to the application and have accessed your profile details from the Microsoft Graph API.
+ ![Screenshot of the application showing the user's profile details.](media/quickstarts/aspnet-core/quickstart-dotnet-webapp-sign-in.png)
-## Next steps
+## Sign out from the application
-The following GitHub repository contains the ASP.NET Core code sample referenced in this quickstart and more samples that show how to achieve the following:
+1. Find the **Sign out** link in the top right corner of the page, and select it.
+1. You'll be prompted to pick an account to sign out from. Select the account you used to sign in.
+1. A message appears indicating that you have signed out.
+1. Although you have signed out, the application is still running from your terminal. To stop the application in your terminal, press **Ctrl+C**.
-- Add authentication to a new ASP.NET Core web application.-- Call Microsoft Graph, other Microsoft APIs, or your own web APIs.-- Add authorization.-- Sign in users in national clouds or with social identities.
+## Related content
-> [!div class="nextstepaction"]
-> [ASP.NET Core web app tutorials on GitHub](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/)
+- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](./quickstart-web-api-aspnet-core-protect-api.md).
+- Create an ASP.NET web app from scratch with the series [Tutorial: Register an application with the Microsoft identity platform](./web-app-tutorial-01-register-application.md).
active-directory Tenant Restrictions V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tenant-restrictions-v2.md
Although these alternatives provide protection, certain scenarios can only be co
After you create a tenant restrictions v2 policy, you can enforce the policy on each Windows 10, Windows 11, and Windows Server 2022 device by adding your tenant ID and the policy ID to the device's **Tenant Restrictions** configuration. When tenant restrictions are enabled on a Windows device, corporate proxies aren't required for policy enforcement. Devices don't need to be Azure AD managed to enforce tenant restrictions v2; domain-joined devices that are managed with Group Policy are also supported. > [!NOTE]
-> Tenant restrictions V2 on Windows is a partial solution that protects the authentication and data planes for some scenarios. It works on managed Windows devices and does not protect .NET stack, Chrome, or Firefox. The Windows solution provides a temporary solution until general availability of Universal tenant restrictions in Global Secure Access (preview).
+> Tenant restrictions V2 on Windows is a partial solution that protects the authentication and data planes for some scenarios. It works on managed Windows devices and does not protect .NET stack, Chrome, or Firefox. The Windows solution provides a temporary solution until general availability of Universal tenant restrictions in [Microsoft Entra Global Secure Access (preview)](/azure/global-secure-access/overview-what-is-global-secure-access).
#### Administrative Templates (.admx) for Windows 10 November 2021 Update (21H2) and Group policy settings
active-directory Concept Fundamentals Mfa Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-mfa-get-started.md
- Title: Azure AD Multi-Factor Authentication for your organization
-description: Learn about the available features of Azure AD Multi-Factor Authentication for your organization based on your license model
----- Previously updated : 03/18/2020--------
-# Overview of Azure AD Multi-Factor Authentication for your organization
-
-There are multiple ways to enable Azure AD Multi-Factor Authentication for your Azure Active Directory (AD) users based on the licenses that your organization owns.
-
-![Investigate signals and enforce MFA if needed](./media/concept-fundamentals-mfa-get-started/verify-signals-and-perform-mfa-if-required.png)
-
-Based on our studies, your account is more than 99.9% less likely to be compromised if you use multi-factor authentication (MFA).
-
-So how does your organization turn on MFA even for free, before becoming a statistic?
-
-## Free option
-
-Customers who are utilizing the free benefits of Azure AD can use [security defaults](../fundamentals/security-defaults.md) to enable multi-factor authentication in their environment.
-
-## Microsoft 365 Business, E3, or E5
-
-For customers with Microsoft 365, there are two options:
-
-* Azure AD Multi-Factor Authentication is either enabled or disabled for all users, for all sign-in events. There is no ability to only enable multi-factor authentication for a subset of users, or only under certain scenarios. Management is through the Office 365 portal.
-* For an improved user experience, upgrade to Azure AD Premium P1 or P2 and use Conditional Access. For more information, see secure Microsoft 365 resources with multi-factor authentication.
-
-## Azure AD Premium P1
-
-For customers with Azure AD Premium P1 or similar licenses that include this functionality such as Enterprise Mobility + Security E3, Microsoft 365 F1, or Microsoft 365 E3:
-
-Use [Azure AD Conditional Access](../authentication/tutorial-enable-azure-mfa.md) to prompt users for multi-factor authentication during certain scenarios or events to fit your business requirements.
-
-## Azure AD Premium P2
-
-For customers with Azure AD Premium P2 or similar licenses that include this functionality such as Enterprise Mobility + Security E5 or Microsoft 365 E5:
-
-Provides the strongest security position and improved user experience. Adds [risk-based Conditional Access](../conditional-access/howto-conditional-access-policy-risk.md) to the Azure AD Premium P1 features that adapts to user's patterns and minimizes multi-factor authentication prompts.
-
-## Authentication methods
-
-| Method | Security defaults | All other methods |
-| | | |
-| Notification through mobile app | X | X |
-| Verification code from mobile app or hardware token | | X |
-| Text message to phone | | X |
-| Call to phone | | X |
-
-## Next steps
-
-To get started, see the tutorial to [secure user sign-in events with Azure AD Multi-Factor Authentication](../authentication/tutorial-enable-azure-mfa.md).
-
-For more information on licensing, see [Features and licenses for Azure AD Multi-Factor Authentication](../authentication/concept-mfa-licensing.md).
active-directory Concept Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md
To learn more about Azure AD built-in roles and their permissions, see [Azure AD
One Azure AD tenant can have up to 500 role-assignable groups. To learn more about Azure AD service limits and restrictions, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md).
-Azure AD role-assignable group feature is not part of Azure AD Privileged Identity Management (Azure AD PIM). It requires a Microsoft Entra Premium P1, P2, or Microsoft Entra ID Governance license.
+The Azure AD role-assignable group feature is not part of Azure AD Privileged Identity Management (Azure AD PIM). For more information on licensing, see [Microsoft Entra ID Governance licensing fundamentals](../../active-directory/governance/licensing-fundamentals.md).
+ ## Relationship between role-assignable groups and PIM for Groups
If a user is an active member of Group A, and Group A is an eligible member of G
## Privileged Identity Management and app provisioning (Public Preview)
-> [!VIDEO https://www.youtube.com/embed/9T6lKEloq0Q]
- If the group is configured for [app provisioning](../app-provisioning/index.yml), activation of group membership will trigger provisioning of group membership (and user account itself if it wasn't provisioned previously) to the application using SCIM protocol. In Public Preview we have a functionality that triggers provisioning right after group membership is activated in PIM.
active-directory Pim Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-apis.md
-# Understand the Privileged Identity Management APIs
+# Privileged Identity Management APIs
-You can perform Privileged Identity Management (PIM) tasks using the Microsoft Graph APIs for Azure Active Directory (Azure AD) roles and groups, and the Azure Resource Manager API for Azure resource roles. This article describes important concepts for using the APIs for Privileged Identity Management.
+Privileged Identity Management (PIM), part of Microsoft Entra, includes three providers:
-For requests and other details about PIM APIs, check out:
+ - PIM for Azure AD roles
+ - PIM for Azure resources
+ - PIM for Groups
+
+You can manage assignments in PIM for Azure AD roles and PIM for Groups using Microsoft Graph API. You can manage assignments in PIM for Azure Resources using Azure Resource Manager (ARM) API. This article describes important concepts for using the APIs for Privileged Identity Management.
+
+For more details about the APIs you can use to manage assignments, see the following documentation:
- [PIM for Azure AD roles API reference](/graph/api/resources/privilegedidentitymanagementv3-overview)-- [PIM for groups API reference (preview))(/graph/api/resources/privilegedidentitymanagement-for-groups-api-overview)-- [PIM for Azure resource roles API reference](/rest/api/authorization/roleeligibilityschedulerequests)
+- [PIM for Azure resource roles API reference](/rest/api/authorization/privileged-role-eligibility-rest-sample)
+- [PIM for Groups API reference](/graph/api/resources/privilegedidentitymanagement-for-groups-api-overview)
+- [PIM Alerts for Azure AD Roles API reference](/graph/api/resources/privilegedidentitymanagementv3-overview?view=graph-rest-beta#building-blocks-of-the-pim-alerts-apis)
+- [PIM Alerts for Azure Resources API reference](/rest/api/authorization/role-management-alert-rest-sample)
+ ## PIM API history
-There have been several iterations of the PIM APIs over the past few years. You'll find some overlaps in functionality, but they don't represent a linear progression of versions.
+There have been several iterations of the PIM API over the past few years. You'll find some overlaps in functionality, but they don't represent a linear progression of versions.
### Iteration 1 – Deprecated
-Under the `/beta/privilegedRoles` endpoint, Microsoft had a classic version of the PIM APIs which only supported Azure AD roles. Access to this API was retired in June 2021.
+Under the `/beta/privilegedRoles` endpoint, Microsoft had a classic version of the PIM API that only supported Azure AD roles. This API is no longer supported, and access to it was deprecated in June 2021.
### Iteration 2 – Supports Azure AD roles and Azure resource roles
-Under the `/beta/privilegedAccess` endpoint, Microsoft supported both `/aadRoles` and `/azureResources`. The `/aadRoles` endpoint has been retired but the `/azureResources` endpoint is still available in your tenant. Microsoft recommends against starting any new development with the APIs available through the `/azureResources` endpoint. This API will never be released to general availability and will be eventually deprecated and retired.
-
-### Current iteration ΓÇô Azure AD roles and groups in Microsoft Graph and Azure resource roles in Azure Resource Manager
-
-Currently, in general availability, this is the final iteration of the PIM APIs. Based on customer feedback, the PIM APIs for managing Azure AD roles are now under the **unifiedRoleManagement** set of APIs and the Azure Resource PIM APIs is now under the Azure Resource Manager role assignment APIs. These locations also provide a few additional benefits including:
--- Alignment of the PIM APIs for regular role assignment of both Azure AD roles and Azure Resource roles.-- Reducing the need to call additional PIM APIs to onboard a resource, get a resource, or get a role definition.-- Supporting app-only permissions.-- New features such as approval and email notification configuration.-
-This iteration also includes PIM APIs for managing ownership and membership of groups as well as security alerts for PIM for Azure AD roles.
+Under the `/beta/privilegedAccess` endpoint, Microsoft supported both `/aadRoles` and `/azureResources`. This endpoint is still available in your tenant, but Microsoft recommends against starting any new development with this API. This beta API will never be released to general availability and will eventually be deprecated.
-## Current permissions required
+### Iteration 3 (Current) – PIM for Azure AD roles and groups in Microsoft Graph API, and for Azure resources in ARM API
-### Azure AD roles
+This is the final iteration of the PIM API. It includes:
+ - PIM for Azure AD Roles in Microsoft Graph API - Generally available.
+ - PIM for Azure resources in ARM API - Generally available.
+ - PIM for groups in Microsoft Graph API - Preview.
+ - PIM Alerts for Azure AD Roles in Microsoft Graph API - Preview.
+ - PIM Alerts for Azure Resources in ARM API - Preview.
-To understand the permissions that you need to call the PIM Microsoft Graph API for Azure AD roles, see [Role management permissions](/graph/permissions-reference#role-management-permissions).
+Having PIM for Azure AD roles in Microsoft Graph API and PIM for Azure resources in ARM API provides a few benefits, including:
+ - Alignment of the PIM API with the regular role assignment API for both Azure AD roles and Azure resource roles.
+ - Reducing the need to call additional PIM APIs to onboard a resource, get a resource, or get a role definition.
+ - Supporting app-only permissions.
+ - New features such as approval and email notification configuration.
-The easiest way to specify the required permissions is to use the Azure AD consent framework.
-### Azure resource roles
+### Overview of PIM API iteration 3
- The PIM API for Azure resource roles is developed on top of the Azure Resource Manager framework. You will need to give consent to Azure Resource Management but wonΓÇÖt need any Microsoft Graph API permission. You will also need to make sure the user or the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
+PIM APIs across providers (both Microsoft Graph APIs and ARM APIs) follow the same principles.
-## Calling PIM API with an app-only token
+#### Assignments management
+To create, renew, extend, or update assignments (active or eligible), and to activate or deactivate eligible assignments, use the **\*AssignmentScheduleRequest** and **\*EligibilityScheduleRequest** resources (a minimal request example follows this list):
-### Azure AD roles
+ - For Azure AD Roles: [unifiedRoleAssignmentScheduleRequest](/graph/api/resources/unifiedroleassignmentschedulerequest), [unifiedRoleEligibilityScheduleRequest](/graph/api/resources/unifiedroleeligibilityschedulerequest);
+ - For Azure resources: [Role Assignment Schedule Request](/rest/api/authorization/role-assignment-schedule-requests), [Role Eligibility Schedule Request](/rest/api/authorization/role-eligibility-schedule-requests);
+ - For Groups: [privilegedAccessGroupAssignmentScheduleRequest](/graph/api/resources/privilegedaccessgroupassignmentschedulerequest), [privilegedAccessGroupEligibilityScheduleRequest](/graph/api/resources/privilegedaccessgroupeligibilityschedulerequest).
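+
+For illustration, a minimal sketch of creating an eligible Azure AD role assignment by posting a **unifiedRoleEligibilityScheduleRequest** to Microsoft Graph; the GUIDs are placeholders and the schedule values are examples only:
+
+````HTTP
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleEligibilityScheduleRequests
+Content-Type: application/json
+
+{
+  "action": "adminAssign",
+  "justification": "Example eligible assignment",
+  "roleDefinitionId": "<role-definition-GUID>",
+  "directoryScopeId": "/",
+  "principalId": "<user-or-group-GUID>",
+  "scheduleInfo": {
+    "startDateTime": "2023-09-14T00:00:00Z",
+    "expiration": {
+      "type": "afterDuration",
+      "duration": "P90D"
+    }
+  }
+}
+````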
- PIM API now supports app-only permissions on top of delegated permissions.
+Creation of **\*AssignmentScheduleRequest** or **\*EligibilityScheduleRequest** objects may lead to creation of read-only **\*AssignmentSchedule**, **\*EligibilitySchedule**, **\*AssignmentScheduleInstance**, and **\*EligibilityScheduleInstance** objects.
-- For app-only permissions, you must call the API with an application that's already been consented with either the required Azure AD or Azure role permissions.-- For delegated permission, you must call the PIM API with both a user and an application token. The user must be assigned to either the Global Administrator role or Privileged Role Administrator role, and ensure that the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
+ - **\*AssignmentSchedule** and **\*EligibilitySchedule** objects show current assignments and requests for assignments to be created in the future.
+ - **\*AssignmentScheduleInstance** and **\*EligibilityScheduleInstance** objects show current assignments only.
-### Azure resource roles
+When an eligible assignment is activated (that is, a create **\*AssignmentScheduleRequest** operation is called), the **\*EligibilityScheduleInstance** continues to exist, and new **\*AssignmentSchedule** and **\*AssignmentScheduleInstance** objects are created for the activated duration.
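+
+For example, activation of an eligible Azure AD role is itself a create operation on the assignment schedule request resource. A minimal sketch, with placeholder GUIDs and example schedule values:
+
+````HTTP
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests
+Content-Type: application/json
+
+{
+  "action": "selfActivate",
+  "justification": "Example self-activation",
+  "roleDefinitionId": "<role-definition-GUID>",
+  "directoryScopeId": "/",
+  "principalId": "<user-GUID>",
+  "scheduleInfo": {
+    "startDateTime": "2023-09-14T00:00:00Z",
+    "expiration": {
+      "type": "afterDuration",
+      "duration": "PT8H"
+    }
+  }
+}
+````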
- PIM API for Azure resources supports both user only and application only calls. Simply make sure the service principal has either the owner or user access administrator role on the resource.
+For more information about assignment and activation APIs, see [PIM API for managing role assignments and eligibilities](/graph/api/resources/privilegedidentitymanagementv3-overview#pim-api-for-managing-role-assignment).
-## Design of current API iteration
+
-PIM API consists of two categories that are consistent for both the API for Azure AD roles and Azure resource roles: assignment and activation API requests, and policy settings.
+#### PIM Policies (role settings)
-### Assignment and activation APIs
+To manage the PIM policies, use **roleManagementPolicy** and **roleManagementPolicyAssignment** entities:
+ - For PIM for Azure AD roles, PIM for Groups: [unifiedroleManagementPolicy](/graph/api/resources/unifiedrolemanagementpolicy), [unifiedroleManagementPolicyAssignment](/graph/api/resources/unifiedrolemanagementpolicyassignment)
+ - For PIM for Azure resources: [Role Management Policies](/rest/api/authorization/role-management-policies), [Role Management Policy Assignments](/rest/api/authorization/role-management-policy-assignments)
-To make eligible assignments, time-bound eligible or active assignments, and to activate eligible assignments, PIM provides the following resources:
+The **\*roleManagementPolicy** resource includes rules that constitute PIM policy: approval requirements, maximum activation duration, notification settings, etc.
-- [unifiedRoleAssignmentScheduleRequest](/graph/api/resources/unifiedroleassignmentschedulerequest)-- [unifiedRoleEligibilityScheduleRequest](/graph/api/resources/unifiedroleeligibilityschedulerequest)
+The **\*roleManagementPolicyAssignment** object attaches the policy to a specific role.
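+
+For example, a minimal sketch of retrieving the policy (and its rules) attached to a specific Azure AD role through Microsoft Graph; the role definition ID is a placeholder:
+
+````HTTP
+GET https://graph.microsoft.com/v1.0/policies/roleManagementPolicyAssignments?$filter=scopeId eq '/' and scopeType eq 'DirectoryRole' and roleDefinitionId eq '<role-definition-GUID>'&$expand=policy($expand=rules)
+````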
-These entities work alongside pre-existing **roleDefinition** and **roleAssignment** resources for both Azure AD roles and Azure roles to allow you to create end to end scenarios.
+For more information about the policy settings APIs, see [role settings and PIM](/graph/api/resources/privilegedidentitymanagementv3-overview#role-settings-and-pim).
-- If you are trying to create or retrieve a persistent (active) role assignment that does not have a schedule (start or end time), you should avoid these PIM entities and focus on the read/write operations under the roleAssignment entity
+## Permissions
-- To create an eligible assignment with or without an expiration time you can use the write operation on the [unifiedRoleEligibilityScheduleRequest](/graph/api/resources/unifiedroleeligibilityschedulerequest) resource
+### PIM for Azure AD roles
-- To create a persistent (active) assignment with a schedule (start or end time), you can use the write operation on the [unifiedRoleAssignmentScheduleRequest](/graph/api/resources/unifiedroleassignmentschedulerequest) resource
+For Graph API permissions required for PIM for Azure AD roles, see [Role management permissions](/graph/permissions-reference#role-management-permissions).
-- To activate an eligible assignment, you should also use the [write operation on roleAssignmentScheduleRequest](/graph/api/rbacapplication-post-roleassignmentschedulerequests) with a `selfActivate` **action** property.
+### PIM for Azure resources
-Each of the request objects would create the following read-only objects:
+The PIM API for Azure resource roles is developed on top of the Azure Resource Manager framework. You will need to give consent to Azure Resource Management but won't need any Microsoft Graph API permission. You will also need to make sure the user or the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
-- [unifiedRoleAssignmentSchedule](/graph/api/resources/unifiedroleassignmentschedule)-- [unifiedRoleEligibilitySchedule](/graph/api/resources/unifiedroleeligibilityschedule)-- [unifiedRoleAssignmentScheduleInstance](/graph/api/resources/unifiedroleassignmentscheduleinstance)-- [unifiedRoleEligibilityScheduleInstance](/graph/api/resources/unifiedroleeligibilityscheduleinstance)
+### PIM for Groups
-The **unifiedRoleAssignmentSchedule** and **unifiedRoleEligibilitySchedule** objects show a schedule of all the current and future assignments.
+For Graph API permissions required for PIM for Groups, see [PIM for Groups – Permissions and privileges](/graph/api/resources/privilegedidentitymanagement-for-groups-api-overview#permissions-and-privileges).
-When an eligible assignment is activated, the **unifiedRoleEligibilityScheduleInstance** continues to exist. The **unifiedRoleAssignmentScheduleRequest** for the activation would create a separate **unifiedRoleAssignmentSchedule** object and a **unifiedRoleAssignmentScheduleInstance** for that activated duration.
-The instance objects are the actual assignments that currently exist whether it is an eligible assignment or an active assignment. You should use the GET operation on the instance entity to retrieve a list of eligible assignments / active assignments to a role/user.
-For more information about assignment and activation APIs, see [PIM API for managing role assignments and eligibilities](/graph/api/resources/privilegedidentitymanagementv3-overview#pim-api-for-managing-role-assignment).
-
-### Policy settings APIs
-
-To manage the settings of Azure AD roles, we provide the following entities:
--- [unifiedroleManagementPolicy](/graph/api/resources/unifiedrolemanagementpolicy)-- [unifiedroleManagementPolicyAssignment](/graph/api/resources/unifiedrolemanagementpolicyassignment)-
-The [unifiedroleManagementPolicy](/graph/api/resources/unifiedrolemanagementpolicy) resource through it's **rules** relationship defines the rules or settings of the Azure AD role. For example, whether MFA/approval is required, whether and who to send the email notifications to, or whether permanent assignments are allowed or not. The [unifiedroleManagementPolicyAssignment](/graph/api/resources/unifiedrolemanagementpolicyassignment) object attaches the policy to a specific role.
-
-Use the APIs supported by these resources retrieve role management policy assignments for all Azure AD role or filter the list by a **roleDefinitionId**, and then update the rules or settings in the policy associated with the Azure AD role.
-
-For more information about the policy settings APIs, see [role settings and PIM](/graph/api/resources/privilegedidentitymanagementv3-overview#role-settings-and-pim).
## Relationship between PIM entities and role assignment entities
-The only link between the PIM entity and the role assignment entity for persistent (active) assignment for either Azure AD roles or Azure roles is the unifiedRoleAssignmentScheduleInstance. There is a one-to-one mapping between the two entities. That mapping means roleAssignment and unifiedRoleAssignmentScheduleInstance would both include:
+The only link between the PIM entity and the role assignment entity for persistent (active) assignment for either Azure AD roles or Azure roles is the **\*AssignmentScheduleInstance**. There is a one-to-one mapping between the two entities. That mapping means roleAssignment and **\*AssignmentScheduleInstance** would both include:
- Persistent (active) assignments made outside of PIM - Persistent (active) assignments with a schedule made inside PIM - Activated eligible assignments
+PIM-specific properties (such as end time) are available only through the **\*AssignmentScheduleInstance** object.
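+
+For example, a minimal sketch of listing the calling user's current active assignment instances, which carry PIM-specific properties such as the assignment end time:
+
+````HTTP
+GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleInstances/filterByCurrentUser(on='principal')
+````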
+ ## Next steps - [Azure AD Privileged Identity Management API reference](/graph/api/resources/privilegedidentitymanagementv3-overview)
active-directory Pim Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-approval-workflow.md
With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can configure roles to require approval for activation, and choose one or multiple users or groups as delegated approvers. Delegated approvers have 24 hours to approve requests. If a request is not approved within 24 hours, then the eligible user must re-submit a new request. The 24 hour approval time window is not configurable. ++ ## View pending requests [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentSche
>[!NOTE] >Approvers are not able to approve their own role activation requests.
-1. Find and select the request that you want to approve. An approve or deny page appears.
-
- ![Screenshot that shows the "Approve requests - Azure AD roles" page.](./media/azure-ad-pim-approval-workflow/resources-approve-pane.png)
-
-1. In the **Justification** box, enter the business justification.
-
-1. Select **Approve**. You will receive an Azure notification of your approval.
-
- ![Approve notification showing request was approved](./media/pim-resource-roles-approval-workflow/resources-approve-pane.png)
+ 1. Find and select the request that you want to approve. An approve or deny page appears.
+ 2. In the **Justification** box, enter the business justification.
+ 3. Select **Submit**. You will receive an Azure notification of your approval.
## Approve pending requests using Microsoft Graph API
+>[!NOTE]
+> Approval for **extend and renew** requests is currently not supported by the Microsoft Graph API.
+ ### Get IDs for the steps that require approval For a specific activation request, this command gets all the approval steps that need approval. Multi-step approvals are not currently supported.
GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentAppr
PATCH https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentApprovals/<request-ID-GUID>/steps/<approval-step-ID-GUID> {
- "reviewResult": "Approve",
- "justification": "abcdefg"
+ "reviewResult": "Approve", // or "Deny"
+ "justification": "Trusted User"
} ````
Successful PATCH calls generate an empty response.
## Deny requests
-1. Find and select the request that you want to deny. An approve or deny page appears.
-
- ![Approve requests - approve or deny pane with details and Justification box](./media/pim-resource-roles-approval-workflow/resources-approve-pane.png)
-
-1. In the **Justification** box, enter the business justification.
-
-1. Select **Deny**. A notification appears with your denial.
+ 1. Find and select the request that you want to deny. An approve or deny page appears.
+ 2. In the **Justification** box, enter the business justification.
+ 3. Select **Deny**. A notification appears with your denial.
## Workflow notifications
active-directory Pim Powershell Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-powershell-migration.md
+
+ Title: PIM PowerShell for Azure Resources Migration Guidance
+description: The following documentation provides guidance for Privileged Identity Management (PIM) PowerShell migration.
+
+documentationcenter: ''
++
+editor: ''
++++ Last updated : 07/11/2023+++++
+# PIM PowerShell for Azure Resources Migration Guidance
+The following table provides guidance on using the new PowerShell cmdlets in the newer Azure PowerShell module.
++
+## New cmdlets in the Azure PowerShell module
+
+|Old AzureADPreview cmd|New Az cmd equivalent|Description|
+|--|--|--|
+|Get-AzureADMSPrivilegedResource|[Get-AzResource](/powershell/module/az.resources/get-azresource)|Get resources|
+|Get-AzureADMSPrivilegedRoleDefinition|[Get-AzRoleDefinition](/powershell/module/az.resources/get-azroledefinition)| Get role definitions|
+|Get-AzureADMSPrivilegedRoleSetting|[Get-AzRoleManagementPolicy](/powershell/module/az.resources/get-azrolemanagementpolicy)|Get the specified role management policy for a resource scope|
+|Set-AzureADMSPrivilegedRoleSetting|[Update-AzRoleManagementPolicy](/powershell/module/az.resources/update-azrolemanagementpolicy)| Update a rule defined for a role management policy|
+|Open-AzureADMSPrivilegedRoleAssignmentRequest|[New-AzRoleAssignmentScheduleRequest](/powershell/module/az.resources/new-azroleassignmentschedulerequest)|Used for Assignment Requests</br>Create role assignment schedule request
+|Open-AzureADMSPrivilegedRoleAssignmentRequest|[New-AzRoleEligibilityScheduleRequest](/powershell/module/az.resources/new-azroleeligibilityschedulerequest)|Used for Eligibility Requests</br>Create role eligibility schedule request|
+
+## Next steps
+
+- [Azure AD Privileged Identity Management API reference](/graph/api/resources/privilegedidentitymanagementv3-overview)
active-directory Pim Resource Roles Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-approval-workflow.md
With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD),
Follow the steps in this article to approve or deny requests for Azure resource roles. + ## View pending requests [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] As a delegated approver, you'll receive an email notification when an Azure resource role request is pending your approval. You can view these pending requests in Privileged Identity Management. + 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator). 1. Browse to **Identity governance** > **Privileged Identity Management** > **Approve requests**.
As a delegated approver, you'll receive an email notification when an Azure reso
In the **Requests for role activations** section, you'll see a list of requests pending your approval. + ## Approve requests
-1. Find and select the request that you want to approve. An approve or deny page appears.
+ 1. Find and select the request that you want to approve. An approve or deny page appears.
+ 2. In the **Justification** box, enter the business justification.
+ 3. Select **Approve**. You will receive an Azure notification of your approval.
- ![Approve requests - approve or deny pane with details and Justification box](./media/pim-resource-roles-approval-workflow/resources-approve-pane.png)
-1. In the **Justification** box, enter the business justification.
+## Approve pending requests using Microsoft ARM API
-1. Select **Approve**. You will receive an Azure notification of your approval.
+>[!NOTE]
+> Approval for **extend and renew** requests is currently not supported by the Microsoft ARM API.
- ![Approve notification showing request was approved](./media/pim-resource-roles-approval-workflow/resources-approve-notification.png)
+### Get IDs for the steps that require approval
-## Deny requests
+To get the details of any stage of a role assignment approval, you can use the [Role Assignment Approval Step - Get By ID](/rest/api/authorization/role-assignment-approval-step/get-by-id?tabs=HTTP) REST API.
+
+#### HTTP request
+
+````HTTP
+GET https://management.azure.com/providers/Microsoft.Authorization/roleAssignmentApprovals/{approvalId}/stages/{stageId}?api-version=2021-01-01-preview
+````
-1. Find and select the request that you want to deny. An approve or deny page appears.
- ![Approve requests - approve or deny pane with details and Justification box](./media/pim-resource-roles-approval-workflow/resources-approve-pane.png)
+### Approve the activation request step
-1. In the **Justification** box, enter the business justification.
+#### HTTP request
+
+````HTTP
+PATCH https://management.azure.com/providers/Microsoft.Authorization/roleAssignmentApprovals/{approvalId}/stages/{stageId}?api-version=2021-01-01-preview
+{
+ "reviewResult": "Approve", // or "Deny"
+ "justification": "Trusted User"
+}
+ ````
+
+#### HTTP response
+
+Successful PATCH calls generate an empty response.
+
+For more information, see [Use Role Assignment Approvals to approve PIM role activation requests with REST API](/rest/api/authorization/privileged-approval-sample).
+
+## Deny requests
-1. Select **Deny**. A notification appears with your denial.
+ 1. Find and select the request that you want to deny. An approve or deny page appears.
+ 2. In the **Justification** box, enter the business justification.
+ 3. Select **Deny**. A notification appears with your denial.
## Workflow notifications
ai-services Background Removal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/background-removal.md
The SDK example assumes that you defined the environment variables `VISION_KEY`
Start by creating a [VisionServiceOptions](/dotnet/api/azure.ai.vision.common.visionserviceoptions) object using one of the constructors. For example:
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_service_options)]
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/how-to/program.cs?name=vision_service_options)]
#### [Python](#tab/python) Start by creating a [VisionServiceOptions](/python/api/azure-ai-vision/azure.ai.vision.visionserviceoptions) object using one of the constructors. For example:
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_service_options)]
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/how-to/main.py?name=vision_service_options)]
#### [C++](#tab/cpp) At the start of your code, use one of the static constructor methods [VisionServiceOptions::FromEndpoint](/cpp/cognitive-services/vision/service-visionserviceoptions#fromendpoint-1) to create a *VisionServiceOptions* object. For example:
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_service_options)]
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/how-to/how-to.cpp?name=vision_service_options)]
Where we used this helper function to read the value of an environment variable:
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=get_env_var)]
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/how-to/how-to.cpp?name=get_env_var)]
#### [REST API](#tab/rest)
Create a new **VisionSource** object from the URL of the image you want to analy
**VisionSource** implements **IDisposable**, therefore create the object with a **using** statement or explicitly call **Dispose** method after analysis completes.
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_source)]
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/how-to/program.cs?name=vision_source)]
> [!TIP]
-> You can also analyze a local image by passing in the full-path image file name. See [VisionSource.FromFile](/dotnet/api/azure.ai.vision.common.visionsource.fromfile).
+> You can also analyze a local image by passing in the full-path image file name (see [VisionSource.FromFile](/dotnet/api/azure.ai.vision.common.visionsource.fromfile)), or by copying the image into the SDK's input buffer (see [VisionSource.FromImageSourceBuffer](/dotnet/api/azure.ai.vision.common.visionsource.fromimagesourcebuffer)).
#### [Python](#tab/python) In your script, create a new [VisionSource](/python/api/azure-ai-vision/azure.ai.vision.visionsource) object from the URL of the image you want to analyze.
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_source)]
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/how-to/main.py?name=vision_source)]
> [!TIP]
-> You can also analyze a local image by passing in the full-path image file name to the **VisionSource** constructor instead of the image URL.
+> You can also analyze a local image by passing in the full-path image file name to the **VisionSource** constructor instead of the image URL (see argument name **filename**). Alternatively, you can analyze an image in a memory buffer by constructing **VisionSource** using the argument **image_source_buffer**.
#### [C++](#tab/cpp) Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource::FromUrl](/cpp/cognitive-services/vision/input-visionsource#fromurl).
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_source)]
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/how-to/how-to.cpp?name=vision_source)]
> [!TIP]
-> You can also analyze a local image by passing in the full-path image file name. See [VisionSource::FromFile](/cpp/cognitive-services/vision/input-visionsource#fromfile).
+> You can also analyze a local image by passing in the full-path image file name (see [VisionSource::FromFile](/cpp/cognitive-services/vision/input-visionsource#fromfile)), or by copying the image into the SDK's input buffer (see [VisionSource::FromImageSourceBuffer](/cpp/cognitive-services/vision/input-visionsource#fromimagesourcebuffer)).
#### [REST API](#tab/rest)
ai-services Install Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/sdk/install-sdk.md
+
+ Title: Install the Vision SDK
+
+description: In this guide, you learn how to install the Vision SDK for your preferred programming language.
++++++ Last updated : 08/01/2023++
+zone_pivot_groups: programming-languages-vision-40-sdk
++
+# Install the Vision SDK
+++++
+## Next steps
+
+Follow the [Image Analysis quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) to get started.
ai-services Overview Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/sdk/overview-sdk.md
+
+ Title: Vision SDK Overview
+
+description: This page gives you an overview of the Azure AI Vision SDK for Image Analysis.
++++++ Last updated : 08/01/2023++++
+# Vision SDK overview
+
+The Vision SDK (Preview) provides a convenient way to access the Image Analysis service using [version 4.0 of the REST APIs](https://aka.ms/vision-4-0-ref).
++
+## Supported languages
+
+The Vision SDK supports the following languages and platforms:
+
+| Programming language | Quickstart | API Reference | Platform support |
+|-||--||
+| C# <sup>1</sup> | [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-csharp) | [reference](/dotnet/api/azure.ai.vision.imageanalysis) | Windows, UWP, Linux |
+| C++ <sup>2</sup> | [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-cpp) | [reference](/cpp/cognitive-services/vision) | Windows, Linux |
+| Python | [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-python) | [reference](/python/api/azure-ai-vision) | Windows, Linux |
+
+<sup>1 The Vision SDK for C# is based on .NET Standard 2.0. See [.NET Standard](/dotnet/standard/net-standard?tabs=net-standard-2-0#net-implementation-support) documentation.</sup>
+
+<sup>2 ANSI-C isn't a supported programming language for the Vision SDK.</sup>
+
+## GitHub samples
+
+Numerous samples are available in the [Azure-Samples/azure-ai-vision-sdk](https://github.com/Azure-Samples/azure-ai-vision-sdk) repository on GitHub.
+
+## Getting help
+
+If you need assistance using the Vision SDK or would like to report a bug or suggest new features, open a [GitHub issue in the samples repository](https://github.com/Azure-Samples/azure-ai-vision-sdk/issues). The SDK development team monitors these issues.
+
+Before you create a new issue:
+* First scan the existing issues to check whether a similar issue has already been reported.
+* Find the sample closest to your scenario and run it to check whether the same issue occurs in the sample code.
+
+## Release notes
+
+* **Vision SDK 0.15.1-beta.1** released September 2023.
+ * Image Analysis Java JRE APIs for Windows x64 and Linux x64 were added.
+ * Image Analysis can now be done from a memory buffer (C#, C++, Python, Java).
+* **Vision SDK 0.13.0-beta.1** released July 2023. Image Analysis support was added for Universal Windows Platform (UWP) applications (C++, C#). Run-time package size reduction: Only the two native binaries
+`Azure-AI-Vision-Native.dll` and `Azure-AI-Vision-Extension-Image.dll` are now needed.
+* **Vision SDK 0.11.1-beta.1** released May 2023. Image Analysis APIs were updated to support [Background Removal](../how-to/background-removal.md).
+* **Vision SDK 0.10.0-beta.1** released April 2023. Image Analysis APIs were updated to support [Dense Captions](../concept-describe-images-40.md?tabs=dense).
+* **Vision SDK 0.9.0-beta.1** first released in March 2023, targeting Image Analysis applications on Windows and Linux platforms.
++
+## Next steps
+
+- [Install the SDK](./install-sdk.md)
+- [Try the Image Analysis Quickstart](../quickstarts-sdk/image-analysis-client-library-40.md)
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Previously updated : 09/05/2023 Last updated : 09/13/2023 recommendations: false
After you approve the request in your search service, you can start using the [c
> Virtual networks & private endpoints are only supported for the API, and not currently supported for Azure OpenAI Studio. ### Storage accounts
-Storage accounts in virtual networks and private endpoints are currently not supported by Azure OpenAI on your data.
+Storage accounts behind virtual networks, firewalls, or private endpoints are currently not supported by Azure OpenAI on your data.
## Azure Role-based access controls (Azure RBAC)
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
zone_pivot_groups: openai-use-your-data
# Quickstart: Chat with Azure OpenAI models using your own data +
+[Reference](/javascript/api/@azure/openai) | [Source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai) | [Package (npm)](https://www.npmjs.com/package/@azure/openai) | [Samples](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/openai/Azure.AI.OpenAI/tests/Samples)
++ In this quickstart you can use your own data with Azure OpenAI models. Using Azure OpenAI's models on your data can provide you with a powerful conversational AI platform that enables faster and more accurate communication. + ## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
- Be sure that you are assigned at least the [Cognitive Services Contributor](./how-to/role-based-access-control.md#cognitive-services-contributor) role for the Azure OpenAI resource. +
+- [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule)
+ > [!div class="nextstepaction"] > [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=OVERVIEW&Pillar=AOAI&Product=ownData&Page=quickstart&Section=Prerequisites) + [!INCLUDE [Connect your data to OpenAI](includes/connect-your-data-studio.md)] ::: zone pivot="programming-language-studio"
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
::: zone-end +++ ::: zone pivot="rest-api" [!INCLUDE [REST API quickstart](includes/use-your-data-rest.md)]
ai-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md
Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
ai-services Speech Container Lid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-lid.md
The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-
| Version | Path | |--|| | Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest` |
-| 1.11.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:1.11.0-amd64-preview` |
+| 1.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:1.12.0-amd64-preview` |
All tags, except for `latest`, are in the following format and are case sensitive:
The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-
"tags": [ "1.1.0-amd64-preview", "1.11.0-amd64-preview",
+ "1.12.0-amd64-preview",
"1.3.0-amd64-preview", "1.5.0-amd64-preview", <--redacted for brevity-->
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/language-support.md
| Chinese (Literary) | `lzh` |Γ£ö|Γ£ö|||| | Chinese Simplified | `zh-Hans` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Chinese Traditional | `zh-Hant` |Γ£ö|Γ£ö|Γ£ö|Γ£ö||
+| chiShona|`sn`|Γ£ö|Γ£ö||||
| Croatian | `hr` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Czech | `cs` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Danish | `da` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
| Greek | `el` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Gujarati | `gu` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|| | Haitian Creole | `ht` |Γ£ö|Γ£ö||Γ£ö|Γ£ö|
+| Hausa|`ha`|Γ£ö|Γ£ö||||
| Hebrew | `he` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Hindi | `hi` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Hmong Daw (Latin) | `mww` |Γ£ö|Γ£ö|||Γ£ö| | Hungarian | `hu` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Icelandic | `is` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+| Igbo|`ig`|Γ£ö|Γ£ö||||
| Indonesian | `id` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Inuinnaqtun | `ikt` |Γ£ö|Γ£ö|||| | Inuktitut | `iu` |Γ£ö|Γ£ö|Γ£ö|Γ£ö||
| Kannada | `kn` |Γ£ö|Γ£ö|Γ£ö||| | Kazakh | `kk` |Γ£ö|Γ£ö|||| | Khmer | `km` |Γ£ö|Γ£ö||Γ£ö||
+| Kinyarwanda|`rw`|Γ£ö|Γ£ö||||
| Klingon | `tlh-Latn` |Γ£ö| ||Γ£ö|Γ£ö| | Klingon (plqaD) | `tlh-Piqd` |Γ£ö| ||Γ£ö||
+| Konkani|`gom`|Γ£ö|Γ£ö||||
| Korean | `ko` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Kurdish (Central) | `ku` |Γ£ö|Γ£ö||Γ£ö|| | Kurdish (Northern) | `kmr` |Γ£ö|Γ£ö||||
| Lao | `lo` |Γ£ö|Γ£ö||Γ£ö|| | Latvian | `lv` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Lithuanian | `lt` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+| Lingala|`ln`|Γ£ö|Γ£ö||||
+| Lower Sorbian|`dsb`|Γ£ö| ||||
+| Luganda|`lug`|Γ£ö|Γ£ö||||
| Macedonian | `mk` |Γ£ö|Γ£ö||Γ£ö||
+| Maithili|`mai`|Γ£ö|Γ£ö||||
| Malagasy | `mg` |Γ£ö|Γ£ö|Γ£ö||| | Malay (Latin) | `ms` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Malayalam | `ml` |Γ£ö|Γ£ö|Γ£ö|||
| Myanmar | `my` |Γ£ö|Γ£ö||Γ£ö|| | Nepali | `ne` |Γ£ö|Γ£ö|||| | Norwegian | `nb` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+| Nyanja|`nya`|Γ£ö|Γ£ö||||
| Odia | `or` |Γ£ö|Γ£ö|Γ£ö||| | Pashto | `ps` |Γ£ö|Γ£ö||Γ£ö|| | Persian | `fa` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
| Punjabi | `pa` |Γ£ö|Γ£ö|Γ£ö||| | Queretaro Otomi | `otq` |Γ£ö|Γ£ö|||| | Romanian | `ro` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+| Rundi|`run`|Γ£ö|Γ£ö||||
| Russian | `ru` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Samoan (Latin) | `sm` |Γ£ö|Γ£ö |Γ£ö||| | Serbian (Cyrillic) | `sr-Cyrl` |Γ£ö|Γ£ö||Γ£ö|| | Serbian (Latin) | `sr-Latn` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+| Sesotho|`st`|Γ£ö|Γ£ö||||
+| Sesotho sa Leboa|`nso`|Γ£ö|Γ£ö||||
+| Setswana|`tn`|Γ£ö|Γ£ö||||
+| Sindhi|`sd`|Γ£ö|Γ£ö||||
+| Sinhala|`si`|Γ£ö|Γ£ö||||
| Slovak | `sk` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Slovenian | `sl` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Somali (Arabic) | `so` |Γ£ö|Γ£ö||Γ£ö||
| Uzbek (Latin) | `uz` |Γ£ö|Γ£ö||Γ£ö|| | Vietnamese | `vi` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Welsh | `cy` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+| Xhosa|`xh`|Γ£ö|Γ£ö||||
+| Yoruba|`yo`|Γ£ö|Γ£ö||||
| Yucatec Maya | `yua` |Γ£ö|Γ£ö||Γ£ö|| | Zulu | `zu` |Γ£ö|Γ£ö||||
## Transliteration
-The [Transliterate operation](reference/v3-0-transliterate.md) in the Text Translation feature supports the following languages. In the "To/From", "<-->" indicates that the language can be transliterated from or to either of the scripts listed. The "-->" indicates that the language can only be transliterated from one script to the other.
+The [Transliterate operation](reference/v3-0-transliterate.md) in the Text Translation feature supports the following languages. In the `To/From`, `<-->` indicates that the language can be transliterated from or to either of the scripts listed. The `-->` indicates that the language can only be transliterated from one script to the other.
| Language | Language code | Script | To/From | Script| |:-- |:-:|:-:|:-:|:-:|
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/whats-new.md
Previously updated : 07/18/2023 Last updated : 09/12/2023 <!-- markdownlint-disable MD024 -->
Translator is a language service that enables users to translate text and docume
Translator service supports language translation for more than 100 languages. If your language community is interested in partnering with Microsoft to add your language to Translator, contact us via the [Translator community partner onboarding form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-riVR3Xj0tOnIRdZOALbM9UOU1aMlNaWFJOOE5YODhRR1FWVzY0QzU1OS4u).
+## September 2023
+
+* Translator service has [text, document translation, and container language support](language-support.md) for the following 18 languages:
+
+|Language|Code|Cloud – Text Translation and Document Translation|Containers – Text Translation|Description|
+|:-|:-|:-|:-|:-|
+|chiShona|`sn`|✔|✔|The official language of Zimbabwe with more than 8 million native speakers.|
+|Hausa|`ha`|✔|✔|The most widely used language in West Africa with more than 150 million speakers worldwide.|
+|Igbo|`ig`|✔|✔|The principal native language of the Igbo people of Nigeria with more than 44 million speakers.|
+|Kinyarwanda|`rw`|✔|✔|The national language of Rwanda with more than 12 million speakers primarily in East and Central Africa.|
+|Lingala|`ln`|✔|✔|One of four official languages of the Democratic Republic of the Congo with more than 60 million speakers.|
+|Luganda|`lug`|✔|✔|A major language of Uganda with more than 5 million speakers.|
+|Nyanja|`nya`|✔|✔|Nyanja, also known as Chewa, is spoken mainly in Malawi and has more than 2 million native speakers.|
+|Rundi|`run`|✔|✔|Rundi, also known as Kirundi, is the national language of Burundi and has more than 6 million native speakers.|
+|Sesotho|`st`|✔|✔|Sesotho, also known as Sotho, is the national and official language of Lesotho, one of 12 official languages of South Africa, and one of 16 official languages of Zimbabwe. It has more than 5.6 million native speakers.|
+|Sesotho sa Leboa|`nso`|✔|✔|Sesotho sa Leboa, also known as Northern Sotho, is the native language of more than 4.6 million people in South Africa.|
+|Setswana|`tn`|✔|✔|Setswana, also known as Tswana, is an official language of Botswana and South Africa and has more than 5 million speakers.|
+|Xhosa|`xh`|✔|✔|An official language of South Africa and Zimbabwe, Xhosa has more than 20 million speakers.|
+|Yoruba|`yo`|✔|✔|The principal native language of the Yoruba people of West Africa, it has more than 50 million speakers.|
+|Konkani|`gom`|✔|✔|The official language of the Indian state of Goa with more than 7 million speakers worldwide.|
+|Maithili|`mai`|✔|✔|One of the 22 officially recognized languages of India and the second most spoken language in Nepal. It has more than 20 million speakers.|
+|Sindhi|`sd`|✔|✔|Sindhi is an official language of the Sindh province of Pakistan and the Rajasthan state in India. It has more than 33 million speakers worldwide.|
+|Sinhala|`si`|✔|✔|One of the official and national languages of Sri Lanka, Sinhala has more than 16 million native speakers.|
+|Lower Sorbian|`dsb`|✔|Currently not supported in containers|A West Slavic language spoken primarily in eastern Germany. It has approximately 7,000 speakers.|
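+
+The new language codes work with the existing Text Translation REST endpoint. A minimal sketch that translates English text to chiShona (`sn`); the key and region values are placeholders:
+
+````HTTP
+POST https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=sn
+Ocp-Apim-Subscription-Key: <your-translator-key>
+Ocp-Apim-Subscription-Region: <your-resource-region>
+Content-Type: application/json
+
+[
+  { "Text": "Hello, how are you today?" }
+]
+````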
+ ## July 2023 [!INCLUDE [Azure AI services rebrand](../includes/rebrand-note.md)]
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCl
Yes, you can run all three configurations i.e `default`, `aksManagedAutoUpgradeSchedule`, `aksManagedNodeOSUpgradeSchedule`simultaneously. In case the windows overlap AKS decides the running order.
+* I configured a maintenance window, but upgrade didn't happen - why?
+
+ AKS auto-upgrade needs a certain amount of time to take the maintenance window into consideration. We recommend at least 6 hours between creating or updating a maintenance configuration and its scheduled start time.
+
+* Why didn't AKS auto-upgrade upgrade all my agent pools, or why was one of the pools upgraded outside of the maintenance window?
+
+ If an agent pool fails to upgrade (for example, because Pod Disruption Budgets prevent it from upgrading) or is in a Failed state, it might be upgraded later, outside of the maintenance window. This scenario is called a "catch-up upgrade" and avoids leaving agent pools on a different version than the AKS control plane.
+ * Are there any best practices for the maintenance configurations? We recommend setting the [Node OS security updates][node-image-auto-upgrade] schedule to a weekly cadence if you're using the `NodeImage` channel, because a new node image ships every week, and to a daily cadence if you opt in to the `SecurityPatch` channel to receive daily security updates. Set the [auto-upgrade][auto-upgrade] schedule to a monthly cadence to stay within the Kubernetes N-2 [support policy][aks-support-policy].
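As a rough sketch of that weekly node OS cadence (the resource names and schedule values are illustrative, and the exact flags supported can vary by Azure CLI version, so confirm them with `az aks maintenanceconfiguration add --help`):

```azurecli
# Illustrative only: a weekly 8-hour window for node OS security updates,
# starting Saturdays at 00:00 UTC.
az aks maintenanceconfiguration add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name aksManagedNodeOSUpgradeSchedule \
    --schedule-type Weekly \
    --day-of-week Saturday \
    --interval-weeks 1 \
    --start-time 00:00 \
    --utc-offset +00:00 \
    --duration 8
```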
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 08/02/2023 Last updated : 09/13/2023
The following are the current limitations and known issues with PowerShell runbo
**Known issues**
+* Runbooks that take a dependency on internal file paths such as `C:\modules` might fail due to changes in the service's backend infrastructure. Update your runbook code to remove dependencies on internal file paths and use [Get-ChildItem](/powershell/module/microsoft.powershell.management/get-childitem?view=powershell-7.3) to find the required directory.
+* Modules imported through an ARM template might not load with `Import-Module`. As a workaround, create a .zip file named after the module and add the module files directly to the root of the .zip file instead of zipping the named folder (for example, *ModuleNamedZipFile.zip\ModuleFiles*). You can then remove the modules and import them again by using the new .zip file (see the packaging example after this list).
+* The `Get-AzStorageAccount` cmdlet might fail with the error: *The `Get-AzStorageAccount` command was found in the module `Az.Storage`, but the module could not be loaded*.
+* PowerShell 5.1 modules uploaded through .zip files might not load in runbooks. As a workaround, create a .zip file named after the module and add the module files directly to the root of the .zip file instead of zipping the named folder (for example, *ModuleNamedZipFile.zip\ModuleFiles*). You can then remove the modules and import them again by using the new .zip file.
+* Completed jobs might show a warning message: *Both Az and AzureRM modules were detected on this machine. Az and AzureRM modules cannot be imported in the same session or used in the same script or runbook*. This warning message doesn't affect job execution.
* PowerShell runbooks can't retrieve an unencrypted [variable asset](./shared-resources/variables.md) with a null value. * PowerShell runbooks can't retrieve a variable asset with `*~*` in the name. * A [Get-Process](/powershell/module/microsoft.powershell.management/get-process) operation in a loop in a PowerShell runbook can crash after about 80 iterations.
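As a minimal sketch of the .zip packaging workaround above (the module name *MyCustomModule* is hypothetical), the archive must be named after the module and contain the module files at its root rather than inside a folder. One way to produce that layout from a shell is the cross-platform `zip` utility:

```bash
# Hypothetical module name. The -j (junk paths) option stores the files at the root
# of MyCustomModule.zip instead of under a MyCustomModule/ folder.
zip -j MyCustomModule.zip ./MyCustomModule/MyCustomModule.psd1 ./MyCustomModule/MyCustomModule.psm1
```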
The following are the current limitations and known issues with PowerShell runbo
**Limitations** - You must be familiar with PowerShell scripting.- - The Azure Automation internal PowerShell cmdlets aren't supported on a Linux Hybrid Runbook Worker. You must import the `automationassets` module at the beginning of your PowerShell runbook to access the Automation account shared resources (assets) functions. - For the PowerShell 7 runtime version, the module activities aren't extracted for the imported modules. - *PSCredential* runbook parameter type isn't supported in PowerShell 7 runtime version.
The following are the current limitations and known issues with PowerShell runbo
**Known issues**
+- Runbooks that take a dependency on internal file paths such as `C:\modules` might fail due to changes in the service's backend infrastructure. Update your runbook code to remove dependencies on internal file paths and use [Get-ChildItem](/powershell/module/microsoft.powershell.management/get-childitem?view=powershell-7.3) to find the required directory.
+- Modules imported through an ARM template might not load with `Import-Module`. As a workaround, create a .zip file named after the module and add the module files directly to the root of the .zip file instead of zipping the named folder (for example, *ModuleNamedZipFile.zip\ModuleFiles*). You can then remove the modules and import them again by using the new .zip file.
+- The `Get-AzStorageAccount` cmdlet might fail with the error: *The `Get-AzStorageAccount` command was found in the module `Az.Storage`, but the module could not be loaded*.
- Executing child scripts using `.\child-runbook.ps1` isn't supported in this preview. **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from `Az.Automation` module) to start another runbook from parent runbook. - Runbook properties defining logging preference isn't supported in PowerShell 7 runtime.
The following are the current limitations and known issues with PowerShell runbo
- Azure doesn't support all PowerShell input parameters. [Learn more](runbook-input-parameters.md). **Known issues**-
+- Runbooks that take a dependency on internal file paths such as `C:\modules` might fail due to changes in the service's backend infrastructure. Update your runbook code to remove dependencies on internal file paths and use [Get-ChildItem](/powershell/module/microsoft.powershell.management/get-childitem?view=powershell-7.3) to find the required directory.
+- Modules imported through an ARM template might not load with `Import-Module`. As a workaround, create a .zip file named after the module and add the module files directly to the root of the .zip file instead of zipping the named folder (for example, *ModuleNamedZipFile.zip\ModuleFiles*). You can then remove the modules and import them again by using the new .zip file.
+- The `Get-AzStorageAccount` cmdlet might fail with the error: *The `Get-AzStorageAccount` command was found in the module `Az.Storage`, but the module could not be loaded*.
- Executing child scripts using `.\child-runbook.ps1` is not supported in this preview. **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from *Az.Automation* module) to start another runbook from parent runbook. - Runbook properties defining logging preference isn't supported in PowerShell 7 runtime.
Following are the limitations of Python runbooks
- Azure Automation doesn't supportΓÇ»**sys.stderr**. - The Python **automationassets** package isn't available on pypi.org, so it's not available for import onto a Windows machine.
-# [Python 3.10 (preview)](#tab/py10)
-**Limitations**
+# [Python 3.10 (preview)](#tab/py10)
- For Python 3.10 (preview) modules, currently, only the wheel files targeting cp310 Linux OS are supported. [Learn more](./python-3-packages.md) - Custom packages for Python 3.10 (preview) are only validated during job runtime. The job is expected to fail if the package isn't compatible with the runtime or if required package dependencies aren't imported into the Automation account.
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-app-configuration Reference Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/reference-kubernetes-provider.md
The `spec.keyValues.refresh.monitoring.keyValues` is an array of objects, which
#### Use Connection String 1. Create a Kubernetes Secret in the same namespace as the `AzureAppConfigurationProvider` resource and add Azure App Configuration connection string with key *azure_app_configuration_connection_string* in the Secret.
-2. Set the `spec.connectionStringReference` property to the name of the Secret in the following sample `AzureAppConfigurationProvider` resource and deploy it to the Kubernetes cluster.
+1. Set the `spec.connectionStringReference` property to the name of the Secret in the following sample `AzureAppConfigurationProvider` resource and deploy it to the Kubernetes cluster.
``` yaml apiVersion: azconfig.io/v1beta1
The `spec.keyValues.refresh.monitoring.keyValues` is an array of objects, which
target: configMapName: configmap-created-by-appconfig-provider ```- ### Key-value selection Use the `selectors` property to filter the key-values to be downloaded from Azure App Configuration.
azure-arc Upgrade Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-cli.md
You'll need to connect and authenticate to a Kubernetes cluster and have an exis
kubectl config use-context <Kubernetes cluster name> ```
-### Upgrade Arc data controller extension
-
-Upgrade the Arc data controller extension first.
-
-Retrieve the name of your extension and its version:
-
-1. Go to the Azure portal
-1. Select **Overview** for your Azure Arc enabled Kubernetes cluster
-1. Selecting the **Extensions** tab on the left.
-
-Alternatively, you can use `az` CLI to get the name of your extension and its version running.
-
-```azurecli
-az k8s-extension list --resource-group <resource-group> --cluster-name <connected cluster name> --cluster-type connectedClusters
-```
-
-Example:
-
-```azurecli
-az k8s-extension list --resource-group rg-arcds --cluster-name aks-arc --cluster-type connectedClusters
-```
-
-After you retrieve the extension name and its version, upgrade the extension.
-
-```azurecli
-az k8s-extension update --resource-group <resource-group> --cluster-name <connected cluster name> --cluster-type connectedClusters --name <name of extension> --version <extension version> --release-train stable --config systemDefaultValues.image="<registry>/<repository>/arc-bootstrapper:<imageTag>"
-```
-
-Example:
-
-```azurecli
-az k8s-extension update --resource-group rg-arcds --cluster-name aks-arc --cluster-type connectedClusters --name aks-arc-ext --version
-1.2.19581002 --release-train stable --config systemDefaultValues.image="mcr.microsoft.com/arcdata/arc-bootstrapper:v1.7.0_2022-05-24"
-```
- ### Upgrade data controller You can perform a dry run first. The dry run validates the registry exists, the version schema, and the private repository authorization token (if used). To perform a dry run, use the `--dry-run` parameter in the `az arcdata dc upgrade` command. For example:
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023 #
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Connected Machine agent, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 06/06/2023 Last updated : 09/11/2023
Metadata information about a connected machine is collected after the Connected
* Cluster resource ID (for Azure Stack HCI nodes) * Hardware manufacturer * Hardware model
-* CPU socket, physical core and logical core counts
+* CPU family, socket, physical core and logical core counts
+* Total physical memory
+* Serial number
+* SMBIOS asset tag
* Cloud provider * Amazon Web Services (AWS) metadata, when running in AWS: * Account ID
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.30 - May 2023
+
+Download for [Windows](https://download.microsoft.com/download/7/7/9/779eae73-a12b-4170-8c5e-abec71bc14cf/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+
+### New features
+
+- Introduced a scheduled task that checks for agent updates on a daily basis. Currently, the update mechanism is inactive and no changes are made to your server even if a newer agent version is available. In the future, you'll be able to schedule updates of the Azure Connected Machine agent from Azure. For more information, see [Automatic agent upgrades](manage-agent.md#automatic-agent-upgrades).
+
+### Fixed
+
+- Resolved an issue that could cause the agent to go offline after rotating its connectivity keys.
+- `azcmagent show` no longer shows an incomplete resource ID or Azure portal page URL when the agent isn't configured.
+ ## Version 1.29 - April 2023 Download for [Windows](https://download.microsoft.com/download/2/7/0/27063536-949a-4b16-a29a-3d1dcb29cff7/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Connected Machine agent description: This article has release notes for Azure Connected Machine agent. For many of the summarized issues, there are links to more details. Previously updated : 07/11/2023 Last updated : 09/11/2023
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Connected Machine agent](agent-release-notes-archive.md).
+## Version 1.34 - September 2023
+
+Download for [Windows](https://download.microsoft.com/download/b/3/2/b3220316-13db-4f1f-babf-b1aab33b364f/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+
+### New features
+
+- [Extended Security Updates for Windows Server 2012 and 2012 R2](prepare-extended-security-updates.md) can be purchased and enabled through Azure Arc. If your server is already running the Azure Connected Machine agent, [upgrade to agent version 1.34](manage-agent.md#upgrade-the-agent) or later to take advantage of this new capability.
+- Additional system metadata is collected to enhance your device inventory in Azure:
+ - Total physical memory
+ - Additional processor information
+ - Serial number
+ - SMBIOS asset tag
+- Network requests to Microsoft Entra ID (formerly Azure Active Directory) now use `login.microsoftonline.com` instead of `login.windows.net`
+
+### Fixed
+
+- Better handling of disconnected agent scenarios in the extension manager and policy engine.
+ ## Version 1.33 - August 2023 Download for [Windows](https://download.microsoft.com/download/0/c/7/0c7a484b-e29e-42f9-b3e9-db431df2e904/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
Agent version 1.33 contains a fix for [CVE-2023-38176](https://msrc.microsoft.co
### Known issue
-[azcmagent check](azcmagent-check.md) validates a new endpoint in this release: `<geography>-ats.his.arc.azure.com`. This endpoint is reserved for future use and not required for the Azure Connected Machine agent to operate successfully. However, if you are using a private endpoint, this endpoint will fail the network connectivity check. You can safely ignore this endpoint in the results and should instead confirm that all other endpoints are reachable.
+[azcmagent check](azcmagent-check.md) validates a new endpoint in this release: `<geography>-ats.his.arc.azure.com`. This endpoint is reserved for future use and not required for the Azure Connected Machine agent to operate successfully. However, if you're using a private endpoint, this endpoint will fail the network connectivity check. You can safely ignore this endpoint in the results and should instead confirm that all other endpoints are reachable.
This endpoint will be removed from `azcmagent check` in a future release.
To check if you're running the latest version of the Azure connected machine age
- Improved output of the [azcmagent check](azcmagent-check.md) command - Better handling of spaces in the `--location` parameter of [azcmagent connect](azcmagent-connect.md)
-## Version 1.30 - May 2023
-
-Download for [Windows](https://download.microsoft.com/download/7/7/9/779eae73-a12b-4170-8c5e-abec71bc14cf/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
-
-### New features
--- Introduced a scheduled task that checks for agent updates on a daily basis. Currently, the update mechanism is inactive and no changes are made to your server even if a newer agent version is available. In the future, you'll be able to schedule updates of the Azure Connected Machine agent from Azure. For more information, see [Automatic agent upgrades](manage-agent.md#automatic-agent-upgrades).-
-### Fixed
--- Resolved an issue that could cause the agent to go offline after rotating its connectivity keys.-- `azcmagent show` no longer shows an incomplete resource ID or Azure portal page URL when the agent isn't configured.- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
Delivering ESUs to your Windows Server 2012/2012 R2 machines provides the follow
Other Azure services through Azure Arc-enabled servers are available, with offerings such as: * [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) - As part of the cloud security posture management (CSPM) pillar, it provides server protections through [Microsoft Defender for Servers](../../defender-for-cloud/plan-defender-for-servers.md) to help protect you from various cyber threats and vulnerabilities.
-* [Update Manager (preview)](../../update-center/overview.md) - Unified management and governance of update compliance that includes not only Azure and hybrid machines, but also ESU update compliance for all your Windows Server 2012/2012 R2 machines.
+* [Azure Update Manager (preview)](../../update-center/overview.md) - Unified management and governance of update compliance that includes not only Azure and hybrid machines, but also ESU update compliance for all your Windows Server 2012/2012 R2 machines.
* [Azure Policy](../../governance/policy/overview.md) helps to enforce organizational standards and to assess compliance at-scale. Beyond providing an aggregated view to evaluate the overall state of the environment, Azure Policy helps to bring your resources to compliance through bulk and automatic remediation. >[!NOTE]
- >Activation of ESU is planned for the third quarter of 2023. Using Azure services such as Update Manager (preview) and Azure Policy to support managing ESU-eligible Windows Server 2012/2012 R2 machines are also planned for the third quarter.
+ >Activation of ESU is planned for the third quarter of 2023. Using Azure services such as Azure Update Manager (preview) and Azure Policy to support managing ESU-eligible Windows Server 2012/2012 R2 machines are also planned for the third quarter.
## Prepare delivery of ESUs
azure-cache-for-redis Cache Remove Tls 10 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-remove-tls-10-11.md
description: Learn how to remove TLS 1.0 and 1.1 from your application when comm
Previously updated : 07/13/2023 Last updated : 09/12/2023 ms.devlang: csharp, golang, java, javascript, php, python
ms.devlang: csharp, golang, java, javascript, php, python
# Remove TLS 1.0 and 1.1 from use with Azure Cache for Redis
-There's an industry-wide push toward the exclusive use of Transport Layer Security (TLS) version 1.2 or later. TLS versions 1.0 and 1.1 are known to be susceptible to attacks such as BEAST and POODLE, and to have other Common Vulnerabilities and Exposures (CVE) weaknesses. They also don't support the modern encryption methods and cipher suites recommended by Payment Card Industry (PCI) compliance standards. This [TLS security blog](https://www.acunetix.com/blog/articles/tls-vulnerabilities-attacks-final-part/) explains some of these vulnerabilities in more detail.
+In line with the industry-wide push toward the exclusive use of Transport Layer Security (TLS) version 1.2 or later, Azure Cache for Redis will require TLS 1.2 starting in October 2024. TLS versions 1.0 and 1.1 are known to be susceptible to attacks such as BEAST and POODLE, and to have other Common Vulnerabilities and Exposures (CVE) weaknesses.
-As a part of this effort, we'll be making the following changes to Azure Cache for Redis:
+TLS versions 1.0 and 1.1 also don't support the modern encryption methods and cipher suites recommended by Payment Card Industry (PCI) compliance standards. This [TLS security blog](https://www.acunetix.com/blog/articles/tls-vulnerabilities-attacks-final-part/) explains some of these vulnerabilities in more detail.
-* **Phase 1:** We'll configure the default minimum TLS version to be 1.2 for newly created cache instances (previously, it was TLS 1.0). Existing cache instances won't be updated at this point. You can still use the Azure portal or other management APIs to [change the minimum TLS version](cache-configure.md#access-ports) to 1.0 or 1.1 for backward compatibility.
-* **Phase 2:** We'll stop supporting TLS 1.1 and TLS 1.0. After this change, your application must use TLS 1.2 or later to communicate with your cache. The Azure Cache for Redis service is expected to be available while we migrate it to support only TLS 1.2 or later.
+> [!IMPORTANT]
+> On October 1, 2024, the TLS 1.2 requirement will be enforced.
+>
+>
+
+As a part of this effort, you can expect the following changes to Azure Cache for Redis:
+
+- _Phase 1_: Azure Cache for Redis stops offering TLS 1.0 and 1.1 as options for the minimum TLS version setting on newly created caches. Existing cache instances won't be updated at this point. You can still use the Azure portal or other management APIs to [change the minimum TLS version](cache-configure.md#access-ports) to 1.0 or 1.1 for backward compatibility.
+- _Phase 2_: Azure Cache for Redis stops supporting TLS 1.1 and TLS 1.0 starting October 1, 2024. After this change, your application must use TLS 1.2 or later to communicate with your cache. The Azure Cache for Redis service remains available while we update the minimum TLS version for all caches to 1.2.
- > [!WARNING]
- > Phase 2 is postponed because of COVID-19. We strongly recommend that you begin planning for this change now and proactively update clients to support TLS 1.2 or later.
- >
+| Date | Description |
+|-- |-|
+| September 2023 | TLS 1.0/1.1 retirement announcement |
+| March 1, 2024 | Beginning March 1, 2024, you can no longer set the minimum TLS version for any cache to 1.0 or 1.1. |
+| September 30, 2024 | Ensure that all your applications connect to Azure Cache for Redis by using TLS 1.2 and that the minimum TLS version in your cache settings is set to 1.2. |
+| October 1, 2024 | The minimum TLS version for all cache instances is updated to 1.2. Azure Cache for Redis instances then reject connections that use TLS 1.0 or 1.1. |
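To check where an existing cache stands before the enforcement date, you can read the current setting from the resource. A sketch with the Azure CLI (the cache and resource group names are placeholders; the value can be empty if a minimum version was never set explicitly):

```azurecli
# Placeholder names; returns the configured minimum TLS version, if any.
az redis show --name contoso-cache --resource-group contoso-rg --query minimumTlsVersion --output tsv
```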
> [!IMPORTANT]
- > The content in this article does not apply to Azure Cache for Redis Enterprise/Enterprise Flash as the Enterprise tiers support TLS 1.2 only.
+ > The content in this article does not apply to Azure Cache for Redis Enterprise/Enterprise Flash because the Enterprise tiers only support TLS 1.2.
>
-As part of this change, we'll also remove support for older cypher suites that aren't secure. Our supported cypher suites are restricted to the following suites when the cache is configured with a minimum of TLS 1.2:
+As part of this change, Azure Cache for Redis removes support for older cipher suites that aren't secure. Supported cipher suites are restricted to the following suites when the cache is configured with a minimum of TLS 1.2:
-* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384
-* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256
+- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384
+- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256
-This article provides general guidance about how to detect dependencies on these earlier TLS versions and remove them from your application.
-
-The dates when these changes take effect are:
-
-| Cloud | Phase 1 Start Date | Phase 2 Start Date |
-|-|--|-|
-| Azure (global) | January 13, 2020 | Postponed because of COVID-19 |
-| Azure Government | March 13, 2020 | Postponed because of COVID-19 |
-| Azure Germany | March 13, 2020 | Postponed because of COVID-19 |
-| Microsoft Azure operated by 21Vianet | March 13, 2020 | Postponed because of COVID-19 |
-
-> [!NOTE]
-> Phase 2 is postponed because of COVID-19. This article will be updated when specific dates are set.
->
+The following sections provide guidance about how to detect dependencies on these earlier TLS versions and remove them from your application.
## Check whether your application is already compliant
-You can find out whether your application works with TLS 1.2 by setting the **Minimum TLS version** value to TLS 1.2 on a test or staging cache, then running tests. The **Minimum TLS version** setting is in the [Advanced settings](cache-configure.md#advanced-settings) of your cache instance in the Azure portal. If the application continues to function as expected after this change, it's probably compliant. You might need to configure the Redis client library used by your application to enable TLS 1.2 to connect to Azure Cache for Redis.
+You can find out whether your application works with TLS 1.2 by setting the **Minimum TLS version** value to TLS 1.2 on a test or staging cache, then running tests. The **Minimum TLS version** setting is in the [Advanced settings](cache-configure.md#advanced-settings) of your cache instance in the Azure portal. If the application continues to function as expected after this change, it's probably compliant. You also need to configure the Redis client library used by your application to enable TLS 1.2 to connect to Azure Cache for Redis.
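If you prefer scripting this check over using the portal, a minimal sketch with the Azure CLI (placeholder names; confirm the supported arguments with `az redis update --help`) looks like this:

```azurecli
# Set the minimum TLS version to 1.2 on a test or staging cache, then rerun your application tests.
az redis update --name contoso-test-cache --resource-group contoso-rg --set minimumTlsVersion="1.2"
```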
## Configure your application to use TLS 1.2
Most applications use Redis client libraries to handle communication with their
Redis .NET clients use the earliest TLS version by default on .NET Framework 4.5.2 or earlier, and use the latest TLS version on .NET Framework 4.6 or later. If you're using an older version of .NET Framework, enable TLS 1.2 manually:
-* **StackExchange.Redis:** Set `ssl=true` and `sslProtocols=tls12` in the connection string.
-* **ServiceStack.Redis:** Follow the [ServiceStack.Redis](https://github.com/ServiceStack/ServiceStack.Redis#servicestackredis-ssl-support) instructions and requires ServiceStack.Redis v5.6 at a minimum.
+- _StackExchange.Redis_: Set `ssl=true` and `sslProtocols=tls12` in the connection string (see the example connection string after this list).
+- _ServiceStack.Redis_: Follow the [ServiceStack.Redis](https://github.com/ServiceStack/ServiceStack.Redis#servicestackredis-ssl-support) instructions; ServiceStack.Redis v5.6 or later is required.
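For StackExchange.Redis, those settings go into the connection string along with the TLS port (6380); the cache host name and access key below are placeholders:

```
contoso-cache.redis.cache.windows.net:6380,password=<access-key>,ssl=true,sslProtocols=tls12
```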
### .NET Core
-Redis .NET Core clients default to the OS default TLS version, which depends on the OS itself.
+Redis .NET Core clients default to the OS default TLS version, which depends on the OS itself.
Depending on the OS version and any patches that have been applied, the effective default TLS version can vary. For more information, see [here](/dotnet/framework/network-programming/#support-for-tls-12). However, if you're using an old OS or just want to be sure, we recommend configuring the preferred TLS version manually through the client. - ### Java Redis Java clients use TLS 1.0 on Java version 6 or earlier. Jedis, Lettuce, and Redisson can't connect to Azure Cache for Redis if TLS 1.0 is disabled on the cache. Upgrade your Java framework to use new TLS versions. For Java 7, Redis clients don't use TLS 1.2 by default but can be configured for it. Jedis allows you to specify the underlying TLS settings with the following code snippet:
-``` Java
+```java
SSLSocketFactory sslSocketFactory = (SSLSocketFactory) SSLSocketFactory.getDefault(); SSLParameters sslParameters = new SSLParameters(); sslParameters.setEndpointIdentificationAlgorithm("HTTPS");
shardInfo.setPassword("cachePassword");
Jedis jedis = new Jedis(shardInfo); ```
-The Lettuce and Redisson clients don't yet support specifying the TLS version. They'll break if the cache accepts only TLS 1.2 connections. Fixes for these clients are being reviewed, so check with those packages for an updated version with this support.
+The Lettuce and Redisson clients don't yet support specifying the TLS version. They break if the cache accepts only TLS 1.2 connections. Fixes for these clients are being reviewed, so check with those packages for an updated version with this support.
In Java 8, TLS 1.2 is used by default and shouldn't require updates to your client configuration in most cases. To be safe, test your application.
Node Redis and IORedis use TLS 1.2 by default.
### PHP #### Predis
-
-* Versions earlier than PHP 7: Predis supports only TLS 1.0. These versions don't work with TLS 1.2; you must upgrade to use TLS 1.2.
-
-* PHP 7.0 to PHP 7.2.1: Predis uses only TLS 1.0 or 1.1 by default. You can use the following workaround to use TLS 1.2. Specify TLS 1.2 when you create the client instance:
+
+- Versions earlier than PHP 7: Predis supports only TLS 1.0. These versions don't work with TLS 1.2; you must upgrade to use TLS 1.2.
+
+- PHP 7.0 to PHP 7.2.1: Predis uses only TLS 1.0 or 1.1 by default. You can use the following workaround to use TLS 1.2. Specify TLS 1.2 when you create the client instance:
``` PHP $redis=newPredis\Client([
Node Redis and IORedis use TLS 1.2 by default.
]); ```
-* PHP 7.3 and later versions: Predis uses the latest TLS version.
+- PHP 7.3 and later versions: Predis uses the latest TLS version.
#### PhpRedis
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Previously updated : 05/31/2023- Last updated : 09/12/2023 # What's New in Azure Cache for Redis
+## September 2023
+
+### Remove TLS 1.0 and 1.1 from use with Azure Cache for Redis
+
+In line with the industry-wide push toward the exclusive use of Transport Layer Security (TLS) version 1.2 or later, Azure Cache for Redis will require TLS 1.2 starting in October 2024.
+
+As a part of this effort, you can expect the following changes to Azure Cache for Redis:
+
+- _Phase 1_: Azure Cache for Redis stops offering TLS 1.0 and 1.1 as options for the minimum TLS version setting on newly created caches. Existing cache instances won't be updated at this point. You can still use the Azure portal or other management APIs to [change the minimum TLS version](cache-configure.md#access-ports) to 1.0 or 1.1 for backward compatibility.
+- _Phase 2_: Azure Cache for Redis stops supporting TLS 1.1 and TLS 1.0 starting October 1, 2024. After this change, your application must use TLS 1.2 or later to communicate with your cache. The Azure Cache for Redis service is expected to be available while we update the minimum TLS version for all caches to 1.2.
+
+For more information, see [Remove TLS 1.0 and 1.1 from use with Azure Cache for Redis](cache-remove-tls-10-11.md).
+ ## June 2023
-Azure Active Directory for authentication and role-based access control are available across regions that support Azure Cache for Redis.
+Azure Active Directory for authentication and role-based access control is available across regions that support Azure Cache for Redis.
## May 2023
For more information, see [Configure clustering for Azure Cache for Redis instan
### 99th percentile latency metric (preview)
-A new metric is available to track the worst-case latency of server-side commands in Azure Cache for Redis instances. Latency is measured by using `PING` commands and tracking response times. This metric can be used to track the health of your cache instance and to see if long-running commands are compromising latency performance.
+A new metric is available to track the worst-case latency of server-side commands in Azure Cache for Redis instances. Latency is measured by using `PING` commands and tracking response times. This metric can be used to track the health of your cache instance and to see if long-running commands are compromising latency performance.
For more information, see [Monitor Azure Cache for Redis](cache-how-to-monitor.md#list-of-metrics).
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-functions Create First Function Cli Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md
Before you begin, you must have the following prerequisites:
+ The Azure [Az PowerShell module](/powershell/azure/install-azure-powershell) version 5.9.0 or later. ::: zone pivot="nodejs-model-v3"
-+ [Node.js](https://nodejs.org/) version 18 or 16.
++ [Node.js](https://nodejs.org/) version 20 (preview), 18 or 16. ::: zone-end ::: zone pivot="nodejs-model-v4"
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime node --runtime-version 18 --functions-version 4 --name <APP_NAME> --storage-account <STORAGE_NAME> ```
- The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. It's recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`.
+ The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. It's recommended that you use the latest LTS version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`.
# [Azure PowerShell](#tab/azure-powershell)
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime node -RuntimeVersion 18 -FunctionsVersion 4 -Location <REGION> ```
- The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. It's recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`.
+ The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. It's recommended that you use the latest LTS version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`.
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
description: Learn how to develop and test Azure Functions by using the Azure Fu
ms.devlang: csharp, java, javascript, powershell, python Previously updated : 06/19/2022 Last updated : 09/01/2023
+zone_pivot_groups: programming-languages-set-functions
#Customer intent: As an Azure Functions developer, I want to understand how Visual Studio Code supports Azure Functions so that I can more efficiently create, publish, and maintain my Functions projects.
The Azure Functions extension provides these benefits:
* Publish your Azure Functions project directly to Azure. * Write your functions in various languages while taking advantage of the benefits of Visual Studio Code.
-The extension can be used with the following languages, which are supported by the Azure Functions runtime starting with version 2.x:
-
-* [C# compiled](functions-dotnet-class-library.md)
-* [C# script](functions-reference-csharp.md)<sup>*</sup>
-* [JavaScript](functions-reference-node.md?tabs=javascript)
-* [Java](functions-reference-java.md)
-* [PowerShell](functions-reference-powershell.md)
-* [Python](functions-reference-python.md)
-* [TypeScript](functions-reference-node.md?tabs=typescript)
-
-<sup>*</sup>Requires that you [set C# script as your default project language](#c-script-projects).
-
-In this article, examples are currently available only for JavaScript (Node.js) and C# class library functions.
-
-This article provides details about how to use the Azure Functions extension to develop functions and publish them to Azure. Before you read this article, you should [create your first function by using Visual Studio Code](./create-first-function-vs-code-csharp.md).
+>You're viewing the C# version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
+If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-csharp.md).
+>You're viewing the Java version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
+If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-java.md).
+>You're viewing the JavaScript version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
+If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-node.md).
+>You're viewing the PowerShell version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
+If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-powershell.md).
+>You're viewing the Python version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
+If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-python.md).
+>You're viewing the TypeScript version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
+If you want to get started right away, complete the [Visual Studio Code quickstart article](./create-first-function-vs-code-typescript.md).
> [!IMPORTANT] > Don't mix local development and portal development for a single function app. When you publish from a local project to a function app, the deployment process overwrites any functions that you developed in the portal.
This article provides details about how to use the Azure Functions extension to
These prerequisites are only required to [run and debug your functions locally](#run-functions-locally). They aren't required to create or publish projects to Azure Functions.
-# [C\#](#tab/csharp)
-
-* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-
-* The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
-
-* [.NET Core CLI tools](/dotnet/core/tools/?tabs=netcore2x).
-
-# [Java](#tab/java)
-
-* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-
-* [Debugger for Java extension](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-debug).
-
-* [Java](/azure/developer/jav#java-versions).
-
-* [Maven 3 or later](https://maven.apache.org/).
-
-# [JavaScript](#tab/nodejs)
++ The [Azure Functions Core Tools](functions-run-local.md), which enables an integrated local debugging experience. When using the Azure Functions extension, the easiest way to install Core Tools is by running the `Azure Functions: Install or Update Azure Functions Core Tools` command from the command palette. ++ The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
-* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
++ [.NET (CLI)](/dotnet/core/tools/), which is included in the .NET SDK.++ [Debugger for Java extension](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-debug).
-* [Node.js](https://nodejs.org/), one of the [supported versions](functions-reference-node.md#node-version). Use the `node --version` command to check your version.
++ [Java](/azure/developer/jav#java-versions).
-# [PowerShell](#tab/powershell)
++ [Maven 3 or later](https://maven.apache.org/).++ [Node.js](https://nodejs.org/), one of the [supported versions](functions-reference-node.md#node-version). Use the `node --version` command to check your version.++ [PowerShell 7.2](/powershell/scripting/install/installing-powershell-core-on-windows) recommended. For version information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
-* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools include the entire Azure Functions runtime, so download and installation might take some time.
++ [.NET 6.0 runtime](https://dotnet.microsoft.com/download).
-* [PowerShell 7.2](/powershell/scripting/install/installing-powershell-core-on-windows) recommended. For version information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
++ The [PowerShell extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell). ++ [Python](https://www.python.org/downloads/), one of the [supported versions](functions-reference-python.md#python-version).
-* [.NET 6.0 runtime](https://dotnet.microsoft.com/download).
-
-* The [PowerShell extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell).
-
-# [Python](#tab/python)
-
-* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools include the entire Azure Functions runtime, so download and installation might take some time.
-
-* [Python](https://www.python.org/downloads/), one of the [supported versions](functions-reference-python.md#python-version).
-
-* [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.
++ [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code. [!INCLUDE [functions-x86-emulation-on-arm64-note](../../includes/functions-x86-emulation-on-arm64-note.md)]-- ## Create an Azure Functions project
The Functions extension lets you create a function app project, along with your
1. A function is created in your chosen language and in the template for an HTTP-triggered function.
- :::image type="content" source="./media/functions-develop-vs-code/new-function-created.png" alt-text="Screenshot for H T T P-triggered function template in Visual Studio Code.":::
- ### Generated project files The project template creates a project in your chosen language and installs required dependencies. For any language, the new project has these files:
The project template creates a project in your chosen language and installs requ
Depending on your language, these other files are created:
-# [C\#](#tab/csharp)
-
-* [HttpExample.cs class library file](functions-dotnet-class-library.md#functions-class-library-project) that implements the function.
-
-# [Java](#tab/java)
-
-* A pom.xml file in the root folder that defines the project and deployment parameters, including project dependencies and the [Java version](functions-reference-java.md#java-versions). The pom.xml also contains information about the Azure resources that are created during a deployment.
-
-* A [Functions.java file](functions-reference-java.md#triggers-and-annotations) in your src path that implements the function.
-
-# [JavaScript](#tab/nodejs)
+An HttpExample.cs class library file, the contents of which vary depending on whether your project runs in an [isolated worker process](dotnet-isolated-process-guide.md#net-isolated-worker-process-project) or [in-process](functions-dotnet-class-library.md#functions-class-library-project) with the Functions host.
++ A pom.xml file in the root folder that defines the project and deployment parameters, including project dependencies and the [Java version](functions-reference-java.md#java-versions). The pom.xml also contains information about the Azure resources that are created during a deployment.
-* A package.json file in the root folder.
++ A [Functions.java file](functions-reference-java.md#triggers-and-annotations) in your src path that implements the function.
+Files generated depend on the chosen Node.js programming model for Functions:
+### [v3](#tab/node-v3)
++ A package.json file in the root folder.
-* An HttpExample folder that contains the [function.json definition file](functions-reference-node.md#folder-structure) and the [index.js file](functions-reference-node.md#exporting-a-function), a Node.js file that contains the function code.
++ An HttpExample folder that contains:
-# [PowerShell](#tab/powershell)
+ + The [function.json definition file](functions-reference-node.md#folder-structure)
+ + An [index.js file](functions-reference-node.md#exporting-a-function), which contains the function code.
-* An HttpExample folder that contains the [function.json definition file](functions-reference-powershell.md#folder-structure) and the run.ps1 file, which contains the function code.
+### [v4](#tab/node-v4)
-# [Python](#tab/python)
++ A package.json file in the root folder.
-* A project-level requirements.txt file that lists packages required by Functions.
-
-* An HttpExample folder that contains the [function.json definition file](functions-reference-python.md#folder-structure) and the \_\_init\_\_.py file, which contains the function code.
++ A named .js file in the _src\functions_ folder, which contains both the function definition and your function code.
-At this point, you can [add input and output bindings](#add-input-and-output-bindings) to your function.
-You can also [add a new function to your project](#add-a-function-to-your-project).
+An HttpExample folder that contains:
-## Install binding extensions
++ The [function.json definition file](functions-reference-powershell.md#folder-structure)++ A run.ps1 file, which contains the function code.
-Except for HTTP and timer triggers, bindings are implemented in extension packages. You must install the extension packages for the triggers and bindings that need them. The process for installing binding extensions depends on your project's language.
+Files generated depend on the chosen Python programming model for Functions:
+
+### [v2](#tab/python-v2)
-# [C\#](#tab/csharp)
++ A project-level requirements.txt file that lists packages required by Functions.
-Run the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to install the extension packages that you need in your project. The following example demonstrates how you add a binding for an [in-process class library](functions-dotnet-class-library.md):
++ A function_app.py file that contains both the function definition and code.
-```terminal
-dotnet add package Microsoft.Azure.WebJobs.Extensions.<BINDING_TYPE_NAME> --version <TARGET_VERSION>
-```
+### [v1](#tab/python-v1)
-The following example demonstrates how you add a binding for an [isolated-process class library](dotnet-isolated-process-guide.md):
++ A project-level requirements.txt file that lists packages required by Functions.
-```terminal
-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.<BINDING_TYPE_NAME> --version <TARGET_VERSION>
-```
-
-In either case, replace `<BINDING_TYPE_NAME>` with the name of the package that contains the binding you need. You can find the desired binding reference article in the [list of supported bindings](./functions-triggers-bindings.md#supported-bindings).
-
-Replace `<TARGET_VERSION>` in the example with a specific version of the package, such as `3.0.0-beta5`. Valid versions are listed on the individual package pages at [NuGet.org](https://nuget.org). The major versions that correspond to Functions runtime 1.x or 2.x are specified in the reference article for the binding.
-
-# [Java](#tab/java)
-++ An HttpExample folder that contains:
+ + The [function.json definition file](functions-reference-python.md#folder-structure)
+ + An \_\_init\_\_.py file, which contains the function code.
-# [JavaScript](#tab/nodejs)
--
-# [PowerShell](#tab/powershell)
-+
-# [Python](#tab/python)
+At this point, you can do one of these tasks:
-++ [Add input or output bindings to an existing function](#add-input-and-output-bindings).++ [Add a new function to your project](#add-a-function-to-your-project).++ [Run your functions locally](#run-functions-locally).++ [Publish your project to Azure](#publish-to-azure). ## Add a function to your project You can add a new function to an existing project by using one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
-The results of this action depend on your project's language:
-
-# [C\#](#tab/csharp)
-
-A new C# class library (.cs) file is added to your project.
-
-# [Java](#tab/java)
-
-A new Java (.java) file is added to your project.
+The result of this action is that a new C# class library (.cs) file is added to your project.
+The result of this action is that a new Java (.java) file is added to your project.
+The results of this action depend on the Node.js model version.
-# [JavaScript](#tab/nodejs)
+### [v3](#tab/node-v3)
A new folder is created in the project. The folder contains a new function.json file and the new JavaScript code file.
-# [PowerShell](#tab/powershell)
+### [v4](#tab/node-v4)
-A new folder is created in the project. The folder contains a new function.json file and the new PowerShell code file.
++ A package.json file in the root folder.
-# [Python](#tab/python)
-
-The results depend on the Python programming model. For more information, see the [Azure Functions Python developer guide](./functions-reference-python.md).
-
-**Python v1**: A new folder is created in the project. The folder contains a new function.json file and the new Python code file.
-
-**Python v2**: New function code is added either to the default function_app.py file or to another Python file you selected.
++ A named .js file in the _src\functions_ folder, which contains both the function definition and your function code.
+The result of this action is that a new folder is created in the project. The folder contains a new function.json file and the new PowerShell code file.
+The results of this action depend on the Python model version.
-## <a name="add-input-and-output-bindings"></a>Connect to services
-
-You can connect your function to other Azure services by adding input and output bindings. Bindings connect your function to other services without you having to write the connection code. The process for adding bindings depends on your project's language. To learn more about bindings, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
-
-The following examples connect to a storage queue named `outqueue`, where the connection string for the storage account is set in the `MyStorageConnection` application setting in local.settings.json.
-
-# [C\#](#tab/csharp)
+### [v2](#tab/python-v2)
-Update the function method to add the following parameter to the `Run` method definition:
+New function code is added either to the function_app.py file (the default behavior) or to another Python file you selected.
+### [v1](#tab/python-v1)
-The `msg` parameter is an `ICollector<T>` type, which represents a collection of messages that are written to an output binding when the function completes. The following code adds a message to the collection:
+A new folder is created in the project. The folder contains a new function.json file and the new Python code file.
+
- Messages are sent to the queue when the function completes.
+## <a name="add-input-and-output-bindings"></a>Connect to services
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=csharp) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
+You can connect your function to other Azure services by adding input and output bindings. Bindings connect your function to other services without you having to write the connection code.
-# [Java](#tab/java)
+For example, the way you define an output binding that writes data to a storage queue depends on your process model:
-Update the function method to add the following parameter to the `Run` method definition:
+### [In-process](#tab/in-process)
+Update the function method to add a binding parameter defined by using the `Queue` attribute. You can use an `ICollector<T>` type to represent a collection of messages.
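A minimal sketch of what this can look like in the in-process model (the function, the `outqueue` queue name, and the `MyStorageConnection` setting name are illustrative, and your project needs the Storage queues binding extension installed):

```csharp
using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HttpExample
{
    [FunctionName("HttpExample")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        // Output binding: messages added to this collector are written to the
        // "outqueue" queue, using the storage connection named by the
        // MyStorageConnection application setting.
        [Queue("outqueue", Connection = "MyStorageConnection")] ICollector<string> msg)
    {
        msg.Add($"Request received at {DateTime.UtcNow:O}");
        return new OkResult();
    }
}
```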
-The `msg` parameter is an `OutputBinding<T>` type, where `T` is a string that is written to an output binding when the function completes. The following code sets the message in the output binding:
+### [Isolated process](#tab/isolated-process)
+Update the function method to add a binding parameter defined by using the `QueueOutput` attribute. You can use a `MultiResponse` object to return multiple messages or multiple output streams.
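A minimal sketch for the isolated worker model, binding the return value to the queue (the names `outqueue` and `MyStorageConnection` are illustrative; use a `MultiResponse`-style class instead when the function must return both an HTTP response and queue messages):

```csharp
using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class HttpExample
{
    [Function("HttpExample")]
    // The QueueOutput attribute binds the method's return value: the returned
    // string is written to "outqueue" by using the MyStorageConnection setting.
    [QueueOutput("outqueue", Connection = "MyStorageConnection")]
    public string Run([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        return $"Request received at {DateTime.UtcNow:O}";
    }
}
```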
-This message is sent to the queue when the function completes.
+
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=java) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=java).
+For example, to add an output binding that writes data to a storage queue, you update the function method to add a binding parameter defined by using the [`QueueOutput`](/java/api/com.microsoft.azure.functions.annotation.queueoutput) annotation. The [`OutputBinding<T>`](/java/api/com.microsoft.azure.functions.outputbinding) object represents the messages that are written to an output binding when the function completes.
+For example, the way you define the output binding that writes data to a storage queue depends on your Node.js model version:
-# [JavaScript](#tab/nodejs)
+### [v3](#tab/node-v3)
[!INCLUDE [functions-add-output-binding-vs-code](../../includes/functions-add-output-binding-vs-code.md)]
-In your function code, the `msg` binding is accessed from the `context`, as in this example:
+### [v4](#tab/node-v4)
+With the Node.js v4 model, you manually add a `return:` option to the function definition by using the `storageQueue` function on the `output` object, which defines the storage queue to which the `return` output is written. The output is written when the function completes.
-This message is sent to the queue when the function completes.
-
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=javascript) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=javascript).
-
-# [PowerShell](#tab/powershell)
+
[!INCLUDE [functions-add-output-binding-vs-code](../../includes/functions-add-output-binding-vs-code.md)]
+For example, the way you define the output binding that writes data to a storage queue depends on your Python model version:
-
-This message is sent to the queue when the function completes.
+### [v2](#tab/python-v2)
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=powershell) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=powershell).
+The `@queue_output` decorator on the function is used to define a named binding parameter for the output to the storage queue, where `func.Out` defines what output is written.
-# [Python](#tab/python)
+### [v1](#tab/python-v1)
[!INCLUDE [functions-add-output-binding-vs-code](../../includes/functions-add-output-binding-vs-code.md)]
-Update the `Main` definition to add an output parameter `msg: func.Out[func.QueueMessage]` so that the definition looks like the following example:
+
-
-The following code adds string data from the request to the output queue:
--
-This message is sent to the queue when the function completes.
-
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=python) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=python).
-- [!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)]
To learn more, see the [Queue storage output binding reference article](function
Before you can publish your Functions project to Azure, you must have a function app and related resources in your Azure subscription to run your code. The function app provides an execution context for your functions. When you publish to a function app in Azure from Visual Studio Code, the project is packaged and deployed to the selected function app in your Azure subscription.
-When you create a function app in Azure, you can choose either a quick function app create path using defaults or an advanced path. This way you'll have more control over the remote resources created.
+When you create a function app in Azure, you can choose either a quick function app create path using defaults or an advanced path, which gives you more control over the remote resources created.
### Quick function app create
When the project is running, you can use the **Execute Function Now...** feature
1. When the function runs locally and after the response is received, a notification is raised in Visual Studio Code. Information about the function execution is shown in **Terminal** panel.
-Running functions locally doesn't require using keys.
+Keys aren't required when running locally, which applies to both function keys and admin-level keys.
[!INCLUDE [functions-local-settings-file](../../includes/functions-local-settings-file.md)]
By default, these settings aren't migrated automatically when the project is pub
Values in **ConnectionStrings** are never published.
-The function application settings values can also be read in your code as environment variables. For more information, see the Environment variables sections of these language-specific reference articles:
+### [Isolated process](#tab/isolated-process)
+The function application settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-dotnet-class-library.md#environment-variables).
-* [C# precompiled](functions-dotnet-class-library.md#environment-variables)
-* [C# script (.csx)](functions-reference-csharp.md#environment-variables)
-* [Java](functions-reference-java.md#environment-variables)
-* [JavaScript](functions-reference-node.md#environment-variables)
-* [PowerShell](functions-reference-powershell.md#environment-variables)
-* [Python](functions-reference-python.md#environment-variables)
+### [In-process](#tab/in-process)
+The function application settings values can also be read in your code as environment variables as with any ASP.NET Core app.
++ The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-java.md#environment-variables).
++ The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-node.md#environment-variables).
++ The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-powershell.md#environment-variables).
++ The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-python.md#environment-variables).

## Application settings in Azure
If you've created application settings in Azure, you can download them into your
As with uploading, if the local file is encrypted, it's decrypted, updated, and encrypted again. If there are settings that have conflicting values in the two locations, you're prompted to choose how to proceed.
+## Install binding extensions
+
+Except for HTTP and timer triggers, bindings are implemented in extension packages.
+
+You must explicitly install the extension packages for the triggers and bindings that need them. The specific package you install depends on your project's process model.
+
+### [Isolated process](#tab/isolated-process)
+
+Run the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to install the extension packages that you need in your project. This template demonstrates how you add a binding for an [isolated-process class library](dotnet-isolated-process-guide.md):
+
+```terminal
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.<BINDING_TYPE_NAME> --version <TARGET_VERSION>
+```
+
+### [In-process](#tab/in-process)
+
+Run the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to install the extension packages that you need in your project. This template demonstrates how you add a binding for an [in-process class library](functions-dotnet-class-library.md):
+
+```terminal
+dotnet add package Microsoft.Azure.WebJobs.Extensions.<BINDING_TYPE_NAME> --version <TARGET_VERSION>
+```
+++
+Replace `<BINDING_TYPE_NAME>` with the name of the package that contains the binding you need. You can find the desired binding reference article in the [list of supported bindings](./functions-triggers-bindings.md#supported-bindings).
+
+Replace `<TARGET_VERSION>` in the example with a specific version of the package, such as `3.0.0-beta5`. Valid versions are listed on the individual package pages at [NuGet.org](https://nuget.org). The major versions that correspond to the current Functions runtime are specified in the reference article for the binding.
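For example, to add the Azure Storage queues binding to an isolated worker project, the command might look like `dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues`; check the package page on NuGet.org for the version that matches your Functions runtime.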
+
+C# script uses [extension bundles](functions-bindings-register.md#extension-bundles).
++
+If for some reason you can't use an extension bundle to install binding extensions for your project, see [Explicitly install extensions](functions-bindings-register.md#explicitly-install-extensions).
++ ## Monitoring functions

When you [run functions locally](#run-functions-locally), log data is streamed to the Terminal console. You can also get log data when your Functions project is running in a function app in Azure. You can connect to streaming logs in Azure to see near-real-time log data. You should enable Application Insights for a more complete understanding of how your function app is behaving.
When you're developing an application, it's often useful to see logging informat
:::image type="content" source="media/functions-develop-vs-code/streaming-logs-vscode-console.png" alt-text="Screenshot for streaming logs output for H T T P trigger.":::
-To learn more, see [Streaming logs](functions-monitoring.md#streaming-logs).
--
-> [!NOTE]
-> Streaming logs support only a single instance of the Functions host. When your function is scaled to multiple instances, data from other instances isn't shown in the log stream. [Live Metrics Stream](../azure-monitor/app/live-stream.md) in Application Insights does support multiple instances. While also in near-real time, streaming analytics is based on [sampled data](configure-monitoring.md#configure-sampling).
+To learn more, see [Streaming logs](functions-monitoring.md?tabs=vs-code#streaming-logs).
### Application Insights
-We recommend that you monitor the execution of your functions by integrating your function app with Application Insights. When you create a function app in the Azure portal, this integration occurs by default. When you create your function app during Visual Studio publishing, you need to integrate Application Insights yourself. To learn how, see [Enable Application Insights integration](configure-monitoring.md#enable-application-insights-integration).
+You should monitor the execution of your functions by integrating your function app with Application Insights. When you create a function app in the Azure portal, this integration occurs by default. When you create your function app during Visual Studio publishing, you need to integrate Application Insights yourself. To learn how, see [Enable Application Insights integration](configure-monitoring.md#enable-application-insights-integration).
To learn more about monitoring using Application Insights, see [Monitor Azure Functions](functions-monitoring.md).
Now that you've configured the Terminal with Rosetta to run x86 emulation for Py
![Screenshot of starting a new Rosetta terminal in Visual Studio Code.](./media/functions-develop-vs-code/vs-code-rosetta.png)

## C\# script projects

By default, all C# projects are created as [C# compiled class library projects](functions-dotnet-class-library.md). If you prefer to work with C# script projects instead, you must select C# script as the default language in the Azure Functions extension settings:
By default, all C# projects are created as [C# compiled class library projects](
1. Select **C#Script** from **Azure Function: Project Language**.

After you complete these steps, calls made to the underlying Core Tools include the `--csx` option, which generates and publishes C# script (.csx) project files. When you have this default language specified, all projects that you create default to C# script projects. You're not prompted to choose a project language when a default is set. To create projects in other languages, you must change this setting or remove it from the user settings.json file. After you remove this setting, you're again prompted to choose your language when you create a project.

## Command palette reference
The Azure Functions extension provides a useful graphical interface in the area
| **Disconnect from Repo** | Removes the [continuous deployment](functions-continuous-deployment.md) connection between a function app in Azure and a source control repository. |
| **Download Remote Settings** | Downloads settings from the chosen function app in Azure into your local.settings.json file. If the local file is encrypted, it's decrypted, updated, and encrypted again. If there are settings that have conflicting values in the two locations, you're prompted to choose how to proceed. Be sure to save changes to your local.settings.json file before you run this command. |
| **Edit settings** | Changes the value of an existing function app setting in Azure. This command doesn't affect settings in your local.settings.json file. |
-| **Encrypt settings** | Encrypts individual items in the `Values` array in the [local settings](#local-settings). In this file, `IsEncrypted` is also set to `true`, which specifies that the local runtime will decrypt settings before using them. Encrypt local settings to reduce the risk of leaking valuable information. In Azure, application settings are always stored encrypted. |
+| **Encrypt settings** | Encrypts individual items in the `Values` array in the [local settings](#local-settings). In this file, `IsEncrypted` is also set to `true`, which specifies that the local runtime decrypts settings before using them. Encrypt local settings to reduce the risk of leaking valuable information. In Azure, application settings are always stored encrypted. |
| **Execute Function Now** | Manually starts a function using admin APIs. This command is used for testing, both locally during debugging and against functions running in Azure. When a function in Azure starts, the extension first automatically obtains an admin key, which it uses to call the remote admin APIs that start functions in Azure. The body of the message sent to the API depends on the type of trigger. Timer triggers don't require you to pass any data. |
| **Initialize Project for Use with VS Code** | Adds the required Visual Studio Code project files to an existing Functions project. Use this command to work with a project that you created by using Core Tools. |
| **Install or Update Azure Functions Core Tools** | Installs or updates [Azure Functions Core Tools], which is used to run functions locally. |
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
In code, assemblies are referenced like the following example:
To reference a custom assembly, you can use either a *shared* assembly or a *private* assembly:
-* Shared assemblies are shared across all functions within a function app. To reference a custom assembly, upload the assembly to a folder named `bin` in your [function app root folder](functions-reference.md#folder-structure) (wwwroot).
+* Shared assemblies are shared across all functions within a function app. To reference a custom assembly, upload the assembly to a folder named `bin` in the root folder (wwwroot) of your function app.
* Private assemblies are part of a given function's context, and support side-loading of different versions. Private assemblies should be uploaded in a `bin` folder in the function directory. Reference the assemblies using the file name, such as `#r "MyAssembly.dll"`.
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
The following table shows each version of the Node.js programming model along wi
| [Programming Model Version](https://www.npmjs.com/package/@azure/functions?activeTab=versions) | Support Level | [Functions Runtime Version](./functions-versions.md) | [Node.js Version](https://github.com/nodejs/release#release-schedule) | Description | | - | - | | | |
-| 4.x | Preview | 4.16+ | 18.x | Supports a flexible file structure and code-centric approach to triggers and bindings. |
-| 3.x | GA | 4.x | 18.x, 16.x, 14.x | Requires a specific file structure with your triggers and bindings declared in a "function.json" file |
+| 4.x | Preview | 4.16+ | 20.x (Preview), 18.x | Supports a flexible file structure and code-centric approach to triggers and bindings. |
+| 3.x | GA | 4.x | 20.x (Preview), 18.x, 16.x, 14.x | Requires a specific file structure with your triggers and bindings declared in a "function.json" file |
| 2.x | GA (EOL) | 3.x | 14.x, 12.x, 10.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. | | 1.x | GA (EOL) | 2.x | 10.x, 8.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. |
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
Title: Guidance for developing Azure Functions
description: Learn the Azure Functions concepts and techniques that you need to develop functions in Azure, across all programming languages and bindings. ms.assetid: d8efe41a-bef8-4167-ba97-f3e016fcd39e Previously updated : 08/10/2023 Last updated : 09/06/2023
+zone_pivot_groups: programming-languages-set-functions
+ # Azure Functions developer guide
-In Azure Functions, specific functions share a few core technical concepts and components, regardless of the language or binding you use. Before you jump into learning details specific to a given language or binding, be sure to read through this overview that applies to all of them.
+
+In Azure Functions, all functions share some core technical concepts and components, regardless of your preferred language or development environment. This article is language-specific. Choose your preferred language at the top of the article.
This article assumes that you've already read the [Azure Functions overview](functions-overview.md).
-## Function code
-A *function* is the primary concept in Azure Functions. A function contains two important pieces - your code, which can be written in various languages, and some config, the function.json file. For compiled languages, this config file is generated automatically from annotations in your code. For scripting languages, you must provide the config file yourself.
+If you prefer to jump right in, you can complete a quickstart tutorial using [Visual Studio](./functions-create-your-first-function-visual-studio.md), [Visual Studio Code](./create-first-function-vs-code-csharp.md), or from the [command prompt](./create-first-function-cli-csharp.md).
+If you prefer to jump right in, you can complete a quickstart tutorial using [Maven](create-first-function-cli-java.md) (command line), [Eclipse](functions-create-maven-eclipse.md), [IntelliJ IDEA](functions-create-maven-intellij.md), [Gradle](functions-create-first-java-gradle.md), [Quarkus](functions-create-first-quarkus.md), or [Spring Cloud](/azure/developer/java/spring-framework/getting-started-with-spring-cloud-function-in-azure?toc=/azure/azure-functions/toc.json).
+If you prefer to jump right in, you can complete a quickstart tutorial using [Visual Studio Code](./create-first-function-vs-code-node.md) or from the [command prompt](./create-first-function-cli-node.md).
+If you prefer to jump right in, you can complete a quickstart tutorial using [Visual Studio Code](./create-first-function-vs-code-typescript.md) or from the [command prompt](./create-first-function-cli-typescript.md).
+If you prefer to jump right in, you can complete a quickstart tutorial using [Visual Studio Code](./create-first-function-vs-code-powershell.md) or from the [command prompt](./create-first-function-cli-powershell.md).
+If you prefer to jump right in, you can complete a quickstart tutorial using [Visual Studio Code](./create-first-function-vs-code-python.md) or from the [command prompt](./create-first-function-cli-python.md).
-The function.json file defines the function's trigger, bindings, and other configuration settings. Every function has one and only one trigger. The runtime uses this config file to determine the events to monitor and how to pass data into and return data from a function execution. The following is an example function.json file.
+## Code project
-```json
-{
- "disabled":false,
- "bindings":[
- // ... bindings here
- {
- "type": "bindingType",
- "direction": "in",
- "name": "myParamName",
- // ... more depending on binding
- }
- ]
-}
-```
+At the core of Azure Functions is a language-specific code project that implements one or more units of code execution called _functions_. Functions are simply methods that run in the Azure cloud based on events, in response to HTTP requests, or on a schedule. Think of your Azure Functions code project as a mechanism for organizing, deploying, and collectively managing your individual functions in the project when they're running in Azure. For more information, see [Organize your functions](functions-best-practices.md#organize-your-functions).
-For more information, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
+The way that you lay out your code project and how you indicate which methods in your project are functions depends on the development language of your project. For detailed language-specific guidance, see the [C# developers guide](dotnet-isolated-process-guide.md).
+The way that you lay out your code project and how you indicate which methods in your project are functions depends on the development language of your project. For language-specific guidance, see the [Java developers guide](functions-reference-java.md).
+The way that you lay out your code project and how you indicate which methods in your project are functions depends on the development language of your project. For language-specific guidance, see the [Node.js developers guide](functions-reference-node.md).
+The way that you lay out your code project and how you indicate which methods in your project are functions depends on the development language of your project. For language-specific guidance, see the [PowerShell developers guide](functions-reference-powershell.md).
+The way that you lay out your code project and how you indicate which methods in your project are functions depends on the development language of your project. For language-specific guidance, see the [Python developers guide](functions-reference-python.md).
+All functions must have a trigger, which defines how the function starts and can provide input to the function. Your functions can optionally define input and output bindings. These bindings simplify connections to other services without you having to work with client SDKs. For more information, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
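As a rough illustration (shown here for the C# isolated worker model; the schedule and names are arbitrary), a timer trigger alone is enough to start a function on a schedule:

```csharp
using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class TimerExample
{
    private readonly ILogger<TimerExample> _logger;

    public TimerExample(ILogger<TimerExample> logger)
    {
        _logger = logger;
    }

    // The timer trigger is what starts this function: it runs on the CRON
    // schedule (every five minutes here), with no other input required.
    [Function("TimerExample")]
    public void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        _logger.LogInformation("Timer trigger fired at {Time}", DateTime.UtcNow);
    }
}
```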
-The `bindings` property is where you configure both triggers and bindings. Each binding shares a few common settings and some settings, which are specific to a particular type of binding. Every binding requires the following settings:
+Azure Functions provides a set of language-specific project and function templates that make it easy to create new code projects and add functions to your project. You can use any of the tools that support Azure Functions development to generate new apps and functions using these templates.
-| Property | Values | Type | Comments|
-|||||
-| type | Name of binding.<br><br>For example, `queueTrigger`. | string | |
-| direction | `in`, `out` | string | Indicates whether the binding is for receiving data into the function or sending data from the function. |
-| name | Function identifier.<br><br>For example, `myQueue`. | string | The name that is used for the bound data in the function. For C#, this is an argument name; for JavaScript, it's the key in a key/value list. |
+## Development tools
-## Function app
-A function app provides an execution context in Azure in which your functions run. As such, it's the unit of deployment and management for your functions. A function app is composed of one or more individual functions that are managed, deployed, and scaled together. All of the functions in a function app share the same pricing plan, deployment method, and runtime version. Think of a function app as a way to organize and collectively manage your functions. To learn more, see [How to manage a function app](functions-how-to-use-azure-function-app-settings.md).
+The following tools provide an integrated development and publishing experience for Azure Functions in your preferred language:
-> [!NOTE]
-> All functions in a function app must be authored in the same language. In [previous versions](functions-versions.md) of the Azure Functions runtime, this wasn't required.
++ [Visual Studio](./functions-develop-vs.md)
++ [Visual Studio Code](./functions-develop-vs-code.md)
-## Folder structure
++ [Azure Functions Core Tools](./functions-develop-local.md) (command prompt)
++ [Eclipse](functions-create-maven-eclipse.md)
-The above is the default (and recommended) folder structure for a Function app. If you wish to change the file location of a function's code, modify the `scriptFile` section of the _function.json_ file. We also recommend using [package deployment](deployment-zip-push.md) to deploy your project to your function app in Azure. You can also use existing tools like [continuous integration and deployment](functions-continuous-deployment.md) and Azure DevOps.
++ [Gradle](functions-create-first-java-gradle.md)
-> [!NOTE]
-> If deploying a package manually, make sure to deploy your _host.json_ file and function folders directly to the `wwwroot` folder. Do not include the `wwwroot` folder in your deployments. Otherwise, you end up with `wwwroot\wwwroot` folders.
++ [IntelliJ IDEA](functions-create-maven-intellij.md)
-#### Use local tools and publishing
-Function apps can be authored and published using a variety of tools, including [Visual Studio](./functions-develop-vs.md), [Visual Studio Code](./create-first-function-vs-code-csharp.md), [IntelliJ](./functions-create-maven-intellij.md), [Eclipse](./functions-create-maven-eclipse.md), and the [Azure Functions Core Tools](./functions-develop-local.md). For more information, see [Code and test Azure Functions locally](./functions-develop-local.md).
++ [Quarkus](functions-create-first-quarkus.md)
-<!--NOTE: I've removed documentation on FTP, because it does not sync triggers on the consumption plan --glenga -->
++ [Spring Cloud](/azure/developer/java/spring-framework/getting-started-with-spring-cloud-function-in-azure?toc=/azure/azure-functions/toc.json)
-## <a id="fileupdate"></a> How to edit functions in the Azure portal
-The Functions editor built into the Azure portal lets you update your code and your *function.json* file directly inline. This is recommended only for small changes or proofs of concept - best practice is to use a local development tool like VS Code.
+These tools integrate with [Azure Functions Core Tools](./functions-develop-local.md) so that you can run and debug on your local computer using the Functions runtime. For more information, see [Code and test Azure Functions locally](./functions-develop-local.md).
-## Parallel execution
-When multiple triggering events occur faster than a single-threaded function runtime can process them, the runtime may invoke the function multiple times in parallel. If a function app is using the [Consumption hosting plan](event-driven-scaling.md), the function app could scale out automatically. Each instance of the function app, whether the app runs on the Consumption hosting plan or a regular [App Service hosting plan](../app-service/overview-hosting-plans.md), might process concurrent function invocations in parallel using multiple threads. The maximum number of concurrent function invocations in each function app instance varies based on the type of trigger being used as well as the resources used by other functions within the function app.
+<a id="fileupdate"></a> There's also an editor in the Azure portal that lets you update your code and your *function.json* definition file directly in the portal. You should only use this editor for small changes or creating proof-of-concept functions. You should always develop your functions locally, when possible. For more information, see [Create your first function in the Azure portal](functions-create-function-app-portal.md).
+Portal editing is only supported for [Node.js version 3](functions-reference-node.md?pivots=nodejs-model-v3), which uses the function.json file.
+Portal editing is only supported for [Python version 1](functions-reference-python.md?pivots=python-mode-configuration), which uses the function.json file.
-## Functions runtime versioning
+## Deployment
-You can configure the version of the Functions runtime using the `FUNCTIONS_EXTENSION_VERSION` app setting. For example, the value "~4" indicates that your function app uses 4.x as its major version. Function apps are upgraded to each new minor version as they're released. For more information, including how to view the exact version of your function app, see [How to target Azure Functions runtime versions](set-runtime-version.md).
+When you publish your code project to Azure, you're essentially deploying your project to an existing function app resource. A function app provides an execution context in Azure in which your functions run. As such, it's the unit of deployment and management for your functions. From an Azure Resource perspective, a function app is equivalent to a site resource (`Microsoft.Web/sites`) in Azure App Service, which is equivalent to a web app.
-## Repositories
-The code for Azure Functions is open source and stored in GitHub repositories:
+A function app is composed of one or more individual functions that are managed, deployed, and scaled together. All of the functions in a function app share the same [pricing plan](functions-scale.md), [deployment method](functions-deployment-technologies.md), and [runtime version](functions-versions.md). For more information, see [How to manage a function app](functions-how-to-use-azure-function-app-settings.md).
-* [Azure Functions](https://github.com/Azure/Azure-Functions)
-* [Azure Functions host](https://github.com/Azure/azure-functions-host/)
-* [Azure Functions portal](https://github.com/azure/azure-functions-ux)
-* [Azure Functions templates](https://github.com/azure/azure-functions-templates)
-* [Azure WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/)
-* [Azure WebJobs SDK Extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/)
+When the function app and any other required resources don't already exist in Azure, you first need to create these resources before you can deploy your project files. You can create these resources in one of these ways:
++ During [Visual Studio](./functions-develop-vs.md#publish-to-azure) publishing
++ Using [Visual Studio Code](./functions-develop-vs-code.md#publish-to-azure)
++ Programmatically using [Azure CLI](./scripts/functions-cli-create-serverless.md), [Azure PowerShell](./create-resources-azure-powershell.md#create-a-serverless-function-app-for-c), [ARM templates](functions-create-first-function-resource-manager.md), or [Bicep templates](functions-create-first-function-bicep.md)
++ In the [Azure portal](functions-create-function-app-portal.md)
+In addition to tool-based publishing, Functions supports other technologies for deploying source code to an existing function app. For more information, see [Deployment technologies in Azure Functions](functions-deployment-technologies.md).
+
+## Connect to services
+
+A major requirement of any cloud-based compute service is reading data from and writing data to other cloud services. Functions provides an extensive set of bindings that makes it easier for you to connect to services without having to work with client SDKs.
+
+Whether you use the binding extensions provided by Functions or you work with client SDKs directly, you securely store connection data and do not include it in your code. For more information, see [Connections](#connections).
-## Bindings
-Here's a table of all supported bindings.
+### Bindings
+Functions provides bindings for many Azure services and a few third-party services, which are implemented as extensions. For more information, see the [complete list of supported bindings](functions-triggers-bindings.md#supported-bindings).
-Having issues with errors coming from the bindings? Review the [Azure Functions Binding Error Codes](functions-bindings-error-pages.md) documentation.
+Binding extensions can support both inputs and outputs, and many triggers also act as input bindings. Bindings let you configure the connection to services so that the Functions host can handle the data access for you. For more information, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
+If you're having issues with errors coming from bindings, see the [Azure Functions Binding Error Codes](functions-bindings-error-pages.md) documentation.
+
+### Client SDKs
+
+While Functions provides bindings to simplify data access in your function code, you're still able to use a client SDK in your project to directly access a given service, if you prefer. You might need to use a client SDK directly when your functions require functionality of the underlying SDK that isn't supported by the binding extension.
+
+When using client SDKs, you should use the same process for [storing and accessing connection strings](#connections) used by binding extensions.
+When you create a client SDK instance in your functions, you should get the connection info required by the client from [Environment variables](functions-dotnet-class-library.md#environment-variables).
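For instance, in a C# isolated worker project that references the `Azure.Storage.Blobs` package, a client could be created from an application setting along these lines (the setting name is illustrative):

```csharp
using System;
using Azure.Storage.Blobs;

// Build a client SDK instance from connection info stored in an application
// setting (read here as an environment variable) rather than a hard-coded
// secret. The setting name "MyStorageConnection" is illustrative.
string connection = Environment.GetEnvironmentVariable("MyStorageConnection")
    ?? throw new InvalidOperationException("The MyStorageConnection setting isn't set.");

var blobServiceClient = new BlobServiceClient(connection);
Console.WriteLine($"Connected to storage account: {blobServiceClient.AccountName}");
```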
+When you create a client SDK instance in your functions, you should get the connection info required by the client from [Environment variables](functions-reference-java.md#environment-variables).
+When you create a client SDK instance in your functions, you should get the connection info required by the client from [Environment variables](functions-reference-node.md#environment-variables).
+When you create a client SDK instance in your functions, you should get the connection info required by the client from [Environment variables](functions-reference-powershell.md#environment-variables).
+When you create a client SDK instance in your functions, you should get the connection info required by the client from [Environment variables](functions-reference-python.md#environment-variables).
## Connections
-Your function project references connection information by name from its configuration provider. It doesn't directly accept the connection details, allowing them to be changed across environments. For example, a trigger definition might include a `connection` property. This might refer to a connection string, but you can't set the connection string directly in a `function.json`. Instead, you would set `connection` to the name of an environment variable that contains the connection string.
+As a security best practice, Azure Functions takes advantage of the application settings functionality of Azure App Service to help you more securely store strings, keys, and other tokens required to connect to other services. Application settings in Azure are stored encrypted and can be accessed at runtime by your app as environment variable name-value pairs. For triggers and bindings that require a connection property, you set the application setting name instead of the actual connection string. You can't configure a binding directly with a connection string or key.
-The default configuration provider uses environment variables. These might be set by [Application Settings](./functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) when running in the Azure Functions service, or from the [local settings file](functions-develop-local.md#local-settings-file) when developing locally.
+For example, consider a trigger definition that has a `connection` property. Instead of the connection string, you set `connection` to the name of an environment variable that contains the connection string. Using this secrets access strategy both makes your apps more secure and makes it easier for you to change connections across environments. For even more security, you can use identity-based connections.
+
+The default configuration provider uses environment variables. These variables are defined in [application settings](./functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) when running in Azure and in the [local settings file](functions-develop-local.md#local-settings-file) when developing locally.
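As an illustration (C# isolated worker model; the queue and setting names are arbitrary), the trigger's `Connection` property names the application setting instead of containing the secret:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class QueueExample
{
    private readonly ILogger<QueueExample> _logger;

    public QueueExample(ILogger<QueueExample> logger)
    {
        _logger = logger;
    }

    [Function("QueueExample")]
    public void Run(
        // "MyStorageConnection" is the *name* of an application setting that holds
        // the actual connection string (or identity-based connection settings);
        // the secret itself never appears in code.
        [QueueTrigger("myqueue-items", Connection = "MyStorageConnection")] string message)
    {
        _logger.LogInformation("Processed queue message: {Message}", message);
    }
}
```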
### Connection values
-When the connection name resolves to a single exact value, the runtime identifies the value as a _connection string_, which typically includes a secret. The details of a connection string are defined by the service to which you wish to connect.
+When the connection name resolves to a single exact value, the runtime identifies the value as a _connection string_, which typically includes a secret. The details of a connection string depend on the service to which you connect.
However, a connection name can also refer to a collection of multiple configuration items, useful for configuring [identity-based connections](#configure-an-identity-based-connection). Environment variables can be treated as a collection by using a shared prefix that ends in double underscores `__`. The group can then be referenced by setting the connection name to this prefix.
The following components support identity-based connections:
[!INCLUDE [functions-identity-based-connections-configuration](../../includes/functions-identity-based-connections-configuration.md)]
-Choose a tab below to learn about permissions for each component:
+Choose one of these tabs to learn about permissions for each component:
# [Azure Blobs extension](#tab/blob)
An identity-based connection for an Azure service accepts the following common p
| Property | Environment variable template | Description |
| --- | --- | --- |
| Token Credential | `<CONNECTION_NAME_PREFIX>__credential` | Defines how a token should be obtained for the connection. This setting should be set to `managedidentity` if your deployed Azure Function intends to use managed identity authentication. This value is only valid when a managed identity is available in the hosting environment. |
-| Client ID | `<CONNECTION_NAME_PREFIX>__clientId` | When `credential` is set to `managedidentity`, this property can be set to specify the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. It is invalid to specify both a Resource ID and a client ID. If not specified, the system-assigned identity is used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` shouldn't be set. |
+| Client ID | `<CONNECTION_NAME_PREFIX>__clientId` | When `credential` is set to `managedidentity`, this property can be set to specify the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. It's invalid to specify both a Resource ID and a client ID. If not specified, the system-assigned identity is used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` shouldn't be set. |
| Resource ID | `<CONNECTION_NAME_PREFIX>__managedIdentityResourceId` | When `credential` is set to `managedidentity`, this property can be set to specify the resource Identifier to be used when obtaining a token. The property accepts a resource identifier corresponding to the resource ID of the user-defined managed identity. It's invalid to specify both a resource ID and a client ID. If neither are specified, the system-assigned identity is used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` shouldn't be set.
-Additional options may be supported for a given connection type. Refer to the documentation for the component making the connection.
+Other options may be supported for a given connection type. Refer to the documentation for the component making the connection.
##### Local development with identity-based connections > [!NOTE]
-> Local development with identity-based connections requires updated versions of the [Azure Functions Core Tools](./functions-run-local.md). You can check your currently installed version by running `func -v`. For Functions v3, use version `3.0.3904` or later. For Functions v4, use version `4.0.3904` or later.
+> Local development with identity-based connections requires version `4.0.3904` of [Azure Functions Core Tools](functions-run-local.md), or a later version.
When you're running your function project locally, the above configuration tells the runtime to use your local developer identity. The connection attempts to get a token from the following locations, in order:
If none of these options are successful, an error occurs.
Your identity may already have some role assignments against Azure resources used for development, but those roles may not provide the necessary data access. Management roles like [Owner](../role-based-access-control/built-in-roles.md#owner) aren't sufficient. Double-check what permissions are required for connections for each component, and make sure that you have them assigned to yourself.
-In some cases, you may wish to specify use of a different identity. You can add configuration properties for the connection that point to the alternate identity based on a client ID and client Secret for an Azure Active Directory service principal. **This configuration option is not supported when hosted in the Azure Functions service.** To use an ID and secret on your local machine, define the connection with the following additional properties:
+In some cases, you may wish to specify use of a different identity. You can add configuration properties for the connection that point to the alternate identity based on a client ID and client Secret for an Azure Active Directory service principal. **This configuration option is not supported when hosted in the Azure Functions service.** To use an ID and secret on your local machine, define the connection with the following extra properties:
| Property | Environment variable template | Description |
| --- | --- | --- |
Here's an example of `local.settings.json` properties required for identity-base
#### Connecting to host storage with an identity
-The Azure Functions host uses the `AzureWebJobsStorage` connection for core behaviors such as coordinating singleton execution of timer triggers and default app key storage. This can be configured to use an identity as well.
+The Azure Functions host uses the `AzureWebJobsStorage` connection for core behaviors such as coordinating singleton execution of timer triggers and default app key storage. This connection can also be configured to use an identity.
> [!CAUTION] > Other components in Functions rely on `AzureWebJobsStorage` for default behaviors. You should not move it to an identity-based connection if you are using older versions of extensions that do not support this type of connection, including triggers and bindings for Azure Blobs, Event Hubs, and Durable Functions. Similarly, `AzureWebJobsStorage` is used for deployment artifacts when using server-side build in Linux Consumption, and if you enable this, you will need to deploy via [an external deployment package](run-functions-from-deployment-package.md).
To use an identity-based connection for `AzureWebJobsStorage`, configure the fol
[Common properties for identity-based connections](#common-properties-for-identity-based-connections) may also be set as well.
-If you're configuring `AzureWebJobsStorage` using a storage account that uses the default DNS suffix and service name for global Azure, following the `https://<accountName>.blob/queue/file/table.core.windows.net` format, you can instead set `AzureWebJobsStorage__accountName` to the name of your storage account. The endpoints for each storage service will be inferred for this account. This won't work if the storage account is in a sovereign cloud or has a custom DNS.
+If you're configuring `AzureWebJobsStorage` using a storage account that uses the default DNS suffix and service name for global Azure, following the `https://<accountName>.blob/queue/file/table.core.windows.net` format, you can instead set `AzureWebJobsStorage__accountName` to the name of your storage account. The endpoints for each storage service are inferred for this account. This doesn't work when the storage account is in a sovereign cloud or has a custom DNS.
| Setting | Description | Example value |
|--|--|--|
If you're configuring `AzureWebJobsStorage` using a storage account that uses th
## Reporting Issues [!INCLUDE [Reporting Issues](../../includes/functions-reporting-issues.md)]
+## Open source repositories
+
+The code for Azure Functions is open source, and you can find key components in these GitHub repositories:
+
+* [Azure Functions](https://github.com/Azure/Azure-Functions)
+
+* [Azure Functions host](https://github.com/Azure/azure-functions-host/)
+
+* [Azure Functions portal](https://github.com/azure/azure-functions-ux)
+
+* [Azure Functions templates](https://github.com/azure/azure-functions-templates)
+
+* [Azure WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/)
+
+* [Azure WebJobs SDK Extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/)
+* [Azure Functions .NET worker (isolated process)](https://github.com/Azure/azure-functions-dotnet-worker)
+* [Azure Functions Java worker](https://github.com/Azure/azure-functions-java-worker)
+* [Azure Functions Node.js Programming Model](https://github.com/Azure/azure-functions-nodejs-library)
+* [Azure Functions PowerShell worker](https://github.com/Azure/azure-functions-powershell-worker)
+* [Azure Functions Python worker](https://github.com/Azure/azure-functions-python-worker)
++ ## Next steps

For more information, see the following resources:
-* [Azure Functions triggers and bindings](functions-triggers-bindings.md)
-* [Code and test Azure Functions locally](./functions-develop-local.md)
-* [Best Practices for Azure Functions](functions-best-practices.md)
-* [Azure Functions C# developer reference](functions-dotnet-class-library.md)
-* [Azure Functions Node.js developer reference](functions-reference-node.md)
++ [Azure Functions scenarios](functions-scenarios.md)
++ [Code and test Azure Functions locally](./functions-develop-local.md)
++ [Best Practices for Azure Functions](functions-best-practices.md)
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
Title: Azure Functions runtime versions overview
description: Azure Functions supports multiple versions of the runtime. Learn the differences between them and how to choose the one that's right for you. Previously updated : 01/09/2023 Last updated : 09/01/2023 zone_pivot_groups: programming-languages-set-functions # Azure Functions runtime versions overview
-<a name="top"></a>Azure Functions currently supports several versions of the runtime host. The following table details the available versions, their support level, and when they should be used:
+<a name="top"></a>Azure Functions currently supports two versions of the runtime host. The following table details the currently supported runtime versions, their support level, and when they should be used:
| Version | Support level | Description |
| --- | --- | --- |
| 4.x | GA | **_Recommended runtime version for functions in all languages._** Check out [Supported language versions](#languages). |
-| 3.x | GA<sup>*</sup> | Reached the end of life (EOL) for extended support on December 13, 2022. We highly recommend you [migrate your apps to version 4.x](migrate-version-3-version-4.md) for full support. |
-| 2.x | GA<sup>*</sup> | Reached the end of life (EOL) on December 13, 2022. We highly recommend you [migrate your apps to version 4.x](migrate-version-3-version-4.md) for full support. |
| 1.x | GA | Supported only for C# apps that must use .NET Framework. This version is in maintenance mode, with enhancements provided only in later versions. We highly recommend you migrate your apps to version 4.x, which [supports .NET Framework 4.8](migrate-version-1-version-4.md?tabs=v4&pivots=programming-language-csharp).|
-<sup>*</sup>For a detailed support statement about end-of-life versions, see [this migration article](migrate-version-3-version-4.md).
+> [!IMPORTANT]
+> As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of life (EOL) of extended support. For more information, see [Retired versions](#retired-versions).
-This article details some of the differences between these versions, how you can create each version, and how to change the version on which your functions run.
+This article details some of the differences between supported versions, how you can create each version, and how to change the version on which your functions run.
+
+## Levels of support
[!INCLUDE [functions-support-levels](../../includes/functions-support-levels.md)]

## Languages
-All functions in a function app must share the same language. You chose the language of functions in your function app when you create the app. The language of your function app is maintained in the [FUNCTIONS\_WORKER\_RUNTIME](functions-app-settings.md#functions_worker_runtime) setting, and shouldn't be changed when there are existing functions.
-
-The following table indicates which programming languages are currently supported in each runtime version.
+All functions in a function app must share the same language. You choose the language of functions in your function app when you create the app. The language of your function app is maintained in the [FUNCTIONS\_WORKER\_RUNTIME](functions-app-settings.md#functions_worker_runtime) setting, and shouldn't be changed when there are existing functions.
[!INCLUDE [functions-supported-languages](../../includes/functions-supported-languages.md)]
+For information about the language versions of previously supported versions of the Functions runtime, see [Retired runtime versions](language-support-policy.md#retired-runtime-versions).
+ ## <a name="creating-1x-apps"></a>Run on a specific version

The version of the Functions runtime used by published apps in Azure is dictated by the [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version) application setting. In some cases and for certain languages, other settings may apply.
If you receive a warning about your extension bundle version not meeting a minim
To learn more about extension bundles, see [Extension bundles](functions-bindings-register.md#extension-bundles). ::: zone-end
+## Retired versions
+
+These versions of the Functions runtime reached the end of life (EOL) for extended support on December 13, 2022.
+
+| Version | Current support level | Previous support level |
+| | | |
+| 3.x | Out-of-support | GA |
+| 2.x | Out-of-support | GA |
+
+As soon as possible, you should migrate your apps to version 4.x to obtain full support. For a complete set of language-specific migration instructions, see [Migrate apps to Azure Functions version 4.x](migrate-version-3-version-4.md).
+
+Apps using versions 2.x and 3.x can still be created and deployed from your CI/CD DevOps pipeline, and all existing apps continue to run without breaking changes. However, your apps aren't eligible for new features, security patches, and performance optimizations. You can only get related service support after you upgrade your apps to version 4.x.
+
+End of support for these older runtime versions is due to the end of support for .NET Core 3.1, which they had as a core dependency. This requirement affects all [languages supported by Azure Functions](supported-languages.md).
+ ## Locally developed application versions

You can make the following updates to function apps to locally change the targeted versions.

### Visual Studio runtime versions
-In Visual Studio, you select the runtime version when you create a project. Azure Functions tools for Visual Studio supports the three major runtime versions. The correct version is used when debugging and publishing based on project settings. The version settings are defined in the `.csproj` file in the following properties:
+In Visual Studio, you select the runtime version when you create a project. Azure Functions tools for Visual Studio supports the two major runtime versions. The correct version is used when debugging and publishing based on project settings. The version settings are defined in the `.csproj` file in the following properties:
# [Version 4.x](#tab/v4)
You can also choose `net6.0`, `net7.0`, `net8.0`, or `net48` as the target frame
> [!NOTE] > Azure Functions 4.x requires the `Microsoft.NET.Sdk.Functions` extension be at least `4.0.0`.
-# [Version 3.x](#tab/v3)
-
-Reached the end of life (EOL) on December 13, 2022. We highly recommend you [migrating your apps to version 4.x](migrate-version-3-version-4.md) for full support.
-
-# [Version 2.x](#tab/v2)
-
-Reached the end of life (EOL) on December 13, 2022. We highly recommend you [migrating your apps to version 4.x](migrate-version-3-version-4.md) for full support.
- # [Version 1.x](#tab/v1) ```xml
Reached the end of life (EOL) on December 13, 2022. We highly recommend you [mig
```
-### VS Code and Azure Functions Core Tools
+### Visual Studio Code and Azure Functions Core Tools
[Azure Functions Core Tools](functions-run-local.md) is used for command-line development and also by the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. For more information, see [Install the Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
This article explains the Azure Functions language runtime support policy.
## Retirement process
-Azure Functions runtime is built around various components, including operating systems, the Azure Functions host, and language-specific workers. To maintain full-support coverages for function apps, Functions support aligns with end-of-life support for a given language. To achieve this, Functions implements a phased reduction in support as programming language versions reach their end-of-life dates. For most language versions, the retirement date coincides with the community end-of-life date.
+The Azure Functions runtime is built around various components, including operating systems, the Azure Functions host, and language-specific workers. To maintain full-support coverage for function apps, Functions support aligns with end-of-life support for a given language. To achieve this goal, Functions implements a phased reduction in support as programming language versions reach their end-of-life dates. For most language versions, the retirement date coincides with the community end-of-life date.
### Notification phase
-We'll send notification emails to function app users about upcoming language version retirements. The notifications will be at least one year before the date of retirement. Upon the notification, you should prepare to upgrade the language version that your functions apps use to a supported version.
+The Functions team sends notification emails to function app users about upcoming language version retirements. The notifications are sent at least one year before the date of retirement. When you receive the notification, you should prepare to upgrade your function apps to use a supported language version.
### Retirement phase
-After the language end-of-life date, function apps that use retired language versions can still be created and deployed, and they continue to run on the platform. However your apps won't be eligible for new features, security patches, and performance optimizations until you upgrade them to a supported language version.
+After the language end-of-life date, function apps that use retired language versions can still be created and deployed, and they continue to run on the platform. However, your apps aren't eligible for new features, security patches, and performance optimizations until you upgrade them to a supported language version.
> [!IMPORTANT] >You're highly encouraged to upgrade the language version of your affected function apps to a supported version.
After the language end-of-life date, function apps that use retired language ver
## Retirement policy exceptions
-There are few exceptions to the retirement policy outlined above. Here is a list of languages that are approaching or have reached their end-of-life (EOL) dates but continue to be supported on the platform until further notice. When these languages versions reach their end-of-life dates, they are no longer updated or patched. Because of this, we discourage you from developing and running your function apps on these language versions.
+There are a few exceptions to the retirement policy outlined above. Here's a list of languages that are approaching or have reached their end-of-life (EOL) dates but continue to be supported on the platform until further notice. When these language versions reach their end-of-life dates, they're no longer updated or patched. Because of this, we discourage you from developing and running your function apps on these language versions.
|Language Versions |EOL Date |Retirement Date| |--|--|-|
To learn more about specific language version support policy timeline, visit the
|PowerShell |[link](./functions-reference-powershell.md#changing-the-powershell-version)| |Python |[link](./functions-reference-python.md#python-version)|
+## Retired runtime versions
+
+This historical table shows the highest language level for specific Azure Functions runtime versions that are no longer supported:
+
+|Language |2.x | 3.x |
+|--|--|--|
+|[C#](functions-dotnet-class-library.md)|GA (.NET Core 2.1)| GA (.NET Core 3.1 & .NET 5<sup>*</sup>) |
+|[JavaScript/TypeScript](functions-reference-node.md?tabs=javascript)|GA (Node.js 10 & 8)| GA (Node.js 14, 12, & 10) |
+|[Java](functions-reference-java.md)|GA (Java 8)| GA (Java 11 & 8)|
+|[PowerShell](functions-reference-powershell.md) |N/A|N/A|
+|[Python](functions-reference-python.md#python-version)|GA (Python 3.7)| GA (Python 3.9, 3.8, 3.7)|
+|[TypeScript](functions-reference-node.md?tabs=typescript) |GA| GA |
+
+<sup>*</sup>.NET 5 was only supported for C# apps running in the [isolated worker model](dotnet-isolated-process-guide.md).
+
+For the language levels currently supported by Azure Functions, see [Languages by runtime version](supported-languages.md#languages-by-runtime-version).
## Next steps
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
In version 2.x, the following changes were made:
* The URL format of Event Grid trigger webhooks has been changed to follow this pattern: `https://{app}/runtime/webhooks/{triggerName}`.
+* The names of some [pre-defined custom metrics](analyze-telemetry-data.md) were changed after version 1.x. `Duration` was replaced with `MaxDurationMs`, `MinDurationMs`, and `AvgDurationMs`. The `Success Rate` metric was also renamed.
+ ## Considerations for Azure Stack Hub [App Service on Azure Stack Hub](/azure-stack/operator/azure-stack-app-service-overview) does not support version 4.x of Azure Functions. When you are planning a migration off of version 1.x in Azure Stack Hub, you can choose one of the following options:
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
Last updated 07/31/2023
zone_pivot_groups: programming-languages-set-functions
-# <a name="top"></a>Migrate apps from Azure Functions version 3.x to version 4.x
+# Migrate apps from Azure Functions version 3.x to version 4.x
Azure Functions version 4.x is highly backward compatible with version 3.x. Most apps should safely upgrade to 4.x without requiring significant code changes. For more information about Functions runtime versions, see [Azure Functions runtime versions overview](./functions-versions.md). > [!IMPORTANT]
-> As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of life (EOL) of extended support.
->
-> Apps using versions 2.x and 3.x can still be created and deployed from your CI/CD DevOps pipeline, and all existing apps continue to run without breaking changes. However, your apps are not eligible for new features, security patches, and performance optimizations. You'll only get related service support once you upgrade them to version 4.x.
->
-> End of support for these older runtime versions is due to the end of support for .NET Core 3.1, which they had as a core dependency. This requirement affects all [languages supported by Azure Functions](supported-languages.md).
->
-> We highly recommend that you migrate your function apps to version 4.x of the Functions runtime by following this article.
+> As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of life (EOL) of extended support. For more information, see [Retired versions](functions-versions.md#retired-versions).
This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
azure-functions Streaming Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/streaming-logs.md
While developing an application, you often want to see what's being written to t
There are two ways to view a stream of log files being generated by your function executions.
-* **Built-in log streaming**: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during [local development](functions-develop-local.md) and when you use the **Test** tab in the portal. All log-based information is displayed. For more information, see [Stream logs](../app-service/troubleshoot-diagnostic-logs.md#stream-logs). This streaming method supports only a single instance, and can't be used with an app running on Linux in a Consumption plan.
+* **Built-in log streaming**: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during [local development](functions-develop-local.md) and when you use the **Test** tab in the portal. All log-based information is displayed. For more information, see [Stream logs](../app-service/troubleshoot-diagnostic-logs.md#stream-logs). This streaming method supports only a single instance, and can't be used with an app running on Linux in a Consumption plan. When your function is scaled to multiple instances, data from other instances isn't shown using this method.
* **Live Metrics Stream**: when your function app is [connected to Application Insights](configure-monitoring.md#enable-application-insights-integration), you can view log data and other metrics in near real-time in the Azure portal using [Live Metrics Stream](../azure-monitor/app/live-stream.md). Use this method to monitor functions running on multiple instances; it supports all plan types. This method uses [sampled data](configure-monitoring.md#configure-sampling).
In Application Insights, select **Live Metrics Stream**. [Sampled log entries](c
## [Visual Studio Code](#tab/vs-code)
+To turn on the streaming logs for your function app in Azure:
+
+1. Select F1 to open the command palette, and then search for and run the command **Azure Functions: Start Streaming Logs**.
+
+1. Select your function app in Azure, and then select **Yes** to enable application logging for the function app.
+
+1. Trigger your functions in Azure. Notice that log data is displayed in the Output window in Visual Studio Code.
+
+1. When you're done, remember to run the command **Azure Functions: Stop Streaming Logs** to disable logging for the function app.
## [Core Tools](#tab/core-tools)
azure-functions Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/supported-languages.md
Title: Supported languages in Azure Functions
-description: Learn which languages are supported (GA) and which are in preview, and ways to extend Functions development to other languages.
+description: Learn which languages are supported for developing your Functions in Azure, the support level of the various language versions, and potential end-of-life dates.
Previously updated : 11/27/2019- Last updated : 08/27/2023
+zone_pivot_groups: programming-languages-set-functions
# Supported languages in Azure Functions
-This article explains the levels of support offered for languages that you can use with Azure Functions. It also describes strategies for creating functions using languages not natively supported.
+This article explains the levels of support offered for your preferred language when using Azure Functions. It also describes strategies for creating functions using languages not natively supported.
[!INCLUDE [functions-support-levels](../../includes/functions-support-levels.md)] ## Languages by runtime version
-[Several versions of the Azure Functions runtime](functions-versions.md) are available. The following table shows which languages are supported in each runtime version.
- [!INCLUDE [functions-portal-language-support](../../includes/functions-portal-language-support.md)]
Azure Functions provides a guarantee of support for the major versions of suppor
> [!NOTE] >Because Azure Functions can remove the support of older minor versions at any time after a new minor version is available, you shouldn't pin your function apps to a specific minor/patch version of a programming language.
->
## Custom handlers
Custom handlers are lightweight web servers that receive events from the Azure F
Starting with version 2.x, the runtime is designed to offer [language extensibility](https://github.com/Azure/azure-webjobs-sdk-script/wiki/Language-Extensibility). The JavaScript and Java languages in the 2.x runtime are built with this extensibility.
-## Next steps
+## Next steps
+### [Isolated process](#tab/isolated-process)
+
+> [!div class="nextstepaction"]
+> [.NET isolated worker process reference](dotnet-isolated-process-guide.md).
-To learn more about how to develop functions in the supported languages, see the following resources:
+### [In-process](#tab/in-process)
+
+> [!div class="nextstepaction"]
+> [In-process C# developer reference](functions-dotnet-class-library.md)
++
-+ [C# class library developer reference](functions-dotnet-class-library.md)
-+ [C# script developer reference](functions-reference-csharp.md)
-+ [Java developer reference](functions-reference-java.md)
-+ [JavaScript developer reference](functions-reference-node.md?tabs=javascript)
-+ [PowerShell developer reference](functions-reference-powershell.md)
-+ [Python developer reference](functions-reference-python.md)
-+ [TypeScript developer reference](functions-reference-node.md?tabs=typescript)
+> [!div class="nextstepaction"]
+> [Java developer reference](functions-reference-java.md)
+> [!div class="nextstepaction"]
+> [Node.js developer reference](functions-reference-node.md?tabs=javascript)
+> [!div class="nextstepaction"]
+> [PowerShell developer reference](functions-reference-powershell.md)
+> [!div class="nextstepaction"]
+> [Python developer reference](functions-reference-python.md)
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
To edit an existing alert rule:
||| |Number of violations|The number of violations that trigger the alert.| |Evaluation period|The time period within which the number of violations occur. |
- |Override query time range| If you want the alert evaluation period to be different than the query time range, enter a time range here.<br> The alert time range is limited to a maximum of two days. Even if the query contains an **ago** command with a time range of longer than two days, the two-day maximum time range is applied. For example, even if the query text contains **ago(7d)**, the query only scans up to two days of data.<br> If the query requires more data than the alert evaluation you can change the time range manually.
-If the query contains **ago** command in the query, it will be cahnged automatically to 2 days (48 hours).|
+ |Override query time range| If you want the alert evaluation period to be different from the query time range, enter a time range here.<br> The alert time range is limited to a maximum of two days. Even if the query contains an **ago** command with a time range of longer than two days, the two-day maximum time range is applied. For example, even if the query text contains **ago(7d)**, the query only scans up to two days of data. If the query requires more data than the alert evaluation, you can change the time range manually. If the query contains an **ago** command, it's changed automatically to two days (48 hours).|
> [!NOTE] > If you or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select **Check workspace linked storage**. If you don't, the rule creation will fail because it won't meet the policy requirements.
azure-monitor Alerts Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-plan.md
+
+ Title: 'Azure Monitor best practices: Alerts and automated actions'
+description: Recommendations for deployment of Azure Monitor alerts and automated actions.
+++ Last updated : 05/31/2023++++
+# Deploy Azure Monitor: Alerts and automated actions
+
+This article provides guidance on alerts in Azure Monitor. Alerts proactively notify you of important data or patterns identified in your monitoring data. You can view alerts in the Azure portal. You can create alerts that:
+
+- Send a proactive notification.
+- Initiate an automated action to attempt to remediate an issue.
+
+## Alerting strategy
+
+An alerting strategy defines your organization's standards for:
+
+- The types of alert rules that you'll create for different scenarios.
+- How you'll categorize and manage alerts after they're created.
+- Automated actions and notifications that you'll take in response to alerts.
+
+Defining an alert strategy assists you in defining the configuration of alert rules including alert severity and action groups.
+
+For factors to consider as you develop an alerting strategy, see [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy).
+
+## Alert rule types
+
+Alerts in Azure Monitor are generated by alert rules, which you must create yourself; Azure Monitor doesn't include any alert rules by default. For guidance on recommended alert rules, see the monitoring documentation for each Azure service.
+
+Multiple types of alert rules are defined by the type of data they use. Each has different capabilities and a different cost. The basic strategy is to use the alert rule type with the lowest cost that provides the logic you require.
+
+- Activity log rules. Creates an alert in response to a new activity log event that matches specified conditions. There's no cost to these alerts so they should be your first choice, although the conditions they can detect are limited. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating an activity log alert.
+- Metric alert rules. Creates an alert in response to one or more metric values exceeding a threshold. Metric alerts are stateful, which means that the alert will automatically close when the value drops below the threshold, and it will only send out notifications when the state changes. There's a cost to metric alerts, but it's often much less than log alerts. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating a metric alert.
+- Log alert rules. Creates an alert when the results of a scheduled query match specified criteria. They're the most expensive of the alert rules, but they allow the most complex criteria. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating a log query alert.
+- [Application alerts](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability). Performs proactive performance and availability testing of your web application. You can perform a ping test at no cost, but there's a cost to more complex testing. See [Monitor the availability of any website](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) for a description of the different tests and information on creating them.
+
+## Alert severity
+
+Each alert rule defines the severity of the alerts that it creates based on the following table. Alerts in the Azure portal are grouped by level so that you can manage similar alerts together and quickly identify alerts that require the greatest urgency.
+
+| Level | Name | Description |
+|:|:|:|
+| Sev 0 | Critical | Loss of service or application availability or severe degradation of performance. Requires immediate attention. |
+| Sev 1 | Error | Degradation of performance or loss of availability of some aspect of an application or service. Requires attention but not immediate. |
+| Sev 2 | Warning | A problem that doesn't include any current loss in availability or performance, although it has the potential to lead to more severe problems if unaddressed. |
+| Sev 3 | Informational | Doesn't indicate a problem but provides interesting information to an operator, such as successful completion of a regular process. |
+| Sev 4 | Verbose | Doesn't indicate a problem but provides detailed, verbose information. |
+
+Assess the severity of the condition each rule is identifying to assign an appropriate level. Define the types of issues you assign to each severity level and your standard response to each in your alerts strategy.
+
+## Action groups
+
+Automated responses to alerts in Azure Monitor are defined in [action groups](action-groups.md). An action group is a collection of one or more notifications and actions that are fired when an alert is triggered. A single action group can be used with multiple alert rules and contain one or more of the following items:
+
+- **Notifications**: Messages that notify operators and administrators that an alert was created.
+- **Actions**: Automated processes that attempt to correct the detected issue.
+
+## Notifications
+
+Notifications are messages sent to one or more users to notify them that an alert has been created. Because a single action group can be used with multiple alert rules, you should design a set of action groups for different sets of administrators and users who will receive the same sets of alerts. Use any of the following types of notifications depending on the preferences of your operators and your organizational standards:
+
+- Email
+- SMS
+- Push to Azure app
+- Voice
+- Email Azure Resource Manager role
+
+## Actions
+
+Actions are automated responses to an alert. You can use the available actions for any scenario that they support, but the following sections describe how each action is typically used.
+
+### Automated remediation
+
+Use the following actions to attempt automated remediation of the issue identified by the alert:
+
+- **Automation runbook**: Start a built-in runbook or a custom runbook in Azure Automation. For example, built-in runbooks are available to perform such functions as restarting or scaling up a virtual machine.
+- **Azure Functions**: Start an Azure function.
+
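As an illustration of the **Azure Functions** action, the following is a minimal sketch of an HTTP-triggered function (Node.js, v3 programming model) that an action group could call. It assumes the common alert schema is enabled on the action; the remediation step itself is left as a placeholder.

```javascript
// Hypothetical HTTP-triggered Azure Function used as an action group action.
// Assumes the action group sends the common alert schema payload.
module.exports = async function (context, req) {
  const essentials = req.body && req.body.data && req.body.data.essentials;
  if (!essentials) {
    context.res = { status: 400, body: "Expected a common alert schema payload." };
    return;
  }

  context.log(`Alert '${essentials.alertRule}' fired with severity ${essentials.severity}.`);

  // Placeholder: call your remediation logic here (restart a service, scale out, and so on).

  context.res = { status: 200 };
};
```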
+### ITSM and on-call management
+
+- **IT service management (ITSM)**: Use the ITSM Connector to create work items in your ITSM tool based on alerts from Azure Monitor. You first configure the connector and then use the **ITSM** action in alert rules.
+- **Webhooks**: Send the alert to an incident management system that supports webhooks such as PagerDuty and Splunk On-Call.
+- **Secure webhook**: Integrate ITSM with Azure Active Directory Authentication.
+
+## Minimize alert activity
+
+Create alerts for any important information in your environment, but avoid excessive alerts and notifications for issues that don't warrant them. To make sure that critical issues are surfaced without generating excess information and notifications for administrators, follow these guidelines:
+
+- See [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy) to determine whether a symptom is an appropriate candidate for alerting.
+- Use the **Automatically resolve alerts** option in metric alert rules to resolve alerts when the condition has been corrected.
+- Use the **Suppress alerts** option in log query alert rules to avoid creating multiple alerts for the same issue.
+- Ensure that you use appropriate severity levels for alert rules so that high-priority issues can be analyzed together.
+- Limit notifications for alerts with a severity of Warning or less because they don't require immediate attention.
+
+## Create alert rules at scale
+
+Typically, you'll want to alert on issues for all your critical Azure applications and resources. Use the following methods for creating alert rules at scale:
+
+- Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. For a list of Azure services that are currently supported for this feature, see [Monitoring at scale using metric alerts in Azure Monitor](alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor).
+- For metric alert rules for Azure services that don't support multiple resources, use automation tools such as the Azure CLI and PowerShell with Resource Manager templates to create the same alert rule for multiple resources. For samples, see [Resource Manager template samples for metric alert rules in Azure Monitor](resource-manager-alerts-metric.md).
+- To return data for multiple resources, write queries in log query alert rules. Use the **Split by dimensions** setting in the rule to create separate alerts for each resource.
+
+> [!NOTE]
+> Resource-centric log query alert rules currently in public preview allow you to use all resources in a subscription or resource group as a target for a log query alert.
+
+## Next steps
+
+[Optimize cost in Azure Monitor](../best-practices-cost.md).
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues. Previously updated : 01/24/2023 Last updated : 09/12/2023 ms.devlang: csharp, java, javascript, vb
azure-monitor Application Insights Asp Net Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/application-insights-asp-net-agent.md
Title: Deploy Application Insights Agent description: Learn how to use Application Insights Agent to monitor website performance. It works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. Previously updated : 08/11/2023 Last updated : 09/12/2023
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
description: Monitor ASP.NET Core web applications for availability, performance
ms.devlang: csharp Previously updated : 04/24/2023 Last updated : 09/12/2023 # Application Insights for ASP.NET Core applications
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
Title: Review TrackAvailability() test results description: This article explains how to review data logged by TrackAvailability() tests Previously updated : 08/20/2023 Last updated : 09/12/2023 # Review TrackAvailability() test results
azure-monitor Availability Standard Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-standard-tests.md
Title: Availability Standard test - Azure Monitor Application Insights description: Set up Standard tests in Application Insights to check for availability of a website with a single request test. Previously updated : 03/22/2023 Last updated : 09/12/2023 # Standard test
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Title: Autoinstrumentation for Azure Monitor Application Insights
description: Overview of autoinstrumentation for Azure Monitor Application Insights codeless application performance management. Previously updated : 08/11/2023 Last updated : 09/12/2023
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
Title: ApplicationInsights.config reference - Azure | Microsoft Docs description: Enable or disable data collection modules and add performance counters and other parameters. Previously updated : 03/22/2023 Last updated : 09/12/2023 ms.devlang: csharp
azure-monitor Diagnostic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md
Title: Use Search in Azure Application Insights | Microsoft Docs description: Search and filter raw telemetry sent by your web app. Previously updated : 03/22/2023 Last updated : 09/12/2023
azure-monitor Java Standalone Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md
Title: Java Profiler for Azure Monitor Application Insights description: How to configure the Azure Monitor Application Insights for Java Profiler Previously updated : 11/15/2022 Last updated : 09/12/2023 ms.devlang: java
The ApplicationInsights Java Agent monitors CPU, memory, and request duration su
#### Profile now
-Within the profiler user interface (see [profiler settings](../profiler/profiler-settings.md)) there's a **Profile now** button. Selecting this button will immediately request a profile in all agents that are attached to the Application Insights instance.
+Within the profiler user interface (see [profiler settings](../profiler/profiler-settings.md)) there's a **Profile now** button. Selecting this button immediately requests a profile in all agents that are attached to the Application Insights instance.
> [!WARNING] > Invoking Profile now will enable the profiler feature, and Application Insights will apply default CPU and memory SLA triggers. When your application breaches those SLAs, Application Insights will gather Java profiles. If you wish to disable profiling later on, you can do so within the trigger menu shown in [Installation](#installation).
For instance, take the following scenario:
- Therefore the maximum possible size of tenured would be 922 mb. - Your threshold was set via the user interface to 75%, therefore your threshold would be 75% of 922 mb, 691 mb.
-In this scenario, a profile will occur in the following circumstances:
+In this scenario, a profile occurs in the following circumstances:
- Full garbage collection is executed - The Tenured regions occupancy is above 691 mb after collection ### Request
-SLA triggers are based on OpenTelemetry (otel) and they will initiate a profile if certain criteria is fulfilled.
+SLA triggers are based on OpenTelemetry (otel), and they initiate a profile if certain criteria are fulfilled.
Each individual trigger configuration is formed as follows:
For instance, the following scenario would trigger a profile if: more than 75% o
### Installation
-The following steps will guide you through enabling the profiling component on the agent and configuring resource limits that will trigger a profile if breached.
+The following steps guide you through enabling the profiling component on the agent and configuring resource limits that trigger a profile if breached.
-1. Configure the resource thresholds that will cause a profile to be collected:
+1. Configure the resource thresholds that cause a profile to be collected:
1. Browse to the Performance -> Profiler section of the Application Insights instance. :::image type="content" source="./media/java-standalone-profiler/performance-blade.png" alt-text="Screenshot of the link to open performance pane." lightbox="media/java-standalone-profiler/performance-blade.png":::
The following steps will guide you through enabling the profiling component on t
> [!WARNING] > The Java profiler does not support the "Sampling" trigger. Configuring this will have no effect.
-After these steps have been completed, the agent will monitor the resource usage of your process and trigger a profile when the threshold is exceeded. When a profile has been triggered and completed, it will be viewable from the
+After these steps have been completed, the agent will monitor the resource usage of your process and trigger a profile when the threshold is exceeded. When a profile has been triggered and completed, it's viewable from the
Application Insights instance within the Performance -> Profiler section. From that screen, the profile can be downloaded. Once downloaded, the JFR recording file can be opened and analyzed in a tool of your choosing, for example JDK Mission Control (JMC). :::image type="content" source="./media/java-standalone-profiler/configure-blade-inline.png" alt-text="Screenshot of profiler page features and settings." lightbox="media/java-standalone-profiler/configure-blade-inline.png":::
Example configuration:
```
-`memoryTriggeredSettings` This configuration will be used if a memory profile is requested. This value can be one of:
+`memoryTriggeredSettings` This configuration is used if a memory profile is requested. This value can be one of:
- `profile-without-env-data` (default value). A profile with certain sensitive events disabled; see the Warning section above for details. - `profile`. Uses the `profile.jfc` configuration that ships with JFR. - A path to a custom jfc configuration file on the file system, for example `/tmp/myconfig.jfc`.
-`cpuTriggeredSettings` This configuration will be used if a cpu profile is requested.
+`cpuTriggeredSettings` This configuration is used if a cpu profile is requested.
This value can be one of: - `profile-without-env-data` (default value). A profile with certain sensitive events disabled; see the Warning section above for details. - `profile`. Uses the `profile.jfc` configuration that ships with JFR. - A path to a custom jfc configuration file on the file system, for example `/tmp/myconfig.jfc`.
-`manualTriggeredSettings` This configuration will be used if a manual profile is requested.
+`manualTriggeredSettings` This configuration is used if a manual profile is requested.
This value can be one of: - `profile-without-env-data` (default value). A profile with certain sensitive events disabled, see
This value can be one of:
`enableRequestTriggering` Whether JFR profiling should be triggered based on request configuration. This value can be one of: -- `true` Profiling will be triggered if a request trigger threshold is breached.
+- `true` Profiling is triggered if a request trigger threshold is breached.
- `false` (default value). Profiling will not be triggered by request configuration. ## Frequently asked questions
azure-monitor Java Standalone Telemetry Processors Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md
Title: Telemetry processor examples - Azure Monitor Application Insights for Java description: Explore examples that show telemetry processors in Azure Monitor Application Insights for Java. Previously updated : 05/13/2023 Last updated : 09/12/2023 ms.devlang: java
# Telemetry processor examples - Azure Monitor Application Insights for Java
-This article provides examples of telemetry processors in Application Insights for Java. You'll find samples for include and exclude configurations. You'll also find samples for attribute processors and span processors.
+This article provides examples of telemetry processors in Application Insights for Java, including samples for include and exclude configurations. It also includes samples for attribute processors and span processors.
## Include and exclude Span samples In this section, you'll see how to include and exclude spans. You'll also see how to exclude multiple spans and apply selective processing. ### Include spans
-This section shows how to include spans for an attribute processor. Spans that don't match the properties aren't processed by the processor.
+This section shows how to include spans for an attribute processor. The processor doesn't process spans that don't match the properties.
A match requires the span name to be equal to `spanA` or `spanB`.
This span doesn't match the include properties, and the processor actions aren't
### Exclude spans
-This section demonstrates how to exclude spans for an attribute processor. Spans that match the properties aren't processed by this processor.
+This section demonstrates how to exclude spans for an attribute processor. This processor doesn't process spans that match the properties.
A match requires the span name to be equal to `spanA` or `spanB`.
This span doesn't match the exclude properties, and the processor actions are ap
### Exclude spans by using multiple criteria
-This section demonstrates how to exclude spans for an attribute processor. Spans that match the properties aren't processed by this processor.
+This section demonstrates how to exclude spans for an attribute processor. This processor doesn't process spans that match the properties.
A match requires the following conditions to be met: * An attribute (for example, `env` with value `dev`) must exist in the span.
Let's assume the input log message body is `Starting PetClinicApplication on Wor
### Masking sensitive data in log message The following sample shows how to mask sensitive data in a log message body using both log processor and attribute processor.
-Let's assume the input log message body is `User account with userId 123456xx failed to login`. The log processor updates output message body to `User account with userId {redactedUserId} failed to login` and the attribute processor deletes the new attribute `redactedUserId` which was adding in the previous step.
+Let's assume the input log message body is `User account with userId 123456xx failed to login`. The log processor updates the output message body to `User account with userId {redactedUserId} failed to login`, and the attribute processor deletes the new attribute `redactedUserId`, which was added in the previous step.
```json { "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
Title: Telemetry processors (preview) - Azure Monitor Application Insights for Java description: Learn to configure telemetry processors in Azure Monitor Application Insights for Java. Previously updated : 05/13/2023 Last updated : 09/12/2023 ms.devlang: java
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
description: Learn how to install and use JavaScript feature extensions (Click A
ibiza Previously updated : 07/10/2023 Last updated : 09/12/2023 ms.devlang: javascript
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
Title: Microsoft Azure Monitor Application Insights JavaScript SDK description: Microsoft Azure Monitor Application Insights JavaScript SDK is a powerful tool for monitoring and analyzing web application performance. Previously updated : 08/11/2023 Last updated : 09/12/2023 ms.devlang: javascript
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Title: Migrate from Application Insights instrumentation keys to connection strings description: Learn the steps required to upgrade from Azure Monitor Application Insights instrumentation keys to connection strings. Previously updated : 08/11/2023 Last updated : 09/12/2023
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Title: Monitor Node.js services with Application Insights | Microsoft Docs description: Monitor performance and diagnose problems in Node.js services with Application Insights. Previously updated : 06/23/2023 Last updated : 09/12/2023 ms.devlang: javascript
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
Title: Add, modify, and filter Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to add, modify, and filter OpenTelemetry for applications using Azure Monitor. Previously updated : 08/11/2023 Last updated : 09/12/2023 ms.devlang: csharp, javascript, typescript, python
You can't extend the Java Distro with community instrumentation libraries. To re
Other OpenTelemetry instrumentations are available [here](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node) and can be registered with the trace and meter providers, as shown in the following example. ```javascript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { metrics, trace } = require("@opentelemetry/api");
+ const { registerInstrumentations } = require( "@opentelemetry/instrumentation");
const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express');
- const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
- const traceHandler = appInsights.getTraceHandler();
- traceHandler.addInstrumentation(new ExpressInstrumentation());
+ useAzureMonitor();
+ const tracerProvider = trace.getTracerProvider().getDelegate();
+ const meterProvider = metrics.getMeterProvider();
+ registerInstrumentations({
+ instrumentations: [
+ new ExpressInstrumentation(),
+ ],
+ tracerProvider: tracerProvider,
+ meterProvider: meterProvider
+ });
``` ### [Python](#tab/python)
public class Program {
#### [Node.js](#tab/nodejs) ```javascript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
- const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
- const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
- const meter = customMetricsHandler.getMeter();
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { metrics } = require("@opentelemetry/api");
+
+ useAzureMonitor();
+ const meter = metrics.getMeter("testMeter");
let histogram = meter.createHistogram("histogram"); histogram.record(1, { "testKey": "testValue" }); histogram.record(30, { "testKey": "testValue2" });
public class Program {
#### [Node.js](#tab/nodejs) ```javascript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
- const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
- const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
- const meter = customMetricsHandler.getMeter();
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { metrics } = require("@opentelemetry/api");
+
+ useAzureMonitor();
+ const meter = metrics.getMeter("testMeter");
let counter = meter.createCounter("counter"); counter.add(1, { "testKey": "testValue" }); counter.add(5, { "testKey2": "testValue" });
public class Program {
#### [Node.js](#tab/nodejs) ```typescript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
- const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
- const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
- const meter = customMetricsHandler.getMeter();
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { metrics } = require("@opentelemetry/api");
+
+ useAzureMonitor();
+ const meter = metrics.getMeter("testMeter");
let gauge = meter.createObservableGauge("gauge"); gauge.addCallback((observableResult: ObservableResult) => { let randomNumber = Math.floor(Math.random() * 100);
You can use `opentelemetry-api` to update the status of a span and record except
#### [Node.js](#tab/nodejs) ```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-
-const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
-const tracer = appInsights.getTraceHandler().getTracer();
-let span = tracer.startSpan("hello");
-try{
- throw new Error("Test Error");
-}
-catch(error){
- span.recordException(error);
-}
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { trace } = require("@opentelemetry/api");
+
+ useAzureMonitor();
+ const tracer = trace.getTracer("testTracer");
+ let span = tracer.startSpan("hello");
+ try{
+ throw new Error("Test Error");
+ }
+ catch(error){
+ span.recordException(error);
+ }
``` #### [Python](#tab/python)
you can add your spans by using the OpenTelemetry API.
#### [Node.js](#tab/nodejs) ```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { trace } = require("@opentelemetry/api");
-const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
-const tracer = appInsights.getTraceHandler().getTracer();
-let span = tracer.startSpan("hello");
-span.end();
+ useAzureMonitor();
+ const tracer = trace.getTracer("testTracer");
+ let span = tracer.startSpan("hello");
+ span.end();
```
Not available in .NET.
#### [Node.js](#tab/nodejs)
-First, get the `LogHandler`:
+To send custom telemetry, you need to use the [`applicationinsights` v3 beta package](https://www.npmjs.com/package/applicationinsights/v/beta).
```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
-const logHandler = appInsights.getLogHandler();
+ const { TelemetryClient } = require("applicationinsights");
+
+ const appInsights = new TelemetryClient();
```
-Then use the `LogHandler` to send custom telemetry:
+Then use the `TelemetryClient` to send custom telemetry:
##### Events ```javascript
-let eventTelemetry = {
- name: "testEvent"
-};
-logHandler.trackEvent(eventTelemetry);
+ let eventTelemetry = {
+ name: "testEvent"
+ };
+ appInsights.trackEvent(eventTelemetry);
``` ##### Logs ```javascript
-let traceTelemetry = {
- message: "testMessage",
- severity: "Information"
-};
-logHandler.trackTrace(traceTelemetry);
+ let traceTelemetry = {
+ message: "testMessage",
+ severity: "Information"
+ };
+ appInsights.trackTrace(traceTelemetry);
``` ##### Exceptions ```javascript
-try {
- ...
-} catch (error) {
- let exceptionTelemetry = {
- exception: error,
- severity: "Critical"
- };
- logHandler.trackException(exceptionTelemetry);
-}
+ try {
+ ...
+ } catch (error) {
+ let exceptionTelemetry = {
+ exception: error,
+ severity: "Critical"
+ };
+ appInsights.trackException(exceptionTelemetry);
+ }
``` #### [Python](#tab/python)
Adding one or more span attributes populates the `customDimensions` field in the
##### [Node.js](#tab/nodejs) ```typescript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-const { ReadableSpan, Span, SpanProcessor } = require("@opentelemetry/sdk-trace-base");
-const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { trace } = require("@opentelemetry/api");
+ const { ReadableSpan, Span, SpanProcessor } = require("@opentelemetry/sdk-trace-base");
+ const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
-const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+ useAzureMonitor();
+ const tracerProvider = trace.getTracerProvider().getDelegate();
-class SpanEnrichingProcessor implements SpanProcessor{
- forceFlush(): Promise<void>{
- return Promise.resolve();
- }
- shutdown(): Promise<void>{
- return Promise.resolve();
- }
- onStart(_span: Span): void{}
- onEnd(span: ReadableSpan){
- span.attributes["CustomDimension1"] = "value1";
- span.attributes["CustomDimension2"] = "value2";
+ class SpanEnrichingProcessor implements SpanProcessor{
+ forceFlush(): Promise<void>{
+ return Promise.resolve();
+ }
+ shutdown(): Promise<void>{
+ return Promise.resolve();
+ }
+ onStart(_span: Span): void{}
+ onEnd(span: ReadableSpan){
+ span.attributes["CustomDimension1"] = "value1";
+ span.attributes["CustomDimension2"] = "value2";
+ }
}
-}
-appInsights.getTraceHandler().addSpanProcessor(new SpanEnrichingProcessor());
+ tracerProvider.addSpanProcessor(new SpanEnrichingProcessor());
``` ##### [Python](#tab/python)
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
```typescript ...
-const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
+ const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
-class SpanEnrichingProcessor implements SpanProcessor{
- ...
+ class SpanEnrichingProcessor implements SpanProcessor{
+ ...
- onEnd(span){
- span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>";
+ onEnd(span){
+ span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>";
+ }
}
-}
``` ##### [Python](#tab/python)
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
```typescript ...
-import { SemanticAttributes } from "@opentelemetry/semantic-conventions";
+ import { SemanticAttributes } from "@opentelemetry/semantic-conventions";
-class SpanEnrichingProcessor implements SpanProcessor{
- ...
+ class SpanEnrichingProcessor implements SpanProcessor{
+ ...
- onEnd(span: ReadableSpan){
- span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>";
+ onEnd(span: ReadableSpan){
+ span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>";
+ }
}
-}
``` ##### [Python](#tab/python)
Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching c
Attributes can be added only when calling manual track APIs. Log attributes for console, bunyan, and Winston are currently not supported.
-```javascript
-const config = new ApplicationInsightsConfig();
-config.instrumentations.http = httpInstrumentationConfig;
-const appInsights = new ApplicationInsightsClient(config);
-const logHandler = appInsights.getLogHandler();
-const attributes = {
- "testAttribute1": "testValue1",
- "testAttribute2": "testValue2",
- "testAttribute3": "testValue3"
-};
-logHandler.trackEvent({
- name: "testEvent",
- properties: attributes
-});
+```typescript
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { logs } = require("@opentelemetry/api-logs");
+
+ useAzureMonitor();
+ const logger = logs.getLogger("testLogger");
+ const logRecord = {
+ body : "testEvent",
+ attributes: {
+ "testAttribute1": "testValue1",
+ "testAttribute2": "testValue2",
+ "testAttribute3": "testValue3"
+ }
+ };
+  logger.emit(logRecord);
``` #### [Python](#tab/python)
See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) a
The following example shows how to exclude a certain URL from being tracked by using the [HTTP/HTTPS instrumentation library](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http): ```typescript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const { useAzureMonitor, ApplicationInsightsOptions } = require("@azure/monitor-opentelemetry");
const { IncomingMessage } = require("http"); const { RequestOptions } = require("https"); const { HttpInstrumentationConfig }= require("@opentelemetry/instrumentation-http");
See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) a
return false; } };
- const config = new ApplicationInsightsConfig();
- config.instrumentations.http = httpInstrumentationConfig;
- const appInsights = new ApplicationInsightsClient(config);
+ const config: ApplicationInsightsOptions = {
+ instrumentationOptions: {
+ http: {
+ httpInstrumentationConfig
+ },
+ },
+ };
+ useAzureMonitor(config);
``` 2. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set `TraceFlag` to `DEFAULT`.
Get the request trace ID and the span ID in your code:
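For example, with the OpenTelemetry JavaScript API, a minimal sketch looks like the following; the active span exposes both identifiers through its span context.

```javascript
const { trace } = require("@opentelemetry/api");

// Inside an instrumented request, read the IDs from the active span's context.
const activeSpan = trace.getActiveSpan();
if (activeSpan) {
  const { traceId, spanId } = activeSpan.spanContext();
  console.log(`traceId=${traceId} spanId=${spanId}`);
}
```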
### [Node.js](#tab/nodejs) -- To review the source code, see the [Application Insights Beta GitHub repository](https://github.com/microsoft/ApplicationInsights-node.js/tree/beta).-- To install the npm package and check for updates, see the [`applicationinsights` npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page.
+- To review the source code, see the [Azure Monitor OpenTelemetry GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry).
+- To install the npm package and check for updates, see the [`@azure/monitor-opentelemetry` npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry) page.
- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js). - To enable usage experiences, [enable web or browser user monitoring](javascript.md).
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
Title: Configure Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides configuration guidance for .NET, Java, Node.js, and Python applications. Previously updated : 08/11/2023 Last updated : 09/12/2023 ms.devlang: csharp, javascript, typescript, python
For more information about Java, see the [Java supplemental documentation](java-
```typescript const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
+ const { trace } = require("@opentelemetry/api");
const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base'); const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 08/30/2023 Last updated : 09/12/2023 ms.devlang: csharp, javascript, typescript, python
Follow the steps in this section to instrument your application with OpenTelemet
### [ASP.NET Core](#tab/aspnetcore) -- Application using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet) or [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) that's at least .NET Framework 4.6.2
+- [ASP.NET Core Application](/aspnet/core/introduction-to-aspnet-core) using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet)
### [.NET](#tab/net)
As part of using Application Insights instrumentation, we collect and send diagn
### [Node.js](#tab/nodejs) - For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md)-- To review the source code, see the [Application Insights Beta GitHub repository](https://github.com/microsoft/ApplicationInsights-node.js/tree/beta).-- To install the npm package and check for updates, see the [`applicationinsights` npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page.
+- To review the source code, see the [Azure Monitor OpenTelemetry GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry).
+- To install the npm package and check for updates, see the [`@azure/monitor-opentelemetry` npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry) page.
- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js). - To enable usage experiences, [enable web or browser user monitoring](javascript.md).
azure-monitor Opentelemetry Nodejs Exporter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-nodejs-exporter.md
Title: Enable the Azure Monitor OpenTelemetry exporter for Node.js applications description: This article provides guidance on how to enable the Azure Monitor OpenTelemetry exporter for Node.js applications. Previously updated : 05/10/2023 Last updated : 09/12/2023 ms.devlang: javascript
function doWork(parent) {
#### Set the Application Insights connection string
-You can set the connection string either programmatically or by setting the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`. In the event that both have been set, the programmatic connection string will take precedence.
+You can set the connection string either programmatically or by setting the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`. If both are set, the programmatic connection string takes precedence.
You can find your connection string in the Overview Pane of your Application Insights Resource.
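As an illustrative sketch only (not taken from this article), the programmatic form with the `@azure/monitor-opentelemetry-exporter` package might look like the following; the connection string value is a placeholder:

```typescript
const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");

// A connection string passed here takes precedence over the
// APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.
const exporter = new AzureMonitorTraceExporter({
  connectionString: "InstrumentationKey=00000000-0000-0000-0000-000000000000",
});
```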
As part of using Application Insights instrumentation, we collect and send diagn
## Set the Cloud Role Name and the Cloud Role Instance
-You might want to update the [Cloud Role Name](app-map.md#understand-the-cloud-role-name-within-the-context-of-an-application-map) and the Cloud Role Instance from the default values to something that makes sense to your team. They'll appear on the Application Map as the name underneath a node.
+You might want to update the [Cloud Role Name](app-map.md#understand-the-cloud-role-name-within-the-context-of-an-application-map) and the Cloud Role Instance from the default values to something that makes sense to your team. They appear on the Application Map as the name underneath a node.
Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md).
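For illustration, a minimal sketch of setting these resource attributes in Node.js might look like the following; the namespace, service, and instance names are placeholders, and the packages shown are the standard OpenTelemetry JavaScript ones rather than anything specific to this article:

```typescript
const { Resource } = require("@opentelemetry/resources");
const { SemanticResourceAttributes } = require("@opentelemetry/semantic-conventions");
const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");

// service.namespace plus service.name become the Cloud Role Name;
// service.instance.id becomes the Cloud Role Instance.
const resource = new Resource({
  [SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace",
  [SemanticResourceAttributes.SERVICE_NAME]: "my-service",
  [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance-1",
});

const provider = new NodeTracerProvider({ resource });
provider.register();
```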
The following table represents the currently supported custom telemetry types:
You may want to collect metrics beyond what is collected by [instrumentation libraries](#instrumentation-libraries).
-The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios and you'll need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library.
+The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios, and you need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library.
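For illustration only, the following minimal sketch creates two of those instruments through the OpenTelemetry Metrics API; the meter and instrument names are placeholders, and it assumes a meter provider with an Azure Monitor metric exporter has already been registered:

```typescript
const { metrics } = require("@opentelemetry/api");

// Assumes a MeterProvider is already registered globally.
const meter = metrics.getMeter("example-meter");

// Counter: typically visualized with the Sum aggregation type.
const requestCounter = meter.createCounter("example.requests");
requestCounter.add(1, { route: "/home" });

// Histogram: typically visualized with Average, Min, Max, or Count.
const durationHistogram = meter.createHistogram("example.request.duration");
durationHistogram.record(42, { route: "/home" });
```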
The following table shows the recommended [aggregation types](../essentials/metrics-aggregation-explained.md#aggregation-types) for each of the OpenTelemetry Metric Instruments.
class SpanEnrichingProcessor {
#### Set the user ID or authenticated user ID
-You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by using the guidance below. User ID is an anonymous user identifier. Authenticated User ID is a known user identifier.
+You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by using the guidance in this section. User ID is an anonymous user identifier. Authenticated User ID is a known user identifier.
> [!IMPORTANT] > Consult applicable privacy laws before you set the Authenticated User ID.
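The following is a minimal sketch of the span-processor approach suggested by the `SpanEnrichingProcessor` class shown earlier; the `enduser.id` attribute and its mapping onto the authenticated user field are assumptions rather than text from this article:

```typescript
class SpanEnrichingProcessor {
  forceFlush() {
    return Promise.resolve();
  }
  shutdown() {
    return Promise.resolve();
  }
  onStart(_span, _parentContext) {}
  onEnd(span) {
    // Assumed mapping: the enduser.id attribute is translated by the
    // Azure Monitor exporter into the authenticated user ID on requests.
    span.attributes["enduser.id"] = "<authenticated user id>";
  }
}
```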
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Title: Data Collection Basics of Azure Monitor Application Insights description: This article provides an overview of how to collect telemetry to send to Azure Monitor Application Insights. Previously updated : 07/07/2023 Last updated : 09/12/2023
azure-monitor Opentelemetry Python Opencensus Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-python-opencensus-migrate.md
Title: Migrating Azure Monitor Application Insights Python from OpenCensus to OpenTelemetry description: This article provides guidance on how to migrate from the Azure Monitor Application Insights Python SDK and OpenCensus exporter to OpenTelemetry. Previously updated : 08/01/2023 Last updated : 09/12/2023 ms.devlang: python
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
Title: Automate Application Insights with PowerShell | Microsoft Docs description: Automate creating and managing resources, alerts, and availability tests in PowerShell by using an Azure Resource Manager template. Previously updated : 03/22/2023 Last updated : 09/12/2023
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Title: Connection strings in Application Insights | Microsoft Docs description: This article shows how to use connection strings. Previously updated : 08/11/2023 Last updated : 09/12/2023
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md
Title: 'Design your Application Insights deployment: One vs. many resources?' description: Direct telemetry to different resources for development, test, and production stamps. Previously updated : 11/15/2022 Last updated : 09/12/2023
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
Title: Usage analysis with Application Insights | Azure Monitor description: Understand your users and what they do with your app. Previously updated : 07/10/2023 Last updated : 09/12/2023
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
description: Monitoring .NET Core/.NET Framework non-HTTP apps with Azure Monito
ms.devlang: csharp Previously updated : 06/23/2023 Last updated : 09/12/2023
azure-monitor Best Practices Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-alerts.md
Title: 'Azure Monitor best practices: Alerts and automated actions'
-description: Recommendations for deployment of Azure Monitor alerts and automated actions.
+ Title: Best practices for Azure Monitor alerts
+description: Provides best practices for Azure Monitor alerts, alert processing rules, and action groups, based on the Well-Architected Framework (WAF).
-- Previously updated : 05/31/2023--++ Last updated : 09/04/2023+
-# Deploy Azure Monitor: Alerts and automated actions
-
-This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It provides guidance on alerts in Azure Monitor. Alerts proactively notify you of important data or patterns identified in your monitoring data. You can view alerts in the Azure portal. You can create alerts that:
--- Send a proactive notification.-- Initiate an automated action to attempt to remediate an issue.-
-## Alerting strategy
-
-An alerting strategy defines your organization's standards for:
--- The types of alert rules that you'll create for different scenarios.-- How you'll categorize and manage alerts after they're created.-- Automated actions and notifications that you'll take in response to alerts.-
-Defining an alert strategy assists you in defining the configuration of alert rules including alert severity and action groups.
-
-For factors to consider as you develop an alerting strategy, see [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy).
-
-## Alert rule types
-
-Alerts in Azure Monitor are created by alert rules that you must create. For guidance on recommended alert rules, see the monitoring documentation for each Azure service. Azure Monitor doesn't have any alert rules by default.
-
-Multiple types of alert rules are defined by the type of data they use. Each has different capabilities and a different cost. The basic strategy is to use the alert rule type with the lowest cost that provides the logic you require.
+# Best practices for Azure Monitor alerts
+This article provides architectural best practices for Azure Monitor alerts, alert processing rules, and action groups. The guidance is based on the five pillars of architecture excellence described in the [Azure Well-Architected Framework](/azure/architecture/framework/).
-- [Activity log rules](alerts/activity-log-alerts.md). Creates an alert in response to a new activity log event that matches specified conditions. There's no cost to these alerts so they should be your first choice, although the conditions they can detect are limited. See [Create, view, and manage activity log alerts by using Azure Monitor](alerts/alerts-activity-log.md) for information on creating an activity log alert.-- [Metric alert rules](alerts/alerts-metric-overview.md). Creates an alert in response to one or more metric values exceeding a threshold. Metric alerts are stateful, which means that the alert will automatically close when the value drops below the threshold, and it will only send out notifications when the state changes. There's a cost to metric alerts, but it's often much less than log alerts. See [Create, view, and manage metric alerts by using Azure Monitor](alerts/alerts-metric.md) for information on creating a metric alert.-- [Log alert rules](alerts/alerts-unified-log.md). Creates an alert when the results of a schedule query match specified criteria. They're the most expensive of the alert rules, but they allow the most complex criteria. See [Create, view, and manage log alerts by using Azure Monitor](alerts/alerts-log.md) for information on creating a log query alert.-- [Application alerts](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability). Performs proactive performance and availability testing of your web application. You can perform a ping test at no cost, but there's a cost to more complex testing. See [Monitor the availability of any website](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) for a description of the different tests and information on creating them.
-## Alert severity
-Each alert rule defines the severity of the alerts that it creates based on the following table. Alerts in the Azure portal are grouped by level so that you can manage similar alerts together and quickly identify alerts that require the greatest urgency.
+## Reliability
+In the cloud, we acknowledge that failures happen. Instead of trying to prevent failures altogether, the goal is to minimize the effects of a single failing component. Use the following information to minimize failure of your Azure Monitor alert rule components.
-| Level | Name | Description |
-|:|:|:|
-| Sev 0 | Critical | Loss of service or application availability or severe degradation of performance. Requires immediate attention. |
-| Sev 1 | Error | Degradation of performance or loss of availability of some aspect of an application or service. Requires attention but not immediate. |
-| Sev 2 | Warning | A problem that doesn't include any current loss in availability or performance, although it has the potential to lead to more severe problems if unaddressed. |
-| Sev 3 | Informational | Doesn't indicate a problem but provides interesting information to an operator, such as successful completion of a regular process. |
-| Sev 4 | Verbose | Doesn't indicate a problem but provides detailed information that is verbose.
-Assess the severity of the condition each rule is identifying to assign an appropriate level. Define the types of issues you assign to each severity level and your standard response to each in your alerts strategy.
-## Action groups
+## Security
+Security is one of the most important aspects of any architecture. Azure Monitor provides features to employ both the principle of least privilege and defense-in-depth. Use the following information to maximize the security of Azure Monitor alerts.
-Automated responses to alerts in Azure Monitor are defined in [action groups](alerts/action-groups.md). An action group is a collection of one or more notifications and actions that are fired when an alert is triggered. A single action group can be used with multiple alert rules and contain one or more of the following items:
-- **Notifications**: Messages that notify operators and administrators that an alert was created.-- **Actions**: Automated processes that attempt to correct the detected issue.
-## Notifications
+## Cost optimization
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
-Notifications are messages sent to one or more users to notify them that an alert has been created. Because a single action group can be used with multiple alert rules, you should design a set of action groups for different sets of administrators and users who will receive the same sets of alerts. Use any of the following types of notifications depending on the preferences of your operators and your organizational standards:
--- Email-- SMS-- Push to Azure app-- Voice-- Email Azure Resource Manager role-
-## Actions
-
-Actions are automated responses to an alert. You can use the available actions for any scenario that they support, but the following sections describe how each action is typically used.
-
-### Automated remediation
-
-Use the following actions to attempt automated remediation of the issue identified by the alert:
--- **Automation runbook**: Start a built-in runbook or a custom runbook in Azure Automation. For example, built-in runbooks are available to perform such functions as restarting or scaling up a virtual machine.-- **Azure Functions**: Start an Azure function.-
-### ITSM and on-call management
--- **IT service management (ITSM)**: Use the [ITSM Connector]() to create work items in your ITSM tool based on alerts from Azure Monitor. You first configure the connector and then use the **ITSM** action in alert rules.-- **Webhooks**: Send the alert to an incident management system that supports webhooks such as PagerDuty and Splunk On-Call.-- **Secure webhook**: Integrate ITSM with Azure Active Directory Authentication.-
-## Minimize alert activity
+> [!NOTE]
+> See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
-You want to create alerts for any important information in your environment. But you don't want to create excessive alerts and notifications for issues that don't warrant them. To minimize your alert activity to ensure that critical issues are surfaced while you don't generate excess information and notifications for administrators, follow these guidelines:
-- See [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy) to determine whether a symptom is an appropriate candidate for alerting.-- Use the **Automatically resolve alerts** option in metric alert rules to resolve alerts when the condition has been corrected.-- Use the **Suppress alerts** option in log query alert rules to avoid creating multiple alerts for the same issue.-- Ensure that you use appropriate severity levels for alert rules so that high-priority issues can be analyzed together.-- Limit notifications for alerts with a severity of Warning or less because they don't require immediate attention.
-## Create alert rules at scale
+## Operational excellence
+Operational excellence refers to the operations processes required to keep a service running reliably in production. Use the following information to minimize the operational requirements for supporting Azure Monitor alerts.
-Typically, you'll want to alert on issues for all your critical Azure applications and resources. Use the following methods for creating alert rules at scale:
-- Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. For a list of Azure services that are currently supported for this feature, see [Monitoring at scale using metric alerts in Azure Monitor](alerts/alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor).-- For metric alert rules for Azure services that don't support multiple resources, use automation tools such as the Azure CLI and PowerShell with Resource Manager templates to create the same alert rule for multiple resources. For samples, see [Resource Manager template samples for metric alert rules in Azure Monitor](alerts/resource-manager-alerts-metric.md).-- To return data for multiple resources, write queries in log query alert rules. Use the **Split by dimensions** setting in the rule to create separate alerts for each resource.
-> [!NOTE]
-> Resource-centric log query alert rules currently in public preview allow you to use all resources in a subscription or resource group as a target for a log query alert.
+## Performance efficiency
+Performance efficiency is the ability of your workload to scale to meet the demands placed on it by users in an efficient manner.
+Alerts offer a high degree of performance efficiency without requiring any design decisions on your part.
-## Next steps
+## Next step
-[Optimize cost in Azure Monitor](best-practices-cost.md)
+- [Get best practices for a complete deployment of Azure Monitor](best-practices.md).
azure-monitor Azure Monitor Data Explorer Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-monitor-data-explorer-proxy.md
To run a cross-service query, you need:
## Function supportability
-Azure Monitor cross-service queries support functions for Application Insights, Log Analytics, Azure Data Explorer, and Azure Resource Graph.
+Azure Monitor cross-service queries support **only ".show"** functions for Application Insights, Log Analytics, Azure Data Explorer, and Azure Resource Graph.
This capability enables cross-cluster queries to reference an Azure Monitor, Azure Data Explorer, or Azure Resource Graph tabular function directly. The following commands are supported with the cross-service query:
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-monitor Vminsights Enable Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-hybrid.md
You can download the Dependency agent from these locations:
| File | OS | Version | SHA-256 | |:--|:--|:--|:--|
-| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.16.22650 | BE537D4396625ADD93B8C1D5AF098AE9D9472D8A20B2682B32920C5517F1C041 |
-| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.16.22650 | FF86D821BA845833C9FE5F6D5C8A5F7A60617D3AD7D84C75143F3E244ABAAB74 |
+| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.17.3860 | BA3D1CF76E2BCCE35815B0F62C0A18E84E0459B468066D0F80F56514A74E0BF6 |
+| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.17.3860 | 22538642730748F4AD8688D00C2919055825BA425BAAD3591D6EBE0021863617 |
## Install the Dependency agent on Windows
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
na Previously updated : 08/23/2023 Last updated : 09/13/2023
Azure NetApp Files backup is supported for the following regions:
* Qatar Central * South Africa North * South Central US
+* South India
* Southeast Asia
+* Sweden Central
* UAE North * UK South * West Europe
azure-netapp-files Enable Continuous Availability Existing SMB https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md
You can enable the SMB Continuous Availability (CA) feature when you [create a n
1. Reboot the Windows systems connecting to the existing SMB share. > [!NOTE]
- > Selecting the **Enable Continuous Availability** option alone does not automatically make the existing SMB sessions continuously available. After selecting the option, be sure to reboot the server for the change to take effect.
+ > Selecting the **Enable Continuous Availability** option alone does not automatically make the existing SMB sessions continuously available. After selecting the option, be sure to reboot the server immediately for the change to take effect.
1. Use the following command to verify that CA is enabled and used on the system that's mounting the volume:
You can enable the SMB Continuous Availability (CA) feature when you [create a n
If you know the server name, you can use the `-ServerName` parameter with the command. See the [Get-SmbConnection](/powershell/module/smbshare/get-smbconnection?view=windowsserver2019-ps&preserve-view=true) PowerShell command details.
-1. Once you enable SMB Continuous Availability, reboot the server for the change to take effect.
- ## Next steps * [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md)
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
You can enable preview features by adding:
The preceding sample enables `userDefinedTypes` and `extensibility`. The available experimental features include:
+- **assertions**: Should be enabled in tandem with the `testFramework` experimental feature flag for expected functionality. Allows you to author boolean assertions by using the `assert` keyword, comparing the actual value of a parameter, variable, or resource name to an expected value. Assert statements can only be written directly within the Bicep file whose resources they reference.
- **extensibility**: Allows Bicep to use a provider model to deploy non-ARM resources. Currently, we only support a Kubernetes provider. See [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md). - **sourceMapping**: Enables basic source mapping to map an error location returned in the ARM template layer back to the relevant location in the Bicep file. - **resourceTypedParamsAndOutputs**: Enables the type for a parameter or output to be of type resource to make it easier to pass resource references between modules. This feature is only partially implemented. See [Simplifying resource referencing](https://github.com/azure/bicep/issues/2245). - **symbolicNameCodegen**: Allows the ARM template layer to use a new schema to represent resources as an object dictionary rather than an array of objects. This feature improves the semantic equivalent of the Bicep and ARM templates, resulting in more reliable code generation. Enabling this feature has no effect on the Bicep layer's functionality.
+- **testFramework**: Should be enabled in tandem with the `assertions` experimental feature flag for expected functionality. Allows you to author client-side, offline unit-test blocks that reference Bicep files and mock deployment parameters in a separate `test.bicep` file by using the new `test` keyword. Test blocks can be run with the command *bicep test <filepath_to_file_with_test_blocks>*, which runs all `assert` statements in the Bicep files referenced by the test blocks.
- **userDefinedFunctions**: Allows you to define your own custom functions. See [User-defined functions in Bicep](./user-defined-functions.md). ## Next steps
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
To get the same data as a file of comma-separated values, download [tag-support.
> | expressRouteProviderPorts | No | No | > | expressRouteServiceProviders | No | No | > | firewallPolicies | Yes, see [note below](#network-limitations) | Yes |
+> | firewallPolicies / ruleCollectionGroups | No | No |
> | frontdoors | Yes, but limited (see [note below](#network-limitations)) | Yes | > | frontdoors / frontendEndpoints | Yes, but limited (see [note below](#network-limitations)) | No | > | frontdoors / frontendEndpoints / customHttpsConfiguration | Yes, but limited (see [note below](#network-limitations)) | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | internalPublicIpAddresses | No | No | > | ipGroups | Yes | Yes | > | loadBalancers | Yes | Yes |
+> | loadBalancers / backendAddressPools | No | No |
> | localNetworkGateways | Yes | Yes | > | natGateways | Yes | Yes | > | networkExperimentProfiles | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | networkManagers | Yes | Yes | > | networkProfiles | Yes | Yes | > | networkSecurityGroups | Yes | Yes |
+> | networkSecurityGroups / securityRules | No | No |
> | networkSecurityPerimeters | Yes | Yes | > | networkVirtualAppliances | Yes | Yes | > | networkWatchers | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | publicIPPrefixes | Yes | Yes | > | routeFilters | Yes | Yes | > | routeTables | Yes | Yes |
+> | routeTables / routes | No | No |
> | securityPartnerProviders | Yes | Yes | > | serviceEndpointPolicies | Yes | Yes | > | trafficManagerGeographicHierarchies | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | virtualNetworkGateways | Yes | Yes | > | virtualNetworks | Yes | Yes | > | virtualNetworks / privateDnsZoneLinks | No | No |
+> | virtualNetworks / subnets | No | No |
> | virtualNetworks / taggedTrafficConsumers | No | No | > | virtualNetworkTaps | Yes | Yes | > | virtualRouters | Yes | Yes |
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
bastion Connect Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-ip-address.md
description: Learn how to connect to your virtual machines using a specified pri
Previously updated : 08/23/2023 Last updated : 09/13/2023
IP-based connection lets you connect to your on-premises, non-Azure, and Azure v
* IP-based connection won't work with force tunneling over VPN, or when a default route is advertised over an ExpressRoute circuit. Azure Bastion requires access to the Internet, and force tunneling or the default route advertisement will result in traffic blackholing.
-* Azure Active Directory authentication and custom ports and protocols aren't currently supported when connecting to a VM via native client.
+* Azure Active Directory authentication isn't supported for RDP connections. It is supported for SSH connections via native client.
-* UDR is not supported on Bastion subnet, including with IP-based connection.
+* Custom ports and protocols aren't currently supported when connecting to a VM via native client.
+
+* UDR isn't supported on Bastion subnet, including with IP-based connection.
## Prerequisites
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 23-09 | [5030329] | Servicing Stack Update LKG | 4.122 | Sep 12, 2023 | | Rel 23-09 | [5030504] | Servicing Stack Update LKG | 5.86 | Sep 12, 2023 | | Rel 23-09 | [5028264] | Servicing Stack Update LKG | 2.142 | Jul 11, 2023 |
+| Rel 23-09 | [4494175] | January '20 Microcode | 5.86 | Sep 1, 2020 |
+| Rel 23-09 | [4494174] | January '20 Microcode | 6.62 | Sep 1, 2020 |
| Rel 23-09 | 5030369 | Servicing Stack Update | 7.31 | | | Rel 23-09 | 5030505 | Servicing Stack Update | 6.62 | |
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
TODO:
- Should we be using a newer API version? --> ```bash
- token=$(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s | jq -r ".accessToken")
+ token=$(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s | jq -r ".access_token")
curl https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"Bearer $token" -s | jq ```
TODO:
Uri = "$env:MSI_ENDPOINT`?resource=https://management.core.windows.net/" Headers = @{Metadata='true'} }
- $token= ((Invoke-WebRequest @parameters ).content | ConvertFrom-Json).accessToken
+ $token= ((Invoke-WebRequest @parameters ).content | ConvertFrom-Json).access_token
$parameters = @{ Uri = 'https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview' Headers = @{Authorization = "Bearer $token"}
again.
Bash: ```bash
- TOKEN=$(az account get-access-token --resource "https://management.azure.com/" | jq -r ".accessToken")
+ TOKEN=$(az account get-access-token --resource "https://management.azure.com/" | jq -r ".access_token")
curl -X DELETE https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"Bearer $TOKEN" ```
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md
Chat conversations happen within **chat threads**. Chat threads have the followi
Typically, the thread creator and participants have the same level of access to the thread and can execute all related operations available in the SDK, including deleting it. Participants don't have write access to messages sent by other participants, which means only the message sender can update or delete their sent messages. If another participant tries to do that, they get an error. ### Chat Data
-Azure Communication Services stores chat messages for 90 days. Chat thread participants can use `ListMessages` to view message history for a particular thread. However, the API does not return messages once the 90 day period has passed. Users that are removed from a chat thread are able to view previous message history for 90 days but cannot send or receive new messages. To learn more about data being stored in Azure Communication Services chat service, refer to the [data residency and privacy page](../privacy.md).
+Azure Communication Services stores chat messages indefinitely until they are deleted by the customer. Chat thread participants can use `ListMessages` to view message history for a particular thread. Users who are removed from a chat thread are able to view previous message history but cannot send or receive new messages. Accidentally deleted messages are not recoverable by the system. To learn more about data stored in the Azure Communication Services chat service, refer to the [data residency and privacy page](../privacy.md).
+
+In 2024, new functionality will be introduced where customers must choose between indefinite message retention and automatic deletion after 90 days. Existing messages remain unaffected.
For customers that use Virtual appointments, refer to our Teams Interoperability [user privacy](../interop/guest/privacy.md#chat-storage) for storage of chat messages in Teams meetings.
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
This sandbox setup is designed to help developers begin building the application
|Send typing indicator|Chat thread|10|30| ### Chat storage
-Chat messages are stored for 90 days. Submit [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md) if you require storage for longer time period. If the time period is less than 90 days for chat messages, use the delete chat thread APIs.
+Azure Communication Services stores chat messages indefinitely until they are deleted by the customer.
+
+Beginning in CY24 Q1, customers must choose between indefinite message retention and automatic deletion after 90 days. Existing messages remain unaffected, but customers can opt for a 90-day retention period if desired.
+
+> [!NOTE]
+> Accidentally deleted messages are not recoverable by the system.
## Voice and video calling
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
Alphanumeric sender ID is not capable of receiving inbound messages or STOP mess
Short Code availability is currently restricted to paid Azure subscriptions that have a billing address in the United States. Short Codes cannot be acquired on trial accounts or using Azure free credits. For more details, check out our [subscription eligibility page](../numbers/sub-eligibility-number-capability.md). ### Can you text to a toll-free number from a short code?
-No. Texting to a toll-free number from a short code is not supported. You also wont be able to receive a message from a toll-free number to a short code.
+ACS toll-free numbers are enabled to receive messages from short codes. However, short codes are not typically enabled to send messages to toll-free numbers. If your messages from short codes to ACS toll-free numbers are failing, check with your short code provider whether the short code is enabled to send messages to toll-free numbers.
### How should a short code be formatted? Short codes do not fall under E.164 formatting guidelines and do not have a country code, or a "+" sign prefix. In the SMS API request, your short code should be passed as the 5-6 digit number you see in your short codes blade without any prefix.
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
To help you troubleshoot certain types of issues, you may be asked for any of th
* **Call ID**: This ID is used to identify Communication Services calls. * **SMS message ID**: This ID is used to identify SMS messages. * **Short Code Program Brief ID**: This ID is used to identify a short code program brief application.
+* **Toll-free verification campaign brief ID**: This ID is used to identify a toll-free verification campaign brief application.
* **Email message ID**: This ID is used to identify Send Email requests. * **Correlation ID**: This ID is used to identify requests made using Call Automation. * **Call logs**: These logs contain detailed information can be used to troubleshoot calling and network issues.
The program brief ID can be found on the [Azure portal](https://portal.azure.com
:::image type="content" source="./media/short-code-trouble-shooting.png" alt-text="Screenshot showing a short code program brief ID."::: +
+## Access your toll-free verification campaign brief ID
+The campaign brief ID can be found on the [Azure portal](https://portal.azure.com) in the Regulatory Documents blade.
++ ## Access your email operation ID
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/network-requirements.md
Previously updated : 06/30/2021 Last updated : 09/12/2023
The following bandwidth requirements are for the native Windows, Android, and iO
## Firewall configuration
-Communication Services connections require internet connectivity to specific ports and IP addresses to deliver high-quality multimedia experiences. Without access to these ports and IP addresses, Communication Services can still work. The optimal experience is provided when the recommended ports and IP ranges are open.
+Communication Services connections require internet connectivity to specific ports and IP addresses to deliver high-quality multimedia experiences. Without access to these ports and IP addresses, Communication Services will not work properly. The IP ranges and allowlisted domains that need to be enabled are:
| Category | IP ranges or FQDN | Ports | | :-- | :-- | :-- |
confidential-computing Choose Confidential Containers Offerings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/choose-confidential-containers-offerings.md
Title: Choose container offerings for confidential computing description: How to choose the right confidential container offerings to meet your security, isolation and developer needs.-+ Previously updated : 11/01/2021- Last updated : 9/12/2023+
Azure confidential computing offers multiple types of containers with varying ti
Confidential containers also help with code protection through encryption. You can create hardware-based assurances and hardware root of trust. You can also lower your attack surface area with confidential containers.
-The diagram below will guide different offerings in this portfolio
--- ## Links to container compute offerings
-**Confidential VM worker nodes on AKS)** supporting full AKS features with node level VM based Trusted Execution Environment (TEE). Also support remote guest attestation. [Get started with CVM worker nodes with a lift and shift workload to CVM node pool.](../aks/use-cvm.md)
+**Confidential VM worker nodes on AKS** supporting full AKS features with a node-level, VM-based Trusted Execution Environment (TEE). They also support remote guest attestation. [Get started with CVM worker nodes with a lift and shift workload to CVM node pool.](../aks/use-cvm.md)
-**Unmodified containers with serverless offering** [confidential containers on Azure Container Instance (ACI)](./confidential-containers.md#vm-isolated-confidential-containers-on-azure-container-instances-acipublic-preview) supporting existing Linux containers with remote guest attestation flow.
+**Unmodified containers with serverless offering** [confidential containers on Azure Container Instance (ACI)](./confidential-containers.md#vm-isolated-confidential-containers-on-azure-container-instances-aci) supporting existing Linux containers with remote guest attestation flow.
**Unmodified containers with Intel SGX** support higher programming languages on Intel SGX through the Azure Partner ecosystem of OSS projects. For more information, see the [unmodified containers deployment flow and samples](./confidential-containers.md).
confidential-computing Confidential Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers.md
Title: Confidential containers on Azure description: Learn about unmodified container support with confidential containers. -+ Previously updated : 3/1/2023- Last updated : 9/12/2023+
Below are the qualities of confidential containers:
- Provides strong assurances of data confidentiality, code integrity and data integrity in a cloud environment with hardware based confidential computing offerings - Helps isolate your containers from other container groups/pods, as well as VM node OS kernel
-## VM Isolated Confidential containers on Azure Container Instances (ACI) - Public preview
+## VM Isolated Confidential containers on Azure Container Instances (ACI)
[Confidential containers on ACI](../container-instances/container-instances-confidential-overview.md) enables fast and easy deployment of containers natively in Azure, with the ability to protect data and code in use thanks to AMD EPYC™ processors with confidential computing capabilities. This is because your containers run in a hardware-based and attested Trusted Execution Environment (TEE) without the need to adopt a specialized programming model and without infrastructure management overhead. With this launch, you get: 1. Full guest attestation, which reflects the cryptographic measurement of all hardware and software components running within your Trusted Computing Base (TCB). 2. Tooling to generate policies that will be enforced in the Trusted Execution Environment.
confidential-computing Virtual Machine Solutions Sgx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-sgx.md
Previously updated : 12/20/2021 Last updated : 9/12/2023
Under **properties**, you also have to specify an image under **storageProfile**
"version": "latest" }, "20_04-lts-gen2": {
- "offer": "UbuntuServer",
+ "offer": "0001-com-ubuntu-server-focal",
"publisher": "Canonical", "sku": "20_04-lts-gen2", "version": "latest" } "22_04-lts-gen2": {
- "offer": "UbuntuServer",
+ "offer": "0001-com-ubuntu-server-jammy",
"publisher": "Canonical", "sku": "22_04-lts-gen2", "version": "latest"
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 09/06/2023 Last updated : 09/13/2023 # Azure Policy built-in definitions for Azure Container Instances
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry
description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
container-registry Tutorial Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-artifact-cache.md
+
+ Title: Artifact Cache - Overview
+description: An overview of the Artifact Cache feature, its limitations, and the benefits of enabling it in your registry.
+ Last updated : 04/19/2022++
+# Artifact Cache - Overview
+
+The Artifact Cache feature allows users to cache container images in a private container registry. Artifact Cache is available in the *Basic*, *Standard*, and *Premium* [service tiers](container-registry-skus.md).
+
+This article is part one in a six-part tutorial series. The tutorial covers:
+
+> [!div class="checklist"]
+1. [Artifact Cache](tutorial-artifact-cache.md)
+2. [Enable Artifact Cache - Azure portal](tutorial-enable-artifact-cache.md)
+3. [Enable Artifact Cache with authentication - Azure portal](tutorial-enable-artifact-cache-auth.md)
+4. [Enable Artifact Cache - Azure CLI](tutorial-enable-artifact-cache-cli.md)
+5. [Enable Artifact Cache with authentication - Azure CLI](tutorial-enable-artifact-cache-auth-cli.md)
+6. [Troubleshooting guide for Artifact Cache](tutorial-troubleshoot-artifact-cache.md)
+
+## Artifact Cache
+
+Artifact Cache enables you to cache container images from public and private repositories.
+
+Implementing Artifact Cache provides the following benefits:
+
+***More reliable pull operations:*** Caching container images in ACR makes pulls faster. Because Microsoft manages the Azure network, and because ACR provides geo-replication and availability zone support, pull operations are faster and more reliable for customers.
+
+***Private networks:*** Cached registries are available on private networks. Therefore, users can configure their firewall to meet compliance standards.
+
+***Ensuring upstream content is delivered:*** All registries, especially public ones like Docker Hub, have anonymous pull limits to ensure they can provide service to everyone. Artifact Cache allows users to pull images from the local ACR instead of the upstream registry. Artifact Cache ensures content is delivered from upstream, and users get the benefit of pulling container images from the cache without those pulls counting toward the upstream pull limits.
+
+## Terminology
+
+- Cache Rule - A Cache Rule is a rule you can create to pull artifacts from a supported repository into your cache.
+ - A cache rule contains four parts:
+
+ 1. Rule Name - The name of your cache rule. For example, `Hello-World-Cache`.
+
+ 2. Source - The name of the Source Registry.
+
+ 3. Repository Path - The source path of the repository to find and retrieve artifacts you want to cache. For example, `docker.io/library/hello-world`.
+
+ 4. New ACR Repository Namespace - The name of the new repository path to store artifacts. For example, `hello-world`. The Repository can't already exist inside the ACR instance.
+
+- Credentials
+ - Credentials are a username and password pair for the source registry. You need credentials to authenticate with a public or private repository. Credentials contain four parts:
+
+ 1. Credentials - The name of your credentials.
+
+ 2. Source registry Login Server - The login server of your source registry.
+
+ 3. Source Authentication - The key vault locations to store credentials.
+
+ 4. Username and Password secrets- The secrets containing the username and password.
+
+## Upstream support
+
+Artifact Cache currently supports the following upstream registries:
+
+| Upstream registries | Support | Availability |
+| | | -- |
+| Docker | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
+| ECR Public | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
+| GitHub Container Registry | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| Nvidia | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
+| Quay | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| registry.k8s.io | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
+
+## Limitations
+
+- The Artifact Cache feature doesn't support customer-managed key (CMK) enabled registries.
+
+- Caching occurs only after at least one image pull of the available container image is complete. For every new image available, a new image pull must be completed. Artifact Cache doesn't automatically pull new tags of images when a new tag is available. Automatic pulling is on the roadmap but isn't supported in this release.
+
+- Artifact Cache supports a maximum of 1,000 cache rules.
+
+## Next steps
+
+* To enable Artifact Cache by using the Azure portal, advance to the next article: [Enable Artifact Cache](tutorial-enable-artifact-cache.md).
+
+<!-- LINKS - External -->
+
+[docker-rate-limit]:https://aka.ms/docker-rate-limit
container-registry Tutorial Enable Artifact Cache Auth Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-artifact-cache-auth-cli.md
+
+ Title: Enable Artifact Cache with authentication - Azure CLI
+description: Learn how to enable Artifact Cache with authentication using Azure CLI.
++ Last updated : 06/17/2022+++
+# Enable Artifact Cache with authentication - Azure CLI
+
+This article is part five of a six-part tutorial series. [Part one](tutorial-artifact-cache.md) provides an overview of Artifact Cache, its features, benefits, and limitations. In [part two](tutorial-enable-artifact-cache.md), you learn how to enable the Artifact Cache feature by using the Azure portal. In [part three](tutorial-enable-artifact-cache-cli.md), you learn how to enable the Artifact Cache feature by using the Azure CLI. In [part four](tutorial-enable-artifact-cache-auth.md), you learn how to enable the Artifact Cache feature with authentication by using the Azure portal.
+
+This article walks you through the steps of enabling Artifact Cache with authentication by using the Azure CLI. You have to use a credential set to make an authenticated pull or to access a private repository.
+
+## Prerequisites
+
+* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.46.0 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI].
+* You have an existing Key Vault to store credentials. Learn more about [creating and storing credentials in a Key Vault.][create-and-store-keyvault-credentials]
+* You can set and retrieve secrets from your Key Vault. Learn more about [set and retrieve a secret from Key Vault.][set-and-retrieve-a-secret]
+
+## Configure Artifact Cache with authentication - Azure CLI
+
+### Create a Credential Set - Azure CLI
+
+Before configuring a credential set, you have to create and store secrets in Azure Key Vault and retrieve the secrets from the Key Vault. Learn more about [creating and storing credentials in a Key Vault][create-and-store-keyvault-credentials] and how to [set and retrieve a secret from Key Vault][set-and-retrieve-a-secret].
+
+1. Run the [az acr credential-set create][az-acr-credential-set-create] command to create a credential set.
+
+ - For example, to create a credential set for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr credential-set create \
+ -r MyRegistry \
+ -n MyRule \
+ -l docker.io \
+ -u https://MyKeyvault.vault.azure.net/secrets/usernamesecret \
+ -p https://MyKeyvault.vault.azure.net/secrets/passwordsecret
+ ```
+
+2. Run the [az acr credential-set update][az-acr-credential-set-update] command to update the username or password KV secret ID on a credential set.
+
+ - For example, to update the username or password KV secret ID on a credential set for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr credential-set update -r MyRegistry -n MyRule -p https://MyKeyvault.vault.azure.net/secrets/newsecretname
+ ```
+
+3. Run the [az acr credential-set show][az-acr-credential-set-show] command to show a credential set.
+
+ - For example, to show a credential set for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr credential-set show -r MyRegistry -n MyCredSet
+ ```
+
+### Create a cache rule with a Credential Set - Azure CLI
+
+1. Run the [az acr cache create][az-acr-cache-create] command to create a cache rule.
+
+ - For example, to create a cache rule with a credential set for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache create -r MyRegistry -n MyRule -s docker.io/library/ubuntu -t ubuntu -c MyCredSet
+ ```
+
+2. Run the [az acr cache update][az-acr-cache-update] command to update the credential set on a cache rule.
+
+ - For example, to update the credential set on a cache rule for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache update -r MyRegistry -n MyRule -c NewCredSet
+ ```
+
+ - For example, to remove a credential set from an existing cache rule for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache update -r MyRegistry -n MyRule --remove-cred-set
+ ```
+
+3. Run the [az acr cache show][az-acr-cache-show] command to show a cache rule.
+
+ - For example, to show a cache rule for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache show -r MyRegistry -n MyRule
+ ```
+
+### Assign permissions to Key Vault
+
+1. Get the principal ID of the system identity used to access Key Vault.
+
+ ```azurecli-interactive
+ PRINCIPAL_ID=$(az acr credential-set show \
+ -n MyCredSet \
+ -r MyRegistry \
+ --query 'identity.principalId' \
+ -o tsv)
+ ```
+
+2. Run the [az keyvault set-policy][az-keyvault-set-policy] command to assign access to the Key Vault, before pulling the image.
+
+ - For example, to assign permissions for the credential set to access the Key Vault secret:
+
+ ```azurecli-interactive
+ az keyvault set-policy --name MyKeyVault \
+ --object-id $PRINCIPAL_ID \
+ --secret-permissions get
+ ```
+
+### Pull your Image
+
+1. Pull the image from your cache by using the `docker pull` command with the registry login server name, repository name, and desired tag.
+
+ - For example, to pull the image from the repository `hello-world` with its desired tag `latest` for a given registry login server `myregistry.azurecr.io`.
+
+ ```azurecli-interactive
+ docker pull myregistry.azurecr.io/hello-world:latest
+ ```
+
+## Clean up the resources
+
+1. Run the [az acr cache list][az-acr-cache-list] command to list the cache rules in the Azure Container Registry.
+
+ - For example, to list the cache rules for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache list -r MyRegistry
+ ```
+
+2. Run the [az acr cache delete][az-acr-cache-delete] command to delete a cache rule.
+
+ - For example, to delete a cache rule for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache delete -r MyRegistry -n MyRule
+ ```
+
+3. Run the [az acr credential-set list][az-acr-credential-set-list] command to list the credential sets in an Azure Container Registry.
+
+ - For example, to list the credential sets for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr credential-set list -r MyRegistry
+ ```
+
+4. Run the [az acr credential-set delete][az-acr-credential-set-delete] command to delete a credential set.
+
+ - For example, to delete a credential set for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr credential-set delete -r MyRegistry -n MyCredSet
+ ```
+
+## Next steps
+
+* Advance to the [next article](tutorial-troubleshoot-artifact-cache.md) to walk through the troubleshooting guide for Artifact Cache.
+
+<!-- LINKS - External -->
+[create-and-store-keyvault-credentials]: ../key-vault/secrets/quick-create-cli.md#add-a-secret-to-key-vault
+[set-and-retrieve-a-secret]: ../key-vault/secrets/quick-create-cli.md#retrieve-a-secret-from-key-vault
+[az-keyvault-set-policy]: ../key-vault/general/assign-access-policy.md#assign-an-access-policy
+[Install Azure CLI]: /cli/azure/install-azure-cli
+[Azure Cloud Shell]: /azure/cloud-shell/quickstart
+[az-acr-cache-create]:/cli/azure/acr/cache#az-acr-cache-create
+[az-acr-cache-show]:/cli/azure/acr/cache#az-acr-cache-show
+[az-acr-cache-list]:/cli/azure/acr/cache#az-acr-cache-list
+[az-acr-cache-delete]:/cli/azure/acr/cache#az-acr-cache-delete
+[az-acr-cache-update]:/cli/azure/acr/cache#az-acr-cache-update
+[az-acr-credential-set-create]:/cli/azure/acr/credential-set#az-acr-credential-set-create
+[az-acr-credential-set-update]:/cli/azure/acr/credential-set#az-acr-credential-set-update
+[az-acr-credential-set-show]: /cli/azure/acr/credential-set#az-acr-credential-set-show
+[az-acr-credential-set-list]: /cli/azure/acr/credential-set#az-acr-credential-set-list
+[az-acr-credential-set-delete]: /cli/azure/acr/credential-set#az-acr-credential-set-delete
container-registry Tutorial Enable Artifact Cache Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-artifact-cache-auth.md
+
+ Title: Enable Artifact Cache with authentication - Azure portal
+description: Learn how to enable Artifact Cache with authentication using Azure portal.
+ Last updated : 04/19/2022+++
+# Enable Artifact Cache with authentication - Azure portal
+
+This article is part four of a six-part tutorial series. [Part one](tutorial-artifact-cache.md) provides an overview of Artifact Cache, its features, benefits, and limitations. In [part two](tutorial-enable-artifact-cache.md), you learn how to enable the Artifact Cache feature by using the Azure portal. In [part three](tutorial-enable-artifact-cache-cli.md), you learn how to enable the Artifact Cache feature by using the Azure CLI.
+
+This article walks you through the steps of enabling Artifact Cache with authentication by using the Azure portal. You need a credential set to make an authenticated pull or to access a private repository.
+
+## Prerequisites
+
+* Sign in to the [Azure portal](https://ms.portal.azure.com/).
+* You have an existing Key Vault to store credentials. Learn more about [creating and storing credentials in a Key Vault][create-and-store-keyvault-credentials].
+* Your existing key vaults don't use RBAC access controls.
+
+## Configure Artifact Cache with authentication - Azure portal
+
+Follow these steps to create a cache rule in the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your Azure Container Registry.
+
+2. In the side **Menu**, under **Services**, select **Cache**.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/cache-preview-01.png" alt-text="Screenshot for Registry cache.":::
++
+3. Select **Create Rule**.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/cache-blade-02.png" alt-text="Screenshot for Create Rule.":::
++
+4. A window for **New cache rule** appears.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/new-cache-rule-auth-03.png" alt-text="Screenshot for new Cache Rule.":::
++
+5. Enter the **Rule name**.
+
+6. Select **Source** Registry from the dropdown menu.
+
+7. Enter the **Repository Path** to the artifacts you want to cache.
+
+8. To add authentication to the repository, select the **Authentication** checkbox.
+
+9. Choose **Create new credentials** to create a new set of credentials to store the username and password for your source registry. Learn how to [create new credentials](tutorial-enable-artifact-cache-auth.md#create-new-credentials)
+
+10. If you have the credentials ready, **Select credentials** from the drop-down menu.
+
+11. Under **Destination**, enter the name of the **New ACR Repository Namespace** to store cached artifacts.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/save-cache-rule-04.png" alt-text="Screenshot to save Cache Rule.":::
++
+12. Select **Save**.
+
+13. Pull the image from your cache using the Docker command with the registry login server name, repository name, and desired tag.
+
+ - For example, to pull the image from the repository `hello-world` with its desired tag `latest` for a given registry login server `myregistry.azurecr.io`.
+
+ ```azurecli-interactive
+ docker pull myregistry.azurecr.io/hello-world:latest
+ ```
+
+### Create new credentials
+
+Before configuring a credential set, you need to create and store secrets in Azure Key Vault and retrieve them from the Key Vault. Learn more about [creating and storing credentials in a Key Vault][create-and-store-keyvault-credentials] and how to [set and retrieve a secret from Key Vault][set-and-retrieve-a-secret].
+
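+For example, here's a minimal Azure CLI sketch of storing a username and password as Key Vault secrets (the vault name `MyKeyVault`, the secret names, and the values are placeholders for illustration):
+
+```azurecli-interactive
+az keyvault secret set --vault-name MyKeyVault --name usernamesecret --value myUpstreamUsername
+az keyvault secret set --vault-name MyKeyVault --name passwordsecret --value myUpstreamTokenOrPassword
+```
+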
+1. Navigate to **Credentials** > **Add credential set** > **Create new credentials**.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/add-credential-set-05.png" alt-text="Screenshot for adding credential set.":::
++
+ :::image type="content" source="./media/container-registry-artifact-cache/create-credential-set-06.png" alt-text="Screenshot for create new credential set.":::
++
+1. Enter a **Name** for the new credentials for your source registry.
+
+1. Select a **Source Authentication** option. Artifact Cache currently supports **Select from Key Vault** and **Enter secret URIs**.
+
+1. For the **Select from Key Vault** option, learn more about [creating credentials using a key vault][create-and-store-keyvault-credentials].
+
+1. Select **Create**.
+
+## Next steps
+
+* Advance to the [next article](tutorial-enable-artifact-cache-cli.md) to enable the Artifact Cache using Azure CLI.
+
+<!-- LINKS - External -->
+[create-and-store-keyvault-credentials]: ../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault
+[set-and-retrieve-a-secret]: ../key-vault/secrets/quick-create-portal.md#retrieve-a-secret-from-key-vault
container-registry Tutorial Enable Artifact Cache Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-artifact-cache-cli.md
+
+ Title: Enable Artifact Cache - Azure CLI
+description: Learn how to enable Registry Cache in your Azure Container Registry using Azure CLI.
++ Last updated : 06/17/2022+++
+# Enable Artifact Cache - Azure CLI
+
+This article is part three of a six-part tutorial series. [Part one](tutorial-artifact-cache.md) provides an overview of Artifact Cache, its features, benefits, and limitations. In [part two](tutorial-enable-artifact-cache.md), you learn how to enable the Artifact Cache feature by using the Azure portal. This article walks you through the steps of enabling Artifact Cache by using the Azure CLI without authentication.
+
+## Prerequisites
+
+* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.46.0 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI].
+
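+For example, a minimal check of your local CLI version, followed by an upgrade if you're below the required version:
+
+```azurecli-interactive
+az --version
+az upgrade
+```
+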
+## Configure Artifact Cache - Azure CLI
+
+Follow these steps to create a cache rule without using a credential set.
+
+### Create a Cache rule
+
+1. Run the [az acr cache create][az-acr-cache-create] command to create a cache rule.
+
+ - For example, to create a Cache rule without a credential set for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+    az acr cache create -r MyRegistry -n MyRule -s docker.io/library/ubuntu -t ubuntu
+ ```
+
+2. Run the [az acr cache show][az-acr-cache-show] command to show a cache rule.
+
+ - For example, to show a Cache rule for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+    az acr cache show -r MyRegistry -n MyRule
+ ```
+
+### Pull your image
+
+1. Pull the image from your cache using the Docker command with the registry login server name, repository name, and desired tag.
+
+ - For example, to pull the image from the repository `hello-world` with its desired tag `latest` for a given registry login server `myregistry.azurecr.io`.
+
+ ```azurecli-interactive
+ docker pull myregistry.azurecr.io/hello-world:latest
+ ```
+
+## Clean up the resources
+
+1. Run the [az acr cache list][az-acr-cache-list] command to list the cache rules in the Azure Container Registry.
+
+ - For example, to list the Cache rules for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+    az acr cache list -r MyRegistry
+ ```
+
+2. Run the [az acr cache delete][az-acr-cache-delete] command to delete a cache rule.
+
+ - For example, to delete a Cache rule for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+    az acr cache delete -r MyRegistry -n MyRule
+ ```
+
+## Next steps
+
+* To enable Artifact Cache with authentication by using the Azure CLI, advance to the next article: [Enable Artifact Cache with authentication - Azure CLI](tutorial-enable-artifact-cache-auth-cli.md).
+
+<!-- LINKS - External -->
+[Install Azure CLI]: /cli/azure/install-azure-cli
+[Azure Cloud Shell]: /azure/cloud-shell/quickstart
+[az-acr-cache-create]:/cli/azure/acr/cache#az-acr-cache-create
+[az-acr-cache-show]:/cli/azure/acr/cache#az-acr-cache-show
+[az-acr-cache-list]:/cli/azure/acr/cache#az-acr-cache-list
+[az-acr-cache-delete]:/cli/azure/acr/cache#az-acr-cache-delete
container-registry Tutorial Enable Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-artifact-cache.md
+
+ Title: Enable Artifact Cache - Azure portal
+description: Learn how to enable Registry Cache in your Azure Container Registry using Azure portal.
+ Last updated : 04/19/2022+++
+# Enable Artifact Cache - Azure portal
+
+This article is part two of a six-part tutorial series. [Part one](tutorial-artifact-cache.md) provides an overview of Artifact Cache, its features, benefits, and limitations. This article walks you through the steps of enabling Artifact Cache by using the Azure portal without authentication.
+
+## Prerequisites
+
+* Sign in to the [Azure portal](https://ms.portal.azure.com/)
+
+## Configure Artifact Cache - Azure portal
+
+Follow these steps to create a cache rule in the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your Azure Container Registry.
+
+2. In the side **Menu**, under **Services**, select **Cache**.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/cache-preview-01.png" alt-text="Screenshot for Registry cache.":::
++
+3. Select **Create Rule**.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/cache-blade-02.png" alt-text="Screenshot for Create Rule.":::
++
+4. A window for **New cache rule** appears.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/new-cache-rule-03.png" alt-text="Screenshot for new Cache Rule.":::
++
+5. Enter the **Rule name**.
+
+6. Select **Source** Registry from the dropdown menu.
+
+7. Enter the **Repository Path** to the artifacts you want to cache.
+
+8. You can skip **Authentication** if you aren't accessing a private repository or performing an authenticated pull.
+
+9. Under **Destination**, enter the name of the **New ACR Repository Namespace** to store cached artifacts.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/save-cache-rule-04.png" alt-text="Screenshot to save Cache Rule.":::
++
+10. Select **Save**.
+
+11. Pull the image from your cache using the Docker command with the registry login server name, repository name, and desired tag.
+
+ - For example, to pull the image from the repository `hello-world` with its desired tag `latest` for a given registry login server `myregistry.azurecr.io`.
+
+ ```azurecli-interactive
+ docker pull myregistry.azurecr.io/hello-world:latest
+ ```
+
+## Next steps
+
+* Advance to the [next article](tutorial-enable-artifact-cache-cli.md) to enable the Artifact Cache using Azure CLI.
+
+<!-- LINKS - External -->
+[create-and-store-keyvault-credentials]:../key-vault/secrets/quick-create-portal.md
container-registry Tutorial Troubleshoot Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-troubleshoot-artifact-cache.md
+
+ Title: Troubleshoot Artifact Cache
+description: Learn how to troubleshoot the most common problems for a registry enabled with the Artifact Cache feature.
+ Last updated : 04/19/2022+++
+# Troubleshoot guide for Artifact Cache
+
+This article is part six in a six-part tutorial series. [Part one](tutorial-artifact-cache.md) provides an overview of Artifact Cache, its features, benefits, and limitations. In [part two](tutorial-enable-artifact-cache.md), you learn how to enable Artifact Cache feature by using the Azure portal. In [part three](tutorial-enable-artifact-cache-cli.md), you learn how to enable Artifact Cache feature by using the Azure CLI. In [part four](tutorial-enable-artifact-cache-auth.md), you learn how to enable Artifact Cache feature with authentication by using Azure portal. In [part five](tutorial-enable-artifact-cache-auth-cli.md), you learn how to enable Artifact Cache feature with authentication by using Azure CLI.
+
+This article helps you troubleshoot problems you might encounter when attempting to use Artifact Cache.
+
+## Symptoms and Causes
+
+You may encounter one or more of the following issues:
+
+- Cached images don't appear in a live repository
+ - [Cached images don't appear in a live repository](tutorial-troubleshoot-artifact-cache.md#cached-images-dont-appear-in-a-live-repository)
+
+- Credential set has an unhealthy status
+ - [Unhealthy Credential Set](tutorial-troubleshoot-artifact-cache.md#unhealthy-credential-set)
+
+- Unable to create a cache rule
+ - [Cache rule Limit](tutorial-troubleshoot-artifact-cache.md#cache-rule-limit)
+
+## Potential Solutions
+
+## Cached images don't appear in a live repository
+
+If cached images aren't showing up in your repository in ACR, verify the repository path. An incorrect repository path prevents the cached images from appearing in your repository in ACR.
+
+- The Login server for Docker Hub is `docker.io`.
+- The Login server for Microsoft Artifact Registry is `mcr.microsoft.com`.
+
+The Azure portal autofills these fields for you. However, many Docker repositories begin with `library/` in their path. For example, in order to cache the `hello-world` repository, the correct repository path is `docker.io/library/hello-world`.
+
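+For example, here's a minimal sketch of a cache rule that uses the correct Docker Hub repository path for `hello-world` (the registry and rule names are placeholders):
+
+```azurecli-interactive
+az acr cache create -r MyRegistry -n HelloWorldRule -s docker.io/library/hello-world -t hello-world
+```
+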
+## Unhealthy Credential Set
+
+Credential sets are a set of Key Vault secrets that operate as a username and password for private repositories. Unhealthy credential sets are often a result of these secrets no longer being valid. In the Azure portal, you can select the credential set to edit and apply changes.
+
+- Verify the secrets in Azure Key Vault haven't expired.
+- Verify the secrets in Azure Key Vault are valid.
+- Verify the access to the Azure Key Vault is assigned.
+
+To assign the access to Azure Key Vault:
+
+```azurecli-interactive
+az keyvault set-policy --name myKeyVaultName --object-id myObjID --secret-permissions get
+```
+
+Learn more about [Key Vaults][create-and-store-keyvault-credentials].
+Learn more about [Assigning the access to Azure Key Vault][az-keyvault-set-policy].
+
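+For example, here's a minimal sketch for checking whether a secret referenced by a credential set has an expiration date that has passed (the vault and secret names are placeholders):
+
+```azurecli-interactive
+az keyvault secret show --vault-name MyKeyVault --name usernamesecret --query "attributes.expires"
+```
+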
+## Unable to create a Cache rule
+
+### Cache rule Limit
+
+If you're facing issues while creating a cache rule, verify whether you have more than 1,000 cache rules created.
+
+We recommend deleting any unwanted cache rules to avoid hitting the limit.
+
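+For example, here's a minimal sketch for counting the cache rules in a registry with a JMESPath query (the registry name is a placeholder):
+
+```azurecli-interactive
+az acr cache list -r MyRegistry --query "length(@)"
+```
+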
+Learn more about the [Cache Terminology](tutorial-artifact-cache.md#terminology)
+
+## Upstream support
+
+Artifact Cache currently supports the following upstream registries:
+
+| Upstream registries | Support | Availability |
+| | | -- |
+| Docker | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
+| ECR Public | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
+| GitHub Container Registry | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| Nvidia | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
+| Quay | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| registry.k8s.io | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
++
+<!-- LINKS - External -->
+[create-and-store-keyvault-credentials]:../key-vault/secrets/quick-create-portal.md
+[az-keyvault-set-policy]: ../key-vault/general/assign-access-policy.md#assign-an-access-policy
cosmos-db Choose Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/choose-service.md
+
+ Title: Differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra
+description: Learn about the differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra. You also learn the benefits of each of these services and when to choose them.
++++++ Last updated : 09/05/2023++
+# Differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra
+
+In this article, you learn the differences between [Azure Managed Instance for Apache Cassandra](../../managed-instance-apache-cassandr) and Azure Cosmos DB for Apache Cassandra. This article provides recommendations on how to choose between the two services, or when to host your own Apache Cassandra environment.
+
+## Key differences
+
+Azure Managed Instance for Apache Cassandra provides automated deployment, scaling, and operations to maintain the node health for open-source Apache Cassandra instances in Azure. It also provides the capability to scale out the capacity of existing on-premises or cloud self-hosted Apache Cassandra clusters. It scales out by adding managed Cassandra datacenters to the existing cluster ring.
+
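+For example, here's a minimal Azure CLI sketch of scaling out an existing managed cluster by adding a datacenter (the cluster, datacenter, and subnet values are placeholders for illustration):
+
+```azurecli-interactive
+az managed-cassandra datacenter create \
+  --resource-group MyResourceGroup \
+  --cluster-name MyManagedCluster \
+  --data-center-name dc2 \
+  --data-center-location eastus2 \
+  --delegated-subnet-id "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Network/virtualNetworks/MyVnet/subnets/MySubnet" \
+  --node-count 3
+```
+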
+The RU-based [Azure Cosmos DB for Apache Cassandra](introduction.md) in Azure Cosmos DB is a compatibility layer over Microsoft's globally distributed cloud-native database service [Azure Cosmos DB](../index.yml).
+
+## How to choose?
+
+The following table shows the common scenarios, workload requirements, and aspirations where each of these deployment approaches fits:
+
+| |Self-hosted Apache Cassandra on-premises or in Azure | Azure Managed Instance for Apache Cassandra | Azure Cosmos DB for Apache Cassandra |
+|||||
+|**Deployment type**| You have a highly customized Apache Cassandra deployment with custom patches or snitches. | You have a standard open-source Apache Cassandra deployment without any custom code. | You are content with a platform that is not Apache Cassandra underneath but is compliant with all open-source client drivers at a [wire protocol](../cassandra-support.md) level. |
+|**Operational overhead**| You have existing Cassandra experts who can deploy, configure, and maintain your clusters. | You want to lower the operational overhead for your Apache Cassandra node health, but still maintain control over the platform level configurations such as replication and consistency. | You want to eliminate the operational overhead by using a fully managed platform-as-a-service database in the cloud. |
+|**Production Support**| You handle live incidents and outages yourself, including contacting relevant infrastructure teams for compute, networking, storage, etc. | You want a first-party managed service experience that will act as a one-stop shop for supporting live incidents and outages. | You want a first-party managed service experience that will act as a one-stop shop for live incidents and outages. |
+|**Software Support**| You handle all patches, and ensure that software is upgraded before end of life. | You want a first-party managed service experience that offers Cassandra software-level support beyond end of life. | You want a first-party managed service experience where software-level support is completely abstracted. |
+|**Operating system requirements**| You have a requirement to maintain custom or golden Virtual Machine operating system images. | You can use vanilla images but want to have control over SKUs, memory, disks, and IOPS. | You want capacity provisioning to be simplified and expressed as a single normalized metric, with a one-to-one relationship to throughput, such as [request units](../request-units.md) in Azure Cosmos DB. |
+|**Pricing model**| You want to use management software such as Datastax tooling and are happy with licensing costs. | You prefer pure open-source licensing and VM instance-based pricing. | You want to use cloud-native pricing, which includes [autoscale](scale-account-throughput.md#use-autoscale) and [serverless](../serverless.md) offers. |
+|**Analytics**| You want full control over the provisioning of analytical pipelines regardless of the overhead to build and maintain them. | You want to use cloud-based analytical services like Azure Databricks. | You want near real-time hybrid transactional analytics built into the platform with [Azure Synapse Link for Azure Cosmos DB](../synapse-link.md). |
+|**Workload pattern**| Your workload is fairly steady-state and you don't require scaling nodes in the cluster frequently. | Your workload is volatile and you need to be able to scale up or scale down nodes in a data center or add/remove data centers easily. | Your workload is often volatile and you need to be able to scale up or scale down quickly and at a significant volume. |
+|**SLAs**| You are happy with your processes for maintaining SLAs on consistency, throughput, availability, and disaster recovery. | You are happy with your processes for maintaining SLAs on consistency and throughput, but want an [SLA for availability](https://azure.microsoft.com/support/legal/sl#backup-and-restore). | You want [fully comprehensive SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_4/) on consistency, throughput, availability, and disaster recovery. |
+|**Replication and consistency**| You need to be able to configure the full array of [tunable consistency settings](https://cassandra.apache.org/doc/latest/cassandr)) |
+|**Data model**| You are migrating workloads which have a mixture of uniform distribution of data, and skewed data (with respect to both storage and throughput across partition keys) requiring flexibility on vertical scale of nodes. | You are migrating workloads which have a mixture of uniform distribution of data, and skewed data (with respect to both storage and throughput across partition keys) requiring flexibility on vertical scale of nodes. | You are building a new application, or your existing application has a relatively uniform distribution of data with respect to both storage and throughput across partition keys. |
+
+## Next steps
+
+* [Build a Java app to manage Azure Cosmos DB for Apache Cassandra data](manage-data-java-v4-sdk.md)
+* [Create an Azure Managed instance for Apache Cassandra cluster in Azure portal](../../managed-instance-apache-cassandr)
cosmos-db Hierarchical Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md
Container container = await database.CreateContainerIfNotExistsAsync(containerPr
#### [Java SDK v4](#tab/java-v4)
-```java
+#### [JavaScript SDK v4](#tab/javascript-v4)
+
+```javascript
// List of partition keys, in hierarchical order. You can have up to three levels of keys. List<String> subpartitionKeyPaths = new ArrayList<String>(); subpartitionKeyPaths.add("/TenantId");
item.setSessionId("0000-11-0000-1111");
Mono<CosmosItemResponse<UserSession>> createResponse = container.createItem(item); ```
+##### [JavaScript SDK v4](#tab/javascript-v4)
+
+```javascript
+ // Create a new item
+const item: UserSession = {
+ Id: 'f7da01b0-090b-41d2-8416-dacae09fbb4a',
+ TenantId: 'Microsoft',
+ UserId: '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b',
+ SessionId: '0000-11-0000-1111'
+}
+
+// Pass in the object, and the SDK automatically extracts the full partition key path
+const { resource: document } = await container.items.create(item);
+
+```
#### Manually specify the path
PartitionKey partitionKey = new PartitionKeyBuilder()
Mono<CosmosItemResponse<UserSession>> createResponse = container.createItem(item, partitionKey); ```
+##### [JavaScript SDK v4](#tab/javascript-v4)
+
+```javascript
+const item: UserSession = {
+ Id: 'f7da01b0-090b-41d2-8416-dacae09fbb4a',
+ TenantId: 'Microsoft',
+ UserId: '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b',
+ SessionId: '0000-11-0000-1111'
+}
+
+// Specify the full partition key path when creating the item
+const partitionKey: PartitionKey = new PartitionKeyBuilder()
+ .addValue(item.TenantId)
+ .addValue(item.UserId)
+ .addValue(item.SessionId)
+ .build();
+
+// Create the item in the container
+const { resource: document } = await container.items.create(item, partitionKey);
+```
### Perform a key/value lookup (point read) of an item
PartitionKey partitionKey = new PartitionKeyBuilder()
// Perform a point read Mono<CosmosItemResponse<UserSession>> readResponse = container.readItem(id, partitionKey, UserSession.class); ```---
-##### [JavaScript SDK v4](#tab/javascript-v4)
+##### [JavaScript SDK v4](#tab/javascript-v4)
```javascript // Store the unique identifier
-String id = "f7da01b0-090b-41d2-8416-dacae09fbb4a";
+const id = "f7da01b0-090b-41d2-8416-dacae09fbb4a";
// Build the full partition key path
-PartitionKey partitionKey = new PartitionKeyBuilder()
- .add("Microsoft") //TenantId
- .add("8411f20f-be3e-416a-a3e7-dcd5a3c1f28b") //UserId
- .add("0000-11-0000-1111") //SessionId
+const partitionKey: PartitionKey = new PartitionKeyBuilder()
+ .addValue(item.TenantId)
+ .addValue(item.UserId)
+ .addValue(item.SessionId)
.build();
-
+ // Perform a point read
-Mono<CosmosItemResponse<UserSession>> readResponse = container.readItem(id, partitionKey, UserSession.class);
+const { resource: document } = await container.item(id, partitionKey).read();
```- ### Run a query
pagedResponse.byPage().flatMap(fluxResponse -> {
return Flux.empty(); }).blockLast(); ```
+##### [JavaScript SDK v4](#tab/javascript-v4)
+
+```javascript
+// Define a single-partition query that specifies the full partition key path
+const query: string = "SELECT * FROM c WHERE c.TenantId = 'Microsoft' AND c.UserId = '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b' AND c.SessionId = '0000-11-0000-1111'";
+
+// Retrieve an iterator for the result set
+const queryIterator = container.items.query(query);
+
+while (queryIterator.hasMoreResults()) {
+ const { resources: results } = await queryIterator.fetchNext();
+ // Process result
+}
+```
pagedResponse.byPage().flatMap(fluxResponse -> {
}).blockLast(); ```
+##### [JavaScript SDK v4](#tab/javascript-v4)
+
+```javascript
+// Define a targeted cross-partition query specifying prefix path[s]
+const query: string = "SELECT * FROM c WHERE c.TenantId = 'Microsoft'";
+
+// Retrieve an iterator for the result set
+const queryIterator = container.items.query(query);
+
+while (queryIterator.hasMoreResults()) {
+ const { resources: results } = await queryIterator.fetchNext();
+ // Process result
+}
+```
## Limitations and known issues
cosmos-db Choose Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/choose-model.md
Here are a few key factors to help you decide which is the right option for you.
[**Get started with Azure Cosmos DB for MongoDB RU**](./quickstart-python.md)
+> [!TIP]
+> Want to try the Azure Cosmos DB for MongoDB RU with no commitment? Create an Azure Cosmos DB account using [Try Azure Cosmos DB](../try-free.md) for free.
+ ### Choose vCore-based if - You're migrating (lift & shift) an existing MongoDB workload or building a new MongoDB application.-- Your workload has more point reads (fetching a single item by its ID and shard key value) and few long-running queries and complex aggregation pipeline operations.
+- Your workload has more long-running queries, complex aggregation pipelines, distributed transactions, joins, etc.
- You prefer high-capacity vertical and horizontal scaling with familiar vCore-based cluster tiers such as M30, M40, M50 and more. - You're running applications requiring 99.995% availability. [**Get started with Azure Cosmos DB for MongoDB vCore**](./vcore/quickstart-portal.md)
-> [!TIP]
-> Want to try the Azure Cosmos DB for MongoDB with no commitment? Create an Azure Cosmos DB account using [Try Azure Cosmos DB](../try-free.md) for free.
- ## Resource and billing differences between the options The RU and vCore services have different architectures with important billing differences.
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/introduction.md
+
+ Title: Introduction/Overview
+
+description: Use Azure Cosmos DB for MongoDB to store and query massive amounts of data using popular open-source drivers.
+++++ Last updated : 09/12/2023++
+# What is Azure Cosmos DB for MongoDB?
++
+[Azure Cosmos DB](../introduction.md) is a fully managed NoSQL and relational database for modern app development.
+
+Azure Cosmos DB for MongoDB makes it easy to use Azure Cosmos DB as if it were a MongoDB database. You can use your existing MongoDB skills and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the connection string for your account using the API for MongoDB.
+
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RWXr4T]
+
+## Cosmos DB for MongoDB benefits
+
+Cosmos DB for MongoDB has numerous benefits compared to other MongoDB service offerings such as MongoDB Atlas:
+
+### Request Unit (RU) architecture
+
+[A fully managed MongoDB-compatible service](./ru/introduction.md) with flexible scaling using [Request Units (RUs)](../request-units.md). Designed for cloud-native applications.
+
+- **Instantaneous scalability**: With the [Autoscale](../provision-throughput-autoscale.md) feature, your database scales instantaneously with zero warmup period. Other MongoDB offerings such as MongoDB Atlas can take hours to scale up and up to days to scale down.
+
+- **Automatic and transparent sharding**: The API for MongoDB manages all of the infrastructure for you. This management includes sharding and optimizing the number of shards. Other MongoDB offerings, such as MongoDB Atlas, require you to specify and manage sharding to horizontally scale. This automation gives you more time to focus on developing applications for your users.
+
+- **Five 9's of availability**: [99.999% availability](../high-availability.md) is easily configurable to ensure your data is always there for you.
+
+- **Active-active database**: Unlike MongoDB Atlas, Cosmos DB for MongoDB supports active-active across multiple regions. Databases can span multiple regions, with no single point of failure for **writes and reads for the same data**. MongoDB Atlas global clusters only support active-passive deployments for writes for the same data.
+- **Cost efficient, granular, unlimited scalability**: Sharded collections can scale to any size, unlike other MongoDB service offerings. The Azure Cosmos DB platform can scale in increments as small as 1/100th of a VM due to its architecture. This scalability means that you can scale your database to the exact size you need, without paying for unused resources.
+
+- **Real time analytics (HTAP) at any scale**: Run analytics workloads against your transactional MongoDB data in real time with no effect on your database. This analysis is fast and inexpensive, due to the cloud native analytical columnar store being utilized, with no ETL pipelines. Easily create Power BI dashboards, integrate with Azure Machine Learning and Azure AI services, and bring all of your data from your MongoDB workloads into a single data warehousing solution. Learn more about the [Azure Synapse Link](../synapse-link.md).
+
+- **Serverless deployments**: Cosmos DB for MongoDB offers a [serverless capacity mode](../serverless.md). With [Serverless](../serverless.md), you're only charged per operation, and don't pay for the database when you don't use it.
+
+> [!TIP]
+> Visit [Choose your model](./choose-model.md) for an in-depth comparison of each architecture to help you choose which one is right for you.
+
+### vCore Architecture
+
+[A fully managed MongoDB-compatible service](./vcore/introduction.md) with dedicated instances for new and existing MongoDB apps. This architecture offers a familiar vCore architecture for MongoDB users, efficient scaling, and seamless integration with Azure services.
+
+- **Native Vector Search**: Seamlessly integrate your AI-based applications with your data that's stored in Azure Cosmos DB for MongoDB vCore. This integration is an all-in-one solution, unlike other vector search solutions that send your data between service integrations.
+
+- **Flat pricing with Low total cost of ownership**: Enjoy a familiar pricing model for Azure Cosmos DB for MongoDB vCore, based on compute (vCores & RAM) and storage (disks).
+
+- **Elevate querying with Text Indexes**: Enhance your data querying efficiency with our text indexing feature. Seamlessly navigate full-text searches across MongoDB collections, simplifying the process of extracting valuable insights from your documents.
+
+- **Scale with no shard key required**: Simplify your development process with high-capacity vertical scaling, all without the need for a shard key. Sharding and scaling horizontally is simple once collections grow into the terabytes.
+
+- **Free 35-day backups with point-in-time restore (PITR)**: Azure Cosmos DB for MongoDB vCore offers free 35-day backups for any amount of data.
+
+> [!TIP]
+> Visit [Choose your model](./choose-model.md) for an in-depth comparison of each architecture to help you choose which one is right for you.
+
+## How Azure Cosmos DB for MongoDB works
+
+Cosmos DB for MongoDB implements the wire protocol for MongoDB. This implementation allows transparent compatibility with MongoDB client SDKs, drivers, and tools. Azure Cosmos DB doesn't host the MongoDB database engine. Any MongoDB client driver compatible with the API version you're using should be able to connect, with no special configuration.
+
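+For example, here's a minimal Azure CLI sketch of retrieving the MongoDB connection strings for an existing API for MongoDB account (the account and resource group names are placeholders), which you can then use with any compatible driver or with `mongosh`:
+
+```azurecli-interactive
+az cosmosdb keys list \
+  --name MyMongoApiAccount \
+  --resource-group MyResourceGroup \
+  --type connection-strings
+```
+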
+> [!IMPORTANT]
+> This article describes a feature of Azure Cosmos DB that provides wire protocol compatibility with MongoDB databases. Microsoft does not run MongoDB databases to provide this service. Azure Cosmos DB is not affiliated with MongoDB, Inc.
+
+## Next steps
+
+- Read the [FAQ](faq.yml)
+- [Connect an existing MongoDB application to Azure Cosmos DB for MongoDB RU](connect-account.md)
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md
To create a vector index, use the following `createIndexes` template:
| Field | Type | Description | | | | | | `index_name` | string | Unique name of the index. |
-| `path_to_property` | string | Path to the property that contains the vector. This path can be a top-level property or a dot notation path to the property. If a dot notation path is used, then all the nonleaf elements can't be arrays. |
+| `path_to_property` | string | Path to the property that contains the vector. This path can be a top-level property or a dot notation path to the property. If a dot notation path is used, then all the nonleaf elements can't be arrays. Vectors must be a `number[]` to be indexed and returned in vector search results.|
| `kind` | string | Type of vector index to create. Currently, `vector-ivf` is the only supported index option. | | `numLists` | integer | This integer is the number of clusters that the inverted file (IVF) index uses to group the vector data. We recommend that `numLists` is set to `documentCount/1000` for up to 1 million documents and to `sqrt(documentCount)` for more than 1 million documents. Using a `numLists` value of `1` is akin to performing brute-force search, which will have limited performance. | | `similarity` | string | Similarity metric to use with the IVF index. Possible options are `COS` (cosine distance), `L2` (Euclidean distance), and `IP` (inner product). |
To create a vector index, use the following `createIndexes` template:
> > If you're experimenting with a new scenario or creating a small demo, you can start with `numLists` set to `1` to perform a brute-force search across all vectors. This should provide you with the most accurate results from the vector search, however be aware that the search speed and latency will be slow. After your initial setup, you should go ahead and tune the `numLists` parameter using the above guidance.
+> [!IMPORTANT]
+> Vectors must be a `number[]` to be indexed. Using another type, such as `double[]`, prevents the document from being indexed. Non-indexed documents won't be returned in the result of a vector search.
++ ## Examples The following examples show you how to index vectors, add documents that have vector properties, perform a vector search, and retrieve the index configuration.
This guide demonstrates how to create a vector index, add documents that have ve
> [!div class="nextstepaction"] > [Build AI apps with Azure Cosmos DB for MongoDB vCore vector search](vector-search-ai.md) * Learn more about [Azure OpenAI embeddings](../../../ai-services/openai/concepts/understand-embeddings.md)
-* Learn how to [generate embeddings using Azure OpenAI](../../../ai-services/openai/tutorials/embeddings.md)
+* Learn how to [generate embeddings using Azure OpenAI](../../../ai-services/openai/tutorials/embeddings.md)
cosmos-db Change Feed Pull Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-pull-model.md
Here's an example of how to obtain the iterator in latest version mode that retu
```js const options = {
- changeFeedStartFrom: ChangeFeedStartFrom.Beginning()
+ changeFeedStartFrom: ChangeFeedStartFrom.Now()
}; const iterator = container.items.getChangeFeedIterator(options);
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
cosmos-db Quickstart Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-create-bicep.md
+
+ Title: 'Quickstart: create a cluster using Bicep'
+description: Using Bicep template for provisioning a cluster of Azure Cosmos DB for PostgreSQL
+++++ Last updated : 09/07/2023++
+# Use a Bicep file to provision an Azure Cosmos DB for PostgreSQL cluster
++
+Azure Cosmos DB for PostgreSQL is a managed service that allows you to run horizontally scalable PostgreSQL databases in the cloud. In this article, you learn how to use Bicep to provision and manage an Azure Cosmos DB for PostgreSQL cluster.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+
+## Create the Bicep file
+
+Provision an Azure Cosmos DB for PostgreSQL cluster that permits distributing data into shards, alongside a high availability (HA) node.
+
+Create a .bicep file and copy the following into it.
+
+```bicep
+@secure()
+param administratorLoginPassword string
+param location string
+param clusterName string
+param coordinatorVCores int = 4
+param coordinatorStorageQuotaInMb int = 262144
+param coordinatorServerEdition string = 'GeneralPurpose'
+param enableShardsOnCoordinator bool = true
+param nodeServerEdition string = 'MemoryOptimized'
+param nodeVCores int = 4
+param nodeStorageQuotaInMb int = 524288
+param nodeCount int
+param enableHa bool
+param coordinatorEnablePublicIpAccess bool = true
+param nodeEnablePublicIpAccess bool = true
+param availabilityZone string = '1'
+param postgresqlVersion string = '15'
+param citusVersion string = '12.0'
+
+resource serverName_resource 'Microsoft.DBforPostgreSQL/serverGroupsv2@2022-11-08' = {
+ name: clusterName
+ location: location
+ tags: {}
+ properties: {
+ administratorLoginPassword: administratorLoginPassword
+ coordinatorServerEdition: coordinatorServerEdition
+ coordinatorVCores: coordinatorVCores
+ coordinatorStorageQuotaInMb: coordinatorStorageQuotaInMb
+ enableShardsOnCoordinator: enableShardsOnCoordinator
+ nodeCount: nodeCount
+ nodeServerEdition: nodeServerEdition
+ nodeVCores: nodeVCores
+ nodeStorageQuotaInMb: nodeStorageQuotaInMb
+ enableHa: enableHa
+ coordinatorEnablePublicIpAccess: coordinatorEnablePublicIpAccess
+ nodeEnablePublicIpAccess: nodeEnablePublicIpAccess
+ citusVersion: citusVersion
+ postgresqlVersion: postgresqlVersion
+ preferredPrimaryZone: availabilityZone
+ }
+ }
+```
+
+See the [resource format](/azure/templates/microsoft.dbforpostgresql/servergroupsv2?pivots=deployment-language-bicep) reference to learn about the supported resource parameters.
+
+## Deploy the Bicep file
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az group create --name exampleRG --location eastus
+az deployment group create --resource-group exampleRG --template-file provision.bicep
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+New-AzResourceGroup -Name "exampleRG" -Location "eastus"
+New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile "./provision.bicep"
+```
++
+You're prompted to enter these values (or you can pass them inline, as shown after this list):
+
+- **clusterName**: The cluster name determines the DNS name your applications use to connect, in the form `<node-qualifier>-<clustername>.<uniqueID>.postgres.cosmos.azure.com`. The [domain name](./concepts-node-domain-name.md) postgres.cosmos.azure.com is appended to the cluster name you provide. The cluster name must contain only lowercase letters, numbers, and hyphens, and must not start or end with a hyphen.
+- **location**: Azure [region](./resources-regions.md) where the cluster and associated nodes are created.
+- **nodeCount**: Number of worker nodes in your cluster. Setting it to `0` provisions a single-node cluster, while a value of two or greater (`>= 2`) provisions a multi-node cluster.
+- **enableHa**: With this option selected, if a node goes down, the failed node's standby automatically becomes the new node. Database applications continue to access the cluster with the same connection string.
+- **administratorLoginPassword**: Enter a new password for the server admin account. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and nonalphanumeric characters (!, $, #, %, etc.).
+
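+Alternatively, you can pass the parameter values inline with the Azure CLI instead of responding to the prompts. A minimal sketch (all values are placeholders for illustration):
+
+```azurecli
+az deployment group create \
+  --resource-group exampleRG \
+  --template-file provision.bicep \
+  --parameters clusterName=mycluster location=eastus nodeCount=2 enableHa=true \
+    administratorLoginPassword='<your-password>'
+```
+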
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to validate the deployment and review the deployed resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Next step
+
+With your cluster created, it's time to connect with a PostgreSQL client.
+
+> [!div class="nextstepaction"]
+> [Connect to your cluster](quickstart-connect-psql.md)
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
description: This article shows you how you can create and manage exported Cost Management data so that you can use it in external systems. Previously updated : 08/14/2023 Last updated : 09/12/2023
For Azure Storage accounts:
Or - Any custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions. Additionally, ensure that you enable [Allow trusted Azure service access](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) to the storage account when you configure the firewall.-- The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.
- >[!NOTE]
- > Export to storage accounts behind firewall is in preview.
-
+- The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.
:::image type="content" source="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" alt-text="Screenshot showing the From any storage account option set." lightbox="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" ::: If you have a new subscription, you can't immediately use Cost Management features. It might take up to 48 hours before you can use all Cost Management features.
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 09/06/2023 Last updated : 09/13/2023 # Azure Policy built-in definitions for Data Factory
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Previously updated : 09/06/2023 Last updated : 09/13/2023
defender-for-cloud Defender For Storage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-test.md
To simulate a malware upload using an EICAR test file, follow these steps:
1. Select on **Security Alerts**. 1. Review the security alert:
+1. a. Locate the alert titled **Malicious file uploaded to storage account**.
- . Locate the alert titled **Malicious file uploaded to storage account**.
- 1. Select on the alertΓÇÖs **View full details** button to see all the related details.
+1. b. Select the alert's **View full details** button to see all the related details.
- 1. Learn more about Defender for Storage security alerts in the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md#alerts-azurestorage).
+1. Learn more about Defender for Storage security alerts in the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md#alerts-azurestorage).
## Testing sensitive data threat detection
Learn more about:
- [Threat detection and alerts](defender-for-storage-threats-alerts.md) +
defender-for-cloud Episode Thirty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty.md
Title: New Custom Recommendations for AWS and GCP | Defender for Cloud in the field
+ Title: New custom recommendations for AWS and GCP | Defender for Cloud in the field
description: Learn about new custom recommendations for AWS and GCP in Defender for Cloud Last updated 05/14/2023
-# New Custom Recommendations for AWS and GCP in Defender for Cloud
+# New custom recommendations for AWS and GCP in Defender for Cloud
**Episode description**: In this episode of Defender for Cloud in the Field, Yael Genut joins Yuri Diogenes to talk about the new custom recommendations for AWS and GCP. Yael explains the importance of creating custom recommendations in a multicloud environment and how to use Kusto Query Language to create these customizations. Yael also demonstrates the step-by-step process to create custom recommendations using this new capability and how these custom recommendations appear in the Defender for Cloud dashboard.
Last updated 05/14/2023
- [03:15](/shows/mdc-in-the-field/new-custom-recommendations#time=03m15s) - Creating a custom recommendation based on a template - [08:20](/shows/mdc-in-the-field/new-custom-recommendations#time=08m20s) - Creating a custom recommendation from scratch - [12:27](/shows/mdc-in-the-field/new-custom-recommendations#time=12m27s) - Custom recommendation update interval-- [14:30](/shows/mdc-in-the-field/new-custom-recommendations#time=14m30s) - Filtering custom recommendations in the Defender for Cloud dashboard -- [16:40](/shows/mdc-in-the-field/new-custom-recommendations#time=16m40s) - Prerequisites to use the custom recommendations feature
+- [14:30](/shows/mdc-in-the-field/new-custom-recommendations#time=14m30s) - Filtering custom recommendations in the Defender for Cloud dashboard
+- [16:40](/shows/mdc-in-the-field/new-custom-recommendations#time=16m40s) - Prerequisites to use the custom recommendations feature
## Recommended resources - Learn how to [create custom recommendations and security standards](create-custom-recommendations.md)
Last updated 05/14/2023
## Next steps > [!div class="nextstepaction"]
-> [Understanding data aware security posture capability](episode-thirty-one.md)
+> [Understanding data aware security posture capabilities](episode-thirty-one.md)
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
September 6, 2023
Containers vulnerability assessment powered by Microsoft Defender Vulnerability Management (MDVM), now supports an additional trigger for scanning images pulled from an ACR. This newly added trigger provides additional coverage for active images in addition to the existing triggers scanning images pushed to an ACR in the last 90 days and images currently running in AKS.
-This new trigger is available today for some customers, and will be available to all customers by mid-September.
+The new trigger will start rolling out today and is expected to be available to all customers by the end of September.
For more information, see [Container Vulnerability Assessment powered by MDVM](agentless-container-registry-vulnerability-assessment.md)
dev-box How To Determine Your Quota Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-determine-your-quota-usage.md
+
+ Title: How to determine your resource usage and quota
+description: Learn how to determine where the Dev Box resources for your subscription are used and if you have any spare capacity against your quota.
+++++ Last updated : 08/21/2023
+
+
+# Determine resource usage and quota
+
+To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a quota. You can see the default quota for each resource type by subscription type in [Microsoft Dev Box limits](/azure/azure-resource-manager/management/azure-subscription-service-limits#microsoft-dev-box-limits).
+
+Keeping track of how your quota of VM cores is being used across your subscriptions can be difficult. You may want to know what your current usage is, how much you have left, and in what regions you have capacity. To help you understand where and how you're using your quota, Azure provides the Usage + Quotas page.
+
+## Determine your usage and quota
+
+1. In the [Azure portal](https://portal.azure.com), go to the subscription you want to examine.
+
+1. On the Subscription page, under Settings, select **Usage + quotas**.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/subscription-overview.png" alt-text="Screenshot showing the Subscription overview left menu, with Usage and quotas highlighted." lightbox="media/how-to-determine-your-quota-usage/subscription-overview.png":::
+
+1. To view Usage + quotas information about Microsoft Dev Box, select **Dev Box**.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-dev-box.png" alt-text="Screenshot showing the Usage and quotas page, with Dev Box highlighted." lightbox="media/how-to-determine-your-quota-usage/select-dev-box.png":::
+
+1. In this example, you can see the **Quota name**, the **Region**, the **Subscription** the quota is assigned to, the **Current Usage**, and whether or not the limit is **Adjustable**.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/example-subscription.png" alt-text="Screenshot showing the Usage and quotas page, with column headings highlighted." lightbox="media/how-to-determine-your-quota-usage/example-subscription.png":::
+
+1. You can also see that the usage is grouped by level: regular, low, and no usage.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/example-subscription-groups.png" alt-text="Screenshot showing the Usage and quotas page, with VM size groups highlighted." lightbox="media/how-to-determine-your-quota-usage/example-subscription-groups.png" :::
+
+1. To view quota and usage information for specific regions, select the **Region:** filter, select the regions to display, and then select **Apply**.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-regions.png" lightbox="media/how-to-determine-your-quota-usage/select-regions.png" alt-text="Screenshot showing the Usage and quotas page, with Regions drop down highlighted.":::
+
+1. To view only the items that are using part of your quota, select the **Usage:** filter, and then select **Only items with usage**.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-items-with-usage.png" lightbox="media/how-to-determine-your-quota-usage/select-items-with-usage.png" alt-text="Screenshot showing the Usage and quotas page, with Usage drop down and Only show items with usage option highlighted.":::
+
+1. To view items that are using above a certain amount of your quota, select the **Usage:** filter, and then select **Select custom usage**.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-custom-usage-before.png" alt-text="Screenshot showing the Usage and quotas page, with Usage drop down and Select custom usage option highlighted." lightbox="media/how-to-determine-your-quota-usage/select-custom-usage-before.png" :::
+
+1. You can then set a custom usage threshold, so that only the items using more than the specified percentage of the quota are displayed.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-custom-usage.png" alt-text="Screenshot showing the Usage and quotas page, with Select custom usage option and configuration settings highlighted." lightbox="media/how-to-determine-your-quota-usage/select-custom-usage.png":::
+
+1. Select **Apply**.
+
+ Each subscription has its own Usage + quotas page, which covers all the various services in the subscription, not just Microsoft Dev Box.
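+
+If you prefer to check usage from the command line, the following is a minimal sketch that uses the Azure CLI `quota` extension. The subscription ID, region, and the `Microsoft.DevCenter` scope format are placeholders and assumptions; the portal steps above remain the documented path.
+
+```azurecli
+# Install the quota extension if it isn't already present.
+az extension add --name quota
+
+# List current quota usage for a resource provider in a region.
+# The Microsoft.DevCenter scope shown here is an assumption; adjust it for the provider you want to inspect.
+az quota usage list --scope "/subscriptions/<subscription-id>/providers/Microsoft.DevCenter/locations/eastus"
+```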
+
+## Related content
+
+- Check the default quota for each resource type by subscription type: [Microsoft Dev Box limits](/azure/azure-resource-manager/management/azure-subscription-service-limits#microsoft-dev-box-limits).
+- To learn how to request a quota increase, see [Request a quota limit increase](./how-to-request-quota-increase.md).
dev-box How To Request Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-request-quota-increase.md
+
+ Title: Request a quota limit increase for Dev Box resources
+description: Learn how to request a quota increase to expand the number of dev box resources you can use in your subscription. Request an increase for dev box cores and other resources.
+++++ Last updated : 08/22/2023++
+# Request a quota limit increase
+
+This article describes how to submit a support request for increasing the number of resources for Microsoft Dev Box in your Azure subscription.
+
+When you reach the limit for a resource in your subscription, you can request a limit increase (sometimes called a capacity increase, or a quota increase) to extend the number of resources available. The request process allows the Microsoft Dev Box team to ensure that your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments.
+
+The time it takes to increase your quota varies depending on the VM size, region, and number of resources requested. You won't have to go through the process of requesting extra capacity often, but to ensure you have the resources you require when you need them, you should:
+
+- Request capacity as far in advance as possible.
+- If possible, be flexible on the region where you're requesting capacity.
+- Recognize that capacity remains assigned for the lifetime of a subscription. When dev box resources are deleted, the capacity remains assigned to the subscription.
+- Request extra capacity only if you need more than is already assigned to your subscription.
+- Make incremental requests for VM cores rather than making large, bulk requests. Break requests for large numbers of cores into smaller requests for extra flexibility in how those requests are fulfilled.
+
+Learn more about the general [process for creating Azure support requests](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+
+## Prerequisites
+
+- To create a support request, your Azure account needs the [Owner](/azure/role-based-access-control/built-in-roles#owner), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Support Request Contributor](/azure/role-based-access-control/built-in-roles#support-request-contributor) role at the subscription level.
+- Before you create a support request for a limit increase, you need to gather additional information.
+
+## Gather information for your request
+
+Submitting a support request for additional quota is quicker if you gather the required information before you begin the request process.
+
+- **Determine your current quota usage**
+
+  For each of your subscriptions, you can check your current usage of each Dev Box resource type in each region. To determine your current usage, follow the steps in [Determine usage and quota](./how-to-determine-your-quota-usage.md).
+
+- **Determine the region for the additional quota**
+
+ Dev Box resources can exist in many regions. You can choose to deploy resources in multiple regions close to your dev box users. For more information about Azure regions, how they relate to global geographies, and which services are available in each region, see [Azure global infrastructure](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
+
+- **Choose the quota type for the additional quota**
+
+ The following Dev Box resources are limited by subscription. You can request an increase in the number of resources for each of these types.
+
+ - Dev box definitions
+ - Dev centers
+ - Network settings
+ - Pools
+ - Projects
+ - Network connections
+ - Dev Box general cores
+ - Other
+
+ When you want to increase the number of dev boxes available to your developers, you should request an increase in the number of Dev Box general cores.
+
+## Submit a new support request
+
+Follow these steps to request a limit increase:
+
+1. On the Azure portal home page, select **Support & troubleshooting**, and then select **Help + support**.
+
+ :::image type="content" source="./media/how-to-request-capacity-increase/submit-new-request.png" alt-text="Screenshot of the Azure portal home page, highlighting the Request core limit increase button." lightbox="./media/how-to-request-capacity-increase/submit-new-request.png":::
+
+1. On the **Help + support** page, select **Create a support request**.
+
+ :::image type="content" source="./media/how-to-request-capacity-increase/create-support-request.png" alt-text="Screenshot of the Help + support page, highlighting Create a support request." lightbox="./media/how-to-request-capacity-increase/create-support-request.png":::
+
+1. On the **New support request** page, enter the following information, and then select **Next**.
+
+ | Name | Value |
+ | -- | - |
+ | **Issue type** | *Service and subscription limits (quotas)* |
+ | **Subscription** | Select the subscription to which the request applies. |
+ | **Quota type** | *Microsoft Dev Box* |
+
+1. On the **Additional details** tab, in the **Problem details** section, select **Enter details**.
+
+ :::image type="content" source="media/how-to-request-capacity-increase/enter-details.png" alt-text="Screenshot of the New support request page, highlighting Enter details." lightbox="media/how-to-request-capacity-increase/enter-details.png":::
+
+1. In **Quota details**, enter the following information, and then select **Next**.
+
+ | Name | Value |
+ | -- | - |
+ | **Region** | Select the **Region** in which you want to increase your quota. |
+   | **Quota type** | When you select a **Region**, Azure displays your current usage and quota for all quota types. </br> Select the **Quota type** that you want to increase. |
+ | **New total limit** | Enter the new total limit that you want to request. |
+ | **Is it a limit decrease?** | Select **Yes** or **No**. |
+ | **Additional information** | Enter any extra information about your request. |
+
+ :::image type="content" source="media/how-to-request-capacity-increase/quota-details.png" alt-text="Screenshot of the Quota details pane." lightbox="media/how-to-request-capacity-increase/quota-details.png":::
+
+1. Select **Save and continue**.
+
+## Complete the support request
+
+To complete the support request, enter the following information:
+
+1. Complete the remainder of the **Additional details** tab for the support request using the following information:
+
+ ### Advanced diagnostic information
+
+ |Name |Value |
+ |||
+   |**Allow collection of advanced diagnostic information**|Select **Yes** or **No**.|
+
+ ### Support method
+
+ |Name |Value |
+ |||
+ |**Support plan**|Select your support plan.|
+ |**Severity**|Select the severity of the issue.|
+ |**Preferred contact method**|Select email or phone.|
+ |**Your availability**|Enter your availability.|
+ |**Support language**|Select your language preference.|
+
+ ### Contact information
+
+ |Name |Value |
+ |||
+ |**First name**|Enter your first name.|
+ |**Last name**|Enter your last name.|
+ |**Email**|Enter your contact email.|
+ |**Additional email for notification**|Enter an email for notifications.|
+ |**Phone**|Enter your contact phone number.|
+ |**Country/region**|Enter your location.|
+ |**Save contact changes for future support requests.**|Select the check box to save changes.|
+
+1. Select **Next**.
+
+1. On the **Review + create** tab, review the information, and then select **Create**.
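+
+After you select **Create**, you can track the status of the request from the Azure CLI as well as from the portal. Here's a minimal sketch that assumes the `az support` commands are available in your CLI version; the ticket name is a placeholder.
+
+```azurecli
+# List the support tickets in the current subscription.
+az support tickets list
+
+# Show the details and status of a specific ticket.
+az support tickets show --ticket-name "<your-ticket-name>"
+```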
+
+## Related content
+
+- To learn how to check your quota usage, see [Determine usage and quota](./how-to-determine-your-quota-usage.md).
+- Check the default quota for each resource type by subscription type: [Microsoft Dev Box limits](/azure/azure-resource-manager/management/azure-subscription-service-limits#microsoft-dev-box-limits)
dev-box Quickstart Create Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md
Previously updated : 04/25/2023 Last updated : 09/12/2023 #Customer intent: As a dev box user, I want to understand how to create and access a dev box so that I can start work.
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
description: Follow this tutorial to learn how to build out an end-to-end Azure Digital Twins solution that's driven by device data. Previously updated : 09/26/2022 Last updated : 09/12/2023
+# CustomerIntent: As a developer, I want to create a data flow from devices through Azure Digital Twins so that I can have a connected digital twin solution.
# Optional fields. Don't forget to remove # if you need a field. # #
In this tutorial, you will...
> * Use an [Azure Functions](../azure-functions/functions-overview.md) app to route simulated telemetry from an [IoT Hub](../iot-hub/about-iot-hub.md) device into digital twin properties > * Propagate changes through the twin graph by processing digital twin notifications with Azure Functions, endpoints, and routes [!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-h3.md)]
The function app is part of the sample project you downloaded, located in the *d
### Publish the app
-To publish the function app to Azure, you'll need to create a storage account, then create the function app in Azure, and finally publish the functions to the Azure function app. This section completes these actions using the Azure CLI.
+To publish the function app to Azure, you'll need to create a storage account, then create the function app in Azure, and finally publish the functions to the Azure function app. This section completes these actions using the Azure CLI. In each command, replace any placeholders in angle brackets with the details for your own resources.
1. Create an Azure storage account by running the following command:
To publish the function app to Azure, you'll need to create a storage account, t
1. Create an Azure function app by running the following command: ```azurecli-interactive
- az functionapp create --name <name-for-new-function-app> --storage-account <name-of-storage-account-from-previous-step> --functions-version 4 --consumption-plan-location <location> --runtime dotnet --runtime-version 6 --resource-group <resource-group>
+ az functionapp create --name <name-for-new-function-app> --storage-account <name-of-storage-account-from-previous-step> --functions-version 4 --consumption-plan-location <location> --runtime dotnet-isolated --runtime-version 7 --resource-group <resource-group>
``` 1. Next, you'll zip up the functions and publish them to your new Azure function app.
To publish the function app to Azure, you'll need to create a storage account, t
1. In the console, run the following command to publish the project locally: ```cmd/sh
- dotnet publish -c Release
+ dotnet publish -c Release -o publish
```
- This command publishes the project to the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory.
+ This command publishes the project to the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp\publish* directory.
- 1. Using your preferred method, create a zip of the published files that are located in the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory. Name the zipped folder *publish.zip*.
+ 1. Using your preferred method, create a zip of the published files that are located **inside** the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp\publish* directory. Name the zipped folder *publish.zip*.
- >[!TIP]
- >If you're using PowerShell, you can create the zip by copying the full path to that *\publish* directory and pasting it into the following command:
- >
- >```powershell
- >Compress-Archive -Path <full-path-to-publish-directory>\* -DestinationPath .\publish.zip
- >```
- > The cmdlet will create the *publish.zip* file in the directory location of your console.
+ >[!IMPORTANT]
+ >Make sure the zipped folder does not include an extra layer for the *publish* folder itself. It should only contain the contents that were inside the *publish* folder.
- Your *publish.zip* file should contain folders for *bin*, *ProcessDTRoutedData*, and *ProcessHubToDTEvents*, and there should also be a *host.json* file.
+ Here's an image of how the zip contents might look (it may change depending on your version of .NET).
:::image type="content" source="media/tutorial-end-to-end/publish-zip.png" alt-text="Screenshot of File Explorer in Windows showing the contents of the publish zip folder.":::
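+
+    If you're working in a shell that has the `zip` utility available, here's a minimal sketch of creating the archive from the contents of the *publish* folder (the utility and the relative paths are assumptions; any zip tool that archives only the folder contents works):
+
+    ```cmd/sh
+    cd digital-twins-samples-main/AdtSampleApp/SampleFunctionsApp/publish
+    zip -r ../publish.zip .
+    ```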
The first setting gives the function app the **Azure Digital Twins Data Owner**
The result of this command is outputted information about the role assignment you've created. The function app now has permissions to access data in your Azure Digital Twins instance.
-#### Configure application settings
+#### Configure application setting
The second setting creates an environment variable for the function with the URL of your Azure Digital Twins instance. The function code will use the value of this variable to refer to your instance. For more information about environment variables, see [Manage your function app](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal).
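+
+As a rough sketch of what this step looks like in the Azure CLI (the setting name `ADT_SERVICE_URL` and the resource names are placeholders and assumptions):
+
+```azurecli-interactive
+az functionapp config appsettings set --name <your-function-app-name> --resource-group <your-resource-group> --settings "ADT_SERVICE_URL=https://<your-Azure-Digital-Twins-instance-host-name>"
+```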
The output is information about the device that was created.
Next, configure the device simulator to send data to your IoT Hub instance.
-Begin by getting the IoT hub connection string with this command:
+Begin by getting the IoT hub connection string with the following command. The connection string value will start with `HostName=`.
```azurecli-interactive az iot hub connection-string show --hub-name <your-IoT-hub-name>
The *ProcessHubToDTEvents* function you published earlier listens to the IoT Hub
To see the data from the Azure Digital Twins side, switch to your other console window that's open to the *AdtSampleApp\SampleClientApp* folder. Run the *SampleClientApp* project with `dotnet run`.
+```cmd/sh
+dotnet run
+```
+ Once the project is running and accepting commands, run the following command to get the temperatures being reported by the digital twin thermostat67: ```cmd/sh
Here's a review of the scenario that you built in this tutorial.
2. Simulated device telemetry is sent to IoT Hub, where the *ProcessHubToDTEvents* Azure function is listening for telemetry events. The *ProcessHubToDTEvents* Azure function uses the information in these events to set the `Temperature` property on thermostat67 (**arrow B** in the diagram). 3. Property change events in Azure Digital Twins are routed to an Event Grid topic, where the *ProcessDTRoutedData* Azure function is listening for events. The *ProcessDTRoutedData* Azure function uses the information in these events to set the `Temperature` property on room21 (**arrow C** in the diagram). ## Clean up resources
event-grid Create View Manage Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-namespaces.md
If you already created a namespace and want to increase or decrease TUs, follow
:::image type="content" source="media/create-view-manage-namespaces/namespace-scale.png" alt-text="Screenshot showing Event Grid scale page.":::
+ > [!NOTE]
+   > For quotas and limits for resources in a namespace, including the maximum number of TUs in a namespace, see [Azure Event Grid quotas and limits](quotas-limits.md).
+ ## Delete a namespace 1. Follow instructions from the [View a namespace](#view-a-namespace) section to view all the namespaces, and select the namespace that you want to delete from the list.
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
event-hubs Event Hubs About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-about.md
Last updated 03/07/2023
-# Azure Event HubsΓÇöA big data streaming platform and event ingestion service
+# What is Azure Event Hubs? - A big data streaming platform and event ingestion service
Event Hubs is a modern big data streaming platform and event ingestion service that can seamlessly integrate with other Azure and Microsoft services, such as Stream Analytics, Power BI, and Event Grid, along with outside services like Apache Spark. The service can process millions of events per second with low latency. The data sent to an event hub (Event Hubs instance) can be transformed and stored by using any real-time analytics providers or batching or storage adapters. ## Why use Event Hubs? Data is valuable only when there's an easy way to process and get timely insights from data sources. Event Hubs provides a distributed stream processing platform with low latency and seamless integration, with data and analytics services inside and outside Azure to build your complete big data pipeline.
-Event Hubs represents the "front door" for an event pipeline, often called an **event ingestor** in solution architectures. An event ingestor is a component or service that sits between event publishers and event consumers to decouple the production of an event stream from the consumption of those events. Event Hubs provides a unified streaming platform with time retention buffer, decoupling event producers from event consumers.
+Event Hubs represents the "front door" for an event pipeline, often called an **event ingestor** in solution architectures. An event ingestor is a component or service that sits between event publishers and event consumers to decouple the production of events from the consumption of those events. Event Hubs provides a unified streaming platform with time retention buffer, decoupling event producers from event consumers.
The following sections describe key features of the Azure Event Hubs service:
The following sections describe key features of the Azure Event Hubs service:
Event Hubs is a fully managed Platform-as-a-Service (PaaS) with little configuration or management overhead, so you focus on your business solutions. [Event Hubs for Apache Kafka ecosystems](azure-event-hubs-kafka-overview.md) gives you the PaaS Kafka experience without having to manage, configure, or run your clusters. ## Event Hubs for Apache Kafka
-[Event Hubs for Apache Kafka ecosystems](azure-event-hubs-kafka-overview.md) furthermore enables [Apache Kafka (1.0 and later)](https://kafka.apache.org/) clients and applications to talk to Event Hubs. You don't need to set up, configure, and manage your own Kafka and Zookeeper clusters or use some Kafka-as-a-Service offering not native to Azure.
+Azure Event Hubs for Apache Kafka ecosystems enables [Apache Kafka (1.0 and later)](https://kafka.apache.org/) clients and applications to talk to Event Hubs. You don't need to set up, configure, and manage your own Kafka and Zookeeper clusters or use some Kafka-as-a-Service offering not native to Azure. For more information, see [Event Hubs for Apache Kafka ecosystems](azure-event-hubs-kafka-overview.md).
## Schema Registry in Azure Event Hubs
-[Azure Schema Registry](schema-registry-overview.md) in Event Hubs provides a centralized repository for managing schemas of events streaming applications. Azure Schema Registry comes free with every Event Hubs namespace, and it integrates seamlessly with you Kafka applications or Event Hubs SDK based applications.
+Schema Registry in Event Hubs provides a centralized repository for managing schemas of events streaming applications. Azure Schema Registry comes free with every Event Hubs namespace, and it integrates seamlessly with your Kafka applications or Event Hubs SDK based applications.
-It ensures data compatibility and consistency across event producers and consumers, enabling seamless schema evolution, validation, and governance, and promoting efficient data exchange and interoperability.
+It ensures data compatibility and consistency across event producers and consumers, enabling seamless schema evolution, validation, and governance, and promoting efficient data exchange and interoperability. For more information, see [Schema Registry in Azure Event Hubs](schema-registry-overview.md).
## Support for real-time and batch processing Ingest, buffer, store, and process your stream in real time to get actionable insights. Event Hubs uses a [partitioned consumer model](event-hubs-scalability.md#partitions), enabling multiple applications to process the stream concurrently and letting you control the speed of processing. Azure Event Hubs also integrates with [Azure Functions](../azure-functions/index.yml) for a serverless architecture. ## Capture event data
-[Capture](event-hubs-capture-overview.md) your data in near-real time in an [Azure Blob storage](https://azure.microsoft.com/services/storage/blobs/) or [Azure Data Lake Storage](https://azure.microsoft.com/services/data-lake-store/) for long-term retention or micro-batch processing. You can achieve this behavior on the same stream you use for deriving real-time analytics. Setting up capture of event data is fast. There are no administrative costs to run it, and it scales automatically with Event Hubs [throughput units](event-hubs-scalability.md#throughput-units) or [processing units](event-hubs-scalability.md#processing-units). Event Hubs enables you to focus on data processing rather than on data capture.
+Capture your data in near-real time in an [Azure Blob storage](https://azure.microsoft.com/services/storage/blobs/) or [Azure Data Lake Storage](https://azure.microsoft.com/services/data-lake-store/) for long-term retention or micro-batch processing. You can achieve this behavior on the same stream you use for deriving real-time analytics. Setting up capture of event data is fast. There are no administrative costs to run it, and it scales automatically with Event Hubs [throughput units](event-hubs-scalability.md#throughput-units) or [processing units](event-hubs-scalability.md#processing-units). Event Hubs enables you to focus on data processing rather than on data capture. For more information, see [Event Hubs Capture](event-hubs-capture-overview.md).
## Scalable
-With Event Hubs, you can start with data streams in megabytes, and grow to gigabytes or terabytes. The [Auto-inflate](event-hubs-auto-inflate.md) feature is one of the many options available to scale the number of throughput units or processing units to meet your usage needs.
+With Event Hubs, you can start with data streams in megabytes, and grow to gigabytes or terabytes. The [Autoinflate](event-hubs-auto-inflate.md) feature is one of the many options available to scale the number of throughput units or processing units to meet your usage needs.
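+
+For example, here's a hedged sketch of turning on auto-inflate for an existing standard namespace with the Azure CLI (resource names are placeholders; verify the parameter names against the current CLI reference):
+
+```azurecli-interactive
+az eventhubs namespace update --resource-group <resource-group> --name <namespace-name> --enable-auto-inflate true --maximum-throughput-units 10
+```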
## Rich ecosystem With a broad ecosystem available for the industry-standard AMQP 1.0 protocol and SDKs available in various languages: [.NET](https://github.com/Azure/azure-sdk-for-net/), [Java](https://github.com/Azure/azure-sdk-for-java/), [Python](https://github.com/Azure/azure-sdk-for-python/), [JavaScript](https://github.com/Azure/azure-sdk-for-js/), you can easily start processing your streams from Event Hubs. All supported client languages provide low-level integration. The ecosystem also provides you with seamless integration with Azure services like Azure Stream Analytics and Azure Functions and thus enables you to build serverless architectures. ## Event Hubs premium and dedicated
-Event Hubs **premium** caters to high-end streaming needs that require superior performance, better isolation with predictable latency and minimal interference in a managed multitenant PaaS environment. On top of all the features of the standard offering, the premium tier offers several extra features such as [dynamic partition scale up](dynamically-add-partitions.md), extended retention, and [customer-managed-keys](configure-customer-managed-key.md). For more information, see [Event Hubs Premium](event-hubs-premium-overview.md).
+Event Hubs **premium** caters to high-end streaming needs that require superior performance, better isolation with predictable latency, and minimal interference in a managed multitenant PaaS environment. On top of all the features of the standard offering, the premium tier offers several extra features such as [dynamic partition scale up](dynamically-add-partitions.md), extended retention, and [customer-managed-keys](configure-customer-managed-key.md). For more information, see [Event Hubs Premium](event-hubs-premium-overview.md).
Event Hubs **dedicated** tier offers single-tenant deployments for customers with the most demanding streaming needs. This single-tenant offering has a guaranteed 99.99% SLA and is available only on our dedicated pricing tier. An Event Hubs cluster can ingress millions of events per second with guaranteed capacity and subsecond latency. Namespaces and event hubs created within the dedicated cluster include all features of the premium offering and more. For more information, see [Event Hubs Dedicated](event-hubs-dedicated-overview.md).
Event Hubs contains the following key components.
| Component | Description | | | -- |
-| Event producers | Any entity that sends data to an event hub. Event publishers can publish events using HTTPS or AMQP 1.0 or Apache Kafka (1.0 and above). |
+| Event producers | Any entity that sends data to an event hub. Event publishers can publish events using HTTPS or AMQP 1.0 or Apache Kafka (1.0 and higher). |
| Partitions | Each consumer only reads a specific subset, or a partition, of the message stream. | | Consumer groups | A view (state, position, or offset) of an entire event hub. Consumer groups enable consuming applications to each have a separate view of the event stream. They read the stream independently at their own pace and with their own offsets. | | Event receivers | Any entity that reads event data from an event hub. All Event Hubs consumers connect via the AMQP 1.0 session. The Event Hubs service delivers events through a session as they become available. All Kafka consumers connect via the Kafka protocol 1.0 and later. |
-| [Throughput units (standard tier)](event-hubs-scalability.md#throughput-units) or [processing units (premium tier)](event-hubs-scalability.md#processing-units) or [capacity units (dedicated)](event-hubs-dedicated-overview.md) | Pre-purchased units of capacity that control the throughput capacity of Event Hubs. |
+| [Throughput units (standard tier)](event-hubs-scalability.md#throughput-units) or [processing units (premium tier)](event-hubs-scalability.md#processing-units) or [capacity units (dedicated)](event-hubs-dedicated-overview.md) | Prepurchased units of capacity that control the throughput capacity of Event Hubs. |
The following figure shows the Event Hubs stream processing architecture: ![Event Hubs](./media/event-hubs-about/event_hubs_architecture.png)
The following figure shows the Event Hubs stream processing architecture:
> [!NOTE] > For more information, see [Event Hubs features or components](event-hubs-features.md). - ## Next steps To get started using Event Hubs, see the **Send and receive events** tutorials:
To get started using Event Hubs, see the **Send and receive events** tutorials:
- [Python](event-hubs-python-get-started-send.md) - [JavaScript](event-hubs-node-get-started-send.md) - [Go](event-hubs-go-get-started-send.md)-- [C (send only)](event-hubs-c-getstarted-send.md)-- [Apache Storm (receive only)](event-hubs-storm-getstarted-receive.md)
+- [C](event-hubs-c-getstarted-send.md) (send only)
+- [Apache Storm](event-hubs-storm-getstarted-receive.md) (receive only)
To learn more about Event Hubs, see the following articles:
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
Title: 'Quickstart: Send or receive events using .NET'
-description: A quickstart to create a .NET Core application that sends events to Azure Event Hubs and then receive those events by using the Azure.Messaging.EventHubs package.
+description: A quickstart that shows you how to create a .NET Core application that sends events to and receives events from Azure Event Hubs.
Last updated 03/09/2023
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
firewall Explicit Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/explicit-proxy.md
With the Explicit proxy mode (supported for HTTP/S), you can define proxy settin
- First, upload the PAC file to a storage container that you create. Then, on the **Enable explicit proxy** page, configure the shared access signature (SAS) URL. Configure the port where the PAC is served from, and then select **Apply** at the bottom of the page.
- The SAS URL must have READ permissions so the firewall can upload the file. If changes are made to the PAC file, a new SAS URL needs to be generated and configured on the firewall **Enable explicit proxy** page.
+ The SAS URL must have READ permissions so the firewall can download the file. If changes are made to the PAC file, a new SAS URL needs to be generated and configured on the firewall **Enable explicit proxy** page.
:::image type="content" source="media/explicit-proxy/shared-access-signature.png" alt-text="Screenshot showing generate shared access signature.":::
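+
+  A minimal sketch of generating a read-only SAS URL for the PAC blob with the Azure CLI (the account, container, blob name, key, and expiry are placeholders; adjust them to your environment):
+
+  ```azurecli
+  az storage blob generate-sas \
+      --account-name <storage-account-name> \
+      --account-key <storage-account-key> \
+      --container-name <container-name> \
+      --name <pac-file-name> \
+      --permissions r \
+      --expiry 2024-12-31T00:00Z \
+      --https-only \
+      --full-uri
+  ```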
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 09/06/2023 Last updated : 09/13/2023
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 09/06/2023 Last updated : 09/13/2023
hdinsight Apache Hadoop Visual Studio Tools Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-visual-studio-tools-get-started.md
keywords: hadoop tools,hive query,visual studio,visual studio hadoop
Previously updated : 08/05/2022 Last updated : 09/13/2023 # Use Data Lake Tools for Visual Studio to connect to Azure HDInsight and run Apache Hive queries
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
Last updated 07/28/2023
Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure. If you would like to subscribe on release notes, watch releases on [this GitHub repository](https://github.com/Azure/HDInsight/releases).
+## Release date: July 25, 2023
+
+This release applies to HDInsight 4.x and 5.x. The HDInsight release will be available to all regions over several days. This release is applicable for image number **2307201242**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+
+HDInsight uses safe deployment practices, which involve gradual region deployment. It may take up to 10 business days for a new release or a new version to be available in all regions.
+
+**OS versions**
+
+* HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+* HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+* HDInsight 5.1: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+
+For workload specific versions, see
+
+* [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)
+* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
+
+## ![Icon showing Whats new.](./media/hdinsight-release-notes/whats-new.svg) What's new
+* HDInsight 5.1 is now supported with ESP clusters.
+* Upgraded versions of Ranger (2.3.0) and Oozie (5.2.1) are now part of HDInsight 5.1.
+* The Spark 3.3.1 (HDInsight 5.1) cluster comes with Hive Warehouse Connector (HWC) 2.1, which works together with the Interactive Query (HDInsight 5.1) cluster.
+
+> [!IMPORTANT]
+> This release addresses the following CVEs released by [MSRC](https://msrc.microsoft.com/update-guide/vulnerability) on August 8, 2023. The action is to update to the latest image **2307201242**. Customers are advised to plan accordingly.
+
+|CVE | Severity| CVE Title|
+|-|-|-|
+|[CVE-2023-35393](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35393)| Important|Azure Apache Hive Spoofing Vulnerability|
+|[CVE-2023-35394](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35394)| Important|Azure HDInsight Jupyter Notebook Spoofing Vulnerability|
+|[CVE-2023-36877](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36877)| Important|Azure Apache Oozie Spoofing Vulnerability|
+|[CVE-2023-36881](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36881)| Important|Azure Apache Ambari Spoofing Vulnerability|
+|[CVE-2023-38188](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-38188)| Important|Azure Apache Hadoop Spoofing Vulnerability|
+
+
+## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon
+
+* The maximum length of cluster names will be changed from 59 to 45 characters, to improve the security posture of clusters. Customers need to plan for the updates before September 30, 2023.
+* Cluster permissions for secure storage
+ * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to contact the storage account.
+* In-line quota update.
+  * Request quota increases directly from the My Quota page via a direct API call, which is faster. If the API call fails, customers need to create a new support request for the quota increase.
+* HDInsight Cluster Creation with Custom VNets.
+  * To improve the overall security posture of HDInsight clusters, users who create HDInsight clusters in custom virtual networks need the `Microsoft.Network/virtualNetworks/subnets/join/action` permission to perform create operations. Customers need to plan accordingly, because this check becomes mandatory before September 30, 2023, to avoid cluster creation failures.
+* Basic and Standard A-series VMs Retirement.
+  * On 31 August 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before 31 August 2024.
+* Non-ESP ABFS clusters [Cluster Permissions for World Readable]
+  * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change improves the cluster security posture. Customers need to plan for the updates before September 30, 2023.
+
+If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+You can always ask us about HDInsight on [Azure HDInsight - Microsoft Q&A](/answers/tags/168/azure-hdinsight)
+
+You're welcome to add more proposals and ideas and other topics here and vote for them at [HDInsight Community (azure.com)](https://feedback.azure.com/d365community/search/?q=HDInsight), and follow us for more updates on [Twitter](https://twitter.com/AzureHDInsight).
+
+ > [!NOTE]
+ > We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open source updates, Azure updates and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
+ ## Release date: May 08, 2023 This release applies to HDInsight 4.x and 5.x HDInsight release is available to all regions over several days. This release is applicable for image number **2304280205**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
For workload specific versions, see
1. **Quota Management for HDInsight**
- HDInsight currently allocates quota to customer subscriptions at a regional level. The cores allocated to customers are generic and not classified at a VM family level (For example, Dv2, Ev3, Eav4, etc.).
+ HDInsight currently allocates quota to customer subscriptions at a regional level. The cores allocated to customers are generic and not classified at a VM family level (For example, `Dv2`, `Ev3`, `Eav4`, etc.).
HDInsight introduced an improved view that provides detail and classification of quotas at the VM family level. This feature allows customers to view current and remaining quotas for a region at the VM family level. With the enhanced view, customers have richer visibility for planning quotas and a better user experience. This feature is currently available on HDInsight 4.x and 5.x for the East US EUAP region. Other regions will follow later.
For more information, see [HDInsight 5.1.0 version](./hdinsight-51-component-ver
* Upgraded Zookeeper to 3.6.3 * Kafka Streams support * Stronger delivery guarantees for the Kafka producer enabled by default.
- * log4j 1.x replaced with reload4j.
+ * `log4j` 1.x replaced with `reload4j`.
* Send a hint to the partition leader to recover the partition. * `JoinGroupRequest` and `LeaveGroupRequest` have a reason attached. * Added Broker count metrics8.
- * Mirror Maker2 improvements.
+ * Mirror `Maker2` improvements.
**HBase 2.4.11 Upgrade (Preview)** * This version has new features such as the addition of new caching mechanism types for block cache, the ability to alter `hbase:meta table` and view the `hbase:meta` table from the HBase WEB UI.
For workload specific versions, see [here.](./hdinsight-40-component-versioning.
![Icon showing what's changed with text.](media/hdinsight-release-notes/new-icon-for-changed.png)
-* HDInsight has moved away from Azul Zulu Java JDK 8 to Adoptium Temurin JDK 8, which supports high-quality TCK certified runtimes, and associated technology for use across the Java ecosystem.
+* HDInsight has moved away from Azul Zulu Java JDK 8 to `Adoptium Temurin JDK 8`, which supports high-quality TCK certified runtimes, and associated technology for use across the Java ecosystem.
-* HDInsight has migrated to reload4j. The log4j changes are applicable to
+* HDInsight has migrated to `reload4j`. The `log4j` changes are applicable to
* Apache Hadoop * Apache Zookeeper
For more information on how to check Ubuntu version of cluster, see [here](https
|[HIVE-26127](https://issues.apache.org/jira/browse/HIVE-26127)| INSERT OVERWRITE error - File Not Found| |[HIVE-24957](https://issues.apache.org/jira/browse/HIVE-24957)| Wrong results when subquery has COALESCE in correlation predicate| |[HIVE-24999](https://issues.apache.org/jira/browse/HIVE-24999)| HiveSubQueryRemoveRule generates invalid plan for IN subquery with multiple correlations|
-|[HIVE-24322](https://issues.apache.org/jira/browse/HIVE-24322)| If there's direct insert, the attempt ID has to be checked when reading the manifest fails|
+|[HIVE-24322](https://issues.apache.org/jira/browse/HIVE-24322)| If there is direct insert, the attempt ID has to be checked when reading the manifest fails|
|[HIVE-23363](https://issues.apache.org/jira/browse/HIVE-23363)| Upgrade DataNucleus dependency to 5.2 | |[HIVE-26412](https://issues.apache.org/jira/browse/HIVE-26412)| Create interface to fetch available slots and add the default| |[HIVE-26173](https://issues.apache.org/jira/browse/HIVE-26173)| Upgrade derby to 10.14.2.0|
-|[HIVE-25920](https://issues.apache.org/jira/browse/HIVE-25920)| Bump Xerce2 to 2.12.2.|
+|[HIVE-25920](https://issues.apache.org/jira/browse/HIVE-25920)| Bump `Xerce2` to 2.12.2.|
|[HIVE-26300](https://issues.apache.org/jira/browse/HIVE-26300)| Upgrade Jackson data bind version to 2.12.6.1+ to avoid CVE-2020-36518| ## Release date: 08/10/2022
https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/OMSUPGRADE14.
### Other bug fixes
-1. Yarn logΓÇÖs CLI failed to retrieve the logs if any TFile is corrupt or empty.
+1. Yarn log's CLI failed to retrieve the logs if any `TFile` is corrupt or empty.
2. Resolved invalid service principal details error while getting the OAuth token from Azure Active Directory. 3. Improved cluster creation reliability when 100+ worked nodes are configured.
https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/OMSUPGRADE14.
|Bug Fixes|Apache JIRA| ||| |Tez Build Failure: FileSaver.js not found|[TEZ-4411](https://issues.apache.org/jira/browse/TEZ-4411)|
-|Wrong FS Exception when warehouse and scratchdir are on different FS|[TEZ-4406](https://issues.apache.org/jira/browse/TEZ-4406)|
+|Wrong FS Exception when warehouse and `scratchdir` are on different FS|[TEZ-4406](https://issues.apache.org/jira/browse/TEZ-4406)|
|TezUtils.createConfFromByteString on Configuration larger than 32 MB throws com.google.protobuf.CodedInputStream exception|[TEZ-4142](https://issues.apache.org/jira/browse/TEZ-4142)| |TezUtils::createByteStringFromConf should use snappy instead of DeflaterOutputStream|[TEZ-4113](https://issues.apache.org/jira/browse/TEZ-4411)| |Update protobuf dependency to 3.x|[TEZ-4363](https://issues.apache.org/jira/browse/TEZ-4363)|
https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/OMSUPGRADE14.
### Other bug fixes
-1. Yarn logΓÇÖs CLI failed to retrieve the logs if any TFile is corrupt or empty.
+1. Yarn log's CLI failed to retrieve the logs if any `TFile` is corrupt or empty.
2. Resolved invalid service principal details error while getting the OAuth token from Azure Active Directory. 3. Improved cluster creation reliability when 100+ worked nodes are configured.
https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/OMSUPGRADE14.
|Bug Fixes|Apache JIRA| ||| |Tez Build Failure: FileSaver.js not found|[TEZ-4411](https://issues.apache.org/jira/browse/TEZ-4411)|
-|Wrong FS Exception when warehouse and scratchdir are on different FS|[TEZ-4406](https://issues.apache.org/jira/browse/TEZ-4406)|
+|Wrong FS Exception when warehouse and `scratchdir` are on different FS|[TEZ-4406](https://issues.apache.org/jira/browse/TEZ-4406)|
|TezUtils.createConfFromByteString on Configuration larger than 32 MB throws com.google.protobuf.CodedInputStream exception|[TEZ-4142](https://issues.apache.org/jira/browse/TEZ-4142)| |TezUtils::createByteStringFromConf should use snappy instead of DeflaterOutputStream|[TEZ-4113](https://issues.apache.org/jira/browse/TEZ-4411)| |Update protobuf dependency to 3.x|[TEZ-4363](https://issues.apache.org/jira/browse/TEZ-4363)|
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Bug Fixes|Apache JIRA| |||
-|TableSnapshotInputFormat should use ReadType.STREAM for scanning HFiles |[HBASE-26273](https://issues.apache.org/jira/browse/HBASE-26273)|
+|TableSnapshotInputFormat should use ReadType.STREAM for scanning `HFiles` |[HBASE-26273](https://issues.apache.org/jira/browse/HBASE-26273)|
|Add option to disable scanMetrics in TableSnapshotInputFormat |[HBASE-26330](https://issues.apache.org/jira/browse/HBASE-26330)| |Fix for ArrayIndexOutOfBoundsException when balancer is executed |[HBASE-22739](https://issues.apache.org/jira/browse/HBASE-22739)|
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Include MultiDelimitSerDe in HiveServer2 By Default|[HIVE-20619](https://issues.apache.org/jira/browse/HIVE-20619)| | Remove glassfish.jersey and mssql-jdbc classes from jdbc-standalone jar|[HIVE-22134](https://issues.apache.org/jira/browse/HIVE-22134)| | Null pointer exception on running compaction against an MM table.|[HIVE-21280](https://issues.apache.org/jira/browse/HIVE-21280)|
-| Hive query with large size via knox fails with Broken pipe Write failed|[HIVE-22231](https://issues.apache.org/jira/browse/HIVE-22231)|
+| Hive query with large size via `knox` fails with Broken pipe Write failed|[HIVE-22231](https://issues.apache.org/jira/browse/HIVE-22231)|
| Adding ability for user to set bind user|[HIVE-21009](https://issues.apache.org/jira/browse/HIVE-21009)| | Implement UDF to interpret date/timestamp using its internal representation and Gregorian-Julian hybrid calendar|[HIVE-22241](https://issues.apache.org/jira/browse/HIVE-22241)| | Beeline option to show/not show execution report|[HIVE-22204](https://issues.apache.org/jira/browse/HIVE-22204)| | Tez: SplitGenerator tries to look for plan files, which doesn't exist for Tez|[HIVE-22169](https://issues.apache.org/jira/browse/HIVE-22169)|
-| Remove expensive logging from the LLAP cache hotpath|[HIVE-22168](https://issues.apache.org/jira/browse/HIVE-22168)|
+| Remove expensive logging from the LLAP cache `hotpath`|[HIVE-22168](https://issues.apache.org/jira/browse/HIVE-22168)|
| UDF: FunctionRegistry synchronizes on org.apache.hadoop.hive.ql.udf.UDFType class|[HIVE-22161](https://issues.apache.org/jira/browse/HIVE-22161)| | Prevent the creation of query routing appender if property is set to false|[HIVE-22115](https://issues.apache.org/jira/browse/HIVE-22115)| | Remove cross-query synchronization for the partition-eval|[HIVE-22106](https://issues.apache.org/jira/browse/HIVE-22106)| | Skip setting up hive scratch dir during planning|[HIVE-21182](https://issues.apache.org/jira/browse/HIVE-21182)| | Skip creating scratch dirs for tez if RPC is on|[HIVE-21171](https://issues.apache.org/jira/browse/HIVE-21171)|
-| switch Hive UDFs to use Re2J regex engine|[HIVE-19661](https://issues.apache.org/jira/browse/HIVE-19661)|
+| switch Hive UDFs to use `Re2J` regex engine|[HIVE-19661](https://issues.apache.org/jira/browse/HIVE-19661)|
| Migrated clustered tables using bucketing_version 1 on hive 3 uses bucketing_version 2 for inserts|[HIVE-22429](https://issues.apache.org/jira/browse/HIVE-22429)| | Bucketing: Bucketing version 1 is incorrectly partitioning data|[HIVE-21167](https://issues.apache.org/jira/browse/HIVE-21167)| | Adding ASF License header to the newly added file|[HIVE-22498](https://issues.apache.org/jira/browse/HIVE-22498)|
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| MultiDelimitSerDe returns wrong results in last column when the loaded file has more columns than the one is present in table schema|[HIVE-22360](https://issues.apache.org/jira/browse/HIVE-22360)| | LLAP external client - Need to reduce LlapBaseInputFormat#getSplits() footprint|[HIVE-22221](https://issues.apache.org/jira/browse/HIVE-22221)| | Column name with reserved keyword is unescaped when query including join on table with mask column is rewritten (Zoltan Matyus via Zoltan Haindrich)|[HIVE-22208](https://issues.apache.org/jira/browse/HIVE-22208)|
-|Prevent LLAP shutdown on AMReporter related RuntimeException|[HIVE-22113](https://issues.apache.org/jira/browse/HIVE-22113)|
+|Prevent LLAP shutdown on `AMReporter` related RuntimeException|[HIVE-22113](https://issues.apache.org/jira/browse/HIVE-22113)|
| LLAP status service driver may get stuck with wrong Yarn app ID|[HIVE-21866](https://issues.apache.org/jira/browse/HIVE-21866)| | OperationManager.queryIdOperation doesn't properly clean up multiple queryIds|[HIVE-22275](https://issues.apache.org/jira/browse/HIVE-22275)| | Bringing a node manager down blocks restart of LLAP service|[HIVE-22219](https://issues.apache.org/jira/browse/HIVE-22219)|
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Remove distribution management tag from pom.xml|[HIVE-19667](https://issues.apache.org/jira/browse/HIVE-19667)| | Parsing time can be high if there's deeply nested subqueries|[HIVE-21980](https://issues.apache.org/jira/browse/HIVE-21980)| | For ALTER TABLE t SET TBLPROPERTIES ('EXTERNAL'='TRUE'); `TBL_TYPE` attribute changes not reflecting for non-CAPS|[HIVE-20057](https://issues.apache.org/jira/browse/HIVE-20057)|
-| JDBC: HiveConnection shades log4j interfaces|[HIVE-18874](https://issues.apache.org/jira/browse/HIVE-18874)|
-| Update repo URLs in poms - branch 3.1 version|[HIVE-21786](https://issues.apache.org/jira/browse/HIVE-21786)|
-| DBInstall tests broken on master and branch-3.1|[HIVE-21758](https://issues.apache.org/jira/browse/HIVE-21758)|
+| JDBC: HiveConnection shades `log4j` interfaces|[HIVE-18874](https://issues.apache.org/jira/browse/HIVE-18874)|
+| Update repo URLs in `poms` - branch 3.1 version|[HIVE-21786](https://issues.apache.org/jira/browse/HIVE-21786)|
+| `DBInstall` tests broken on master and branch-3.1|[HIVE-21758](https://issues.apache.org/jira/browse/HIVE-21758)|
| Load data into a bucketed table is ignoring partitions specs and loads data into default partition|[HIVE-21564](https://issues.apache.org/jira/browse/HIVE-21564)| | Queries with join condition having timestamp or timestamp with local time zone literal throw SemanticException|[HIVE-21613](https://issues.apache.org/jira/browse/HIVE-21613)| | Analyze compute stats for column leave behind staging dir on HDFS|[HIVE-21342](https://issues.apache.org/jira/browse/HIVE-21342)|
For more information on migration, see the [migration guide.](https://spark.apac
### Kafka 2.4 is now generally available Kafka 2.4.1 is now Generally Available. For more information, please see [Kafka 2.4.1 Release Notes.](http://kafka.apache.org/24/documentation.html)
-Other features include MirrorMaker 2 availability, new metric category AtMinIsr topic partition, Improved broker start-up time by lazy on demand mmap of index files, More consumer metrics to observe user poll behavior.
+Other features include MirrorMaker 2 availability, a new metric category for AtMinIsr topic partitions, improved broker start-up time by lazy on-demand `mmap` of index files, and more consumer metrics to observe user poll behavior.
### Map Datatype in HWC is now supported in HDInsight 4.0
OSS backports that are included in Hive including HWC 1.0 (Spark 2.4) which supp
| Impacted Feature | Apache JIRA | ||--| | Metastore direct sql queries with IN/(NOT IN) should be split based on max parameters allowed by SQL DB | [HIVE-25659](https://issues.apache.org/jira/browse/HIVE-25659) |
-| Upgrade log4j 2.16.0 to 2.17.0 | [HIVE-25825](https://issues.apache.org/jira/browse/HIVE-25825) |
-| Update Flatbuffer version | [HIVE-22827](https://issues.apache.org/jira/browse/HIVE-22827) |
+| Upgrade `log4j` 2.16.0 to 2.17.0 | [HIVE-25825](https://issues.apache.org/jira/browse/HIVE-25825) |
+| Update `Flatbuffer` version | [HIVE-22827](https://issues.apache.org/jira/browse/HIVE-22827) |
| Support Map data-type natively in Arrow format | [HIVE-25553](https://issues.apache.org/jira/browse/HIVE-25553) | | LLAP external client - Handle nested values when the parent struct is null | [HIVE-25243](https://issues.apache.org/jira/browse/HIVE-25243) | | Upgrade arrow version to 0.11.0 | [HIVE-23987](https://issues.apache.org/jira/browse/HIVE-23987) |
HDInsight will no longer use Azure Virtual Machine Scale Sets to provision the c
#### Scaling of Azure HDInsight HBase workloads will now be supported only using manual scale
-Starting from March 01, 2022, HDInsight will only support manual scale for HBase, there's no impact on running clusters. New HBase clusters won't be able to enable schedule based Autoscaling. For more information on how to  manually scale your HBase cluster, refer our documentation on [Manually scaling Azure HDInsight clusters](./hdinsight-scaling-best-practices.md)
+Starting from March 01, 2022, HDInsight will only support manual scale for HBase; there's no impact on running clusters. New HBase clusters won't be able to enable schedule-based autoscaling. For more information on how to manually scale your HBase cluster, see [Manually scaling Azure HDInsight clusters](./hdinsight-scaling-best-practices.md)
## Release date: 12/27/2021
This release applies for HDInsight 4.0. HDInsight release is made available to a
The OS versions for this release are: - HDInsight 4.0: Ubuntu 18.04.5 LTS
-HDInsight 4.0 image has been updated to mitigate Log4j vulnerability as described in [MicrosoftΓÇÖs Response to CVE-2021-44228 Apache Log4j 2.](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/)
+The HDInsight 4.0 image has been updated to mitigate the `Log4j` vulnerability as described in [Microsoft's Response to CVE-2021-44228 Apache Log4j 2.](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/)
> [!Note]
-> * Any HDI 4.0 clusters created post 27 Dec 2021 00:00 UTC are created with an updated version of the image which mitigates the log4j vulnerabilities. Hence, customers need not patch/reboot these clusters.
+> * Any HDI 4.0 clusters created post 27 Dec 2021 00:00 UTC are created with an updated version of the image which mitigates the `log4j` vulnerabilities. Hence, customers need not patch/reboot these clusters.
> * For new HDInsight 4.0 clusters created between 16 Dec 2021 at 01:15 UTC and 27 Dec 2021 00:00 UTC, HDInsight 3.6 or in pinned subscriptions after 16 Dec 2021 the patch is auto applied within the hour in which the cluster is created, however customers must then reboot their nodes for the patching to complete (except for Kafka Management nodes, which are automatically rebooted). ## Release date: 07/27/2021
HDInsight 4.0 ESP Spark cluster has built-in LLAP components running on both hea
### New region - West US 3-- Jio India West
+- `Jio` India West
- Australia Central ### Component version change
Here are the back ported Apache JIRAs for this release:
| | [HIVE-23046](https://issues.apache.org/jira/browse/HIVE-23046) | | Materialized view | [HIVE-22566](https://issues.apache.org/jira/browse/HIVE-22566) |
-### Price Correction for HDInsight Dv2 Virtual Machines
+### Price Correction for HDInsight `Dv2` Virtual Machines
-A pricing error was corrected on April 25, 2021, for the Dv2 VM series on HDInsight. The pricing error resulted in a reduced charge on some customer's bills prior to April 25, and with the correction, prices now match what had been advertised on the HDInsight pricing page and the HDInsight pricing calculator. The pricing error impacted customers in the following regions who used Dv2 VMs:
+A pricing error was corrected on April 25, 2021, for the `Dv2` VM series on HDInsight. The pricing error resulted in a reduced charge on some customer's bills prior to April 25, and with the correction, prices now match what had been advertised on the HDInsight pricing page and the HDInsight pricing calculator. The pricing error impacted customers in the following regions who used `Dv2` VMs:
- Canada Central - Canada East
A pricing error was corrected on April 25, 2021, for the Dv2 VM series on HDInsi
- Southeast Asia - UAE Central
-Starting on April 25, 2021, the corrected amount for the Dv2 VMs will be on your account. Customer notifications were sent to subscription owners prior to the change. You can use the Pricing calculator, HDInsight pricing page, or the Create HDInsight cluster blade in the Azure portal to see the corrected costs for Dv2 VMs in your region.
+Starting on April 25, 2021, the corrected amount for the `Dv2` VMs will be on your account. Customer notifications were sent to subscription owners prior to the change. You can use the Pricing calculator, HDInsight pricing page, or the Create HDInsight cluster blade in the Azure portal to see the corrected costs for `Dv2` VMs in your region.
-No other action is needed from you. The price correction will only apply for usage on or after April 25, 2021 in the specified regions, and not to any usage prior to this date. To ensure you have the most performant and cost-effective solution, we recommended that you review the pricing, VCPU, and RAM for your Dv2 clusters and compare the Dv2 specifications to the Ev3 VMs to see if your solution would benefit from utilizing one of the newer VM series.
+No other action is needed from you. The price correction applies only to usage on or after April 25, 2021 in the specified regions, and not to any usage prior to this date. To ensure you have the most performant and cost-effective solution, we recommend that you review the pricing, vCPU, and RAM for your `Dv2` clusters and compare the `Dv2` specifications to the `Ev3` VMs to see if your solution would benefit from utilizing one of the newer VM series.
## Release date: 06/02/2021
HDInsight added [Spark 3.0.0](https://spark.apache.org/docs/3.0.0/) support to H
#### Kafka 2.4 preview HDInsight added [Kafka 2.4.1](http://kafka.apache.org/24/documentation.html) support to HDInsight 4.0 as a Preview feature.
-#### Eav4-series support
-HDInsight added Eav4-series support in this release.
+#### `Eav4`-series support
+HDInsight added `Eav4`-series support in this release.
#### Moving to Azure virtual machine scale sets HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
No deprecation in this release.
#### Default cluster version is changed to 4.0 The default version of HDInsight cluster is changed from 3.6 to 4.0. For more information about available versions, see [available versions](./hdinsight-component-versioning.md). Learn more about what is new in [HDInsight 4.0](./hdinsight-version-release.md).
-#### Default cluster VM sizes are changed to Ev3-series
-Default cluster VM sizes are changed from D-series to Ev3-series. This change applies to head nodes and worker nodes. To avoid this change impacting your tested workflows, specify the VM sizes that you want to use in the ARM template.
+#### Default cluster VM sizes are changed to `Ev3`-series
+Default cluster VM sizes are changed from D-series to `Ev3`-series. This change applies to head nodes and worker nodes. To avoid this change impacting your tested workflows, specify the VM sizes that you want to use in the ARM template.
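If you create clusters with the Azure CLI rather than an ARM template, you can pin sizes the same way; the sketch below assumes the `--headnode-size` and `--workernode-size` parameters of `az hdinsight create`, and all names and credentials are placeholders:

```azurecli
# Hypothetical names and credentials; pins explicit VM sizes so the
# default-size change doesn't affect the cluster layout.
az hdinsight create \
    --name myCluster \
    --resource-group myResourceGroup \
    --type hadoop \
    --http-password 'ChangeMe123!' \
    --storage-account mystorageaccount \
    --headnode-size Standard_E8_v3 \
    --workernode-size Standard_E8_v3 \
    --workernode-count 4
```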
#### Network interface resource not visible for clusters running on Azure virtual machine scale sets HDInsight is gradually migrating to Azure virtual machine scale sets. Network interfaces for virtual machines are no longer visible to customers for clusters that use Azure virtual machine scale sets.
HDInsight now uses Azure virtual machines to provision the cluster. The service
Starting from January 9, 2021, HDInsight will block all customers from creating clusters using Standard_A8, Standard_A9, Standard_A10, and Standard_A11 VM sizes. Existing clusters will run as is. Consider moving to HDInsight 4.0 to avoid potential system/support interruption. ### Behavior changes
-#### Default cluster VM size changes to Ev3-series
-Default cluster VM sizes will be changed from D-series to Ev3-series. This change applies to head nodes and worker nodes. To avoid this change impacting your tested workflows, specify the VM sizes that you want to use in the ARM template.
+#### Default cluster VM size changes to `Ev3`-series
+Default cluster VM sizes will be changed from D-series to `Ev3`-series. This change applies to head nodes and worker nodes. To avoid this change impacting your tested workflows, specify the VM sizes that you want to use in the ARM template.
#### Network interface resource not visible for clusters running on Azure virtual machine scale sets HDInsight is gradually migrating to Azure virtual machine scale sets. Network interfaces for virtual machines are no longer visible to customers for clusters that use Azure virtual machine scale sets.
A minimum 4-core VM is required for Head Node to ensure the high availability an
#### Cluster worker node provisioning change When 80% of the worker nodes are ready, the cluster enters the **operational** stage. At this stage, customers can do all the data plane operations like running scripts and jobs, but can't do any control plane operation like scaling up/down. Only deletion is supported.
-After the **operational** stage, the cluster waits another 60 minutes for the remaining 20% worker nodes. At the end of this 60 minutes, the cluster moves to the **running** stage, even if all of worker nodes are still not available. Once a cluster enters the **running** stage, you can use it as normal. Both control plan operations like scaling up/down, and data plan operations like running scripts and jobs are accepted. If some of the requested worker nodes aren't available, the cluster will be marked as partial success. You are charged for the nodes that were deployed successfully.
+After the **operational** stage, the cluster waits another 60 minutes for the remaining 20% of worker nodes. At the end of these 60 minutes, the cluster moves to the **running** stage, even if not all worker nodes are available yet. Once a cluster enters the **running** stage, you can use it as normal. Both control plane operations like scaling up/down, and data plane operations like running scripts and jobs are accepted. If some of the requested worker nodes aren't available, the cluster will be marked as partial success. You are charged for the nodes that were deployed successfully.
#### Create new service principal through HDInsight
-Previously, with cluster creation, customers can create a new service principal to access the connected ADLS Gen 1 account in Azure portal. Starting June 15 2020, customers can't create new service principal in HDInsight creation workflow, only existing service principal is supported. See [Create Service Principal and Certificates using Azure Active Directory](../active-directory/develop/howto-create-service-principal-portal.md).
+Previously, with cluster creation, customers could create a new service principal to access the connected ADLS Gen 1 account in the Azure portal. Starting June 15, 2020, new service principal creation is not possible in the HDInsight creation workflow; only existing service principals are supported. See [Create Service Principal and Certificates using Azure Active Directory](../active-directory/develop/howto-create-service-principal-portal.md).
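For reference, a minimal sketch of creating a service principal up front with the Azure CLI is shown below; the display name is a placeholder, and granting the principal access to the Data Lake Storage Gen1 account is a separate step that isn't shown:

```azurecli
# Hypothetical display name; --create-cert generates a self-signed certificate
# that can later be used to authorize the principal against ADLS Gen1.
az ad sp create-for-rbac --name myHDInsightServicePrincipal --create-cert
```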
#### Time out for script actions with cluster creation HDInsight supports running script actions with cluster creation. From this release, all script actions with cluster creation must finish within **60 minutes**, or they time out. Script actions submitted to running clusters aren't impacted. Learn more details [here](./hdinsight-hadoop-customize-cluster-linux.md#script-action-in-the-cluster-creation-process).
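As a reminder that script actions on running clusters aren't subject to this creation-time limit, here's a minimal sketch of submitting one with the Azure CLI; the names and script URI are placeholders, and it assumes the `az hdinsight script-action execute` command:

```azurecli
# Hypothetical names and script location; runs a script action on an existing cluster.
az hdinsight script-action execute \
    --cluster-name myCluster \
    --resource-group myResourceGroup \
    --name install-custom-libs \
    --script-uri https://example.com/scripts/install-custom-libs.sh \
    --roles headnode workernode \
    --persist-on-success
```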
F-series virtual machines(VMs) is a good choice to get started with HDInsight wi
#### G-series virtual machine deprecation From this release, G-series VMs are no longer offered in HDInsight.
-#### Dv1 virtual machine deprecation
-From this release, the use of Dv1 VMs with HDInsight is deprecated. Any customer request for Dv1 will be served with Dv2 automatically. There's no price difference between Dv1 and Dv2 VMs.
+#### `Dv1` virtual machine deprecation
+From this release, the use of `Dv1` VMs with HDInsight is deprecated. Any customer request for `Dv1` will be served with `Dv2` automatically. There's no price difference between `Dv1` and `Dv2` VMs.
### Behavior changes
This release provides Hadoop Common 2.7.3 and the following Apache patches:
- [HDFS-11384](https://issues.apache.org/jira/browse/HDFS-11384): Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike. -- [HDFS-11689](https://issues.apache.org/jira/browse/HDFS-11689): New exception thrown by DFSClient%isHDFSEncryptionEnabled broke hacky hive code.
+- [HDFS-11689](https://issues.apache.org/jira/browse/HDFS-11689): New exception thrown by `DFSClient%isHDFSEncryptionEnabled` broke `hacky` hive code.
- [HDFS-11711](https://issues.apache.org/jira/browse/HDFS-11711): DN shouldn't delete the block On "Too many open files" Exception. - [HDFS-12347](https://issues.apache.org/jira/browse/HDFS-12347): TestBalancerRPCDelay\#testBalancerRPCDelay fails frequently. -- [HDFS-12781](https://issues.apache.org/jira/browse/HDFS-12781): After Datanode down, In Namenode UI Datanode tab is throwing warning message.
+- [HDFS-12781](https://issues.apache.org/jira/browse/HDFS-12781): After `Datanode` down, In `Namenode` UI `Datanode` tab is throwing warning message.
-- [HDFS-13054](https://issues.apache.org/jira/browse/HDFS-13054): Handling PathIsNotEmptyDirectoryException in DFSClient delete call.
+- [HDFS-13054](https://issues.apache.org/jira/browse/HDFS-13054): Handling PathIsNotEmptyDirectoryException in `DFSClient` delete call.
- [HDFS-13120](https://issues.apache.org/jira/browse/HDFS-13120): Snapshot diff could be corrupted after concat. -- [YARN-3742](https://issues.apache.org/jira/browse/YARN-3742): YARN RM will shut down if ZKClient creation times out.
+- [YARN-3742](https://issues.apache.org/jira/browse/YARN-3742): YARN RM will shut down if `ZKClient` creation times out.
- [YARN-6061](https://issues.apache.org/jira/browse/YARN-6061): Add an UncaughtExceptionHandler for critical threads in RM.
This release provides Hadoop Common 2.7.3 and the following Apache patches:
HDP 2.6.4 provided Hadoop Common 2.7.3 and the following Apache patches: -- [HADOOP-13700](https://issues.apache.org/jira/browse/HADOOP-13700): Remove unthrown IOException from TrashPolicy\#initialize and \#getInstance signatures.
+- [HADOOP-13700](https://issues.apache.org/jira/browse/HADOOP-13700): Remove unthrown `IOException` from TrashPolicy\#initialize and \#getInstance signatures.
- [HADOOP-13709](https://issues.apache.org/jira/browse/HADOOP-13709): Ability to clean up subprocesses spawned by Shell when the process exits. -- [HADOOP-14059](https://issues.apache.org/jira/browse/HADOOP-14059): typo in s3a rename(self, subdir) error message.
+- [HADOOP-14059](https://issues.apache.org/jira/browse/HADOOP-14059): typo in `s3a` rename(self, subdir) error message.
- [HADOOP-14542](https://issues.apache.org/jira/browse/HADOOP-14542): Add IOUtils.cleanupWithLogger that accepts slf4j logger API.
This release provides HBase 1.1.2 and the following Apache patches.
- [HBASE-14473](https://issues.apache.org/jira/browse/HBASE-14473): Compute region locality in parallel. -- [HBASE-14517](https://issues.apache.org/jira/browse/HBASE-14517): Show regionserver's version in master status page.
+- [HBASE-14517](https://issues.apache.org/jira/browse/HBASE-14517): Show `regionserver's` version in master status page.
- [HBASE-14606](https://issues.apache.org/jira/browse/HBASE-14606): TestSecureLoadIncrementalHFiles tests timed out in trunk build on apache.
This release provides HBase 1.1.2 and the following Apache patches.
- [HBASE-15515](https://issues.apache.org/jira/browse/HBASE-15515): Improve LocalityBasedCandidateGenerator in Balancer. -- [HBASE-15615](https://issues.apache.org/jira/browse/HBASE-15615): Wrong sleep time when RegionServerCallable need retry.
+- [HBASE-15615](https://issues.apache.org/jira/browse/HBASE-15615): Wrong sleep time when `RegionServerCallable` need retry.
- [HBASE-16135](https://issues.apache.org/jira/browse/HBASE-16135): PeerClusterZnode under rs of removed peer may never be deleted. - [HBASE-16570](https://issues.apache.org/jira/browse/HBASE-16570): Compute region locality in parallel at startup. -- [HBASE-16810](https://issues.apache.org/jira/browse/HBASE-16810): HBase Balancer throws ArrayIndexOutOfBoundsException when regionservers are in /hbase/draining znode and unloaded.
+- [HBASE-16810](https://issues.apache.org/jira/browse/HBASE-16810): HBase Balancer throws ArrayIndexOutOfBoundsException when `regionservers` are in /hbase/draining znode and unloaded.
- [HBASE-16852](https://issues.apache.org/jira/browse/HBASE-16852): TestDefaultCompactSelection failed on branch-1.3.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-17419*](https://issues.apache.org/jira/browse/HIVE-17419): ANALYZE TABLE...COMPUTE STATISTICS FOR COLUMNS command shows computed stats for masked tables. -- [*HIVE-17530*](https://issues.apache.org/jira/browse/HIVE-17530): ClassCastException when converting uniontype.
+- [*HIVE-17530*](https://issues.apache.org/jira/browse/HIVE-17530): ClassCastException when converting `uniontype`.
- [*HIVE-17621*](https://issues.apache.org/jira/browse/HIVE-17621): Hive-site settings are ignored during HCatInputFormat split-calculation. -- [*HIVE-17636*](https://issues.apache.org/jira/browse/HIVE-17636): Add multiple\_agg.q test for blobstores.
+- [*HIVE-17636*](https://issues.apache.org/jira/browse/HIVE-17636): Add multiple\_agg.q test for `blobstores`.
- [*HIVE-17729*](https://issues.apache.org/jira/browse/HIVE-17729): Add Database and Explain related blobstore tests. -- [*HIVE-17731*](https://issues.apache.org/jira/browse/HIVE-17731): add a backward compat option for external users to HIVE-11985.
+- [*HIVE-17731*](https://issues.apache.org/jira/browse/HIVE-17731): add a backward `compat` option for external users to HIVE-11985.
- [*HIVE-17803*](https://issues.apache.org/jira/browse/HIVE-17803): With Pig multi-query, 2 HCatStorers writing to the same table will trample each other's outputs. -- [*HIVE-17829*](https://issues.apache.org/jira/browse/HIVE-17829): ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in Hive2.
+- [*HIVE-17829*](https://issues.apache.org/jira/browse/HIVE-17829): ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in `Hive2`.
- [*HIVE-17845*](https://issues.apache.org/jira/browse/HIVE-17845): insert fails if target table columns are not lowercase.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18353*](https://issues.apache.org/jira/browse/HIVE-18353): CompactorMR should call jobclient.close() to trigger cleanup. -- [*HIVE-18390*](https://issues.apache.org/jira/browse/HIVE-18390): IndexOutOfBoundsException when query a partitioned view in ColumnPruner.
+- [*HIVE-18390*](https://issues.apache.org/jira/browse/HIVE-18390): IndexOutOfBoundsException when querying a partitioned view in ColumnPruner.
- [*HIVE-18429*](https://issues.apache.org/jira/browse/HIVE-18429): Compaction should handle a case when it produces no output.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-16828*](https://issues.apache.org/jira/browse/HIVE-16828): With CBO enabled, Query on partitioned views throws IndexOutOfBoundException. -- [*HIVE-17063*](https://issues.apache.org/jira/browse/HIVE-17063): insert overwrite partition onto an external table fail when drop partition first.
+- [*HIVE-17063*](https://issues.apache.org/jira/browse/HIVE-17063): insert overwrite partition onto an external table fails when drop partition first.
- [*HIVE-17259*](https://issues.apache.org/jira/browse/HIVE-17259): Hive JDBC does not recognize UNIONTYPE columns. -- [*HIVE-17530*](https://issues.apache.org/jira/browse/HIVE-17530): ClassCastException when converting uniontype.
+- [*HIVE-17530*](https://issues.apache.org/jira/browse/HIVE-17530): ClassCastException when converting `uniontype`.
- [*HIVE-17600*](https://issues.apache.org/jira/browse/HIVE-17600): Make OrcFile's enforceBufferSize user-settable.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-17629*](https://issues.apache.org/jira/browse/HIVE-17629): CachedStore: Have an approved/not-approved config to allow selective caching of tables/partitions and allow read while prewarming. -- [*HIVE-17636*](https://issues.apache.org/jira/browse/HIVE-17636): Add multiple\_agg.q test for blobstores.
+- [*HIVE-17636*](https://issues.apache.org/jira/browse/HIVE-17636): Add multiple\_agg.q test for `blobstores`.
- [*HIVE-17702*](https://issues.apache.org/jira/browse/HIVE-17702): incorrect isRepeating handling in decimal reader in ORC. - [*HIVE-17729*](https://issues.apache.org/jira/browse/HIVE-17729): Add Database and Explain related blobstore tests. -- [*HIVE-17731*](https://issues.apache.org/jira/browse/HIVE-17731): add a backward compat option for external users to HIVE-11985.
+- [*HIVE-17731*](https://issues.apache.org/jira/browse/HIVE-17731): add a backward `compat` option for external users to HIVE-11985.
- [*HIVE-17803*](https://issues.apache.org/jira/browse/HIVE-17803): With Pig multi-query, 2 HCatStorers writing to the same table will trample each other's outputs.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18090*](https://issues.apache.org/jira/browse/HIVE-18090): acid heartbeat fails when metastore is connected via hadoop credential. -- [*HIVE-18189*](https://issues.apache.org/jira/browse/HIVE-18189): Order by position does not work when cbo is disabled.
+- [*HIVE-18189*](https://issues.apache.org/jira/browse/HIVE-18189): Order by position does not work when `cbo` is disabled.
- [*HIVE-18258*](https://issues.apache.org/jira/browse/HIVE-18258): Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is broken. -- [*HIVE-18269*](https://issues.apache.org/jira/browse/HIVE-18269): LLAP: Fast llap io with slow processing pipeline can lead to OOM.
+- [*HIVE-18269*](https://issues.apache.org/jira/browse/HIVE-18269): LLAP: Fast `llap` io with slow processing pipeline can lead to OOM.
- [*HIVE-18293*](https://issues.apache.org/jira/browse/HIVE-18293): Hive is failing to compact tables contained within a folder that isn't owned by identity running HiveMetaStore.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18353*](https://issues.apache.org/jira/browse/HIVE-18353): CompactorMR should call jobclient.close() to trigger cleanup. -- [*HIVE-18384*](https://issues.apache.org/jira/browse/HIVE-18384): ConcurrentModificationException in log4j2.x library.
+- [*HIVE-18384*](https://issues.apache.org/jira/browse/HIVE-18384): ConcurrentModificationException in `log4j2.x` library.
-- [*HIVE-18390*](https://issues.apache.org/jira/browse/HIVE-18390): IndexOutOfBoundsException when query a partitioned view in ColumnPruner.
+- [*HIVE-18390*](https://issues.apache.org/jira/browse/HIVE-18390): IndexOutOfBoundsException when querying a partitioned view in ColumnPruner.
- [*HIVE-18447*](https://issues.apache.org/jira/browse/HIVE-18447): JDBC: Provide a way for JDBC users to pass cookie info via connection string.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18530*](https://issues.apache.org/jira/browse/HIVE-18530): Replication should skip MM table (for now). -- [*HIVE-18548*](https://issues.apache.org/jira/browse/HIVE-18548): Fix log4j import.
+- [*HIVE-18548*](https://issues.apache.org/jira/browse/HIVE-18548): Fix `log4j` import.
- [*HIVE-18551*](https://issues.apache.org/jira/browse/HIVE-18551): Vectorization: VectorMapOperator tries to write too many vector columns for Hybrid Grace.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18587*](https://issues.apache.org/jira/browse/HIVE-18587): insert DML event may attempt to calculate a checksum on directories. -- [*HIVE-18597*](https://issues.apache.org/jira/browse/HIVE-18597): LLAP: Always package the log4j2 API jar for org.apache.log4j.
+- [*HIVE-18597*](https://issues.apache.org/jira/browse/HIVE-18597): LLAP: Always package the `log4j2` API jar for `org.apache.log4j`.
- [*HIVE-18613*](https://issues.apache.org/jira/browse/HIVE-18613): Extend JsonSerDe to support BINARY type.
This release provides Kafka 1.0.0 and the following Apache patches.
- [KAFKA-6261](https://issues.apache.org/jira/browse/KAFKA-6261): Request logging throws exception if acks=0. -- [KAFKA-6274](https://issues.apache.org/jira/browse/KAFKA-6274): Improve KTable Source state store auto-generated names.
+- [KAFKA-6274](https://issues.apache.org/jira/browse/KAFKA-6274): Improve `KTable` Source state store auto-generated names.
#### Mahout
This release provides Oozie 4.2.0 with the following Apache patches.
- [OOZIE-2787](https://issues.apache.org/jira/browse/OOZIE-2787): Oozie distributes application jar twice making the spark job fail. -- [OOZIE-2792](https://issues.apache.org/jira/browse/OOZIE-2792): Hive2 action isn't parsing Spark application ID from log file properly when Hive is on Spark.
+- [OOZIE-2792](https://issues.apache.org/jira/browse/OOZIE-2792): `Hive2` action isn't parsing Spark application ID from log file properly when Hive is on Spark.
- [OOZIE-2799](https://issues.apache.org/jira/browse/OOZIE-2799): Setting log location for spark sql on hive. -- [OOZIE-2802](https://issues.apache.org/jira/browse/OOZIE-2802): Spark action failure on Spark 2.1.0 due to duplicate sharelibs.
+- [OOZIE-2802](https://issues.apache.org/jira/browse/OOZIE-2802): Spark action failure on Spark 2.1.0 due to duplicate `sharelibs`.
- [OOZIE-2923](https://issues.apache.org/jira/browse/OOZIE-2923): Improve Spark options parsing.
This release provides Phoenix 4.7.0 and the following Apache patches:
- [PHOENIX-4525](https://issues.apache.org/jira/browse/PHOENIX-4525): Integer overflow in GroupBy execution. -- [PHOENIX-4560](https://issues.apache.org/jira/browse/PHOENIX-4560): ORDER BY with GROUP BY doesn't work if there's WHERE on pk column.
+- [PHOENIX-4560](https://issues.apache.org/jira/browse/PHOENIX-4560): ORDER BY with GROUP BY doesn't work if there's WHERE on `pk` column.
- [PHOENIX-4586](https://issues.apache.org/jira/browse/PHOENIX-4586): UPSERT SELECT doesn't take in account comparison operators for subqueries.
This release provides Pig 0.16.0 with the following Apache patches.
- [PIG-5159](https://issues.apache.org/jira/browse/PIG-5159): Fix Pig not saving grunt history. -- [PIG-5175](https://issues.apache.org/jira/browse/PIG-5175): Upgrade jruby to 1.7.26.
+- [PIG-5175](https://issues.apache.org/jira/browse/PIG-5175): Upgrade `jruby` to 1.7.26.
#### Ranger
This release provides Ranger 0.7.0 and the following Apache patches:
- [RANGER-1990](https://issues.apache.org/jira/browse/RANGER-1990): Add One-way SSL MySQL support in Ranger Admin. -- [RANGER-2006](https://issues.apache.org/jira/browse/RANGER-2006): Fix problems detected by static code analysis in ranger usersync for ldap sync source.
+- [RANGER-2006](https://issues.apache.org/jira/browse/RANGER-2006): Fix problems detected by static code analysis in ranger `usersync` for `ldap` sync source.
- [RANGER-2008](https://issues.apache.org/jira/browse/RANGER-2008): Policy evaluation is failing for multiline policy conditions.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23406](https://issues.apache.org/jira/browse/SPARK-23406): Enable stream-stream self-joins for branch-2.3. -- [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434): Spark shouldn't warn \`metadata directory\` for a HDFS file path.
+- [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434): Spark shouldn't warn \`metadata directory\` for an HDFS file path.
- [SPARK-23436](https://issues.apache.org/jira/browse/SPARK-23436): Infer partition as Date only if it can be cast to Date.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23599](https://issues.apache.org/jira/browse/SPARK-23599): Use RandomUUIDGenerator in Uuid expression. -- [SPARK-23601](https://issues.apache.org/jira/browse/SPARK-23601): Remove .md5 files from release.
+- [SPARK-23601](https://issues.apache.org/jira/browse/SPARK-23601): Remove `.md5` files from release.
- [SPARK-23608](https://issues.apache.org/jira/browse/SPARK-23608): Add synchronization in SHS between attachSparkUI and detachSparkUI functions to avoid concurrent modification issue to Jetty Handlers.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23639](https://issues.apache.org/jira/browse/SPARK-23639): Obtain token before init metastore client in SparkSQL CLI. -- [SPARK-23642](https://issues.apache.org/jira/browse/SPARK-23642): AccumulatorV2 subclass isZero scaladoc fix.
+- [SPARK-23642](https://issues.apache.org/jira/browse/SPARK-23642): AccumulatorV2 subclass isZero `scaladoc` fix.
- [SPARK-23644](https://issues.apache.org/jira/browse/SPARK-23644): Use absolute path for REST call in SHS.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23760](https://issues.apache.org/jira/browse/SPARK-23760): CodegenContext.withSubExprEliminationExprs should save/restore CSE state correctly. -- [SPARK-23769](https://issues.apache.org/jira/browse/SPARK-23769): Remove comments that unnecessarily disable Scalastyle check.
+- [SPARK-23769](https://issues.apache.org/jira/browse/SPARK-23769): Remove comments that unnecessarily disable `Scalastyle` check.
- [SPARK-23788](https://issues.apache.org/jira/browse/SPARK-23788): Fix race in StreamingQuerySuite.
This release provides Zeppelin 0.7.3 with no more Apache patches.
- [ZEPPELIN-3129](https://issues.apache.org/jira/browse/ZEPPELIN-3129): Zeppelin UI doesn't sign out in IE. -- [ZEPPELIN-903](https://issues.apache.org/jira/browse/ZEPPELIN-903): Replace CXF with Jersey2.
+- [ZEPPELIN-903](https://issues.apache.org/jira/browse/ZEPPELIN-903): Replace CXF with `Jersey2`.
#### ZooKeeper
This release provides ZooKeeper 3.4.6 and the following Apache patches:
- [ZOOKEEPER-2693](https://issues.apache.org/jira/browse/ZOOKEEPER-2693): DOS attack on wchp/wchc four letter words (4lw). -- [ZOOKEEPER-2726](https://issues.apache.org/jira/browse/ZOOKEEPER-2726): Patch for introduces potential race condition.
+- [ZOOKEEPER-2726](https://issues.apache.org/jira/browse/ZOOKEEPER-2726): Patch introduces a potential race condition.
### Fixed Common Vulnerabilities and Exposures
This section covers all Common Vulnerabilities and Exposures (CVE) that are addr
#### **CVE-2016-4970**
-| **Summary:** handler/ssl/OpenSslEngine.java in Netty 4.0.x before 4.0.37.Final and 4.1.x before 4.1.1.Final allows remote attackers to cause a denial of service (infinite loop) |
+| **Summary:** handler/ssl/OpenSslEngine.java in Netty 4.0.x before 4.0.37.Final and 4.1.x before 4.1.1.Final allows remote attackers to cause a denial of service (infinite loop) |
|--| | **Severity:** Moderate | | **Vendor:** Hortonworks |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-93946 | [ATLAS-2319](https://issues.apache.org/jira/browse/ATLAS-2319) | UI: Deleting a tag, which at 25+ position in the tag list in both Flat and Tree structure needs a refresh to remove the tag from the list. | | BUG-94618 | [YARN-5037](https://issues.apache.org/jira/browse/YARN-5037), [YARN-7274](https://issues.apache.org/jira/browse/YARN-7274) | Ability to disable elasticity at leaf queue level | | BUG-94901 | [HBASE-19285](https://issues.apache.org/jira/browse/HBASE-19285) | Add per-table latency histograms |
-| BUG-95259 | [HADOOP-15185](https://issues.apache.org/jira/browse/HADOOP-15185), [HADOOP-15186](https://issues.apache.org/jira/browse/HADOOP-15186) | Update adls connector to use the current version of ADLS SDK |
+| BUG-95259 | [HADOOP-15185](https://issues.apache.org/jira/browse/HADOOP-15185), [HADOOP-15186](https://issues.apache.org/jira/browse/HADOOP-15186) | Update `adls` connector to use the current version of ADLS SDK |
| BUG-95619 | [HIVE-18551](https://issues.apache.org/jira/browse/HIVE-18551) | Vectorization: VectorMapOperator tries to write too many vector columns for Hybrid Grace |
-| BUG-97223 | [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434) | Spark shouldn't warn \`metadata directory\` for a HDFS file path |
+| BUG-97223 | [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434) | Spark shouldn't warn \`metadata directory\` for an HDFS file path |
**Performance**
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-94345 | [HIVE-18429](https://issues.apache.org/jira/browse/HIVE-18429) | Compaction should handle a case when it produces no output | | BUG-94381 | [HADOOP-13227](https://issues.apache.org/jira/browse/HADOOP-13227), [HDFS-13054](https://issues.apache.org/jira/browse/HDFS-13054) | Handling RequestHedgingProxyProvider RetryAction order: FAIL &lt; RETRY &lt; FAILOVER\_AND\_RETRY. | | BUG-94432 | [HIVE-18353](https://issues.apache.org/jira/browse/HIVE-18353) | CompactorMR should call jobclient.close() to trigger cleanup |
-| BUG-94869 | [PHOENIX-4290](https://issues.apache.org/jira/browse/PHOENIX-4290), [PHOENIX-4373](https://issues.apache.org/jira/browse/PHOENIX-4373) | Requested row out of range for Get on HRegion for local indexed salted phoenix table. |
+| BUG-94869 | [PHOENIX-4290](https://issues.apache.org/jira/browse/PHOENIX-4290), [PHOENIX-4373](https://issues.apache.org/jira/browse/PHOENIX-4373) | Requested row out of range for Get on `HRegion` for local indexed salted phoenix table. |
| BUG-94928 | [HDFS-11078](https://issues.apache.org/jira/browse/HDFS-11078) | Fix NPE in LazyPersistFileScrubber | | BUG-94964 | [HIVE-18269](https://issues.apache.org/jira/browse/HIVE-18269), [HIVE-18318](https://issues.apache.org/jira/browse/HIVE-18318), [HIVE-18326](https://issues.apache.org/jira/browse/HIVE-18326) | Multiple LLAP fixes |
-| BUG-95669 | [HIVE-18577](https://issues.apache.org/jira/browse/HIVE-18577), [HIVE-18643](https://issues.apache.org/jira/browse/HIVE-18643) | When run update/delete query on ACID partitioned table, HS2 read all each partitions. |
+| BUG-95669 | [HIVE-18577](https://issues.apache.org/jira/browse/HIVE-18577), [HIVE-18643](https://issues.apache.org/jira/browse/HIVE-18643) | When run update/delete query on ACID partitioned table, HS2 read all each partition. |
| BUG-96390 | [HDFS-10453](https://issues.apache.org/jira/browse/HDFS-10453) | ReplicationMonitor thread could be stuck for long time due to the race between replication and delete the same file in a large cluster. | | BUG-96625 | [HIVE-16110](https://issues.apache.org/jira/browse/HIVE-16110) | Revert of "Vectorization: Support 2 Value CASE WHEN instead of fallback to VectorUDFAdaptor" | | BUG-97109 | [HIVE-16757](https://issues.apache.org/jira/browse/HIVE-16757) | Use of deprecated getRows() instead of new estimateRowCount(RelMetadataQuery...) has serious performance impact |
Fixed issues represent selected issues that were previously logged via Hortonwor
| **Bug ID** | **Apache JIRA** | **Summary** | ||-|--| | BUG-100180 | [CALCITE-2232](https://issues.apache.org/jira/browse/CALCITE-2232) | Assertion error on AggregatePullUpConstantsRule while adjusting Aggregate indices |
-| BUG-100422 | [HIVE-19085](https://issues.apache.org/jira/browse/HIVE-19085) | FastHiveDecimal abs(0) sets sign to +ve |
+| BUG-100422 | [HIVE-19085](https://issues.apache.org/jira/browse/HIVE-19085) | FastHiveDecimal abs(0) sets sign to `+ve` |
| BUG-100834 | [PHOENIX-4658](https://issues.apache.org/jira/browse/PHOENIX-4658) | IllegalStateException: requestSeek can't be called on ReversedKeyValueHeap | | BUG-102078 | [HIVE-17978](https://issues.apache.org/jira/browse/HIVE-17978) | TPCDS queries 58 and 83 generate exceptions in vectorization. | | BUG-92483 | [HIVE-17900](https://issues.apache.org/jira/browse/HIVE-17900) | analyze stats on columns triggered by Compactor generates malformed SQL with &gt; 1 partition column | | BUG-93135 | [HIVE-15874](https://issues.apache.org/jira/browse/HIVE-15874), [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Hive query returning wrong results when set hive.groupby.orderby.position.alias to true |
-| BUG-93136 | [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Order by position does not work when cbo is disabled |
+| BUG-93136 | [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Order by position does not work when `cbo` is disabled |
| BUG-93595 | [HIVE-12378](https://issues.apache.org/jira/browse/HIVE-12378), [HIVE-15883](https://issues.apache.org/jira/browse/HIVE-15883) | HBase mapped table in Hive insert fail for decimal and binary columns | | BUG-94007 | [PHOENIX-1751](https://issues.apache.org/jira/browse/PHOENIX-1751), [PHOENIX-3112](https://issues.apache.org/jira/browse/PHOENIX-3112) | Phoenix Queries returns Null values due to HBase Partial rows |
-| BUG-94144 | [HIVE-17063](https://issues.apache.org/jira/browse/HIVE-17063) | insert overwrite partition into an external table fail when drop partition first |
+| BUG-94144 | [HIVE-17063](https://issues.apache.org/jira/browse/HIVE-17063) | insert overwrite partition into an external table fails when drop partition first |
| BUG-94280 | [HIVE-12785](https://issues.apache.org/jira/browse/HIVE-12785) | View with union type and UDF to \`cast\` the struct is broken | | BUG-94505 | [PHOENIX-4525](https://issues.apache.org/jira/browse/PHOENIX-4525) | Integer overflow in GroupBy execution | | BUG-95618 | [HIVE-18506](https://issues.apache.org/jira/browse/HIVE-18506) | LlapBaseInputFormat - negative array index | | BUG-95644 | [HIVE-9152](https://issues.apache.org/jira/browse/HIVE-9152) | CombineHiveInputFormat: Hive query is failing in Tez with java.lang.IllegalArgumentException exception | | BUG-96762 | [PHOENIX-4588](https://issues.apache.org/jira/browse/PHOENIX-4588) | Clone expression also if its children have Determinism.PER\_INVOCATION | | BUG-97145 | [HIVE-12245](https://issues.apache.org/jira/browse/HIVE-12245), [HIVE-17829](https://issues.apache.org/jira/browse/HIVE-17829) | Support column comments for an HBase backed table |
-| BUG-97741 | [HIVE-18944](https://issues.apache.org/jira/browse/HIVE-18944) | Groupping sets position is set incorrectly during DPP |
-| BUG-98082 | [HIVE-18597](https://issues.apache.org/jira/browse/HIVE-18597) | LLAP: Always package the log4j2 API jar for org.apache.log4j |
+| BUG-97741 | [HIVE-18944](https://issues.apache.org/jira/browse/HIVE-18944) | Grouping sets position is set incorrectly during DPP |
+| BUG-98082 | [HIVE-18597](https://issues.apache.org/jira/browse/HIVE-18597) | LLAP: Always package the `log4j2` API jar for `org.apache.log4j` |
| BUG-99849 | N/A | Create a new table from a file wizard tries to use default database | **Security** | **Bug ID** | **Apache JIRA** | **Summary** | |||--|
-| BUG-100436 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | Knox proxy with knox-sso isn't working for ranger |
+| BUG-100436 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | `Knox` proxy with `knox-sso` isn't working for ranger |
| BUG-101038 | [SPARK-24062](https://issues.apache.org/jira/browse/SPARK-24062) | Zeppelin %Spark interpreter "Connection refused" error, "A secret key must be specified..." error in HiveThriftServer | | BUG-101359 | [ACCUMULO-4056](https://issues.apache.org/jira/browse/ACCUMULO-4056) | Update version of commons-collection to 3.2.2 when released | | BUG-54240 | [HIVE-18879](https://issues.apache.org/jira/browse/HIVE-18879) | Disallow embedded element in UDFXPathUtil needs to work if xercesImpl.jar in classpath |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-95349 | [ZOOKEEPER-1256](https://issues.apache.org/jira/browse/ZOOKEEPER-1256), [ZOOKEEPER-1901](https://issues.apache.org/jira/browse/ZOOKEEPER-1901) | Upgrade netty | | BUG-95483 | N/A | Fix for CVE-2017-15713 | | BUG-95646 | [OOZIE-3167](https://issues.apache.org/jira/browse/OOZIE-3167) | Upgrade tomcat version on Oozie 4.3 branch |
-| BUG-95823 | N/A | Knox: Upgrade Beanutils |
+| BUG-95823 | N/A | `Knox`: Upgrade `Beanutils` |
| BUG-95908 | [RANGER-1960](https://issues.apache.org/jira/browse/RANGER-1960) | HBase auth does not take table namespace into consideration for deleting snapshot | | BUG-96191 | [FALCON-2322](https://issues.apache.org/jira/browse/FALCON-2322), [FALCON-2323](https://issues.apache.org/jira/browse/FALCON-2323) | Upgrade Jackson and Spring versions to avoid security vulnerabilities | | BUG-96502 | [RANGER-1990](https://issues.apache.org/jira/browse/RANGER-1990) | Add One-way SSL MySQL support in Ranger Admin | | BUG-96712 | [FLUME-3194](https://issues.apache.org/jira/browse/FLUME-3194) | upgrade derby to the latest (1.14.1.0) version | | BUG-96713 | [FLUME-2678](https://issues.apache.org/jira/browse/FLUME-2678) | Upgrade xalan to 2.7.2 to take care of CVE-2014-0107 vulnerability |
-| BUG-96714 | [FLUME-2050](https://issues.apache.org/jira/browse/FLUME-2050) | Upgrade to log4j2 (when GA) |
+| BUG-96714 | [FLUME-2050](https://issues.apache.org/jira/browse/FLUME-2050) | Upgrade to `log4j2` (when GA) |
| BUG-96737 | N/A | Use Java io filesystem methods to access local files | | BUG-96925 | N/A | Upgrade Tomcat from 6.0.48 to 6.0.53 in Hadoop |
-| BUG-96977 | [FLUME-3132](https://issues.apache.org/jira/browse/FLUME-3132) | Upgrade tomcat jasper library dependencies |
+| BUG-96977 | [FLUME-3132](https://issues.apache.org/jira/browse/FLUME-3132) | Upgrade tomcat `jasper` library dependencies |
| BUG-97022 | [HADOOP-14799](https://issues.apache.org/jira/browse/HADOOP-14799), [HADOOP-14903](https://issues.apache.org/jira/browse/HADOOP-14903), [HADOOP-15265](https://issues.apache.org/jira/browse/HADOOP-15265) | Upgrading Nimbus-JOSE-JWT library with version above 4.39 | | BUG-97101 | [RANGER-1988](https://issues.apache.org/jira/browse/RANGER-1988) | Fix insecure randomness | | BUG-97178 | [ATLAS-2467](https://issues.apache.org/jira/browse/ATLAS-2467) | Dependency upgrade for Spring and nimbus-jose-jwt |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-100040 | [ATLAS-2536](https://issues.apache.org/jira/browse/ATLAS-2536) | NPE in Atlas Hive Hook | | BUG-100057 | [HIVE-19251](https://issues.apache.org/jira/browse/HIVE-19251) | ObjectStore.getNextNotification with LIMIT should use less memory | | BUG-100072 | [HIVE-19130](https://issues.apache.org/jira/browse/HIVE-19130) | NPE is thrown when REPL LOAD applied drop partition event. |
-| BUG-100073 | N/A | too many close\_wait connections from hiveserver to data node |
+| BUG-100073 | N/A | too many close\_wait connections from `hiveserver` to data node |
| BUG-100319 | [HIVE-19248](https://issues.apache.org/jira/browse/HIVE-19248) | REPL LOAD doesn't throw error if file copy fails. | | BUG-100352 | N/A | CLONE - RM purging logic scans /registry znode too frequently | | BUG-100427 | [HIVE-19249](https://issues.apache.org/jira/browse/HIVE-19249) | Replication: WITH clause isn't passing the configuration to Task correctly in all cases | | BUG-100430 | [HIVE-14483](https://issues.apache.org/jira/browse/HIVE-14483) | java.lang.ArrayIndexOutOfBoundsException org.apache.orc.impl.TreeReaderFactory\$BytesColumnVectorUtil.commonReadByteArrays | | BUG-100432 | [HIVE-19219](https://issues.apache.org/jira/browse/HIVE-19219) | Incremental REPL DUMP should throw error if requested events are cleaned-up. |
-| BUG-100448 | [SPARK-23637](https://issues.apache.org/jira/browse/SPARK-23637), [SPARK-23802](https://issues.apache.org/jira/browse/SPARK-23802), [SPARK-23809](https://issues.apache.org/jira/browse/SPARK-23809), [SPARK-23816](https://issues.apache.org/jira/browse/SPARK-23816), [SPARK-23822](https://issues.apache.org/jira/browse/SPARK-23822), [SPARK-23823](https://issues.apache.org/jira/browse/SPARK-23823), [SPARK-23838](https://issues.apache.org/jira/browse/SPARK-23838), [SPARK-23881](https://issues.apache.org/jira/browse/SPARK-23881) | Update Spark2 to 2.3.0+ (4/11) |
+| BUG-100448 | [SPARK-23637](https://issues.apache.org/jira/browse/SPARK-23637), [SPARK-23802](https://issues.apache.org/jira/browse/SPARK-23802), [SPARK-23809](https://issues.apache.org/jira/browse/SPARK-23809), [SPARK-23816](https://issues.apache.org/jira/browse/SPARK-23816), [SPARK-23822](https://issues.apache.org/jira/browse/SPARK-23822), [SPARK-23823](https://issues.apache.org/jira/browse/SPARK-23823), [SPARK-23838](https://issues.apache.org/jira/browse/SPARK-23838), [SPARK-23881](https://issues.apache.org/jira/browse/SPARK-23881) | Update `Spark2` to 2.3.0+ (4/11) |
| BUG-100740 | [HIVE-16107](https://issues.apache.org/jira/browse/HIVE-16107) | JDBC: HttpClient should retry one more time on NoHttpResponseException | | BUG-100810 | [HIVE-19054](https://issues.apache.org/jira/browse/HIVE-19054) | Hive Functions replication fails |
-| BUG-100937 | [MAPREDUCE-6889](https://issues.apache.org/jira/browse/MAPREDUCE-6889) | Add Job\#close API to shutdown MR client services. |
-| BUG-101065 | [ATLAS-2587](https://issues.apache.org/jira/browse/ATLAS-2587) | Set read ACL for /apache\_atlas/active\_server\_info znode in HA for Knox proxy to read. |
+| BUG-100937 | [MAPREDUCE-6889](https://issues.apache.org/jira/browse/MAPREDUCE-6889) | Add Job\#close API to shut down MR client services. |
+| BUG-101065 | [ATLAS-2587](https://issues.apache.org/jira/browse/ATLAS-2587) | Set read ACL for /apache\_atlas/active\_server\_info znode in HA for `Knox` proxy to read. |
| BUG-101093 | [STORM-2993](https://issues.apache.org/jira/browse/STORM-2993) | Storm HDFS bolt throws ClosedChannelException when Time rotation policy is used | | BUG-101181 | N/A | PhoenixStorageHandler doesn't handle AND in predicate correctly | | BUG-101266 | [PHOENIX-4635](https://issues.apache.org/jira/browse/PHOENIX-4635) | HBase Connection leak in org.apache.phoenix.hive.mapreduce.PhoenixInputFormat |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-101485 | N/A | hive metastore thrift api is slow and causing client timeout | | BUG-101628 | [HIVE-19331](https://issues.apache.org/jira/browse/HIVE-19331) | Hive incremental replication to cloud failed. | | BUG-102048 | [HIVE-19381](https://issues.apache.org/jira/browse/HIVE-19381) | Hive Function Replication to cloud fails with FunctionTask |
-| BUG-102064 | N/A | Hive Replication \[ onprem to onprem \] tests failed in ReplCopyTask |
-| BUG-102137 | [HIVE-19423](https://issues.apache.org/jira/browse/HIVE-19423) | Hive Replication \[ Onprem to Cloud \] tests failed in ReplCopyTask |
+| BUG-102064 | N/A | Hive Replication `\[ onprem to onprem \]` tests failed in ReplCopyTask |
+| BUG-102137 | [HIVE-19423](https://issues.apache.org/jira/browse/HIVE-19423) | Hive Replication `\[ Onprem to Cloud \]` tests failed in ReplCopyTask |
| BUG-102305 | [HIVE-19430](https://issues.apache.org/jira/browse/HIVE-19430) | HS2 and hive metastore OOM dumps |
-| BUG-102361 | N/A | multiple insert results in single insert replicated to target hive cluster ( onprem - s3 ) |
+| BUG-102361 | N/A | multiple insert results in single insert replicated to target hive cluster ( `onprem - s3` ) |
| BUG-87624 | N/A | Enabling storm event logging causes workers to continuously die | | BUG-88929 | [HBASE-15615](https://issues.apache.org/jira/browse/HBASE-15615) | Wrong sleep time when RegionServerCallable need retry | | BUG-89628 | [HIVE-17613](https://issues.apache.org/jira/browse/HIVE-17613) | remove object pools for short, same-thread allocations |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-92373 | [FALCON-2314](https://issues.apache.org/jira/browse/FALCON-2314) | Bump TestNG version to 6.13.1 to avoid BeanShell dependency | | BUG-92381 | N/A | testContainerLogsWithNewAPI and testContainerLogsWithOldAPI UT fails | | BUG-92389 | [STORM-2841](https://issues.apache.org/jira/browse/STORM-2841) | testNoAcksIfFlushFails UT fails with NullPointerException |
-| BUG-92586 | [SPARK-17920](https://issues.apache.org/jira/browse/SPARK-17920), [SPARK-20694](https://issues.apache.org/jira/browse/SPARK-20694), [SPARK-21642](https://issues.apache.org/jira/browse/SPARK-21642), [SPARK-22162](https://issues.apache.org/jira/browse/SPARK-22162), [SPARK-22289](https://issues.apache.org/jira/browse/SPARK-22289), [SPARK-22373](https://issues.apache.org/jira/browse/SPARK-22373), [SPARK-22495](https://issues.apache.org/jira/browse/SPARK-22495), [SPARK-22574](https://issues.apache.org/jira/browse/SPARK-22574), [SPARK-22591](https://issues.apache.org/jira/browse/SPARK-22591), [SPARK-22595](https://issues.apache.org/jira/browse/SPARK-22595), [SPARK-22601](https://issues.apache.org/jira/browse/SPARK-22601), [SPARK-22603](https://issues.apache.org/jira/browse/SPARK-22603), [SPARK-22607](https://issues.apache.org/jira/browse/SPARK-22607), [SPARK-22635](https://issues.apache.org/jira/browse/SPARK-22635), [SPARK-22637](https://issues.apache.org/jira/browse/SPARK-22637), [SPARK-22653](https://issues.apache.org/jira/browse/SPARK-22653), [SPARK-22654](https://issues.apache.org/jira/browse/SPARK-22654), [SPARK-22686](https://issues.apache.org/jira/browse/SPARK-22686), [SPARK-22688](https://issues.apache.org/jira/browse/SPARK-22688), [SPARK-22817](https://issues.apache.org/jira/browse/SPARK-22817), [SPARK-22862](https://issues.apache.org/jira/browse/SPARK-22862), [SPARK-22889](https://issues.apache.org/jira/browse/SPARK-22889), [SPARK-22972](https://issues.apache.org/jira/browse/SPARK-22972), [SPARK-22975](https://issues.apache.org/jira/browse/SPARK-22975), [SPARK-22982](https://issues.apache.org/jira/browse/SPARK-22982), [SPARK-22983](https://issues.apache.org/jira/browse/SPARK-22983), [SPARK-22984](https://issues.apache.org/jira/browse/SPARK-22984), [SPARK-23001](https://issues.apache.org/jira/browse/SPARK-23001), [SPARK-23038](https://issues.apache.org/jira/browse/SPARK-23038), [SPARK-23095](https://issues.apache.org/jira/browse/SPARK-23095) | Update Spark2 up-to-date to 2.2.1 (Jan. 16) |
+| BUG-92586 | [SPARK-17920](https://issues.apache.org/jira/browse/SPARK-17920), [SPARK-20694](https://issues.apache.org/jira/browse/SPARK-20694), [SPARK-21642](https://issues.apache.org/jira/browse/SPARK-21642), [SPARK-22162](https://issues.apache.org/jira/browse/SPARK-22162), [SPARK-22289](https://issues.apache.org/jira/browse/SPARK-22289), [SPARK-22373](https://issues.apache.org/jira/browse/SPARK-22373), [SPARK-22495](https://issues.apache.org/jira/browse/SPARK-22495), [SPARK-22574](https://issues.apache.org/jira/browse/SPARK-22574), [SPARK-22591](https://issues.apache.org/jira/browse/SPARK-22591), [SPARK-22595](https://issues.apache.org/jira/browse/SPARK-22595), [SPARK-22601](https://issues.apache.org/jira/browse/SPARK-22601), [SPARK-22603](https://issues.apache.org/jira/browse/SPARK-22603), [SPARK-22607](https://issues.apache.org/jira/browse/SPARK-22607), [SPARK-22635](https://issues.apache.org/jira/browse/SPARK-22635), [SPARK-22637](https://issues.apache.org/jira/browse/SPARK-22637), [SPARK-22653](https://issues.apache.org/jira/browse/SPARK-22653), [SPARK-22654](https://issues.apache.org/jira/browse/SPARK-22654), [SPARK-22686](https://issues.apache.org/jira/browse/SPARK-22686), [SPARK-22688](https://issues.apache.org/jira/browse/SPARK-22688), [SPARK-22817](https://issues.apache.org/jira/browse/SPARK-22817), [SPARK-22862](https://issues.apache.org/jira/browse/SPARK-22862), [SPARK-22889](https://issues.apache.org/jira/browse/SPARK-22889), [SPARK-22972](https://issues.apache.org/jira/browse/SPARK-22972), [SPARK-22975](https://issues.apache.org/jira/browse/SPARK-22975), [SPARK-22982](https://issues.apache.org/jira/browse/SPARK-22982), [SPARK-22983](https://issues.apache.org/jira/browse/SPARK-22983), [SPARK-22984](https://issues.apache.org/jira/browse/SPARK-22984), [SPARK-23001](https://issues.apache.org/jira/browse/SPARK-23001), [SPARK-23038](https://issues.apache.org/jira/browse/SPARK-23038), [SPARK-23095](https://issues.apache.org/jira/browse/SPARK-23095) | Update `Spark2` up-to-date to 2.2.1 (Jan. 16) |
| BUG-92680 | [ATLAS-2288](https://issues.apache.org/jira/browse/ATLAS-2288) | NoClassDefFoundError Exception while running import-hive script when hbase table is created via Hive | | BUG-92760 | [ACCUMULO-4578](https://issues.apache.org/jira/browse/ACCUMULO-4578) | Cancel compaction FATE operation does not release namespace lock | | BUG-92797 | [HDFS-10267](https://issues.apache.org/jira/browse/HDFS-10267), [HDFS-8496](https://issues.apache.org/jira/browse/HDFS-8496) | Reducing the datanode lock contentions on certain use cases |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-93361 | [HIVE-12360](https://issues.apache.org/jira/browse/HIVE-12360) | Bad seek in uncompressed ORC with predicate pushdown | | BUG-93426 | [CALCITE-2086](https://issues.apache.org/jira/browse/CALCITE-2086) | HTTP/413 in certain circumstances due to large Authorization headers | | BUG-93429 | [PHOENIX-3240](https://issues.apache.org/jira/browse/PHOENIX-3240) | ClassCastException from Pig loader |
-| BUG-93485 | N/A | can'tcan'tCan't get table mytestorg.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found when running analyze table on columns in LLAP |
+| BUG-93485 | N/A | can't get table mytestorg.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found when running analyze table on columns in LLAP |
| BUG-93512 | [PHOENIX-4466](https://issues.apache.org/jira/browse/PHOENIX-4466) | java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data | | BUG-93550 | N/A | Zeppelin %spark.r does not work with spark1 due to scala version mismatch | | BUG-93910 | [HIVE-18293](https://issues.apache.org/jira/browse/HIVE-18293) | Hive is failing to compact tables contained within a folder that isn't owned by identity running HiveMetaStore |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-93986 | [YARN-7697](https://issues.apache.org/jira/browse/YARN-7697) | NM goes down with OOM due to leak in log-aggregation (part\#2) | | BUG-94030 | [ATLAS-2332](https://issues.apache.org/jira/browse/ATLAS-2332) | Creation of type with attributes having nested collection datatype fails | | BUG-94080 | [YARN-3742](https://issues.apache.org/jira/browse/YARN-3742), [YARN-6061](https://issues.apache.org/jira/browse/YARN-6061) | Both RM are in standby in secure cluster |
-| BUG-94081 | [HIVE-18384](https://issues.apache.org/jira/browse/HIVE-18384) | ConcurrentModificationException in log4j2.x library |
+| BUG-94081 | [HIVE-18384](https://issues.apache.org/jira/browse/HIVE-18384) | ConcurrentModificationException in `log4j2.x` library |
| BUG-94168 | N/A | Yarn RM goes down with Service Registry is in wrong state ERROR |
-| BUG-94330 | [HADOOP-13190](https://issues.apache.org/jira/browse/HADOOP-13190), [HADOOP-14104](https://issues.apache.org/jira/browse/HADOOP-14104), [HADOOP-14814](https://issues.apache.org/jira/browse/HADOOP-14814), [HDFS-10489](https://issues.apache.org/jira/browse/HDFS-10489), [HDFS-11689](https://issues.apache.org/jira/browse/HDFS-11689) | HDFS should support for multiple KMS Uris |
+| BUG-94330 | [HADOOP-13190](https://issues.apache.org/jira/browse/HADOOP-13190), [HADOOP-14104](https://issues.apache.org/jira/browse/HADOOP-14104), [HADOOP-14814](https://issues.apache.org/jira/browse/HADOOP-14814), [HDFS-10489](https://issues.apache.org/jira/browse/HDFS-10489), [HDFS-11689](https://issues.apache.org/jira/browse/HDFS-11689) | HDFS should support for multiple `KMS Uris` |
| BUG-94345 | [HIVE-18429](https://issues.apache.org/jira/browse/HIVE-18429) | Compaction should handle a case when it produces no output | | BUG-94372 | [ATLAS-2229](https://issues.apache.org/jira/browse/ATLAS-2229) | DSL query: hive\_table name = \["t1","t2"\] throws invalid DSL query exception | | BUG-94381 | [HADOOP-13227](https://issues.apache.org/jira/browse/HADOOP-13227), [HDFS-13054](https://issues.apache.org/jira/browse/HDFS-13054) | Handling RequestHedgingProxyProvider RetryAction order: FAIL &lt; RETRY &lt; FAILOVER\_AND\_RETRY. |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-94928 | [HDFS-11078](https://issues.apache.org/jira/browse/HDFS-11078) | Fix NPE in LazyPersistFileScrubber | | BUG-95013 | [HIVE-18488](https://issues.apache.org/jira/browse/HIVE-18488) | LLAP ORC readers are missing some null checks | | BUG-95077 | [HIVE-14205](https://issues.apache.org/jira/browse/HIVE-14205) | Hive doesn't support union type with AVRO file format |
-| BUG-95200 | [HDFS-13061](https://issues.apache.org/jira/browse/HDFS-13061) | SaslDataTransferClient\#checkTrustAndSend shouldn'tshould'n trust a partially trusted channel |
+| BUG-95200 | [HDFS-13061](https://issues.apache.org/jira/browse/HDFS-13061) | SaslDataTransferClient\#checkTrustAndSend shouldn't trust a partially trusted channel |
| BUG-95201 | [HDFS-13060](https://issues.apache.org/jira/browse/HDFS-13060) | Adding a BlacklistBasedTrustedChannelResolver for TrustedChannelResolver | | BUG-95284 | [HBASE-19395](https://issues.apache.org/jira/browse/HBASE-19395) | \[branch-1\] TestEndToEndSplitTransaction.testMasterOpsWhileSplitting fails with NPE | | BUG-95301 | [HIVE-18517](https://issues.apache.org/jira/browse/HIVE-18517) | Vectorization: Fix VectorMapOperator to accept VRBs and check vectorized flag correctly to support LLAP Caching | | BUG-95542 | [HBASE-16135](https://issues.apache.org/jira/browse/HBASE-16135) | PeerClusterZnode under rs of removed peer may never be deleted | | BUG-95595 | [HIVE-15563](https://issues.apache.org/jira/browse/HIVE-15563) | Ignore Illegal Operation state transition exception in SQLOperation.runQuery to expose real exception. | | BUG-95596 | [YARN-4126](https://issues.apache.org/jira/browse/YARN-4126), [YARN-5750](https://issues.apache.org/jira/browse/YARN-5750) | TestClientRMService fails |
-| BUG-96019 | [HIVE-18548](https://issues.apache.org/jira/browse/HIVE-18548) | Fix log4j import |
+| BUG-96019 | [HIVE-18548](https://issues.apache.org/jira/browse/HIVE-18548) | Fix `log4j` import |
| BUG-96196 | [HDFS-13120](https://issues.apache.org/jira/browse/HDFS-13120) | Snapshot diff could be corrupted after concat | | BUG-96289 | [HDFS-11701](https://issues.apache.org/jira/browse/HDFS-11701) | NPE from Unresolved Host causes permanent DFSInputStream failures | | BUG-96291 | [STORM-2652](https://issues.apache.org/jira/browse/STORM-2652) | Exception thrown in JmsSpout open method |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-96390 | [HDFS-10453](https://issues.apache.org/jira/browse/HDFS-10453) | ReplicationMonitor thread could be stuck for a long time due to the race between replication and delete of the same file in a large cluster. | | BUG-96454 | [YARN-4593](https://issues.apache.org/jira/browse/YARN-4593) | Deadlock in AbstractService.getConfig() | | BUG-96704 | [FALCON-2322](https://issues.apache.org/jira/browse/FALCON-2322) | ClassCastException while submitAndSchedule feed |
-| BUG-96720 | [SLIDER-1262](https://issues.apache.org/jira/browse/SLIDER-1262) | Slider functests are failing in Kerberized environment |
-| BUG-96931 | [SPARK-23053](https://issues.apache.org/jira/browse/SPARK-23053), [SPARK-23186](https://issues.apache.org/jira/browse/SPARK-23186), [SPARK-23230](https://issues.apache.org/jira/browse/SPARK-23230), [SPARK-23358](https://issues.apache.org/jira/browse/SPARK-23358), [SPARK-23376](https://issues.apache.org/jira/browse/SPARK-23376), [SPARK-23391](https://issues.apache.org/jira/browse/SPARK-23391) | Update Spark2 up-to-date (Feb. 19) |
+| BUG-96720 | [SLIDER-1262](https://issues.apache.org/jira/browse/SLIDER-1262) | Slider functests are failing in `Kerberized` environment |
+| BUG-96931 | [SPARK-23053](https://issues.apache.org/jira/browse/SPARK-23053), [SPARK-23186](https://issues.apache.org/jira/browse/SPARK-23186), [SPARK-23230](https://issues.apache.org/jira/browse/SPARK-23230), [SPARK-23358](https://issues.apache.org/jira/browse/SPARK-23358), [SPARK-23376](https://issues.apache.org/jira/browse/SPARK-23376), [SPARK-23391](https://issues.apache.org/jira/browse/SPARK-23391) | Update `Spark2` up-to-date (Feb. 19) |
| BUG-97067 | [HIVE-10697](https://issues.apache.org/jira/browse/HIVE-10697) | ObjectInspectorConvertors\#UnionConvertor does a faulty conversion | | BUG-97244 | [KNOX-1083](https://issues.apache.org/jira/browse/KNOX-1083) | HttpClient default timeout should be a sensible value | | BUG-97459 | [ZEPPELIN-3271](https://issues.apache.org/jira/browse/ZEPPELIN-3271) | Option for disabling scheduler |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-97743 | N/A | java.lang.NoClassDefFoundError exception while deploying storm topology | | BUG-97756 | [PHOENIX-4576](https://issues.apache.org/jira/browse/PHOENIX-4576) | Fix LocalIndexSplitMergeIT tests failing | | BUG-97771 | [HDFS-11711](https://issues.apache.org/jira/browse/HDFS-11711) | DN should not delete the block On "Too many open files" Exception |
-| BUG-97869 | [KNOX-1190](https://issues.apache.org/jira/browse/KNOX-1190) | Knox SSO support for Google OIDC is broken. |
+| BUG-97869 | [KNOX-1190](https://issues.apache.org/jira/browse/KNOX-1190) | `Knox` SSO support for Google OIDC is broken. |
| BUG-97879 | [PHOENIX-4489](https://issues.apache.org/jira/browse/PHOENIX-4489) | HBase Connection leak in Phoenix MR Jobs | | BUG-98392 | [RANGER-2007](https://issues.apache.org/jira/browse/RANGER-2007) | ranger-tagsync's Kerberos ticket fails to renew | | BUG-98484 | N/A | Hive Incremental Replication to Cloud not working | | BUG-98533 | [HBASE-19934](https://issues.apache.org/jira/browse/HBASE-19934), [HBASE-20008](https://issues.apache.org/jira/browse/HBASE-20008) | HBase snapshot restore is failing due to Null pointer exception | | BUG-98555 | [PHOENIX-4662](https://issues.apache.org/jira/browse/PHOENIX-4662) | NullPointerException in TableResultIterator.java on cache resend | | BUG-98579 | [HBASE-13716](https://issues.apache.org/jira/browse/HBASE-13716) | Stop using Hadoop's FSConstants |
-| BUG-98705 | [KNOX-1230](https://issues.apache.org/jira/browse/KNOX-1230) | Many Concurrent Requests to Knox causes URL Mangling |
+| BUG-98705 | [KNOX-1230](https://issues.apache.org/jira/browse/KNOX-1230) | Many Concurrent Requests to `Knox` causes URL Mangling |
| BUG-98983 | [KNOX-1108](https://issues.apache.org/jira/browse/KNOX-1108) | NiFiHaDispatch not failing over | | BUG-99107 | [HIVE-19054](https://issues.apache.org/jira/browse/HIVE-19054) | Function replication shall use "hive.repl.replica.functions.root.dir" as root | | BUG-99145 | [RANGER-2035](https://issues.apache.org/jira/browse/RANGER-2035) | Errors accessing servicedefs with empty implClass with Oracle backend |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-99453 | [HIVE-19065](https://issues.apache.org/jira/browse/HIVE-19065) | Metastore client compatibility check should include syncMetaStoreClient | | BUG-99521 | N/A | ServerCache for HashJoin isn't re-created when iterators are reinstantiated | | BUG-99590 | [PHOENIX-3518](https://issues.apache.org/jira/browse/PHOENIX-3518) | Memory Leak in RenewLeaseTask |
-| BUG-99618 | [SPARK-23599](https://issues.apache.org/jira/browse/SPARK-23599), [SPARK-23806](https://issues.apache.org/jira/browse/SPARK-23806) | Update Spark2 to 2.3.0+ (3/28) |
+| BUG-99618 | [SPARK-23599](https://issues.apache.org/jira/browse/SPARK-23599), [SPARK-23806](https://issues.apache.org/jira/browse/SPARK-23806) | Update `Spark2` to 2.3.0+ (3/28) |
| BUG-99672 | [ATLAS-2524](https://issues.apache.org/jira/browse/ATLAS-2524) | Hive hook with V2 notifications - incorrect handling of 'alter view as' operation | | BUG-99809 | [HBASE-20375](https://issues.apache.org/jira/browse/HBASE-20375) | Remove use of getCurrentUserCredentials in hbase-spark module |
Fixed issues represent selected issues that were previously logged via Hortonwor
| **Bug ID** | **Apache JIRA** | **Summary** | |||--| | BUG-87343 | [HIVE-18031](https://issues.apache.org/jira/browse/HIVE-18031) | Support replication for Alter Database operation. |
-| BUG-91293 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | Knox proxy with knox-sso isn't working for ranger |
+| BUG-91293 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | `Knox` proxy with `knox-sso` isn't working for ranger |
| BUG-93116 | [RANGER-1957](https://issues.apache.org/jira/browse/RANGER-1957) | Ranger Usersync isn't syncing users or groups periodically when incremental sync is enabled. | | BUG-93577 | [RANGER-1938](https://issues.apache.org/jira/browse/RANGER-1938) | Solr for Audit setup doesn't use DocValues effectively |
-| BUG-96082 | [RANGER-1982](https://issues.apache.org/jira/browse/RANGER-1982) | Error Improvement for Analytics Metric of Ranger Admin and Ranger Kms |
-| BUG-96479 | [HDFS-12781](https://issues.apache.org/jira/browse/HDFS-12781) | After Datanode down, In Namenode UI Datanode tab is throwing warning message. |
+| BUG-96082 | [RANGER-1982](https://issues.apache.org/jira/browse/RANGER-1982) | Error Improvement for Analytics Metric of Ranger Admin and Ranger `Kms` |
+| BUG-96479 | [HDFS-12781](https://issues.apache.org/jira/browse/HDFS-12781) | After `Datanode` down, In `Namenode` UI `Datanode` tab is throwing warning message. |
| BUG-97864 | [HIVE-18833](https://issues.apache.org/jira/browse/HIVE-18833) | Auto Merge fails when "insert into directory as orcfile" | | BUG-98814 | [HDFS-13314](https://issues.apache.org/jira/browse/HDFS-13314) | NameNode should optionally exit if it detects FsImage corruption |
Fixed issues represent selected issues that were previously logged via Hortonwor
| **Bug ID** | **Apache JIRA** | **Summary** | ||--|--| | BUG-100134 | [SPARK-22919](https://issues.apache.org/jira/browse/SPARK-22919) | Revert of "Bump Apache httpclient versions" |
-| BUG-95823 | N/A | Knox: Upgrade Beanutils |
+| BUG-95823 | N/A | `Knox`: Upgrade `Beanutils` |
| BUG-96751 | [KNOX-1076](https://issues.apache.org/jira/browse/KNOX-1076) | Update nimbus-jose-jwt to 4.41.2 | | BUG-97864 | [HIVE-18833](https://issues.apache.org/jira/browse/HIVE-18833) | Auto Merge fails when "insert into directory as orcfile" | | BUG-99056 | [HADOOP-13556](https://issues.apache.org/jira/browse/HADOOP-13556) | Change Configuration.getPropsWithPrefix to use getProps instead of iterator |
Fixed issues represent selected issues that were previously logged via Hortonwor
| **Bug ID** | **Apache JIRA** | **Summary** | ||--|--| | BUG-100045 | [HIVE-19056](https://issues.apache.org/jira/browse/HIVE-19056) | IllegalArgumentException in FixAcidKeyIndex when ORC file has 0 rows |
-| BUG-100139 | [KNOX-1243](https://issues.apache.org/jira/browse/KNOX-1243) | Normalize the required DNs that are Configured in KnoxToken Service |
-| BUG-100570 | [ATLAS-2557](https://issues.apache.org/jira/browse/ATLAS-2557) | Fix to allow to lookup hadoop ldap groups when are groups from UGI are wrongly set or aren't empty |
+| BUG-100139 | [KNOX-1243](https://issues.apache.org/jira/browse/KNOX-1243) | Normalize the required DNs that are Configured in `KnoxToken` Service |
+| BUG-100570 | [ATLAS-2557](https://issues.apache.org/jira/browse/ATLAS-2557) | Fix to allow to `lookup` hadoop `ldap` groups when are groups from UGI are wrongly set or aren't empty |
| BUG-100646 | [ATLAS-2102](https://issues.apache.org/jira/browse/ATLAS-2102) | Atlas UI Improvements: Search results page | | BUG-100737 | [HIVE-19049](https://issues.apache.org/jira/browse/HIVE-19049) | Add support for Alter table add columns for Druid |
-| BUG-100750 | [KNOX-1246](https://issues.apache.org/jira/browse/KNOX-1246) | Update service config in Knox to support latest configurations for Ranger. |
+| BUG-100750 | [KNOX-1246](https://issues.apache.org/jira/browse/KNOX-1246) | Update service config in `Knox` to support latest configurations for Ranger. |
| BUG-100965 | [ATLAS-2581](https://issues.apache.org/jira/browse/ATLAS-2581) | Regression with V2 Hive hook notifications: Moving table to a different database | | BUG-84413 | [ATLAS-1964](https://issues.apache.org/jira/browse/ATLAS-1964) | UI: Support to order columns in Search table | | BUG-90570 | [HDFS-11384](https://issues.apache.org/jira/browse/HDFS-11384), [HDFS-12347](https://issues.apache.org/jira/browse/HDFS-12347) | Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike | | BUG-90584 | [HBASE-19052](https://issues.apache.org/jira/browse/HBASE-19052) | FixedFileTrailer should recognize CellComparatorImpl class in branch-1.x |
-| BUG-90979 | [KNOX-1224](https://issues.apache.org/jira/browse/KNOX-1224) | Knox Proxy HADispatcher to support Atlas in HA. |
-| BUG-91293 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | Knox proxy with knox-sso isn't working for ranger |
+| BUG-90979 | [KNOX-1224](https://issues.apache.org/jira/browse/KNOX-1224) | `Knox` Proxy `HADispatcher` to support Atlas in HA. |
+| BUG-91293 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | `Knox` proxy with knox-sso isn't working for ranger |
| BUG-92236 | [ATLAS-2281](https://issues.apache.org/jira/browse/ATLAS-2281) | Saving Tag/Type attribute filter queries with null/not null filters. | | BUG-92238 | [ATLAS-2282](https://issues.apache.org/jira/browse/ATLAS-2282) | Saved favorite search appears only on refresh after creation when there are 25+ favorite searches. | | BUG-92333 | [ATLAS-2286](https://issues.apache.org/jira/browse/ATLAS-2286) | Pre-built type 'kafka\_topic' should not declare 'topic' attribute as unique | | BUG-92678 | [ATLAS-2276](https://issues.apache.org/jira/browse/ATLAS-2276) | Path value for hdfs\_path type entity is set to lower case from hive-bridge. | | BUG-93097 | [RANGER-1944](https://issues.apache.org/jira/browse/RANGER-1944) | Action filter for Admin Audit isn't working | | BUG-93135 | [HIVE-15874](https://issues.apache.org/jira/browse/HIVE-15874), [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Hive query returning wrong results when set hive.groupby.orderby.position.alias to true |
-| BUG-93136 | [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Order by position doesn't work when cbo is disabled |
+| BUG-93136 | [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Order by position doesn't work when `cbo` is disabled |
| BUG-93387 | [HIVE-17600](https://issues.apache.org/jira/browse/HIVE-17600) | Make OrcFile's "enforceBufferSize" user-settable. |
-| BUG-93495 | [RANGER-1937](https://issues.apache.org/jira/browse/RANGER-1937) | Ranger tagsync should process ENTITY\_CREATE notification, to support Atlas import feature |
+| BUG-93495 | [RANGER-1937](https://issues.apache.org/jira/browse/RANGER-1937) | Ranger `tagsync` should process ENTITY\_CREATE notification, to support Atlas import feature |
| BUG-93512 | [PHOENIX-4466](https://issues.apache.org/jira/browse/PHOENIX-4466) | java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data | | BUG-93801 | [HBASE-19393](https://issues.apache.org/jira/browse/HBASE-19393) | HTTP 413 FULL head while accessing HBase UI using SSL. | | BUG-93804 | [HIVE-17419](https://issues.apache.org/jira/browse/HIVE-17419) | ANALYZE TABLE...COMPUTE STATISTICS FOR COLUMNS command shows computed stats for masked tables |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-93933 | [ATLAS-2286](https://issues.apache.org/jira/browse/ATLAS-2286) | Pre-built type 'kafka\_topic' should not declare 'topic' attribute as unique | | BUG-93938 | [ATLAS-2283](https://issues.apache.org/jira/browse/ATLAS-2283), [ATLAS-2295](https://issues.apache.org/jira/browse/ATLAS-2295) | UI updates for classifications | | BUG-93941 | [ATLAS-2296](https://issues.apache.org/jira/browse/ATLAS-2296), [ATLAS-2307](https://issues.apache.org/jira/browse/ATLAS-2307) | Basic search enhancement to optionally exclude subtype entities and sub-classification-types |
-| BUG-93944 | [ATLAS-2318](https://issues.apache.org/jira/browse/ATLAS-2318) | UI: Clicking on child tag twice , parent tag is selected |
+| BUG-93944 | [ATLAS-2318](https://issues.apache.org/jira/browse/ATLAS-2318) | UI: When clicking on child tag twice, parent tag is selected |
| BUG-93946 | [ATLAS-2319](https://issues.apache.org/jira/browse/ATLAS-2319) | UI: Deleting a tag, which at 25+ position in the tag list in both Flat and Tree structure needs a refresh to remove the tag from the list. | | BUG-93977 | [HIVE-16232](https://issues.apache.org/jira/browse/HIVE-16232) | Support stats computation for column in QuotedIdentifier | | BUG-94030 | [ATLAS-2332](https://issues.apache.org/jira/browse/ATLAS-2332) | Creation of type with attributes having nested collection datatype fails | | BUG-94099 | [ATLAS-2352](https://issues.apache.org/jira/browse/ATLAS-2352) | Atlas server should provide configuration to specify validity for Kerberos DelegationToken | | BUG-94280 | [HIVE-12785](https://issues.apache.org/jira/browse/HIVE-12785) | View with union type and UDF to \`cast\` the struct is broken | | BUG-94332 | [SQOOP-2930](https://issues.apache.org/jira/browse/SQOOP-2930) | Sqoop job exec not overriding the saved job generic properties |
-| BUG-94428 | N/A | Dataplane Profiler Agent REST API Knox support |
+| BUG-94428 | N/A | `Dataplane` Profiler Agent REST API `Knox` support |
| BUG-94514 | [ATLAS-2339](https://issues.apache.org/jira/browse/ATLAS-2339) | UI: Modifications in "columns" in Basic search result view affects DSL also. | | BUG-94515 | [ATLAS-2169](https://issues.apache.org/jira/browse/ATLAS-2169) | Delete request fails when hard delete is configured |
-| BUG-94518 | [ATLAS-2329](https://issues.apache.org/jira/browse/ATLAS-2329) | Atlas UI Multiple Hovers appears if user click on another tag which is incorrect |
+| BUG-94518 | [ATLAS-2329](https://issues.apache.org/jira/browse/ATLAS-2329) | Atlas UI Multiple Hovers appear if user click on another tag which is incorrect |
| BUG-94519 | [ATLAS-2272](https://issues.apache.org/jira/browse/ATLAS-2272) | Save the state of dragged columns using save search API. |
-| BUG-94627 | [HIVE-17731](https://issues.apache.org/jira/browse/HIVE-17731) | add a backward compat option for external users to HIVE-11985 |
-| BUG-94786 | [HIVE-6091](https://issues.apache.org/jira/browse/HIVE-6091) | Empty pipeout files are created for connection create/close |
+| BUG-94627 | [HIVE-17731](https://issues.apache.org/jira/browse/HIVE-17731) | add a backward `compat` option for external users to HIVE-11985 |
+| BUG-94786 | [HIVE-6091](https://issues.apache.org/jira/browse/HIVE-6091) | Empty `pipeout` files are created for connection create/close |
| BUG-94793 | [HIVE-14013](https://issues.apache.org/jira/browse/HIVE-14013) | Describe table doesn't show unicode properly | | BUG-94900 | [OOZIE-2606](https://issues.apache.org/jira/browse/OOZIE-2606), [OOZIE-2658](https://issues.apache.org/jira/browse/OOZIE-2658), [OOZIE-2787](https://issues.apache.org/jira/browse/OOZIE-2787), [OOZIE-2802](https://issues.apache.org/jira/browse/OOZIE-2802) | Set spark.yarn.jars to fix Spark 2.0 with Oozie | | BUG-94901 | [HBASE-19285](https://issues.apache.org/jira/browse/HBASE-19285) | Add per-table latency histograms | | BUG-94908 | [ATLAS-1921](https://issues.apache.org/jira/browse/ATLAS-1921) | UI: Search using entity and trait attributes: UI doesn't perform range check and allows providing out of bounds values for integral and float data types. | | BUG-95086 | [RANGER-1953](https://issues.apache.org/jira/browse/RANGER-1953) | improvement on user-group page listing | | BUG-95193 | [SLIDER-1252](https://issues.apache.org/jira/browse/SLIDER-1252) | Slider agent fails with SSL validation errors with Python 2.7.5-58 |
-| BUG-95314 | [YARN-7699](https://issues.apache.org/jira/browse/YARN-7699) | queueUsagePercentage is coming as INF for getApp REST api call |
+| BUG-95314 | [YARN-7699](https://issues.apache.org/jira/browse/YARN-7699) | queueUsagePercentage is coming as INF for `getApp` REST api call |
| BUG-95315 | [HBASE-13947](https://issues.apache.org/jira/browse/HBASE-13947), [HBASE-14517](https://issues.apache.org/jira/browse/HBASE-14517), [HBASE-17931](https://issues.apache.org/jira/browse/HBASE-17931) | Assign system tables to servers with highest version | | BUG-95392 | [ATLAS-2421](https://issues.apache.org/jira/browse/ATLAS-2421) | Notification updates to support V2 data structures | | BUG-95476 | [RANGER-1966](https://issues.apache.org/jira/browse/RANGER-1966) | Policy engine initialization does not create context enrichers in some cases | | BUG-95512 | [HIVE-18467](https://issues.apache.org/jira/browse/HIVE-18467) | support whole warehouse dump / load + create/drop database events |
-| BUG-95593 | N/A | Extend Oozie DB utils to support Spark2 sharelib creation |
+| BUG-95593 | N/A | Extend Oozie DB utils to support `Spark2` `sharelib` creation |
| BUG-95595 | [HIVE-15563](https://issues.apache.org/jira/browse/HIVE-15563) | Ignore Illegal Operation state transition exception in SQLOperation.runQuery to expose real exception. | | BUG-95685 | [ATLAS-2422](https://issues.apache.org/jira/browse/ATLAS-2422) | Export: Support type-based Export | | BUG-95798 | [PHOENIX-2714](https://issues.apache.org/jira/browse/PHOENIX-2714), [PHOENIX-2724](https://issues.apache.org/jira/browse/PHOENIX-2724), [PHOENIX-3023](https://issues.apache.org/jira/browse/PHOENIX-3023), [PHOENIX-3040](https://issues.apache.org/jira/browse/PHOENIX-3040) | Don't use guideposts for executing queries serially | | BUG-95969 | [HIVE-16828](https://issues.apache.org/jira/browse/HIVE-16828), [HIVE-17063](https://issues.apache.org/jira/browse/HIVE-17063), [HIVE-18390](https://issues.apache.org/jira/browse/HIVE-18390) | Partitioned view fails with FAILED: IndexOutOfBoundsException Index: 1, Size: 1 |
-| BUG-96019 | [HIVE-18548](https://issues.apache.org/jira/browse/HIVE-18548) | Fix log4j import |
+| BUG-96019 | [HIVE-18548](https://issues.apache.org/jira/browse/HIVE-18548) | Fix `log4j` import |
| BUG-96288 | [HBASE-14123](https://issues.apache.org/jira/browse/HBASE-14123), [HBASE-14135](https://issues.apache.org/jira/browse/HBASE-14135), [HBASE-17850](https://issues.apache.org/jira/browse/HBASE-17850) | Backport HBase Backup/Restore 2.0 |
-| BUG-96313 | [KNOX-1119](https://issues.apache.org/jira/browse/KNOX-1119) | Pac4J OAuth/OpenID Principal Needs to be Configurable |
+| BUG-96313 | [KNOX-1119](https://issues.apache.org/jira/browse/KNOX-1119) | `Pac4J` OAuth/OpenID Principal Needs to be Configurable |
| BUG-96365 | [ATLAS-2442](https://issues.apache.org/jira/browse/ATLAS-2442) | User with read-only permission on entity resource not able perform basic search |
-| BUG-96479 | [HDFS-12781](https://issues.apache.org/jira/browse/HDFS-12781) | After Datanode down, In Namenode UI Datanode tab is throwing warning message. |
+| BUG-96479 | [HDFS-12781](https://issues.apache.org/jira/browse/HDFS-12781) | After `Datanode` down, In `Namenode` UI `Datanode` tab is throwing warning message. |
| BUG-96502 | [RANGER-1990](https://issues.apache.org/jira/browse/RANGER-1990) | Add One-way SSL MySQL support in Ranger Admin | | BUG-96718 | [ATLAS-2439](https://issues.apache.org/jira/browse/ATLAS-2439) | Update Sqoop hook to use V2 notifications | | BUG-96748 | [HIVE-18587](https://issues.apache.org/jira/browse/HIVE-18587) | insert DML event may attempt to calculate a checksum on directories |
-| BUG-96821 | [HBASE-18212](https://issues.apache.org/jira/browse/HBASE-18212) | In Standalone mode with local filesystem HBase logs Warning message:Failed to invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream |
+| BUG-96821 | [HBASE-18212](https://issues.apache.org/jira/browse/HBASE-18212) | In Standalone mode with local filesystem HBase logs Warning message: Failed to invoke 'unbuffer' method in class org.apache.hadoop.fs.FSDataInputStream |
| BUG-96847 | [HIVE-18754](https://issues.apache.org/jira/browse/HIVE-18754) | REPL STATUS should support 'with' clause | | BUG-96873 | [ATLAS-2443](https://issues.apache.org/jira/browse/ATLAS-2443) | Capture required entity attributes in outgoing DELETE messages |
-| BUG-96880 | [SPARK-23230](https://issues.apache.org/jira/browse/SPARK-23230) | When hive.default.fileformat is other kinds of file types, create textfile table cause a serde error |
+| BUG-96880 | [SPARK-23230](https://issues.apache.org/jira/browse/SPARK-23230) | When hive.default.fileformat is other kinds of file types, create `textfile` table cause a `serde` error |
| BUG-96911 | [OOZIE-2571](https://issues.apache.org/jira/browse/OOZIE-2571), [OOZIE-2792](https://issues.apache.org/jira/browse/OOZIE-2792), [OOZIE-2799](https://issues.apache.org/jira/browse/OOZIE-2799), [OOZIE-2923](https://issues.apache.org/jira/browse/OOZIE-2923) | Improve Spark options parsing | | BUG-97100 | [RANGER-1984](https://issues.apache.org/jira/browse/RANGER-1984) | HBase audit log records may not show all tags associated with accessed column | | BUG-97110 | [PHOENIX-3789](https://issues.apache.org/jira/browse/PHOENIX-3789) | Execute cross region index maintenance calls in postBatchMutateIndispensably | | BUG-97145 | [HIVE-12245](https://issues.apache.org/jira/browse/HIVE-12245), [HIVE-17829](https://issues.apache.org/jira/browse/HIVE-17829) | Support column comments for an HBase backed table | | BUG-97409 | [HADOOP-15255](https://issues.apache.org/jira/browse/HADOOP-15255) | Upper/Lower case conversion support for group names in LdapGroupsMapping | | BUG-97535 | [HIVE-18710](https://issues.apache.org/jira/browse/HIVE-18710) | extend inheritPerms to ACID in Hive 2.X |
-| BUG-97742 | [OOZIE-1624](https://issues.apache.org/jira/browse/OOZIE-1624) | Exclusion pattern for sharelib JARs |
+| BUG-97742 | [OOZIE-1624](https://issues.apache.org/jira/browse/OOZIE-1624) | Exclusion pattern for `sharelib` JARs |
| BUG-97744 | [PHOENIX-3994](https://issues.apache.org/jira/browse/PHOENIX-3994) | Index RPC priority still depends on the controller factory property in hbase-site.xml | | BUG-97787 | [HIVE-18460](https://issues.apache.org/jira/browse/HIVE-18460) | Compactor doesn't pass Table properties to the Orc writer | | BUG-97788 | [HIVE-18613](https://issues.apache.org/jira/browse/HIVE-18613) | Extend JsonSerDe to support BINARY type |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-98392 | [RANGER-2007](https://issues.apache.org/jira/browse/RANGER-2007) | ranger-tagsync's Kerberos ticket fails to renew | | BUG-98533 | [HBASE-19934](https://issues.apache.org/jira/browse/HBASE-19934), [HBASE-20008](https://issues.apache.org/jira/browse/HBASE-20008) | HBase snapshot restore is failing due to Null pointer exception | | BUG-98552 | [HBASE-18083](https://issues.apache.org/jira/browse/HBASE-18083), [HBASE-18084](https://issues.apache.org/jira/browse/HBASE-18084) | Make large/small file clean thread number configurable in HFileCleaner |
-| BUG-98705 | [KNOX-1230](https://issues.apache.org/jira/browse/KNOX-1230) | Many Concurrent Requests to Knox causes URL Mangling |
+| BUG-98705 | [KNOX-1230](https://issues.apache.org/jira/browse/KNOX-1230) | Many Concurrent Requests to `Knox` causes URL Mangling |
| BUG-98711 | N/A | NiFi dispatch can't use two-way SSL without service.xml modifications | | BUG-98880 | [OOZIE-3199](https://issues.apache.org/jira/browse/OOZIE-3199) | Let system property restriction configurable | | BUG-98931 | [ATLAS-2491](https://issues.apache.org/jira/browse/ATLAS-2491) | Update Hive hook to use Atlas v2 notifications |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-99154 | [OOZIE-2844](https://issues.apache.org/jira/browse/OOZIE-2844), [OOZIE-2845](https://issues.apache.org/jira/browse/OOZIE-2845), [OOZIE-2858](https://issues.apache.org/jira/browse/OOZIE-2858), [OOZIE-2885](https://issues.apache.org/jira/browse/OOZIE-2885) | Spark query failed with "java.io.FileNotFoundException: hive-site.xml (Permission denied)" exception | | BUG-99239 | [ATLAS-2462](https://issues.apache.org/jira/browse/ATLAS-2462) | Sqoop import for all tables throws NPE for no table provided in command | | BUG-99636 | [KNOX-1238](https://issues.apache.org/jira/browse/KNOX-1238) | Fix Custom Truststore Settings for Gateway |
-| BUG-99650 | [KNOX-1223](https://issues.apache.org/jira/browse/KNOX-1223) | Zeppelin's Knox proxy doesn't redirect /api/ticket as expected |
+| BUG-99650 | [KNOX-1223](https://issues.apache.org/jira/browse/KNOX-1223) | Zeppelin's `Knox` proxy doesn't redirect /api/ticket as expected |
| BUG-99804 | [OOZIE-2858](https://issues.apache.org/jira/browse/OOZIE-2858) | HiveMain, ShellMain and SparkMain should not overwrite properties and config files locally | | BUG-99805 | [OOZIE-2885](https://issues.apache.org/jira/browse/OOZIE-2885) | Running Spark actions should not need Hive on the classpath | | BUG-99806 | [OOZIE-2845](https://issues.apache.org/jira/browse/OOZIE-2845) | Replace reflection-based code, which sets variable in HiveConf |
-| BUG-99807 | [OOZIE-2844](https://issues.apache.org/jira/browse/OOZIE-2844) | Increase stability of Oozie actions when log4j.properties is missing or not readable |
+| BUG-99807 | [OOZIE-2844](https://issues.apache.org/jira/browse/OOZIE-2844) | Increase stability of Oozie actions when `log4j`.properties is missing or not readable |
| RMP-9995 | [AMBARI-22222](https://issues.apache.org/jira/browse/AMBARI-22222) | Switch druid to use /var/druid directory instead of /apps/druid on local disk |

### Behavioral changes
Fixed issues represent selected issues that were previously logged via Hortonwor
|Spark |[**HIVE-12505**](https://issues.apache.org/jira/browse/HIVE-12505) |Spark job completes successfully but there's an HDFS disk quota full error |**Scenario:** Running **insert overwrite** when a quota is set on the Trash folder of the user who runs the command.<br /><br />**Previous Behavior:** The job succeeds even though it fails to move the data to the Trash. The result can wrongly contain some of the data previously present in the table.<br /><br />**New Behavior:** When the move to the Trash folder fails, the files are permanently deleted.| |**Kafka 1.0**|**N/A**|**Changes as documented in the Apache Spark release notes** |https://kafka.apache.org/10/documentation.html#upgrade_100_notable| |**Hive/ Ranger** | |Another ranger hive policies required for INSERT OVERWRITE |**Scenario:** Another ranger hive policies required for **INSERT OVERWRITE**<br /><br />**Previous behavior:** Hive **INSERT OVERWRITE** queries succeed as usual.<br /><br />**New behavior:** Hive **INSERT OVERWRITE** queries are unexpectedly failing after upgrading to HDP-2.6.x with the error:<br /><br />Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user jdoe does not have WRITE privilege on /tmp/\*(state=42000,code=40000)<br /><br />As of HDP-2.6.0, Hive **INSERT OVERWRITE** queries require a Ranger URI policy to allow write operations, even if the user has write privilege granted through HDFS policy.<br /><br />**Workaround/Expected Customer Action:**<br /><br />1. Create a new policy under the Hive repository.<br />2. In the dropdown where you see Database, select URI.<br />3. Update the path (Example: /tmp/*)<br />4. Add the users and group and save.<br />5. Retry the insert query.|
-|**HDFS**|**N/A** |HDFS should support for multiple KMS Uris |**Previous Behavior:** dfs.encryption.key.provider.uri property was used to configure the KMS provider path.<br /><br />**New Behavior:** dfs.encryption.key.provider.uri is now deprecated in favor of hadoop.security.key.provider.path to configure the KMS provider path.|
+|**HDFS**|**N/A** |HDFS should support multiple `KMS Uris` |**Previous Behavior:** The dfs.encryption.key.provider.uri property was used to configure the KMS provider path.<br /><br />**New Behavior:** dfs.encryption.key.provider.uri is now deprecated in favor of hadoop.security.key.provider.path to configure the KMS provider path.|
|**Zeppelin**|[**ZEPPELIN-3271**](https://issues.apache.org/jira/browse/ZEPPELIN-3271)|Option for disabling scheduler |**Component Affected:** Zeppelin-Server<br /><br />**Previous Behavior:** In previous releases of Zeppelin, there was no option for disabling the scheduler.<br /><br />**New Behavior:** By default, users will no longer see the scheduler, as it's disabled by default.<br /><br />**Workaround/Expected Customer Action:** If you want to enable the scheduler, you will need to add zeppelin.notebook.cron.enable with a value of true under custom zeppelin site in Zeppelin settings from Ambari.|

### Known issues
Fixed issues represent selected issues that were previously logged via Hortonwor
- **HDInsight integration with ADLS Gen 2** There are two issues on HDInsight ESP clusters using Azure Data Lake Storage Gen 2 with user directories and permissions:
- 1. Home directories for users aren't getting created on Head Node 1. As a workaround, create the directories manually and change ownership to the respective user's UPN.
+ 1. Home directories for users aren't getting created on Head Node 1. As a workaround, create the directories manually and change ownership to the respective user's UPN (see the example after the permission commands below).
- 2. Permissions on /hdp directory are currently not set to 751. This needs to be set to
+ 2. Permissions on /hdp directory are currently not set to 751. This needs to be set to
+
   ```bash
   chmod 751 /hdp
   chmod -R 755 /hdp/apps
   ```
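For the first issue, the following is a minimal sketch of the manual workaround, assuming a hypothetical UPN (`user1@contoso.com`) and a local home directory path on Head Node 1; substitute the values that apply to your cluster.

```bash
# Hypothetical workaround sketch: create the missing home directory on Head Node 1
# and assign ownership to the user's UPN. The UPN and path below are placeholders.
sudo mkdir -p /home/user1@contoso.com
sudo chown 'user1@contoso.com' /home/user1@contoso.com
```

Repeat the same two commands for each affected user.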
Fixed issues represent selected issues that were previously logged via Hortonwor
After removing the above line, the Ranger UI will allow you to create policies with policy conditions that contain special characters, and policy evaluation will succeed for those policies.

**HDInsight Integration with ADLS Gen 2: User directories and permissions issue with ESP clusters**
- 1. Home directories for users aren't getting created on Head Node 1. Workaround is to create these manually and change ownership to the respective user's UPN.
+ 1. Home directories for users aren't getting created on Head Node 1. The workaround is to create these manually and change ownership to the respective user's UPN.
   2. Permissions on /hdp are currently not set to 751. This needs to be set to
      a. chmod 751 /hdp
      b. chmod -R 755 /hdp/apps
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 08/08/2023 Last updated : 09/13/2023 # Azure HDInsight release notes
Subscribe to the [HDInsight Release Notes](./subscribe-to-hdi-release-notes-rep
To subscribe, click the "watch" button in the banner and watch out for [HDInsight Releases](https://github.com/Azure/HDInsight/releases).
-## Release date: July 25, 2023
+## Release date: September 7th, 2023
-This release applies to HDInsight 4.x and 5.x HDInsight release will be available to all regions over several days. This release is applicable for image number **2307201242**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+This release applies to HDInsight 4.x and 5.x. The HDInsight release will be available to all regions over several days. This release is applicable for image number **2308221128**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
HDInsight uses safe deployment practices, which involve gradual region deployment. It may take up to 10 business days for a new release or a new version to be available in all regions.
For workload specific versions, see
* [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md) * [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
-## ![Icon showing Whats new.](./media/hdinsight-release-notes/whats-new.svg) What's new
-* HDInsight 5.1 is now supported with ESP cluster.
-* Upgraded version of Ranger 2.3.0 and Oozie 5.2.1 are now part of HDInsight 5.1
-* The Spark 3.3.1 (HDInsight 5.1) cluster comes with Hive Warehouse Connector (HWC) 2.1, which works together with the Interactive Query (HDInsight 5.1) cluster.
- > [!IMPORTANT]
-> This release addresses the following CVEs released by [MSRC](https://msrc.microsoft.com/update-guide/vulnerability) on August 8, 2023. The action is to update to the latest image **2307201242**. Customers are advised to plan accordingly.
+> This release addresses the following CVEs released by [MSRC](https://msrc.microsoft.com/update-guide/vulnerability) on September 12, 2023. The action is to update to the latest image **2308221128**. Customers are advised to plan accordingly.
|CVE | Severity| CVE Title| |-|-|-|
-|[CVE-2023-35393](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35393)| Important|Azure Apache Hive Spoofing Vulnerability|
-|[CVE-2023-35394](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35394)| Important|Azure HDInsight Jupyter Notebook Spoofing Vulnerability|
-|[CVE-2023-36877](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36877)| Important|Azure Apache Oozie Spoofing Vulnerability|
-|[CVE-2023-36881](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36881)| Important|Azure Apache Ambari Spoofing Vulnerability|
-|[CVE-2023-38188](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-38188)| Important|Azure Apache Hadoop Spoofing Vulnerability|
-
+|[CVE-2023-38156](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-38156)| Important | Azure HDInsight Apache Ambari Elevation of Privilege Vulnerability |
## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon
-* The max length of cluster name will be changed to 45 from 59 characters, to improve the security posture of clusters. Customers need to plan for the updates before 30, September 2023.
+* The maximum length of a cluster name will be changed from 59 to 45 characters to improve the security posture of clusters. This change will be implemented by September 30, 2023.
* Cluster permissions for secure storage
  * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to contact the storage account.
* In-line quota update.
  * Request quota increases directly from the My Quota page, which will be a direct API call and is faster. If the API call fails, customers need to create a new support request for the quota increase.
* HDInsight Cluster Creation with Custom VNets.
- * To improve the overall security posture of the HDInsight clusters, HDInsight clusters using custom VNETs need to ensure that the user needs to have permission for `Microsoft Network/virtualNetworks/subnets/join/action` to perform create operations. Customers would need to plan accordingly as this change would be a mandatory check to avoid cluster creation failures before 30, September 2023.ΓÇ»
+ * To improve the overall security posture of HDInsight clusters, clusters that use custom VNets need to ensure that the user has the `Microsoft Network/virtualNetworks/subnets/join/action` permission to perform create operations. Customers need to plan accordingly, because this change becomes a mandatory check before September 30, 2023 to avoid cluster creation failures.
* Basic and Standard A-series VMs Retirement.
- * On 31 August 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before 31, August 2024.
+ * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024.
* Non-ESP ABFS clusters [Cluster Permissions for Word Readable]
- * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change to improve cluster security posture. Customers need to plan for the updates before 30 September 2023.ΓÇ»
+ * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change improves cluster security posture. Customers need to plan for the updates before September 30, 2023.
If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview). You can always ask us about HDInsight on [Azure HDInsight - Microsoft Q&A](/answers/tags/168/azure-hdinsight)
-You're welcome to add more proposals and ideas and other topics here and vote for them - [HDInsight Community (azure.com)](https://feedback.azure.com/d365community/search/?q=HDInsight) and follow us for more updates on [twitter](https://twitter.com/AzureHDInsight)
+You're welcome to add proposals, ideas, and other topics here and vote for them - [HDInsight Community (azure.com)](https://feedback.azure.com/d365community/search/?q=HDInsight).
> [!NOTE] > We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open-source updates, Azure updates, and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
healthcare-apis Events Disable Delete Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-disable-delete-workspace.md
Title: How to disable the events and delete Azure Health Data Services workspaces - Azure Health Data Services
+ Title: How to disable events and delete Azure Health Data Services workspaces - Azure Health Data Services
description: Learn how to disable events and delete Azure Health Data Services workspaces.
Last updated 07/11/2023
-# How to disable the events and delete Azure Health Data Services workspaces
+# How to disable events and delete Azure Health Data Services workspaces
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
To disable events from sending event messages for a single **Event Subscription*
:::image type="content" source="media/disable-delete-workspaces/events-disable-no-subscriptions.png" alt-text="Screenshot of Events subscriptions and delete all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/events-disable-no-subscriptions.png"::: > [!NOTE]
-> The FHIR service will automatically go into an **Updating** status to disable the Events extension when a full delete of Event Subscriptions is executed. The FHIR service will remain online while the operation is completing.
+> The FHIR service will automatically go into an **Updating** status to disable events when a full delete of **Event Subscriptions** is executed. The FHIR service will remain online while the operation is completing.
## Delete workspaces
iot-hub-device-update Device Update Raspberry Pi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-raspberry-pi.md
Read the license terms prior to using the agent. Your installation and use const
} ```
+ :::image type="content" source="media/import-update/device-twin-ppr.png" alt-text="Screenshot that shows twin with tag information." lightbox="media/import-update/device-twin-ppr.png":::
+
+ _This screenshot shows the section where the tag needs to be added in the twin._
+
+ ## Import the update 1. Download the sample tutorial manifest and sample update (.swu file) and the sample A/B script from [Tutorial_RaspberryPi.zip](https://github.com/Azure/iot-hub-device-update/releases/download/1.0.2/Tutorial_RaspberryPi.zip) under **Release Assets** for the latest agent.
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
lab-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md
Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
Health probes have the following properties:
| Health Probe property name | Details| | | |
-| Name | Name of the health probe. This is a naame you get to define for your health probe |
+| Name | Name of the health probe. This is a name you get to define for your health probe |
| Protocol | Protocol of the health probe. This is the protocol type you would like the health probe to use. Options are: TCP, HTTP, HTTPS |
| Port | Port of the health probe. The destination port you would like the health probe to use when it connects to the virtual machine to check its health |
| Interval (seconds) | Interval of the health probe. The amount of time (in seconds) between two consecutive health check attempts to the virtual machine |
The protocol used by the health probe can be configured to one of the following
## Probe interval
-The interval value determines how frequently the health probe checks for a response from your backend pool instances. If the health probe fails, your backend pool instances are immediately marked as unhealthy. If the health probe succeeds on the next healthy probe up, Azure Load Balancer marks your backend pool instances as healthy. The health probe attempts to check the configured health probe port every 15 seconds by default but can be explicitly set to another value.
+The interval value determines how frequently the health probe checks for a response from your backend pool instances. If the health probe fails, your backend pool instances are immediately marked as unhealthy. If the health probe succeeds on the next healthy probe up, Azure Load Balancer marks your backend pool instances as healthy. The health probe attempts to check the configured health probe port every 5 seconds by default but can be explicitly set to another value.
It is important to note that probes also have a timeout period. For example, if a health probe interval is set to 15 seconds and the timeout is 5 seconds, the total time it takes for your health probe to reflect the state of your application would be 20 seconds (interval + timeout period). In that case, assume the reaction to a timeout response takes a minimum of 5 seconds and a maximum of 10 seconds. This example is provided to illustrate what is taking place.
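As an illustration only, the following Azure CLI sketch creates a TCP health probe with an explicit 15-second interval. The resource group, load balancer, and probe names are placeholders, and you should confirm the exact parameters with `az network lb probe create --help` for your CLI version.

```bash
# Hedged sketch: create a TCP health probe with a custom 15-second interval.
# MyResourceGroup, MyLoadBalancer, and MyHealthProbe are placeholder names.
az network lb probe create \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name MyHealthProbe \
  --protocol Tcp \
  --port 80 \
  --interval 15
```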
logic-apps Logic Apps Control Flow Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-control-flow-loops.md
Title: Add loops to repeat actions
-description: Create loops that repeat workflow actions or process arrays in Azure Logic Apps.
+description: Create loops to repeat actions or process arrays in workflows using Azure Logic Apps.
ms.suite: integration Previously updated : 09/01/2022 Last updated : 09/13/2023
-# Create loops that repeat workflow actions or process arrays in Azure Logic Apps
+# Create loops that repeat actions or process arrays in workflows with Azure Logic Apps
-To process an array in your logic app workflow, you can create a [For each loop](#foreach-loop). This loop repeats one or more actions on each item in the array. For the limit on the number of array items that a "For each" loop can process, see [Concurrency, looping, and debatching limits](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits).
+Azure Logic Apps includes the following loop actions that you can use in your workflow:
-To repeat actions until a condition gets met or a state changes, you can create an [Until loop](#until-loop). Your workflow first runs all the actions inside the loop, and then checks the condition or state. If the condition is met, the loop stops. Otherwise, the loop repeats. For the default and maximum limits on the number of "Until" loops that a workflow can have, see [Concurrency, looping, and debatching limits](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits).
+* To repeat one or more actions on an array, add the [**For each** action](#foreach-loop) to your workflow.
-> [!TIP]
-> If you have a trigger that receives an array and want to run a workflow for each array item, you can *debatch* that array
-> with the [**SplitOn** trigger property](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch).
+ Alternatively, if you have a trigger that receives an array and want to run an iteration for each array item, you can *debatch* that array with the [**SplitOn** trigger property](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch).
+
+* To repeat one or more actions until a condition gets met or a state changes, add the [**Until** action](#until-loop) to your workflow. A code-view sketch of an **Until** action appears after this list.
+
+ Your workflow first runs all the actions inside the loop, and then checks the condition or state. If the condition is met, the loop stops. Otherwise, the loop repeats. For the default and maximum limits on the number of **Until** loops that a workflow can have, see [Concurrency, looping, and debatching limits](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits).
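
As a rough illustration, the following code-view sketch shows the general shape of an **Until** action. The expression, the variable named `Status`, the loop limits, and the inner **Delay** action are placeholders for this sketch, not part of this article's example.

```json
"Until": {
    "type": "Until",
    "expression": "@equals(variables('Status'), 'Completed')",
    "limit": {
        "count": 60,
        "timeout": "PT1H"
    },
    "actions": {
        "Delay": {
            "type": "Wait",
            "inputs": {
                "interval": {
                    "count": 1,
                    "unit": "Minute"
                }
            },
            "runAfter": {}
        }
    },
    "runAfter": {}
}
```

Here, `limit.count` and `limit.timeout` cap how long the loop can run, and the loop exits as soon as the expression evaluates to true.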
## Prerequisites
-* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Basic knowledge about [logic app workflows](../logic-apps/logic-apps-overview.md) <a name="foreach-loop"></a>
-## "For each" loop
+## For each
+
+The **For each** action repeats one or more actions on each array item and works only on arrays. The following list contains some considerations for when you want to use a **For each** action:
+
+* The **For each** action can process a limited number of array items. For this limit, see [Concurrency, looping, and debatching limits](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits).
+
+* By default, the cycles or iterations in a **For each** action run at the same time in parallel.
+
+ This behavior differs from [Power Automate's **Apply to each** loop](/power-automate/apply-to-each) where iterations run one at a time, or sequentially. However, you can [set up sequential **For each** iterations](#sequential-foreach-loop). For example, if you want to pause the next iteration in a **For each** action by using the [Delay action](../connectors/connectors-native-delay.md), you need to set up each iteration to run sequentially.
+
+ As an exception to the default behavior, a nested **For each** action's iterations always run sequentially, not in parallel. To run operations in parallel for items in a nested loop, create and [call a child logic app workflow](../logic-apps/logic-apps-http-endpoint.md).
+
+* To get predictable results from operations on variables during each iteration, run the iterations sequentially. For example, when a concurrently running iteration ends, the **Increment variable**, **Decrement variable**, and **Append to variable** operations return predictable results. However, during each iteration in the concurrently running loop, these operations might return unpredictable results.
+
+* Actions in a **For each** loop use the [`item()` function](../logic-apps/workflow-definition-language-functions-reference.md#item) to reference and process each item in the array. If you specify data that's not in an array, the workflow fails. A minimal code-view sketch that uses `item()` appears after this list.
+
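To make the `item()` reference concrete, here's a minimal code-view sketch (not this article's full example, which appears in the JSON section later) that composes each array item from a hypothetical trigger output named `links`:

```json
"For_each": {
    "type": "Foreach",
    "foreach": "@triggerBody()?['links']",
    "actions": {
        "Compose_current_item": {
            "type": "Compose",
            "inputs": "@item()",
            "runAfter": {}
        }
    },
    "runAfter": {}
}
```

During each iteration, `@item()` resolves to the current element of the `links` array.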
+The following example workflow sends a daily summary for a website RSS feed. The workflow uses a **For each** action that sends an email for each new item.
+
+Follow the steps based on whether you create a Consumption or Standard logic app workflow.
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), create an example Consumption logic app workflow with the following steps in the specified order:
+
+ * The **RSS** trigger named **When a feed item is published**
+
+ For more information, [follow these general steps to add a trigger](create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
+
+ * The **Outlook.com** or **Office 365 Outlook** action named **Send an email**
+
+ For more information, [follow these general steps to add an action](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+
+1. [Follow the same general steps](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action) to add the **For each** action between the RSS trigger and **Send an email** action in your workflow.
+
+1. Now build the loop:
+
+ 1. Select inside the **Select an output from previous steps** box so that the dynamic content list opens.
+
+ 1. In the **Add dynamic content** list, from the **When a feed item is published** section, select **Feed links**, which is an array output from the RSS trigger.
+
+ > [!NOTE]
+ >
+ > If the **Feed links** output doesn't appear, next to the trigger section label, select **See more**.
+ > From the dynamic content list, you can select *only* outputs from previous steps.
+
+ ![Screenshot shows Azure portal, Consumption workflow designer, action named For each, and opened dynamic content list.](media/logic-apps-control-flow-loops/for-each-select-feed-links-consumption.png)
+
+ When you're done, the selected array output appears as in the following example:
+
+ ![Screenshot shows Consumption workflow, action named For each, and selected array output.](media/logic-apps-control-flow-loops/for-each-selected-array-consumption.png)
+
+ 1. To run an existing action on each array item, drag the **Send an email** action into the **For each** loop.
-A "For each" loop repeats one or more actions on each array item and works only on arrays. Here are some considerations when you use "For each" loops:
+ Now, your workflow looks like the following example:
-* The "For each" loop can process a limited number of array items. For this limit, see [Concurrency, looping, and debatching limits](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits).
+ ![Screenshot shows Consumption workflow, action named For each, and action named Send an email, now inside For each loop.](media/logic-apps-control-flow-loops/for-each-with-last-action-consumption.png)
-* By default, iterations in a "For each" loop run at the same time, or in parallel. This behavior differs from [Power Automate's **Apply to each** loop](/power-automate/apply-to-each) where iterations run one at a time, or sequentially. However, you can [set up sequential "For each" loop iterations](#sequential-foreach-loop). For example, if you want to pause the next iteration in a "Foreach" loop by using the [Delay action](../connectors/connectors-native-delay.md), you need to set the loop to run sequentially.
+1. When you're done, save your workflow.
- The exception to the default behavior are nested loops where iterations always run sequentially, not in parallel. To run operations in parallel for items in a nested loop, create and [call a child logic app workflow](../logic-apps/logic-apps-http-endpoint.md).
+1. To manually test your workflow, on the designer toolbar, select **Run Trigger** > **Run**.
-* To get predictable results from operations on variables during each loop iteration, run those loops sequentially. For example, when a concurrently running loop ends, the increment, decrement, and append to variable operations return predictable results. However, during each iteration in the concurrently running loop, these operations might return unpredictable results.
+### [Standard](#tab/standard)
-* Actions in a "For each" loop use the [`@item()`](../logic-apps/workflow-definition-language-functions-reference.md#item) expression to reference and process each item in the array. If you specify data that's not in an array,
-the logic app workflow fails.
+1. In the [Azure portal](https://portal.azure.com), create an example Standard logic app workflow with the following steps in the specified order:
-This example logic app workflow sends a daily summary for a website RSS feed. The workflow uses a "For each" loop that sends an email for each new item.
+ * The **RSS** trigger named **When a feed item is published**
-1. [Create this example Consumption logic app workflow](../logic-apps/quickstart-create-example-consumption-workflow.md) with an Outlook.com account or a work or school account.
+ For more information, [follow these general steps to add a trigger](create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
-2. Between the RSS trigger and send email action, add a "For each" loop.
+ * The **Outlook.com** or **Office 365 Outlook** action named **Send an email**
- 1. To add a loop between steps, move your pointer over the arrow between those steps. Select the **plus sign** (**+**) that appears, then select **Add an action**.
+ For more information, [follow these general steps to add an action](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- ![Select "Add an action"](media/logic-apps-control-flow-loops/add-for-each-loop.png)
+1. [Follow the same general steps](create-workflow-with-trigger-or-action.md?tabs=standard#add-action) to add the **For each** action between the RSS trigger and **Send an email** action in your workflow.
- 1. Under the search box, select **All**. In the search box, enter **for each**. From the actions list,
- select the Control action named **For each**.
+1. Now build the loop:
- ![Add "For each" loop](media/logic-apps-control-flow-loops/select-for-each.png)
+ 1. On the designer, make sure that the **For each** action is selected.
-3. Now build the loop. Under **Select an output from previous steps**
-after the **Add dynamic content** list appears,
-select the **Feed links** array, which is output from the RSS trigger.
+ 1. On the action information pane, select inside the **Select an output from previous steps** box so that the options for the dynamic content list (lightning icon) and expression editor (formula icon) appear. Select the dynamic content list option.
- ![Select from dynamic content list](media/logic-apps-control-flow-loops/for-each-loop-dynamic-content-list.png)
+ ![Screenshot shows Azure portal, Standard workflow designer, action named For each, and selected lightning icon.](media/logic-apps-control-flow-loops/for-each-open-dynamic-content.png)
- > [!NOTE]
- > You can select *only* array outputs from the previous step.
+ 1. In the **Add dynamic content** list, from the **When a feed item is published** section, select **Feed links**, which is an array output from the RSS trigger.
- The selected array now appears here:
+ > [!NOTE]
+ >
+ > If the **Feed links** output doesn't appear, next to the trigger section label, select **See more**.
+ > From the dynamic content list, you can select *only* outputs from previous steps.
- ![Select array](media/logic-apps-control-flow-loops/for-each-loop-select-array.png)
+ ![Screenshot shows Azure portal, Standard workflow designer, action named For each, and opened dynamic content list.](media/logic-apps-control-flow-loops/for-each-select-feed-links-standard.png)
-4. To run an action on each array item, drag the **Send an email** action into the loop.
+ When you're done, the selected array output appears as in the following example:
- Your workflow might look something like this example:
+ ![Screenshot shows Standard workflow, action named For each, and selected array output.](media/logic-apps-control-flow-loops/for-each-selected-array-standard.png)
- ![Add steps to "Foreach" loop](media/logic-apps-control-flow-loops/for-each-loop-with-step.png)
+ 1. To run an existing action on each array item, drag the **Send an email** action into the **For each** loop.
-5. Save your workflow. To manually test your logic app, on the designer toolbar, select **Run Trigger** > **Run**.
+ Now, your workflow looks like the following example:
+
+ ![Screenshot shows Standard workflow, action named For each, and action named Send an email, now inside For each loop.](media/logic-apps-control-flow-loops/for-each-with-last-action-standard.png)
+
+1. When you're done, save your workflow.
+
+1. To manually test your workflow, on the workflow menu, select **Overview**. On the **Overview** toolbar, select **Run** > **Run**.
++ <a name="for-each-json"></a>
-## "Foreach" loop definition (JSON)
+## For each action definition (JSON)
-If you're working in code view for your logic app,
-you can define the `Foreach` loop in your
-logic app's JSON definition instead, for example:
+If you're working in code view, you can define the `For_each` action in your workflow's JSON definition, for example:
``` json "actions": {
- "myForEachLoopName": {
- "type": "Foreach",
+ "For_each": {
"actions": {
- "Send_an_email": {
+ "Send_an_email_(V2)": {
"type": "ApiConnection", "inputs": { "body": {
logic app's JSON definition instead, for example:
"To": "me@contoso.com" }, "host": {
- "api": {
- "runtimeUrl": "https://logic-apis-westus.azure-apim.net/apim/office365"
- },
"connection": { "name": "@parameters('$connections')['office365']['connectionId']" } }, "method": "post",
- "path": "/Mail"
+ "path": "/v2/Mail"
}, "runAfter": {} } }, "foreach": "@triggerBody()?['links']",
- "runAfter": {}
+ "runAfter": {},
+ "type": "Foreach"
}
-}
+},
``` <a name="sequential-foreach-loop"></a>
-## "Foreach" loop: Sequential
+## For each: Run sequentially
+
+By default, the iterations in a **For each** loop run at the same time in parallel. However, when you have nested loops or variables inside the loops where you expect predictable results, you must run those loops one at a time or sequentially.
+
+### [Consumption](#tab/consumption)
+
+1. In the **For each** action's upper right corner, select **ellipses** (**...**) > **Settings**.
+
+1. Under **Concurrency Control**, change the setting from **Off** to **On**.
-By default, cycles in a "Foreach" loop run in parallel.
-To run each cycle sequentially, set the loop's **Sequential** option.
-"Foreach" loops must run sequentially when you have nested
-loops or variables inside loops where you expect predictable results.
+1. Move the **Degree of Parallelism** slider to **1**, and select **Done**.
-1. In the loop's upper right corner, choose **ellipses** (**...**) > **Settings**.
+ ![Screenshot shows Consumption workflow, action named For each, concurrency control setting turned on, and degree of parallelism slider set to 1.](media/logic-apps-control-flow-loops/for-each-sequential-consumption.png)
- ![On "Foreach" loop, choose "..." > "Settings"](media/logic-apps-control-flow-loops/for-each-loop-settings.png)
+### [Standard](#tab/standard)
-1. Under **Concurrency Control**, turn the
-**Concurrency Control** setting to **On**.
-Move the **Degree of Parallelism** slider to **1**,
-and choose **Done**.
+1. On the **For each** action's information pane, under **General**, select **Settings**.
- ![Turn on concurrency control](media/logic-apps-control-flow-loops/for-each-loop-sequential-setting.png)
+1. Under **Concurrency Control**, change the setting from **Off** to **On**.
-If you're working with your logic app's JSON definition,
-you can use the `Sequential` option by adding the
-`operationOptions` parameter, for example:
+1. Move the **Degree of Parallelism** slider to **1**.
+
+ ![Screenshot shows Standard workflow, action named For each, concurrency control setting turned on, and degree of parallelism slider set to 1.](media/logic-apps-control-flow-loops/for-each-sequential-standard.png)
+++
+## For each action definition (JSON): Run sequentially
+
+If you're working in code view with the `For_each` action in your workflow's JSON definition, you can use the `Sequential` option by adding the `operationOptions` parameter, for example:
``` json "actions": {
- "myForEachLoopName": {
- "type": "Foreach",
+ "For_each": {
"actions": {
- "Send_an_email": { }
+ "Send_an_email_(V2)": { }
}, "foreach": "@triggerBody()?['links']", "runAfter": {},
+ "type": "Foreach",
"operationOptions": "Sequential" } }
you can use the `Sequential` option by adding the
<a name="until-loop"></a>
-## "Until" loop
-
-To run and repeat actions until a condition gets met or a state changes, put those actions in an "Until" loop. Your logic app first runs any and all actions inside the loop, and then checks the condition or state. If the condition is met, the loop stops. Otherwise, the loop repeats. For the default and maximum limits on the number of "Until" loops that a logic app run can have, see [Concurrency, looping, and debatching limits](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits).
+## Until
+
+The **Until** action runs and repeats one or more actions until the specified condition is met. If the condition is met, the loop stops. Otherwise, the loop repeats. For the default and maximum limits on the number of **Until** actions or iterations that a workflow can have, see [Concurrency, looping, and debatching limits](logic-apps-limits-and-config.md#looping-debatching-limits).
-Here are some common scenarios where you can use an "Until" loop:
+The following list contains some common scenarios where you can use an **Until** action:
* Call an endpoint until you get the response you want.
-* Create a record in a database. Wait until a specific field in that record gets approved. Continue processing.
+* Create a record in a database. Wait until a specific field in that record gets approved. Continue processing.
-Starting at 8:00 AM each day, this example logic app increments a variable until the variable's value equals 10. The logic app then sends an email that confirms the current value.
+In the following example workflow, starting at 8:00 AM each day, the **Until** action increments a variable until the variable's value equals 10. The workflow then sends an email that confirms the current value.
> [!NOTE]
-> These steps use Office 365 Outlook, but you can
-> use any email provider that Logic Apps supports.
-> [Check the connectors list here](/connectors/).
-> If you use another email account, the general steps stay the same,
-> but your UI might look slightly different.
+>
+> This example uses Office 365 Outlook, but you can use [any email provider that Azure Logic Apps supports](/connectors/).
+> If you use another email account, the general steps stay the same, but your UI might look slightly different.
-1. Create a blank logic app. In Logic App Designer,
- under the search box, choose **All**. Search for "recurrence".
- From the triggers list, select this trigger: **Recurrence - Schedule**
+### [Consumption](#tab/consumption)
- ![Add "Recurrence - Schedule" trigger](./media/logic-apps-control-flow-loops/do-until-loop-add-trigger.png)
+1. In the [Azure portal](https://portal.azure.com), create a Consumption logic app resource with a blank workflow.
-1. Specify when the trigger fires by setting the interval, frequency,
- and hour of the day. To set the hour, choose **Show advanced options**.
+1. In the designer, [follow these general steps to add the **Recurrence** built-in trigger named **Schedule** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
- ![Set up recurrence schedule](./media/logic-apps-control-flow-loops/do-until-loop-set-trigger-properties.png)
+1. In the **Recurrence** trigger, specify the interval, frequency, and hour of the day for the trigger to fire.
| Property | Value |
- | -- | -- |
- | **Interval** | 1 |
- | **Frequency** | Day |
- | **At these hours** | 8 |
- |||
+ |-|-|
+ | **Interval** | **1** |
+ | **Frequency** | **Day** |
+ | **At these hours** | **8** |
+
+ To add the **At these hours** parameter, open the **Add new parameter** list, and select **At these hours**, which appears only after you set **Frequency** to **Day**.
-1. Under the trigger, choose **New step**.
- Search for "variables", and select this action:
- **Initialize variable - Variables**
+ ![Screenshot shows Azure portal, Consumption workflow designer, and Recurrence trigger parameters with selected option for At these hours.](./media/logic-apps-control-flow-loops/do-until-trigger-consumption.png)
- ![Add "Initialize variable - Variables" action](./media/logic-apps-control-flow-loops/do-until-loop-add-variable.png)
+ When you're done, the **Recurrence** trigger looks like the following example:
-1. Set up your variable with these values:
+ ![Screenshot shows Azure portal, Consumption workflow, and Recurrence trigger parameters set up.](./media/logic-apps-control-flow-loops/do-until-trigger-complete-consumption.png)
- ![Set variable properties](./media/logic-apps-control-flow-loops/do-until-loop-set-variable-properties.png)
+1. Under the trigger, [follow these general steps to add the **Variables** built-in action named **Initialize variable** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+
+1. In the **Initialize variable** action, provide the following values:
| Property | Value | Description |
- | -- | -- | -- |
- | **Name** | Limit | Your variable's name |
- | **Type** | Integer | Your variable's data type |
- | **Value** | 0 | Your variable's starting value |
- ||||
+ |-|-|-|
+ | **Name** | **Limit** | Your variable's name |
+ | **Type** | **Integer** | Your variable's data type |
+ | **Value** | **0** | Your variable's starting value |
-1. Under the **Initialize variable** action, choose **New step**.
+ ![Screenshot shows Azure portal, Consumption workflow, and parameters for built-in action named Initialize variable.](./media/logic-apps-control-flow-loops/do-until-loop-variable-properties-consumption.png)
-1. Under the search box, choose **All**. Search for "until",
- and select this action: **Until - Control**
+1. Under the **Initialize variable** action, [follow these general steps to add the **Control** built-in action named **Until** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
- ![Add "Until" loop](./media/logic-apps-control-flow-loops/do-until-loop-add-until-loop.png)
+1. In the **Until** action, provide the following values to set up the stop condition for the loop.
-1. Build the loop's exit condition by selecting
- the **Limit** variable and the **is equal** operator.
- Enter **10** as the comparison value.
+ 1. Select inside the leftmost box named **Choose a value**, which automatically opens the dynamic content list.
- ![Build exit condition for stopping loop](./media/logic-apps-control-flow-loops/do-until-loop-settings.png)
+ 1. From the list, under **Variables**, select the variable named **Limit**.
-1. Inside the loop, choose **Add an action**.
+ 1. From the middle operator list, select the **is equal to** operator.
-1. Under the search box, choose **All**. Search for "variables",
- and select this action: **Increment variable - Variables**
+ 1. In the rightmost box named **Choose a value**, enter **10** as the comparison value.
- ![Add action for incrementing variable](./media/logic-apps-control-flow-loops/do-until-loop-increment-variable.png)
+ ![Screenshot shows Consumption workflow and built-in action named Until with finished stop condition.](./media/logic-apps-control-flow-loops/do-until-loop-settings-consumption.png)
-1. For **Name**, select the **Limit** variable. For **Value**,
- enter "1".
+1. Inside the **Until** action, select **Add an action**.
- ![Increment "Limit" by 1](./media/logic-apps-control-flow-loops/do-until-loop-increment-variable-settings.png)
+1. In the **Choose an operation** search box, [follow these general steps to add the **Variables** built-in action named **Increment variable** to the **Until** action](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Outside and under the loop, choose **New step**.
+1. In the **Increment variable** action, provide the following values to increment the **Limit** variable's value by 1:
-1. Under the search box, choose **All**.
- Find and add an action that sends email,
- for example:
+ | Property | Value |
+ |-|-|
+ | **Name** | Select the **Limit** variable. |
+ | **Value** | **1** |
- ![Add action that sends email](media/logic-apps-control-flow-loops/do-until-loop-send-email.png)
+ ![Screenshot shows Consumption workflow and built-in action named Until with Name set to the Limit variable and Value set to 1.](./media/logic-apps-control-flow-loops/do-until-loop-increment-variable-consumption.png)
-1. If prompted, sign in to your email account.
+1. Outside and under the **Until** action, [follow these general steps to add an action that sends email](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Set the email action's properties. Add the **Limit**
- variable to the subject. That way, you can confirm the
- variable's current value meets your specified condition,
- for example:
+ This example continues with the **Office 365 Outlook** action named **Send an email**.
- ![Set up email properties](./media/logic-apps-control-flow-loops/do-until-loop-send-email-settings.png)
+1. In the email action, provide the following values:
- | Property | Value | Description |
- | -- | -- | -- |
- | **To** | *\<email-address\@domain>* | The recipient's email address. For testing, use your own email address. |
- | **Subject** | Current value for "Limit" is **Limit** | Specify the email subject. For this example, make sure that you include the **Limit** variable. |
- | **Body** | <*email-content*> | Specify the email message content you want to send. For this example, enter whatever text you like. |
- ||||
+ | Property | Value | Description |
+ |-|-|-|
+ | **To** | <*email-address\@domain*> | The recipient's email address. For testing, use your own email address. |
+ | **Subject** | **Current value for "Limit" variable is:** **Limit** | The email subject. For this example, make sure that you include the **Limit** variable to confirm that the current value meets your specified condition: <br><br>1. Select inside the **Subject** box so that the dynamic content list appears. <br><br>2. In the dynamic content list, next to the **Variables** section header, select **See more**. <br><br>3. Select **Limit**. |
+ | **Body** | <*email-content*> | The email message content that you want to send. For this example, enter whatever text you want. |
-1. Save your logic app. To manually test your logic app,
- on the designer toolbar, choose **Run**.
+ When you're done, your email action looks similar to the following example:
- After your logic starts running, you get an email with the content that you specified:
+ ![Screenshot shows Consumption workflow and action named Send an email with property values.](./media/logic-apps-control-flow-loops/do-until-loop-send-email-consumption.png)
- ![Received email](./media/logic-apps-control-flow-loops/do-until-loop-sent-email.png)
+1. Save your workflow.
-<a name="prevent-endless-loops"></a>
+### [Standard](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), create a Standard logic app resource with a blank workflow.
+
+1. In the designer, [follow these general steps to add the **Recurrence** built-in trigger named **Schedule** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+1. In the **Recurrence** trigger information pane, on the **Parameters** tab, specify the interval, frequency, and hour of the day for the trigger to fire.
+
+ The **At These Hours** parameter appears only after you set **Frequency** to **Day**.
+
+ | Property | Value |
+ |-|-|
+ | **Interval** | **1** |
+ | **Frequency** | **Day** |
+ | **At These Hours** | **8** |
+
+ When you're done, the **Recurrence** trigger information pane looks like the following example:
+
+ ![Screenshot shows Azure portal, Standard workflow, and Recurrence trigger parameters set up.](./media/logic-apps-control-flow-loops/do-until-trigger-standard.png)
+
+1. Under the trigger, [follow these general steps to add the **Variables** built-in action named **Initialize variable** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+1. In the **Initialize variable** action information pane, on the **Parameters** tab, provide the following values:
+
+ | Property | Value | Description |
+ |-|-|-|
+ | **Name** | **Limit** | Your variable's name |
+ | **Type** | **Integer** | Your variable's data type |
+ | **Value** | **0** | Your variable's starting value |
+
+ ![Screenshot shows Azure portal, Standard workflow, and parameters for built-in action named Initialize variable.](./media/logic-apps-control-flow-loops/do-until-loop-variable-properties-standard.png)
+
+1. Under the **Initialize variable** action, [follow these general steps to add the **Control** built-in action named **Until** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+1. In the **Until** action information pane, on the **Parameters** tab, provide the following values to set up the stop condition for the loop.
+
+ 1. Under **Loop Until**, select inside the leftmost box named **Choose a value**, and select the option to open the dynamic content list (lightning icon).
+
+ 1. From the list, under **Variables**, select the variable named **Limit**.
+
+ 1. From the middle operator list, select the **is equal to** operator.
+
+ 1. In the rightmost box named **Choose a value**, enter **10** as the comparison value.
+
+ ![Screenshot shows Standard workflow and built-in action named Until with finished stop condition.](./media/logic-apps-control-flow-loops/do-until-loop-settings-standard.png)
+
+1. Inside the **Until** action, select the plus (**+**) sign, and then select **Add an action**.
+
+1. [Follow these general steps to add the **Variables** built-in action named **Increment variable** to the **Until** action](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+1. In the **Increment variable** action information pane, provide the following values to increment the **Limit** variable's value by 1:
+
+ | Property | Value |
+ |-|-|
+ | **Name** | Select the **Limit** variable. |
+ | **Value** | **1** |
+
+ ![Screenshot shows Standard workflow and built-in action named Until with Name set to the Limit variable and Value set to 1.](./media/logic-apps-control-flow-loops/do-until-loop-increment-variable-standard.png)
+
+1. Outside and under the **Until** action, [follow these general steps to add an action that sends email](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+ This example continues with the **Office 365 Outlook** action named **Send an email**.
+
+1. In the email action information pane, on the **Parameters** tab, provide the following values:
+
+ | Property | Value | Description |
+ |-|-|-|
+ | **To** | <*email-address\@domain*> | The recipient's email address. For testing, use your own email address. |
+ | **Subject** | **Current value for "Limit" variable is:** **Limit** | The email subject. For this example, make sure that you include the **Limit** variable to confirm that the current value meets your specified condition: <br><br>1. Select inside the **Subject** box so that the dynamic content list appears. <br><br>2. In the dynamic content list, next to the **Variables** section header, select **See more**. <br><br>3. Select **Limit**. |
+ | **Body** | <*email-content*> | The email message content that you want to send. For this example, enter whatever text you want. |
+
+ When you're done, your email action looks similar to the following example:
+
+ ![Screenshot shows Standard workflow and action named Send an email with property values.](./media/logic-apps-control-flow-loops/do-until-loop-send-email-standard.png)
+
+1. Save your workflow.
+++
+### Test your workflow
-## Prevent endless loops
+To manually test your logic app workflow, follow the steps based on whether you have a Consumption or Standard logic app.
-The "Until" loop stops execution based on these properties, so make sure that you set their values accordingly:
+### [Consumption](#tab/consumption)
-* **Count**: This value is the highest number of loops that run before the loop exits. For the default and maximum limits on the number of "Until" loops that a logic app run can have, see [Concurrency, looping, and debatching limits](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits).
+On the designer toolbar, select **Run Trigger** > **Run**.
-* **Timeout**: This value is the most amount of time that the "Until" action, including all the loops, runs before exiting and is specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). For the default and maximum limits on the **Timeout** value, see [Concurrency, looping, and debatching limits](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits).
+### [Standard](#tab/standard)
- The timeout value is evaluated for each loop cycle. If any action in the loop takes longer than the timeout limit, the current cycle doesn't stop. However, the next cycle doesn't start because the limit condition isn't met.
+1. On the workflow menu, select **Overview**.
-To change these limits, in the loop action, select **Change limits**.
+1. On the **Overview** page toolbar, select **Run** > **Run**.
+++
+After your workflow starts running, you get an email with the content that you specified:
+
+![Screenshot shows sample email received from example workflow.](./media/logic-apps-control-flow-loops/do-until-loop-sent-email.png)
+
+<a name="prevent-endless-loops"></a>
+
+### Prevent endless loops
+
+The **Until** action stops execution based on the following properties, which you can view by selecting **Change limits** in the action. Make sure that you set these property values accordingly:
+
+| Property | Description |
+|-|-|
+| **Count** | The maximum number of iterations that run before the loop exits. <br><br>For the default and maximum limits on the number of **Until** actions that a workflow can have, see [Concurrency, looping, and debatching limits](logic-apps-limits-and-config.md#looping-debatching-limits). |
+| **Timeout** | The maximum amount of time that the **Until** action, including all iterations, runs before the loop exits. This value is specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601), for example, `PT1H` for one hour, and is evaluated for each iteration. <br><br>If any action in the loop takes longer than the timeout limit, the current iteration doesn't stop. However, the next iteration doesn't start because the timeout limit is reached. <br><br>For the default and maximum limits on the **Timeout** value, see [Concurrency, looping, and debatching limits](logic-apps-limits-and-config.md#looping-debatching-limits). |
<a name="until-json"></a> ## "Until" definition (JSON)
-If you're working in code view for your logic app,
-you can define an `Until` loop in your logic app's
-JSON definition instead, for example:
+If you're working in code view, you can define an `Until` action in your workflow's JSON definition, for example:
``` json "actions": {
JSON definition instead, for example:
} ```
-This example "Until" loop calls an HTTP endpoint,
-which creates a resource. The loop stops when the
-HTTP response body returns with `Completed` status.
-To prevent endless loops, the loop also stops
-if any of these conditions happen:
+This example **Until** loop calls an HTTP endpoint, which creates a resource. The loop stops when the
+HTTP response body returns with `Completed` status. To prevent endless loops, the loop also stops
+if any of the following conditions happen:
-* The loop ran 10 times as specified by the `count` attribute.
-The default is 60 times.
+* The loop ran 10 times as specified by the `count` attribute. The default is 60 times.
+
+* The loop ran for two hours as specified by the `timeout` attribute in ISO 8601 format. The default is one hour.
-* The loop ran for two hours as specified by the `timeout` attribute in ISO 8601 format.
-The default is one hour.
-
``` json "actions": { "myUntilLoopName": {
The default is one hour.
} ```
-## Get support
-
-* For questions, visit the
-[Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html).
-* To submit or vote on features and suggestions,
-[Azure Logic Apps user feedback site](https://aka.ms/logicapps-wish).
- ## Next steps
-* [Run steps based on a condition (condition action)](../logic-apps/logic-apps-control-flow-conditional-statement.md)
-* [Run steps based on different values (switch action)](../logic-apps/logic-apps-control-flow-switch-statement.md)
-* [Run or merge parallel steps (branches)](../logic-apps/logic-apps-control-flow-branches.md)
-* [Run steps based on grouped action status (scopes)](../logic-apps/logic-apps-control-flow-run-steps-group-scopes.md)
+* [Run steps based on a condition (condition action)](logic-apps-control-flow-conditional-statement.md)
+* [Run steps based on different values (switch action)](logic-apps-control-flow-switch-statement.md)
+* [Run or merge parallel steps (branches)](logic-apps-control-flow-branches.md)
+* [Run steps based on grouped action status (scopes)](logic-apps-control-flow-run-steps-group-scopes.md)
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023 ms.suite: integration
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md
Previously updated : 08/26/2022 Last updated : 09/13/2023 # Enterprise security and governance for Azure Machine Learning
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
You can submit training jobs to Azure Machine Learning by using [MLflow projects
Learn more at [Train machine learning models with MLflow projects and Azure Machine Learning](how-to-train-mlflow-projects.md). - ### Example notebooks * [Track an MLflow project in Azure Machine Learning workspaces](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-projects-local/train-projects-local.ipynb)
machine-learning How To Access Data Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md
Previously updated : 11/17/2022 Last updated : 09/05/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
[!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]
-Typically the beginning of a machine learning project involves exploratory data analysis (EDA), data-preprocessing (cleaning, feature engineering), and building prototypes of ML models to validate hypotheses. This *prototyping* phase of the project is highly interactive in nature that lends itself to developing in a Jupyter notebook or an IDE with a *Python interactive console*. In this article you'll learn how to:
+A machine learning project typically starts with exploratory data analysis (EDA), data-preprocessing (cleaning, feature engineering), and includes building prototypes of ML models to validate hypotheses. This *prototyping* project phase is highly interactive in nature, and it lends itself to development in a Jupyter notebook, or an IDE with a *Python interactive console*. In this article you'll learn how to:
> [!div class="checklist"] > * Access data from a Azure Machine Learning Datastores URI as if it were a file system.
Typically the beginning of a machine learning project involves exploratory data
* An Azure Machine Learning Datastore. For more information, see [Create datastores](how-to-datastore.md). > [!TIP]
-> The guidance in this article to access data during interactive development applies to any host that can run a Python session - for example: your local machine, a cloud VM, a GitHub Codespace, etc. We recommend using an Azure Machine Learning compute instance - a fully managed and pre-configured cloud workstation. For more information, see [Create an Azure Machine Learning compute instance](how-to-create-compute-instance.md).
+> The guidance in this article describes data access during interactive development. It applies to any host that can run a Python session. This can include your local machine, a cloud VM, a GitHub Codespace, etc. We recommend use of an Azure Machine Learning compute instance - a fully managed and pre-configured cloud workstation. For more information, see [Create an Azure Machine Learning compute instance](how-to-create-compute-instance.md).
> [!IMPORTANT] > Ensure you have the latest `azureml-fsspec` and `mltable` python libraries installed in your python environment:
->
+>
> ```bash > pip install -U azureml-fsspec mltable > ``` ## Access data from a datastore URI, like a filesystem -
-An Azure Machine Learning datastore is a *reference* to an *existing* storage account on Azure. The benefits of creating and using a datastore include:
+An Azure Machine Learning datastore is a *reference* to an *existing* Azure storage account. The benefits of datastore creation and use include:
> [!div class="checklist"]
-> * A common and easy-to-use API to interact with different storage types (Blob/Files/ADLS).
-> * Easier to discover useful datastores when working as a team.
-> * Supports both credential-based (for example, SAS token) and identity-based (use Azure Active Directory or Manged identity) to access data.
-> * When using credential-based access, the connection information is secured so you don't expose keys in scripts.
+> * A common, easy-to-use API to interact with different storage types (Blob/Files/ADLS).
+> * Easy discovery of useful datastores in team operations.
+> * Support for both credential-based (for example, SAS token) and identity-based (use Azure Active Directory or managed identity) access to data.
+> * For credential-based access, the connection information is secured, to avoid key exposure in scripts.
> * Browse data and copy-paste datastore URIs in the Studio UI.
-A *Datastore URI* is a Uniform Resource Identifier, which is a *reference* to a storage *location* (path) on your Azure storage account. The format of the datastore URI is:
+A *Datastore URI* is a Uniform Resource Identifier, which is a *reference* to a storage *location* (path) on your Azure storage account. A datastore URI has this format:
```python # Azure Machine Learning workspace details:
subscription = '<subscription_id>'
resource_group = '<resource_group>' workspace = '<workspace>' datastore_name = '<datastore>'
-path_on_datastore = '<path>'
+path_on_datastore = '<path>'
# long-form Datastore uri format:
-uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{datastore_name}/paths/{path_on_datastore}'
+uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{datastore_name}/paths/{path_on_datastore}'
```
-These Datastore URIs are a known implementation of [Filesystem spec](https://filesystem-spec.readthedocs.io/en/latest/https://docsupdatetracker.net/index.html) (`fsspec`): A unified pythonic interface to local, remote and embedded file systems and bytes storage.
-You can pip install the `azureml-fsspec`package and its dependency `azureml-dataprep` package. And then you can use the Azure Machine Learning Datastore implementation of `fsspec`.
-
-The Azure Machine Learning Datastore implementation of `fsspec` automatically handles credential/identity passthrough used by the Azure Machine Learning datastore. This means you don't need to expose account keys in your scripts or do additional sign-in procedures on a compute instance.
+These Datastore URIs are a known implementation of the [Filesystem spec](https://filesystem-spec.readthedocs.io/en/latest/https://docsupdatetracker.net/index.html) (`fsspec`): a unified pythonic interface to local, remote and embedded file systems and bytes storage.
+You can pip install the `azureml-fsspec` package and its dependency `azureml-dataprep` package. Then, you can use the Azure Machine Learning Datastore `fsspec` implementation.
+The Azure Machine Learning Datastore `fsspec` implementation automatically handles the credential/identity passthrough that the Azure Machine Learning datastore uses. You can avoid both account key exposure in your scripts, and additional sign-in procedures, on a compute instance.
-For example, you can directly use Datastore URIs in Pandas - below is an example of reading a CSV file:
+For example, you can directly use Datastore URIs in Pandas. This example shows how to read a CSV file:
```python import pandas as pd df = pd.read_csv("azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/<filename>.csv") df.head()
-```
+```
> [!TIP]
-> Rather than remember the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI by following these steps:
-> 1. Select **Data** from the left-hand menu followed by the **Datastores** tab.
-> 1. Select your datastore name and then **Browse**.
-> 1. Find the file/folder you want to read into pandas, select the elipsis (**...**) next to it. Select from the menu **Copy URI**. You can select the **Datastore URI** to copy into your notebook/script.
+> Rather than remember the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI with these steps:
+> 1. Select **Data** from the left-hand menu, then select the **Datastores** tab.
+> 1. Select your datastore name, and then **Browse**.
+> 1. Find the file/folder you want to read into Pandas, and select the ellipsis (**...**) next to it. Select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script.
> :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI.":::
-You can also instantiate an Azure Machine Learning filesystem and do filesystem-like commands like `ls`, `glob`, `exists`, `open`.
-- The `ls()` method can be used to list files in the corresponding directory. You can use ls(), ls(.), ls (<<folder_level_1>/<folder_level_2>) to list files. We support both '.' and '..' in relative paths.
+You can also instantiate an Azure Machine Learning filesystem to handle filesystem-like commands - for example `ls`, `glob`, `exists`, and `open`.
+- The `ls()` method lists files in a specific directory. You can use `ls()`, `ls(.)`, or `ls(<folder_level_1>/<folder_level_2>)` to list files. We support both '.' and '..' in relative paths.
- The `glob()` method supports '*' and '**' globbing.-- The `exists()` method returns a Boolean value that indicates whether a specified file exists in current root directory. -- The `open()` method will return a file-like object, which can be passed to any other library that expects to work with python files, or used by your own code as you would a normal python file object. These file-like objects respect the use of `with` contexts, for example:
+- The `exists()` method returns a Boolean value that indicates whether a specified file exists in current root directory.
+- The `open()` method returns a file-like object, which can be passed to any other library that expects to work with python files. Your code can also use this object, as if it were a normal python file object. These file-like objects respect the use of `with` contexts, as shown in this example:
```python from azureml.fsspec import AzureMachineLearningFileSystem
fs.upload(lpath='data/upload_folder/', rpath='data/fsspec_folder', recursive=Tru
`lpath` is the local path, and `rpath` is the remote path. If the folders you specify in `rpath` do not exist yet, we will create the folders for you.
-We support 3 modes for 'overwrite':
-- APPEND: if there is already a file with the same name in the destination path, will keep the original file-- FAIL_ON_FILE_CONFLICT: if there is already a file with the same name in the destination path, will throw an error-- MERGE_WITH_OVERWRITE: if there is already a file with the same name in the destination path, will overwrite with the new file
+We support three 'overwrite' modes (the sketch after this list shows how to pass one of them):
+- APPEND: if a file with the same name exists in the destination path, this keeps the original file
+- FAIL_ON_FILE_CONFLICT: if a file with the same name exists in the destination path, this throws an error
+- MERGE_WITH_OVERWRITE: if a file with the same name exists in the destination path, this overwrites that existing file with the new file
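The following minimal sketch shows how one of these modes could be passed to an upload call. It assumes that the filesystem instance is created from a datastore URI as shown earlier, and that the mode is supplied through an `overwrite` keyword argument; the subscription, workspace, and path values are placeholders.

```python
from azureml.fsspec import AzureMachineLearningFileSystem

# Instantiate the filesystem from a datastore URI (placeholder values).
fs = AzureMachineLearningFileSystem(
    "azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>"
)

# Upload a local folder; overwrite files that already exist at the destination.
# Passing the mode as an 'overwrite' keyword argument is an assumption based on
# the modes listed above.
fs.upload(
    lpath="data/upload_folder/",
    rpath="data/fsspec_folder",
    recursive=True,
    overwrite="MERGE_WITH_OVERWRITE",
)
```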
### Download files via AzureMachineLearningFileSystem ```python
fs.download(rpath='data/fsspec_folder', lpath='data/download_folder/', recursive
### Examples
-In this section we provide some examples of how to use Filesystem spec, for some common scenarios.
+These examples show how to use the filesystem spec in common scenarios.
-#### Read a single CSV file into pandas
+#### Read a single CSV file into Pandas
-If you have a *single* CSV file, then as outlined above you can read that into pandas with:
+You can read a *single* CSV file into Pandas as shown:
```python import pandas as pd
import pandas as pd
df = pd.read_csv("azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/<filename>.csv") ```
-#### Read a folder of CSV files into pandas
+#### Read a folder of CSV files into Pandas
-The Pandas `read_csv()` method doesn't support reading a folder of CSV files. You need to glob csv paths and concatenate them to a data frame using Pandas `concat()` method. The code below demonstrates how to achieve this concatenation with the Azure Machine Learning filesystem:
+The Pandas `read_csv()` method doesn't support reading a folder of CSV files. You must glob csv paths, and concatenate them to a data frame with the Pandas `concat()` method. The next code sample shows how to achieve this concatenation with the Azure Machine Learning filesystem:
```python import pandas as pd
df.head()
#### Reading CSV files into Dask
-Below is an example of reading a CSV file into a Dask data frame:
+This example shows how to read a CSV file into a Dask data frame:
```python import dask.dataframe as dd df = dd.read_csv("azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/<filename>.csv") df.head()
-```
+```
-#### Read a folder of parquet files into pandas
-Parquet files are typically written to a folder as part of an ETL process, which can emit files pertaining to the ETL such as progress, commits, etc. Below is an example of files created from an ETL process (files beginning with `_`) to produce a parquet file of data.
+#### Read a folder of parquet files into Pandas
+Parquet files are typically written to a folder as part of an ETL process, which can also emit ETL-related files such as progress, commits, etc. This example shows the files created from an ETL process (files beginning with `_`), alongside the parquet file of data that the process produces.
:::image type="content" source="media/how-to-access-data-ci/parquet-auxillary.png" alt-text="Screenshot showing the parquet etl process.":::
-In these scenarios, you'll only want to read the parquet files in the folder and ignore the ETL process files. The code below shows how you can use glob patterns to read only parquet files in a folder:
+In these scenarios, you'll only read the parquet files in the folder, and ignore the ETL process files. This code sample shows how glob patterns can read only parquet files in a folder:
```python import pandas as pd
df.head()
#### Accessing data from your Azure Databricks filesystem (`dbfs`)
-Filesystem spec (`fsspec`) has a range of [known implementations](https://filesystem-spec.readthedocs.io/en/stable/_modules/https://docsupdatetracker.net/index.html), one of which is the Databricks Filesystem (`dbfs`).
+Filesystem spec (`fsspec`) has a range of [known implementations](https://filesystem-spec.readthedocs.io/en/stable/_modules/https://docsupdatetracker.net/index.html), including the Databricks Filesystem (`dbfs`).
-To access data from `dbfs` you will need:
+To access data from `dbfs` you need:
-- **Instance name**, which is in the form of `adb-<some-number>.<two digits>.azuredatabricks.net`. You can glean this from the URL of your Azure Databricks workspace.-- **Personal Access Token (PAT)**, for more information on creating a PAT, please see [Authentication using Azure Databricks personal access tokens](/azure/databricks/dev-tools/api/latest/authentication)
+- **Instance name**, in the form of `adb-<some-number>.<two digits>.azuredatabricks.net`. You can find this value in the URL of your Azure Databricks workspace.
+- **Personal Access Token (PAT)**; for more information about PAT creation, see [Authentication using Azure Databricks personal access tokens](/azure/databricks/dev-tools/api/latest/authentication)
-Once you have these, you will need to create an environment variable on your compute instance for the PAT token:
+With these values, you must create an environment variable on your compute instance for the PAT token:
```bash export ADB_PAT=<pat_token> ```
-You can then access data in Pandas using:
+You can then access data in Pandas as shown in this example:
```python import os
with fs.open('/<folder>/<image.jpeg>') as f:
#### PyTorch custom dataset example
-In this example, you create a PyTorch custom dataset for processing images. The assumption is that an annotations file (in CSV format) exists that looks like:
+In this example, you create a PyTorch custom dataset for processing images. We assume that an annotations file (in CSV format) exists, with this overall structure:
```text image_path, label
image_path, label
2/image5.png, label2 ```
-The images are stored in subfolders according to their label:
+Subfolders store these images, according to their labels:
```text /
The images are stored in subfolders according to their label:
└── 📷image5.png ```
-A custom Dataset class in PyTorch must implement three functions: `__init__`, `__len__`, and `__getitem__`, which are implemented below:
+A custom PyTorch Dataset class must implement three functions: `__init__`, `__len__`, and `__getitem__`, as shown here:
```python import os
class CustomImageDataset(Dataset):
return image, label ```
-You can then instantiate the dataset using:
+You can then instantiate the dataset as shown here:
```python from azureml.fsspec import AzureMachineLearningFileSystem
training_data = CustomImageDataset(
img_dir='/<path_to_images>/' )
-# Preparing your data for training with DataLoaders
+# Prepare your data for training with DataLoaders
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True) ``` ## Materialize data into Pandas using `mltable` library
-Another method for accessing data in cloud storage is to use the `mltable` library. The general format for reading data into pandas using `mltable` is:
+The `mltable` library can also help access data in cloud storage. Reading data into Pandas with `mltable` has this general format:
```python import mltable
tbl = mltable.from_delimited_files(paths=[path])
# tbl = mltable.from_json_lines_files(paths=[path]) # tbl = mltable.from_delta_lake(paths=[path])
-# materialize to pandas
+# materialize to Pandas
df = tbl.to_pandas_dataframe() df.head() ``` ### Supported paths
-You'll notice the `mltable` library supports reading tabular data from different path types:
+The `mltable` library supports reading of tabular data from different path types:
|Location | Examples | |||
You'll notice the `mltable` library supports reading tabular data from different
|A long-form Azure Machine Learning datastore | `azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/<path>` | > [!NOTE]
-> `mltable` does user credential passthrough for paths on Azure Storage and Azure Machine Learning datastores. If you do not have permission to the data on the underlying storage then you will not be able to access the data.
+> `mltable` does user credential passthrough for paths on Azure Storage and Azure Machine Learning datastores. If you don't have permission to access the data on the underlying storage, you can't access it.
### Files, folders and globs `mltable` supports reading from: -- file(s), for example: `abfss://<file_system>@<account_name>.dfs.core.windows.net/my-csv.csv`-- folder(s), for example `abfss://<file_system>@<account_name>.dfs.core.windows.net/my-folder/`-- [glob](https://wikipedia.org/wiki/Glob_(programming)) pattern(s), for example `abfss://<file_system>@<account_name>.dfs.core.windows.net/my-folder/*.csv`-- Or, a combination of files, folders, globbing patterns
+- file(s) - for example: `abfss://<file_system>@<account_name>.dfs.core.windows.net/my-csv.csv`
+- folder(s) - for example `abfss://<file_system>@<account_name>.dfs.core.windows.net/my-folder/`
+- [glob](https://wikipedia.org/wiki/Glob_(programming)) pattern(s) - for example `abfss://<file_system>@<account_name>.dfs.core.windows.net/my-folder/*.csv`
+- a combination of files, folders, and/or globbing patterns
-The flexibility of `mltable` allows you to materialize data into a single dataframe from a combination of local/cloud storage and combinations of files/folder/globs. For example:
+The flexibility of `mltable` allows you to materialize data into a single dataframe from a combination of local and cloud storage resources, and from combinations of files, folders, and globs. For example:
```python path1 = {
tbl = mltable.from_delimited_files(paths=[path1, path2, path3])
##### [ADLS gen2](#tab/adls)
-Update the placeholders (`<>`) in the code snippet with your details.
+Update the placeholders (`<>`) in this code snippet with your specific details:
```python import mltable
df.head()
##### [Blob storage](#tab/blob)
-Update the placeholders (`<>`) in the code snippet with your details.
+Update the placeholders (`<>`) in this code snippet with your specific details:
```python import mltable
df.head()
##### [Azure Machine Learning Datastore](#tab/datastore)
-Update the placeholders (`<>`) in the code snippet with your details.
+Update the placeholders (`<>`) in this code snippet with your specific details:
```python import mltable
df.head()
``` > [!TIP]
-> Rather than remember the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI by following these steps:
-> 1. Select **Data** from the left-hand menu followed by the **Datastores** tab.
-> 1. Select your datastore name and then **Browse**.
-> 1. Find the file/folder you want to read into pandas, select the elipsis (**...**) next to it. Select from the menu **Copy URI**. You can select the **Datastore URI** to copy into your notebook/script.
+> Instead of remembering the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI with these steps:
+> 1. Select **Data** from the left-hand menu, then select the **Datastores** tab.
+> 1. Select your datastore name, and then **Browse**.
+> 1. Find the file/folder you want to read into Pandas, and select the ellipsis (**...**) next to it. Select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script.
> :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI."::: ##### [HTTP Server](#tab/http)
df.head()
#### Read parquet files in a folder
-The example code below shows how `mltable` can use [glob](https://wikipedia.org/wiki/Glob_(programming)) patterns - such as wildcards - to ensure only the parquet files are read.
+This example shows how `mltable` can use [glob](https://wikipedia.org/wiki/Glob_(programming)) patterns - such as wildcards - to ensure that only the parquet files are read.
##### [ADLS gen2](#tab/adls)
-Update the placeholders (`<>`) in the code snippet with your details.
+Update the placeholders (`<>`) in this code snippet with your specific details:
```python import mltable
df.head()
##### [Blob storage](#tab/blob)
-Update the placeholders (`<>`) in the code snippet with your details.
+Update the placeholders (`<>`) in this code snippet with your specific details:
```python import mltable
df.head()
##### [Azure Machine Learning Datastore](#tab/datastore)
-Update the placeholders (`<>`) in the code snippet with your details.
+Update the placeholders (`<>`) in this code snippet with your specific details:
```python import mltable
df.head()
``` > [!TIP]
-> Rather than remember the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI by following these steps:
-> 1. Select **Data** from the left-hand menu followed by the **Datastores** tab.
-> 1. Select your datastore name and then **Browse**.
-> 1. Find the file/folder you want to read into pandas, select the elipsis (**...**) next to it. Select from the menu **Copy URI**. You can select the **Datastore URI** to copy into your notebook/script.
+> To avoid remembering the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI with these steps:
+> 1. Select **Data** from the left-hand menu, then select the **Datastores** tab.
+> 1. Select your datastore name, and then **Browse**.
+> 1. Find the file/folder you want to read into Pandas, and select the ellipsis (**...**) next to it. Select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script.
> :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI."::: ##### [HTTP Server](#tab/http)
-Update the placeholders (`<>`) in the code snippet with your details.
+Update the placeholders (`<>`) in this code snippet with your specific details:
> [!IMPORTANT]
-> To glob the pattern on a public HTTP server, there must be access at the **folder** level.
+> To glob the pattern on a public HTTP server, you need access at the **folder** level.
```python import mltable
df.head()
### Reading data assets
-In this section, you'll learn how to access your Azure Machine Learning data assets into pandas.
+This section shows how to access your Azure Machine Learning data assets in Pandas.
#### Table asset
-If you've previously created a Table asset in Azure Machine Learning (an `mltable`, or a V1 `TabularDataset`), you can load that into pandas using:
+If you previously created a table asset in Azure Machine Learning (an `mltable`, or a V1 `TabularDataset`), you can load that table asset into Pandas with this code:
```python import mltable
df.head()
#### File asset
-If you've registered a File asset that you want to read into Pandas data frame - for example, a CSV file - you can achieve this using:
+If you registered a file asset (a CSV file, for example), you can read that asset into a Pandas data frame with this code:
```python import mltable
df.head()
#### Folder asset
-If you've registered a Folder asset (`uri_folder` or a V1 `FileDataset`) that you want to read into Pandas data frame - for example, a folder containing CSV file - you can achieve this using:
+If you registered a folder asset (`uri_folder` or a V1 `FileDataset`) - for example, a folder containing a CSV file - you can read that asset into a Pandas data frame with this code:
```python import mltable
df.head()
## A note on reading and processing large data volumes with Pandas > [!TIP]
-> Pandas is not designed to handle large datasets - you will only be able to process data that can fit into the memory of the compute instance.
+> Pandas is not designed to handle large datasets - Pandas can only process data that can fit into the memory of the compute instance.
>
-> For large datasets we recommend that you use Azure Machine Learning managed Spark, which provides the [PySpark Pandas API](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/https://docsupdatetracker.net/index.html).
+> For large datasets, we recommend use of Azure Machine Learning managed Spark. This provides the [PySpark Pandas API](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/https://docsupdatetracker.net/index.html).
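For a rough sense of what that looks like, the following minimal sketch reads a CSV file through the Pandas API on Spark inside a managed Spark session; the `abfss://` path is a placeholder and the Spark session setup is assumed rather than shown.

```python
# Minimal sketch: Pandas API on Spark (pyspark.pandas) in a managed Spark session.
# The storage path below is a placeholder, not a value from this article.
import pyspark.pandas as ps

df = ps.read_csv(
    "abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/<filename>.csv"
)
print(df.head())
```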
-You may wish to iterate quickly on a smaller subset of a large dataset before scaling up to a remote asynchronous job. `mltable` provides in-built functionality to get samples of large data using the [take_random_sample](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-take-random-sample) method:
+You might want to iterate quickly on a smaller subset of a large dataset before scaling up to a remote asynchronous job. `mltable` provides in-built functionality to get samples of large data using the [take_random_sample](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-take-random-sample) method:
```python import mltable
df = tbl.to_pandas_dataframe()
df.head() ```
-You can also take subsets of large data by using:
+You can also take subsets of large data with these operations:
- [filter](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-filter)
- [keep_columns](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-keep-columns)
- [drop_columns](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-drop-columns)

## Downloading data using the `azcopy` utility
-You may want to download the data to the local SSD of your host (local machine, cloud VM, Azure Machine Learning Compute Instance) and use the local filesystem. You can do this with the `azcopy` utility, which is pre-installed on an Azure Machine Learning compute instance. If you are **not** using an Azure Machine Learning compute instance or a Data Science Virtual Machine (DSVM), you may need to install `azcopy`. For more information please read [azcopy](../storage/common/storage-ref-azcopy.md).
+Use the `azcopy` utility to download the data to the local SSD of your host (local machine, cloud VM, Azure Machine Learning Compute Instance), into the local filesystem. The `azcopy` utility, which is pre-installed on an Azure Machine Learning compute instance, will handle this. If you **don't** use an Azure Machine Learning compute instance or a Data Science Virtual Machine (DSVM), you may need to install `azcopy`. See [azcopy](../storage/common/storage-ref-azcopy.md) for more information.
> [!CAUTION]
-> We do not recommend downloading data in the `/home/azureuser/cloudfiles/code` location on a compute instance. This is designed to store notebook and code artifacts, **not** data. Reading data from this location will incur significant performance overhead when training. Instead we recommend storing your data in `home/azureuser`, which is the local SSD of the compute node.
+> We don't recommend data downloads into the `/home/azureuser/cloudfiles/code` location on a compute instance. This location is designed to store notebook and code artifacts, **not** data. Reading data from this location will incur significant performance overhead when training. Instead, we recommend storing your data in `home/azureuser`, which is the local SSD of the compute node.
Open a terminal, create a new directory on the local SSD, and then copy the data with `azcopy`:

```bash
azcopy cp $SOURCE $DEST
```
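For example, a fuller sketch with placeholder values (substitute your own storage account, container, path, and local directory):

```bash
# create a working directory on the local SSD of the compute node
mkdir -p /home/azureuser/data

# placeholder values - replace with your own storage URL (append a SAS token if required) and local path
SOURCE="https://<storage-account>.blob.core.windows.net/<container>/<path>"
DEST="/home/azureuser/data"

# copy recursively from Blob Storage to the local SSD
azcopy cp "$SOURCE" "$DEST" --recursive
```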
## Next steps

- [Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview)](interactive-data-wrangling-with-apache-spark-azure-ml.md)
-- [Access data in a job](how-to-read-write-data-v2.md)
+- [Access data in a job](how-to-read-write-data-v2.md)
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-batch-endpoint.md
Last updated 10/10/2022-+
machine-learning How To Configure Network Isolation With V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md
Previously updated : 09/07/2022 Last updated : 09/13/2023 # Network Isolation Change with Our New API Platform on Azure Resource Manager
machine-learning How To Manage Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-powershell.md
Previously updated : 01/26/2023 Last updated : 09/13/2023
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mlflow-batch.md
Last updated 10/10/2022-+
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
Previously updated : 08/19/2022 Last updated : 09/13/2023 monikerRange: 'azureml-api-2 || azureml-api-1'
machine-learning How To Retrieval Augmented Generation Cloud To Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-retrieval-augmented-generation-cloud-to-local.md
+
+ Title: Retrieval Augmented Generation (RAG) cloud to local (preview)
+
+description: Learning how to transition your RAG created flows from cloud to local using the prompt flow VS Code extension.
+++++++ Last updated : 09/12/2023++++
+# RAG from cloud to local - bring your own data QnA (preview)
+
+In this article, you'll learn how to transition your RAG-created flows from the cloud (your Azure Machine Learning workspace) to a local environment by using the Prompt flow VS Code extension.
+
+> [!IMPORTANT]
+> Prompt flow and Retrieval Augmented Generation (RAG) are currently in public preview. This preview is provided without a service-level agreement and isn't recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+1. Install prompt flow SDK:
+
+ ``` bash
+ pip install promptflow promptflow-tools
+ ```
+
+ To learn more, see [prompt flow local quick start](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html#quick-start)
+
+2. Install promptflow-vectordb SDK:
+
+ ``` bash
+ pip install promptflow-vectordb
+ ```
+
+3. Install the prompt flow extension in VS Code
+
+ :::image type="content" source="./media/how-to-retrieval-augmented-generation-cloud-to-local/vs-code-extension.png" alt-text="Screenshot of the prompt flow VS Code extension in the marketplace." lightbox = "./media/how-to-retrieval-augmented-generation-cloud-to-local/vs-code-extension.png":::
+
+## Download your flow files to local
+
+For example, there's already a flow "Bring Your Own Data QnA" in the workspace, which uses the **Vector index lookup** tool to search the indexed docs for content related to the question.
+
+The indexed docs are stored in the blob storage bound to the workspace.
++
+Go to the flow authoring page and select the **Download** icon in the file explorer. This downloads the flow as a zip package, for example the "Bring Your Own Data Qna.zip" file, which contains the flow files.
++
+## Open the flow folder in VS Code
+
+Unzip the "Bring Your Own Data Qna.zip" locally, and open the "Bring Your Own Data QnA" folder in VS Code desktop.
+
+> [!TIP]
+> If you don't depend on the prompt flow extension in VS Code, you can open the folder in any IDE you like.
+
+## Create a local connection
+
+To use the vector index lookup tool locally, you need to create the same connection to the vector index service as you did in the cloud.
++
+Open the "flow.dag.yaml" file and search for the "connections" section to find the connection configuration you used in your Azure Machine Learning workspace.
+
+Create a local connection that matches the cloud one.
++
+If you have the **prompt flow extension** installed in VS Code desktop, you can create the connection in the extension UI.
+
+Select the prompt flow extension icon to go to the prompt flow management central place. Select the **+** icon in the connection explorer, and select the connection type "AzureOpenAI".
++
+### Create a connection with the CLI
+
+If you prefer to use the prompt flow CLI instead of the VS Code extension, you can create a connection YAML file "AzureOpenAIConnection.yaml", and then run the connection create command in the terminal:
+
+``` yaml
+ $schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
+ name: azure_open_ai_connection
+ type: azure_open_ai
+ api_key: "<aoai-api-key>" #your key
+ api_base: "aoai-api-endpoint"
+ api_type: "azure"
+ api_version: "2023-03-15-preview"
+```
+
+``` bash
+ pf connection create -f AzureOpenAIConnection.yaml
+```
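+
+To confirm that the connection was created locally, you can show it with the same CLI (a quick check; the connection name matches the YAML above):
+
+``` bash
+ # display the local connection you just created
+ pf connection show --name azure_open_ai_connection
+```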
+
+> [!NOTE]
+> The rest of this article details how to use the VS Code extension to edit the files. If you prefer to work from the CLI, follow this [quick start on how to edit your files with CLI instructions](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html#quick-start).
+
+## Check and modify the flow files
+
+1. Open "flow.dag.yaml" and select "Visual editor"
+
+ :::image type="content" source="./media/how-to-retrieval-augmented-generation-cloud-to-local/visual-editor.png" alt-text="Screenshot of the flow dag yaml file with the visual editor highlighted in VS Code." lightbox = "./media/how-to-retrieval-augmented-generation-cloud-to-local/visual-editor.png":::
+
+ > [!NOTE]
+ > When legacy tools switch to code-first mode, a "not found" error may occur. Refer to the [Vector DB/Faiss Index/Vector Index Lookup tool](./prompt-flow/tools-reference/troubleshoot-guidance.md) rename reminder.
+
+2. Jump to the "embed_the_question" node, make sure the connection is the local connection you have created, and double check the deployment_name, which is the model you use here for the embedding.
+
+ :::image type="content" source="./media/how-to-retrieval-augmented-generation-cloud-to-local/embed-question.png" alt-text="Screenshot of embed the question node in VS Code." lightbox = "./media/how-to-retrieval-augmented-generation-cloud-to-local/embed-question.png":::
+
+3. Jump to the "search_question_from_indexed_docs" node, which consumes the Vector Index Lookup tool in this flow. Check the path of the indexed docs you specified. Any publicly accessible path is supported, for example: `https://github.com/Azure/azureml-assets/tree/main/assets/promptflow/data/faiss-index-lookup/faiss_index_sample`.
+
+ > [!NOTE]
+ > If your indexed docs are a data asset in your workspace, consuming them locally requires Azure authentication.
+ >
+ > Before you run the flow, make sure you have run `az login` and connected to the Azure Machine Learning workspace.
+ >
+ > To learn more, see [Connect to Azure Machine Learning workspace](./prompt-flow/how-to-integrate-with-llm-app-devops.md#connect-to-azure-machine-learning-workspace)
+
+ :::image type="content" source="./media/how-to-retrieval-augmented-generation-cloud-to-local/search-blob.png" alt-text="Screenshot of search question from indexed docs node in VS Code showing the inputs." lightbox = "./media/how-to-retrieval-augmented-generation-cloud-to-local/search-blob.png":::
+
+ Then select the **Edit** button within the "query" input box. This takes you to the raw flow.dag.yaml file, at the definition of this node.
+
+ Check the "tool" section within this node. Ensure that the value of the "tool" section is set to `promptflow_vectordb.tool.vector_index_lookup.VectorIndexLookup.search`. This is the tool package name of the local version of VectorIndexLookup.
+
+ :::image type="content" source="./media/how-to-retrieval-augmented-generation-cloud-to-local/search-tool.png" alt-text="Screenshot of the tool section of the node showing the value mentioned previously." lightbox = "./media/how-to-retrieval-augmented-generation-cloud-to-local/search-tool.png":::
+
+4. Jump to the "generate_prompt_context" node, and check that the package name of the vector tool in this Python node is `promptflow_vectordb`.
+
+ :::image type="content" source="./media/how-to-retrieval-augmented-generation-cloud-to-local/generate-node.png" alt-text="Screenshot of the generate prompt content node in VS Code highlighting the package name." lightbox = "./media/how-to-retrieval-augmented-generation-cloud-to-local/generate-node.png":::
+
+5. Jump to the "answer_the_question_with_context" node, check the connection and deployment_name as well.
+
+ :::image type="content" source="./media/how-to-retrieval-augmented-generation-cloud-to-local/answer-connection.png" alt-text="Screenshot of answer the question with context node with the connection highlighted." lightbox = "./media/how-to-retrieval-augmented-generation-cloud-to-local/answer-connection.png":::
+
+## Test and run the flow
+
+Scroll up to the top of the flow and fill in the "Inputs" value for a single test run, for example "How to use SDK V2?". Then select the **Run** button in the top right corner to trigger a single run of the flow.
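+
+If you prefer the CLI to the extension UI, here's a minimal sketch of the same single test. The folder name comes from the unzipped package above, and the `question` input name is an assumption about this flow's input schema, so adjust it to match your flow.dag.yaml:
+
+``` bash
+ # run the flow once locally with a sample input (input name is an assumption)
+ pf flow test --flow "./Bring Your Own Data QnA" --inputs question="How to use SDK V2?"
+```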
++
+For batch run and evaluation, you can refer to [Submit flow run to Azure Machine Learning workspace](./prompt-flow/how-to-integrate-with-llm-app-devops.md#submit-flow-run-to-azure-machine-learning-workspace)
+
+## Next steps
+
+- [Submit runs to cloud for large scale testing and ops integration](./prompt-flow/how-to-integrate-with-llm-app-devops.md#submitting-runs-to-the-cloud-from-local-repository)
machine-learning How To Secure Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-batch-endpoint.md
Last updated 10/10/2022-+
machine-learning How To Secure Rag Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-rag-workflows.md
+
+ Title: Secure your RAG workflows with network isolation (preview)
+
+description: Learn how to secure your RAG workflows with managed network and custom virtual network scenarios.
++++++ Last updated : 09/12/2023++++
+# Secure your RAG workflows with network isolation (preview)
+
+You can secure your Retrieval Augmented Generation (RAG) flows by using private networks in Azure Machine Learning with two network management options. These options are: **Managed Virtual Network**, which is the in-house offering, or **"Bring Your Own" Virtual Network**, which is useful when you want full control over setup for your Virtual Networks / Subnets, Firewalls, Network Security Group rules, etc.
+
+Within the Azure Machine Learning managed network option, there are two secured suboptions you can select from: **Allow Internet Outbound** and **Allow Only Approved Outbound**.
+
+![Screenshot of Managed Vnet Options in Azure Machine Learning.](./media/how-to-secure-rag-workflows/private-managed-vnet-options.png)
++
+Depending on your setup and scenario, RAG workflows in Azure Machine Learning may require other steps for network isolation.
+
+## Prerequisites
+* An Azure subscription.
+* Access to Azure OpenAI Service.
+* A secure Azure Machine Learning workspace: either with Workspace Managed Virtual Network or "Bring Your Own" Virtual Network setup.
+* Prompt flows enabled in your Azure Machine Learning workspace. You can enable prompt flows by turning on **Build AI solutions with Prompt flow** in the **Manage preview features** panel.
+
+## With Azure Machine Learning Workspace Managed VNet
+
+1. Follow [Workspace managed network isolation](./how-to-managed-network.md) to enable workspace managed VNet.
+
+2. Navigate to the [Azure portal](https://ms.portal.azure.com) and select **Networking** under the **Settings** tab in the left-hand menu.
+
+3. To allow your RAG workflow to communicate with [<u>private</u> Azure Cognitive Services](./../ai-services/cognitive-services-virtual-networks.md) such as Azure OpenAI or Azure Cognitive Search during vector index creation, you need to define a user-defined outbound rule to the related resource. Select **Workspace managed outbound access** at the top of the networking settings. Then select **+Add user-defined outbound rule**. Enter a **Rule name**. Then select the resource you want to add the rule to by using the **Resource name** text box.
+
+ The Azure Machine Learning workspace creates a private endpoint in the related resource with autoapprove. If the status is stuck in pending, go to related resource to approve the private endpoint manually.
+
+ :::image type="content" source="./media/how-to-secure-rag-workflows/add-private-cognitive-services.png" alt-text="Screenshot showing the location in Azure Studio to add private cognitive services user outbound rule." lightbox="./media/how-to-secure-rag-workflows/add-private-cognitive-services.png":::
+
+4. Navigate to the settings of the storage account associated with your workspace. Select **Access Control (IAM)** in the left-hand menu. Select **Add Role Assignment**. Add **Storage Table Data Contributor** and **Storage Blob Data Contributor** access to the workspace managed identity. You can do this by typing each role name into the search bar. You'll need to complete this step and the next step twice: once for the Blob Contributor role and once for the Table Contributor role. (An Azure CLI alternative for these two steps is sketched after this list.)
+
+5. Ensure the **Managed Identity** option is selected. Then select **Select Members**. Select **Azure Machine Learning Workspace** under the drop-down for **Managed Identity**. Then select your managed identity of the workspace.
+
+ :::image type="content" source="./media/how-to-secure-rag-workflows/storage-add-blob-table-managed-identity.png" alt-text="Screenshot showing the location to add a Workspace Managed Identity to a Blob or Table access in Storage Account of the Azure Studio." lightbox="./media/how-to-secure-rag-workflows/storage-add-blob-table-managed-identity.png":::
+
+6. (optional) To add an outgoing FQDN rule, in the Azure portal, select **Networking** under the **Settings** tab in the left-hand menu. Select **Workspace managed outbound access** at the top of the networking settings. Then select **+Add user-defined outbound rule**. Select **FQDN Rule** under **Destination type**. Enter your endpoint URL in **FQDN Destination**. To find your endpoint URL, navigate to deployed endpoints in the Azure portal, select your desired endpoint, and copy the endpoint URL from the details section.
+
+If you're using an **Allow only approved outbound** managed VNet workspace and a `public` Azure OpenAI resource, you need to **add an outgoing FQDN rule** for your Azure OpenAI endpoint. This enables the data plane operations that are required to perform embeddings in RAG. Without this rule, the Azure OpenAI resource can't be accessed, even though it's public.
+
+7. (optional) In order to upload data files beforehand, or to use **Local Folder Upload** for RAG when the storage account is private, the workspace must be accessed from a virtual machine behind a VNet, and the subnet must be allow-listed in the storage account. To do so, select **Storage Account**, then **Networking setting**. Select **Enable for selected virtual network and IPs**, then add your workspace subnet.
+
+ :::image type="content" source="./media/how-to-secure-rag-workflows/storage-setting-for-private-data-upload.png" alt-text="Screenshot showing the private storage settings requirements for secure data upload." lightbox="./media/how-to-secure-rag-workflows/storage-setting-for-private-data-upload.png":::
+
+ Follow this tutorial for [how to connect to a private storage](../private-link/tutorial-private-endpoint-storage-portal.md) from an Azure Virtual Machine.
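+
+As an Azure CLI alternative to the portal role assignments in steps 4 and 5, here's a hedged sketch. The resource names are placeholders, and the `identity.principal_id` query assumes the workspace uses a system-assigned managed identity:
+
+``` bash
+ # placeholder names - replace with your own resource group, workspace, and storage account
+ RESOURCE_GROUP="my-rg"
+ WORKSPACE="my-workspace"
+ STORAGE_ID=$(az storage account show --name mystorageaccount --resource-group $RESOURCE_GROUP --query id -o tsv)
+
+ # look up the workspace managed identity (assumes the Azure ML CLI v2 extension is installed)
+ PRINCIPAL_ID=$(az ml workspace show --name $WORKSPACE --resource-group $RESOURCE_GROUP --query identity.principal_id -o tsv)
+
+ # grant both storage roles to the managed identity
+ az role assignment create --assignee-object-id $PRINCIPAL_ID --assignee-principal-type ServicePrincipal --role "Storage Blob Data Contributor" --scope $STORAGE_ID
+ az role assignment create --assignee-object-id $PRINCIPAL_ID --assignee-principal-type ServicePrincipal --role "Storage Table Data Contributor" --scope $STORAGE_ID
+```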
+
+## With BYO Custom Vnet
+
+1. Select **Use my Own Virtual Network** when configuring your Azure Machine Learning workspace. In this scenario, it's up to the user to configure the network rules and private endpoints to related resources correctly, as the workspace doesn't autoconfigure it.
+
+2. In the Vector Index creation Wizard, make sure to select **Compute Instance** or **Compute Cluster** from the compute options dropdown, as this scenario isn't supported with Serverless Compute.
+
+## Troubleshooting Common Problems
+
+- If your workspace runs into network-related issues where a compute can't be created or started, try adding a placeholder FQDN rule in the **Networking** tab of your workspace in the Azure portal to initiate a managed network update. Then, re-create the compute in the Azure Machine Learning workspace.
+
+- You might see an error message similar to `< Resource > is not registered with Microsoft.Network resource provider.` In that case, **ensure the subscription that contains your AOAI/ACS resource is registered with the Microsoft.Network resource provider**. To do so, navigate to **Subscription**, then **Resource Providers**, for the same tenant as your managed VNet workspace.
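+
+As a quick check, here's a hedged CLI sketch for verifying and, if needed, registering the provider (run it against the subscription that holds the AOAI/ACS resource):
+
+``` bash
+ # check the registration state of the Microsoft.Network resource provider
+ az provider show --namespace Microsoft.Network --query registrationState -o tsv
+
+ # register it if the state isn't "Registered"
+ az provider register --namespace Microsoft.Network
+```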
+
+> [!NOTE]
+> It's expected for a first-time serverless job in the workspace to stay queued for an additional 10-15 minutes while the managed network provisions private endpoints for the first time. With a compute instance or compute cluster, this process happens during compute creation.
+
+## Next Steps
+
+- Secure your Prompt Flow
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md
Last updated 10/10/2022-+
machine-learning How To Use Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-azure-data-factory.md
Last updated 10/10/2022-+
machine-learning How To Use Event Grid Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid-batch.md
Last updated 10/10/2022-+
machine-learning How To Use Low Priority Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-low-priority-batch.md
Last updated 10/10/2022-+
machine-learning How To Use Mlflow Azure Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-synapse.md
In this article, learn how to enable MLflow to connect to Azure Machine Learning
If you have an MLflow Project to train with Azure Machine Learning, see [Train ML models with MLflow Projects and Azure Machine Learning (preview)](how-to-train-mlflow-projects.md). - ## Prerequisites * An [Azure Synapse Analytics workspace and cluster](../synapse-analytics/quickstart-create-workspace.md).
machine-learning Monitor Resource Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/monitor-resource-reference.md
Quota information is for Azure Machine Learning compute only.
| Metric | Unit | Description |
|--|--|--|
| CpuUtilization | Count | Percentage of utilization on a CPU node. Utilization is reported at one-minute intervals. |
+| CpuUtilizationPercentage | Count | Utilization percentage of a CPU node. Utilization is aggregated in one minute intervals. |
+| CpuUtilizationMillicores | Count | Utilization of a CPU node in millicores. Utilization is aggregated in one minute intervals. |
+| CpuCapacityMillicores | Count | Maximum capacity of a CPU node in millicores. Capacity is aggregated in one minute intervals. |
+| CpuMemoryCapacityMegabytes | Count | Maximum memory utilization of a CPU node in megabytes. Utilization is aggregated in one minute intervals. |
+| CpuMemoryUtilizationMegabytes | Count | Memory utilization of a CPU node in megabytes. Utilization is aggregated in one minute intervals. |
+| CpuMemoryUtilizationPercentage | Count | Memory utilization percentage of a CPU node. Utilization is aggregated in one minute intervals. |
| GpuUtilization | Count | Percentage of utilization on a GPU node. Utilization is reported at one-minute intervals. |
+| GpuUtilizationPercentage | Count | Utilization percentage of a GPU device. Utilization is aggregated in one minute intervals. |
+| GpuUtilizationMilliGPUs | Count | Utilization of a GPU device in milli-GPUs. Utilization is aggregated in one minute intervals. |
+| GpuCapacityMilliGPUs | Count | Maximum capacity of a GPU device in milli-GPUs. Capacity is aggregated in one minute intervals. |
+| GpuMemoryCapacityMegabytes | Count | Maximum memory capacity of a GPU device in megabytes. Capacity is aggregated in one minute intervals. |
| GpuMemoryUtilization | Count | Percentage of memory utilization on a GPU node. Utilization is reported at one-minute intervals. |
+| GpuMemoryUtilizationMegabytes | Count | Memory utilization of a GPU device in megabytes. Utilization is aggregated in one minute intervals. |
+| GpuMemoryUtilizationPercentage | Count | Memory utilization percentage of a GPU device. Utilization is aggregated in one minute intervals. |
| GpuEnergyJoules | Count | Interval energy in Joules on a GPU node. Energy is reported at one-minute intervals. |
+| DiskAvailMegabytes | Count | Available disk space in megabytes. Metrics are aggregated in one minute intervals. |
+| DiskReadMegabytes | Count | Data read from disk in megabytes. Metrics are aggregated in one minute intervals. |
+| DiskUsedMegabytes | Count | Used disk space in megabytes. Metrics are aggregated in one minute intervals. |
+| DiskWriteMegabytes | Count | Data written into disk in megabytes. Metrics are aggregated in one minute intervals. |
+| IBReceiveMegabytes | Count | Network data received over InfiniBand in megabytes. Metrics are aggregated in one minute intervals. |
+| IBTransmitMegabytes | Count | Network data sent over InfiniBand in megabytes. Metrics are aggregated in one minute intervals. |
+| NetworkInputMegabytes | Count | Network data received in megabytes. Metrics are aggregated in one minute intervals. |
+| NetworkOutputMegabytes | Count | Network data sent in megabytes. Metrics are aggregated in one minute intervals. |
++ **Run**
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
machine-learning Community Ecosystem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/community-ecosystem.md
+
+ Title: Prompt Flow community ecosystem (preview)
+
+description: Introduction to the Prompt flow community ecosystem, which includes the SDK and VS Code extension.
+++++++ Last updated : 09/12/2023++
+# Prompt Flow community ecosystem (preview)
+
+The Prompt Flow community ecosystem aims to provide a comprehensive set of tools and resources for developers who want to leverage the power of Prompt Flow to experimentally tune their prompts and develop their LLM-based application in a local environment. This article goes through the key components of the ecosystem, including the **Prompt Flow SDK** and the **VS Code extension**.
+
+> [!IMPORTANT]
+> Prompt flow is currently in public preview. This preview is provided without a service-level agreement and isn't recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prompt flow SDK/CLI
+
+The Prompt Flow SDK/CLI empowers developers to use code to manage credentials, initialize flows, develop flows, and execute batch testing and evaluation of prompt flows locally.
+
+It's designed for efficiency, allowing you to trigger large dataset-based flow tests and metric evaluations simultaneously. Additionally, the SDK/CLI can be easily integrated into your CI/CD pipeline, automating the testing process.
+
+To get started with the Prompt Flow SDK, explore and follow the [SDK quick start notebook](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/get-started/quickstart.ipynb) step by step.
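+
+For a feel of the CLI surface, here's a minimal sketch. The flow folder, data file, and the `text` input name are placeholders tied to the default standard template, so adjust them to your own flow:
+
+``` bash
+ # scaffold a new standard flow locally
+ pf flow init --flow ./my_flow --type standard
+
+ # run a quick single test with a sample input (input name depends on your flow)
+ pf flow test --flow ./my_flow --inputs text="Hello world"
+
+ # trigger a batch run against a local dataset
+ pf run create --flow ./my_flow --data ./data.jsonl --stream
+```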
+
+## VS Code extension
+
+The ecosystem also provides a powerful VS Code extension designed to enable you to easily and interactively develop prompt flows, fine-tune your prompts, and test them with a user-friendly UI.
++
+To get started with the Prompt Flow VS Code extension, navigate to the extension marketplace to install and read the details tab.
++
+## Transition to production in cloud
+
+After successful development and testing of your prompt flow within our community ecosystem, the next step is typically to transition to a production-grade LLM application. We recommend Azure Machine Learning for this phase to ensure security, efficiency, and scalability.
+
+You can seamlessly shift your local flow to your Azure resource to leverage large-scale execution and management in the cloud. To achieve this, see [Integration with LLMOps](how-to-integrate-with-llm-app-devops.md#go-back-to-studio-ui-for-continuous-development).
+
+## Community support
+
+The community ecosystem thrives on collaboration and support. Join the active community forums to connect with fellow developers, and contribute to the growth of the ecosystem.
+
+[GitHub Repository: promptflow](https://github.com/microsoft/promptflow)
+
+For questions or feedback, you can [open a GitHub issue directly](https://github.com/microsoft/promptflow/issues/new) or reach out to pf-feedback@microsoft.com.
+
+## Next steps
+
+The prompt flow community ecosystem empowers developers to build interactive and dynamic prompts with ease. By using the Prompt Flow SDK and the VS Code extension, you can create compelling user experiences and fine-tune your prompts in a local environment.
+
+- Join the [Prompt flow community on GitHub](https://github.com/microsoft/promptflow).
machine-learning Get Started Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/get-started-prompt-flow.md
description: Learn how to use Prompt flow in Azure Machine Learning studio. -+ Previously updated : 06/30/2023 Last updated : 09/12/2023 # Get started with Prompt flow (preview) This article walks you through the main user journey of using Prompt flow in Azure Machine Learning studio. You'll learn how to enable Prompt flow in your Azure Machine Learning workspace, create and develop your first prompt flow, test and evaluate it, then deploy it to production.
-A quick video tutorial can be found here: [Prompt flow get started video tutorial](https://www.youtube.com/watch?v=kYqRtjDBci8).
- > [!IMPORTANT] > Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
If you aren't already connected to AzureOpenAI, select the **Create** button the
:::image type="content" source="./media/get-started-prompt-flow/connection-creation-entry-point.png" alt-text="Screenshot of the connections tab with create highlighted." lightbox = "./media/get-started-prompt-flow/connection-creation-entry-point.png":::
-Then a right-hand panel will appear. Here, you'll need to provide the connection name, API key, API base, API type, and API version before selecting the **Save** button.
+Then a right-hand panel will appear. Here, you'll need to select the subscription and resource name, provide the connection name, API key, API base, API type, and API version before selecting the **Save** button.
+ To obtain the API key, base, type, and version, you can navigate to the [chat playground](https://oai.azure.com/portal/chat) in the Azure OpenAI portal and select the **View code** button. From here, you can copy the necessary information and paste it into the connection creation panel.
In **Flows** tab of Prompt flow home page, select **Create** to create your firs
The built-in samples are shown in the gallery.
-In this guide, we'll use **Web Classification** sample to walk you through the main user journey, so select **View detail** on Web Classification tile to preview the sample.
+In this guide, we'll use the **Web Classification** sample to walk you through the main user journey. You can select **View detail** on the Web Classification tile to preview the sample. A preview window then pops up, where you can browse the sample introduction to see whether the sample is similar to your scenario. Or you can just select **Clone** to clone the sample directly, and then check the flow, test it, and modify it.
:::image type="content" source="./media/get-started-prompt-flow/sample-in-gallery.png" alt-text="Screenshot of create from galley highlighting web classification. " lightbox = "./media/get-started-prompt-flow/sample-in-gallery.png":::
-Then a preview window is popped up. You can browse the sample introduction to see if the sample is similar to your scenario. You can select Clone to clone the sample, then check the flow, test it, modify it.
+After selecting **Clone**, as shown in the right panel, the new flow will be saved in a specific folder within your workspace file share storage. You can customize the folder name according to your preferences.
+ ### Authoring page
At the left, it's the flatten view, the main working area where you can author t
:::image type="content" source="./media/get-started-prompt-flow/flatten-view.png" alt-text="Screenshot of web classification highlighting the main working area." lightbox = "./media/get-started-prompt-flow/flatten-view.png":::
-At the right, it's the graph view for visualization only. You can zoom in, zoom out, auto layout, etc.
+The top right corner shows the folder structure of the flow. Each flow has a folder that contains a flow.dag.yaml file, source code files, and system folders. You can export or import a flow easily for testing, deployment, or collaborative purposes.
++++
+The bottom right corner shows the graph view, which is for visualization only. You can zoom in, zoom out, auto layout, etc.
:::image type="content" source="./media/get-started-prompt-flow/graph-view.png" alt-text="Screenshot of web classification highlighting graph view area." lightbox = "./media/get-started-prompt-flow/graph-view.png":::
You need to prepare test data first. We support csv and txt file for now.
Go to [GitHub](https://aka.ms/web-classification-data) to download raw file for Web Classification sample.
-### Bulk test
+### Batch run
-Select **Bulk test** button, then a right panel pops up. It's a wizard that guides you to submit a bulk test and to select the evaluation method (optional).ΓÇïΓÇïΓÇïΓÇïΓÇïΓÇïΓÇï
+Select the **Batch run** button, then a right panel pops up. It's a wizard that guides you to submit a batch run and to select the evaluation method (optional).
-You need to set a bulk test name, description, then select a runtime first.
+You need to set a batch run name and description, then select a runtime.
-Then select **Upload new data** to upload the data you just downloaded. After uploading the data or if your colleagues in the workspace already created a dataset, you can choose the dataset from the drop-down and preview first 50 rows.
+Then select **Upload new data** to upload the data you just downloaded. After uploading the data, or if your colleagues in the workspace already created a dataset, you can choose the dataset from the drop-down and preview the first five rows. The dataset selection drop-down supports search and autosuggestion.
+In addition, the **input mapping** supports mapping your flow input to a specific data column in your dataset, which means that you can use any column as the input, even if the column names don't match.
-The dataset selection drop down supports search and autosuggestion.
+
+After that, you can select the **Review+submit** button to submit the batch run directly, or you can select **Next** to use an evaluation method to evaluate your flow.
### Evaluate
-Select **Next**, then you can use an evaluation method to evaluate your flow. The evaluation methods are also flows that use Python or LLM etc., to calculate metrics like accuracy, relevance score. The built-in evaluation flows and customized ones are listed in the drop-down.
+Turn on the toggle in the evaluation settings tab. The evaluation methods are also flows that use Python or LLMs, etc., to calculate metrics like accuracy and relevance score. The built-in evaluation flows and customized ones are listed in the drop-down.
-Since Web classification is a classification scenario, it's suitable to select the **Classification Accuracy Evaluation** to evaluate.
+Since Web classification is a classification scenario, it's suitable to select the **Classification Accuracy Eval** to evaluate.
If you're interested in how the metrics are defined for built-in evaluation methods, you can preview the evaluation flows by selecting **View details**.
-After selecting **Classification Accuracy Evaluation** as evaluation method, you can set interface mapping to map the ground truth to flow input and category to flow output.
+After selecting **Classification Accuracy Eval** as evaluation method, you can set interface mapping to map the ground truth to flow input and category to flow output.
+
+Then select **Review+submit** to submit a batch run and the selected evaluation.
-Then select **Submit** to submit a bulk test and the selected evaluation.
+### Check results
-### Check evaluation results
+When completed, select the link to go to the batch run detail page.
-When completed, select the link, go to bulk test detail page.
+The batch run may take a while to finish. You can **Refresh** the page to load the latest status.
-Select **Refresh** until the evaluation run is completed.
+After the batch run is completed, select **View outputs** to view the result of your batch run.
-Then go to the **Metrics** tab, check accuracy.
+If you added an evaluation method to evaluate your flow, go to the **Metrics** tab and check the evaluation metrics. There you can see the overall accuracy of your batch run.
-To understand in which case the flow classifies incorrectly, you need to see the evaluation results for each row of data. Go to **Outputs** tab, select the evaluation run, you can see in the table below for most cases the flow classifies correctly except for few rows.
+To understand how this accuracy was calculated, you can view the evaluation results for each row of data. In the **Outputs** tab, select the evaluation run, and the table shows which cases are predicted correctly and which aren't.
-You can adjust column width, hide/unhide columns, and export table to csv file for further investigation.
+You can adjust column width, hide/unhide columns, and select **Export** to download a csv file of the batch run outputs for further investigation.
As you might know, accuracy isn't the only metric that can evaluate a classification task, for example you can also use recall to evaluate. In this case, you can select **New evaluation**, choose other evaluation methods to evaluate. ## Deployment
-After you build a flow and test it properly, you may want to deploy it as an endpoint so that you can invoke the endpoint for real-time inference.
+After you build a flow and test it properly, you may want to [deploy it as an endpoint so that you can invoke the endpoint for real-time inference](how-to-deploy-for-real-time-inference.md).
### Configure the endpoint
-When you are in the bulk test **Overview** tab, select bulk test link.
+When you are in the batch run **Overview** tab, select the batch run link.
-Then you're directed to the bulk test detail page, select **Deploy**. A wizard pops up to allow you to configure the endpoint. Specify an endpoint name, use the default settings, set connections, and select a virtual machine, select **Deploy** to start the deployment.
+Then you're directed to the batch run detail page. Select **Deploy**, and a wizard pops up to allow you to configure the endpoint. Specify an endpoint name, use the default settings, set connections, select a virtual machine, and then select **Deploy** to start the deployment.
:::image type="content" source="./media/get-started-prompt-flow/endpoint-creation.png" alt-text="Screenshot of endpoint configuration wizard." lightbox = "./media/get-started-prompt-flow/endpoint-creation.png":::
-If you're a Workspace Owner or Subscription Owner, see [Deploy a flow as a managed online endpoint for real-time inference](how-to-deploy-for-real-time-inference.md#grant-permissions-to-the-endpoint) to grant permissions to the endpoint. If not, go ask your Workspace Owner or Subscription Owner to it for you.
+If you're a Workspace Owner or Subscription Owner, see [Deploy a flow as a managed online endpoint for real-time inference](how-to-deploy-for-real-time-inference.md#grant-permissions-to-the-endpoint) to grant permissions to the endpoint. If not, go ask your Workspace Owner or Subscription Owner to do it for you.
### Test the endpoint
machine-learning How To Bulk Test Evaluate Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-bulk-test-evaluate-flow.md
Title: Submit bulk test and evaluate a flow in Prompt flow (preview)
+ Title: Submit batch run and evaluate a flow in Prompt flow (preview)
-description: Learn how to submit bulk test and use built-in evaluation methods in prompt flow to evaluate how well your flow performs with a large dataset with Azure Machine Learning studio.
+description: Learn how to submit batch run and use built-in evaluation methods in prompt flow to evaluate how well your flow performs with a large dataset with Azure Machine Learning studio.
Previously updated : 06/30/2023 Last updated : 09/12/2023
-# Submit bulk test and evaluate a flow (preview)
+# Submit batch run and evaluate a flow (preview)
-To evaluate how well your flow performs with a large dataset, you can submit bulk test and use built-in evaluation methods in Prompt flow.
+To evaluate how well your flow performs with a large dataset, you can submit batch run and use built-in evaluation methods in Prompt flow.
In this article you'll learn to: -- Submit a Bulk Test and Use a Built-in Evaluation Method
+- Submit a Batch Run and Use a Built-in Evaluation Method
- View the evaluation result and metrics - Start A New Round of Evaluation-- Check Bulk Test History and Compare Metrics
+- Check Batch Run History and Compare Metrics
- Understand the Built-in Evaluation Metrics - Ways to Improve Flow Performance
-You can quickly start testing and evaluating your flow by following this video tutorial [submit bulk test and evaluate a flow video tutorial](https://www.youtube.com/watch?v=5Khu_zmYMZk).
+You can quickly start testing and evaluating your flow by following this video tutorial [submit batch run and evaluate a flow video tutorial](https://www.youtube.com/watch?v=5Khu_zmYMZk).
> [!IMPORTANT] > Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
You can quickly start testing and evaluating your flow by following this video t
## Prerequisites
-To run a bulk test and use an evaluation method, you need to have the following ready:
+To run a batch run and use an evaluation method, you need to have the following ready:
-- A test dataset for bulk test. Your dataset should be in one of these formats: `.csv`, `.tsv`, `.jsonl`, or `.parquet`. Your data should also include headers that match the input names of your flow.-- An available runtime to run your bulk test. A runtime is a cloud-based resource that executes your flow and generates outputs. To learn more about runtime, see [Runtime](./how-to-create-manage-runtime.md).
+- A test dataset for a batch run. Your dataset should be in one of these formats: `.csv`, `.tsv`, `.jsonl`, or `.parquet`. Your data should also include headers that match the input names of your flow (see the example after this list).
+- An available runtime to run your batch run. A runtime is a cloud-based resource that executes your flow and generates outputs. To learn more about runtime, see [Runtime](./how-to-create-manage-runtime.md).
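+
+For instance, a `.jsonl` test dataset might look like the following sketch. The keys are purely illustrative; they must match your own flow's input names, and extra columns (such as a ground truth label) are allowed for evaluation:
+
+``` json
+ {"url": "https://learn.microsoft.com", "category": "Academic"}
+ {"url": "https://www.imdb.com", "category": "Movie"}
+```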
-## Submit a bulk test and use a built-in evaluation method
+## Submit a batch run and use a built-in evaluation method
-A bulk test allows you to run your flow with a large dataset and generate outputs for each data row. You can also choose an evaluation method to compare the output of your flow with certain criteria and goals. An evaluation method **is a special type of flow** that calculates metrics for your flow output based on different aspects. An evaluation run will be executed to calculate the metrics when submitted with the bulk test.
+A batch run allows you to run your flow with a large dataset and generate outputs for each data row. You can also choose an evaluation method to compare the output of your flow with certain criteria and goals. An evaluation method **is a special type of flow** that calculates metrics for your flow output based on different aspects. An evaluation run will be executed to calculate the metrics when submitted with the batch run.
-To start a bulk test with evaluation, you can select on the **"Bulk test"** button on the top right corner of your flow page.
+To start a batch run with evaluation, you can select the **"Batch run"** button on the top right corner of your flow page.
-To submit bulk test, you can select a dataset to test your flow with. You can also select an evaluation method to calculate metrics for your flow output. If you don't want to use an evaluation method, you can skip this step and run the bulk test without calculating any metrics. You can also start a new round of evaluation later.
+To submit a batch run, you can select a dataset to test your flow with. You can also select an evaluation method to calculate metrics for your flow output. If you don't want to use an evaluation method, you can skip this step and run the batch run without calculating any metrics. You can also start a new round of evaluation later.
-First, select or upload a dataset that you want to test your flow with. An available runtime that can run your bulk test is also needed. You also give your bulk test a descriptive and recognizable name. After you finish the configuration, select **"Next"** to continue.
+First, you're asked to give your batch run a descriptive and recognizable name. You can also write a description and add tags (key-value pairs) to your batch run. After you finish the configuration, select **"Next"** to continue.
-Second, you can decide to use an evaluation method to validate your flow performance either immediately or later. If you have already completed a bulk test, you can start a new round of evaluation with a different method or subset of variants.
+Second, you need to select or upload a dataset that you want to test your flow with. You also need to select an available runtime to execute this batch run.
+Prompt flow also supports mapping your flow input to a specific data column in your dataset. This means that you can assign a column to a certain input by referencing it with the `${data.XXX}` format. If you want to assign a constant value to an input, you can directly type in that value.
-- **Submit Bulk test without using evaluation method to calculate metrics:** You can select **"Skip"** button to skip this step and run the bulk test without using any evaluation method to calculate metrics. In this way, this bulk test will only generate outputs for your dataset. You can check the outputs manually or export them for further analysis with other methods. -- **Submit Bulk test using evaluation method to calculate metrics:** This option will run the bulk test and also evaluate the output using a method of your choice. A special designed evaluation method will run and calculate metrics for your flow output to validate the performance.
+Then, in the next step, you can decide to use an evaluation method to validate the performance of this run either immediately or later. For a completed batch run, a new round of evaluation can still be added.
-If you want to run bulk test with evaluation now, you can select an evaluation method from the dropdown box based on the description provided. After you selected an evaluation method, you can select **"View detail"** button to see more information about the selected method, such as the metrics it generates and the connections and inputs it requires.
+You can directly select the **"Next"** button to skip this step and run the batch run without using any evaluation method to calculate metrics. In this way, this batch run only generates outputs for your dataset. You can check the outputs manually or export them for further analysis with other methods.
+Otherwise, if you want to run the batch run with evaluation now, you can select an evaluation method from the dropdown box based on the description provided. After you select an evaluation method, you can select the **"View detail"** button to see more information about the selected method, such as the metrics it generates and the connections and inputs it requires.
-In the **"input mapping"** section, you need to specify the sources of the input data that are needed for the evaluation method. The sources can be from the current flow output or from your test dataset, even if some columns in your dataset aren't used by your flow. For example, if your evaluation method requires a _ground truth_ column, you need to provide it in your dataset and select it in this section.
-You can also manually type in the source of the data column.
+In the **"input mapping"** section, you need to specify the sources of the input data that are needed for the evaluation method. For example, ground truth column may come from a dataset. By default, evaluation will use the same dataset as the test dataset provided to the tested run. However, if the corresponding labels or target ground truth values are in a different dataset, you can easily switch to that one.
-- If the data column is in your test dataset, then it's specified as **"data.[column\_name]".**-- If the data column is from your flow output, then it's specified as **"output.[output\_name]".**
+Therefore, to run an evaluation, you need to indicate the sources of these required inputs. To do so, when submitting an evaluation, you'll see an **"input mapping"** section.
+- If the data source is from your run output, the source is indicated as **"${run.output.[OutputName]}"**
+- If the data source is from your test dataset, the source is indicated as **"${data.[ColumnName]}"**
-If an evaluation method uses Large Language Models (LLMs) to measure the performance of the flow response, you're required to set connections for the LLM nodes in the evaluation methods.
> [!NOTE]
-> Some evaluation methods require GPT-4 or GPT-3 to run. You must provide valid connections for these evaluation methods before using them.
+> If your evaluation doesn't require data from the dataset, you don't need to reference any dataset columns in the input mapping section, which means dataset selection is an optional configuration. Dataset selection won't affect the evaluation result.
+
+If an evaluation method uses Large Language Models (LLMs) to measure the performance of the flow response, you're also required to set connections for the LLM nodes in the evaluation methods.
+
+> [!NOTE]
+> Some evaluation methods require GPT-4 or GPT-3 to run. You must provide valid connections for these evaluation methods before using them.
-After you finish the input mapping, select on **"Next"** to review your settings and select on **"Submit"** to start the bulk test with evaluation.
+After you finish the input mapping, select **"Next"** to review your settings, and then select **"Submit"** to start the batch run with evaluation.
## View the evaluation result and metrics
-In the bulk test detail page, you can check the status of the bulk test you submitted. In the **"Evaluation History"** section, you can find the records of the evaluation for this bulk test. The link of the evaluation navigates to the snapshot of the evaluation run that executed for calculating the metrics.
+After submission, you can find the submitted batch run in the run list tab of the prompt flow page. Select a run to navigate to the run detail page.
-When the evaluation run is completed, you can go to the **Outputs** tab in the bulk test detail page to check the outputs/responses generated by the flow with the dataset that you provided. You can also select **"Export"** to export and download the outputs in a .csv file.
+In the run detail page, you can select **Details** to check the details of this batch run.
-You can **select an evaluation run** from the dropdown box and you'll see additional columns appended at the end of the table showing the evaluation result for each row of data. In this screenshot, you can locate the result that is falsely predicted with the output column "grade".
+In the details panel, you can check the metadata of this run. You can also go to the **Outputs** tab in the batch run detail page to check the outputs/responses generated by the flow with the dataset that you provided. You can also select **"Export"** to export and download the outputs in a `.csv` file.
-To view the overall performance, you can select the **"Metrics"** tab, and you can see various metrics that indicate the quality of each variant.
+You can **select an evaluation run** from the dropdown box and you'll see appended columns at the end of the table showing the evaluation result for each row of data. You can locate the result that is falsely predicted with the output column "grade".
-To learn more about the metrics calculated by the built-in evaluation methods, please navigate to [understand the built-in evaluation metrics](#understand-the-built-in-evaluation-metrics).
-## Start a new round of evaluation
+To view the overall performance, you can select the **Metrics** tab, and you can see various metrics that indicate the quality of each variant.
+
+To learn more about the metrics calculated by the built-in evaluation methods, navigate to [understand the built-in evaluation metrics](#understand-the-built-in-evaluation-metrics).
-If you have already completed a bulk test, you can start another round of evaluation to submit a new evaluation run to calculate metrics for the outputs **without running your flow again**. This is helpful and can save your cost to rerun your flow when:
+## Start a new round of evaluation
-- you didn't select an evaluation method to calculate the metrics when submitting the bulk test, and decide to do it now.
+If you have already completed a batch run, you can start another round of evaluation to submit a new evaluation run to calculate metrics for the outputs **without running your flow again**. This is helpful and can save your cost to rerun your flow when:
+
+- you didn't select an evaluation method to calculate the metrics when submitting the batch run, and decide to do it now.
- you have already used evaluation method to calculate a metric. You can start another round of evaluation to calculate another metric. - your evaluation run failed but your flow successfully generated outputs. You can submit your evaluation again.
-You can select **"New evaluation"** to start another round of evaluation. The process is similar to that in submitting bulk test, except that you're asked to specify the output from which variants you would like to evaluate on in this new round.
+You can select **Evaluate** to start another round of evaluation.
+
+After setting up the configuration, you can select **"Submit"** for this new round of evaluation. After submission, you'll be able to see a new record in the prompt flow run list.
-After setting up the configuration, you can select **"Submit"** for this new round of evaluation. After submission, you'll be able to see a new record in the "Evaluation History" Section.
+After the evaluation run completes, you can similarly check the evaluation result in the **"Outputs"** tab of the batch run detail panel. You need to select the new evaluation run to view its result.
-After the evaluation run completed, similarly, you can check the result of evaluation in the **"Output"** tab of the bulk test detail page. You need select the new evaluation run to view its result.
+When multiple different evaluation runs are submitted for a batch run, you can go to the **"Metrics"** tab of the batch run detail page to compare all the metrics.
+## Check batch run history and compare metrics
-When multiple different evaluation runs are submitted for a bulk test, you can go to the **"Metrics"** tab of the bulk test detail page to compare all the metrics.
+In some scenarios, you'll modify your flow to improve its performance. You can submit multiple batch runs to compare the performance of your flow with different versions. You can also compare the metrics calculated by different evaluation methods to see which one is more suitable for your flow.
-## Check bulk test history and compare metrics
+To check the batch run history of your flow, you can select the **"View batch run"** button on the top right corner of your flow page. You'll see a list of batch runs that you have submitted for this flow.
-In some scenarios, you'll modify your flow to improve its performance. You can submit multiple bulk tests to compare the performance of your flow with different versions. You can also compare the metrics calculated by different evaluation methods to see which one is more suitable for your flow.
-To check the bulk test history of your flow, you can select the **"Bulk test"** button on the top right corner of your flow page. You'll see a list of bulk tests that you have submitted for this flow.
+You can select each batch run to check its details. You can also select multiple batch runs and then select **"Visualize outputs"** to compare the metrics and the outputs of these batch runs.
+In the "Visualize output" panel, the **Runs & metrics** table shows the information of the selected runs, with highlights. Other runs that take the outputs of the selected runs as input are also listed.
-You can select on each bulk test to check the detail. You can also select multiple bulk tests and select on the **"Compare Metrics"** to compare the metrics of these bulk tests.
+In the "Outputs" table, you can compare the selected batch runs line by line. By selecting the eye icon in the "Runs & metrics" table, the outputs of that run are appended to the corresponding base run.
## Understand the built-in evaluation metrics
-In Prompt flow, we provide multiple built-in evaluation methods to help you measure the performance of your flow output. Each evaluation method calculates different metrics. Now we provide nine built-in evaluation methods available, you can check the following table for a quick reference:
+In prompt flow, we provide multiple built-in evaluation methods to help you measure the performance of your flow output. Each evaluation method calculates different metrics. Now we provide nine built-in evaluation methods available, you can check the following table for a quick reference:
| Evaluation Method | Metrics | Description | Connection Required | Required Input | Score Value |
|---|---|---|---|---|---|
| Classification Accuracy Evaluation | Accuracy | Measures the performance of a classification system by comparing its outputs to ground truth. | No | prediction, ground truth | in the range [0, 1]. |
| QnA Relevance Scores Pairwise Evaluation | Score, win/lose | Assesses the quality of answers generated by a question answering system. It involves assigning relevance scores to each answer based on how well it matches the user question, comparing different answers to a baseline answer, and aggregating the results to produce metrics such as averaged win rates and relevance scores. | Yes | question, answer (no ground truth or context) | Score: 0-100, win/lose: 1/0 |
| QnA Groundedness Evaluation | Groundedness | Measures how grounded the model's predicted answers are in the input source. Even if the LLM's responses are true, they're considered ungrounded if they can't be verified against the source. | Yes | question, answer, context (no ground truth) | 1 to 5, with 1 being the worst and 5 being the best. |
-| QnA Ada Similarity Evaluation | Similarity | Measures similarity between user-provided ground truth answers and the model predicted answer. | Yes | question, answer, ground truth (context not needed) | in the range [0, 1]. |
+| QnA GPT Similarity Evaluation | GPT Similarity | Measures the similarity between user-provided ground truth answers and the model's predicted answer using a GPT model. | Yes | question, answer, ground truth (context not needed) | in the range [0, 1]. |
| QnA Relevance Evaluation | Relevance | Measures how relevant the model's predicted answers are to the questions asked. | Yes | question, answer, context (no ground truth) | 1 to 5, with 1 being the worst and 5 being the best. |
| QnA Coherence Evaluation | Coherence | Measures the quality of all sentences in a model's predicted answer and how they fit together naturally. | Yes | question, answer (no ground truth or context) | 1 to 5, with 1 being the worst and 5 being the best. |
| QnA Fluency Evaluation | Fluency | Measures how grammatically and linguistically correct the model's predicted answer is. | Yes | question, answer (no ground truth or context) | 1 to 5, with 1 being the worst and 5 being the best. |
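To make the simplest of these metrics concrete, the following sketch shows how an accuracy score in the range [0, 1] can be computed from predictions and ground truth. It's illustrative only and isn't the built-in evaluation flow's implementation.

```python
# Illustrative only: how a classification accuracy score in [0, 1] is derived
# from predictions and ground truth. The built-in evaluation flow may differ.
def classification_accuracy(predictions: list[str], ground_truth: list[str]) -> float:
    if not predictions or len(predictions) != len(ground_truth):
        raise ValueError("predictions and ground_truth must be non-empty and the same length")
    matches = sum(p == g for p, g in zip(predictions, ground_truth))
    return matches / len(predictions)

print(classification_accuracy(["app", "web", "pdf"], ["app", "web", "web"]))  # ~0.67
```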
System message, sometimes referred to as a metaprompt or [system prompt](../../c
## Next steps
-In this document, you learned how to run a bulk test and use a built-in evaluation method to measure the quality of your flow output. You also learned how to view the evaluation result and metrics, and how to start a new round of evaluation with a different method or subset of variants. We hope this document helps you improve your flow performance and achieve your goals with Prompt flow.
+In this document, you learned how to submit a batch run and use a built-in evaluation method to measure the quality of your flow output. You also learned how to view the evaluation result and metrics, and how to start a new round of evaluation with a different method or subset of variants. We hope this document helps you improve your flow performance and achieve your goals with Prompt flow.
- [Develop a customized evaluation flow](how-to-develop-an-evaluation-flow.md) - [Tune prompts using variants](how-to-tune-prompts-using-variants.md)-- [Deploy a flow](how-to-deploy-for-real-time-inference.md)
+- [Deploy a flow](how-to-deploy-for-real-time-inference.md)
machine-learning How To Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-create-manage-runtime.md
Previously updated : 07/14/2023 Last updated : 09/13/2023 # Create and manage runtimes (preview)
Prompt flow's runtime provides the computing resources required for the applicat
> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and it isn't recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Runtime type
-
-You can choose between two types of runtimes for Prompt flow: [managed online endpoint/deployment](../concept-endpoints-online.md) and [compute instance (CI)](../concept-compute-instance.md). Here are some differences between them to help you decide which one suits your needs.
-
-| Runtime type | Managed online deployment runtime | Compute instance runtime |
-|-|--|--|
-| Team shared | Y | N |
-| User isolation | N | Y |
-| OBO/identity support | N | Y |
-| Easily manually customization of environment | N | Y |
-| Multiple runtimes on single resource | N | Y |
-
-If you're new to Prompt flow, we recommend you to start with compute instance runtime first.
- ## Permissions/roles need to use runtime You need to assign enough permission to use runtime in Prompt flow. To assign a role, you need to have `owner` or have `Microsoft.Authorization/roleAssignments/write` permission on resource. -- To create runtime, you need to have `AzureML Data Scientist` role of the workspace. To learn more, see [Prerequisites](#prerequisites)-- To use a runtime in flow authoring, you or identity associate with managed online endpoint need to have `AzureML Data Scientist` role of workspace, `Storage Blob Data Contributor` and `Storage Table Data Contributor` role of workspace default storage. To learn more, see [Grant sufficient permissions to use the runtime](#grant-sufficient-permissions-to-use-the-runtime).
+To create and use a runtime to author prompt flow, you need the `AzureML Data Scientist` role in the workspace. To learn more, see [Prerequisites](#prerequisites).
## Create runtime in UI ### Prerequisites -- Make sure your workspace linked with ACR, you can link an existing ACR when you're creating a new workspace, or you can trigger environment build, which may auto link ACR to Azure Machine Learning workspace. To learn more, see [How to trigger environment build in workspace](#potential-root-cause-and-solution).-- You need `AzureML Data Scientist` role of the workspace to create a runtime.
+- You need the `AzureML Data Scientist` role in the workspace to create a runtime.
> [!IMPORTANT] > Prompt flow is **not supported** in the workspace which has data isolation enabled. The enableDataIsolation flag can only be set at the workspace creation phase and can't be updated. > >Prompt flow is **not supported** in the project workspace which was created with a workspace hub. The workspace hub is a private preview feature. >
-> Prompt flow is **not supported** in workspaces that enable managed VNet. Managed VNet is a private preview feature.
->
->Prompt flow is **not supported** if you secure your Azure AI services account (Azure openAI, Azure cognitive search, Azure content safety) with virtual networks. If you want to use these as connection in prompt flow please allow access from all networks.
### Create compute instance runtime in UI
If you didn't have compute instance, create a new one: [Create and manage an Azu
1. Select create new custom application as runtime. :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-ci-runtime-select-custom-application.png" alt-text="Screenshot of add compute instance runtime with custom application highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-ci-runtime-select-custom-application.png":::
- This is recommended for most users of Prompt flow. The Prompt flow system will create a new custom application on a compute instance as a runtime.
+ This is recommended for most users of Prompt flow. The Prompt flow system creates a new custom application on a compute instance as a runtime.
- To choose the default environment, select this option. This is the recommended choice for new users of Prompt flow. :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add-default-env.png" alt-text="Screenshot of add compute instance runtime with environment highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add-default-env.png":::
- - If you want to install additional packages in your project, you should create a custom environment. To learn how to build your own custom environment, see [Customize environment with docker context for runtime](how-to-customize-environment-runtime.md#customize-environment-with-docker-context-for-runtime).
+ - If you want to install other packages in your project, you should create a custom environment. To learn how to build your own custom environment, see [Customize environment with docker context for runtime](how-to-customize-environment-runtime.md#customize-environment-with-docker-context-for-runtime).
:::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add-custom-env.png" alt-text="Screenshot of add compute instance runtime with customized environment and choose an environment highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add-custom-env.png":::
If you didn't have compute instance, create a new one: [Create and manage an Azu
:::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-ci-existing-custom-application-ui.png" alt-text="Screenshot of add compute instance runtime with custom application dropdown highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-ci-existing-custom-application-ui.png":::
-### Create managed online endpoint runtime in UI
-
-1. Specify the runtime name.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-runtime-name.png" alt-text="Screenshot of add managed online deployment runtime. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-runtime-name.png":::
-
-1. Select existing or create a new deployment as runtime
- 1. Select create new deployment as runtime.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-deployment-new.png" alt-text="Screenshot of add managed online deployment runtime with deployment highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-deployment-new.png":::
-
- There are two options for deployment as runtime: `new` and `existing`. If you choose `new`, we'll create a new deployment for you. If you choose `existing`, you need to provide the name of an existing deployment as runtime.
-
- If you're new to Prompt flow, select `new` and we'll create a new deployment for you.
-
- - Select identity type of endpoint.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-identity.png" alt-text="Screenshot of add managed online deployment runtime with endpoint identity type highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-identity.png":::
-
- You need [assign sufficient permission](#grant-sufficient-permissions-to-use-the-runtime) to system assigned identity or user assigned identity.
-
- To learn more, see [Access Azure resources from an online endpoint with a managed identity](../how-to-access-resources-from-endpoints-managed-identities.md)
-
- - Select environment used for this runtime.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-env.png" alt-text="Screenshot of add managed online deployment runtime wizard on the environment page. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-env.png":::
-
- Follow [Customize environment with docker context for runtime](how-to-customize-environment-runtime.md#customize-environment-with-docker-context-for-runtime) to build your custom environment.
-
- - Choose the appropriate SKU and instance count.
-
- > [!NOTE]
- > For **Virtual machine**, since the Prompt flow runtime is memory-bound, itΓÇÖs better to select a virtual machine SKU with more than 8GB of memory. For the list of supported sizes, see [Managed online endpoints SKU list](../reference-managed-online-endpoints-vm-sku-list.md).
-
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-compute.png" alt-text="Screenshot of add managed online deployment runtime wizard on the compute page. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-compute.png":::
-
- > [!NOTE]
- > Creating a managed online deployment runtime using new deployment may take several minutes.
-
- 1. Select existing deployment as runtime.
-
- - To use an existing managed online deployment as a runtime, you can choose it from the available options. Each runtime corresponds to one managed online deployment.
-
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-existing-deployment.png" alt-text="Screenshot of add managed online deployment runtime wizard on the runtime page. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-existing-deployment.png":::
-
- - You can select from existing endpoint and existing deployment as runtime.
-
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-existing-deployment-select-endpoint.png" alt-text="Screenshot of add managed online deployment runtime on the endpoint page with an endpoint selected. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-existing-deployment-select-endpoint.png":::
-
- - We'll verify that this deployment meets the runtime requirements.
-
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-existing-deployment-select-deployment.png" alt-text="Screenshot of add managed online deployment runtime on the deployment page. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-mir-runtime-existing-deployment-select-deployment.png":::
-
- To learn, see [[how to create managed online deployment, which can be used as Prompt flow runtime](how-to-customize-environment-runtime.md#create-managed-online-deployment-that-can-be-used-as-prompt-flow-runtime).]
- ## Grant sufficient permissions to use the runtime After creating the runtime, you need to grant the necessary permissions to use it.
After creating the runtime, you need to grant the necessary permissions to use i
To assign role, you need to have `owner` or have `Microsoft.Authorization/roleAssignments/write` permission on the resource.
-### Assign built-in roles
-
-To use runtime, assigning the following roles to user (if using Compute instance as runtime) or endpoint (if using managed online endpoint as runtime).
-
-| Resource | Role | Why do I need this? |
-||||
-| Workspace | Azure Machine Learning Data Scientist | Used to write to run history, log metrics |
-| Workspace default ACR | AcrPull | Pull image from ACR |
-| Workspace default storage | Storage Blob Data Contributor | Write intermediate data and tracing data |
-| Workspace default storage | Storage Table Data Contributor | Write intermediate data and tracing data |
-
-You can use this Azure Resource Manager template to assign these roles to your user or endpoint.
-
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fcloga%2Fazure-quickstart-templates%2Flochen%2Fpromptflow%2Fquickstarts%2Fmicrosoft.machinelearningservices%2Fmachine-learning-prompt-flow%2Fassign-built-in-roles%2Fazuredeploy.json)
-
-To find the minimal permissions required, and use an Azure Resource Manager template to create a custom role and assign relevant permissions, visit: [Permissions/roles need to use runtime](./how-to-create-manage-runtime.md#permissionsroles-need-to-use-runtime)
+To use the runtime, assign the `AzureML Data Scientist` role of the workspace to the user (if using a compute instance as runtime) or to the endpoint (if using a managed online endpoint as runtime). To learn more, see [Manage access to an Azure Machine Learning workspace](../how-to-assign-roles.md?view=azureml-api-2&tabs=labeler&preserve-view=true).
-You can also assign these permissions manually through the UI.
--- Select top-right corner to access the Azure Machine Learning workspace detail page.
- :::image type="content" source="./media/how-to-create-manage-runtime/mir-without-acr-runtime-workspace-top-right.png" alt-text="Screenshot of the Azure Machine Learning workspace detail page. " lightbox = "./media/how-to-create-manage-runtime/mir-without-acr-runtime-workspace-top-right.png":::
-- Locate the **default storage account** and **ACR** on the Azure Machine Learning workspace detail page.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-permission-workspace-detail-storage-acr.png" alt-text="Screenshot of Azure Machine Learning workspace detail page with storage account and ACR highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-permission-workspace-detail-storage-acr.png":::
-- Navigate to `access control` to grant the relevant roles to the workspace, storage account, and ACR.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-permission-workspace-access-control.png" alt-text="Screenshot of the access control page highlighting the add role assignment button. " lightbox = "./media/how-to-create-manage-runtime/runtime-permission-workspace-access-control.png":::
-- Select user if you're using compute instance
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-permission-rbac-user.png" alt-text="Screenshot of add role assignment with assign access to highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-permission-rbac-user.png":::
-- Alternatively, choose the managed identity and machine learning online endpoint for the MIR runtime.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-permission-rbac-msi.png" alt-text="Screenshot of add role assignment with assign access to highlighted and managed identity selected. " lightbox = "./media/how-to-create-manage-runtime/runtime-permission-rbac-msi.png":::
-
- > [!NOTE]
- > This operation may take several minutes to take effect.
- > If your compute instance behind VNet, please follow [Compute instance behind VNet](#compute-instance-behind-vnet) to configure the network.
-
-To learn more:
-- [Manage access to an Azure Machine Learning workspace](../how-to-assign-roles.md?view=azureml-api-2&tabs=labeler&preserve-view=true)-- [Assign an Azure role for access to blob data](../../storage/blobs/assign-azure-role-data-access.md?tabs=portal)-- [Azure Container Registry roles and permissions](../../container-registry/container-registry-roles.md?tabs=azure-cli)
+> [!NOTE]
+> This operation may take several minutes to take effect.
-## Using runtime in Prompt flow authoring
+## Using runtime in prompt flow authoring
When you're authoring your Prompt flow, you can select and change the runtime from the top-left corner of the flow page.
Go to runtime detail page and select update button at the top. You can change ne
### Common issues
-#### Failed to perform workspace run operations due to invalid authentication
--
-This means the identity of the managed endpoint doesn't have enough permissions, see [Grant sufficient permissions to use the runtime](#grant-sufficient-permissions-to-use-the-runtime) to grant sufficient permissions to the identity or user.
-
-If you just assigned the permissions, it will take a few minutes to take effect.
- #### My runtime is failed with a system error **runtime not ready** when using a custom environment :::image type="content" source="./media/how-to-create-manage-runtime/ci-failed-runtime-not-ready.png" alt-text="Screenshot of a failed run on the runtime detail page. " lightbox = "./media/how-to-create-manage-runtime/ci-failed-runtime-not-ready.png":::
Error in the example says "UserError: Invoking runtime gega-ci timeout, error me
3. If you can't find anything in runtime logs to indicate it's a specific node issue
- Please contact the Prompt Flow team ([promptflow-eng](mailto:aml-pt-eng@microsoft.com)) with the runtime logs. We'll try to identify the root cause.
+ Contact the Prompt Flow team ([promptflow-eng](mailto:aml-pt-eng@microsoft.com)) with the runtime logs. We'll try to identify the root cause.
### Compute instance runtime related
Go to the compute instance terminal and run `docker logs -<runtime_container_na
#### User doesn't have access to this compute instance. Please check if this compute instance is assigned to you and you have access to the workspace. Additionally, verify that you are on the correct network to access this compute instance. This is because you're cloning a flow from others that uses a compute instance as runtime. Because a compute instance runtime is user isolated, you need to create your own compute instance runtime or select a managed online deployment/endpoint runtime, which can be shared with others.
-#### Compute instance behind VNet
-
-If your compute instance is behind a VNet, you need to make the following changes to ensure that your compute instance can be used in prompt flow:
-- See [required-public-internet-access](../how-to-secure-workspace-vnet.md#required-public-internet-access) to set your compute instance network configuration.-- If your storage account also behind vnet, see [Secure Azure storage accounts](../how-to-secure-workspace-vnet.md#secure-azure-storage-accounts) to create private endpoints for both table and blob.-- Make sure the managed identity of workspace have `Storage Blob Data Contributor`, `Storage Table Data Contributor` roles on the workspace default storage account.-
-> [!NOTE]
-> This only works if your AOAI and other Azure AI services allow access from all networks.
-
-### Managed endpoint runtime related
-
-#### Managed endpoint failed with an internal server error. Endpoint creation was successful, but failed to create deployment for the newly created workspace.
--- Runtime status shows as failed with an internal server error.
- :::image type="content" source="./media/how-to-create-manage-runtime/mir-without-acr-runtime-detail-error.png" alt-text="Screenshot of the runtime status showing failed on the runtime detail page. " lightbox = "./media/how-to-create-manage-runtime/mir-without-acr-runtime-detail-error.png":::
-- Check the related endpoint.
- :::image type="content" source="./media/how-to-create-manage-runtime/mir-without-acr-runtime-detail-endpoint.png" alt-text="Screenshot of the runtime detail page, highlighting the managed endpoint. " lightbox = "./media/how-to-create-manage-runtime/mir-without-acr-runtime-detail-endpoint.png":::
-- Endpoint was created successfully, but there are no deployments created.
- :::image type="content" source="./media/how-to-create-manage-runtime/mir-without-acr-runtime-endpoint-detail.png" alt-text="Screenshot of the endpoint detail page with successful creation. " lightbox = "./media/how-to-create-manage-runtime/mir-without-acr-runtime-endpoint-detail.png":::
-
-##### Potential root cause and solution
-
-The issue may occur when you create a managed endpoint using a system-assigned identity. The system tries to grant ACR pull permission to this identity, but for a newly created workspace, please go to the workspace detail page in Azure to check whether the workspace has a linked ACR.
---
-If there's no ACR, you can create a new custom environment from curated environments on the environment page.
--
-After creating a new custom environment, a linked ACR will be automatically created for the workspace. You can return to the workspace detail page in Azure to confirm.
--
-Delete the failed managed endpoint runtime and create a new one to test.
-
-#### We are unable to connect to this deployment as runtime. Please make sure this deployment is ready to use.
--
-If you encounter with this issue, please check the deployment status and make sure it's build on top of runtime base image.
- ## Next steps - [Develop a standard flow](how-to-develop-a-standard-flow.md)-- [Develop a chat flow](how-to-develop-a-chat-flow.md)
+- [Develop a chat flow](how-to-develop-a-chat-flow.md)
machine-learning How To Custom Tool Package Creation And Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-custom-tool-package-creation-and-usage.md
+
+ Title: Custom tool package creation and usage in Prompt Flow (preview)
+
+description: Learn how to develop your own tool package in Prompt Flow.
+++++++ Last updated : 09/12/2023++
+# Custom tool package creation and usage (preview)
+
+When developing flows, you can not only use the built-in tools provided by Prompt Flow, but also develop your own custom tools. In this article, we'll guide you through the process of developing your own tool package, offering detailed steps and advice on how to use your creation.
+
+> [!IMPORTANT]
+> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and it isn't recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Create your own tool package
+
+Your tool package should be a Python package. To try an existing example, see [my-tools-package 0.0.1](https://pypi.org/project/my-tools-package/) and skip this section.
+
+### Prerequisites
+
+Create a new conda environment using Python 3.9 or 3.10. Run the following command to install Prompt Flow dependencies:
+
+```sh
+# eventually only need to install promptflow
+pip install promptflow-sdk promptflow --extra-index-url https://azuremlsdktestpypi.azureedge.net/promptflow/
+```
+
+Install Pytest packages for running tests:
+
+```sh
+pip install pytest
+pip install pytest-mock
+```
+
+### Create custom tool package
+
+Run the following command under the root folder to quickly create your tool project:
+
+```sh
+python scripts\generate_tool_package_template.py --destination <your-tool-project> --package-name <your-package-name> --tool-name <your-tool-name> --function-name <your-tool-function-name>
+```
+
+For example:
+
+```sh
+python scripts\generate_tool_package_template.py --destination hello-world-proj --package-name hello-world --tool-name hello_world_tool --function-name get_greeting_message
+```
+
+This script automatically generates one tool for you. The parameters _destination_ and _package-name_ are mandatory. The parameters _tool-name_ and _function-name_ are optional. If left unfilled, _tool-name_ defaults to _hello_world_tool_, and _function-name_ defaults to the value of _tool-name_.
+
+The command will generate the tool project as follows with one tool `hello_world_tool.py` in it:
++
+The following points explain the purpose of each folder/file in the package. If you aim to develop multiple tools within your package, pay close attention to the **hello-world/tools** and **hello_world/yamls/hello_world_tool.yaml** bullets:
+
+- **hello-world-proj**: This is the source directory. All of your project's source code should be placed in this directory.
+
+- **hello-world/tools**: This directory contains the individual tools for your project. Your tool package can contain either one tool or many tools. When adding a new tool, you should create another *_tool.py under the `tools` folder.
+
+- **hello-world/tools/hello_world_tool.py**: Develop your tool within the def function. Use the `@tool` decorator to identify the function as a tool (a minimal sketch is shown after this list).
+ > [!Note]
+ > There are two ways to write a tool. The default and recommended way is the function implemented way. You can also use the class implementation way, referring to [my_tool_2.py](https://github.com/Azure/promptflow/blob/main/tool-package-quickstart/my_tool_package/tools/my_tool_2.py) as an example.
+
+- **hello-world/tools/utils.py**: This file implements the tool list method, which collects all the tools defined. It's required to have this tool list method, as it allows the User Interface (UI) to retrieve your tools and display them within the UI.
+
+ > [!Note]
+ > There's no need to create your own list method if you maintain the existing folder structure. You can simply use the auto-generated list method provided in the `utils.py` file.
+
+- **hello_world/yamls/hello_world_tool.yaml**: Tool YAML files define the metadata of the tool. The tool list method, as outlined in `utils.py`, fetches these tool YAMLs.
+
+  You may want to update the `name` and `description` in `your_tool.yaml`, so that the tool has a clear name and description hint in the prompt flow UI.
+
+ > [!Note]
+  > If you create a new tool, don't forget to also create the corresponding tool YAML. You can use the following command under your tool project to auto-generate your tool YAML.
+
+ ```sh
+ python ..\scripts\package_tools_generator.py -m <tool_module> -o <tool_yaml_path>
+ ```
+
+ For example:
+
+ ```sh
+ python ..\scripts\package_tools_generator.py -m hello_world.tools.hello_world_tool -o hello_world\yamls\hello_world_tool.yaml
+ ```
+
+ To populate your tool module, adhere to the pattern `\<package_name\>.tools.\<tool_name\>`, which represents the folder path to your tool within the package.
+
+- **tests**: This directory contains all your tests, though they aren't required for creating your custom tool package. When adding a new tool, you can also create corresponding tests and place them in this directory. Run the following command under your tool project:
+
+ ```sh
+ pytest tests
+ ```
+
+- **MANIFEST.in**: This file is used to determine which files to include in the distribution of the project. Tool YAML files should be included in MANIFEST.in so that your tool YAMLs are packaged and your tools can appear in the UI.
+
+ > [!Note]
+ > There's no need to update this file if you maintain the existing folder structure.
+
+- **setup.py**: This file contains metadata about your project like the name, version, author, and more. Additionally, the entry point is automatically configured for you in the `generate_tool_package_template.py` script. In Python, configuring the entry point in `setup.py` helps establish the primary execution point for a package, streamlining its integration with other software.
+
+ The `package_tools` entry point together with the tool list method are used to retrieve all the tools and display them in the UI.
+
+ ```python
+ entry_points={
+ "package_tools": ["<your_tool_name> = <list_module>:<list_method>"],
+ },
+ ```
+
+ > [!Note]
+ > There's no need to update this file if you maintain the existing folder structure.
+
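+For orientation, here's a minimal sketch of what a packaged tool function can look like. It assumes the `@tool` decorator is imported from the `promptflow` package, as in the quickstart samples referenced above; the `hello_world_tool.py` generated in your own project is the authoritative template.
+
+```python
+# Minimal sketch of a packaged tool function (quickstart-style; assumes the
+# @tool decorator is available from the promptflow package).
+from promptflow import tool
+
+
+@tool
+def get_greeting_message(input_text: str) -> str:
+    # Tool inputs surface as configurable node inputs in the prompt flow UI.
+    return f"Hello {input_text}!"
+```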
+### Build and share the tool package
+
+ Execute the following command in the tool package root directory to build your tool package:
+
+ ```sh
+ python setup.py sdist bdist_wheel
+ ```
+
+ This generates a tool package `<your-package>-0.0.1.tar.gz` and a corresponding `.whl` file inside the `dist` folder.
+
+ [Create an account on PyPI](https://pypi.org/account/register/) if you don't already have one, and install `twine` package by running `pip install twine`.
+
+ Upload your package to PyPI by running `twine upload dist/*`. This command prompts you for your PyPI username and password, and then uploads your package to PyPI. Once your package is uploaded to PyPI, others can install it using pip by running `pip install your-package-name`. Make sure to replace `your-package-name` with the name of your package as it appears on PyPI.
+
+ If you only want to put it on Test PyPI, upload your package by running `twine upload --repository-url https://test.pypi.org/legacy/ dist/*`. Once your package is uploaded to Test PyPI, others can install it using pip by running `pip install --index-url https://test.pypi.org/simple/ your-package-name`.
+
+## Prepare runtime
+
+You can create runtime with CI (Compute Instance) or MIR (Managed Inference Runtime). CI is the recommended way.
+
+### Create customized environment
+
+1. Create a customized environment with docker context.
+
+ 1. Create a customized environment in Azure Machine Learning studio by going to **Environments**, then selecting **Create**. On the settings tab, under *Select environment source*, choose "Create a new docker context".
+
+ Currently, we only support creating an environment with the "Create a new docker context" environment source. "Use existing docker image with optional conda file" has a known [limitation](../how-to-manage-environments-v2.md#create-an-environment-from-a-conda-specification) and isn't supported now.
+
+ :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/create-customized-environment-step-1.png" alt-text="Screenshot of create environment in Azure Machine Learning studio."lightbox = "./media/how-to-custom-tool-package-creation-and-usage/create-customized-environment-step-1.png":::
+
+ 1. Under **Customize**, replace the text in the Dockerfile:
+
+ ```sh
+ FROM mcr.microsoft.com/azureml/promptflow/promptflow-runtime:latest
+ RUN pip install -i https://test.pypi.org/simple/ my-tools-package==0.0.1
+ ```
+
+ :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/create-customized-environment-step-2.png" alt-text="Screenshot of create environment in Azure Machine Learning studio on the customize step."lightbox ="./media/how-to-custom-tool-package-creation-and-usage/create-customized-environment-step-2.png":::
+
+ It takes several minutes to create the environment. After it succeeds, you can copy the Azure Container Registry (ACR) image path from the environment detail page for the next step.
+
+2. Create another environment with inference config. This is needed to support creating a MIR runtime with the customized environment and the deployment scenario.
+
+ >[!Note]
+ > This step can only be done through the CLI; the Azure Machine Learning studio UI doesn't support creating an environment with inference_config today.
+
+ Create an env.yaml file like the following example:
+ >[!Note]
+ > Remember to replace the ACR in the 'image' field.
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
+ name: my-tool-env-with-inference
+
+ # Once the image build succeed in last step, you will see ACR from environment detail page, replace the ACR path here.
+ image: a0e352e5655546debe782dc5cb4a52df.azurecr.io/azureml/azureml_39b1850f1ec09f5500365d2b3be13b96
+
+ description: promptflow environment with custom tool packages
+
+ # make sure the inference_config is specified in yaml, otherwise the endpoint deployment won't work
+ inference_config:
+ liveness_route:
+ port: 8080
+ path: /health
+ readiness_route:
+ port: 8080
+ path: /health
+ scoring_route:
+ port: 8080
+ path: /score
+ ```
+
+ Run Azure Machine Learning CLI to create environment:
+
+ ```cli
+ # optional
+ az login
+
+ # create your environment in workspace
+ az ml environment create --subscription <sub-id> -g <resource-group> -w <workspace> -f env.yaml
+ ```
+
+### Prepare runtime with CI or MIR
+
+3. Create runtime with CI using the customized environment created in step 2.
+ 1. Create a new compute instance. An existing compute instance created a long time ago may hit unexpected issues.
+ 1. Create runtime on CI with customized environment.
+
+ :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/create-runtime-on-compute-instance.png" alt-text="Screenshot of add compute instance runtime in Azure Machine Learning studio."lightbox ="./media/how-to-custom-tool-package-creation-and-usage/create-runtime-on-compute-instance.png":::
+
+4. Create runtime with MIR using the customized environment created in step 2. To learn how to create a runtime with MIR, see [How to create a manage runtime](how-to-create-manage-runtime.md).
+
+ :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/create-runtime-on-managed-inference-runtime.png" alt-text="Screenshot of add managed online deployment runtime in Azure Machine Learning studio."lightbox = "./media/how-to-custom-tool-package-creation-and-usage/create-runtime-on-managed-inference-runtime.png":::
+
+## Test from Prompt Flow UI
+
+>[!Note]
+> Currently you need to append flight `PFPackageTools` after studio url.
+
+1. Create a standard flow.
+2. Select the correct runtime ("my-tool-runtime") and add your tools.
+ :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/test-customer-tool-on-ui-step-1.png" alt-text="Screenshot of flow in Azure Machine Learning studio showing the runtime and more tools dropdown."lightbox ="./media/how-to-custom-tool-package-creation-and-usage/test-customer-tool-on-ui-step-1.png":::
+3. Change flow based on your requirements and run flow in the selected runtime.
+ :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/test-customer-tool-on-ui-step-2.png" alt-text="Screenshot of flow in Azure Machine Learning studio showing adding a tool."lightbox ="./media/how-to-custom-tool-package-creation-and-usage/test-customer-tool-on-ui-step-2.png":::
+
+## Test from VS Code extension
+
+1. Download the latest version [Prompt flow extension](https://ms.portal.azure.com/#view/Microsoft_Azure_Storage/ContainerMenuBlade/~/overview/storageAccountId/%2Fsubscriptions%2F96aede12-2f73-41cb-b983-6d11a904839b%2Fresourcegroups%2Fpromptflow%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fpfvscextension/path/pf-vscode-extension/etag/%220x8DB7169BD91D29C%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride~/false/defaultId//publicAccessVal/None).
+
+2. Install the extension in VS Code via "Install from VSIX":
+ :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/install-vsix.png" alt-text="Screenshot of the VS Code extensions showing install from VSIX under the ellipsis." lightbox = "./media/how-to-custom-tool-package-creation-and-usage/install-vsix.png":::
+
+3. Go to terminal and install your tool package in conda environment of the extension. By default, the conda env name is `prompt-flow`.
+
+ ```sh
+ (local_test) PS D:\projects\promptflow\tool-package-quickstart> conda activate prompt-flow
+ (prompt-flow) PS D:\projects\promptflow\tool-package-quickstart> pip install .\dist\my_tools_package-0.0.1-py3-none-any.whl
+ ```
+
+4. Go to the extension and open one flow folder. Select 'flow.dag.yaml' and preview the flow. Next, select the `+` button and you'll see your tools. You may need to reload the window to clear the previous cache if you don't see your tool in the list.
+
+ :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/auto-list-tool-in-extension.png" alt-text="Screenshot of the VS Code showing the tools." lightbox ="./media/how-to-custom-tool-package-creation-and-usage/auto-list-tool-in-extension.png":::
+
+## FAQ
+
+### Why is my custom tool not showing up in the UI?
+
+- Ensure that you've set the UI flight to `&flight=PFPackageTools`.
+- Confirm that the tool YAML files are included in your custom tool package. You can add the YAML files to [MANIFEST.in](https://github.com/Azure/promptflow/blob/main/tool-package-quickstart/MANIFEST.in) and include the package data in [setup.py](https://github.com/Azure/promptflow/blob/main/tool-package-quickstart/setup.py).
+Alternatively, you can test your tool package using the following script to ensure that you've packaged your tool YAML files and configured the package tool entry point correctly.
+
+ 1. Make sure to install the tool package in your conda environment before executing this script.
+ 2. Create a python file anywhere and copy the following content into it.
+
+ ```python
+ def test():
+ # `collect_package_tools` gathers all tools info using the `package-tools` entry point. This ensures that your package is correctly packed and your tools are accurately collected.
+ from promptflow.core.tools_manager import collect_package_tools
+ tools = collect_package_tools()
+ print(tools)
+ if __name__ == "__main__":
+ test()
+ ```
+
+ 3. Run this script in your conda environment. This will return the metadata of all tools installed in your local environment, and you should verify that your tools are listed.
+- If you're using runtime with CI, try to restart your container with command `docker restart <container_name_or_id>` to see if the issue can be resolved.
+
+### Why am I unable to upload package to PyPI?
+
+- Make sure that the entered username and password of your PyPI account are accurate.
+- If you encounter a `403 Forbidden Error`, it's likely due to a naming conflict with an existing package. You'll need to choose a different name. Package names must be unique on PyPI to avoid confusion and conflicts among users. Before creating a new package, it's recommended to search PyPI (https://pypi.org/) to verify that your chosen name isn't already taken. If the name you want is unavailable, consider selecting an alternative name or a variation that clearly differentiates your package from the existing one.
+
+## Next steps
+
+- Learn more about [customize environment for runtime](how-to-customize-environment-runtime.md)
machine-learning How To Customize Environment Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-customize-environment-runtime.md
Previously updated : 06/30/2023 Last updated : 09/12/2023 # Customize environment for runtime (preview) - > [!IMPORTANT] > Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and it isn't recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
In your local environment, create a folder contains following files, the folder
|--image_build | |--requirements.txt | |--Dockerfile
-| |--environment-build.yaml
| |--environment.yaml ```
RUN pip install -r requirements.txt
> [!NOTE] > This docker image should be built from prompt flow base image that is `mcr.microsoft.com/azureml/promptflow/promptflow-runtime:<newest_version>`. If possible use the [latest version of the base image](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime/tags/list).
-### Step 2: Use Azure Machine Learning environment to build image
+### Step 2: Create custom Azure Machine Learning environment
-#### Define your environment in `environment_build.yaml`
+#### Define your environment in `environment.yaml`
In your local compute, you can use the CLI (v2) to create a customized environment based on your docker image.
az account set --subscription <subscription ID>
az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group> ```
-Open the `environment_build.yaml` file and add the following content. Replace the <environment_name_docker_build> placeholder with your desired environment name.
+Open the `environment.yaml` file and add the following content. Replace the <environment_name> placeholder with your desired environment name.
```yaml $schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
-name: <environment_name_docker_build>
+name: <environment_name>
build: path: . ```
build:
```bash cd image_build az login(optional)
-az ml environment create -f environment_build.yaml --subscription <sub-id> -g <resource-group> -w <workspace>
+az ml environment create -f environment.yaml --subscription <sub-id> -g <resource-group> -w <workspace>
``` > [!NOTE] > Building the image may take several minutes.
-#### Locate the image in ACR
-
-Go to the environment page to find the built image in your workspace Azure Container Registry (ACR).
--
-Find the image in ACR.
-
+Go to your workspace UI page, then go to the **environment** page, and locate the custom environment you created. You can now use it to create a runtime in your Prompt flow. To learn more, see [Create compute instance runtime in UI](how-to-create-manage-runtime.md#create-compute-instance-runtime-in-ui).
-> [!IMPORTANT]
-> Make sure the `Environment image build status` is `Succeeded` before using it in the next step.
-
-### Step 3: Create a custom Azure Machine Learning environment for runtime
-
-Open the `environment.yaml` file and add the following content. Replace the `<environment_name>` placeholder with your desired environment name and change `<image_build_in_acr>` to the ACR image found in the step 2.3.
-
-```yaml
-$schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
-name: <environment_name>
-image: <image_build_in_acr>
-inference_config:
- liveness_route:
- port: 8080
- path: /health
- readiness_route:
- port: 8080
- path: /health
- scoring_route:
- port: 8080
- path: /score
-```
-
-Using following CLI command to create the environment:
-
-```bash
-cd image_build # optional if you already in this folder
-az login(optional)
-az ml environment create -f environment.yaml --subscription <sub-id> -g <resource-group> -w <workspace>
-```
-
-Go to your workspace UI page, go to the `environment` page, and locate the custom environment you created. You can now use it to create a runtime in your Prompt flow. To learn more, see:
--- [Create compute instance runtime in UI](how-to-create-manage-runtime.md#create-compute-instance-runtime-in-ui)-- [Create managed online endpoint runtime in UI](how-to-create-manage-runtime.md#create-managed-online-endpoint-runtime-in-ui)-
-To Learn more about environment CLI, see [Manage environments](../how-to-manage-environments-v2.md#manage-environments).
+To learn more about environment CLI, see [Manage environments](../how-to-manage-environments-v2.md#manage-environments).
## Create a custom application on compute instance that can be used as Prompt flow runtime
Follow [this document to add custom application](../how-to-create-compute-instan
:::image type="content" source="./media/how-to-customize-environment-runtime/runtime-creation-add-custom-application-ui.png" alt-text="Screenshot of compute showing custom applications. " lightbox = "./media/how-to-customize-environment-runtime/runtime-creation-add-custom-application-ui.png":::
-## Create managed online deployment that can be used as Prompt flow runtime
+## Create managed online deployment that can be used as Prompt flow runtime (deprecated)
+
+> [!IMPORTANT]
+> Managed online endpoint/deployment as runtime is **deprecated**. See the [migration guide for managed online endpoint/deployment runtime](./migrate-managed-inference-runtime.md).
### Create managed online deployment that can be used as Prompt flow runtime via CLI v2
deployment:
You need to replace the following placeholders with your own values: - `ENDPOINT_NAME`: the name of the endpoint you created in the previous step-- `PRT_CONFIG_FILE`: the name of the config file that contains the port and runtime settings
+- `PRT_CONFIG_FILE`: the name of the config file that contains the port and runtime settings. Include the parent model folder name; for example, if the model folder name is `model`, then the config file name should be `model/config.yaml`.
- `IMAGE_NAME` to name of your own image, for example: `mcr.microsoft.com/azureml/promptflow/promptflow-runtime:<newest_version>`, you can also follow [Customize environment with docker context for runtime](#customize-environment-with-docker-context-for-runtime) to create your own environment. ```yaml
environment:
Use following CLI command `az ml online-deployment create -f <yaml_file> -g <resource_group> -w <workspace_name>` to create managed online deployment that can be used as a Prompt flow runtime.
-Follow [Create managed online endpoint runtime in UI](how-to-create-manage-runtime.md#create-managed-online-endpoint-runtime-in-ui) to select this deployment as Prompt flow runtime.
- ## Next steps - [Develop a standard flow](how-to-develop-a-standard-flow.md)
machine-learning How To Deploy For Real Time Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-for-real-time-inference.md
Previously updated : 07/07/2023 Last updated : 09/12/2023
If you didn't complete the tutorial, you need to build a flow. Testing the flow
We'll use the sample flow **Web Classification** as an example to show how to deploy the flow. This sample flow is a standard flow. Deploying chat flows is similar. Evaluation flows don't support deployment.
-> [!NOTE]
-> Currently Prompt flow only supports **single deployment** of managed online endpoints, so we will simplify the *deployment* configuration in the UI.
- ## Create an online endpoint Now that you have built a flow and tested it properly, it's time to create your online endpoint for real-time inference.
Select the identity you want to use, and you'll notice a warning message to remi
See detailed guidance about how to grant permissions to the endpoint identity in [Grant permissions to the endpoint](#grant-permissions-to-the-endpoint).
-#### Allow sharing sample input data for testing purpose only
+### Deployment
+
+In this step, you can specify the following properties:
+
+|Property| Description |
+||--|
+|Deployment name| - Within the same endpoint, deployment name should be unique. <br> - If you select an existing endpoint in the previous step, and input an existing deployment name, then that deployment will be overwritten with the new configurations. |
+|Inference data collection| If you enable this, the flow inputs and outputs will be auto collected in an Azure Machine Learning data asset, and can be used for later monitoring. To learn more, see [model monitoring.](how-to-monitor-generative-ai-applications.md)|
+|Application Insights diagnostics| If you enable this, system metrics during inference time (such as token count, flow latency, flow request, and so on) will be collected into the workspace default Application Insights. To learn more, see [prompt flow serving metrics](#view-prompt-flow-endpoints-specific-metrics-optional).|
-If the checkbox is selected, the first row of your input data will be used as sample input data for testing the endpoint later.
### Outputs
The `chat_input` was set during development of the chat flow. You can input the
On the endpoint detail page, switch to the **Consume** tab. You can find the REST endpoint and key/token to consume your endpoint. There's also sample code for you to consume the endpoint in different languages.
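If you prefer to script the call rather than copy the studio sample, the following sketch shows the general pattern for invoking a managed online endpoint with key authentication. The scoring URL, key, and the `url` payload field (matching the Web Classification sample's input) are placeholders you take from the **Consume** tab; adjust the payload keys to your flow's inputs.

```python
import json
import urllib.request

# Placeholder values copied from the endpoint's Consume tab.
scoring_url = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"
api_key = "<your-endpoint-key>"

# Payload keys must match your flow's inputs; the Web Classification sample takes a URL.
payload = {"url": "https://www.microsoft.com"}

request = urllib.request.Request(
    scoring_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    },
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))
```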
-## View metrics using Azure Monitor (optional)
+## View endpoint metrics
+
+### View managed online endpoints common metrics using Azure Monitor (optional)
You can view various metrics (request numbers, request latency, network bytes, CPU/GPU/Disk/Memory utilization, and more) for an online endpoint and its deployments by following links from the endpoint's **Details** page in the studio. Following these links take you to the exact metrics page in the Azure portal for the endpoint or deployment.
You can view various metrics (request numbers, request latency, network bytes, C
For more information on how to view online endpoint metrics, see [Monitor online endpoints](../how-to-monitor-online-endpoints.md#metrics).
+### View prompt flow endpoints specific metrics (optional)
+
+If you enable **Application Insights diagnostics** in the UI deploy wizard, or set `app_insights_enabled=true` in the deployment definition using code, the following prompt flow endpoint-specific metrics will be collected in the workspace default Application Insights.
+
+| Metrics Name | Type | Dimensions | Description |
+|--|--|-||
+| token_consumption | counter | - flow <br> - node<br> - llm_engine<br> - token_type: `prompt_tokens`: LLM API input tokens; `completion_tokens`: LLM API response tokens; `total_tokens` = `prompt_tokens` + `completion_tokens` | OpenAI token consumption metrics |
+| flow_latency | histogram | flow,response_code,streaming,response_type| request execution cost, response_type means whether it's full/firstbyte/lastbyte|
+| flow_request | counter | flow,response_code,exception,streaming | flow request count |
+| node_latency | histogram | flow,node,run_status | node execution cost |
+| node_request | counter | flow,node,exception,run_status | node execution failure count |
+| rpc_latency | histogram | flow,node,api_call | rpc cost |
+| rpc_request | counter | flow,node,api_call,exception | rpc count |
+| flow_streaming_response_duration | histogram | flow | streaming response sending cost, from sending first byte to sending last byte |
+
+You can find the workspace default Application Insights in your workspace page in Azure portal.
++
+Open the Application Insights, and select **Usage and estimated costs** from the left navigation. Select **Custom metrics (Preview)**, and select **With dimensions**, and save the change.
++
+Select **Metrics** tab in the left navigation. Select **promptflow standard metrics** from the **Metric Namespace**, and you can explore the metrics from the **Metric** dropdown list with different aggregation methods.
++ ## Troubleshoot endpoints deployed from prompt flow ### Unable to fetch deployment schema
If you aren't going use the endpoint after completing this tutorial, you should
> [!NOTE] > The complete deletion may take approximately 20 minutes. -- ## Next Steps - [Iterate and optimize your flow by tuning prompts using variants](how-to-tune-prompts-using-variants.md)
machine-learning How To Deploy To Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-to-code.md
+
+ Title: Deploy a flow to online endpoint for real-time inference with CLI (preview)
+
+description: Learn how to deploy your flow to a managed online endpoint or Kubernetes online endpoint in Azure Machine Learning prompt flow.
+++++++ Last updated : 09/12/2023++
+# Deploy a flow to online endpoint for real-time inference with CLI (preview)
+
+In this article, you'll learn to deploy your flow to a [managed online endpoint](../concept-endpoints-online.md#managed-online-endpoints-vs-kubernetes-online-endpoints) or a [Kubernetes online endpoint](../concept-endpoints-online.md#managed-online-endpoints-vs-kubernetes-online-endpoints) for use in real-time inferencing with Azure Machine Learning v2 CLI.
+
+Before beginning, make sure that you've tested your flow properly and feel confident that it's ready to be deployed to production. To learn more about testing your flow, see [test your flow](how-to-bulk-test-evaluate-flow.md). After testing your flow, you'll learn how to create a managed online endpoint and deployment, and how to use the endpoint for real-time inferencing.
+
+- For the **CLI** experience, all the sample yaml files can be found in the [Prompt flow CLI GitHub folder](https://aka.ms/pf-deploy-mir-cli). This article will cover how to use the CLI experience.
+- For the **Python SDK** experience, the sample notebook is in the [Prompt flow SDK GitHub folder](https://aka.ms/pf-deploy-mir-sdk). The Python SDK isn't covered in this article; see the GitHub sample notebook instead. To use the Python SDK, you must have the Python SDK v2 for Azure Machine Learning. To learn more, see [Install the Python SDK v2 for Azure Machine Learning](/python/api/overview/azure/ai-ml-readme).
+
+> [!IMPORTANT]
+> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- The Azure CLI and the Azure Machine Learning extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](../how-to-configure-cli.md).
+- An Azure Machine Learning workspace. If you don't have one, use the steps in the [Quickstart: Create workspace resources article](../quickstart-create-resources.md) to create one.
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the owner or contributor role for the Azure Machine Learning workspace, or a custom role allowing "Microsoft.MachineLearningServices/workspaces/onlineEndpoints/". If you use studio to create/manage online endpoints/deployments, you will need an additional permission "Microsoft.Resources/deployments/write" from the resource group owner. For more information, see [Manage access to an Azure Machine Learning workspace](../how-to-assign-roles.md).
+
+### Virtual machine quota allocation for deployment
+
+For managed online endpoints, Azure Machine Learning reserves 20% of your compute resources for performing upgrades. Therefore, if you request a given number of instances in a deployment, you must have a quota for `ceil(1.2 * number of instances requested for deployment) * number of cores for the VM SKU` available to avoid getting an error. For example, if you request 10 instances of a Standard_DS3_v2 VM (that comes with four cores) in a deployment, you should have a quota for 48 cores (12 instances * 4 cores) available. To view your usage and request quota increases, see [View your usage and quotas in the Azure portal](../how-to-manage-quotas.md#view-your-usage-and-quotas-in-the-azure-portal).
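+The reserved headroom is easy to check with a quick calculation. The helper below simply restates the formula above; it isn't part of any Azure SDK.
+
+```python
+import math
+
+
+def required_quota_cores(requested_instances: int, cores_per_vm: int) -> int:
+    # Azure Machine Learning keeps an extra 20% of instances in reserve for upgrades.
+    return math.ceil(1.2 * requested_instances) * cores_per_vm
+
+
+print(required_quota_cores(10, 4))  # 48 cores for 10 Standard_DS3_v2 instances
+```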
+
+## Get the flow ready for deploy
+
+Each flow has a folder that contains the code/prompts, definition, and other artifacts of the flow. If you developed your flow with the UI, you can download the flow folder from the flow details page. If you developed your flow with the CLI or SDK, you should already have the flow folder.
+
+This article uses the [sample flow "basic-chat"](https://github.com/Azure/azureml-examples/examples/flows/chat/basic-chat) as an example to deploy to an Azure Machine Learning managed online endpoint.
+
+> [!IMPORTANT]
+>
+> If you have used `additional_includes` in your flow, then you need to use `pf flow build --source <path-to-flow> --output <output-path> --format docker` first to get a resolved version of the flow folder.
+
+## Set default workspace
+
+Use the following commands to set the default workspace and resource group for the CLI.
+
+```Azure CLI
+az account set --subscription <subscription ID>
+az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
+```
+
+## Register the flow as a model (optional)
+
+In the online deployment, you can either refer to a registered model or specify the model path (where to upload the model files from) inline. It's recommended to register the model and specify the model name and version in the deployment definition. Use the form `azureml:<model_name>:<version>`.
+
+Following is a model definition example.
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/model.schema.json
+name: basic-chat-model
+path: ../../../../examples/flows/chat/basic-chat
+description: register basic chat flow folder as a custom model
+properties:
+ # In Azure ML studio UI, endpoint detail UI Test tab needs this property to know it's from prompt flow
+ azureml.promptflow.source_flow_id: basic-chat
+
+ # Following are properties only for chat flow
+ # endpoint detail UI Test tab needs this property to know it's a chat flow
+ azureml.promptflow.mode: chat
+ # endpoint detail UI Test tab needs this property to know which is the input column for chat flow
+ azureml.promptflow.chat_input: question
+ # endpoint detail UI Test tab needs this property to know which is the output column for chat flow
+ azureml.promptflow.chat_output: answer
+```
++
+## Define the endpoint
+
+To define an endpoint, you need to specify:
+
+- **Endpoint name**: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](../how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
+- **Authentication mode**: The authentication method for the endpoint. Choose between key-based authentication and Azure Machine Learning token-based authentication. A key doesn't expire, but a token does expire. For more information on authenticating, see [Authenticate to an online endpoint](../how-to-authenticate-online-endpoint.md).
+- Optionally, you can add a description and tags to your endpoint.
+- If you want to deploy to a Kubernetes cluster (AKS or Arc enabled cluster) that is attached to your workspace, you can deploy the flow as a **Kubernetes online endpoint**.
+
+Following is an endpoint definition example.
+
+# [Managed online endpoint](#tab/managed)
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
+name: basic-chat-endpoint
+auth_mode: key
+```
+
+# [Kubernetes online endpoint](#tab/kubernetes)
++
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/kubernetesOnlineEndpoint.schema.json
+name: basic-chat-endpoint
+compute: azureml:<Kubernetes compute name>
+auth_mode: key
+```
+++
+| Key | Description |
+|--|--|
+| `$schema` | (Optional) The YAML schema. To see all available options in the YAML file, you can view the schema in the preceding code snippet in a browser. |
+| `name` | The name of the endpoint. |
+| `auth_mode` | Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. To get the most recent token, use the `az ml online-endpoint get-credentials` command. |
++
+If you create a Kubernetes online endpoint, you need to specify the following additional attributes:
+
+| Key | Description |
+|--|-|
+| `compute` | The Kubernetes compute target to deploy the endpoint to. |
+
+> [!IMPORTANT]
+>
+> By default, when you create an online endpoint, a system-assigned managed identity is automatically generated for you. You can also specify an existing user-assigned managed identity for the endpoint.
+> You need to grant permissions to your endpoint identity so that it can access the Azure resources to perform inference. See [Grant permissions to your endpoint identity](how-to-deploy-for-real-time-inference.md#grant-permissions-to-the-endpoint) for more information.
+>
+> For more configurations of endpoint, see [managed online endpoint schema](../reference-yaml-endpoint-online.md).
+
+### Define the deployment
+
+A deployment is a set of resources required for hosting the model that does the actual inferencing. To deploy a flow, you must have:
+
+- **Model files (or the name and version of a model that's already registered in your workspace).** In this example, the model is the registered flow folder `basic-chat-model`.
+- **A scoring script**, that is, code that executes the model on a given input request. For a prompt flow deployment, the serving container built from the environment's `image` and `inference_config` handles scoring, so you don't provide a separate scoring script.
+- **An environment** in which your model runs. The environment can be a Docker image with Conda dependencies or a Dockerfile.
+- **Settings** to specify the instance type and scaling capacity.
+
+Following is a deployment definition example.
+
+# [Managed online endpoint](#tab/managed)
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
+name: blue
+endpoint_name: basic-chat-endpoint
+model: azureml:basic-chat-model:1
+ # You can also specify model files path inline
+ # path: examples/flows/chat/basic-chat
+environment:
+ image: mcr.microsoft.com/azureml/promptflow/promptflow-runtime:20230831.v1
+ # inference config is used to build a serving container for online deployments
+ inference_config:
+ liveness_route:
+ path: /health
+ port: 8080
+ readiness_route:
+ path: /health
+ port: 8080
+ scoring_route:
+ path: /score
+ port: 8080
+instance_type: Standard_E16s_v3
+instance_count: 1
+environment_variables:
+
+ # "compute" mode is the default mode, if you want to deploy to serving mode, you need to set this env variable to "serving"
+ PROMPTFLOW_RUN_MODE: serving
+
+ # for pulling connections from workspace
+ PRT_CONFIG_OVERRIDE: deployment.subscription_id=<subscription_id>,deployment.resource_group=<resource_group>,deployment.workspace_name=<workspace_name>,deployment.endpoint_name=<endpoint_name>,deployment.deployment_name=<deployment_name>
+
+ # (Optional) When there are multiple fields in the response, using this env variable will filter the fields to expose in the response.
+ # For example, if there are 2 flow outputs: "answer", "context", and I only want to have "answer" in the endpoint response, I can set this env variable to '["answer"]'.
+ # If you don't set this environment, by default all flow outputs will be included in the endpoint response.
+ # PROMPTFLOW_RESPONSE_INCLUDED_FIELDS: '["category", "evidence"]'
+```
+
+# [Kubernetes online endpoint](#tab/kubernetes)
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/kubernetesOnlineDeployment.schema.json
+name: blue
+type: kubernetes
+endpoint_name: basic-chat-endpoint
+model: azureml:basic-chat-model:1
+ # You can also specify model files path inline
+ # path: examples/flows/chat/basic-chat
+environment:
+ image: mcr.microsoft.com/azureml/promptflow/promptflow-runtime:20230831.v1
+ # inference config is used to build a serving container for online deployments
+ inference_config:
+ liveness_route:
+ path: /health
+ port: 8080
+ readiness_route:
+ path: /health
+ port: 8080
+ scoring_route:
+ path: /score
+ port: 8080
+instance_type: <kubernetes custom instance type>
+instance_count: 1
+environment_variables:
+
+ # "compute" mode is the default mode, if you want to deploy to serving mode, you need to set this env variable to "serving"
+ PROMPTFLOW_RUN_MODE: serving
+
+ # for pulling connections from workspace
+ PRT_CONFIG_OVERRIDE: deployment.subscription_id=<subscription_id>,deployment.resource_group=<resource_group>,deployment.workspace_name=<workspace_name>,deployment.endpoint_name=<endpoint_name>,deployment.deployment_name=<deployment_name>
+
+ # (Optional) When there are multiple fields in the response, using this env variable will filter the fields to expose in the response.
+ # For example, if there are 2 flow outputs: "answer", "context", and I only want to have "answer" in the endpoint response, I can set this env variable to '["answer"]'.
+ # If you don't set this environment, by default all flow outputs will be included in the endpoint response.
+ # PROMPTFLOW_RESPONSE_INCLUDED_FIELDS: '["category", "evidence"]'
+```
+++
+| Attribute | Description |
+|--|--|
+| Name | The name of the deployment. |
+| Endpoint name | The name of the endpoint to create the deployment under. |
+| Model | The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. |
+| Environment | The environment to host the model and code. It contains: <br> - `image`<br> - `inference_config`: used to build a serving container for online deployments, including `liveness_route`, `readiness_route`, and `scoring_route`. |
+| Instance type | The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](../reference-managed-online-endpoints-vm-sku-list.md). |
+| Instance count | The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](../how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). |
+| Environment variables | The following environment variables need to be set for endpoints deployed from a flow: <br> - (required) `PROMPTFLOW_RUN_MODE: serving`: specifies the serving mode <br> - (required) `PRT_CONFIG_OVERRIDE`: for pulling connections from the workspace <br> - (optional) `PROMPTFLOW_RESPONSE_INCLUDED_FIELDS`: when there are multiple fields in the response, this variable filters the fields to expose in the response. <br> For example, if there are two flow outputs, "answer" and "context", and you only want "answer" in the endpoint response, you can set this variable to '["answer"]'. |
+
+If you create a Kubernetes online deployment, you need to specify the following additional attributes:
+
+| Attribute | Description |
+|--|--|
+| Type | The type of the deployment. Set the value to `kubernetes`. |
+| Instance type | The instance type you have created in your Kubernetes cluster to use for the deployment. It represents the request/limit compute resources of the deployment. For more detail, see [Create and manage instance types](../how-to-manage-kubernetes-instance-types.md). |
+
+### Deploy your online endpoint to Azure
+
+To create the endpoint in the cloud, run the following code:
+
+```Azure CLI
+az ml online-endpoint create --file endpoint.yml
+```
+
+To create the deployment named `blue` under the endpoint, run the following code:
+
+```Azure CLI
+az ml online-deployment create --file blue-deployment.yml --all-traffic
+```
+
+This deployment might take up to 20 minutes, depending on whether the underlying environment or image is being built for the first time. Subsequent deployments that use the same environment will finish processing more quickly.
+
+> [!TIP]
+>
+> If you prefer not to block your CLI console, you may add the flag `--no-wait` to the command. However, this will stop the interactive display of the deployment status.
+
+> [!IMPORTANT]
+>
+> The `--all-traffic` flag in the above `az ml online-deployment create` allocates 100% of the endpoint traffic to the newly created blue deployment. Though this is helpful for development and testing purposes, for production, you might want to open traffic to the new deployment through an explicit command. For example, `az ml online-endpoint update -n $ENDPOINT_NAME --traffic "blue=100"`.
+
+### Check status of the endpoint and deployment
+
+To check the status of the endpoint, run the following code:
+
+```Azure CLI
+az ml online-endpoint show -n basic-chat-endpoint
+```
+
+To check the status of the deployment, run the following code:
+
+```Azure CLI
+az ml online-deployment get-logs --name blue --endpoint basic-chat-endpoint
+```
+
+### Invoke the endpoint to score data by using your model
+
+```Azure CLI
+az ml online-endpoint invoke --name basic-chat-endpoint --request-file endpoints/online/model-1/sample-request.json
+```
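+If you prefer to call the endpoint's REST API directly, the following Python sketch shows the general shape of a scoring request for the basic chat flow. The scoring URI, key, and payload fields are placeholders you must replace with your own values, and the payload must match your flow's actual inputs:
+
+```python
+import requests
+
+# Replace with your endpoint's scoring URI and key
+scoring_uri = "https://<your-endpoint>.inference.ml.azure.com/score"
+api_key = "<your-endpoint-key>"
+
+headers = {
+    "Content-Type": "application/json",
+    "Authorization": f"Bearer {api_key}",
+}
+# Payload shape for the basic chat flow; adjust the keys to your flow's inputs
+payload = {"question": "Hello", "chat_history": []}
+
+response = requests.post(scoring_uri, headers=headers, json=payload)
+response.raise_for_status()
+print(response.json())
+```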
+
+## Next steps
+
+- Learn more about [managed online endpoint schema](../reference-yaml-endpoint-online.md) and [managed online deployment schema](../reference-yaml-deployment-managed-online.md).
+- Learn more about how to [troubleshoot managed online endpoints](../how-to-troubleshoot-online-endpoints.md).
+- Once you've improved your flow and would like to deploy the improved version with a safe rollout strategy, see [Safe rollout for online endpoints](../how-to-safely-rollout-online-endpoints.md).
machine-learning How To Develop A Chat Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-develop-a-chat-flow.md
description: Learn how to develop a chat flow in Prompt flow that can easily create a chatbot that handles chat input and output with Azure Machine Learning studio. -+ Previously updated : 06/30/2023 Last updated : 09/12/2023 # Develop a chat flow
To create a chat flow, you can **either** clone an existing chat flow sample fro
:::image type="content" source="./media/how-to-develop-a-chat-flow/create-chat-flow.png" alt-text="Screenshot of the Prompt flow gallery highlighting chat flow and Chat with Wikipedia. " lightbox = "./media/how-to-develop-a-chat-flow/create-chat-flow.png":::
+After selecting **Clone**, as shown in the right panel, the new flow will be saved in a specific folder within your workspace file share storage. You can customize the folder name according to your preferences.
++
+## Develop a chat flow
+
+### Authoring page
In the chat flow authoring page, the chat flow is tagged with a "chat" label to distinguish it from standard flows and evaluation flows. To test the chat flow, select the "Chat" button to open a chat box for conversation. :::image type="content" source="./media/how-to-develop-a-chat-flow/chat-input-output.png" alt-text="Screenshot of Chat with Wikipedia with the chat button highlighted. " lightbox = "./media/how-to-develop-a-chat-flow/chat-input-output.png":::
-## Develop a chat flow
+On the left is the flatten view, the main working area where you can author the flow, for example, add a new node, edit the prompt, or select the flow input data.
++
+The top right corner shows the folder structure of the flow. Each flow has a folder that contains a flow.dag.yaml file, source code files, and system folders. You can export or import a flow easily for testing, deployment, or collaborative purposes.
++++
+The bottom right corner shows the graph view, which is for visualization only. You can zoom in, zoom out, apply auto layout, and so on.
+ ### Develop flow inputs and outputs
machine-learning How To Develop A Standard Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-develop-a-standard-flow.md
description: learn how to develop the standard flow in the authoring page in Prompt flow with Azure Machine Learning studio. -+ Previously updated : 06/30/2023 Last updated : 09/12/2023 # Develop a standard flow (preview)
In the Prompt flow homepage, you can create a standard flow
:::image type="content" source="./media/how-to-develop-a-standard-flow/flow-create-standard.png" alt-text="Screenshot of the Prompt flow home page showing create a new flow with standard flow highlighted. " lightbox = "./media/how-to-develop-a-standard-flow/flow-create-standard.png":::
-## Authoring page - flatten view and graph view
+After selecting **Create**, as shown in the right panel, the new flow will be saved in a specific folder within your workspace file share storage. You can customize the folder name according to your preferences.
++
+## Authoring page
After the creation, you'll enter the authoring page for flow developing.
-At the left, it's the flatten view, the main working area where you can author the flow, for example add tools in your flow, edit the prompt, set the flow input data, run your flow, view the output, etc.
+On the left is the flatten view, the main working area where you can author the flow, for example, add a new node, edit the prompt, or select the flow input data.
++
+The top right corner shows the folder structure of the flow. Each flow has a folder that contains a flow.dag.yaml file, source code files, and system folders. You can export or import a flow easily for testing, deployment, or collaborative purposes.
+
-At the right, it's the graph view for visualization only. It shows the flow structure you're developing, including the tools and their links. You can zoom in, zoom out, auto layout, etc.
+
+The bottom right corner shows the graph view, which is for visualization only. You can zoom in, zoom out, apply auto layout, and so on.
> [!NOTE] > You cannot edit the graph view. To edit one tool node, you can double-click the node to locate the corresponding tool card in the flatten view, then do the inline edit. + ## Select runtime
At the top of each tool node card, there's a toolbar for adjusting the tool node
In the LLM tool, select Connection to choose one to set the LLM key or credential. ### Prompt and python code inline edit In the LLM tool and python tool, you can edit the prompt or code inline. Go to the card in the flatten view, select the prompt section or code section, then make your change there. ### Validate and run
The LLM node has only one output, the completion given by the LLM provider.
As for inputs, we offer a templating strategy that can help you create parametric prompts that accept different input values. Instead of fixed text, enclose your input name in `{{}}`, so it can be replaced on the fly. We use **Jinja** as our templating language.
-Select **Edit** next to prompt box to define inputs using `{{input_name}}`.
+Edit the prompt box to define inputs using `{{input_name}}`.
:::image type="content" source="./media/how-to-develop-a-standard-flow/flow-input-interface.png" alt-text="Screenshot of editing the prompt box to define inputs. " lightbox = "./media/how-to-develop-a-standard-flow/flow-input-interface.png":::
Below are common scenarios for linking nodes together.
### Scenario 1 - Link LLM node with flow input 1. Add a new LLM node, rename it with a meaningful name, specify the connection and API type.
-2. Select **Edit** next to the prompt box, add an input by `{{url}}`, then you'll see an input called URL is created in inputs section.
+2. Edit the prompt box, add an input by `{{url}}`, and select **Validate and parse input**. You'll then see an input called URL created in the inputs section.
3. In the value drop-down, select ${inputs.url}, then you'll see in the graph view that the newly created LLM node is linked to the flow input. When running the flow, the URL input of the node will be replaced by flow input on the fly. + ### Scenario 2 - Link LLM node with single-output upstream node
-1. Select **Edit** next to the prompt box, add another input by `{{summary}}`, then you'll see an input called summary is created in inputs section.
+1. Edit the prompt box, add another input by `{{summary}}`, and select **Validate and parse input**. You'll then see an input called summary created in the inputs section.
2. In the value drop-down, select ${summarize_text_content.output}, then you'll see in the graph view that the newly created LLM node is linked to the upstream summarize_text_content node. When running the flow, the summary input of the node will be replaced by summarize_text_content node output on the fly. - We support search and autosuggestion here in the drop-down. You can search by node name if you have many nodes in the flow. You can also navigate to the node you want to link with, copy the node name, navigate back to the newly created LLM node, paste in the input value field. ### Scenario 3 - Link LLM node with multi-output upstream node
First define flow output schema, then select in drop-down the node whose output
:::image type="content" source="./media/how-to-develop-a-standard-flow/flow-output-check.png" alt-text="Screenshot of Web Classification highlighting the view outputs button. " lightbox = "./media/how-to-develop-a-standard-flow/flow-output-check.png"::: ## Next steps
machine-learning How To Develop An Evaluation Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-develop-an-evaluation-flow.md
Title: Develop an evaluation flow in Prompt flow (preview)
-description: Learn how to customize or create your own evaluation flow tailored to your tasks and objectives, and then use in a bulk test as an evaluation method in Prompt flow with Azure Machine Learning studio.
+description: Learn how to customize or create your own evaluation flow tailored to your tasks and objectives, and then use in a batch run as an evaluation method in Prompt flow with Azure Machine Learning studio.
Previously updated : 06/30/2023 Last updated : 09/12/2023 # Develop an evaluation flow (preview)
-Evaluation flows are special types of flows that assess how well the outputs of a flow align with specific criteria and goals.
+Evaluation flows are special types of flows that assess how well the outputs of a run align with specific criteria and goals.
-In Prompt flow, you can customize or create your own evaluation flow tailored to your tasks and objectives, and then use in a bulk test as an evaluation method. This document you'll learn:
+In Prompt flow, you can customize or create your own evaluation flow tailored to your tasks and objectives, and then use it in a batch run as an evaluation method. In this document, you'll learn:
- How to develop an evaluation method - Customize built-in evaluation Method
There are two ways to develop your own evaluation methods:
The process of customizing and creating evaluation methods is similar to that of a standard flow.
-### Customize built-in evaluation method to measure the performance of a flow
+### Customize a built-in evaluation method to measure the performance of a flow
-Find the built-in evaluation methods by selecting the **"Create"** button on the homepage and navigating to the Create from gallery -\> Evaluation tab. View more details about the evaluation method by selecting **"View details"**.
-
+Find the built-in evaluation methods by selecting the **"Create"** button on the homepage and navigating to the Create from gallery -\> Evaluation tab. You can view more details about an evaluation method by selecting **"View details"**.
If you want to customize this evaluation method, you can select the **"Clone"** button. By the name of the flow, you can see an **"evaluation"** tag, indicating you're building an evaluation flow. Similar to cloning a sample flow from gallery, you'll be able to view and edit the flow and the codes and prompts of the evaluation method. :::image type="content" source="./media/how-to-develop-an-evaluation-flow/evaluation-tag.png" alt-text="Screenshot of Classification Accuracy Evaluation with the evaluation tag underlined. " lightbox = "./media/how-to-develop-an-evaluation-flow/evaluation-tag.png":::
-Alternatively, you can customize a built-in evaluation method used in a bulk test by clicking the **"Clone"** icon when viewing its snapshot from the bulk test detail page.
+Alternatively, you can customize a built-in evaluation method from a completed run by selecting the **"Clone"** icon when viewing its snapshot from the run detail page.
### Create new evaluation flow from scratch
To create your evaluation method from scratch, select the **"Create"** button o
:::image type="content" source="./media/how-to-develop-an-evaluation-flow/create-by-type.png" alt-text="Screenshot of tiles from the Prompt flow gallery with the create button highlighted on evaluation flow. " lightbox = "./media/how-to-develop-an-evaluation-flow/create-by-type.png":::
+Then, you can see a template of an evaluation flow containing two nodes: line_process and aggregate.
++ ## Understand evaluation in Prompt flow
-In Prompt flow, a flow is a sequence of nodes that process an input and generate an output. Evaluation methods also take required inputs and produce corresponding outputs.
+In Prompt flow, a flow is a sequence of nodes that process an input and generate an output. Evaluation flows also take required inputs and produce corresponding outputs.
Some special features of evaluation methods are:
-1. They need to handle outputs from flows that contain multiple variants.
-2. They usually run after a flow being tested, so there's a field mapping process when submitting an evaluation.
+1. They usually run after the run to be tested, and receive outputs from that run.
+2. Apart from the outputs from the run to be tested, they can receive an optional additional dataset which may contain corresponding ground truths.
3. They may have an aggregation node that calculates the overall performance of the flow being tested based on the individual scores.
+4. They can log metrics using the log_metric() function.
We'll introduce how the inputs and outputs should be defined in developing evaluation methods. ### Inputs
-Different from a standard flow, evaluation methods run after a flow being tested, which may have multiple variants. Therefore, evaluation needs to distinguish the sources of the received flow output in a bulk test, including the data sample and variant the output is generated from.
-
-To build an evaluation method that can be used in a bulk test, two additional inputs are required: line\_number and variant\_id(s).
--- **line\_number:** the index of the sample in the test dataset-- **variant\_id(s):** the variant ID that indicates the source variant of the output-
-There are two types of evaluation methods based on how to process outputs from different variants:
--- **Point-based evaluation method:** This type of evaluation flow calculates metrics based on the outputs from different variants **independently and separately.** "line\_number" and **"variant\_id"** are the required flow inputs. The receiving output of a flow is from a single variant. Therefore, the evaluation input "variant\_id" is a **string** indicating the source variant of the output.-
-| Field name | Type | Description | Examples |
-| | | | |
-| line\_number | int | The line number of the test data. | 0, 1, ... |
-| **variant\_id** | **string** | **The variant name.** | **"variant\_0", "variant\_1", ...** |
+An evaluation runs after another run to assess how well the outputs of that run align with specific criteria and goals. Therefore, the evaluation receives the outputs generated from that run.
-- The built-in evaluation methods in the gallery are mostly this type of evaluation methods, except "QnA Relevance Scores Pairwise Evaluation".
+Other inputs may also be required, such as ground truth, which may come from a dataset. By default, evaluation will use the same dataset as the test dataset provided to the tested run. However, if the corresponding labels or target ground truth values are in a different dataset, you can easily switch to that one.
-- **Collection-based/Pair-wise evaluation method:** This type of evaluation flow calculates metrics based on the outputs from different variants **collectively.** "line\_number" and **"variant\_ids"** are the required flow inputs. This evaluation method receives a list of outputs of a flow from multiple variants. Therefore, the evaluation input "variant\_ids" is a **list of strings** indicating the source variants of the outputs. This type of evaluation method can process the outputs from multiple variants at a time, and calculate **relative metrics**, comparing to a baseline variant: variant\_0. This is useful when you want to know how other variants are performing compared to that of variant\_0 (baseline), allowing for the calculation of relative metrics.
+Therefore, to run an evaluation, you need to indicate the sources of these required inputs. To do so, when submitting an evaluation, you'll see an **"input mapping"** section.
-| Field name | Type | Description | Examples |
-| | | | |
-| line\_number | int | The line number of the test data. | 0, 1, ... |
-| **variant\_ids** | **List[string]** | **The variant name list.** | **["variant\_0", "variant\_1", ...]**|
+- If the data source is from your run output, the source is indicated as "${run.output.[OutputName]}"
+- If the data source is from your test dataset, the source is indicated as "${data.[ColumnName]}"
-See "QnA Relevance Scores Pairwise Evaluation" flow in "Create from gallery" for reference.
-#### Input mapping
-In this context, the inputs are the subjects of evaluation, which are the outputs of a flow. Other inputs may also be required, such as ground truth, which may come from the test dataset you provided. Therefore, to run an evaluation, you need to indicate the sources of these required input test data. To do so, when submitting an evaluation, you'll see an **"input mapping"** section.
--- If the data source is from your test dataset, the source is indicated as "data.[ColumnName]"-- If the data source is from your flow output, the source is indicated as "output.[OutputName]"--
-To demonstrate the relationship of how the inputs and outputs are passed between flow and evaluation methods, here's a diagram showing the schema:
--
-Here's a diagram showing the example how data are passed between test dataset and flow outputs:
-
+> [!NOTE]
+> If your evaluation doesn't require data from the dataset, you do not need to reference any dataset columns in the input mapping section; in that case, the dataset selection is an optional configuration. Dataset selection won't affect the evaluation results.
### Input description
-To remind what inputs are needed to calculate metrics, you can add a description for each required input. The descriptions will be displayed when mapping the sources in bulk test submission.
+To remind users what inputs are needed to calculate metrics, you can add a description for each required input. The descriptions are displayed when mapping the sources during batch run submission.
:::image type="content" source="./media/how-to-develop-an-evaluation-flow/input-description.png" alt-text="Screenshot of evaluation input mapping with the answers description highlighted. " lightbox = "./media/how-to-develop-an-evaluation-flow/input-description.png":::
To add descriptions for each input, select **Show description** in the input sec
:::image type="content" source="./media/how-to-develop-an-evaluation-flow/add-description.png" alt-text="Screenshot of Classification Accuracy Evaluation with hide description highlighted. " lightbox = "./media/how-to-develop-an-evaluation-flow/add-description.png":::
-Then this description will be displayed to when using this evaluation method in bulk test submission.
+Then this description is displayed when using this evaluation method in batch run submission.
### Outputs and metrics The outputs of an evaluation are the results that measure the performance of the flow being tested. The output usually contains metrics such as scores, and may also include text for reasoning and suggestions.
-#### Instance-level metricsΓÇöoutputs
+#### Instance-level scores - outputs
-In Prompt flow, the flow processes each sample dataset one at a time and generates an output record. Similarly, in most evaluation cases, there will be a metric for each flow output, allowing you to check how the flow performs on each individual data input.
+In Prompt flow, the flow processes each sample of the dataset one at a time and generates an output record. Similarly, in most evaluation cases, there will be a metric for each output, allowing you to check how the flow performs on each individual data sample.
-To record the score for each data sample, calculate the score for each output, and log the score **as a flow output** by setting it in the output section. This is the same as defining a standard flow output.
+To record the score for each data sample, calculate the score for each output, and log the score **as a flow output** by setting it in the output section. This authoring experience is the same as defining a standard flow output.
:::image type="content" source="./media/how-to-develop-an-evaluation-flow/eval-output.png" alt-text="Screenshot of the outputs section showing a name and value. " lightbox = "./media/how-to-develop-an-evaluation-flow/eval-output.png":::
-When this evaluation method is used in a bulk test, the instance-level score can be viewed in the **Output** tab.
+We calculate this score in the `line_process` node, which you can create and edit from scratch when creating by type. You can also replace this Python node with an LLM node to use an LLM to calculate the score.
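+For reference, here's a minimal sketch of what a `line_process` node could contain. The exact-match grading logic and the `groundtruth`/`prediction` input names are illustrative only; replace them with whatever your evaluation actually compares:
+
+```python
+from promptflow import tool
+
+@tool
+def line_process(groundtruth: str, prediction: str) -> str:
+    # Hypothetical per-line grading: exact (case-insensitive) match against the ground truth
+    return "Correct" if prediction.strip().lower() == groundtruth.strip().lower() else "Incorrect"
+```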
++
+When this evaluation method is used in a batch run, the instance-level score can be viewed in the **Overview ->Output** tab.
#### Metrics logging and aggregation node
-In addition, it's also important to provide an overall score for the run. You can check the **"set as aggregation"** of a Python node to turn it into a "reduce" node, allowing the node to take in the inputs **as a list** and process them in batch.
+In addition, it's also important to provide an overall score for the run. You can check the **"set as aggregation"** of a Python node in an evaluation flow to turn it into a "reduce" node, allowing the node to take in the inputs **as a list** and process them in batch.
:::image type="content" source="./media/how-to-develop-an-evaluation-flow/set-as-aggregation.png" alt-text="Screenshot of the Python node heading pointing to an unchecked checked box. " lightbox = "./media/how-to-develop-an-evaluation-flow/set-as-aggregation.png":::
In this way, you can calculate and process all the scores of each flow output an
You can log metrics in an aggregation node using the Prompt flow SDK's log_metric() function. The metrics should be numerical (float/int). String type metrics logging isn't supported.
-See the following example for using the log_metric API:
+We calculate this score in the `aggregate` node, which you can create and edit from scratch when creating by type. You can also replace this Python node with an LLM node to use an LLM to calculate the score. See the following example for using the log_metric API in an evaluation flow:
```python
from typing import List
from promptflow import log_metric, tool

@tool
def calculate_accuracy(grades: List[str], variant_ids: List[str]):
    # Group the grade for each line by the variant that produced it
    aggregate_grades = {}
    for grade, variant_id in zip(grades, variant_ids):
        aggregate_grades.setdefault(variant_id, []).append(grade)

    # Calculate and log an accuracy metric for each variant
    for name, variant_grades in aggregate_grades.items():
        accuracy = round(variant_grades.count("Correct") / len(variant_grades), 2)
        log_metric("accuracy", accuracy, variant_id=name)

    return aggregate_grades
```
+Because you call this function in the Python node, you don't need to assign its output anywhere else, and you can view the metrics later. When this evaluation method is used in a batch run, the metrics logged this way can be viewed in the **Overview->Metrics** tab.
++ ## Next steps - [Iterate and optimize your flow by tuning prompts using variants](how-to-tune-prompts-using-variants.md)-- [Submit bulk test and evaluate a flow](how-to-develop-a-standard-flow.md)
+- [Submit batch run and evaluate a flow](how-to-bulk-test-evaluate-flow.md)
machine-learning How To Enable Streaming Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-enable-streaming-mode.md
+
+ Title: How to use streaming endpoints deployed from Prompt Flow (preview)
+
+description: Learn how use streaming when you consume the endpoints in Azure Machine Learning prompt flow.
+++++++ Last updated : 09/12/2023++
+# How to use streaming endpoints deployed from Prompt Flow (preview)
+
+In Prompt Flow, you can [deploy flow to an Azure Machine Learning managed online endpoint](how-to-deploy-for-real-time-inference.md) for real-time inference.
+
+When consuming the endpoint by sending a request, the default behavior is that the online endpoint will keep waiting until the whole response is ready, and then send it back to the client. This can cause a long delay for the client and a poor user experience.
+
+To avoid this, you can use streaming when you consume the endpoints. Once streaming is enabled, you don't have to wait for the whole response to be ready. Instead, the server will send back the response in chunks as they're generated. The client can then display the response progressively, with less waiting time and more interactivity.
+
+This article will describe the scope of streaming, how streaming works, and how to consume streaming endpoints.
+
+> [!IMPORTANT]
+> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Create a streaming enabled flow
+
+If you want to use the streaming mode, you need to create a flow that has a node that produces a string generator as the flow's output. A string generator is an object that can return one string at a time when requested. You can use the following types of nodes to create a string generator:
+
+- LLM node: This node uses a large language model to generate natural language responses based on the input.
+
+ ```jinja
+ {# Sample prompt template for LLM node #}
+
+ system:
+ You are a helpful assistant.
+
+ user:
+ {{question}}
+ ```
+
+- Python node: This node allows you to write custom Python code that can yield string outputs. You can use this node to call external APIs or libraries that support streaming. For example, you can use this code to echo the input word by word:
+
+ ```python
+ from promptflow import tool
+
+ # Sample code echo input by yield in Python tool node
+
+ @tool
+ def my_python_tool(paragraph: str) -> str:
+ yield "Echo: "
+ for word in paragraph.split():
+ yield word + " "
+ ```
+
+> [!IMPORTANT]
+> Only the output of the last node of the flow can support streaming.
+>
+> "Last node" means the node output is not consumed by other nodes.
+
+In this guide, we will use the "Chat with Wikipedia" sample flow as an example. This flow processes the user's question, searches Wikipedia for relevant articles, and answers the question with information from the articles. It uses streaming mode to show the progress of the answer generation.
+
+To learn how to create a chat flow, see [how to develop a chat flow in prompt flow](how-to-develop-a-chat-flow.md).
++
+## Deploy the flow as an online endpoint
+
+To use the streaming mode, you need to deploy your flow as an online endpoint. This will allow you to send requests and receive responses from your flow in real time.
+
+To learn how to deploy your flow as an online endpoint, see [Deploy a flow to online endpoint for real-time inference with CLI](./how-to-deploy-to-code.md).
+
+> [!NOTE]
+>
+> Deploy with a runtime environment version later than `20230710.v2`.
+
+You can check your runtime version and update the runtime in the runtime detail page.
++
+## Understand the streaming process
+
+When you have an online endpoint, the client and the server need to follow specific principles for [content negotiation](https://developer.mozilla.org/en-US/docs/Web/HTTP/Content_negotiation) to utilize the streaming mode:
+
+Content negotiation is like a conversation between the client and the server about the preferred format of the data they want to send and receive. It ensures effective communication and agreement on the format of the exchanged data.
+
+To understand the streaming process, consider the following steps:
+
+- First, the client constructs an HTTP request with the desired media type included in the `Accept` header. The media type tells the server what kind of data format the client expects. It's like the client saying, "Hey, I'm looking for a specific format for the data you'll send me. It could be JSON, text, or something else." For example, `application/json` indicates a preference for JSON data, `text/event-stream` indicates a desire for streaming data, and `*/*` means the client accepts any data format.
+ > [!NOTE]
+ >
+ > If a request lacks an `Accept` header or has an empty `Accept` header, it implies that the client will accept any media type in response. The server treats it as `*/*`.
+
+- Next, the server responds based on the media type specified in the `Accept` header. It's important to note that the client may request multiple media types in the `Accept` header, and the server must consider its capabilities and format priorities to determine the appropriate response.
+ - First, the server checks if `text/event-stream` is explicitly specified in the `Accept` header:
+ - For a stream-enabled flow, the server returns a response with a `Content-Type` of `text/event-stream`, indicating that the data is being streamed.
+ - For a non-stream-enabled flow, the server proceeds to check for other media types specified in the header.
+ - If `text/event-stream` isn't specified, the server then checks if `application/json` or `*/*` is specified in the `Accept` header:
+ - In such cases, the server returns a response with a `Content-Type` of `application/json`, providing the data in JSON format.
+ - If the `Accept` header specifies other media types, such as `text/html`:
+ - The server returns a `424` response with a PromptFlow runtime error code `UserError` and a runtime HTTP status `406`, indicating that the server can't fulfill the request with the requested data format.
+ To learn more, see [handle errors](#handle-errors).
+- Finally, the client checks the `Content-Type` response header. If it's set to `text/event-stream`, it indicates that the data is being streamed.
+
+Let's take a closer look at how the streaming process works. The response data in streaming mode follows the format of [server-sent events (SSE)](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events).
+
+The overall process works as follows:
+
+### 0. The client sends a message to the server
+
+```JSON
+POST https://<your-endpoint>.inference.ml.azure.com/score
+Content-Type: application/json
+Authorization: Bearer <key or token of your endpoint>
+Accept: text/event-stream
+
+{
+ "question": "Hello",
+ "chat_history": []
+}
+```
+
+> [!NOTE]
+>
+> The `Accept` header is set to `text/event-stream` to request a stream response.
+
+### 1. The server sends back the response in streaming mode
+
+```JSON
+HTTP/1.1 200 OK
+Content-Type: text/event-stream; charset=utf-8
+Connection: close
+Transfer-Encoding: chunked
+
+data: {"answer": ""}
+
+data: {"answer": "Hello"}
+
+data: {"answer": "!"}
+
+data: {"answer": " How"}
+
+data: {"answer": " can"}
+
+data: {"answer": " I"}
+
+data: {"answer": " assist"}
+
+data: {"answer": " you"}
+
+data: {"answer": " today"}
+
+data: {"answer": " ?"}
+
+data: {"answer": ""}
+
+```
+
+> [!NOTE]
+> The `Content-Type` is set to `text/event-stream; charset=utf-8`, indicating the response is an event stream.
+
+The client should decode the response data as server-sent events and display them incrementally. The server will close the HTTP connection after all the data is sent.
+
+Each response event is the delta to the previous event. It's recommended for the client to keep track of the merged data in memory and send them back to the server as chat history in the next request.
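+For example, a small Python sketch of that client-side bookkeeping might look like this. It assumes each event has already been decoded into a dict such as `{"answer": "..."}`, as in the sample above:
+
+```python
+def merge_events(events, question, chat_history):
+    """Accumulate streamed answer deltas and record the finished turn in the chat history."""
+    answer = ""
+    for event in events:  # each event is a decoded dict, for example {"answer": " How"}
+        answer += event.get("answer", "")
+    chat_history.append({"inputs": {"question": question}, "outputs": {"answer": answer}})
+    return answer
+
+# Usage with the sample events shown above
+history = []
+full_answer = merge_events([{"answer": ""}, {"answer": "Hello"}, {"answer": "!"}], "Hello", history)
+print(full_answer)  # "Hello!"
+```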
+
+### 2. The client sends another chat message, along with the full chat history, to the server
+
+```JSON
+POST https://<your-endpoint>.inference.ml.azure.com/score
+Content-Type: application/json
+Authorization: Bearer <key or token of your endpoint>
+Accept: text/event-stream
+
+{
+ "question": "Glad to know you!",
+ "chat_history": [
+ {
+ "inputs": {
+ "question": "Hello"
+ },
+ "outputs": {
+ "answer": "Hello! How can I assist you today?"
+ }
+ }
+ ]
+}
+```
+
+### 3. The server sends back the answer in streaming mode
+
+```JSON
+HTTP/1.1 200 OK
+Content-Type: text/event-stream; charset=utf-8
+Connection: close
+Transfer-Encoding: chunked
+
+data: {"answer": ""}
+
+data: {"answer": "Nice"}
+
+data: {"answer": " to"}
+
+data: {"answer": " know"}
+
+data: {"answer": " you"}
+
+data: {"answer": " too"}
+
+data: {"answer": "!"}
+
+data: {"answer": " Is"}
+
+data: {"answer": " there"}
+
+data: {"answer": " anything"}
+
+data: {"answer": " I"}
+
+data: {"answer": " can"}
+
+data: {"answer": " help"}
+
+data: {"answer": " you"}
+
+data: {"answer": " with"}
+
+data: {"answer": "?"}
+
+data: {"answer": ""}
+
+```
+
+The chat then continues in a similar way.
+
+## Handle errors
+
+The client should check the HTTP response code first. See [HTTP status code table](../how-to-troubleshoot-online-endpoints.md#http-status-codes) for common error codes returned by online endpoints.
+
+If the response code is "424 Model Error", it means that the error is caused by the model's code. The error response from a Prompt Flow model always follows this format:
+
+```json
+{
+ "error": {
+ "code": "UserError",
+ "message": "Media type text/event-stream in Accept header is not acceptable. Supported media type(s) - application/json",
+ }
+}
+```
+
+- It is always a JSON dictionary with only one key "error" defined.
+- The value for "error" is a dictionary, containing "code", "message".
+- "code" defines the error category. Currently, it may be "UserError" for bad user inputs and "SystemError" for errors inside the service.
+- "message" is a description of the error. It can be displayed to the end user.
+
+## How to consume the server-sent events
+
+### Consume using Python
+
+We have created [a utility file](https://aka.ms/pf-streaming-sample-util) as an example to demonstrate how to consume server-sent events. A sample usage looks like this:
+
+```python
+import requests
+from requests.exceptions import HTTPError
+
+# EventStream is defined in the sample utility file linked above.
+# url, body, headers, and stream are assumed to be set up for your endpoint.
+try:
+    response = requests.post(url, json=body, headers=headers, stream=stream)
+    response.raise_for_status()
+
+    content_type = response.headers.get("Content-Type", "")
+    if "text/event-stream" in content_type:
+        event_stream = EventStream(response.iter_lines())
+        for event in event_stream:
+            print(event)  # Handle each event, for example print it to stdout
+    else:
+        print(response.json())  # Handle the JSON response
+
+except HTTPError as error:
+    print(error)  # Handle exceptions
+```
+
+### Consume using JavaScript
+
+There are several libraries to consume server-sent events in JavaScript. For example, this is the [sse.js library](https://www.npmjs.com/package/sse.js?activeTab=code).
+
+## A sample chat app using Python
+
+Here's a sample chat app written in Python. (To view the source code, see [chat_app.py](https://aka.ms/pf-streaming-sample-chat))
++
+## Advanced usage - hybrid stream and non-stream flow output
+
+Sometimes, you may want to get both stream and non-stream results from a flow output. For example, in the "Chat with Wikipedia" flow, you may want to get not only the LLM's answer, but also the list of URLs that the flow searched. To do this, you need to modify the flow to output a combination of the streamed LLM answer and the non-streamed URL list.
+
+In the sample "Chat With Wikipedia" flow, the output is connected to the LLM node `augmented_chat`. To add the URL list to the output, you need to add an output field with the name `url` and the value `${get_wiki_url.output}`.
++
+The output of the flow will be a non-stream field as the base and a stream field as the delta. Here's an example of a request and response; a sketch of how a client might merge such a hybrid stream follows the example.
+
+### Advanced usage - 0. The client sends a message to the server
+
+```JSON
+POST https://<your-endpoint>.inference.ml.azure.com/score
+Content-Type: application/json
+Authorization: Bearer <key or token of your endpoint>
+Accept: text/event-stream
+{
+ "question": "When was ChatGPT launched?",
+ "chat_history": []
+}
+```
+
+### Advanced usage - 1. The server sends back the answer in streaming mode
+
+```JSON
+HTTP/1.1 200 OK
+Content-Type: text/event-stream; charset=utf-8
+Connection: close
+Transfer-Encoding: chunked
+
+data: {"url": ["https://en.wikipedia.org/w/index.php?search=ChatGPT", "https://en.wikipedia.org/w/index.php?search=GPT-4"]}
+
+data: {"answer": ""}
+
+data: {"answer": "Chat"}
+
+data: {"answer": "G"}
+
+data: {"answer": "PT"}
+
+data: {"answer": " was"}
+
+data: {"answer": " launched"}
+
+data: {"answer": " on"}
+
+data: {"answer": " November"}
+
+data: {"answer": " "}
+
+data: {"answer": "30"}
+
+data: {"answer": ","}
+
+data: {"answer": " "}
+
+data: {"answer": "202"}
+
+data: {"answer": "2"}
+
+data: {"answer": "."}
+
+data: {"answer": " \n\n"}
+
+...
+
+data: {"answer": "PT"}
+
+data: {"answer": ""}
+```
+
+### Advanced usage - 2. The client sends another chat message, along with the full chat history, to the server
+
+```JSON
+POST https://<your-endpoint>.inference.ml.azure.com/score
+Content-Type: application/json
+Authorization: Bearer <key or token of your endpoint>
+Accept: text/event-stream
+{
+ "question": "When did OpenAI announce GPT-4? How long is it between these two milestones?",
+ "chat_history": [
+ {
+ "inputs": {
+ "question": "When was ChatGPT launched?"
+ },
+ "outputs": {
+ "url": [
+ "https://en.wikipedia.org/w/index.php?search=ChatGPT",
+ "https://en.wikipedia.org/w/index.php?search=GPT-4"
+ ],
+ "answer": "ChatGPT was launched on November 30, 2022. \n\nSOURCES: https://en.wikipedia.org/w/index.php?search=ChatGPT"
+ }
+ }
+ ]
+}
+```
+
+### Advanced usage - 3. The server sends back the answer in streaming mode
+
+```JSON
+HTTP/1.1 200 OK
+Content-Type: text/event-stream; charset=utf-8
+Connection: close
+Transfer-Encoding: chunked
+
+data: {"url": ["https://en.wikipedia.org/w/index.php?search=Generative pre-trained transformer ", "https://en.wikipedia.org/w/index.php?search=Microsoft "]}
+
+data: {"answer": ""}
+
+data: {"answer": "Open"}
+
+data: {"answer": "AI"}
+
+data: {"answer": " released"}
+
+data: {"answer": " G"}
+
+data: {"answer": "PT"}
+
+data: {"answer": "-"}
+
+data: {"answer": "4"}
+
+data: {"answer": " in"}
+
+data: {"answer": " March"}
+
+data: {"answer": " "}
+
+data: {"answer": "202"}
+
+data: {"answer": "3"}
+
+data: {"answer": "."}
+
+data: {"answer": " Chat"}
+
+data: {"answer": "G"}
+
+data: {"answer": "PT"}
+
+data: {"answer": " was"}
+
+data: {"answer": " launched"}
+
+data: {"answer": " on"}
+
+data: {"answer": " November"}
+
+data: {"answer": " "}
+
+data: {"answer": "30"}
+
+data: {"answer": ","}
+
+data: {"answer": " "}
+
+data: {"answer": "202"}
+
+data: {"answer": "2"}
+
+data: {"answer": "."}
+
+data: {"answer": " The"}
+
+data: {"answer": " time"}
+
+data: {"answer": " between"}
+
+data: {"answer": " these"}
+
+data: {"answer": " two"}
+
+data: {"answer": " milestones"}
+
+data: {"answer": " is"}
+
+data: {"answer": " approximately"}
+
+data: {"answer": " "}
+
+data: {"answer": "3"}
+
+data: {"answer": " months"}
+
+data: {"answer": ".\n\n"}
+
+...
+
+data: {"answer": "Chat"}
+
+data: {"answer": "G"}
+
+data: {"answer": "PT"}
+
+data: {"answer": ""}
+```
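+On the client side, one way to consume such a hybrid stream is to treat string fields as deltas to concatenate and any other field (such as the `url` list) as a base value to keep as-is. A rough Python sketch, assuming each `data:` payload has already been decoded into a dict:
+
+```python
+def merge_hybrid_events(events):
+    """Merge a hybrid event stream: string fields are appended as deltas, other fields are kept as-is."""
+    merged = {}
+    for event in events:
+        for key, value in event.items():
+            if isinstance(value, str):
+                merged[key] = merged.get(key, "") + value  # stream field: append the delta
+            else:
+                merged[key] = value  # non-stream field, for example the url list
+    return merged
+
+# Example with the field names used in this sample flow
+events = [
+    {"url": ["https://en.wikipedia.org/w/index.php?search=ChatGPT"]},
+    {"answer": "Chat"},
+    {"answer": "GPT was launched on November 30, 2022."},
+]
+print(merge_hybrid_events(events))
+```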
+
+## Next steps
+
+- Learn more about how to [troubleshoot managed online endpoints](../how-to-troubleshoot-online-endpoints.md).
+- Once you've improved your flow and would like to deploy the improved version with a safe rollout strategy, you can refer to [Safe rollout for online endpoints](../how-to-safely-rollout-online-endpoints.md).
machine-learning How To End To End Llmops With Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-end-to-end-llmops-with-prompt-flow.md
+
+ Title: Set up end-to-end LLMOps with Prompt Flow and GitHub (preview)
+
+description: Learn about using Azure Machine Learning to set up an end-to-end LLMOps pipeline that runs a web classification flow that classifies a website based on a given URL.
+++++++ Last updated : 09/12/2023+
+# Set up end-to-end LLMOps with prompt flow and GitHub (preview)
+
+Azure Machine Learning allows you to integrate with [GitHub Actions](https://docs.github.com/actions) to automate the machine learning lifecycle. Some of the operations you can automate are:
+
+- Running prompt flow after a pull request
+- Running prompt flow evaluation to ensure results are high quality
+- Registering of prompt flow models
+- Deployment of prompt flow models
+
+In this article, you learn about using Azure Machine Learning to set up an end-to-end LLMOps pipeline that runs a web classification flow that classifies a website based on a given URL. The flow is made up of multiple LLM calls and components, each serving different functions. All the LLMs used are managed and stored in your Azure Machine Learning workspace through your Prompt flow connections.
+
+> [!TIP]
+> We recommend you understand how we integrate [LLMOps with Prompt flow](how-to-integrate-with-llm-app-devops.md).
+
+> [!IMPORTANT]
+> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Machine Learning](https://azure.microsoft.com/free/).
+- An [Azure Machine Learning workspace](../how-to-manage-workspace.md#create-a-workspace).
+- Git running on your local machine.
+- GitHub as the source control repository.
+
+> [!NOTE]
+>Git version 2.27 or newer is required. For more information on installing the Git command, see https://git-scm.com/downloads and select your operating system.
+
+> [!IMPORTANT]
+>The CLI commands in this article were tested using Bash. If you use a different shell, you may encounter errors.
+
+### Set up authentication with Azure and GitHub
+
+Before you can set up a Prompt flow project with Azure Machine Learning, you need to set up authentication for Azure and GitHub.
+
+#### Create service principal
+
+ Create one production service principal for this demo. You can add more depending on how many environments you want to work on (development, production, or both). Service principals can be created using one of the following methods:
+
+1. Launch the [Azure Cloud Shell](https://shell.azure.com).
+
+ > [!TIP]
+ > The first time you launch the Cloud Shell, you'll be prompted to create a storage account for the Cloud Shell.
+
+1. If prompted, choose **Bash** as the environment used in the Cloud Shell. You can also change environments in the drop-down on the top navigation bar
+
+ :::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/cli-1.png" alt-text="Screenshot of the Cloud Shell with bash selected showing connections to the PowerShell terminal. " lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/cli-1.png":::
+
+1. Copy the following bash commands to your computer and update the **projectName**, **subscriptionId**, and **environment** variables with the values for your project. This command will also grant the **Contributor** role to the service principal in the subscription provided. This is required for GitHub Actions to properly use resources in that subscription.
+
+ ``` bash
+ projectName="<your project name>"
+ roleName="Contributor"
+ subscriptionId="<subscription Id>"
+ environment="<Prod>" #First letter should be capitalized
+ servicePrincipalName="Azure-ARM-${environment}-${projectName}"
+ # Verify the ID of the active subscription
+ echo "Using subscription ID $subscriptionID"
+ echo "Creating SP for RBAC with name $servicePrincipalName, with role $roleName and in scopes /subscriptions/$subscriptionId"
+ az ad sp create-for-rbac --name $servicePrincipalName --role $roleName --scopes /subscriptions/$subscriptionId --sdk-auth
+ echo "Please ensure that the information created here is properly save for future use."
+ ```
+
+1. Copy your edited commands into the Azure Shell and run them (**Ctrl** + **Shift** + **v**).
+
+1. After running these commands, you'll be presented with information related to the service principal. Save this information to a safe location; you'll use it later in the demo to configure GitHub.
+
+ ```json
+
+ {
+ "clientId": "<service principal client id>",
+ "clientSecret": "<service principal client secret>",
+ "subscriptionId": "<Azure subscription id>",
+ "tenantId": "<Azure tenant id>",
+ "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
+ "resourceManagerEndpointUrl": "https://management.azure.com/",
+ "activeDirectoryGraphResourceId": "https://graph.windows.net/",
+ "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
+ "galleryEndpointUrl": "https://gallery.azure.com/",
+ "managementEndpointUrl": "https://management.core.windows.net/"
+ }
+ ```
+
+1. Copy all of this output, braces included, and save it to a safe location. You'll use it later in the demo to configure the GitHub repo.
+
+1. Close the Cloud Shell once the service principal is created.
+
+### Set up GitHub repo
+
+1. Fork the example repo [LLMOps Demo Template Repo](https://github.com/Azure/llmops-gha-demo/fork) into your GitHub organization. This repo has reusable LLMOps code that can be used across multiple projects.
+
+### Add secret to GitHub repo
+
+1. From your GitHub project, select **Settings**:
+
+ :::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/github-settings.png" alt-text="Screenshot of the GitHub menu bar on a GitHub project with settings selected. " lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/github-settings.png":::
+
+1. Then select **Secrets**, then **Actions**:
+
+ :::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/github-secrets.png" alt-text="Screenshot of on GitHub showing the security settings with security and actions highlighted." lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/github-secrets.png":::
+
+1. Select **New repository secret**. Name this secret **AZURE_CREDENTIALS** and paste the service principal output as the content of the secret. Select **Add secret**.
+
+ :::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/github-secrets-string.png" alt-text="Screenshot of GitHub Action secrets when creating a new secret. " lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/github-secrets-string.png":::
+
+1. Add the following additional GitHub secrets, using the corresponding values for your Azure resources as the content of each secret:
+
+ - **GROUP**: \<Resource Group Name\>
+ - **WORKSPACE**: \<Azure ML Workspace Name\>
+ - **SUBSCRIPTION**: \<Subscription ID\>
+
+ |Variable | Description |
+ |||
+ |GROUP | Name of resource group |
+ |SUBSCRIPTION | Subscription ID of your workspace |
+ |WORKSPACE | Name of Azure Machine Learning workspace |
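+
+If you prefer to add these secrets from the command line instead of the GitHub UI, the GitHub CLI (`gh`) can set them. The following is a minimal sketch with placeholder repo and values, not part of the official setup:
+
+```bash
+# Placeholder repo and values; AZURE_CREDENTIALS reads the saved service principal JSON from a file
+gh secret set AZURE_CREDENTIALS --repo <org>/<repo> < azure-credentials.json
+gh secret set GROUP --repo <org>/<repo> --body "<resource-group-name>"
+gh secret set WORKSPACE --repo <org>/<repo> --body "<azure-ml-workspace-name>"
+gh secret set SUBSCRIPTION --repo <org>/<repo> --body "<subscription-id>"
+```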
+
+> [!NOTE]
+> This completes the prerequisites. You can now continue with the deployment of the solution accelerator.
+
+## Set up connections for prompt flow
+
+Connections help you securely store and manage secret keys or other sensitive credentials required for interacting with LLMs and other external tools, for example, Azure Content Safety.
+
+In this guide, we'll use the `web-classification` flow, which uses the `azure_open_ai_connection` connection. Set up this connection if you haven't added it already.
+
+Go to the workspace portal, select `Prompt flow` -> `Connections` -> `Create` -> `Azure OpenAI`, then follow the instructions to create your own connection. To learn more, see [connections](concept-connections.md).
+
+## Set up runtime for prompt flow
+Prompt flow's runtime provides the computing resources required for the application to run, including a Docker image that contains all necessary dependency packages.
+
+In this guide, we'll use a runtime to run your prompt flow. You need to create your own [prompt flow runtime](how-to-create-manage-runtime.md).
+
+Go to the workspace portal, select **Prompt flow** -> **Runtime** -> **Add**, then follow the instructions to create your own runtime.
+
+## Set up variables for prompt flow and GitHub Actions
+
+Clone the repo to your local machine.
+
+```bash
+git clone https://github.com/<user-name>/llmops-pipeline
+```
+
+### Update workflow to connect to your Azure Machine Learning workspace
+
+1. Update `run-eval-pf-pipeline.yml` and `deploy-pf-online-endpoint-pipeline.yml` to connect to your Azure Machine Learning workspace.
+ You'll need to update the CLI setup file variables to match your workspace.
+
+1. In your cloned repository, go to `.github/workflow/`.
+1. Verify that the `env` section in `run-eval-pf-pipeline.yml` and `deploy-pf-online-endpoint-pipeline.yml` refers to the workspace secrets you added in the previous step.
+
+### Update run.yml with your connections and runtime
+
+You'll use a `run.yml` file to deploy your Azure Machine Learning pipeline. This is a flow run definition. You only need to make this update if you're using a name other than `pf-runtime` for your [prompt flow runtime](how-to-create-manage-runtime.md). You'll also need to update all the `connections` to match the connections in your Azure Machine Learning workspace, and `deployment_name` to match the name of the GPT 3.5 Turbo deployment associated with that connection. A scripted example of these edits follows the steps below.
+
+1. In your cloned repository, open `web-classification/run.yml` and `web-classification/run_evaluation.yml`
+1. Each time you see `runtime: <runtime-name>`, update the value of `<runtime-name>` with your runtime name.
+1. Each time you see `connection: Default_AzureOpenAI`, update the value of `Default_AzureOpenAI` to match the connection name in your Azure Machine Learning workspace.
+1. Each time you see `deployment_name: gpt-35-turbo-0301`, update the value of `gpt-35-turbo-0301` with the name of your GPT 3.5 Turbo deployment associated with that connection.
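+
+If you'd rather script these replacements than edit the files by hand, the following is a minimal sketch using `sed`; the runtime, connection, and deployment names are placeholders for your own values:
+
+```bash
+# Placeholder names; run from the root of your cloned repository
+for f in web-classification/run.yml web-classification/run_evaluation.yml; do
+  sed -i 's/runtime: .*/runtime: <your-runtime-name>/' "$f"
+  sed -i 's/connection: Default_AzureOpenAI/connection: <your-connection-name>/' "$f"
+  sed -i 's/deployment_name: gpt-35-turbo-0301/deployment_name: <your-deployment-name>/' "$f"
+done
+```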
+
+## Sample prompt run, evaluation and deployment scenario
+
+This flow demonstrates multi-class classification with an LLM. Given a URL, it classifies the URL into one web category by using just a few shots, simple summarization, and classification prompts.
+
+This training pipeline contains the following steps:
+
+**Run prompts in flow:**
+
+- Compose a classification flow with LLM.
+- Feed few shots to LLM classifier.
+- Upload prompt test dataset.
+- Bulk run prompt flow based on dataset.
+
+**Evaluate results:**
+
+- Upload ground truth dataset.
+- Evaluate the bulk run results against the newly uploaded ground truth dataset.
+
+**Register prompt flow LLM app:**
+
+- Check-in logic: customer-defined logic (for example, if the accuracy rate is >= 90%, you can deploy)
+
+**Deploy and test LLM app:**
+
+- Deploy the prompt flow as a model to production.
+- Test the model/prompt flow real-time endpoint.
+
+## Run and evaluate prompt flow in Azure Machine Learning with GitHub Actions
+
+Using a [GitHub Actions workflow](../how-to-github-actions-machine-learning.md#step-5-run-your-github-actions-workflow), we'll trigger actions to run a prompt flow job in Azure Machine Learning.
+
+This pipeline will start the prompt flow run and evaluate the results. When the job is complete, the prompt flow model will be registered in the Azure Machine Learning workspace and be available for deployment.
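+
+The steps below trigger the workflow from the GitHub UI. If you prefer the command line, the GitHub CLI can start and monitor the same workflow; this is a minimal sketch with a placeholder repo name:
+
+```bash
+# Trigger the run/evaluation workflow and check its status; repo name is a placeholder
+gh workflow run run-eval-pf-pipeline.yml --repo <org>/<repo>
+gh run list --workflow=run-eval-pf-pipeline.yml --repo <org>/<repo>
+```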
+
+1. In your GitHub project repository, select **Actions**.
+
+ :::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/github-actions.png" alt-text="Screenshot of GitHub project repository with Action page selected. " lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/github-actions.png":::
+
+1. Select `run-eval-pf-pipeline.yml` from the workflows listed on the left, and then select **Run workflow** to execute the prompt flow run and evaluation workflow. This will take several minutes to run.
+
+ :::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/github-training-pipeline.png" alt-text="Screenshot of the pipeline run in GitHub. " lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/github-training-pipeline.png":::
+
+1. The workflow only registers the model for deployment if the classification accuracy is greater than 60%. You can adjust the accuracy threshold in the `jobMetricAssert` section of the `run-eval-pf-pipeline.yml` workflow file. The section should look like:
+
+ ```yaml
+ id: jobMetricAssert
+ run: |
+ export ASSERT=$(python promptflow/llmops-helper/assert.py result.json 0.6)
+ ```
+
+ You can update the current `0.6` number to fit your preferred threshold.
+
+1. Once the run completes successfully and all tests pass, the workflow registers the Prompt Flow model in the Azure Machine Learning workspace.
+
+ :::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/github-training-step.png" alt-text="Screenshot of training step in GitHub Actions. " lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/github-training-step.png":::
+
+ > [!NOTE]
+ > If you want to check the output of each individual step, for example to view output of a failed run, select a job output, and then select each step in the job to view any output of that step.
+
+With the Prompt flow model registered in the Azure Machine Learning workspace, you're ready to deploy the model for scoring.
+
+## Deploy prompt flow in Azure Machine Learning with GitHub Actions
+
+This scenario includes prebuilt workflows for deploying a model to an endpoint for real-time scoring. You may run the workflow to test the performance of the model in your Azure Machine Learning workspace.
+
+### Online endpoint
+
+1. In your GitHub project repository, select **Actions**.
+
+1. Select the **deploy-pf-online-endpoint-pipeline** from the workflows listed on the left and select **Run workflow** to execute the online endpoint deployment pipeline workflow. The steps in this pipeline will create an online endpoint in your Azure Machine Learning workspace, create a deployment of your model to this endpoint, then allocate traffic to the endpoint.
+
+ :::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/github-online-endpoint.png" alt-text="Screenshot of GitHub Actions for online endpoint showing deploy prompts with prompt flow workflow." lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/github-online-endpoint.png":::
+
+1. Once completed, you'll find the online endpoint deployed in the Azure Machine Learning workspace and available for testing.
+
+ :::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/web-class-online-endpoint.png" alt-text="Screenshot of Azure Machine Learning studio on the endpoints page showing real time endpoint tab." lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/web-class-online-endpoint.png":::
+
+1. To test this deployment, go to the **Endpoints** tab in your Azure Machine Learning workspace, select the endpoint and select the **Test** tab. You can use the sample input data located in the cloned repo at `/deployment/sample-request.json` to test the endpoint.
+
+ :::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/online-endpoint-test.png" alt-text="Screenshot of Azure Machine Learning studio on the endpoints page showing how to test the endpoint." lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/online-endpoint-test.png":::
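+
+    You can also call the endpoint from the command line with the Azure CLI `ml` extension. This is a minimal sketch with placeholder names; the sample request file format may need adjusting for your flow inputs:
+
+    ```bash
+    # Placeholder endpoint, resource group, and workspace names
+    az ml online-endpoint invoke --name <endpoint-name> \
+      --resource-group <resource-group> --workspace-name <workspace-name> \
+      --request-file deployment/sample-request.json
+    ```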
+
+> [!NOTE]
+> Make sure you have already [granted permissions to the endpoint](how-to-deploy-for-real-time-inference.md#grant-permissions-to-the-endpoint) before you test or consume the endpoint.
+
+## Moving to production
+
+This example scenario can be run and deployed for both development and production branches and environments. When you're satisfied with the performance of the prompt evaluation pipeline, prompt flow model, and deployment in testing, the development pipelines and models can be replicated and deployed to the production environment.
+
+The sample prompt flow run, evaluation, and GitHub workflows can be used as a starting point for adapting your own prompt engineering code and data.
+
+## Clean up resources
+
+1. If you're not going to continue to use your pipeline, delete your GitHub project.
+1. In Azure portal, delete your resource group and Azure Machine Learning instance.
+
+## Next steps
+
+- [Install and set up Python SDK v2](https://aka.ms/sdk-v2-install)
+- [Install and set up Python CLI v2](../how-to-configure-cli.md)
machine-learning How To Integrate With Llm App Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-integrate-with-llm-app-devops.md
+
+ Title: Integrate Prompt Flow with LLM-based application DevOps (preview)
+
+description: Learn about integration of Prompt Flow with LLM-based application DevOps in Azure Machine Learning
+++++++ Last updated : 09/12/2023++
+# Integrate Prompt Flow with LLM-based application DevOps (preview)
+
+In this article, you'll learn about the integration of prompt flow with LLM-based application DevOps in Azure Machine Learning. Prompt flow offers a developer-friendly and easy-to-use code-first experience for developing and iterating on flows across your entire LLM-based application development workflow.
+
+It provides a **prompt flow SDK and CLI**, a **VS Code extension**, and a new **flow folder explorer** UI to facilitate local development of flows, local triggering of flow runs and evaluation runs, and transitioning flows from local to cloud (Azure Machine Learning workspace) environments.
+
+This documentation focuses on how to effectively combine the capabilities of prompt flow code experience and DevOps to enhance your LLM-based application development workflows.
++
+> [!IMPORTANT]
+> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Introduction to the code-first experience in prompt flow
+
+When developing applications using LLM, it's common to have a standardized application engineering process that includes code repositories and CI/CD pipelines. This integration allows for a streamlined development process, version control, and collaboration among team members.
+
+For developers experienced in code development who seek a more efficient LLMOps iteration process, the prompt flow code experience offers the following key features and benefits:
+
+- **Flow versioning in code repository**. You can define your flow in YAML format, which can stay aligned with the referenced source files in a folder structure.
+- **Integrate flow run with CI/CD pipeline**. You can trigger flow runs using the prompt flow CLI or SDK, which can be seamlessly integrated into your CI/CD pipeline and delivery process.
+- **Smooth transition from local to cloud**. You can easily export your flow folder to your local machine or code repository for version control, local development, and sharing. Similarly, the flow folder can be effortlessly imported back to the cloud for further authoring, testing, and deployment in cloud resources.
+
+## Accessing prompt flow code definition
+
+Each prompt flow is associated with a **flow folder structure** that contains the essential files for defining the flow in code. This folder structure organizes your flow, facilitating smoother transitions.
+
+Azure Machine Learning offers a shared file system for all workspace users. Upon creating a flow, a corresponding flow folder is automatically generated and stored there, in the `Users/<username>/promptflow` directory.
++
+### Flow folder structure
+
+Overview of the flow folder structure and the key files it contains:
+
+- **flow.dag.yaml**: This primary flow definition file, in YAML format, includes information about inputs, outputs, nodes, tools, and variants used in the flow. It's integral for authoring and defining the prompt flow.
+- **Source code files (.py, .jinja2)**: The flow folder also includes user-managed source code files, which are referred to by the tools/nodes in the flow.
+ - Files in Python (.py) format can be referenced by the python tool for defining custom python logic.
+ - Files in Jinja2 (.jinja2) format can be referenced by the prompt tool or LLM tool for defining prompt context.
+- **Non-source files**: The flow folder may also contain non-source files such as utility files and data files that can be included in the source files.
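+
+As an illustration only, a flow folder for the web-classification sample might look roughly like the following; the file names other than `flow.dag.yaml` are examples, not an exhaustive listing:
+
+```
+web-classification/
+├── flow.dag.yaml
+├── classify_with_llm.jinja2
+├── summarize_text_content.jinja2
+├── fetch_text_content_from_url.py
+└── data.jsonl
+```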
+
+Once the flow is created, you can navigate to the flow authoring page to view and operate on the flow files in the file explorer on the right. This lets you view, edit, and manage your files. Any modifications made to the files are reflected directly in the file share storage.
+++++
+Alternatively, you can access all the flow folders directly within the Azure Machine Learning notebook.
++
+## Versioning prompt flow in repository
+
+To check your flow into your code repository, export the flow folder from the flow authoring page to your local system. This downloads a package containing all the files from the explorer to your local machine, which you can then check into your code repository.
++
+For more information about DevOps integration with Azure Machine Learning, see [Git integration in Azure Machine Learning](../concept-train-model-git-integration.md).
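+
+For example, after exporting and unzipping the flow folder locally, checking it in might look like the following sketch; the paths, folder layout, and branch name are placeholders rather than a prescribed structure:
+
+```bash
+# Placeholder paths and branch name
+cd <path-to-your-repo>
+git checkout -b add-web-classification-flow
+cp -r <path-to-exported-folder>/web-classification ./flows/
+git add flows/web-classification
+git commit -m "Add web-classification prompt flow"
+git push origin add-web-classification-flow
+```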
+
+## Submitting runs to the cloud from local repository
+
+### Prerequisites
+
+- Complete the [Create resources to get started](../quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace.
+
+- A Python environment in which you've installed Azure Machine Learning Python SDK v2 - [install instructions](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk#getting-started). This environment is for defining and controlling your Azure Machine Learning resources and is separate from the environment used at runtime for training.
+
+### Install prompt flow SDK
+
+```shell
+pip install -r ../../examples/requirements.txt
+```
+
+### Connect to Azure Machine Learning workspace
+
+# [Azure CLI](#tab/cli)
+
+```sh
+az login
+```
+# [Python SDK](#tab/python)
+
+```python
+import json
+
+# Import required libraries
+from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
+from azure.ai.ml import MLClient
+
+# azure version promptflow apis
+from promptflow.azure import PFClient
+
+# Configure credential
+try:
+ credential = DefaultAzureCredential()
+ # Check if given credential can get token successfully.
+ credential.get_token("https://management.azure.com/.default")
+except Exception as ex:
+ # Fall back to InteractiveBrowserCredential in case DefaultAzureCredential not work
+ credential = InteractiveBrowserCredential()
+
+# Get a handle to workspace
+ml_client = MLClient.from_config(credential=credential)
+
+pf = PFClient(ml_client)
+```
+++
+### Submit flow run to Azure Machine Learning workspace
+
+We'll use the [web-classification flow](https://github.com/microsoft/promptflow/tree/examples/flows/standard/web-classification/) as an example.
+
+# [Azure CLI](#tab/cli)
+
+Prepare the `run.yml` file to define the config for this flow run in the cloud.
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
+flow: <path_to_flow>
+data: <path_to_flow>/data.jsonl
+
+# define cloud resource
+runtime: <runtime_name>
+connections:
+ classify_with_llm:
+ connection: <connection_name>
+ deployment_name: <deployment_name>
+ summarize_text_content:
+ connection: <connection_name>
+ deployment_name: <deployment_name>
+```
+
+You can specify the connection and deployment name for each tool in the flow. If you don't specify them, the run uses the connection and deployment defined in the `flow.dag.yaml` file. The format of `connections` is:
+
+```yaml
+...
+connections:
+ <node_name>:
+ connection: <connection_name>
+ deployment_name: <deployment_name>
+...
+
+```
+
+```sh
+pfazure run create --file run.yml
+```
+
+# [Python SDK](#tab/python)
+
+```python
+# load flow
+flow = "<path_to_flow>"
+data = "<path_to_flow>/data.jsonl"
+
+# define cloud resource
+runtime = "<runtime_name>"
+connections = {
+    "classify_with_llm": {
+        "connection": "<connection_name>",
+        "deployment_name": "<deployment_name>",
+    },
+    "summarize_text_content": {
+        "connection": "<connection_name>",
+        "deployment_name": "<deployment_name>",
+    },
+}
+# create run
+base_run = pf.run(
+ flow=flow,
+ data=data,
+ runtime=runtime,
+ connections=connections,
+)
+print(base_run)
+```
+++
+### Evaluate your flow in Azure Machine Learning workspace
+
+We'll use the [classification-accuracy-eval flow](https://github.com/microsoft/promptflow/tree/examples/flows/evaluation/classification-accuracy-eval/) as an example.
+
+# [Azure CLI](#tab/cli)
+
+Prepare the `run_evaluation.yml` to define the config for this evaluation flow run in cloud.
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
+flow: <path_to_flow>
+data: <path_to_flow>/data.jsonl
+run: <id of web-classification flow run>
+column_mapping:
+ groundtruth: ${data.answer}
+ prediction: ${run.outputs.category}
+
+# define cloud resource
+runtime: <runtime_name>
+connections:
+ classify_with_llm:
+ connection: <connection_name>
+ deployment_name: <deployment_name>
+ summarize_text_content:
+ connection: <connection_name>
+ deployment_name: <deployment_name>
+
+```
+
+```sh
+pfazure run create --file run_evaluation.yml
+```
+
+# [Python SDK](#tab/python)
+
+```python
+# load flow
+flow = "<path_to_flow>"
+data = "<path_to_flow>/data.jsonl"
+
+# define cloud resource
+runtime = "<runtime_name>"
+connections = {
+    "classify_with_llm": {
+        "connection": "<connection_name>",
+        "deployment_name": "<deployment_name>",
+    },
+    "summarize_text_content": {
+        "connection": "<connection_name>",
+        "deployment_name": "<deployment_name>",
+    },
+}
+
+# create evaluation run
+eval_run = pf.run(
+    flow=flow,
+ data=data,
+ run=base_run,
+ column_mapping={
+ "groundtruth": "${data.answer}",
+ "prediction": "${run.outputs.category}",
+ },
+ runtime=runtime,
+ connections=connections
+)
+```
+++
+### View run results in Azure Machine Learning workspace
+
+Submitting a flow run to the cloud returns the portal URL of the run. You can open the URL to view the run results in the portal.
+
+You can also use the following commands to view the results of runs.
+
+#### Stream the logs
+
+# [Azure CLI](#tab/cli)
+
+```sh
+pfazure run stream --name <run_name>
+```
+
+# [Python SDK](#tab/python)
+
+```python
+pf.stream("<run_name>")
+```
+++
+#### View run outputs
+
+# [Azure CLI](#tab/cli)
+
+```sh
+pfazure run show-details --name <run_name>
+```
+
+# [Python SDK](#tab/python)
+
+```python
+details = pf.get_details(eval_run)
+details.head(10)
+```
+++
+#### View metrics of evaluation run
+
+# [Azure CLI](#tab/cli)
+
+```sh
+pfazure run show-metrics --name <evaluation_run_name>
+```
+
+# [Python SDK](#tab/python)
+
+```python
+pf.get_metrics("<evaluation_run_name>")
+```
+++
+## Iterative development from fine-tuning
+
+### Local development and testing
+
+During iterative development, as you refine and fine-tune your flow or prompts, you may find it beneficial to carry out multiple iterations locally within your code repository. The community versions, the **Prompt flow VS Code extension** and the **Prompt flow local SDK & CLI**, are provided to facilitate pure local development and testing without any Azure binding.
+
+#### Prompt flow VS Code extension
+
+With the Prompt Flow VS Code extension installed, you can easily author your flow locally from the VS Code editor, providing a similar UI experience as in the cloud.
+
+To use the extension:
+
+1. Open a prompt flow folder in VS Code Desktop.
+2. Open the `flow.dag.yaml` file in notebook view.
+3. Use the visual editor to make any necessary changes to your flow, such as tuning the prompts in variants or adding more tools.
+4. To test your flow, select the **Run Flow** button at the top of the visual editor. This will trigger a flow test.
++
+#### Prompt flow local SDK & CLI
+
+If you prefer to use Jupyter, PyCharm, Visual Studio, or other IDEs, you can directly modify the YAML definition in the ```flow.dag.yaml``` file.
++
+You can then trigger a single flow run for testing by using either the prompt flow CLI or SDK.
+
+# [Azure CLI](#tab/cli)
+
+Assuming you're in the working directory `<path-to-the-sample-repo>/examples/flows/standard/`:
+
+```sh
+pf flow test --flow web-classification # "web-classification" is the directory name
+```
++
+# [Python SDK](#tab/python)
+
+The return value of the `test` function is the flow/node outputs.
+
+```python
+from promptflow import PFClient
+
+pf_client = PFClient()
+
+flow_path = "web-classification" # "web-classification" is the directory name
+
+# Test flow
+flow_inputs = {"url": "https://www.youtube.com/watch?v=o5ZQyXaAv1g", "answer": "Channel", "evidence": "Url"} # The inputs of the flow.
+flow_result = pf_client.test(flow=flow_path, inputs=flow_inputs)
+print(f"Flow outputs: {flow_result}")
+
+# Test node in the flow
+node_name = "fetch_text_content_from_url" # The node name in the flow.
+node_inputs = {"url": "https://www.youtube.com/watch?v=o5ZQyXaAv1g"} # The inputs of the node.
+node_result = pf_client.test(flow=flow_path, inputs=node_inputs, node=node_name)
+print(f"Node outputs: {node_result}")
+```
++++
+This allows you to make and test changes quickly, without needing to update the main code repository each time. Once you're satisfied with the results of your local testing, you can then move on to [submitting runs to the cloud from local repository](#submitting-runs-to-the-cloud-from-local-repository) to perform experiment runs in the cloud.
+
+For more details and guidance on using the local versions, you can refer to the [Prompt flow GitHub community](https://github.com/microsoft/promptflow).
+
+### Go back to studio UI for continuous development
+
+Alternatively, you can go back to the studio UI and use the cloud resources and experience to make changes to your flow on the flow authoring page.
+
+To continue developing and working with the most up-to-date version of the flow files, you can access the terminal in the notebook and pull the latest changes of the flow files from your repository.
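+
+For example, from the terminal in the notebook, assuming the flow folder is a clone of your repository (the folder, remote, and branch names are placeholders):
+
+```bash
+# Placeholder folder, remote, and branch names
+cd Users/<username>/promptflow/<your-flow-folder>
+git pull origin main
+```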
+
+In addition, if you prefer continuing to work in the studio UI, you can directly import a local flow folder as a new draft flow. This allows you to seamlessly transition between local and cloud development.
++
+## CI/CD integration
+
+### CI: Trigger flow runs in CI pipeline
+
+Once you have successfully developed and tested your flow, and checked it in as the initial version, you're ready for the next tuning and testing iteration. At this stage, you can trigger flow runs, including batch testing and evaluation runs, using the Prompt Flow CLI. This could serve as an automated workflow in your Continuous Integration (CI) pipeline.
+
+Throughout the lifecycle of your flow iterations, several operations can be automated:
+
+- Running Prompt flow after a Pull Request
+- Running Prompt flow evaluation to ensure results are high quality
+- Registering prompt flow models
+- Deployment of prompt flow models
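+
+For example, a CI job could reuse the CLI calls shown earlier in this article. The following is a minimal sketch; run naming, thresholds, and gating logic are up to your pipeline:
+
+```bash
+# Submit the batch run and the evaluation run defined in your checked-in YAML files
+pfazure run create --file run.yml
+pfazure run create --file run_evaluation.yml
+
+# Inspect evaluation metrics before deciding whether to register or deploy
+pfazure run show-metrics --name <evaluation_run_name>
+```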
+
+For a comprehensive guide on an end-to-end MLOps pipeline that executes a web classification flow, see [Set up end to end LLMOps with Prompt Flow and GitHub](how-to-end-to-end-llmops-with-prompt-flow.md), and the [GitHub demo project](https://github.com/Azure/llmops-gha-demo).
+
+### CD: Continuous deployment
+
+The last step to go to production is to deploy your flow as an online endpoint in Azure Machine Learning. This allows you to integrate your flow into your application and make it available for use.
+
+For more information on how to deploy your flow, see [Deploy flows to Azure Machine Learning managed online endpoint for real-time inference with CLI and SDK](how-to-deploy-to-code.md).
+
+## Next steps
+
+- [Set up end-to-end LLMOps with Prompt Flow and GitHub](how-to-end-to-end-llmops-with-prompt-flow.md)
machine-learning How To Monitor Generative Ai Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-monitor-generative-ai-applications.md
What parameters are configured in your data asset dictates what metrics you can
- **Outputs:** In the Outputs _(Step #3 of the PromptFlow deployment wizard)_, confirm you have selected the required outputs listed above (for example, completion | context | ground_truth) that meet your [metric configuration requirements](#metric-configuration-requirements)

> [!NOTE]
-> If your compute instance is behind a VNet, see [Compute instance behind VNet](how-to-create-manage-runtime.md#compute-instance-behind-vnet).
+> If your compute instance is behind a VNet, see [Network isolation in prompt flow](how-to-secure-prompt-flow.md).
## Create your monitor
machine-learning How To Secure Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-secure-prompt-flow.md
+
+ Title: Network isolation in prompt flow (preview)
+
+description: Learn how to secure prompt flow with virtual network.
+++++++ Last updated : 09/12/2023++
+# Network isolation in prompt flow (preview)
+
+You can secure prompt flow using private networks. This article explains the requirements to use prompt flow in an environment secured by private networks.
+
+> [!IMPORTANT]
+> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Involved services
+
+When you're developing your LLM application using prompt flow, you may want a secured environment. You can make the following services private via network settings.
+
+- Workspace: you can make the Azure Machine Learning workspace private and limit its inbound and outbound traffic.
+- Compute resource: you can also limit the inbound and outbound rules of compute resources in the workspace.
+- Storage account: you can limit the accessibility of the storage account to a specific virtual network.
+- Container registry: you may also want to secure your container registry with a virtual network.
+- Endpoint: you may want to limit which Azure services or IP addresses can access your endpoint.
+- Related Azure Cognitive Services such as Azure OpenAI, Azure Content Safety, and Azure Cognitive Search: you can use network settings to make them private, then use private endpoints to let Azure Machine Learning services communicate with them.
+
+## Secure prompt flow with workspace managed virtual network
+
+Workspace managed virtual network is the recommended way to support network isolation in prompt flow. It provides easy configuration to secure your workspace. After you enable managed virtual network at the workspace level, resources related to the workspace in the same virtual network use the same network settings at the workspace level. You can also configure the workspace to use a private endpoint to access other Azure resources such as Azure OpenAI, Azure Content Safety, and Azure Cognitive Search. In addition, you can configure FQDN rules to approve outbound traffic to non-Azure resources used by your prompt flow, such as OpenAI, Pinecone, and so on.
+
+1. Follow [Workspace managed network isolation](../how-to-managed-network.md) to enable workspace managed virtual network.
+
+ > [!IMPORTANT]
+    > The creation of the managed virtual network is deferred until a compute resource is created or provisioning is manually started. You can use the following command to manually trigger network provisioning.
+ ```bash
+ az ml workspace provision-network --subscription <sub_id> -g <resource_group_name> -n <workspace_name>
+ ```
+
+2. If you want to communicate with [private Azure Cognitive Services](../../ai-services/cognitive-services-virtual-networks.md), you need to add user-defined outbound rules for the related resources. The Azure Machine Learning workspace creates a private endpoint in the related resource with auto-approval. If the status is stuck in pending, go to the related resource to approve the private endpoint manually.
+
+ :::image type="content" source="./media/how-to-secure-prompt-flow/outbound-rule-cognitive-services.png" alt-text="Screenshot of user defined outbound rule for Azure Cognitive Services." lightbox = "./media/how-to-secure-prompt-flow/outbound-rule-cognitive-services.png":::
+
+ :::image type="content" source="./media/how-to-secure-prompt-flow/outbound-private-endpoint-approve.png" alt-text="Screenshot of user approve private endpoint." lightbox = "./media/how-to-secure-prompt-flow/outbound-private-endpoint-approve.png":::
+
+3. If you're restricting outbound traffic to only allow specific destinations, you must add a corresponding user-defined outbound rule to allow the relevant FQDN.
+
+ :::image type="content" source="./media/how-to-secure-prompt-flow/outbound-rule-non-azure-resources.png" alt-text="Screenshot of user defined outbound rule for non Azure resource." lightbox = "./media/how-to-secure-prompt-flow/outbound-rule-non-azure-resources.png":::
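+
+    For reference, an FQDN outbound rule can also be added with the Azure CLI. This is a sketch with placeholder names; the exact parameters are an assumption, so verify them against the CLI reference for your version:
+
+    ```bash
+    # Placeholder workspace, resource group, rule, and FQDN values
+    az ml workspace outbound-rule set --workspace-name <workspace-name> \
+      --resource-group <resource-group> --rule-name <rule-name> \
+      --type fqdn --destination "<fqdn-to-allow>"
+    ```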
+
+## Secure prompt flow using your own virtual network
+
+- To set up Azure Machine Learning related resources as private, see [Secure workspace resources](../how-to-secure-workspace-vnet.md).
+- Meanwhile, you can follow [private Azure Cognitive Services](../../ai-services/cognitive-services-virtual-networks.md) to make them private.
+- You can either create private endpoints in the same virtual network or use virtual network peering so that the resources can communicate with each other.
+
+## Limitations
+
+- Workspace hub / lean workspace and AI studio don't support bring your own virtual network.
+- Managed online endpoint only supports workspace managed virtual network. If you want to use your own virtual network, you may need one workspace for prompt flow authoring with your virtual network and another workspace for prompt flow deployment using managed online endpoint with workspace managed virtual network.
+
+## Next steps
+
+- [Secure workspace resources](../how-to-secure-workspace-vnet.md)
+- [Workspace managed network isolation](../how-to-managed-network.md)
machine-learning How To Tune Prompts Using Variants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-tune-prompts-using-variants.md
Previously updated : 06/30/2023 Last updated : 09/12/2023 # Tune prompts using variants (preview)
In this article, we'll use **Web Classification** sample flow as example.
1. Open the sample flow and remove the **prepare_examples** node as a start.
- :::image type="content" source="./media/how-to-tune-prompts-using-variants/flow-graph.png" alt-text="Screenshot of the graph view of a Web Classification sample flow. " lightbox = "./media/how-to-tune-prompts-using-variants/flow-graph.png":::
+ :::image type="content" source="./media/how-to-tune-prompts-using-variants/flow-graph.png" alt-text="Screenshot of Web Classification example flow to demonstrate variants. " lightbox = "./media/how-to-tune-prompts-using-variants/flow-graph.png":::
-1. Use the following prompt as a baseline prompt in the **classify_with_llm** node.
+2. Use the following prompt as a baseline prompt in the **classify_with_llm** node.
``` Your task is to classify a given url into one of the following types:
To optimize this flow, there can be multiple ways, and following are two directi
### Create variants 1. Select **Show variants** button on the top right of the LLM node. The existing LLM node is variant_0 and is the default variant.
- :::image type="content" source="./media/how-to-tune-prompts-using-variants/show-variants.png" alt-text="Screenshot of Web Classification highlighting the show variants button. " lightbox = "./media/how-to-tune-prompts-using-variants/show-variants.png":::
-1. Select the **Clone** button on variant_0 to generate variant_1, then you can configure parameters to different values, or update the prompt on variant_1.
- :::image type="content" source="./media/how-to-tune-prompts-using-variants/clone-variant.png" alt-text="Screenshot of Web Classification highlighting the clone button. " lightbox = "./media/how-to-tune-prompts-using-variants/clone-variant.png":::
-1. Repeat the step to create more variants.
-1. Select **Hide variants** to stop adding more variants. And all variants are folded. The default variant is shown for the node.
+2. Select the **Clone** button on variant_0 to generate variant_1, then you can configure parameters to different values or update the prompt on variant_1.
+3. Repeat the step to create more variants.
+4. Select **Hide variants** to stop adding more variants. All variants are folded. The default variant is shown for the node.
For **classify_with_llm** node, based on variant_0:
-- create variant_1 where the temperature is changed from 1 to 0.
-- create variant_2 where temperature is 0 and you can use the following prompt including few-shots examples.
+- Create variant_1 where the temperature is changed from 1 to 0.
+- Create variant_2 where temperature is 0 and you can use the following prompt including few-shots examples.
```
For **summarize_text_content** node, based on variant_0, you can create variant_
Now, the flow looks as following, 2 variants for **summarize_text_content** node and 3 for **classify_with_llm** node. ### Run all variants with a single row of data and check outputs
In this example, we configure variants for both **summarize_text_content** node
1. Select the **Run** button on the top right. 1. Select an LLM node with variants. The other LLM nodes will use the default variant.
- :::image type="content" source="./media/how-to-tune-prompts-using-variants/run-select-variants.png" alt-text="Screenshot of Submit flow run where you can select an LLM node. " lightbox = "./media/how-to-tune-prompts-using-variants/run-select-variants.png":::
-1. Submit the flow run.
-1. After the flow run is completed, you can check the corresponding result for each variant.
- :::image type="content" source="./media/how-to-tune-prompts-using-variants/run-variant-output.png" alt-text="Screenshot of Web Classification showing a completed run. " lightbox = "./media/how-to-tune-prompts-using-variants/run-variant-output.png":::
-1. Submit another flow run with the other LLM node with variants, and check the outputs.
-1. You can change another input data (for example, use a Wikipedia page URL) and repeat the steps above to test variants for different data.ΓÇïΓÇïΓÇïΓÇïΓÇïΓÇïΓÇï
+ :::image type="content" source="./media/how-to-tune-prompts-using-variants/run-select-variants.png" alt-text="Screenshot of submitting a flow run when you have variants in flow. " lightbox = "./media/how-to-tune-prompts-using-variants/run-select-variants.png":::
+2. Submit the flow run.
+3. After the flow run is completed, you can check the corresponding result for each variant.
+4. Submit another flow run with the other LLM node with variants, and check the outputs.
+5. You can change the input data (for example, use a Wikipedia page URL) and repeat the steps above to test variants for different data.
### Evaluate variants When you run the variants with a few single pieces of data and check the results with the naked eye, it cannot reflect the complexity and diversity of real-world data, meanwhile the output isn't measurable, so it's hard to compare the effectiveness of different variants, then choose the best.
-You can submit a bulk test, which allows you test the variants with a large amount of data and evaluate them with metrics, to help you find the best fit.
+You can submit a batch run, which allows you test the variants with a large amount of data and evaluate them with metrics, to help you find the best fit.
1. First you need to prepare a dataset, which is representative enough of the real-world problem you want to solve with Prompt flow. In this example, it's a list of URLs and their classification ground truth. We'll use accuracy to evaluate the performance of variants.
-2. Select **Bulk test** on the top right of the page.
-3. A wizard for **Bulk test & Evaluate** occurs. The first step is to select a node to run all its variants.
+2. Select **Batch run** on the top right of the page.
+3. A wizard for **Submit batch run** occurs. The first step is to select a node to run all its variants.
- To test how well different variants work for each node in a flow, you need to run a bulk test for each node with variants one by one. This helps you avoid the influence of other nodes' variants and focus on the results of this node's variants. This follows the rule of the controlled experiment, which means that you only change one thing at a time and keep everything else the same.
+ To test how well different variants work for each node in a flow, you need to run a batch run for each node with variants one by one. This helps you avoid the influence of other nodes' variants and focus on the results of this node's variants. This follows the rule of the controlled experiment, which means that you only change one thing at a time and keep everything else the same.
- For example, you can select **classify_with_llm** node to run all variants, the **summarize_text_content** node will use it default variant for this bulk test.
+ For example, you can select **classify_with_llm** node to run all variants, the **summarize_text_content** node will use it default variant for this batch run.
-3. Next in **Bulk test settings**, you can set bulk test name, choose a runtime, upload the prepared data.
-4. Next, in **Evaluation settings**, select an evaluation method.
+4. Next in **Batch run settings**, you can set batch run name, choose a runtime, upload the prepared data.
+5. Next, in **Evaluation settings**, select an evaluation method.
Since this flow is for classification, you can select **Classification Accuracy Evaluation** method to evaluate accuracy.
You can submit a bulk test, which allows you test the variants with a large amou
In the **Evaluation input mapping** section, you need to specify ground truth comes from the category column of input dataset, and prediction comes from one of the flow outputs: category.
-5. After reviewing all the settings, you can submit the bulk test.
-6. After the run is submitted, select the link, go to the run detail page.
+6. After reviewing all the settings, you can submit the batch run.
+7. After the run is submitted, select the link, go to the run detail page.
> [!NOTE] > The run may take several minutes to complete.
-### Compare metrics
+### Visualize outputs
-1. After the bulk run and evaluation run complete, in the bulk run detail page, switch to **Metrics** tab, you can see the metrics of 3 variants for the **classify_with_llm** node. It validates the hypothesis that lower temperature and few-shot examples can improve classification accuracy.
-
- :::image type="content" source="./media/how-to-tune-prompts-using-variants/bulk-test-metrics.png" alt-text="Screenshot of the bulk detail page on the metrics tab. " lightbox = "./media/how-to-tune-prompts-using-variants/bulk-test-metrics.png":::
-
- > [!NOTE]
- > You may fail to reproduce the same results as shown in the screenshots, it is because of the randomness of LLM output and the limited 30 records of data.
-
-1. To further investigate how different variants predict, you can go to **Outputs** tab, select an evaluation run, check prediction results for each row of data.
-
- :::image type="content" source="./media/how-to-tune-prompts-using-variants/bulk-test-outputs.png" alt-text="Screenshot of the bulk detail page on the outputs tab. " lightbox = "./media/how-to-tune-prompts-using-variants/bulk-test-outputs.png":::
-
-1. After you identify that which variant is the best, you can go back to the flow authoring page and set that variant as default variant of the node
-1. You can repeat the above steps to evaluate the variants of **summarize_text_content** node as well.
+1. After the batch run and evaluation run complete, in the run detail page, multi-select the batch runs for each variant, then select **Visualize outputs**. You will see the metrics of 3 variants for the **classify_with_llm** node and LLM predicted outputs for each record of data.
+ :::image type="content" source="./media/how-to-tune-prompts-using-variants/visualize-outputs.png" alt-text="Screenshot of runs showing visualize outputs. " lightbox = "./media/how-to-tune-prompts-using-variants/3-2-variants.png":::
+2. After you identify which variant is the best, you can go back to the flow authoring page and set that variant as the default variant of the node.
+3. You can repeat the above steps to evaluate the variants of **summarize_text_content** node as well.
Now, you've finished the process of tuning prompts using variants. You can apply this technique to your own Prompt flow to find the best variant for the LLM node.
machine-learning Migrate Managed Inference Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/migrate-managed-inference-runtime.md
+
+ Title: Migrate managed online endpoint/deployment runtime to compute instance or serverless runtime
+
+description: Migrate managed online endpoint/deployment runtime to compute instance or serverless runtime.
+++++++ Last updated : 08/31/2023+++
+# Deprecation plan for managed online endpoint/deployment runtime
+
+Managed online endpoint/deployment as runtime is deprecated. We recommend you migrate to compute instance or serverless runtime.
+
+From **September 2023**, we'll stop the creation of managed online endpoint/deployment runtimes. Existing runtimes will still be supported until **November 2023**.
+
+## Migrate to compute instance runtime
+
+If the existing managed online endpoint/deployment runtime is used only by you and isn't shared with other users, you can migrate to a compute instance runtime.
+
+- Create a compute instance yourself or ask the workspace admin to create one for you (a CLI sketch follows this list). To learn more, see [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md).
+- Use the compute instance to create a runtime. You can reuse the custom environment of the existing managed online endpoint/deployment runtime. To learn more, see [Customize environment for runtime](how-to-customize-environment-runtime.md).
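+
+A compute instance can also be created with the Azure CLI. The following is a minimal sketch with placeholder names and an arbitrarily chosen VM size:
+
+```bash
+# Placeholder names; pick a VM size that suits your workload
+az ml compute create --name <compute-instance-name> --type ComputeInstance \
+  --size Standard_DS11_v2 --resource-group <resource-group> --workspace-name <workspace-name>
+```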
+
+## Next steps
+
+- [Customize environment for runtime](how-to-customize-environment-runtime.md)
+- [Create and manage runtimes](how-to-create-manage-runtime.md)
machine-learning Resource Limits Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-limits-capacity.md
Previously updated : 09/27/2022 Last updated : 09/13/2023 ms.metadata: product-dependency
machine-learning Tutorial Network Isolation For Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-network-isolation-for-feature-store.md
+
+ Title: "Tutorial 6: Network isolation for feature store (preview)"
+
+description: This is part 6 of a tutorial series on managed feature store.
+++++++ Last updated : 09/13/2023++
+#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
++
+# Tutorial 6: Network isolation with feature store (preview)
++
+An Azure Machine Learning managed feature store lets you discover, create, and operationalize features. Features serve as the connective tissue in the machine learning lifecycle, starting from the prototyping phase, where you experiment with various features. That lifecycle continues to the operationalization phase, where you deploy your models, and inference steps look up the feature data. For more information about feature stores, see the [feature store concepts](./concept-what-is-managed-feature-store.md) document.
+
+This tutorial describes how to configure secure ingress through a private endpoint, and secure egress through a managed virtual network.
+
+Part 1 of this tutorial series showed how to create a feature set specification with custom transformations, and use that feature set to generate training data. Part 2 of the tutorial series showed how to enable materialization and perform a backfill. Part 3 of this tutorial series showed how to experiment with features, as a way to improve model performance. Part 3 also showed how a feature store increases agility in the experimentation and training flows. Tutorial 4 described how to run batch inference. Tutorial 5 explained how to use feature store for online/realtime inference use cases. Tutorial 6 shows how to
+
+> [!div class="checklist"]
+> * Set up the necessary resources for network isolation of a managed feature store.
+> * Create a new feature store resource.
+> * Set up your feature store to support network isolation scenarios.
+> * Update your project workspace (current workspace) to support network isolation scenarios.
+
+## Prerequisites
+
+> [!NOTE]
+> This tutorial uses Azure Machine Learning notebook with **Serverless Spark Compute**.
+
+* Make sure you complete parts 1 through 5 of this tutorial series.
+* An Azure Machine Learning workspace, enabled with Managed virtual network for **serverless spark jobs**.
+* If your workspace has an **Azure Container Registry**, it must use **Premium SKU** to successfully complete the workspace configuration. To configure your project workspace:
+ 1. Create a YAML file named `network.yml`:
+ ```YAML
+ managed_network:
+ isolation_mode: allow_internet_outbound
+ ```
+ 1. Execute these commands to update the workspace and provision the managed virtual network for serverless Spark jobs:
+
+ ```cli
+      az ml workspace update --file network.yml --resource-group my_resource_group --name my_workspace_name
+      az ml workspace provision-network --resource-group my_resource_group --name my_workspace_name --include-spark
+ ```
+
+ For more information, see [Configure for serverless spark job](./how-to-managed-network.md#configure-for-serverless-spark-jobs).
+
+* Your user account must have the `Owner` or `Contributor` role assigned to the resource group where you create the feature store. Your user account also needs the `User Access Administrator` role.
+
+> [!IMPORTANT]
+> For your Azure Machine Learning workspace, set the `isolation_mode` to `allow_internet_outbound`. This is the only `isolation_mode` option available at this time. However, we are actively working to add `allow_only_approved_outbound` isolation_mode functionality. As a workaround, this tutorial will show how to connect to sources, materialization store and observation data securely through private endpoints.
+
+## Set up
+
+This tutorial uses the Python feature store core SDK (`azureml-featurestore`). The Python SDK is used for feature set development and testing only. The CLI is used for create, read, update, and delete (CRUD) operations, on feature stores, feature sets, and feature store entities. This is useful in continuous integration and continuous delivery (CI/CD) or GitOps scenarios where CLI/YAML is preferred.
+
+You don't need to explicitly install these resources for this tutorial, because in the set-up instructions shown here, the `conda.yaml` file covers them.
+
+To prepare the notebook environment for development:
+
+1. Clone the [azureml-examples](https://github.com/azure/azureml-examples) repository to your local GitHub resources with this command:
+
+ `git clone --depth 1 https://github.com/Azure/azureml-examples`
+
+ You can also download a zip file from the [azureml-examples](https://github.com/azure/azureml-examples) repository. At this page, first select the `code` dropdown, and then select `Download ZIP`. Then, unzip the contents into a folder on your local device.
+
+1. Upload the feature store samples directory to the project workspace
+
+ 1. In the Azure Machine Learning workspace, open the Azure Machine Learning studio UI.
+ 1. Select **Notebooks** in left navigation panel.
+ 1. Select your user name in the directory listing.
+ 1. Select ellipses (**...**) and then select **Upload folder**.
+ 1. Select the feature store samples folder from the cloned directory path: `azureml-examples/sdk/python/featurestore-sample`.
+
+1. Run the tutorial
+
+ * Option 1: Create a new notebook, and execute the instructions in this document, step by step.
+ * Option 2: Open existing notebook `featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb`. You may keep this document open and refer to it for more explanation and documentation links.
+
+ 1. Select **Serverless Spark Compute** in the top navigation **Compute** dropdown. This operation might take one to two minutes. Wait for a status bar in the top to display **Configure session**.
+ 1. Select **Configure session** in the top status bar.
+ 1. Select **Python packages**.
+ 1. Select **Upload conda file**.
+ 1. Select file `azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml` located on your local device.
+ 1. (Optional) Increase the session time-out (idle time in minutes) to reduce the serverless spark cluster startup time.
+
+1. This code cell starts the Spark session. It needs about 10 minutes to install all dependencies and start the Spark session.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=start-spark-session)]
+
+1. Set up the root directory for the samples
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=root-dir)]
+
+1. Set up the Azure Machine Learning CLI:
+
+ * Install the Azure Machine Learning CLI extension
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=install-ml-ext-cli)]
+
+ * Authenticate
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=auth-cli)]
+
+ * Set the default subscription
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=set-default-subs-cli)]
+
+ > [!NOTE]
+ > A **feature store workspace** supports feature reuse across projects. A **project workspace** - the current workspace in use - leverages features from a specific feature store, to train and inference models. Many project workspaces can share and reuse the same feature store workspace.
+
+## Provision the necessary resources
+
+You can create a new Azure Data Lake Storage (ADLS) Gen2 storage account and containers, or reuse existing storage account and container resources for the feature store. In a real-world situation, different storage accounts can host the ADLS Gen2 containers. Both options work, depending on your specific requirements.
+
+For this tutorial, you create three separate storage containers in the same ADLS Gen2 storage account:
+
+ * Source data
+ * Offline store
+ * Observation data
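+
+If you prefer to create these resources outside the notebook, the equivalent Azure CLI calls look roughly like the following; the names are placeholders, and the notebook cells below perform the same operations:
+
+```bash
+# Placeholder names; enable the hierarchical namespace for ADLS Gen2
+az storage account create --name <storage-account-name> --resource-group <resource-group> \
+  --location <region> --sku Standard_LRS --kind StorageV2 --enable-hierarchical-namespace true
+for container in <source-data-container> <offline-store-container> <observation-data-container>; do
+  az storage container create --account-name <storage-account-name> --name "$container" --auth-mode login
+done
+```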
+
+1. Create an ADLS Gen2 storage account for source data, offline store, and observation data.
+
+ 1. Provide the name of an Azure Data Lake Storage Gen2 storage account in the following code sample. You can execute the following code cell with the provided default settings. Optionally, you can override the default settings.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=default-settings)]
+
+ 1. This code cell creates the ADLS Gen2 storage account defined in the above code cell.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=create-storage-cli)]
+
+ 1. This code cell creates a new storage container for offline store.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=create-offline-cli)]
+
+ 1. This code cell creates a new storage container for source data.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=create-source-cli)]
+
+ 1. This code cell creates a new storage container for observation data.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=create-obs-cli)]
+
+1. Copy the sample data required for this tutorial series into the newly created storage containers.
+
+ 1. To write data to the storage containers, ensure that the **Contributor** and **Storage Blob Data Contributor** roles are assigned to your user identity on the created ADLS Gen2 storage account in the Azure portal, by [following these steps](../role-based-access-control/role-assignments-portal.md).
+
+ > [!IMPORTANT]
+ > After you have ensured that the **Contributor** and **Storage Blob Data Contributor** roles are assigned to the user identity, wait a few minutes for the permissions to propagate before proceeding with the next steps. To learn more about access control, see [role-based access control (RBAC) for Azure storage accounts](../storage/blobs/data-lake-storage-access-control-model.md#role-based-access-control-azure-rbac).
+
+ The following code cell copies the sample source data for the transactions feature set used in this tutorial from a public storage account to the newly created storage account.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=copy-transact-data)]
+
+ 1. Copy sample source data for account feature set used in this tutorial from a public storage account to the newly created storage account.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=copy-account-data)]
+
+ 1. Copy sample observation data used for training from a public storage account to the newly created storage account.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=copy-obs-train-data)]
+
+ 1. Copy sample observation data used for batch inference from a public storage account to the newly created storage account.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=copy-obs-batch-data)]
+
+1. Disable the public network access on the newly created storage account.
+
+ 1. This code cell disables public network access for the ADLS Gen2 storage account created earlier.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=disable-pna-gen2-cli)]
+
+ 1. Set ARM IDs for the offline store, source data, and observation data containers.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=set-container-arm-ids)]
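+
+The following is a hedged sketch that consolidates the storage steps in this section with the Azure CLI. The account and container names are illustrative placeholders, not values from the notebook, and the notebook cells above remain authoritative.
+
+```python
+# Hedged sketch: create an ADLS Gen2 account, create the three containers, and
+# (after the sample data is copied) disable public network access on the account.
+import subprocess
+
+def az(*args):
+    subprocess.run(["az", *args], check=True)
+
+resource_group = "<RESOURCE_GROUP>"            # placeholder
+storage_account = "<ADLS_GEN2_ACCOUNT_NAME>"   # placeholder
+location = "eastus"                            # illustrative region
+
+# ADLS Gen2 account (hierarchical namespace enabled).
+az("storage", "account", "create",
+   "--name", storage_account,
+   "--resource-group", resource_group,
+   "--location", location,
+   "--sku", "Standard_LRS",
+   "--kind", "StorageV2",
+   "--enable-hierarchical-namespace", "true")
+
+# Containers for the offline store, source data, and observation data.
+for container in ("offlinestore", "sourcedata", "observationdata"):
+    az("storage", "container", "create",
+       "--account-name", storage_account,
+       "--name", container,
+       "--auth-mode", "login")
+
+# After copying the sample data, lock down the account.
+az("storage", "account", "update",
+   "--name", storage_account,
+   "--resource-group", resource_group,
+   "--public-network-access", "Disabled")
+```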
+
+## Provision the user-assigned managed identity (UAI)
+
+1. Create a new User-assigned managed identity.
+
+ 1. In the following code cell, provide a name for the user-assigned managed identity that you would like to create.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=define-uai-name)]
+
+ 1. This code cell creates the UAI.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=create-uai-cli)]
+
+ 1. This code cell retrieves the principal ID, client ID, and ARM ID property values for the created UAI.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=retrieve-uai-props)]
+
+ ### Grant RBAC permission to the user-assigned managed identity (UAI)
+
+ The UAI is assigned to the feature store, and requires the following permissions:
+
+ |Scope| Action/Role|
+ |--|--|
+ |Feature store |Azure Machine Learning Data Scientist role|
+ |Storage account of feature store offline store |Storage Blob Data Contributor role|
+ |Storage accounts of source data |Storage Blob Data Contributor role|
+
+ The next CLI commands assign the **Storage Blob Data Contributor** role to the UAI. In this example, "Storage accounts of source data" doesn't apply because you read the sample data from a publicly accessible blob storage account. To use your own data sources, you must assign the required roles to the UAI. To learn more about access control, see role-based access control for [Azure storage accounts](../storage/blobs/data-lake-storage-access-control-model.md#role-based-access-control-azure-rbac) and the [Azure Machine Learning workspace](./how-to-assign-roles.md).
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=uai-offline-role-cli)]
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=uai-source-role-cli)]
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=uai-obs-role-cli)]
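+
+ The role assignments above can also be expressed as plain Azure CLI calls. The following hedged sketch assumes placeholder ARM IDs and the UAI principal ID retrieved earlier; the notebook cells above are authoritative.
+
+```python
+# Hedged sketch: assign Storage Blob Data Contributor to the UAI on the offline
+# store, source data, and observation data containers. Values are placeholders.
+import subprocess
+
+uai_principal_id = "<UAI_PRINCIPAL_ID>"
+scopes = [
+    "<OFFLINE_STORE_CONTAINER_ARM_ID>",
+    "<SOURCE_DATA_CONTAINER_ARM_ID>",
+    "<OBSERVATION_DATA_CONTAINER_ARM_ID>",
+]
+
+for scope in scopes:
+    subprocess.run(
+        ["az", "role", "assignment", "create",
+         "--role", "Storage Blob Data Contributor",
+         "--assignee-object-id", uai_principal_id,
+         "--assignee-principal-type", "ServicePrincipal",
+         "--scope", scope],
+        check=True,
+    )
+```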
+
+## Create a feature store with materialization enabled
+
+ ### Set the feature store parameters
+
+ Set the feature store name, location, subscription ID, group name, and ARM ID values, as shown in this code cell sample:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=fs-params)]
+
+ The following code cell generates a YAML specification file for a feature store with materialization enabled.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=create-fs-yaml)]
+
+ ### Create the feature store
+
+ This code cell creates a feature store with materialization enabled by using the YAML specification file generated in the previous step.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=create-fs-cli)]
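+
+ For orientation, the following hedged sketch shows how such a YAML file and the create call might look. The YAML field names are assumptions made for illustration, and `az ml feature-store create` is invoked on the generated file; the notebook's generated specification is authoritative.
+
+```python
+# Hedged sketch: write a minimal feature store YAML and create the feature store.
+# Field names below are assumptions; all values are placeholders.
+import subprocess
+
+yaml_text = """\
+name: <FEATURE_STORE_NAME>
+location: <REGION>
+offline_store:
+  type: azure_data_lake_gen2            # assumption: offline store type identifier
+  target: <OFFLINE_STORE_CONTAINER_ARM_ID>
+materialization_identity:
+  resource_id: <UAI_ARM_ID>             # assumption: identity field name
+"""
+
+with open("featurestore.yaml", "w") as f:
+    f.write(yaml_text)
+
+subprocess.run(
+    ["az", "ml", "feature-store", "create",
+     "--file", "featurestore.yaml",
+     "--resource-group", "<RESOURCE_GROUP>"],
+    check=True,
+)
+```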
+
+ ### Initialize the Azure Machine Learning feature store core SDK client
+
+ The SDK client initialized in this cell facilitates development and consumption of features:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=init-fs-core-sdk)]
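+
+ The following hedged sketch shows a typical initialization of the `azureml-featurestore` client. The constructor keyword names reflect the SDK's documented usage but should be checked against the notebook cell above; values are placeholders.
+
+```python
+# Hedged sketch: initialize the feature store core SDK client.
+from azure.identity import DefaultAzureCredential
+from azureml.featurestore import FeatureStoreClient
+
+featurestore = FeatureStoreClient(
+    credential=DefaultAzureCredential(),
+    subscription_id="<SUBSCRIPTION_ID>",
+    resource_group_name="<FEATURE_STORE_RESOURCE_GROUP>",
+    name="<FEATURE_STORE_NAME>",
+)
+```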
+
+ ### Grant UAI access to the feature store
+
+ This code cell assigns the **AzureML Data Scientist** role to the UAI on the created feature store. To learn more about access control, see role-based access control for [Azure storage accounts](../storage/blobs/data-lake-storage-access-control-model.md#role-based-access-control-azure-rbac) and the [Azure Machine Learning workspace](./how-to-assign-roles.md).
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=uai-fs-role-cli)]
+
+ Follow these instructions to [get the Azure AD Object ID for your user identity](/partner-center/find-ids-and-domain-names#find-the-user-object-id). Then, use your Azure AD Object ID in the following command to assign the **AzureML Data Scientist** role to your user identity on the created feature store.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=aad-fs-role-cli)]
+
+ ### Obtain the default storage account and key vault for the feature store, and disable public network access to the corresponding resources
+
+ The following code cell gets the feature store object for the next steps.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=fs-get)]
+
+ This code cell gets the names of the default storage account and key vault for the feature store.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=copy-storage-kv-props)]
+
+ This code cell disables public network access to the default storage account for the feature store.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=disable-pna-fs-gen2-cli)]
+
+ The following cell prints the name of the default key vault for the feature store.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=print-default-kv)]
+
+ ### Disable the public network access for the default feature store key vault created earlier
+
+ * Open the default key vault that you created in the previous cell, in the Azure portal.
+ * Select the **Networking** tab.
+ * Select **Disable public access**, and then select **Apply** at the bottom left of the page.
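+
+ If you prefer the CLI to the portal steps above, the following hedged sketch disables public network access on the key vault. The key vault name is the value printed by the previous cell, shown here as a placeholder.
+
+```python
+# Hedged sketch: disable public network access on the feature store's default key vault.
+import subprocess
+
+subprocess.run(
+    ["az", "keyvault", "update",
+     "--name", "<DEFAULT_KEY_VAULT_NAME>",
+     "--resource-group", "<RESOURCE_GROUP>",
+     "--public-network-access", "Disabled"],
+    check=True,
+)
+```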
+
+## Enable the managed virtual network for the feature store workspace
+
+ ### Update the feature store with the necessary outbound rules
+
+ The following code cell creates a YAML specification file for outbound rules that are defined for the feature store.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=create-fs-vnet-yaml)]
+
+ This code cell updates the feature store using the generated YAML specification file with the outbound rules.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=create-fs-vnet-cli)]
+
+ ### Create private endpoints for the defined outbound rules
+
+ The `provision-network` command creates private endpoints from the managed virtual network, where the materialization job executes, to the source data, offline store, observation data, default storage account, and default key vault for the feature store. This command might need about 20 minutes to complete.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=fs-vnet-provision-cli)]
+
+ This code cell confirms that private endpoints defined by the outbound rules have been created.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=fs-show-cli)]
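+
+ To illustrate the shape of these outbound rules, the following hedged sketch writes a single private-endpoint rule, applies it to the feature store, and then provisions the endpoints. The YAML field names follow Azure Machine Learning managed network conventions but should be treated as assumptions; the notebook's generated file is authoritative.
+
+```python
+# Hedged sketch: one outbound rule for the source data storage account, applied to
+# the feature store, followed by private endpoint provisioning. Values are placeholders.
+import subprocess
+
+outbound_yaml = """\
+managed_network:
+  isolation_mode: allow_internet_outbound
+  outbound_rules:
+    - name: sourcedata-pe
+      type: private_endpoint
+      destination:
+        service_resource_id: <SOURCE_DATA_STORAGE_ARM_ID>
+        subresource_target: dfs
+        spark_enabled: true
+"""
+
+with open("feature_store_managed_network.yaml", "w") as f:
+    f.write(outbound_yaml)
+
+subprocess.run(
+    ["az", "ml", "feature-store", "update",
+     "--file", "feature_store_managed_network.yaml",
+     "--name", "<FEATURE_STORE_NAME>",
+     "--resource-group", "<RESOURCE_GROUP>"],
+    check=True,
+)
+
+subprocess.run(
+    ["az", "ml", "feature-store", "provision-network",
+     "--name", "<FEATURE_STORE_NAME>",
+     "--resource-group", "<RESOURCE_GROUP>"],
+    check=True,
+)
+```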
+
+## Update the managed virtual network for the project workspace
+
+ Next, update the managed virtual network for the project workspace. First, get the subscription ID, resource group, and workspace name for the project workspace.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=lookup-subid-rg-wsname)]
+
+ ### Update the project workspace with the necessary outbound rules
+
+ The project workspace needs access to these resources:
+
+ * Source data
+ * Offline store
+ * Observation data
+ * Feature store
+ * Default storage account of feature store
+
+ This code cell generates a YAML specification file with the required outbound rules for the project workspace.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=create-fs-prjws-vnet-yaml)]
+
+ This code cell updates the project workspace using the generated YAML specification file with the outbound rules.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=create-fs-prjws-vnet-cli)]
+
+ This code cell confirms that private endpoints defined by the outbound rules have been created.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=show-prjws-cli)]
+
+ You can also verify the outbound rules in the Azure portal by navigating to **Networking** in the left navigation panel for the project workspace, and then opening the **Workspace managed outbound access** tab.
+
+ :::image type="content" source="./media/tutorial-network-isolation-for-feature-store/project-workspace-outbound-rules.png" lightbox="./media/tutorial-network-isolation-for-feature-store/project-workspace-outbound-rules.png" alt-text="This screenshot shows outbound rules for a project workspace in Azure portal.":::
+
+## Prototype and develop a transaction rolling aggregation feature set
+
+ ### Explore the transactions source data
+
+ > [!NOTE]
+ > A publicly accessible blob container hosts the sample data used in this tutorial. It can only be read in Spark via the `wasbs` driver. When you create feature sets by using your own source data, host that data in an ADLS Gen2 account, and use the `abfss` driver in the data path.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=explore-txn-src-data)]
+
+ ### Locally develop a transactions feature set
+
+ A feature set specification is a self-contained feature set definition that can be developed and tested locally.
+
+ Create the following rolling window aggregate features:
+
+ * transactions three-day count
+ * transactions amount three-day sum
+ * transactions amount three-day avg
+ * transactions seven-day count
+ * transactions amount seven-day sum
+ * transactions amount seven-day avg
+
+ Inspect the feature transformation code file `featurestore/featuresets/transactions/spec/transformation_code/transaction_transform.py`. This Spark transformer performs the rolling aggregation defined for the features.
+
+ To understand the feature set and transformations in more detail, see [feature store concepts](./concept-what-is-managed-feature-store.md).
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=develop-txn-fset-locally)]
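+
+ For intuition, the following hedged PySpark sketch shows the kind of rolling-window aggregation that such a transformer performs. It is not the tutorial's `transaction_transform.py`; the column names (`accountID`, `timestamp`, `transactionAmount`) are illustrative.
+
+```python
+# Hedged sketch: three-day and seven-day rolling count, sum, and average of
+# transaction amounts per account, computed with Spark window functions.
+from pyspark.sql import DataFrame, functions as F
+from pyspark.sql.window import Window
+
+def rolling_transaction_features(df: DataFrame) -> DataFrame:
+    ts_seconds = F.col("timestamp").cast("long")   # event time in seconds
+
+    for days in (3, 7):
+        window = (
+            Window.partitionBy("accountID")
+            .orderBy(ts_seconds)
+            .rangeBetween(-days * 24 * 3600, 0)    # trailing window of `days` days
+        )
+        df = (
+            df.withColumn(f"transaction_{days}d_count", F.count("*").over(window))
+              .withColumn(f"transaction_amount_{days}d_sum",
+                          F.sum("transactionAmount").over(window))
+              .withColumn(f"transaction_amount_{days}d_avg",
+                          F.avg("transactionAmount").over(window))
+        )
+    return df
+```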
+
+ ### Export a feature set specification
+
+ To register a feature set specification with the feature store, that specification must be saved in a specific format.
+
+ To inspect the generated transactions feature set specification, open this file from the file tree to see the specification:
+
+ `featurestore/featuresets/accounts/spec/FeaturesetSpec.yaml`
+
+ The specification contains these elements:
+
+ * `source`: a reference to a storage resource - in this case a parquet file in a blob storage resource
+ * `features`: a list of features and their datatypes. If you provide transformation code, the code must return a DataFrame that maps to the features and datatypes.
+ * `index_columns`: the join keys required to access values from the feature set
+
+ As another benefit of persisting a feature set specification as a YAML file, the specification can be version controlled. Learn more about feature set specification in the [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md) and the [feature set specification YAML reference](./reference-yaml-featureset-spec.md).
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=dump-transactions-fs-spec)]
+
+## Register a feature-store entity
+
+ Entities help enforce use of the same join key definitions across feature sets that use the same logical entities. Entity examples could include account entities, customer entities, etc. Entities are typically created once and then reused across feature sets. For more information, see the [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md).
+
+ This code cell creates an account entity for the feature store.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=register-acct-entity-cli)]
+
+## Register the transaction feature set with the feature store, and submit a materialization job
+
+ To share and reuse a feature set asset, you must first register that asset with the feature store. Feature set asset registration offers managed capabilities including versioning and materialization. This tutorial series covers these topics.
+
+ The feature set asset references both the feature set spec that you created earlier, and other properties like version and materialization settings.
+
+ ### Create a feature set
+
+ The following code cell creates a feature set by using a predefined YAML specification file.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=register-txn-fset-cli)]
+
+ This code cell previews the newly created feature set.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=preview-fs-cli)]
+
+ ### Submit a backfill materialization job
+
+ The following code cell defines start and end time values for the feature materialization window, and submits a backfill materialization job.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=submit-backfill-cli)]
+
+ This code cell checks the status of the backfill materialization job by providing `<JOB_ID_FROM_PREVIOUS_COMMAND>`.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=check-job-status-cli)]
+
+ Next, this code cell lists all the materialization jobs for the current feature set.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=list-material-ops-cli)]
+
+## Use the registered features to generate training data
+
+ ### Load observation data
+
+ Start by exploring the observation data. Observation data is the core data used for training and inference; it's captured at the time of the event and is then joined with feature data to create a full training data resource. In this case, it contains core transaction data, including transaction ID, account ID, and transaction amount values. Here, because the observation data is used for training, it also has the target variable (`is_fraud`) appended.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=load-obs-data)]
+
+ ### Get the registered feature set, and list its features
+
+ Next, get a feature set by providing its name and version, and then list features in this feature set. Also, print some sample feature values.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=get-txn-fset)]
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=print-txn-fset-sample-values)]
+
+ ### Select features, and generate training data
+
+ Select features for the training data, and use the feature store SDK to generate the training data.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/network_isolation/Network Isolation for Feature store.ipynb?name=select-features-and-gen-training-data)]
+
+ You can see that a point-in-time join appended the features to the training data.
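+
+ The following hedged sketch outlines the shape of that call with the `azureml-featurestore` SDK. The feature names and the `get_feature` accessor are assumptions made for illustration, and `featurestore` and `observation_df` are the client and observation DataFrame from earlier cells; the notebook cell above is authoritative.
+
+```python
+# Hedged sketch: select two registered features and generate training data with a
+# point-in-time join. Exact method and keyword names should be verified against
+# the notebook; feature names are illustrative.
+from azureml.featurestore import get_offline_features
+
+transactions_featureset = featurestore.feature_sets.get("transactions", "1")
+
+features = [
+    transactions_featureset.get_feature("transaction_amount_7d_sum"),   # assumption
+    transactions_featureset.get_feature("transaction_amount_3d_sum"),   # assumption
+]
+
+training_df = get_offline_features(
+    features=features,
+    observation_data=observation_df,   # Spark DataFrame of observation data loaded earlier
+    timestamp_column="timestamp",      # event-time column in the observation data
+)
+training_df.show(5)
+```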
+
+## Optional next steps
+
+ Now that you've created a secure feature store and successfully submitted a materialization run, you can go through the tutorial series to build an understanding of the feature store.
+
+ This tutorial contains a mixture of steps from tutorials 1 and 2 of this series. Remember to replace the public storage containers used in the other tutorial notebooks with the ones created in this tutorial notebook, so that network isolation is preserved.
+
+We have reached the end of the tutorial. Your training data uses features from a feature store. You can either save it to storage for later use, or directly run model training on it.
+
+## Next steps
+
+* [Part 3: Experiment and train models using features](./tutorial-experiment-train-models-using-features.md)
+* [Part 4: Enable recurrent materialization and run batch inference](./tutorial-enable-recurrent-materialization-run-batch-inference.md)
machine-learning Tutorial Online Materialization Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-online-materialization-inference.md
+
+ Title: "Tutorial 5: Enable online materialization and run online inference (preview)"
+
+description: This is part 5 of a tutorial series on managed feature store.
+++++++ Last updated : 09/13/2023++
+#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
++
+# Tutorial 5: Enable online materialization and run online inference (preview)
++
+An Azure Machine Learning managed feature store lets you discover, create, and operationalize features. Features serve as the connective tissue in the machine learning lifecycle, starting from the prototyping phase, where you experiment with various features. That lifecycle continues to the operationalization phase, where you deploy your models, and inference steps look up the feature data. For more information about feature stores, see [feature store concepts](./concept-what-is-managed-feature-store.md).
+
+Part 1 of this tutorial series showed how to create a feature set specification with custom transformations, and use that feature set to generate training data. Part 2 of the tutorial series showed how to enable materialization and perform a backfill. Part 3 of this tutorial series showed how to experiment with features, as a way to improve model performance. Part 3 also showed how a feature store increases agility in the experimentation and training flows. Part 4 described how to run batch inference.
+
+In this tutorial, you'll
+
+> [!div class="checklist"]
+> * Set up an Azure Cache for Redis.
+> * Attach a cache to a feature store as the online materialization store, and grant the necessary permissions.
+> * Materialize a feature set to the online store.
+> * Test an online deployment with mock data.
+
+## Prerequisites
+
+> [!NOTE]
+> This tutorial uses an Azure Machine Learning notebook with **Serverless Spark Compute**.
+
+* Make sure you complete parts 1 through 4 of this tutorial series. This tutorial reuses the feature store and other resources created in the earlier tutorials.
+
+## Set up
+
+This tutorial uses the Python feature store core SDK (`azureml-featurestore`). The Python SDK is used for create, read, update, and delete (CRUD) operations on feature stores, feature sets, and feature store entities.
+
+You don't need to explicitly install these resources for this tutorial, because in the set-up instructions shown here, the `online.yml` file covers them.
+
+To prepare the notebook environment for development:
+
+1. Clone the [azureml-examples](https://github.com/azure/azureml-examples) repository to your local machine with this command:
+
+ `git clone --depth 1 https://github.com/Azure/azureml-examples`
+
+ You can also download a zip file from the [azureml-examples](https://github.com/azure/azureml-examples) repository. On that page, first select the `code` dropdown, and then select `Download ZIP`. Then, unzip the contents into a folder on your local device.
+
+1. Upload the feature store samples directory to the project workspace
+
+ 1. In the Azure Machine Learning workspace, open the Azure Machine Learning studio UI.
+ 1. Select **Notebooks** in the left navigation panel.
+ 1. Select your user name in the directory listing.
+ 1. Select the ellipsis (**...**), and then select **Upload folder**.
+ 1. Select the feature store samples folder from the cloned directory path: `azureml-examples/sdk/python/featurestore-sample`.
+
+1. Run the tutorial
+
+ * Option 1: Create a new notebook, and execute the instructions in this document, step by step.
+ * Option 2: Open existing notebook `featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb`. You may keep this document open and refer to it for more explanation and documentation links.
+
+ 1. Select **Serverless Spark Compute** in the top navigation **Compute** dropdown. This operation might take one to two minutes. Wait for the status bar at the top to display **Configure session**.
+ 1. Select **Configure session** in the top status bar.
+ 1. Select **Python packages**.
+ 1. Select **Upload conda file**.
+ 1. Select file `azureml-examples/sdk/python/featurestore-sample/project/env/online.yml` located on your local device.
+ 1. (Optional) Increase the session time-out (idle time in minutes) to reduce the serverless spark cluster startup time.
+
+1. This code cell starts the Spark session. It needs about 10 minutes to install all dependencies and start the Spark session.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=start-spark-session)]
+
+1. Set up the root directory for the samples
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=root-dir)]
+
+1. Initialize the `MLClient` for the project workspace, where the tutorial notebook runs. The `MLClient` is used for the create, read, update, and delete (CRUD) operations.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=init-prj-ws-client)]
+
+1. Initialize the `MLClient` for the feature store workspace, for the create, read, update, and delete (CRUD) operations on the feature store workspace.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=init-fs-ws-client)]
+
+ > [!NOTE]
+ > A **feature store workspace** supports feature reuse across projects. A **project workspace** - the current workspace in use - uses features from a specific feature store to train models and run inference. Many project workspaces can share and reuse the same feature store workspace.
+
+1. As mentioned earlier, this tutorial uses the Python feature store core SDK (`azureml-featurestore`). This initialized SDK client is used for create, read, update, and delete (CRUD) operations on feature stores, feature sets, and feature store entities.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=init-fs-core-sdk)]
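+
+The following is a hedged sketch of how those clients are typically constructed; all names are placeholders for the values used in the notebook.
+
+```python
+# Hedged sketch: MLClient for the project workspace and for the feature store workspace.
+from azure.identity import DefaultAzureCredential
+from azure.ai.ml import MLClient
+
+credential = DefaultAzureCredential()
+
+# Client scoped to the project workspace where this notebook runs.
+ws_client = MLClient(
+    credential,
+    subscription_id="<SUBSCRIPTION_ID>",
+    resource_group_name="<PROJECT_RESOURCE_GROUP>",
+    workspace_name="<PROJECT_WORKSPACE_NAME>",
+)
+
+# Client scoped to the feature store workspace.
+fs_client = MLClient(
+    credential,
+    subscription_id="<SUBSCRIPTION_ID>",
+    resource_group_name="<FEATURE_STORE_RESOURCE_GROUP>",
+    workspace_name="<FEATURE_STORE_NAME>",
+)
+```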
+
+## Prepare Azure Cache for Redis
+
+This tutorial uses Azure Cache for Redis as the online materialization store. You can create a new Redis instance, or reuse an existing instance.
+
+1. Set values for the Azure Cache for Redis resource to use as the online materialization store. In this code cell, define the name of the Azure Cache for Redis resource to create or reuse. You can override other default settings.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=redis-settings)]
+
+1. You can create a new Redis instance. Select an Azure Cache for Redis tier (Basic, Standard, Premium, or Enterprise), and then choose an SKU family available for that tier. For more information about tiers and cache performance, see [this resource](../azure-cache-for-redis/cache-best-practices-performance.md). For more information about SKU tiers and Azure cache families, see [this resource](https://azure.microsoft.com/pricing/details/cache/).
+
+ Execute this code cell to create an Azure Cache for Redis instance with the Premium tier, SKU family `P`, and cache capacity 2. It may take 5 to 10 minutes to prepare the Redis instance. (A minimal SDK sketch of this provisioning step appears after this list.)
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=provision-redis)]
+
+1. Optionally, this code cell reuses an existing Redis instance with the previously defined name.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=reuse-redis)]
+
+1. Retrieve the user-assigned managed identity (UAI) that the feature store used for materialization. This code cell retrieves the principal ID, client ID, and ARM ID property values for the UAI used by the feature store for data materialization.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=retrieve-uai)]
+
+1. This code cell grants the `Contributor` role to the UAI on the Azure Cache for Redis. This role is required to write data into Redis during materialization.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=uai-redis-rbac)]
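+
+The following hedged sketch shows how the provisioning and role assignment above might look with the `azure-mgmt-redis` SDK and the Azure CLI. Resource names are placeholders, and the SKU values mirror the tutorial text; the notebook cells above are authoritative.
+
+```python
+# Hedged sketch: create a Premium-tier cache (family P, capacity 2) and grant the
+# UAI Contributor access on it. This can take 5 to 10 minutes.
+import subprocess
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.redis import RedisManagementClient
+from azure.mgmt.redis.models import RedisCreateParameters, Sku
+
+redis_client = RedisManagementClient(DefaultAzureCredential(), "<SUBSCRIPTION_ID>")
+
+poller = redis_client.redis.begin_create(
+    "<RESOURCE_GROUP>",
+    "<REDIS_NAME>",
+    RedisCreateParameters(
+        location="<REGION>",
+        sku=Sku(name="Premium", family="P", capacity=2),
+    ),
+)
+redis_resource = poller.result()
+
+subprocess.run(
+    ["az", "role", "assignment", "create",
+     "--role", "Contributor",
+     "--assignee-object-id", "<UAI_PRINCIPAL_ID>",
+     "--assignee-principal-type", "ServicePrincipal",
+     "--scope", redis_resource.id],
+    check=True,
+)
+```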
+
+## Attach online materialization store to the feature store
+
+The feature store needs the Azure Cache for Redis as an attached resource, for use as the online materialization store. This code cell handles that step.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=attach-online-store)]
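+
+The following hedged sketch shows the general shape of that update with the `azure-ai-ml` SDK. Treat the entity and property names as assumptions and prefer the notebook cell above; `fs_client` is the MLClient scoped to the feature store workspace.
+
+```python
+# Hedged sketch: attach the Redis cache as the feature store's online store.
+from azure.ai.ml.entities import FeatureStore, MaterializationStore
+
+online_store = MaterializationStore(type="redis", target="<REDIS_ARM_ID>")
+fs = FeatureStore(name="<FEATURE_STORE_NAME>", online_store=online_store)
+
+poller = fs_client.feature_stores.begin_update(fs)
+print(poller.result())
+```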
+
+## Materialize the `accounts` feature set data to online store
+
+### Enable materialization on the `accounts` feature set
+
+Earlier in this tutorial series, you did **not** materialize the accounts feature set because it had precomputed features, and only batch inference scenarios used it. This code cell enables online materialization so that the features become available in the online store, with low latency access. For consistency, it also enables offline materialization. Enabling offline materialization is optional.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=enable-accounts-material)]
+
+### Backfill the `account` feature set
+
+The `begin_backfill` function backfills data to all the materialization stores enabled for this feature set. Here, both offline and online materialization are enabled. This code cell backfills the data to both the online and offline materialization stores.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=start-accounts-backfill)]
+
+This code cell tracks completion of the backfill job. With the Azure Cache for Redis premium tier provisioned earlier, this step may take approximately 10 minutes to complete.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=track-accounts-backfill)]
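+
+The following hedged sketch illustrates both steps with the `azure-ai-ml` SDK: enabling online and offline materialization on the `accounts` feature set and submitting a backfill. The instance type and feature window dates are illustrative, keyword names are assumptions, and `fs_client` is the feature store MLClient from earlier.
+
+```python
+# Hedged sketch: enable materialization on the accounts feature set and backfill it.
+from datetime import datetime
+from azure.ai.ml.entities import MaterializationSettings, MaterializationComputeResource
+
+accounts_fset = fs_client.feature_sets.get(name="accounts", version="1")
+accounts_fset.materialization_settings = MaterializationSettings(
+    online_enabled=True,
+    offline_enabled=True,
+    resource=MaterializationComputeResource(instance_type="standard_e8s_v3"),
+)
+fs_client.feature_sets.begin_create_or_update(accounts_fset).result()
+
+# Backfill a feature window to both enabled stores (dates are illustrative).
+poller = fs_client.feature_sets.begin_backfill(
+    name="accounts",
+    version="1",
+    feature_window_start_time=datetime(2023, 1, 1),
+    feature_window_end_time=datetime(2023, 6, 30),
+)
+print(poller.result())
+```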
+
+## Materialize `transactions` feature set data to the online store
+
+Earlier in this tutorial series, you materialized `transactions` feature set data to the offline materialization store.
+
+1. This code cell enables the `transactions` feature set online materialization.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=enable-transact-material)]
+
+1. This code cell backfills the data to both the online and offline materialization store, to ensure that both stores have the latest data. The recurrent materialization job, which you set up in tutorial 2 of this series, now materializes data to both online and offline materialization stores.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=start-transact-material)]
+
+ This code cell tracks completion of the backfill job. Using the premium tier Azure Cache for Redis provisioned earlier, this step may take approximately five minutes to complete.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=track-transact-material)]
+
+## Test locally
+
+Now, use your development environment to look up features from the online materialization store. The tutorial notebook attached to **Serverless Spark Compute** serves as the development environment.
+
+ This code cell parses the list of features from the existing feature retrieval specification.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=parse-feat-list)]
+
+ This code cell initializes the online lookup client that retrieves feature values from the online materialization store.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=init-online-lookup)]
+
+Prepare some observation data for testing, and use that data to look up features from the online materialization store. During the online lookup, the keys (`accountID`) defined in the observation sample data might not exist in Redis (because of the `TTL`). In this case:
+
+1. Open the Azure portal.
+1. Navigate to the Redis instance.
+1. Open the console for the Redis instance, and check for existing keys with the `KEYS *` command.
+1. Replace the `accountID` values in the sample observation data with the existing keys.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=online-feat-loockup)]
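+
+The following hedged sketch outlines the lookup flow. The `init_online_lookup` and `get_online_features` helpers come from the `azureml-featurestore` online retrieval API; treat their exact signatures as assumptions, and use keys that actually exist in your Redis instance.
+
+```python
+# Hedged sketch: initialize an online lookup for the retrieval-spec features and
+# fetch values for a small observation sample. `features` is the list parsed above.
+import pandas as pd
+from azure.identity import DefaultAzureCredential
+from azureml.featurestore import init_online_lookup, get_online_features
+
+init_online_lookup(features, DefaultAzureCredential())
+
+# Replace the accountID value with a key that exists in Redis (see the steps above).
+observation_sample = pd.DataFrame({"accountID": ["<EXISTING_ACCOUNT_ID>"]})
+
+looked_up = get_online_features(features, observation_sample)
+print(looked_up)
+```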
+
+These steps looked up features from the online store. In the next step, you'll test online features using an Azure Machine Learning managed online endpoint.
+
+## Test online features from Azure Machine Learning managed online endpoint
+
+A managed online endpoint deploys and scores models for online (real-time) inference. You can use any available inference technology, such as Kubernetes.
+
+This step involves these actions:
+
+1. Create an Azure Machine Learning managed online endpoint.
+1. Grant required role-based access control (RBAC) permissions.
+1. Deploy the model that you trained in tutorial 3 of this series. The scoring script used in this step has the code to look up online features.
+1. Score the model with sample data.
+
+### Create Azure Machine Learning managed online endpoint
+
+Visit [this resource](./how-to-deploy-online-endpoints.md?tabs=azure-cli) to learn more about managed online endpoints. With the managed feature store API, you can also look up online features from other inference platforms.
+
+This code cell defines the `fraud-model` managed online endpoint.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=define-endpoint)]
+
+This code cell creates the managed online endpoint defined in the previous code cell.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=create-endpoint)]
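+
+The following hedged sketch shows an equivalent endpoint definition with the `azure-ai-ml` SDK. The key-based auth mode is an assumption, and `ws_client` is the project workspace MLClient.
+
+```python
+# Hedged sketch: define and create the fraud-model managed online endpoint.
+from azure.ai.ml.entities import ManagedOnlineEndpoint
+
+endpoint = ManagedOnlineEndpoint(
+    name="fraud-model",
+    auth_mode="key",   # assumption: key-based authentication
+)
+
+ws_client.online_endpoints.begin_create_or_update(endpoint).result()
+```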
+
+### Grant required RBAC permissions
+
+Here, you grant required RBAC permissions to the managed online endpoint on the Redis instance and feature store. The scoring code in the model deployment needs these RBAC permissions to successfully look up features from the online store with the managed feature store API.
+
+#### Get managed identity of the managed online endpoint
+
+This code cell retrieves the managed identity of the managed online endpoint:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=get-endpoint-identity)]
+
+#### Grant the `Contributor` role to the online endpoint managed identity on the Azure Cache for Redis
+
+This code cell grants the `Contributor` role to the online endpoint managed identity on the Redis instance. This RBAC permission is needed so that the scoring code can access the Redis instance and look up feature data from the online store.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=endpoint-redis-rbac)]
+
+#### Grant `AzureML Data Scientist` role to the online endpoint managed identity on the feature store
+
+This code cell grants the `AzureML Data Scientist` role to the online endpoint managed identity on the feature store. This RBAC permission is required for successful deployment of the model to the online endpoint.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=endpoint-fs-rbac)]
+
+#### Deploy the model to the online endpoint
+
+Review the scoring script `project/fraud_model/online_inference/src/scoring.py`. The scoring script:
+
+1. Loads the feature metadata from the feature retrieval specification packaged with the model during model training. Tutorial 3 of this series covered this task. The specification has features from both the `transactions` and `accounts` feature sets.
+1. When an input inference request is received, looks up the online features using the index keys from the request. In this case, for both feature sets, the index column is `accountID`.
+1. Passes the features to the model to perform the inference, and returns the response. The response is a boolean value that represents the variable `is_fraud`.
+
+Next, execute this code cell to create a managed online deployment definition for model deployment.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=define-online-deployment)]
+
+Deploy the model to the online endpoint with this code cell. The deployment may need four to five minutes to complete.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=begin-online-deployment)]
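+
+As a point of reference, the following hedged sketch shows the general shape of such a deployment with the `azure-ai-ml` SDK. The model, environment, and instance type are placeholders, and the notebook cell above contains the authoritative definition.
+
+```python
+# Hedged sketch: managed online deployment that serves the fraud model with the
+# scoring script reviewed above. `ws_client` is the project workspace MLClient.
+from azure.ai.ml.entities import ManagedOnlineDeployment, CodeConfiguration
+
+deployment = ManagedOnlineDeployment(
+    name="fraud-model-deployment",
+    endpoint_name="fraud-model",
+    model="<REGISTERED_MODEL_NAME>:<VERSION>",              # placeholder
+    code_configuration=CodeConfiguration(
+        code="project/fraud_model/online_inference/src",    # folder containing scoring.py
+        scoring_script="scoring.py",
+    ),
+    environment="<ENVIRONMENT_NAME>:<VERSION>",             # placeholder
+    instance_type="Standard_DS3_v2",                        # illustrative SKU
+    instance_count=1,
+)
+
+ws_client.online_deployments.begin_create_or_update(deployment).result()
+```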
+
+### Test online deployment with mock data
+
+Execute this code cell to test the online deployment with the mock data. You should see `0` or `1` as the output of this cell.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=test-online-deployment)]
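+
+The following hedged sketch invokes the deployment with a mock payload. The payload shape is an assumption; the sample request shipped with the notebook is authoritative.
+
+```python
+# Hedged sketch: score the endpoint with mock data and print the 0/1 response.
+import json
+
+mock_request = {"data": [{"accountID": "<EXISTING_ACCOUNT_ID>", "transactionAmount": 1000.0}]}
+with open("mock_request.json", "w") as f:
+    json.dump(mock_request, f)
+
+response = ws_client.online_endpoints.invoke(
+    endpoint_name="fraud-model",
+    deployment_name="fraud-model-deployment",
+    request_file="mock_request.json",
+)
+print(response)   # expect 0 or 1
+```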
+
+## Next steps
+
+* [Network isolation with feature store (preview)](./tutorial-network-isolation-for-feature-store.md)
+* [Azure Machine Learning feature stores samples repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/featurestore_sample)
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-mlflow.md
With MLflow Tracking, you can connect Azure Machine Learning as the back end of
You can use MLflow Tracking to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning back-end support. - You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud via [Azure Machine Learning compute](../how-to-create-attach-compute-cluster.md). Learn more at [Train machine learning models with MLflow projects and Azure Machine Learning](../how-to-train-mlflow-projects.md).
managed-grafana How To Sync Teams With Aad Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-sync-teams-with-aad-groups.md
Last updated 9/11/2023
In this guide, you learn how to use Azure Active Directory (Azure AD) groups with [Grafana Team Sync](https://grafana.com/docs/grafana/latest/setup-grafana/configure-security/configure-team-sync/) (Azure AD group sync) to set dashboard permissions in Azure Managed Grafana. Grafana allows you to control access to its resources at multiple levels. In Managed Grafana, you use the built-in Azure RBAC roles for Grafana to define access rights users have. These permissions are applied to all resources in your Grafana workspace by default. You can't, for example, grant someone edit permission to only one particular dashboard with RBAC. If you assign a user to the Grafana Editor role, that user can make changes to any dashboard in your Grafana workspace. Using Grafana's [granular permission model](https://grafana.com/docs/grafana/latest/setup-grafana/configure-security/configure-team-sync/), you can elevate or demote a user's default permission level for specific dashboards (or dashboard folders).
-Setting up dashboard permissions for individual users in Managed Grafana is a little tricky. Managed Grafana stores the user assignments for its built-in RBAC roles in Azure AD. For performance reasons, it doesn't automatically synchronizes the user assignments to Grafana workspaces. Users in these roles don't show up in Grafana's **Configuration** UI until they've signed in once. You can only grant users extra permissions after they appear in the Grafana user list in **Configuration**. Azure AD group sync gets around this issue. With this feature, you create a *Grafana team* in your Grafana workspace linked with an Azure AD group. You then use that team in configuring your dashboard permissions. For example, you can grant a viewer the ability to modify a dashboard or block an editor from being able to make changes. You don't need to manage the team's member list separately since its membership is already defined in the associated Azure AD group.
+Setting up dashboard permissions for individual users in Managed Grafana is a little tricky. Managed Grafana stores the user assignments for its built-in RBAC roles in Azure AD. For performance reasons, it doesn't automatically synchronize the user assignments to Grafana workspaces. Users in these roles don't show up in Grafana's **Configuration** UI until they've signed in once. You can only grant users extra permissions after they appear in the Grafana user list in **Configuration**. Azure AD group sync gets around this issue. With this feature, you create a *Grafana team* in your Grafana workspace linked with an Azure AD group. You then use that team in configuring your dashboard permissions. For example, you can grant a viewer the ability to modify a dashboard or block an editor from being able to make changes. You don't need to manage the team's member list separately since its membership is already defined in the associated Azure AD group.
> [!IMPORTANT] > Azure AD group sync is currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
managed-instance-apache-cassandra Compare Cosmosdb Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/compare-cosmosdb-managed-instance.md
- Title: Differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra
-description: Learn about the differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra. You also learn the benefits of each of these services and when to choose them.
---- Previously updated : 12/10/2021---
-# Differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra
-
-In this article, you will learn the differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra. This article provides recommendations on how to choose between the two services, or when to host your own Apache Cassandra environment.
-
-## Key differences
-
-Azure Managed Instance for Apache Cassandra provides automated deployment, scaling, and operations to maintain the node health for open-source Apache Cassandra instances in Azure. It also provides the capability to scale out the capacity of existing on-premises or cloud self-hosted Apache Cassandra clusters. It scales out by adding managed Cassandra datacenters to the existing cluster ring.
-
-The [Azure Cosmos DB for Apache Cassandra](../cosmos-db/cassandra-introduction.md) in Azure Cosmos DB is a compatibility layer over Microsoft's globally distributed cloud-native database service [Azure Cosmos DB](../cosmos-db/index.yml). The combination of these services in Azure provides a continuum of choices for users of Apache Cassandra in complex hybrid cloud environments.
-
-## How to choose?
-
-The following table shows the common scenarios, workload requirements, and aspirations where each of this deployment approaches fit:
-
-| |Self-hosted Apache Cassandra on-premises or in Azure | Azure Managed Instance for Apache Cassandra | Azure Cosmos DB for Apache Cassandra |
-|||||
-|**Deployment type**| You have a highly customized Apache Cassandra deployment with custom patches or snitches. | You have a standard open-source Apache Cassandra deployment without any custom code. | You are content with a platform that is not Apache Cassandra underneath but is compliant with all open-source client drivers at a [wire protocol](../cosmos-db/cassandra-support.md) level. |
-| **Operational overhead**| You have existing Cassandra experts who can deploy, configure, and maintain your clusters. | You want to lower the operational overhead for your Apache Cassandra node health, but still maintain control over the platform level configurations such as replication and consistency. | You want to eliminate the operational overhead by using a fully managed Platform-as-as-service database in the cloud. |
-| **Operating system requirements**| You have a requirement to maintain custom or golden Virtual Machine operating system images. | You can use vanilla images but want to have control over SKUs, memory, disks, and IOPS. | You want capacity provisioning to be simplified and expressed as a single normalized metric, with a one-to-one relationship to throughput, such as [request units](../cosmos-db/request-units.md) in Azure Cosmos DB. |
-| **Pricing model**| You want to use management software such as Datastax tooling and are happy with licensing costs. | You prefer pure open-source licensing and VM instance-based pricing. | You want to use cloud-native pricing, which includes [autoscale](../cosmos-db/manage-scale-cassandra.md#use-autoscale) and [serverless](../cosmos-db/serverless.md) offers. |
-| **Analytics**| You want full control over the provisioning of analytical pipelines regardless of the overhead to build and maintain them. | You want to use cloud-based analytical services like Azure Databricks. | You want near real-time hybrid transactional analytics built into the platform with [Azure Synapse Link for Azure Cosmos DB](../cosmos-db/synapse-link.md). |
-| **Workload pattern**| Your workload is fairly steady-state and you don't require scaling nodes in the cluster frequently. | Your workload is volatile and you need to be able to scale up or scale down nodes in a data center or add/remove data centers easily. | Your workload is often volatile and you need to be able to scale up or scale down quickly and at a significant volume. |
-| **SLAs**| You are happy with your processes for maintaining SLAs on consistency, throughput, availability, and disaster recovery. | You are happy with your processes for maintaining SLAs on consistency and throughput, but want an [SLA for availability](https://azure.microsoft.com/support/legal/sl#backup-and-restore). | You want [fully comprehensive SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_4/) on consistency, throughput, availability, and disaster recovery. |
-| **Replication and consistency**| You need to be able to configure the full array of [tunable consistency settings](https://cassandra.apache.org/doc/latest/cassandr)) |
-| **Data model**| You are migrating workloads which have a mixture of uniform distribution of data, and skewed data (with respect to both storage and throughput across partition keys) requiring flexibility on vertical scale of nodes. | You are migrating workloads which have a mixture of uniform distribution of data, and skewed data (with respect to both storage and throughput across partition keys) requiring flexibility on vertical scale of nodes. | You are building a new application, or your existing application has a relatively uniform distribution of data with respect to both storage and throughput across partition keys. |
-
-## Next steps
-
-Get started with one of our quickstarts:
-
-* [Create a managed instance cluster from the Azure portal](create-cluster-portal.md)
managed-instance-apache-cassandra Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/faq.md
It can be used either entirely in the cloud or as a part of a hybrid cloud and o
### Why should I use this service instead of Azure Cosmos DB for Apache Cassandra?
-Azure Managed Instance for Apache Cassandra is delivered by the Azure Cosmos DB team. It's a standalone managed service for deploying, maintaining, and scaling open-source Apache Cassandra data-centers and clusters. [Azure Cosmos DB for Apache Cassandra](../cosmos-db/cassandra-introduction.md) on the other hand is a Platform-as-a-Service, providing an interoperability layer for the Apache Cassandra wire protocol. If your expectation is for the platform to behave in exactly the same way as any Apache Cassandra cluster, you should choose the managed instance service. To learn more, see [Differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra](compare-cosmosdb-managed-instance.md).
+Azure Managed Instance for Apache Cassandra is delivered by the Azure Cosmos DB team. It's a standalone managed service for deploying, maintaining, and scaling open-source Apache Cassandra data-centers and clusters. [Azure Cosmos DB for Apache Cassandra](../cosmos-db/cassandra-introduction.md) on the other hand is a Platform-as-a-Service, providing an interoperability layer for the Apache Cassandra wire protocol. If your expectation is for the platform to behave in exactly the same way as any Apache Cassandra cluster, you should choose the managed instance service. To learn more, see [Differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra](../cosmos-db/cassandr).
### Is Azure Managed Instance for Apache Cassandra dependent on Azure Cosmos DB?
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 09/06/2023 Last updated : 09/13/2023 # Azure Policy built-in definitions for Azure Database for MariaDB
migrate Migrate Servers To Azure Using Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-servers-to-azure-using-private-link.md
This article shows how to migrate on-premises VMware VMs to Azure, using the [Mi
The Migration and modernization tool runs a lightweight VMware VM appliance to enable the discovery, assessment, and agentless migration of VMware VMs. If you have followed the [Discovery and assessment tutorial](discover-and-assess-using-private-endpoints.md), you've already set the appliance up. If you didn't, [set up and configure the appliance](./discover-and-assess-using-private-endpoints.md#set-up-the-azure-migrate-appliance) before you proceed.
+To use a private connection for replication, you can use the storage account created earlier during Azure Migrate project setup, or create a new cache storage account and configure a private endpoint for it. To create a new storage account with a private endpoint, see [Private endpoint for storage account](../private-link/tutorial-private-endpoint-storage-portal.md#create-storage-account-with-a-private-endpoint).
+
+ - The private endpoint allows the Azure Migrate appliance to connect to the cache storage account using a private connection like an ExpressRoute private peering or VPN. Data can then be transferred directly on the private IP address.
+
+> [!Important]
+> - In addition to replication data, the Azure Migrate appliance communicates with the Azure Migrate service for its control plane activities. These activities include orchestrating replication. Control plane communication between the Azure Migrate appliance and the Azure Migrate service continues to happen over the internet on the Azure Migrate service's public endpoint.
+> - The private endpoint of the storage account should be accessible from the network where the Azure Migrate appliance is deployed.
+> - DNS must be configured to resolve DNS queries by the Azure Migrate appliance for the cache storage account's blob service endpoint to the private IP address of the private endpoint attached to the cache storage account.
+> - The cache storage account must be accessible on its public endpoint. Azure Migrate uses the cache storage account's public endpoint to move data from the storage account to replica-managed disks.
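+
+For reference, the following hedged sketch shows how a private endpoint and the matching private DNS configuration for the cache storage account's blob service might be created with the Azure CLI. All names are placeholders, and the linked tutorial above is the authoritative walkthrough.
+
+```python
+# Hedged sketch: private endpoint for the blob sub-resource plus the privatelink
+# DNS zone, so the appliance resolves the blob endpoint to a private IP.
+import subprocess
+
+def az(*args):
+    subprocess.run(["az", *args], check=True)
+
+rg = "<RESOURCE_GROUP>"
+storage_id = "<CACHE_STORAGE_ACCOUNT_ARM_ID>"
+
+az("network", "private-endpoint", "create",
+   "--name", "pe-migrate-cache", "--resource-group", rg,
+   "--vnet-name", "<VNET_NAME>", "--subnet", "<SUBNET_NAME>",
+   "--private-connection-resource-id", storage_id,
+   "--group-id", "blob",
+   "--connection-name", "pe-migrate-cache-conn")
+
+az("network", "private-dns", "zone", "create",
+   "--resource-group", rg, "--name", "privatelink.blob.core.windows.net")
+az("network", "private-dns", "link", "vnet", "create",
+   "--resource-group", rg, "--zone-name", "privatelink.blob.core.windows.net",
+   "--name", "cache-dns-link", "--virtual-network", "<VNET_NAME>",
+   "--registration-enabled", "false")
+az("network", "private-endpoint", "dns-zone-group", "create",
+   "--resource-group", rg, "--endpoint-name", "pe-migrate-cache",
+   "--name", "default", "--zone-name", "blob",
+   "--private-dns-zone", "privatelink.blob.core.windows.net")
+```
+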
+ ## Replicate VMs After setting up the appliance and completing discovery, you can begin replicating VMware VMs to Azure.
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
migrate Prepare For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-migration.md
ms. Previously updated : 12/12/2022 Last updated : 09/13/2023
The table summarizes discovery, assessment, and migration limits for Azure Migrate.
| | |
**VMware vSphere VMs** | Discover and assess up to 35,000 VMs in a single Azure Migrate project. | Discover up to 10,000 VMware vSphere VMs with a single [Azure Migrate appliance](common-questions-appliance.md) for VMware vSphere. <br> The appliance supports adding multiple vCenter Servers. You can add up to 10 vCenter Servers per appliance. | **Agentless migration**: you can simultaneously replicate a maximum of 500 VMs across multiple vCenter Servers (discovered from one appliance) using a scale-out appliance.<br> **Agent-based migration**: you can [scale out](./agent-based-migration-architecture.md#performance-and-scaling) the [replication appliance](migrate-replication-appliance.md) to replicate large numbers of VMs.<br/><br/> In the portal, you can select up to 10 machines at once for replication. To replicate more machines, add in batches of 10.
**Hyper-V VMs** | Discover and assess up to 35,000 VMs in a single Azure Migrate project. | Discover up to 5,000 Hyper-V VMs with a single Azure Migrate appliance. | An appliance isn't used for Hyper-V migration. Instead, the Hyper-V Replication Provider runs on each Hyper-V host.<br/><br/> Replication capacity is influenced by performance factors such as VM churn, and upload bandwidth for replication data.<br/><br/> In the portal, you can select up to 10 machines at once for replication. To replicate more machines, add in batches of 10.
-**Physical machines** | Discover and assess up to 35,000 machines in a single Azure Migrate project. | Discover up to 250 physical servers with a single Azure Migrate appliance for physical servers. | You can [scale out](./agent-based-migration-architecture.md#performance-and-scaling) the [replication appliance](migrate-replication-appliance.md) to replicate large numbers of servers.<br/><br/> In the portal, you can select up to 10 machines at once for replication. To replicate more machines, add in batches of 10.
+**Physical machines** | Discover and assess up to 35,000 machines in a single Azure Migrate project. | Discover up to 1000 physical servers with a single Azure Migrate appliance for physical servers. | You can [scale out](./agent-based-migration-architecture.md#performance-and-scaling) the [replication appliance](migrate-replication-appliance.md) to replicate large numbers of servers.<br/><br/> In the portal, you can select up to 10 machines at once for replication. To replicate more machines, add in batches of 10.
## Select a VMware vSphere migration method
mysql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-maintenance.md
You can define system-managed schedule or custom schedule for each flexible serv
* With custom schedule, you can specify your maintenance window for the server by choosing the day of the week and a one-hour time window. * With system-managed schedule, the system will pick any one-hour window between 11pm and 7am in your server's region time.
-As part of rolling out changes, we apply the updates to the servers configured with system-managed schedule first followed by servers with custom schedule after a minimum gap of 7-days within a given region. If you intend to receive early updates on fleet of development and test environment servers, we recommend you configure system-managed schedule for servers used in development and test environment. This will allow you to receive the latest update first in your Dev/Test environment for testing and evaluation for validation. If you encounter any behavior or breaking changes, you will have time to address them before the same update is rolled out to production servers with custom-managed schedule. The update starts to roll out on custom-schedule flexible servers after 7 days and is applied to your server at the defined maintenance window. At this time, there is no option to defer the update after the notification has been sent. Custom-schedule is recommended for production environments only.
+> [!IMPORTANT]
+> Previously, a 7-day deployment gap between system-managed and custom-managed schedules was maintained. Due to evolving maintenance demands and the introduction of the [maintenance reschedule feature (preview)](#maintenance-reschedule-preview), we can no longer guarantee this 7-day gap.
In rare cases, a maintenance event can be canceled by the system or may fail to complete successfully. If the update fails, it is reverted and the previous version of the binaries is restored. In such failed update scenarios, you may still experience a restart of the server during the maintenance window. If the update is canceled or fails, the system creates a notification about the canceled or failed maintenance event. The next attempt to perform maintenance will be scheduled according to your current scheduling settings, and you will receive a notification about it five days in advance.
+## Maintenance reschedule (preview)
+
+> [!IMPORTANT]
+> The maintenance reschedule feature is currently in preview. It is subject to limitations and ongoing development. We value your feedback to help enhance this feature. Please note that this feature is not available for servers using the burstable SKU.
+
+The **maintenance reschedule** feature grants you greater control over the timing of maintenance activities on your Azure Database for MySQL flexible server. After receiving a maintenance notification, you can reschedule it to a more convenient time, regardless of whether the schedule is system-managed or custom.
+
+### Reschedule parameters and notifications
+
+Rescheduling isn't confined to fixed time slots; it depends on the earliest and latest permissible times in the current maintenance cycle. Upon rescheduling, a notification will be sent out to confirm the changes, following the standard notification policies.
+
+### Considerations and limitations
+
+Be aware of the following when using this feature:
+
+- **Demand Constraints:** Your rescheduled maintenance might be canceled due to a high number of maintenance activities occurring simultaneously in the same region.
+- **Lock-in Period:** Rescheduling is unavailable 15 minutes prior to the initially scheduled maintenance time to maintain the reliability of the service.
+
+> [!NOTE]
+> We recommend monitoring notifications closely during the preview stage to accommodate potential adjustments.
+
+Use this feature to avoid disruptions during critical database operations. We encourage your feedback as we continue to develop this functionality.
++ ## Next steps * Learn how to [change the maintenance schedule](how-to-maintenance-portal.md)
mysql Concepts Service Tiers Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md
To get more details about the compute series available, refer to Azure VM docume
>[!NOTE]
->For [Burstable (B-series) compute tier](../../virtual-machines/sizes-b-series-burstable.md) if the VM is started/stopped or restarted, the credits may be lost. For more information, see [Burstable (B-Series) FAQ](../../virtual-machines/sizes-b-series-burstable.md#q-why-is-my-remaining-credit-set-to-0-after-a-redeploy-or-a-stopstart).
+>For [Burstable (B-series) compute tier](../../virtual-machines/sizes-b-series-burstable.md) if the VM is started/stopped or restarted, the credits may be lost. For more information, see [Burstable (B-Series) FAQ](../../virtual-machines/sizes-b-series-burstable.md).
## Storage
mysql Migrate Single Flexible In Place Auto Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-in-place-auto-migration.md
The in-place migration provides a highly resilient and self-healing offline migr
> [!NOTE] > In-place migration is only for Single Server database workloads with Basic or GP SKU, data storage used < 10 GiB, and no complex features (CMK, AAD, Read Replica, Private Link) enabled. All other Single Server workloads should use the user-initiated migration tooling offered by Azure (Azure DMS or Azure MySQL Import) to migrate.
+## What's new?
+* If you own a Single Server workload with Basic or GP SKU, data storage used < 10 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled, you can now nominate yourself (if not already scheduled by the service) for auto-migration by submitting your server details through this [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4lhLelkCklCuumNujnaQ-ZUQzRKSVBBV0VXTFRMSDFKSUtLUDlaNTA5Wi4u). (Sept 2023)
+ ## Configure migration alerts and review migration schedule Servers eligible for in-place automigration are sent an advance notification by the service.
The following are the ways to review your migration schedule once you have
> [!NOTE] > The migration schedule will be locked 7 days prior to the scheduled migration window after which you'll be unable to reschedule.
-* The S**ingle Server overview page** for your instance displays a portal banner with information about your migration schedule.
+* The **Single Server overview page** for your instance displays a portal banner with information about your migration schedule.
* For Single Servers scheduled for automigration, a new **Migration blade** appears in the portal. You can review the migration schedule by navigating to the Migration blade of your Single Server instance. * If you wish to defer the migration, you can defer by a month at a time by navigating to the Migration blade of your single server instance on the Azure portal and rescheduling the migration by selecting another migration window within a month. * If your Single Server has the **General Purpose SKU**, you also have the option to enable **High Availability** when reviewing the migration schedule. Because High Availability can only be enabled at create time for a MySQL Flexible Server, it's highly recommended that you enable this feature when reviewing the migration schedule.
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 09/06/2023 Last updated : 09/13/2023 # Azure Policy built-in definitions for Azure Database for MySQL
network-watcher Connection Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-troubleshoot-overview.md
+
+ Title: Connection troubleshoot overview
+
+description: Learn about Azure Network Watcher connection troubleshoot tool, the issues it can detect, and the responses it gives.
++++ Last updated : 09/13/2023
+#CustomerIntent: As an Azure administrator, I want to learn what connectivity problems I can use Connection Troubleshoot to diagnose so I can resolve those problems.
++
+# Connection troubleshoot overview
+
+With the increase of sophisticated and high-performance workloads in Azure, there's a critical need for increased visibility and control over the operational state of complex networks running these workloads. Such complex networks are implemented using network security groups, firewalls, user-defined routes, and resources provided by Azure. Complex configurations make troubleshooting connectivity issues challenging.
+
+The connection troubleshoot feature of Azure Network Watcher helps reduce the amount of time to diagnose and troubleshoot network connectivity issues. The results returned can provide insights about the root cause of the connectivity problem and whether it's due to a platform or user configuration issue.
+
+Connection troubleshoot reduces the Mean Time To Resolution (MTTR) by providing a comprehensive method of performing all major connection checks to detect issues pertaining to network security groups, user-defined routes, and blocked ports. It provides the following results, with actionable insights and step-by-step guides or corresponding documentation for faster resolution:
+
+- Connectivity test with different destination types (VM, URI, FQDN, or IP Address)
+- Configuration issues that impact reachability
+- All possible hop by hop paths from the source to destination
+- Hop by hop latency
+- Latency (minimum, maximum, and average between source and destination)
+- Graphical topology view from source to destination
+- Number of probes failed during the connection troubleshoot check
+
+## Supported source and destination types
+
+Connection troubleshoot provides the capability to check TCP or ICMP connections from any of these Azure resources:
+
+- Virtual machines
+- Virtual machine scale sets
+- Azure Bastion instances
+- Application gateways (except v1)
+
+> [!IMPORTANT]
+> Connection troubleshoot requires that the virtual machine you troubleshoot from has the `AzureNetworkWatcherExtension` extension installed. The extension is not required on the destination virtual machine.
+> - To install the extension on a Windows VM, see [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+> - To install the extension on a Linux VM, see [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+
+Connection troubleshoot can test connections to any of these destinations:
+
+- Virtual machines
+- Fully qualified domain names (FQDNs)
+- Uniform resource identifiers (URIs)
+- IP addresses
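
You can also trigger a connectivity check programmatically instead of through the portal. The sketch below is illustrative only: it calls the Network Watcher `connectivityCheck` REST operation with placeholder resource IDs, the `api-version` value is an assumption you should verify against the current Network Watcher REST reference, and it requires the `azure-identity` and `requests` packages.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholders - substitute your own subscription, resource group, Network Watcher
# instance, and source virtual machine resource ID.
subscription_id = "<subscription-id>"
resource_group = "NetworkWatcherRG"
network_watcher = "NetworkWatcher_eastus"
source_vm_id = (f"/subscriptions/{subscription_id}/resourceGroups/<vm-resource-group>"
                "/providers/Microsoft.Compute/virtualMachines/VM1")

# Acquire an ARM token (works with Azure CLI sign-in, managed identity, and so on).
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# The api-version here is an assumption; check the Network Watcher REST reference.
url = (f"https://management.azure.com/subscriptions/{subscription_id}"
       f"/resourceGroups/{resource_group}/providers/Microsoft.Network"
       f"/networkWatchers/{network_watcher}/connectivityCheck?api-version=2023-05-01")

body = {
    "source": {"resourceId": source_vm_id},
    "destination": {"address": "www.bing.com", "port": 443},
    "protocol": "Tcp",
}

# The check is a long-running operation; a 202 response includes a polling URL in its headers.
response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(response.status_code, response.text)
```
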
+
+## Issues detected by connection troubleshoot
+
+Connection troubleshoot can detect the following types of issues that can impact connectivity:
+
+- High VM CPU utilization
+- High VM memory utilization
+- Virtual machine (guest) firewall rules blocking traffic
+- DNS resolution failures
+- Misconfigured or missing routes
+- Network security group (NSG) rules that are blocking traffic
+- Inability to open a socket at the specified source port
+- Missing address resolution protocol entries for Azure ExpressRoute circuits
+- Servers not listening on designated destination ports
+
+## Response
+
+The following table shows the properties returned after running connection troubleshoot.
+
+| Property | Description |
+| -- | -- |
+| ConnectionStatus | The status of the connectivity check. Possible results are **Reachable** and **Unreachable**. |
+| AvgLatencyInMs | Average latency during the connectivity check, in milliseconds. (Only shown if check status is reachable). |
+| MinLatencyInMs | Minimum latency during the connectivity check, in milliseconds. (Only shown if check status is reachable). |
+| MaxLatencyInMs | Maximum latency during the connectivity check, in milliseconds. (Only shown if check status is reachable). |
+| ProbesSent | Number of probes sent during the check. Maximum value is 100. |
+| ProbesFailed | Number of probes that failed during the check. Maximum value is 100. |
+| Hops | Hop by hop path from source to destination. |
+| Hops[].Type | Type of resource. Possible values are: **Source**, **VirtualAppliance**, **VnetLocal**, and **Internet**. |
+| Hops[].Id | Unique identifier of the hop. |
+| Hops[].Address | IP address of the hop. |
+| Hops[].ResourceId | Resource ID of the hop if the hop is an Azure resource. If it's an internet resource, ResourceID is **Internet**. |
+| Hops[].NextHopIds | The unique identifier of the next hop taken. |
+| Hops[].Issues | A collection of issues that were encountered during the check of the hop. If there were no issues, the value is blank. |
+| Hops[].Issues[].Origin | Where the issue occurred at the current hop. Possible values are: <br>**Inbound** - Issue is on the link from the previous hop to the current hop. <br>**Outbound** - Issue is on the link from the current hop to the next hop. <br>**Local** - Issue is on the current hop. |
+| Hops[].Issues[].Severity | The severity of the detected issue. Possible values are: **Error** and **Warning**. |
+| Hops[].Issues[].Type | The type of the detected issue. Possible values are: <br>**CPU** <br>**Memory** <br>**GuestFirewall** <br>**DnsResolution** <br>**NetworkSecurityRule** <br>**UserDefinedRoute** |
+| Hops[].Issues[].Context | Details regarding the detected issue. |
+| Hops[].Issues[].Context[].key | Key of the key value pair returned. |
+| Hops[].Issues[].Context[].value | Value of the key value pair returned. |
+| NextHopAnalysis.NextHopType | The type of next hop. Possible values are: <br>**HyperNetGateway** <br>**Internet** <br>**None** <br>**VirtualAppliance** <br>**VirtualNetworkGateway** <br>**VnetLocal** |
+| NextHopAnalysis.NextHopIpAddress | IP address of next hop. |
+| NextHopAnalysis.RouteTableId | The resource identifier of the route table associated with the route being returned. If the returned route doesn't correspond to any user-created routes, this field is the string **System Route**. |
+| SourceSecurityRuleAnalysis.Results[].Profile | Network configuration diagnostic profile. |
+| SourceSecurityRuleAnalysis.Results[].Profile.Source | Traffic source. Possible values are: *, **IP Address/CIDR**, and **Service Tag**. |
+| SourceSecurityRuleAnalysis.Results[].Profile.Destination | Traffic destination. Possible values are: *, **IP Address/CIDR**, and **Service Tag**. |
+| SourceSecurityRuleAnalysis.Results[].Profile.DestinationPort | Traffic destination port. Possible values are: * and a single port in the (0 - 65535) range. |
+| SourceSecurityRuleAnalysis.Results[].Profile.Protocol | Protocol to be verified. Possible values are: *, **TCP** and **UDP**. |
+| SourceSecurityRuleAnalysis.Results[].Profile.Direction | The direction of the traffic. Possible values are: **Outbound** and **Inbound**. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult | Network security group result. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.EvaluatedSecurityGroups[] | List of results network security groups diagnostic. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.SecurityRuleAccessResult | The network traffic is allowed or denied. Possible values are: **Allow** and **Deny**. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.EvaluatedSecurityGroups[].AppliedTo | Resource ID of the NIC or subnet to which network security group is applied. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.EvaluatedSecurityGroups[].MatchedRule | Matched network security rule. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.EvaluatedSecurityGroups[].MatchedRule.Action | The network traffic is allowed or denied. Possible values are: **Allow** and **Deny**. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.EvaluatedSecurityGroups[].MatchedRule.RuleName | Name of the matched network security rule. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.EvaluatedSecurityGroups[].NetworkSecurityGroupId | Network security group ID. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.RulesEvaluationResult[] | List of network security rules evaluation results. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.RulesEvaluationResult[].DestinationMatched | Value indicates if destination is matched. Boolean values. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.RulesEvaluationResult[].DestinationPortMatched | Value indicates if destination port is matched. Boolean values. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.RulesEvaluationResult[].Name | Name of the network security rule. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.RulesEvaluationResult[].ProtocolMatched | Value indicates if protocol is matched. Boolean values. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.RulesEvaluationResult[].SourceMatched | Value indicates if source is matched. Boolean values. |
+| SourceSecurityRuleAnalysis.Results[].NetworkSecurityGroupResult.RulesEvaluationResult[].SourcePortMatched | Value indicates if source port is matched. Boolean values. |
+| DestinationSecurityRuleAnalysis | Same as SourceSecurityRuleAnalysis format. |
+| SourcePortStatus | Determines whether the port at source is reachable or not. Possible Values are: <br>**Unknown** <br>**Reachable** <br>**Unstable** <br>**NoConnection** <br>**Timeout** |
+| DestinationPortStatus | Determines whether the port at destination is reachable or not. Possible Values are: <br>**Unknown** <br>**Reachable** <br>**Unstable** <br>**NoConnection** <br>**Timeout** |
+
+The following example shows an issue found on a hop.
+
+```json
+"Issues": [
+ {
+ "Origin": "Outbound",
+ "Severity": "Error",
+ "Type": "NetworkSecurityRule",
+ "Context": [
+ {
+ "key": "RuleName",
+ "value": "UserRule_Port80"
+ }
+ ]
+ }
+]
+```
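
If you save the full troubleshoot output as JSON (for example, from the REST API or PowerShell), a short script can summarize the properties described in the preceding table. This is a minimal sketch; the file name is hypothetical, and key casing follows the table above, so adjust it if your output uses camelCase.

```python
import json

# Hypothetical file containing saved connection troubleshoot output.
with open("connectivity-result.json") as f:
    result = json.load(f)

print(f"Connection status: {result.get('ConnectionStatus')}")
print(f"Latency (ms): min={result.get('MinLatencyInMs')} "
      f"avg={result.get('AvgLatencyInMs')} max={result.get('MaxLatencyInMs')}")
print(f"Probes failed: {result.get('ProbesFailed')} of {result.get('ProbesSent')}")

# Walk the Hops[].Issues[] structure and print any detected issues.
for hop in result.get("Hops", []):
    for issue in hop.get("Issues", []):
        context = ", ".join(f"{c.get('key')}={c.get('value')}" for c in issue.get("Context", []))
        print(f"Hop {hop.get('Address')}: {issue.get('Severity')} {issue.get('Type')} "
              f"({issue.get('Origin')}) {context}")
```
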
+
+## Fault types
+
+Connection troubleshoot returns fault types about the connection. The following table provides a list of the possible returned fault types.
+
+| Type | Description |
+| - | -- |
+| CPU | High CPU utilization. |
+| Memory | High Memory utilization. |
+| GuestFirewall | Traffic is blocked due to a virtual machine firewall configuration. <br><br> A TCP ping is a unique use case in which, if there's no allowed rule, the firewall itself responds to the client's TCP ping request even though the TCP ping doesn't reach the target IP address/FQDN. This event isn't logged. If there's a network rule that allows access to the target IP address/FQDN, the ping request reaches the target server and its response is relayed back to the client. This event is logged in the network rules log. |
+| DNSResolution | DNS resolution failed for the destination address. |
+| NetworkSecurityRule | Traffic is blocked by a network security group rule (security rule is returned). |
+| UserDefinedRoute | Traffic is dropped due to a user defined or system route. |
+
+### Next step
+
+To learn how to use connection troubleshoot to test and troubleshoot connections, continue to:
+> [!div class="nextstepaction"]
+> [Troubleshoot connections using the Azure portal](network-watcher-connectivity-portal.md)
network-watcher Network Watcher Connectivity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-overview.md
- Title: Connection troubleshoot overview-
-description: Learn about Azure Network Watcher connection troubleshoot capability.
----- Previously updated : 03/22/2023----
-# Connection troubleshoot overview
-
-With the increase of sophisticated and high-performance workloads in Azure, there's a critical need for increased visibility and control over the operational state of complex networks running these workloads. Such complex networks are implemented using network security groups, firewalls, user-defined routes, and resources provided by Azure. Complex configurations make troubleshooting connectivity issues challenging.
-
-The connection troubleshoot feature of Azure Network Watcher helps reduce the amount of time to diagnose and troubleshoot network connectivity issues. The results returned can provide insights about the root cause of the connectivity problem and whether it's due to a platform or user configuration issue.
-
-Connection troubleshoot reduces the Mean Time To Resolution (MTTR) by providing a comprehensive method of performing all connection major checks to detect issues pertaining to network security groups, user-defined routes, and blocked ports. It provides the following results with actionable insights where a step-by-step guide or corresponding documentation is provided for faster resolution:
--- Connectivity test with different destination types (VM, URI, FQDN, or IP Address)-- Configuration issues that impact reachability-- All possible hop by hop paths from the source to destination-- Hop by hop latency-- Latency (minimum, maximum, and average between source and destination)-- Graphical topology view from source to destination-- Number of probes failed during the connection troubleshoot check-
-## Supported source and destination types
-
-Connection troubleshoot provides the capability to check TCP or ICMP connections from any of these Azure resources:
--- Virtual machines-- Virtual machine scale sets-- Azure Bastion instances-- Application gateways (except v1)-
-> [!IMPORTANT]
-> Connection troubleshoot requires that the virtual machine you troubleshoot from has the `AzureNetworkWatcherExtension` extension installed. The extension is not required on the destination virtual machine.
-> - To install the extension on a Windows VM, see [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
-> - To install the extension on a Linux VM, see [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
-
-Connection troubleshoot can test connections to any of these destinations:
--- Virtual machines-- Fully qualified domain names (FQDNs)-- Uniform resource identifiers (URIs)-- IP addresses-
-## Issues detected by connection troubleshoot
-
-Connection troubleshoot can detect the following types of issues that can impact connectivity:
--- High VM CPU utilization-- High VM memory utilization-- Virtual machine (guest) firewall rules blocking traffic-- DNS resolution failures-- Misconfigured or missing routes-- Network security group (NSG) rules that are blocking traffic-- Inability to open a socket at the specified source port-- Missing address resolution protocol entries for Azure ExpressRoute circuits-- Servers not listening on designated destination ports-
-## Response
-
-The following table shows the properties returned after running connection troubleshoot.
-
-|**Property** |**Description** |
-|||
-|ConnectionStatus | The status of the connectivity check. Possible results are **Reachable** and **Unreachable**. |
-|AvgLatencyInMs | Average latency during the connectivity check, in milliseconds. (Only shown if check status is reachable) |
-|MinLatencyInMs | Minimum latency during the connectivity check, in milliseconds. (Only shown if check status is reachable) |
-|MaxLatencyInMs | Maximum latency during the connectivity check, in milliseconds. (Only shown if check status is reachable) |
-|ProbesSent | Number of probes sent during the check. Max value is 100. |
-|ProbesFailed | Number of probes that failed during the check. Max value is 100. |
-|Hops | Hop by hop path from source to destination. |
-|Hops[].Type | Type of resource. Possible values are **Source**, **VirtualAppliance**, **VnetLocal**, and **Internet**. |
-|Hops[].Id | Unique identifier of the hop.|
-|Hops[].Address | IP address of the hop.|
-|Hops[].ResourceId | ResourceID of the hop if the hop is an Azure resource. If it's an internet resource, ResourceID is **Internet**. |
-|Hops[].NextHopIds | The unique identifier of the next hop taken.|
-|Hops[].Issues | A collection of issues that were encountered during the check at that hop. If there were no issues, the value is blank.|
-|Hops[].Issues[].Origin | At the current hop, where issue occurred. Possible values are:<br/> **Inbound** - Issue is on the link from the previous hop to the current hop<br/>**Outbound** - Issue is on the link from the current hop to the next hop<br/>**Local** - Issue is on the current hop.|
-|Hops[].Issues[].Severity | The severity of the issue detected. Possible values are **Error** and **Warning**. |
-|Hops[].Issues[].Type |The type of issue found. Possible values are: <br/>**CPU**<br/>**Memory**<br/>**GuestFirewall**<br/>**DnsResolution**<br/>**NetworkSecurityRule**<br/>**UserDefinedRoute** |
-|Hops[].Issues[].Context |Details regarding the issue found.|
-|Hops[].Issues[].Context[].key |Key of the key value pair returned.|
-|Hops[].Issues[].Context[].value |Value of the key value pair returned.|
-
-The following is an example of an issue found on a hop.
-
-```json
-"Issues": [
- {
- "Origin": "Outbound",
- "Severity": "Error",
- "Type": "NetworkSecurityRule",
- "Context": [
- {
- "key": "RuleName",
- "value": "UserRule_Port80"
- }
- ]
- }
-]
-```
-## Fault types
-
-Connection troubleshoot returns fault types about the connection. The following table provides a list of the current fault types returned.
-
-|**Type** |**Description** |
-|||
-|CPU | High CPU utilization. |
-|Memory | High Memory utilization. |
-|GuestFirewall | Traffic is blocked due to a virtual machine firewall configuration. <br><br> A TCP ping is a unique use case in which, if there's no allowed rule, the firewall itself responds to the client's TCP ping request even though the TCP ping doesn't reach the target IP address/FQDN. This event isn't logged. If there's a network rule that allows access to the target IP address/FQDN, the ping request reaches the target server and its response is relayed back to the client. This event is logged in the Network rules log. |
-|DNSResolution | DNS resolution failed for the destination address. |
-|NetworkSecurityRule | Traffic is blocked by a network security group rule (security rule is returned) |
-|UserDefinedRoute|Traffic is dropped due to a user defined or system route. |
-
-### Next steps
--- To learn how to use connection troubleshoot to test and troubleshoot connections, see [Troubleshoot connections with Azure Network Watcher using the Azure portal](network-watcher-connectivity-portal.md).-- To learn more about Network Watcher and its other capabilities, see [What is Azure Network Watcher?](network-watcher-monitoring-overview.md).
network-watcher Network Watcher Connectivity Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-portal.md
Title: Troubleshoot connections - Azure portal description: Learn how to use the connection troubleshoot capability of Azure Network Watcher using the Azure portal.- + - Previously updated : 07/17/2023-- Last updated : 09/13/2023
+#CustomerIntent: As an Azure administrator, I want to learn how to use Connection Troubleshoot to diagnose connectivity problems in Azure.
# Troubleshoot connections with Azure Network Watcher using the Azure portal
-In this article, you learn how to use [Azure Network Watcher connection troubleshoot](network-watcher-connectivity-overview.md) to diagnose and troubleshoot connectivity issues.
+In this article, you learn how to use Azure Network Watcher connection troubleshoot to diagnose and troubleshoot connectivity issues. For more information about connection troubleshoot, see [Connection troubleshoot overview](network-watcher-connectivity-overview.md).
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A virtual machine with inbound TCP connectivity from 168.63.129.16 over the port being tested.
+- A virtual machine with inbound TCP connectivity from 168.63.129.16 over the port being tested (for Port scanner diagnostic test).
> [!IMPORTANT] > Connection troubleshoot requires that the virtual machine you troubleshoot from has the `AzureNetworkWatcherExtension` extension installed. The extension is not required on the destination virtual machine.
In this section, you test connectivity between two connected virtual machines.
| **Probe Settings** | | | Preferred IP version | Select **IPv4**. | | Protocol | Select **TCP**. |
- | Destination port | Enter *80*. |
+ | Destination port | Enter **80**. |
| **Connection Diagnostics** | | | Diagnostics tests | Select **Select all**. |
- :::image type="content" source="./media/network-watcher-connectivity-portal/test-virtual-machines-connected.png" alt-text="Screenshot of Network Watcher connection troubleshoot in Azure portal to test the connection between two connected virtual machines.":::
+ :::image type="content" source="./media/network-watcher-connectivity-portal/test-virtual-machines-connected.png" alt-text="Screenshot of Network Watcher connection troubleshoot in Azure portal to test the connection between two connected virtual machines." lightbox="./media/network-watcher-connectivity-portal/test-virtual-machines-connected.png":::
-1. Select **Test connection**.
+1. Select **Run diagnostic tests**.
The test results show that the two virtual machines are communicating with no issues:
In this section, you test connectivity between two virtual machines that have co
| **Connection Diagnostics** | | | Diagnostics tests | Select **Select all**. |
- :::image type="content" source="./media/network-watcher-connectivity-portal/test-two-virtual-machines.png" alt-text="Screenshot of Network Watcher connection troubleshoot in Azure portal to test the connection between two virtual machines.":::
+ :::image type="content" source="./media/network-watcher-connectivity-portal/test-two-virtual-machines.png" alt-text="Screenshot of Network Watcher connection troubleshoot in Azure portal to test the connection between two virtual machines." lightbox="./media/network-watcher-connectivity-portal/test-two-virtual-machines.png":::
-1. Select **Test connection**.
+1. Select **Run diagnostic tests**.
The test results show that the two virtual machines aren't communicating:
In this section, you test connectivity between a virtual machine and `www.bing.com`.
| **Connection Diagnostics** | | | Diagnostics tests | Select **Connectivity**. |
- :::image type="content" source="./media/network-watcher-connectivity-portal/test-bing.png" alt-text="Screenshot of Network Watcher connection troubleshoot in Azure portal to test the connection between a virtual machines and Microsoft Bing search engine.":::
+ :::image type="content" source="./media/network-watcher-connectivity-portal/test-bing.png" alt-text="Screenshot of Network Watcher connection troubleshoot in Azure portal to test the connection between a virtual machine and Microsoft Bing search engine." lightbox="./media/network-watcher-connectivity-portal/test-bing.png":::
-1. Select **Test connection**.
+1. Select **Run diagnostic tests**.
The test results show that `www.bing.com` is reachable from **VM1** virtual machine:
network-watcher Network Watcher Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-overview.md
+
+ Title: Azure Network Watcher overview
+description: Learn about Azure Network Watcher's monitoring, diagnostics, logging, and metrics capabilities in a virtual network.
++++ Last updated : 09/13/2023
+# Customer intent: As someone with basic Azure network experience, I want to understand how Azure Network Watcher can help me resolve some of the network-related problems I've encountered and provide insight into how I use Azure networking.
++
+# What is Azure Network Watcher?
+
+Azure Network Watcher provides a suite of tools to monitor, diagnose, view metrics, and enable or disable logs for Azure IaaS (Infrastructure-as-a-Service) resources. Network Watcher enables you to monitor and repair the network health of IaaS products like virtual machines (VMs), virtual networks (VNets), application gateways, load balancers, etc. Network Watcher isn't designed or intended for PaaS monitoring or Web analytics.
+
+Network Watcher consists of three major sets of tools and capabilities:
+
+- [Monitoring](#monitoring)
+- [Network diagnostics tools](#network-diagnostics-tools)
+- [Traffic](#traffic)
++
+> [!NOTE]
+> When you create or update a virtual network in your subscription, Network Watcher is automatically enabled in your virtual network's region. There's no impact on your resources or associated charge for automatically enabling Network Watcher. For more information, see [Enable or disable Network Watcher](network-watcher-create.md).
+
+## Monitoring
+
+Network Watcher offers two monitoring tools that help you view and monitor resources:
+
+- Topology
+- Connection monitor
+
+### Topology
+
+**Topology** provides a visualization of the entire network for understanding network configuration. It provides an interactive interface to view resources and their relationships in Azure spanning across multiple subscriptions, resource groups, and locations. For more information, see [Topology overview](network-insights-topology.md).
+
+### Connection monitor
+
+**Connection monitor** provides end-to-end connection monitoring for Azure and hybrid endpoints. It helps you understand network performance between various endpoints in your network infrastructure. For more information, see [Connection monitor overview](connection-monitor-overview.md) and [Monitor network communication between two virtual machines](connection-monitor.md).
+
+## Network diagnostics tools
+
+Network Watcher offers seven network diagnostics tools that help troubleshoot and diagnose network issues:
+
+- IP flow verify
+- NSG diagnostics
+- Next hop
+- Effective security rules
+- Connection troubleshoot
+- Packet capture
+- VPN troubleshoot
+
+### IP flow verify
+
+**IP flow verify** allows you to detect traffic filtering issues at a virtual machine level. It checks if a packet is allowed or denied to or from an IP address (IPv4 or IPv6 address). It also tells you which security rule allowed or denied the traffic. For more information, see [IP flow verify overview](network-watcher-ip-flow-verify-overview.md) and [Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md).
+
+### NSG diagnostics
+
+**NSG diagnostics** allows you to detect traffic filtering issues at a virtual machine, virtual machine scale set, or application gateway level. It checks if a packet is allowed or denied to or from an IP address, IP prefix, or a service tag. It tells you which security rule allowed or denied the traffic. It also allows you to add a new security rule with a higher priority to allow or deny the traffic. For more information, see [NSG diagnostics overview](network-watcher-network-configuration-diagnostics-overview.md) and [Diagnose network security rules](diagnose-network-security-rules.md).
+
+### Next hop
+
+**Next hop** allows you to detect routing issues. It checks if traffic is routed correctly to the intended destination. It provides you with information about the Next hop type, IP address, and Route table ID for a specific destination IP address. For more information, see [Next hop overview](network-watcher-next-hop-overview.md) and [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem.md).
+
+### Effective security rules
+
+**Effective security rules** allows you to view the effective security rules applied to a network interface. It shows you all security rules applied to the network interface, the subnet the network interface is in, and the aggregate of both. For more information, see [Effective security rules overview](network-watcher-security-group-view-overview.md) and [View details of a security rule](diagnose-vm-network-traffic-filtering-problem.md#view-details-of-a-security-rule).
+
+### Connection troubleshoot
+
+**Connection troubleshoot** enables you to test a connection between a virtual machine, a virtual machine scale set, an application gateway, or a Bastion host and a virtual machine, an FQDN, a URI, or an IPv4 address. The test returns information similar to that returned by the [connection monitor](#connection-monitor) capability, but it tests the connection at a point in time instead of monitoring it over time, as connection monitor does. For more information, see [Connection troubleshoot overview](connection-troubleshoot-overview.md) and [Troubleshoot connections with Azure Network Watcher](network-watcher-connectivity-portal.md).
+
+### Packet capture
+
+**Packet capture** allows you to remotely create packet capture sessions to track traffic to and from a virtual machine (VM) or a virtual machine scale set. For more information, see [packet capture](network-watcher-packet-capture-overview.md) and [Manage packet captures in virtual machines](network-watcher-packet-capture-manage-portal.md).
+
+### VPN troubleshoot
+
+**VPN troubleshoot** enables you to troubleshoot virtual network gateways and their connections. For more information, see [VPN troubleshoot overview](network-watcher-troubleshoot-overview.md) and [Diagnose a communication problem between networks](diagnose-communication-problem-between-networks.md).
+
+## Traffic
+
+Network Watcher offers two traffic tools that help you log and visualize network traffic:
+
+- Flow logs
+- Traffic analytics
+
+### Flow logs
+
+**Flow logs** allows you to log information about your Azure IP traffic and store the data in an Azure storage account. You can log IP traffic flowing through a network security group or an Azure virtual network. For more information, see:
+- [NSG flow logs](network-watcher-nsg-flow-logging-overview.md) and [Log network traffic to and from a virtual machine](network-watcher-nsg-flow-logging-portal.md).
+- [VNet flow logs (preview)](vnet-flow-logs-overview.md) and [Manage VNet flow logs](vnet-flow-logs-powershell.md).
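
To give a sense of what working with flow log data looks like, the following sketch parses an NSG flow log blob that has been downloaded locally and prints its flow tuples. The file name is hypothetical, and the field names follow the version 2 NSG flow log schema, so verify them against your own logs.

```python
import json

# Hypothetical local copy of an NSG flow log blob (PT1H.json) downloaded from the storage account.
with open("PT1H.json") as f:
    log = json.load(f)

for record in log.get("records", []):
    for rule in record.get("properties", {}).get("flows", []):
        for flow in rule.get("flows", []):
            for flow_tuple in flow.get("flowTuples", []):
                # Version 2 tuples are comma-separated:
                # timestamp,srcIp,dstIp,srcPort,dstPort,protocol,direction,decision,...
                fields = flow_tuple.split(",")
                print(rule.get("rule"), fields[1], "->", f"{fields[2]}:{fields[4]}", fields[7])
```
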
+
+### Traffic analytics
+
+**Traffic analytics** provides rich visualizations of flow logs data. For more information about traffic analytics, see [traffic analytics](traffic-analytics.md) and [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
++
+## Usage + quotas
+
+The **Usage + quotas** capability of Network Watcher provides a summary of how many of each network resource you've deployed in a subscription and region and what the limit is for that resource. For more information about the limits on the number of network resources that you can create within an Azure subscription and region, see [Networking limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=/azure/network-watcher/toc.json#azure-resource-manager-virtual-networking-limits). This information is helpful when planning future resource deployments because you can't create more resources after you reach their limits within the subscription or region.
++
+## Network Watcher limits
+
+Network Watcher has the following limits:
++
+## Pricing
+
+For pricing details, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
+
+## Service Level Agreement (SLA)
+
+For service level agreement details, see [Service Level Agreements (SLA) for Online Services](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services).
+
+## Frequently asked questions (FAQ)
+
+To get answers to most frequently asked questions about Network Watcher, see [Azure Network Watcher frequently asked questions (FAQ)](frequently-asked-questions.yml).
+
+## What's new?
+
+To view the latest Network Watcher feature updates, see [Service updates](https://azure.microsoft.com/updates/?query=network%20watcher).
+
+## Related content
+
+- To get started using Network Watcher diagnostics tools, see [Quickstart: Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md).
+- [Training module: Introduction to Azure Network Watcher](/training/modules/intro-to-azure-network-watcher).
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
postgresql Concepts Connection Pooling Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-connection-pooling-best-practices.md
Although there are different tools for connection pooling, in this section, we d
**PgBouncer** is an efficient connection pooler designed for PostgreSQL, offering the advantage of reducing processing time and optimizing resource usage in managing multiple client connections to one or more databases. **PgBouncer** incorporates three distinct pooling modes for connection rotation: -- **Session pooling:** This method assigns a server connection to the client application for the entire duration of the client's connection. Upon disconnection of the client application, PgBouncer promptly returns the server connection back to the pool. This pooling mechanism is the default setting. (Note: It isn't recommended in most of the cases and don't give any performance benefits over classic connections).
+- **Session pooling:** This method assigns a server connection to the client application for the entire duration of the client's connection. When the client application disconnects, **PgBouncer** promptly returns the server connection to the pool. This pooling mechanism is the default setting. (Note: It isn't recommended in most cases and doesn't give any performance benefits over classic connections.)
- **Transaction pooling:** With transaction pooling, a server connection is dedicated to the client application during a transaction. Once the transaction is successfully completed, **PgBouncer** intelligently releases the server connection, making it available again within the pool. Transaction pooling is the default mode in Flexible server, and it does not support prepared transactions. - **Statement pooling:** In statement pooling, a server connection is allocated to the client application for each individual statement. Upon the statement's completion, the server connection is promptly returned to the connection pool. It's important to note that multi-statement transactions are not supported in this mode.
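
To route application traffic through the built-in **PgBouncer** on a flexible server, clients connect to the PgBouncer port instead of the regular PostgreSQL port. The sketch below is a minimal illustration using `psycopg2`; the host name and credentials are placeholders, and it assumes the built-in PgBouncer feature is enabled on the server (it listens on port 6432 by default).

```python
import psycopg2

# Placeholder connection details - replace with your flexible server values.
# Port 6432 reaches the built-in PgBouncer; port 5432 connects directly to PostgreSQL.
conn = psycopg2.connect(
    host="<your-server-name>.postgres.database.azure.com",
    port=6432,
    dbname="postgres",
    user="<admin-user>",
    password="<password>",
    sslmode="require",
)

with conn, conn.cursor() as cur:
    # In transaction pooling mode, each transaction can run on a different backend
    # connection, so avoid session-level state such as prepared statements.
    cur.execute("SELECT now()")
    print(cur.fetchone()[0])

conn.close()
```
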
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 09/06/2023 Last updated : 09/13/2023 # Azure Policy built-in definitions for Azure Database for PostgreSQL
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
The following Azure resources are defined in the template.
|**User Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. | |**User Plane Data Interface Name** | Enter the virtual network name on port 6 on your Azure Stack Edge Pro device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. | |**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. |
- |**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
- |**Data Network Name** | Enter the name of the data network. |
+ |**Data Network Name** | Enter the name of the data network. |
|**Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. | |**Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network.| | **Dns Addresses** | Enter the DNS server addresses. You should only omit this if you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers. |
private-5g-core Provision Sims Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-arm-template.md
To begin, collect the values in the following table for each SIM you want to pro
| The Authentication Key (Ki). The Ki is a unique 128-bit value assigned to the SIM by an operator, and is used with the derived operator code (OPc) to authenticate a user. It must be a 32-character string, containing hexadecimal characters only. | `authenticationKey` | | The derived operator code (OPc). The OPc is taken from the SIM's Ki and the network's operator code (OP). The packet core instance uses it to authenticate a user using a standards-based algorithm. The OPc must be a 32-character string, containing hexadecimal characters only. | `operatorKeyCode` | | The type of device using this SIM. This value is an optional free-form string. You can use it as required to easily identify device types using the enterprise's private mobile network. | `deviceType` |
-| The SIM policy to assign to the SIM. This is optional, but your SIMs won't be able to use the private mobile network without an assigned SIM policy. You'll need to assign a SIM policy if you want to set static IP addresses to the SIM during provisioning. | `simPolicyId` |
-
-### Collect the required information for assigning static IP addresses
-
-You only need to complete this step if you've configured static IP address allocation for your packet core instance(s) and you want to assign static IP addresses to the SIMs during SIM provisioning.
-
-Collect the values in the following table for each SIM you want to provision. If your private mobile network has multiple data networks and you want to assign a different static IP address for each data network to this SIM, collect the values for each IP address.
-
-Each IP address must come from the pool you assigned for static IP address allocation when creating the relevant data network, as described in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values). For more information, see [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools).
-
-| Value | Parameter name |
-|--|--|--|
-| The data network that the SIM will use. | `staticIpConfiguration.attachedDataNetworkId` |
-| The network slice that the SIM will use. | `staticIpConfiguration.sliceId` |
-| The static IP address to assign to the SIM. | `staticIpConfiguration.staticIpAddress` |
+| The SIM policy to assign to the SIM. This is optional, but your SIMs won't be able to use the private mobile network without an assigned SIM policy. | `simPolicyId` |
## Prepare one or more arrays for your SIMs
Use the information you collected in [Collect the required information for your
> [!IMPORTANT] > Bulk SIM provisioning is limited to 500 SIMs. If you want to provision more that 500 SIMs, you must create multiple SIM arrays with no more than 500 SIMs in any one array and repeat the provisioning process for each SIM array.
-If you don't want to configure static IP addresses for a SIM, delete the `staticIpConfiguration` parameter for that SIM. If your private mobile network has multiple data networks and you want to assign a different static IP address for each data network to the same SIM, you can include additional `attachedDataNetworkId`, `sliceId` and `staticIpAddress` parameters for each IP address under `staticIpConfiguration`.
+Delete the `staticIpConfiguration` parameter for each SIM in the array.
```json [
The following Azure resources are defined in the template.
## Next steps
-If you've configured static IP address allocation for your packet core instance(s) and you haven't already assigned static IP addresses to the SIMs you've provisioned, you can do so by following the steps in [Assign static IP addresses](manage-existing-sims.md#assign-static-ip-addresses).
+You can [manage these SIMs](manage-existing-sims.md) using the Azure portal.
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
sap Get Sap Installation Media https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/get-sap-installation-media.md
Next, download the SAP installation media to the VM using a script.
1. Where `playbook_bom_downloader_yaml_path` is the absolute path to sap-automation/deploy/ansible/playbook_bom_downloader.yaml. e.g. */home/loggedinusername/sap-automation/deploy/ansible/playbook_bom_downloader.yaml*
-1. For `<bom_base_name>`, use the SAP Version you want to install i.e. **_S41909SPS03_v0011ms_** or **_S42020SPS03_v0003ms_** or **_S4HANA_2021_ISS_v0001ms_** or **_S42022_SPS00_v0001ms_**
+1. For `<bom_base_name>`, use the SAP Version you want to install i.e. **_S41909SPS03_v0011ms_** or **_S42020SPS03_v0003ms_** or **_S4HANA_2021_ISS_v0001ms_** or **_S42022SPS00_v0001ms_**
1. For `<s_user>`, use your SAP username.
First, set up an Azure Storage account for the SAP components:
1. **HANA_2_00_071_v0001ms**
- 1. **S42022_SPS00_v0001ms**
+ 1. **S42022SPS00_v0001ms**
1. **SWPM20SP15_latest**
sap Integration Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md
See SAP's recommendation to use AntiVirus software for SAP hosts and systems on
For more information about using Microsoft Defender for Endpoint (MDE) via Microsoft Defender for Server for SAP applications regarding `Next-generation protection` (AntiVirus) and `Endpoint Detection and Response` (EDR) see the following Microsoft resources: - [SAP Applications and Microsoft Defender for Linux | Microsoft TechCommunity](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-applications-and-microsoft-defender-for-linux/ba-p/3675480)
+- [SAP Applications and Microsoft Defender for Windows Server | Microsoft TechCommunity](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/microsoft-defender-endpoint-mde-for-sap-applications-on-windows/ba-p/3912268)
- [Enable the Microsoft Defender for Endpoint integration](../../defender-for-cloud/integration-defender-for-endpoint.md#enable-the-microsoft-defender-for-endpoint-integration) - [Common mistakes to avoid when defining exclusions](/microsoft-365/security/defender-endpoint/common-exclusion-mistakes-microsoft-defender-antivirus)
search Cognitive Search Tutorial Blob Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob-dotnet.md
Previously updated : 06/29/2023 Last updated : 09/13/2023
To interact with your Azure Cognitive Search service you will need the service U
1. In **Settings** > **Keys**, get an admin key for full rights on the service. You can copy either the primary or secondary key.
-<!-- This code sample doesn't include a query so the following sentence should be deleted.
- Get the query key as well. It's a best practice to issue query requests with read-only access. -->
- ![Get the service name and admin key](media/search-get-started-javascript/service-name-and-keys.png) Having a valid key establishes trust, on a per request basis, between the application sending the request and the service that handles it.
For this project, install version 11 or later of the `Azure.Search.Documents` an
```json {
- "SearchServiceUri": "Put your search service URI here",
- "SearchServiceAdminApiKey": "Put your primary or secondary API key here",
- "SearchServiceQueryApiKey": "Put your query API key here",
- "AzureBlobConnectionString": "Put your Azure Blob connection string here",
+ "SearchServiceUri": "<YourSearchServiceUri>",
+ "SearchServiceAdminApiKey": "<YourSearchServiceAdminApiKey>",
+ "SearchServiceQueryApiKey": "<YourSearchServiceQueryApiKey>",
+ "AzureAIServicesKey": "<YourMultiRegionAzureAIServicesKey>",
+ "AzureBlobConnectionString": "<YourAzureBlobConnectionString>"
} ```
public static void Main(string[] args)
string searchServiceUri = configuration["SearchServiceUri"]; string adminApiKey = configuration["SearchServiceAdminApiKey"];
- string cognitiveServicesKey = configuration["CognitiveServicesKey"];
+ string azureAiServicesKey = configuration["AzureAIServicesKey"];
SearchIndexClient indexClient = new SearchIndexClient(new Uri(searchServiceUri), new AzureKeyCredential(adminApiKey)); SearchIndexerClient indexerClient = new SearchIndexerClient(new Uri(searchServiceUri), new AzureKeyCredential(adminApiKey));
private static KeyPhraseExtractionSkill CreateKeyPhraseExtractionSkill()
Build the [`SearchIndexerSkillset`](/dotnet/api/azure.search.documents.indexes.models.searchindexerskillset) using the skills you created. ```csharp
-private static SearchIndexerSkillset CreateOrUpdateDemoSkillSet(SearchIndexerClient indexerClient, IList<SearchIndexerSkill> skills,string cognitiveServicesKey)
+private static SearchIndexerSkillset CreateOrUpdateDemoSkillSet(SearchIndexerClient indexerClient, IList<SearchIndexerSkill> skills,string azureAiServicesKey)
{ SearchIndexerSkillset skillset = new SearchIndexerSkillset("demoskillset", skills) {
+ // Azure AI services was formerly known as Cognitive Services.
+ // The APIs still use the old name, so we need to create a CognitiveServicesAccountKey object.
Description = "Demo skillset",
- CognitiveServicesAccount = new CognitiveServicesAccountKey(cognitiveServicesKey)
+ CognitiveServicesAccount = new CognitiveServicesAccountKey(azureAiServicesKey)
}; // Create the skillset in your search service.
skills.Add(splitSkill);
skills.Add(entityRecognitionSkill); skills.Add(keyPhraseExtractionSkill);
-SearchIndexerSkillset skillset = CreateOrUpdateDemoSkillSet(indexerClient, skills, cognitiveServicesKey);
+SearchIndexerSkillset skillset = CreateOrUpdateDemoSkillSet(indexerClient, skills, azureAiServicesKey);
``` ### Step 3: Create an index
search Cognitive Search Tutorial Blob Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob-python.md
ms.devlang: python Previously updated : 08/26/2022 Last updated : 09/13/2023
search Cognitive Search Tutorial Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob.md
Previously updated : 01/31/2023 Last updated : 09/13/2023 # Tutorial: Use REST and AI to generate searchable content from Azure blobs
search Monitor Azure Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/monitor-azure-cognitive-search.md
Resource logging is billable (see the [Pricing model](../azure-monitor/usage-est
1. Give the diagnostic setting a name. Use granular and descriptive names if you're creating more than one setting.
-1. Select the logs and metrics that are in scope for this setting. Selections include "allLogs", "OperationLogs", "AllMetrics". You can exclude activity logs by selecting the "OperationLogs" category.
+1. Select the logs and metrics that are in scope for this setting. Selections include "allLogs", "audit", "OperationLogs", "AllMetrics". You can exclude activity logs by selecting the "OperationLogs" category.
+ For the full list of categories, see [Microsoft.Search/searchServices (in Supported categories for Azure Monitor resource logs)](../azure-monitor/essentials/resource-logs-categories.md#microsoftsearchsearchservices).
Resource logging is billable (see the [Pricing model](../azure-monitor/usage-est
1. Select **Save**. + Once the workspace contains data, you can run log queries: + See [Tutorial: Collect and analyze resource logs from an Azure resource](../azure-monitor/essentials/tutorial-resource-logs.md) for general guidance on log queries.
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
search Retrieval Augmented Generation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/retrieval-augmented-generation-overview.md
Previously updated : 08/31/2023 Last updated : 09/13/2023 # Retrieval Augmented Generation (RAG) in Azure Cognitive Search
-Retrieval Augmentation Generation (RAG) is an architecture that augments the capabilities of a Large Language Model (LLM) like ChatGPT by adding an information retrieval system that provides the data. Adding an information retrieval system gives you control over the data used by an LLM. For an enterprise solution, RAG architecture means that you can constrain natural language processing to *your enterprise content* sourced from documents, images, audio, and video.
+Retrieval Augmented Generation (RAG) is an architecture that augments the capabilities of a Large Language Model (LLM) like ChatGPT by adding an information retrieval system that provides the data. Adding an information retrieval system gives you control over the data used by an LLM when it formulates a response. For an enterprise solution, RAG architecture means that you can constrain natural language processing to *your enterprise content* sourced from vectorized documents, images, audio, and video.
The decision about which information retrieval system to use is critical because it determines the inputs to the LLM. The information retrieval system should provide:
The decision about which information retrieval system to use is critical because
+ Integration with LLMs.
-Azure Cognitive Search is a [proven solution for information retrieval](https://github.com/Azure-Samples/azure-search-openai-demo) in a RAG architecture because it provides compatible indexing and query capabilities, with the infrastructure and security of the Azure cloud. Through code and other components, you can design a comprehensive RAG solution that includes all of the elements for generative AI over your proprietary content.
+Azure Cognitive Search is a [proven solution for information retrieval](https://github.com/Azure-Samples/azure-search-openai-demo) in a RAG architecture. It provides indexing and query capabilities, with the infrastructure and security of the Azure cloud. Through code and other components, you can design a comprehensive RAG solution that includes all of the elements for generative AI over your proprietary content.
> [!NOTE] > New to LLM and RAG concepts? This [video clip](https://youtu.be/2meEvuWAyXs?t=404) from a Microsoft presentation offers a simple explanation.
Microsoft has several built-in implementations for using Cognitive Search in a R
+ Azure Machine Learning, a search index can be used as a [vector store](/azure/machine-learning/concept-vector-stores). You can [create a vector index in an Azure Machine Learning prompt flow](/azure/machine-learning/how-to-create-vector-index) that uses your Cognitive Search service for storage and retrieval.
-If you need a custom approach however, you can create your own custom RAG solution. The remainder of this article explores how Cognitive Search fits into a custom solution.
+If you need a custom approach, however, you can create your own custom RAG solution. The remainder of this article explores how Cognitive Search fits into a custom RAG solution.
> [!NOTE] > Prefer to look at code? You can review the [Azure Cognitive Search OpenAI demo](https://github.com/Azure-Samples/azure-search-openai-demo) for an example.
Cognitive Search doesn't provide native LLM integration, web frontends, or vecto
In Cognitive Search, all searchable content is stored in a search index that's hosted on your search service in the cloud. A search index is designed for fast queries with millisecond response times, so its internal data structures exist to support that objective. To that end, a search index stores *indexed content*, and not whole content files like entire PDFs or images. Internally, the data structures include inverted indexes of [tokenized text](https://lucene.apache.org/core/7_5_0/test-framework/org/apache/lucene/analysis/Token.html), vector indexes for embeddings, and unaltered text for cases where verbatim matching is required (for example, in filters, fuzzy search, regular expression queries).
-When you set up the data for your RAG solution, you use the features that create and load an index in Cognitive Search. An index includes fields that duplicate or represent your source content. An index field might be simple transference (a title or description in a source document becomes a title or description in a search index), or a field might contain the output of an external process, such as vectorization or skill processing that generates a text description of an image.
+When you set up the data for your RAG solution, you use the features that create and load an index in Cognitive Search. An index includes fields that duplicate or represent your source content. An index field might be simple transference (a title or description in a source document becomes a title or description in a search index), or a field might contain the output of an external process, such as vectorization or skill processing that generates a representation or text description of an image.
Since you probably know what kind of content you want to search over, consider the indexing features that are applicable to each content type:
print("\n-\nPrompt:\n" + prompt)
> [!NOTE] > Some Cognitive Search features are intended for human interaction and aren't useful in a RAG pattern. Specifically, you can skip autocomplete and suggestions. Other features like facets and orderby might be useful, but would be uncommon in a RAG scenario.
-<!-- Vanity URL for this article, currently used only in the vector search overview doc
-https://aka.ms/what-is-rag -->
- ## See also + [Retrieval Augmented Generation: Streamlining the creation of intelligent natural language processing models](https://ai.meta.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models/) + [Retrieval Augmented Generation using Azure Machine Learning prompt flow](/azure/machine-learning/concept-retrieval-augmented-generation)++ [Azure Cognitive Search and LangChain: A Seamless Integration for Enhanced Vector Search Capabilities](https://techcommunity.microsoft.com/t5/azure-ai-services-blog/azure-cognitive-search-and-langchain-a-seamless-integration-for/ba-p/3901448)
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Azure Cognitive Search uses Hierarchical Navigable Small Worlds (HNSW), which is
+ [Try the quickstart](search-get-started-vector.md) + [Learn more about vector indexing](vector-search-how-to-create-index.md) + [Learn more about vector queries](vector-search-how-to-query.md)-++ [Azure Cognitive Search and LangChain: A Seamless Integration for Enhanced Vector Search Capabilities](https://techcommunity.microsoft.com/t5/azure-ai-services-blog/azure-cognitive-search-and-langchain-a-seamless-integration-for/ba-p/3901448)
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
service-bus-messaging Service Bus Outages Disasters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-outages-disasters.md
When you use availability zones, **both metadata and data (messages)** are repli
> [!NOTE] > The availability zones support for the premium tier is only available in [Azure regions](../availability-zones/az-region.md) where availability zones are present.
-You can enable availability zones on new namespaces only, using the Azure portal. Service Bus doesn't support migration of existing namespaces. You can't disable zone redundancy after enabling it on your namespace.
-
-![1][]
-
+When you create a premium tier namespace, availability zone support (if available in the selected region) is automatically enabled for the namespace. There's no additional cost for using this feature, and you can't enable or disable it yourself.
## Protection against outages and disasters - standard tier To achieve resilience against datacenter outages when using the standard messaging pricing tier, Service Bus supports two approaches: **active** and **passive** replication. For each approach, if a given queue or topic must remain accessible in the presence of a datacenter outage, you can create it in both namespaces. Both entities can have the same name. For example, a primary queue can be reached under **contosoPrimary.servicebus.windows.net/myQueue**, while its secondary counterpart can be reached under **contosoSecondary.servicebus.windows.net/myQueue**.
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Previously updated : 09/06/2023 Last updated : 09/13/2023 # Azure Policy built-in definitions for Azure Service Fabric
spring-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md
Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
spring-apps Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/vnet-customer-responsibilities.md
The following list shows the resource requirements for Azure Spring Apps service
| \*.azurecr.io:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling the *Azure Container Registry* [service endpoint in the virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | | \*.core.windows.net:443 and \*.core.windows.net:445 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure Files | Can be replaced by enabling the *Azure Storage* [service endpoint in the virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | | \*.servicebus.windows.net:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hubs. | Can be replaced by enabling the *Azure Event Hubs* [service endpoint in the virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
+| global.prod.microsoftmetrics.com:443 and \*.livediagnostics.monitor.azure.com:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureMonitor:443 | TCP:443 | Azure Monitor. | Allows outbound calls to Azure Monitor. |
## Azure Global required FQDN / application rules
Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the
| \*.azurecr.cn:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling the *Azure Container Registry* [service endpoint in the virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | | \*.core.chinacloudapi.cn:443 and \*.core.chinacloudapi.cn:445 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure Files | Can be replaced by enabling the *Azure Storage* [service endpoint in the virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | | \*.servicebus.chinacloudapi.cn:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hubs. | Can be replaced by enabling the *Azure Event Hubs* [service endpoint in the virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
+| global.prod.microsoftmetrics.com:443 and \*.livediagnostics.monitor.azure.com:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureMonitor:443 | TCP:443 | Azure Monitor. | Allows outbound calls to Azure Monitor. |
## Microsoft Azure operated by 21Vianet required FQDN / application rules
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
storage Install Container Storage Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/install-container-storage-aks.md
An Azure resource group is a logical group that holds your Azure resources that
* The storage location of your resource group metadata. * Where your resources will run in Azure if you don't specify another region during resource creation.
-> [!IMPORTANT]
-> Azure Container Storage Preview is only available in *eastus*, *westus2*, *westus3*, and *westeurope* regions.
-
-Create a resource group using the `az group create` command. Replace `<resource-group-name>` with the name of the resource group you want to create, and replace `<location>` with *eastus*, *westus2*, *westus3*, or *westeurope*.
+Create a resource group using the `az group create` command. Replace `<resource-group-name>` with the name of the resource group you want to create, and replace `<location>` with an Azure region such as *eastus*, *westus2*, *westus3*, or *westeurope*.
```azurecli-interactive az group create --name <resource-group-name> --location <location>
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
update-center Assessment Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/assessment-options.md
Update Manager (preview) provides you the flexibility to assess the status of av
## Periodic assessment
- Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by Update Manager (preview). We recommend that you enable this property on your machines as it allows Update Manager (preview) to fetch latest updates for your machines every 24 hours and enables you to view the latest compliance status of your machines. You can enable this setting using update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-single-vm) or enable it at scale by using [Policy](periodic-assessment-at-scale.md).
+ Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by Update Manager (preview). We recommend that you enable this property on your machines because it allows Update Manager (preview) to fetch the latest updates for your machines every 24 hours and lets you view the latest compliance status of your machines. You can enable this setting by using the update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-a-single-vm), or enable it at scale by using [Policy](periodic-assessment-at-scale.md).
:::image type="content" source="media/updates-maintenance/periodic-assessment-inline.png" alt-text="Screenshot showing periodic assessment option." lightbox="media/updates-maintenance/periodic-assessment-expanded.png":::
update-center Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/deploy-updates.md
Title: Deploy updates and track results in Azure Update Manager (preview).
-description: The article details how to use Azure Update Manager (preview) in the Azure portal to deploy updates and view results for supported machines.
+ Title: Deploy updates and track results in Azure Update Manager (preview)
+description: This article details how to use Azure Update Manager (preview) in the Azure portal to deploy updates and view results for supported machines.
Last updated 08/08/2023
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-The article describes how to perform an on-demand update on a single VM or multiple VMs using Update Manager (preview).
+This article describes how to perform an on-demand update on a single virtual machine (VM) or multiple VMs by using Azure Update Manager (preview).
-See the following sections for detailed information:
-- [Install updates on a single VM](#install-updates-on-single-vm)
+See the following sections for more information:
+
+- [Install updates on a single VM](#install-updates-on-a-single-vm)
- [Install updates at scale](#install-updates-at-scale) ## Supported regions
-Update Manager (preview) is available in all [Azure public regions](support-matrix.md#supported-regions).
+Update Manager (preview) is available in all [Azure public regions](support-matrix.md#supported-regions).
## Configure reboot settings
-The registry keys listed in [Configuring Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot, even if you specify **Never Reboot** in the **Schedule** settings. Configure these registry keys to best suit your environment.
-
+The registry keys listed in [Configure automatic updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot. A reboot can happen even if you specify **Never Reboot** in the **Schedule** settings. Configure these registry keys to best suit your environment.
-## Install updates on single VM
+## Install updates on a single VM
->[!NOTE]
-> You can install the updates from the Overview or Machines blade in Update Manager (preview) page or from the selected VM.
+You can install updates from **Overview** or **Machines** on the **Update Manager (preview)** page or from the selected VM.
-# [From Overview blade](#tab/install-single-overview)
+# [From Overview pane](#tab/install-single-overview)
-To install one time updates on a single VM, follow these steps:
+To install one-time updates on a single VM:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update Manager (preview)**, **Overview**, choose your **Subscription** and select **One-time update** to install updates.
+1. On **Update Manager (preview)** > **Overview**, select your subscription and select **One-time update** to install updates.
- :::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Example of installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png":::
+ :::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Screenshot that shows an example of installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png":::
-1. Select **Install now** to proceed with the one-time updates.
+1. Select **Install now** to proceed with the one-time updates:
- - In **Install one-time updates**, select **+Add machine** to add the machine for deploying one-time.
+ - **Install one-time updates**: Select **Add machine** to add the machine for deploying one-time updates.
+ - **Select resources**: Choose the machine and select **Add**.
- - In **Select resources**, choose the machine and select **Add**.
+1. On the **Updates** pane, specify the updates to include in the deployment. For each product, select or clear all supported update classifications and specify the ones to include in your update deployment.
-1. In **Updates**, specify the updates to include in the deployment. For each product, select or deselect all supported update classifications and specify the ones to include in your update deployment. If your deployment is meant to apply only for a select set of updates, it's necessary to deselect all the pre-selected update classifications when configuring the **Inclusion/exclusion** updates described below. This ensures only the updates you've specified to include in this deployment are installed on the target machine.
+ If your deployment is meant to apply only to a select set of updates, clear all the preselected update classifications when you configure the **Inclusion/exclusion** updates described in the following steps. This action ensures that only the updates you've specified for this deployment are installed on the target machine.
> [!NOTE]
- > - Selected Updates shows a preview of OS updates which may be installed based on the last OS update assessment information available. If the OS update assessment information in Update Manager (preview) is obsolete, the actual updates installed would vary. Especially if you have chosen to install a specific update category, where the OS updates applicable may vary as new packages or KB Ids may be available for the category.
- > - Update Manager (preview) doesn't support driver updates.
-
+ > - **Selected Updates** shows a preview of OS updates that you can install based on the last OS update assessment information available. If the OS update assessment information in Update Manager (preview) is obsolete, the updates actually installed might vary, especially if you've chosen to install a specific update category, where the applicable OS updates can change as new packages or KB IDs become available for the category.
+ > - Update Manager (preview) doesn't support driver updates.
- - Select **+Include update classification**, in the **Include update classification** select the appropriate classification(s) that must be installed on your machines.
+ - Select **Include update classification**. Select the appropriate classifications that must be installed on your machines.
- :::image type="content" source="./media/deploy-updates/include-update-classification-inline.png" alt-text="Screenshot on including update classification." lightbox="./media/deploy-updates/include-update-classification-expanded.png":::
+ :::image type="content" source="./media/deploy-updates/include-update-classification-inline.png" alt-text="Screenshot that shows update classification." lightbox="./media/deploy-updates/include-update-classification-expanded.png":::
- - Select **Include KB ID/package** to include in the updates. Enter a comma separated list of Knowledge Base article ID numbers to include or exclude for Windows updates. For example, `3103696, 3134815`. For Windows, you can refer to the [MSRC link](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base released. For supported Linux distros, you specify a comma separated list of packages by the package name, and you can include wildcards. For example, `kernel*, glibc, libc=1.0.1`. Based on the options specified, update Manager (preview) shows a preview of OS updates under the **Selected Updates** section.
-
- - To exclude updates that you don't want to install, select **Exclude KB ID/package**. We recommend checking this option because updates that are not displayed here might be installed, as newer updates might be available.
-
- - To ensure that the updates published are on or before a specific date, select **Include by maximum patch publish date** and in the Include by maximum patch publish date, choose the date and select **Add** and **Next**.
+ - Select **Include KB ID/package** to specify the KB IDs or packages to include in the updates. Enter a comma-separated list of Knowledge Base article ID numbers to include or exclude for Windows updates. For example, use `3103696` or `3134815`. For Windows, you can refer to the [MSRC webpage](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base release. For supported Linux distros, specify a comma-separated list of packages by package name, and you can include wildcards. For example, use `kernel*`, `glibc`, or `libc=1.0.1`. Based on the options specified, Update Manager (preview) shows a preview of OS updates under the **Selected Updates** section.
+ - To exclude updates that you don't want to install, select **Exclude KB ID/package**. We recommend selecting this option because updates that aren't displayed here might be installed, as newer updates might be available.
+ - To ensure that only updates published on or before a specific date are installed, select **Include by maximum patch publish date**. Select the date and then select **Add** > **Next**.
- :::image type="content" source="./media/deploy-updates/include-patch-publish-date-inline.png" alt-text="Screenshot on including patch publish date." lightbox="./media/deploy-updates/include-patch-publish-date-expanded.png":::
+ :::image type="content" source="./media/deploy-updates/include-patch-publish-date-inline.png" alt-text="Screenshot that shows the patch publish date." lightbox="./media/deploy-updates/include-patch-publish-date-expanded.png":::
-1. In **Properties**, specify the reboot and maintenance window.
+1. On the **Properties** pane, specify the reboot and maintenance window:
- Use the **Reboot** option to specify the way to handle reboots during deployment. The following options are available: * Reboot if required * Never reboot * Always reboot
- - Use the **Maximum duration (in minutes)** to specify the amount of time allowed for updates to install. The maximum limit supported is 235 minutes. Consider the following details when specifying the window:
+ - Use **Maximum duration (in minutes)** to specify the amount of time allowed for updates to install. The maximum limit supported is 235 minutes. Consider the following details when you specify the window:
* It controls the number of updates that must be installed.
- * New updates will continue to install if the maintenance window limit is approaching.
- * In-progress updates aren't terminated if the maintenance window limit is exceeded
- * Any remaining updates that are not yet installed aren't attempted. We recommend that you reevaluate the maintenance window if this is consistently encountered.
- * If the limit is exceeded on Windows, it's often because of a service pack update that is taking a long time to install.
+ * New updates continue to install if the maintenance window limit is approaching.
+ * In-progress updates aren't terminated if the maintenance window limit is exceeded.
+ * Any remaining updates that aren't yet installed aren't attempted. We recommend that you reevaluate the maintenance window if this issue is consistently encountered.
+ * If the limit is exceeded on Windows, it's often because of a service pack update that's taking a long time to install.
-1. When you're finished configuring the deployment, verify the summary in **Review + install** and select **Install**.
+1. After you're finished configuring the deployment, verify the summary in **Review + install** and select **Install**.
-
-# [From Machines blade](#tab/install-single-machine)
+# [From Machines pane](#tab/install-single-machine)
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update Manager (Preview)**, **Machine**, choose your **Subscription**, choose your machine and select **One-time update** to install updates.
+1. On **Update Manager (preview)** > **Machine**, select your subscription, select your machine, and select **One-time update** to install updates.
-1. Select to **Install now** to proceed with installing updates.
+1. Select **Install now** to proceed with installing updates.
-1. In **Install one-time updates** page, the selected machine appears. Choose the machine, select **Next** and follow the procedure from step 4 listed in **From Overview blade** of [Install updates on single VM](#install-updates-on-single-vm).
+1. On the **Install one-time updates** page, the selected machine appears. Choose the machine, select **Next**, and follow the procedure from step 4 listed in **From Overview pane** of [Install updates on a single VM](#install-updates-on-a-single-vm).
- A notification appears to inform you the activity has started and another is created when it's completed. When it is successfully completed, you can view the installation operation results in **History**. The status of the operation can be viewed at any time from the [Azure Activity log](../azure-monitor/essentials/activity-log.md).
+ A notification informs you when the activity starts, and another tells you when it's finished. After it's successfully finished, you can view the installation operation results in **History**. You can view the status of the operation at any time from the [Azure activity log](../azure-monitor/essentials/activity-log.md).
# [From a selected VM](#tab/singlevm-deploy-home) 1. Select your virtual machine and the **virtual machines | Updates** page opens. 1. Under **Operations**, select **Updates**.
-1. In **Updates**, select **Go to Updates using Azure Update Manager**.
-1. In **Updates (Preview)**, select **One-time update** to install the updates.
-1. In **Install one-time updates** page, the selected machine appears. Choose the machine, select **Next** and follow the procedure from step 4 listed in **From Overview blade** of [Install updates on single VM](#install-updates-on-single-vm).
-
+1. On the **Updates** pane, select **Go to Updates using Azure Update Manager**.
+1. On the **Updates (Preview)** pane, select **One-time update** to install the updates.
+1. On the **Install one-time updates** page, the selected machine appears. Choose the machine, select **Next**, and follow the procedure from step 4 listed in **From Overview pane** of [Install updates on a single VM](#install-updates-on-a-single-vm).
+ ## Install updates at scale
-To create a new update deployment for multiple machines, follow these steps:
+Follow these steps to create a new update deployment for multiple machines.
->[!NOTE]
-> You can check the updates from **Overview** or **Machines** blade.
-
-You can schedule updates
+> [!NOTE]
+> You can check the updates from **Overview** or **Machines**.
-# [From Overview blade](#tab/install-scale-overview)
+You can schedule updates.
+# [From Overview pane](#tab/install-scale-overview)
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update Manager (Preview)**, **Overview**, choose your **Subscription**, select **One-time update**, and **Install now** to install updates.
+1. On **Update Manager (preview)** > **Overview**, select your subscription and select **One-time update** > **Install now** to install updates.
- :::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Example of installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png":::
-
-1. In **Install one-time updates**, you can select the resources and machines to install the updates.
+ :::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Screenshot that shows installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png":::
-1. In **Machines**, you can view all the machines available in your subscription. You can also use the **+Add machine** to add the machines for deploying one-time updates. You can add up to 20 machines. Choose **Select all** and select **Add**.
+1. On the **Install one-time updates** pane, you can select the resources and machines to install the updates.
-The **Machines** displays a list of machines for which you can deploy one-time update. Select **Next** and follow the procedure from step 6 listed in **From Overview blade** of [Install updates on single VM](#install-updates-on-single-vm).
+1. On the **Machines** page, you can view all the machines available in your subscription. You can also use **Add machine** to add the machines for deploying one-time updates. You can add up to 20 machines. Choose **Select all** and select **Add**.
+**Machines** displays a list of machines for which you can deploy a one-time update. Select **Next** and follow the procedure from step 6 listed in **From Overview pane** of [Install updates on a single VM](#install-updates-on-a-single-vm).
-# [From Machines blade](#tab/install-scale-machines)
+# [From Machines pane](#tab/install-scale-machines)
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to **Machines**, select your subscription and choose your machines. You can choose **Select all** to select all the machines.
+1. Go to **Machines**, select your subscription, and choose your machines. You can choose **Select all** to select all the machines.
-1. Select **One-time update**, **Install now** to deploy one-time updates.
-
-1. In **Install one-time updates**, you can select the resources and machines to install the updates.
+1. Select **One-time update** > **Install now** to deploy one-time updates.
-1. In **Machines**, you can view all the machines available in your subscription. You can also select using the **+Add machine** to add the machines for deploying one-time updates. You can add up to 20 machines. Choose the **Select all** and select **Add**.
+1. On the **Install one-time updates** pane, you can select the resources and machines to install the updates.
-The **Machines** displays a list of machines for which you want to deploy one-time update, select **Next** and follow the procedure from step 6 listed in **From Overview blade** of [Install updates on single VM](#install-updates-on-single-vm).
+1. On the **Machines** page, you can view all the machines available in your subscription. You can also use **Add machine** to add the machines for deploying one-time updates. You can add up to 20 machines. Choose **Select all** and select **Add**.
+
+**Machines** displays a list of machines for which you want to deploy a one-time update. Select **Next** and follow the procedure from step 6 listed in **From Overview pane** of [Install updates on a single VM](#install-updates-on-a-single-vm).
-
-A notification appears to inform you the activity has started and another is created when it's completed. When it is successfully completed, you can view the installation operation results in **History**. The status of the operation can be viewed at any time from the [Azure Activity log](../azure-monitor/essentials/activity-log.md).
+A notification informs you when the activity starts, and another tells you when it's finished. After it's successfully finished, you can view the installation operation results in **History**. You can view the status of the operation at any time from the [Azure activity log](../azure-monitor/essentials/activity-log.md).
+## View update history for a single VM
-## View update history for single VM
+You can browse information about your Azure VMs and Azure Arc-enabled servers across your Azure subscriptions. For more information, see [Update deployment history](manage-multiple-machines.md#update-deployment-history).
-You can browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions. For more information, see [Update deployment history](manage-multiple-machines.md#update-deployment-history).
+After your scheduled deployment starts, you can see its status on the **History** tab. It displays the total number of deployments, including the successful and failed deployments.
-After your scheduled deployment starts, you can see its status on the **History** tab. It displays the total number of deployments including the successful and failed deployments.
+**Windows update history** currently doesn't show the updates that are installed from Azure Update Management. To view a summary of the updates applied on your machines, go to **Update Manager (preview)** > **Manage** > **History**.
-> [!NOTE]
-> The **Windows update history** currently doesn't show the updates summary that are installed from Azure Update Management. To view a summary of the updates applied on your machines, go to **Update manager (preview)** > **Manage** > **History**.
-
-A list of the deployments created are shown in the update deployment grid and include relevant information about the deployment. Every update deployment has a unique GUID, represented as **Operation ID**, which is listed along with **Status**, **Updates Installed** and **Time** details. You can filter the results listed in the grid.
+A list of the deployments you've created is shown in the update deployment grid, along with relevant information about each deployment. Every update deployment has a unique GUID, which is represented as **Operation ID**. It's listed along with **Status**, **Updates Installed**, and **Time** details. You can filter the results listed in the grid.
-Select any one of the update deployments from the list to open the **Update deployment run** page. Here, it shows a detailed breakdown of the updates and the installation results for the Azure VM or Arc-enabled server.
+Select any one of the update deployments from the list to open the **Update deployment run** page. Here, you can see a detailed breakdown of the updates and the installation results for the Azure VM or Azure Arc-enabled server.
The available values are:-- **Not attempted** - The update wasn't installed because there was insufficient time available, based on the defined maintenance window duration.-- **Not selected** - The update wasn't selected for deployment.-- **Succeeded** - The update succeeded.-- **Failed** - The update failed.+
+- **Not attempted**: The update wasn't installed because insufficient time was available, based on the defined maintenance window duration.
+- **Not selected**: The update wasn't selected for deployment.
+- **Succeeded**: The update succeeded.
+- **Failed**: The update failed.
## Next steps
-* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [Query logs](query-logs.md).
+* To troubleshoot issues, see [Troubleshoot issues with Azure Update Manager (preview)](troubleshoot.md).
update-center Guidance Migration Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/guidance-migration-azure.md
Last updated 08/23/2023
-# Guidance on patching while migrating from Microsoft Configuration Manager to Azure
+# Guidance on migrating Azure VMs from Microsoft Configuration Manager to Azure
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-This article provides the details on how to patch your migrated virtual machines on Azure.
+This article provides guidance on starting to use Azure Update Manager for update management of Azure virtual machines that currently use Microsoft Configuration Manager (MCM).
-Microsoft Configuration Manager (MCM) helps you to manage PCs and servers, keep software up-to-date, set configuration and security policies, and monitor system status.
+Microsoft Configuration Manager (MCM), previously known as System Center Configuration Manager (SCCM), helps you to manage PCs and servers, keep software up to date, set configuration and security policies, and monitor system status.
- The [Azure Migration tool](/mem/configmgr/core/support/azure-migration-tool) helps you to programmatically create Azure virtual machines (VMs) for Configuration Manager and installs the various site roles with default settings. The validation of new roles and removal of the on-premises site system role enables MCM to provide all the on-premises capabilities and experiences in Azure.
+MCM supports several [cloud services](/mem/configmgr/core/understand/use-cloud-services) that can supplement on-premises infrastructure and can help solve business problems such as:
+- How to manage clients that roam onto the internet.
+- How to provide content resources to isolated clients or resources on the intranet, outside your firewall.
+- How to scale out infrastructure when the physical hardware isn't available or isn't logically placed to support your needs.
-Additionally, you can use the native [Azure Update Manager](overview.md) to manage and govern update compliance for Windows and Linux machines across your deployments in Azure, on-premises, and on the other cloud platforms, from a single dashboard, with no operational cost for managing the patching infrastructure. Azure Update Manager is similar to the update management component of MCM that is designed as a standalone Azure service to provide SaaS experience on Azure to manage hybrid environments.
+Customers [extend and migrate an on-premises site to Azure](/mem/configmgr/core/support/azure-migration-tool), create Azure virtual machines (VMs) for Configuration Manager, and install the various site roles with default settings. Validating the new roles and removing the on-premises site system role enables MCM to provide all the on-premises capabilities and experiences in Azure. For more information, see [Configuration Manager on Azure FAQ](/mem/configmgr/core/understand/configuration-manager-on-azure).
-The MCM in Azure and Azure Update Manager can fulfill your patching requirements as per your requirement.
-- Using MCM, you can continue with the existing investments in MCM and the processes to maintain the patch update management cycle for Windows VMs.-- Using Azure Update Manager, you can achieve a consistent management of VMs and operating system updates across your cloud and hybrid environments. You don't need to maintain Azure virtual machines for hosting the different Configuration Manager roles and don't need an MCM license thereby reducing the total cost for maintaining the patch update management cycle for all the machines in your environment. [Learn more](https://techcommunity.microsoft.com/t5/windows-it-pro-blog/what-s-uup-new-update-style-coming-next-week/ba-p/3773065).
+## Migrate to Azure Update Manager
+
+MCM offers [multiple features and capabilities](/mem/configmgr/core/plan-design/changes/features-and-capabilities), and software [update management](/mem/configmgr/sum/understand/software-updates-introduction) is one of them. By using MCM in Azure, you can continue with your existing investments in MCM and its processes to manage the update cycle for Windows VMs.
+
+**Specifically for update management or patching**, depending on your requirements, you can also use the native [Azure Update Manager](overview.md) to manage and govern update compliance for Windows and Linux machines across your deployments in a consistent manner. Unlike MCM, which requires you to maintain Azure virtual machines to host the different Configuration Manager roles, Azure Update Manager is designed as a standalone Azure service that provides a SaaS experience on Azure for managing hybrid environments. You don't need a license to use Azure Update Manager.
+
+> [!NOTE]
+> Azure Update Manager doesn't provide migration support (for example, of existing configurations) for Azure VMs in MCM.
+
+## Software update management capability map
+
+The following table maps the **software update management capabilities** of MCM to Azure Update Manager.
+
+**Capability** | **Microsoft Configuration Manager** | **Azure Update Manager** |
+--- | --- | --- |
+Synchronize software updates between sites (Central Admin site, Primary, Secondary sites) | The top site (either central admin site or stand-alone primary site) connects to Microsoft Update to retrieve software updates. [Learn more](/mem/configmgr/sum/understand/software-updates-introduction). After the top site is synchronized, the child sites are synchronized. | There's no hierarchy of machines in Azure, so all machines connected to Azure receive updates from the source repository.
+Synchronize software updates/check for updates (retrieve patch metadata) | You can scan for updates periodically by configuring the software update point. [Learn more](/mem/configmgr/sum/get-started/synchronize-software-updates#to-schedule-software-updates-synchronization). | You can enable periodic assessment to scan for patches every 24 hours. [Learn more](assessment-options.md). |
+Configuring classifications/products to synchronize/scan/assess | You can choose the update classifications (security or critical updates) to synchronize/scan/assess. [Learn more](/mem/configmgr/sum/get-started/configure-classifications-and-products). | There's no such capability in Azure Update Manager; the entire software metadata is scanned. |
+Deploy software updates (install patches) | Provides three modes of deploying updates: <br> Manual deployment <br> Automatic deployment <br> Phased deployment. [Learn more](/mem/configmgr/sum/deploy-use/deploy-software-updates) | Manual deployment maps to deploying [one-time updates](deploy-updates.md), and automatic deployment maps to [scheduled updates](scheduled-patching.md) ([Automatic Deployment Rules (ADRs)](/mem/configmgr/sum/deploy-use/automatically-deploy-software-updates#BKMK_CreateAutomaticDeploymentRule) can be mapped to schedules). There's no phased deployment option.
## Manage software updates using Azure Update Manager
The MCM in Azure and Azure Update Manager can fulfill your patching requirements
1. Select the suitable [assessment](assessment-options.md) and [patching](updates-maintenance-schedules.md) options as per your requirement.
-## Map MCM capabilities to Azure Update Manager
-The following table explains the mapping capabilities of MCM software Update Management to Azure Update Manager.
+### Patch machines
-| **Capability** | **Microsoft Configuration Manager** | **Azure Update Manager**|
-| | | |
-|Synchronize software updates between sites(Central Admin site, Primary, Secondary sites)| The top site (either central admin site or stand-alone primary site) connects to Microsoft Update to retrieve software update. [Learn more](/mem/configmgr/sum/understand/software-updates-introduction). After the top sites are synchronized, the child sites are synchronized. | There's no hierarchy of machines in Azure and therefore all machines connected to Azure receive updates from the source repository. |
-|Synchronize software updates/check for updates (retrieve patch metadata). | You can scan for updates periodically by setting configuration on the Software update point. [Learn more](/mem/configmgr/sum/get-started/synchronize-software-updates#to-schedule-software-updates-synchronization). | You can enable periodic assessment to enable scan of patches every 24 hours. [Learn more](assessment-options.md). |
-|Configuring classifications/products to synchronize/scan/assess | You can choose the update classifications (security or critical updates) to synchronize/scan/assess. [Learn more](/mem/configmgr/sum/get-started/configure-classifications-and-products). | There's no such capability here. The entire software metadata is scanned.|
-|Deploy software updates (install patches)| Provides three modes of deploying updates: </br> Manual deployment </br> Automatic deployment </br> Phased deployment [Learn more](/mem/configmgr/sum/deploy-use/deploy-software-updates).| Manual deployment is mapped to deploying [one-time updates](deploy-updates.md) and Automatic deployment is mapped to [scheduled updates](scheduled-patching.md). (The [Automatic Deployment Rules (ADRs)](/mem/configmgr/sum/deploy-use/automatically-deploy-software-updates#BKMK_CreateAutomaticDeploymentRule) can be mapped to schedules). There's no phased deployment option. |
+After you set up the configuration for assessment and patching, you can deploy or install updates either through [on-demand updates](deploy-updates.md) (one-time or manual update) or [scheduled updates](scheduled-patching.md) (automatic update). You can also deploy updates by using [Azure Update Manager's API](manage-vms-programmatically.md).
-## Limitations in Azure Update Manager (preview)
+## Limitations in Azure Update Manager
The following are the current limitations: - **Orchestration groups with Pre/Post scripts** - [Orchestration groups](/mem/configmgr/sum/deploy-use/orchestration-groups) can't be created in Azure Update Manager to specify a maintenance sequence, allow some machines for updates at the same time and so on. (The orchestration groups allow you to use the pre/post scripts to run tasks before and after a patch deployment).
-### Patching machines
-After you set up configurations for assessment and patching, you can deploy/install either through [on-demand updates](deploy-updates.md) (one time or manual update) or [schedule updates](scheduled-patching.md) (automatic update) only. You can also deploy updates using [Azure Update Manager's API](manage-vms-programmatically.md).
- ## Frequently asked questions ### Where does Azure Update Manager get its updates from?
Azure Update Manager refers to the repository that the machines point to. Most W
### Can Azure Update Manager patch OS, SQL, and third-party software?
-Azure Update Manager refers to the repositories that the VMs point to. If the repository contains third party and SQL patches, Azure Update Manager can install SQL and third party patches.
-> [!NOTE]
-> By default, Windows VMs point to Windows Update repository that does not contain SQL and third party patches. If the VMs point to Microsoft Update, Azure Update Manager will patch OS, SQL, and third party updates.
+Azure Update Manager refers to the repositories (or endpoints) that the VMs point to. If the repository (or endpoint) contains updates for Microsoft products, third-party software, and so on, Azure Update Manager can install those patches.
+
+By default, Windows VMs point to the Windows Update server, which doesn't contain updates for other Microsoft products or third-party software. If the VMs point to Microsoft Update, Azure Update Manager patches the OS and Microsoft products.
+
+For third-party software patching, Azure Update Manager should be connected to WSUS, and you must publish the third-party updates. Third-party software for Windows VMs can't be patched unless those updates are available in WSUS.
### Do I need to configure WSUS to use Azure Update Manager?
-You don't need WSUS to deploy patches in Azure Update Manager. Typically, all the machines connect to the internet repository to get updates (unless the machines point to WSUS or local repository that isn't connected to the internet). [Learn more](/mem/configmgr/sum/).
+WSUS is one way to manage patches. Azure Update Manager refers to whichever endpoint it's pointed to (Windows Update, Microsoft Update, or WSUS).
## Next steps - [An overview on Azure Update Manager](overview.md)
update-center Manage Multiple Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-multiple-machines.md
Title: Manage multiple machines in Azure Update Manager (preview)
-description: The article details how to use Azure Update Manager (preview) in Azure to manage multiple supported machines and view their compliance state in the Azure portal.
+description: This article explains how to use Azure Update Manager (preview) in Azure to manage multiple supported machines and view their compliance state in the Azure portal.
Last updated 05/02/2023
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. > [!IMPORTANT]
-> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch orchestration to **Customer Managed Schedules (Preview)**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules will fail to patch the VMs.[Learn more](prerequsite-for-schedule-patching.md).
+> For a seamless scheduled patching experience, we recommend that for all Azure virtual machines (VMs), you update the patch orchestration to **Customer Managed Schedules (Preview)**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. For more information, see [Configure schedule patching on Azure VMs to ensure business continuity](prerequsite-for-schedule-patching.md).
-
-This article describes the various features that Update Manager (Preview) offers to manage the system updates on your machines. Using the Update Manager (preview), you can:
+This article describes the various features that Azure Update Manager (preview) offers to manage the system updates on your machines. By using Update Manager (preview), you can:
- Quickly assess the status of available operating system updates. - Deploy updates.-- Set up recurring update deployment schedule.
+- Set up a recurring update deployment schedule.
- Get insights on the number of machines managed.-- Information on how they're managed, and other relevant details. -
-Instead of performing these actions from a selected Azure VM or Arc-enabled server, you can manage all your machines in the Azure subscription.
+- Obtain information on how they're managed and other relevant details.
+Instead of performing these actions from a selected Azure VM or Azure Arc-enabled server, you can manage all your machines in the Azure subscription.
-## View update Manager (preview) status
+## View Update Manager (preview) status
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. To view update assessment across all machines, including Azure Arc-enabled servers navigate to **Update Manager(preview)**.
+1. To view update assessment across all machines, including Azure Arc-enabled servers, go to **Update Manager (preview)**.
- :::image type="content" source="./media/manage-multiple-machines/overview-page-inline.png" alt-text="Screenshot of update manager overview page in the Azure portal." lightbox="./media/manage-multiple-machines/overview-page-expanded.png":::
+ :::image type="content" source="./media/manage-multiple-machines/overview-page-inline.png" alt-text="Screenshot that shows the Update Manager Overview page in the Azure portal." lightbox="./media/manage-multiple-machines/overview-page-expanded.png":::
- In the **Overview** page - the summary tiles show the following status:
+ On the **Overview** page, the summary tiles show the following status:
- - **Filters**ΓÇöuse filters to focus on a subset of your resources. The selectors above the tiles return **Subscription**, **Resource group**, **Resource type** (Azure VMs and Arc-enabled servers) **Location**, and **OS** type (Windows or Linux) based on the Azure role-based access rights you've been granted. You can combine filters to scope to a specific resource.
+ - **Filters**: Use filters to focus on a subset of your resources. The selectors above the tiles return **Subscription**, **Resource group**, **Resource type** (Azure VMs and Azure Arc-enabled servers), **Location**, and **OS** type (Windows or Linux) based on the Azure role-based access rights you've been granted. You can combine filters to scope to a specific resource.
+ - **Update status of machines**: Shows the update status information for assessed machines that had applicable or needed updates. You can filter the results based on classification types. By default, all [classifications](../automation/update-management/overview.md#update-classifications) are selected. According to the classification selection, the tile is updated.
- - **Update status of machines**ΓÇöshows the update status information for assessed machines that had applicable or needed updates. You can filter the results based on classification types. By default, all [classifications](../automation/update-management/overview.md#update-classifications) are selected and as per the classification selection, the tile is updated.
-
- The graph provides a snapshot for all your machines in your subscription, regardless of whether you have used Update Manager (preview) for that machine. This assessment data comes from Azure Resource Graph, and it stores the data for seven days.
+ The graph provides a snapshot for all your machines in your subscription, regardless of whether you've used Update Manager (preview) for that machine. This assessment data comes from Azure Resource Graph, and it stores the data for seven days.
From the assessment data available, machines are classified into the following categories:
- - **No updates available**ΓÇöno updates are pending for these machines and these machines are up to date.
- - **Updates available**ΓÇöupdates are pending for these machines and these machines aren't up to date.
- - **Reboot Required**ΓÇöpending a reboot for the updates to take effect.
- - **No updates data**ΓÇöno assessment data is available for these machines.
+ - **No updates available**: No updates are pending for these machines and these machines are up to date.
+ - **Updates available**: Updates are pending for these machines and these machines aren't up to date.
+ - **Reboot required**: Pending a reboot for the updates to take effect.
+ - **No updates data**: No assessment data is available for these machines.
- The following could be the reasons for no assessment data:
- - No assessment has been done over the last seven days
- - The machine has an unsupported OS
+ The following reasons could explain why there's no assessment data:
+ - No assessment has been done over the last seven days.
+ - The machine has an unsupported OS.
- The machine is in an unsupported region and you can't perform an assessment.
- - **Patch orchestration configuration of Azure virtual machines** ΓÇö all the Azure machines inventoried in the subscription are summarized by each update orchestration method. Values are:
+ - **Patch orchestration configuration of Azure virtual machines**: All the Azure machines inventoried in the subscription are summarized by each update orchestration method. Values are:
- - **Customer Managed Schedules (Preview)**ΓÇöenables schedule patching on your existing VMs.
- - **Azure Managed - Safe Deployment**ΓÇöthis mode enables automatic VM guest patching for the Azure virtual machine. Subsequent patch installation is orchestrated by Azure.
- - **Image Default**ΓÇöfor Linux machines, it uses the default patching configuration.
- - **OS orchestrated**ΓÇöthe OS automatically updates the machine.
- - **Manual updates**ΓÇöyou control the application of patches to a machine by applying patches manually inside the machine. In this mode, automatic updates are disabled for Windows OS.
-
-
-
- For more information about each orchestration method see, [automatic VM guest patching for Azure VMs](../virtual-machines/automatic-vm-guest-patching.md#patch-orchestration-modes).
+ - **Customer Managed Schedules (preview)**: Enables schedule patching on your existing VMs.
+ - **Azure Managed - Safe Deployment**: Enables automatic VM guest patching for the Azure virtual machine. Subsequent patch installation is orchestrated by Azure.
+ - **Image Default**: For Linux machines, it uses the default patching configuration.
+ - **OS orchestrated**: The OS automatically updates the machine.
+ - **Manual updates**: You control the application of patches to a machine by applying patches manually inside the machine. In this mode, automatic updates are disabled for the Windows OS.
- - **Update installation status**ΓÇöby default, the tile shows the status for the last 30 days. Using the **Time** picker, you can choose a different range. The values are:
- - **Failed**ΓÇöis when one or more updates in the deployment have failed.
- - **Completed**ΓÇöis when the deployment ends successfully by the time range selected.
- - **Completed with warnings**ΓÇöis when the deployment is completed successfully but had warnings.
- - **In progress**ΓÇöis when the deployment is currently running.
+ For more information about each orchestration method, see [Automatic VM guest patching for Azure VMs](../virtual-machines/automatic-vm-guest-patching.md#patch-orchestration-modes).
-- Select the **Update status of machines** or **Patch orchestration configuration of Azure Virtual machines** to go to the **Machines** page. -- Select the **Update installation status**, to go to the **History** page.
+ - **Update installation status**: By default, the tile shows the status for the last 30 days. By using the **Time** picker, you can choose a different range. The values are:
+ - **Failed**: One or more updates in the deployment have failed.
+ - **Completed**: The deployment ends successfully by the time range selected.
+ - **Completed with warnings**: The deployment is completed successfully but had warnings.
+ - **In progress**: The deployment is currently running.
-- **Pending Windows updates** ΓÇö the tile shows the status of pending updates for Windows machines in your subscription.-- **Pending Linux updates** ΓÇö the tile shows the status of pending updates for Linux machines in your subscription.
+- Select **Update status of machines** or **Patch orchestration configuration of Azure virtual machines** to go to the **Machines** page.
+- Select **Update installation status** to go to the **History** page.
+- **Pending Windows updates**: Status of pending updates for Windows machines in your subscription.
+- **Pending Linux updates**: Status of pending updates for Linux machines in your subscription.
## Summary of machine status
-Update Manager (preview) in Azure enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to Update Manager (preview). The section shows how you can filter information to understand the update status of your machine resources, and for multiple machines, initiate an update assessment, update deployment, and manage their update settings.
+Update Manager (preview) in Azure enables you to browse information about your Azure VMs and Azure Arc-enabled servers across your Azure subscriptions relevant to Update Manager (preview).
- In the Update Manager (preview) page, select **Machines** from the left menu.
+This section shows how you can filter information to understand the update status of your machine resources. For multiple machines, you can see how to begin an update assessment, begin an update deployment, and manage their update settings.
- :::image type="content" source="./media/manage-multiple-machines/update-center-machines-page-inline.png" alt-text="Screenshot of Update Manager(preview) Machines page in the Azure portal." lightbox="./media/manage-multiple-machines/update-center-machines-page-expanded.png":::
+ On the **Update Manager (preview)** page, select **Machines** from the left menu.
- On the page, the table lists all the machines in the specified subscription, and for each machine it helps you understand the following details that show up based on the latest assessment.
- - **Update status**ΓÇöthe total number of updates available identified as applicable to the machine's OS.
- - **Operating system**ΓÇöthe operating system running on the machine.
- - **Resource type**ΓÇöthe machine is either hosted in Azure or is a hybrid machine managed by Arc-enabled servers.
- - **Patch orchestration**ΓÇö the patches are applied following availability-first principles and managed by Azure.
- - **Periodic assessment**ΓÇöan update setting that allows you to enable automatic periodic checking of updates.
+ :::image type="content" source="./media/manage-multiple-machines/update-center-machines-page-inline.png" alt-text="Screenshot that shows the Update Manager (preview) Machines page in the Azure portal." lightbox="./media/manage-multiple-machines/update-center-machines-page-expanded.png":::
-The column **Patch Orchestration**, in the machine's patch mode has the following values:
+ The table lists all the machines in the specified subscription, and for each machine it helps you understand the following details that show up based on the latest assessment:
- * **Customer Managed Schedules (Preview)**ΓÇöenables schedule patching on your existing VMs. The new patch orchestration option enables the two VM properties - **Patch mode = Azure-orchestrated** and **BypassPlatformSafetyChecksOnUserSchedule = TRUE** on your behalf after receiving your consent.
- * **Azure Managed - Safe Deployment**ΓÇöfor a group of virtual machines undergoing an update, the Azure platform will orchestrate updates. The VM is set to [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md).(i.e), the patch mode is **AutomaticByPlatform**.
- * **Automatic by OS**ΓÇöthe machine is automatically updated by the OS.
- * **Image Default**ΓÇöfor Linux machines, its default patching configuration is used.
- * **Manual**ΓÇöyou control the application of patches to a machine by applying patches manually inside the machine. In this mode automatic updates are disabled for Windows OS.
-
+ - **Update status**: The total number of updates available identified as applicable to the machine's OS.
+ - **Operating system**: The operating system running on the machine.
+ - **Resource type**: The machine is either hosted in Azure or is a hybrid machine managed by Azure Arc-enabled servers.
+ - **Patch orchestration**: The patches are applied following availability-first principles and managed by Azure.
+ - **Periodic assessment**: An update setting that allows you to enable automatic periodic checking of updates.
+
+The **Patch orchestration** column reflects the machine's patch mode and has the following values:
-The machine's statusΓÇöfor an Azure VM, it shows it's [power state](../virtual-machines/states-billing.md#power-states-and-billing), and for an Arc-enabled server, it shows if it's connected or not.
+ * **Customer Managed Schedules (preview)**: Enables schedule patching on your existing VMs. The new patch orchestration option enables the two VM properties: `Patch mode = Azure-orchestrated` and `BypassPlatformSafetyChecksOnUserSchedule = TRUE` on your behalf after receiving your consent. (See the sketch after this list for one way to set these properties programmatically.)
+ * **Azure Managed - Safe Deployment**: For a group of virtual machines undergoing an update, the Azure platform orchestrates updates. The VM is set to [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md); that is, the patch mode is `AutomaticByPlatform`.
+ * **Automatic by OS**: The machine is automatically updated by the OS.
+ * **Image default**: For Linux machines, its default patching configuration is used.
+ * **Manual**: You control the application of patches to a machine by applying patches manually inside the machine. In this mode, automatic updates are disabled for Windows OS.
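The following is a minimal sketch, not the documented procedure, of setting the two VM properties that the **Customer Managed Schedules (preview)** option configures on your behalf. The REST property path, the `api-version` value, and the placeholder names are assumptions inferred from the description above; verify them against the scheduled patching prerequisites article before relying on this.

```python
# Minimal sketch: set patchMode and bypassPlatformSafetyChecksOnUserSchedule on a Windows VM.
# Assumes the azure-identity and requests packages; property path and api-version are
# assumptions based on this article's description, and names in angle brackets are placeholders.
from azure.identity import DefaultAzureCredential
import requests

SUB, RG, VM = "<subscription-id>", "<resource-group>", "<vm-name>"
url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Compute/virtualMachines/{VM}?api-version=2023-03-01"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

body = {
    "properties": {
        "osProfile": {
            "windowsConfiguration": {
                "patchSettings": {
                    "patchMode": "AutomaticByPlatform",
                    "automaticByPlatformSettings": {
                        "bypassPlatformSafetyChecksOnUserSchedule": True
                    },
                }
            }
        }
    }
}

resp = requests.patch(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json()["properties"]["osProfile"]["windowsConfiguration"]["patchSettings"])
```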
-Use filters to focus on a subset of your resources. The selectors above the tiles return subscriptions, resource groups, resource types (that is, Azure VMs and Arc-enabled servers), regions, etc. and are based on the Azure role-based access rights you've been granted. You can combine filters to scope to a specific resource.
+**The machine's status**: For an Azure VM, it shows its [power state](../virtual-machines/states-billing.md#power-states-and-billing). For an Azure Arc-enabled server, it shows if it's connected or not.
-The summary tiles at the top of the page summarize the number of machines that have been assessed and their update status.
+Use filters to focus on a subset of your resources. The selectors above the tiles return subscriptions, resource groups, resource types (that is, Azure VMs and Azure Arc-enabled servers), and regions. They're based on the Azure role-based access rights you've been granted. You can combine filters to scope to a specific resource.
+
+The summary tiles at the top of the page summarize the number of machines that have been assessed and their update status.
To manage the machine's update settings, see [Manage update configuration settings](manage-update-settings.md). ### Check for updates
-For machines that haven't had a compliance assessment scan for the first time, you can select one or more of them from the list and then select the **Check for updates**. You'll receive status messages as the configuration is performed.
-
- :::image type="content" source="./media/manage-multiple-machines/update-center-assess-now-multi-selection-inline.png" alt-text="Screenshot of initiating a scan assessment for selected machines with the check for updates option." lightbox="./media/manage-multiple-machines/update-center-assess-now-multi-selection-expanded.png":::
+For machines that haven't yet had their first compliance assessment scan, select one or more of them from the list and then select **Check for updates**. You receive status messages as the configuration is performed.
- Otherwise, a compliance scan is initiated, and then the results are forwarded and stored in **Azure Resource Graph**. This process takes several minutes. When the assessment is completed, a confirmation message appears on the page.
+ :::image type="content" source="./media/manage-multiple-machines/update-center-assess-now-multi-selection-inline.png" alt-text="Screenshot that shows initiating a scan assessment for selected machines with the Check for updates option." lightbox="./media/manage-multiple-machines/update-center-assess-now-multi-selection-expanded.png":::
- :::image type="content" source="./media/manage-multiple-machines/update-center-assess-now-complete-banner-inline.png" alt-text="Screenshot of assessment banner on Manage Machines page." lightbox="./media/manage-multiple-machines/update-center-assess-now-complete-banner-expanded.png":::
+ Otherwise, a compliance scan begins and the results are forwarded and stored in Azure Resource Graph. This process takes several minutes. When the assessment is finished, a confirmation message appears on the page.
+ :::image type="content" source="./media/manage-multiple-machines/update-center-assess-now-complete-banner-inline.png" alt-text="Screenshot that shows an assessment banner on the Manage Machines page." lightbox="./media/manage-multiple-machines/update-center-assess-now-complete-banner-expanded.png":::
-Select a machine from the list to open Update Manager (preview) scoped to that machine. Here, you can view its detailed assessment status, update history, configure its patch orchestration options, and initiate an update deployment.
+Select a machine from the list to open Update Manager (preview) scoped to that machine. Here, you can view its detailed assessment status and update history, configure its patch orchestration options, and begin an update deployment.
### Deploy the updates
-For assessed machines that are reporting updates available, select one or more of the machines from the list and initiate an update deployment that starts immediately. Select the machine and go to **One-time update**.
+For assessed machines that are reporting updates available, select one or more of the machines from the list and begin an update deployment that starts immediately. Select the machine and go to **One-time update**.
- :::image type="content" source="./media/manage-multiple-machines/update-center-install-updates-now-multi-selection-inline.png" alt-text="Screenshot of install one time updates for machine(s) on updates preview page." lightbox="./media/manage-multiple-machines/update-center-install-updates-now-multi-selection-expanded.png":::
-
- A notification appears to confirm that an activity has started and another is created when it's completed. When it's successfully completed, the installation operation results are available to view from either the **Update history** tab, when you select the machine from the **Machines** page, or on the **History** page, which you're redirected to automatically after initiating the update deployment. The status of the operation can be viewed at any time from the [Azure Activity log](../azure-monitor/essentials/activity-log.md).
+ :::image type="content" source="./media/manage-multiple-machines/update-center-install-updates-now-multi-selection-inline.png" alt-text="Screenshot that shows installing one-time updates for machines on the Updates (Preview) page." lightbox="./media/manage-multiple-machines/update-center-install-updates-now-multi-selection-expanded.png":::
-### Set up a recurring update deployment
+   A notification confirms when an activity starts, and another tells you when it finishes. After it finishes successfully, you can view the installation operation results on the **Update history** tab when you select the machine from the **Machines** page, or on the **History** page, which you're redirected to automatically after you begin the update deployment. You can view the status of the operation at any time from the [Azure activity log](../azure-monitor/essentials/activity-log.md).
-You can create a recurring update deployment for your machines. Select your machine and select **Scheduled updates**. This opens [Create new maintenance configuration](scheduled-patching.md) flow.
+### Set up a recurring update deployment
+You can create a recurring update deployment for your machines. Select your machine and select **Scheduled updates**. A [Create new maintenance configuration](scheduled-patching.md) flow opens.
## Update deployment history
-Update Manager (preview) enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to Update Manager (preview). You can filter information to understand the update assessment and deployment history for multiple machines. In Update Manager (preview), select **History** from the left menu.
-
+Update Manager (preview) enables you to browse information about your Azure VMs and Azure Arc-enabled servers across your Azure subscriptions relevant to Update Manager (preview). You can filter information to understand the update assessment and deployment history for multiple machines. On the **Update Manager (preview)** page, select **History** from the left menu.
## Update deployment history by machines
-Provides a summarized status of update and assessment actions performed against your Azure VMs and Arc-enabled servers. You can also drill into a specific machine to view update-related details and manage it directly, review the detailed update or assessment history for the machine, and other related details in the table.
+The update deployment history provides a summarized status of update and assessment actions performed against your Azure VMs and Azure Arc-enabled servers. You can also drill into a specific machine to view update-related details and manage it directly. You can review the detailed update or assessment history for the machine and other related details in the table.
:::image type="content" source="./media/manage-multiple-machines/update-center-history-page-inline.png" alt-text="Screenshot of update center History page in the Azure portal." lightbox="./media/manage-multiple-machines/update-center-history-page-expanded.png":::
+Each record shows:
+ - **Machine Name**
- - **Status**
+ - **Status**
- **Update installed**
- - **Update operation**
- - **Operation type**
+ - **Update operation**
+ - **Operation type**
- **Operation start time** - **Resource Type** - **Tags** - **Last assessed time** ## Update deployment history by maintenance run ID
-In the **History** page, select **By maintenance run ID** to view the history of the maintenance run schedules. Each record shows
- :::image type="content" source="./media/manage-multiple-machines/update-center-history-by-maintenance-run-id-inline.png" alt-text="Screenshot of update center History page by maintenance run ID in the Azure portal." lightbox="./media/manage-multiple-machines/update-center-history-by-maintenance-run-id-expanded.png":::
+On the **History** page, select **By maintenance run ID** to view the history of the maintenance run schedules.
+
+ :::image type="content" source="./media/manage-multiple-machines/update-center-history-by-maintenance-run-id-inline.png" alt-text="Screenshot that shows the update center History page By maintenance run ID in the Azure portal." lightbox="./media/manage-multiple-machines/update-center-history-by-maintenance-run-id-expanded.png":::
+
+Each record shows:
- **Maintenance run ID** - **Status** - **Updated machines**
+- **Maintenance Configuration**
- **Operation start time** - **Operation end time**
-When you select any one maintenance run ID record, you can view an expanded status of the maintenance run. It contains information about machines and updates. It includes the number of machines that were updated and updates installed on them, along with the status of each of the machines in the form of a pie chart. At the end of the page, it contains a list view of both machines and updates that were a part of this maintenance run.
-
- :::image type="content" source="./media/manage-multiple-machines/update-center-maintenance-run-record-inline.png" alt-text="Screenshot of maintenance run ID record." lightbox="./media/manage-multiple-machines/update-center-maintenance-run-record-expanded.png":::
+When you select any one maintenance run ID record, you can view an expanded status of the maintenance run. It contains information about machines and updates. It includes the number of machines that were updated and updates installed on them. A pie chart shows the status of each of the machines. At the end of the page, a list view shows both machines and updates that were a part of this maintenance run.
+ :::image type="content" source="./media/manage-multiple-machines/update-center-maintenance-run-record-inline.png" alt-text="Screenshot that shows a maintenance run ID record." lightbox="./media/manage-multiple-machines/update-center-maintenance-run-record-expanded.png":::
### Resource Graph
-The update assessment and deployment data are available for querying in Azure Resource Graph. You can apply this data to scenarios that include security compliance, security operations, and troubleshooting. Select **Go to resource graph** to go to the Azure Resource Graph Explorer. It enables running Resource Graph queries directly in the Azure portal. Resource Graph supports Azure CLI, Azure PowerShell, Azure SDK for Python, and more. For more information, see [First query with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md).
+The update assessment and deployment data are available for querying in Azure Resource Graph. You can apply this data to scenarios that include security compliance, security operations, and troubleshooting. Select **Go to resource graph** to go to the Azure Resource Graph Explorer. It enables running Resource Graph queries directly in the Azure portal. Resource Graph supports the Azure CLI, Azure PowerShell, Azure SDK for Python, and more. For more information, see [First query with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md).
-When the Resource Graph Explorer opens, it is automatically populated with the same query used to generate the results presented in the table on the **History** page in Update Manager (preview). Ensure that you review the [query Update logs](query-logs.md) article to learn about the log records and their properties, and the sample queries included.
+When the Resource Graph Explorer opens, it's automatically populated with the same query used to generate the results presented in the table on the **History** page in Update Manager (preview). Ensure that you review [Overview of query logs in Azure Update Manager (preview)](query-logs.md) to learn about the log records and their properties, and the sample queries included.
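If you prefer to query outside the portal, here's a minimal sketch using the Azure SDK for Python. The `patchinstallationresources` table and the `properties.status` field are assumptions based on the query logs article linked above; verify the exact schema in Resource Graph Explorer first.

```python
# Minimal sketch: summarize recent update installation results from Azure Resource Graph.
# Assumes the azure-identity and azure-mgmt-resourcegraph packages are installed and that
# the patchinstallationresources table exposes a properties.status field (an assumption).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

QUERY = """
patchinstallationresources
| summarize runs = count() by status = tostring(properties.status)
"""

client = ResourceGraphClient(DefaultAzureCredential())
response = client.resources(QueryRequest(subscriptions=["<subscription-id>"], query=QUERY))
for row in response.data:
    print(row["status"], row["runs"])
```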
## Next steps
-* To set up and manage recurring deployment schedules, see [Schedule recurring updates](scheduled-patching.md)
-* To view update assessment and deployment logs generated by update manager (preview), see [query logs](query-logs.md).
+* To set up and manage recurring deployment schedules, see [Schedule recurring updates](scheduled-patching.md).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [Query logs](query-logs.md).
update-center Manage Update Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-update-settings.md
Title: Manage update configuration settings in Azure Update Manager (preview)
-description: The article describes how to manage the update settings for your Windows and Linux machines managed by Azure Update Manager (preview).
+description: This article describes how to manage the update settings for your Windows and Linux machines managed by Azure Update Manager (preview).
Last updated 05/30/2023
-# Manage Update configuration settings
+# Manage update configuration settings
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-The article describes how to configure update settings from Azure Update Manager (preview), to control the update settings on your Azure VMs and Arc-enabled servers for one or more machines.
+This article describes how to configure update settings from Azure Update Manager (preview) to control the update settings on your Azure virtual machines (VMs) and Azure Arc-enabled servers for one or more machines.
+## Configure settings on a single VM
-## Configure settings on single VM
+To configure update settings on your machines on a single VM:
-To configure update settings on your machines on a single VM, follow these steps:
+You can schedule updates from **Overview** or **Machines** on the **Update Manager (preview)** page or from the selected VM.
->[!NOTE]
-> You can schedule updates from the Overview blade or Machines blade in Update Manager (preview) page or from the selected VM.
-
-# [From Overview blade](#tab/manage-single-overview)
+# [From Overview pane](#tab/manage-single-overview)
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update Manager**, select **Overview**, select your **Subscription**, and select **Update settings**.
-1. In **Change update settings**, select **+Add machine** to select the machine for which you want to change the update settings.
-1. In **Select resources**, select the machine and select **Add**.
-1. In the **Change update settings** page, you will see the machine classified as per the operating system with the list of following updates that you can select and apply.
+1. On the **Update Manager** page, select **Overview**, select your subscription, and then select **Update settings**.
+1. On the **Change update settings** pane, select **Add machine** to select the machine for which you want to change the update settings.
+1. On the **Select resources** pane, select the machine and select **Add**.
+1. On the **Change update settings** page, you see the machine classified according to its operating system, along with the list of update settings that you can select and apply.
- :::image type="content" source="./media/manage-update-settings/update-setting-to-change.png" alt-text="Highlighting the Update settings to change option in the Azure portal.":::
+ :::image type="content" source="./media/manage-update-settings/update-setting-to-change.png" alt-text="Screenshot that shows highlighting the Update settings to change option in the Azure portal.":::
- The following update settings are available for configuration for the selected machine(s):
-
- - **Periodic assessment** - The **periodic Assessment** is set to run every 24 hours. You can either enable or disable this setting.
-
- - **Hot patch** - You can enable [hot patching](../automanage/automanage-hotpatch.md) for Windows Server Azure Edition Virtual Machines (VMs). Hot patching is a new way to install updates on supported *Windows Server Azure Edition* virtual machines that doesn't require a reboot after installation. You can use Update Manager (preview) to install other patches by scheduling patch installation or triggering immediate patch deployment. You can enable, disable or reset this setting.
+ The following update settings are available for configuration for the selected machines:
- - **Patch orchestration** option provides the following:
+ - **Periodic assessment**: The periodic assessment is set to run every 24 hours. You can either enable or disable this setting.
+ - **Hotpatch**: You can enable [hotpatching](../automanage/automanage-hotpatch.md) for Windows Server Azure Edition VMs. Hotpatching is a new way to install updates on supported Windows Server Azure Edition VMs that doesn't require a reboot after installation. You can use Update Manager (preview) to install other patches by scheduling patch installation or triggering immediate patch deployment. You can enable, disable, or reset this setting.
+    - **Patch orchestration**: Provides the following options:
- - **Customer Managed Schedules (Preview)**ΓÇöenables schedule patching on your existing VMs. The new patch orchestration option enables the two VM properties - **Patch mode = Azure-orchestrated** and **BypassPlatformSafetyChecksOnUserSchedule = TRUE** on your behalf after receiving your consent.
- - **Azure Managed - Safe Deployment**ΓÇöfor a group of virtual machines undergoing an update, the Azure platform will orchestrate updates. (not applicable for Arc-enabled server). The VM is set to [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md).(i.e), the patch mode is **AutomaticByPlatform**. There are different implications depending on whether customer schedule is attached to it or not. For more information, see the [user scenarios](prerequsite-for-schedule-patching.md#user-scenarios).
- - Available *Critical* and *Security* patches are downloaded and applied automatically on the Azure VM using [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
- - **Windows Automatic Updates** (AutomaticByOS) - When the workload running on the VM doesn't have to meet availability targets, the operating system updates are automatically downloaded and installed. Machines are rebooted as needed.
- - **Manual updates** - This mode disables Windows automatic updates on VMs. Patches are installed manually or using a different solution.
- - **Image Default** - Only supported for Linux Virtual Machines, this mode uses the default patching configuration in the image used to create the VM.
+ - **Customer Managed Schedules (preview)**: Enables schedule patching on your existing VMs. The new patch orchestration option enables the two VM properties, `Patch mode = Azure-orchestrated` and `BypassPlatformSafetyChecksOnUserSchedule = TRUE`, on your behalf after receiving your consent.
+        - **Azure Managed - Safe Deployment**: For a group of VMs undergoing an update, the Azure platform orchestrates updates (not applicable for Azure Arc-enabled servers). The VM is set to [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md); that is, the patch mode is `AutomaticByPlatform`. There are different implications depending on whether a customer schedule is attached to it. For more information, see [User scenarios](prerequsite-for-schedule-patching.md#user-scenarios).
+ - Available *Critical* and *Security* patches are downloaded and applied automatically on the Azure VM by using [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic. The process includes rebooting the VM as required.
+ - **Windows Automatic Updates** (`AutomaticByOS`): When the workload running on the VM doesn't have to meet availability targets, the operating system updates are automatically downloaded and installed. Machines are rebooted as needed.
+ - **Manual updates**: This mode disables Windows automatic updates on VMs. Patches are installed manually or by using a different solution.
+ - **Image Default**: Only supported for Linux VMs. This mode uses the default patching configuration in the image used to create the VM.
1. After you make the selection, select **Save**. -
-# [From Machines blade](#tab/manage-single-machines)
+# [From Machines pane](#tab/manage-single-machines)
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update Manager**, select **Machines** > your **subscription**.
+1. On the **Update Manager** page, select **Machines** and select your subscription.
1. Select the checkbox of your machine from the list and select **Update settings**. 1. Select **Update Settings** to proceed with the type of update for your machine.
-1. In **Change update settings**, select **+Add machine** to select the machine for which you want to change the update settings.
-1. In **Select resources**, select the machine and select **Add** and follow the procedure from step 5 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm).
+1. On the **Change update settings** pane, select **Add machine** to select the machine for which you want to change the update settings.
+1. On the **Select resources** pane, select the machine and select **Add**. Follow the procedure from step 5 listed in **From Overview pane** of [Configure settings on a single VM](#configure-settings-on-a-single-vm).
# [From a selected VM](#tab/singlevm-schedule-home) 1. Select your virtual machine and the **virtual machines | Updates** page opens. 1. Under **Operations**, select **Updates**.
-1. In **Updates (Preview)**, select **Update Settings**.
-1. In **Change update settings**, you can select the update settings that you want to change for your machine and follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm).
+1. On the **Updates (Preview)** pane, select **Update Settings**.
+1. On the **Change update settings** pane, you can select the update settings that you want to change for your machine. Follow the procedure from step 3 listed in **From Overview pane** of [Configure settings on a single VM](#configure-settings-on-a-single-vm).
A notification appears to confirm that the update settings are successfully changed.
## Configure settings at scale
-To configure update settings on your machines at scale, follow these steps:
+Follow these steps to configure update settings on your machines at scale.
->[!NOTE]
-> You can schedule updates from the Overview blade or Machines blade.
+> [!NOTE]
+> You can schedule updates from **Overview** or **Machines**.
+
+# [From Overview pane](#tab/manage-scale-overview)
-# [From Overview blade](#tab/manage-scale-overview)
-
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update Manager**, select **Overview**, select your **Subscription** and select **Update settings**.
+1. In **Update Manager**, select **Overview**, select your subscription, and then select **Update settings**.
-1. In **Change update settings**, select the update settings that you want to change for your machines. Follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm).
+1. In **Change update settings**, select the update settings that you want to change for your machines. Follow the procedure from step 3 listed in **From Overview pane** of [Configure settings on a single VM](#configure-settings-on-a-single-vm).
-# [From Machines blade](#tab/manage-scale-machines)
+# [From Machines pane](#tab/manage-scale-machines)
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update Manager**, select **Machines** > your **subscription**, and select the checkbox for all your machines from the list.
+1. In **Update Manager**, select **Machines** and select your subscription. Select the checkbox for all your machines from the list.
1. Select **Update Settings** to proceed with the type of update for your machines.
-1. In **Change update settings**, you can select the update settings that you want to change for your machine and follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm).
+1. In **Change update settings**, you can select the update settings that you want to change for your machine. Follow the procedure from step 3 listed in **From Overview pane** of [Configure settings on a single VM](#configure-settings-on-a-single-vm).
A notification appears to confirm that the update settings are successfully changed. -- ## Next steps
-* [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal.
-* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview).
+* [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Azure Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal.
+* To view update assessment and deployment logs generated by Update Manager (preview), see [Query logs](query-logs.md).
+* To troubleshoot issues, see [Troubleshoot issues with Update Manager (preview)](troubleshoot.md).
update-center Manage Updates Customized Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-updates-customized-images.md
Title: Overview of customized images in Azure Update Manager (preview).
-description: The article describes about customized images, how to register, validate the customized images for public preview and its limitations.
+ Title: Overview of customized images in Azure Update Manager (preview)
+description: This article describes customized image support, how to register and validate customized images for public preview, and limitations.
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-This article describes the customized image support, how to enable the subscription and its limitations.
+This article describes customized image support, how to enable a subscription, and limitations.
> [!NOTE]
-> - Currently, we support [generalized Azure Compute Gallery (SIG) custom images](../virtual-machines/linux/imaging.md#generalized-images). Automatic VM guest patching for generalized custom images is not supported.
-> - [Specialized Azure Compute Gallery (SIG) custom images](../virtual-machines/linux/imaging.md#specialized-images) and non-Azure Compute gallery images (including the VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery) are not supported yet.
+> - Currently, we support [generalized Azure Compute Gallery (SIG) custom images](../virtual-machines/linux/imaging.md#generalized-images). Automatic virtual machine (VM) guest patching for generalized custom images isn't supported.
+> - [Specialized Azure Compute Gallery (SIG) custom images](../virtual-machines/linux/imaging.md#specialized-images) and non-Azure Compute Gallery images (including the VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery) aren't supported yet.
## Asynchronous check to validate customized image support
-If you're using the Azure Compute Gallery (formerly known as Shared Image Gallery) to create customized images, you can use Update Manager (preview) operations such as Check for updates, One-time update, Schedule updates, or Periodic assessment to validate if the virtual machines are supported for guest patching and then initiate patching if the VMs are supported.
+If you're using Azure Compute Gallery (formerly known as Shared Image Gallery) to create customized images, you can use Update Manager (preview) operations such as **Check for updates**, **One-time update**, **Schedule updates**, or **Periodic assessment** to validate if the VMs are supported for guest patching. If the VMs are supported, you can begin patching.
-Unlike marketplace images where support is validated even before Update Manager operation is triggered. Here, there are no pre-existing validations in place and the Update Manager operations are triggered and only their success or failure determines support.
+With marketplace images, support is validated even before the Update Manager operation is triggered. With customized images, there are no preexisting validations in place; the Update Manager operations are triggered, and only their success or failure determines support.
-For instance, assessment call, will attempt to fetch the latest patch that is available from the image's OS family to check support. It stores this support-related data in Azure Resource Graph (ARG) table, which you can query to see the support status for your Azure Compute Gallery image.
+For instance, an assessment call attempts to fetch the latest patch that's available from the image's OS family to check support. It stores this support-related data in an Azure Resource Graph table, which you can query to see the support status for your Azure Compute Gallery image.
+## Enable a subscription for public preview
-## Enable Subscription for Public Preview
-
-To self register your subscription for Public preview in Azure portal, follow these steps:
+To self-register your subscription for public preview in the Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com) and select **More services**.
- :::image type="content" source="./media/manage-updates-customized-images/access-more-services.png" alt-text="Screenshot that shows how to access more services option.":::
+ :::image type="content" source="./media/manage-updates-customized-images/access-more-services.png" alt-text="Screenshot that shows how to access the More services option.":::
-1. In **All services** page, search for *Preview Features*.
+1. On the **All services** page, search for **Preview features**.
:::image type="content" source="./media/manage-updates-customized-images/access-preview-services.png" alt-text="Screenshot that shows how to access preview features.":::
-1. In **Preview features** page, enter *gallery* and select *VM Guest Patch Gallery Image Preview*.
+1. On the **Preview features** page, enter **gallery** and select **VM Guest Patch Gallery Image Preview**.
- :::image type="content" source="./media/manage-updates-customized-images/access-gallery.png" alt-text="Screenshot that shows how to access gallery.":::
+ :::image type="content" source="./media/manage-updates-customized-images/access-gallery.png" alt-text="Screenshot that shows how to access the gallery.":::
-1. In **VM Guest Patch Gallery Image Preview**, select **Register** to register your subscription.
+1. On the **VM Guest Patch Gallery Image Preview** page, select **Register** to register your subscription.
- :::image type="content" source="./media/manage-updates-customized-images/register-preview.png" alt-text="Screenshot that shows how to register the preview feature.":::
-
+ :::image type="content" source="./media/manage-updates-customized-images/register-preview.png" alt-text="Screenshot that shows how to register the Preview feature.":::
## Prerequisites to test the Azure Compute Gallery custom images (preview) -- Register the subscription for preview using the steps mentioned in [Enable Subscription for Public Preview](#enable-subscription-for-public-preview).-- Ensure that the VM in which you intend to execute the API calls must be in the same subscription that is enrolled for the feature.
+- Register the subscription for preview by following the steps in [Enable a subscription for public preview](#enable-a-subscription-for-public-preview).
+- Ensure that the VM where you intend to run the API calls is in the same subscription that's enrolled for the feature.
## Check the preview
-Initiate the asynchronous support check using either of the following APIs:
+Start the asynchronous support check by using either one of the following APIs:
+
+- API Action Invocation:
+ 1. [Assess patches](/rest/api/compute/virtual-machines/assess-patches?tabs=HTTP).
+ 1. [Install patches](/rest/api/compute/virtual-machines/install-patches?tabs=HTTP).
-1. **API Action Invocation**
- 1. [Assess patches](/rest/api/compute/virtual-machines/assess-patches?tabs=HTTP)
- 1. [Install patches](/rest/api/compute/virtual-machines/install-patches?tabs=HTTP)
+- Portal operations. Try the preview:
+ 1. [On-demand check for updates](view-updates.md)
+ 1. [One-time update](deploy-updates.md)
-1. **Portal operations**: Try the preview:
- 1. [On demand check for updates](view-updates.md).
- 1. [One-time update](deploy-updates.md).
+Validate the VM support state for Azure Resource Graph:
-**Validate the VM support state**
+- Table:
-1. **Azure Resource Graph**
- 1. Table
- - `patchassessmentresources`
- 1. Resource
- - `Microsoft.compute/virtualmachines/patchassessmentresults/configurationStatus.vmGuestPatchReadiness.detectedVMGuestPatchSupportState. [Possible values: Unknown, Supported, Unsupported, UnableToDetermine]`
+ `patchassessmentresources`
+- Resource:
+
+ `Microsoft.compute/virtualmachines/patchassessmentresults/configurationStatus.vmGuestPatchReadiness.detectedVMGuestPatchSupportState. [Possible values: Unknown, Supported, Unsupported, UnableToDetermine]`
- :::image type="content" source="./media/manage-updates-customized-images/resource-graph-view.png" alt-text="Screenshot that shows the resource in Azure Resource Graph Explorer.":::
+ :::image type="content" source="./media/manage-updates-customized-images/resource-graph-view.png" alt-text="Screenshot that shows the resource in Azure Resource Graph Explorer.":::
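To check the support state outside the portal, here's a minimal sketch that queries the table with the Azure SDK for Python. The property path under `properties` is an assumption derived from the resource path listed above, so confirm it in Resource Graph Explorer.

```python
# Minimal sketch: read the detected VM guest patch support state from Azure Resource Graph.
# Assumes azure-identity and azure-mgmt-resourcegraph are installed; the property path is
# inferred from the resource listed above and should be verified in Resource Graph Explorer.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

QUERY = """
patchassessmentresources
| where type =~ 'microsoft.compute/virtualmachines/patchassessmentresults'
| extend supportState = tostring(properties.configurationStatus.vmGuestPatchReadiness.detectedVMGuestPatchSupportState)
| project id, supportState
"""

client = ResourceGraphClient(DefaultAzureCredential())
response = client.resources(QueryRequest(subscriptions=["<subscription-id>"], query=QUERY))
for row in response.data:
    print(row["id"], row["supportState"])
```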
-We recommend that you execute the Assess Patches API once the VM is provisioned and the prerequisites are set for Public preview. This validates the support state of the VM. If the VM is supported, you can execute the Install Patches API to initiate the patching.
+We recommend that you run the Assess Patches API after the VM is provisioned and the prerequisites are set for public preview. This action validates the support state of the VM. If the VM is supported, you can run the Install Patches API to begin the patching.
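For reference, here's a minimal sketch of those two calls with the Azure SDK for Python (`azure-mgmt-compute`). The resource names are placeholders and the installation parameters are illustrative, so adjust them to your environment.

```python
# Minimal sketch: trigger an on-demand patch assessment and, if the VM is supported,
# an on-demand patch installation. Assumes azure-identity and azure-mgmt-compute are
# installed and the signed-in identity has rights on the VM; names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import (
    VirtualMachineInstallPatchesParameters,
    WindowsParameters,
)

SUB, RG, VM = "<subscription-id>", "<resource-group>", "<vm-name>"
compute = ComputeManagementClient(DefaultAzureCredential(), SUB)

# Assess Patches is a long-running operation; its result reports the VM's patch state.
assessment = compute.virtual_machines.begin_assess_patches(RG, VM).result()
print(assessment.status, assessment.critical_and_security_patch_count)

# Install Patches with illustrative parameters (2-hour window, reboot only if required).
install_params = VirtualMachineInstallPatchesParameters(
    maximum_duration="PT2H",
    reboot_setting="IfRequired",
    windows_parameters=WindowsParameters(classifications_to_include=["Critical", "Security"]),
)
result = compute.virtual_machines.begin_install_patches(RG, VM, install_params).result()
print(result.status)
```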
## Limitations
-1. Currently, it is only applicable to Azure Compute Gallery (SIG) images and not to non-Azure Compute Gallery custom images. The Azure Compute Gallery images are of two types - generalized and specialized. Following are the supported scenarios for both:
+Currently, this feature applies only to Azure Compute Gallery (SIG) images and not to non-Azure Compute Gallery custom images. Azure Compute Gallery images are of two types: generalized and specialized. The following table lists the supported and unsupported scenarios for both types.
- | Images | **Currently supported scenarios** | **Unsupported scenarios** |
- | | | |
- | **Azure Compute Gallery: Generalized images** | - On demand assessment </br> - On demand patching </br> - Periodic assessment </br> - Scheduled patching | Automatic VM guest patching |
- | **Azure Compute Gallery: Specialized images** | - On demand assessment </br> - On demand patching | - Periodic assessment </br> - Scheduled patching </br> - Automatic VM guest patching |
- | **Non-Azure Compute Gallery images (non-SIG)** | None | - On demand assessment </br> - On demand patching </br> - Periodic assessment </br> - Scheduled patching </br> - Automatic VM guest patching |
-
-1. Automatic VM guest patching will not work on Azure Compute Gallery images even if Patch orchestration mode is set to **Azure orchestrated/AutomaticByPlatform**. You can use scheduled patching to patch the machines and define your own schedules.
+| Images | Currently supported scenarios | Unsupported scenarios |
+| | | |
+| Azure Compute Gallery: Generalized images | - On-demand assessment </br> - On-demand patching </br> - Periodic assessment </br> - Scheduled patching | Automatic VM guest patching |
+| Azure Compute Gallery: Specialized images | - On-demand assessment </br> - On-demand patching | - Periodic assessment </br> - Scheduled patching </br> - Automatic VM guest patching |
+| Non-Azure Compute Gallery images (non-SIG) | None | - On-demand assessment </br> - On-demand patching </br> - Periodic assessment </br> - Scheduled patching </br> - Automatic VM guest patching |
+Automatic VM guest patching doesn't work on Azure Compute Gallery images even if Patch orchestration mode is set to `Azure orchestrated/AutomaticByPlatform`. You can use scheduled patching to patch the machines and define your own schedules.
## Next steps
-* [Learn more](support-matrix.md) about supported operating systems.
+
+[Learn more](support-matrix.md) about supported operating systems.
update-center Quickstart On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/quickstart-on-demand.md
To configure the settings on your machines, follow these steps:
In the **Change update settings** page, by default **Properties** is selected. 1. Select from the list of update settings to apply them to the selected machines.
-1. In **Update setting(s) to change**, select any option ΓÇö*Periodic assessment*, *Hotpatch* and *Patch orchestration* to configure and select **Next**. For more information, see [Configure settings on virtual machines](manage-update-settings.md#configure-settings-on-single-vm).
+1. In **Update setting(s) to change**, select any of the options (*Periodic assessment*, *Hotpatch*, or *Patch orchestration*) to configure, and then select **Next**. For more information, see [Configure settings on virtual machines](manage-update-settings.md#configure-settings-on-a-single-vm).
1. In **Machines**, verify the machines for which you can apply the updates. You can also add or remove machines from the list and select **Next**.
update-center Updates Maintenance Schedules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/updates-maintenance-schedules.md
Update Manager (preview) provides you the flexibility to take an immediate actio
## Update Now/One-time update
-Update Manager (preview) allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one time updates](deploy-updates.md#install-updates-on-single-vm).
+Update Manager (preview) allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one time updates](deploy-updates.md#install-updates-on-a-single-vm).
## Scheduled patching
This mode of patching allows operating system to automatically install updates a
Hotpatching allows you to install updates on supported Windows Server Azure Edition virtual machines without requiring a reboot after installation. It reduces the number of reboots required on your mission critical application workloads running on Windows Server. For more information, see [Hotpatch for new virtual machines](../automanage/automanage-hotpatch.md)
-Hotpatching property is available as a setting in Update Manager (preview) which you can enable by using Update settings flow. Refer to detailed instructions [here](manage-update-settings.md#configure-settings-on-single-vm)
+The hotpatching property is available as a setting in Update Manager (preview), which you can enable by using the update settings flow. For detailed instructions, see [Manage update configuration settings](manage-update-settings.md#configure-settings-on-a-single-vm).
:::image type="content" source="media/updates-maintenance/hot-patch-inline.png" alt-text="Screenshot that shows the hotpatch option." lightbox="media/updates-maintenance/hot-patch-expanded.png":::
update-center Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-new.md
Update Manager (preview) now supports new five regions for Azure Arc-enabled ser
### Improved on-boarding experience
-You can now enable periodic assessment for your machines at scale using [Policy](periodic-assessment-at-scale.md) or from the [portal](manage-update-settings.md#configure-settings-on-single-vm).
+You can now enable periodic assessment for your machines at scale using [Policy](periodic-assessment-at-scale.md) or from the [portal](manage-update-settings.md#configure-settings-on-a-single-vm).
## Next steps
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 08/31/2023 Last updated : 09/13/2023 # What's new in the Remote Desktop client for Windows
The following table lists the current versions available for the public and Insi
| Release | Latest version | Download | ||-|-| | Public | 1.2.4487 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
-| Insider | 1.2.4577 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
+| Insider | 1.2.4582 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
-## Updates for version 1.2.4577 (Insider)
+## Updates for version 1.2.4582 (Insider)
-*Date published: August 29, 2023*
+*Date published: September 12, 2023*
Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368)
In this release, we've made the following changes:
- Tooltip for the close button on the **About** panel now dismisses when keyboard focus moves.
- Keyboard focus is now properly displayed for certain drop-down selectors in the **Settings** panel for published desktops.
+> [!NOTE]
+> This release was originally version 1.2.4577, but we made a hotfix after reports that connections to machines with watermarking policy enabled were failing. Version 1.2.4582, which fixes this issue, has replaced version 1.2.4577.
+ ## Updates for version 1.2.4487 *Date published: July 21, 2023*
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Previously updated : 09/06/2023 Last updated : 09/13/2023 # Azure Policy built-in definitions for Azure Virtual Machine Scale Sets
virtual-machines B Series Cpu Credit Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model.md
+
+ Title: B Series CPU Credit Model
+description: Overview of B Series CPU Credit Model
+++++ Last updated : 09/12/2023++
+# B Series CPU Credit Model
+
+While traditional Azure virtual machines provide fixed CPU performance, B-series virtual machines are the only VM type that uses credits for CPU performance provisioning. B-series VMs use a CPU credit model to track how much CPU is consumed: the virtual machine accumulates CPU credits when a workload operates below the base CPU performance threshold, and uses credits when running above the base CPU performance threshold, until all of its credits are consumed. After consuming all of its CPU credits, a B-series virtual machine is throttled back to its base CPU performance until it accumulates enough credits to burst again.
+
+## Credit concepts and definitions
+- Base CPU performance = The minimum CPU performance threshold that is always available to the VM. This level sets the bar for net credit accumulation when CPU utilization is below the base CPU performance level, and net credit consumption when CPU utilization is above it.
+
+- Initial Credits = The number of credits allocated to a B-series virtual machine when a VM is deployed.
+
+- Credits banked/hour = The number of credits a B-series virtual machine accumulates per hour if the VM is idle (no CPU performance consumption).
+
+- Max Banked Credits = The upper limit of credits a B-series virtual machine can accumulate. Upon reaching this limit, a B-series VM can't accumulate any more credits.
+
+- CPU Credits Consumed = The number of CPU credits spent during the measurement time-period.
+
+- CPU Credits Remaining = The number of CPU credits available to consume for a given B-series VM.
+
+- Percentage CPU = CPU performance of a given VM during a measurement period.
++
+## Credits accumulation and consumption
+The credit accumulation and consumption rates are set such that a VM running at exactly its base performance level has neither a net accumulation nor a net consumption of bursting credits. A VM has a net credit increase whenever it runs below its base CPU performance level, and a net credit decrease whenever it uses more CPU than its base CPU performance level.
+
+To calculate credit accumulation and consumption, customers can use the following 'credits banked per minute' formula:
+`((Base CPU performance * number of vCPU)/2 - (Percentage CPU * number of vCPU)/2)/100`.
+
+Putting this calculation into action, suppose a customer deploys the Standard_B2ts_v2 VM size and their workload demands 10% 'Percentage CPU' (CPU performance). The 'credits banked per minute' calculation is: `((20%*2)/2 - (10%*2)/2)/100 = 0.1 credits/minute`. In this scenario, the B-series VM is accumulating credits because the 'Percentage CPU' requirement is below the 'Base CPU performance' of the Standard_B2ts_v2.
+
+Similarly, using the example of a Standard_B32as_v2 VM size, if the workload demands 60% of the CPU performance for a measurement period, the 'credits banked per minute' calculation is as follows: `((40%*32)/2 - (60%*32)/2)/100 = (640 - 960)/100 = -3.2 credits per minute`. Here the negative result means the B-series VM is consuming credits, because the 'Percentage CPU'/CPU performance requirement is above the 'Base CPU performance' of the Standard_B32as_v2.
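To make the arithmetic repeatable, here's a minimal bash sketch of the same formula; the `credits_per_minute` helper is illustrative and not part of any Azure tooling:

```bash
# Net credits banked per minute for a B-series VM, per the formula above.
# Positive output means the VM is banking credits; negative means it's consuming them.
credits_per_minute() {
  local base_cpu_pct=$1   # 'Base CPU performance' from the size table, in percent (for example, 20)
  local used_cpu_pct=$2   # Observed 'Percentage CPU', in percent (for example, 10)
  local vcpus=$3          # Number of vCPUs in the size
  awk -v b="$base_cpu_pct" -v u="$used_cpu_pct" -v v="$vcpus" \
    'BEGIN { printf "%.2f\n", ((b * v) / 2 - (u * v) / 2) / 100 }'
}

credits_per_minute 20 10 2    # Standard_B2ts_v2 example:  0.10
credits_per_minute 40 60 32   # Standard_B32as_v2 example: -3.20
```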
+
+
+## Credit monitoring
+To monitor B-series-specific credit metrics, customers can use the Azure Monitor data platform; see [Overview of metrics in Microsoft Azure](../../azure-monitor/data-platform.md). The Azure Monitor data platform can be accessed through the Azure portal and other orchestration paths, and through programmatic API calls to Azure Monitor.
+Through the Azure Monitor data platform, customers can access B-series credit model metrics such as 'CPU Credits Consumed', 'CPU Credits Remaining', and 'Percentage CPU' for their given B-series size in real time.
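As a rough sketch (the resource group and VM names are placeholders, and the metric names are the ones listed above), the same metrics can be retrieved with the Azure CLI:

```azurecli-interactive
# Sketch: retrieve B-series credit metrics for one VM through Azure Monitor.
VM_ID=$(az vm show --resource-group myResourceGroup --name myBurstableVm --query id --output tsv)

az monitor metrics list \
  --resource "$VM_ID" \
  --metric "CPU Credits Remaining" "CPU Credits Consumed" "Percentage CPU" \
  --interval PT1M \
  --output table
```

The one-minute interval lines up with the per-minute credit model described earlier.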
++
+## Other sizes and information
+
+- [General purpose](../sizes-general.md)
+- [Compute optimized](../sizes-compute.md)
+- [Memory optimized](../sizes-memory.md)
+- [Storage optimized](../sizes-storage.md)
+- [GPU optimized](../sizes-gpu.md)
+- [High performance compute](../sizes-hpc.md)
+
+Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
+
+More information on Disks Types: [Disk Types](../disks-types.md#ultra-disks)
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md
description: Learn about incremental snapshots for managed disks, including how
Previously updated : 08/17/2023 Last updated : 09/13/2023 ms.devlang: azurecli
You can also use Azure Resource Manager templates to create an incremental snaps
] } ```+ ## Check snapshot status
$targetSnapshot=Get-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotNa
$targetSnapshot.CompletionPercent ``` -- ## Check sector size Snapshots with a 4096 logical sector size can only be used to create Premium SSD v2 or Ultra Disks. They can't be used to create other disk types. Snapshots of disks with 4096 logical sector size are stored as VHDX, whereas snapshots of disks with 512 logical sector size are stored as VHD. Snapshots inherit the logical sector size from the parent disk.
virtual-machines Nc A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-a100-v4-series.md
These VMs are ideal for real-world Applied AI workloads, such as:
To get started with NC A100 v4 VMs, refer to [HPC Workload Configuration and Optimization](configure.md) for steps including driver and network configuration.
-Due to increased GPU memory I/O footprint, the NC A100 v4 requires the use of [Generation 2 VMs](generation-2.md) and marketplace images. While the [Azure HPC images](configure.md) are strongly recommended, Azure HPC Ubuntu 18.04, 20.04 and Azure HPC CentOS 7.9, CentOS 8.4, RHEL 7.9, RHEL 8.5, Windows Service 2019, and Windows Service 2022 images are supported.
+Due to increased GPU memory I/O footprint, the NC A100 v4 requires the use of [Generation 2 VMs](generation-2.md) and marketplace images. While the [Azure HPC images](configure.md) are strongly recommended, Azure HPC Ubuntu 18.04, 20.04 and Azure HPC CentOS 7.9, CentOS 8.4, RHEL 7.9, RHEL 8.5, Windows Server 2019, and Windows Server 2022 images are supported.
Note: The Ubuntu-HPC 18.04-ncv4 image was only valid during preview and was deprecated on 7/29/2022. All changes have been merged into the standard Ubuntu-HPC 18.04 image. Follow the instructions in [Azure HPC images](configure.md) for configuration.
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
virtual-machines Sizes B Series Burstable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-b-series-burstable.md
The B-series comes in the following VM sizes:
<br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Base CPU Perf of VM | Max CPU Perf of VM | Initial Credits | Credits banked/hour | Max Banked Credits | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> |Max NICs |
-||||||||||||||
-| Standard_B1ls<sup>2</sup> | 1 | 0.5 | 4 | 5% | 100% | 30 | 3 | 72 | 2 | 160/10 | 4000/100 | 2 |
-| Standard_B1s | 1 | 1 | 4 | 10% | 100% | 30 | 6 | 144 | 2 | 320/10 | 4000/100 | 2 |
-| Standard_B1ms | 1 | 2 | 4 | 20% | 100% | 30 | 12 | 288 | 2 | 640/10 | 4000/100 | 2 |
-| Standard_B2s | 2 | 4 | 8 | 40% | 200% | 60 | 24 | 576 | 4 | 1280/15 | 4000/100 | 3 |
-| Standard_B2ms | 2 | 8 | 16 | 60% | 200% | 60 | 36 | 864 | 4 | 1920/22.5 | 4000/100 | 3 |
-| Standard_B4ms | 4 | 16 | 32 | 90% | 400% | 120 | 54 | 1296 | 8 | 2880/35 | 8000/200 | 4 |
-| Standard_B8ms | 8 | 32 | 64 | 135% | 800% | 240 | 81 | 1944 | 16 | 4320/50 | 8000/200 | 4 |
-| Standard_B12ms | 12 | 48 | 96 | 202% | 1200% | 360 | 121 | 2909 | 16 | 4320/50 | 16000/400 | 6 |
-| Standard_B16ms | 16 | 64 | 128 | 270% | 1600% | 480 | 162 | 3888 | 32 | 4320/50 | 16000/400 | 8 |
-| Standard_B20ms | 20 | 80 | 160 | 337% | 2000% | 600 | 203 | 4860 | 32 | 4320/50 | 16000/400 | 8 |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Base CPU Performance of VM (%) | Initial Credits | Credits banked/hour | Max Banked Credits | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs |
+|-||-||--|--||--|-|--||-|
+| Standard_B1ls<sup>2</sup> | 1 | 0.5 | 4 | 10 | 30 | 3 | 72 | 2 | 160/10 | 4000/100 | 2 |
+| Standard_B1s | 1 | 1 | 4 | 20 | 30 | 6 | 144 | 2 | 320/10 | 4000/100 | 2 |
+| Standard_B1ms | 1 | 2 | 4 | 40 | 30 | 12 | 288 | 2 | 640/10 | 4000/100 | 2 |
+| Standard_B2s | 2 | 4 | 8 | 40 | 60 | 24 | 576 | 4 | 1280/15 | 4000/100 | 3 |
+| Standard_B2ms | 2 | 8 | 16 | 60 | 60 | 36 | 864 | 4 | 1920/22.5 | 4000/100 | 3 |
+| Standard_B4ms | 4 | 16 | 32 | 45 | 120 | 54 | 1296 | 8 | 2880/35 | 8000/200 | 4 |
+| Standard_B8ms | 8 | 32 | 64 | 33 | 240 | 81 | 1944 | 16 | 4320/50 | 8000/200 | 4 |
+| Standard_B12ms | 12 | 48 | 96 | 36 | 360 | 121 | 2909 | 16 | 4320/50 | 16000/400 | 6 |
+| Standard_B16ms | 16 | 64 | 128 | 40 | 480 | 162 | 3888 | 32 | 4320/50 | 16000/400 | 8 |
+| Standard_B20ms | 20 | 80 | 160 | 40 | 600 | 203 | 4860 | 32 | 4320/50 | 16000/400 | 8 |
<sup>1</sup> B-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.

<sup>2</sup> B1ls is supported only on Linux.
-## Workload example
-
-Consider an office check-in/out application. The application needs CPU bursts during business hours, but not a lot of computing power during off hours. In this example, the workload requires a 16vCPU virtual machine with 64GiB of RAM to work efficiently.
-
-The table shows the hourly traffic data and the chart is a visual representation of that traffic.
-
-B16 characteristics:
-
-Max CPU perf: 16vCPU * 100% = 1600%
-
-Baseline: 270%
-
-![Chart of hourly traffic data](./media/b-series-burstable/office-workload.png)
-
-| Scenario | Time | CPU usage (%) | Credits accumulated<sup>1</sup> | Credits available |
-| | | | | |
-| B16ms Deployment | Deployment | Deployment | 480 (Initial Credits) | 480 |
-| No traffic | 0:00 | 0 | 162 | 642 |
-| No traffic | 1:00 | 0 | 162 | 804 |
-| No traffic | 2:00 | 0 | 162 | 966 |
-| No traffic | 3:00 | 0 | 162 | 1128 |
-| No traffic | 4:00 | 0 | 162 | 1290 |
-| No traffic | 5:00 | 0 | 162 | 1452 |
-| Low Traffic | 6:00 | 270 | 0 | 1452 |
-| Employees come to office (app needs 80% vCPU) | 7:00 | 1280 | -606 | 846 |
-| Employees continue coming to office (app needs 80% vCPU) | 8:00 | 1280 | -606 | 240 |
-| Low Traffic | 9:00 | 270 | 0 | 240 |
-| Low Traffic | 10:00 | 100 | 102 | 342 |
-| Low Traffic | 11:00 | 50 | 132 | 474 |
-| Low Traffic | 12:00 | 100 | 102 | 576 |
-| Low Traffic | 13:00 | 100 | 102 | 678 |
-| Low Traffic | 14:00 | 50 | 132 | 810 |
-| Low Traffic | 15:00 | 100 | 102 | 912 |
-| Low Traffic | 16:00 | 100 | 102 | 1014 |
-| Employees checking out (app needs 100% vCPU) | 17:00 | 1600 | -798 | 216 |
-| Low Traffic | 18:00 | 270 | 0 | 216 |
-| Low Traffic | 19:00 | 270 | 0 | 216 |
-| Low Traffic | 20:00 | 50 | 132 | 348 |
-| Low Traffic | 21:00 | 50 | 132 | 480 |
-| No traffic | 22:00 | 0 | 162 | 642 |
-| No traffic | 23:00 | 0 | 162 | 804 |
-
-<sup>1</sup> Credits accumulated/credits used in an hour is equivalent to: `((Base CPU perf of VM - CPU Usage) / 100) * 60 minutes`.
-
-For a D16s_v3 which has 16 vCPUs and 64 GiB of memory the hourly rate is $0.936 per hour (monthly $673.92) and for B16ms with 16 vCPUs and 64 GiB memory the rate is $0.794 per hour (monthly $547.86). <b> This results in 15% savings!</b>
-
-## Q & A
-
-### Q: What happens when my credits run out?
-**A**: When the credits are exhausted, the VM returns to the baseline performance.
-
-### Q: How do you get 135% baseline performance from a VM?
-
-**A**: The 135% is shared amongst the 8 vCPU's that make up the VM size. For example, if your application uses 4 of the 8 cores working on batch processing and each of those 4 vCPU's are running at 30% utilization the total amount of VM CPU performance would equal 120%. Meaning that your VM would be building credit time based on the 15% delta from your baseline performance. But it also means that when you have credits available that same VM can use 100% of all 8 vCPU's giving that VM a Max CPU performance of 800%.
-
-### Q: How can I monitor my credit balance and consumption?
-
-**A**: The **Credit** metric allows you to view how many credits your VM have been banked and the **ConsumedCredit** metric will show how many CPU credits your VM has consumed from the bank. You will be able to view these metrics from the metrics pane in the portal or programmatically through the Azure Monitor APIs.
-
-For more information on how to access the metrics data for Azure, see [Overview of metrics in Microsoft Azure](../azure-monitor/data-platform.md).
-
-### Q: How are credits accumulated and consumed?
-
-**A**: The VM accumulation and consumption rates are set such that a VM running at exactly its base performance level will have neither a net accumulation or consumption of bursting credits. A VM will have a net increase in credits whenever it is running below its base performance level and will have a net decrease in credits whenever the VM is utilizing the CPU more than its base performance level.
-
-**Example**: I deploy a VM using the B1ms size for my small time and attendance database application. This size allows my application to use up to 20% of a vCPU as my baseline, which is 0.2 credits per minute I can use or bank.
-
-My application is busy at the beginning and end of my employees work day, between 7:00-9:00 AM and 4:00 - 6:00PM. During the other 20 hours of the day, my application is typically at idle, only using 10% of the vCPU. For the non-peak hours, I earn 0.2 credits per minute but only consume 0.1 credits per minute, so my VM will bank 0.1 x 60 = 6 credits per hour. For the 20 hours that I am off-peak, I will bank 120 credits.
-
-During peak hours my application averages 60% vCPU utilization, I still earn 0.2 credits per minute but I consume 0.6 credits per minute, for a net cost of 0.4 credits a minute or 0.4 x 60 = 24 credits per hour. I have 4 hours per day of peak usage, so it costs 4 x 24 = 96 credits for my peak usage.
-
-If I take the 120 credits I earned off-peak and subtract the 96 credits I used for my peak times, I bank an additional 24 credits per day that I can use for other bursts of activity.
-
-### Q: How can I calculate credits accumulated and used?
-
-**A**: You can use the following formula:
-
-(Base CPU perf of VM - CPU Usage) / 100 = Credits bank or use per minute
-
-e.g in above instance your baseline is 20% and if you use 10% of the CPU you are accumulating (20%-10%)/100 = 0.1 credit per minute.
-
-### Q: Does the B-Series support Premium Storage data disks?
-
-**A**: Yes, all B-Series sizes support Premium Storage data disks.
-
-### Q: Why is my remaining credit set to 0 after a redeploy or a stop/start?
-
-**A** : When a VM is redeployed and the VM moves to another node, the accumulated credit is lost. If the VM is stopped/started, but remains on the same node, the VM retains the accumulated credit. Whenever the VM starts fresh on a node, it gets an initial credit, for Standard_B8ms it is 240.
-
-### Q: What happens if I deploy an unsupported OS image on B1ls?
-
-**A** : B1ls only supports Linux images and if you deploy any another OS image you might not get the best customer experience.
- ## Other sizes and information - [General purpose](sizes-general.md)
e.g in above instance your baseline is 20% and if you use 10% of the CPU you are
Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
-More information on Disks Types : [Disk Types](./disks-types.md#ultra-disks)
+More information on Disks Types: [Disk Types](./disks-types.md#ultra-disks)
## Next steps
virtual-machines Ubuntu Pro In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/canonical/ubuntu-pro-in-place-upgrade.md
description: Learn how to do an in-place upgrade from Ubuntu Server to Ubuntu Pr
-+ Previously updated : 08/07/2023 Last updated : 9/12/2023 # Ubuntu Server to Ubuntu Pro in-place upgrade on Azure
-**Applies to:** :heavy_check_mark: Linux virtual machines
+Customers can now upgrade their Ubuntu Server (version 16.04 or higher) virtual machines to Ubuntu
+Pro without redeployment or downtime. This method has proven useful for customers wishing to convert
+their servers from Ubuntu 18.04 LTS now that it's reached End of Life (EOL).
-Customers can now upgrade from Ubuntu Server (16.04 or higher) to Ubuntu Pro on your existing Azure
-Virtual Machines without redeployment or downtime. One of the major use cases includes conversion of
-Ubuntu 18.04 LTS going EOL to Ubuntu Pro.
-[Canonical announced that the Ubuntu 18.04 LTS (Bionic Beaver) OS images end-of-life (EOL)](https://ubuntu.com/18-04/azure).
-Canonical no longer provides technical support, software updates, or security patches for this
-version. Customers need to upgrade to Ubuntu Pro to continue to be on Ubuntu 18.04 LTS.
+> [!IMPORTANT]
+> Canonical has announced that Ubuntu 18.04 LTS (Bionic Beaver) OS images are now
+> [out of standard support][01]. This means that Canonical will no longer offer technical support,
+> software updates, or security patches for this version. Customers wishing to continue using Ubuntu
+> 18.04 LTS need to upgrade to Ubuntu Pro for continued supportability.
-## What is Ubuntu Pro?
+## What's Ubuntu Pro?
Ubuntu Pro is a cross-cloud OS, optimized for Azure, and security maintained for 10 years. The
-secure use of open-source software allows teams to utilize the latest technologies while meeting
-internal governance and compliance requirements. Ubuntu Pro 18.04 LTS, remains fully compatible with
-Ubuntu Server 18.04 LTS, but adds more security enabled by default, including compliance and
-management tools in a form suitable for small to large-scale Linux operations. Ubuntu Pro 18.04 LTS
-is fully supported until April 2028. Ubuntu Pro also comes with security patching for all Ubuntu
-packages due to Extended Security Maintenance (ESM) for Infrastructure and Applications and optional
-24/7 phone and ticket support.
-
-Customers using Ubuntu Server 18.04, for example, can upgrade to Ubuntu Pro and continue to receive
-security patches from Canonical until 2028. Customers can upgrade to Ubuntu Pro via Azure CLI.
+secure use of open-source software allows the operating system to use the latest technologies while
+meeting internal governance and compliance requirements. Ubuntu Pro 18.04 LTS remains fully
+compatible with Ubuntu Server 18.04 LTS, with more security enabled by default. It includes
+compliance and management tools in a form suitable for small to large-scale Linux operations. Ubuntu
+Pro 18.04 LTS is fully supported until April 2028. Ubuntu Pro includes Extended Security Maintenance
+(ESM) for infrastructure and applications, which provides security patching for all Ubuntu
+packages.
## Why developers and DevOps choose Ubuntu Pro for Azure
-* Access to security updates for 23,000+ packages including Apache Kafka, NGINX, MongoDB, Redis and
- PostgreSQL, integrated into normal system tools (for example Azure Update Manager, apt)
-* Security hardening and audit tools (CIS) to establish a security baseline across your systems (and
- help you meet the Azure Linux Security Baseline policy)
-* FIPS 140-2 certified modules
-* Common Criteria (CC) EAL2 provisioning packages
-* Kernel Live patch: kernel patches delivered immediately, without the need to reboot
-* Optimized performance: optimized kernel, with improved boot speed, outstanding runtime performance
- and advanced device support
-* 10-year security maintenance: Ubuntu Pro 18.04 LTS provides security maintenance until April 2028
-* Production ready: Ubuntu is the leading Linux in the public cloud with > 50% of Linux workloads
-* Developer friendly: Ubuntu is the \#1 Linux for developers offering the latest libraries and tools
- to innovate with the latest technologies
-* Non-stop security: Canonical publishes images frequently, ensuring security is present from the
- moment an instance launches
-* Portability: Ubuntu is available in all regions with content mirrors to reduce the need to go
- across regions or out to the Internet for updates
-* Consistent experience across platforms: from edge to multicloud, Ubuntu provides the same
- experience regardless of the platform. It ensures consistency of your CI/CD pipelines and
- management mechanisms.
+- Access to security updates for 23,000+ packages including Apache Kafka, NGINX, MongoDB, Redis and
+  PostgreSQL, integrated into system tools (for example Azure Update Manager, apt)
+- Security hardening and audit tools (CIS) to establish a security baseline across your systems (and
+  help you meet the Azure Linux Security Baseline policy)
+- FIPS 140-2 certified modules
+- Common Criteria (CC) EAL2 provisioning packages
+- Kernel Live patch: kernel patches delivered immediately, without the need to reboot
+- Optimized performance: optimized kernel, with improved boot speed, outstanding runtime performance
+  and advanced device support
+- 10-year security maintenance: Ubuntu Pro 18.04 LTS provides security maintenance until April 2028
+- Developer friendly: Ubuntu offers developers the latest libraries and tools to innovate with the latest technologies
+- Nonstop security: Canonical publishes images ensuring security is present from the moment an instance launches
+- Portability: Ubuntu is available in all regions with content mirrors to reduce the need to go across regions or out to the Internet for updates
+- Consistent experience across platforms: from edge to multicloud, Ubuntu provides the same experience regardless of the platform. It ensures consistency of your CI/CD pipelines and management mechanisms.
> [!NOTE]
-> This document presents the direction to upgrade from an Ubuntu Server (16.04 or higher) image to
-> Ubuntu Pro with zero downtime for upgrade by executing the following steps in your VMs:
->
-> 1. Converting to Ubuntu Pro license
-> 2. Validating the license
->
-> Converting to UBUNTU_PRO is an irreversible process. You can't even downgrade a VM by running
-> detach. Open a support ticket for any exceptions.
+> This document provides instructions to upgrade Ubuntu Server (16.04 or higher) to
+> Ubuntu Pro. Converting to Ubuntu Pro is an irreversible process.
## Convert to Ubuntu Pro using the Azure CLI
-```azurecli-interactive
-# The following will enable Ubuntu Pro on a virtual machine
+The following command enables Ubuntu Pro on a virtual machine in Azure:
+
+```azurecli-interactive
az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO
```
-```In-VM commands
-# The next step is to execute two in-VM commands
+Execute these commands inside the VM:
+
+```bash
sudo apt install ubuntu-advantage-tools
sudo pro auto-attach
```
-(Note that "sudo apt install ubuntu-advantage-tools" is only necessary if "pro --version" is lower than 28)
+Run the following command only if `pro --version` reports a version lower than 28:
+
+```bash
+sudo apt install ubuntu-advantage-tools
+```
## Validate the license
+Use the `pro status --all` command to validate the license:
+ Expected output:
-![Screenshot of the expected output.](./expected-output.png)
+```output
+SERVICE      ENTITLED    STATUS    DESCRIPTION
+cc-eal       yes         disabled  Common Criteria EAL2 Provisioning Packages
+cis          yes         disabled  Security compliance and audit tools
+esm-apps     yes         enabled   Expanded Security Maintenance for Applications
+esm-infra    yes         enabled   Expanded Security Maintenance for infrastructure
+fips         yes         disabled  NIST-certified core packages
+fips-updates yes         disabled  NIST-certified core packages with priority security updates
+livepatch    yes         enabled   Canonical Livepatch service
+```
## Create an Ubuntu Pro VM using the Azure CLI
-You can also create a new VM using the Ubuntu Server images and apply Ubuntu Pro at create time.
-
-For example:
+You can create a new VM using the Ubuntu Server images and apply Ubuntu Pro at the time of creation.
+The following command enables Ubuntu Pro on a virtual machine in Azure:
-```azurecli-interactive
-# The following will enable Ubuntu Pro on a virtual machine
+```azurecli-interactive
az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO
```
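To apply the Pro license at creation time instead of updating the VM afterward, a minimal sketch might look like the following; the image URN and the acceptance of `UBUNTU_PRO` as a create-time `--license-type` value are assumptions to verify against the current CLI:

```azurecli-interactive
# Sketch only: create a VM from an Ubuntu Server marketplace image with the
# Ubuntu Pro license attached at creation time (the image URN is an example).
az vm create \
  --resource-group myResourceGroup \
  --name myProVm \
  --image Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest \
  --license-type UBUNTU_PRO \
  --generate-ssh-keys
```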
-```In-VM commands
-# The next step is to execute two in-VM commands
+Execute these commands inside the VM:
+
+```bash
sudo apt install ubuntu-advantage-tools
sudo pro auto-attach
```
->[!NOTE]
-> For systems with advantage tools version 28 or higher installed the system will perform a pro attach during a reboot.
+> [!NOTE]
+> For systems with the advantage tools version 28 or higher installed, the system performs a
+> `pro attach` during a reboot.
## Check licensing model using the Azure CLI
-You can use the az vm get-instance-view command to check the status. Look for a licenseType field in the response. If the licenseType field exists and the value is UBUNTU_PRO, your virtual machine has Ubuntu Pro enabled.
+> [!TIP]
+> You can query the metadata in _Azure Instance Metadata Service_ to determine the virtual machine's
+> _licenseType_ value. You can use the `az vm get-instance-view` command to check the status. Look
+> for the _licenseType_ field in the response. If the field exists and the value is UBUNTU_PRO, your
+> virtual machine has Ubuntu Pro enabled. [Learn more about attested metadata][02].
```azurecli-interactive
az vm get-instance-view -g MyResourceGroup -n MyVm
```
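For example, a sketch that prints only that field (assuming `licenseType` is surfaced at the top level of the response):

```azurecli-interactive
# Sketch: expect the output to be UBUNTU_PRO when the Pro license is applied.
az vm get-instance-view \
  --resource-group MyResourceGroup \
  --name MyVm \
  --query "licenseType" \
  --output tsv
```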
-## Check the licensing model of an Ubuntu Pro enabled VM using Azure Instance Metadata Service
-
-From within the virtual machine itself, you can query the attested metadata in Azure Instance Metadata Service to determine the virtual machine's licenseType value. A licenseType value of UBUNTU_PRO indicates that your virtual machine has Ubuntu Pro enabled. [Learn more about attested metadata](../../instance-metadata-service.md).
- ## Billing
-You are charged for Ubuntu Pro as part of the Preview. Visit the
-[pricing calculator](https://azure.microsoft.com/pricing/calculator/) for more details on Ubuntu Pro
-pricing. To cancel the Pro subscription during the preview period, open a support ticket through the
-Azure portal.
+Visit the [pricing calculator][03] for more details on Ubuntu Pro pricing. To cancel the Pro
+subscription during the preview period, open a support ticket through the Azure portal.
-## Frequently Asked Questions
-
-### What are the next step after launching an Ubuntu Pro VM?
+## Next steps after launching an Ubuntu Pro VM
With the availability of outbound internet access, Ubuntu Pro automatically enables premium features
-such as Extended Security Maintenance for
-[Main and Universe repositories](https://help.ubuntu.com/community/Repositories) and
-[live patch](https://ubuntu.com/security/livepatch/docs). Should any specific hardening be required
-(for example CIS), check the using 'usg' to
-[harden your servers](https://ubuntu.com/tutorials/comply-with-cis-or-disa-stig-on-ubuntu#1-overview)
-tutorial. Should you require FIPS, check enabling FIPS tutorials.
+including [Live Patch][04] and Extended Security Maintenance for
+[Main and Universe repositories][05].
+If specific hardening is required (for example, CIS or FIPS), use the `usg` tool as described in the
+[harden your servers][06] tutorial.
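A rough in-VM sketch of those steps; the `usg` service and package names are assumptions based on Canonical's tooling, so check the linked tutorial for the exact steps for your release:

```bash
# Sketch only: confirm premium services after attach, then enable the
# Ubuntu Security Guide (usg) tooling for CIS-style hardening.
pro status --all        # esm-infra, esm-apps, and livepatch should show as enabled
sudo pro enable usg     # assumption: makes the usg packages available
sudo apt install usg    # assumption: installs the usg audit/hardening tool
```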
+Learn more about networking requirements (such as egress traffic, endpoints and ports) by reading
+[Ubuntu Pro Client network requirements][07].
-For more information about networking requirements for making sure Pro enablement process works
-(such as egress traffic, endpoints and ports)
-[check this documentation](https://canonical-ubuntu-pro-client.readthedocs-hosted.com/en/latest/references/network_requirements.html).
+## Frequently Asked Questions
-### Does shutting down the machine stop billing?
+**Does shutting down the machine stop billing?**
-If you launch Ubuntu Pro from Azure Marketplace you pay as you go, so, if you donΓÇÖt have any machine running, you wonΓÇÖt pay anything additional.
+Launching Ubuntu Pro from Azure Marketplace is pay as you go, so you're only charged for running
+machines.
-### Are there volume discounts?
+**Are there volume discounts?**
Yes. Contact your Microsoft sales representative.
-### Are Reserved Instances available?
+**Are Reserved Instances available?**
+
+Yes.
-Yes
+**If the customer doesn't perform the `auto attach` function, will they still get attached to Pro on reboot?**
-### If the customer doesn't do the auto attach will they still get attached to pro on reboot?
+If the customer doesn't perform the _auto attach_, Pro still attaches upon reboot. However, this
+only applies if they're using version 28 or higher of the Pro client.
-If the customer doesn't perform the auto attach, they still get the Pro attached upon reboot.
-However, this applies only if they have v28 of the Pro client.
+- For Ubuntu Jammy and Focal, this process works as expected.
+- For Ubuntu Bionic and Xenial, this process doesn't work due to older versions of the Pro client installed.
-* For Jammy and Focal, this process works as expected.
-* For Bionic and Xenial this process doesn't work due to the older versions of the Pro client installed.
+<!-- link references -->
+[01]: https://ubuntu.com/18-04/azure
+[02]: ../../instance-metadata-service.md
+[03]: https://azure.microsoft.com/pricing/calculator/
+[04]: https://ubuntu.com/security/livepatch/docs
+[05]: https://help.ubuntu.com/community/Repositories
+[06]: https://ubuntu.com/tutorials/comply-with-cis-or-disa-stig-on-ubuntu#1-overview
+[07]: https://canonical-ubuntu-pro-client.readthedocs-hosted.com/en/latest/references/network_requirements.html
virtual-network-manager Concept Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-use-cases.md
AVNM automatically maintains the desired topology you defined in the connectivit
## Security With Azure Virtual Network Manager, you create [security admin rules](concept-security-admins.md) to enforce security policies across virtual networks in your organization. Security admin rules take precedence over rules defined by network security groups, and they're applied first when analyzing traffic as seen in the following diagram:++ Common uses include: - Create standard rules that must be applied and enforced on all existing VNets and newly created VNets.
virtual-network Remove Public Ip Address Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/remove-public-ip-address-vm.md
az network nic ip-config update \
--name ipconfigmyVM \ --resource-group myResourceGroup \ --nic-name myVMNic \
- --public-ip-address ''
+ --public-ip-address null
``` - If you don't know the name of the network interface attached to your VM, use the [az vm nic list](/cli/azure/vm/nic#az-vm-nic-list) command to view them. For example, the following command lists the names of the network interfaces attached to a VM named *myVM* in a resource group named *myResourceGroup*:
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
virtual-network Virtual Network Peering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-peering-overview.md
Syncing of virtual network peers can be performed through the Azure portal or w
> [!IMPORTANT] > This feature doesn't support scenarios where the virtual network to be updated is peered with: > * A classic virtual network
-> * A managed virtual network such as the Azure VWAN hub
## Service chaining