Updates from: 09/14/2023 01:15:48
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
active-directory Inbound Provisioning Api Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-issues.md
This document covers commonly encountered errors and issues with inbound provisioning.
**Probable causes**
1. Your API-driven provisioning app is paused.
-1. The provisioning service is yet to update the provisioning logs with the bulk request processing details.
+1. The provisioning service is yet to update the provisioning logs with the bulk request processing details.
+2. Your on-premises provisioning agent is inactive (if you're running [API-driven inbound user provisioning to on-premises Active Directory](https://go.microsoft.com/fwlink/?linkid=2245182)).
+ **Resolution:**
1. Verify that your provisioning app is running. If it isn't running, select the menu option **Start provisioning** to process the data.
+2. Set your on-premises provisioning agent status to active by restarting the on-premises agent.
1. Expect a 5 to 10-minute delay between processing the request and writing to the provisioning logs. If your API client sends data to the provisioning /bulkUpload API endpoint, introduce a time delay between the request invocation and the provisioning logs query.

### Forbidden 403 response code
active-directory Concept Authentication Default Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md
The following table lists each setting that can be set to Microsoft managed and
| Setting | Configuration |
|-|-|
-| [Registration campaign](how-to-mfa-registration-campaign.md) | Beginning in July, 2023, enabled for SMS and voice call users with free and trial subscriptions. |
+| [Registration campaign](how-to-mfa-registration-campaign.md) | Beginning in July, 2023, enabled for text message and voice call users with free and trial subscriptions. |
| [Location in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
| [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
| [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Enabled |
| [Authenticator Lite](how-to-mfa-authenticator-lite.md) | Enabled |
| [Report suspicious activity](howto-mfa-mfasettings.md#report-suspicious-activity) | Disabled |
-As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/). For example, see our blog post [It's Time to Hang Up on Phone Transports for Authentication](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752) for more information about the need to move away from using SMS and voice calls, which led to default enablement for the registration campaign to help users to set up Authenticator for modern authentication.
+As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/). For example, see our blog post [It's Time to Hang Up on Phone Transports for Authentication](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752) for more information about the need to move away from using text message and voice calls, which led to default enablement for the registration campaign to help users set up Authenticator for modern authentication.
## Next steps
active-directory Concept Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods-manage.md
To manage the legacy MFA policy, click **Security** > **Multifactor Authentication**.
:::image type="content" border="true" source="./media/concept-authentication-methods-manage/service-settings.png" alt-text="Screenshot of MFA service settings.":::
-To manage authentication methods for self-service password reset (SSPR), click **Password reset** > **Authentication methods**. The **Mobile phone** option in this policy allows either voice calls or SMS to be sent to a mobile phone. The **Office phone** option allows only voice calls.
+To manage authentication methods for self-service password reset (SSPR), click **Password reset** > **Authentication methods**. The **Mobile phone** option in this policy allows either voice calls or text messages to be sent to a mobile phone. The **Office phone** option allows only voice calls.
:::image type="content" border="true" source="./media/concept-authentication-methods-manage/password-reset.png" alt-text="Screenshot of password reset settings.":::
If the user can't register Microsoft Authenticator based on either of those policies
- **Mobile app notification**
- **Mobile app code**
-For users who are enabled for **Mobile phone** for SSPR, the independent control between policies can impact sign-in behavior. Where the other policies have separate options for SMS and voice calls, the **Mobile phone** for SSPR enables both options. As a result, anyone who uses **Mobile phone** for SSPR can also use voice calls for password reset, even if the other policies don't allow voice calls.
+For users who are enabled for **Mobile phone** for SSPR, the independent control between policies can impact sign-in behavior. Where the other policies have separate options for text messages and voice calls, the **Mobile phone** for SSPR enables both options. As a result, anyone who uses **Mobile phone** for SSPR can also use voice calls for password reset, even if the other policies don't allow voice calls.
Similarly, let's suppose you enable **Voice calls** for a group. After you enable it, you find that even users who aren't group members can sign in with a voice call. In this case, it's likely those users are enabled for **Mobile phone** in the legacy SSPR policy or **Call to phone** in the legacy MFA policy.
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods.md
Microsoft recommends passwordless authentication methods such as Windows Hello,
:::image type="content" border="true" source="media/concept-authentication-methods/authentication-methods.png" alt-text="Illustration of the strengths and preferred authentication methods in Azure AD." :::
-Azure AD Multi-Factor Authentication (MFA) adds additional security over only using a password when a user signs in. The user can be prompted for additional forms of authentication, such as to respond to a push notification, enter a code from a software or hardware token, or respond to an SMS or phone call.
+Azure AD Multi-Factor Authentication (MFA) adds additional security over only using a password when a user signs in. The user can be prompted for additional forms of authentication, such as to respond to a push notification, enter a code from a software or hardware token, or respond to a text message or phone call.
To simplify the user on-boarding experience and register for both MFA and self-service password reset (SSPR), we recommend you [enable combined security information registration](howto-registration-mfa-sspr-combined.md). For resiliency, we recommend that you require users to register multiple authentication methods. When one method isn't available for a user during sign-in or SSPR, they can choose to authenticate with another method. For more information, see [Create a resilient access control management strategy in Azure AD](concept-resilient-controls.md).
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md
Previously updated : 07/17/2023 Last updated : 08/23/2023
# Authentication methods in Azure Active Directory - phone options
-Microsoft recommends users move away from using SMS or voice calls for multifactor authentication (MFA). Modern authentication methods like [Microsoft Authenticator](concept-authentication-authenticator-app.md) are a recommended alternative. For more information, see [It's Time to Hang Up on Phone Transports for Authentication](https://aka.ms/hangup). Users can still verify themselves using a mobile phone or office phone as secondary form of authentication used for multifactor authentication (MFA) or self-service password reset (SSPR).
+Microsoft recommends users move away from using text messages or voice calls for multifactor authentication (MFA). Modern authentication methods like [Microsoft Authenticator](concept-authentication-authenticator-app.md) are a recommended alternative. For more information, see [It's Time to Hang Up on Phone Transports for Authentication](https://aka.ms/hangup). Users can still verify themselves using a mobile phone or office phone as a secondary form of authentication for multifactor authentication (MFA) or self-service password reset (SSPR).
-You can [configure and enable users for SMS-based authentication](howto-authentication-sms-signin.md) for direct authentication using text message. SMS-based sign-in is convenient for Frontline workers. With SMS-based sign-in, users don't need to know a username and password to access applications and services. The user instead enters their registered mobile phone number, receives a text message with a verification code, and enters that in the sign-in interface.
+You can [configure and enable users for SMS-based authentication](howto-authentication-sms-signin.md) for direct authentication using text messages. Text messages are convenient for Frontline workers. With text messages, users don't need to know a username and password to access applications and services. The user instead enters their registered mobile phone number, receives a text message with a verification code, and enters that in the sign-in interface.
>[!NOTE]
>Phone call verification isn't available for Azure AD tenants with trial subscriptions. For example, if you sign up for a trial license for Microsoft Enterprise Mobility and Security (EMS), phone call verification isn't available. Phone numbers must be provided in the format *+CountryCode PhoneNumber*, for example, *+1 4251234567*. There must be a space between the country/region code and the phone number (see the format-check sketch below).

## Mobile phone verification
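As an illustration of the format rule in the note above, here's a minimal Bash sketch; the regex and digit lengths are assumptions for clarity, not Microsoft's actual validation logic:

```bash
# Check a number against the "+CountryCode PhoneNumber" format described above.
# Assumed pattern: "+", a 1-3 digit country code, a single space, then digits.
re='^\+[0-9]{1,3} [0-9]{4,14}$'
phone='+1 4251234567'
if [[ "$phone" =~ $re ]]; then
  echo "format looks valid"
else
  echo "expected format like: +1 4251234567 (space after the country code)"
fi
```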
-For Azure AD Multi-Factor Authentication or SSPR, users can choose to receive an SMS message with a verification code to enter in the sign-in interface, or receive a phone call.
+For Azure AD Multi-Factor Authentication or SSPR, users can choose to receive a text message with a verification code to enter in the sign-in interface, or receive a phone call.
If users don't want their mobile phone number to be visible in the directory but want to use it for password reset, administrators shouldn't populate the phone number in the directory. Instead, users should populate their **Authentication Phone** at [My Sign-Ins](https://aka.ms/setupsecurityinfo). Administrators can see this information in the user's profile, but it's not published elsewhere.
If users don't want their mobile phone number to be visible in the directory but
> [!NOTE]
> Phone extensions are supported only for office phones.
-Microsoft doesn't guarantee consistent SMS or voice-based Azure AD Multi-Factor Authentication prompt delivery by the same number. In the interest of our users, we may add or remove short codes at any time as we make route adjustments to improve SMS deliverability. Microsoft doesn't support short codes for countries/regions besides the United States and Canada.
+Microsoft doesn't guarantee consistent text message or voice-based Azure AD Multi-Factor Authentication prompt delivery by the same number. In the interest of our users, we may add or remove short codes at any time as we make route adjustments to improve text message deliverability. Microsoft doesn't support short codes for countries/regions besides the United States and Canada.
> [!NOTE]
-> Starting July 2023, we will apply delivery method optimizations such that tenants with a free or trial subscription may receive an SMS message or voice call.
+> Starting July 2023, we will apply delivery method optimizations such that tenants with a free or trial subscription may receive a text message or voice call.
-### SMS message verification
+### Text message verification
-With SMS message verification during SSPR or Azure AD Multi-Factor Authentication, a Short Message Service (SMS) text is sent to the mobile phone number containing a verification code. To complete the sign-in process, the verification code provided is entered into the sign-in interface.
+With text message verification during SSPR or Azure AD Multi-Factor Authentication, a text message that contains a verification code is sent to the mobile phone number. To complete the sign-in process, the user enters the verification code into the sign-in interface.
-Android users can enable Rich Communication Services (RCS) on their devices. RCS offers encryption and other improvements over SMS. For Android, MFA text messages may be sent over RCS rather than SMS. The MFA text message is similar to SMS, but RCS messages have more Microsoft branding and a verified checkmark so users know they can trust the message.
+Text messages can be sent over channels such as Short Message Service (SMS), Rich Communication Services (RCS), or WhatsApp.
+
+Android users can enable RCS on their devices. RCS offers encryption and other improvements over SMS. For Android, MFA text messages may be sent over RCS rather than SMS. The MFA text message is similar to SMS, but RCS messages have more Microsoft branding and a verified checkmark so users know they can trust the message.
:::image type="content" source="media/concept-authentication-methods/brand.png" alt-text="Screenshot of Microsoft branding in RCS messages.":::
+Some users with phone numbers that have country codes belonging to India, Indonesia, and New Zealand may receive their verification codes via WhatsApp. Like RCS, these messages are similar to SMS, but have more Microsoft branding and a verified checkmark. Only users who have WhatsApp receive verification codes via this channel. To determine whether a user has WhatsApp, we silently attempt to deliver a message via the app by using the phone number they already registered for text message verification, and check whether it's successfully delivered. If users don't have any internet connectivity or uninstall WhatsApp, they'll receive their verification codes via SMS. The phone number associated with Microsoft's WhatsApp Business Agent is *+1 (217) 302 1989*.
+ ### Phone call verification

With phone call verification during SSPR or Azure AD Multi-Factor Authentication, an automated voice call is made to the phone number registered by the user. To complete the sign-in process, the user is prompted to press # on their keypad.
With office phone call verification during SSPR or Azure AD Multi-Factor Authent
If you have problems with phone authentication for Azure AD, review the following troubleshooting steps:
-* ΓÇ£You've hit our limit on verification callsΓÇ¥ or ΓÇ£YouΓÇÖve hit our limit on text verification codesΓÇ¥ error messages during sign-in
+* "You've hit our limit on verification calls" or "You've hit our limit on text verification codes" error messages during sign-in
 * Microsoft may limit repeated authentication attempts that are performed by the same user or organization in a short period of time. This limitation doesn't apply to Microsoft Authenticator or verification codes. If you hit these limits, you can use the Authenticator app, use a verification code, or try to sign in again in a few minutes.
* "Sorry, we're having trouble verifying your account" error message during sign-in
- * Microsoft may limit or block voice or SMS authentication attempts that are performed by the same user, phone number, or organization due to high number of voice or SMS authentication attempts. If you are experiencing this error, you can try another method, such as Authenticator App or verification code, or reach out to your admin for support.
+ * Microsoft may limit or block voice or text message authentication attempts that are performed by the same user, phone number, or organization due to a high number of voice or text message authentication attempts. If you're experiencing this error, you can try another method, such as the Authenticator app or a verification code, or reach out to your admin for support.
* Blocked caller ID on a single device.
 * Review any blocked numbers configured on the device.
* Wrong phone number or incorrect country/region code, or confusion between a personal phone number and a work phone number.
If you have problems with phone authentication for Azure AD, review the followin
* Ensure that the user has their phone turned on and that service is available in their area, or use an alternate method.
* User is blocked.
 * Have an Azure AD administrator unblock the user in the Azure portal.
-* SMS is not subscribed on the device.
- * Have the user change methods or activate SMS on the device.
-* Faulty telecom providers such as no phone input detected, missing DTMF tones issues, blocked caller ID on multiple devices, or blocked SMS across multiple devices.
- * Microsoft uses multiple telecom providers to route phone calls and SMS messages for authentication. If you see any of the above issues, have a user attempt to use the method at least five times within 5 minutes and have that user's information available when contacting Microsoft support.
+* Text messaging platforms like SMS, RCS, or WhatsApp aren't subscribed on the device.
+ * Have the user change methods or activate a text messaging platform on the device.
+* Faulty telecom providers, such as when no phone input is detected, missing DTMF tones, blocked caller ID on multiple devices, or blocked text messages across multiple devices.
+ * Microsoft uses multiple telecom providers to route phone calls and text messages for authentication. If you see any of these issues, have a user attempt to use the method at least five times within 5 minutes and have that user's information available when contacting Microsoft support.
* Poor signal quality.
 * Have the user install the Authenticator app and attempt to sign in over a Wi-Fi connection.
- * Or, use SMS authentication instead of phone (voice) authentication.
+ * Or use a text message instead of phone (voice) authentication.
* Phone number is blocked and can't be used for voice MFA
active-directory Concept Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md
Previously updated : 08/23/2023 Last updated : 08/28/2023
# Conditional Access authentication strength
-Authentication strength is a Conditional Access control that allows administrators to specify which combination of authentication methods can be used to access a resource. For example, they can make only phishing-resistant authentication methods available to access a sensitive resource. But to access a nonsensitive resource, they can allow less secure multifactor authentication (MFA) combinations, such as password + SMS.
+Authentication strength is a Conditional Access control that allows administrators to specify which combination of authentication methods can be used to access a resource. For example, they can make only phishing-resistant authentication methods available to access a sensitive resource. But to access a nonsensitive resource, they can allow less secure multifactor authentication (MFA) combinations, such as password + text message.
Authentication strength is based on the [Authentication methods policy](concept-authentication-methods.md), where administrators can scope authentication methods for specific users and groups to be used across Azure Active Directory (Azure AD) federated applications. Authentication strength allows further control over the usage of these methods based upon specific scenarios such as sensitive resource access, user risk, location, and more.
The following table lists the combinations of authentication methods for each bu
|Email One-time pass (Guest)| | | | -->
-<sup>1</sup> Something you have refers to one of the following methods: SMS, voice, push notification, software OATH token and Hardware OATH token.
+<sup>1</sup> Something you have refers to one of the following methods: text message, voice, push notification, software OATH token, and hardware OATH token.
The following API call can be used to list definitions of all the built-in authentication strengths:
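The call itself is elided in this excerpt; as a sketch, assuming the Microsoft Graph authenticationStrengthPolicies endpoint and a token consented for Policy.Read.All:

```bash
# List built-in (and custom) authentication strength policy definitions.
# $TOKEN is assumed to hold an access token with Policy.Read.All consent.
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/policies/authenticationStrengthPolicies"
```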
Users may register for authentications for which they are enabled, and in other
### How an authentication strength policy is evaluated during sign-in
-The authentication strength Conditional Access policy defines which methods can be used. Azure AD checks the policy during sign-in to determine the userΓÇÖs access to the resource. For example, an administrator configures a Conditional Access policy with a custom authentication strength that requires FIDO2 Security Key or Password + SMS. The user accesses a resource protected by this policy. During sign-in, all settings are checked to determine which methods are allowed, which methods are registered, and which methods are required by the Conditional Access policy. To be used, a method must be allowed, registered by the user (either before or as part of the access request), and satisfy the authentication strength.
+The authentication strength Conditional Access policy defines which methods can be used. Azure AD checks the policy during sign-in to determine the user's access to the resource. For example, an administrator configures a Conditional Access policy with a custom authentication strength that requires FIDO2 Security Key or Password + text message. The user accesses a resource protected by this policy. During sign-in, all settings are checked to determine which methods are allowed, which methods are registered, and which methods are required by the Conditional Access policy. To be used, a method must be allowed, registered by the user (either before or as part of the access request), and satisfy the authentication strength.
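Illustratively, that rule reduces to an intersection check; a toy Bash sketch (method names are made up for clarity, not product logic):

```bash
# A method is usable only when it appears in all three sets:
# allowed by policy, registered by the user, and part of the required strength.
allowed="fido2 password+textmessage"
registered="password+textmessage"
strength="fido2 password+textmessage"

for m in $strength; do
  if [[ " $allowed " == *" $m "* && " $registered " == *" $m "* ]]; then
    echo "usable method: $m"
  fi
done
```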
### How multiple Conditional Access authentication strength policies are evaluated
The following factors determine if the user gains access to the resource:
- Which methods are allowed for user sign-in in the Authentication methods policy?
- Is the user registered for any available method?
-When a user accesses a resource protected by an authentication strength Conditional Access policy, Azure AD evaluates if the methods they have previously used satisfy the authentication strength. If a satisfactory method was used, Azure AD grants access to the resource. For example, let's say a user signs in with password + SMS. They access a resource protected by MFA authentication strength. In this case, the user can access the resource without another authentication prompt.
+When a user accesses a resource protected by an authentication strength Conditional Access policy, Azure AD evaluates if the methods they have previously used satisfy the authentication strength. If a satisfactory method was used, Azure AD grants access to the resource. For example, let's say a user signs in with password + text message. They access a resource protected by MFA authentication strength. In this case, the user can access the resource without another authentication prompt.
Let's suppose they next access a resource protected by Phishing-resistant MFA authentication strength. At this point, they'll be prompted to provide a phishing-resistant authentication method, such as Windows Hello for Business.
In external user scenarios, the authentication methods that can satisfy authenti
|Authentication method |Home tenant | Resource tenant |
|-|-|-|
-|SMS as second factor | &#x2705; | &#x2705; |
+|Text message as second factor | &#x2705; | &#x2705; |
|Voice call | &#x2705; | &#x2705; |
|Microsoft Authenticator push notification | &#x2705; | &#x2705; |
|Microsoft Authenticator phone sign-in | &#x2705; | |
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Azure AD CBA is an MFA (Multi factor authentication) capable method, that is Azu
If a CBA-enabled user only has a Single Factor (SF) certificate and needs MFA:
1. Use Password + SF certificate.
1. Issue Temporary Access Pass (TAP)
- 1. Admin adds Phone Number to user account and allows Voice/SMS method for user.
+ 1. Admin adds a phone number to the user account and allows the voice/text message method for the user.
If a CBA-enabled user hasn't yet been issued a certificate and needs MFA:
1. Issue Temporary Access Pass (TAP)
- 1. Admin adds Phone Number to user account and allows Voice/SMS method for user.
+ 1. Admin adds a phone number to the user account and allows the voice/text message method for the user.
If a CBA-enabled user can't use an MF cert (such as on a mobile device without smart card support) and needs MFA:
1. Issue Temporary Access Pass (TAP)
1. User registers another MFA method (when user can use MF cert)
1. Use Password + MF cert (when user can use MF cert)
- 1. Admin adds Phone Number to user account and allows Voice/SMS method for user
+ 1. Admin adds a phone number to the user account and allows the voice/text message method for the user.
## MFA with Single-factor certificate-based authentication
active-directory Concept Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication.md
The following images show how Azure AD CBA simplifies the customer environment b
The following scenarios are supported:
- User sign-ins to web browser-based applications on all platforms.
-- User sign-ins to Office mobile apps, including Outlook, OneDrive, and so on.
+- User sign-ins to Office mobile apps on iOS/Android platforms and Office native apps on Windows, including Outlook, OneDrive, and so on.
- User sign-ins on mobile native browsers.
- Support for granular authentication rules for multifactor authentication by using the certificate issuer **Subject** and **policy OIDs**.
- Configuring certificate-to-user account bindings by using any of the certificate fields:
active-directory Concept Mfa Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-licensing.md
The following table provides a list of the features that are available in the va
| Protect Azure AD tenant admin accounts with MFA | ● | ● (*Azure AD Global Administrator* accounts only) | ● | ● | ● |
| Mobile app as a second factor | ● | ● | ● | ● | ● |
| Phone call as a second factor | | | ● | ● | ● |
-| SMS as a second factor | | ΓùÅ | ΓùÅ | ΓùÅ | ΓùÅ |
+| Text message as a second factor | | ● | ● | ● | ● |
| Admin control over verification methods | | ● | ● | ● | ● |
| Fraud alert | | | | ● | ● |
| MFA Reports | | | | ● | ● |
Our recommended approach to enforce MFA is using [Conditional Access](../conditi
| Configuration flexibility | | ● | |
| **Functionality** | | | |
| Exempt users from the policy | | ● | ● |
-| Authenticate by phone call or SMS | ΓùÅ | ΓùÅ | ΓùÅ |
+| Authenticate by phone call or text message | ● | ● | ● |
| Authenticate by Microsoft Authenticator and Software tokens | ● | ● | ● |
| Authenticate by FIDO2, Windows Hello for Business, and Hardware tokens | | ● | ● |
| Blocks legacy authentication protocols | ● | ● | ● |
active-directory Concept Mfa Regional Opt In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-regional-opt-in.md
Previously updated : 09/11/2023 Last updated : 09/12/2023
As a protection for our customers, Microsoft doesn't automatically support telep
In today's digital world, telecommunication services have become ingrained into our lives. But advancements come with a risk of fraudulent activities. International Revenue Share Fraud (IRSF) is a threat with severe financial implications that also makes using services more difficult. Let's look at IRSF fraud more in-depth.
-IRSF is a type of telephony fraud where criminals exploit the billing system of telecommunication services providers to make profit for themselves. Bad actors gain unauthorized access to a telecommunication network and divert traffic to those networks to skim profit for every transaction that is sent to that network. To divert traffic, bad actors steal existing usernames and passwords, create new usernames and passwords, or try a host of other things to send SMS messages and voice calls through their telecommunication network. Bad actors take advantage of multifactor authentication screens, which require an SMS or voice call before a user can access their account. This activity causes exorbitant charges and makes services unreliable for our customers, causing downtime, and system errors.
+IRSF is a type of telephony fraud where criminals exploit the billing system of telecommunication services providers to make a profit for themselves. Bad actors gain unauthorized access to a telecommunication network and divert traffic to those networks to skim profit from every transaction sent to that network. To divert traffic, bad actors steal existing usernames and passwords, create new usernames and passwords, or try a host of other things to send text messages and voice calls through their telecommunication network. Bad actors take advantage of multifactor authentication screens, which require a text message or voice call before a user can access their account. This activity causes exorbitant charges and makes services unreliable for our customers, causing downtime and system errors.
Here's how an IRSF attack may happen:
1. A bad actor first gets premium rate phone numbers and registers them.
-1. A bad actor uses automated scripts to request voice calls or SMS messages. The bad actor is colluding with number providers and the telecommunication network to drive more traffic to those services. The bad actor skims some of the profits of the increased traffic.
+1. A bad actor uses automated scripts to request voice calls or text messages. The bad actor is colluding with number providers and the telecommunication network to drive more traffic to those services. The bad actor skims some of the profits of the increased traffic.
1. A bad actor hops around different region codes to continue to drive traffic and make it harder to get caught.

The most common way to conduct IRSF is through an end-user experience that requires a two-factor authentication code. Bad actors add those premium rate phone numbers and pump traffic to them by requesting two-factor authentication codes. This activity results in revenue-skimming, and can lead to billions of dollars in losses.
For SMS verification, the following region codes require an opt-in.
| 998 | Uzbek |

## Voice verification
-For Voice verification, the following region codes require an opt-in.
+For voice verification, the following region codes require an opt-in.
| Region Code | Region Name |
|:-- |:- |
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
In addition to choosing who can be nudged, you can define how many days a user c
![Confirmation of approval](./media/how-to-nudge-authenticator-app/approved.png)
- 1. Authenticator app is now successfully set up as the userΓÇÖs default sign-in method.
+ 1. Authenticator app is now successfully set up as the user's default sign-in method.
![Installation complete](./media/how-to-nudge-authenticator-app/finish.png)
In addition to using the Azure portal, you can also enable the registration campaign.
To configure the policy using Graph Explorer:
-1. Sign in to Graph Explorer and ensure youΓÇÖve consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
+1. Sign in to Graph Explorer and ensure you've consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
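The request itself is elided in this excerpt; as a sketch, the registration campaign settings live under the authentication methods policy in Microsoft Graph. Assuming a token that carries the consents above and that jq is installed:

```bash
# Read the registration campaign settings; the jq path is an assumption based
# on the registrationEnforcement property of the authentication methods policy.
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy" \
  | jq '.registrationEnforcement.authenticationMethodsRegistrationCampaign'
```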
To open the Permissions panel:
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md
The following table lists more numbers for different countries.
| Vietnam | +84 2039990161 |

> [!NOTE]
-> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What SMS short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-sms-short-codes-are-used-for-sending-sms-messages-to-my-users-).
+> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-short-codes-are-used-for-sending-text-messages-to-my-users-).
To configure your own caller ID number, complete the following steps:
To configure your own caller ID number, complete the following steps:
1. Select **Save**.

> [!NOTE]
-> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What SMS short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-sms-short-codes-are-used-for-sending-sms-messages-to-my-users-).
+> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-short-codes-are-used-for-sending-text-messages-to-my-users-).
### Custom voice messages
active-directory How To Add Remove User To Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-add-remove-user-to-group.md
This article describes how you can add or remove a new user for a group in Permissions Management.
## Add a user
-1. Navigate to the [Microsoft Entra admin center](https://entra.microsoft.com/#home).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/#home).
1. From the Azure Active Directory tile, select **Go to Azure Active Directory**.
1. From the navigation pane, select the **Groups** drop-down menu, then **All groups**.
1. Select the group name for the group you want to add the user to.
This article describes how you can add or remove a new user for a group in Permi
## Remove a user
-1. Navigate to the Microsoft [Entra admin center](https://entra.microsoft.com/#home).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/#home).
1. From the Azure Active Directory tile, select **Go to Azure Active Directory**.
1. From the navigation pane, select the **Groups** drop-down menu, then **All groups**.
1. Select the group name for the group you want to remove the user from.
active-directory Onboard Add Account After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-add-account-after-onboarding.md
Previously updated : 06/16/2023 Last updated : 09/13/2023
This article describes how to add an Amazon Web Services (AWS) account, Microsof
The **Permissions Management Onboarding - AWS Member Account Details** page displays.
-1. Go to **Enter Your AWS Account IDs**, and then select **Add** (the plus **+** sign).
+1. Go to **Enter Your AWS Account IDs**, then select **Add** (the plus **+** sign).
1. Copy your account ID from AWS and paste it into the **Enter Account ID** box. The AWS account ID is automatically added to the script.
This article describes how to add an Amazon Web Services (AWS) account, Microsof
The **Permissions Management Onboarding - Summary** page displays.
-1. Go to **Azure subscription IDs**, and then select **Edit** (the pencil icon).
-1. Go to **Enter your Azure Subscription IDs**, and then select **Add subscription** (the plus **+** sign).
+1. Go to **Azure subscription IDs**, then select **Edit** (the pencil icon).
+1. Go to **Enter your Azure Subscription IDs**, then select **Add subscription** (the plus **+** sign).
1. Copy your subscription ID from Azure and paste it into the subscription ID box. The subscription ID is automatically added to the subscriptions line in the script.
active-directory Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md
Previously updated : 08/24/2023 Last updated : 09/13/2023
This article describes how to onboard an Amazon Web Services (AWS) account in Microsoft Entra Permissions Management.

> [!NOTE]
-> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Microsoft Entra Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
+> You must have Global Administrator permissions to perform the tasks in this article.
## Explanation
Any current or future accounts found get onboarded automatically.
To view status of onboarding after saving the configuration:
-- Navigate to data collectors tab.
+- Go to the **Data Collectors** tab.
- Click on the status of the data collector.
-- View accounts on the In Progress page
+- View accounts on the **In Progress** page
#### Option 2: Enter authorization systems

1. In the **Permissions Management Onboarding - AWS Member Account Details** page, enter the **Member Account Role** and the **Member Account IDs**.
To view status of onboarding after saving the configuration:
You can enter up to 100 account IDs. Click the plus icon next to the text box to add more account IDs.

> [!NOTE]
- > Perform the next 6 steps for each account ID you add.
+ > Do the following steps for each account ID you add:
1. Open another browser window and sign in to the AWS console for the member account.
This option detects all AWS accounts that are accessible through OIDC role acces
- If AWS SSO is enabled, organization account CFT also adds the policy needed to collect AWS SSO configuration details.
- Deploy Member account CFT in all the accounts that need to be monitored by Entra Permissions Management. These actions create a cross account role that trusts the OIDC role created earlier. The SecurityAudit policy is attached to the role created for data collection.
- Click Verify and Save.
-- Navigate to newly create Data Collector row under AWSdata collectors.
-- Click on Status column when the row has ΓÇ£PendingΓÇ¥ status
+- Go to the newly created Data Collector row under AWS data collectors.
+- Click the **Status** column when the row has a **Pending** status.
- To onboard and start collection, choose specific ones from the detected list and consent for collection.

### 6. Review and save
active-directory Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md
This article describes how to onboard a Microsoft Azure subscription or subscriptions on Permissions Management. Onboarding a subscription creates a new authorization system to represent the Azure subscription in Permissions Management.

> [!NOTE]
-> A *global administrator* or *root user* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
+> You must have [Global Administrator](https://aka.ms/globaladmin) permissions to perform the tasks in this article.
## Explanation
The Permissions Management service is built on Azure, and given you're onboardin
## Prerequisites
-To add Permissions Management to your Azure AD tenant:
-- You must have an Azure AD user account and an Azure command-line interface (Azure CLI) on your system, or an Azure subscription. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
+To add Permissions Management to your Entra ID tenant:
+- You must have an Entra ID user account and an Azure command-line interface (Azure CLI) on your system, or an Azure subscription. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
- You must have **Microsoft.Authorization/roleAssignments/write** permission at the subscription or management group scope to perform these tasks. If you don't have this permission, you can ask someone who has this permission to perform these tasks for you.

## How to onboard an Azure subscription
Choose from three options to manage Azure subscriptions.
#### Option 1: Automatically manage
-This option allows subscriptions to be automatically detected and monitored without further work required. A key benefit of automatic management is that any current or future subscriptions found will be onboarded automatically. The steps to detect a list of subscriptions and onboard for collection are as follows:
+This option lets subscriptions be automatically detected and monitored without further work required. A key benefit of automatic management is that any current or future subscriptions found are onboarded automatically. The steps to detect a list of subscriptions and onboard for collection are as follows:
- First, grant the Reader role to the Cloud Infrastructure Entitlement Management application at management group or subscription scope. To do this:
1. In the EPM portal, left-click the cog on the top right-hand side.
-1. Navigate to data collectors tab
-1. Ensure 'Azure' is selected
-1. Click ΓÇÿCreate ConfigurationΓÇÖ
-1. For onboarding mode, select ΓÇÿAutomatically ManageΓÇÖ
+1. Go to the **Data Collectors** tab.
+1. Ensure **Azure** is selected.
+1. Click **Create Configuration.**
+1. For onboarding mode, select **Automatically Manage.**
> [!NOTE]
- > The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. This can be performed manually in the Entra console, or programmatically with PowerShell or the Azure CLI.
+ > The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. This can be done manually in the Entra console, or programmatically with PowerShell or the Azure CLI.
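For example, a hedged Azure CLI sketch of that role assignment; the service principal display name and the placeholder subscription ID are assumptions, not prescribed values:

```bash
# Look up the object ID of the Cloud Infrastructure Entitlement Management
# service principal (the display name is an assumption).
SP_ID=$(az ad sp list --display-name "Cloud Infrastructure Entitlement Management" \
  --query "[0].id" -o tsv)

# Grant the Reader role at subscription scope; a management group scope
# (/providers/Microsoft.Management/managementGroups/<id>) also works.
az role assignment create \
  --assignee-object-id "$SP_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>"
```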
-- Once complete, Click ΓÇÿVerify Now & SaveΓÇÖ
+- Once complete, click **Verify Now & Save**.
To view status of onboarding after saving the configuration:
-1. Collectors will now be listed and change through status types. For each collector listed with a status of ΓÇ£Collected InventoryΓÇ¥, click on that status to view further information.
-1. You can then view subscriptions on the In Progress page
+1. Collectors are now listed and change through status types. For each collector listed with a status of **Collected Inventory**, click that status to view further information.
+1. You can then view subscriptions on the In Progress page.
#### Option 2: Enter authorization systems
-You have the ability to specify only certain subscriptions to manage and monitor with MEPM (up to 100 per collector). Follow the steps below to configure these subscriptions to be monitored:
+You have the ability to specify only certain subscriptions to manage and monitor with Permissions Management (up to 100 per collector). Follow the steps below to configure these subscriptions to be monitored:
1. For each subscription you wish to manage, ensure that the 'Reader' role has been granted to the Cloud Infrastructure Entitlement Management application for the subscription.
1. In the EPM portal, click the cog on the top right-hand side.
-1. Navigate to data collectors tab
+1. Go to the **Data Collectors** tab.
1. Ensure 'Azure' is selected.
1. Click 'Create Configuration'.
1. Select 'Enter Authorization Systems'.
You have the ability to specify only certain subscriptions to manage and monitor
To view status of onboarding after saving the configuration:
-1. Navigate to data collectors tab.
+1. Go to the **Data Collectors** tab.
1. Click on the status of the data collector.
-1. View subscriptions on the In Progress page
+1. View subscriptions on the In Progress page.
#### Option 3: Select authorization systems
This option detects all subscriptions that are accessible by the Cloud Infrastructure Entitlement Management application.
- First, grant the Reader role to the Cloud Infrastructure Entitlement Management application at management group or subscription scope.
-1. In the EPM portal, click the cog on the top right-hand side.
-1. Navigate to data collectors tab
-1. Ensure 'Azure' is selected
-1. Click ΓÇÿCreate ConfigurationΓÇÖ
-1. For onboarding mode, select ΓÇÿAutomatically ManageΓÇÖ
+1. In the Permissions Management portal, click the cog on the top right-hand side.
+1. Go to the **Data Collectors** tab.
+1. Ensure **Azure** is selected.
+1. Click **Create Configuration.**
+1. For onboarding mode, select **Automatically Manage.**
> [!NOTE]
> The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. You can do this manually in the Entra console, or programmatically with PowerShell or the Azure CLI.

-- Once complete, Click ΓÇÿVerify Now & SaveΓÇÖ
+- Once complete, click **Verify Now & Save**.
To view status of onboarding after saving the configuration:
-1. Navigate to newly create Data Collector row under Azure data collectors.
-1. Click on Status column when the row has ΓÇ£PendingΓÇ¥ status
+1. Go to the newly created Data Collector row under Azure data collectors.
+1. Click the **Status** column when the row has a **Pending** status.
1. To onboard and start collection, choose specific subscriptions from the detected list and consent for collection.

### 2. Review and save.
active-directory Onboard Enable Controller After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md
Previously updated : 08/24/2023 Last updated : 09/13/2023
This article also describes how to disable the controller in Microsoft Azure and
> [!NOTE]
> You can enable the controller in AWS if you disabled it during onboarding. Once you enable the controller in AWS, you can't disable it.
-1. Sign in to the AWS console of the member account in a separate browser window.
-1. Go to the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
-1. On the **Data Collectors** dashboard, select **AWS**, and then select **Create Configuration**.
+1. In a separate browser window, sign in to the AWS console of the member account.
+1. Go to the Permissions Management home page, select **Settings** (the gear icon), then select the **Data Collectors** subtab.
+1. On the **Data Collectors** dashboard, select **AWS**, then select **Create Configuration**.
1. On the **Permissions Management Onboarding - AWS Member Account Details** page, select **Launch Template**. The **AWS CloudFormation create stack** page opens, displaying the template.
This article also describes how to disable the controller in Microsoft Azure and
This AWS CloudFormation stack creates a collection role in the member account with necessary permissions (policies) for data collection. A trust policy is set on this role to allow the OIDC role created in your AWS OIDC account to access it. These entities are listed in the **Resources** tab of your CloudFormation stack.

1. Return to Permissions Management, and on the Permissions Management **Onboarding - AWS Member Account Details** page, select **Next**.
-1. On **Permissions Management Onboarding ΓÇô Summary** page, review the information you've added, and then select **Verify Now & Save**.
+1. On the **Permissions Management Onboarding – Summary** page, review the information you've added, then select **Verify Now & Save**.
The following message appears: **Successfully created configuration.**
You can enable or disable the controller in Azure at the Subscription level of y
- If you have read-only permission, the **Role** column displays **Reader**.
- If you have administrative permission, the **Role** column displays **User Access Administrator**.
-1. To add the administrative role assignment, return to the **Access control (IAM)** page, and then select **Add role assignment**.
+1. To add the administrative role assignment, return to the **Access control (IAM)** page, then select **Add role assignment**.
1. Add or remove the role assignment for Cloud Infrastructure Entitlement Management.
-1. Go to the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
-1. On the **Data Collectors** dashboard, select **Azure**, and then select **Create Configuration**.
-1. On the **Permissions Management Onboarding - Azure Subscription Details** page, enter the **Subscription ID**, and then select **Next**.
-1. On **Permissions Management Onboarding ΓÇô Summary** page, review the controller permissions, and then select **Verify Now & Save**.
+1. Go to the Permissions Management home page, select **Settings** (the gear icon), then select the **Data Collectors** subtab.
+1. On the **Data Collectors** dashboard, select **Azure**, then select **Create Configuration**.
+1. On the **Permissions Management Onboarding - Azure Subscription Details** page, enter the **Subscription ID**, then select **Next**.
+1. On the **Permissions Management Onboarding – Summary** page, review the controller permissions, then select **Verify Now & Save**.
The following message appears: **Successfully Created Configuration.**
You can enable or disable the controller in Azure at the Subscription level of y
1. Optionally, execute ``mciem-enable-gcp-api.sh`` to enable all recommended GCP APIs.
-1. Go to the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+1. Go to the Permissions Management home page, select **Settings** (the gear icon), then select the **Data Collectors** subtab.
1. On the **Data Collectors** dashboard, select **GCP**, and then select **Create Configuration**.
1. On the **Permissions Management Onboarding - Azure AD OIDC App Creation** page, select **Next**.
1. On the **Permissions Management Onboarding - GCP OIDC Account Details & IDP Access** page, enter the **OIDC Project Number** and **OIDC Project ID**, and then select **Next**.
-1. On the **Permissions Management Onboarding - GCP Project IDs** page, enter the **Project IDs**, and then select **Next**.
-1. On the **Permissions Management Onboarding ΓÇô Summary** page, review the information you've added, and then select **Verify Now & Save**.
+1. On the **Permissions Management Onboarding - GCP Project IDs** page, enter the **Project IDs**, then select **Next**.
+1. On the **Permissions Management Onboarding – Summary** page, review the information you've added, then select **Verify Now & Save**.
The following message appears: **Successfully Created Configuration.**
active-directory Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md
Previously updated : 07/21/2023 Last updated : 09/13/2023
This article describes how to enable Microsoft Entra Permissions Management in y
To enable Permissions Management in your organization: -- You must have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
+- You must have an Entra ID tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
- You must be eligible for or have an active assignment to the *Permissions Management Administrator* role as a user in that tenant.

## How to enable Permissions Management on your Azure AD tenant

1. In your browser:
- 1. Go to [Entra services](https://entra.microsoft.com) and use your credentials to sign in to [Azure Active Directory](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview).
- 1. If you aren't already authenticated, sign in as a *Permissions Management Administrator* user.
- 1. If needed, activate the *Permissions Management Administrator* role in your Azure AD tenant.
- 1. In the Azure portal, select **Permissions Management**, and then select the link to purchase a license or begin a trial.
+ 1. Browse to the [Microsoft Entra admin center](https://entra.microsoft.com) and sign in to [Microsoft Entra ID](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) as a [Global Administrator](https://aka.ms/globaladmin).
+ 1. If needed, activate the *Permissions Management Administrator* role in your Entra ID tenant.
+ 1. In the Azure portal, select **Entra Permissions Management**, then select the link to purchase a license or begin a trial.
## Activate a free trial or paid license

There are two ways to activate a trial or a full product license.

-- The first way is to go to [admin.microsoft.com](https://admin.microsoft.com).
- - Sign in with *Global Admin* or *Billing Admin* credentials for your tenant.
- - Go to Setup and sign up for an Entra Permissions Management trial.
- - For self-service, navigate to the [Microsoft 365 portal](https://aka.ms/TryPermissionsManagement) to sign up for a 45-day free trial or to purchase licenses.
-- The second way is through Volume Licensing or Enterprise agreements. If your organization falls under a volume license or enterprise agreement scenario, contact your Microsoft representative.
+- The first way is to go to the [Microsoft 365 admin center](https://admin.microsoft.com).
+ - Sign in as a *Global Administrator* for your tenant.
+ - Go to Setup and sign up for a Microsoft Entra Permissions Management trial.
+ - For self-service, go to the [Microsoft 365 portal](https://aka.ms/TryPermissionsManagement) to sign up for a 45-day free trial or to purchase licenses.
+- The second way is through Volume Licensing or Enterprise agreements.
+ - If your organization falls under a volume license or enterprise agreement scenario, contact your Microsoft representative.
Permissions Management launches with the **Data Collectors** dashboard.
Use the **Data Collectors** dashboard in Permissions Management to configure dat
1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches:
- - In the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+ - In the Permissions Management home page, select **Settings** (the gear icon), then select the **Data Collectors** subtab.
1. Select the authorization system you want: **AWS**, **Azure**, or **GCP**.
active-directory Permissions Management Quickstart Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/permissions-management-quickstart-guide.md
Previously updated : 08/24/2023 Last updated : 09/13/2023
Before you begin, you need access to these tools for the onboarding process:
- Access to a local BASH shell with the Azure CLI or Azure Cloud Shell using BASH environment (Azure CLI is included). - Access to AWS, Azure, and GCP consoles.-- A user must have *Global Administrator* or *Permissions Management Administrator* role assignments to create a new app registration in Entra ID tenant is required for AWS and GCP onboarding.
+- A user must have the *Global Administrator* role assignment to create a new app registration in the Entra ID tenant, which is required for AWS and GCP onboarding.
## Step 1: Set-up Permissions Management
If the above points are met, continue with:
[Enable Microsoft Entra Permissions Management in your organization](onboard-enable-tenant.md)
-Ensure you're a *Global Administrator* or *Permissions Management Administrator*. Learn more about [Permissions Management roles and permissions](product-roles-permissions.md).
+Ensure you're a *Global Administrator*. Learn more about [Permissions Management roles and permissions](product-roles-permissions.md).
## Step 2: Onboard your multicloud environment
Permissions Management automatically discovers all current subscriptions. Once d
> To use **Automatic** or **Select** modes, the controller must be enabled while configuring data collection. To configure data collection:
-1. In Permissions Management, navigate to the data collectors page.
-2. Select a cloud environment: AWS, Azure, or GCP.
+1. In Permissions Management, go to the **Data Collectors** page.
+2. Select a cloud environment: **AWS**, **Azure**, or **GCP**.
3. Click **Create configuration**. ### Onboard Amazon Web Services (AWS)
active-directory Product Privileged Role Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-privileged-role-insights.md
The **Azure AD Insights** tab shows you who is assigned to privileged roles in y
> Microsoft recommends that you keep two break glass accounts permanently assigned to the global administrator role. Make sure that these accounts don't require the same multi-factor authentication mechanism to sign in as other administrative accounts. This is described further in [Manage emergency access accounts in Microsoft Entra](../roles/security-emergency-access.md). > [!NOTE]
-> Keep role assignments permanent if a user has a an additional Microsoft account (for example, an account they use to sign in to Microsoft services like Skype, or Outlook.com). If you require multi-factor authentication to activate a role assignment, a user with an additional Microsoft account will be locked out.
+> Keep role assignments permanent if a user has an additional Microsoft account (for example, an account they use to sign in to Microsoft services like Skype or Outlook.com). If you require multi-factor authentication to activate a role assignment, a user with an additional Microsoft account will be locked out.
## Prerequisite To view information on the Azure AD Insights tab, you must have Permissions Management Administrator role permissions.
active-directory Product Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-roles-permissions.md
Title: Microsoft Entra Permissions Management roles and permissions description: Review roles and the level of permissions assigned in Microsoft Entra Permissions Management.
-# customerintent: As a cloud administer, I want to understand Permissions Management role assignments, so that I can effectively assign the correct permissions to users.
+# customerintent: As a cloud administrator, I want to understand Permissions Management role assignments, so that I can effectively assign the correct permissions to users.
In Microsoft Azure and Microsoft Entra Permissions Management role assignments g
- **Billing Administrator**: Performs common billing related tasks like updating payment information. - **Permissions Management Administrator**: Manages all aspects of Entra Permissions Management.
-See [Microsoft Entra ID built-in roles to learn more.](product-privileged-role-insights.md)
+See [Microsoft Entra ID built-in roles](https://go.microsoft.com/fwlink/?linkid=2247090) to learn more.
## Enabling Permissions Management-- To activate a trial or purchase a license, you must have *Global Administrator* or *Billing Administrator* permissions.
+- To activate a trial or purchase a license, you must have *Global Administrator* permissions.
## Onboarding your Amazon Web Service (AWS), Microsoft Entra, or Google Cloud Platform (GCP) environments
active-directory Quickstart Single Page App Angular Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-angular-sign-in.md
Previously updated : 07/27/2023 Last updated : 09/13/2023
# Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using Angular
-In this quickstart, you download and run a code sample that demonstrates how a JavaScript Angular single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+This quickstart uses a sample Angular single-page app (SPA) to show you how to sign in users by using the [authorization code flow](/azure/active-directory/develop/v2-oauth2-auth-code-flow) with Proof Key for Code Exchange (PKCE) and call the Microsoft Graph API. The sample uses the [Microsoft Authentication Library for JavaScript](/javascript/api/@azure/msal-react) to handle authentication.
-See [How the sample works](#how-the-sample-works) for an illustration.
-
-This quickstart uses MSAL Angular v2 with the authorization code flow.
+In this article, you'll register a SPA in the Microsoft Entra admin center and download a sample Angular SPA. Next, you'll run the sample application, sign in with your personal Microsoft account or a work or school account, and sign out.
## Prerequisites
-* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* [Node.js](https://nodejs.org/en/download/)
-* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
-
-## Register your quickstart application
+* [Visual Studio 2022](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
+## Register the application in the Microsoft Entra admin center
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer). 1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application. 1. Browse to **Identity** > **Applications** > **App registrations**. 1. Select **New registration**.
-1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
+1. When the **Register an application** page appears, enter a name for your application, such as *identity-client-app*.
1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
+1. Select **Register**.
+1. The application's Overview pane displays upon successful registration. Record the **Application (client) ID** and **Directory (tenant) ID** to be used in your application source code.
+
+## Add a redirect URI
+ 1. Under **Manage**, select **Authentication**.
-1. Under **Platform configurations**, select **Add a platform**. In the pane that opens select **Single-page application**.
+1. Under **Platform configurations**, select **Add a platform**. In the pane that opens, select **Single-page application**.
1. Set the **Redirect URIs** value to `http://localhost:4200/`. This is the default port that Node.js listens on for your local machine. We'll return the authentication response to this URI after successfully authenticating the user. 1. Select **Configure** to apply the changes. 1. Under **Platform Configurations** expand **Single-page application**.
-1. Confirm that under **Grant types** ![Already configured](media/quickstart-v2-javascript/green-check.png) Your Redirect URI is eligible for the Authorization Code Flow with PKCE.
-
-#### Step 2: Download the project
-
-To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa/archive/main.zip).
-
-#### Step 3: Configure your JavaScript app
-
-In the *src* folder, open the *app* folder then open the *app.module.ts* file and update the `clientID`, `authority`, and `redirectUri` values in the `auth` object.
-
-```javascript
-// MSAL instance to be passed to msal-angular
-export function MSALInstanceFactory(): IPublicClientApplication {
- return new PublicClientApplication({
- auth: {
- clientId: 'Enter_the_Application_Id_Here',
- authority: 'Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here',
- redirectUri: 'Enter_the_Redirect_Uri_Here'
- },
- cache: {
- cacheLocation: BrowserCacheLocation.LocalStorage,
- storeAuthStateInCookie: isIE, // set to true for IE 11
- },
- });
-}
-```
-
-Modify the values in the `auth` section as described here:
+1. Confirm that for **Grant types** ![Already configured](media/quickstart-v2-javascript/green-check.png), your **Redirect URI** is eligible for the Authorization Code Flow with PKCE.
-- `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+## Clone or download the sample application
- To find the value of **Application (client) ID**, go to the app registration's **Overview** page.
-- `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).-- `Enter_the_Tenant_info_here` is set to one of the following:
- - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
+To obtain the sample application, you can either clone it from GitHub or download it as a .zip file.
- To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page.
- - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
- - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
- - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+- To clone the sample, open a command prompt, navigate to where you wish to create the project, and enter the following command:
- To find the value of **Supported account types**, go to the app registration's **Overview** page.
-- `Enter_the_Redirect_Uri_Here` is `http://localhost:4200/`.
+ ```console
+ git clone https://github.com/Azure-Samples/ms-identity-docs-code-javascript.git
+ ```
-The `authority` value in your *app.module.ts* should be similar to the following if you're using the main (global) Azure cloud:
+- [Download the .zip file](https://github.com/Azure-Samples/ms-identity-docs-code-javascript/archive/refs/heads/main.zip)
-```javascript
-authority: "https://login.microsoftonline.com/common",
-```
+## Configure the project
-Scroll down in the same file and update the `graphMeEndpoint`.
-- Replace the string `Enter_the_Graph_Endpoint_Herev1.0/me` with `https://graph.microsoft.com/v1.0/me`-- `Enter_the_Graph_Endpoint_Herev1.0/me` is the endpoint that API calls will be made against. For the main (global) Microsoft Graph API service, enter `https://graph.microsoft.com/` (include the trailing forward-slash). For more information, see the [documentation](/graph/deployments).
+1. In your IDE, open the project folder, *ms-identity-docs-code-javascript/angular-spa*, containing the sample.
+1. Open *src/app/app.module.ts* and replace the file contents with the following snippet:
-```javascript
-export function MSALInterceptorConfigFactory(): MsalInterceptorConfiguration {
- const protectedResourceMap = new Map<string, Array<string>>();
- protectedResourceMap.set('Enter_the_Graph_Endpoint_Herev1.0/me', ['user.read']);
+ :::code language="typescript" source="~/ms-identity-docs-code-javascript/angular-spa/src/app/app.module.ts":::
- return {
- interactionType: InteractionType.Redirect,
- protectedResourceMap
- };
-}
-```
+ * `TenantId` - The identifier of the tenant where the application is registered. Replace the text in quotes with the **Directory (tenant) ID** that was recorded earlier from the overview page of the registered application.
+ * `ClientId` - The identifier of the application, also referred to as the client. Replace the text in quotes with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application.
+ * `RedirectUri` - The **Redirect URI** of the application. If necessary, replace the text in quotes with the redirect URI that was recorded earlier from the overview page of the registered application.
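If you want a sense of the shape of that configuration before opening the file, here's a minimal sketch of an MSAL Angular instance factory with illustrative placeholder values; the sample's actual *app.module.ts* may differ in detail.

```javascript
// Sketch only: an MSAL instance factory for msal-angular, with placeholder values.
import { IPublicClientApplication, PublicClientApplication, BrowserCacheLocation } from '@azure/msal-browser';

export function MSALInstanceFactory(): IPublicClientApplication {
  return new PublicClientApplication({
    auth: {
      clientId: 'Enter_the_Application_Id_Here', // Application (client) ID
      authority: 'https://login.microsoftonline.com/Enter_the_Tenant_Info_Here', // Directory (tenant) ID, or "common"
      redirectUri: 'http://localhost:4200/', // Must match the registered redirect URI
    },
    cache: {
      cacheLocation: BrowserCacheLocation.LocalStorage,
    },
  });
}
```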
- #### Step 4: Run the project
+## Run the application and sign in
Run the project with a web server by using Node.js:

1. To start the server, run the following commands from within the project directory:

    ```console
    npm install
    npm start
    ```
-1. Browse to `http://localhost:4200/`.
-
-1. Select **Login** to start the sign-in process and then call the Microsoft Graph API.
-
- The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, click the **Profile** button to display your user information on the page.
-
-## More information
-
-### How the sample works
-
-![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
-
-### msal.js
+1. Copy the https URL that appears in the terminal, for example, `https://localhost:4200`, and paste it into a browser. We recommend using a private or incognito browser session.
+1. Follow the steps and enter the necessary details to sign in with your Microsoft account. You'll be asked for an email address so a one-time passcode can be sent to you. Enter the code when prompted.
+1. The application will request permission to maintain access to data you have given it access to, and to sign you in and read your profile. Select **Accept**.
+1. The following screenshot appears, indicating that you have signed in to the application and have accessed your profile details from the Microsoft Graph API.
-The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
+ :::image type="content" source="./media/quickstarts/angular-spa/quickstart-angular-spa-sign-in.png" alt-text="Screenshot of JavaScript App depicting the results of the API call.":::
-If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+## Sign out from the application
-```console
-npm install @azure/msal-browser @azure/msal-angular@2
-```
+1. Find the **Sign out** link in the top right corner of the page, and select it.
+1. You'll be prompted to pick an account to sign out from. Select the account you used to sign in.
+1. A message appears indicating that you have signed out.
-## Next steps
+## Related content
-For a detailed step-by-step guide on building the auth code flow application using vanilla JavaScript, see the following tutorial:
+- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](./quickstart-web-api-aspnet-core-protect-api.md)
-> [!div class="nextstepaction"]
-> [Tutorial to sign in users and call Microsoft Graph](tutorial-v2-javascript-auth-code.md)
+- Learn more by building this Angular SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./tutorial-v2-angular-auth-code.md)
active-directory Quickstart Single Page App Javascript Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-javascript-sign-in.md
Previously updated : 07/27/2023 Last updated : 09/13/2023
# Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using JavaScript
-In this quickstart, you download and run a code sample that demonstrates how a JavaScript single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow with Proof Key for Code Exchange (PKCE). The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+This quickstart uses a sample JavaScript (JS) single-page app (SPA) to show you how to sign in users by using the [authorization code flow](/azure/active-directory/develop/v2-oauth2-auth-code-flow) with Proof Key for Code Exchange (PKCE) and call the Microsoft Graph API. The sample uses the [Microsoft Authentication Library for JavaScript](/javascript/api/@azure/msal-react) to handle authentication.
-See [How the sample works](#how-the-sample-works) for an illustration.
+In this article, you'll register a SPA in the Microsoft Entra admin center and download a sample JS SPA. Next, you'll run the sample application, sign in with your personal Microsoft account or a work or school account, and sign out.
## Prerequisites
-* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* [Node.js](https://nodejs.org/en/download/)
-* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+* [Visual Studio 2022](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
-
-## Register and download your quickstart application
--
-### Step 1: Register your application
+## Register the application in the Microsoft Entra admin center
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer). 1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Browse to **Identity** > **Applications** > **Application registrations**.
+1. Browse to **Identity** > **Applications** > **App registrations**.
1. Select **New registration**.
-1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
+1. When the **Register an application** page appears, enter a name for your application, such as *identity-client-app*.
1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
-1. Under **Manage**, select **Authentication**.
-1. Under **Platform configurations**, select **Add a platform**. In the pane that opens select **Single-page application**.
-1. Set the **Redirect URI** value to `http://localhost:3000/`.
-1. Select **Configure**.
-
-### Step 2: Download the project
-
-To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-javascript-v2/archive/master.zip).
-
-### Step 3: Configure your JavaScript app
-
-In the *app* folder, open the *authConfig.js* file, and then update the `clientID`, `authority`, and `redirectUri` values in the `msalConfig` object.
-
-```javascript
-// Config object to be passed to MSAL on creation
-const msalConfig = {
- auth: {
- clientId: "Enter_the_Application_Id_Here",
- authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here",
- redirectUri: "Enter_the_Redirect_Uri_Here",
- },
- cache: {
- cacheLocation: "sessionStorage", // This configures where your cache will be stored
- storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge
- }
-};
-```
-
-Modify the values in the `msalConfig` section:
--- `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.-
- To find the value of **Application (client) ID**, go to the app registration's **Overview** page.
-- `Enter_the_Cloud_Instance_Id_Here` is the Azure cloud instance. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).-- `Enter_the_Tenant_info_here` is one of the following:
- - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
-
- To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page.
- - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
- - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
- - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+1. Select **Register**.
+1. The application's Overview pane displays upon successful registration. Record the **Application (client) ID** and **Directory (tenant) ID** to be used in your application source code.
- To find the value of **Supported account types**, go to the app registration's **Overview** page.
-- `Enter_the_Redirect_Uri_Here` is `http://localhost:3000/`.
+## Add a redirect URI
-The `authority` value in your *authConfig.js* should be similar to the following if you're using the main (global) Azure cloud:
-
-```javascript
-authority: "https://login.microsoftonline.com/common",
-```
+1. Under **Manage**, select **Authentication**.
+1. Under **Platform configurations**, select **Add a platform**. In the pane that opens, select **Single-page application**.
+1. Set the **Redirect URIs** value to `http://localhost:3000/`.
+1. Select **Configure** to apply the changes.
+1. Under **Platform Configurations** expand **Single-page application**.
+1. Confirm that for **Grant types** ![Already configured](media/quickstart-v2-javascript/green-check.png), your **Redirect URI** is eligible for the Authorization Code Flow with PKCE.
-Next, open the *graphConfig.js* file to update the `graphMeEndpoint` and `graphMailEndpoint` values in the `apiConfig` object.
+## Clone or download the sample application
-```javascript
- // Add here the endpoints for MS Graph API services you would like to use.
- const graphConfig = {
- graphMeEndpoint: "Enter_the_Graph_Endpoint_Herev1.0/me",
- graphMailEndpoint: "Enter_the_Graph_Endpoint_Herev1.0/me/messages"
- };
+To obtain the sample application, you can either clone it from GitHub or download it as a .zip file.
- // Add here scopes for access token to be used at MS Graph API endpoints.
- const tokenRequest = {
- scopes: ["Mail.Read"]
- };
-```
+- To clone the sample, open a command prompt, navigate to where you wish to create the project, and enter the following command:
-`Enter_the_Graph_Endpoint_Here` is the endpoint that API calls are made against. For the main (global) Microsoft Graph API service, enter `https://graph.microsoft.com/` (include the trailing forward-slash). For more information about Microsoft Graph on national clouds, see [National cloud deployment](/graph/deployments).
+ ```console
+ git clone https://github.com/Azure-Samples/ms-identity-javascript-tutorial
+ ```
-If you're using the main (global) Microsoft Graph API service, the `graphMeEndpoint` and `graphMailEndpoint` values in the *graphConfig.js* file should be similar to the following:
+- [Download the .zip file](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/archive/refs/heads/main.zip).
+
+## Configure the project
+
+1. In your IDE, open the project folder, *ms-identity-javascript-tutorial*, containing the sample.
+1. Open *1-Authentication/1-sign-in/App/authConfig.js* and replace the file contents with the following snippet:
+
+ ```javascript
+ /**
+ * Configuration object to be passed to MSAL instance on creation.
+ * For a full list of MSAL.js configuration parameters, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md
+ */
+
+ const msalConfig = {
+ auth: {
+ clientId: 'Enter_the_Application_Id_Here', // This is the ONLY mandatory field that you need to supply.
+ authority: 'https://login.microsoftonline.com/Enter_the_Tenant_Info_Here', // Defaults to "https://login.microsoftonline.com/common"
+ redirectUri: '/', // You must register this URI on Azure Portal/App Registration. Defaults to window.location.href e.g. http://localhost:3000/
+ navigateToLoginRequestUrl: true, // If "true", will navigate back to the original request location before processing the auth code response.
+ },
+ cache: {
+ cacheLocation: 'sessionStorage', // Configures cache location. "sessionStorage" is more secure, but "localStorage" gives you SSO.
+ storeAuthStateInCookie: false, // set this to true if you have to support IE
+ },
+ system: {
+ loggerOptions: {
+ loggerCallback: (level, message, containsPii) => {
+ if (containsPii) {
+ return;
+ }
+ switch (level) {
+ case msal.LogLevel.Error:
+ console.error(message);
+ return;
+ case msal.LogLevel.Info:
+ console.info(message);
+ return;
+ case msal.LogLevel.Verbose:
+ console.debug(message);
+ return;
+ case msal.LogLevel.Warning:
+ console.warn(message);
+ return;
+ }
+ },
+ },
+ },
+ };
+
+ /**
+ * Scopes you add here will be prompted for user consent during sign-in.
+ * By default, MSAL.js will add OIDC scopes (openid, profile, email) to any login request.
+ * For more information about OIDC scopes, visit:
+ * https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-permissions-and-consent#openid-connect-scopes
+ */
+ const loginRequest = {
+ scopes: ["openid", "profile"],
+ };
+
+ /**
+ * An optional silentRequest object can be used to achieve silent SSO
+ * between applications by providing a "login_hint" property.
+ */
+
+ // const silentRequest = {
+ // scopes: ["openid", "profile"],
+ // loginHint: "example@domain.net"
+ // };
+
+ // exporting config object for jest
+ if (typeof exports !== 'undefined') {
+ module.exports = {
+ msalConfig: msalConfig,
+ loginRequest: loginRequest,
+ };
+ }
+ ```
-```javascript
-graphMeEndpoint: "https://graph.microsoft.com/v1.0/me",
-graphMailEndpoint: "https://graph.microsoft.com/v1.0/me/messages"
-```
+ * `clientId` - The identifier of the application, also referred to as the client. Replace `Enter_the_Application_Id_Here` with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application.
+ * `authority` - The sign-in authority, which includes the tenant. Replace `Enter_the_Tenant_Info_Here` with the **Directory (tenant) ID** that was recorded earlier from the overview page of the registered application.
+ * `redirectUri` - The **Redirect URI** of the application. If necessary, replace the default value with the redirect URI that was recorded earlier.
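As a quick orientation, the sketch below (not part of the sample's files) shows how `msalConfig` and `loginRequest` are typically consumed once *msal-browser* is loaded as the global `msal` object, which the logger callback above assumes; the `signIn` function name is illustrative.

```javascript
// Sketch only: creating the MSAL instance from msalConfig and starting sign-in.
const msalInstance = new msal.PublicClientApplication(msalConfig);

function signIn() {
    // Redirects to the Microsoft identity platform sign-in page, requesting the
    // scopes in loginRequest; PKCE is handled automatically by msal-browser.
    msalInstance.loginRedirect(loginRequest);
}
```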
-### Step 4: Run the project
+## Run the application and sign in
-Run the project with a web server by using Node.js.
+Run the project with a web server by using Node.js:
1. To start the server, run the following commands from within the project directory:

    ```console
    npm install
    npm start
    ```
+1. Copy the https URL that appears in the terminal, for example, `https://localhost:3000`, and paste it into a browser. We recommend using a private or incognito browser session.
+1. Follow the steps and enter the necessary details to sign in with your Microsoft account. You'll be asked for an email address so a one-time passcode can be sent to you. Enter the code when prompted.
+1. The application will request permission to maintain access to data you have given it access to, and to sign you in and read your profile. Select **Accept**.
+1. The following screenshot appears, indicating that you have signed in to the application and have accessed your profile details from the Microsoft Graph API.
-1. Go to `http://localhost:3000/`.
-
-1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
-
- The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, your user profile information is displayed on the page.
-
-## More information
-
-### How the sample works
-
-![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
-
-### MSAL.js
-
-The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by Microsoft identity platform. The sample's *https://docsupdatetracker.net/index.html* file contains a reference to the library:
-
-```html
-<script type="text/javascript" src="https://alcdn.msauth.net/browser/2.0.0-beta.0/js/msal-browser.js" integrity=
-"sha384-r7Qxfs6PYHyfoBR6zG62DGzptfLBxnREThAlcJyEfzJ4dq5rqExc1Xj3TPFE/9TH" crossorigin="anonymous"></script>
-```
+ :::image type="content" source="./media/quickstarts/js-spa/quickstart-js-spa-sign-in.png" alt-text="Screenshot of JavaScript App depicting the results of the API call.":::
-If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+## Sign out from the application
-```console
-npm install @azure/msal-browser
-```
+1. Find the **Sign out** link in the top right corner of the page, and select it.
+1. You'll be prompted to pick an account to sign out from. Select the account you used to sign in.
+1. A message appears indicating that you have signed out.
-## Next steps
+## Related content
-For a more detailed step-by-step guide on building the application used in this quickstart, see the following tutorial:
+- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](./quickstart-web-api-aspnet-core-protect-api.md)
-> [!div class="nextstepaction"]
-> [Tutorial to sign in users and call Microsoft Graph](tutorial-v2-javascript-auth-code.md)
+- Learn more by building this JavaScript SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./tutorial-v2-javascript-spa.md)
active-directory Quickstart Single Page App React Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-react-sign-in.md
Previously updated : 07/27/2023 Last updated : 09/13/2023
# Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using React
+This quickstart uses a sample React single-page app (SPA) to show you how to sign in users by using the [authorization code flow](/azure/active-directory/develop/v2-oauth2-auth-code-flow) with Proof Key for Code Exchange (PKCE). The sample uses the [Microsoft Authentication Library for JavaScript](/javascript/api/@azure/msal-react) to handle authentication.
-In this quickstart, you download and run a code sample that demonstrates how a JavaScript React single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
-
-See [How the sample works](#how-the-sample-works) for an illustration.
+In this article, you'll register a SPA in the Microsoft Entra admin center and download a sample React SPA. Next, you'll run the sample application, sign in with your personal Microsoft account or a work or school account, and sign out.
## Prerequisites
-* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* [Node.js](https://nodejs.org/en/download/)
-* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
--
-## Register and download your quickstart application
+* [Visual Studio 2022](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
-
-### Step 1: Register your application
+## Register the application in the Microsoft Entra admin center
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer). 1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application. 1. Browse to **Identity** > **Applications** > **App registrations**. 1. Select **New registration**.
-1. When the **Register an application** page appears, enter a name for your application.
+1. When the **Register an application** page appears, enter a name for your application, such as *identity-client-app*.
1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
+1. Select **Register**.
+1. The application's Overview pane displays upon successful registration. Record the **Application (client) ID** and **Directory (tenant) ID** to be used in your application source code.
+
+## Add a redirect URI
+ 1. Under **Manage**, select **Authentication**.
-1. Under **Platform configurations**, select **Add a platform**. In the pane that opens select **Single-page application**.
-1. Set the **Redirect URIs** value to `http://localhost:3000/`. This is the default port NodeJS will listen on your local machine. WeΓÇÖll return the authentication response to this URI after successfully authenticating the user.
+1. Under **Platform configurations**, select **Add a platform**. In the pane that opens, select **Single-page application**.
+1. Set the **Redirect URIs** value to `http://localhost:3000/`.
1. Select **Configure** to apply the changes. 1. Under **Platform Configurations** expand **Single-page application**.
-1. Confirm that under **Grant types** ![Already configured](media/quickstart-v2-javascript/green-check.png) Your Redirect URI is eligible for the Authorization Code Flow with PKCE.
-
-### Step 2: Download the project
-
-To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-javascript-react-spa/archive/main.zip).
-
-### Step 3: Configure your JavaScript app
-
-In the *src* folder, open the *authConfig.js* file and update the `clientID`, `authority`, and `redirectUri` values in the `msalConfig` object.
+1. Confirm that for **Grant types** ![Already configured](media/quickstart-v2-javascript/green-check.png), your **Redirect URI** is eligible for the Authorization Code Flow with PKCE.
-```javascript
-/**
-* Configuration object to be passed to MSAL instance on creation.
-* For a full list of MSAL.js configuration parameters, visit:
-* https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md
-*/
-export const msalConfig = {
- auth: {
- clientId: "Enter_the_Application_Id_Here",
- authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here",
- redirectUri: "Enter_the_Redirect_Uri_Here"
- },
- cache: {
- cacheLocation: "sessionStorage", // This configures where your cache will be stored
- storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge
- },
-```
+## Clone or download the sample application
-Modify the values in the `msalConfig` section as described here:
+To obtain the sample application, you can either clone it from GitHub or download it as a *.zip* file.
-- `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+- To clone the sample, open a command prompt, navigate to where you wish to create the project, and enter the following command:
- To find the value of **Application (client) ID**, go to the app registration's **Overview** page.
-- `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).-- `Enter_the_Tenant_info_here` is set to one of the following:
- - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
-
- To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page.
- - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
- - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
- - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+ ```console
+ git clone https://github.com/Azure-Samples/ms-identity-docs-code-javascript.git
+ ```
+- [Download the .zip file](https://github.com/Azure-Samples/ms-identity-docs-code-javascript/archive/refs/heads/main.zip)
- To find the value of **Supported account types**, go to the app registration's **Overview** page.
-- `Enter_the_Redirect_Uri_Here` is `http://localhost:3000/`.
+If you choose to download the `.zip` file, extract the sample app file to a folder where the total length of the path is 260 or fewer characters.
-The `authority` value in your *authConfig.js* should be similar to the following if you're using the main (global) Azure cloud:
+## Configure the project
-```javascript
-authority: "https://login.microsoftonline.com/common",
-```
+1. In your IDE, open the project folder, *ms-identity-docs-code-javascript/react-spa*, containing the sample.
+1. Open *src/authConfig.js* and replace the file contents with the following snippet:
-Scroll down in the same file and update the `graphMeEndpoint`.
-- Replace the string `Enter_the_Graph_Endpoint_Herev1.0/me` with `https://graph.microsoft.com/v1.0/me`-- `Enter_the_Graph_Endpoint_Herev1.0/me` is the endpoint that API calls will be made against. For the main (global) Microsoft Graph API service, enter `https://graph.microsoft.com/` (include the trailing forward-slash). For more information, see the [documentation](/graph/deployments).
+ :::code language="javascript" source="~/ms-identity-docs-code-javascript/react-spa/src/authConfig.js":::
-```javascript
- // Add here the endpoints for MS Graph API services you would like to use.
- export const graphConfig = {
- graphMeEndpoint: "Enter_the_Graph_Endpoint_Herev1.0/me"
- };
-```
+ * `TenantId` - The identifier of the tenant where the application is registered. Replace the text in quotes with the **Directory (tenant) ID** that was recorded earlier from the overview page of the registered application.
+ * `ClientId` - The identifier of the application, also referred to as the client. Replace the text in quotes with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application.
+ * `RedirectUri` - The **Redirect URI** of the application. If necessary, replace the text in quotes with the redirect URI that was recorded earlier from the overview page of the registered application.
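For orientation, a React entry point typically wires `msalConfig` into *@azure/msal-react* along the following lines; the file name *src/index.js* is an assumption, and the sample may structure its bootstrap differently.

```javascript
// Sketch only: bootstrapping MSAL for a React SPA (assumed file: src/index.js).
import React from 'react';
import ReactDOM from 'react-dom';
import { PublicClientApplication } from '@azure/msal-browser';
import { MsalProvider } from '@azure/msal-react';
import { msalConfig } from './authConfig';
import App from './App';

// A single MSAL instance, configured from authConfig.js, serves the whole app.
const msalInstance = new PublicClientApplication(msalConfig);

ReactDOM.render(
    // MsalProvider makes sign-in state and tokens available to child components.
    <MsalProvider instance={msalInstance}>
        <App />
    </MsalProvider>,
    document.getElementById('root')
);
```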
-### Step 4: Run the project
+## Run the application and sign in
Run the project with a web server by using Node.js:

1. To start the server, run the following commands from within the project directory:

    ```console
    npm install
    npm start
    ```
-1. Browse to `http://localhost:3000/`.
-
-1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
-
- The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, click on the **Request Profile Information** to display your profile information on the page.
-
-## More information
-
-### How the sample works
-
-![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
-
-### msal.js
+1. Copy the https URL that appears in the terminal, for example, `https://localhost:3000`, and paste it into a browser. We recommend using a private or incognito browser session.
+1. Follow the steps and enter the necessary details to sign in with your Microsoft account. You'll be asked for an email address so a one-time passcode can be sent to you. Enter the code when prompted.
+1. The application will request permission to maintain access to data you have given it access to, and to sign you in and read your profile. Select **Accept**.
+1. The following screenshot appears, indicating that you have signed in to the application and have accessed your profile details from the Microsoft Graph API.
-The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
+ :::image type="content" source="./media/single-page-app-tutorial-04-call-api/display-api-call-results.png" alt-text="Screenshot of React App depicting the results of the API call.":::
-If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+## Sign out from the application
-```console
-npm install @azure/msal-browser @azure/msal-react
-```
+1. Find the **Sign out** link in the top right corner of the page, and select it.
+1. You'll be prompted to pick an account to sign out from. Select the account you used to sign in.
+1. A message appears indicating that you have signed out.
-## Next steps
+## Related content
-Next, try a step-by-step tutorial to learn how to build a React SPA from scratch that signs in users and calls the Microsoft Graph API to get user profile data:
+- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](./quickstart-web-api-aspnet-core-protect-api.md)
-> [!div class="nextstepaction"]
-> [Tutorial: Sign in users and call Microsoft Graph](./single-page-app-tutorial-01-register-app.md)
+- Learn more by building this React SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./single-page-app-tutorial-01-register-app.md)
active-directory Quickstart Web App Aspnet Core Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-web-app-aspnet-core-sign-in.md
Title: "Quickstart: Sign in users and call the Microsoft Graph API from an ASP.NET Core web app"
-description: Learn how an ASP.NET Core web app leverages Microsoft.Identity.Web to implement Microsoft sign-in using OpenID Connect and call Microsoft Graph
+description: Learn how an ASP.NET Core web app uses Microsoft.Identity.Web to implement Microsoft sign-in using OpenID Connect and call Microsoft Graph
Previously updated : 04/16/2023 Last updated : 08/28/2023
# Quickstart: Sign in users and call the Microsoft Graph API from an ASP.NET Core web app
-The following quickstart uses a ASP.NET Core web app code sample to demonstrate how to sign in users from any Azure Active Directory (Azure AD) organization.
-See [How the sample works](#how-the-sample-works) for an illustration.
+This quickstart uses a sample ASP.NET Core web app to show you how to sign in users by using the [authorization code flow](./v2-oauth2-auth-code-flow.md) and call the Microsoft Graph API. The sample uses [Microsoft Authentication Library for .NET](/entra/msal/dotnet/) and [Microsoft Identity Web](/entra/msal/dotnet/microsoft-identity-web/) for ASP.NET to handle authentication.
+
+In this article, you'll register a web application in the Microsoft Entra admin center and download a sample ASP.NET web application. Next, you'll run the sample application, sign in with your personal Microsoft account or a work or school account, and sign out.
## Prerequisites
-* [Visual Studio 2022](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
+* An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* [.NET Core SDK 6.0+](https://dotnet.microsoft.com/download)
+* [Visual Studio 2022](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
-## Register and download your quickstart application
-
-### Step 1: Register your application
-
+## Register the application in the Microsoft Entra admin center
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
-1. For **Name**, enter a name for the application. For example, enter **AspNetCore-Quickstart**. Users of the app will see this name, and can be changed later.
-1. Set the **Redirect URI** type to **Web** and value to `https://localhost:44321/signin-oidc`.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. On the page that appears, select **+ New registration**.
+1. When the **Register an application** page appears, enter a name for your application, such as *identity-client-app*.
+1. Under **Supported account types**, select *Accounts in this organizational directory only*.
1. Select **Register**.
-1. Under **Manage**, select **Authentication**.
-1. For **Front-channel logout URL**, enter **https://localhost:44321/signout-oidc**.
-1. Under **Implicit grant and hybrid flows**, select **ID tokens**.
-1. Select **Save**.
-1. Under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
-1. Enter a **Description**, for example `clientsecret1`.
-1. Select **In 1 year** for the secret's expiration.
-1. Select **Add** and immediately record the secret's **Value** for use in a later step. The secret value is *never displayed again* and is irretrievable by any other means. Record it in a secure location as you would any password.
-
-### Step 2: Download the ASP.NET Core project
-
-[Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1-callsgraph.zip)
-
-### Step 3: Configure your ASP.NET Core project
-
-1. Extract the *.zip* file to a local folder that's close to the root of the disk to avoid errors caused by path length limitations on Windows. For example, extract to *C:\Azure-Samples*.
-1. Open the solution in the chosen code editor.
-1. In *appsettings.json*, replace the values of `ClientId`, and `TenantId`. The value for the application (client) ID and the directory (tenant) ID, can be found in the app's **Overview** page on the Azure portal.
-
- ```json
- "Domain": "[Enter the domain of your tenant, e.g. contoso.onmicrosoft.com]",
- "ClientId": "Enter_the_Application_Id_here",
- "TenantId": "common",
- ```
-
- - `Enter_the_Application_Id_Here` is the application (client) ID for the registered application.
- - Replace `Enter_the_Tenant_Info_Here` with one of the following:
- - If the application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or tenant name (for example, `contoso.onmicrosoft.com`). The directory (tenant) ID can be found on the app's **Overview** page.
- - If the application supports **Accounts in any organizational directory**, replace this value with `organizations`.
- - If the application supports **All Microsoft account users**, leave this value as `common`.
- - Replace `Enter_the_Client_Secret_Here` with the **Client secret** that was created and recorded in an earlier step.
-
-For this quickstart, don't change any other values in the *appsettings.json* file.
-
-### Step 4: Build and run the application
+1. The application's **Overview** pane displays upon successful registration. Record the **Application (client) ID** and **Directory (tenant) ID** to be used in your application source code.
-Build and run the app in Visual Studio by selecting the **Debug** menu > **Start Debugging**, or by pressing the F5 key.
+## Add a redirect URI
-A prompt for credentials will appear, and then a request for consent to the permissions that the app requires. Select **Accept** on the consent prompt.
--
-After consenting to the requested permissions, the app displays that sign-in has been successful using correct Azure Active Directory credentials. The user's account email address will be displayed in the *API result* section of the page. This was extracted using the Microsoft Graph API.
--
-## More information
-
-This section gives an overview of the code required to sign in users and call the Microsoft Graph API on their behalf. This overview can be useful to understand how the code works, main arguments, and also if you want to add sign-in to an existing ASP.NET Core application and call Microsoft Graph. It uses [Microsoft.Identity.Web](microsoft-identity-web.md), which is a wrapper around [MSAL.NET](msal-overview.md).
-
-### How the sample works
-
-![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-core-webapp/aspnetcorewebapp-intro.svg)
-
-### Startup class
-
-The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process starts:
-
-```csharp
- // Get the scopes from the configuration (appsettings.json)
- var initialScopes = Configuration.GetValue<string>("DownstreamApi:Scopes")?.Split(' ');
-
- public void ConfigureServices(IServiceCollection services)
- {
- // Add sign-in with Microsoft
- services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
- .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"))
-
- // Add the possibility of acquiring a token to call a protected web API
- .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
+1. Under **Manage**, select **Authentication**.
+1. Under **Platform configurations**, select **Add a platform**. In the pane that opens, select **Web**.
+1. For **Redirect URIs**, enter `https://localhost:5001/signin-oidc`.
+1. Under **Front-channel logout URL**, enter `https://localhost:5001/signout-oidc`.
+1. Select **Configure** to apply the changes.
- // Enables controllers and pages to get GraphServiceClient by dependency injection
- // And use an in memory token cache
- .AddMicrosoftGraph(Configuration.GetSection("DownstreamApi"))
- .AddInMemoryTokenCaches();
+## Clone or download the sample application
- services.AddControllersWithViews(options =>
- {
- var policy = new AuthorizationPolicyBuilder()
- .RequireAuthenticatedUser()
- .Build();
- options.Filters.Add(new AuthorizeFilter(policy));
- });
+To obtain the sample application, you can either clone it from GitHub or download it as a *.zip* file.
+- [Download the .zip file](https://github.com/Azure-Samples/ms-identity-docs-code-dotnet/archive/refs/heads/main.zip). Extract it to a file path where the length of the name is fewer than 260 characters.
+- To clone the sample, open a command prompt, navigate to where you wish to create the project, and enter the following command:
+
+ ```console
+ git clone https://github.com/Azure-Samples/ms-identity-docs-code-dotnet.git
+ ```
- // Enables a UI and controller for sign in and sign out.
- services.AddRazorPages()
- .AddMicrosoftIdentityUI();
- }
-```
+## Create and upload a self-signed certificate
-The `AddAuthentication()` method configures the service to add cookie-based authentication. This authentication is used in browser scenarios and to set the challenge to OpenID Connect.
+1. Using your terminal, run the following commands to navigate to the project directory and create a self-signed certificate.
-The line that contains `.AddMicrosoftIdentityWebApp` adds Microsoft identity platform authentication to the application. The application is then configured to sign in users based on the following information in the `AzureAD` section of the *appsettings.json* configuration file:
+ ```console
+ cd ms-identity-docs-code-dotnet\web-app-aspnet\
+ dotnet dev-certs https -ep ./certificate.crt --trust
+ ```
-| *appsettings.json* key | Description |
-||-|
-| `ClientId` | Application (client) ID of the application registered in the Azure portal. |
-| `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
-| `TenantId` | Name of your tenant or the tenant ID (a GUID), or `common` to sign in users with work or school accounts or Microsoft personal accounts. |
+1. Return to the Microsoft Entra admin center, and under **Manage**, select **Certificates & secrets**.
+1. Select the **Certificates (0)** tab, then select **Upload certificate**.
+1. An **Upload certificate** pane appears. Use the icon to navigate to the certificate file you created in the previous step, and select **Open**.
+1. Enter a description for the certificate, for example *Certificate for aspnet-web-app*, and select **Add**.
+1. Record the **Thumbprint** value for use in the next step.
-The `EnableTokenAcquisitionToCallDownstreamApi` method enables the application to acquire a token to call protected web APIs. `AddMicrosoftGraph` enables the controllers or Razor pages to benefit directly the `GraphServiceClient` (by dependency injection) and the `AddInMemoryTokenCaches` methods enables your app to benefit from a token cache.
+## Configure the project
-The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality. Also in the `Configure()` method, you must register Microsoft Identity Web routes with at least one call to `endpoints.MapControllerRoute()` or a call to `endpoints.MapControllers()`:
+1. In your IDE, open the project folder, *ms-identity-docs-code-dotnet\web-app-aspnet*, containing the sample.
+1. Open *appsettings.json* and replace the file contents with the following snippet:
-```csharp
-app.UseAuthentication();
-app.UseAuthorization();
+ :::code language="csharp" source="~/ms-identity-docs-code-dotnet/web-app-aspnet/appsettings.json" :::
-app.UseEndpoints(endpoints =>
-{
- endpoints.MapControllerRoute(
- name: "default",
- pattern: "{controller=Home}/{action=Index}/{id?}");
- endpoints.MapRazorPages();
-});
-```
+ * `TenantId` - The identifier of the tenant where the application is registered. Replace the text in quotes with the `Directory (tenant) ID` that was recorded earlier from the overview page of the registered application.
+ * `ClientId` - The identifier of the application, also referred to as the client. Replace the text in quotes with the `Application (client) ID` value that was recorded earlier from the overview page of the registered application.
+ * `ClientCertificates` - A self-signed certificate is used for authentication in the application. Replace the text of the `CertificateThumbprint` with the thumbprint of the certificate that was previously recorded.
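   For reference, a minimal sketch of the `AzureAd` section these keys live in, assuming the Microsoft.Identity.Web certificate-by-thumbprint format (every value below is a placeholder; the *appsettings.json* that ships with the sample is authoritative):

   ```json
   {
     "AzureAd": {
       "Instance": "https://login.microsoftonline.com/",
       "TenantId": "<your-directory-tenant-id>",
       "ClientId": "<your-application-client-id>",
       "ClientCertificates": [
         {
           "SourceType": "StoreWithThumbprint",
           "CertificateStorePath": "CurrentUser/My",
           "CertificateThumbprint": "<your-certificate-thumbprint>"
         }
       ],
       "CallbackPath": "/signin-oidc"
     }
   }
   ```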
-### Protect a controller or a controller's method
+## Run the application and sign in
-The controller or its methods can be protected by applying the `[Authorize]` attribute to the controller's class or one or more of its methods. This `[Authorize]` attribute restricts access by allowing only authenticated users. If the user isn't already authenticated, an authentication challenge can be started to access the controller. In this quickstart, the scopes are read from the configuration file:
+1. In your project directory, use the terminal to enter the following command:
-```csharp
-[AuthorizeForScopes(ScopeKeySection = "DownstreamApi:Scopes")]
-public async Task<IActionResult> Index()
-{
- var user = await _graphServiceClient.Me.GetAsync();
- ViewData["ApiResult"] = user.DisplayName;
+ ```console
+ dotnet run
+ ```
- return View();
-}
-```
+1. Copy the `https` URL that appears in the terminal, for example, `https://localhost:5001`, and paste it into a browser. We recommend using a private or incognito browser session.
+1. Follow the steps and enter the necessary details to sign in with your Microsoft account. You'll be asked for an email address so that a one-time passcode can be sent to you. Enter the code when prompted.
+1. The application will request permission to maintain access to data you have given it access to, and to sign you in and read your profile. Select **Accept**.
+1. A page similar to the following screenshot appears, indicating that you have signed in to the application and accessed your profile details from the Microsoft Graph API.
+ ![Screenshot of the application showing the user's profile details.](media/quickstarts/aspnet-core/quickstart-dotnet-webapp-sign-in.png)
-## Next steps
+## Sign out from the application
-The following GitHub repository contains the ASP.NET Core code sample referenced in this quickstart and more samples that show how to achieve the following:
+1. Find the **Sign out** link in the top right corner of the page, and select it.
+1. You'll be prompted to pick an account to sign out from. Select the account you used to sign in.
+1. A message appears indicating that you have signed out.
+1. Although you have signed out, the application is still running from your terminal. To stop the application in your terminal, press **Ctrl+C**.
-- Add authentication to a new ASP.NET Core web application.-- Call Microsoft Graph, other Microsoft APIs, or your own web APIs.-- Add authorization.-- Sign in users in national clouds or with social identities.
+## Related content
-> [!div class="nextstepaction"]
-> [ASP.NET Core web app tutorials on GitHub](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/)
+- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](./quickstart-web-api-aspnet-core-protect-api.md).
+- Create an ASP.NET web app from scratch with the series [Tutorial: Register an application with the Microsoft identity platform](./web-app-tutorial-01-register-application.md).
active-directory Tenant Restrictions V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tenant-restrictions-v2.md
Although these alternatives provide protection, certain scenarios can only be co
After you create a tenant restrictions v2 policy, you can enforce the policy on each Windows 10, Windows 11, and Windows Server 2022 device by adding your tenant ID and the policy ID to the device's **Tenant Restrictions** configuration. When tenant restrictions are enabled on a Windows device, corporate proxies aren't required for policy enforcement. Devices don't need to be Azure AD managed to enforce tenant restrictions v2; domain-joined devices that are managed with Group Policy are also supported. > [!NOTE]
-> Tenant restrictions V2 on Windows is a partial solution that protects the authentication and data planes for some scenarios. It works on managed Windows devices and does not protect .NET stack, Chrome, or Firefox. The Windows solution provides a temporary solution until general availability of Universal tenant restrictions in Global Secure Access (preview).
+> Tenant restrictions V2 on Windows is a partial solution that protects the authentication and data planes for some scenarios. It works on managed Windows devices and does not protect .NET stack, Chrome, or Firefox. The Windows solution provides a temporary solution until general availability of Universal tenant restrictions in [Microsoft Entra Global Secure Access (preview)](/azure/global-secure-access/overview-what-is-global-secure-access).
#### Administrative Templates (.admx) for Windows 10 November 2021 Update (21H2) and Group policy settings
active-directory Concept Fundamentals Mfa Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-mfa-get-started.md
- Title: Azure AD Multi-Factor Authentication for your organization
-description: Learn about the available features of Azure AD Multi-Factor Authentication for your organization based on your license model
----- Previously updated : 03/18/2020--------
-# Overview of Azure AD Multi-Factor Authentication for your organization
-
-There are multiple ways to enable Azure AD Multi-Factor Authentication for your Azure Active Directory (AD) users based on the licenses that your organization owns.
-
-![Investigate signals and enforce MFA if needed](./media/concept-fundamentals-mfa-get-started/verify-signals-and-perform-mfa-if-required.png)
-
-Based on our studies, your account is more than 99.9% less likely to be compromised if you use multi-factor authentication (MFA).
-
-So how does your organization turn on MFA even for free, before becoming a statistic?
-
-## Free option
-
-Customers who are utilizing the free benefits of Azure AD can use [security defaults](../fundamentals/security-defaults.md) to enable multi-factor authentication in their environment.
-
-## Microsoft 365 Business, E3, or E5
-
-For customers with Microsoft 365, there are two options:
-
-* Azure AD Multi-Factor Authentication is either enabled or disabled for all users, for all sign-in events. There is no ability to only enable multi-factor authentication for a subset of users, or only under certain scenarios. Management is through the Office 365 portal.
-* For an improved user experience, upgrade to Azure AD Premium P1 or P2 and use Conditional Access. For more information, see secure Microsoft 365 resources with multi-factor authentication.
-
-## Azure AD Premium P1
-
-For customers with Azure AD Premium P1 or similar licenses that include this functionality such as Enterprise Mobility + Security E3, Microsoft 365 F1, or Microsoft 365 E3:
-
-Use [Azure AD Conditional Access](../authentication/tutorial-enable-azure-mfa.md) to prompt users for multi-factor authentication during certain scenarios or events to fit your business requirements.
-
-## Azure AD Premium P2
-
-For customers with Azure AD Premium P2 or similar licenses that include this functionality such as Enterprise Mobility + Security E5 or Microsoft 365 E5:
-
-Provides the strongest security position and improved user experience. Adds [risk-based Conditional Access](../conditional-access/howto-conditional-access-policy-risk.md) to the Azure AD Premium P1 features that adapts to user's patterns and minimizes multi-factor authentication prompts.
-
-## Authentication methods
-
-| Method | Security defaults | All other methods |
-| | | |
-| Notification through mobile app | X | X |
-| Verification code from mobile app or hardware token | | X |
-| Text message to phone | | X |
-| Call to phone | | X |
-
-## Next steps
-
-To get started, see the tutorial to [secure user sign-in events with Azure AD Multi-Factor Authentication](../authentication/tutorial-enable-azure-mfa.md).
-
-For more information on licensing, see [Features and licenses for Azure AD Multi-Factor Authentication](../authentication/concept-mfa-licensing.md).
active-directory Concept Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md
To learn more about Azure AD built-in roles and their permissions, see [Azure AD
One Azure AD tenant can have up to 500 role-assignable groups. To learn more about Azure AD service limits and restrictions, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md).
-Azure AD role-assignable group feature is not part of Azure AD Privileged Identity Management (Azure AD PIM). It requires a Microsoft Entra Premium P1, P2, or Microsoft Entra ID Governance license.
+The Azure AD role-assignable group feature is not part of Azure AD Privileged Identity Management (Azure AD PIM). For more information on licensing, see [Microsoft Entra ID Governance licensing fundamentals](../../active-directory/governance/licensing-fundamentals.md).
+ ## Relationship between role-assignable groups and PIM for Groups
If a user is an active member of Group A, and Group A is an eligible member of G
## Privileged Identity Management and app provisioning (Public Preview)
-> [!VIDEO https://www.youtube.com/embed/9T6lKEloq0Q]
- If the group is configured for [app provisioning](../app-provisioning/index.yml), activation of group membership will trigger provisioning of group membership (and user account itself if it wasn't provisioned previously) to the application using SCIM protocol. In Public Preview we have a functionality that triggers provisioning right after group membership is activated in PIM.
active-directory Pim Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-apis.md
-# Understand the Privileged Identity Management APIs
+# Privileged Identity Management APIs
-You can perform Privileged Identity Management (PIM) tasks using the Microsoft Graph APIs for Azure Active Directory (Azure AD) roles and groups, and the Azure Resource Manager API for Azure resource roles. This article describes important concepts for using the APIs for Privileged Identity Management.
+Privileged Identity Management (PIM), part of Microsoft Entra, includes three providers:
-For requests and other details about PIM APIs, check out:
+ - PIM for Azure AD roles
+ - PIM for Azure resources
+ - PIM for Groups
+
+You can manage assignments in PIM for Azure AD roles and PIM for Groups using Microsoft Graph API. You can manage assignments in PIM for Azure Resources using Azure Resource Manager (ARM) API. This article describes important concepts for using the APIs for Privileged Identity Management.
+
+Find more details about the APIs that allow you to manage assignments in the following documentation:
- [PIM for Azure AD roles API reference](/graph/api/resources/privilegedidentitymanagementv3-overview)-- [PIM for groups API reference (preview))(/graph/api/resources/privilegedidentitymanagement-for-groups-api-overview)-- [PIM for Azure resource roles API reference](/rest/api/authorization/roleeligibilityschedulerequests)
+- [PIM for Azure resource roles API reference](/rest/api/authorization/privileged-role-eligibility-rest-sample)
+- [PIM for Groups API reference](/graph/api/resources/privilegedidentitymanagement-for-groups-api-overview)
+- [PIM Alerts for Azure AD Roles API reference](/graph/api/resources/privilegedidentitymanagementv3-overview?view=graph-rest-beta#building-blocks-of-the-pim-alerts-apis)
+- [PIM Alerts for Azure Resources API reference](/rest/api/authorization/role-management-alert-rest-sample)
+ ## PIM API history
-There have been several iterations of the PIM APIs over the past few years. You'll find some overlaps in functionality, but they don't represent a linear progression of versions.
+There have been several iterations of the PIM API over the past few years. You'll find some overlaps in functionality, but they don't represent a linear progression of versions.
### Iteration 1 – Deprecated
-Under the `/beta/privilegedRoles` endpoint, Microsoft had a classic version of the PIM APIs which only supported Azure AD roles. Access to this API was retired in June 2021.
+Under the `/beta/privilegedRoles` endpoint, Microsoft had a classic version of the PIM API that only supported Azure AD roles and is no longer supported. Access to this API was deprecated in June 2021.
### Iteration 2 – Supports Azure AD roles and Azure resource roles
-Under the `/beta/privilegedAccess` endpoint, Microsoft supported both `/aadRoles` and `/azureResources`. The `/aadRoles` endpoint has been retired but the `/azureResources` endpoint is still available in your tenant. Microsoft recommends against starting any new development with the APIs available through the `/azureResources` endpoint. This API will never be released to general availability and will be eventually deprecated and retired.
-
-### Current iteration – Azure AD roles and groups in Microsoft Graph and Azure resource roles in Azure Resource Manager
-
-Currently, in general availability, this is the final iteration of the PIM APIs. Based on customer feedback, the PIM APIs for managing Azure AD roles are now under the **unifiedRoleManagement** set of APIs and the Azure Resource PIM APIs is now under the Azure Resource Manager role assignment APIs. These locations also provide a few additional benefits including:
--- Alignment of the PIM APIs for regular role assignment of both Azure AD roles and Azure Resource roles.-- Reducing the need to call additional PIM APIs to onboard a resource, get a resource, or get a role definition.-- Supporting app-only permissions.-- New features such as approval and email notification configuration.-
-This iteration also includes PIM APIs for managing ownership and membership of groups as well as security alerts for PIM for Azure AD roles.
+Under the `/beta/privilegedAccess` endpoint, Microsoft supported both `/aadRoles` and `/azureResources`. This endpoint is still available in your tenant, but Microsoft recommends against starting any new development with this API. This beta API will never be released to general availability and will eventually be deprecated.
-## Current permissions required
+### Iteration 3 (Current) – PIM for Azure AD roles, groups in Microsoft Graph API, and for Azure resources in ARM API
-### Azure AD roles
+This is the final iteration of the PIM API. It includes:
+ - PIM for Azure AD Roles in Microsoft Graph API - Generally available.
+ - PIM for Azure resources in ARM API - Generally available.
+ - PIM for groups in Microsoft Graph API - Preview.
+ - PIM Alerts for Azure AD Roles in Microsoft Graph API - Preview.
+ - PIM Alerts for Azure Resources in ARM API - Preview.
-To understand the permissions that you need to call the PIM Microsoft Graph API for Azure AD roles, see [Role management permissions](/graph/permissions-reference#role-management-permissions).
+Having PIM for Azure AD Roles in Microsoft Graph API and PIM for Azure Resources in ARM API provides a few benefits, including:
+ - Alignment of the PIM API with the regular role assignment API for both Azure AD roles and Azure resource roles.
+ - Reduced need to call additional PIM APIs to onboard a resource, get a resource, or get a role definition.
+ - Support for app-only permissions.
+ - New features such as approval and email notification configuration.
-The easiest way to specify the required permissions is to use the Azure AD consent framework.
-### Azure resource roles
+### Overview of PIM API iteration 3
- The PIM API for Azure resource roles is developed on top of the Azure Resource Manager framework. You will need to give consent to Azure Resource Management but won't need any Microsoft Graph API permission. You will also need to make sure the user or the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
+PIM APIs across providers (both Microsoft Graph APIs and ARM APIs) follow the same principles.
-## Calling PIM API with an app-only token
+#### Assignments management
+To create an assignment (active or eligible); renew, extend, or update an assignment (active or eligible); activate an eligible assignment; or deactivate an eligible assignment, use the **\*AssignmentScheduleRequest** and **\*EligibilityScheduleRequest** resources:
-### Azure AD roles
+ - For Azure AD Roles: [unifiedRoleAssignmentScheduleRequest](/graph/api/resources/unifiedroleassignmentschedulerequest), [unifiedRoleEligibilityScheduleRequest](/graph/api/resources/unifiedroleeligibilityschedulerequest);
+ - For Azure resources: [Role Assignment Schedule Request](/rest/api/authorization/role-assignment-schedule-requests), [Role Eligibility Schedule Request](/rest/api/authorization/role-eligibility-schedule-requests);
+ - For Groups: [privilegedAccessGroupAssignmentScheduleRequest](/graph/api/resources/privilegedaccessgroupassignmentschedulerequest), [privilegedAccessGroupEligibilityScheduleRequest](/graph/api/resources/privilegedaccessgroupeligibilityschedulerequest).
- PIM API now supports app-only permissions on top of delegated permissions.
+Creation of **\*AssignmentScheduleRequest** or **\*EligibilityScheduleRequest** objects may lead to creation of read-only **\*AssignmentSchedule**, **\*EligibilitySchedule**, **\*AssignmentScheduleInstance**, and **\*EligibilityScheduleInstance** objects.
-- For app-only permissions, you must call the API with an application that's already been consented with either the required Azure AD or Azure role permissions.-- For delegated permission, you must call the PIM API with both a user and an application token. The user must be assigned to either the Global Administrator role or Privileged Role Administrator role, and ensure that the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
+ - **\*AssignmentSchedule** and **\*EligibilitySchedule** objects show current assignments and requests for assignments to be created in the future.
+ - **\*AssignmentScheduleInstance** and **\*EligibilityScheduleInstance** objects show current assignments only.
-### Azure resource roles
+When an eligible assignment is activated (that is, a create **\*AssignmentScheduleRequest** operation was called), the **\*EligibilityScheduleInstance** continues to exist, and new **\*AssignmentSchedule** and **\*AssignmentScheduleInstance** objects are created for the activation duration.
- PIM API for Azure resources supports both user only and application only calls. Simply make sure the service principal has either the owner or user access administrator role on the resource.
+For more information about assignment and activation APIs, see [PIM API for managing role assignments and eligibilities](/graph/api/resources/privilegedidentitymanagementv3-overview#pim-api-for-managing-role-assignment).
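For example, activating an eligible Azure AD role is a create operation on the request resource. The following is a sketch with placeholder IDs, not a complete reference:

````HTTP
POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests
{
    "action": "selfActivate",
    "principalId": "<principal-object-id>",
    "roleDefinitionId": "<role-definition-id>",
    "directoryScopeId": "/",
    "justification": "Self-activating for a maintenance task",
    "scheduleInfo": {
        "startDateTime": "2023-09-14T00:00:00Z",
        "expiration": {
            "type": "AfterDuration",
            "duration": "PT8H"
        }
    }
}
````

Deactivating the same assignment uses the same endpoint with the `selfDeactivate` action.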
-## Design of current API iteration
+
-PIM API consists of two categories that are consistent for both the API for Azure AD roles and Azure resource roles: assignment and activation API requests, and policy settings.
+#### PIM Policies (role settings)
-### Assignment and activation APIs
+To manage the PIM policies, use **roleManagementPolicy** and **roleManagementPolicyAssignment** entities:
+ - For PIM for Azure AD roles, PIM for Groups: [unifiedroleManagementPolicy](/graph/api/resources/unifiedrolemanagementpolicy), [unifiedroleManagementPolicyAssignment](/graph/api/resources/unifiedrolemanagementpolicyassignment)
+ - For PIM for Azure resources: [Role Management Policies](/rest/api/authorization/role-management-policies), [Role Management Policy Assignments](/rest/api/authorization/role-management-policy-assignments)
-To make eligible assignments, time-bound eligible or active assignments, and to activate eligible assignments, PIM provides the following resources:
+The **\*roleManagementPolicy** resource includes rules that constitute PIM policy: approval requirements, maximum activation duration, notification settings, etc.
-- [unifiedRoleAssignmentScheduleRequest](/graph/api/resources/unifiedroleassignmentschedulerequest)-- [unifiedRoleEligibilityScheduleRequest](/graph/api/resources/unifiedroleeligibilityschedulerequest)
+The **\*roleManagementPolicyAssignment** object attaches the policy to a specific role.
-These entities work alongside pre-existing **roleDefinition** and **roleAssignment** resources for both Azure AD roles and Azure roles to allow you to create end to end scenarios.
+For more information about the policy settings APIs, see [role settings and PIM](/graph/api/resources/privilegedidentitymanagementv3-overview#role-settings-and-pim).
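For example, to find which policy applies to a given Azure AD role, you can query the policy assignments and filter by scope (a sketch of the documented filter pattern):

````HTTP
GET https://graph.microsoft.com/v1.0/policies/roleManagementPolicyAssignments?$filter=scopeId eq '/' and scopeType eq 'DirectoryRole'
````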
-- If you are trying to create or retrieve a persistent (active) role assignment that does not have a schedule (start or end time), you should avoid these PIM entities and focus on the read/write operations under the roleAssignment entity
+## Permissions
-- To create an eligible assignment with or without an expiration time you can use the write operation on the [unifiedRoleEligibilityScheduleRequest](/graph/api/resources/unifiedroleeligibilityschedulerequest) resource
+### PIM for Azure AD roles
-- To create a persistent (active) assignment with a schedule (start or end time), you can use the write operation on the [unifiedRoleAssignmentScheduleRequest](/graph/api/resources/unifiedroleassignmentschedulerequest) resource
+For Graph API permissions required for PIM for Azure AD roles, see [Role management permissions](/graph/permissions-reference#role-management-permissions).
-- To activate an eligible assignment, you should also use the [write operation on roleAssignmentScheduleRequest](/graph/api/rbacapplication-post-roleassignmentschedulerequests) with a `selfActivate` **action** property.
+### PIM for Azure resources
-Each of the request objects would create the following read-only objects:
+The PIM API for Azure resource roles is developed on top of the Azure Resource Manager framework. You will need to give consent to Azure Resource Management but won't need any Microsoft Graph API permission. You will also need to make sure the user or the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
-- [unifiedRoleAssignmentSchedule](/graph/api/resources/unifiedroleassignmentschedule)-- [unifiedRoleEligibilitySchedule](/graph/api/resources/unifiedroleeligibilityschedule)-- [unifiedRoleAssignmentScheduleInstance](/graph/api/resources/unifiedroleassignmentscheduleinstance)-- [unifiedRoleEligibilityScheduleInstance](/graph/api/resources/unifiedroleeligibilityscheduleinstance)
+### PIM for Groups
-The **unifiedRoleAssignmentSchedule** and **unifiedRoleEligibilitySchedule** objects show a schedule of all the current and future assignments.
+For Graph API permissions required for PIM for Groups, see [PIM for Groups – Permissions and privileges](/graph/api/resources/privilegedidentitymanagement-for-groups-api-overview#permissions-and-privileges).
-When an eligible assignment is activated, the **unifiedRoleEligibilityScheduleInstance** continues to exist. The **unifiedRoleAssignmentScheduleRequest** for the activation would create a separate **unifiedRoleAssignmentSchedule** object and a **unifiedRoleAssignmentScheduleInstance** for that activated duration.
-The instance objects are the actual assignments that currently exist whether it is an eligible assignment or an active assignment. You should use the GET operation on the instance entity to retrieve a list of eligible assignments / active assignments to a role/user.
-For more information about assignment and activation APIs, see [PIM API for managing role assignments and eligibilities](/graph/api/resources/privilegedidentitymanagementv3-overview#pim-api-for-managing-role-assignment).
-
-### Policy settings APIs
-
-To manage the settings of Azure AD roles, we provide the following entities:
--- [unifiedroleManagementPolicy](/graph/api/resources/unifiedrolemanagementpolicy)-- [unifiedroleManagementPolicyAssignment](/graph/api/resources/unifiedrolemanagementpolicyassignment)-
-The [unifiedroleManagementPolicy](/graph/api/resources/unifiedrolemanagementpolicy) resource through it's **rules** relationship defines the rules or settings of the Azure AD role. For example, whether MFA/approval is required, whether and who to send the email notifications to, or whether permanent assignments are allowed or not. The [unifiedroleManagementPolicyAssignment](/graph/api/resources/unifiedrolemanagementpolicyassignment) object attaches the policy to a specific role.
-
-Use the APIs supported by these resources retrieve role management policy assignments for all Azure AD role or filter the list by a **roleDefinitionId**, and then update the rules or settings in the policy associated with the Azure AD role.
-
-For more information about the policy settings APIs, see [role settings and PIM](/graph/api/resources/privilegedidentitymanagementv3-overview#role-settings-and-pim).
## Relationship between PIM entities and role assignment entities
-The only link between the PIM entity and the role assignment entity for persistent (active) assignment for either Azure AD roles or Azure roles is the unifiedRoleAssignmentScheduleInstance. There is a one-to-one mapping between the two entities. That mapping means roleAssignment and unifiedRoleAssignmentScheduleInstance would both include:
+The only link between the PIM entity and the role assignment entity for persistent (active) assignment for either Azure AD roles or Azure roles is the **\*AssignmentScheduleInstance**. There is a one-to-one mapping between the two entities. That mapping means roleAssignment and **\*AssignmentScheduleInstance** would both include:
- Persistent (active) assignments made outside of PIM
- Persistent (active) assignments with a schedule made inside PIM
- Activated eligible assignments
+PIM-specific properties (such as end time) are available only through the **\*AssignmentScheduleInstance** object.
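For example, a sketch that lists the current assignment instances for one principal (the ID is a placeholder); both persistent assignments and activations of eligible assignments come back from the same query:

````HTTP
GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleInstances?$filter=principalId eq '<principal-object-id>'
````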
+ ## Next steps - [Azure AD Privileged Identity Management API reference](/graph/api/resources/privilegedidentitymanagementv3-overview)
active-directory Pim Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-approval-workflow.md
With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can configure roles to require approval for activation, and choose one or multiple users or groups as delegated approvers. Delegated approvers have 24 hours to approve requests. If a request is not approved within 24 hours, then the eligible user must re-submit a new request. The 24 hour approval time window is not configurable. ++ ## View pending requests [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentSche
>[!NOTE] >Approvers are not able to approve their own role activation requests.
-1. Find and select the request that you want to approve. An approve or deny page appears.
-
- ![Screenshot that shows the "Approve requests - Azure AD roles" page.](./media/azure-ad-pim-approval-workflow/resources-approve-pane.png)
-
-1. In the **Justification** box, enter the business justification.
-
-1. Select **Approve**. You will receive an Azure notification of your approval.
-
- ![Approve notification showing request was approved](./media/pim-resource-roles-approval-workflow/resources-approve-pane.png)
+ 1. Find and select the request that you want to approve. An approve or deny page appears.
+ 2. In the **Justification** box, enter the business justification.
+ 3. Select **Submit**. You will receive an Azure notification of your approval.
## Approve pending requests using Microsoft Graph API
+>[!NOTE]
+> Approval for **extend and renew** requests is currently not supported by the Microsoft Graph API.
+ ### Get IDs for the steps that require approval For a specific activation request, this command gets all the approval steps that need approval. Multi-step approvals are not currently supported.
GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentAppr
PATCH https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentApprovals/<request-ID-GUID>/steps/<approval-step-ID-GUID> {
- "reviewResult": "Approve",
- "justification": "abcdefg"
+ "reviewResult": "Approve", // or "Deny"
+ "justification": "Trusted User"
} ````
Successful PATCH calls generate an empty response.
## Deny requests
-1. Find and select the request that you want to deny. An approve or deny page appears.
-
- ![Approve requests - approve or deny pane with details and Justification box](./media/pim-resource-roles-approval-workflow/resources-approve-pane.png)
-
-1. In the **Justification** box, enter the business justification.
-
-1. Select **Deny**. A notification appears with your denial.
+ 1. Find and select the request that you want to deny. An approve or deny page appears.
+ 2. In the **Justification** box, enter the business justification.
+ 3. Select **Deny**. A notification appears with your denial.
## Workflow notifications
active-directory Pim Powershell Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-powershell-migration.md
+
+ Title: PIM PowerShell for Azure Resources Migration Guidance
+description: The following documentation provides guidance for Privileged Identity Management (PIM) PowerShell migration.
+
+documentationcenter: ''
++
+editor: ''
++++ Last updated : 07/11/2023+++++
+# PIM PowerShell for Azure Resources Migration Guidance
+The following table provides guidance on using the new PowerShell cmdlets in the newer Azure PowerShell module.
++
+## New cmdlets in the Azure PowerShell module
+
+|Old AzureADPreview cmdlet|New Az cmdlet equivalent|Description|
+|--|--|--|
+|Get-AzureADMSPrivilegedResource|[Get-AzResource](/powershell/module/az.resources/get-azresource)|Get resources|
+|Get-AzureADMSPrivilegedRoleDefinition|[Get-AzRoleDefinition](/powershell/module/az.resources/get-azroledefinition)| Get role definitions|
+|Get-AzureADMSPrivilegedRoleSetting|[Get-AzRoleManagementPolicy](/powershell/module/az.resources/get-azrolemanagementpolicy)|Get the specified role management policy for a resource scope|
+|Set-AzureADMSPrivilegedRoleSetting|[Update-AzRoleManagementPolicy](/powershell/module/az.resources/update-azrolemanagementpolicy)| Update a rule defined for a role management policy|
+|Open-AzureADMSPrivilegedRoleAssignmentRequest|[New-AzRoleAssignmentScheduleRequest](/powershell/module/az.resources/new-azroleassignmentschedulerequest)|Used for Assignment Requests</br>Create role assignment schedule request|
+|Open-AzureADMSPrivilegedRoleAssignmentRequest|[New-AzRoleEligibilityScheduleRequest](/powershell/module/az.resources/new-azroleeligibilityschedulerequest)|Used for Eligibility Requests</br>Create role eligibility schedule request|
+
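For example, where you previously called `Open-AzureADMSPrivilegedRoleAssignmentRequest` to self-activate an eligible Azure role, the Az equivalent is sketched below; the scope, GUIDs, and justification are placeholders:

```powershell
# Self-activate an eligible Azure role with the Az module (all IDs are placeholders).
New-AzRoleAssignmentScheduleRequest `
    -Name (New-Guid).ToString() `
    -Scope "/subscriptions/<subscription-id>" `
    -PrincipalId "<principal-object-id>" `
    -RoleDefinitionId "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/roleDefinitions/<role-definition-id>" `
    -RequestType SelfActivate `
    -ScheduleInfoStartDateTime (Get-Date).ToUniversalTime() `
    -ExpirationType AfterDuration `
    -ExpirationDuration "PT8H" `
    -Justification "Self-activation for maintenance"
```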
+## Next steps
+
+- [Azure AD Privileged Identity Management API reference](/graph/api/resources/privilegedidentitymanagementv3-overview)
active-directory Pim Resource Roles Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-approval-workflow.md
With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD),
Follow the steps in this article to approve or deny requests for Azure resource roles. + ## View pending requests [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] As a delegated approver, you'll receive an email notification when an Azure resource role request is pending your approval. You can view these pending requests in Privileged Identity Management. + 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator). 1. Browse to **Identity governance** > **Privileged Identity Management** > **Approve requests**.
As a delegated approver, you'll receive an email notification when an Azure reso
In the **Requests for role activations** section, you'll see a list of requests pending your approval. + ## Approve requests
-1. Find and select the request that you want to approve. An approve or deny page appears.
+ 1. Find and select the request that you want to approve. An approve or deny page appears.
+ 2. In the **Justification** box, enter the business justification.
+ 3. Select **Approve**. You will receive an Azure notification of your approval.
- ![Approve requests - approve or deny pane with details and Justification box](./media/pim-resource-roles-approval-workflow/resources-approve-pane.png)
-1. In the **Justification** box, enter the business justification.
+## Approve pending requests using Microsoft ARM API
-1. Select **Approve**. You will receive an Azure notification of your approval.
+>[!NOTE]
+> Approval for **extend and renew** requests is currently not supported by the Microsoft ARM API.
- ![Approve notification showing request was approved](./media/pim-resource-roles-approval-workflow/resources-approve-notification.png)
+### Get IDs for the steps that require approval
-## Deny requests
+To get the details of any stage of a role assignment approval, you can use the [Role Assignment Approval Step - Get By ID](/rest/api/authorization/role-assignment-approval-step/get-by-id?tabs=HTTP) REST API.
+
+#### HTTP request
+
+````HTTP
+GET https://management.azure.com/providers/Microsoft.Authorization/roleAssignmentApprovals/{approvalId}/stages/{stageId}?api-version=2021-01-01-preview
+````
-1. Find and select the request that you want to deny. An approve or deny page appears.
- ![Approve requests - approve or deny pane with details and Justification box](./media/pim-resource-roles-approval-workflow/resources-approve-pane.png)
+### Approve the activation request step
-1. In the **Justification** box, enter the business justification.
+#### HTTP request
+
+````HTTP
+PATCH https://management.azure.com/providers/Microsoft.Authorization/roleAssignmentApprovals/{approvalId}/stages/{stageId}?api-version=2021-01-01-preview
+{
+ "reviewResult": "Approve", // or "Deny"
+ "justification": "Trusted User"
+}
+ ````
+
+#### HTTP response
+
+Successful PATCH calls generate an empty response.
+
+For more information, see [Use Role Assignment Approvals to approve PIM role activation requests with REST API](/rest/api/authorization/privileged-approval-sample).
+
+## Deny requests
-1. Select **Deny**. A notification appears with your denial.
+ 1. Find and select the request that you want to deny. An approve or deny page appears.
+ 2. In the **Justification** box, enter the business justification.
+ 3. Select **Deny**. A notification appears with your denial.
## Workflow notifications
ai-services Background Removal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/background-removal.md
The SDK example assumes that you defined the environment variables `VISION_KEY`
Start by creating a [VisionServiceOptions](/dotnet/api/azure.ai.vision.common.visionserviceoptions) object using one of the constructors. For example:
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_service_options)]
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/how-to/program.cs?name=vision_service_options)]
#### [Python](#tab/python) Start by creating a [VisionServiceOptions](/python/api/azure-ai-vision/azure.ai.vision.visionserviceoptions) object using one of the constructors. For example:
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_service_options)]
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/how-to/main.py?name=vision_service_options)]
#### [C++](#tab/cpp) At the start of your code, use one of the static constructor methods [VisionServiceOptions::FromEndpoint](/cpp/cognitive-services/vision/service-visionserviceoptions#fromendpoint-1) to create a *VisionServiceOptions* object. For example:
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_service_options)]
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/how-to/how-to.cpp?name=vision_service_options)]
Where we used this helper function to read the value of an environment variable:
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=get_env_var)]
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/how-to/how-to.cpp?name=get_env_var)]
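If you're adapting the snippet outside the sample project, a minimal helper along these lines works; the function name here is illustrative and may differ from the sample's own helper:

```cpp
#include <cstdlib>
#include <string>

// Reads an environment variable; returns an empty string if it isn't set.
std::string ReadEnvVar(const std::string& name)
{
    const char* value = std::getenv(name.c_str());
    return (value != nullptr) ? std::string(value) : std::string();
}
```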
#### [REST API](#tab/rest)
Create a new **VisionSource** object from the URL of the image you want to analy
**VisionSource** implements **IDisposable**; therefore, create the object with a **using** statement or explicitly call the **Dispose** method after analysis completes.
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_source)]
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/how-to/program.cs?name=vision_source)]
> [!TIP]
-> You can also analyze a local image by passing in the full-path image file name. See [VisionSource.FromFile](/dotnet/api/azure.ai.vision.common.visionsource.fromfile).
+> You can also analyze a local image by passing in the full-path image file name (see [VisionSource.FromFile](/dotnet/api/azure.ai.vision.common.visionsource.fromfile)), or by copying the image into the SDK's input buffer (see [VisionSource.FromImageSourceBuffer](/dotnet/api/azure.ai.vision.common.visionsource.fromimagesourcebuffer)).
#### [Python](#tab/python) In your script, create a new [VisionSource](/python/api/azure-ai-vision/azure.ai.vision.visionsource) object from the URL of the image you want to analyze.
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_source)]
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/how-to/main.py?name=vision_source)]
> [!TIP]
-> You can also analyze a local image by passing in the full-path image file name to the **VisionSource** constructor instead of the image URL.
+> You can also analyze a local image by passing in the full-path image file name to the **VisionSource** constructor instead of the image URL (see argument name **filename**). Alternatively, you can analyze an image in a memory buffer by constructing **VisionSource** using the argument **image_source_buffer**.
#### [C++](#tab/cpp) Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource::FromUrl](/cpp/cognitive-services/vision/input-visionsource#fromurl).
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_source)]
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/how-to/how-to.cpp?name=vision_source)]
> [!TIP]
-> You can also analyze a local image by passing in the full-path image file name. See [VisionSource::FromFile](/cpp/cognitive-services/vision/input-visionsource#fromfile).
+> You can also analyze a local image by passing in the full-path image file name (see [VisionSource::FromFile](/cpp/cognitive-services/vision/input-visionsource#fromfile)), or by copying the image into the SDK's input buffer (see [VisionSource::FromImageSourceBuffer](/cpp/cognitive-services/vision/input-visionsource#fromimagesourcebuffer)).
#### [REST API](#tab/rest)
ai-services Install Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/sdk/install-sdk.md
+
+ Title: Install the Vision SDK
+
+description: In this guide, you learn how to install the Vision SDK for your preferred programming language.
++++++ Last updated : 08/01/2023++
+zone_pivot_groups: programming-languages-vision-40-sdk
++
+# Install the Vision SDK
+++++
+## Next steps
+
+Follow the [Image Analysis quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) to get started.
ai-services Overview Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/sdk/overview-sdk.md
+
+ Title: Vision SDK Overview
+
+description: This page gives you an overview of the Azure AI Vision SDK for Image Analysis.
++++++ Last updated : 08/01/2023++++
+# Vision SDK overview
+
+The Vision SDK (Preview) provides a convenient way to access the Image Analysis service using [version 4.0 of the REST APIs](https://aka.ms/vision-4-0-ref).
++
+## Supported languages
+
+The Vision SDK supports the following languages and platforms:
+
+| Programming language | Quickstart | API Reference | Platform support |
+|-||--||
+| C# <sup>1</sup> | [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-csharp) | [reference](/dotnet/api/azure.ai.vision.imageanalysis) | Windows, UWP, Linux |
+| C++ <sup>2</sup> | [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-cpp) | [reference](/cpp/cognitive-services/vision) | Windows, Linux |
+| Python | [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-python) | [reference](/python/api/azure-ai-vision) | Windows, Linux |
+
+<sup>1 The Vision SDK for C# is based on .NET Standard 2.0. See [.NET Standard](/dotnet/standard/net-standard?tabs=net-standard-2-0#net-implementation-support) documentation.</sup>
+
+<sup>2 ANSI-C isn't a supported programming language for the Vision SDK.</sup>
+
+## GitHub samples
+
+Numerous samples are available in the [Azure-Samples/azure-ai-vision-sdk](https://github.com/Azure-Samples/azure-ai-vision-sdk) repository on GitHub.
+
+## Getting help
+
+If you need assistance using the Vision SDK or would like to report a bug or suggest new features, open a [GitHub issue in the samples repository](https://github.com/Azure-Samples/azure-ai-vision-sdk/issues). The SDK development team monitors these issues.
+
+Before you create a new issue:
+* Scan the existing issues first to see if a similar one has already been reported.
+* Find the sample closest to your scenario and run it to see if you see the same issue in the sample code.
+
+## Release notes
+
+* **Vision SDK 0.15.1-beta.1** released September 2023.
+ * Image Analysis Java JRE APIs for Windows x64 and Linux x64 were added.
+ * Image Analysis can now be done from a memory buffer (C#, C++, Python, Java).
+* **Vision SDK 0.13.0-beta.1** released July 2023. Image Analysis support was added for Universal Windows Platform (UWP) applications (C++, C#). Run-time package size reduction: Only the two native binaries
+`Azure-AI-Vision-Native.dll` and `Azure-AI-Vision-Extension-Image.dll` are now needed.
+* **Vision SDK 0.11.1-beta.1** released May 2023. Image Analysis APIs were updated to support [Background Removal](../how-to/background-removal.md).
+* **Vision SDK 0.10.0-beta.1** released April 2023. Image Analysis APIs were updated to support [Dense Captions](../concept-describe-images-40.md?tabs=dense).
+* **Vision SDK 0.9.0-beta.1** first released in March 2023, targeting Image Analysis applications on Windows and Linux platforms.
++
+## Next steps
+
+- [Install the SDK](./install-sdk.md)
+- [Try the Image Analysis Quickstart](../quickstarts-sdk/image-analysis-client-library-40.md)
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Previously updated : 09/05/2023 Last updated : 09/13/2023 recommendations: false
After you approve the request in your search service, you can start using the [c
> Virtual networks & private endpoints are only supported for the API, and not currently supported for Azure OpenAI Studio. ### Storage accounts
-Storage accounts in virtual networks and private endpoints are currently not supported by Azure OpenAI on your data.
+Storage accounts behind virtual networks, firewalls, and private endpoints are currently not supported by Azure OpenAI on your data.
## Azure Role-based access controls (Azure RBAC)
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
zone_pivot_groups: openai-use-your-data
# Quickstart: Chat with Azure OpenAI models using your own data +
+[Reference](/javascript/api/@azure/openai) | [Source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai) | [Package (npm)](https://www.npmjs.com/package/@azure/openai) | [Samples](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/openai/Azure.AI.OpenAI/tests/Samples)
++ In this quickstart you can use your own data with Azure OpenAI models. Using Azure OpenAI's models on your data can provide you with a powerful conversational AI platform that enables faster and more accurate communication. + ## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
- Be sure that you are assigned at least the [Cognitive Services Contributor](./how-to/role-based-access-control.md#cognitive-services-contributor) role for the Azure OpenAI resource. +
+- [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule)
+ > [!div class="nextstepaction"] > [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=OVERVIEW&Pillar=AOAI&Product=ownData&Page=quickstart&Section=Prerequisites) + [!INCLUDE [Connect your data to OpenAI](includes/connect-your-data-studio.md)] ::: zone pivot="programming-language-studio"
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
::: zone-end +++ ::: zone pivot="rest-api" [!INCLUDE [REST API quickstart](includes/use-your-data-rest.md)]
ai-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md
Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
ai-services Speech Container Lid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-lid.md
The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-
| Version | Path | |--|| | Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest` |
-| 1.11.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:1.11.0-amd64-preview` |
+| 1.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:1.12.0-amd64-preview` |
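For example, to pull the versioned preview image by tag (shown with the 1.12.0 tag from the table; substitute the tag you need):

```console
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:1.12.0-amd64-preview
```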
All tags, except for `latest`, are in the following format and are case sensitive:
The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-
"tags": [ "1.1.0-amd64-preview", "1.11.0-amd64-preview",
+ "1.12.0-amd64-preview",
"1.3.0-amd64-preview", "1.5.0-amd64-preview", <--redacted for brevity-->
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/language-support.md
| Chinese (Literary) | `lzh` |✔|✔||||
| Chinese Simplified | `zh-Hans` |✔|✔|✔|✔|✔|
| Chinese Traditional | `zh-Hant` |✔|✔|✔|✔||
+| chiShona|`sn`|✔|✔||||
| Croatian | `hr` |✔|✔|✔|✔|✔|
| Czech | `cs` |✔|✔|✔|✔|✔|
| Danish | `da` |✔|✔|✔|✔|✔|
| Greek | `el` |✔|✔|✔|✔|✔|
| Gujarati | `gu` |✔|✔|✔|✔||
| Haitian Creole | `ht` |✔|✔||✔|✔|
+| Hausa|`ha`|✔|✔||||
| Hebrew | `he` |✔|✔|✔|✔|✔|
| Hindi | `hi` |✔|✔|✔|✔|✔|
| Hmong Daw (Latin) | `mww` |✔|✔|||✔|
| Hungarian | `hu` |✔|✔|✔|✔|✔|
| Icelandic | `is` |✔|✔|✔|✔|✔|
+| Igbo|`ig`|✔|✔||||
| Indonesian | `id` |✔|✔|✔|✔|✔|
| Inuinnaqtun | `ikt` |✔|✔||||
| Inuktitut | `iu` |✔|✔|✔|✔||
| Kannada | `kn` |✔|✔|✔|||
| Kazakh | `kk` |✔|✔||||
| Khmer | `km` |✔|✔||✔||
+| Kinyarwanda|`rw`|✔|✔||||
| Klingon | `tlh-Latn` |✔| ||✔|✔|
| Klingon (plqaD) | `tlh-Piqd` |✔| ||✔||
+| Konkani|`gom`|✔|✔||||
| Korean | `ko` |✔|✔|✔|✔|✔|
| Kurdish (Central) | `ku` |✔|✔||✔||
| Kurdish (Northern) | `kmr` |✔|✔||||
| Lao | `lo` |✔|✔||✔||
| Latvian | `lv` |✔|✔|✔|✔|✔|
| Lithuanian | `lt` |✔|✔|✔|✔|✔|
+| Lingala|`ln`|✔|✔||||
+| Lower Sorbian|`dsb`|✔| ||||
+| Luganda|`lug`|✔|✔||||
| Macedonian | `mk` |✔|✔||✔||
+| Maithili|`mai`|✔|✔||||
| Malagasy | `mg` |✔|✔|✔|||
| Malay (Latin) | `ms` |✔|✔|✔|✔|✔|
| Malayalam | `ml` |✔|✔|✔|||
| Myanmar | `my` |✔|✔||✔||
| Nepali | `ne` |✔|✔||||
| Norwegian | `nb` |✔|✔|✔|✔|✔|
+| Nyanja|`nya`|✔|✔||||
| Odia | `or` |✔|✔|✔|||
| Pashto | `ps` |✔|✔||✔||
| Persian | `fa` |✔|✔|✔|✔|✔|
| Punjabi | `pa` |✔|✔|✔|||
| Queretaro Otomi | `otq` |✔|✔||||
| Romanian | `ro` |✔|✔|✔|✔|✔|
+| Rundi|`run`|✔|✔||||
| Russian | `ru` |✔|✔|✔|✔|✔|
| Samoan (Latin) | `sm` |✔|✔|✔|||
| Serbian (Cyrillic) | `sr-Cyrl` |✔|✔||✔||
| Serbian (Latin) | `sr-Latn` |✔|✔|✔|✔|✔|
+| Sesotho|`st`|✔|✔||||
+| Sesotho sa Leboa|`nso`|✔|✔||||
+| Setswana|`tn`|✔|✔||||
+| Sindhi|`sd`|✔|✔||||
+| Sinhala|`si`|✔|✔||||
| Slovak | `sk` |✔|✔|✔|✔|✔|
| Slovenian | `sl` |✔|✔|✔|✔|✔|
| Somali (Arabic) | `so` |✔|✔||✔||
| Uzbek (Latin) | `uz` |✔|✔||✔||
| Vietnamese | `vi` |✔|✔|✔|✔|✔|
| Welsh | `cy` |✔|✔|✔|✔|✔|
+| Xhosa|`xh`|✔|✔||||
+| Yoruba|`yo`|✔|✔||||
| Yucatec Maya | `yua` |✔|✔||✔||
| Zulu | `zu` |✔|✔||||
## Transliteration
-The [Transliterate operation](reference/v3-0-transliterate.md) in the Text Translation feature supports the following languages. In the "To/From", "<-->" indicates that the language can be transliterated from or to either of the scripts listed. The "-->" indicates that the language can only be transliterated from one script to the other.
+The [Transliterate operation](reference/v3-0-transliterate.md) in the Text Translation feature supports the following languages. In the `To/From`, `<-->` indicates that the language can be transliterated from or to either of the scripts listed. The `-->` indicates that the language can only be transliterated from one script to the other.
| Language | Language code | Script | To/From | Script|
|:-- |:-:|:-:|:-:|:-:|
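For example, a Transliterate request that converts Japanese text from Japanese script (`Jpan`) to Latin script (`Latn`) looks like the following sketch; authentication headers are omitted:

````HTTP
POST https://api.cognitive.microsofttranslator.com/transliterate?api-version=3.0&language=ja&fromScript=Jpan&toScript=Latn
Content-Type: application/json

[
    { "Text": "こんにちは" }
]
````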
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/whats-new.md
Previously updated : 07/18/2023 Last updated : 09/12/2023 <!-- markdownlint-disable MD024 -->
Translator is a language service that enables users to translate text and docume
Translator service supports language translation for more than 100 languages. If your language community is interested in partnering with Microsoft to add your language to Translator, contact us via the [Translator community partner onboarding form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-riVR3Xj0tOnIRdZOALbM9UOU1aMlNaWFJOOE5YODhRR1FWVzY0QzU1OS4u).
+## September 2023
+
+* Translator service has [text, document translation, and container language support](language-support.md) for the following 18 languages:
+
+|Language|Code|Cloud – Text Translation and Document Translation|Containers – Text Translation|Description|
+|:-|:-|:-|:-|:-|
+|chiShona|`sn`|✔|✔|The official language of Zimbabwe with more than 8 million native speakers.|
+|Hausa|`ha`|✔|✔|The most widely used language in West Africa with more than 150 million speakers worldwide.|
+|Igbo|`ig`|✔|✔|The principal native language of the Igbo people of Nigeria with more than 44 million speakers.|
+|Kinyarwanda|`rw`|✔|✔|The national language of Rwanda with more than 12 million speakers primarily in East and Central Africa.|
+|Lingala|`ln`|✔|✔|One of four official languages of the Democratic Republic of the Congo with more than 60 million speakers.|
+|Luganda|`lug`|✔|✔|A major language of Uganda with more than 5 million speakers.|
+|Nyanja|`nya`|✔|✔|Nyanja, also known as Chewa, is spoken mainly in Malawi and has more than 2 million native speakers.|
+|Rundi|`run`|✔|✔|Rundi, also known as Kirundi, is the national language of Burundi and has more than 6 million native speakers.|
+|Sesotho|`st`|✔|✔|Sesotho, also known as Sotho, is the national and official language of Lesotho, one of 12 official languages of South Africa, and one of 16 official languages of Zimbabwe. It has more than 5.6 million native speakers.|
+|Sesotho sa Leboa|`nso`|✔|✔|Sesotho sa Leboa, also known as Northern Sotho, is the native language of more than 4.6 million people in South Africa.|
+|Setswana|`tn`|✔|✔|Setswana, also known as Tswana, is an official language of Botswana and South Africa and has more than 5 million speakers.|
+|Xhosa|`xh`|✔|✔|An official language of South Africa and Zimbabwe, Xhosa has more than 20 million speakers.|
+|Yoruba|`yo`|✔|✔|The principal native language of the Yoruba people of West Africa, it has more than 50 million speakers.|
+|Konkani|`gom`|✔|✔|The official language of the Indian state of Goa with more than 7 million speakers worldwide.|
+|Maithili|`mai`|✔|✔|One of the 22 officially recognized languages of India and the second most spoken language in Nepal. It has more than 20 million speakers.|
+|Sindhi|`sd`|✔|✔|Sindhi is an official language of the Sindh province of Pakistan and the Rajasthan state in India. It has more than 33 million speakers worldwide.|
+|Sinhala|`si`|✔|✔|One of the official and national languages of Sri Lanka, Sinhala has more than 16 million native speakers.|
+|Lower Sorbian|`dsb`|✔|Currently not supported in containers|A West Slavic language spoken primarily in eastern Germany. It has approximately 7,000 speakers.|
+ ## July 2023 [!INCLUDE [Azure AI services rebrand](../includes/rebrand-note.md)]
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCl
Yes, you can run all three configurations, that is, `default`, `aksManagedAutoUpgradeSchedule`, and `aksManagedNodeOSUpgradeSchedule`, simultaneously. If the windows overlap, AKS decides the running order.
+* I configured a maintenance window, but upgrade didn't happen - why?
+
+ AKS auto-upgrade needs a certain amount of time to take the maintenance window into consideration. We recommend at least 6 hours between creating or updating a maintenance configuration and its scheduled start time.
+
+* AKS auto-upgrade didn't upgrade all my agent pools, or one of the pools was upgraded outside of the maintenance window. Why?
+
+ If an agent pool fails to upgrade (for example, because Pod Disruption Budgets prevented it from upgrading) or is in a Failed state, it might be upgraded later, outside of the maintenance window. This scenario is called a "catch-up upgrade" and avoids leaving agent pools on a different version than the AKS control plane.
+ * Are there any best practices for the maintenance configurations? We recommend setting the [Node OS security updates][node-image-auto-upgrade] schedule to a weekly cadence if you're using the `NodeImage` channel, because a new node image ships every week, and to a daily cadence if you opt in to the `SecurityPatch` channel to receive daily security updates. Set the [auto-upgrade][auto-upgrade] schedule to a monthly cadence to stay on top of the Kubernetes N-2 [support policy][aks-support-policy].
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 08/02/2023 Last updated : 09/13/2023
The following are the current limitations and known issues with PowerShell runbo
**Known issues**
+* Runbooks that take a dependency on internal file paths such as `C:\modules` might fail due to changes in the service's backend infrastructure. Change your runbook code to remove any dependencies on internal file paths, and use [Get-ChildItem](/powershell/module/microsoft.powershell.management/get-childitem?view=powershell-7.3) to get the required directory.
+* Modules imported through an ARM template might not load with `Import-Module`. As a workaround, create a .zip file named after the module and add the module files directly to the .zip file instead of zipping the named folder (for example, *ModuleNamedZipFile.zip\ModuleFiles*). You can then delete the modules and add them again by using the new .zip file.
+* The `Get-AzStorageAccount` cmdlet might fail with an error: *The `Get-AzStorageAccount` command was found in the module `Az.Storage`, but the module could not be loaded*.
+* PowerShell 5.1 modules uploaded through .zip files might not load in runbooks. As a workaround, create a .zip file named after the module and add the module files directly to the .zip file instead of zipping the named folder (for example, *ModuleNamedZipFile.zip\ModuleFiles*). You can then delete the modules and add them again by using the new .zip file.
+* Completed jobs might show a warning message: *Both Az and AzureRM modules were detected on this machine. Az and AzureRM modules cannot be imported in the same session or used in the same script or runbook*. This is just a warning message and doesn't affect job execution.
* PowerShell runbooks can't retrieve an unencrypted [variable asset](./shared-resources/variables.md) with a null value. * PowerShell runbooks can't retrieve a variable asset with `*~*` in the name. * A [Get-Process](/powershell/module/microsoft.powershell.management/get-process) operation in a loop in a PowerShell runbook can crash after about 80 iterations.
The following are the current limitations and known issues with PowerShell runbo
**Limitations** - You must be familiar with PowerShell scripting.- - The Azure Automation internal PowerShell cmdlets aren't supported on a Linux Hybrid Runbook Worker. You must import the `automationassets` module at the beginning of your PowerShell runbook to access the Automation account shared resources (assets) functions. - For the PowerShell 7 runtime version, the module activities aren't extracted for the imported modules. - *PSCredential* runbook parameter type isn't supported in PowerShell 7 runtime version.
The following are the current limitations and known issues with PowerShell runbo
**Known issues**
+- Runbooks that take a dependency on internal file paths such as `C:\modules` might fail due to changes in the service's backend infrastructure. Change your runbook code to remove any dependencies on internal file paths, and use [Get-ChildItem](/powershell/module/microsoft.powershell.management/get-childitem?view=powershell-7.3) to get the required directory.
+- Modules imported through an ARM template might not load with `Import-Module`. As a workaround, create a .zip file named after the module and add the module files directly to the .zip file instead of zipping the named folder (for example, *ModuleNamedZipFile.zip\ModuleFiles*). You can then delete the modules and add them again by using the new .zip file.
+- The `Get-AzStorageAccount` cmdlet might fail with an error: *The `Get-AzStorageAccount` command was found in the module `Az.Storage`, but the module could not be loaded*.
- Executing child scripts by using `.\child-runbook.ps1` isn't supported in this preview. **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from the `Az.Automation` module) to start another runbook from the parent runbook. - Runbook properties defining logging preference aren't supported in the PowerShell 7 runtime.
The following are the current limitations and known issues with PowerShell runbo
- Azure doesn't support all PowerShell input parameters. [Learn more](runbook-input-parameters.md). **Known issues**-
+- Runbooks that take a dependency on internal file paths such as `C:\modules` might fail due to changes in the service's backend infrastructure. Change your runbook code to remove any dependencies on internal file paths, and use [Get-ChildItem](/powershell/module/microsoft.powershell.management/get-childitem?view=powershell-7.3) to get the required directory.
+- Modules imported through an ARM template might not load with `Import-Module`. As a workaround, create a .zip file named after the module and add the module files directly to the .zip file instead of zipping the named folder (for example, *ModuleNamedZipFile.zip\ModuleFiles*). You can then delete the modules and add them again by using the new .zip file.
+- The `Get-AzStorageAccount` cmdlet might fail with an error: *The `Get-AzStorageAccount` command was found in the module `Az.Storage`, but the module could not be loaded*.
- Executing child scripts by using `.\child-runbook.ps1` isn't supported in this preview. **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from the *Az.Automation* module) to start another runbook from the parent runbook. - Runbook properties defining logging preference aren't supported in the PowerShell 7 runtime.
Following are the limitations of Python runbooks
- Azure Automation doesn't support **sys.stderr**. - The Python **automationassets** package isn't available on pypi.org, so it's not available for import onto a Windows machine.
-# [Python 3.10 (preview)](#tab/py10)
-**Limitations**
+# [Python 3.10 (preview)](#tab/py10)
- For Python 3.10 (preview) modules, currently only wheel files targeting the cp310 Linux OS are supported. [Learn more](./python-3-packages.md) - Custom packages for Python 3.10 (preview) are only validated during job runtime. The job is expected to fail if the package isn't compatible with the runtime or if required package dependencies aren't imported into the Automation account.
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-app-configuration Reference Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/reference-kubernetes-provider.md
The `spec.keyValues.refresh.monitoring.keyValues` is an array of objects, which
#### Use Connection String 1. Create a Kubernetes Secret in the same namespace as the `AzureAppConfigurationProvider` resource and add Azure App Configuration connection string with key *azure_app_configuration_connection_string* in the Secret.
-2. Set the `spec.connectionStringReference` property to the name of the Secret in the following sample `AzureAppConfigurationProvider` resource and deploy it to the Kubernetes cluster.
+1. Set the `spec.connectionStringReference` property to the name of the Secret in the following sample `AzureAppConfigurationProvider` resource and deploy it to the Kubernetes cluster.
``` yaml apiVersion: azconfig.io/v1beta1
The `spec.keyValues.refresh.monitoring.keyValues` is an array of objects, which
target: configMapName: configmap-created-by-appconfig-provider ```- ### Key-value selection Use the `selectors` property to filter the key-values to be downloaded from Azure App Configuration.
azure-arc Upgrade Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-cli.md
You'll need to connect and authenticate to a Kubernetes cluster and have an exis
kubectl config use-context <Kubernetes cluster name> ```
-### Upgrade Arc data controller extension
-
-Upgrade the Arc data controller extension first.
-
-Retrieve the name of your extension and its version:
-
-1. Go to the Azure portal
-1. Select **Overview** for your Azure Arc enabled Kubernetes cluster
-1. Selecting the **Extensions** tab on the left.
-
-Alternatively, you can use `az` CLI to get the name of your extension and its version running.
-
-```azurecli
-az k8s-extension list --resource-group <resource-group> --cluster-name <connected cluster name> --cluster-type connectedClusters
-```
-
-Example:
-
-```azurecli
-az k8s-extension list --resource-group rg-arcds --cluster-name aks-arc --cluster-type connectedClusters
-```
-
-After you retrieve the extension name and its version, upgrade the extension.
-
-```azurecli
-az k8s-extension update --resource-group <resource-group> --cluster-name <connected cluster name> --cluster-type connectedClusters --name <name of extension> --version <extension version> --release-train stable --config systemDefaultValues.image="<registry>/<repository>/arc-bootstrapper:<imageTag>"
-```
-
-Example:
-
-```azurecli
-az k8s-extension update --resource-group rg-arcds --cluster-name aks-arc --cluster-type connectedClusters --name aks-arc-ext --version
-1.2.19581002 --release-train stable --config systemDefaultValues.image="mcr.microsoft.com/arcdata/arc-bootstrapper:v1.7.0_2022-05-24"
-```
- ### Upgrade data controller You can perform a dry run first. The dry run validates the registry exists, the version schema, and the private repository authorization token (if used). To perform a dry run, use the `--dry-run` parameter in the `az arcdata dc upgrade` command. For example:
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023 #
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Connected Machine agent, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 06/06/2023 Last updated : 09/11/2023
Metadata information about a connected machine is collected after the Connected
* Cluster resource ID (for Azure Stack HCI nodes) * Hardware manufacturer * Hardware model
-* CPU socket, physical core and logical core counts
+* CPU family, socket, physical core and logical core counts
+* Total physical memory
+* Serial number
+* SMBIOS asset tag
* Cloud provider * Amazon Web Services (AWS) metadata, when running in AWS: * Account ID
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.30 - May 2023
+
+Download for [Windows](https://download.microsoft.com/download/7/7/9/779eae73-a12b-4170-8c5e-abec71bc14cf/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+
+### New features
+
+- Introduced a scheduled task that checks for agent updates on a daily basis. Currently, the update mechanism is inactive and no changes are made to your server even if a newer agent version is available. In the future, you'll be able to schedule updates of the Azure Connected Machine agent from Azure. For more information, see [Automatic agent upgrades](manage-agent.md#automatic-agent-upgrades).
+
+### Fixed
+
+- Resolved an issue that could cause the agent to go offline after rotating its connectivity keys.
+- `azcmagent show` no longer shows an incomplete resource ID or Azure portal page URL when the agent isn't configured.
+ ## Version 1.29 - April 2023 Download for [Windows](https://download.microsoft.com/download/2/7/0/27063536-949a-4b16-a29a-3d1dcb29cff7/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Connected Machine agent description: This article has release notes for Azure Connected Machine agent. For many of the summarized issues, there are links to more details. Previously updated : 07/11/2023 Last updated : 09/11/2023
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Connected Machine agent](agent-release-notes-archive.md).
+## Version 1.34 - September 2023
+
+Download for [Windows](https://download.microsoft.com/download/b/3/2/b3220316-13db-4f1f-babf-b1aab33b364f/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+
+### New features
+
+- [Extended Security Updates for Windows Server 2012 and 2012 R2](prepare-extended-security-updates.md) can be purchased and enabled through Azure Arc. If your server is already running the Azure Connected Machine agent, [upgrade to agent version 1.34](manage-agent.md#upgrade-the-agent) or later to take advantage of this new capability.
+- Additional system metadata is collected to enhance your device inventory in Azure:
+ - Total physical memory
+ - Additional processor information
+ - Serial number
+ - SMBIOS asset tag
+- Network requests to Microsoft Entra ID (formerly Azure Active Directory) now use `login.microsoftonline.com` instead of `login.windows.net`
+
+### Fixed
+
+- Better handling of disconnected agent scenarios in the extension manager and policy engine.
+ ## Version 1.33 - August 2023 Download for [Windows](https://download.microsoft.com/download/0/c/7/0c7a484b-e29e-42f9-b3e9-db431df2e904/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
Agent version 1.33 contains a fix for [CVE-2023-38176](https://msrc.microsoft.co
### Known issue
-[azcmagent check](azcmagent-check.md) validates a new endpoint in this release: `<geography>-ats.his.arc.azure.com`. This endpoint is reserved for future use and not required for the Azure Connected Machine agent to operate successfully. However, if you are using a private endpoint, this endpoint will fail the network connectivity check. You can safely ignore this endpoint in the results and should instead confirm that all other endpoints are reachable.
+[azcmagent check](azcmagent-check.md) validates a new endpoint in this release: `<geography>-ats.his.arc.azure.com`. This endpoint is reserved for future use and not required for the Azure Connected Machine agent to operate successfully. However, if you're using a private endpoint, this endpoint will fail the network connectivity check. You can safely ignore this endpoint in the results and should instead confirm that all other endpoints are reachable.
This endpoint will be removed from `azcmagent check` in a future release.
To check if you're running the latest version of the Azure connected machine age
- Improved output of the [azcmagent check](azcmagent-check.md) command - Better handling of spaces in the `--location` parameter of [azcmagent connect](azcmagent-connect.md)
-## Version 1.30 - May 2023
-
-Download for [Windows](https://download.microsoft.com/download/7/7/9/779eae73-a12b-4170-8c5e-abec71bc14cf/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
-
-### New features
--- Introduced a scheduled task that checks for agent updates on a daily basis. Currently, the update mechanism is inactive and no changes are made to your server even if a newer agent version is available. In the future, you'll be able to schedule updates of the Azure Connected Machine agent from Azure. For more information, see [Automatic agent upgrades](manage-agent.md#automatic-agent-upgrades).-
-### Fixed
--- Resolved an issue that could cause the agent to go offline after rotating its connectivity keys.-- `azcmagent show` no longer shows an incomplete resource ID or Azure portal page URL when the agent isn't configured.- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
Delivering ESUs to your Windows Server 2012/2012 R2 machines provides the follow
Other Azure services through Azure Arc-enabled servers are available, with offerings such as: * [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) - As part of the cloud security posture management (CSPM) pillar, it provides server protections through [Microsoft Defender for Servers](../../defender-for-cloud/plan-defender-for-servers.md) to help protect you from various cyber threats and vulnerabilities.
-* [Update Manager (preview)](../../update-center/overview.md) - Unified management and governance of update compliance that includes not only Azure and hybrid machines, but also ESU update compliance for all your Windows Server 2012/2012 R2 machines.
+* [Azure Update Manager (preview)](../../update-center/overview.md) - Unified management and governance of update compliance that includes not only Azure and hybrid machines, but also ESU update compliance for all your Windows Server 2012/2012 R2 machines.
* [Azure Policy](../../governance/policy/overview.md) helps to enforce organizational standards and to assess compliance at-scale. Beyond providing an aggregated view to evaluate the overall state of the environment, Azure Policy helps to bring your resources to compliance through bulk and automatic remediation. >[!NOTE]
- >Activation of ESU is planned for the third quarter of 2023. Using Azure services such as Update Manager (preview) and Azure Policy to support managing ESU-eligible Windows Server 2012/2012 R2 machines are also planned for the third quarter.
+ >Activation of ESU is planned for the third quarter of 2023. Using Azure services such as Azure Update Manager (preview) and Azure Policy to support managing ESU-eligible Windows Server 2012/2012 R2 machines is also planned for the third quarter.
## Prepare delivery of ESUs
azure-cache-for-redis Cache Remove Tls 10 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-remove-tls-10-11.md
description: Learn how to remove TLS 1.0 and 1.1 from your application when comm
Previously updated : 07/13/2023 Last updated : 09/12/2023 ms.devlang: csharp, golang, java, javascript, php, python
ms.devlang: csharp, golang, java, javascript, php, python
# Remove TLS 1.0 and 1.1 from use with Azure Cache for Redis
-There's an industry-wide push toward the exclusive use of Transport Layer Security (TLS) version 1.2 or later. TLS versions 1.0 and 1.1 are known to be susceptible to attacks such as BEAST and POODLE, and to have other Common Vulnerabilities and Exposures (CVE) weaknesses. They also don't support the modern encryption methods and cipher suites recommended by Payment Card Industry (PCI) compliance standards. This [TLS security blog](https://www.acunetix.com/blog/articles/tls-vulnerabilities-attacks-final-part/) explains some of these vulnerabilities in more detail.
+To meet the industry-wide push toward the exclusive use of Transport Layer Security (TLS) version 1.2 or later, Azure Cache for Redis is moving toward requiring the use of TLS 1.2 in October 2024. TLS versions 1.0 and 1.1 are known to be susceptible to attacks such as BEAST and POODLE, and to have other Common Vulnerabilities and Exposures (CVE) weaknesses.
-As a part of this effort, we'll be making the following changes to Azure Cache for Redis:
+TLS versions 1.0 and 1.1 also don't support the modern encryption methods and cipher suites recommended by Payment Card Industry (PCI) compliance standards. This [TLS security blog](https://www.acunetix.com/blog/articles/tls-vulnerabilities-attacks-final-part/) explains some of these vulnerabilities in more detail.
-* **Phase 1:** We'll configure the default minimum TLS version to be 1.2 for newly created cache instances (previously, it was TLS 1.0). Existing cache instances won't be updated at this point. You can still use the Azure portal or other management APIs to [change the minimum TLS version](cache-configure.md#access-ports) to 1.0 or 1.1 for backward compatibility.
-* **Phase 2:** We'll stop supporting TLS 1.1 and TLS 1.0. After this change, your application must use TLS 1.2 or later to communicate with your cache. The Azure Cache for Redis service is expected to be available while we migrate it to support only TLS 1.2 or later.
+> [!IMPORTANT]
+> On October 1, 2024, the TLS 1.2 requirement will be enforced.
+>
+
+As a part of this effort, you can expect the following changes to Azure Cache for Redis:
+
+- _Phase 1_: Azure Cache for Redis stops offering TLS 1.0/1.1 as an option for the MinimumTLSVersion setting when you create new caches. Existing cache instances won't be updated at this point. You can still use the Azure portal or other management APIs to [change the minimum TLS version](cache-configure.md#access-ports) to 1.0 or 1.1 for backward compatibility.
+- _Phase 2_: Azure Cache for Redis stops supporting TLS 1.1 and TLS 1.0 starting October 1, 2024. After this change, your application must use TLS 1.2 or later to communicate with your cache. The Azure Cache for Redis service will remain available while we update the MinimumTLSVersion for all caches to 1.2.
- > [!WARNING]
- > Phase 2 is postponed because of COVID-19. We strongly recommend that you begin planning for this change now and proactively update clients to support TLS 1.2 or later.
- >
+| Date | Description |
+|--|--|
+| September 2023 | TLS 1.0/1.1 retirement announcement |
+| March 1, 2024 | Beginning March 1, 2024, you can't set the minimum TLS version for any cache to 1.0 or 1.1. |
+| September 30, 2024 | Ensure that all your applications connect to Azure Cache for Redis by using TLS 1.2, and that the minimum TLS version in your cache settings is set to 1.2. |
+| October 1, 2024 | The minimum TLS version for all cache instances is updated to 1.2. After this change, Azure Cache for Redis instances reject connections that use TLS 1.0 or 1.1. |
> [!IMPORTANT]
- > The content in this article does not apply to Azure Cache for Redis Enterprise/Enterprise Flash as the Enterprise tiers support TLS 1.2 only.
+ > The content in this article does not apply to Azure Cache for Redis Enterprise/Enterprise Flash because the Enterprise tiers only support TLS 1.2.
>
-As part of this change, we'll also remove support for older cypher suites that aren't secure. Our supported cypher suites are restricted to the following suites when the cache is configured with a minimum of TLS 1.2:
+As part of this change, Azure Cache for Redis removes support for older cipher suites that aren't secure. Supported cipher suites are restricted to the following suites when the cache is configured with a minimum of TLS 1.2:
-* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384
-* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256
+- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384
+- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256
-This article provides general guidance about how to detect dependencies on these earlier TLS versions and remove them from your application.
-
-The dates when these changes take effect are:
-
-| Cloud | Phase 1 Start Date | Phase 2 Start Date |
-|-|--|-|
-| Azure (global) | January 13, 2020 | Postponed because of COVID-19 |
-| Azure Government | March 13, 2020 | Postponed because of COVID-19 |
-| Azure Germany | March 13, 2020 | Postponed because of COVID-19 |
-| Microsoft Azure operated by 21Vianet | March 13, 2020 | Postponed because of COVID-19 |
-
-> [!NOTE]
-> Phase 2 is postponed because of COVID-19. This article will be updated when specific dates are set.
->
+The following sections provide guidance about how to detect dependencies on these earlier TLS versions and remove them from your application.
## Check whether your application is already compliant
-You can find out whether your application works with TLS 1.2 by setting the **Minimum TLS version** value to TLS 1.2 on a test or staging cache, then running tests. The **Minimum TLS version** setting is in the [Advanced settings](cache-configure.md#advanced-settings) of your cache instance in the Azure portal. If the application continues to function as expected after this change, it's probably compliant. You might need to configure the Redis client library used by your application to enable TLS 1.2 to connect to Azure Cache for Redis.
+You can find out whether your application works with TLS 1.2 by setting the **Minimum TLS version** value to TLS 1.2 on a test or staging cache, then running tests. The **Minimum TLS version** setting is in the [Advanced settings](cache-configure.md#advanced-settings) of your cache instance in the Azure portal. If the application continues to function as expected after this change, it's probably compliant. You also need to configure the Redis client library used by your application to enable TLS 1.2 to connect to Azure Cache for Redis.
## Configure your application to use TLS 1.2
Most applications use Redis client libraries to handle communication with their
Redis .NET clients use the earliest TLS version by default on .NET Framework 4.5.2 or earlier, and use the latest TLS version on .NET Framework 4.6 or later. If you're using an older version of .NET Framework, enable TLS 1.2 manually:
-* **StackExchange.Redis:** Set `ssl=true` and `sslProtocols=tls12` in the connection string.
-* **ServiceStack.Redis:** Follow the [ServiceStack.Redis](https://github.com/ServiceStack/ServiceStack.Redis#servicestackredis-ssl-support) instructions and requires ServiceStack.Redis v5.6 at a minimum.
+- _StackExchange.Redis_: Set `ssl=true` and `sslProtocols=tls12` in the connection string, as shown in the sketch after this list.
+- _ServiceStack.Redis_: Follow the [ServiceStack.Redis](https://github.com/ServiceStack/ServiceStack.Redis#servicestackredis-ssl-support) instructions; ServiceStack.Redis v5.6 or later is required.
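For example, here's a minimal StackExchange.Redis sketch that pins TLS 1.2 on older .NET Framework versions. The cache host name and access key are placeholders, not values from this article; substitute your own.

```csharp
using System;
using System.Security.Authentication;
using StackExchange.Redis;

class TlsExample
{
    static void Main()
    {
        // Placeholder endpoint and key; replace with your cache values.
        var options = new ConfigurationOptions
        {
            EndPoints = { "contoso.redis.cache.windows.net:6380" },
            Password = "<access-key>",
            Ssl = true,
            // Force TLS 1.2; .NET Framework 4.5.2 and earlier otherwise
            // negotiate an older TLS version by default.
            SslProtocols = SslProtocols.Tls12
        };

        using (var connection = ConnectionMultiplexer.Connect(options))
        {
            var db = connection.GetDatabase();
            db.StringSet("greeting", "hello");
            Console.WriteLine(db.StringGet("greeting"));
        }
    }
}
```

The connection string form `contoso.redis.cache.windows.net:6380,password=<access-key>,ssl=true,sslProtocols=tls12` is equivalent to setting these options in code.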
### .NET Core
-Redis .NET Core clients default to the OS default TLS version, which depends on the OS itself.
+Redis .NET Core clients default to the OS default TLS version, which depends on the OS itself.
Depending on the OS version and any patches that have been applied, the effective default TLS version can vary. For more information, see [here](/dotnet/framework/network-programming/#support-for-tls-12). However, if you're using an old OS or just want to be sure, we recommend configuring the preferred TLS version manually through the client. - ### Java Redis Java clients use TLS 1.0 on Java version 6 or earlier. Jedis, Lettuce, and Redisson can't connect to Azure Cache for Redis if TLS 1.0 is disabled on the cache. Upgrade your Java framework to use new TLS versions. For Java 7, Redis clients don't use TLS 1.2 by default but can be configured for it. Jedis allows you to specify the underlying TLS settings with the following code snippet:
-``` Java
+```java
SSLSocketFactory sslSocketFactory = (SSLSocketFactory) SSLSocketFactory.getDefault(); SSLParameters sslParameters = new SSLParameters(); sslParameters.setEndpointIdentificationAlgorithm("HTTPS");
shardInfo.setPassword("cachePassword");
Jedis jedis = new Jedis(shardInfo); ```
-The Lettuce and Redisson clients don't yet support specifying the TLS version. They'll break if the cache accepts only TLS 1.2 connections. Fixes for these clients are being reviewed, so check with those packages for an updated version with this support.
+The Lettuce and Redisson clients don't yet support specifying the TLS version. They break if the cache accepts only TLS 1.2 connections. Fixes for these clients are being reviewed, so check with those packages for an updated version with this support.
In Java 8, TLS 1.2 is used by default and shouldn't require updates to your client configuration in most cases. To be safe, test your application.
Node Redis and IORedis use TLS 1.2 by default.
### PHP #### Predis
-
-* Versions earlier than PHP 7: Predis supports only TLS 1.0. These versions don't work with TLS 1.2; you must upgrade to use TLS 1.2.
-
-* PHP 7.0 to PHP 7.2.1: Predis uses only TLS 1.0 or 1.1 by default. You can use the following workaround to use TLS 1.2. Specify TLS 1.2 when you create the client instance:
+
+- Versions earlier than PHP 7: Predis supports only TLS 1.0. These versions don't work with TLS 1.2; you must upgrade to use TLS 1.2.
+
+- PHP 7.0 to PHP 7.2.1: Predis uses only TLS 1.0 or 1.1 by default. You can use the following workaround to use TLS 1.2. Specify TLS 1.2 when you create the client instance:
``` PHP $redis = new Predis\Client([
Node Redis and IORedis use TLS 1.2 by default.
]); ```
-* PHP 7.3 and later versions: Predis uses the latest TLS version.
+- PHP 7.3 and later versions: Predis uses the latest TLS version.
#### PhpRedis
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Previously updated : 05/31/2023- Last updated : 09/12/2023 # What's New in Azure Cache for Redis
+## September 2023
+
+### Remove TLS 1.0 and 1.1 from use with Azure Cache for Redis
+
+To meet the industry-wide push toward the exclusive use of Transport Layer Security (TLS) version 1.2 or later, Azure Cache for Redis is moving toward requiring the use of TLS 1.2 in October 2024.
+
+As a part of this effort, you can expect the following changes to Azure Cache for Redis:
+
+- _Phase 1_: Azure Cache for Redis stops offering TLS 1.0/1.1 as an option for the MinimumTLSVersion setting when you create new caches. Existing cache instances won't be updated at this point. You can still use the Azure portal or other management APIs to [change the minimum TLS version](cache-configure.md#access-ports) to 1.0 or 1.1 for backward compatibility.
+- _Phase 2_: Azure Cache for Redis stops supporting TLS 1.1 and TLS 1.0 starting October 1, 2024. After this change, your application must use TLS 1.2 or later to communicate with your cache. The Azure Cache for Redis service is expected to remain available while we update the MinimumTLSVersion for all caches to 1.2.
+
+For more information, see [Remove TLS 1.0 and 1.1 from use with Azure Cache for Redis](cache-remove-tls-10-11.md).
+ ## June 2023
-Azure Active Directory for authentication and role-based access control are available across regions that support Azure Cache for Redis.
+Azure Active Directory for authentication and role-based access control is available across regions that support Azure Cache for Redis.
## May 2023
For more information, see [Configure clustering for Azure Cache for Redis instan
### 99th percentile latency metric (preview)
-A new metric is available to track the worst-case latency of server-side commands in Azure Cache for Redis instances. Latency is measured by using `PING` commands and tracking response times. This metric can be used to track the health of your cache instance and to see if long-running commands are compromising latency performance.
+A new metric is available to track the worst-case latency of server-side commands in Azure Cache for Redis instances. Latency is measured by using `PING` commands and tracking response times. This metric can be used to track the health of your cache instance and to see if long-running commands are compromising latency performance.
For more information, see [Monitor Azure Cache for Redis](cache-how-to-monitor.md#list-of-metrics).
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-functions Create First Function Cli Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md
Before you begin, you must have the following prerequisites:
+ The Azure [Az PowerShell module](/powershell/azure/install-azure-powershell) version 5.9.0 or later. ::: zone pivot="nodejs-model-v3"
-+ [Node.js](https://nodejs.org/) version 18 or 16.
++ [Node.js](https://nodejs.org/) version 20 (preview), 18, or 16. ::: zone-end ::: zone pivot="nodejs-model-v4"
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime node --runtime-version 18 --functions-version 4 --name <APP_NAME> --storage-account <STORAGE_NAME> ```
- The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. It's recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`.
+ The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. It's recommended that you use the latest LTS version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`.
# [Azure PowerShell](#tab/azure-powershell)
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime node -RuntimeVersion 18 -FunctionsVersion 4 -Location <REGION> ```
- The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. It's recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`.
+ The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. It's recommended that you use the latest LTS version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`.
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
description: Learn how to develop and test Azure Functions by using the Azure Fu
ms.devlang: csharp, java, javascript, powershell, python Previously updated : 06/19/2022 Last updated : 09/01/2023
+zone_pivot_groups: programming-languages-set-functions
#Customer intent: As an Azure Functions developer, I want to understand how Visual Studio Code supports Azure Functions so that I can more efficiently create, publish, and maintain my Functions projects.
The Azure Functions extension provides these benefits:
* Publish your Azure Functions project directly to Azure. * Write your functions in various languages while taking advantage of the benefits of Visual Studio Code.
-The extension can be used with the following languages, which are supported by the Azure Functions runtime starting with version 2.x:
-
-* [C# compiled](functions-dotnet-class-library.md)
-* [C# script](functions-reference-csharp.md)<sup>*</sup>
-* [JavaScript](functions-reference-node.md?tabs=javascript)
-* [Java](functions-reference-java.md)
-* [PowerShell](functions-reference-powershell.md)
-* [Python](functions-reference-python.md)
-* [TypeScript](functions-reference-node.md?tabs=typescript)
-
-<sup>*</sup>Requires that you [set C# script as your default project language](#c-script-projects).
-
-In this article, examples are currently available only for JavaScript (Node.js) and C# class library functions.
-
-This article provides details about how to use the Azure Functions extension to develop functions and publish them to Azure. Before you read this article, you should [create your first function by using Visual Studio Code](./create-first-function-vs-code-csharp.md).
+>You're viewing the C# version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
+If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-csharp.md).
+>You're viewing the Java version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
+If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-java.md).
+>You're viewing the JavaScript version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
+If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-node.md).
+>You're viewing the PowerShell version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
+If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-powershell.md).
+>You're viewing the Python version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
+If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-python.md).
+>You're viewing the TypeScript version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
+If you want to get started right away, complete the [Visual Studio Code quickstart article](./create-first-function-vs-code-typescript.md).
> [!IMPORTANT] > Don't mix local development and portal development for a single function app. When you publish from a local project to a function app, the deployment process overwrites any functions that you developed in the portal.
This article provides details about how to use the Azure Functions extension to
These prerequisites are only required to [run and debug your functions locally](#run-functions-locally). They aren't required to create or publish projects to Azure Functions.
-# [C\#](#tab/csharp)
-
-* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-
-* The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
-
-* [.NET Core CLI tools](/dotnet/core/tools/?tabs=netcore2x).
-
-# [Java](#tab/java)
-
-* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-
-* [Debugger for Java extension](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-debug).
-
-* [Java](/azure/developer/jav#java-versions).
-
-* [Maven 3 or later](https://maven.apache.org/).
-
-# [JavaScript](#tab/nodejs)
++ The [Azure Functions Core Tools](functions-run-local.md), which enables an integrated local debugging experience. When using the Azure Functions extension, the easiest way to install Core Tools is by running the `Azure Functions: Install or Update Azure Functions Core Tools` command from the command palette. ++ The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
-* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
++ [.NET (CLI)](/dotnet/core/tools/), which is included in the .NET SDK.++ [Debugger for Java extension](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-debug).
-* [Node.js](https://nodejs.org/), one of the [supported versions](functions-reference-node.md#node-version). Use the `node --version` command to check your version.
++ [Java](/azure/developer/jav#java-versions).
-# [PowerShell](#tab/powershell)
++ [Maven 3 or later](https://maven.apache.org/).++ [Node.js](https://nodejs.org/), one of the [supported versions](functions-reference-node.md#node-version). Use the `node --version` command to check your version.++ [PowerShell 7.2](/powershell/scripting/install/installing-powershell-core-on-windows) recommended. For version information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
-* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools include the entire Azure Functions runtime, so download and installation might take some time.
++ [.NET 6.0 runtime](https://dotnet.microsoft.com/download).
-* [PowerShell 7.2](/powershell/scripting/install/installing-powershell-core-on-windows) recommended. For version information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
++ The [PowerShell extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell). ++ [Python](https://www.python.org/downloads/), one of the [supported versions](functions-reference-python.md#python-version).
-* [.NET 6.0 runtime](https://dotnet.microsoft.com/download).
-
-* The [PowerShell extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell).
-
-# [Python](#tab/python)
-
-* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools include the entire Azure Functions runtime, so download and installation might take some time.
-
-* [Python](https://www.python.org/downloads/), one of the [supported versions](functions-reference-python.md#python-version).
-
-* [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.
++ [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code. [!INCLUDE [functions-x86-emulation-on-arm64-note](../../includes/functions-x86-emulation-on-arm64-note.md)]-- ## Create an Azure Functions project
The Functions extension lets you create a function app project, along with your
1. A function is created in your chosen language and in the template for an HTTP-triggered function.
- :::image type="content" source="./media/functions-develop-vs-code/new-function-created.png" alt-text="Screenshot for H T T P-triggered function template in Visual Studio Code.":::
- ### Generated project files The project template creates a project in your chosen language and installs required dependencies. For any language, the new project has these files:
The project template creates a project in your chosen language and installs requ
Depending on your language, these other files are created:
-# [C\#](#tab/csharp)
-
-* [HttpExample.cs class library file](functions-dotnet-class-library.md#functions-class-library-project) that implements the function.
-
-# [Java](#tab/java)
-
-* A pom.xml file in the root folder that defines the project and deployment parameters, including project dependencies and the [Java version](functions-reference-java.md#java-versions). The pom.xml also contains information about the Azure resources that are created during a deployment.
-
-* A [Functions.java file](functions-reference-java.md#triggers-and-annotations) in your src path that implements the function.
-
-# [JavaScript](#tab/nodejs)
+An HttpExample.cs class library file, the contents of which vary depending on whether your project runs in an [isolated worker process](dotnet-isolated-process-guide.md#net-isolated-worker-process-project) or [in-process](functions-dotnet-class-library.md#functions-class-library-project) with the Functions host.
++ A pom.xml file in the root folder that defines the project and deployment parameters, including project dependencies and the [Java version](functions-reference-java.md#java-versions). The pom.xml also contains information about the Azure resources that are created during a deployment.
-* A package.json file in the root folder.
++ A [Functions.java file](functions-reference-java.md#triggers-and-annotations) in your src path that implements the function.
+Files generated depend on the chosen Node.js programming model for Functions:
+### [v3](#tab/node-v3)
++ A package.json file in the root folder.
-* An HttpExample folder that contains the [function.json definition file](functions-reference-node.md#folder-structure) and the [index.js file](functions-reference-node.md#exporting-a-function), a Node.js file that contains the function code.
++ An HttpExample folder that contains:
-# [PowerShell](#tab/powershell)
+ + The [function.json definition file](functions-reference-node.md#folder-structure)
+ + An [index.js file](functions-reference-node.md#exporting-a-function), which contains the function code.
-* An HttpExample folder that contains the [function.json definition file](functions-reference-powershell.md#folder-structure) and the run.ps1 file, which contains the function code.
+### [v4](#tab/node-v4)
-# [Python](#tab/python)
++ A package.json file in the root folder.
-* A project-level requirements.txt file that lists packages required by Functions.
-
-* An HttpExample folder that contains the [function.json definition file](functions-reference-python.md#folder-structure) and the \_\_init\_\_.py file, which contains the function code.
++ A named .js file in the _src\functions_ folder, which contains both the function definition and your function code.
-At this point, you can [add input and output bindings](#add-input-and-output-bindings) to your function.
-You can also [add a new function to your project](#add-a-function-to-your-project).
+An HttpExample folder that contains:
-## Install binding extensions
++ The [function.json definition file](functions-reference-powershell.md#folder-structure)++ A run.ps1 file, which contains the function code.
-Except for HTTP and timer triggers, bindings are implemented in extension packages. You must install the extension packages for the triggers and bindings that need them. The process for installing binding extensions depends on your project's language.
+Files generated depend on the chosen Python programming model for Functions:
+
+### [v2](#tab/python-v2)
-# [C\#](#tab/csharp)
++ A project-level requirements.txt file that lists packages required by Functions.
-Run the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to install the extension packages that you need in your project. The following example demonstrates how you add a binding for an [in-process class library](functions-dotnet-class-library.md):
++ A function_app.py file that contains both the function definition and code.
-```terminal
-dotnet add package Microsoft.Azure.WebJobs.Extensions.<BINDING_TYPE_NAME> --version <TARGET_VERSION>
-```
+### [v1](#tab/python-v1)
-The following example demonstrates how you add a binding for an [isolated-process class library](dotnet-isolated-process-guide.md):
++ A project-level requirements.txt file that lists packages required by Functions.
-```terminal
-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.<BINDING_TYPE_NAME> --version <TARGET_VERSION>
-```
-
-In either case, replace `<BINDING_TYPE_NAME>` with the name of the package that contains the binding you need. You can find the desired binding reference article in the [list of supported bindings](./functions-triggers-bindings.md#supported-bindings).
-
-Replace `<TARGET_VERSION>` in the example with a specific version of the package, such as `3.0.0-beta5`. Valid versions are listed on the individual package pages at [NuGet.org](https://nuget.org). The major versions that correspond to Functions runtime 1.x or 2.x are specified in the reference article for the binding.
-
-# [Java](#tab/java)
-++ An HttpExample folder that contains:
+ + The [function.json definition file](functions-reference-python.md#folder-structure)
+ + An \_\_init\_\_.py file, which contains the function code.
-# [JavaScript](#tab/nodejs)
--
-# [PowerShell](#tab/powershell)
-+
-# [Python](#tab/python)
+At this point, you can do one of these tasks:
-++ [Add input or output bindings to an existing function](#add-input-and-output-bindings).++ [Add a new function to your project](#add-a-function-to-your-project).++ [Run your functions locally](#run-functions-locally).++ [Publish your project to Azure](#publish-to-azure). ## Add a function to your project You can add a new function to an existing project by using one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
-The results of this action depend on your project's language:
-
-# [C\#](#tab/csharp)
-
-A new C# class library (.cs) file is added to your project.
-
-# [Java](#tab/java)
-
-A new Java (.java) file is added to your project.
+The result of this action is that a new C# class library (.cs) file is added to your project.
+The result of this action is that a new Java (.java) file is added to your project.
+The results of this action depend on the Node.js model version.
-# [JavaScript](#tab/nodejs)
+### [v3](#tab/node-v3)
A new folder is created in the project. The folder contains a new function.json file and the new JavaScript code file.
-# [PowerShell](#tab/powershell)
+### [v4](#tab/node-v4)
-A new folder is created in the project. The folder contains a new function.json file and the new PowerShell code file.
++ A package.json file in the root folder.
-# [Python](#tab/python)
-
-The results depend on the Python programming model. For more information, see the [Azure Functions Python developer guide](./functions-reference-python.md).
-
-**Python v1**: A new folder is created in the project. The folder contains a new function.json file and the new Python code file.
-
-**Python v2**: New function code is added either to the default function_app.py file or to another Python file you selected.
++ A named .js file in the _src\functions_ folder, which contains both the function definition and your function code.
+The result of this action is that a new folder is created in the project. The folder contains a new function.json file and the new PowerShell code file.
+The results of this action depend on the Python model version.
-## <a name="add-input-and-output-bindings"></a>Connect to services
-
-You can connect your function to other Azure services by adding input and output bindings. Bindings connect your function to other services without you having to write the connection code. The process for adding bindings depends on your project's language. To learn more about bindings, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
-
-The following examples connect to a storage queue named `outqueue`, where the connection string for the storage account is set in the `MyStorageConnection` application setting in local.settings.json.
-
-# [C\#](#tab/csharp)
+### [v2](#tab/python-v2)
-Update the function method to add the following parameter to the `Run` method definition:
+New function code is added either to the function_app.py file (the default behavior) or to another Python file you selected.
+### [v1](#tab/python-v1)
-The `msg` parameter is an `ICollector<T>` type, which represents a collection of messages that are written to an output binding when the function completes. The following code adds a message to the collection:
+A new folder is created in the project. The folder contains a new function.json file and the new Python code file.
+
- Messages are sent to the queue when the function completes.
+## <a name="add-input-and-output-bindings"></a>Connect to services
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=csharp) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
+You can connect your function to other Azure services by adding input and output bindings. Bindings connect your function to other services without you having to write the connection code.
-# [Java](#tab/java)
+For example, the way you define an output binding that writes data to a storage queue depends on your process model:
-Update the function method to add the following parameter to the `Run` method definition:
+### [In-process](#tab/in-process)
+Update the function method to add a binding parameter defined by using the `Queue` attribute. You can use an `ICollector<T>` type to represent a collection of messages.
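As a minimal sketch of the in-process pattern, reusing the `outqueue` queue and `MyStorageConnection` setting names from the earlier example (the HTTP trigger details here are illustrative assumptions, not the article's own sample):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class QueueOutputExample
{
    [FunctionName("HttpToQueue")]
    public static void Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        // Messages added to this collector are written to the outqueue queue
        // when the function completes. MyStorageConnection is the name of the
        // app setting that holds the storage connection string.
        [Queue("outqueue", Connection = "MyStorageConnection")] ICollector<string> msg)
    {
        msg.Add(req.Query["message"]);
    }
}
```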
-The `msg` parameter is an `OutputBinding<T>` type, where `T` is a string that is written to an output binding when the function completes. The following code sets the message in the output binding:
+### [Isolated process](#tab/isolated-process)
+Update the function method to add a binding parameter defined by using the `QueueOutput` attribute. You can use a `MultiResponse` object to return multiple messages or multiple output streams.
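Under the same assumptions (the `outqueue` queue and `MyStorageConnection` setting from the earlier example, with an illustrative HTTP trigger), a single-message sketch can simply bind the return value; for multiple messages or multiple output streams you'd return a `MultiResponse` object instead:

```csharp
using System.IO;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class QueueOutputExample
{
    [Function("HttpToQueue")]
    // The return value is written to the outqueue queue when the function
    // completes. MyStorageConnection is the name of the app setting that
    // holds the storage connection string.
    [QueueOutput("outqueue", Connection = "MyStorageConnection")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        using var reader = new StreamReader(req.Body);
        return reader.ReadToEnd();
    }
}
```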
-This message is sent to the queue when the function completes.
+
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=java) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=java).
+For example, to add an output binding that writes data to a storage queue, you update the function method to add a binding parameter defined by using the [`QueueOutput`](/java/api/com.microsoft.azure.functions.annotation.queueoutput) annotation. The [`OutputBinding<T>`](/java/api/com.microsoft.azure.functions.outputbinding) object represents the messages that are written to an output binding when the function completes.
+For example, the way you define the output binding that writes data to a storage queue depends on your Node.js model version:
-# [JavaScript](#tab/nodejs)
+### [v3](#tab/node-v3)
[!INCLUDE [functions-add-output-binding-vs-code](../../includes/functions-add-output-binding-vs-code.md)]
-In your function code, the `msg` binding is accessed from the `context`, as in this example:
+### [v4](#tab/node-v4)
+In the Node.js v4 model, you manually add a `return:` option to the function definition by using the `storageQueue` function on the `output` object, which defines the storage queue that receives the `return` output. The output is written when the function completes.
-This message is sent to the queue when the function completes.
-
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=javascript) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=javascript).
-
-# [PowerShell](#tab/powershell)
+
[!INCLUDE [functions-add-output-binding-vs-code](../../includes/functions-add-output-binding-vs-code.md)]
+For example, the way you define the output binding that writes data to a storage queue depends on your Python model version:
-
-This message is sent to the queue when the function completes.
+### [v2](#tab/python-v2)
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=powershell) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=powershell).
+Use the `@queue_output` decorator on the function to define a named binding parameter for the output to the storage queue, where `func.Out` defines the output that gets written.
-# [Python](#tab/python)
+### [v1](#tab/python-v1)
[!INCLUDE [functions-add-output-binding-vs-code](../../includes/functions-add-output-binding-vs-code.md)]
-Update the `Main` definition to add an output parameter `msg: func.Out[func.QueueMessage]` so that the definition looks like the following example:
+
-
-The following code adds string data from the request to the output queue:
--
-This message is sent to the queue when the function completes.
-
-To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=python) documentation. To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=python).
-- [!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)]
To learn more, see the [Queue storage output binding reference article](function
Before you can publish your Functions project to Azure, you must have a function app and related resources in your Azure subscription to run your code. The function app provides an execution context for your functions. When you publish to a function app in Azure from Visual Studio Code, the project is packaged and deployed to the selected function app in your Azure subscription.
-When you create a function app in Azure, you can choose either a quick function app create path using defaults or an advanced path. This way you'll have more control over the remote resources created.
+When you create a function app in Azure, you can choose either a quick create path that uses defaults or an advanced path, which gives you more control over the remote resources created.
### Quick function app create
When the project is running, you can use the **Execute Function Now...** feature
1. When the function runs locally and after the response is received, a notification is raised in Visual Studio Code. Information about the function execution is shown in **Terminal** panel.
-Running functions locally doesn't require using keys.
+Keys aren't required when running locally; this applies to both function keys and admin-level keys.
[!INCLUDE [functions-local-settings-file](../../includes/functions-local-settings-file.md)]
By default, these settings aren't migrated automatically when the project is pub
Values in **ConnectionStrings** are never published.
-The function application settings values can also be read in your code as environment variables. For more information, see the Environment variables sections of these language-specific reference articles:
+### [Isolated process](#tab/isolated-process)
+The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-dotnet-class-library.md#environment-variables).
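As a brief illustration (the setting name `MyStorageConnection` comes from the earlier example), any app setting can be read with the standard .NET API:

```csharp
using System;

// Reads the app setting (or local.settings.json value, when running locally)
// named "MyStorageConnection"; returns null if the setting isn't defined.
string? connection = Environment.GetEnvironmentVariable("MyStorageConnection");
```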
-* [C# precompiled](functions-dotnet-class-library.md#environment-variables)
-* [C# script (.csx)](functions-reference-csharp.md#environment-variables)
-* [Java](functions-reference-java.md#environment-variables)
-* [JavaScript](functions-reference-node.md#environment-variables)
-* [PowerShell](functions-reference-powershell.md#environment-variables)
-* [Python](functions-reference-python.md#environment-variables)
+### [In-process](#tab/in-process)
+The function app settings values can also be read in your code as environment variables, just as in any ASP.NET Core app.
+
+The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-java.md#environment-variables).
+
+The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-node.md#environment-variables).
+
+The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-powershell.md#environment-variables).
+
+The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-python.md#environment-variables).

## Application settings in Azure
If you've created application settings in Azure, you can download them into your
As with uploading, if the local file is encrypted, it's decrypted, updated, and encrypted again. If there are settings that have conflicting values in the two locations, you're prompted to choose how to proceed.
+## Install binding extensions
+
+Except for HTTP and timer triggers, bindings are implemented in extension packages.
+
+You must explicitly install the extension packages for the triggers and bindings that need them. The specific package you install depends on your project's process model.
+
+### [Isolated process](#tab/isolated-process)
+
+Run the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to install the extension packages that you need in your project. This template demonstrates how you add a binding for an [isolated-process class library](dotnet-isolated-process-guide.md):
+
+```terminal
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.<BINDING_TYPE_NAME> --version <TARGET_VERSION>
+```
+
+### [In-process](#tab/in-process)
+
+Run the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to install the extension packages that you need in your project. This template demonstrates how you add a binding for an [in-process class library](functions-dotnet-class-library.md):
+
+```terminal
+dotnet add package Microsoft.Azure.WebJobs.Extensions.<BINDING_TYPE_NAME> --version <TARGET_VERSION>
+```
+++
+Replace `<BINDING_TYPE_NAME>` with the binding-specific part of the package name (for example, `Storage.Queues`). You can find the desired binding reference article in the [list of supported bindings](./functions-triggers-bindings.md#supported-bindings).
+
+Replace `<TARGET_VERSION>` in the example with a specific version of the package, such as `3.0.0-beta5`. Valid versions are listed on the individual package pages at [NuGet.org](https://nuget.org). The major versions that correspond to the current Functions runtime are specified in the reference article for the binding.
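For example, a filled-in command for the Azure Queue Storage binding in an isolated worker project might look like the following sketch; the package name is real, but the version shown is only illustrative, so confirm the current version on NuGet.org before using it:

```terminal
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues --version 5.2.0
```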
+
+C# script uses [extension bundles](functions-bindings-register.md#extension-bundles).
++
+If for some reason you can't use an extension bundle to install binding extensions for your project, see [Explicitly install extensions](functions-bindings-register.md#explicitly-install-extensions).
++ ## Monitoring functions When you [run functions locally](#run-functions-locally), log data is streamed to the Terminal console. You can also get log data when your Functions project is running in a function app in Azure. You can connect to streaming logs in Azure to see near-real-time log data. You should enable Application Insights for a more complete understanding of how your function app is behaving.
When you're developing an application, it's often useful to see logging informat
:::image type="content" source="media/functions-develop-vs-code/streaming-logs-vscode-console.png" alt-text="Screenshot for streaming logs output for H T T P trigger.":::
-To learn more, see [Streaming logs](functions-monitoring.md#streaming-logs).
--
-> [!NOTE]
-> Streaming logs support only a single instance of the Functions host. When your function is scaled to multiple instances, data from other instances isn't shown in the log stream. [Live Metrics Stream](../azure-monitor/app/live-stream.md) in Application Insights does support multiple instances. While also in near-real time, streaming analytics is based on [sampled data](configure-monitoring.md#configure-sampling).
+To learn more, see [Streaming logs](functions-monitoring.md?tabs=vs-code#streaming-logs).
### Application Insights
-We recommend that you monitor the execution of your functions by integrating your function app with Application Insights. When you create a function app in the Azure portal, this integration occurs by default. When you create your function app during Visual Studio publishing, you need to integrate Application Insights yourself. To learn how, see [Enable Application Insights integration](configure-monitoring.md#enable-application-insights-integration).
+You should monitor the execution of your functions by integrating your function app with Application Insights. When you create a function app in the Azure portal, this integration occurs by default. When you create your function app during Visual Studio publishing, you need to integrate Application Insights yourself. To learn how, see [Enable Application Insights integration](configure-monitoring.md#enable-application-insights-integration).
To learn more about monitoring using Application Insights, see [Monitor Azure Functions](functions-monitoring.md).
Now that you've configured the Terminal with Rosetta to run x86 emulation for Py
![Screenshot of starting a new Rosetta terminal in Visual Studio Code.](./media/functions-develop-vs-code/vs-code-rosetta.png)

+
## C\# script projects

By default, all C# projects are created as [C# compiled class library projects](functions-dotnet-class-library.md). If you prefer to work with C# script projects instead, you must select C# script as the default language in the Azure Functions extension settings:
By default, all C# projects are created as [C# compiled class library projects](
1. Select **C#Script** from **Azure Function: Project Language**.

After you complete these steps, calls made to the underlying Core Tools include the `--csx` option, which generates and publishes C# script (.csx) project files. When you have this default language specified, all projects that you create default to C# script projects. You're not prompted to choose a project language when a default is set. To create projects in other languages, you must change this setting or remove it from the user settings.json file. After you remove this setting, you're again prompted to choose your language when you create a project.

## Command palette reference
The Azure Functions extension provides a useful graphical interface in the area
| **Disconnect from Repo** | Removes the [continuous deployment](functions-continuous-deployment.md) connection between a function app in Azure and a source control repository. |
| **Download Remote Settings** | Downloads settings from the chosen function app in Azure into your local.settings.json file. If the local file is encrypted, it's decrypted, updated, and encrypted again. If there are settings that have conflicting values in the two locations, you're prompted to choose how to proceed. Be sure to save changes to your local.settings.json file before you run this command. |
| **Edit settings** | Changes the value of an existing function app setting in Azure. This command doesn't affect settings in your local.settings.json file. |
-| **Encrypt settings** | Encrypts individual items in the `Values` array in the [local settings](#local-settings). In this file, `IsEncrypted` is also set to `true`, which specifies that the local runtime will decrypt settings before using them. Encrypt local settings to reduce the risk of leaking valuable information. In Azure, application settings are always stored encrypted. |
+| **Encrypt settings** | Encrypts individual items in the `Values` array in the [local settings](#local-settings). In this file, `IsEncrypted` is also set to `true`, which specifies that the local runtime decrypts settings before using them. Encrypt local settings to reduce the risk of leaking valuable information. In Azure, application settings are always stored encrypted. |
| **Execute Function Now** | Manually starts a function using admin APIs. This command is used for testing, both locally during debugging and against functions running in Azure. When a function in Azure starts, the extension first automatically obtains an admin key, which it uses to call the remote admin APIs that start functions in Azure. The body of the message sent to the API depends on the type of trigger. Timer triggers don't require you to pass any data. |
| **Initialize Project for Use with VS Code** | Adds the required Visual Studio Code project files to an existing Functions project. Use this command to work with a project that you created by using Core Tools. |
| **Install or Update Azure Functions Core Tools** | Installs or updates [Azure Functions Core Tools], which is used to run functions locally. |
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
In code, assemblies are referenced like the following example:
To reference a custom assembly, you can use either a *shared* assembly or a *private* assembly:
-* Shared assemblies are shared across all functions within a function app. To reference a custom assembly, upload the assembly to a folder named `bin` in your [function app root folder](functions-reference.md#folder-structure) (wwwroot).
+* Shared assemblies are shared across all functions within a function app. To reference a custom assembly, upload the assembly to a folder named `bin` in the root folder (wwwroot) of your function app.
* Private assemblies are part of a given function's context, and support side-loading of different versions. Private assemblies should be uploaded in a `bin` folder in the function directory. Reference the assemblies using the file name, such as `#r "MyAssembly.dll"`.
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
The following table shows each version of the Node.js programming model along wi
| [Programming Model Version](https://www.npmjs.com/package/@azure/functions?activeTab=versions) | Support Level | [Functions Runtime Version](./functions-versions.md) | [Node.js Version](https://github.com/nodejs/release#release-schedule) | Description | | - | - | | | |
-| 4.x | Preview | 4.16+ | 18.x | Supports a flexible file structure and code-centric approach to triggers and bindings. |
-| 3.x | GA | 4.x | 18.x, 16.x, 14.x | Requires a specific file structure with your triggers and bindings declared in a "function.json" file |
+| 4.x | Preview | 4.16+ | 20.x (Preview), 18.x | Supports a flexible file structure and code-centric approach to triggers and bindings. |
+| 3.x | GA | 4.x | 20.x (Preview), 18.x, 16.x, 14.x | Requires a specific file structure with your triggers and bindings declared in a "function.json" file |
| 2.x | GA (EOL) | 3.x | 14.x, 12.x, 10.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. | | 1.x | GA (EOL) | 2.x | 10.x, 8.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. |
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
Title: Guidance for developing Azure Functions
description: Learn the Azure Functions concepts and techniques that you need to develop functions in Azure, across all programming languages and bindings. ms.assetid: d8efe41a-bef8-4167-ba97-f3e016fcd39e Previously updated : 08/10/2023 Last updated : 09/06/2023
+zone_pivot_groups: programming-languages-set-functions
+ # Azure Functions developer guide
-In Azure Functions, specific functions share a few core technical concepts and components, regardless of the language or binding you use. Before you jump into learning details specific to a given language or binding, be sure to read through this overview that applies to all of them.
+
+In Azure Functions, all functions share some core technical concepts and components, regardless of your preferred language or development environment. This article is language-specific. Choose your preferred language at the top of the article.
This article assumes that you've already read the [Azure Functions overview](functions-overview.md).
-## Function code
-A *function* is the primary concept in Azure Functions. A function contains two important pieces - your code, which can be written in various languages, and some config, the function.json file. For compiled languages, this config file is generated automatically from annotations in your code. For scripting languages, you must provide the config file yourself.
+If you prefer to jump right in, you can complete a quickstart tutorial using [Visual Studio](./functions-create-your-first-function-visual-studio.md), [Visual Studio Code](./create-first-function-vs-code-csharp.md), or from the [command prompt](./create-first-function-cli-csharp.md).
+If you prefer to jump right in, you can complete a quickstart tutorial using [Maven](create-first-function-cli-java.md) (command line), [Eclipse](functions-create-maven-eclipse.md), [IntelliJ IDEA](functions-create-maven-intellij.md), [Gradle](functions-create-first-java-gradle.md), [Quarkus](functions-create-first-quarkus.md), or [Spring Cloud](/azure/developer/java/spring-framework/getting-started-with-spring-cloud-function-in-azure?toc=/azure/azure-functions/toc.json).
+If you prefer to jump right in, you can complete a quickstart tutorial using [Visual Studio Code](./create-first-function-vs-code-node.md) or from the [command prompt](./create-first-function-cli-node.md).
+If you prefer to jump right in, you can complete a quickstart tutorial using [Visual Studio Code](./create-first-function-vs-code-typescript.md) or from the [command prompt](./create-first-function-cli-typescript.md).
+If you prefer to jump right in, you can complete a quickstart tutorial using [Visual Studio Code](./create-first-function-vs-code-powershell.md) or from the [command prompt](./create-first-function-cli-powershell.md).
+If you prefer to jump right in, you can complete a quickstart tutorial using [Visual Studio Code](./create-first-function-vs-code-python.md) or from the [command prompt](./create-first-function-cli-python.md).
-The function.json file defines the function's trigger, bindings, and other configuration settings. Every function has one and only one trigger. The runtime uses this config file to determine the events to monitor and how to pass data into and return data from a function execution. The following is an example function.json file.
+## Code project
-```json
-{
- "disabled":false,
- "bindings":[
- // ... bindings here
- {
- "type": "bindingType",
- "direction": "in",
- "name": "myParamName",
- // ... more depending on binding
- }
- ]
-}
-```
+At the core of Azure Functions is a language-specific code project that implements one or more units of code execution called _functions_. Functions are simply methods that run in the Azure cloud based on events, in response to HTTP requests, or on a schedule. Think of your Azure Functions code project as a mechanism for organizing, deploying, and collectively managing your individual functions in the project when they're running in Azure. For more information, see [Organize your functions](functions-best-practices.md#organize-your-functions).
-For more information, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
+The way that you lay out your code project and how you indicate which methods in your project are functions depends on the development language of your project. For detailed language-specific guidance, see the [C# developers guide](dotnet-isolated-process-guide.md).
+The way that you lay out your code project and how you indicate which methods in your project are functions depends on the development language of your project. For language-specific guidance, see the [Java developers guide](functions-reference-java.md).
+The way that you lay out your code project and how you indicate which methods in your project are functions depends on the development language of your project. For language-specific guidance, see the [Node.js developers guide](functions-reference-node.md).
+The way that you lay out your code project and how you indicate which methods in your project are functions depends on the development language of your project. For language-specific guidance, see the [PowerShell developers guide](functions-reference-powershell.md).
+The way that you lay out your code project and how you indicate which methods in your project are functions depends on the development language of your project. For language-specific guidance, see the [Python developers guide](functions-reference-python.md).
+All functions must have a trigger, which defines how the function starts and can provide input to the function. Your functions can optionally define input and output bindings. These bindings simplify connections to other services without you having to work with client SDKs. For more information, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
-The `bindings` property is where you configure both triggers and bindings. Each binding shares a few common settings and some settings, which are specific to a particular type of binding. Every binding requires the following settings:
+Azure Functions provides a set of language-specific project and function templates that make it easy to create new code projects and add functions to your project. You can use any of the tools that support Azure Functions development to generate new apps and functions using these templates.
-| Property | Values | Type | Comments|
-|||||
-| type | Name of binding.<br><br>For example, `queueTrigger`. | string | |
-| direction | `in`, `out` | string | Indicates whether the binding is for receiving data into the function or sending data from the function. |
-| name | Function identifier.<br><br>For example, `myQueue`. | string | The name that is used for the bound data in the function. For C#, this is an argument name; for JavaScript, it's the key in a key/value list. |
+## Development tools
-## Function app
-A function app provides an execution context in Azure in which your functions run. As such, it's the unit of deployment and management for your functions. A function app is composed of one or more individual functions that are managed, deployed, and scaled together. All of the functions in a function app share the same pricing plan, deployment method, and runtime version. Think of a function app as a way to organize and collectively manage your functions. To learn more, see [How to manage a function app](functions-how-to-use-azure-function-app-settings.md).
+The following tools provide an integrated development and publishing experience for Azure Functions in your preferred language:
-> [!NOTE]
-> All functions in a function app must be authored in the same language. In [previous versions](functions-versions.md) of the Azure Functions runtime, this wasn't required.
++ [Visual Studio](./functions-develop-vs.md)
++ [Visual Studio Code](./functions-develop-vs-code.md)
-## Folder structure
++ [Azure Functions Core Tools](./functions-develop-local.md) (command prompt)
++ [Eclipse](functions-create-maven-eclipse.md)
-The above is the default (and recommended) folder structure for a Function app. If you wish to change the file location of a function's code, modify the `scriptFile` section of the _function.json_ file. We also recommend using [package deployment](deployment-zip-push.md) to deploy your project to your function app in Azure. You can also use existing tools like [continuous integration and deployment](functions-continuous-deployment.md) and Azure DevOps.
++ [Gradle](functions-create-first-java-gradle.md)
-> [!NOTE]
-> If deploying a package manually, make sure to deploy your _host.json_ file and function folders directly to the `wwwroot` folder. Do not include the `wwwroot` folder in your deployments. Otherwise, you end up with `wwwroot\wwwroot` folders.
++ [IntelliJ IDEA](functions-create-maven-intellij.md)
-#### Use local tools and publishing
-Function apps can be authored and published using a variety of tools, including [Visual Studio](./functions-develop-vs.md), [Visual Studio Code](./create-first-function-vs-code-csharp.md), [IntelliJ](./functions-create-maven-intellij.md), [Eclipse](./functions-create-maven-eclipse.md), and the [Azure Functions Core Tools](./functions-develop-local.md). For more information, see [Code and test Azure Functions locally](./functions-develop-local.md).
++ [Quarkus](functions-create-first-quarkus.md)
-<!--NOTE: I've removed documentation on FTP, because it does not sync triggers on the consumption plan --glenga -->
++ [Spring Cloud](/azure/developer/java/spring-framework/getting-started-with-spring-cloud-function-in-azure?toc=/azure/azure-functions/toc.json)
-## <a id="fileupdate"></a> How to edit functions in the Azure portal
-The Functions editor built into the Azure portal lets you update your code and your *function.json* file directly inline. This is recommended only for small changes or proofs of concept - best practice is to use a local development tool like VS Code.
+These tools integrate with [Azure Functions Core Tools](./functions-develop-local.md) so that you can run and debug on your local computer using the Functions runtime. For more information, see [Code and test Azure Functions locally](./functions-develop-local.md).
-## Parallel execution
-When multiple triggering events occur faster than a single-threaded function runtime can process them, the runtime may invoke the function multiple times in parallel. If a function app is using the [Consumption hosting plan](event-driven-scaling.md), the function app could scale out automatically. Each instance of the function app, whether the app runs on the Consumption hosting plan or a regular [App Service hosting plan](../app-service/overview-hosting-plans.md), might process concurrent function invocations in parallel using multiple threads. The maximum number of concurrent function invocations in each function app instance varies based on the type of trigger being used as well as the resources used by other functions within the function app.
+<a id="fileupdate"></a> There's also an editor in the Azure portal that lets you update your code and your *function.json* definition file directly in the portal. You should only use this editor for small changes or creating proof-of-concept functions. You should always develop your functions locally, when possible. For more information, see [Create your first function in the Azure portal](functions-create-function-app-portal.md).
+Portal editing is only supported for [Node.js version 3](functions-reference-node.md?pivots=nodejs-model-v3), which uses the function.json file.
+Portal editing is only supported for [Python version 1](functions-reference-python.md?pivots=python-mode-configuration), which uses the function.json file.
-## Functions runtime versioning
+## Deployment
-You can configure the version of the Functions runtime using the `FUNCTIONS_EXTENSION_VERSION` app setting. For example, the value "~4" indicates that your function app uses 4.x as its major version. Function apps are upgraded to each new minor version as they're released. For more information, including how to view the exact version of your function app, see [How to target Azure Functions runtime versions](set-runtime-version.md).
+When you publish your code project to Azure, you're essentially deploying your project to an existing function app resource. A function app provides an execution context in Azure in which your functions run. As such, it's the unit of deployment and management for your functions. From an Azure Resource perspective, a function app is equivalent to a site resource (`Microsoft.Web/sites`) in Azure App Service, which is equivalent to a web app.
-## Repositories
-The code for Azure Functions is open source and stored in GitHub repositories:
+A function app is composed of one or more individual functions that are managed, deployed, and scaled together. All of the functions in a function app share the same [pricing plan](functions-scale.md), [deployment method](functions-deployment-technologies.md), and [runtime version](functions-versions.md). For more information, see [How to manage a function app](functions-how-to-use-azure-function-app-settings.md).
-* [Azure Functions](https://github.com/Azure/Azure-Functions)
-* [Azure Functions host](https://github.com/Azure/azure-functions-host/)
-* [Azure Functions portal](https://github.com/azure/azure-functions-ux)
-* [Azure Functions templates](https://github.com/azure/azure-functions-templates)
-* [Azure WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/)
-* [Azure WebJobs SDK Extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/)
+When the function app and any other required resources don't already exist in Azure, you first need to create these resources before you can deploy your project files. You can create these resources in one of these ways:
++ During [Visual Studio](./functions-develop-vs.md#publish-to-azure) publishing
++ Using [Visual Studio Code](./functions-develop-vs-code.md#publish-to-azure)
++ Programmatically using [Azure CLI](./scripts/functions-cli-create-serverless.md), [Azure PowerShell](./create-resources-azure-powershell.md#create-a-serverless-function-app-for-c), [ARM templates](functions-create-first-function-resource-manager.md), or [Bicep templates](functions-create-first-function-bicep.md)
++ In the [Azure portal](functions-create-function-app-portal.md)
+
+In addition to tool-based publishing, Functions supports other technologies for deploying source code to an existing function app. For more information, see [Deployment technologies in Azure Functions](functions-deployment-technologies.md).
+
+## Connect to services
+
+A major requirement of any cloud-based compute service is reading data from and writing data to other cloud services. Functions provides an extensive set of bindings that makes it easier for you to connect to services without having to work with client SDKs.
+
+Whether you use the binding extensions provided by Functions or work with client SDKs directly, you should store connection data securely and never include it in your code. For more information, see [Connections](#connections).
-## Bindings
-Here's a table of all supported bindings.
+### Bindings
+Functions provides bindings for many Azure services and a few third-party services, which are implemented as extensions. For more information, see the [complete list of supported bindings](functions-triggers-bindings.md#supported-bindings).
-Having issues with errors coming from the bindings? Review the [Azure Functions Binding Error Codes](functions-bindings-error-pages.md) documentation.
+Binding extensions can support both inputs and outputs, and many triggers also act as input bindings. Bindings let you configure the connection to services so that the Functions host can handle the data access for you. For more information, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
+If you're having issues with errors coming from bindings, see the [Azure Functions Binding Error Codes](functions-bindings-error-pages.md) documentation.
+
+### Client SDKs
+
+While Functions provides bindings to simplify data access in your function code, you can still use a client SDK in your project to directly access a given service, if you prefer. You might need to use a client SDK directly when your functions require functionality of the underlying SDK that the binding extension doesn't support.
+
+When using client SDKs, you should use the same process for [storing and accessing connection strings](#connections) used by binding extensions.
+When you create a client SDK instance in your functions, you should get the connection info required by the client from [Environment variables](functions-dotnet-class-library.md#environment-variables).
+When you create a client SDK instance in your functions, you should get the connection info required by the client from [Environment variables](functions-reference-java.md#environment-variables).
+When you create a client SDK instance in your functions, you should get the connection info required by the client from [Environment variables](functions-reference-node.md#environment-variables).
+When you create a client SDK instance in your functions, you should get the connection info required by the client from [Environment variables](functions-reference-powershell.md#environment-variables).
+When you create a client SDK instance in your functions, you should get the connection info required by the client from [Environment variables](functions-reference-python.md#environment-variables).
## Connections
-Your function project references connection information by name from its configuration provider. It doesn't directly accept the connection details, allowing them to be changed across environments. For example, a trigger definition might include a `connection` property. This might refer to a connection string, but you can't set the connection string directly in a `function.json`. Instead, you would set `connection` to the name of an environment variable that contains the connection string.
+As a security best practice, Azure Functions takes advantage of the application settings functionality of Azure App Service to help you more securely store strings, keys, and other tokens required to connect to other services. Application settings in Azure are stored encrypted and are accessed at runtime by your app as environment variable name-value pairs. For triggers and bindings that require a connection property, you set the application setting name instead of the actual connection string. You can't configure a binding directly with a connection string or key.
-The default configuration provider uses environment variables. These might be set by [Application Settings](./functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) when running in the Azure Functions service, or from the [local settings file](functions-develop-local.md#local-settings-file) when developing locally.
+For example, consider a trigger definition that has a `connection` property. Instead of the connection string, you set `connection` to the name of an environment variable that contains the connection string. Using this secrets access strategy both makes your apps more secure and makes it easier for you to change connections across environments. For even more security, you can use identity-based connections.
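As an illustrative sketch of this pattern in a C# isolated-worker project (the queue name `orders` is hypothetical, and `MyStorageConnection` is just an example setting name), the trigger references its connection by app setting name only:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class QueueTriggerExample
{
    [Function("ProcessOrder")]
    public static void Run(
        // "MyStorageConnection" is the *name* of an application setting (or a
        // local.settings.json value) that holds the actual connection string;
        // the connection string itself never appears in code.
        [QueueTrigger("orders", Connection = "MyStorageConnection")] string message,
        FunctionContext context)
    {
        context.GetLogger("ProcessOrder").LogInformation("Dequeued: {message}", message);
    }
}
```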
+
+The default configuration provider uses environment variables. These variables are defined in [application settings](./functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) when running in Azure and in the [local settings file](functions-develop-local.md#local-settings-file) when developing locally.
### Connection values
-When the connection name resolves to a single exact value, the runtime identifies the value as a _connection string_, which typically includes a secret. The details of a connection string are defined by the service to which you wish to connect.
+When the connection name resolves to a single exact value, the runtime identifies the value as a _connection string_, which typically includes a secret. The details of a connection string depend on the service to which you connect.
However, a connection name can also refer to a collection of multiple configuration items, useful for configuring [identity-based connections](#configure-an-identity-based-connection). Environment variables can be treated as a collection by using a shared prefix that ends in double underscores `__`. The group can then be referenced by setting the connection name to this prefix.
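For example, a connection named `MyStorageConnection` might be defined by a group of settings that share the prefix; this is an illustrative sketch for the queue service (exact property names vary by extension, and `<account_name>` is a placeholder):

```
MyStorageConnection__queueServiceUri = https://<account_name>.queue.core.windows.net
MyStorageConnection__credential = managedidentity
```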
The following components support identity-based connections:
[!INCLUDE [functions-identity-based-connections-configuration](../../includes/functions-identity-based-connections-configuration.md)]
-Choose a tab below to learn about permissions for each component:
+Choose one of these tabs to learn about permissions for each component:
# [Azure Blobs extension](#tab/blob)
An identity-based connection for an Azure service accepts the following common p
| Property | Environment variable template | Description | ||||| | Token Credential | `<CONNECTION_NAME_PREFIX>__credential` | Defines how a token should be obtained for the connection. This setting should be set to `managedidentity` if your deployed Azure Function intends to use managed identity authentication. This value is only valid when a managed identity is available in the hosting environment. |
-| Client ID | `<CONNECTION_NAME_PREFIX>__clientId` | When `credential` is set to `managedidentity`, this property can be set to specify the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. It is invalid to specify both a Resource ID and a client ID. If not specified, the system-assigned identity is used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` shouldn't be set. |
+| Client ID | `<CONNECTION_NAME_PREFIX>__clientId` | When `credential` is set to `managedidentity`, this property can be set to specify the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. It's invalid to specify both a Resource ID and a client ID. If not specified, the system-assigned identity is used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` shouldn't be set. |
| Resource ID | `<CONNECTION_NAME_PREFIX>__managedIdentityResourceId` | When `credential` is set to `managedidentity`, this property can be set to specify the resource Identifier to be used when obtaining a token. The property accepts a resource identifier corresponding to the resource ID of the user-defined managed identity. It's invalid to specify both a resource ID and a client ID. If neither are specified, the system-assigned identity is used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` shouldn't be set.
-Additional options may be supported for a given connection type. Refer to the documentation for the component making the connection.
+Other options may be supported for a given connection type. Refer to the documentation for the component making the connection.
##### Local development with identity-based connections > [!NOTE]
-> Local development with identity-based connections requires updated versions of the [Azure Functions Core Tools](./functions-run-local.md). You can check your currently installed version by running `func -v`. For Functions v3, use version `3.0.3904` or later. For Functions v4, use version `4.0.3904` or later.
+> Local development with identity-based connections requires version `4.0.3904` of [Azure Functions Core Tools](functions-run-local.md), or a later version.
When you're running your function project locally, the above configuration tells the runtime to use your local developer identity. The connection attempts to get a token from the following locations, in order:
If none of these options are successful, an error occurs.
Your identity may already have some role assignments against Azure resources used for development, but those roles may not provide the necessary data access. Management roles like [Owner](../role-based-access-control/built-in-roles.md#owner) aren't sufficient. Double-check what permissions are required for connections for each component, and make sure that you have them assigned to yourself.
-In some cases, you may wish to specify use of a different identity. You can add configuration properties for the connection that point to the alternate identity based on a client ID and client Secret for an Azure Active Directory service principal. **This configuration option is not supported when hosted in the Azure Functions service.** To use an ID and secret on your local machine, define the connection with the following additional properties:
+In some cases, you may wish to specify use of a different identity. You can add configuration properties for the connection that point to the alternate identity based on a client ID and client Secret for an Azure Active Directory service principal. **This configuration option is not supported when hosted in the Azure Functions service.** To use an ID and secret on your local machine, define the connection with the following extra properties:
| Property | Environment variable template | Description | ||||
Here's an example of `local.settings.json` properties required for identity-base
#### Connecting to host storage with an identity
-The Azure Functions host uses the `AzureWebJobsStorage` connection for core behaviors such as coordinating singleton execution of timer triggers and default app key storage. This can be configured to use an identity as well.
+The Azure Functions host uses the `AzureWebJobsStorage` connection for core behaviors such as coordinating singleton execution of timer triggers and default app key storage. This connection can also be configured to use an identity.
> [!CAUTION] > Other components in Functions rely on `AzureWebJobsStorage` for default behaviors. You should not move it to an identity-based connection if you are using older versions of extensions that do not support this type of connection, including triggers and bindings for Azure Blobs, Event Hubs, and Durable Functions. Similarly, `AzureWebJobsStorage` is used for deployment artifacts when using server-side build in Linux Consumption, and if you enable this, you will need to deploy via [an external deployment package](run-functions-from-deployment-package.md).
To use an identity-based connection for `AzureWebJobsStorage`, configure the fol
[Common properties for identity-based connections](#common-properties-for-identity-based-connections) may also be set as well.
-If you're configuring `AzureWebJobsStorage` using a storage account that uses the default DNS suffix and service name for global Azure, following the `https://<accountName>.blob/queue/file/table.core.windows.net` format, you can instead set `AzureWebJobsStorage__accountName` to the name of your storage account. The endpoints for each storage service will be inferred for this account. This won't work if the storage account is in a sovereign cloud or has a custom DNS.
+If you're configuring `AzureWebJobsStorage` using a storage account that uses the default DNS suffix and service name for global Azure, following the `https://<accountName>.blob/queue/file/table.core.windows.net` format, you can instead set `AzureWebJobsStorage__accountName` to the name of your storage account. The endpoints for each storage service are inferred for this account. This doesn't work when the storage account is in a sovereign cloud or has a custom DNS.
| Setting | Description | Example value | |--|--||
If you're configuring `AzureWebJobsStorage` using a storage account that uses th
## Reporting Issues [!INCLUDE [Reporting Issues](../../includes/functions-reporting-issues.md)]
+## Open source repositories
+
+The code for Azure Functions is open source, and you can find key components in these GitHub repositories:
+
+* [Azure Functions](https://github.com/Azure/Azure-Functions)
+
+* [Azure Functions host](https://github.com/Azure/azure-functions-host/)
+
+* [Azure Functions portal](https://github.com/azure/azure-functions-ux)
+
+* [Azure Functions templates](https://github.com/azure/azure-functions-templates)
+
+* [Azure WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/)
+
+* [Azure WebJobs SDK Extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/)
+* [Azure Functions .NET worker (isolated process)](https://github.com/Azure/azure-functions-dotnet-worker)
+* [Azure Functions Java worker](https://github.com/Azure/azure-functions-java-worker)
+* [Azure Functions Node.js Programming Model](https://github.com/Azure/azure-functions-nodejs-library)
+* [Azure Functions PowerShell worker](https://github.com/Azure/azure-functions-powershell-worker)
+* [Azure Functions Python worker](https://github.com/Azure/azure-functions-python-worker)
+
+## Next steps
+
+For more information, see the following resources:
-* [Azure Functions triggers and bindings](functions-triggers-bindings.md)
-* [Code and test Azure Functions locally](./functions-develop-local.md)
-* [Best Practices for Azure Functions](functions-best-practices.md)
-* [Azure Functions C# developer reference](functions-dotnet-class-library.md)
-* [Azure Functions Node.js developer reference](functions-reference-node.md)
++ [Azure Functions scenarios](functions-scenarios.md)
++ [Code and test Azure Functions locally](./functions-develop-local.md)
++ [Best Practices for Azure Functions](functions-best-practices.md)
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
Title: Azure Functions runtime versions overview
description: Azure Functions supports multiple versions of the runtime. Learn the differences between them and how to choose the one that's right for you. Previously updated : 01/09/2023 Last updated : 09/01/2023 zone_pivot_groups: programming-languages-set-functions # Azure Functions runtime versions overview
-<a name="top"></a>Azure Functions currently supports several versions of the runtime host. The following table details the available versions, their support level, and when they should be used:
+<a name="top"></a>Azure Functions currently supports two versions of the runtime host. The following table details the currently supported runtime versions, their support level, and when they should be used:
| Version | Support level | Description | | | | | | 4.x | GA | **_Recommended runtime version for functions in all languages._** Check out [Supported language versions](#languages). |
-| 3.x | GA<sup>*</sup> | Reached the end of life (EOL) for extended support on December 13, 2022. We highly recommend you [migrate your apps to version 4.x](migrate-version-3-version-4.md) for full support. |
-| 2.x | GA<sup>*</sup> | Reached the end of life (EOL) on December 13, 2022. We highly recommend you [migrate your apps to version 4.x](migrate-version-3-version-4.md) for full support. |
| 1.x | GA | Supported only for C# apps that must use .NET Framework. This version is in maintenance mode, with enhancements provided only in later versions. We highly recommend you migrate your apps to version 4.x, which [supports .NET Framework 4.8](migrate-version-1-version-4.md?tabs=v4&pivots=programming-language-csharp).|
-<sup>*</sup>For a detailed support statement about end-of-life versions, see [this migration article](migrate-version-3-version-4.md).
+> [!IMPORTANT]
+> As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of life (EOL) of extended support. For more information, see [Retired versions](#retired-versions).
-This article details some of the differences between these versions, how you can create each version, and how to change the version on which your functions run.
+This article details some of the differences between supported versions, how you can create each version, and how to change the version on which your functions run.
+
+## Levels of support
[!INCLUDE [functions-support-levels](../../includes/functions-support-levels.md)] ## Languages
-All functions in a function app must share the same language. You chose the language of functions in your function app when you create the app. The language of your function app is maintained in the [FUNCTIONS\_WORKER\_RUNTIME](functions-app-settings.md#functions_worker_runtime) setting, and shouldn't be changed when there are existing functions.
-
-The following table indicates which programming languages are currently supported in each runtime version.
+All functions in a function app must share the same language. You choose the language of functions in your function app when you create the app. The language of your function app is maintained in the [FUNCTIONS\_WORKER\_RUNTIME](functions-app-settings.md#functions_worker_runtime) setting, and shouldn't be changed when there are existing functions.
[!INCLUDE [functions-supported-languages](../../includes/functions-supported-languages.md)]
+For information about the language versions of previously supported versions of the Functions runtime, see [Retired runtime versions](language-support-policy.md#retired-runtime-versions).
+ ## <a name="creating-1x-apps"></a>Run on a specific version The version of the Functions runtime used by published apps in Azure is dictated by the [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version) application setting. In some cases and for certain languages, other settings may apply.
If you receive a warning about your extension bundle version not meeting a minim
To learn more about extension bundles, see [Extension bundles](functions-bindings-register.md#extension-bundles). ::: zone-end
+## Retired versions
+
+These versions of the Functions runtime reached the end of life (EOL) for extended support on December 13, 2022.
+
+| Version | Current support level | Previous support level |
+| | | |
+| 3.x | Out-of-support |GA |
+| 2.x | Out-of-support | GA |
+
+As soon as possible, you should migrate your apps to version 4.x to obtain full support. For a complete set of language-specific migration instructions, see [Migrate apps to Azure Functions version 4.x](migrate-version-3-version-4.md).
+
+Apps using versions 2.x and 3.x can still be created and deployed from your CI/CD DevOps pipeline, and all existing apps continue to run without breaking changes. However, your apps aren't eligible for new features, security patches, and performance optimizations. You can only get related service support after you upgrade your apps to version 4.x.
+
+End of support for these older runtime versions is due to the end of support for .NET Core 3.1, which both versions depend on. This requirement affects all [languages supported by Azure Functions](supported-languages.md).
+ ## Locally developed application versions You can make the following updates to function apps to locally change the targeted versions. ### Visual Studio runtime versions
-In Visual Studio, you select the runtime version when you create a project. Azure Functions tools for Visual Studio supports the three major runtime versions. The correct version is used when debugging and publishing based on project settings. The version settings are defined in the `.csproj` file in the following properties:
+In Visual Studio, you select the runtime version when you create a project. Azure Functions tools for Visual Studio supports the two major runtime versions. The correct version is used when debugging and publishing based on project settings. The version settings are defined in the `.csproj` file in the following properties:
# [Version 4.x](#tab/v4)
You can also choose `net6.0`, `net7.0`, `net8.0`, or `net48` as the target frame
> [!NOTE] > Azure Functions 4.x requires the `Microsoft.NET.Sdk.Functions` extension be at least `4.0.0`.
-# [Version 3.x](#tab/v3)
-
-Reached the end of life (EOL) on December 13, 2022. We highly recommend you [migrating your apps to version 4.x](migrate-version-3-version-4.md) for full support.
-
-# [Version 2.x](#tab/v2)
-
-Reached the end of life (EOL) on December 13, 2022. We highly recommend you [migrating your apps to version 4.x](migrate-version-3-version-4.md) for full support.
- # [Version 1.x](#tab/v1) ```xml
Reached the end of life (EOL) on December 13, 2022. We highly recommend you [mig
```
-### VS Code and Azure Functions Core Tools
+### Visual Studio Code and Azure Functions Core Tools
[Azure Functions Core Tools](functions-run-local.md) is used for command-line development and also by the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. For more information, see [Install the Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
This article explains the Azure Functions language runtime support policy.
## Retirement process
-Azure Functions runtime is built around various components, including operating systems, the Azure Functions host, and language-specific workers. To maintain full-support coverages for function apps, Functions support aligns with end-of-life support for a given language. To achieve this, Functions implements a phased reduction in support as programming language versions reach their end-of-life dates. For most language versions, the retirement date coincides with the community end-of-life date.
+The Azure Functions runtime is built around various components, including operating systems, the Azure Functions host, and language-specific workers. To maintain full-support coverage for function apps, Functions support aligns with end-of-life support for a given language. To achieve this goal, Functions implements a phased reduction in support as programming language versions reach their end-of-life dates. For most language versions, the retirement date coincides with the community end-of-life date.
### Notification phase
-We'll send notification emails to function app users about upcoming language version retirements. The notifications will be at least one year before the date of retirement. Upon the notification, you should prepare to upgrade the language version that your functions apps use to a supported version.
+The Functions team sends notification emails to function app users about upcoming language version retirements. The notifications are sent at least one year before the date of retirement. When you receive the notification, you should prepare to upgrade your function apps to use a supported language version.
### Retirement phase
-After the language end-of-life date, function apps that use retired language versions can still be created and deployed, and they continue to run on the platform. However your apps won't be eligible for new features, security patches, and performance optimizations until you upgrade them to a supported language version.
+After the language end-of-life date, function apps that use retired language versions can still be created and deployed, and they continue to run on the platform. However, your apps aren't eligible for new features, security patches, and performance optimizations until you upgrade them to a supported language version.
> [!IMPORTANT] >You're highly encouraged to upgrade the language version of your affected function apps to a supported version.
After the language end-of-life date, function apps that use retired language ver
## Retirement policy exceptions
-There are few exceptions to the retirement policy outlined above. Here is a list of languages that are approaching or have reached their end-of-life (EOL) dates but continue to be supported on the platform until further notice. When these languages versions reach their end-of-life dates, they are no longer updated or patched. Because of this, we discourage you from developing and running your function apps on these language versions.
+There are a few exceptions to the retirement policy outlined above. Here's a list of languages that are approaching or have reached their end-of-life (EOL) dates but continue to be supported on the platform until further notice. When these language versions reach their end-of-life dates, they're no longer updated or patched. Because of this, we discourage you from developing and running your function apps on these language versions.
|Language Versions |EOL Date |Retirement Date| |--|--|-|
To learn more about specific language version support policy timeline, visit the
|PowerShell |[link](./functions-reference-powershell.md#changing-the-powershell-version)| |Python |[link](./functions-reference-python.md#python-version)|
+## Retired runtime versions
+
+This historical table shows the highest language level for specific Azure Functions runtime versions that are no longer supported:
+
+|Language |2.x | 3.x |
+|--|| |
+|[C#](functions-dotnet-class-library.md)|GA (.NET Core 2.1)| GA (.NET Core 3.1 & .NET 5<sup>*</sup>) |
+|[JavaScript/TypeScript](functions-reference-node.md?tabs=javascript)|GA (Node.js 10 & 8)| GA (Node.js 14, 12, & 10) |
+|[Java](functions-reference-java.md)|GA (Java 8)| GA (Java 11 & 8)|
+|[PowerShell](functions-reference-powershell.md) |N/A|N/A|
+|[Python](functions-reference-python.md#python-version)|GA (Python 3.7)| GA (Python 3.9, 3.8, 3.7)|
+|[TypeScript](functions-reference-node.md?tabs=typescript) |GA| GA |
+
+<sup>*</sup>.NET 5 was only supported for C# apps running in the [isolated worker model](dotnet-isolated-process-guide.md).
+
+For the language levels currently supported by Azure Functions, see [Languages by runtime version](supported-languages.md#languages-by-runtime-version).
## Next steps
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
In version 2.x, the following changes were made:
* The URL format of Event Grid trigger webhooks has been changed to follow this pattern: `https://{app}/runtime/webhooks/{triggerName}`.
+* The names of some [pre-defined custom metrics](analyze-telemetry-data.md) were changed after version 1.x. `Duration` was replaced with `MaxDurationMs`, `MinDurationMs`, and `AvgDurationMs`. The `Success Rate` metric was also renamed.
+ ## Considerations for Azure Stack Hub [App Service on Azure Stack Hub](/azure-stack/operator/azure-stack-app-service-overview) does not support version 4.x of Azure Functions. When you are planning a migration off of version 1.x in Azure Stack Hub, you can choose one of the following options:
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
Last updated 07/31/2023
zone_pivot_groups: programming-languages-set-functions
-# <a name="top"></a>Migrate apps from Azure Functions version 3.x to version 4.x
+# <a name="top"></a>Migrate apps from Azure Functions version 3.x to version 4.x
Azure Functions version 4.x is highly backwards compatible with version 3.x. Most apps should safely upgrade to 4.x without requiring significant code changes. For more information about Functions runtime versions, see [Azure Functions runtime versions overview](./functions-versions.md). > [!IMPORTANT]
-> As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of life (EOL) of extended support.
->
-> Apps using versions 2.x and 3.x can still be created and deployed from your CI/CD DevOps pipeline, and all existing apps continue to run without breaking changes. However, your apps are not eligible for new features, security patches, and performance optimizations. You'll only get related service support once you upgrade them to version 4.x.
->
-> End of support for these older runtime versions is due to the end of support for .NET Core 3.1, which they had as a core dependency. This requirement affects all [languages supported by Azure Functions](supported-languages.md).
->
-> We highly recommend that you migrate your function apps to version 4.x of the Functions runtime by following this article.
+> As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of life (EOL) of extended support. For more information, see [Retired versions](functions-versions.md#retired-versions).
This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
azure-functions Streaming Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/streaming-logs.md
While developing an application, you often want to see what's being written to t
There are two ways to view a stream of log files being generated by your function executions.
-* **Built-in log streaming**: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during [local development](functions-develop-local.md) and when you use the **Test** tab in the portal. All log-based information is displayed. For more information, see [Stream logs](../app-service/troubleshoot-diagnostic-logs.md#stream-logs). This streaming method supports only a single instance, and can't be used with an app running on Linux in a Consumption plan.
+* **Built-in log streaming**: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during [local development](functions-develop-local.md) and when you use the **Test** tab in the portal. All log-based information is displayed. For more information, see [Stream logs](../app-service/troubleshoot-diagnostic-logs.md#stream-logs). This streaming method supports only a single instance, and can't be used with an app running on Linux in a Consumption plan. When your function is scaled to multiple instances, data from other instances isn't shown using this method. A command-line example follows this list.
* **Live Metrics Stream**: when your function app is [connected to Application Insights](configure-monitoring.md#enable-application-insights-integration), you can view log data and other metrics in near real-time in the Azure portal using [Live Metrics Stream](../azure-monitor/app/live-stream.md). Use this method when monitoring functions running on multiple instances; it supports all plan types. This method uses [sampled data](configure-monitoring.md#configure-sampling).
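For example, the built-in log stream (the first option above) can be opened from the command line; this Azure CLI sketch uses placeholder names:

```azurecli
az webapp log tail --name <APP_NAME> --resource-group <RESOURCE_GROUP>
```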
In Application Insights, select **Live Metrics Stream**. [Sampled log entries](c
## [Visual Studio Code](#tab/vs-code)
+To turn on the streaming logs for your function app in Azure:
+
+1. Select F1 to open the command palette, and then search for and run the command **Azure Functions: Start Streaming Logs**.
+
+1. Select your function app in Azure, and then select **Yes** to enable application logging for the function app.
+
+1. Trigger your functions in Azure. Notice that log data is displayed in the Output window in Visual Studio Code.
+
+1. When you're done, remember to run the command **Azure Functions: Stop Streaming Logs** to disable logging for the function app.
## [Core Tools](#tab/core-tools)
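With Core Tools, you can typically connect to the built-in streaming logs for your function app in Azure with a command like the following (a sketch; the function app name is a placeholder):

```bash
func azure functionapp logstream <FunctionAppName>
```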
azure-functions Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/supported-languages.md
Title: Supported languages in Azure Functions
-description: Learn which languages are supported (GA) and which are in preview, and ways to extend Functions development to other languages.
+description: Learn which languages are supported for developing your Functions in Azure, the support level of the various language versions, and potential end-of-life dates.
Previously updated : 11/27/2019- Last updated : 08/27/2023
+zone_pivot_groups: programming-languages-set-functions
# Supported languages in Azure Functions
-This article explains the levels of support offered for languages that you can use with Azure Functions. It also describes strategies for creating functions using languages not natively supported.
+This article explains the levels of support offered for your preferred language when using Azure Functions. It also describes strategies for creating functions using languages not natively supported.
[!INCLUDE [functions-support-levels](../../includes/functions-support-levels.md)] ## Languages by runtime version
-[Several versions of the Azure Functions runtime](functions-versions.md) are available. The following table shows which languages are supported in each runtime version.
- [!INCLUDE [functions-portal-language-support](../../includes/functions-portal-language-support.md)]
Azure Functions provides a guarantee of support for the major versions of suppor
> [!NOTE] >Because Azure Functions can remove the support of older minor versions at any time after a new minor version is available, you shouldn't pin your function apps to a specific minor/patch version of a programming language.
->
## Custom handlers
Custom handlers are lightweight web servers that receive events from the Azure F
Starting with version 2.x, the runtime is designed to offer [language extensibility](https://github.com/Azure/azure-webjobs-sdk-script/wiki/Language-Extensibility). The JavaScript and Java languages in the 2.x runtime are built with this extensibility.
-## Next steps
+## Next steps
+### [Isolated process](#tab/isolated-process)
+
+> [!div class="nextstepaction"]
+> [.NET isolated worker process reference](dotnet-isolated-process-guide.md).
-To learn more about how to develop functions in the supported languages, see the following resources:
+### [In-process](#tab/in-process)
+
+> [!div class="nextstepaction"]
+> [In-process C# developer reference](functions-dotnet-class-library.md)
++
-+ [C# class library developer reference](functions-dotnet-class-library.md)
-+ [C# script developer reference](functions-reference-csharp.md)
-+ [Java developer reference](functions-reference-java.md)
-+ [JavaScript developer reference](functions-reference-node.md?tabs=javascript)
-+ [PowerShell developer reference](functions-reference-powershell.md)
-+ [Python developer reference](functions-reference-python.md)
-+ [TypeScript developer reference](functions-reference-node.md?tabs=typescript)
+> [!div class="nextstepaction"]
+> [Java developer reference](functions-reference-java.md)
+> [!div class="nextstepaction"]
+> [Node.js developer reference](functions-reference-node.md?tabs=javascript)
+> [!div class="nextstepaction"]
+> [PowerShell developer reference](functions-reference-powershell.md)
+> [!div class="nextstepaction"]
+> [Python developer reference](functions-reference-python.md)
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
To edit an existing alert rule:
||| |Number of violations|The number of violations that trigger the alert.| |Evaluation period|The time period within which the number of violations occur. |
- |Override query time range| If you want the alert evaluation period to be different than the query time range, enter a time range here.<br> The alert time range is limited to a maximum of two days. Even if the query contains an **ago** command with a time range of longer than two days, the two-day maximum time range is applied. For example, even if the query text contains **ago(7d)**, the query only scans up to two days of data.<br> If the query requires more data than the alert evaluation you can change the time range manually.
-If the query contains **ago** command in the query, it will be cahnged automatically to 2 days (48 hours).|
+ |Override query time range| If you want the alert evaluation period to be different than the query time range, enter a time range here.<br> The alert time range is limited to a maximum of two days. Even if the query contains an **ago** command with a time range of longer than two days, the two-day maximum time range is applied. For example, even if the query text contains **ago(7d)**, the query only scans up to two days of data.<br> If the query requires more data than the alert evaluation, you can change the time range manually.<br> If the query contains the **ago** command, it's changed automatically to two days (48 hours).|
> [!NOTE] > If you or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select **Check workspace linked storage**. If you don't, the rule creation will fail because it won't meet the policy requirements.
azure-monitor Alerts Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-plan.md
+
+ Title: 'Azure Monitor best practices: Alerts and automated actions'
+description: Recommendations for deployment of Azure Monitor alerts and automated actions.
+++ Last updated : 05/31/2023++++
+# Deploy Azure Monitor: Alerts and automated actions
+
+This article provides guidance on alerts in Azure Monitor. Alerts proactively notify you of important data or patterns identified in your monitoring data. You can view alerts in the Azure portal. You can create alerts that:
+
+- Send a proactive notification.
+- Initiate an automated action to attempt to remediate an issue.
+
+## Alerting strategy
+
+An alerting strategy defines your organization's standards for:
+
+- The types of alert rules that you'll create for different scenarios.
+- How you'll categorize and manage alerts after they're created.
+- Automated actions and notifications that you'll take in response to alerts.
+
+An alerting strategy helps you define the configuration of alert rules, including alert severity and action groups.
+
+For factors to consider as you develop an alerting strategy, see [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy).
+
+## Alert rule types
+
+Alerts in Azure Monitor are generated by alert rules, which you must create yourself. For guidance on recommended alert rules, see the monitoring documentation for each Azure service. Azure Monitor doesn't have any alert rules by default.
+
+Multiple types of alert rules are defined by the type of data they use. Each has different capabilities and a different cost. The basic strategy is to use the alert rule type with the lowest cost that provides the logic you require.
+
+- Activity log rules. Creates an alert in response to a new activity log event that matches specified conditions. There's no cost to these alerts so they should be your first choice, although the conditions they can detect are limited. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating an activity log alert.
+- Metric alert rules. Creates an alert in response to one or more metric values exceeding a threshold. Metric alerts are stateful, which means that the alert will automatically close when the value drops below the threshold, and it will only send out notifications when the state changes. There's a cost to metric alerts, but it's often much less than log alerts. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating a metric alert. A command-line sketch follows this list.
+- Log alert rules. Creates an alert when the results of a scheduled query match specified criteria. They're the most expensive of the alert rules, but they allow the most complex criteria. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating a log query alert.
+- [Application alerts](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability). Performs proactive performance and availability testing of your web application. You can perform a ping test at no cost, but there's a cost to more complex testing. See [Monitor the availability of any website](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) for a description of the different tests and information on creating them.
+
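+As an illustration, a metric alert rule can also be created from the command line. The following Azure CLI sketch uses placeholder names and IDs:
+
+```azurecli
+az monitor metrics alert create \
+  --name "high-cpu" \
+  --resource-group <RESOURCE_GROUP> \
+  --scopes <RESOURCE_ID> \
+  --condition "avg Percentage CPU > 90" \
+  --action <ACTION_GROUP_ID>
+```
+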
+## Alert severity
+
+Each alert rule defines the severity of the alerts that it creates based on the following table. Alerts in the Azure portal are grouped by level so that you can manage similar alerts together and quickly identify alerts that require the greatest urgency.
+
+| Level | Name | Description |
+|:|:|:|
+| Sev 0 | Critical | Loss of service or application availability or severe degradation of performance. Requires immediate attention. |
+| Sev 1 | Error | Degradation of performance or loss of availability of some aspect of an application or service. Requires attention but not immediate. |
+| Sev 2 | Warning | A problem that doesn't include any current loss in availability or performance, although it has the potential to lead to more severe problems if unaddressed. |
+| Sev 3 | Informational | Doesn't indicate a problem but provides interesting information to an operator, such as successful completion of a regular process. |
+| Sev 4 | Verbose | Doesn't indicate a problem but provides detailed, verbose information. |
+
+Assess the severity of the condition each rule is identifying to assign an appropriate level. Define the types of issues you assign to each severity level and your standard response to each in your alerts strategy.
+
+## Action groups
+
+Automated responses to alerts in Azure Monitor are defined in [action groups](action-groups.md). An action group is a collection of one or more notifications and actions that are fired when an alert is triggered. A single action group can be used with multiple alert rules and contain one or more of the following items:
+
+- **Notifications**: Messages that notify operators and administrators that an alert was created.
+- **Actions**: Automated processes that attempt to correct the detected issue.
+
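+For example, an action group with a single email notification can be created with the Azure CLI (a sketch; the names and address are placeholders):
+
+```azurecli
+az monitor action-group create \
+  --name <ACTION_GROUP_NAME> \
+  --resource-group <RESOURCE_GROUP> \
+  --action email admin-contact admin@contoso.com
+```
+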
+## Notifications
+
+Notifications are messages sent to one or more users to notify them that an alert has been created. Because a single action group can be used with multiple alert rules, you should design a set of action groups for different sets of administrators and users who will receive the same sets of alerts. Use any of the following types of notifications depending on the preferences of your operators and your organizational standards:
+
+- Email
+- SMS
+- Push to Azure app
+- Voice
+- Email Azure Resource Manager role
+
+## Actions
+
+Actions are automated responses to an alert. You can use the available actions for any scenario that they support, but the following sections describe how each action is typically used.
+
+### Automated remediation
+
+Use the following actions to attempt automated remediation of the issue identified by the alert:
+
+- **Automation runbook**: Start a built-in runbook or a custom runbook in Azure Automation. For example, built-in runbooks are available to perform such functions as restarting or scaling up a virtual machine.
+- **Azure Functions**: Start an Azure function.
+
+### ITSM and on-call management
+
+- **IT service management (ITSM)**: Use the ITSM Connector to create work items in your ITSM tool based on alerts from Azure Monitor. You first configure the connector and then use the **ITSM** action in alert rules.
+- **Webhooks**: Send the alert to an incident management system that supports webhooks such as PagerDuty and Splunk On-Call.
+- **Secure webhook**: Integrate ITSM with Azure Active Directory Authentication.
+
+## Minimize alert activity
+
+You want to create alerts for any important information in your environment, but you don't want to create excessive alerts and notifications for issues that don't warrant them. To surface critical issues without generating excess information and notifications for administrators, follow these guidelines:
+
+- See [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy) to determine whether a symptom is an appropriate candidate for alerting.
+- Use the **Automatically resolve alerts** option in metric alert rules to resolve alerts when the condition has been corrected.
+- Use the **Suppress alerts** option in log query alert rules to avoid creating multiple alerts for the same issue.
+- Ensure that you use appropriate severity levels for alert rules so that high-priority issues can be analyzed together.
+- Limit notifications for alerts with a severity of Warning or less because they don't require immediate attention.
+
+## Create alert rules at scale
+
+Typically, you'll want to alert on issues for all your critical Azure applications and resources. Use the following methods for creating alert rules at scale:
+
+- Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. For a list of Azure services that are currently supported for this feature, see [Monitoring at scale using metric alerts in Azure Monitor](alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor).
+- For metric alert rules for Azure services that don't support multiple resources, use automation tools such as the Azure CLI and PowerShell with Resource Manager templates to create the same alert rule for multiple resources. For samples, see [Resource Manager template samples for metric alert rules in Azure Monitor](resource-manager-alerts-metric.md). A command-line sketch appears after the note that follows this list.
+- To return data for multiple resources, write queries in log query alert rules. Use the **Split by dimensions** setting in the rule to create separate alerts for each resource.
+
+> [!NOTE]
+> Resource-centric log query alert rules, currently in public preview, allow you to use all resources in a subscription or resource group as a target for a log query alert.
+
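+For the Azure CLI option above, a simple loop can stamp out the same rule for each resource. This bash sketch assumes a set of virtual machines and placeholder names:
+
+```azurecli
+# Create the same CPU alert rule for every VM in a resource group.
+for id in $(az vm list --resource-group <RESOURCE_GROUP> --query "[].id" --output tsv); do
+  az monitor metrics alert create \
+    --name "high-cpu-$(basename "$id")" \
+    --resource-group <RESOURCE_GROUP> \
+    --scopes "$id" \
+    --condition "avg Percentage CPU > 90" \
+    --action <ACTION_GROUP_ID>
+done
+```
+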
+## Next steps
+
+[Optimize cost in Azure Monitor](../best-practices-cost.md).
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues. Previously updated : 01/24/2023 Last updated : 09/12/2023 ms.devlang: csharp, java, javascript, vb
azure-monitor Application Insights Asp Net Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/application-insights-asp-net-agent.md
Title: Deploy Application Insights Agent description: Learn how to use Application Insights Agent to monitor website performance. It works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. Previously updated : 08/11/2023 Last updated : 09/12/2023
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
description: Monitor ASP.NET Core web applications for availability, performance
ms.devlang: csharp Previously updated : 04/24/2023 Last updated : 09/12/2023 # Application Insights for ASP.NET Core applications
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
Title: Review TrackAvailability() test results description: This article explains how to review data logged by TrackAvailability() tests Previously updated : 08/20/2023 Last updated : 09/12/2023 # Review TrackAvailability() test results
azure-monitor Availability Standard Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-standard-tests.md
Title: Availability Standard test - Azure Monitor Application Insights description: Set up Standard tests in Application Insights to check for availability of a website with a single request test. Previously updated : 03/22/2023 Last updated : 09/12/2023 # Standard test
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Title: Autoinstrumentation for Azure Monitor Application Insights
description: Overview of autoinstrumentation for Azure Monitor Application Insights codeless application performance management. Previously updated : 08/11/2023 Last updated : 09/12/2023
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
Title: ApplicationInsights.config reference - Azure | Microsoft Docs description: Enable or disable data collection modules and add performance counters and other parameters. Previously updated : 03/22/2023 Last updated : 09/12/2023 ms.devlang: csharp
azure-monitor Diagnostic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md
Title: Use Search in Azure Application Insights | Microsoft Docs description: Search and filter raw telemetry sent by your web app. Previously updated : 03/22/2023 Last updated : 09/12/2023
azure-monitor Java Standalone Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md
Title: Java Profiler for Azure Monitor Application Insights description: How to configure the Azure Monitor Application Insights for Java Profiler Previously updated : 11/15/2022 Last updated : 09/12/2023 ms.devlang: java
The ApplicationInsights Java Agent monitors CPU, memory, and request duration su
#### Profile now
-Within the profiler user interface (see [profiler settings](../profiler/profiler-settings.md)) there's a **Profile now** button. Selecting this button will immediately request a profile in all agents that are attached to the Application Insights instance.
+Within the profiler user interface (see [profiler settings](../profiler/profiler-settings.md)) there's a **Profile now** button. Selecting this button immediately requests a profile in all agents that are attached to the Application Insights instance.
> [!WARNING] > Invoking Profile now will enable the profiler feature, and Application Insights will apply default CPU and memory SLA triggers. When your application breaches those SLAs, Application Insights will gather Java profiles. If you wish to disable profiling later on, you can do so within the trigger menu shown in [Installation](#installation).
For instance, take the following scenario:
- Therefore the maximum possible size of tenured would be 922 mb. - Your threshold was set via the user interface to 75%, therefore your threshold would be 75% of 922 mb, 691 mb.
-In this scenario, a profile will occur in the following circumstances:
+In this scenario, a profile occurs in the following circumstances:
- Full garbage collection is executed - The Tenured regions occupancy is above 691 mb after collection ### Request
-SLA triggers are based on OpenTelemetry (otel) and they will initiate a profile if certain criteria is fulfilled.
+SLA triggers are based on OpenTelemetry (otel) and they initiate a profile if certain criteria are fulfilled.
Each individual trigger configuration is formed as follows:
For instance, the following scenario would trigger a profile if: more than 75% o
### Installation
-The following steps will guide you through enabling the profiling component on the agent and configuring resource limits that will trigger a profile if breached.
+The following steps guide you through enabling the profiling component on the agent and configuring resource limits that trigger a profile if breached.
-1. Configure the resource thresholds that will cause a profile to be collected:
+1. Configure the resource thresholds that cause a profile to be collected:
1. Browse to the Performance -> Profiler section of the Application Insights instance. :::image type="content" source="./media/java-standalone-profiler/performance-blade.png" alt-text="Screenshot of the link to open performance pane." lightbox="media/java-standalone-profiler/performance-blade.png":::
The following steps will guide you through enabling the profiling component on t
> [!WARNING] > The Java profiler does not support the "Sampling" trigger. Configuring this will have no effect.
-After these steps have been completed, the agent will monitor the resource usage of your process and trigger a profile when the threshold is exceeded. When a profile has been triggered and completed, it will be viewable from the
+After these steps have been completed, the agent will monitor the resource usage of your process and trigger a profile when the threshold is exceeded. When a profile has been triggered and completed, it's viewable from the
Application Insights instance within the Performance -> Profiler section. From that screen the profile can be downloaded; once downloaded, the JFR recording file can be opened and analyzed within a tool of your choosing, for example JDK Mission Control (JMC). :::image type="content" source="./media/java-standalone-profiler/configure-blade-inline.png" alt-text="Screenshot of profiler page features and settings." lightbox="media/java-standalone-profiler/configure-blade-inline.png":::
Example configuration:
```
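For orientation, here's a minimal sketch of how the settings described below might sit together in the agent's `applicationinsights.json`; the `preview.profiler` nesting is an assumption based on the agent's preview configuration layout:

```json
{
  "preview": {
    "profiler": {
      "memoryTriggeredSettings": "profile-without-env-data",
      "cpuTriggeredSettings": "profile-without-env-data",
      "manualTriggeredSettings": "profile-without-env-data",
      "enableRequestTriggering": false
    }
  }
}
```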
-`memoryTriggeredSettings` This configuration will be used if a memory profile is requested. This value can be one of:
+`memoryTriggeredSettings` This configuration is used if a memory profile is requested. This value can be one of:
- `profile-without-env-data` (default value). A profile with certain sensitive events disabled, see Warning section above for details. - `profile`. Uses the `profile.jfc` configuration that ships with JFR. - A path to a custom jfc configuration file on the file system, for example `/tmp/myconfig.jfc`.
-`cpuTriggeredSettings` This configuration will be used if a cpu profile is requested.
+`cpuTriggeredSettings` This configuration is used if a cpu profile is requested.
This value can be one of: - `profile-without-env-data` (default value). A profile with certain sensitive events disabled, see Warning section above for details. - `profile`. Uses the `profile.jfc` configuration that ships with JFR. - A path to a custom jfc configuration file on the file system, for example `/tmp/myconfig.jfc`.
-`manualTriggeredSettings` This configuration will be used if a manual profile is requested.
+`manualTriggeredSettings` This configuration is used if a manual profile is requested.
This value can be one of: - `profile-without-env-data` (default value). A profile with certain sensitive events disabled, see
This value can be one of:
`enableRequestTriggering` Whether JFR profiling should be triggered based on request configuration. This value can be one of: -- `true` Profiling will be triggered if a request trigger threshold is breached.
+- `true` Profiling is triggered if a request trigger threshold is breached.
- `false` (default value). Profiling will not be triggered by request configuration. ## Frequently asked questions
azure-monitor Java Standalone Telemetry Processors Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md
Title: Telemetry processor examples - Azure Monitor Application Insights for Java description: Explore examples that show telemetry processors in Azure Monitor Application Insights for Java. Previously updated : 05/13/2023 Last updated : 09/12/2023 ms.devlang: java
# Telemetry processor examples - Azure Monitor Application Insights for Java
-This article provides examples of telemetry processors in Application Insights for Java. You'll find samples for include and exclude configurations. You'll also find samples for attribute processors and span processors.
+This article provides examples of telemetry processors in Application Insights for Java, including samples for include and exclude configurations. It also includes samples for attribute processors and span processors.
## Include and exclude Span samples In this section, you'll see how to include and exclude spans. You'll also see how to exclude multiple spans and apply selective processing. ### Include spans
-This section shows how to include spans for an attribute processor. Spans that don't match the properties aren't processed by the processor.
+This section shows how to include spans for an attribute processor. The processor doesn't process spans that don't match the properties.
A match requires the span name to be equal to `spanA` or `spanB`.
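A sketch of what that include block might look like in the processor configuration follows; the action shown is illustrative:

```json
{
  "preview": {
    "processors": [
      {
        "type": "attribute",
        "include": {
          "matchType": "strict",
          "spanNames": ["spanA", "spanB"]
        },
        "actions": [
          { "key": "attribute1", "value": "newValue", "action": "update" }
        ]
      }
    ]
  }
}
```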
This span doesn't match the include properties, and the processor actions aren't
### Exclude spans
-This section demonstrates how to exclude spans for an attribute processor. Spans that match the properties aren't processed by this processor.
+This section demonstrates how to exclude spans for an attribute processor. This processor doesn't process spans that match the properties.
A match requires the span name to be equal to `spanA` or `spanB`.
This span doesn't match the exclude properties, and the processor actions are ap
### Exclude spans by using multiple criteria
-This section demonstrates how to exclude spans for an attribute processor. Spans that match the properties aren't processed by this processor.
+This section demonstrates how to exclude spans for an attribute processor. This processor doesn't process spans that match the properties.
A match requires the following conditions to be met: * An attribute (for example, `env` with value `dev`) must exist in the span.
Let's assume the input log message body is `Starting PetClinicApplication on Wor
### Masking sensitive data in log message The following sample shows how to mask sensitive data in a log message body using both log processor and attribute processor.
-Let's assume the input log message body is `User account with userId 123456xx failed to login`. The log processor updates output message body to `User account with userId {redactedUserId} failed to login` and the attribute processor deletes the new attribute `redactedUserId` which was adding in the previous step.
+Let's assume the input log message body is `User account with userId 123456xx failed to login`. The log processor updates the output message body to `User account with userId {redactedUserId} failed to login` and the attribute processor deletes the new attribute `redactedUserId`, which was added in the previous step.
```json { "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
Title: Telemetry processors (preview) - Azure Monitor Application Insights for Java description: Learn to configure telemetry processors in Azure Monitor Application Insights for Java. Previously updated : 05/13/2023 Last updated : 09/12/2023 ms.devlang: java
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
description: Learn how to install and use JavaScript feature extensions (Click A
ibiza Previously updated : 07/10/2023 Last updated : 09/12/2023 ms.devlang: javascript
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
Title: Microsoft Azure Monitor Application Insights JavaScript SDK description: Microsoft Azure Monitor Application Insights JavaScript SDK is a powerful tool for monitoring and analyzing web application performance. Previously updated : 08/11/2023 Last updated : 09/12/2023 ms.devlang: javascript
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Title: Migrate from Application Insights instrumentation keys to connection strings description: Learn the steps required to upgrade from Azure Monitor Application Insights instrumentation keys to connection strings. Previously updated : 08/11/2023 Last updated : 09/12/2023
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Title: Monitor Node.js services with Application Insights | Microsoft Docs description: Monitor performance and diagnose problems in Node.js services with Application Insights. Previously updated : 06/23/2023 Last updated : 09/12/2023 ms.devlang: javascript
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
Title: Add, modify, and filter Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to add, modify, and filter OpenTelemetry for applications using Azure Monitor. Previously updated : 08/11/2023 Last updated : 09/12/2023 ms.devlang: csharp, javascript, typescript, python
You can't extend the Java Distro with community instrumentation libraries. To re
Other OpenTelemetry instrumentations are available [here](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node) and can be added by registering them with the trace and meter providers, as shown in the following example. ```javascript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { metrics, trace } = require("@opentelemetry/api");
+ const { registerInstrumentations } = require( "@opentelemetry/instrumentation");
const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express');
- const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
- const traceHandler = appInsights.getTraceHandler();
- traceHandler.addInstrumentation(new ExpressInstrumentation());
+ useAzureMonitor();
+ const tracerProvider = trace.getTracerProvider().getDelegate();
+ const meterProvider = metrics.getMeterProvider();
+ registerInstrumentations({
+ instrumentations: [
+ new ExpressInstrumentation(),
+ ],
+ tracerProvider: tracerProvider,
+ meterProvider: meterProvider
+ });
``` ### [Python](#tab/python)
public class Program {
#### [Node.js](#tab/nodejs) ```javascript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
- const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
- const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
- const meter = customMetricsHandler.getMeter();
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { metrics } = require("@opentelemetry/api");
+
+ useAzureMonitor();
+ const meter = metrics.getMeter("testMeter");
let histogram = meter.createHistogram("histogram"); histogram.record(1, { "testKey": "testValue" }); histogram.record(30, { "testKey": "testValue2" });
public class Program {
#### [Node.js](#tab/nodejs) ```javascript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
- const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
- const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
- const meter = customMetricsHandler.getMeter();
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { metrics } = require("@opentelemetry/api");
+
+ useAzureMonitor();
+ const meter = metrics.getMeter("testMeter");
let counter = meter.createCounter("counter"); counter.add(1, { "testKey": "testValue" }); counter.add(5, { "testKey2": "testValue" });
public class Program {
#### [Node.js](#tab/nodejs) ```typescript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
- const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
- const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
- const meter = customMetricsHandler.getMeter();
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { metrics } = require("@opentelemetry/api");
+
+ useAzureMonitor();
+ const meter = metrics.getMeter("testMeter");
let gauge = meter.createObservableGauge("gauge"); gauge.addCallback((observableResult: ObservableResult) => { let randomNumber = Math.floor(Math.random() * 100);
You can use `opentelemetry-api` to update the status of a span and record except
#### [Node.js](#tab/nodejs) ```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-
-const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
-const tracer = appInsights.getTraceHandler().getTracer();
-let span = tracer.startSpan("hello");
-try{
- throw new Error("Test Error");
-}
-catch(error){
- span.recordException(error);
-}
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { trace } = require("@opentelemetry/api");
+
+ useAzureMonitor();
+ const tracer = trace.getTracer("testTracer");
+ let span = tracer.startSpan("hello");
+ try{
+ throw new Error("Test Error");
+ }
+ catch(error){
+ span.recordException(error);
+ }
``` #### [Python](#tab/python)
you can add your spans by using the OpenTelemetry API.
#### [Node.js](#tab/nodejs) ```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { trace } = require("@opentelemetry/api");
-const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
-const tracer = appInsights.getTraceHandler().getTracer();
-let span = tracer.startSpan("hello");
-span.end();
+ useAzureMonitor();
+ const tracer = trace.getTracer("testTracer");
+ let span = tracer.startSpan("hello");
+ span.end();
```
Not available in .NET.
#### [Node.js](#tab/nodejs)
-First, get the `LogHandler`:
+To do this, use the [`applicationinsights` v3 beta package](https://www.npmjs.com/package/applicationinsights/v/beta).
```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
-const logHandler = appInsights.getLogHandler();
+ const { TelemetryClient } = require("applicationinsights");
+
+ const appInsights = new TelemetryClient();
```
-Then use the `LogHandler` to send custom telemetry:
+Then use the `TelemetryClient` to send custom telemetry:
##### Events ```javascript
-let eventTelemetry = {
- name: "testEvent"
-};
-logHandler.trackEvent(eventTelemetry);
+ let eventTelemetry = {
+ name: "testEvent"
+ };
+ appInsights.trackEvent(eventTelemetry);
``` ##### Logs ```javascript
-let traceTelemetry = {
- message: "testMessage",
- severity: "Information"
-};
-logHandler.trackTrace(traceTelemetry);
+ let traceTelemetry = {
+ message: "testMessage",
+ severity: "Information"
+ };
+ appInsights.trackTrace(traceTelemetry);
``` ##### Exceptions ```javascript
-try {
- ...
-} catch (error) {
- let exceptionTelemetry = {
- exception: error,
- severity: "Critical"
- };
- logHandler.trackException(exceptionTelemetry);
-}
+ try {
+ ...
+ } catch (error) {
+ let exceptionTelemetry = {
+ exception: error,
+ severity: "Critical"
+ };
+ appInsights.trackException(exceptionTelemetry);
+ }
``` #### [Python](#tab/python)
Adding one or more span attributes populates the `customDimensions` field in the
##### [Node.js](#tab/nodejs) ```typescript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-const { ReadableSpan, Span, SpanProcessor } = require("@opentelemetry/sdk-trace-base");
-const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { trace } = require("@opentelemetry/api");
+ const { ReadableSpan, Span, SpanProcessor } = require("@opentelemetry/sdk-trace-base");
+ const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
-const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+ useAzureMonitor();
+ const tracerProvider = trace.getTracerProvider().getDelegate();
-class SpanEnrichingProcessor implements SpanProcessor{
- forceFlush(): Promise<void>{
- return Promise.resolve();
- }
- shutdown(): Promise<void>{
- return Promise.resolve();
- }
- onStart(_span: Span): void{}
- onEnd(span: ReadableSpan){
- span.attributes["CustomDimension1"] = "value1";
- span.attributes["CustomDimension2"] = "value2";
+ class SpanEnrichingProcessor implements SpanProcessor{
+ forceFlush(): Promise<void>{
+ return Promise.resolve();
+ }
+ shutdown(): Promise<void>{
+ return Promise.resolve();
+ }
+ onStart(_span: Span): void{}
+ onEnd(span: ReadableSpan){
+ span.attributes["CustomDimension1"] = "value1";
+ span.attributes["CustomDimension2"] = "value2";
+ }
}
-}
-appInsights.getTraceHandler().addSpanProcessor(new SpanEnrichingProcessor());
+ tracerProvider.addSpanProcessor(new SpanEnrichingProcessor());
``` ##### [Python](#tab/python)
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
```typescript ...
-const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
+ const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
-class SpanEnrichingProcessor implements SpanProcessor{
- ...
+ class SpanEnrichingProcessor implements SpanProcessor{
+ ...
- onEnd(span){
- span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>";
+ onEnd(span){
+ span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>";
+ }
}
-}
``` ##### [Python](#tab/python)
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
```typescript ...
-import { SemanticAttributes } from "@opentelemetry/semantic-conventions";
+ import { SemanticAttributes } from "@opentelemetry/semantic-conventions";
-class SpanEnrichingProcessor implements SpanProcessor{
- ...
+ class SpanEnrichingProcessor implements SpanProcessor{
+ ...
- onEnd(span: ReadableSpan){
- span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>";
+ onEnd(span: ReadableSpan){
+ span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>";
+ }
}
-}
``` ##### [Python](#tab/python)
Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching c
Attributes can be added only when calling manual track APIs. Log attributes for console, bunyan, and Winston are currently not supported.
-```javascript
-const config = new ApplicationInsightsConfig();
-config.instrumentations.http = httpInstrumentationConfig;
-const appInsights = new ApplicationInsightsClient(config);
-const logHandler = appInsights.getLogHandler();
-const attributes = {
- "testAttribute1": "testValue1",
- "testAttribute2": "testValue2",
- "testAttribute3": "testValue3"
-};
-logHandler.trackEvent({
- name: "testEvent",
- properties: attributes
-});
+```typescript
+ const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+ const { logs } = require("@opentelemetry/api-logs");
+
+ useAzureMonitor();
+ const logger = logs.getLogger("testLogger");
+ const logRecord = {
+ body : "testEvent",
+ attributes: {
+ "testAttribute1": "testValue1",
+ "testAttribute2": "testValue2",
+ "testAttribute3": "testValue3"
+ }
+ };
+ logger.emit(logRecord);
``` #### [Python](#tab/python)
See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) a
The following example shows how to exclude a certain URL from being tracked by using the [HTTP/HTTPS instrumentation library](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http): ```typescript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const { useAzureMonitor, ApplicationInsightsOptions } = require("@azure/monitor-opentelemetry");
const { IncomingMessage } = require("http"); const { RequestOptions } = require("https"); const { HttpInstrumentationConfig }= require("@opentelemetry/instrumentation-http");
See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) a
return false; } };
- const config = new ApplicationInsightsConfig();
- config.instrumentations.http = httpInstrumentationConfig;
- const appInsights = new ApplicationInsightsClient(config);
+ const config: ApplicationInsightsOptions = {
+     instrumentationOptions: {
+         http: httpInstrumentationConfig,
+     },
+ };
+ useAzureMonitor(config);
``` 2. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set `TraceFlag` to `DEFAULT`.
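A sketch of such a filtering processor follows; in the JavaScript API the unsampled flag is `TraceFlags.NONE`, and the span-kind check shown is an illustrative condition:

```typescript
  const { SpanKind, TraceFlags } = require("@opentelemetry/api");
  const { ReadableSpan, Span, SpanProcessor } = require("@opentelemetry/sdk-trace-base");

  class SpanFilteringProcessor implements SpanProcessor {
    forceFlush(): Promise<void> {
      return Promise.resolve();
    }
    shutdown(): Promise<void> {
      return Promise.resolve();
    }
    onStart(span: Span): void {
      // Clear the sampled flag so matching spans aren't exported.
      if (span.kind === SpanKind.INTERNAL) {
        span.spanContext().traceFlags = TraceFlags.NONE;
      }
    }
    onEnd(_span: ReadableSpan): void {}
  }
```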
Get the request trace ID and the span ID in your code:
### [Node.js](#tab/nodejs) -- To review the source code, see the [Application Insights Beta GitHub repository](https://github.com/microsoft/ApplicationInsights-node.js/tree/beta).-- To install the npm package and check for updates, see the [`applicationinsights` npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page.
+- To review the source code, see the [Azure Monitor OpenTelemetry GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry).
+- To install the npm package and check for updates, see the [`@azure/monitor-opentelemetry` npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry) page.
- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js). - To enable usage experiences, [enable web or browser user monitoring](javascript.md).
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
Title: Configure Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides configuration guidance for .NET, Java, Node.js, and Python applications. Previously updated : 08/11/2023 Last updated : 09/12/2023 ms.devlang: csharp, javascript, typescript, python
For more information about Java, see the [Java supplemental documentation](java-
```typescript const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
+ const { trace } = require("@opentelemetry/api");
const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base'); const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 08/30/2023 Last updated : 09/12/2023 ms.devlang: csharp, javascript, typescript, python
Follow the steps in this section to instrument your application with OpenTelemet
### [ASP.NET Core](#tab/aspnetcore) -- Application using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet) or [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) that's at least .NET Framework 4.6.2
+- [ASP.NET Core Application](/aspnet/core/introduction-to-aspnet-core) using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet)
### [.NET](#tab/net)
As part of using Application Insights instrumentation, we collect and send diagn
### [Node.js](#tab/nodejs) - For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md)-- To review the source code, see the [Application Insights Beta GitHub repository](https://github.com/microsoft/ApplicationInsights-node.js/tree/beta).-- To install the npm package and check for updates, see the [`applicationinsights` npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page.
+- To review the source code, see the [Azure Monitor OpenTelemetry GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry).
+- To install the npm package and check for updates, see the [`@azure/monitor-opentelemetry` npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry) page.
- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js). - To enable usage experiences, [enable web or browser user monitoring](javascript.md).
azure-monitor Opentelemetry Nodejs Exporter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-nodejs-exporter.md
Title: Enable the Azure Monitor OpenTelemetry exporter for Node.js applications description: This article provides guidance on how to enable the Azure Monitor OpenTelemetry exporter for Node.js applications. Previously updated : 05/10/2023 Last updated : 09/12/2023 ms.devlang: javascript
function doWork(parent) {
#### Set the Application Insights connection string
-You can set the connection string either programmatically or by setting the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`. In the event that both have been set, the programmatic connection string will take precedence.
+You can set the connection string either programmatically or by setting the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`. If both have been set, the programmatic connection string takes precedence.
You can find your connection string in the Overview Pane of your Application Insights Resource.
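For example, here's a minimal sketch of setting the connection string programmatically, assuming the `@azure/monitor-opentelemetry-exporter` package; the connection string value is a placeholder you'd copy from the Overview Pane:

```typescript
// Minimal sketch (assumes @azure/monitor-opentelemetry-exporter).
// A connection string passed here takes precedence over the
// APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.
const { AzureMonitorTraceExporter } = require("@azure/monitor-opentelemetry-exporter");

const exporter = new AzureMonitorTraceExporter({
  // Placeholder value; copy the real string from your resource's Overview Pane.
  connectionString: "InstrumentationKey=00000000-0000-0000-0000-000000000000",
});
```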
As part of using Application Insights instrumentation, we collect and send diagn
## Set the Cloud Role Name and the Cloud Role Instance
-You might want to update the [Cloud Role Name](app-map.md#understand-the-cloud-role-name-within-the-context-of-an-application-map) and the Cloud Role Instance from the default values to something that makes sense to your team. They'll appear on the Application Map as the name underneath a node.
+You might want to update the [Cloud Role Name](app-map.md#understand-the-cloud-role-name-within-the-context-of-an-application-map) and the Cloud Role Instance from the default values to something that makes sense to your team. They appear on the Application Map as the name underneath a node.
Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md).
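Here's a minimal sketch of that mapping, assuming the standard OpenTelemetry JavaScript packages; the namespace, service, and instance values are placeholders:

```typescript
// Sketch: setting Cloud Role Name and Cloud Role Instance via Resource attributes.
// Cloud Role Name = service.namespace + service.name (falls back to service.name);
// Cloud Role Instance = service.instance.id. Values below are placeholders.
const { Resource } = require("@opentelemetry/resources");
const { SemanticResourceAttributes } = require("@opentelemetry/semantic-conventions");
const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");

const resource = new Resource({
  [SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace",
  [SemanticResourceAttributes.SERVICE_NAME]: "my-service",
  [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance-1",
});

const provider = new NodeTracerProvider({ resource });
provider.register();
```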
The following table represents the currently supported custom telemetry types:
You may want to collect metrics beyond what is collected by [instrumentation libraries](#instrumentation-libraries).
-The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios and you'll need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library.
+The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios, and you need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library.
The following table shows the recommended [aggregation types](../essentials/metrics-aggregation-explained.md#aggregation-types) for each of the OpenTelemetry Metric Instruments.
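As a sketch of using the Metric API, the instrument and attribute names below are illustrative, not from this article; a counter is typically visualized with the Sum aggregation, while a histogram maps to Average, Min, or Max:

```typescript
// Sketch: creating instruments with the OpenTelemetry Metric API.
// Instrument and attribute names here are illustrative only.
const { metrics } = require("@opentelemetry/api");

const meter = metrics.getMeter("example-meter");

// Counter: pick the Sum aggregation type in Metrics Explorer.
const requestCounter = meter.createCounter("request_count");
requestCounter.add(1, { route: "/home" });

// Histogram: pick Average, Min, or Max in Metrics Explorer.
const latencyHistogram = meter.createHistogram("request_latency_ms");
latencyHistogram.record(42, { route: "/home" });
```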
class SpanEnrichingProcessor {
#### Set the user ID or authenticated user ID
-You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by using the guidance below. User ID is an anonymous user identifier. Authenticated User ID is a known user identifier.
+You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by using the guidance in this section. User ID is an anonymous user identifier. Authenticated User ID is a known user identifier.
> [!IMPORTANT] > Consult applicable privacy laws before you set the Authenticated User ID.
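Here's a non-authoritative sketch of the pattern, assuming `enduser.id` is the attribute mapped to the authenticated user field; the processor name and hard-coded ID are placeholders. You'd register it with your tracer provider's `addSpanProcessor`:

```typescript
// Sketch: enriching spans with a user identifier in a span processor.
// Assumes enduser.id maps to the authenticated user field; the hard-coded
// ID below is a placeholder for a value from your own auth context.
class UserEnrichingSpanProcessor {
  forceFlush() { return Promise.resolve(); }
  shutdown() { return Promise.resolve(); }
  onStart(_span, _context) {}
  onEnd(span) {
    span.attributes["enduser.id"] = "<authenticated-user-id>";
  }
}
```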
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Title: Data Collection Basics of Azure Monitor Application Insights description: This article provides an overview of how to collect telemetry to send to Azure Monitor Application Insights. Previously updated : 07/07/2023 Last updated : 09/12/2023
azure-monitor Opentelemetry Python Opencensus Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-python-opencensus-migrate.md
Title: Migrating Azure Monitor Application Insights Python from OpenCensus to OpenTelemetry description: This article provides guidance on how to migrate from the Azure Monitor Application Insights Python SDK and OpenCensus exporter to OpenTelemetry. Previously updated : 08/01/2023 Last updated : 09/12/2023 ms.devlang: python
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
Title: Automate Application Insights with PowerShell | Microsoft Docs description: Automate creating and managing resources, alerts, and availability tests in PowerShell by using an Azure Resource Manager template. Previously updated : 03/22/2023 Last updated : 09/12/2023
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Title: Connection strings in Application Insights | Microsoft Docs description: This article shows how to use connection strings. Previously updated : 08/11/2023 Last updated : 09/12/2023
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md
Title: 'Design your Application Insights deployment: One vs. many resources?' description: Direct telemetry to different resources for development, test, and production stamps. Previously updated : 11/15/2022 Last updated : 09/12/2023
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
Title: Usage analysis with Application Insights | Azure Monitor description: Understand your users and what they do with your app. Previously updated : 07/10/2023 Last updated : 09/12/2023
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
description: Monitoring .NET Core/.NET Framework non-HTTP apps with Azure Monito
ms.devlang: csharp Previously updated : 06/23/2023 Last updated : 09/12/2023
azure-monitor Best Practices Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-alerts.md
Title: 'Azure Monitor best practices: Alerts and automated actions'
-description: Recommendations for deployment of Azure Monitor alerts and automated actions.
+ Title: Best practices for Azure Monitor alerts
+description: Provides a template for a Well-Architected Framework (WAF) article specific to Azure Monitor alerts.
-- Previously updated : 05/31/2023--++ Last updated : 09/04/2023+
-# Deploy Azure Monitor: Alerts and automated actions
-
-This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It provides guidance on alerts in Azure Monitor. Alerts proactively notify you of important data or patterns identified in your monitoring data. You can view alerts in the Azure portal. You can create alerts that:
--- Send a proactive notification.-- Initiate an automated action to attempt to remediate an issue.-
-## Alerting strategy
-
-An alerting strategy defines your organization's standards for:
--- The types of alert rules that you'll create for different scenarios.-- How you'll categorize and manage alerts after they're created.-- Automated actions and notifications that you'll take in response to alerts.-
-Defining an alert strategy assists you in defining the configuration of alert rules including alert severity and action groups.
-
-For factors to consider as you develop an alerting strategy, see [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy).
-
-## Alert rule types
-
-Alerts in Azure Monitor are created by alert rules that you must create. For guidance on recommended alert rules, see the monitoring documentation for each Azure service. Azure Monitor doesn't have any alert rules by default.
-
-Multiple types of alert rules are defined by the type of data they use. Each has different capabilities and a different cost. The basic strategy is to use the alert rule type with the lowest cost that provides the logic you require.
+# Best practices for Azure Monitor alerts
+This article provides architectural best practices for Azure Monitor alerts, alert processing rules, and action groups. The guidance is based on the five pillars of architecture excellence described in [Azure Well-Architected Framework](/azure/architecture/framework/).
-- [Activity log rules](alerts/activity-log-alerts.md). Creates an alert in response to a new activity log event that matches specified conditions. There's no cost to these alerts so they should be your first choice, although the conditions they can detect are limited. See [Create, view, and manage activity log alerts by using Azure Monitor](alerts/alerts-activity-log.md) for information on creating an activity log alert.-- [Metric alert rules](alerts/alerts-metric-overview.md). Creates an alert in response to one or more metric values exceeding a threshold. Metric alerts are stateful, which means that the alert will automatically close when the value drops below the threshold, and it will only send out notifications when the state changes. There's a cost to metric alerts, but it's often much less than log alerts. See [Create, view, and manage metric alerts by using Azure Monitor](alerts/alerts-metric.md) for information on creating a metric alert.-- [Log alert rules](alerts/alerts-unified-log.md). Creates an alert when the results of a schedule query match specified criteria. They're the most expensive of the alert rules, but they allow the most complex criteria. See [Create, view, and manage log alerts by using Azure Monitor](alerts/alerts-log.md) for information on creating a log query alert.-- [Application alerts](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability). Performs proactive performance and availability testing of your web application. You can perform a ping test at no cost, but there's a cost to more complex testing. See [Monitor the availability of any website](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) for a description of the different tests and information on creating them.
-## Alert severity
-Each alert rule defines the severity of the alerts that it creates based on the following table. Alerts in the Azure portal are grouped by level so that you can manage similar alerts together and quickly identify alerts that require the greatest urgency.
+## Reliability
+In the cloud, we acknowledge that failures happen. Instead of trying to prevent failures altogether, the goal is to minimize the effects of a single failing component. Use the following information to minimize failure of your Azure Monitor alert rule components.
-| Level | Name | Description |
-|:|:|:|
-| Sev 0 | Critical | Loss of service or application availability or severe degradation of performance. Requires immediate attention. |
-| Sev 1 | Error | Degradation of performance or loss of availability of some aspect of an application or service. Requires attention but not immediate. |
-| Sev 2 | Warning | A problem that doesn't include any current loss in availability or performance, although it has the potential to lead to more severe problems if unaddressed. |
-| Sev 3 | Informational | Doesn't indicate a problem but provides interesting information to an operator, such as successful completion of a regular process. |
-| Sev 4 | Verbose | Doesn't indicate a problem but provides detailed information that is verbose.
-Assess the severity of the condition each rule is identifying to assign an appropriate level. Define the types of issues you assign to each severity level and your standard response to each in your alerts strategy.
-## Action groups
+## Security
+Security is one of the most important aspects of any architecture. Azure Monitor provides features to employ both the principle of least privilege and defense-in-depth. Use the following information to maximize the security of Azure Monitor alerts.
-Automated responses to alerts in Azure Monitor are defined in [action groups](alerts/action-groups.md). An action group is a collection of one or more notifications and actions that are fired when an alert is triggered. A single action group can be used with multiple alert rules and contain one or more of the following items:
-- **Notifications**: Messages that notify operators and administrators that an alert was created.-- **Actions**: Automated processes that attempt to correct the detected issue.
-## Notifications
+## Cost optimization
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
-Notifications are messages sent to one or more users to notify them that an alert has been created. Because a single action group can be used with multiple alert rules, you should design a set of action groups for different sets of administrators and users who will receive the same sets of alerts. Use any of the following types of notifications depending on the preferences of your operators and your organizational standards:
--- Email-- SMS-- Push to Azure app-- Voice-- Email Azure Resource Manager role-
-## Actions
-
-Actions are automated responses to an alert. You can use the available actions for any scenario that they support, but the following sections describe how each action is typically used.
-
-### Automated remediation
-
-Use the following actions to attempt automated remediation of the issue identified by the alert:
--- **Automation runbook**: Start a built-in runbook or a custom runbook in Azure Automation. For example, built-in runbooks are available to perform such functions as restarting or scaling up a virtual machine.-- **Azure Functions**: Start an Azure function.-
-### ITSM and on-call management
--- **IT service management (ITSM)**: Use the [ITSM Connector]() to create work items in your ITSM tool based on alerts from Azure Monitor. You first configure the connector and then use the **ITSM** action in alert rules.-- **Webhooks**: Send the alert to an incident management system that supports webhooks such as PagerDuty and Splunk On-Call.-- **Secure webhook**: Integrate ITSM with Azure Active Directory Authentication.-
-## Minimize alert activity
+> [!NOTE]
+> See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
-You want to create alerts for any important information in your environment. But you don't want to create excessive alerts and notifications for issues that don't warrant them. To minimize your alert activity to ensure that critical issues are surfaced while you don't generate excess information and notifications for administrators, follow these guidelines:
-- See [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy) to determine whether a symptom is an appropriate candidate for alerting.-- Use the **Automatically resolve alerts** option in metric alert rules to resolve alerts when the condition has been corrected.-- Use the **Suppress alerts** option in log query alert rules to avoid creating multiple alerts for the same issue.-- Ensure that you use appropriate severity levels for alert rules so that high-priority issues can be analyzed together.-- Limit notifications for alerts with a severity of Warning or less because they don't require immediate attention.
-## Create alert rules at scale
+## Operational excellence
+Operational excellence refers to operations processes required keep a service running reliably in production. Use the following information to minimize the operational requirements for supporting Azure Monitor alerts.
-Typically, you'll want to alert on issues for all your critical Azure applications and resources. Use the following methods for creating alert rules at scale:
-- Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. For a list of Azure services that are currently supported for this feature, see [Monitoring at scale using metric alerts in Azure Monitor](alerts/alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor).-- For metric alert rules for Azure services that don't support multiple resources, use automation tools such as the Azure CLI and PowerShell with Resource Manager templates to create the same alert rule for multiple resources. For samples, see [Resource Manager template samples for metric alert rules in Azure Monitor](alerts/resource-manager-alerts-metric.md).-- To return data for multiple resources, write queries in log query alert rules. Use the **Split by dimensions** setting in the rule to create separate alerts for each resource.
-> [!NOTE]
-> Resource-centric log query alert rules currently in public preview allow you to use all resources in a subscription or resource group as a target for a log query alert.
+## Performance efficiency
+Performance efficiency is the ability of your workload to scale to meet the demands placed on it by users in an efficient manner.
+Alerts offer a high degree of performance efficiency without any design decisions.
-## Next steps
+## Next step
-[Optimize cost in Azure Monitor](best-practices-cost.md)
+- [Get best practices for a complete deployment of Azure Monitor](best-practices.md).
azure-monitor Azure Monitor Data Explorer Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-monitor-data-explorer-proxy.md
To run a cross-service query, you need:
## Function supportability
-Azure Monitor cross-service queries support functions for Application Insights, Log Analytics, Azure Data Explorer, and Azure Resource Graph.
+Azure Monitor cross-service queries support **only** `.show` functions for Application Insights, Log Analytics, Azure Data Explorer, and Azure Resource Graph.
This capability enables cross-cluster queries to reference an Azure Monitor, Azure Data Explorer, or Azure Resource Graph tabular function directly. The following commands are supported with the cross-service query:
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-monitor Vminsights Enable Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-hybrid.md
You can download the Dependency agent from these locations:
| File | OS | Version | SHA-256 | |:--|:--|:--|:--|
-| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.16.22650 | BE537D4396625ADD93B8C1D5AF098AE9D9472D8A20B2682B32920C5517F1C041 |
-| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.16.22650 | FF86D821BA845833C9FE5F6D5C8A5F7A60617D3AD7D84C75143F3E244ABAAB74 |
+| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.17.3860 | BA3D1CF76E2BCCE35815B0F62C0A18E84E0459B468066D0F80F56514A74E0BF6 |
+| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.17.3860 | 22538642730748F4AD8688D00C2919055825BA425BAAD3591D6EBE0021863617 |
## Install the Dependency agent on Windows
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
na Previously updated : 08/23/2023 Last updated : 09/13/2023
Azure NetApp Files backup is supported for the following regions:
* Qatar Central * South Africa North * South Central US
+* South India
* Southeast Asia
+* Sweden Central
* UAE North * UK South * West Europe
azure-netapp-files Enable Continuous Availability Existing SMB https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md
You can enable the SMB Continuous Availability (CA) feature when you [create a n
1. Reboot the Windows systems connecting to the existing SMB share. > [!NOTE]
- > Selecting the **Enable Continuous Availability** option alone does not automatically make the existing SMB sessions continuously available. After selecting the option, be sure to reboot the server for the change to take effect.
+ > Selecting the **Enable Continuous Availability** option alone does not automatically make the existing SMB sessions continuously available. After selecting the option, be sure to reboot the server immediately for the change to take effect.
1. Use the following command to verify that CA is enabled and used on the system that's mounting the volume:
You can enable the SMB Continuous Availability (CA) feature when you [create a n
If you know the server name, you can use the `-ServerName` parameter with the command. See the [Get-SmbConnection](/powershell/module/smbshare/get-smbconnection?view=windowsserver2019-ps&preserve-view=true) PowerShell command details.
-1. Once you enable SMB Continuous Availability, reboot the server for the change to take effect.
- ## Next steps * [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md)
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
You can enable preview features by adding:
The preceding sample enables `userDefinedTypes` and `extensibility`. The available experimental features include:
+- **assertions**: Should be enabled in tandem with the `testFramework` experimental feature flag for expected functionality. Allows you to author boolean assertions by using the `assert` keyword to compare the actual value of a parameter, variable, or resource name to an expected value. Assert statements can only be written directly within the Bicep file whose resources they reference.
- **extensibility**: Allows Bicep to use a provider model to deploy non-ARM resources. Currently, we only support a Kubernetes provider. See [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md).
- **sourceMapping**: Enables basic source mapping to map an error location returned in the ARM template layer back to the relevant location in the Bicep file.
- **resourceTypedParamsAndOutputs**: Enables the type for a parameter or output to be of type resource to make it easier to pass resource references between modules. This feature is only partially implemented. See [Simplifying resource referencing](https://github.com/azure/bicep/issues/2245).
- **symbolicNameCodegen**: Allows the ARM template layer to use a new schema to represent resources as an object dictionary rather than an array of objects. This feature improves the semantic equivalent of the Bicep and ARM templates, resulting in more reliable code generation. Enabling this feature has no effect on the Bicep layer's functionality.
+- **testFramework**: Should be enabled in tandem with the `assertions` experimental feature flag for expected functionality. Allows you to author client-side, offline unit tests as test blocks that reference Bicep files and mock deployment parameters in a separate `test.bicep` file by using the new `test` keyword. Run test blocks with the command `bicep test <filepath_to_file_with_test_blocks>`, which runs all `assert` statements in the Bicep files referenced by the test blocks.
- **userDefinedFunctions**: Allows you to define your own custom functions. See [User-defined functions in Bicep](./user-defined-functions.md).

## Next steps
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
To get the same data as a file of comma-separated values, download [tag-support.
> | expressRouteProviderPorts | No | No | > | expressRouteServiceProviders | No | No | > | firewallPolicies | Yes, see [note below](#network-limitations) | Yes |
+> | firewallPolicies / ruleCollectionGroups | No | No |
> | frontdoors | Yes, but limited (see [note below](#network-limitations)) | Yes | > | frontdoors / frontendEndpoints | Yes, but limited (see [note below](#network-limitations)) | No | > | frontdoors / frontendEndpoints / customHttpsConfiguration | Yes, but limited (see [note below](#network-limitations)) | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | internalPublicIpAddresses | No | No | > | ipGroups | Yes | Yes | > | loadBalancers | Yes | Yes |
+> | loadBalancers / backendAddressPools | No | No |
> | localNetworkGateways | Yes | Yes | > | natGateways | Yes | Yes | > | networkExperimentProfiles | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | networkManagers | Yes | Yes | > | networkProfiles | Yes | Yes | > | networkSecurityGroups | Yes | Yes |
+> | networkSecurityGroups / securityRules | No | No |
> | networkSecurityPerimeters | Yes | Yes | > | networkVirtualAppliances | Yes | Yes | > | networkWatchers | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | publicIPPrefixes | Yes | Yes | > | routeFilters | Yes | Yes | > | routeTables | Yes | Yes |
+> | routeTables / routes | No | No |
> | securityPartnerProviders | Yes | Yes | > | serviceEndpointPolicies | Yes | Yes | > | trafficManagerGeographicHierarchies | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | virtualNetworkGateways | Yes | Yes | > | virtualNetworks | Yes | Yes | > | virtualNetworks / privateDnsZoneLinks | No | No |
+> | virtualNetworks / subnets | No | No |
> | virtualNetworks / taggedTrafficConsumers | No | No | > | virtualNetworkTaps | Yes | Yes | > | virtualRouters | Yes | Yes |
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
bastion Connect Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-ip-address.md
description: Learn how to connect to your virtual machines using a specified pri
Previously updated : 08/23/2023 Last updated : 09/13/2023
IP-based connection lets you connect to your on-premises, non-Azure, and Azure v
* IP-based connection won't work with force tunneling over VPN, or when a default route is advertised over an ExpressRoute circuit. Azure Bastion requires access to the internet; force tunneling or a default route advertisement results in traffic blackholing.
-* Azure Active Directory authentication and custom ports and protocols aren't currently supported when connecting to a VM via native client.
+* Azure Active Directory authentication isn't supported for RDP connections. Azure Active Directory authentication is supported for SSH connections via the native client.
-* UDR is not supported on Bastion subnet, including with IP-based connection.
+* Custom ports and protocols aren't currently supported when connecting to a VM via native client.
+
+* UDR isn't supported on the Bastion subnet, including with IP-based connection.
## Prerequisites
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 23-09 | [5030329] | Servicing Stack Update LKG | 4.122 | Sep 12, 2023 | | Rel 23-09 | [5030504] | Servicing Stack Update LKG | 5.86 | Sep 12, 2023 | | Rel 23-09 | [5028264] | Servicing Stack Update LKG | 2.142 | Jul 11, 2023 |
+| Rel 23-09 | [4494175] | January '20 Microcode | 5.86 | Sep 1, 2020 |
+| Rel 23-09 | [4494174] | January '20 Microcode | 6.62 | Sep 1, 2020 |
| Rel 23-09 | 5030369 | Servicing Stack Update | 7.31 | | | Rel 23-09 | 5030505 | Servicing Stack Update | 6.62 | |
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
TODO:
- Should we be using a newer API version? -->

   ```bash
- token=$(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s | jq -r ".accessToken")
+ token=$(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s | jq -r ".access_token")
   curl https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"Bearer $token" -s | jq
   ```
TODO:
Uri = "$env:MSI_ENDPOINT`?resource=https://management.core.windows.net/" Headers = @{Metadata='true'} }
- $token= ((Invoke-WebRequest @parameters ).content | ConvertFrom-Json).accessToken
+ $token= ((Invoke-WebRequest @parameters ).content | ConvertFrom-Json).access_token
$parameters = @{ Uri = 'https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview' Headers = @{Authorization = "Bearer $token"}
again.
Bash:

   ```bash
- TOKEN=$(az account get-access-token --resource "https://management.azure.com/" | jq -r ".accessToken")
+ TOKEN=$(az account get-access-token --resource "https://management.azure.com/" | jq -r ".access_token")
   curl -X DELETE https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"Bearer $TOKEN"
   ```
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md
Chat conversations happen within **chat threads**. Chat threads have the followi
Typically, the thread creator and participants have the same level of access to the thread and can execute all related operations available in the SDK, including deleting it. Participants don't have write access to messages sent by other participants, which means only the message sender can update or delete their sent messages. If another participant tries to do that, they get an error.

### Chat Data
-Azure Communication Services stores chat messages for 90 days. Chat thread participants can use `ListMessages` to view message history for a particular thread. However, the API does not return messages once the 90 day period has passed. Users that are removed from a chat thread are able to view previous message history for 90 days but cannot send or receive new messages. To learn more about data being stored in Azure Communication Services chat service, refer to the [data residency and privacy page](../privacy.md).
+Azure Communication Services stores chat messages indefinitely until they're deleted by the customer. Chat thread participants can use `ListMessages` to view message history for a particular thread. Users who are removed from a chat thread can view previous message history but can't send or receive new messages. Accidentally deleted messages aren't recoverable by the system. To learn more about data being stored in Azure Communication Services chat service, refer to the [data residency and privacy page](../privacy.md).
+
+In 2024, new functionality will be introduced that requires customers to choose between indefinite message retention and automatic deletion after 90 days. Existing messages remain unaffected.
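As a hedged illustration of reading a thread's history with `ListMessages`, here's a sketch using the JavaScript Chat SDK; the endpoint, user access token, and thread ID are placeholders:

```typescript
// Sketch: listing message history for a thread with the JS Chat SDK.
// endpoint, userAccessToken, and threadId are placeholders.
const { ChatClient } = require("@azure/communication-chat");
const { AzureCommunicationTokenCredential } = require("@azure/communication-common");

async function printThreadHistory(endpoint, userAccessToken, threadId) {
  const chatClient = new ChatClient(endpoint, new AzureCommunicationTokenCredential(userAccessToken));
  const threadClient = chatClient.getChatThreadClient(threadId);
  // listMessages returns an async iterator over the thread's history.
  for await (const message of threadClient.listMessages()) {
    console.log(`${message.createdOn.toISOString()}: ${message.content?.message ?? ""}`);
  }
}
```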
For customers that use Virtual appointments, refer to our Teams Interoperability [user privacy](../interop/guest/privacy.md#chat-storage) for storage of chat messages in Teams meetings.
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
This sandbox setup is designed to help developers begin building the application
|Send typing indicator|Chat thread|10|30|

### Chat storage
-Chat messages are stored for 90 days. Submit [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md) if you require storage for longer time period. If the time period is less than 90 days for chat messages, use the delete chat thread APIs.
+Azure Communication Services stores chat messages indefinitely until they're deleted by the customer.
+
+Beginning in CY24 Q1, customers must choose between indefinite message retention or automatic deletion after 90 days. Existing messages remain unaffected, but customers can opt for a 90-day retention period if desired.
+
+> [!NOTE]
+> Accidentally deleted messages are not recoverable by the system.
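For illustration, here's a minimal sketch of deleting a chat thread with the JavaScript Chat SDK; deletion is permanent, and the endpoint, user access token, and thread ID are placeholders:

```typescript
// Sketch: deleting a chat thread (and its messages) with the JS Chat SDK.
// Deletion is permanent; endpoint, userAccessToken, and threadId are placeholders.
const { ChatClient } = require("@azure/communication-chat");
const { AzureCommunicationTokenCredential } = require("@azure/communication-common");

async function deleteThread(endpoint, userAccessToken, threadId) {
  const chatClient = new ChatClient(endpoint, new AzureCommunicationTokenCredential(userAccessToken));
  await chatClient.deleteChatThread(threadId);
}
```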
## Voice and video calling
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
Alphanumeric sender ID is not capable of receiving inbound messages or STOP mess
Short Code availability is currently restricted to paid Azure subscriptions that have a billing address in the United States. Short Codes cannot be acquired on trial accounts or using Azure free credits. For more details, check out our [subscription eligibility page](../numbers/sub-eligibility-number-capability.md). ### Can you text to a toll-free number from a short code?
-No. Texting to a toll-free number from a short code is not supported. You also wont be able to receive a message from a toll-free number to a short code.
+ACS toll-free numbers are enabled to receive messages from short codes. However, short codes typically aren't enabled to send messages to toll-free numbers. If your messages from short codes to ACS toll-free numbers are failing, check with your short code provider whether the short code is enabled to send messages to toll-free numbers.
### How should a short code be formatted? Short codes do not fall under E.164 formatting guidelines and do not have a country code, or a "+" sign prefix. In the SMS API request, your short code should be passed as the 5-6 digit number you see in your short codes blade without any prefix.
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
To help you troubleshoot certain types of issues, you may be asked for any of th
* **Call ID**: This ID is used to identify Communication Services calls. * **SMS message ID**: This ID is used to identify SMS messages. * **Short Code Program Brief ID**: This ID is used to identify a short code program brief application.
+* **Toll-free verification campaign brief ID**: This ID is used to identify a toll-free verification campaign brief application.
* **Email message ID**: This ID is used to identify Send Email requests. * **Correlation ID**: This ID is used to identify requests made using Call Automation. * **Call logs**: These logs contain detailed information can be used to troubleshoot calling and network issues.
The program brief ID can be found on the [Azure portal](https://portal.azure.com
:::image type="content" source="./media/short-code-trouble-shooting.png" alt-text="Screenshot showing a short code program brief ID."::: +
+## Access your toll-free verification campaign brief ID
+The campaign brief ID can be found on the [Azure portal](https://portal.azure.com) in the Regulatory Documents blade.
++ ## Access your email operation ID
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/network-requirements.md
Previously updated : 06/30/2021 Last updated : 09/12/2023
The following bandwidth requirements are for the native Windows, Android, and iO
## Firewall configuration
-Communication Services connections require internet connectivity to specific ports and IP addresses to deliver high-quality multimedia experiences. Without access to these ports and IP addresses, Communication Services can still work. The optimal experience is provided when the recommended ports and IP ranges are open.
+Communication Services connections require internet connectivity to specific ports and IP addresses to deliver high-quality multimedia experiences. Without access to these ports and IP addresses, Communication Services won't work properly. The following IP ranges and allowlisted domains must be enabled:
| Category | IP ranges or FQDN | Ports | | :-- | :-- | :-- |
confidential-computing Choose Confidential Containers Offerings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/choose-confidential-containers-offerings.md
Title: Choose container offerings for confidential computing description: How to choose the right confidential container offerings to meet your security, isolation and developer needs.-+ Previously updated : 11/01/2021- Last updated : 9/12/2023+
Azure confidential computing offers multiple types of containers with varying ti
Confidential containers also help with code protection through encryption. You can create hardware-based assurances and hardware root of trust. You can also lower your attack surface area with confidential containers.
-The diagram below will guide different offerings in this portfolio
--- ## Links to container compute offerings
-**Confidential VM worker nodes on AKS)** supporting full AKS features with node level VM based Trusted Execution Environment (TEE). Also support remote guest attestation. [Get started with CVM worker nodes with a lift and shift workload to CVM node pool.](../aks/use-cvm.md)
+**Confidential VM worker nodes on AKS** support full AKS features with a node-level, VM-based Trusted Execution Environment (TEE), and also support remote guest attestation. [Get started with CVM worker nodes with a lift and shift workload to CVM node pool.](../aks/use-cvm.md)
-**Unmodified containers with serverless offering** [confidential containers on Azure Container Instance (ACI)](./confidential-containers.md#vm-isolated-confidential-containers-on-azure-container-instances-acipublic-preview) supporting existing Linux containers with remote guest attestation flow.
+**Unmodified containers with serverless offering** [confidential containers on Azure Container Instance (ACI)](./confidential-containers.md#vm-isolated-confidential-containers-on-azure-container-instances-aci) supporting existing Linux containers with remote guest attestation flow.
**Unmodified containers with Intel SGX** support higher programming languages on Intel SGX through the Azure Partner ecosystem of OSS projects. For more information, see the [unmodified containers deployment flow and samples](./confidential-containers.md).
confidential-computing Confidential Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers.md
Title: Confidential containers on Azure description: Learn about unmodified container support with confidential containers. -+ Previously updated : 3/1/2023- Last updated : 9/12/2023+
Below are the qualities of confidential containers:
- Provides strong assurances of data confidentiality, code integrity and data integrity in a cloud environment with hardware based confidential computing offerings - Helps isolate your containers from other container groups/pods, as well as VM node OS kernel
-## VM Isolated Confidential containers on Azure Container Instances (ACI) - Public preview
+## VM Isolated Confidential containers on Azure Container Instances (ACI)
[Confidential containers on ACI](../container-instances/container-instances-confidential-overview.md) enables fast and easy deployment of containers natively in Azure and with the ability to protect data and code in use thanks to AMD EPYC™ processors with confidential computing capabilities. This is because your container(s) runs in a hardware-based and attested Trusted Execution Environment (TEE) without the need to adopt a specialized programming model and without infrastructure management overhead. With this launch you get: 1. Full guest attestation, which reflects the cryptographic measurement of all hardware and software components running within your Trusted Computing Base (TCB). 2. Tooling to generate policies that will be enforced in the Trusted Execution Environment.
confidential-computing Virtual Machine Solutions Sgx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-sgx.md
Previously updated : 12/20/2021 Last updated : 9/12/2023
Under **properties**, you also have to specify an image under **storageProfile**
"version": "latest" }, "20_04-lts-gen2": {
- "offer": "UbuntuServer",
+ "offer": "0001-com-ubuntu-server-focal",
"publisher": "Canonical", "sku": "20_04-lts-gen2", "version": "latest" } "22_04-lts-gen2": {
- "offer": "UbuntuServer",
+ "offer": "0001-com-ubuntu-server-jammy",
"publisher": "Canonical", "sku": "22_04-lts-gen2", "version": "latest"
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 09/06/2023 Last updated : 09/13/2023 # Azure Policy built-in definitions for Azure Container Instances
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry
description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
container-registry Tutorial Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-artifact-cache.md
+
+ Title: Artifact Cache - Overview
+description: An overview of the Artifact Cache feature, its limitations, and the benefits of enabling the feature in your registry.
+ Last updated : 04/19/2022++
+# Artifact Cache - Overview
+
+The Artifact Cache feature allows users to cache container images in a private container registry. Artifact Cache is available in *Basic*, *Standard*, and *Premium* [service tiers](container-registry-skus.md).
+
+This article is part one in a six-part tutorial series. The tutorial covers:
+
+> [!div class="checklist"]
+1. [Artifact Cache](tutorial-artifact-cache.md)
+2. [Enable Artifact Cache - Azure portal](tutorial-enable-artifact-cache.md)
+3. [Enable Artifact Cache with authentication - Azure portal](tutorial-enable-artifact-cache-auth.md)
+4. [Enable Artifact Cache - Azure CLI](tutorial-enable-artifact-cache-cli.md)
+5. [Enable Artifact Cache with authentication - Azure CLI](tutorial-enable-artifact-cache-auth-cli.md)
+6. [Troubleshooting guide for Artifact Cache](tutorial-troubleshoot-artifact-cache.md)
+
+## Artifact Cache
+
+Artifact Cache enables you to cache container images from public and private repositories.
+
+Implementing Artifact Cache provides the following benefits:
+
+***More reliable pull operations:*** Caching container images in ACR makes pulls of those images faster and more reliable. Because Microsoft manages the Azure network, cached pulls also benefit from geo-replication and availability zone support.
+
+***Private networks:*** Cached registries are available on private networks. Therefore, users can configure their firewall to meet compliance standards.
+
+***Ensuring upstream content is delivered***: All registries, especially public ones like Docker Hub, impose anonymous pull limits to ensure they can provide services to everyone. Artifact Cache lets users pull images from the local ACR instead of the upstream registry, so upstream content is still delivered while cached pulls don't count against the upstream pull limits.
+
+## Terminology
+
+- Cache Rule - A Cache Rule is a rule you can create to pull artifacts from a supported repository into your cache.
+ - A cache rule contains four parts:
+
+ 1. Rule Name - The name of your cache rule. For example, `Hello-World-Cache`.
+
+ 2. Source - The name of the Source Registry.
+
+ 3. Repository Path - The source path of the repository to find and retrieve artifacts you want to cache. For example, `docker.io/library/hello-world`.
+
+ 4. New ACR Repository Namespace - The name of the new repository path to store artifacts. For example, `hello-world`. The Repository can't already exist inside the ACR instance.
+
+- Credentials
+  - Credentials are a username and password pair for the source registry. You require credentials to authenticate with a public or private repository. Credentials contain four parts:
+
+ 1. Credentials - The name of your credentials.
+
+ 2. Source registry Login Server - The login server of your source registry.
+
+ 3. Source Authentication - The key vault locations to store credentials.
+
+   4. Username and Password secrets - The secrets containing the username and password.
+
+## Upstream support
+
+Artifact Cache currently supports the following upstream registries:
+
+| Upstream registries | Support | Availability |
+| | | -- |
+| Docker | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
+| ECR Public | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
+| GitHub Container Registry | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| Nvidia | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
+| Quay | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| registry.k8s.io | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
+
+## Limitations
+
+- Artifact Cache feature doesn't support Customer managed key (CMK) enabled registries.
+
+- Caching occurs only after at least one image pull of the available container image completes. For every new image available, a new image pull must complete. Artifact Cache doesn't automatically pull new tags of images when a new tag is available. This capability is on the roadmap but isn't supported in this release.
+
+- Artifact Cache supports a maximum of 1,000 cache rules.
+
+## Next steps
+
+* To enable Artifact Cache using the Azure portal advance to the next article: [Enable Artifact Cache](tutorial-enable-artifact-cache.md).
+
+<!-- LINKS - External -->
+
+[docker-rate-limit]:https://aka.ms/docker-rate-limit
container-registry Tutorial Enable Artifact Cache Auth Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-artifact-cache-auth-cli.md
+
+ Title: Enable Artifact Cache with authentication - Azure CLI
+description: Learn how to enable Artifact Cache with authentication using Azure CLI.
++ Last updated : 06/17/2022+++
+# Enable Artifact Cache with authentication - Azure CLI
+
+This article is part five of a six-part tutorial series. [Part one](tutorial-artifact-cache.md) provides an overview of Artifact Cache, its features, benefits, and limitations. In [part two](tutorial-enable-artifact-cache.md), you learn how to enable the Artifact Cache feature by using the Azure portal. In [part three](tutorial-enable-artifact-cache-cli.md), you learn how to enable the Artifact Cache feature by using the Azure CLI. In [part four](tutorial-enable-artifact-cache-auth.md), you learn how to enable the Artifact Cache feature with authentication by using the Azure portal.
+
+This article walks you through the steps of enabling Artifact Cache with authentication by using the Azure CLI. You have to use a credential set to make an authenticated pull or to access a private repository.
+
+## Prerequisites
+
+* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.46.0 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI].
+* You have an existing Key Vault to store credentials. Learn more about [creating and storing credentials in a Key Vault.][create-and-store-keyvault-credentials]
+* You can set and retrieve secrets from your Key Vault. Learn more about [set and retrieve a secret from Key Vault.][set-and-retrieve-a-secret]
+
+## Configure Artifact Cache with authentication - Azure CLI
+
+### Create a Credential Set - Azure CLI
+
+Before configuring a Credential Set, you have to create and store secrets in Azure Key Vault and retrieve them from the Key Vault. Learn more about [creating and storing credentials in a Key Vault][create-and-store-keyvault-credentials] and how to [set and retrieve a secret from Key Vault][set-and-retrieve-a-secret].
+
+1. Run the [az acr credential-set create][az-acr-credential-set-create] command to create a credential set.
+
+   - For example, to create a credential set for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+    az acr credential-set create \
+ -r MyRegistry \
+ -n MyRule \
+ -l docker.io \
+ -u https://MyKeyvault.vault.azure.net/secrets/usernamesecret \
+ -p https://MyKeyvault.vault.azure.net/secrets/passwordsecret
+ ```
+
+2. Run the [az acr credential-set update][az-acr-credential-set-update] command to update the username or password Key Vault secret ID on a credential set.
+
+   - For example, to update the username or password Key Vault secret ID on a credential set for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr credential-set update -r MyRegistry -n MyRule -p https://MyKeyvault.vault.azure.net/secrets/newsecretname
+ ```
+
+3. Run the [az acr credential-set show][az-acr-credential-set-show] command to show a credential set.
+
+ - For example, to show a credential set for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr credential-set show -r MyRegistry -n MyCredSet
+ ```
+
+### Create a cache rule with a Credential Set - Azure CLI
+
+1. Run [az acr cache create][az-acr-cache-create] command to create a cache rule.
+
+ - For example, to create a cache rule with a credential set for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache create -r MyRegistry -n MyRule -s docker.io/library/ubuntu -t ubuntu -c MyCredSet
+ ```
+
+2. Run [az acr cache update][az-acr-cache-update] command to update the credential set on a cache rule.
+
+ - For example, to update the credential set on a cache rule for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache update -r MyRegistry -n MyRule -c NewCredSet
+ ```
+
+ - For example, to remove a credential set from an existing cache rule for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache update -r MyRegistry -n MyRule --remove-cred-set
+ ```
+
+3. Run [az acr cache show][az-acr-cache-show] command to show a cache rule.
+
+ - For example, to show a cache rule for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache show -r MyRegistry -n MyRule
+ ```
+
+### Assign permissions to Key Vault
+
+1. Get the principal ID of the system identity used to access the Key Vault.
+
+ ```azurecli-interactive
+    PRINCIPAL_ID=$(az acr credential-set show \
+ -n MyCredSet \
+ -r MyRegistry \
+ --query 'identity.principalId' \
+ -o tsv)
+ ```
+
+2. Run the [az keyvault set-policy][az-keyvault-set-policy] command to assign access to the Key Vault before pulling the image.
+
+   - For example, to assign permissions for the credential set to access the Key Vault secret:
+
+ ```azurecli-interactive
+ az keyvault set-policy --name MyKeyVault \
+ --object-id $PRINCIPAL_ID \
+ --secret-permissions get
+ ```
+
+### Pull your Image
+
+1. Pull the image from your cache by using the Docker CLI with the registry login server name, repository name, and desired tag.
+
+ - For example, to pull the image from the repository `hello-world` with its desired tag `latest` for a given registry login server `myregistry.azurecr.io`.
+
+ ```azurecli-interactive
+ docker pull myregistry.azurecr.io/hello-world:latest
+ ```
+
+## Clean up the resources
+
+1. Run [az acr cache list][az-acr-cache-list] command to list the cache rules in the Azure Container Registry.
+
+ - For example, to list the cache rules for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache list -r MyRegistry
+ ```
+
+2. Run [az acr cache delete][az-acr-cache-delete] command to delete a cache rule.
+
+ - For example, to delete a cache rule for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache delete -r MyRegistry -n MyRule
+ ```
+
+3. Run the [az acr credential-set list][az-acr-credential-set-list] command to list the credential sets in an Azure Container Registry.
+
+ - For example, to list the credential sets for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr credential-set list -r MyRegistry
+ ```
+
+4. Run the [az acr credential-set delete][az-acr-credential-set-delete] command to delete a credential set.
+
+ - For example, to delete a credential set for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr credential-set delete -r MyRegistry -n MyCredSet
+ ```
+
+## Next steps
+
+* Advance to the [next article](tutorial-troubleshoot-artifact-cache.md) to walk through the troubleshooting guide for Artifact Cache.
+
+<!-- LINKS - External -->
+[create-and-store-keyvault-credentials]: ../key-vault/secrets/quick-create-cli.md#add-a-secret-to-key-vault
+[set-and-retrieve-a-secret]: ../key-vault/secrets/quick-create-cli.md#retrieve-a-secret-from-key-vault
+[az-keyvault-set-policy]: ../key-vault/general/assign-access-policy.md#assign-an-access-policy
+[Install Azure CLI]: /cli/azure/install-azure-cli
+[Azure Cloud Shell]: /azure/cloud-shell/quickstart
+[az-acr-cache-create]:/cli/azure/acr/cache#az-acr-cache-create
+[az-acr-cache-show]:/cli/azure/acr/cache#az-acr-cache-show
+[az-acr-cache-list]:/cli/azure/acr/cache#az-acr-cache-list
+[az-acr-cache-delete]:/cli/azure/acr/cache#az-acr-cache-delete
+[az-acr-cache-update]:/cli/azure/acr/cache#az-acr-cache-update
+[az-acr-credential-set-create]:/cli/azure/acr/credential-set#az-acr-credential-set-create
+[az-acr-credential-set-update]:/cli/azure/acr/credential-set#az-acr-credential-set-update
+[az-acr-credential-set-show]: /cli/azure/acr/credential-set#az-acr-credential-set-show
+[az-acr-credential-set-list]: /cli/azure/acr/credential-set#az-acr-credential-set-list
+[az-acr-credential-set-delete]: /cli/azure/acr/credential-set#az-acr-credential-set-delete
container-registry Tutorial Enable Artifact Cache Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-artifact-cache-auth.md
+
+ Title: Enable Artifact Cache with authentication - Azure portal
+description: Learn how to enable Artifact Cache with authentication using Azure portal.
+ Last updated : 04/19/2022+++
+# Enable Artifact Cache with authentication - Azure portal
+
+This article is part four of a six-part tutorial series. [Part one](tutorial-artifact-cache.md) provides an overview of Artifact Cache, its features, benefits, and limitations. In [part two](tutorial-enable-artifact-cache.md), you learn how to enable the Artifact Cache feature by using the Azure portal. In [part three](tutorial-enable-artifact-cache-cli.md), you learn how to enable the Artifact Cache feature by using the Azure CLI.
+
+This article walks you through the steps of enabling Artifact Cache with authentication by using the Azure portal. You must use a credential set to make an authenticated pull or to access a private repository.
+
+## Prerequisites
+
+* Sign in to the [Azure portal](https://ms.portal.azure.com/).
+* You have an existing Key Vault to store credentials. Learn more about [creating and storing credentials in a Key Vault][create-and-store-keyvault-credentials].
+* Your existing key vaults are configured without RBAC controls (they use access policies).
+
+## Configure Artifact Cache with authentication - Azure portal
+
+Follow these steps to create a cache rule in the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your Azure Container Registry.
+
+2. In the side menu, under **Services**, select **Cache**.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/cache-preview-01.png" alt-text="Screenshot for Registry cache.":::
++
+3. Select **Create Rule**.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/cache-blade-02.png" alt-text="Screenshot for Create Rule.":::
++
+4. A window for **New cache rule** appears.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/new-cache-rule-auth-03.png" alt-text="Screenshot for new Cache Rule.":::
++
+5. Enter the **Rule name**.
+
+6. Select **Source** Registry from the dropdown menu.
+
+7. Enter the **Repository Path** to the artifacts you want to cache.
+
+8. To add authentication to the repository, select the **Authentication** checkbox.
+
+9. Choose **Create new credentials** to create a new set of credentials to store the username and password for your source registry. Learn how to [create new credentials](tutorial-enable-artifact-cache-auth.md#create-new-credentials).
+
+10. If you already have credentials, select them from the **Select credentials** drop-down menu.
+
+11. Under **Destination**, enter the name of the **New ACR Repository Namespace** to store cached artifacts.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/save-cache-rule-04.png" alt-text="Screenshot to save Cache Rule.":::
++
+12. Select **Save**.
+
+13. Pull the image from your cache by using the Docker command with the registry login server name, repository name, and desired tag.
+
+ - For example, to pull the image from the repository `hello-world` with its desired tag `latest` for a given registry login server `myregistry.azurecr.io`.
+
+ ```azurecli-interactive
+ docker pull myregistry.azurecr.io/hello-world:latest
+ ```
+
+### Create new credentials
+
+Before configuring a credential set, you need to create and store secrets in Azure Key Vault and be able to retrieve them. Learn more about how to [create and store credentials in a Key Vault][create-and-store-keyvault-credentials] and how to [set and retrieve a secret from Key Vault][set-and-retrieve-a-secret].
+
+1. Navigate to **Credentials** > **Add credential set** > **Create new credentials**.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/add-credential-set-05.png" alt-text="Screenshot for adding credential set.":::
++
+ :::image type="content" source="./media/container-registry-artifact-cache/create-credential-set-06.png" alt-text="Screenshot for create new credential set.":::
++
+1. Enter a **Name** for the new credentials for your source registry.
+
+1. Select a **Source Authentication** method. Artifact Cache currently supports **Select from Key Vault** and **Enter secret URIs**.
+
+1. For the **Select from Key Vault** option, learn more about [creating credentials using Key Vault][create-and-store-keyvault-credentials].
+
+1. Select **Create**.
+
+## Next steps
+
+* Advance to the [next article](tutorial-enable-artifact-cache-cli.md) to enable the Artifact Cache using Azure CLI.
+
+<!-- LINKS - External -->
+[create-and-store-keyvault-credentials]: ../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault
+[set-and-retrieve-a-secret]: ../key-vault/secrets/quick-create-portal.md#retrieve-a-secret-from-key-vault
container-registry Tutorial Enable Artifact Cache Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-artifact-cache-cli.md
+
+ Title: Enable Artifact Cache - Azure CLI
+description: Learn how to enable Registry Cache in your Azure Container Registry using Azure CLI.
++ Last updated : 06/17/2022+++
+# Enable Artifact Cache - Azure CLI
+
+This article is part three of a six-part tutorial series. [Part one](tutorial-artifact-cache.md) provides an overview of Artifact Cache, its features, benefits, and limitations. In [part two](tutorial-enable-artifact-cache.md), you learn how to enable the Artifact Cache feature by using the Azure portal. This article walks you through the steps of enabling Artifact Cache by using the Azure CLI without authentication.
+
+## Prerequisites
+
+* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.46.0 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI].
+
+## Configure Artifact Cache - Azure CLI
+
+Follow these steps to create a cache rule without using a credential set.
+
+### Create a Cache rule
+
+1. Run the [az acr cache create][az-acr-cache-create] command to create a cache rule.
+
+ - For example, to create a cache rule without a credential set for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache create -r MyRegistry -n MyRule -s docker.io/library/ubuntu -t ubuntu
+ ```
+
+2. Run the [az acr cache show][az-acr-cache-show] command to show a cache rule.
+
+ - For example, to show a cache rule for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache show -r MyRegistry -n MyRule
+ ```
+
+### Pull your image
+
+1. Pull the image from your cache by using the Docker command with the registry login server name, repository name, and desired tag.
+
+ - For example, to pull the image from the repository `hello-world` with its desired tag `latest` for a given registry login server `myregistry.azurecr.io`.
+
+ ```azurecli-interactive
+ docker pull myregistry.azurecr.io/hello-world:latest
+ ```
+
+## Clean up the resources
+
+1. Run the [az acr cache list][az-acr-cache-list] command to list the cache rules in the Azure Container Registry.
+
+ - For example, to list the cache rules for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache list -r MyRegistry
+ ```
+
+2. Run the [az acr cache delete][az-acr-cache-delete] command to delete a cache rule.
+
+ - For example, to delete a cache rule for a given `MyRegistry` Azure Container Registry.
+
+ ```azurecli-interactive
+ az acr cache delete -r MyRegistry -n MyRule
+ ```
+
+## Next steps
+
+* To enable Artifact Cache with authentication by using the Azure CLI, advance to the next article: [Enable Artifact Cache with authentication - Azure CLI](tutorial-enable-artifact-cache-auth-cli.md).
+
+<!-- LINKS - External -->
+[Install Azure CLI]: /cli/azure/install-azure-cli
+[Azure Cloud Shell]: /azure/cloud-shell/quickstart
+[az-acr-cache-create]:/cli/azure/acr/cache#az-acr-cache-create
+[az-acr-cache-show]:/cli/azure/acr/cache#az-acr-cache-show
+[az-acr-cache-list]:/cli/azure/acr/cache#az-acr-cache-list
+[az-acr-cache-delete]:/cli/azure/acr/cache#az-acr-cache-delete
container-registry Tutorial Enable Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-artifact-cache.md
+
+ Title: Enable Artifact Cache - Azure portal
+description: Learn how to enable Registry Cache in your Azure Container Registry using Azure portal.
+ Last updated : 04/19/2022+++
+# Enable Artifact Cache - Azure portal
+
+This article is part two of a six-part tutorial series. [Part one](tutorial-artifact-cache.md) provides an overview of Artifact Cache, its features, benefits, and limitations. This article walks you through the steps of enabling Artifact Cache by using the Azure portal without authentication.
+
+## Prerequisites
+
+* Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+## Configure Artifact Cache - Azure portal
+
+Follow these steps to create a cache rule in the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your Azure Container Registry.
+
+2. In the side menu, under **Services**, select **Cache**.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/cache-preview-01.png" alt-text="Screenshot for Registry cache.":::
++
+3. Select **Create Rule**.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/cache-blade-02.png" alt-text="Screenshot for Create Rule.":::
++
+4. A window for **New cache rule** appears.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/new-cache-rule-03.png" alt-text="Screenshot for new Cache Rule.":::
++
+5. Enter the **Rule name**.
+
+6. Select **Source** Registry from the dropdown menu.
+
+7. Enter the **Repository Path** to the artifacts you want to cache.
+
+8. You can skip **Authentication** if you aren't accessing a private repository or performing an authenticated pull.
+
+9. Under **Destination**, enter the name of the **New ACR Repository Namespace** to store cached artifacts.
++
+ :::image type="content" source="./media/container-registry-artifact-cache/save-cache-rule-04.png" alt-text="Screenshot to save Cache Rule.":::
++
+10. Select **Save**.
+
+11. Pull the image from your cache by using the Docker command with the registry login server name, repository name, and desired tag.
+
+ - For example, to pull the image from the repository `hello-world` with its desired tag `latest` for a given registry login server `myregistry.azurecr.io`.
+
+ ```azurecli-interactive
+ docker pull myregistry.azurecr.io/hello-world:latest
+ ```
+
+## Next steps
+
+* Advance to the [next article](tutorial-enable-artifact-cache-cli.md) to enable the Artifact Cache using Azure CLI.
+
+<!-- LINKS - External -->
+[create-and-store-keyvault-credentials]:../key-vault/secrets/quick-create-portal.md
container-registry Tutorial Troubleshoot Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-troubleshoot-artifact-cache.md
+
+ Title: Troubleshoot Artifact Cache
+description: Learn how to troubleshoot the most common problems for a registry enabled with the Artifact Cache feature.
+ Last updated : 04/19/2022+++
+# Troubleshoot guide for Artifact Cache
+
+This article is part six in a six-part tutorial series. [Part one](tutorial-artifact-cache.md) provides an overview of Artifact Cache, its features, benefits, and limitations. In [part two](tutorial-enable-artifact-cache.md), you learn how to enable Artifact Cache feature by using the Azure portal. In [part three](tutorial-enable-artifact-cache-cli.md), you learn how to enable Artifact Cache feature by using the Azure CLI. In [part four](tutorial-enable-artifact-cache-auth.md), you learn how to enable Artifact Cache feature with authentication by using Azure portal. In [part five](tutorial-enable-artifact-cache-auth-cli.md), you learn how to enable Artifact Cache feature with authentication by using Azure CLI.
+
+This article helps you troubleshoot problems you might encounter when attempting to use Artifact Cache.
+
+## Symptoms and Causes
+
+You might encounter one or more of the following issues:
+
+- Cached images don't appear in a live repository
+ - [Cached images don't appear in a live repository](tutorial-troubleshoot-artifact-cache.md#cached-images-dont-appear-in-a-live-repository)
+
+- Credential set has an unhealthy status
+ - [Unhealthy Credential Set](tutorial-troubleshoot-artifact-cache.md#unhealthy-credential-set)
+
+- Unable to create a cache rule
+ - [Cache rule Limit](tutorial-troubleshoot-artifact-cache.md#cache-rule-limit)
+
+## Potential Solutions
+
+## Cached images don't appear in a live repository
+
+If cached images don't show up in your repository in ACR, we recommend verifying the repository path. An incorrect repository path prevents cached images from appearing in your ACR repository.
+
+- The Login server for Docker Hub is `docker.io`.
+- The Login server for Microsoft Artifact Registry is `mcr.microsoft.com`.
+
+The Azure portal autofills these fields for you. However, many Docker repositories begin with `library/` in their path. For example, in order to cache the `hello-world` repository, the correct repository path is `docker.io/library/hello-world`.
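+
+As a quick check, you can create such a rule from the CLI with the full `library/` path (a sketch reusing the `az acr cache create` command from earlier in this series; `HelloWorldRule` is a placeholder rule name):
+
+```azurecli-interactive
+az acr cache create -r MyRegistry -n HelloWorldRule -s docker.io/library/hello-world -t hello-world
+```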
+
+## Unhealthy Credential Set
+
+A credential set is a pair of Key Vault secrets that operate as a username and password for a private repository. Unhealthy credential sets are often the result of these secrets no longer being valid. In the Azure portal, you can select the credential set to edit it and apply changes.
+
+- Verify the secrets in Azure Key Vault haven't expired.
+- Verify the secrets in Azure Key Vault are valid.
+- Verify that access to the Azure Key Vault is assigned.
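+
+For example, to check a secret's expiration date (a sketch assuming a vault named `MyKeyVault` and a secret named `MyUsername`):
+
+```azurecli-interactive
+az keyvault secret show --vault-name MyKeyVault --name MyUsername --query attributes.expires
+```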
+
+To assign the access to Azure Key Vault:
+
+```azurecli-interactive
+az keyvault set-policy --name myKeyVaultName --object-id myObjID --secret-permissions get
+```
+
+Learn more about [Key Vaults][create-and-store-keyvault-credentials].
+Learn more about [Assigning the access to Azure Key Vault][az-keyvault-set-policy].
+
+## Unable to create a Cache rule
+
+### Cache rule Limit
+
+If you're facing issues while creating a cache rule, check whether you have reached the limit of 1,000 cache rules.
+
+We recommend deleting any unwanted cache rules to avoid hitting the limit.
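+
+For example, to count how many cache rules a registry already has (a sketch using the `az acr cache list` command with a JMESPath `length` query):
+
+```azurecli-interactive
+az acr cache list -r MyRegistry --query "length(@)"
+```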
+
+Learn more about the [Cache Terminology](tutorial-artifact-cache.md#terminology).
+
+## Upstream support
+
+Artifact Cache currently supports the following upstream registries:
+
+| Upstream registries | Support | Availability |
+| --- | --- | --- |
+| Docker | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
+| ECR Public | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
+| GitHub Container Registry | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| Nvidia | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
+| Quay | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal |
+| registry.k8s.io | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
++
+<!-- LINKS - External -->
+[create-and-store-keyvault-credentials]:../key-vault/secrets/quick-create-portal.md
+[az-keyvault-set-policy]: ../key-vault/general/assign-access-policy.md#assign-an-access-policy
cosmos-db Choose Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/choose-service.md
+
+ Title: Differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra
+description: Learn about the differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra. You also learn the benefits of each of these services and when to choose them.
++++++ Last updated : 09/05/2023++
+# Differences between Azure Managed Instance for Apache Cassandra and Azure Cosmos DB for Apache Cassandra
+
+In this article, you learn the differences between [Azure Managed Instance for Apache Cassandra](../../managed-instance-apache-cassandr) and Azure Cosmos DB for Apache Cassandra. This article provides recommendations on how to choose between the two services, and on when to host your own Apache Cassandra environment.
+
+## Key differences
+
+Azure Managed Instance for Apache Cassandra provides automated deployment, scaling, and operations to maintain the node health for open-source Apache Cassandra instances in Azure. It also provides the capability to scale out the capacity of existing on-premises or cloud self-hosted Apache Cassandra clusters. It scales out by adding managed Cassandra datacenters to the existing cluster ring.
+
+The RU-based [Azure Cosmos DB for Apache Cassandra](introduction.md) in Azure Cosmos DB is a compatibility layer over Microsoft's globally distributed cloud-native database service [Azure Cosmos DB](../index.yml).
+
+## How to choose?
+
+The following table shows the common scenarios, workload requirements, and aspirations where each of these deployment approaches fits:
+
+| |Self-hosted Apache Cassandra on-premises or in Azure | Azure Managed Instance for Apache Cassandra | Azure Cosmos DB for Apache Cassandra |
+|||||
+|**Deployment type**| You have a highly customized Apache Cassandra deployment with custom patches or snitches. | You have a standard open-source Apache Cassandra deployment without any custom code. | You are content with a platform that is not Apache Cassandra underneath but is compliant with all open-source client drivers at a [wire protocol](../cassandra-support.md) level. |
+|**Operational overhead**| You have existing Cassandra experts who can deploy, configure, and maintain your clusters. | You want to lower the operational overhead for your Apache Cassandra node health, but still maintain control over the platform level configurations such as replication and consistency. | You want to eliminate the operational overhead by using a fully managed platform-as-a-service database in the cloud. |
+|**Production Support**| You handle live incidents and outages yourself, including contacting relevant infrastructure teams for compute, networking, storage, etc. | You want a first-party managed service experience that will act as a one-stop shop for supporting live incidents and outages. | You want a first-party managed service experience that will act as a one-stop shop for live incidents and outages. |
+|**Software Support**| You handle all patches, and ensure that software is upgraded before end of life.| You want a first-party managed service experience that will offer Cassandra software level support beyond end of life.| You want a first-party managed service experience where software level support is completely abstracted.|
+|**Operating system requirements**| You have a requirement to maintain custom or golden Virtual Machine operating system images. | You can use vanilla images but want to have control over SKUs, memory, disks, and IOPS. | You want capacity provisioning to be simplified and expressed as a single normalized metric, with a one-to-one relationship to throughput, such as [request units](../request-units.md) in Azure Cosmos DB. |
+|**Pricing model**| You want to use management software such as Datastax tooling and are happy with licensing costs. | You prefer pure open-source licensing and VM instance-based pricing. | You want to use cloud-native pricing, which includes [autoscale](scale-account-throughput.md#use-autoscale) and [serverless](../serverless.md) offers. |
+|**Analytics**| You want full control over the provisioning of analytical pipelines regardless of the overhead to build and maintain them. | You want to use cloud-based analytical services like Azure Databricks. | You want near real-time hybrid transactional analytics built into the platform with [Azure Synapse Link for Azure Cosmos DB](../synapse-link.md). |
+|**Workload pattern**| Your workload is fairly steady-state and you don't require scaling nodes in the cluster frequently. | Your workload is volatile and you need to be able to scale up or scale down nodes in a data center or add/remove data centers easily. | Your workload is often volatile and you need to be able to scale up or scale down quickly and at a significant volume. |
+|**SLAs**| You are happy with your processes for maintaining SLAs on consistency, throughput, availability, and disaster recovery. | You are happy with your processes for maintaining SLAs on consistency and throughput, but want an [SLA for availability](https://azure.microsoft.com/support/legal/sl#backup-and-restore). | You want [fully comprehensive SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_4/) on consistency, throughput, availability, and disaster recovery. |
+|**Replication and consistency**| You need to be able to configure the full array of [tunable consistency settings](https://cassandra.apache.org/doc/latest/cassandr)) |
+|**Data model**| You are migrating workloads which have a mixture of uniform distribution of data, and skewed data (with respect to both storage and throughput across partition keys) requiring flexibility on vertical scale of nodes. | You are migrating workloads which have a mixture of uniform distribution of data, and skewed data (with respect to both storage and throughput across partition keys) requiring flexibility on vertical scale of nodes. | You are building a new application, or your existing application has a relatively uniform distribution of data with respect to both storage and throughput across partition keys. |
+
+## Next steps
+
+* [Build a Java app to manage Azure Cosmos DB for Apache Cassandra data](manage-data-java-v4-sdk.md)
+* [Create an Azure Managed instance for Apache Cassandra cluster in Azure portal](../../managed-instance-apache-cassandr)
cosmos-db Hierarchical Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md
Container container = await database.CreateContainerIfNotExistsAsync(containerPr
#### [Java SDK v4](#tab/java-v4)
-```java
+#### [JavaScript SDK v4](#tab/javascript-v4)
+
+```javascript
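+// A sketch added for illustration (not from the original article): creating a
+// container with hierarchical partition keys in the JavaScript SDK. The names
+// PartitionKeyDefinitionVersion and PartitionKeyKind assume the @azure/cosmos package.
+const containerDefinition = {
+  id: 'UserSessions',
+  partitionKey: {
+    // List of partition keys, in hierarchical order; up to three levels.
+    paths: ['/TenantId', '/UserId', '/SessionId'],
+    version: PartitionKeyDefinitionVersion.V2,
+    kind: PartitionKeyKind.MultiHash,
+  },
+};
+const { container } = await database.containers.createIfNotExists(containerDefinition);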
// List of partition keys, in hierarchical order. You can have up to three levels of keys. List<String> subpartitionKeyPaths = new ArrayList<String>(); subpartitionKeyPaths.add("/TenantId");
item.setSessionId("0000-11-0000-1111");
Mono<CosmosItemResponse<UserSession>> createResponse = container.createItem(item); ```
+##### [JavaScript SDK v4](#tab/javascript-v4)
+
+```javascript
+ // Create a new item
+const item: UserSession = {
+ Id: 'f7da01b0-090b-41d2-8416-dacae09fbb4a',
+ TenantId: 'Microsoft',
+ UserId: '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b',
+ SessionId: '0000-11-0000-1111'
+}
+
+// Pass in the object, and the SDK automatically extracts the full partition key path
+const { resource: document } = await container.items.create(item);
+
+```
#### Manually specify the path
PartitionKey partitionKey = new PartitionKeyBuilder()
Mono<CosmosItemResponse<UserSession>> createResponse = container.createItem(item, partitionKey); ```
+##### [JavaScript SDK v4](#tab/javascript-v4)
+
+```javascript
+const item: UserSession = {
+ Id: 'f7da01b0-090b-41d2-8416-dacae09fbb4a',
+ TenantId: 'Microsoft',
+ UserId: '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b',
+ SessionId: '0000-11-0000-1111'
+}
+
+// Specify the full partition key path when creating the item
+const partitionKey: PartitionKey = new PartitionKeyBuilder()
+ .addValue(item.TenantId)
+ .addValue(item.UserId)
+ .addValue(item.SessionId)
+ .build();
+
+// Create the item in the container
+const { resource: document } = await container.items.create(item, partitionKey);
+```
### Perform a key/value lookup (point read) of an item
PartitionKey partitionKey = new PartitionKeyBuilder()
// Perform a point read Mono<CosmosItemResponse<UserSession>> readResponse = container.readItem(id, partitionKey, UserSession.class); ```---
-##### [JavaScript SDK v4](#tab/javascript-v4)
+##### [JavaScript SDK v4](#tab/javascript-v4)
```javascript // Store the unique identifier
-String id = "f7da01b0-090b-41d2-8416-dacae09fbb4a";
+const id = "f7da01b0-090b-41d2-8416-dacae09fbb4a";
// Build the full partition key path
-PartitionKey partitionKey = new PartitionKeyBuilder()
- .add("Microsoft") //TenantId
- .add("8411f20f-be3e-416a-a3e7-dcd5a3c1f28b") //UserId
- .add("0000-11-0000-1111") //SessionId
+const partitionKey: PartitionKey = new PartitionKeyBuilder()
+ .addValue(item.TenantId)
+ .addValue(item.UserId)
+ .addValue(item.SessionId)
.build();
-
+ // Perform a point read
-Mono<CosmosItemResponse<UserSession>> readResponse = container.readItem(id, partitionKey, UserSession.class);
+const { resource: document } = await container.item(id, partitionKey).read();
```- ### Run a query
pagedResponse.byPage().flatMap(fluxResponse -> {
return Flux.empty(); }).blockLast(); ```
+##### [JavaScript SDK v4](#tab/javascript-v4)
+
+```javascript
+// Define a single-partition query that specifies the full partition key path
+const query: string = "SELECT * FROM c WHERE c.TenantId = 'Microsoft' AND c.UserId = '8411f20f-be3e-416a-a3e7-dcd5a3c1f28b' AND c.SessionId = '0000-11-0000-1111'";
+
+// Retrieve an iterator for the result set
+const queryIterator = container.items.query(query);
+
+while (queryIterator.hasMoreResults()) {
+ const { resources: results } = await queryIterator.fetchNext();
+ // Process result
+}
+```
pagedResponse.byPage().flatMap(fluxResponse -> {
}).blockLast(); ```
+##### [JavaScript SDK v4](#tab/javascript-v4)
+
+```javascript
+// Define a targeted cross-partition query specifying prefix path[s]
+const query: string = "SELECT * FROM c WHERE c.TenantId = 'Microsoft'";
+
+// Retrieve an iterator for the result set
+const queryIterator = container.items.query(query);
+
+while (queryIterator.hasMoreResults()) {
+ const { resources: results } = await queryIterator.fetchNext();
+ // Process result
+}
+```
## Limitations and known issues
cosmos-db Choose Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/choose-model.md
Here are a few key factors to help you decide which is the right option for you.
[**Get started with Azure Cosmos DB for MongoDB RU**](./quickstart-python.md)
+> [!TIP]
+> Want to try Azure Cosmos DB for MongoDB RU with no commitment? Create an Azure Cosmos DB account using [Try Azure Cosmos DB](../try-free.md) for free.
+ ### Choose vCore-based if - You're migrating (lift & shift) an existing MongoDB workload or building a new MongoDB application.-- Your workload has more point reads (fetching a single item by its ID and shard key value) and few long-running queries and complex aggregation pipeline operations.
+- Your workload has more long-running queries, complex aggregation pipelines, distributed transactions, joins, etc.
- You prefer high-capacity vertical and horizontal scaling with familiar vCore-based cluster tiers such as M30, M40, M50 and more. - You're running applications requiring 99.995% availability. [**Get started with Azure Cosmos DB for MongoDB vCore**](./vcore/quickstart-portal.md)
-> [!TIP]
-> Want to try the Azure Cosmos DB for MongoDB with no commitment? Create an Azure Cosmos DB account using [Try Azure Cosmos DB](../try-free.md) for free.
- ## Resource and billing differences between the options The RU and vCore services have different architectures with important billing differences.
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/introduction.md
+
+ Title: Introduction/Overview
+
+description: Use Azure Cosmos DB for MongoDB to store and query massive amounts of data using popular open-source drivers.
+++++ Last updated : 09/12/2023++
+# What is Azure Cosmos DB for MongoDB?
++
+[Azure Cosmos DB](../introduction.md) is a fully managed NoSQL and relational database for modern app development.
+
+Azure Cosmos DB for MongoDB makes it easy to use Azure Cosmos DB as if it were a MongoDB database. You can use your existing MongoDB skills and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the connection string for your account using the API for MongoDB.
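+
+For example, here's a minimal Node.js sketch using the open-source `mongodb` driver; the connection string placeholder and the database and collection names are assumptions for illustration:
+
+```javascript
+const { MongoClient } = require('mongodb');
+
+// Placeholder: copy the real connection string from your API for MongoDB account
+const client = new MongoClient('<your-cosmos-db-for-mongodb-connection-string>');
+
+async function main() {
+  await client.connect();
+  const collection = client.db('sampledb').collection('items');
+  await collection.insertOne({ name: 'demo', createdAt: new Date() });
+  console.log(await collection.findOne({ name: 'demo' }));
+  await client.close();
+}
+
+main().catch(console.error);
+```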
+
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RWXr4T]
+
+## Cosmos DB for MongoDB benefits
+
+Cosmos DB for MongoDB has numerous benefits compared to other MongoDB service offerings such as MongoDB Atlas:
+
+### Request Unit (RU) architecture
+
+[A fully managed MongoDB-compatible service](./ru/introduction.md) with flexible scaling using [Request Units (RUs)](../request-units.md). Designed for cloud-native applications.
+
+- **Instantaneous scalability**: With the [Autoscale](../provision-throughput-autoscale.md) feature, your database scales instantaneously with zero warmup period. Other MongoDB offerings such as MongoDB Atlas can take hours to scale up and up to days to scale down.
+
+- **Automatic and transparent sharding**: The API for MongoDB manages all of the infrastructure for you. This management includes sharding and optimizing the number of shards. Other MongoDB offerings, such as MongoDB Atlas, require you to specify and manage sharding to horizontally scale. This automation gives you more time to focus on developing applications for your users.
+
+- **Five 9's of availability**: [99.999% availability](../high-availability.md) is easily configurable to ensure your data is always there for you.
+
+- **Active-active database**: Unlike MongoDB Atlas, Cosmos DB for MongoDB supports active-active across multiple regions. Databases can span multiple regions, with no single point of failure for **writes and reads for the same data**. MongoDB Atlas global clusters only support active-passive deployments for writes for the same data.
+- **Cost efficient, granular, unlimited scalability**: Sharded collections can scale to any size, unlike other MongoDB service offerings. The Azure Cosmos DB platform can scale in increments as small as 1/100th of a VM due to its architecture. This scalability means that you can scale your database to the exact size you need, without paying for unused resources.
+
+- **Real time analytics (HTAP) at any scale**: Run analytics workloads against your transactional MongoDB data in real time with no effect on your database. This analysis is fast and inexpensive, due to the cloud native analytical columnar store being utilized, with no ETL pipelines. Easily create Power BI dashboards, integrate with Azure Machine Learning and Azure AI services, and bring all of your data from your MongoDB workloads into a single data warehousing solution. Learn more about the [Azure Synapse Link](../synapse-link.md).
+
+- **Serverless deployments**: Cosmos DB for MongoDB offers a [serverless capacity mode](../serverless.md). With [Serverless](../serverless.md), you're only charged per operation, and don't pay for the database when you don't use it.
+
+> [!TIP]
+> Visit [Choose your model](./choose-model.md) for an in-depth comparison of each architecture to help you choose which one is right for you.
+
+### vCore Architecture
+
+[A fully managed MongoDB-compatible service](./vcore/introduction.md) with dedicated instances for new and existing MongoDB apps. This architecture offers a familiar vCore architecture for MongoDB users, efficient scaling, and seamless integration with Azure services.
+
+- **Native Vector Search**: Seamlessly integrate your AI-based applications with your data that's stored in Azure Cosmos DB for MongoDB vCore. This integration is an all-in-one solution, unlike other vector search solutions that send your data between service integrations.
+
+- **Flat pricing with Low total cost of ownership**: Enjoy a familiar pricing model for Azure Cosmos DB for MongoDB vCore, based on compute (vCores & RAM) and storage (disks).
+
+- **Elevate querying with Text Indexes**: Enhance your data querying efficiency with our text indexing feature. Seamlessly navigate full-text searches across MongoDB collections, simplifying the process of extracting valuable insights from your documents.
+
+- **Scale with no shard key required**: Simplify your development process with high-capacity vertical scaling, all without the need for a shard key. Sharding and scaling horizontally are simple once collections grow into the terabytes.
+
+- **Free 35-day backups with point-in-time restore (PITR)**: Azure Cosmos DB for MongoDB vCore offers free 35-day backups for any amount of data.
+
+> [!TIP]
+> Visit [Choose your model](./choose-model.md) for an in-depth comparison of each architecture to help you choose which one is right for you.
+
+## How Azure Cosmos DB for MongoDB works
+
+Cosmos DB for MongoDB implements the wire protocol for MongoDB. This implementation allows transparent compatibility with MongoDB client SDKs, drivers, and tools. Azure Cosmos DB doesn't host the MongoDB database engine. Any MongoDB client driver compatible with the API version you're using should be able to connect, with no special configuration.
+
+> [!IMPORTANT]
+> This article describes a feature of Azure Cosmos DB that provides wire protocol compatibility with MongoDB databases. Microsoft does not run MongoDB databases to provide this service. Azure Cosmos DB is not affiliated with MongoDB, Inc.
+
+## Next steps
+
+- Read the [FAQ](faq.yml)
+- [Connect an existing MongoDB application to Azure Cosmos DB for MongoDB RU](connect-account.md)
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md
To create a vector index, use the following `createIndexes` template:
| Field | Type | Description | | | | | | `index_name` | string | Unique name of the index. |
-| `path_to_property` | string | Path to the property that contains the vector. This path can be a top-level property or a dot notation path to the property. If a dot notation path is used, then all the nonleaf elements can't be arrays. |
+| `path_to_property` | string | Path to the property that contains the vector. This path can be a top-level property or a dot notation path to the property. If a dot notation path is used, then all the nonleaf elements can't be arrays. Vectors must be a `number[]` to be indexed and returned in vector search results.|
| `kind` | string | Type of vector index to create. Currently, `vector-ivf` is the only supported index option. | | `numLists` | integer | This integer is the number of clusters that the inverted file (IVF) index uses to group the vector data. We recommend that `numLists` is set to `documentCount/1000` for up to 1 million documents and to `sqrt(documentCount)` for more than 1 million documents. Using a `numLists` value of `1` is akin to performing brute-force search, which will have limited performance. | | `similarity` | string | Similarity metric to use with the IVF index. Possible options are `COS` (cosine distance), `L2` (Euclidean distance), and `IP` (inner product). |
To create a vector index, use the following `createIndexes` template:
> > If you're experimenting with a new scenario or creating a small demo, you can start with `numLists` set to `1` to perform a brute-force search across all vectors. This should provide you with the most accurate results from the vector search, however be aware that the search speed and latency will be slow. After your initial setup, you should go ahead and tune the `numLists` parameter using the above guidance.
+> [!IMPORTANT]
+> Vectors must be a `number[]` to be indexed. Using another type, such as `double[]`, prevents the document from being indexed. Non-indexed documents won't be returned in the result of a vector search.
++ ## Examples The following examples show you how to index vectors, add documents that have vector properties, perform a vector search, and retrieve the index configuration.
This guide demonstrates how to create a vector index, add documents that have ve
> [!div class="nextstepaction"] > [Build AI apps with Azure Cosmos DB for MongoDB vCore vector search](vector-search-ai.md) * Learn more about [Azure OpenAI embeddings](../../../ai-services/openai/concepts/understand-embeddings.md)
-* Learn how to [generate embeddings using Azure OpenAI](../../../ai-services/openai/tutorials/embeddings.md)
+* Learn how to [generate embeddings using Azure OpenAI](../../../ai-services/openai/tutorials/embeddings.md)
cosmos-db Change Feed Pull Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-pull-model.md
Here's an example of how to obtain the iterator in latest version mode that retu
```js const options = {
- changeFeedStartFrom: ChangeFeedStartFrom.Beginning()
+ changeFeedStartFrom: ChangeFeedStartFrom.Now()
}; const iterator = container.items.getChangeFeedIterator(options);
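For reference, here's a sketch (not part of the original snippet) of draining the iterator; `hasMoreResults`, `readNext()`, and the `statusCode`/`result` response shape are assumptions about the JavaScript SDK's pull-model iterator:

```js
while (iterator.hasMoreResults) {
  const response = await iterator.readNext();
  if (response.statusCode === 304) {
    // 304 (Not Modified): no new changes right now; wait before polling again
    break;
  }
  for (const item of response.result) {
    console.log(item);
  }
}
```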
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
cosmos-db Quickstart Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-create-bicep.md
+
+ Title: 'Quickstart: create a cluster using Bicep'
+description: Using Bicep template for provisioning a cluster of Azure Cosmos DB for PostgreSQL
+++++ Last updated : 09/07/2023++
+# Use a Bicep file to provision an Azure Cosmos DB for PostgreSQL cluster
++
+Azure Cosmos DB for PostgreSQL is a managed service that allows you to run horizontally scalable PostgreSQL databases in the cloud. In this article, you learn how to use Bicep to provision and manage an Azure Cosmos DB for PostgreSQL cluster.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+
+## Create the Bicep file
+
+Provision an Azure Cosmos DB for PostgreSQL cluster that permits distributing data into shards, along with a high availability (HA) node.
+
+Create a file named `provision.bicep` and copy the following content into it.
+
+```Bicep
+@secure()
+param administratorLoginPassword string
+param location string
+param clusterName string
+param coordinatorVCores int = 4
+param coordinatorStorageQuotaInMb int = 262144
+param coordinatorServerEdition string = 'GeneralPurpose'
+param enableShardsOnCoordinator bool = true
+param nodeServerEdition string = 'MemoryOptimized'
+param nodeVCores int = 4
+param nodeStorageQuotaInMb int = 524288
+param nodeCount int
+param enableHa bool
+param coordinatorEnablePublicIpAccess bool = true
+param nodeEnablePublicIpAccess bool = true
+param availabilityZone string = '1'
+param postgresqlVersion string = '15'
+param citusVersion string = '12.0'
+
+resource serverName_resource 'Microsoft.DBforPostgreSQL/serverGroupsv2@2022-11-08' = {
+ name: clusterName
+ location: location
+ tags: {}
+ properties: {
+ administratorLoginPassword: administratorLoginPassword
+ coordinatorServerEdition: coordinatorServerEdition
+ coordinatorVCores: coordinatorVCores
+ coordinatorStorageQuotaInMb: coordinatorStorageQuotaInMb
+ enableShardsOnCoordinator: enableShardsOnCoordinator
+ nodeCount: nodeCount
+ nodeServerEdition: nodeServerEdition
+ nodeVCores: nodeVCores
+ nodeStorageQuotaInMb: nodeStorageQuotaInMb
+ enableHa: enableHa
+ coordinatorEnablePublicIpAccess: coordinatorEnablePublicIpAccess
+ nodeEnablePublicIpAccess: nodeEnablePublicIpAccess
+ citusVersion: citusVersion
+ postgresqlVersion: postgresqlVersion
+ preferredPrimaryZone: availabilityZone
+ }
+ }
+```
+
+To learn about the supported resource parameters, see the [resource format](/azure/templates/microsoft.dbforpostgresql/servergroupsv2?pivots=deployment-language-bicep) reference.
+
+## Deploy the Bicep file
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az group create --name exampleRG --location eastus
+az deployment group create --resource-group exampleRG --template-file provision.bicep
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+New-AzResourceGroup -Name "exampleRG" -Location "eastus"
+New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile "./provision.bicep"
+```
++
+You're prompted to enter these values:
+
+- **clusterName**: The cluster name determines the DNS name your applications use to connect, in the form `<node-qualifier>-<clustername>.<uniqueID>.postgres.cosmos.azure.com`. For example, the [domain name](./concepts-node-domain-name.md) postgres.cosmos.azure.com is appended to the cluster name you provide. The cluster name must contain only lowercase letters, numbers, and hyphens, and must not start or end with a hyphen.
+- **location**: Azure [region](./resources-regions.md) where the cluster and associated nodes are created.
+- **nodeCount**: Number of worker nodes in your cluster. Setting it to `0` provisions a single-node cluster, while a value of two or greater (`>= 2`) provisions a multi-node cluster.
+- **enableHa**: With this option enabled, if a node goes down, the failed node's standby automatically becomes the new node. Database applications continue to access the cluster with the same connection string.
+- **administratorLoginPassword**: Enter a new password for the server admin account. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and nonalphanumeric characters (!, $, #, %, etc.).
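+
+You can also supply these values inline instead of responding to prompts; a sketch with placeholder values (the parameter names come from the Bicep file above):
+
+```azurecli
+az deployment group create --resource-group exampleRG --template-file provision.bicep \
+  --parameters clusterName=mycluster0 location=eastus nodeCount=2 enableHa=true administratorLoginPassword='<password>'
+```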
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to validate the deployment and review the deployed resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Next step
+
+With your cluster created, it's time to connect with a PostgreSQL client.
+
+> [!div class="nextstepaction"]
+> [Connect to your cluster](quickstart-connect-psql.md)
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
description: This article shows you how you can create and manage exported Cost Management data so that you can use it in external systems. Previously updated : 08/14/2023 Last updated : 09/12/2023
For Azure Storage accounts:
Or - Any custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions. Additionally, ensure that you enable [Allow trusted Azure service access](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) to the storage account when you configure the firewall.-- The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.
- >[!NOTE]
- > Export to storage accounts behind firewall is in preview.
-
+- The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.
:::image type="content" source="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" alt-text="Screenshot showing the From any storage account option set." lightbox="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" ::: If you have a new subscription, you can't immediately use Cost Management features. It might take up to 48 hours before you can use all Cost Management features.
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 09/06/2023 Last updated : 09/13/2023 # Azure Policy built-in definitions for Data Factory
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Previously updated : 09/06/2023 Last updated : 09/13/2023
defender-for-cloud Defender For Storage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-test.md
To simulate a malware upload using an EICAR test file, follow these steps:
1. Select on **Security Alerts**. 1. Review the security alert:
+1. Locate the alert titled **Malicious file uploaded to storage account**.
- . Locate the alert titled **Malicious file uploaded to storage account**.
- 1. Select on the alert's **View full details** button to see all the related details.
+1. Select the alert's **View full details** button to see all the related details.
- 1. Learn more about Defender for Storage security alerts in the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md#alerts-azurestorage).
+1. Learn more about Defender for Storage security alerts in the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md#alerts-azurestorage).
## Testing sensitive data threat detection
Learn more about:
- [Threat detection and alerts](defender-for-storage-threats-alerts.md) +
defender-for-cloud Episode Thirty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty.md
Title: New Custom Recommendations for AWS and GCP | Defender for Cloud in the field
+ Title: New custom recommendations for AWS and GCP | Defender for Cloud in the field
description: Learn about new custom recommendations for AWS and GCP in Defender for Cloud Last updated 05/14/2023
-# New Custom Recommendations for AWS and GCP in Defender for Cloud
+# New custom recommendations for AWS and GCP in Defender for Cloud
**Episode description**: In this episode of Defender for Cloud in the Field, Yael Genut joins Yuri Diogenes to talk about the new custom recommendations for AWS and GCP. Yael explains the importance of creating custom recommendations in a multicloud environment and how to use Kusto Query Language to create these customizations. Yael also demonstrates the step-by-step process to create custom recommendations using this new capability and how these custom recommendations appear in the Defender for Cloud dashboard.
Last updated 05/14/2023
- [03:15](/shows/mdc-in-the-field/new-custom-recommendations#time=03m15s) - Creating a custom recommendation based on a template - [08:20](/shows/mdc-in-the-field/new-custom-recommendations#time=08m20s) - Creating a custom recommendation from scratch - [12:27](/shows/mdc-in-the-field/new-custom-recommendations#time=12m27s) - Custom recommendation update interval-- [14:30](/shows/mdc-in-the-field/new-custom-recommendations#time=14m30s) - Filtering custom recommendations in the Defender for Cloud dashboard -- [16:40](/shows/mdc-in-the-field/new-custom-recommendations#time=16m40s) - Prerequisites to use the custom recommendations feature
+- [14:30](/shows/mdc-in-the-field/new-custom-recommendations#time=14m30s) - Filtering custom recommendations in the Defender for Cloud dashboard
+- [16:40](/shows/mdc-in-the-field/new-custom-recommendations#time=16m40s) - Prerequisites to use the custom recommendations feature
## Recommended resources - Learn how to [create custom recommendations and security standards](create-custom-recommendations.md)
Last updated 05/14/2023
## Next steps > [!div class="nextstepaction"]
-> [Understanding data aware security posture capability](episode-thirty-one.md)
+> [Understanding data aware security posture capabilities](episode-thirty-one.md)
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
September 6, 2023
Containers vulnerability assessment powered by Microsoft Defender Vulnerability Management (MDVM), now supports an additional trigger for scanning images pulled from an ACR. This newly added trigger provides additional coverage for active images in addition to the existing triggers scanning images pushed to an ACR in the last 90 days and images currently running in AKS.
-This new trigger is available today for some customers, and will be available to all customers by mid-September.
+The new trigger starts rolling out today and is expected to be available to all customers by the end of September.
For more information, see [Container Vulnerability Assessment powered by MDVM](agentless-container-registry-vulnerability-assessment.md)
dev-box How To Determine Your Quota Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-determine-your-quota-usage.md
+
+ Title: How to determine your resource usage and quota
+description: Learn how to determine where the Dev Box resources for your subscription are used and if you have any spare capacity against your quota.
+++++ Last updated : 08/21/2023
+
+
+# Determine resource usage and quota
+
+To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a quota. You can see the default quota for each resource type by subscription type in [Microsoft Dev Box limits](/azure/azure-resource-manager/management/azure-subscription-service-limits#microsoft-dev-box-limits).
+
+Keeping track of how your quota of VM cores is being used across your subscriptions can be difficult. You might want to know what your current usage is, how much you have left, and in what regions you have capacity. To help you understand where and how you're using your quota, Azure provides the **Usage + quotas** page.
+
+## Determine your usage and quota
+
+1. In the [Azure portal](https://portal.azure.com), go to the subscription you want to examine.
+
+1. On the Subscription page, under Settings, select **Usage + quotas**.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/subscription-overview.png" alt-text="Screenshot showing the Subscription overview left menu, with Usage and quotas highlighted." lightbox="media/how-to-determine-your-quota-usage/subscription-overview.png":::
+
+1. To view Usage + quotas information about Microsoft Dev Box, select **Dev Box**.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-dev-box.png" alt-text="Screenshot showing the Usage and quotas page, with Dev Box highlighted." lightbox="media/how-to-determine-your-quota-usage/select-dev-box.png":::
+
+1. In this example, you can see the **Quota name**, the **Region**, the **Subscription** the quota is assigned to, the **Current Usage**, and whether or not the limit is **Adjustable**.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/example-subscription.png" alt-text="Screenshot showing the Usage and quotas page, with column headings highlighted." lightbox="media/how-to-determine-your-quota-usage/example-subscription.png":::
+
+1. You can also see that the usage is grouped by usage level: regular, low, and no usage.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/example-subscription-groups.png" alt-text="Screenshot showing the Usage and quotas page, with VM size groups highlighted." lightbox="media/how-to-determine-your-quota-usage/example-subscription-groups.png" :::
+
+1. To view quota and usage information for specific regions, select the **Region:** filter, select the regions to display, and then select **Apply**.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-regions.png" lightbox="media/how-to-determine-your-quota-usage/select-regions.png" alt-text="Screenshot showing the Usage and quotas page, with Regions drop down highlighted.":::
+
+1. To view only the items that are using part of your quota, select the **Usage:** filter, and then select **Only items with usage**.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-items-with-usage.png" lightbox="media/how-to-determine-your-quota-usage/select-items-with-usage.png" alt-text="Screenshot showing the Usage and quotas page, with Usage drop down and Only show items with usage option highlighted.":::
+
+1. To view items that are using above a certain amount of your quota, select the **Usage:** filter, and then select **Select custom usage**.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-custom-usage-before.png" alt-text="Screenshot showing the Usage and quotas page, with Usage drop down and Select custom usage option highlighted." lightbox="media/how-to-determine-your-quota-usage/select-custom-usage-before.png" :::
+
+1. You can then set a custom usage threshold, so that only the items using more than the specified percentage of the quota are displayed.
+
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-custom-usage.png" alt-text="Screenshot showing the Usage and quotas page, with Select custom usage option and configuration settings highlighted." lightbox="media/how-to-determine-your-quota-usage/select-custom-usage.png":::
+
+1. Select **Apply**.
+
+   Each subscription has its own **Usage + quotas** page, which covers all the services in the subscription, not just Microsoft Dev Box.
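+
+If you prefer to check usage from the command line, you can query the same data with `az rest`. The following is a minimal sketch, not a documented procedure: the `Microsoft.DevCenter` usages endpoint path and the `api-version` value are assumptions, so verify both against the current REST API reference before you rely on them.
+
+```azurecli-interactive
+# List current Dev Box usage for a region (endpoint path and api-version are assumptions; verify before use).
+az rest --method get \
+    --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.DevCenter/locations/<location>/usages?api-version=2023-04-01"
+```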
+
+## Related content
+
+- Check the default quota for each resource type by subscription type: [Microsoft Dev Box limits](/azure/azure-resource-manager/management/azure-subscription-service-limits#microsoft-dev-box-limits).
+- To learn how to request a quota increase, see [Request a quota limit increase](./how-to-request-quota-increase.md).
dev-box How To Request Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-request-quota-increase.md
+
+ Title: Request a quota limit increase for Dev Box resources
+description: Learn how to request a quota increase to expand the number of dev box resources you can use in your subscription. Request an increase for dev box cores and other resources.
+Last updated : 08/22/2023
+# Request a quota limit increase
+
+This article describes how to submit a support request for increasing the number of resources for Microsoft Dev Box in your Azure subscription.
+
+When you reach the limit for a resource in your subscription, you can request a limit increase (sometimes called a capacity increase, or a quota increase) to extend the number of resources available. The request process allows the Microsoft Dev Box team to ensure that your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments.
+
+The time it takes to increase your quota varies depending on the VM size, region, and number of resources requested. You won't have to go through the process of requesting extra capacity often, but to ensure you have the resources you require when you need them, you should:
+
+- Request capacity as far in advance as possible.
+- If possible, be flexible on the region where you're requesting capacity.
+- Recognize that capacity remains assigned for the lifetime of a subscription. When dev box resources are deleted, the capacity remains assigned to the subscription.
+- Request extra capacity only if you need more than is already assigned to your subscription.
+- Make incremental requests for VM cores rather than making large, bulk requests. Break requests for large numbers of cores into smaller requests for extra flexibility in how those requests are fulfilled.
+
+Learn more about the general [process for creating Azure support requests](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+
+## Prerequisites
+
+- To create a support request, your Azure account needs the [Owner](/azure/role-based-access-control/built-in-roles#owner), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Support Request Contributor](/azure/role-based-access-control/built-in-roles#support-request-contributor) role at the subscription level.
+- Before you create a support request for a limit increase, you need to gather additional information.
+
+## Gather information for your request
+
+Submitting a support request for additional quota is quicker if you gather the required information before you begin the request process.
+
+- **Determine your current quota usage**
+
+   For each of your subscriptions, you can check your current usage of each Dev Box resource type in each region. To determine your current usage, follow the steps in [Determine usage and quota](./how-to-determine-your-quota-usage.md).
+
+- **Determine the region for the additional quota**
+
+ Dev Box resources can exist in many regions. You can choose to deploy resources in multiple regions close to your dev box users. For more information about Azure regions, how they relate to global geographies, and which services are available in each region, see [Azure global infrastructure](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
+
+- **Choose the quota type of the additional quota**
+
+ The following Dev Box resources are limited by subscription. You can request an increase in the number of resources for each of these types.
+
+ - Dev box definitions
+ - Dev centers
+ - Network settings
+ - Pools
+ - Projects
+ - Network connections
+ - Dev Box general cores
+ - Other
+
+ When you want to increase the number of dev boxes available to your developers, you should request an increase in the number of Dev Box general cores.
+
+## Submit a new support request
+
+Follow these steps to request a limit increase:
+
+1. On the Azure portal home page, select **Support & troubleshooting**, and then select **Help + support**.
+
+ :::image type="content" source="./media/how-to-request-capacity-increase/submit-new-request.png" alt-text="Screenshot of the Azure portal home page, highlighting the Request core limit increase button." lightbox="./media/how-to-request-capacity-increase/submit-new-request.png":::
+
+1. On the **Help + support** page, select **Create a support request**.
+
+ :::image type="content" source="./media/how-to-request-capacity-increase/create-support-request.png" alt-text="Screenshot of the Help + support page, highlighting Create a support request." lightbox="./media/how-to-request-capacity-increase/create-support-request.png":::
+
+1. On the **New support request** page, enter the following information, and then select **Next**.
+
+ | Name | Value |
+ | -- | - |
+ | **Issue type** | *Service and subscription limits (quotas)* |
+ | **Subscription** | Select the subscription to which the request applies. |
+ | **Quota type** | *Microsoft Dev Box* |
+
+1. On the **Additional details** tab, in the **Problem details** section, select **Enter details**.
+
+ :::image type="content" source="media/how-to-request-capacity-increase/enter-details.png" alt-text="Screenshot of the New support request page, highlighting Enter details." lightbox="media/how-to-request-capacity-increase/enter-details.png":::
+
+1. In **Quota details**, enter the following information, and then select **Next**.
+
+ | Name | Value |
+ | -- | - |
+ | **Region** | Select the **Region** in which you want to increase your quota. |
+   | **Quota type** | When you select a region, Azure displays your current usage and current limit for all quota types. </br> Select the **Quota type** that you want to increase. |
+ | **New total limit** | Enter the new total limit that you want to request. |
+ | **Is it a limit decrease?** | Select **Yes** or **No**. |
+ | **Additional information** | Enter any extra information about your request. |
+
+ :::image type="content" source="media/how-to-request-capacity-increase/quota-details.png" alt-text="Screenshot of the Quota details pane." lightbox="media/how-to-request-capacity-increase/quota-details.png":::
+
+1. Select **Save and continue**.
+
+## Complete the support request
+
+To complete the support request, enter the following information:
+
+1. Complete the remainder of the **Additional details** tab using the following information:
+
+ ### Advanced diagnostic information
+
+ |Name |Value |
+ |||
+   |**Allow collection of advanced diagnostic information**|Select **Yes** or **No**.|
+
+ ### Support method
+
+ |Name |Value |
+ |||
+ |**Support plan**|Select your support plan.|
+ |**Severity**|Select the severity of the issue.|
+ |**Preferred contact method**|Select email or phone.|
+ |**Your availability**|Enter your availability.|
+ |**Support language**|Select your language preference.|
+
+ ### Contact information
+
+ |Name |Value |
+ |||
+ |**First name**|Enter your first name.|
+ |**Last name**|Enter your last name.|
+ |**Email**|Enter your contact email.|
+ |**Additional email for notification**|Enter an email for notifications.|
+ |**Phone**|Enter your contact phone number.|
+ |**Country/region**|Enter your location.|
+ |**Save contact changes for future support requests.**|Select the check box to save changes.|
+
+1. Select **Next**.
+
+1. On the **Review + create** tab, review the information, and then select **Create**.
+
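+You can also submit the same request from the command line by using the Azure CLI `support` extension. The following is a hedged sketch rather than the documented procedure: the flag names follow the extension's general pattern but should be treated as assumptions, so confirm them with `az support tickets create --help` before you run the command.
+
+```azurecli-interactive
+# Create a quota support ticket (flag names are assumptions; confirm with 'az support tickets create --help').
+# The problem classification ID identifies the Microsoft Dev Box quota category.
+az support tickets create \
+    --ticket-name "devbox-quota-increase" \
+    --title "Increase Dev Box general cores" \
+    --description "Raise Dev Box general cores in <region> to <new-limit>." \
+    --severity "minimal" \
+    --problem-classification "<problem-classification-resource-id>"
+```
+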
+## Related content
+
+- To learn how to check your quota usage, see [Determine usage and quota](./how-to-determine-your-quota-usage.md).
+- Check the default quota for each resource type by subscription type: [Microsoft Dev Box limits](/azure/azure-resource-manager/management/azure-subscription-service-limits#microsoft-dev-box-limits).
dev-box Quickstart Create Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md
Previously updated : 04/25/2023 Last updated : 09/12/2023 #Customer intent: As a dev box user, I want to understand how to create and access a dev box so that I can start work.
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
description: Follow this tutorial to learn how to build out an end-to-end Azure Digital Twins solution that's driven by device data. Previously updated : 09/26/2022 Last updated : 09/12/2023
+# CustomerIntent: As a developer, I want to create a data flow from devices through Azure Digital Twins so that I can have a connected digital twin solution.
# Optional fields. Don't forget to remove # if you need a field. # #
In this tutorial, you will...
> * Use an [Azure Functions](../azure-functions/functions-overview.md) app to route simulated telemetry from an [IoT Hub](../iot-hub/about-iot-hub.md) device into digital twin properties > * Propagate changes through the twin graph by processing digital twin notifications with Azure Functions, endpoints, and routes [!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-h3.md)]
The function app is part of the sample project you downloaded, located in the *d
### Publish the app
-To publish the function app to Azure, you'll need to create a storage account, then create the function app in Azure, and finally publish the functions to the Azure function app. This section completes these actions using the Azure CLI.
+To publish the function app to Azure, you'll need to create a storage account, then create the function app in Azure, and finally publish the functions to the Azure function app. This section completes these actions using the Azure CLI. In each command, replace any placeholders in angle brackets with the details for your own resources.
1. Create an Azure storage account by running the following command:
To publish the function app to Azure, you'll need to create a storage account, t
1. Create an Azure function app by running the following command: ```azurecli-interactive
- az functionapp create --name <name-for-new-function-app> --storage-account <name-of-storage-account-from-previous-step> --functions-version 4 --consumption-plan-location <location> --runtime dotnet --runtime-version 6 --resource-group <resource-group>
+ az functionapp create --name <name-for-new-function-app> --storage-account <name-of-storage-account-from-previous-step> --functions-version 4 --consumption-plan-location <location> --runtime dotnet-isolated --runtime-version 7 --resource-group <resource-group>
``` 1. Next, you'll zip up the functions and publish them to your new Azure function app.
To publish the function app to Azure, you'll need to create a storage account, t
1. In the console, run the following command to publish the project locally: ```cmd/sh
- dotnet publish -c Release
+ dotnet publish -c Release -o publish
```
- This command publishes the project to the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory.
+ This command publishes the project to the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp\publish* directory.
- 1. Using your preferred method, create a zip of the published files that are located in the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory. Name the zipped folder *publish.zip*.
+ 1. Using your preferred method, create a zip of the published files that are located **inside** the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp\publish* directory. Name the zipped folder *publish.zip*.
- >[!TIP]
- >If you're using PowerShell, you can create the zip by copying the full path to that *\publish* directory and pasting it into the following command:
- >
- >```powershell
- >Compress-Archive -Path <full-path-to-publish-directory>\* -DestinationPath .\publish.zip
- >```
- > The cmdlet will create the *publish.zip* file in the directory location of your console.
+ >[!IMPORTANT]
+ >Make sure the zipped folder does not include an extra layer for the *publish* folder itself. It should only contain the contents that were inside the *publish* folder.
- Your *publish.zip* file should contain folders for *bin*, *ProcessDTRoutedData*, and *ProcessHubToDTEvents*, and there should also be a *host.json* file.
+   Here's an example of how the zip contents might look (the contents may vary depending on your version of .NET).
:::image type="content" source="media/tutorial-end-to-end/publish-zip.png" alt-text="Screenshot of File Explorer in Windows showing the contents of the publish zip folder.":::
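+
+   If you deploy the zip from the command line, one option is the Azure CLI zip-deployment command shown in the following sketch. This assumes you deploy with `az functionapp deployment source config-zip`; replace the placeholders with your own resource names.
+
+   ```azurecli-interactive
+   # Deploy the zipped functions to the function app created earlier.
+   az functionapp deployment source config-zip --resource-group <resource-group> --name <name-for-new-function-app> --src "<full-path-to-publish.zip>"
+   ```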
The first setting gives the function app the **Azure Digital Twins Data Owner**
The result of this command is outputted information about the role assignment you've created. The function app now has permissions to access data in your Azure Digital Twins instance.
-#### Configure application settings
+#### Configure application setting
The second setting creates an environment variable for the function with the URL of your Azure Digital Twins instance. The function code will use the value of this variable to refer to your instance. For more information about environment variables, see [Manage your function app](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal).
The output is information about the device that was created.
Next, configure the device simulator to send data to your IoT Hub instance.
-Begin by getting the IoT hub connection string with this command:
+Begin by getting the IoT hub connection string with the following command. The connection string value will start with `HostName=`.
```azurecli-interactive az iot hub connection-string show --hub-name <your-IoT-hub-name>
The *ProcessHubToDTEvents* function you published earlier listens to the IoT Hub
To see the data from the Azure Digital Twins side, switch to your other console window that's open to the *AdtSampleApp\SampleClientApp* folder. Run the *SampleClientApp* project with `dotnet run`.
+```cmd/sh
+dotnet run
+```
+ Once the project is running and accepting commands, run the following command to get the temperatures being reported by the digital twin thermostat67: ```cmd/sh
Here's a review of the scenario that you built in this tutorial.
2. Simulated device telemetry is sent to IoT Hub, where the *ProcessHubToDTEvents* Azure function is listening for telemetry events. The *ProcessHubToDTEvents* Azure function uses the information in these events to set the `Temperature` property on thermostat67 (**arrow B** in the diagram). 3. Property change events in Azure Digital Twins are routed to an Event Grid topic, where the *ProcessDTRoutedData* Azure function is listening for events. The *ProcessDTRoutedData* Azure function uses the information in these events to set the `Temperature` property on room21 (**arrow C** in the diagram). ## Clean up resources
event-grid Create View Manage Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-namespaces.md
If you already created a namespace and want to increase or decrease TUs, follow
:::image type="content" source="media/create-view-manage-namespaces/namespace-scale.png" alt-text="Screenshot showing Event Grid scale page.":::
+ > [!NOTE]
+ > For quotas and limits for resources in a namespace, including the maximum number of TUs, see [Azure Event Grid quotas and limits](quotas-limits.md).
+ ## Delete a namespace 1. Follow instructions from the [View a namespace](#view-a-namespace) section to view all the namespaces, and select the namespace that you want to delete from the list.
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
event-hubs Event Hubs About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-about.md
Last updated 03/07/2023
-# Azure Event Hubs - A big data streaming platform and event ingestion service
+# What is Azure Event Hubs? - A big data streaming platform and event ingestion service
Event Hubs is a modern big data streaming platform and event ingestion service that can seamlessly integrate with other Azure and Microsoft services, such as Stream Analytics, Power BI, and Event Grid, along with outside services like Apache Spark. The service can process millions of events per second with low latency. The data sent to an event hub (Event Hubs instance) can be transformed and stored by using any real-time analytics providers or batching or storage adapters. ## Why use Event Hubs? Data is valuable only when there's an easy way to process and get timely insights from data sources. Event Hubs provides a distributed stream processing platform with low latency and seamless integration, with data and analytics services inside and outside Azure to build your complete big data pipeline.
-Event Hubs represents the "front door" for an event pipeline, often called an **event ingestor** in solution architectures. An event ingestor is a component or service that sits between event publishers and event consumers to decouple the production of an event stream from the consumption of those events. Event Hubs provides a unified streaming platform with time retention buffer, decoupling event producers from event consumers.
+Event Hubs represents the "front door" for an event pipeline, often called an **event ingestor** in solution architectures. An event ingestor is a component or service that sits between event publishers and event consumers to decouple the production of events from the consumption of those events. Event Hubs provides a unified streaming platform with time retention buffer, decoupling event producers from event consumers.
The following sections describe key features of the Azure Event Hubs service:
The following sections describe key features of the Azure Event Hubs service:
Event Hubs is a fully managed Platform-as-a-Service (PaaS) with little configuration or management overhead, so you focus on your business solutions. [Event Hubs for Apache Kafka ecosystems](azure-event-hubs-kafka-overview.md) gives you the PaaS Kafka experience without having to manage, configure, or run your clusters. ## Event Hubs for Apache Kafka
-[Event Hubs for Apache Kafka ecosystems](azure-event-hubs-kafka-overview.md) furthermore enables [Apache Kafka (1.0 and later)](https://kafka.apache.org/) clients and applications to talk to Event Hubs. You don't need to set up, configure, and manage your own Kafka and Zookeeper clusters or use some Kafka-as-a-Service offering not native to Azure.
+Azure Event Hubs for Apache Kafka ecosystems enables [Apache Kafka (1.0 and later)](https://kafka.apache.org/) clients and applications to talk to Event Hubs. You don't need to set up, configure, and manage your own Kafka and Zookeeper clusters or use some Kafka-as-a-Service offering not native to Azure. For more information, see [Event Hubs for Apache Kafka ecosystems](azure-event-hubs-kafka-overview.md).
## Schema Registry in Azure Event Hubs
-[Azure Schema Registry](schema-registry-overview.md) in Event Hubs provides a centralized repository for managing schemas of events streaming applications. Azure Schema Registry comes free with every Event Hubs namespace, and it integrates seamlessly with you Kafka applications or Event Hubs SDK based applications.
+Schema Registry in Event Hubs provides a centralized repository for managing schemas of events streaming applications. Azure Schema Registry comes free with every Event Hubs namespace, and it integrates seamlessly with your Kafka applications or Event Hubs SDK based applications.
-It ensures data compatibility and consistency across event producers and consumers, enabling seamless schema evolution, validation, and governance, and promoting efficient data exchange and interoperability.
+It ensures data compatibility and consistency across event producers and consumers, enabling seamless schema evolution, validation, and governance, and promoting efficient data exchange and interoperability. For more information, see [Schema Registry in Azure Event Hubs](schema-registry-overview.md).
## Support for real-time and batch processing Ingest, buffer, store, and process your stream in real time to get actionable insights. Event Hubs uses a [partitioned consumer model](event-hubs-scalability.md#partitions), enabling multiple applications to process the stream concurrently and letting you control the speed of processing. Azure Event Hubs also integrates with [Azure Functions](../azure-functions/index.yml) for a serverless architecture. ## Capture event data
-[Capture](event-hubs-capture-overview.md) your data in near-real time in an [Azure Blob storage](https://azure.microsoft.com/services/storage/blobs/) or [Azure Data Lake Storage](https://azure.microsoft.com/services/data-lake-store/) for long-term retention or micro-batch processing. You can achieve this behavior on the same stream you use for deriving real-time analytics. Setting up capture of event data is fast. There are no administrative costs to run it, and it scales automatically with Event Hubs [throughput units](event-hubs-scalability.md#throughput-units) or [processing units](event-hubs-scalability.md#processing-units). Event Hubs enables you to focus on data processing rather than on data capture.
+Capture your data in near-real time in an [Azure Blob storage](https://azure.microsoft.com/services/storage/blobs/) or [Azure Data Lake Storage](https://azure.microsoft.com/services/data-lake-store/) for long-term retention or micro-batch processing. You can achieve this behavior on the same stream you use for deriving real-time analytics. Setting up capture of event data is fast. There are no administrative costs to run it, and it scales automatically with Event Hubs [throughput units](event-hubs-scalability.md#throughput-units) or [processing units](event-hubs-scalability.md#processing-units). Event Hubs enables you to focus on data processing rather than on data capture. For more information, see [Event Hubs Capture](event-hubs-capture-overview.md).
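+
+For example, capture can be turned on when you create an event hub with the Azure CLI. This is a minimal sketch that assumes the capture-related flags on `az eventhubs eventhub create`; verify the flag names with `--help`, and treat the values as illustrative.
+
+```azurecli-interactive
+# Create an event hub that captures to a blob container every 300 seconds or 300 MB, whichever comes first (flags assumed; verify with --help).
+az eventhubs eventhub create --resource-group <resource-group> --namespace-name <namespace-name> --name <eventhub-name> \
+    --enable-capture true --capture-interval 300 --capture-size-limit 314572800 \
+    --destination-name EventHubArchive.AzureBlockBlob --storage-account <storage-account-resource-id> --blob-container <container-name>
+```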
## Scalable
-With Event Hubs, you can start with data streams in megabytes, and grow to gigabytes or terabytes. The [Auto-inflate](event-hubs-auto-inflate.md) feature is one of the many options available to scale the number of throughput units or processing units to meet your usage needs.
+With Event Hubs, you can start with data streams in megabytes, and grow to gigabytes or terabytes. The [Autoinflate](event-hubs-auto-inflate.md) feature is one of the many options available to scale the number of throughput units or processing units to meet your usage needs.
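+
+For example, Autoinflate can be enabled when you create a Standard namespace with the Azure CLI. A minimal sketch with illustrative values:
+
+```azurecli-interactive
+# Create a Standard namespace that starts at 2 throughput units and can automatically inflate up to 10.
+az eventhubs namespace create --resource-group <resource-group> --name <namespace-name> --location <location> \
+    --sku Standard --capacity 2 --enable-auto-inflate true --maximum-throughput-units 10
+```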
## Rich ecosystem With a broad ecosystem available for the industry-standard AMQP 1.0 protocol and SDKs available in various languages: [.NET](https://github.com/Azure/azure-sdk-for-net/), [Java](https://github.com/Azure/azure-sdk-for-java/), [Python](https://github.com/Azure/azure-sdk-for-python/), [JavaScript](https://github.com/Azure/azure-sdk-for-js/), you can easily start processing your streams from Event Hubs. All supported client languages provide low-level integration. The ecosystem also provides you with seamless integration with Azure services like Azure Stream Analytics and Azure Functions and thus enables you to build serverless architectures. ## Event Hubs premium and dedicated
-Event Hubs **premium** caters to high-end streaming needs that require superior performance, better isolation with predictable latency and minimal interference in a managed multitenant PaaS environment. On top of all the features of the standard offering, the premium tier offers several extra features such as [dynamic partition scale up](dynamically-add-partitions.md), extended retention, and [customer-managed-keys](configure-customer-managed-key.md). For more information, see [Event Hubs Premium](event-hubs-premium-overview.md).
+Event Hubs **premium** caters to high-end streaming needs that require superior performance, better isolation with predictable latency, and minimal interference in a managed multitenant PaaS environment. On top of all the features of the standard offering, the premium tier offers several extra features such as [dynamic partition scale up](dynamically-add-partitions.md), extended retention, and [customer-managed-keys](configure-customer-managed-key.md). For more information, see [Event Hubs Premium](event-hubs-premium-overview.md).
Event Hubs **dedicated** tier offers single-tenant deployments for customers with the most demanding streaming needs. This single-tenant offering has a guaranteed 99.99% SLA and is available only on our dedicated pricing tier. An Event Hubs cluster can ingress millions of events per second with guaranteed capacity and subsecond latency. Namespaces and event hubs created within the dedicated cluster include all features of the premium offering and more. For more information, see [Event Hubs Dedicated](event-hubs-dedicated-overview.md).
Event Hubs contains the following key components.
| Component | Description | | | -- |
-| Event producers | Any entity that sends data to an event hub. Event publishers can publish events using HTTPS or AMQP 1.0 or Apache Kafka (1.0 and above). |
+| Event producers | Any entity that sends data to an event hub. Event publishers can publish events using HTTPS or AMQP 1.0 or Apache Kafka (1.0 and higher). |
| Partitions | Each consumer only reads a specific subset, or a partition, of the message stream. | | Consumer groups | A view (state, position, or offset) of an entire event hub. Consumer groups enable consuming applications to each have a separate view of the event stream. They read the stream independently at their own pace and with their own offsets. | | Event receivers | Any entity that reads event data from an event hub. All Event Hubs consumers connect via the AMQP 1.0 session. The Event Hubs service delivers events through a session as they become available. All Kafka consumers connect via the Kafka protocol 1.0 and later. |
-| [Throughput units (standard tier)](event-hubs-scalability.md#throughput-units) or [processing units (premium tier)](event-hubs-scalability.md#processing-units) or [capacity units (dedicated)](event-hubs-dedicated-overview.md) | Pre-purchased units of capacity that control the throughput capacity of Event Hubs. |
+| [Throughput units (standard tier)](event-hubs-scalability.md#throughput-units) or [processing units (premium tier)](event-hubs-scalability.md#processing-units) or [capacity units (dedicated)](event-hubs-dedicated-overview.md) | Prepurchased units of capacity that control the throughput capacity of Event Hubs. |
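+
+These components map directly onto the resources you create. A minimal sketch with the Azure CLI (names are placeholders):
+
+```azurecli-interactive
+# Create an event hub with four partitions, plus a consumer group for one reading application.
+az eventhubs eventhub create --resource-group <resource-group> --namespace-name <namespace-name> --name <eventhub-name> --partition-count 4
+az eventhubs eventhub consumer-group create --resource-group <resource-group> --namespace-name <namespace-name> --eventhub-name <eventhub-name> --name <consumer-group-name>
+```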
The following figure shows the Event Hubs stream processing architecture: ![Event Hubs](./media/event-hubs-about/event_hubs_architecture.png)
The following figure shows the Event Hubs stream processing architecture:
> [!NOTE] > For more information, see [Event Hubs features or components](event-hubs-features.md). - ## Next steps To get started using Event Hubs, see the **Send and receive events** tutorials:
To get started using Event Hubs, see the **Send and receive events** tutorials:
- [Python](event-hubs-python-get-started-send.md) - [JavaScript](event-hubs-node-get-started-send.md) - [Go](event-hubs-go-get-started-send.md)-- [C (send only)](event-hubs-c-getstarted-send.md)-- [Apache Storm (receive only)](event-hubs-storm-getstarted-receive.md)
+- [C](event-hubs-c-getstarted-send.md) (send only)
+- [Apache Storm](event-hubs-storm-getstarted-receive.md) (receive only)
To learn more about Event Hubs, see the following articles:
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
Title: 'Quickstart: Send or receive events using .NET'
-description: A quickstart to create a .NET Core application that sends events to Azure Event Hubs and then receive those events by using the Azure.Messaging.EventHubs package.
+description: A quickstart that shows you how to create a .NET Core application that sends events to and receives events from Azure Event Hubs.
Last updated 03/09/2023
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/06/2023 Last updated : 09/13/2023
firewall Explicit Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/explicit-proxy.md
With the Explicit proxy mode (supported for HTTP/S), you can define proxy settin
- First, upload the PAC file to a storage container that you create. Then, on the **Enable explicit proxy** page, configure the shared access signature (SAS) URL. Configure the port where the PAC is served from, and then select **Apply** at the bottom of the page.
- The SAS URL must have READ permissions so the firewall can upload the file. If changes are made to the PAC file, a new SAS URL needs to be generated and configured on the firewall **Enable explicit proxy** page.
+ The SAS URL must have READ permissions so the firewall can download the file. If changes are made to the PAC file, a new SAS URL needs to be generated and configured on the firewall **Enable explicit proxy** page.
:::image type="content" source="media/explicit-proxy/shared-access-signature.png" alt-text="Screenshot showing generate shared access signature.":::
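+
+For example, you can generate a read-only SAS URL for the PAC blob with the Azure CLI. A minimal sketch; the file name is a placeholder and the expiry is illustrative, so adjust both for your environment:
+
+```azurecli-interactive
+# Generate a read-only SAS URL for the PAC file (placeholder names; choose a suitable expiry).
+az storage blob generate-sas --account-name <storage-account> --container-name <container-name> --name proxy.pac \
+    --permissions r --expiry 2024-01-01T00:00Z --https-only --full-uri --output tsv
+```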
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 09/06/2023 Last updated : 09/13/2023
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 09/06/2023 Last updated : 09/13/2023
hdinsight Apache Hadoop Visual Studio Tools Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-visual-studio-tools-get-started.md
keywords: hadoop tools,hive query,visual studio,visual studio hadoop
Previously updated : 08/05/2022 Last updated : 09/13/2023 # Use Data Lake Tools for Visual Studio to connect to Azure HDInsight and run Apache Hive queries
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
Last updated 07/28/2023
Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure. If you would like to subscribe on release notes, watch releases on [this GitHub repository](https://github.com/Azure/HDInsight/releases).
+## Release date: July 25, 2023
+
+This release applies to HDInsight 4.x and 5.x. The release will be available to all regions over several days, and is applicable for image number **2307201242**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+
+HDInsight uses safe deployment practices, which involve gradual region deployment. It may take up to 10 business days for a new release or a new version to be available in all regions.
+
+**OS versions**
+
+* HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+* HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+* HDInsight 5.1: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+
+For workload specific versions, see
+
+* [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)
+* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
+
+## ![Icon showing Whats new.](./media/hdinsight-release-notes/whats-new.svg) What's new
+* HDInsight 5.1 is now supported with ESP cluster.
+* Upgraded versions of Ranger (2.3.0) and Oozie (5.2.1) are now part of HDInsight 5.1.
+* The Spark 3.3.1 (HDInsight 5.1) cluster comes with Hive Warehouse Connector (HWC) 2.1, which works together with the Interactive Query (HDInsight 5.1) cluster.
+
+> [!IMPORTANT]
+> This release addresses the following CVEs released by [MSRC](https://msrc.microsoft.com/update-guide/vulnerability) on August 8, 2023. The action is to update to the latest image **2307201242**. Customers are advised to plan accordingly.
+
+|CVE | Severity| CVE Title|
+|-|-|-|
+|[CVE-2023-35393](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35393)| Important|Azure Apache Hive Spoofing Vulnerability|
+|[CVE-2023-35394](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35394)| Important|Azure HDInsight Jupyter Notebook Spoofing Vulnerability|
+|[CVE-2023-36877](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36877)| Important|Azure Apache Oozie Spoofing Vulnerability|
+|[CVE-2023-36881](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36881)| Important|Azure Apache Ambari Spoofing Vulnerability|
+|[CVE-2023-38188](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-38188)| Important|Azure Apache Hadoop Spoofing Vulnerability|
+
+
+## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon
+
+* The maximum length of a cluster name will be changed from 59 to 45 characters, to improve the security posture of clusters. Customers need to plan for the update before September 30, 2023.
+* Cluster permissions for secure storage
+ * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to contact the storage account.
+* In-line quota update.
+  * Request quota increases directly from the My Quota page through a direct API call, which is faster. If the API call fails, customers need to create a new support request for the quota increase.
+* HDInsight Cluster Creation with Custom VNets.
+  * To improve the overall security posture of HDInsight clusters, users who create HDInsight clusters in custom virtual networks need the `Microsoft.Network/virtualNetworks/subnets/join/action` permission to perform create operations. This will be a mandatory check before September 30, 2023; customers should plan accordingly to avoid cluster creation failures.
+* Basic and Standard A-series VMs Retirement.
+  * On 31 August 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before 31 August 2024.
+* Non-ESP ABFS clusters [Cluster Permissions for World Readable]
+  * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change improves cluster security posture. Customers need to plan for the update before September 30, 2023.
+
+If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+You can always ask us about HDInsight on [Azure HDInsight - Microsoft Q&A](/answers/tags/168/azure-hdinsight).
+
+You're welcome to add more proposals, ideas, and other topics at [HDInsight Community (azure.com)](https://feedback.azure.com/d365community/search/?q=HDInsight) and vote for them. Follow us for more updates on [Twitter](https://twitter.com/AzureHDInsight).
+
+ > [!NOTE]
+ > We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open-source updates, Azure updates, and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
+ ## Release date: May 08, 2023 This release applies to HDInsight 4.x and 5.x HDInsight release is available to all regions over several days. This release is applicable for image number **2304280205**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
For workload specific versions, see
1. **Quota Management for HDInsight**
- HDInsight currently allocates quota to customer subscriptions at a regional level. The cores allocated to customers are generic and not classified at a VM family level (For example, Dv2, Ev3, Eav4, etc.).
+ HDInsight currently allocates quota to customer subscriptions at a regional level. The cores allocated to customers are generic and not classified at a VM family level (For example, `Dv2`, `Ev3`, `Eav4`, etc.).
HDInsight introduced an improved view that provides detail and classification of quotas at the VM family level. This feature allows customers to view current and remaining quotas for a region at the VM family level. With the enhanced view, customers have richer visibility for planning quotas and a better user experience. This feature is currently available on HDInsight 4.x and 5.x for the East US EUAP region. Other regions to follow later.
For more information, see [HDInsight 5.1.0 version](./hdinsight-51-component-ver
* Upgraded Zookeeper to 3.6.3 * Kafka Streams support * Stronger delivery guarantees for the Kafka producer enabled by default.
- * log4j 1.x replaced with reload4j.
+ * `log4j` 1.x replaced with `reload4j`.
 * Send a hint to the partition leader to recover the partition. * `JoinGroupRequest` and `LeaveGroupRequest` have a reason attached. * Added broker count metrics.
- * Mirror Maker2 improvements.
+ * Mirror `Maker2` improvements.
**HBase 2.4.11 Upgrade (Preview)** * This version has new features such as the addition of new caching mechanism types for block cache, and the ability to alter the `hbase:meta` table and view the `hbase:meta` table from the HBase web UI.
For workload specific versions, see [here.](./hdinsight-40-component-versioning.
![Icon showing what's changed with text.](media/hdinsight-release-notes/new-icon-for-changed.png)
-* HDInsight has moved away from Azul Zulu Java JDK 8 to Adoptium Temurin JDK 8, which supports high-quality TCK certified runtimes, and associated technology for use across the Java ecosystem.
+* HDInsight has moved away from Azul Zulu Java JDK 8 to `Adoptium Temurin JDK 8`, which supports high-quality TCK certified runtimes, and associated technology for use across the Java ecosystem.
-* HDInsight has migrated to reload4j. The log4j changes are applicable to
+* HDInsight has migrated to `reload4j`. The `log4j` changes are applicable to
* Apache Hadoop * Apache Zookeeper
For more information on how to check Ubuntu version of cluster, see [here](https
|[HIVE-26127](https://issues.apache.org/jira/browse/HIVE-26127)| INSERT OVERWRITE error - File Not Found| |[HIVE-24957](https://issues.apache.org/jira/browse/HIVE-24957)| Wrong results when subquery has COALESCE in correlation predicate| |[HIVE-24999](https://issues.apache.org/jira/browse/HIVE-24999)| HiveSubQueryRemoveRule generates invalid plan for IN subquery with multiple correlations|
-|[HIVE-24322](https://issues.apache.org/jira/browse/HIVE-24322)| If there's direct insert, the attempt ID has to be checked when reading the manifest fails|
+|[HIVE-24322](https://issues.apache.org/jira/browse/HIVE-24322)| If there is direct insert, the attempt ID has to be checked when reading the manifest fails|
|[HIVE-23363](https://issues.apache.org/jira/browse/HIVE-23363)| Upgrade DataNucleus dependency to 5.2 | |[HIVE-26412](https://issues.apache.org/jira/browse/HIVE-26412)| Create interface to fetch available slots and add the default| |[HIVE-26173](https://issues.apache.org/jira/browse/HIVE-26173)| Upgrade derby to 10.14.2.0|
-|[HIVE-25920](https://issues.apache.org/jira/browse/HIVE-25920)| Bump Xerce2 to 2.12.2.|
+|[HIVE-25920](https://issues.apache.org/jira/browse/HIVE-25920)| Bump `Xerce2` to 2.12.2.|
|[HIVE-26300](https://issues.apache.org/jira/browse/HIVE-26300)| Upgrade Jackson data bind version to 2.12.6.1+ to avoid CVE-2020-36518| ## Release date: 08/10/2022
https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/OMSUPGRADE14.
### Other bug fixes
-1. Yarn logΓÇÖs CLI failed to retrieve the logs if any TFile is corrupt or empty.
+1. Yarn log's CLI failed to retrieve the logs if any `TFile` is corrupt or empty.
2. Resolved invalid service principal details error while getting the OAuth token from Azure Active Directory. 3. Improved cluster creation reliability when 100+ worker nodes are configured.
https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/OMSUPGRADE14.
|Bug Fixes|Apache JIRA| ||| |Tez Build Failure: FileSaver.js not found|[TEZ-4411](https://issues.apache.org/jira/browse/TEZ-4411)|
-|Wrong FS Exception when warehouse and scratchdir are on different FS|[TEZ-4406](https://issues.apache.org/jira/browse/TEZ-4406)|
+|Wrong FS Exception when warehouse and `scratchdir` are on different FS|[TEZ-4406](https://issues.apache.org/jira/browse/TEZ-4406)|
|TezUtils.createConfFromByteString on Configuration larger than 32 MB throws com.google.protobuf.CodedInputStream exception|[TEZ-4142](https://issues.apache.org/jira/browse/TEZ-4142)| |TezUtils::createByteStringFromConf should use snappy instead of DeflaterOutputStream|[TEZ-4113](https://issues.apache.org/jira/browse/TEZ-4411)| |Update protobuf dependency to 3.x|[TEZ-4363](https://issues.apache.org/jira/browse/TEZ-4363)|
https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/OMSUPGRADE14.
### Other bug fixes
-1. Yarn logΓÇÖs CLI failed to retrieve the logs if any TFile is corrupt or empty.
+1. Yarn log's CLI failed to retrieve the logs if any `TFile` is corrupt or empty.
2. Resolved invalid service principal details error while getting the OAuth token from Azure Active Directory. 3. Improved cluster creation reliability when 100+ worker nodes are configured.
https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/OMSUPGRADE14.
|Bug Fixes|Apache JIRA| ||| |Tez Build Failure: FileSaver.js not found|[TEZ-4411](https://issues.apache.org/jira/browse/TEZ-4411)|
-|Wrong FS Exception when warehouse and scratchdir are on different FS|[TEZ-4406](https://issues.apache.org/jira/browse/TEZ-4406)|
+|Wrong FS Exception when warehouse and `scratchdir` are on different FS|[TEZ-4406](https://issues.apache.org/jira/browse/TEZ-4406)|
|TezUtils.createConfFromByteString on Configuration larger than 32 MB throws com.google.protobuf.CodedInputStream exception|[TEZ-4142](https://issues.apache.org/jira/browse/TEZ-4142)| |TezUtils::createByteStringFromConf should use snappy instead of DeflaterOutputStream|[TEZ-4113](https://issues.apache.org/jira/browse/TEZ-4411)| |Update protobuf dependency to 3.x|[TEZ-4363](https://issues.apache.org/jira/browse/TEZ-4363)|
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Bug Fixes|Apache JIRA| |||
-|TableSnapshotInputFormat should use ReadType.STREAM for scanning HFiles |[HBASE-26273](https://issues.apache.org/jira/browse/HBASE-26273)|
+|TableSnapshotInputFormat should use ReadType.STREAM for scanning `HFiles` |[HBASE-26273](https://issues.apache.org/jira/browse/HBASE-26273)|
|Add option to disable scanMetrics in TableSnapshotInputFormat |[HBASE-26330](https://issues.apache.org/jira/browse/HBASE-26330)| |Fix for ArrayIndexOutOfBoundsException when balancer is executed |[HBASE-22739](https://issues.apache.org/jira/browse/HBASE-22739)|
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Include MultiDelimitSerDe in HiveServer2 By Default|[HIVE-20619](https://issues.apache.org/jira/browse/HIVE-20619)| | Remove glassfish.jersey and mssql-jdbc classes from jdbc-standalone jar|[HIVE-22134](https://issues.apache.org/jira/browse/HIVE-22134)| | Null pointer exception on running compaction against an MM table.|[HIVE-21280](https://issues.apache.org/jira/browse/HIVE-21280)|
-| Hive query with large size via knox fails with Broken pipe Write failed|[HIVE-22231](https://issues.apache.org/jira/browse/HIVE-22231)|
+| Hive query with large size via `knox` fails with Broken pipe Write failed|[HIVE-22231](https://issues.apache.org/jira/browse/HIVE-22231)|
| Adding ability for user to set bind user|[HIVE-21009](https://issues.apache.org/jira/browse/HIVE-21009)| | Implement UDF to interpret date/timestamp using its internal representation and Gregorian-Julian hybrid calendar|[HIVE-22241](https://issues.apache.org/jira/browse/HIVE-22241)| | Beeline option to show/not show execution report|[HIVE-22204](https://issues.apache.org/jira/browse/HIVE-22204)| | Tez: SplitGenerator tries to look for plan files, which doesn't exist for Tez|[HIVE-22169](https://issues.apache.org/jira/browse/HIVE-22169)|
-| Remove expensive logging from the LLAP cache hotpath|[HIVE-22168](https://issues.apache.org/jira/browse/HIVE-22168)|
+| Remove expensive logging from the LLAP cache `hotpath`|[HIVE-22168](https://issues.apache.org/jira/browse/HIVE-22168)|
| UDF: FunctionRegistry synchronizes on org.apache.hadoop.hive.ql.udf.UDFType class|[HIVE-22161](https://issues.apache.org/jira/browse/HIVE-22161)| | Prevent the creation of query routing appender if property is set to false|[HIVE-22115](https://issues.apache.org/jira/browse/HIVE-22115)| | Remove cross-query synchronization for the partition-eval|[HIVE-22106](https://issues.apache.org/jira/browse/HIVE-22106)| | Skip setting up hive scratch dir during planning|[HIVE-21182](https://issues.apache.org/jira/browse/HIVE-21182)| | Skip creating scratch dirs for tez if RPC is on|[HIVE-21171](https://issues.apache.org/jira/browse/HIVE-21171)|
-| switch Hive UDFs to use Re2J regex engine|[HIVE-19661](https://issues.apache.org/jira/browse/HIVE-19661)|
+| switch Hive UDFs to use `Re2J` regex engine|[HIVE-19661](https://issues.apache.org/jira/browse/HIVE-19661)|
| Migrated clustered tables using bucketing_version 1 on hive 3 uses bucketing_version 2 for inserts|[HIVE-22429](https://issues.apache.org/jira/browse/HIVE-22429)| | Bucketing: Bucketing version 1 is incorrectly partitioning data|[HIVE-21167](https://issues.apache.org/jira/browse/HIVE-21167)| | Adding ASF License header to the newly added file|[HIVE-22498](https://issues.apache.org/jira/browse/HIVE-22498)|
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| MultiDelimitSerDe returns wrong results in last column when the loaded file has more columns than the one is present in table schema|[HIVE-22360](https://issues.apache.org/jira/browse/HIVE-22360)| | LLAP external client - Need to reduce LlapBaseInputFormat#getSplits() footprint|[HIVE-22221](https://issues.apache.org/jira/browse/HIVE-22221)| | Column name with reserved keyword is unescaped when query including join on table with mask column is rewritten (Zoltan Matyus via Zoltan Haindrich)|[HIVE-22208](https://issues.apache.org/jira/browse/HIVE-22208)|
-|Prevent LLAP shutdown on AMReporter related RuntimeException|[HIVE-22113](https://issues.apache.org/jira/browse/HIVE-22113)|
+|Prevent LLAP shutdown on `AMReporter` related RuntimeException|[HIVE-22113](https://issues.apache.org/jira/browse/HIVE-22113)|
| LLAP status service driver may get stuck with wrong Yarn app ID|[HIVE-21866](https://issues.apache.org/jira/browse/HIVE-21866)| | OperationManager.queryIdOperation doesn't properly clean up multiple queryIds|[HIVE-22275](https://issues.apache.org/jira/browse/HIVE-22275)| | Bringing a node manager down blocks restart of LLAP service|[HIVE-22219](https://issues.apache.org/jira/browse/HIVE-22219)|
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Remove distribution management tag from pom.xml|[HIVE-19667](https://issues.apache.org/jira/browse/HIVE-19667)| | Parsing time can be high if there's deeply nested subqueries|[HIVE-21980](https://issues.apache.org/jira/browse/HIVE-21980)| | For ALTER TABLE t SET TBLPROPERTIES ('EXTERNAL'='TRUE'); `TBL_TYPE` attribute changes not reflecting for non-CAPS|[HIVE-20057](https://issues.apache.org/jira/browse/HIVE-20057)|
-| JDBC: HiveConnection shades log4j interfaces|[HIVE-18874](https://issues.apache.org/jira/browse/HIVE-18874)|
-| Update repo URLs in poms - branch 3.1 version|[HIVE-21786](https://issues.apache.org/jira/browse/HIVE-21786)|
-| DBInstall tests broken on master and branch-3.1|[HIVE-21758](https://issues.apache.org/jira/browse/HIVE-21758)|
+| JDBC: HiveConnection shades `log4j` interfaces|[HIVE-18874](https://issues.apache.org/jira/browse/HIVE-18874)|
+| Update repo URLs in `poms` - branch 3.1 version|[HIVE-21786](https://issues.apache.org/jira/browse/HIVE-21786)|
+| `DBInstall` tests broken on master and branch-3.1|[HIVE-21758](https://issues.apache.org/jira/browse/HIVE-21758)|
| Load data into a bucketed table is ignoring partitions specs and loads data into default partition|[HIVE-21564](https://issues.apache.org/jira/browse/HIVE-21564)| | Queries with join condition having timestamp or timestamp with local time zone literal throw SemanticException|[HIVE-21613](https://issues.apache.org/jira/browse/HIVE-21613)| | Analyze compute stats for column leave behind staging dir on HDFS|[HIVE-21342](https://issues.apache.org/jira/browse/HIVE-21342)|
For more information on migration, see the [migration guide.](https://spark.apac
### Kafka 2.4 is now generally available Kafka 2.4.1 is now Generally Available. For more information, please see [Kafka 2.4.1 Release Notes.](http://kafka.apache.org/24/documentation.html)
-Other features include MirrorMaker 2 availability, new metric category AtMinIsr topic partition, Improved broker start-up time by lazy on demand mmap of index files, More consumer metrics to observe user poll behavior.
+Other features include MirrorMaker 2 availability, a new metric category for AtMinIsr topic partitions, improved broker startup time through lazy on-demand `mmap` of index files, and more consumer metrics to observe user poll behavior.
### Map Datatype in HWC is now supported in HDInsight 4.0
OSS backports that are included in Hive including HWC 1.0 (Spark 2.4) which supp
| Impacted Feature | Apache JIRA | ||--| | Metastore direct sql queries with IN/(NOT IN) should be split based on max parameters allowed by SQL DB | [HIVE-25659](https://issues.apache.org/jira/browse/HIVE-25659) |
-| Upgrade log4j 2.16.0 to 2.17.0 | [HIVE-25825](https://issues.apache.org/jira/browse/HIVE-25825) |
-| Update Flatbuffer version | [HIVE-22827](https://issues.apache.org/jira/browse/HIVE-22827) |
+| Upgrade `log4j` 2.16.0 to 2.17.0 | [HIVE-25825](https://issues.apache.org/jira/browse/HIVE-25825) |
+| Update `Flatbuffer` version | [HIVE-22827](https://issues.apache.org/jira/browse/HIVE-22827) |
| Support Map data-type natively in Arrow format | [HIVE-25553](https://issues.apache.org/jira/browse/HIVE-25553) | | LLAP external client - Handle nested values when the parent struct is null | [HIVE-25243](https://issues.apache.org/jira/browse/HIVE-25243) | | Upgrade arrow version to 0.11.0 | [HIVE-23987](https://issues.apache.org/jira/browse/HIVE-23987) |
HDInsight will no longer use Azure Virtual Machine Scale Sets to provision the c
#### Scaling of Azure HDInsight HBase workloads will now be supported only using manual scale
-Starting from March 01, 2022, HDInsight will only support manual scale for HBase, there's no impact on running clusters. New HBase clusters won't be able to enable schedule based Autoscaling. For more information on how to  manually scale your HBase cluster, refer our documentation on [Manually scaling Azure HDInsight clusters](./hdinsight-scaling-best-practices.md)
+Starting from March 01, 2022, HDInsight will only support manual scale for HBase; there's no impact on running clusters. New HBase clusters won't be able to enable schedule-based autoscaling. For more information on how to manually scale your HBase cluster, see [Manually scaling Azure HDInsight clusters](./hdinsight-scaling-best-practices.md).
## Release date: 12/27/2021
This release applies for HDInsight 4.0. HDInsight release is made available to a
The OS versions for this release are: - HDInsight 4.0: Ubuntu 18.04.5 LTS
-HDInsight 4.0 image has been updated to mitigate Log4j vulnerability as described in [MicrosoftΓÇÖs Response to CVE-2021-44228 Apache Log4j 2.](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/)
+HDInsight 4.0 image has been updated to mitigate the `Log4j` vulnerability as described in [Microsoft's Response to CVE-2021-44228 Apache Log4j 2](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/).
> [!Note]
-> * Any HDI 4.0 clusters created post 27 Dec 2021 00:00 UTC are created with an updated version of the image which mitigates the log4j vulnerabilities. Hence, customers need not patch/reboot these clusters.
+> * Any HDI 4.0 clusters created post 27 Dec 2021 00:00 UTC are created with an updated version of the image which mitigates the `log4j` vulnerabilities. Hence, customers need not patch/reboot these clusters.
> * For new HDInsight 4.0 clusters created between 16 Dec 2021 at 01:15 UTC and 27 Dec 2021 00:00 UTC, for HDInsight 3.6, or in pinned subscriptions after 16 Dec 2021, the patch is auto-applied within the hour in which the cluster is created. However, customers must then reboot their nodes for the patching to complete (except for Kafka Management nodes, which are automatically rebooted). ## Release date: 07/27/2021
HDInsight 4.0 ESP Spark cluster has built-in LLAP components running on both hea
### New region - West US 3-- Jio India West
+- `Jio` India West
- Australia Central ### Component version change
Here are the back ported Apache JIRAs for this release:
| | [HIVE-23046](https://issues.apache.org/jira/browse/HIVE-23046) | | Materialized view | [HIVE-22566](https://issues.apache.org/jira/browse/HIVE-22566) |
-### Price Correction for HDInsight Dv2 Virtual Machines
+### Price Correction for HDInsight `Dv2` Virtual Machines
-A pricing error was corrected on April 25, 2021, for the Dv2 VM series on HDInsight. The pricing error resulted in a reduced charge on some customer's bills prior to April 25, and with the correction, prices now match what had been advertised on the HDInsight pricing page and the HDInsight pricing calculator. The pricing error impacted customers in the following regions who used Dv2 VMs:
+A pricing error was corrected on April 25, 2021, for the `Dv2` VM series on HDInsight. The pricing error resulted in a reduced charge on some customer's bills prior to April 25, and with the correction, prices now match what had been advertised on the HDInsight pricing page and the HDInsight pricing calculator. The pricing error impacted customers in the following regions who used `Dv2` VMs:
- Canada Central - Canada East
A pricing error was corrected on April 25, 2021, for the Dv2 VM series on HDInsi
- Southeast Asia - UAE Central
-Starting on April 25, 2021, the corrected amount for the Dv2 VMs will be on your account. Customer notifications were sent to subscription owners prior to the change. You can use the Pricing calculator, HDInsight pricing page, or the Create HDInsight cluster blade in the Azure portal to see the corrected costs for Dv2 VMs in your region.
+Starting on April 25, 2021, the corrected amount for the `Dv2` VMs will appear on your account. Customer notifications were sent to subscription owners prior to the change. You can use the Pricing calculator, HDInsight pricing page, or the Create HDInsight cluster blade in the Azure portal to see the corrected costs for `Dv2` VMs in your region.
-No other action is needed from you. The price correction will only apply for usage on or after April 25, 2021 in the specified regions, and not to any usage prior to this date. To ensure you have the most performant and cost-effective solution, we recommended that you review the pricing, VCPU, and RAM for your Dv2 clusters and compare the Dv2 specifications to the Ev3 VMs to see if your solution would benefit from utilizing one of the newer VM series.
+No other action is needed from you. The price correction will only apply for usage on or after April 25, 2021 in the specified regions, and not to any usage prior to this date. To ensure you have the most performant and cost-effective solution, we recommend that you review the pricing, vCPU, and RAM for your `Dv2` clusters and compare the `Dv2` specifications to the `Ev3` VMs to see if your solution would benefit from utilizing one of the newer VM series.
## Release date: 06/02/2021
HDInsight added [Spark 3.0.0](https://spark.apache.org/docs/3.0.0/) support to H
#### Kafka 2.4 preview HDInsight added [Kafka 2.4.1](http://kafka.apache.org/24/documentation.html) support to HDInsight 4.0 as a Preview feature.
-#### Eav4-series support
-HDInsight added Eav4-series support in this release.
+#### `Eav4`-series support
+HDInsight added `Eav4`-series support in this release.
#### Moving to Azure virtual machine scale sets HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
No deprecation in this release.
#### Default cluster version is changed to 4.0 The default version of HDInsight cluster is changed from 3.6 to 4.0. For more information about available versions, see [available versions](./hdinsight-component-versioning.md). Learn more about what is new in [HDInsight 4.0](./hdinsight-version-release.md).
-#### Default cluster VM sizes are changed to Ev3-series
-Default cluster VM sizes are changed from D-series to Ev3-series. This change applies to head nodes and worker nodes. To avoid this change impacting your tested workflows, specify the VM sizes that you want to use in the ARM template.
+#### Default cluster VM sizes are changed to `Ev3`-series
+Default cluster VM sizes are changed from D-series to `Ev3`-series. This change applies to head nodes and worker nodes. To avoid this change impacting your tested workflows, specify the VM sizes that you want to use in the ARM template.
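As an illustration of pinning sizes explicitly (all names and sizes below are hypothetical placeholders, not recommendations), the Azure CLI equivalent would be along these lines:

```azurecli
# Sketch: create a cluster with explicitly pinned head/worker node sizes so a
# change in service defaults can't alter the deployment. All values are placeholders.
az hdinsight create \
    --resource-group myResourceGroup \
    --name myCluster \
    --type hadoop \
    --headnode-size Standard_E8_v3 \
    --workernode-size Standard_E4_v3 \
    --workernode-count 4 \
    --http-user admin \
    --http-password 'ChangeMe123!' \
    --storage-account mystorageaccount
```

The same idea applies in an ARM template: set an explicit `vmSize` in the `hardwareProfile` of each role under `computeProfile` instead of relying on the defaults.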
#### Network interface resource not visible for clusters running on Azure virtual machine scale sets HDInsight is gradually migrating to Azure virtual machine scale sets. Network interfaces for virtual machines are no longer visible to customers for clusters that use Azure virtual machine scale sets.
HDInsight now uses Azure virtual machines to provision the cluster. The service
Starting from January 9, 2021, HDInsight will block all customers from creating clusters using standard_A8, standard_A9, standard_A10 and standard_A11 VM sizes. Existing clusters will run as is. Consider moving to HDInsight 4.0 to avoid potential system/support interruption. ### Behavior changes
-#### Default cluster VM size changes to Ev3-series
-Default cluster VM sizes will be changed from D-series to Ev3-series. This change applies to head nodes and worker nodes. To avoid this change impacting your tested workflows, specify the VM sizes that you want to use in the ARM template.
+#### Default cluster VM size changes to `Ev3`-series
+Default cluster VM sizes will be changed from D-series to `Ev3`-series. This change applies to head nodes and worker nodes. To avoid this change impacting your tested workflows, specify the VM sizes that you want to use in the ARM template.
#### Network interface resource not visible for clusters running on Azure virtual machine scale sets HDInsight is gradually migrating to Azure virtual machine scale sets. Network interfaces for virtual machines are no longer visible to customers for clusters that use Azure virtual machine scale sets.
A minimum 4-core VM is required for Head Node to ensure the high availability an
#### Cluster worker node provisioning change When 80% of the worker nodes are ready, the cluster enters the **operational** stage. At this stage, customers can do all the data plane operations like running scripts and jobs. But customers can't do any control plane operation like scaling up/down. Only deletion is supported.
-After the **operational** stage, the cluster waits another 60 minutes for the remaining 20% worker nodes. At the end of this 60 minutes, the cluster moves to the **running** stage, even if all of worker nodes are still not available. Once a cluster enters the **running** stage, you can use it as normal. Both control plan operations like scaling up/down, and data plan operations like running scripts and jobs are accepted. If some of the requested worker nodes aren't available, the cluster will be marked as partial success. You are charged for the nodes that were deployed successfully.
+After the **operational** stage, the cluster waits another 60 minutes for the remaining 20% worker nodes. At the end of this 60 minutes, the cluster moves to the **running** stage, even if all of the worker nodes are still not available. Once a cluster enters the **running** stage, you can use it as normal. Both control plane operations like scaling up/down, and data plane operations like running scripts and jobs are accepted. If some of the requested worker nodes aren't available, the cluster is marked as partial success. You're charged for the nodes that were deployed successfully.
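One way to observe these stage transitions is to poll the cluster state; a minimal sketch with the Azure CLI, assuming hypothetical names:

```azurecli
# Sketch: poll the provisioning state of a cluster every 30 seconds.
# Reported values include stages such as Operational and Running.
# myResourceGroup and myCluster are placeholder names.
while true; do
    az hdinsight show \
        --resource-group myResourceGroup \
        --name myCluster \
        --query properties.clusterState \
        --output tsv
    sleep 30
done
```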
#### Create new service principal through HDInsight
-Previously, with cluster creation, customers can create a new service principal to access the connected ADLS Gen 1 account in Azure portal. Starting June 15 2020, customers can't create new service principal in HDInsight creation workflow, only existing service principal is supported. See [Create Service Principal and Certificates using Azure Active Directory](../active-directory/develop/howto-create-service-principal-portal.md).
+Previously, during cluster creation, customers could create a new service principal to access the connected ADLS Gen 1 account in the Azure portal. Starting June 15, 2020, new service principal creation isn't possible in the HDInsight creation workflow; only an existing service principal is supported. See [Create Service Principal and Certificates using Azure Active Directory](../active-directory/develop/howto-create-service-principal-portal.md).
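An existing service principal can be prepared up front with the Azure CLI and then reused in the cluster creation workflow; a sketch, with a hypothetical display name:

```azurecli
# Sketch: create a service principal ahead of time (outside HDInsight), then
# look up its appId later when configuring cluster access to ADLS Gen 1.
# hdi-adls-sp is a placeholder display name.
az ad sp create-for-rbac --name hdi-adls-sp

# Retrieve the appId of the existing service principal when you need it again.
az ad sp list --display-name hdi-adls-sp --query "[].appId" --output tsv
```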
#### Time out for script actions with cluster creation HDInsight supports running script actions with cluster creation. From this release, all script actions with cluster creation must finish within **60 minutes**, or they time out. Script actions submitted to running clusters aren't impacted. Learn more details [here](./hdinsight-hadoop-customize-cluster-linux.md#script-action-in-the-cluster-creation-process).
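Because script actions submitted to running clusters aren't subject to this timeout, long-running setup work can instead be submitted after the cluster is created; a sketch with hypothetical names and script URI:

```azurecli
# Sketch: run a script action against an already-running cluster.
# myResourceGroup, myCluster, and the script URI are placeholders.
az hdinsight script-action execute \
    --resource-group myResourceGroup \
    --cluster-name myCluster \
    --name install-deps \
    --script-uri "https://example.com/scripts/install-deps.sh" \
    --roles headnode workernode \
    --persist-on-success
```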
F-series virtual machines (VMs) are a good choice to get started with HDInsight wi
#### G-series virtual machine deprecation From this release, G-series VMs are no longer offered in HDInsight.
-#### Dv1 virtual machine deprecation
-From this release, the use of Dv1 VMs with HDInsight is deprecated. Any customer request for Dv1 will be served with Dv2 automatically. There's no price difference between Dv1 and Dv2 VMs.
+#### `Dv1` virtual machine deprecation
+From this release, the use of `Dv1` VMs with HDInsight is deprecated. Any customer request for `Dv1` will be served with `Dv2` automatically. There's no price difference between `Dv1` and `Dv2` VMs.
### Behavior changes
This release provides Hadoop Common 2.7.3 and the following Apache patches:
- [HDFS-11384](https://issues.apache.org/jira/browse/HDFS-11384): Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike. -- [HDFS-11689](https://issues.apache.org/jira/browse/HDFS-11689): New exception thrown by DFSClient%isHDFSEncryptionEnabled broke hacky hive code.
+- [HDFS-11689](https://issues.apache.org/jira/browse/HDFS-11689): New exception thrown by `DFSClient%isHDFSEncryptionEnabled` broke `hacky` hive code.
- [HDFS-11711](https://issues.apache.org/jira/browse/HDFS-11711): DN shouldn't delete the block On "Too many open files" Exception. - [HDFS-12347](https://issues.apache.org/jira/browse/HDFS-12347): TestBalancerRPCDelay\#testBalancerRPCDelay fails frequently. -- [HDFS-12781](https://issues.apache.org/jira/browse/HDFS-12781): After Datanode down, In Namenode UI Datanode tab is throwing warning message.
+- [HDFS-12781](https://issues.apache.org/jira/browse/HDFS-12781): After `Datanode` down, In `Namenode` UI `Datanode` tab is throwing warning message.
-- [HDFS-13054](https://issues.apache.org/jira/browse/HDFS-13054): Handling PathIsNotEmptyDirectoryException in DFSClient delete call.
+- [HDFS-13054](https://issues.apache.org/jira/browse/HDFS-13054): Handling PathIsNotEmptyDirectoryException in `DFSClient` delete call.
- [HDFS-13120](https://issues.apache.org/jira/browse/HDFS-13120): Snapshot diff could be corrupted after concat. -- [YARN-3742](https://issues.apache.org/jira/browse/YARN-3742): YARN RM will shut down if ZKClient creation times out.
+- [YARN-3742](https://issues.apache.org/jira/browse/YARN-3742): YARN RM will shut down if `ZKClient` creation times out.
- [YARN-6061](https://issues.apache.org/jira/browse/YARN-6061): Add an UncaughtExceptionHandler for critical threads in RM.
This release provides Hadoop Common 2.7.3 and the following Apache patches:
HDP 2.6.4 provided Hadoop Common 2.7.3 and the following Apache patches: -- [HADOOP-13700](https://issues.apache.org/jira/browse/HADOOP-13700): Remove unthrown IOException from TrashPolicy\#initialize and \#getInstance signatures.
+- [HADOOP-13700](https://issues.apache.org/jira/browse/HADOOP-13700): Remove unthrown `IOException` from TrashPolicy\#initialize and \#getInstance signatures.
- [HADOOP-13709](https://issues.apache.org/jira/browse/HADOOP-13709): Ability to clean up subprocesses spawned by Shell when the process exits. -- [HADOOP-14059](https://issues.apache.org/jira/browse/HADOOP-14059): typo in s3a rename(self, subdir) error message.
+- [HADOOP-14059](https://issues.apache.org/jira/browse/HADOOP-14059): typo in `s3a` rename(self, subdir) error message.
- [HADOOP-14542](https://issues.apache.org/jira/browse/HADOOP-14542): Add IOUtils.cleanupWithLogger that accepts slf4j logger API.
This release provides HBase 1.1.2 and the following Apache patches.
- [HBASE-14473](https://issues.apache.org/jira/browse/HBASE-14473): Compute region locality in parallel. -- [HBASE-14517](https://issues.apache.org/jira/browse/HBASE-14517): Show regionserver's version in master status page.
+- [HBASE-14517](https://issues.apache.org/jira/browse/HBASE-14517): Show `regionserver's` version in master status page.
- [HBASE-14606](https://issues.apache.org/jira/browse/HBASE-14606): TestSecureLoadIncrementalHFiles tests timed out in trunk build on apache.
This release provides HBase 1.1.2 and the following Apache patches.
- [HBASE-15515](https://issues.apache.org/jira/browse/HBASE-15515): Improve LocalityBasedCandidateGenerator in Balancer. -- [HBASE-15615](https://issues.apache.org/jira/browse/HBASE-15615): Wrong sleep time when RegionServerCallable need retry.
+- [HBASE-15615](https://issues.apache.org/jira/browse/HBASE-15615): Wrong sleep time when `RegionServerCallable` need retry.
- [HBASE-16135](https://issues.apache.org/jira/browse/HBASE-16135): PeerClusterZnode under rs of removed peer may never be deleted. - [HBASE-16570](https://issues.apache.org/jira/browse/HBASE-16570): Compute region locality in parallel at startup. -- [HBASE-16810](https://issues.apache.org/jira/browse/HBASE-16810): HBase Balancer throws ArrayIndexOutOfBoundsException when regionservers are in /hbase/draining znode and unloaded.
+- [HBASE-16810](https://issues.apache.org/jira/browse/HBASE-16810): HBase Balancer throws ArrayIndexOutOfBoundsException when `regionservers` are in /hbase/draining znode and unloaded.
- [HBASE-16852](https://issues.apache.org/jira/browse/HBASE-16852): TestDefaultCompactSelection failed on branch-1.3.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-17419*](https://issues.apache.org/jira/browse/HIVE-17419): ANALYZE TABLE...COMPUTE STATISTICS FOR COLUMNS command shows computed stats for masked tables. -- [*HIVE-17530*](https://issues.apache.org/jira/browse/HIVE-17530): ClassCastException when converting uniontype.
+- [*HIVE-17530*](https://issues.apache.org/jira/browse/HIVE-17530): ClassCastException when converting `uniontype`.
- [*HIVE-17621*](https://issues.apache.org/jira/browse/HIVE-17621): Hive-site settings are ignored during HCatInputFormat split-calculation. -- [*HIVE-17636*](https://issues.apache.org/jira/browse/HIVE-17636): Add multiple\_agg.q test for blobstores.
+- [*HIVE-17636*](https://issues.apache.org/jira/browse/HIVE-17636): Add multiple\_agg.q test for `blobstores`.
- [*HIVE-17729*](https://issues.apache.org/jira/browse/HIVE-17729): Add Database and Explain related blobstore tests. -- [*HIVE-17731*](https://issues.apache.org/jira/browse/HIVE-17731): add a backward compat option for external users to HIVE-11985.
+- [*HIVE-17731*](https://issues.apache.org/jira/browse/HIVE-17731): add a backward `compat` option for external users to HIVE-11985.
- [*HIVE-17803*](https://issues.apache.org/jira/browse/HIVE-17803): With Pig multi-query, 2 HCatStorers writing to the same table will trample each other's outputs. -- [*HIVE-17829*](https://issues.apache.org/jira/browse/HIVE-17829): ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in Hive2.
+- [*HIVE-17829*](https://issues.apache.org/jira/browse/HIVE-17829): ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in `Hive2`.
- [*HIVE-17845*](https://issues.apache.org/jira/browse/HIVE-17845): insert fails if target table columns are not lowercase.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18353*](https://issues.apache.org/jira/browse/HIVE-18353): CompactorMR should call jobclient.close() to trigger cleanup. -- [*HIVE-18390*](https://issues.apache.org/jira/browse/HIVE-18390): IndexOutOfBoundsException when query a partitioned view in ColumnPruner.
+- [*HIVE-18390*](https://issues.apache.org/jira/browse/HIVE-18390): IndexOutOfBoundsException when querying a partitioned view in ColumnPruner.
- [*HIVE-18429*](https://issues.apache.org/jira/browse/HIVE-18429): Compaction should handle a case when it produces no output.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-16828*](https://issues.apache.org/jira/browse/HIVE-16828): With CBO enabled, Query on partitioned views throws IndexOutOfBoundException. -- [*HIVE-17063*](https://issues.apache.org/jira/browse/HIVE-17063): insert overwrite partition onto an external table fail when drop partition first.
+- [*HIVE-17063*](https://issues.apache.org/jira/browse/HIVE-17063): insert overwrite partition onto an external table fails when drop partition first.
- [*HIVE-17259*](https://issues.apache.org/jira/browse/HIVE-17259): Hive JDBC does not recognize UNIONTYPE columns. -- [*HIVE-17530*](https://issues.apache.org/jira/browse/HIVE-17530): ClassCastException when converting uniontype.
+- [*HIVE-17530*](https://issues.apache.org/jira/browse/HIVE-17530): ClassCastException when converting `uniontype`.
- [*HIVE-17600*](https://issues.apache.org/jira/browse/HIVE-17600): Make OrcFile's enforceBufferSize user-settable.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-17629*](https://issues.apache.org/jira/browse/HIVE-17629): CachedStore: Have an approved/not-approved config to allow selective caching of tables/partitions and allow read while prewarming. -- [*HIVE-17636*](https://issues.apache.org/jira/browse/HIVE-17636): Add multiple\_agg.q test for blobstores.
+- [*HIVE-17636*](https://issues.apache.org/jira/browse/HIVE-17636): Add multiple\_agg.q test for `blobstores`.
- [*HIVE-17702*](https://issues.apache.org/jira/browse/HIVE-17702): incorrect isRepeating handling in decimal reader in ORC. - [*HIVE-17729*](https://issues.apache.org/jira/browse/HIVE-17729): Add Database and Explain related blobstore tests. -- [*HIVE-17731*](https://issues.apache.org/jira/browse/HIVE-17731): add a backward compat option for external users to HIVE-11985.
+- [*HIVE-17731*](https://issues.apache.org/jira/browse/HIVE-17731): add a backward `compat` option for external users to HIVE-11985.
- [*HIVE-17803*](https://issues.apache.org/jira/browse/HIVE-17803): With Pig multi-query, 2 HCatStorers writing to the same table will trample each other's outputs.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18090*](https://issues.apache.org/jira/browse/HIVE-18090): acid heartbeat fails when metastore is connected via hadoop credential. -- [*HIVE-18189*](https://issues.apache.org/jira/browse/HIVE-18189): Order by position does not work when cbo is disabled.
+- [*HIVE-18189*](https://issues.apache.org/jira/browse/HIVE-18189): Order by position does not work when `cbo` is disabled.
- [*HIVE-18258*](https://issues.apache.org/jira/browse/HIVE-18258): Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is broken. -- [*HIVE-18269*](https://issues.apache.org/jira/browse/HIVE-18269): LLAP: Fast llap io with slow processing pipeline can lead to OOM.
+- [*HIVE-18269*](https://issues.apache.org/jira/browse/HIVE-18269): LLAP: Fast `llap` io with slow processing pipeline can lead to OOM.
- [*HIVE-18293*](https://issues.apache.org/jira/browse/HIVE-18293): Hive is failing to compact tables contained within a folder that isn't owned by identity running HiveMetaStore.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18353*](https://issues.apache.org/jira/browse/HIVE-18353): CompactorMR should call jobclient.close() to trigger cleanup. -- [*HIVE-18384*](https://issues.apache.org/jira/browse/HIVE-18384): ConcurrentModificationException in log4j2.x library.
+- [*HIVE-18384*](https://issues.apache.org/jira/browse/HIVE-18384): ConcurrentModificationException in `log4j2.x` library.
-- [*HIVE-18390*](https://issues.apache.org/jira/browse/HIVE-18390): IndexOutOfBoundsException when query a partitioned view in ColumnPruner.
+- [*HIVE-18390*](https://issues.apache.org/jira/browse/HIVE-18390): IndexOutOfBoundsException when querying a partitioned view in ColumnPruner.
- [*HIVE-18447*](https://issues.apache.org/jira/browse/HIVE-18447): JDBC: Provide a way for JDBC users to pass cookie info via connection string.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18530*](https://issues.apache.org/jira/browse/HIVE-18530): Replication should skip MM table (for now). -- [*HIVE-18548*](https://issues.apache.org/jira/browse/HIVE-18548): Fix log4j import.
+- [*HIVE-18548*](https://issues.apache.org/jira/browse/HIVE-18548): Fix `log4j` import.
- [*HIVE-18551*](https://issues.apache.org/jira/browse/HIVE-18551): Vectorization: VectorMapOperator tries to write too many vector columns for Hybrid Grace.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18587*](https://issues.apache.org/jira/browse/HIVE-18587): insert DML event may attempt to calculate a checksum on directories. -- [*HIVE-18597*](https://issues.apache.org/jira/browse/HIVE-18597): LLAP: Always package the log4j2 API jar for org.apache.log4j.
+- [*HIVE-18597*](https://issues.apache.org/jira/browse/HIVE-18597): LLAP: Always package the `log4j2` API jar for `org.apache.log4j`.
- [*HIVE-18613*](https://issues.apache.org/jira/browse/HIVE-18613): Extend JsonSerDe to support BINARY type.
This release provides Kafka 1.0.0 and the following Apache patches.
- [KAFKA-6261](https://issues.apache.org/jira/browse/KAFKA-6261): Request logging throws exception if acks=0. -- [KAFKA-6274](https://issues.apache.org/jira/browse/KAFKA-6274): Improve KTable Source state store auto-generated names.
+- [KAFKA-6274](https://issues.apache.org/jira/browse/KAFKA-6274): Improve `KTable` Source state store auto-generated names.
#### Mahout
This release provides Oozie 4.2.0 with the following Apache patches.
- [OOZIE-2787](https://issues.apache.org/jira/browse/OOZIE-2787): Oozie distributes application jar twice making the spark job fail. -- [OOZIE-2792](https://issues.apache.org/jira/browse/OOZIE-2792): Hive2 action isn't parsing Spark application ID from log file properly when Hive is on Spark.
+- [OOZIE-2792](https://issues.apache.org/jira/browse/OOZIE-2792): `Hive2` action isn't parsing Spark application ID from log file properly when Hive is on Spark.
- [OOZIE-2799](https://issues.apache.org/jira/browse/OOZIE-2799): Setting log location for spark sql on hive. -- [OOZIE-2802](https://issues.apache.org/jira/browse/OOZIE-2802): Spark action failure on Spark 2.1.0 due to duplicate sharelibs.
+- [OOZIE-2802](https://issues.apache.org/jira/browse/OOZIE-2802): Spark action failure on Spark 2.1.0 due to duplicate `sharelibs`.
- [OOZIE-2923](https://issues.apache.org/jira/browse/OOZIE-2923): Improve Spark options parsing.
This release provides Phoenix 4.7.0 and the following Apache patches:
- [PHOENIX-4525](https://issues.apache.org/jira/browse/PHOENIX-4525): Integer overflow in GroupBy execution. -- [PHOENIX-4560](https://issues.apache.org/jira/browse/PHOENIX-4560): ORDER BY with GROUP BY doesn't work if there's WHERE on pk column.
+- [PHOENIX-4560](https://issues.apache.org/jira/browse/PHOENIX-4560): ORDER BY with GROUP BY doesn't work if there's WHERE on `pk` column.
- [PHOENIX-4586](https://issues.apache.org/jira/browse/PHOENIX-4586): UPSERT SELECT doesn't take in account comparison operators for subqueries.
This release provides Pig 0.16.0 with the following Apache patches.
- [PIG-5159](https://issues.apache.org/jira/browse/PIG-5159): Fix Pig not saving grunt history. -- [PIG-5175](https://issues.apache.org/jira/browse/PIG-5175): Upgrade jruby to 1.7.26.
+- [PIG-5175](https://issues.apache.org/jira/browse/PIG-5175): Upgrade `jruby` to 1.7.26.
#### Ranger
This release provides Ranger 0.7.0 and the following Apache patches:
- [RANGER-1990](https://issues.apache.org/jira/browse/RANGER-1990): Add One-way SSL MySQL support in Ranger Admin. -- [RANGER-2006](https://issues.apache.org/jira/browse/RANGER-2006): Fix problems detected by static code analysis in ranger usersync for ldap sync source.
+- [RANGER-2006](https://issues.apache.org/jira/browse/RANGER-2006): Fix problems detected by static code analysis in ranger `usersync` for `ldap` sync source.
- [RANGER-2008](https://issues.apache.org/jira/browse/RANGER-2008): Policy evaluation is failing for multiline policy conditions.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23406](https://issues.apache.org/jira/browse/SPARK-23406): Enable stream-stream self-joins for branch-2.3. -- [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434): Spark shouldn't warn \`metadata directory\` for a HDFS file path.
+- [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434): Spark shouldn't warn \`metadata directory\` for an HDFS file path.
- [SPARK-23436](https://issues.apache.org/jira/browse/SPARK-23436): Infer partition as Date only if it can be cast to Date.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23599](https://issues.apache.org/jira/browse/SPARK-23599): Use RandomUUIDGenerator in Uuid expression. -- [SPARK-23601](https://issues.apache.org/jira/browse/SPARK-23601): Remove .md5 files from release.
+- [SPARK-23601](https://issues.apache.org/jira/browse/SPARK-23601): Remove `.md5` files from release.
- [SPARK-23608](https://issues.apache.org/jira/browse/SPARK-23608): Add synchronization in SHS between attachSparkUI and detachSparkUI functions to avoid concurrent modification issue to Jetty Handlers.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23639](https://issues.apache.org/jira/browse/SPARK-23639): Obtain token before init metastore client in SparkSQL CLI. -- [SPARK-23642](https://issues.apache.org/jira/browse/SPARK-23642): AccumulatorV2 subclass isZero scaladoc fix.
+- [SPARK-23642](https://issues.apache.org/jira/browse/SPARK-23642): AccumulatorV2 subclass isZero `scaladoc` fix.
- [SPARK-23644](https://issues.apache.org/jira/browse/SPARK-23644): Use absolute path for REST call in SHS.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23760](https://issues.apache.org/jira/browse/SPARK-23760): CodegenContext.withSubExprEliminationExprs should save/restore CSE state correctly. -- [SPARK-23769](https://issues.apache.org/jira/browse/SPARK-23769): Remove comments that unnecessarily disable Scalastyle check.
+- [SPARK-23769](https://issues.apache.org/jira/browse/SPARK-23769): Remove comments that unnecessarily disable `Scalastyle` check.
- [SPARK-23788](https://issues.apache.org/jira/browse/SPARK-23788): Fix race in StreamingQuerySuite.
This release provides Zeppelin 0.7.3 with no more Apache patches.
- [ZEPPELIN-3129](https://issues.apache.org/jira/browse/ZEPPELIN-3129): Zeppelin UI doesn't sign out in IE. -- [ZEPPELIN-903](https://issues.apache.org/jira/browse/ZEPPELIN-903): Replace CXF with Jersey2.
+- [ZEPPELIN-903](https://issues.apache.org/jira/browse/ZEPPELIN-903): Replace CXF with `Jersey2`.
#### ZooKeeper
This release provides ZooKeeper 3.4.6 and the following Apache patches:
- [ZOOKEEPER-2693](https://issues.apache.org/jira/browse/ZOOKEEPER-2693): DOS attack on wchp/wchc four letter words (4lw). -- [ZOOKEEPER-2726](https://issues.apache.org/jira/browse/ZOOKEEPER-2726): Patch for introduces potential race condition.
+- [ZOOKEEPER-2726](https://issues.apache.org/jira/browse/ZOOKEEPER-2726): Patch introduces a potential race condition.
### Fixed Common Vulnerabilities and Exposures
This section covers all Common Vulnerabilities and Exposures (CVE) that are addr
#### **CVE-2016-4970**
-| **Summary:** handler/ssl/OpenSslEngine.java in Netty 4.0.x before 4.0.37.Final and 4.1.x before 4.1.1.Final allows remote attackers to cause a denial of service (infinite loop) |
+| **Summary:** handler/ssl/OpenSslEngine.java in Netty 4.0.x before 4.0.37.Final and 4.1.x before 4.1.1.Final allows remote attackers to cause a denial of service (infinite loop) |
|--| | **Severity:** Moderate | | **Vendor:** Hortonworks |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-93946 | [ATLAS-2319](https://issues.apache.org/jira/browse/ATLAS-2319) | UI: Deleting a tag, which at 25+ position in the tag list in both Flat and Tree structure needs a refresh to remove the tag from the list. | | BUG-94618 | [YARN-5037](https://issues.apache.org/jira/browse/YARN-5037), [YARN-7274](https://issues.apache.org/jira/browse/YARN-7274) | Ability to disable elasticity at leaf queue level | | BUG-94901 | [HBASE-19285](https://issues.apache.org/jira/browse/HBASE-19285) | Add per-table latency histograms |
-| BUG-95259 | [HADOOP-15185](https://issues.apache.org/jira/browse/HADOOP-15185), [HADOOP-15186](https://issues.apache.org/jira/browse/HADOOP-15186) | Update adls connector to use the current version of ADLS SDK |
+| BUG-95259 | [HADOOP-15185](https://issues.apache.org/jira/browse/HADOOP-15185), [HADOOP-15186](https://issues.apache.org/jira/browse/HADOOP-15186) | Update `adls` connector to use the current version of ADLS SDK |
| BUG-95619 | [HIVE-18551](https://issues.apache.org/jira/browse/HIVE-18551) | Vectorization: VectorMapOperator tries to write too many vector columns for Hybrid Grace |
-| BUG-97223 | [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434) | Spark shouldn't warn \`metadata directory\` for a HDFS file path |
+| BUG-97223 | [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434) | Spark shouldn't warn \`metadata directory\` for an HDFS file path |
**Performance**
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-94345 | [HIVE-18429](https://issues.apache.org/jira/browse/HIVE-18429) | Compaction should handle a case when it produces no output | | BUG-94381 | [HADOOP-13227](https://issues.apache.org/jira/browse/HADOOP-13227), [HDFS-13054](https://issues.apache.org/jira/browse/HDFS-13054) | Handling RequestHedgingProxyProvider RetryAction order: FAIL &lt; RETRY &lt; FAILOVER\_AND\_RETRY. | | BUG-94432 | [HIVE-18353](https://issues.apache.org/jira/browse/HIVE-18353) | CompactorMR should call jobclient.close() to trigger cleanup |
-| BUG-94869 | [PHOENIX-4290](https://issues.apache.org/jira/browse/PHOENIX-4290), [PHOENIX-4373](https://issues.apache.org/jira/browse/PHOENIX-4373) | Requested row out of range for Get on HRegion for local indexed salted phoenix table. |
+| BUG-94869 | [PHOENIX-4290](https://issues.apache.org/jira/browse/PHOENIX-4290), [PHOENIX-4373](https://issues.apache.org/jira/browse/PHOENIX-4373) | Requested row out of range for Get on `HRegion` for local indexed salted phoenix table. |
| BUG-94928 | [HDFS-11078](https://issues.apache.org/jira/browse/HDFS-11078) | Fix NPE in LazyPersistFileScrubber | | BUG-94964 | [HIVE-18269](https://issues.apache.org/jira/browse/HIVE-18269), [HIVE-18318](https://issues.apache.org/jira/browse/HIVE-18318), [HIVE-18326](https://issues.apache.org/jira/browse/HIVE-18326) | Multiple LLAP fixes |
-| BUG-95669 | [HIVE-18577](https://issues.apache.org/jira/browse/HIVE-18577), [HIVE-18643](https://issues.apache.org/jira/browse/HIVE-18643) | When run update/delete query on ACID partitioned table, HS2 read all each partitions. |
+| BUG-95669 | [HIVE-18577](https://issues.apache.org/jira/browse/HIVE-18577), [HIVE-18643](https://issues.apache.org/jira/browse/HIVE-18643) | When running an update/delete query on an ACID partitioned table, HS2 reads all partitions. |
| BUG-96390 | [HDFS-10453](https://issues.apache.org/jira/browse/HDFS-10453) | ReplicationMonitor thread could be stuck for long time due to the race between replication and delete the same file in a large cluster. | | BUG-96625 | [HIVE-16110](https://issues.apache.org/jira/browse/HIVE-16110) | Revert of "Vectorization: Support 2 Value CASE WHEN instead of fallback to VectorUDFAdaptor" | | BUG-97109 | [HIVE-16757](https://issues.apache.org/jira/browse/HIVE-16757) | Use of deprecated getRows() instead of new estimateRowCount(RelMetadataQuery...) has serious performance impact |
Fixed issues represent selected issues that were previously logged via Hortonwor
| **Bug ID** | **Apache JIRA** | **Summary** | ||-|--| | BUG-100180 | [CALCITE-2232](https://issues.apache.org/jira/browse/CALCITE-2232) | Assertion error on AggregatePullUpConstantsRule while adjusting Aggregate indices |
-| BUG-100422 | [HIVE-19085](https://issues.apache.org/jira/browse/HIVE-19085) | FastHiveDecimal abs(0) sets sign to +ve |
+| BUG-100422 | [HIVE-19085](https://issues.apache.org/jira/browse/HIVE-19085) | FastHiveDecimal abs(0) sets sign to `+ve` |
| BUG-100834 | [PHOENIX-4658](https://issues.apache.org/jira/browse/PHOENIX-4658) | IllegalStateException: requestSeek can't be called on ReversedKeyValueHeap | | BUG-102078 | [HIVE-17978](https://issues.apache.org/jira/browse/HIVE-17978) | TPCDS queries 58 and 83 generate exceptions in vectorization. | | BUG-92483 | [HIVE-17900](https://issues.apache.org/jira/browse/HIVE-17900) | analyze stats on columns triggered by Compactor generates malformed SQL with &gt; 1 partition column | | BUG-93135 | [HIVE-15874](https://issues.apache.org/jira/browse/HIVE-15874), [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Hive query returning wrong results when set hive.groupby.orderby.position.alias to true |
-| BUG-93136 | [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Order by position does not work when cbo is disabled |
+| BUG-93136 | [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Order by position does not work when `cbo` is disabled |
| BUG-93595 | [HIVE-12378](https://issues.apache.org/jira/browse/HIVE-12378), [HIVE-15883](https://issues.apache.org/jira/browse/HIVE-15883) | HBase mapped table in Hive insert fail for decimal and binary columns | | BUG-94007 | [PHOENIX-1751](https://issues.apache.org/jira/browse/PHOENIX-1751), [PHOENIX-3112](https://issues.apache.org/jira/browse/PHOENIX-3112) | Phoenix Queries returns Null values due to HBase Partial rows |
-| BUG-94144 | [HIVE-17063](https://issues.apache.org/jira/browse/HIVE-17063) | insert overwrite partition into an external table fail when drop partition first |
+| BUG-94144 | [HIVE-17063](https://issues.apache.org/jira/browse/HIVE-17063) | insert overwrite partition into an external table fails when drop partition first |
| BUG-94280 | [HIVE-12785](https://issues.apache.org/jira/browse/HIVE-12785) | View with union type and UDF to \`cast\` the struct is broken | | BUG-94505 | [PHOENIX-4525](https://issues.apache.org/jira/browse/PHOENIX-4525) | Integer overflow in GroupBy execution | | BUG-95618 | [HIVE-18506](https://issues.apache.org/jira/browse/HIVE-18506) | LlapBaseInputFormat - negative array index | | BUG-95644 | [HIVE-9152](https://issues.apache.org/jira/browse/HIVE-9152) | CombineHiveInputFormat: Hive query is failing in Tez with java.lang.IllegalArgumentException exception | | BUG-96762 | [PHOENIX-4588](https://issues.apache.org/jira/browse/PHOENIX-4588) | Clone expression also if its children have Determinism.PER\_INVOCATION | | BUG-97145 | [HIVE-12245](https://issues.apache.org/jira/browse/HIVE-12245), [HIVE-17829](https://issues.apache.org/jira/browse/HIVE-17829) | Support column comments for an HBase backed table |
-| BUG-97741 | [HIVE-18944](https://issues.apache.org/jira/browse/HIVE-18944) | Groupping sets position is set incorrectly during DPP |
-| BUG-98082 | [HIVE-18597](https://issues.apache.org/jira/browse/HIVE-18597) | LLAP: Always package the log4j2 API jar for org.apache.log4j |
+| BUG-97741 | [HIVE-18944](https://issues.apache.org/jira/browse/HIVE-18944) | Grouping sets position is set incorrectly during DPP |
+| BUG-98082 | [HIVE-18597](https://issues.apache.org/jira/browse/HIVE-18597) | LLAP: Always package the `log4j2` API jar for `org.apache.log4j` |
| BUG-99849 | N/A | Create a new table from a file wizard tries to use default database | **Security** | **Bug ID** | **Apache JIRA** | **Summary** | |||--|
-| BUG-100436 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | Knox proxy with knox-sso isn't working for ranger |
+| BUG-100436 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | `Knox` proxy with `knox-sso` isn't working for ranger |
| BUG-101038 | [SPARK-24062](https://issues.apache.org/jira/browse/SPARK-24062) | Zeppelin %Spark interpreter "Connection refused" error, "A secret key must be specified..." error in HiveThriftServer | | BUG-101359 | [ACCUMULO-4056](https://issues.apache.org/jira/browse/ACCUMULO-4056) | Update version of commons-collection to 3.2.2 when released | | BUG-54240 | [HIVE-18879](https://issues.apache.org/jira/browse/HIVE-18879) | Disallow embedded element in UDFXPathUtil needs to work if xercesImpl.jar in classpath |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-95349 | [ZOOKEEPER-1256](https://issues.apache.org/jira/browse/ZOOKEEPER-1256), [ZOOKEEPER-1901](https://issues.apache.org/jira/browse/ZOOKEEPER-1901) | Upgrade netty | | BUG-95483 | N/A | Fix for CVE-2017-15713 | | BUG-95646 | [OOZIE-3167](https://issues.apache.org/jira/browse/OOZIE-3167) | Upgrade tomcat version on Oozie 4.3 branch |
-| BUG-95823 | N/A | Knox: Upgrade Beanutils |
+| BUG-95823 | N/A | `Knox`: Upgrade `Beanutils` |
| BUG-95908 | [RANGER-1960](https://issues.apache.org/jira/browse/RANGER-1960) | HBase auth does not take table namespace into consideration for deleting snapshot | | BUG-96191 | [FALCON-2322](https://issues.apache.org/jira/browse/FALCON-2322), [FALCON-2323](https://issues.apache.org/jira/browse/FALCON-2323) | Upgrade Jackson and Spring versions to avoid security vulnerabilities | | BUG-96502 | [RANGER-1990](https://issues.apache.org/jira/browse/RANGER-1990) | Add One-way SSL MySQL support in Ranger Admin | | BUG-96712 | [FLUME-3194](https://issues.apache.org/jira/browse/FLUME-3194) | upgrade derby to the latest (1.14.1.0) version | | BUG-96713 | [FLUME-2678](https://issues.apache.org/jira/browse/FLUME-2678) | Upgrade xalan to 2.7.2 to take care of CVE-2014-0107 vulnerability |
-| BUG-96714 | [FLUME-2050](https://issues.apache.org/jira/browse/FLUME-2050) | Upgrade to log4j2 (when GA) |
+| BUG-96714 | [FLUME-2050](https://issues.apache.org/jira/browse/FLUME-2050) | Upgrade to `log4j2` (when GA) |
| BUG-96737 | N/A | Use Java io filesystem methods to access local files | | BUG-96925 | N/A | Upgrade Tomcat from 6.0.48 to 6.0.53 in Hadoop |
-| BUG-96977 | [FLUME-3132](https://issues.apache.org/jira/browse/FLUME-3132) | Upgrade tomcat jasper library dependencies |
+| BUG-96977 | [FLUME-3132](https://issues.apache.org/jira/browse/FLUME-3132) | Upgrade tomcat `jasper` library dependencies |
| BUG-97022 | [HADOOP-14799](https://issues.apache.org/jira/browse/HADOOP-14799), [HADOOP-14903](https://issues.apache.org/jira/browse/HADOOP-14903), [HADOOP-15265](https://issues.apache.org/jira/browse/HADOOP-15265) | Upgrading Nimbus-JOSE-JWT library with version above 4.39 | | BUG-97101 | [RANGER-1988](https://issues.apache.org/jira/browse/RANGER-1988) | Fix insecure randomness | | BUG-97178 | [ATLAS-2467](https://issues.apache.org/jira/browse/ATLAS-2467) | Dependency upgrade for Spring and nimbus-jose-jwt |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-100040 | [ATLAS-2536](https://issues.apache.org/jira/browse/ATLAS-2536) | NPE in Atlas Hive Hook | | BUG-100057 | [HIVE-19251](https://issues.apache.org/jira/browse/HIVE-19251) | ObjectStore.getNextNotification with LIMIT should use less memory | | BUG-100072 | [HIVE-19130](https://issues.apache.org/jira/browse/HIVE-19130) | NPE is thrown when REPL LOAD applied drop partition event. |
-| BUG-100073 | N/A | too many close\_wait connections from hiveserver to data node |
+| BUG-100073 | N/A | too many close\_wait connections from `hiveserver` to data node |
| BUG-100319 | [HIVE-19248](https://issues.apache.org/jira/browse/HIVE-19248) | REPL LOAD doesn't throw error if file copy fails. | | BUG-100352 | N/A | CLONE - RM purging logic scans /registry znode too frequently | | BUG-100427 | [HIVE-19249](https://issues.apache.org/jira/browse/HIVE-19249) | Replication: WITH clause isn't passing the configuration to Task correctly in all cases | | BUG-100430 | [HIVE-14483](https://issues.apache.org/jira/browse/HIVE-14483) | java.lang.ArrayIndexOutOfBoundsException org.apache.orc.impl.TreeReaderFactory\$BytesColumnVectorUtil.commonReadByteArrays | | BUG-100432 | [HIVE-19219](https://issues.apache.org/jira/browse/HIVE-19219) | Incremental REPL DUMP should throw error if requested events are cleaned-up. |
-| BUG-100448 | [SPARK-23637](https://issues.apache.org/jira/browse/SPARK-23637), [SPARK-23802](https://issues.apache.org/jira/browse/SPARK-23802), [SPARK-23809](https://issues.apache.org/jira/browse/SPARK-23809), [SPARK-23816](https://issues.apache.org/jira/browse/SPARK-23816), [SPARK-23822](https://issues.apache.org/jira/browse/SPARK-23822), [SPARK-23823](https://issues.apache.org/jira/browse/SPARK-23823), [SPARK-23838](https://issues.apache.org/jira/browse/SPARK-23838), [SPARK-23881](https://issues.apache.org/jira/browse/SPARK-23881) | Update Spark2 to 2.3.0+ (4/11) |
+| BUG-100448 | [SPARK-23637](https://issues.apache.org/jira/browse/SPARK-23637), [SPARK-23802](https://issues.apache.org/jira/browse/SPARK-23802), [SPARK-23809](https://issues.apache.org/jira/browse/SPARK-23809), [SPARK-23816](https://issues.apache.org/jira/browse/SPARK-23816), [SPARK-23822](https://issues.apache.org/jira/browse/SPARK-23822), [SPARK-23823](https://issues.apache.org/jira/browse/SPARK-23823), [SPARK-23838](https://issues.apache.org/jira/browse/SPARK-23838), [SPARK-23881](https://issues.apache.org/jira/browse/SPARK-23881) | Update `Spark2` to 2.3.0+ (4/11) |
| BUG-100740 | [HIVE-16107](https://issues.apache.org/jira/browse/HIVE-16107) | JDBC: HttpClient should retry one more time on NoHttpResponseException | | BUG-100810 | [HIVE-19054](https://issues.apache.org/jira/browse/HIVE-19054) | Hive Functions replication fails |
-| BUG-100937 | [MAPREDUCE-6889](https://issues.apache.org/jira/browse/MAPREDUCE-6889) | Add Job\#close API to shutdown MR client services. |
-| BUG-101065 | [ATLAS-2587](https://issues.apache.org/jira/browse/ATLAS-2587) | Set read ACL for /apache\_atlas/active\_server\_info znode in HA for Knox proxy to read. |
+| BUG-100937 | [MAPREDUCE-6889](https://issues.apache.org/jira/browse/MAPREDUCE-6889) | Add Job\#close API to shut down MR client services. |
+| BUG-101065 | [ATLAS-2587](https://issues.apache.org/jira/browse/ATLAS-2587) | Set read ACL for /apache\_atlas/active\_server\_info znode in HA for `Knox` proxy to read. |
| BUG-101093 | [STORM-2993](https://issues.apache.org/jira/browse/STORM-2993) | Storm HDFS bolt throws ClosedChannelException when Time rotation policy is used | | BUG-101181 | N/A | PhoenixStorageHandler doesn't handle AND in predicate correctly | | BUG-101266 | [PHOENIX-4635](https://issues.apache.org/jira/browse/PHOENIX-4635) | HBase Connection leak in org.apache.phoenix.hive.mapreduce.PhoenixInputFormat |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-101485 | N/A | hive metastore thrift api is slow and causing client timeout | | BUG-101628 | [HIVE-19331](https://issues.apache.org/jira/browse/HIVE-19331) | Hive incremental replication to cloud failed. | | BUG-102048 | [HIVE-19381](https://issues.apache.org/jira/browse/HIVE-19381) | Hive Function Replication to cloud fails with FunctionTask |
-| BUG-102064 | N/A | Hive Replication \[ onprem to onprem \] tests failed in ReplCopyTask |
-| BUG-102137 | [HIVE-19423](https://issues.apache.org/jira/browse/HIVE-19423) | Hive Replication \[ Onprem to Cloud \] tests failed in ReplCopyTask |
+| BUG-102064 | N/A | Hive Replication `[onprem to onprem]` tests failed in ReplCopyTask |
+| BUG-102137 | [HIVE-19423](https://issues.apache.org/jira/browse/HIVE-19423) | Hive Replication `[Onprem to Cloud]` tests failed in ReplCopyTask |
| BUG-102305 | [HIVE-19430](https://issues.apache.org/jira/browse/HIVE-19430) | HS2 and hive metastore OOM dumps |
-| BUG-102361 | N/A | multiple insert results in single insert replicated to target hive cluster ( onprem - s3 ) |
+| BUG-102361 | N/A | multiple insert results in single insert replicated to target hive cluster (`onprem - s3`) |
| BUG-87624 | N/A | Enabling storm event logging causes workers to continuously die | | BUG-88929 | [HBASE-15615](https://issues.apache.org/jira/browse/HBASE-15615) | Wrong sleep time when RegionServerCallable need retry | | BUG-89628 | [HIVE-17613](https://issues.apache.org/jira/browse/HIVE-17613) | remove object pools for short, same-thread allocations |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-92373 | [FALCON-2314](https://issues.apache.org/jira/browse/FALCON-2314) | Bump TestNG version to 6.13.1 to avoid BeanShell dependency | | BUG-92381 | N/A | testContainerLogsWithNewAPI and testContainerLogsWithOldAPI UT fails | | BUG-92389 | [STORM-2841](https://issues.apache.org/jira/browse/STORM-2841) | testNoAcksIfFlushFails UT fails with NullPointerException |
-| BUG-92586 | [SPARK-17920](https://issues.apache.org/jira/browse/SPARK-17920), [SPARK-20694](https://issues.apache.org/jira/browse/SPARK-20694), [SPARK-21642](https://issues.apache.org/jira/browse/SPARK-21642), [SPARK-22162](https://issues.apache.org/jira/browse/SPARK-22162), [SPARK-22289](https://issues.apache.org/jira/browse/SPARK-22289), [SPARK-22373](https://issues.apache.org/jira/browse/SPARK-22373), [SPARK-22495](https://issues.apache.org/jira/browse/SPARK-22495), [SPARK-22574](https://issues.apache.org/jira/browse/SPARK-22574), [SPARK-22591](https://issues.apache.org/jira/browse/SPARK-22591), [SPARK-22595](https://issues.apache.org/jira/browse/SPARK-22595), [SPARK-22601](https://issues.apache.org/jira/browse/SPARK-22601), [SPARK-22603](https://issues.apache.org/jira/browse/SPARK-22603), [SPARK-22607](https://issues.apache.org/jira/browse/SPARK-22607), [SPARK-22635](https://issues.apache.org/jira/browse/SPARK-22635), [SPARK-22637](https://issues.apache.org/jira/browse/SPARK-22637), [SPARK-22653](https://issues.apache.org/jira/browse/SPARK-22653), [SPARK-22654](https://issues.apache.org/jira/browse/SPARK-22654), [SPARK-22686](https://issues.apache.org/jira/browse/SPARK-22686), [SPARK-22688](https://issues.apache.org/jira/browse/SPARK-22688), [SPARK-22817](https://issues.apache.org/jira/browse/SPARK-22817), [SPARK-22862](https://issues.apache.org/jira/browse/SPARK-22862), [SPARK-22889](https://issues.apache.org/jira/browse/SPARK-22889), [SPARK-22972](https://issues.apache.org/jira/browse/SPARK-22972), [SPARK-22975](https://issues.apache.org/jira/browse/SPARK-22975), [SPARK-22982](https://issues.apache.org/jira/browse/SPARK-22982), [SPARK-22983](https://issues.apache.org/jira/browse/SPARK-22983), [SPARK-22984](https://issues.apache.org/jira/browse/SPARK-22984), [SPARK-23001](https://issues.apache.org/jira/browse/SPARK-23001), [SPARK-23038](https://issues.apache.org/jira/browse/SPARK-23038), [SPARK-23095](https://issues.apache.org/jira/browse/SPARK-23095) | Update Spark2 up-to-date to 2.2.1 (Jan. 16) |
+| BUG-92586 | [SPARK-17920](https://issues.apache.org/jira/browse/SPARK-17920), [SPARK-20694](https://issues.apache.org/jira/browse/SPARK-20694), [SPARK-21642](https://issues.apache.org/jira/browse/SPARK-21642), [SPARK-22162](https://issues.apache.org/jira/browse/SPARK-22162), [SPARK-22289](https://issues.apache.org/jira/browse/SPARK-22289), [SPARK-22373](https://issues.apache.org/jira/browse/SPARK-22373), [SPARK-22495](https://issues.apache.org/jira/browse/SPARK-22495), [SPARK-22574](https://issues.apache.org/jira/browse/SPARK-22574), [SPARK-22591](https://issues.apache.org/jira/browse/SPARK-22591), [SPARK-22595](https://issues.apache.org/jira/browse/SPARK-22595), [SPARK-22601](https://issues.apache.org/jira/browse/SPARK-22601), [SPARK-22603](https://issues.apache.org/jira/browse/SPARK-22603), [SPARK-22607](https://issues.apache.org/jira/browse/SPARK-22607), [SPARK-22635](https://issues.apache.org/jira/browse/SPARK-22635), [SPARK-22637](https://issues.apache.org/jira/browse/SPARK-22637), [SPARK-22653](https://issues.apache.org/jira/browse/SPARK-22653), [SPARK-22654](https://issues.apache.org/jira/browse/SPARK-22654), [SPARK-22686](https://issues.apache.org/jira/browse/SPARK-22686), [SPARK-22688](https://issues.apache.org/jira/browse/SPARK-22688), [SPARK-22817](https://issues.apache.org/jira/browse/SPARK-22817), [SPARK-22862](https://issues.apache.org/jira/browse/SPARK-22862), [SPARK-22889](https://issues.apache.org/jira/browse/SPARK-22889), [SPARK-22972](https://issues.apache.org/jira/browse/SPARK-22972), [SPARK-22975](https://issues.apache.org/jira/browse/SPARK-22975), [SPARK-22982](https://issues.apache.org/jira/browse/SPARK-22982), [SPARK-22983](https://issues.apache.org/jira/browse/SPARK-22983), [SPARK-22984](https://issues.apache.org/jira/browse/SPARK-22984), [SPARK-23001](https://issues.apache.org/jira/browse/SPARK-23001), [SPARK-23038](https://issues.apache.org/jira/browse/SPARK-23038), [SPARK-23095](https://issues.apache.org/jira/browse/SPARK-23095) | Update `Spark2` up-to-date to 2.2.1 (Jan. 16) |
| BUG-92680 | [ATLAS-2288](https://issues.apache.org/jira/browse/ATLAS-2288) | NoClassDefFoundError Exception while running import-hive script when hbase table is created via Hive | | BUG-92760 | [ACCUMULO-4578](https://issues.apache.org/jira/browse/ACCUMULO-4578) | Cancel compaction FATE operation does not release namespace lock | | BUG-92797 | [HDFS-10267](https://issues.apache.org/jira/browse/HDFS-10267), [HDFS-8496](https://issues.apache.org/jira/browse/HDFS-8496) | Reducing the datanode lock contentions on certain use cases |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-93361 | [HIVE-12360](https://issues.apache.org/jira/browse/HIVE-12360) | Bad seek in uncompressed ORC with predicate pushdown | | BUG-93426 | [CALCITE-2086](https://issues.apache.org/jira/browse/CALCITE-2086) | HTTP/413 in certain circumstances due to large Authorization headers | | BUG-93429 | [PHOENIX-3240](https://issues.apache.org/jira/browse/PHOENIX-3240) | ClassCastException from Pig loader |
-| BUG-93485 | N/A | can'tcan'tCan't get table mytestorg.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found when running analyze table on columns in LLAP |
+| BUG-93485 | N/A | can't get table mytest; org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found when running analyze table on columns in LLAP |
| BUG-93512 | [PHOENIX-4466](https://issues.apache.org/jira/browse/PHOENIX-4466) | java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data | | BUG-93550 | N/A | Zeppelin %spark.r does not work with spark1 due to scala version mismatch | | BUG-93910 | [HIVE-18293](https://issues.apache.org/jira/browse/HIVE-18293) | Hive is failing to compact tables contained within a folder that isn't owned by identity running HiveMetaStore |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-93986 | [YARN-7697](https://issues.apache.org/jira/browse/YARN-7697) | NM goes down with OOM due to leak in log-aggregation (part\#2) | | BUG-94030 | [ATLAS-2332](https://issues.apache.org/jira/browse/ATLAS-2332) | Creation of type with attributes having nested collection datatype fails | | BUG-94080 | [YARN-3742](https://issues.apache.org/jira/browse/YARN-3742), [YARN-6061](https://issues.apache.org/jira/browse/YARN-6061) | Both RM are in standby in secure cluster |
-| BUG-94081 | [HIVE-18384](https://issues.apache.org/jira/browse/HIVE-18384) | ConcurrentModificationException in log4j2.x library |
+| BUG-94081 | [HIVE-18384](https://issues.apache.org/jira/browse/HIVE-18384) | ConcurrentModificationException in `log4j2.x` library |
| BUG-94168 | N/A | Yarn RM goes down with Service Registry is in wrong state ERROR |
-| BUG-94330 | [HADOOP-13190](https://issues.apache.org/jira/browse/HADOOP-13190), [HADOOP-14104](https://issues.apache.org/jira/browse/HADOOP-14104), [HADOOP-14814](https://issues.apache.org/jira/browse/HADOOP-14814), [HDFS-10489](https://issues.apache.org/jira/browse/HDFS-10489), [HDFS-11689](https://issues.apache.org/jira/browse/HDFS-11689) | HDFS should support for multiple KMS Uris |
+| BUG-94330 | [HADOOP-13190](https://issues.apache.org/jira/browse/HADOOP-13190), [HADOOP-14104](https://issues.apache.org/jira/browse/HADOOP-14104), [HADOOP-14814](https://issues.apache.org/jira/browse/HADOOP-14814), [HDFS-10489](https://issues.apache.org/jira/browse/HDFS-10489), [HDFS-11689](https://issues.apache.org/jira/browse/HDFS-11689) | HDFS should support multiple `KMS Uris` |
| BUG-94345 | [HIVE-18429](https://issues.apache.org/jira/browse/HIVE-18429) | Compaction should handle a case when it produces no output | | BUG-94372 | [ATLAS-2229](https://issues.apache.org/jira/browse/ATLAS-2229) | DSL query: hive\_table name = \["t1","t2"\] throws invalid DSL query exception | | BUG-94381 | [HADOOP-13227](https://issues.apache.org/jira/browse/HADOOP-13227), [HDFS-13054](https://issues.apache.org/jira/browse/HDFS-13054) | Handling RequestHedgingProxyProvider RetryAction order: FAIL &lt; RETRY &lt; FAILOVER\_AND\_RETRY. |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-94928 | [HDFS-11078](https://issues.apache.org/jira/browse/HDFS-11078) | Fix NPE in LazyPersistFileScrubber | | BUG-95013 | [HIVE-18488](https://issues.apache.org/jira/browse/HIVE-18488) | LLAP ORC readers are missing some null checks | | BUG-95077 | [HIVE-14205](https://issues.apache.org/jira/browse/HIVE-14205) | Hive doesn't support union type with AVRO file format |
-| BUG-95200 | [HDFS-13061](https://issues.apache.org/jira/browse/HDFS-13061) | SaslDataTransferClient\#checkTrustAndSend shouldn'tshould'n trust a partially trusted channel |
+| BUG-95200 | [HDFS-13061](https://issues.apache.org/jira/browse/HDFS-13061) | SaslDataTransferClient\#checkTrustAndSend shouldn't trust a partially trusted channel |
| BUG-95201 | [HDFS-13060](https://issues.apache.org/jira/browse/HDFS-13060) | Adding a BlacklistBasedTrustedChannelResolver for TrustedChannelResolver | | BUG-95284 | [HBASE-19395](https://issues.apache.org/jira/browse/HBASE-19395) | \[branch-1\] TestEndToEndSplitTransaction.testMasterOpsWhileSplitting fails with NPE | | BUG-95301 | [HIVE-18517](https://issues.apache.org/jira/browse/HIVE-18517) | Vectorization: Fix VectorMapOperator to accept VRBs and check vectorized flag correctly to support LLAP Caching | | BUG-95542 | [HBASE-16135](https://issues.apache.org/jira/browse/HBASE-16135) | PeerClusterZnode under rs of removed peer may never be deleted | | BUG-95595 | [HIVE-15563](https://issues.apache.org/jira/browse/HIVE-15563) | Ignore Illegal Operation state transition exception in SQLOperation.runQuery to expose real exception. | | BUG-95596 | [YARN-4126](https://issues.apache.org/jira/browse/YARN-4126), [YARN-5750](https://issues.apache.org/jira/browse/YARN-5750) | TestClientRMService fails |
-| BUG-96019 | [HIVE-18548](https://issues.apache.org/jira/browse/HIVE-18548) | Fix log4j import |
+| BUG-96019 | [HIVE-18548](https://issues.apache.org/jira/browse/HIVE-18548) | Fix `log4j` import |
| BUG-96196 | [HDFS-13120](https://issues.apache.org/jira/browse/HDFS-13120) | Snapshot diff could be corrupted after concat | | BUG-96289 | [HDFS-11701](https://issues.apache.org/jira/browse/HDFS-11701) | NPE from Unresolved Host causes permanent DFSInputStream failures | | BUG-96291 | [STORM-2652](https://issues.apache.org/jira/browse/STORM-2652) | Exception thrown in JmsSpout open method |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-96390 | [HDFS-10453](https://issues.apache.org/jira/browse/HDFS-10453) | ReplicationMonitor thread could be stuck for a long time due to the race between replication and delete of the same file in a large cluster. | | BUG-96454 | [YARN-4593](https://issues.apache.org/jira/browse/YARN-4593) | Deadlock in AbstractService.getConfig() | | BUG-96704 | [FALCON-2322](https://issues.apache.org/jira/browse/FALCON-2322) | ClassCastException while submitAndSchedule feed |
-| BUG-96720 | [SLIDER-1262](https://issues.apache.org/jira/browse/SLIDER-1262) | Slider functests are failing in Kerberized environment |
-| BUG-96931 | [SPARK-23053](https://issues.apache.org/jira/browse/SPARK-23053), [SPARK-23186](https://issues.apache.org/jira/browse/SPARK-23186), [SPARK-23230](https://issues.apache.org/jira/browse/SPARK-23230), [SPARK-23358](https://issues.apache.org/jira/browse/SPARK-23358), [SPARK-23376](https://issues.apache.org/jira/browse/SPARK-23376), [SPARK-23391](https://issues.apache.org/jira/browse/SPARK-23391) | Update Spark2 up-to-date (Feb. 19) |
+| BUG-96720 | [SLIDER-1262](https://issues.apache.org/jira/browse/SLIDER-1262) | Slider functests are failing in `Kerberized` environment |
+| BUG-96931 | [SPARK-23053](https://issues.apache.org/jira/browse/SPARK-23053), [SPARK-23186](https://issues.apache.org/jira/browse/SPARK-23186), [SPARK-23230](https://issues.apache.org/jira/browse/SPARK-23230), [SPARK-23358](https://issues.apache.org/jira/browse/SPARK-23358), [SPARK-23376](https://issues.apache.org/jira/browse/SPARK-23376), [SPARK-23391](https://issues.apache.org/jira/browse/SPARK-23391) | Update `Spark2` up-to-date (Feb. 19) |
| BUG-97067 | [HIVE-10697](https://issues.apache.org/jira/browse/HIVE-10697) | ObjectInspectorConvertors\#UnionConvertor does a faulty conversion | | BUG-97244 | [KNOX-1083](https://issues.apache.org/jira/browse/KNOX-1083) | HttpClient default timeout should be a sensible value | | BUG-97459 | [ZEPPELIN-3271](https://issues.apache.org/jira/browse/ZEPPELIN-3271) | Option for disabling scheduler |
Fixed issues represent selected issues that were previously logged via Hortonworks support, but are now addressed in the current release.
| BUG-97743 | N/A | java.lang.NoClassDefFoundError exception while deploying storm topology |
| BUG-97756 | [PHOENIX-4576](https://issues.apache.org/jira/browse/PHOENIX-4576) | Fix LocalIndexSplitMergeIT tests failing |
| BUG-97771 | [HDFS-11711](https://issues.apache.org/jira/browse/HDFS-11711) | DN should not delete the block On "Too many open files" Exception |
-| BUG-97869 | [KNOX-1190](https://issues.apache.org/jira/browse/KNOX-1190) | Knox SSO support for Google OIDC is broken. |
+| BUG-97869 | [KNOX-1190](https://issues.apache.org/jira/browse/KNOX-1190) | `Knox` SSO support for Google OIDC is broken. |
| BUG-97879 | [PHOENIX-4489](https://issues.apache.org/jira/browse/PHOENIX-4489) | HBase Connection leak in Phoenix MR Jobs |
| BUG-98392 | [RANGER-2007](https://issues.apache.org/jira/browse/RANGER-2007) | ranger-tagsync's Kerberos ticket fails to renew |
| BUG-98484 | N/A | Hive Incremental Replication to Cloud not working |
| BUG-98533 | [HBASE-19934](https://issues.apache.org/jira/browse/HBASE-19934), [HBASE-20008](https://issues.apache.org/jira/browse/HBASE-20008) | HBase snapshot restore is failing due to Null pointer exception |
| BUG-98555 | [PHOENIX-4662](https://issues.apache.org/jira/browse/PHOENIX-4662) | NullPointerException in TableResultIterator.java on cache resend |
| BUG-98579 | [HBASE-13716](https://issues.apache.org/jira/browse/HBASE-13716) | Stop using Hadoop's FSConstants |
-| BUG-98705 | [KNOX-1230](https://issues.apache.org/jira/browse/KNOX-1230) | Many Concurrent Requests to Knox causes URL Mangling |
+| BUG-98705 | [KNOX-1230](https://issues.apache.org/jira/browse/KNOX-1230) | Many Concurrent Requests to `Knox` causes URL Mangling |
| BUG-98983 | [KNOX-1108](https://issues.apache.org/jira/browse/KNOX-1108) | NiFiHaDispatch not failing over |
| BUG-99107 | [HIVE-19054](https://issues.apache.org/jira/browse/HIVE-19054) | Function replication shall use "hive.repl.replica.functions.root.dir" as root |
| BUG-99145 | [RANGER-2035](https://issues.apache.org/jira/browse/RANGER-2035) | Errors accessing servicedefs with empty implClass with Oracle backend |
Fixed issues represent selected issues that were previously logged via Hortonworks support, but are now addressed in the current release.
| BUG-99453 | [HIVE-19065](https://issues.apache.org/jira/browse/HIVE-19065) | Metastore client compatibility check should include syncMetaStoreClient |
| BUG-99521 | N/A | ServerCache for HashJoin isn't re-created when iterators are reinstantiated |
| BUG-99590 | [PHOENIX-3518](https://issues.apache.org/jira/browse/PHOENIX-3518) | Memory Leak in RenewLeaseTask |
-| BUG-99618 | [SPARK-23599](https://issues.apache.org/jira/browse/SPARK-23599), [SPARK-23806](https://issues.apache.org/jira/browse/SPARK-23806) | Update Spark2 to 2.3.0+ (3/28) |
+| BUG-99618 | [SPARK-23599](https://issues.apache.org/jira/browse/SPARK-23599), [SPARK-23806](https://issues.apache.org/jira/browse/SPARK-23806) | Update `Spark2` to 2.3.0+ (3/28) |
| BUG-99672 | [ATLAS-2524](https://issues.apache.org/jira/browse/ATLAS-2524) | Hive hook with V2 notifications - incorrect handling of 'alter view as' operation |
| BUG-99809 | [HBASE-20375](https://issues.apache.org/jira/browse/HBASE-20375) | Remove use of getCurrentUserCredentials in hbase-spark module |
Fixed issues represent selected issues that were previously logged via Hortonworks support, but are now addressed in the current release.
| **Bug ID** | **Apache JIRA** | **Summary** |
|--|--|--|
| BUG-87343 | [HIVE-18031](https://issues.apache.org/jira/browse/HIVE-18031) | Support replication for Alter Database operation. |
-| BUG-91293 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | Knox proxy with knox-sso isn't working for ranger |
+| BUG-91293 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | `Knox` proxy with `knox-sso` isn't working for ranger |
| BUG-93116 | [RANGER-1957](https://issues.apache.org/jira/browse/RANGER-1957) | Ranger Usersync isn't syncing users or groups periodically when incremental sync is enabled. |
| BUG-93577 | [RANGER-1938](https://issues.apache.org/jira/browse/RANGER-1938) | Solr for Audit setup doesn't use DocValues effectively |
-| BUG-96082 | [RANGER-1982](https://issues.apache.org/jira/browse/RANGER-1982) | Error Improvement for Analytics Metric of Ranger Admin and Ranger Kms |
-| BUG-96479 | [HDFS-12781](https://issues.apache.org/jira/browse/HDFS-12781) | After Datanode down, In Namenode UI Datanode tab is throwing warning message. |
+| BUG-96082 | [RANGER-1982](https://issues.apache.org/jira/browse/RANGER-1982) | Error Improvement for Analytics Metric of Ranger Admin and Ranger `Kms` |
+| BUG-96479 | [HDFS-12781](https://issues.apache.org/jira/browse/HDFS-12781) | After `Datanode` down, In `Namenode` UI `Datanode` tab is throwing warning message. |
| BUG-97864 | [HIVE-18833](https://issues.apache.org/jira/browse/HIVE-18833) | Auto Merge fails when "insert into directory as orcfile" |
| BUG-98814 | [HDFS-13314](https://issues.apache.org/jira/browse/HDFS-13314) | NameNode should optionally exit if it detects FsImage corruption |
Fixed issues represent selected issues that were previously logged via Hortonworks support, but are now addressed in the current release.
| **Bug ID** | **Apache JIRA** | **Summary** |
|--|--|--|
| BUG-100134 | [SPARK-22919](https://issues.apache.org/jira/browse/SPARK-22919) | Revert of "Bump Apache httpclient versions" |
-| BUG-95823 | N/A | Knox: Upgrade Beanutils |
+| BUG-95823 | N/A | `Knox`: Upgrade `Beanutils` |
| BUG-96751 | [KNOX-1076](https://issues.apache.org/jira/browse/KNOX-1076) | Update nimbus-jose-jwt to 4.41.2 |
| BUG-97864 | [HIVE-18833](https://issues.apache.org/jira/browse/HIVE-18833) | Auto Merge fails when "insert into directory as orcfile" |
| BUG-99056 | [HADOOP-13556](https://issues.apache.org/jira/browse/HADOOP-13556) | Change Configuration.getPropsWithPrefix to use getProps instead of iterator |
Fixed issues represent selected issues that were previously logged via Hortonworks support, but are now addressed in the current release.
| **Bug ID** | **Apache JIRA** | **Summary** |
|--|--|--|
| BUG-100045 | [HIVE-19056](https://issues.apache.org/jira/browse/HIVE-19056) | IllegalArgumentException in FixAcidKeyIndex when ORC file has 0 rows |
-| BUG-100139 | [KNOX-1243](https://issues.apache.org/jira/browse/KNOX-1243) | Normalize the required DNs that are Configured in KnoxToken Service |
-| BUG-100570 | [ATLAS-2557](https://issues.apache.org/jira/browse/ATLAS-2557) | Fix to allow to lookup hadoop ldap groups when are groups from UGI are wrongly set or aren't empty |
+| BUG-100139 | [KNOX-1243](https://issues.apache.org/jira/browse/KNOX-1243) | Normalize the required DNs that are Configured in `KnoxToken` Service |
+| BUG-100570 | [ATLAS-2557](https://issues.apache.org/jira/browse/ATLAS-2557) | Fix to allow to `lookup` hadoop `ldap` groups when are groups from UGI are wrongly set or aren't empty |
| BUG-100646 | [ATLAS-2102](https://issues.apache.org/jira/browse/ATLAS-2102) | Atlas UI Improvements: Search results page |
| BUG-100737 | [HIVE-19049](https://issues.apache.org/jira/browse/HIVE-19049) | Add support for Alter table add columns for Druid |
-| BUG-100750 | [KNOX-1246](https://issues.apache.org/jira/browse/KNOX-1246) | Update service config in Knox to support latest configurations for Ranger. |
+| BUG-100750 | [KNOX-1246](https://issues.apache.org/jira/browse/KNOX-1246) | Update service config in `Knox` to support latest configurations for Ranger. |
| BUG-100965 | [ATLAS-2581](https://issues.apache.org/jira/browse/ATLAS-2581) | Regression with V2 Hive hook notifications: Moving table to a different database |
| BUG-84413 | [ATLAS-1964](https://issues.apache.org/jira/browse/ATLAS-1964) | UI: Support to order columns in Search table |
| BUG-90570 | [HDFS-11384](https://issues.apache.org/jira/browse/HDFS-11384), [HDFS-12347](https://issues.apache.org/jira/browse/HDFS-12347) | Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike |
| BUG-90584 | [HBASE-19052](https://issues.apache.org/jira/browse/HBASE-19052) | FixedFileTrailer should recognize CellComparatorImpl class in branch-1.x |