Updates from: 07/27/2023 01:14:44
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
active-directory Fido2 Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/fido2-compatibility.md
Previously updated : 06/29/2023 Last updated : 07/26/2023
Azure Active Directory allows [FIDO2 security keys](./concept-authentication-pas
## Supported browsers
-This table shows support for authenticating Azure Active Directory (Azure AD) and Microsoft Accounts (MSA). Microsoft accounts are created by consumers for services such as Xbox, Skype, or Outlook.com. Supported device types include **USB**, near-field communication (**NFC**), and bluetooth low energy (**BLE**).
+This table shows support for authenticating Azure Active Directory (Azure AD) and Microsoft Accounts (MSA). Microsoft accounts are created by consumers for services such as Xbox, Skype, or Outlook.com.
-| OS | Chrome | Chrome | Chrome | Edge | Edge | Edge | Firefox | Firefox | Firefox | Safari | Safari | Safari
-|::|::|::|::|::|::|::|::|::|::|::|::|::|
-| | USB | NFC | BLE | USB | NFC | BLE | USB | NFC | BLE | USB | NFC | BLE |
-| **Windows** | ![Chrome supports USB on Windows for Azure AD accounts.][y] | ![Chrome supports NFC on Windows for Azure AD accounts.][y] | ![Chrome supports BLE on Windows for Azure AD accounts.][y] | ![Edge supports USB on Windows for Azure AD accounts.][y] | ![Edge supports NFC on Windows for Azure AD accounts.][y] | ![Edge supports BLE on Windows for Azure AD accounts.][y] | ![Firefox supports USB on Windows for Azure AD accounts.][y] | ![Firefox supports NFC on Windows for Azure AD accounts.][y] | ![Firefox supports BLE on Windows for Azure AD accounts.][y] | ![Safari supports USB on Windows for Azure AD accounts.][n] | ![Safari supports NFC on Windows for Azure AD accounts.][n] | ![Safari supports BLE on Windows for Azure AD accounts.][n] |
-| **macOS** | ![Chrome supports USB on macOS for Azure AD accounts.][y] | ![Chrome supports NFC on macOS for Azure AD accounts.][n] | ![Chrome supports BLE on macOS for Azure AD accounts.][n] | ![Edge supports USB on macOS for Azure AD accounts.][y] | ![Edge supports NFC on macOS for Azure AD accounts.][n] | ![Edge supports BLE on macOS for Azure AD accounts.][n] | ![Firefox supports USB on macOS for Azure AD accounts.][n] | ![Firefox supports NFC on macOS for Azure AD accounts.][n] | ![Firefox supports BLE on macOS for Azure AD accounts.][n] | ![Safari supports USB on macOS for Azure AD accounts.][y] | ![Safari supports NFC on macOS for Azure AD accounts.][n] | ![Safari supports BLE on macOS for Azure AD accounts.][n] |
-| **ChromeOS** | ![Chrome supports USB on ChromeOS for Azure AD accounts.][y] | ![Chrome supports NFC on ChromeOS for Azure AD accounts.][n] | ![Chrome supports BLE on ChromeOS for Azure AD accounts.][n] | ![Edge supports USB on ChromeOS for Azure AD accounts.][n] | ![Edge supports NFC on ChromeOS for Azure AD accounts.][n] | ![Edge supports BLE on ChromeOS for Azure AD accounts.][n] | ![Firefox supports USB on ChromeOS for Azure AD accounts.][n] | ![Firefox supports NFC on ChromeOS for Azure AD accounts.][n] | ![Firefox supports BLE on ChromeOS for Azure AD accounts.][n] | ![Safari supports USB on ChromeOS for Azure AD accounts.][n] | ![Safari supports NFC on ChromeOS for Azure AD accounts.][n] | ![Safari supports BLE on ChromeOS for Azure AD accounts.][n] |
-| **Linux** | ![Chrome supports USB on Linux for Azure AD accounts.][y] | ![Chrome supports NFC on Linux for Azure AD accounts.][n] | ![Chrome supports BLE on Linux for Azure AD accounts.][n] | ![Edge supports USB on Linux for Azure AD accounts.][n] | ![Edge supports NFC on Linux for Azure AD accounts.][n] | ![Edge supports BLE on Linux for Azure AD accounts.][n] | ![Firefox supports USB on Linux for Azure AD accounts.][n] | ![Firefox supports NFC on Linux for Azure AD accounts.][n] | ![Firefox supports BLE on Linux for Azure AD accounts.][n] | ![Safari supports USB on Linux for Azure AD accounts.][n] | ![Safari supports NFC on Linux for Azure AD accounts.][n] | ![Safari supports BLE on Linux for Azure AD accounts.][n] |
-| **iOS** | ![Chrome supports USB on iOS for Azure AD accounts.][y] | ![Chrome supports NFC on iOS for Azure AD accounts.][y] | ![Chrome supports BLE on iOS for Azure AD accounts.][n] | ![Edge supports USB on iOS for Azure AD accounts.][y] | ![Edge supports NFC on iOS for Azure AD accounts.][y] | ![Edge supports BLE on iOS for Azure AD accounts.][n] | ![Firefox supports USB on Linux for Azure AD accounts.][n] | ![Firefox supports NFC on iOS for Azure AD accounts.][n] | ![Firefox supports BLE on iOS for Azure AD accounts.][n] | ![Safari supports USB on iOS for Azure AD accounts.][y] | ![Safari supports NFC on iOS for Azure AD accounts.][y] | ![Safari supports BLE on iOS for Azure AD accounts.][n] |
-| **Android** | ![Chrome supports USB on Android for Azure AD accounts.][n] | ![Chrome supports NFC on Android for Azure AD accounts.][n] | ![Chrome supports BLE on Android for Azure AD accounts.][n] | ![Edge supports USB on Android for Azure AD accounts.][n] | ![Edge supports NFC on Android for Azure AD accounts.][n] | ![Edge supports BLE on Android for Azure AD accounts.][n] | ![Firefox supports USB on Android for Azure AD accounts.][n] | ![Firefox supports NFC on Android for Azure AD accounts.][n] | ![Firefox supports BLE on Android for Azure AD accounts.][n] | ![Safari supports USB on Android for Azure AD accounts.][n] | ![Safari supports NFC on Android for Azure AD accounts.][n] | ![Safari supports BLE on Android for Azure AD accounts.][n] |
-
-- Key registration is currently not supported with ChromeOS/Chrome Browser.
-- For iOS and macOS on Safari browser, PIN requests fail if the PIN isn't already set on the security key.
-- Security key PIN for user verification isn't currently supported with Android.
+| OS | Chrome | Edge | Firefox | Safari |
+|::|::|:-:|:-:|::|
+| **Windows** | ✅ | ✅ | ✅ | N/A |
+| **macOS** | ✅ | ✅ | ✅ | ✅ |
+| **ChromeOS** | ✅ | N/A | N/A | N/A |
+| **Linux** | ✅ | ❌ | ❌ | N/A |
+| **iOS** | ✅ | ✅ | ✅ | ✅ |
+| **Android** | ❌ | ❌ | ❌ | N/A |
>[!NOTE]
->This is the view for web support. Authentication for native apps in iOS and Android are not available yet.
+>This is the view for web support. Authentication for native apps in iOS and Android isn't available yet.
-## Unsupported browsers
+## Browser support for each platform
-The following operating system and browser combinations aren't supported, but future support and testing is being investigated. If you would like to see other operating system and browser support, please leave feedback on our [product feedback site](https://feedback.azure.com/d365community/).
+The following tables show which transports are supported for each platform. Supported device types include **USB**, near-field communication (**NFC**), and bluetooth low energy (**BLE**).
-| Operating system | Browser |
-| - | - |
-| Android | Chrome |
+### Windows
+
+| Browser | USB | NFC | BLE |
+|||--|--|
+| Edge | ✅ | ✅ | ✅ |
+| Chrome | ✅ | ✅ | ✅ |
+| Firefox | ✅ | ✅ | ✅ |
+
+### macOS
+
+| Browser | USB | NFC<sup>1</sup> | BLE<sup>1</sup> |
+|||--|--|
+| Edge | &#x2705; | N/A | N/A |
+| Chrome | &#x2705; | N/A | N/A |
+| Firefox<sup>2</sup> | &#x2705; | N/A | N/A |
+| Safari<sup>2</sup> | &#x2705; | N/A | N/A |
+
+<sup>1</sup>NFC and BLE security keys aren't supported on macOS by Apple.
+
+<sup>2</sup>New security key registration doesn't work on these macOS browsers because they don't prompt to set up biometrics or PIN.
+
+### ChromeOS
+
+| Browser<sup>1</sup> | USB | NFC | BLE |
+|||--|--|
+| Chrome | &#x2705; | &#10060; | &#10060; |
+
+<sup>1</sup>Security key registration isn't supported on ChromeOS or Chrome browser.
+
+### Linux
+
+| Browser | USB | NFC | BLE |
+|||--|--|
+| Edge | &#10060; | &#10060; | &#10060; |
+| Chrome | &#x2705; | &#10060; | &#10060; |
+| Firefox | &#10060; | &#10060; | &#10060; |
++
+### iOS
+
+| Browser<sup>1</sup> | Lightning | NFC | BLE<sup>2</sup> |
+|||--|--|
+| Edge | &#x2705; | &#x2705; | N/A |
+| Chrome | &#x2705; | &#x2705; | N/A |
+| Firefox | &#x2705; | &#x2705; | N/A |
+| Safari | &#x2705; | &#x2705; | N/A |
+
+<sup>1</sup>New security key registration doesn't work on iOS browsers because they don't prompt to set up biometrics or PIN.
+
+<sup>2</sup>BLE security keys aren't supported on iOS by Apple.
+
+### Android
+
+| Browser<sup>1</sup> | USB | NFC | BLE |
+|||--|--|
+| Edge | &#10060; | &#10060; | &#10060; |
+| Chrome | &#10060; | &#10060; | &#10060; |
+| Firefox | &#10060; | &#10060; | &#10060; |
+
+<sup>1</sup>Security key biometrics or PIN for user verification isn't currently supported on Android by Google. Azure AD requires user verification for all FIDO2 authentications.
## Minimum browser version
The following are the minimum browser version requirements.
| Edge | Windows 10 version 1903<sup>1</sup> |
| Firefox | 66 |
-<sup>1</sup>All versions of the new Chromium-based Microsoft Edge support Fido2. Support on Microsoft Edge legacy was added in 1903.
+<sup>1</sup>All versions of the new Chromium-based Microsoft Edge support FIDO2. Support on Microsoft Edge legacy was added in 1903.
## Next steps

[Enable passwordless security key sign-in](./howto-authentication-passwordless-security-key.md)
active-directory Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/block-legacy-authentication.md
Previously updated : 09/26/2022 Last updated : 07/18/2023
-# Block legacy authentication with Azure AD with Conditional Access
+# Block legacy authentication with Azure AD Conditional Access
To give your users easy access to your cloud apps, Azure Active Directory (Azure AD) supports a broad variety of authentication protocols including legacy authentication. However, legacy authentication doesn't support things like multifactor authentication (MFA). MFA is a common requirement to improve security posture in organizations.
+Based on Microsoft's analysis, more than 97 percent of credential stuffing attacks and more than 99 percent of password spray attacks use legacy authentication protocols. These attacks would stop if basic authentication were disabled or blocked.
+ > [!NOTE]
+ > Effective October 1, 2022, we will begin to permanently disable Basic Authentication for Exchange Online in all Microsoft 365 tenants regardless of usage, except for SMTP Authentication. For more information, see the article [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online).

Alex Weinert, Director of Identity Security at Microsoft, in his March 12, 2020 blog post [New tools to block legacy authentication in your organization](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/new-tools-to-block-legacy-authentication-in-your-organization/ba-p/1225302#) emphasizes why organizations should block legacy authentication and what other tools Microsoft provides to accomplish this task:
-> For MFA to be effective, you also need to block legacy authentication. This is because legacy authentication protocols like POP, SMTP, IMAP, and MAPI can't enforce MFA, making them preferred entry points for adversaries attacking your organization...
->
-> ...The numbers on legacy authentication from an analysis of Azure Active Directory (Azure AD) traffic are stark:
->
-> - More than 99 percent of password spray attacks use legacy authentication protocols
-> - More than 97 percent of credential stuffing attacks use legacy authentication
-> - Azure AD accounts in organizations that have disabled legacy authentication experience 67 percent fewer compromises than those where legacy authentication is enabled
->
-
-If you're ready to block legacy authentication to improve your tenant's protection, you can accomplish this goal with Conditional Access. This article explains how you can configure Conditional Access policies that block legacy authentication for all workloads within your tenant.
+This article explains how you can configure Conditional Access policies that block legacy authentication for all workloads within your tenant.
While rolling out legacy authentication blocking protection, we recommend a phased approach, rather than disabling it for all users all at once. Customers may choose to first begin disabling basic authentication on a per-protocol basis, by applying Exchange Online authentication policies, then (optionally) also blocking legacy authentication via Conditional Access policies when ready.
For more information about these authentication protocols and services, see [Sig
### Identify legacy authentication use
-Before you can block legacy authentication in your directory, you need to first understand if your users have clients that use legacy authentication. Below, you'll find useful information to identify and triage where clients are using legacy authentication.
+Before you can block legacy authentication in your directory, you need to first understand if your users have client apps that use legacy authentication.
-#### Indicators from Azure AD
+#### Sign-in log indicators
1. Navigate to the **Azure portal** > **Azure Active Directory** > **Sign-in logs**.
1. Add the **Client App** column if it isn't shown by clicking on **Columns** > **Client App**.
1. Select **Add filters** > **Client App** > choose all of the legacy authentication protocols and select **Apply**.
1. If you've activated the [new sign-in activity reports preview](../reports-monitoring/concept-all-sign-ins.md), repeat the above steps also on the **User sign-ins (non-interactive)** tab.
-Filtering will only show you sign-in attempts that were made by legacy authentication protocols. Clicking on each individual sign-in attempt will show you more details. The **Client App** field under the **Basic Info** tab will indicate which legacy authentication protocol was used.
+Filtering shows you sign-in attempts made by legacy authentication protocols. Clicking on each individual sign-in attempt shows you more details. The **Client App** field under the **Basic Info** tab indicates which legacy authentication protocol was used.
-These logs will indicate where users are using clients that are still depending on legacy authentication. For users that don't appear in these logs and are confirmed to not be using legacy authentication, implement a Conditional Access policy for these users only.
+These logs indicate where users are using clients that are still depending on legacy authentication. For users that don't appear in these logs and are confirmed to not be using legacy authentication, implement a Conditional Access policy for these users only.
Additionally, to help triage legacy authentication within your tenant use the [Sign-ins using legacy authentication workbook](../reports-monitoring/workbook-legacy%20authentication.md).
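The triage the sign-in log steps describe can also be done offline against an exported set of sign-in records. The following is a minimal sketch, assuming a list of dictionaries with a `clientAppUsed` field (as in a sign-in log export); the field name and the set of legacy protocol labels are assumptions drawn from the values the sign-in logs surface, not an official API:

```python
# Hypothetical sketch: triage exported sign-in rows by legacy vs. modern
# authentication, using the legacy "Client App" labels shown in sign-in logs.
LEGACY_CLIENT_APPS = {
    "Exchange ActiveSync",
    "Authenticated SMTP",
    "IMAP4",
    "POP3",
    "MAPI Over HTTP",
    "Exchange Web Services",
    "Other clients",
}

def find_legacy_sign_ins(rows):
    """Return the sign-in rows whose client app is a legacy protocol."""
    return [r for r in rows if r.get("clientAppUsed") in LEGACY_CLIENT_APPS]

sign_ins = [
    {"userPrincipalName": "alice@contoso.com", "clientAppUsed": "Browser"},
    {"userPrincipalName": "bob@contoso.com", "clientAppUsed": "IMAP4"},
    {"userPrincipalName": "carol@contoso.com", "clientAppUsed": "Exchange ActiveSync"},
]

legacy = find_legacy_sign_ins(sign_ins)
# bob and carol would need remediation before legacy auth is blocked
print([r["userPrincipalName"] for r in legacy])
```

Users who never appear in the legacy list are candidates for an early, scoped Conditional Access policy, per the phased-rollout guidance above.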
To determine if a client is using legacy or modern authentication based on the d
## Important considerations
-Many clients that previously only supported legacy authentication now support modern authentication. Clients that support both legacy and modern authentication may require configuration update to move from legacy to modern authentication. If you see **modern mobile**, **desktop client** or **browser** for a client in the Azure AD logs, it's using modern authentication. If it has a specific client or protocol name, such as **Exchange ActiveSync**, it's using legacy authentication. The client types in Conditional Access, Azure AD Sign-in logs, and the legacy authentication workbook distinguish between modern and legacy authentication clients for you.
+Many clients that previously only supported legacy authentication now support modern authentication. Clients that support both legacy and modern authentication may require a configuration update to move from legacy to modern authentication. If you see **modern mobile**, **desktop client**, or **browser** for a client in the Sign-in logs, it's using modern authentication. If it has a specific client or protocol name, such as **Exchange ActiveSync**, it's using legacy authentication. The client types in Conditional Access, Sign-in logs, and the legacy authentication workbook distinguish between modern and legacy authentication clients for you.
- Clients that support modern authentication but aren't configured to use modern authentication should be updated or reconfigured to use modern authentication.
- All clients that don't support modern authentication should be replaced.

> [!IMPORTANT]
>
-> **Exchange Active Sync with Certificate-based authentication(CBA)**
+> **Exchange Active Sync with Certificate-based authentication (CBA)**
> > When implementing Exchange Active Sync (EAS) with CBA, configure clients to use modern authentication. Clients not using modern authentication for EAS with CBA **are not blocked** with [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online). However, these clients **are blocked** by Conditional Access policies configured to block legacy authentication. >
The easiest way to block legacy authentication across your entire organization i
### Indirectly blocking legacy authentication
-If your organization isn't ready to block legacy authentication across the entire organization, you should ensure that sign-ins using legacy authentication aren't bypassing policies that require grant controls such as requiring multifactor authentication or compliant/hybrid Azure AD joined devices. During authentication, legacy authentication clients don't support sending MFA, device compliance, or join state information to Azure AD. Therefore, apply policies with grant controls to all client applications so that legacy authentication based sign-ins that can't satisfy the grant controls are blocked. With the general availability of the client apps condition in August 2020, newly created Conditional Access policies apply to all client apps by default.
+If your organization isn't ready to block legacy authentication completely, you should ensure that sign-ins using legacy authentication aren't bypassing policies that require grant controls like multifactor authentication. During authentication, legacy authentication clients don't support sending MFA, device compliance, or join state information to Azure AD. Therefore, apply policies with grant controls to all client applications so that legacy authentication based sign-ins that can't satisfy the grant controls are blocked. With the general availability of the client apps condition in August 2020, newly created Conditional Access policies apply to all client apps by default.
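For organizations that automate policy deployment, a block policy can be expressed as a Microsoft Graph `conditionalAccessPolicy` body. The following is a hedged sketch of such a body; the emergency-access group ID is a placeholder, and you should verify the exact schema against the Graph reference before posting it to `/identity/conditionalAccess/policies`:

```python
import json

# Sketch of a Conditional Access policy body that blocks legacy authentication.
# Legacy clients surface as the "exchangeActiveSync" and "other" client app types.
policy = {
    "displayName": "Block legacy authentication",
    # Start in report-only mode, per the phased-rollout recommendation above.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            # Placeholder: your emergency access / break-glass group.
            "excludeGroups": ["<emergency-access-group-id>"],
        },
        "applications": {"includeApplications": ["All"]},
        "clientAppTypes": ["exchangeActiveSync", "other"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

print(json.dumps(policy, indent=2))
```

Switching `state` to `enabled` enforces the block once monitoring in report-only mode confirms the intended result.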
## What you should know
You can select all available grant controls for the **Other clients** condition;
## Next steps

-- [Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+- [Determine effect using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
- If you aren't familiar with configuring Conditional Access policies yet, see [require MFA for specific apps with Azure Active Directory Conditional Access](../authentication/tutorial-enable-azure-mfa.md) for an example.
- For more information about modern authentication support, see [How modern authentication works for Office client apps](/office365/enterprise/modern-auth-for-office-2013-and-2016).
- [How to set up a multifunction device or application to send email using Microsoft 365](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365)
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
description: Use filter for devices in Conditional Access to enhance security po
Previously updated : 01/25/2023 Last updated : 07/18/2023
The following steps will help create two Conditional Access policies to support
Policy 1: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **Directory roles** and choose **Global Administrator**.
Policy 1: All users with the directory role of Global Administrator, accessing t
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
1. Select **Done**.
-1. Under **Cloud apps or actions** > **Include**, select **Select apps**, and select **Microsoft Azure Management**.
+1. Under **Target resources** > **Cloud apps** > **Include** > **Select apps**, choose **Microsoft Azure Management**, and select **Select**.
1. Under **Access controls** > **Grant**, select **Grant access**, **Require multifactor authentication**, and **Require device to be marked as compliant**, then select **Select**.
1. Confirm your settings and set **Enable policy** to **On**.
1. Select **Create** to enable your policy.

Policy 2: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, excluding a filter for devices using rule expression device.extensionAttribute1 equals SAW and for Access controls, Block.
-1. Select **New policy**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **Directory roles** and choose **Global Administrator**.
Policy 2: All users with the directory role of Global Administrator, accessing t
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
1. Select **Done**.
-1. Under **Cloud apps or actions** > **Include**, select **Select apps**, and select **Microsoft Azure Management**.
+1. Under **Target resources** > **Cloud apps** > **Include** > **Select apps**, choose **Microsoft Azure Management**, and select **Select**.
1. Under **Conditions**, select **Filter for devices**.
1. Toggle **Configure** to **Yes**.
1. Set **Devices matching the rule** to **Exclude filtered devices from policy**.
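The same device filter can be represented programmatically in a Graph Conditional Access policy body. The fragment below is a hedged sketch: the `-eq` operator follows the filter-for-devices rule syntax, but verify the exact schema against the Graph reference before relying on it.

```python
import json

# Sketch of the device-filter portion of Policy 2: exclude SAW devices
# (device.extensionAttribute1 equals "SAW") from the block policy.
conditions = {
    "devices": {
        "deviceFilter": {
            # "exclude" means devices matching the rule are exempt from the policy.
            "mode": "exclude",
            "rule": 'device.extensionAttribute1 -eq "SAW"',
        }
    }
}

print(json.dumps(conditions, indent=2))
```

With this filter in place, only Global Administrators signing in from a device tagged as a secure admin workstation escape the block.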
active-directory Concept Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policies.md
The article [Common Conditional Access policies](concept-conditional-access-poli
[Create a Conditional Access policy](../authentication/tutorial-enable-azure-mfa.md?bc=%2fazure%2factive-directory%2fconditional-access%2fbreadcrumb%2ftoc.json&toc=%2fazure%2factive-directory%2fconditional-access%2ftoc.json#create-a-conditional-access-policy)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
[Planning a cloud-based Azure AD Multifactor Authentication deployment](../authentication/howto-mfa-getstarted.md)
active-directory Concept Conditional Access Policy Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policy-common.md
Title: Conditional Access templates
-description: Deploy commonly used Conditional Access policies with templates
+ Title: Secure your resources with Conditional Access policy templates
+description: Deploy recommended Conditional Access policies from easy to use templates.
Previously updated : 11/29/2022 Last updated : 07/17/2023
-# Conditional Access templates (Preview)
+# Conditional Access templates
-Conditional Access templates provide a convenient method to deploy new policies aligned with Microsoft recommendations. These templates are designed to provide maximum protection aligned with commonly used policies across various customer types and locations.
+Conditional Access templates provide a convenient method to deploy new policies aligned with Microsoft recommendations. These templates are designed to provide maximum protection aligned with commonly used policies across various customer types and locations.
-There are 14 Conditional Access policy templates, filtered by five different scenarios:
+## Template categories
-- Secure foundation
-- Zero Trust
-- Remote work
-- Protect administrators
-- Emerging threats
-- All
+The 16 Conditional Access policy templates are organized into the following categories:
-Find the templates in the **Azure portal** > **Azure Active Directory** > **Security** > **Conditional Access** > **New policy from template (Preview)**. Select **Show more** to see all policy templates in each scenario.
+# [Secure foundation](#tab/secure-foundation)
+Microsoft recommends these policies as the base for all organizations. We recommend these policies be deployed as a group.
-> [!IMPORTANT]
-> Conditional Access template policies will exclude only the user creating the policy from the template. If your organization needs to [exclude other accounts](../roles/security-emergency-access.md), you will be able to modify the policy once they are created. Simply navigate to **Azure portal** > **Azure Active Directory** > **Security** > **Conditional Access** > **Policies**, select the policy to open the editor and modify the excluded users and groups to select accounts you want to exclude.
->
-> By default, each policy is created in [report-only mode](concept-conditional-access-report-only.md), we recommended organizations test and monitor usage, to ensure intended result, before turning each policy on.
-
-Organizations can select individual policy templates and:
-
-- View a summary of the policy settings.
-- Edit, to customize based on organizational needs.
-- Export the JSON definition for use in programmatic workflows.
- - These JSON definitions can be edited and then imported on the main Conditional Access policies page using the **Import policy file** option.
-
-## Conditional Access template policies
+- [Require multifactor authentication for admins](howto-conditional-access-policy-admin-mfa.md)
+- [Securing security info registration](howto-conditional-access-policy-registration.md)
+- [Block legacy authentication](howto-conditional-access-policy-block-legacy.md)
+- [Require multifactor authentication for all users](howto-conditional-access-policy-all-users-mfa.md)
+- [Require multifactor authentication for Azure management](howto-conditional-access-policy-azure-management.md)
+- [Require compliant or hybrid Azure AD joined device or multifactor authentication for all users](howto-conditional-access-policy-compliant-device.md)
-- [Block legacy authentication](howto-conditional-access-policy-block-legacy.md)\*
-- [Require multifactor authentication for admins](howto-conditional-access-policy-admin-mfa.md)\*
-- [Require multifactor authentication for all users](howto-conditional-access-policy-all-users-mfa.md)\*
-- [Require multifactor authentication for Azure management](howto-conditional-access-policy-azure-management.md)\*
+# [Zero Trust](#tab/zero-trust)
-> \* These four policies when configured together, provide similar functionality enabled by [security defaults](../fundamentals/concept-fundamentals-security-defaults.md).
+These policies as a group help support a [Zero Trust architecture](/security/zero-trust/deploy/identity).
+- [Require multifactor authentication for admins](howto-conditional-access-policy-admin-mfa.md)
+- [Securing security info registration](howto-conditional-access-policy-registration.md)
+- [Block legacy authentication](howto-conditional-access-policy-block-legacy.md)
+- [Require multifactor authentication for all users](howto-conditional-access-policy-all-users-mfa.md)
+- [Require multifactor authentication for guest access](howto-policy-guest-mfa.md)
+- [Require multifactor authentication for Azure management](howto-conditional-access-policy-azure-management.md)
+- [Require multifactor authentication for risky sign-ins](howto-conditional-access-policy-risk.md) **Requires Azure AD Premium P2**
+- [Require password change for high-risk users](howto-conditional-access-policy-risk-user.md) **Requires Azure AD Premium P2**
- [Block access for unknown or unsupported device platform](howto-policy-unknown-unsupported-device.md)
- [No persistent browser session](howto-policy-persistent-browser-session.md)
-- [Require approved client apps or app protection](howto-policy-approved-app-or-app-protection.md)
+- [Require approved client apps or app protection policies](howto-policy-approved-app-or-app-protection.md)
- [Require compliant or hybrid Azure AD joined device or multifactor authentication for all users](howto-conditional-access-policy-compliant-device.md)
-- [Require compliant or Hybrid Azure AD joined device for administrators](howto-conditional-access-policy-compliant-device-admin.md)
-- [Require multifactor authentication for risky sign-in](howto-conditional-access-policy-risk.md) **Requires Azure AD Premium P2**
+- [Require multifactor authentication for admins accessing Microsoft admin portals](how-to-policy-mfa-admin-portals.md)
+
+# [Remote work](#tab/remote-work)
+
+These policies help secure organizations with remote workers.
+
+- [Securing security info registration](howto-conditional-access-policy-registration.md)
+- [Block legacy authentication](howto-conditional-access-policy-block-legacy.md)
+- [Require multifactor authentication for all users](howto-conditional-access-policy-all-users-mfa.md)
- [Require multifactor authentication for guest access](howto-policy-guest-mfa.md)
+- [Require multifactor authentication for risky sign-ins](howto-conditional-access-policy-risk.md) **Requires Azure AD Premium P2**
- [Require password change for high-risk users](howto-conditional-access-policy-risk-user.md) **Requires Azure AD Premium P2**
-- [Securing security info registration](howto-conditional-access-policy-registration.md)
+- [Require compliant or hybrid Azure AD joined device for administrators](howto-conditional-access-policy-compliant-device-admin.md)
+- [Block access for unknown or unsupported device platform](howto-policy-unknown-unsupported-device.md)
+- [No persistent browser session](howto-policy-persistent-browser-session.md)
+- [Require approved client apps or app protection policies](howto-policy-approved-app-or-app-protection.md)
- [Use application enforced restrictions for unmanaged devices](howto-policy-app-enforced-restriction.md)
+# [Protect administrator](#tab/protect-administrator)
+
+These policies are directed at highly privileged administrators in your environment, where compromise may cause the most damage.
+
+- [Require multifactor authentication for admins](howto-conditional-access-policy-admin-mfa.md)
+- [Block legacy authentication](howto-conditional-access-policy-block-legacy.md)
+- [Require multifactor authentication for Azure management](howto-conditional-access-policy-azure-management.md)
+- [Require compliant or hybrid Azure AD joined device for administrators](howto-conditional-access-policy-compliant-device-admin.md)
+- [Require phishing-resistant multifactor authentication for administrators](how-to-policy-phish-resistant-admin-mfa.md)
+
+# [Emerging threats](#tab/emerging-threats)
+
+Policies in this category provide new ways to protect against compromise.
+
+- [Require phishing-resistant multifactor authentication for administrators](how-to-policy-phish-resistant-admin-mfa.md)
+++
+Find these templates in the **[Microsoft Entra admin center](https://entra.microsoft.com)** > **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Create new policy from templates**. Select **Show more** to see all policy templates in each category.
++
+> [!IMPORTANT]
+> Conditional Access template policies exclude only the user creating the policy from the template. If your organization needs to [exclude other accounts](../roles/security-emergency-access.md), you can modify the policies once they're created. Navigate to **Microsoft Entra admin center** > **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Policies**, select the policy to open the editor, and modify the excluded users and groups to select the accounts you want to exclude.
+
+By default, each policy is created in [report-only mode](concept-conditional-access-report-only.md). We recommend organizations test and monitor usage to ensure the intended result before turning on each policy.
+
+Organizations can select individual policy templates and:
+
+- View a summary of the policy settings.
+- Edit the policy to customize it based on organizational needs.
+- Export the JSON definition for use in programmatic workflows.
+ - These JSON definitions can be edited and then imported on the main Conditional Access policies page using the **Upload policy file** option.
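For illustration, an exported template definition resembles the following sketch. Field names follow the Microsoft Graph `conditionalAccessPolicy` schema, and the `state` value shown is the documented report-only setting; the exact export from your tenant may differ, and the `excludeUsers` value here is a hypothetical placeholder:

```json
{
  "displayName": "Require multifactor authentication for all users",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "users": {
      "includeUsers": [ "All" ],
      "excludeUsers": [ "<break-glass-account-object-id>" ]
    },
    "applications": {
      "includeApplications": [ "All" ]
    }
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": [ "mfa" ]
  }
}
```

Editing the `displayName`, user exclusions, or grant controls before importing the file lets you adapt a template without rebuilding the policy in the portal.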
+## Other common policies
+
+- [Block access by location](howto-conditional-access-policy-location.md)
active-directory Concept Filter For Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-filter-for-applications.md
description: Use filter for applications in Conditional Access to manage conditi
Previously updated : 09/30/2022 Last updated : 07/18/2023
Follow the instructions in the article, [Add or deactivate custom security attri
1. Under **Include**, select **All users**.
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
1. Select **Done**.
-1. Under **Cloud apps or actions**, select the following options:
+1. Under **Target resources**, select the following options:
1. Select what this policy applies to: **Cloud apps**.
1. Include **Select apps**.
1. Select **Edit filter**.
Follow the instructions in the article, [Add or deactivate custom security attri
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
## Configure custom attributes
Sign in as a user who the policy would apply to and test to see that MFA is requ
## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+[Determine effect using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory Concept Token Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-token-protection.md
description: Learn how to use token protection in Conditional Access policies.
Previously updated : 06/21/2023 Last updated : 07/18/2023
The steps that follow help create a Conditional Access policy to require token p
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select the users or groups who are testing this policy.
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions** > **Include**, select **Select apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **Select apps**.
1. Under **Select**, select the following applications supported by the preview: 1. Office 365 Exchange Online 1. Office 365 SharePoint Online
The steps that follow help create a Conditional Access policy to require token p
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
### Capture logs and analyze
active-directory Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/controls.md
Custom controls can't be used with Identity Protection's automation requiring Az
- [Conditional Access common policies](concept-conditional-access-policy-common.md) - [Report-only mode](concept-conditional-access-report-only.md)-- [Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+- [Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory How To Policy Mfa Admin Portals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-policy-mfa-admin-portals.md
+
+ Title: Require multifactor authentication for Microsoft admin portals
+description: Create a Conditional Access policy requiring multifactor authentication for admins accessing Microsoft admin portals.
+++++ Last updated : 07/18/2023++++++++
+# Common Conditional Access policy: Require multifactor authentication for admins accessing Microsoft admin portals
+
+Microsoft recommends securing access to any Microsoft admin portals like Microsoft Entra, Microsoft 365, Exchange, and Azure. Using the [Microsoft Admin Portals (Preview)](concept-conditional-access-cloud-apps.md#microsoft-admin-portals-preview) app, organizations can control interactive access to Microsoft admin portals.
+
+## User exclusions
+
+## Create a Conditional Access policy
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **Directory roles** and choose built-in roles like:
+
+ - Global Administrator
+ - Application Administrator
+ - Authentication Administrator
+ - Billing Administrator
+ - Cloud Application Administrator
+ - Conditional Access Administrator
+ - Exchange Administrator
+ - Helpdesk Administrator
+ - Password Administrator
+ - Privileged Authentication Administrator
+ - Privileged Role Administrator
+ - Security Administrator
+ - SharePoint Administrator
+ - User Administrator
+
+ > [!WARNING]
+ > Conditional Access policies support built-in roles. Conditional Access policies are not enforced for other role types including [administrative unit-scoped](../roles/admin-units-assign-roles.md) or [custom roles](../roles/custom-create.md).
+
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+1. Under **Target resources** > **Cloud apps** > **Include**, **Select apps**, select **Microsoft Admin Portals (Preview)**.
+1. Under **Access controls** > **Grant**, select **Grant access**, select **Require authentication strength**, choose **Multifactor authentication**, then select **Select**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
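The same policy can also be created programmatically. The following Python sketch builds a request body for the Microsoft Graph `POST /identity/conditionalAccess/policies` endpoint. The `MicrosoftAdminPortals` application value, the report-only `state` value, and the Global Administrator role template ID come from Microsoft's documentation; the excluded account ID is a hypothetical placeholder, and the sketch uses the simpler built-in `mfa` grant control rather than the authentication strength the portal steps select:

```python
import json
import urllib.request


def build_admin_portals_policy(excluded_account_ids):
    """Build a report-only policy requiring MFA for Microsoft admin portals."""
    return {
        "displayName": "Require MFA for admins accessing Microsoft admin portals",
        # Report-only mode, mirroring the portal guidance above.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {
                "includeRoles": [
                    # Global Administrator role template ID (well-known built-in role).
                    "62e90394-69f5-4237-9190-012177145e10",
                ],
                # Break-glass accounts to exclude (placeholder values).
                "excludeUsers": excluded_account_ids,
            },
            "applications": {
                # Targets the Microsoft Admin Portals (Preview) app group.
                "includeApplications": ["MicrosoftAdminPortals"],
            },
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }


def create_policy(token, policy):
    """POST the policy to Microsoft Graph (needs Policy.ReadWrite.ConditionalAccess)."""
    req = urllib.request.Request(
        "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
        data=json.dumps(policy).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return urllib.request.urlopen(req)
```

A policy created this way lands in report-only mode, matching the recommendation to monitor before enforcing.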
+
+## Next steps
+
+[Conditional Access templates](concept-conditional-access-policy-common.md)
+
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory How To Policy Phish Resistant Admin Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-policy-phish-resistant-admin-mfa.md
+
+ Title: Require phishing-resistant multifactor authentication for Azure AD administrator roles
+description: Create a Conditional Access policy requiring stronger authentication methods for highly privileged roles in your organization.
+++++ Last updated : 07/18/2023++++++++
+# Common Conditional Access policy: Require phishing-resistant multifactor authentication for administrators
+
+Accounts that are assigned highly privileged administrative rights are frequent targets of attackers. Requiring phishing-resistant multifactor authentication (MFA) on those accounts is an easy way to reduce the risk of those accounts being compromised.
+
+> [!CAUTION]
+> Before creating a policy requiring phishing-resistant multifactor authentication, ensure your administrators have the appropriate methods registered. If you enable this policy without completing this step, you risk locking yourself out of your tenant.
+
+Microsoft recommends you require phishing-resistant multifactor authentication on the following roles at a minimum:
+
+- Global Administrator
+- Application Administrator
+- Authentication Administrator
+- Billing Administrator
+- Cloud Application Administrator
+- Conditional Access Administrator
+- Exchange Administrator
+- Helpdesk Administrator
+- Password Administrator
+- Privileged Authentication Administrator
+- Privileged Role Administrator
+- Security Administrator
+- SharePoint Administrator
+- User Administrator
+
+Organizations can choose to include or exclude roles as they see fit.
+
+## User exclusions
++
+## Create a Conditional Access policy
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **Directory roles** and choose built-in roles like:
+
+ - Global Administrator
+ - Application Administrator
+ - Authentication Administrator
+ - Billing Administrator
+ - Cloud Application Administrator
+ - Conditional Access Administrator
+ - Exchange Administrator
+ - Helpdesk Administrator
+ - Password Administrator
+ - Privileged Authentication Administrator
+ - Privileged Role Administrator
+ - Security Administrator
+ - SharePoint Administrator
+ - User Administrator
+
+ > [!WARNING]
+ > Conditional Access policies support built-in roles. Conditional Access policies are not enforced for other role types including [administrative unit-scoped](../roles/admin-units-assign-roles.md) or [custom roles](../roles/custom-create.md).
+
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
+1. Under **Access controls** > **Grant**, select **Grant access**, select **Require authentication strength**, choose **Phishing-resistant MFA**, then select **Select**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
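In the policy's JSON definition or a Microsoft Graph request, the phishing-resistant requirement is expressed as an `authenticationStrength` grant control rather than the `mfa` built-in control. A fragment sketch follows; the GUID shown is the ID Microsoft documents for the built-in Phishing-resistant MFA strength, but verify it against your tenant before relying on it:

```json
"grantControls": {
  "operator": "OR",
  "authenticationStrength": {
    "id": "00000000-0000-0000-0000-000000000004"
  }
}
```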
+
+## Next steps
+
+[Conditional Access templates](concept-conditional-access-policy-common.md)
+
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory Howto Conditional Access Policy Admin Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-admin-mfa.md
Previously updated : 08/22/2022 Last updated : 07/26/2023
Accounts that are assigned administrative rights are targeted by attackers. Requ
Microsoft recommends you require MFA on the following roles at a minimum, based on [identity score recommendations](../fundamentals/identity-secure-score.md): -- Global administrator-- Application administrator-- Authentication Administrator-- Billing administrator-- Cloud application administrator-- Conditional Access administrator-- Exchange administrator-- Helpdesk administrator-- Password administrator-- Privileged authentication administrator
+- Global Administrator
+- Application Administrator
+- Authentication Administrator
+- Billing Administrator
+- Cloud Application Administrator
+- Conditional Access Administrator
+- Exchange Administrator
+- Helpdesk Administrator
+- Password Administrator
+- Privileged Authentication Administrator
- Privileged Role Administrator-- Security administrator-- SharePoint administrator-- User administrator
+- Security Administrator
+- SharePoint Administrator
+- User Administrator
Organizations can choose to include or exclude roles as they see fit.
Organizations can choose to include or exclude roles as they see fit.
## Create a Conditional Access policy
-The following steps will help create a Conditional Access policy to require those assigned administrative roles to perform multifactor authentication.
+The following steps will help create a Conditional Access policy to require those assigned administrative roles to perform multifactor authentication. Some organizations may be ready to move to stronger authentication methods for their administrators. These organizations may choose to implement a policy like the one described in the article [Require phishing-resistant multifactor authentication for administrators](how-to-policy-phish-resistant-admin-mfa.md).
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **Directory roles** and choose built-in roles like:

   - Global Administrator
- - Application administrator
+ - Application Administrator
- Authentication Administrator
- - Billing administrator
- - Cloud application administrator
+ - Billing Administrator
+ - Cloud Application Administrator
- Conditional Access Administrator
- - Exchange administrator
- - Helpdesk administrator
- - Password administrator
- - Privileged authentication administrator
+ - Exchange Administrator
+ - Helpdesk Administrator
+ - Password Administrator
+ - Privileged Authentication Administrator
- Privileged Role Administrator
- - Security administrator
- - SharePoint administrator
- - User administrator
+ - Security Administrator
+ - SharePoint Administrator
+ - User Administrator
> [!WARNING]
> Conditional Access policies support built-in roles. Conditional Access policies are not enforced for other role types including [administrative unit-scoped](../roles/admin-units-assign-roles.md) or [custom roles](../roles/custom-create.md).

1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Access controls** > **Grant**, select **Grant access**, **Require multifactor authentication**, and select **Select**.
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory Howto Conditional Access Policy All Users Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa.md
Previously updated : 08/22/2022 Last updated : 07/18/2023
As Alex Weinert, the Director of Identity Security at Microsoft, mentions in hi
> Your password doesn't matter, but MFA does! Based on our studies, your account is more than 99.9% less likely to be compromised if you use MFA.
-The guidance in this article will help your organization create an MFA policy for your environment.
+The guidance in this article helps your organization create an MFA policy for your environment.
## User exclusions [!INCLUDE [active-directory-policy-exclusions](../../../includes/active-directory-policy-exclude-user.md)]
Organizations that use [Subscription Activation](/windows/deployment/windows-10-
## Create a Conditional Access policy
-The following steps will help create a Conditional Access policy to require all users do multifactor authentication.
+The following steps help create a Conditional Access policy to require all users to perform multifactor authentication.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**.
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Exclude**, select any applications that don't require multifactor authentication.
1. Under **Access controls** > **Grant**, select **Grant access**, **Require multifactor authentication**, and select **Select**.
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
### Named locations

Organizations may choose to incorporate network locations known as **Named locations** in their Conditional Access policies. These named locations may include trusted IPv4 networks like those for a main office location. For more information about configuring named locations, see the article [What is the location condition in Azure Active Directory Conditional Access?](location-condition.md)
-In the example policy above, an organization may choose to not require multifactor authentication if accessing a cloud app from their corporate network. In this case they could add the following configuration to the policy:
+In the previous example policy, an organization may choose not to require multifactor authentication when users access a cloud app from the corporate network. In this case, they could add the following configuration to the policy:
1. Under **Assignments**, select **Conditions** > **Locations**. 1. Configure **Yes**.
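In policy JSON, that trusted-network carve-out appears under `conditions.locations`. The fragment below is a sketch using the Microsoft Graph schema's documented well-known values (`All` and `AllTrusted`); a specific named location would instead be referenced by its GUID:

```json
"conditions": {
  "locations": {
    "includeLocations": [ "All" ],
    "excludeLocations": [ "AllTrusted" ]
  }
}
```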
In the example policy above, an organization may choose to not require multifact
## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory Howto Conditional Access Policy Authentication Strength External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-authentication-strength-external.md
Previously updated : 04/03/2023 Last updated : 07/18/2023
Determine if one of the built-in authentication strengths will work for your sce
Use the following steps to create a Conditional Access policy that applies an authentication strength to external users.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. 1. Under **Include**, choose **Select users and groups**, and then select **Guest or external users**.-
- <!![Screenshot showing where to select guest and external user types.](media/howto-conditional-access-policy-authentication-strength-external/assignments-external-user-types.png)>
- 1. Select the types of [guest or external users](../external-identities/authentication-conditional-access.md#assigning-conditional-access-policies-to-external-user-types) you want to apply the policy to. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions**, under **Include** or **Exclude**, select any applications you want to include in or exclude from the authentication strength requirements.
+1. Under **Target resources** > **Cloud apps**, under **Include** or **Exclude**, select any applications you want to include in or exclude from the authentication strength requirements.
1. Under **Access controls** > **Grant**: 1. Choose **Grant access**. 1. Select **Require authentication strength**, and then select the built-in or custom authentication strength from the list.
After you confirm your settings using [report-only mode](howto-conditional-acces
## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory Howto Conditional Access Policy Azure Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-azure-management.md
Previously updated : 08/22/2022 Last updated : 07/18/2023
The following steps will help create a Conditional Access policy to require user
> [!CAUTION]
> Make sure you understand how Conditional Access works before setting up a policy to manage access to Microsoft Azure Management. Make sure you don't create conditions that could block your own access to the portal.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. 1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions** > **Include**, select **Select apps**, choose **Microsoft Azure Management**, and select **Select**.
+1. Under **Target resources** > **Cloud apps** > **Include** > **Select apps**, choose **Microsoft Azure Management**, and select **Select**.
1. Under **Access controls** > **Grant**, select **Grant access**, **Require multifactor authentication**, and select **Select**.
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory Howto Conditional Access Policy Block Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-access.md
Previously updated : 08/22/2022 Last updated : 07/18/2023
The following steps will help create Conditional Access policies to block access
The first policy blocks access to all apps except for Microsoft 365 applications if not on a trusted location.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**.
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions**, select the following options:
+1. Under **Target resources** > **Cloud apps**, select the following options:
1. Under **Include**, select **All cloud apps**.
1. Under **Exclude**, select **Office 365**, and select **Select**.
1. Under **Conditions**:
The first policy blocks access to all apps except for Microsoft 365 applications
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
The following steps create a second policy that requires multifactor authentication or a compliant device for users of Microsoft 365.
-1. Select **New policy**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
   1. Under **Include**, select **All users**.
   1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions** > **Include**, select **Select apps**, choose **Office 365**, and select **Select**.
+1. Under **Target resources** > **Cloud apps** > **Include** > **Select apps**, choose **Office 365**, and select **Select**.
1. Under **Access controls** > **Grant**, select **Grant access**.
   1. Select **Require multifactor authentication** and **Require device to be marked as compliant**, then select **Select**.
   1. Ensure **Require one of the selected controls** is selected.
A second policy is created below to require multifactor authentication or a comp
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
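As a rough Microsoft Graph equivalent of this second policy, "Require one of the selected controls" maps to the grant operator `"OR"` over the built-in controls. A minimal sketch (the group ID is a hypothetical placeholder; the payload is built but not posted):

```python
BREAK_GLASS_GROUP = "11111111-1111-1111-1111-111111111111"  # hypothetical

policy = {
    "displayName": "Require MFA or compliant device for Office 365",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"], "excludeGroups": [BREAK_GLASS_GROUP]},
        "applications": {"includeApplications": ["Office365"]},
        "clientAppTypes": ["all"],
    },
    # "Require one of the selected controls" => operator "OR".
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["mfa", "compliantDevice"],
    },
}
```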
> [!NOTE]
> Conditional Access policies are enforced after first-factor authentication is completed. Conditional Access isn't intended to be an organization's first line of defense for scenarios like denial-of-service (DoS) attacks, but it can use signals from these events to determine access.

## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+[Determine effect using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory Howto Conditional Access Policy Block Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-legacy.md
Previously updated : 08/22/2022 Last updated : 07/18/2023
Due to the increased risk associated with legacy authentication protocols, Micro
## Template deployment
-Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
+Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates](concept-conditional-access-policy-common.md#conditional-access-templates).
## Create a Conditional Access policy

The following steps will help create a Conditional Access policy to block legacy authentication requests. This policy is put into [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact it will have on existing users. When administrators are comfortable that the policy applies as they intend, they can switch to **On** or stage the deployment by adding specific groups and excluding others.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
   1. Under **Include**, select **All users**.
   1. Under **Exclude**, select **Users and groups** and choose any accounts that must maintain the ability to use legacy authentication. Exclude at least one account to prevent yourself from being locked out. If you don't exclude any account, you won't be able to create this policy.
-1. Under **Cloud apps or actions**, select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Conditions** > **Client apps**, set **Configure** to **Yes**.
   1. Check only the boxes **Exchange ActiveSync clients** and **Other clients**.
   1. Select **Done**.
The following steps will help create a Conditional Access policy to block legacy
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
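In Microsoft Graph terms, the two checkboxes in the **Client apps** condition correspond to the `clientAppTypes` values `exchangeActiveSync` and `other`. A sketch of the equivalent policy body, with a hypothetical excluded account standing in for the one you must keep:

```python
# Hypothetical account that keeps legacy auth; exclude at least one account.
EXCLUDED_ACCOUNT = "22222222-2222-2222-2222-222222222222"

policy = {
    "displayName": "Block legacy authentication",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"], "excludeUsers": [EXCLUDED_ACCOUNT]},
        "applications": {"includeApplications": ["All"]},
        # The two checkboxes in the Client apps condition.
        "clientAppTypes": ["exchangeActiveSync", "other"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}
```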
> [!NOTE]
> Conditional Access policies are enforced after first-factor authentication is completed. Conditional Access isn't intended to be an organization's first line of defense for scenarios like denial-of-service (DoS) attacks, but it can use signals from these events to determine access.

## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
[How to set up a multifunction device or application to send email using Microsoft 365](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365)
active-directory Howto Conditional Access Policy Compliant Device Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device-admin.md
Previously updated : 09/30/2022 Last updated : 07/18/2023
Organizations can choose to include or exclude roles as they see fit.
The following steps will help create a Conditional Access policy to require multifactor authentication, require that devices accessing resources are marked as compliant with your organization's Intune compliance policies, or require that they're hybrid Azure AD joined.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
   1. Under **Include**, select **Directory roles** and choose built-in roles like:
The following steps will help create a Conditional Access policy to require mult
> Conditional Access policies support built-in roles. Conditional Access policies are not enforced for other role types including [administrative unit-scoped](../roles/admin-units-assign-roles.md) or [custom roles](../roles/custom-create.md).

   1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Access controls** > **Grant**.
   1. Select **Require device to be marked as compliant** and **Require hybrid Azure AD joined device**.
   1. **For multiple controls**, select **Require one of the selected controls**.
The following steps will help create a Conditional Access policy to require mult
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
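Targeting directory roles maps to `includeRoles` with role template IDs in the Graph schema. A sketch assuming the Global Administrator role template ID and a hypothetical break-glass group; confirm the role IDs against your tenant before use:

```python
BREAK_GLASS_GROUP = "11111111-1111-1111-1111-111111111111"  # hypothetical

policy = {
    "displayName": "Require compliant or hybrid joined device for admins",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {
            # Built-in role template IDs; Global Administrator shown here.
            "includeRoles": ["62e90394-69f5-4237-9190-012177145e10"],
            "excludeGroups": [BREAK_GLASS_GROUP],
        },
        "applications": {"includeApplications": ["All"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {
        "operator": "OR",  # "Require one of the selected controls"
        "builtInControls": ["compliantDevice", "domainJoinedDevice"],
    },
}
```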
> [!NOTE]
> You can enroll your new devices to Intune even if you select **Require device to be marked as compliant** for **All users** and **All cloud apps** using the steps above. **Require device to be marked as compliant** control does not block Intune enrollment.
Organizations that use the [Subscription Activation](/windows/deployment/windows
## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+[Determine effect using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
[Device compliance policies work with Azure AD](/intune/device-compliance-get-started#device-compliance-policies-work-with-azure-ad)
active-directory Howto Conditional Access Policy Compliant Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md
Previously updated : 09/30/2022 Last updated : 07/18/2023
Requiring a hybrid Azure AD joined device is dependent on your devices already b
The following steps will help create a Conditional Access policy to require multifactor authentication, require that devices accessing resources are marked as compliant with your organization's Intune compliance policies, or require that they're hybrid Azure AD joined.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
   1. Under **Include**, select **All users**.
   1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. If you must exclude specific applications from your policy, you can choose them from the **Exclude** tab under **Select excluded cloud apps** and choose **Select**.
1. Under **Access controls** > **Grant**.
   1. Select **Require multifactor authentication**, **Require device to be marked as compliant**, and **Require hybrid Azure AD joined device**.
The following steps will help create a Conditional Access policy to require mult
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
> [!NOTE]
> You can enroll your new devices to Intune even if you select **Require device to be marked as compliant** for **All users** and **All cloud apps** using the steps above. **Require device to be marked as compliant** control does not block Intune enrollment and the access to the Microsoft Intune Web Company Portal application.
Organizations that use the [Subscription Activation](/windows/deployment/windows
## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+[Determine effect using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
[Device compliance policies work with Azure AD](/intune/device-compliance-get-started#device-compliance-policies-work-with-azure-ad)
active-directory Howto Conditional Access Policy Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-location.md
Previously updated : 02/23/2023 Last updated : 07/18/2023
With the location condition in Conditional Access, you can control access to you
## Define locations
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**.
-1. Choose **New location**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Named locations**.
+1. Choose the type of location to create.
+ 1. **Countries location** or **IP ranges location**.
1. Give your location a name.
-1. Choose **IP ranges** if you know the specific externally accessible IPv4 address ranges that make up that location or **Countries/Regions**.
- 1. Provide the **IP ranges** or select the **Countries/Regions** for the location you're specifying.
- * If you choose Countries/Regions, you can optionally choose to include unknown areas.
-1. Choose **Save**
+1. Provide the **IP ranges** or select the **Countries/Regions** for the location you're specifying.
+ - If you select IP ranges, you can optionally **Mark as trusted location**.
+ - If you choose Countries/Regions, you can optionally choose to include unknown areas.
+1. Select **Create**
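Named locations can likewise be created through Microsoft Graph (`POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations`). The sketch below builds an IP-ranges location marked as trusted; the display name is hypothetical and the CIDR uses an RFC 5737 documentation range:

```python
location = {
    "@odata.type": "#microsoft.graph.ipNamedLocation",
    "displayName": "Corporate egress",  # hypothetical name
    "isTrusted": True,                  # "Mark as trusted location"
    "ipRanges": [
        {
            "@odata.type": "#microsoft.graph.iPv4CidrRange",
            "cidrAddress": "203.0.113.0/24",  # documentation-only range
        }
    ],
}
```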
More information about the location condition in Conditional Access can be found in the article, [What is the location condition in Azure Active Directory Conditional Access](location-condition.md).

## Create a Conditional Access policy
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
   1. Under **Include**, select **All users**.
   1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions** > **Include**, and select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Conditions** > **Location**.
   1. Set **Configure** to **Yes**.
   1. Under **Include**, select **Selected locations**.
More information about the location condition in Conditional Access can be found
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+[Determine effect using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory Howto Conditional Access Policy Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-registration.md
Previously updated : 11/28/2022 Last updated : 07/18/2023
Some organizations in the past may have used trusted network location or device
## Template deployment
-Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
+Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates](concept-conditional-access-policy-common.md#conditional-access-templates).
## Create a policy to secure registration

The following policy applies to the selected users who attempt to register using the combined registration experience. The policy requires users to be in a trusted network location, perform multifactor authentication, or use Temporary Access Pass credentials.
-1. In the **Azure portal**, browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. In **Name**, enter a name for this policy. For example, **Combined Security Info Registration with TAP**.
1. Under **Assignments**, select **Users or workload identities**.
   1. Under **Include**, select **All users**.
The following policy applies to the selected users, who attempt to register usin
> Temporary Access Pass does not work for guest users.

1. Select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions**, select **User actions**, check **Register security information**.
+1. Under **Target resources** > **User actions**, check **Register security information**.
1. Under **Conditions** > **Locations**.
   1. Set **Configure** to **Yes**.
   1. Include **Any location**.
The following policy applies to the selected users, who attempt to register usin
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
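In the Graph schema, the **Register security information** user action is targeted with `includeUserActions` instead of cloud apps. A minimal sketch of the policy above (Temporary Access Pass issuance is handled separately and isn't part of this payload):

```python
policy = {
    "displayName": "Combined Security Info Registration with TAP",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {
            # The "Register security information" user action.
            "includeUserActions": ["urn:user:registersecurityinfo"],
        },
        "clientAppTypes": ["all"],
        "locations": {
            "includeLocations": ["All"],
            "excludeLocations": ["AllTrusted"],  # skip trusted networks
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
```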
Administrators will now have to issue Temporary Access Pass credentials to new users so they can satisfy the requirements for multifactor authentication to register. Steps to accomplish this task are found in the section [Create a Temporary Access Pass in the Azure AD Portal](../authentication/howto-authentication-temporary-access-pass.md#create-a-temporary-access-pass).
Organizations may choose to require other grant controls with or in place of **R
For [guest users](../external-identities/what-is-b2b.md) who need to register for multifactor authentication in your directory you may choose to block registration from outside of [trusted network locations](concept-conditional-access-conditions.md#locations) using the following guide.
-1. In the **Azure portal**, browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. In **Name**, enter a name for this policy. For example, **Combined Security Info Registration on Trusted Networks**.
1. Under **Assignments**, select **Users or workload identities**.
   1. Under **Include**, select **All guest and external users**.
-1. Under **Cloud apps or actions**, select **User actions**, check **Register security information**.
+1. Under **Target resources** > **User actions**, check **Register security information**.
1. Under **Conditions** > **Locations**.
   1. Set **Configure** to **Yes**.
   1. Include **Any location**.
For [guest users](../external-identities/what-is-b2b.md) who need to register fo
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+[Determine effect using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
[Require users to reconfirm authentication information](../authentication/concept-sspr-howitworks.md#reconfirm-authentication-information)
active-directory Howto Conditional Access Policy Risk User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk-user.md
Previously updated : 01/06/2023 Last updated : 07/18/2023
There are two locations where this policy may be configured, Conditional Access
## Template deployment
-Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
+Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates](concept-conditional-access-policy-common.md#conditional-access-templates).
## Enable with Conditional Access policy
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
   1. Under **Include**, select **All users**.
   1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Conditions** > **User risk**, set **Configure** to **Yes**.
   1. Under **Configure user risk levels needed for policy to be enforced**, select **High**.
   1. Select **Done**.
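The user risk condition corresponds to `userRiskLevels` in the Graph schema. A sketch of the relevant fragments only; the grant shown pairs MFA with a secure password change, which is the usual user-risk remediation, but confirm the controls against your own policy:

```python
# Fragment of a conditionalAccessPolicy body; user risk "High" only.
conditions = {
    "users": {"includeUsers": ["All"]},
    "applications": {"includeApplications": ["All"]},
    "clientAppTypes": ["all"],
    "userRiskLevels": ["high"],
}

# Typical remediation: require MFA *and* a secure password change.
grant_controls = {
    "operator": "AND",
    "builtInControls": ["mfa", "passwordChange"],
}
```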
After administrators confirm the settings using [report-only mode](howto-conditi
- [Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md)
- [Conditional Access common policies](concept-conditional-access-policy-common.md)
- [Sign-in risk-based Conditional Access](howto-conditional-access-policy-risk.md)
-- [Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
-- [Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+- [Determine effect using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+- [Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
- [What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md)
active-directory Howto Conditional Access Policy Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk.md
Previously updated : 08/22/2022 Last updated : 07/18/2023
The Sign-in risk-based policy protects users from registering MFA in risky sessi
## Template deployment
-Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
+Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates](concept-conditional-access-policy-common.md#conditional-access-templates).
## Enable with Conditional Access policy
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
   1. Under **Include**, select **All users**.
   1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Conditions** > **Sign-in risk**, set **Configure** to **Yes**. Under **Select the sign-in risk level this policy will apply to**:
   1. Select **High** and **Medium**.
   1. Select **Done**.
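Sign-in risk is expressed the same way, via `signInRiskLevels`. A minimal fragment sketch for the Medium-and-High policy above, with MFA as the grant:

```python
# Fragment of a conditionalAccessPolicy body; Medium and High sign-in risk.
conditions = {
    "users": {"includeUsers": ["All"]},
    "applications": {"includeApplications": ["All"]},
    "clientAppTypes": ["all"],
    "signInRiskLevels": ["high", "medium"],
}

# Prompt for MFA when the sign-in is risky.
grant_controls = {"operator": "OR", "builtInControls": ["mfa"]}
```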
After administrators confirm the settings using [report-only mode](howto-conditi
- [Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md)
- [Conditional Access common policies](concept-conditional-access-policy-common.md)
- [User risk-based Conditional Access](howto-conditional-access-policy-risk-user.md)
-- [Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
-- [Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+- [Determine effect using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+- [Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
- [What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md)
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
Previously updated : 08/22/2022 Last updated : 07/18/2023
To make sure that your policy works as expected, the recommended best practice i
### Policy 1: Sign-in frequency control
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Choose all required conditions for the customer's environment, including the target cloud apps.
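Sign-in frequency is a session control rather than a grant control; in the Graph schema it lives under `sessionControls.signInFrequency`. A sketch assuming a 10-hour frequency (pick a value that suits your environment):

```python
# sessionControls fragment for a sign-in frequency of 10 hours.
session_controls = {
    "signInFrequency": {
        "isEnabled": True,
        "type": "hours",  # or "days"
        "value": 10,
    }
}
```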
To make sure that your policy works as expected, the recommended best practice i
### Policy 2: Persistent browser session
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Choose all required conditions.
To make sure that your policy works as expected, the recommended best practice i
### Policy 3: Sign-in frequency control every time risky user
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**.
1. Under **Exclude**, select **Users and groups** and choose your organization's [emergency access or break-glass accounts](../roles/security-emergency-access.md).
1. Select **Done**.
-1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Conditions** > **User risk**, set **Configure** to **Yes**. Under **Configure user risk levels needed for policy to be enforced**, select **High**, then select **Done**.
1. Under **Access controls** > **Grant**, select **Grant access**, **Require password change**, and select **Select**.
1. Under **Session controls** > **Sign-in frequency**, select **Every time**.
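The risky-user policy above maps to a Graph payload along these lines. A sketch under stated assumptions: the display name is illustrative, and the pairing of `passwordChange` with `mfa` under the `AND` operator, plus `frequencyInterval: "everyTime"`, reflect the Graph `conditionalAccessPolicy` schema rather than text from this article.

```python
# Sketch: require password change and per-request reauthentication
# for high-risk users, as a Graph conditionalAccessPolicy payload.
policy = {
    "displayName": "High-risk users - password change",  # illustrative name
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "userRiskLevels": ["high"],                      # portal: User risk = High
    },
    # Graph requires combining passwordChange with mfa under AND.
    "grantControls": {"operator": "AND", "builtInControls": ["mfa", "passwordChange"]},
    "sessionControls": {
        # Maps to Sign-in frequency > Every time in the portal.
        "signInFrequency": {"isEnabled": True, "frequencyInterval": "everyTime"}
    },
}
```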
active-directory Howto Policy App Enforced Restriction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-app-enforced-restriction.md
Previously updated : 09/27/2022 Last updated : 07/18/2023
Block or limit access to SharePoint, OneDrive, and Exchange content from unmanag
## Create a Conditional Access policy
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**.
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions**, select the following options:
+1. Under **Target resources** > **Cloud apps**, select the following options:
1. Under **Include**, choose **Select apps**.
1. Choose **Office 365**, then select **Select**.
1. Under **Access controls** > **Session**, select **Use app enforced restrictions**, then select **Select**.
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
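As a rough Graph equivalent of the steps above (a sketch, not the article's own method; the display name is illustrative, and `Office365` is the app-group value that corresponds to the portal's **Office 365** entry):

```python
# Sketch: app-enforced restrictions session control for Office 365,
# expressed as a Graph conditionalAccessPolicy payload.
policy = {
    "displayName": "App enforced restrictions - example",  # illustrative name
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        # "Office365" targets the Office 365 app group from the portal.
        "applications": {"includeApplications": ["Office365"]},
    },
    "sessionControls": {
        # Hands session enforcement off to SharePoint/Exchange app settings.
        "applicationEnforcedRestrictions": {"isEnabled": True}
    },
}
```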
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory Howto Policy Approved App Or App Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md
Previously updated : 06/26/2023 Last updated : 07/18/2023
The policies below are put in to [Report-only mode](howto-conditional-access-ins
The following steps help create a Conditional Access policy requiring an approved client app **or** an app protection policy when using an iOS/iPadOS or Android device. This policy also prevents the use of Exchange ActiveSync clients using basic authentication on mobile devices. This policy works in tandem with an [app protection policy created in Microsoft Intune](/mem/intune/apps/app-protection-policies).
-Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
+Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates](concept-conditional-access-policy-common.md#conditional-access-templates).
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**.
1. Under **Exclude**, select **Users and groups** and exclude at least one account to prevent yourself from being locked out. If you don't exclude any accounts, you can't create the policy.
-1. Under **Cloud apps or actions**, select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Conditions** > **Device platforms**, set **Configure** to **Yes**.
1. Under **Include**, **Select device platforms**.
1. Choose **Android** and **iOS**.
Organizations can choose to deploy this policy using the steps outlined below or
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
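The grant logic above can be sketched as a Graph payload. Assumptions to note: the display name is illustrative, and `approvedApplication`/`compliantApplication` are the Graph built-in control values that correspond to the portal's approved client app and app protection policy options.

```python
# Sketch: require an approved client app OR an app protection policy
# on Android/iOS, as a Graph conditionalAccessPolicy payload.
policy = {
    "displayName": "Approved app or app protection - example",  # illustrative
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "platforms": {"includePlatforms": ["android", "iOS"]},
    },
    # OR operator: satisfying either control grants access.
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["approvedApplication", "compliantApplication"],
    },
}
```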
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
### Block Exchange ActiveSync on all devices

This policy blocks all Exchange ActiveSync clients using basic authentication from connecting to Exchange Online.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**.
1. Under **Exclude**, select **Users and groups** and exclude at least one account to prevent yourself from being locked out. If you don't exclude any accounts, you can't create the policy.
1. Select **Done**.
-1. Under **Cloud apps or actions**, select **Select apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **Select apps**.
1. Select **Office 365 Exchange Online**.
1. Select **Select**.
1. Under **Conditions** > **Client apps**, set **Configure** to **Yes**.
This policy will block all Exchange ActiveSync clients using basic authenticatio
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
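A Graph sketch of the Exchange ActiveSync block above, for readers scripting deployment. The display name is illustrative; the GUID is the well-known Office 365 Exchange Online application ID in the commercial cloud, which you should verify for your environment.

```python
# Sketch: block Exchange ActiveSync clients against Exchange Online,
# as a Graph conditionalAccessPolicy payload.
policy = {
    "displayName": "Block Exchange ActiveSync - example",  # illustrative name
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        # Office 365 Exchange Online first-party app ID (verify for your cloud).
        "applications": {"includeApplications": ["00000002-0000-0ff1-ce00-000000000000"]},
        # Scope the policy to Exchange ActiveSync client traffic only.
        "clientAppTypes": ["exchangeActiveSync"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}
```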
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
## Next steps
active-directory Howto Policy Guest Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-guest-mfa.md
Previously updated : 09/27/2022 Last updated : 07/18/2023
Require guest users perform multifactor authentication when accessing your organ
## Create a Conditional Access policy
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All guest and external users**.
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Exclude**, select any applications that don't require multifactor authentication.
1. Under **Access controls** > **Grant**, select **Grant access**, **Require multifactor authentication**, and select **Select**.
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
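For scripted deployment, the guest MFA policy above looks roughly like this as a Graph payload. A sketch, not the article's own method: the display name is illustrative, and `GuestsOrExternalUsers` is the Graph value corresponding to the portal's **All guest and external users** selection.

```python
# Sketch: require MFA for all guest and external users,
# as a Graph conditionalAccessPolicy payload.
policy = {
    "displayName": "Require MFA for guests - example",  # illustrative name
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        # Special value that targets all guest and external users.
        "users": {"includeUsers": ["GuestsOrExternalUsers"]},
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
```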
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory Howto Policy Persistent Browser Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-persistent-browser-session.md
Previously updated : 09/27/2022 Last updated : 07/18/2023
Protect user access on unmanaged devices by preventing browser sessions from rem
## Create a Conditional Access policy
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**.
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Conditions** > **Filter for devices**, set **Configure** to **Yes**.
1. Under **Devices matching the rule:**, set to **Include filtered devices in policy**.
1. Under **Rule syntax**, select the **Edit** pencil, paste the following expression in the box, and then select **Apply**.
Protect user access on unmanaged devices by preventing browser sessions from rem
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
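The persistent browser session control plus device filter above can be sketched as a Graph payload. Note the assumptions: the display name is illustrative, and the filter rule shown here is a hypothetical "unmanaged device" expression (not the rule elided from this article), so adapt it to your own device attributes.

```python
# Sketch: never persist browser sessions on filtered (unmanaged) devices,
# as a Graph conditionalAccessPolicy payload.
policy = {
    "displayName": "No persistent browser on unmanaged devices",  # illustrative
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "devices": {
            "deviceFilter": {
                "mode": "include",
                # Hypothetical rule: devices that are neither compliant nor
                # hybrid Azure AD joined. Replace with your own expression.
                "rule": 'device.isCompliant -ne True -and device.trustType -ne "ServerAD"',
            }
        },
    },
    "sessionControls": {"persistentBrowser": {"isEnabled": True, "mode": "never"}},
}
```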
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory Howto Policy Unknown Unsupported Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-unknown-unsupported-device.md
Previously updated : 09/27/2022 Last updated : 07/18/2023
Users will be blocked from accessing company resources when the device type is u
## Create a Conditional Access policy
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**.
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
-1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Conditions**, select **Device platforms**.
1. Set **Configure** to **Yes**.
1. Under **Include**, select **Any device**.
Users will be blocked from accessing company resources when the device type is u
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
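The include-all-then-exclude-supported pattern above can be sketched in Graph terms. This is illustrative only: the display name is made up, and the excluded platform list assumes the supported platforms named in the portal.

```python
# Sketch: block unknown or unsupported device platforms by including all
# platforms and excluding the known ones, as a Graph payload.
policy = {
    "displayName": "Block unknown platforms - example",  # illustrative name
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "platforms": {
            # Include everything, then carve out the supported platforms;
            # whatever remains (unknown/unsupported) is blocked.
            "includePlatforms": ["all"],
            "excludePlatforms": ["android", "iOS", "windows", "macOS"],
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}
```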
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
## Next steps
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
+[Conditional Access templates](concept-conditional-access-policy-common.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
Previously updated : 03/17/2023 Last updated : 07/26/2023
Locations such as your organization's public network ranges can be marked as tru
- Conditional Access policies can include or exclude these locations. - Sign-ins from trusted named locations improve the accuracy of Azure AD Identity Protection's risk calculation, lowering a user's sign-in risk when they authenticate from a location marked as trusted.-- Locations marked as trusted cannot be deleted. Remove the trusted designation before attempting to delete.
+- Locations marked as trusted can't be deleted. Remove the trusted designation before attempting to delete.
> [!WARNING]
> Even if you know a network and have marked it as trusted, that doesn't mean you should exclude it from having policies applied. Verify explicitly is a core principle of a Zero Trust architecture. To find out more about Zero Trust and other ways to align your organization to the guiding principles, see the [Zero Trust Guidance Center](/security/zero-trust/).
If you have these trusted IPs configured, they show up as **MFA Trusted IPs** in
### All Network Access locations of my tenant
-Organizations with access to Global Secure Access preview features will have an additional location listed that is made up of users and devices that comply with your organization's security policies. For more information, see the section [Enable Global Secure Access signaling for Conditional Access](../../global-secure-access/how-to-compliant-network.md#enable-global-secure-access-signaling-for-conditional-access). It can be used with Conditional Access policies to perform a compliant network check for access to resources.
+Organizations with access to Global Secure Access preview features have another location listed that is made up of users and devices that comply with your organization's security policies. For more information, see the section [Enable Global Secure Access signaling for Conditional Access](../../global-secure-access/how-to-compliant-network.md#enable-global-secure-access-signaling-for-conditional-access). It can be used with Conditional Access policies to perform a compliant network check for access to resources.
### Selected locations
You can also find the client IP by clicking a row in the report, and then going
:::image type="content" source="media/location-condition/sign-in-logs-showing-ip-address-filter-for-ipv6.png" alt-text="A screenshot showing Azure AD Sign-in logs and an IP address filter for IPv6 addresses." lightbox="media/location-condition/sign-in-logs-showing-ip-address-filter-for-ipv6.png":::
+> [!NOTE]
+> IPv6 addresses from service endpoints might appear in the sign-in logs with failures because of how service endpoints handle traffic. [Service endpoints aren't supported](/azure/virtual-network/virtual-network-service-endpoints-overview#limitations). If users see these IPv6 addresses, remove the service endpoint from their virtual network subnet configuration.
+ ## What you should know

### Cloud proxies and VPNs
active-directory Migrate Approved Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/migrate-approved-client-app.md
Previously updated : 03/28/2023 Last updated : 07/18/2023
Organizations can choose to update their policies using the following steps.
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
Repeat the previous steps on all of your policies that use the approved client app grant.
The following steps help create a Conditional Access policy requiring an approve
Organizations can choose to deploy this policy using the following steps.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**.
1. Under **Exclude**, select **Users and groups** and exclude at least one account to prevent yourself from being locked out. If you don't exclude any accounts, you can't create the policy.
-1. Under **Cloud apps or actions**, select **All cloud apps**.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Conditions** > **Device platforms**, set **Configure** to **Yes**.
1. Under **Include**, **Select device platforms**.
1. Choose **Android** and **iOS**.
Organizations can choose to deploy this policy using the following steps.
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
> [!NOTE] > If an app does not support **Require app protection policy**, end users trying to access resources from that app will be blocked.
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
Previously updated : 04/04/2023 Last updated : 07/18/2023
Conditional Access for workload identities enables blocking service principals f
Create a location based Conditional Access policy that applies to service principals.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **What does this policy apply to?**, select **Workload identities**.
1. Under **Include**, choose **Select service principals**, and select the appropriate service principals from the list.
-1. Under **Cloud apps or actions**, select **All cloud apps**. The policy applies only when a service principal requests a token.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**. The policy applies only when a service principal requests a token.
1. Under **Conditions** > **Locations**, include **Any location** and exclude **Selected locations** where you want to allow access.
1. Under **Grant**, **Block access** is the only available option. Access is blocked when a token request is made from outside the allowed range.
1. Your policy can be saved in **Report-only** mode, allowing administrators to estimate its effects, or it can be enforced by turning the policy **On**.
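A Graph sketch of the location-based workload identity policy above. Assumptions: the display name, the all-zero service principal object ID, and the use of the `AllTrusted` special value as the excluded (allowed) location are all illustrative stand-ins; in the article's flow you would exclude your own selected named locations.

```python
# Sketch: block a service principal's token requests from outside
# allowed locations, as a Graph conditionalAccessPolicy payload.
policy = {
    "displayName": "Block SP outside allowed ranges",  # illustrative name
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        # Workload identity policies target clientApplications, not users.
        "clientApplications": {
            # Hypothetical service principal object ID placeholder.
            "includeServicePrincipals": ["00000000-0000-0000-0000-000000000000"]
        },
        "applications": {"includeApplications": ["All"]},
        "locations": {
            "includeLocations": ["All"],
            # Stand-in for your allowed locations; the article uses
            # specific selected named locations instead.
            "excludeLocations": ["AllTrusted"],
        },
    },
    # Block is the only available grant control for workload identities.
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}
```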
Create a risk-based Conditional Access policy that applies to service principals
:::image type="content" source="media/workload-identity/conditional-access-workload-identity-risk-policy.png" alt-text="Creating a Conditional Access policy with a workload identity and risk as a condition." lightbox="media/workload-identity/conditional-access-workload-identity-risk-policy.png":::
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
1. Under **What does this policy apply to?**, select **Workload identities**.
1. Under **Include**, choose **Select service principals**, and select the appropriate service principals from the list.
-1. Under **Cloud apps or actions**, select **All cloud apps**. The policy applies only when a service principal requests a token.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**. The policy applies only when a service principal requests a token.
1. Under **Conditions**, select **Service principal risk**.
1. Set the **Configure** toggle to **Yes**.
1. Select the levels of risk where you want this policy to trigger.
active-directory Identity Platform Integration Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-platform-integration-checklist.md
Use the following checklist to ensure that your application is effectively integ
## Branding
-![checkbox](./medi).
+![checkbox](./media/integration-checklist/checkbox-two.svg) Adhere to the [Branding guidelines for applications](/azure/active-directory/develop/howto-add-branding-in-apps).
![checkbox](./medi). Make sure your name and logo are representative of your company/product so that users can make informed decisions. Ensure that you're not violating any trademarks.
active-directory V2 Conditional Access Dev Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-conditional-access-dev-guide.md
error_description=AADSTS50076: Due to a configuration change made by your admini
Our app needs to catch the `error=interaction_required`. The application can then use either `acquireTokenPopup()` or `acquireTokenRedirect()` on the same resource. The user is forced to do a multi-factor authentication. After the user completes the multi-factor authentication, the app is issued a fresh access token for the requested resource.
-To try out this scenario, see our [JavaScript SPA calling Node.js web API using on-behalf-of flow](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/2-call-api-api-ca) code sample. This code sample uses the Conditional Access policy and web API you registered earlier with a JavaScript SPA to demonstrate this scenario. It shows how to properly handle the claims challenge and get an access token that can be used for your web API.
- ## See also

* To learn more about the capabilities, see [Conditional Access in Azure Active Directory](../conditional-access/overview.md).
active-directory Cross Cloud Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-cloud-settings.md
Follow these steps to add the tenant you want to collaborate with to your Organi
## Sign-in endpoints
-After enabling collaboration with an organization from a different Microsoft cloud, cross-cloud Azure AD guest users can now sign in to your multi-tenant or Microsoft first-party apps by using a [common endpoint](redemption-experience.md#redemption-and-sign-in-through-a-common-endpoint) (in other words, a general app URL that doesn't include your tenant context). During the sign-in process, the guest user chooses **Sign-in options**, and then selects **Sign in to an organization**. The user then types the name of your organization and continues signing in using their Azure AD credentials.
+After enabling collaboration with an organization from a different Microsoft cloud, cross-cloud Azure AD guest users can now sign in to your multi-tenant or Microsoft first-party apps by using a [common endpoint](redemption-experience.md#redemption-process-and-sign-in-through-a-common-endpoint) (in other words, a general app URL that doesn't include your tenant context). During the sign-in process, the guest user chooses **Sign-in options**, and then selects **Sign in to an organization**. The user then types the name of your organization and continues signing in using their Azure AD credentials.
Cross-cloud Azure AD guest users can also use application endpoints that include your tenant information, for example:
active-directory How To User Flow Add Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-user-flow-add-application.md
Previously updated : 05/09/2023 Last updated : 07/24/2023
If you already registered your application in your customer tenant, you can add
1. Choose **Select**.
+## Extension app
+
+You might find an app named **b2c-extensions-app** in the application list. This app is created automatically inside the new directory, and it contains all extension attributes for your customer tenant.
+If you want to collect information beyond the built-in attributes, you can create [custom user attributes](how-to-define-custom-attributes.md) and add them to your sign-up user flow. Custom attributes are also known as directory extension attributes, as they extend the user profile information stored in your customer directory. All extension attributes for your customer tenant are stored in the **b2c-extensions-app**. Do not delete this app.
+To learn more about this app, see [b2c-extensions-app](/azure/active-directory-b2c/extensions-app).
+ ## Next steps - If you selected email with password sign-in, [enable password reset](how-to-enable-password-reset-customers.md).
active-directory How To Web App Node Sign In Call Api Prepare Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-node-sign-in-call-api-prepare-tenant.md
Previously updated : 05/22/2023 Last updated : 07/26/2023
In this step, you create the web and the web API application registrations, and
[!INCLUDE [active-directory-b2c-app-integration-add-user-flow](./includes/register-app/add-app-role.md)]
-### Configure optional claims
+### Configure idtyp token claim
[!INCLUDE [active-directory-b2c-app-integration-add-user-flow](./includes/register-app/add-optional-claims-access.md)]
active-directory Troubleshooting Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/troubleshooting-known-issues.md
You get the following error when you try to delete a customer tenant:
`Unable to delete tenant`
-**Cause**: This error occurs when you try to delete a customer tenant but you haven't deleted the b2c-extensions-app.
+**Cause**: This error occurs when you try to delete a customer tenant but you haven't deleted the **b2c-extensions-app**.
+
+Custom attributes, also known as directory extension attributes, expand the user profile information stored in your customer directory. All extension attributes for your customer tenant are stored in the app named **b2c-extensions-app**.
**Workaround**: When deleting a customer tenant, delete the **b2c-extensions-app**, found in **App registrations** under **All applications**.
active-directory Tutorial Daemon Dotnet Call Api Prepare Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-daemon-dotnet-call-api-prepare-tenant.md
In this tutorial, you learn how to:
[!INCLUDE [active-directory-b2c-app-integration-add-user-flow](./includes/register-app/add-app-role.md)]
-## 3. Configure optional claims
+## 3. Configure idtyp token claim
[!INCLUDE [active-directory-b2c-app-integration-add-user-flow](./includes/register-app/add-optional-claims-access.md)]
active-directory Tutorial Daemon Node Call Api Prepare Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-daemon-node-call-api-prepare-tenant.md
If you've already registered a client daemon application and a web API in the Mi
[!INCLUDE [active-directory-b2c-app-integration-add-user-flow](./includes/register-app/add-app-role.md)]
-## Configure optional claims
+## Configure idtyp token claim
[!INCLUDE [active-directory-b2c-app-integration-add-user-flow](./includes/register-app/add-optional-claims-access.md)]
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md
With SAML/WS-Fed IdP federation, guest users sign into your Azure AD tenant usin
## Sign-in endpoints
-SAML/WS-Fed IdP federation guest users can now sign in to your multi-tenant or Microsoft first-party apps by using a [common endpoint](redemption-experience.md#redemption-and-sign-in-through-a-common-endpoint) (in other words, a general app URL that doesn't include your tenant context). During the sign-in process, the guest user chooses **Sign-in options**, and then selects **Sign in to an organization**. The user then types the name of your organization and continues signing in using their own credentials.
+SAML/WS-Fed IdP federation guest users can now sign in to your multi-tenant or Microsoft first-party apps by using a [common endpoint](redemption-experience.md#redemption-process-and-sign-in-through-a-common-endpoint) (in other words, a general app URL that doesn't include your tenant context). During the sign-in process, the guest user chooses **Sign-in options**, and then selects **Sign in to an organization**. The user then types the name of your organization and continues signing in using their own credentials.
SAML/WS-Fed IdP federation guest users can also use application endpoints that include your tenant information, for example:
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md
Guest users who see a "header too long" error can clear their cookies or open a
## Sign-in endpoints
-Google guest users can now sign in to your multi-tenant or Microsoft first-party apps by using a [common endpoint](redemption-experience.md#redemption-and-sign-in-through-a-common-endpoint) (in other words, a general app URL that doesn't include your tenant context). During the sign-in process, the guest user chooses **Sign-in options**, and then selects **Sign in to an organization**. The user then types the name of your organization and continues signing in using their Google credentials.
+Google guest users can now sign in to your multi-tenant or Microsoft first-party apps by using a [common endpoint](redemption-experience.md#redemption-process-and-sign-in-through-a-common-endpoint) (in other words, a general app URL that doesn't include your tenant context). During the sign-in process, the guest user chooses **Sign-in options**, and then selects **Sign in to an organization**. The user then types the name of your organization and continues signing in using their Google credentials.
Google guest users can also use application endpoints that include your tenant information, for example:
active-directory Invitation Email Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invitation-email-elements.md
# The elements of the B2B collaboration invitation email
-Invitation emails are a critical component to bring partners on board as B2B collaboration users in Azure AD. It’s [not required that you send an email to invite someone using B2B collaboration](redemption-experience.md#redemption-through-a-direct-link), but it gives the user all the information they need to decide if they accept your invite or not. It also gives them a link they can always refer to in the future when they need to return to your resources.
+Invitation emails are a critical component to bring partners on board as B2B collaboration users in Azure AD. It’s [not required that you send an email to invite someone using B2B collaboration](redemption-experience.md#redemption-process-through-a-direct-link), but it gives the user all the information they need to decide if they accept your invite or not. It also gives them a link they can always refer to in the future when they need to return to your resources.
![Screenshot showing the B2B invitation email](media/invitation-email-elements/invitation-email.png)
active-directory One Time Passcode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/one-time-passcode.md
The email one-time passcode feature is a way to authenticate B2B collaboration u
## Sign-in endpoints
-Email one-time passcode guest users can now sign in to your multi-tenant or Microsoft first-party apps by using a [common endpoint](redemption-experience.md#redemption-and-sign-in-through-a-common-endpoint) (in other words, a general app URL that doesn't include your tenant context). During the sign-in process, the guest user chooses **Sign-in options**, and then selects **Sign in to an organization**. The user then types the name of your organization and continues signing in using one-time passcode.
+Email one-time passcode guest users can now sign in to your multi-tenant or Microsoft first-party apps by using a [common endpoint](redemption-experience.md#redemption-process-and-sign-in-through-a-common-endpoint) (in other words, a general app URL that doesn't include your tenant context). During the sign-in process, the guest user chooses **Sign-in options**, and then selects **Sign in to an organization**. The user then types the name of your organization and continues signing in using one-time passcode.
Email one-time passcode guest users can also use application endpoints that include your tenant information, for example:
The email one-time passcode feature is now turned on by default for all new tena
**What happens to my existing guest users if I enable email one-time passcode?**
-Your existing guest users won't be affected if you enable email one-time passcode, as your existing users are already past the point of redemption. Enabling email one-time passcode will only affect future redemption activities where new guest users are redeeming into the tenant.
+Your existing guest users won't be affected if you enable email one-time passcode, as your existing users are already past the point of redemption. Enabling email one-time passcode will only affect future redemption process activities where new guest users are redeeming into the tenant.
**What is the user experience when email one-time passcode is disabled?**
If you’ve disabled the email one-time passcode feature, the user is prompted t
Also, when email one-time passcode is disabled, users might see a sign-in error when they're redeeming a direct application link and they weren't added to your directory in advance.
-For more information about the different redemption pathways, see [B2B collaboration invitation redemption](redemption-experience.md).
+For more information about the different redemption process pathways, see [B2B collaboration invitation redemption](redemption-experience.md).
**Will the “No account? Create one!” option for self-service sign-up go away?**
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
When you add a guest user to your directory, the guest user account has a consen
> - **Starting September 30, 2021**, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If your apps authenticate users with an embedded web-view and you're using Google federation with [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md) or Azure AD B2B for [external user invitations](google-federation.md) or [self-service sign-up](identity-providers.md), Google Gmail users won't be able to authenticate. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support). > - The [email one-time passcode feature](one-time-passcode.md) is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. When this feature is turned off, the fallback authentication method is to prompt invitees to create a Microsoft account.
-## Redemption and sign-in through a common endpoint
+## Redemption process and sign-in through a common endpoint
Guest users can now sign in to your multi-tenant or Microsoft first-party apps through a common endpoint (URL), for example `https://myapps.microsoft.com`. Previously, a common URL would redirect a guest user to their home tenant instead of your resource tenant for authentication, so a tenant-specific link was required (for example `https://myapps.microsoft.com/?tenantid=<tenant id>`). Now the guest user can go to the application's common URL, choose **Sign-in options**, and then select **Sign in to an organization**. The user then types the domain name of your organization.
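The two URL forms mentioned above (the common endpoint versus the tenant-specific endpoint) can be sketched with a hypothetical helper; `myapps_url` is illustrative only and not part of any Microsoft SDK:

```python
from typing import Optional

# Hypothetical helper contrasting the two endpoint forms described above.
def myapps_url(tenant_id: Optional[str] = None) -> str:
    base = "https://myapps.microsoft.com"
    if tenant_id is None:
        # Common endpoint: the guest picks "Sign in to an organization"
        # and types the organization's domain name during sign-in.
        return base
    # Tenant-specific endpoint: the tenant context is in the URL itself.
    return f"{base}/?tenantid={tenant_id}"
```

With the common-endpoint improvement, the tenant-specific form is no longer required for guest sign-in, though it continues to work.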
Guest users can now sign in to your multi-tenant or Microsoft first-party apps t
The user is then redirected to your tenant-specific endpoint, where they can either sign in with their email address or select an identity provider you've configured.
-## Redemption through a direct link
+## Redemption process through a direct link
As an alternative to the invitation email or an application's common URL, you can give a guest a direct link to your app or portal. You first need to add the guest user to your directory via the [Azure portal](./b2b-quickstart-add-guest-users-portal.md) or [PowerShell](./b2b-quickstart-invite-powershell.md). Then you can use any of the [customizable ways to deploy applications to users](../manage-apps/end-user-experiences.md), including direct sign-on links. When a guest uses a direct link instead of the invitation email, they’ll still be guided through the first-time consent experience.
There are some cases where the invitation email is recommended over a direct lin
- Sometimes the invited user object may not have an email address because of a conflict with a contact object (for example, an Outlook contact object). In this case, the user must select the redemption URL in the invitation email. - The user may sign in with an alias of the email address that was invited. (An alias is another email address associated with an email account.) In this case, the user must select the redemption URL in the invitation email.
-## Redemption through the invitation email
+## Redemption process through the invitation email
When you add a guest user to your directory by [using the Azure portal](./b2b-quickstart-add-guest-users-portal.md), an invitation email is sent to the guest in the process. You can also choose to send invitation emails when you’re [using PowerShell](./b2b-quickstart-invite-powershell.md) to add guest users to your directory. Here’s a description of the guest’s experience when they redeem the link in the email.
When you add a guest user to your directory by [using the Azure portal](./b2b-qu
3. The guest will use their own credentials to sign in to your directory. If the guest doesn't have an account that can be federated to your directory and the [email one-time passcode (OTP)](./one-time-passcode.md) feature isn't enabled, the guest is prompted to create a personal [MSA](https://support.microsoft.com/help/4026324/microsoft-account-how-to-create). Refer to the [invitation redemption flow](#invitation-redemption-flow) for details. 4. The guest is guided through the [consent experience](#consent-experience-for-the-guest) described below.
-## Redemption limitation with conflicting Contact object
+## Redemption process limitation with conflicting Contact object
Sometimes the invited external guest user's email may conflict with an existing [Contact object](/graph/api/resources/contact), resulting in the guest user being created without a proxyAddress. This is a known limitation that prevents guest users from redeeming an invitation through a direct link using [SAML/WS-Fed IdP](./direct-federation.md), [MSAs](./microsoft-account.md), [Google Federation](./google-federation.md), or [Email One-Time Passcode](./one-time-passcode.md) accounts. However, the following scenarios should continue to work: - Redeeming an invitation through an invitation email redemption link using [SAML/WS-Fed IdP](./direct-federation.md), [Email One-Time Passcode](./one-time-passcode.md), and [Google Federation](./google-federation.md) accounts.-- Signing back into an application after redemption using [SAML/WS-Fed IdP](./direct-federation.md) and [Google Federation](./google-federation.md) accounts.
+- Signing back into an application after the redemption process using [SAML/WS-Fed IdP](./direct-federation.md) and [Google Federation](./google-federation.md) accounts.
To unblock users who can't redeem an invitation due to a conflicting [Contact object](/graph/api/resources/contact), follow these steps: 1. Delete the conflicting Contact object.
When a user selects the **Accept invitation** link in an [invitation email](invi
:::image type="content" source="media/redemption-experience/invitation-redemption.png" alt-text="Screenshot showing the redemption flow diagram.":::
-1. Azure AD performs user-based discovery to determine if the user already exists in a managed Azure AD tenant. (Unmanaged Azure AD accounts can no longer be used for redemption.) If the user’s User Principal Name ([UPN](../hybrid/plan-connect-userprincipalname.md#what-is-userprincipalname)) matches both an existing Azure AD account and a personal MSA, the user is prompted to choose which account they want to redeem with.
+1. Azure AD performs user-based discovery to determine if the user already exists in a managed Azure AD tenant. (Unmanaged Azure AD accounts can no longer be used for the redemption flow.) If the user’s User Principal Name ([UPN](../hybrid/plan-connect-userprincipalname.md#what-is-userprincipalname)) matches both an existing Azure AD account and a personal MSA, the user is prompted to choose which account they want to redeem with.
2. If an admin has enabled [SAML/WS-Fed IdP federation](direct-federation.md), Azure AD checks if the user’s domain suffix matches the domain of a configured SAML/WS-Fed identity provider and redirects the user to the pre-configured identity provider.
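The discovery and routing steps above, together with the fallback behavior described elsewhere in the invitation redemption flow, can be modeled as pure routing logic. The sketch below is illustrative only; the function name, parameters, and outcome labels are invented for clarity and are not Azure AD's implementation:

```python
# Illustrative model of the redemption routing described above (not Azure AD code).
# All names here are hypothetical simplifications.
def route_redemption(upn: str,
                     federated_domains: set[str],
                     has_managed_aad_account: bool,
                     has_personal_msa: bool,
                     otp_enabled: bool) -> str:
    domain = upn.rsplit("@", 1)[-1].lower()
    # Step 1: user-based discovery. A UPN matching both a managed Azure AD
    # account and a personal MSA prompts the user to choose.
    if has_managed_aad_account and has_personal_msa:
        return "prompt-account-choice"
    if has_managed_aad_account:
        return "managed-azure-ad"
    # Step 2: domain-suffix match against configured SAML/WS-Fed IdPs
    # redirects to the pre-configured identity provider.
    if domain in federated_domains:
        return "saml-ws-fed-idp"
    # Fallback: email one-time passcode if enabled, else create a personal MSA.
    return "email-one-time-passcode" if otp_enabled else "create-personal-msa"
```

The model only captures the ordering of the checks as the text describes them; the real service performs additional discovery steps not shown here.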
When a guest signs in to a resource in a partner organization for the first time
In your directory, the guest's **Invitation accepted** value changes to **Yes**. If an MSA was created, the guest’s **Source** shows **Microsoft Account**. For more information about guest user account properties, see [Properties of an Azure AD B2B collaboration user](user-properties.md). If you see an error that requires admin consent while accessing an application, see [how to grant admin consent to apps](../develop/v2-admin-consent.md).
-### Automatic redemption setting
+### Automatic redemption process setting
You might want to automatically redeem invitations so users don't have to accept the consent prompt when they're added to another tenant for B2B collaboration. When configured, a notification email is sent to the B2B collaboration user that requires no action from the user. Users are sent the notification email directly and they don't need to access the tenant first before they receive the email. The following shows an example notification email if you automatically redeem invitations in both tenants.
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
If you're notified that you don't have permissions to invite users, verify that
If you've recently modified these settings or assigned the Guest Inviter role to a user, there might be a 15-60 minute delay before the changes take effect.
-## The user that I invited is receiving an error during redemption
+## The user that I invited is receiving an error during the redemption process
Common errors include:
This happens when another object in the directory has the same invited email add
## The guest user object doesn't have a proxyAddress
-Sometimes, the external guest user you're inviting conflicts with an existing [Contact object](/graph/api/resources/contact). When this occurs, the guest user is created without a proxyAddress. This means that the user won't be able to redeem this account using [just-in-time redemption](redemption-experience.md#redemption-through-a-direct-link) or [email one-time passcode authentication](one-time-passcode.md#user-experience-for-one-time-passcode-guest-users). Also, if the contact object you're synchronizing from on-premises AD conflicts with an existing guest user, the conflicting proxyAddress is removed from the existing guest user.
+Sometimes, the external guest user you're inviting conflicts with an existing [Contact object](/graph/api/resources/contact). When this occurs, the guest user is created without a proxyAddress. This means that the user won't be able to redeem this account using [just-in-time redemption](redemption-experience.md#redemption-process-through-a-direct-link) or [email one-time passcode authentication](one-time-passcode.md#user-experience-for-one-time-passcode-guest-users). Also, if the contact object you're synchronizing from on-premises AD conflicts with an existing guest user, the conflicting proxyAddress is removed from the existing guest user.
## How does ‘\#’, which isn't normally a valid character, sync with Azure AD?
active-directory Custom Security Attributes Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-overview.md
Custom security attributes in Azure Active Directory (Azure AD) are business-spe
## Why use custom security attributes? -- Extend user profiles, such as add Employee Hire Date and Hourly Salary to all my employees.
+- Extend user profiles, such as adding Hourly Salary to all my employees.
- Ensure only administrators can see the Hourly Salary attribute in my employees' profiles. - Categorize hundreds or thousands of applications to easily create a filterable inventory for auditing. - Grant users access to the Azure Storage blobs belonging to a project.
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-prerequisites.md
We recommend that you harden your Azure AD Connect server to decrease the securi
- Create a [dedicated account for all personnel with privileged access](/windows-server/identity/securing-privileged-access/securing-privileged-access). Administrators shouldn't be browsing the web, checking their email, and doing day-to-day productivity tasks with highly privileged accounts. - Follow the guidance provided in [Securing privileged access](/windows-server/identity/securing-privileged-access/securing-privileged-access). - Deny use of NTLM authentication with the AADConnect server. Here are some ways to do this: [Restricting NTLM on the AADConnect Server](/windows/security/threat-protection/security-policy-settings/network-security-restrict-ntlm-outgoing-ntlm-traffic-to-remote-servers) and [Restricting NTLM on a domain](/windows/security/threat-protection/security-policy-settings/network-security-restrict-ntlm-ntlm-authentication-in-this-domain)-- Ensure every machine has a unique local administrator password. For more information, see [Local Administrator Password Solution (LAPS)](https://support.microsoft.com/help/3062591/microsoft-security-advisory-local-administrator-password-solution-laps) can configure unique random passwords on each workstation and server store them in Active Directory protected by an ACL. Only eligible authorized users can read or request the reset of these local administrator account passwords. You can obtain the LAPS for use on workstations and servers from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=46899). Additional guidance for operating an environment with LAPS and privileged access workstations (PAWs) can be found in [Operational standards based on clean source principle](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material#operational-standards-based-on-clean-source-principle).
+- Ensure every machine has a unique local administrator password. [Local Administrator Password Solution (Windows LAPS)](/windows-server/identity/laps/laps-overview) can configure unique random passwords on each workstation and server and store them in Active Directory, protected by an ACL. Only eligible authorized users can read or request the reset of these local administrator account passwords. Additional guidance for operating an environment with Windows LAPS and privileged access workstations (PAWs) can be found in [Operational standards based on clean source principle](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material#operational-standards-based-on-clean-source-principle).
- Implement dedicated [privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/) for all personnel with privileged access to your organization's information systems. - Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment. - Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) to set up alerts to monitor changes to the trust established between your Idp and Azure AD.
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization.md
If a user is in the scope of password hash synchronization, by default the cloud
You can continue to sign in to your cloud services by using a synchronized password that is expired in your on-premises environment. Your cloud password is updated the next time you change the password in the on-premises environment.
-##### EnforceCloudPasswordPolicyForPasswordSyncedUsers
+##### CloudPasswordPolicyForPasswordSyncedUsersEnabled
-If there are synchronized users that only interact with Azure AD integrated services and must also comply with a password expiration policy, you can force them to comply with your Azure AD password expiration policy by enabling the *EnforceCloudPasswordPolicyForPasswordSyncedUsers* feature.
+If there are synchronized users that only interact with Azure AD integrated services and must also comply with a password expiration policy, you can force them to comply with your Azure AD password expiration policy by enabling the *CloudPasswordPolicyForPasswordSyncedUsersEnabled* feature (in the deprecated MSOnline PowerShell module, this feature was called *EnforceCloudPasswordPolicyForPasswordSyncedUsers*).
-When *EnforceCloudPasswordPolicyForPasswordSyncedUsers* is disabled (which is the default setting), Azure AD Connect sets the PasswordPolicies attribute of synchronized users to "DisablePasswordExpiration". This is done every time a user's password is synchronized and instructs Azure AD to ignore the cloud password expiration policy for that user. You can check the value of the attribute using the Azure AD PowerShell module with the following command:
+When *CloudPasswordPolicyForPasswordSyncedUsersEnabled* is disabled (which is the default setting), Azure AD Connect sets the PasswordPolicies attribute of synchronized users to "DisablePasswordExpiration". This is done every time a user's password is synchronized and instructs Azure AD to ignore the cloud password expiration policy for that user. You can check the value of the attribute using the Microsoft Graph PowerShell module with the following command:
-`(Get-AzureADUser -objectID <User Object ID>).passwordpolicies`
+`(Get-MgUser -UserId <User Object ID> -Property PasswordPolicies).PasswordPolicies`
-To enable the EnforceCloudPasswordPolicyForPasswordSyncedUsers feature, run the following command using the MSOnline PowerShell module as shown below. You would have to type yes for the Enable parameter as shown below:
+To enable the CloudPasswordPolicyForPasswordSyncedUsersEnabled feature, run the following commands using the Microsoft Graph PowerShell module:
```
-Set-MsolDirSyncFeature -Feature EnforceCloudPasswordPolicyForPasswordSyncedUsers
-cmdlet Set-MsolDirSyncFeature at command pipeline position 1
-Supply values for the following parameters:
-Enable: yes
-Confirm
-Continue with this operation?
-[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y
+$OnPremSync = Get-MgDirectoryOnPremiseSynchronization
+$OnPremSync.Features.CloudPasswordPolicyForPasswordSyncedUsersEnabled = $true
+
+Update-MgDirectoryOnPremiseSynchronization `
+ -OnPremisesDirectorySynchronizationId $OnPremSync.Id `
+ -Features $OnPremSync.Features
```

Once enabled, Azure AD doesn't immediately remove the `DisablePasswordExpiration` value from the PasswordPolicies attribute of each synchronized user. Instead, the value is removed during the next password hash sync for each user, upon their next password change in on-premises AD.
-After the *EnforceCloudPasswordPolicyForPasswordSyncedUsers* feature is enabled, new users are provisioned without a PasswordPolicies value.
+After the *CloudPasswordPolicyForPasswordSyncedUsersEnabled* feature is enabled, new users are provisioned without a PasswordPolicies value.
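The stamping behavior described above can be summarized as a small pure function. This is an illustrative model only (the function and parameter names are invented), not Azure AD Connect code:

```python
# Illustrative model (not Azure AD Connect code) of how the feature setting
# affects a synced user's PasswordPolicies attribute on each password hash sync.
from typing import Optional

def password_policies_after_sync(current: Optional[str],
                                 feature_enabled: bool) -> Optional[str]:
    if not feature_enabled:
        # Default: every sync stamps DisablePasswordExpiration, so Azure AD
        # ignores the cloud password expiration policy for the user.
        return "DisablePasswordExpiration"
    # Feature enabled: the stamped value is removed on the user's next
    # password sync; new users are provisioned without a value at all.
    return None if current == "DisablePasswordExpiration" else current
```

Note how enabling the feature never rewrites users in bulk; each user's attribute changes only on that user's next password hash sync.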
>[!TIP]
->It is recommended to enable *EnforceCloudPasswordPolicyForPasswordSyncedUsers* prior to enabling password hash sync, so that the initial sync of password hashes does not add the `DisablePasswordExpiration` value to the PasswordPolicies attribute for the users.
+>It is recommended to enable *CloudPasswordPolicyForPasswordSyncedUsersEnabled* prior to enabling password hash sync, so that the initial sync of password hashes does not add the `DisablePasswordExpiration` value to the PasswordPolicies attribute for the users.
-The default Azure AD password policy requires users to change their passwords every 90 days. If your policy in AD is also 90 days, the two policies should match. However, if the AD policy is not 90 days, you can update the Azure AD password policy to match by using the Set-MsolPasswordPolicy PowerShell command.
+The default Azure AD password policy requires users to change their passwords every 90 days. If your policy in AD is also 90 days, the two policies should match. However, if the AD policy is not 90 days, you can update the Azure AD password policy to match by using the Update-MgDomain PowerShell command (previously: Set-MsolPasswordPolicy).
Azure AD supports a separate password expiration policy per registered domain. Caveat: If there are synchronized accounts that need to have non-expiring passwords in Azure AD, you must explicitly add the `DisablePasswordExpiration` value to the PasswordPolicies attribute of the user object in Azure AD. You can do this by running the following command.
-`Set-AzureADUser -ObjectID <User Object ID> -PasswordPolicies "DisablePasswordExpiration"`
+`Update-MgUser -UserId <User Object ID> -PasswordPolicies "DisablePasswordExpiration"`
> [!NOTE] > For hybrid users that have a PasswordPolicies value set to `DisablePasswordExpiration`, this value switches to `None` after a password change is executed on-premises. > [!NOTE]
-> The Set-MsolPasswordPolicy PowerShell command will not work on federated domains.
+> Neither the Update-MgDomain command nor the deprecated Set-MsolPasswordPolicy command works on federated domains.
> [!NOTE]
-> The Set-AzureADUser PowerShell command will not work on federated domains.
+> Neither the Update-MgUser command nor the deprecated Set-AzureADUser command works on federated domains.
#### Synchronizing temporary passwords and "Force Password Change on Next Logon"
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-staged-rollout.md
The following scenarios are not supported for Staged Rollout:
- When you first add a security group for Staged Rollout, you're limited to 200 users to avoid a UX time-out. After you've added the group, you can add more users directly to it, as required. -- While users are in Staged Rollout with Password Hash Synchronization (PHS), by default no password expiration is applied. Password expiration can be applied by enabling "EnforceCloudPasswordPolicyForPasswordSyncedUsers". When "EnforceCloudPasswordPolicyForPasswordSyncedUsers" is enabled, password expiration policy is set to 90 days from the time password was set on-prem with no option to customize it. Programmatically updating PasswordPolicies attribute is not supported while users are in Staged Rollout. To learn how to set 'EnforceCloudPasswordPolicyForPasswordSyncedUsers' see [Password expiration policy](./how-to-connect-password-hash-synchronization.md#enforcecloudpasswordpolicyforpasswordsyncedusers).
+- While users are in Staged Rollout with Password Hash Synchronization (PHS), by default no password expiration is applied. Password expiration can be applied by enabling "CloudPasswordPolicyForPasswordSyncedUsersEnabled". When "CloudPasswordPolicyForPasswordSyncedUsersEnabled" is enabled, password expiration policy is set to 90 days from the time password was set on-prem with no option to customize it. Programmatically updating PasswordPolicies attribute is not supported while users are in Staged Rollout. To learn how to set 'CloudPasswordPolicyForPasswordSyncedUsersEnabled' see [Password expiration policy](./how-to-connect-password-hash-synchronization.md#cloudpasswordpolicyforpasswordsyncedusersenabled).
- Windows 10 Hybrid Join or Azure AD Join primary refresh token acquisition for Windows 10 version older than 1903. This scenario will fall back to the WS-Trust endpoint of the federation server, even if the user signing in is in scope of Staged Rollout.
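The fixed 90-day window described in the bullet above can be expressed as a simple date calculation. The helper below is an illustrative sketch of the documented policy, not an API:

```python
from datetime import datetime, timedelta

# Fixed value per the article; there is no option to customize it.
CLOUD_PASSWORD_MAX_AGE_DAYS = 90

def cloud_password_expiry(on_prem_password_set: datetime) -> datetime:
    """When the cloud password policy is enabled for synced users,
    the password expires 90 days after it was set on-premises."""
    return on_prem_password_set + timedelta(days=CLOUD_PASSWORD_MAX_AGE_DAYS)

print(cloud_password_expiry(datetime(2023, 7, 1)))  # 2023-09-29 00:00:00
```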
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history.md
Last updated 7/6/2022
Required permissions | For permissions required to apply an update, see [Azure A
> We will begin retiring past versions of Azure AD Connect Sync 2.x 12 months from the date they are superseded by a newer version.
> This policy will go into effect on 15 March 2023, when we will retire all versions that are superseded by a newer version on 15 March 2022.
>
-> The following versions will retire on 15 March 2023:
->
-> - 2.0.89.0
-> - 2.0.88.0
-> - 2.0.28.0
-> - 2.0.25.1
-> - 2.0.10.0
-> - 2.0.9.0
-> - 2.0.8.0
-> - 2.0.3.0
+> Currently only builds 2.1.16.0 (released August 8, 2022) or later are supported.
>
> If you are not already using the latest release version of Azure AD Connect Sync, you should upgrade your Azure AD Connect Sync software before that date.
->
+ If you run a retired version of Azure AD Connect, it might unexpectedly stop working. You also might not have the latest security fixes, performance improvements, troubleshooting and diagnostic tools, and service enhancements. If you require support, we might not be able to provide you with the level of service your organization needs.
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-connectivity.md
Of these URLs, the URLs listed in the following table are the absolute bare mini
| `*.microsoftonline.com` |HTTPS/443 |Used to configure your Azure AD directory and import/export data. |
| `*.crl3.digicert.com` |HTTP/80 |Used to verify certificates. |
| `*.crl4.digicert.com` |HTTP/80 |Used to verify certificates. |
+| `*.digicert.cn` |HTTP/80 |Used to verify certificates. |
| `*.ocsp.digicert.com` |HTTP/80 |Used to verify certificates. |
| `*.www.d-trust.net` |HTTP/80 |Used to verify certificates. |
| `*.root-c3-ca2-2009.ocsp.d-trust.net` |HTTP/80 |Used to verify certificates. |
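When auditing proxy or firewall rules against the URL table above, the wildcard entries can be checked with shell-style pattern matching. The helper below is an illustrative sketch only (the function name and logic are not from the article):

```python
from fnmatch import fnmatch

# Wildcard patterns from the minimal-connectivity table (excerpt).
ALLOWLIST = [
    "*.microsoftonline.com",
    "*.crl3.digicert.com",
    "*.crl4.digicert.com",
    "*.digicert.cn",
    "*.ocsp.digicert.com",
]

def is_allowed(hostname: str) -> bool:
    """Return True if the hostname matches any wildcard pattern."""
    return any(fnmatch(hostname, pattern) for pattern in ALLOWLIST)

print(is_allowed("login.microsoftonline.com"))  # True
print(is_allowed("example.com"))                # False
```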
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
Previously updated : 01/06/2023 Last updated : 07/18/2023
Policies allow for excluding users such as your [emergency access or break-glass
## Enable policies
-Organizations can choose to deploy risk-based policies in Conditional Access using the steps outlined below or using the [Conditional Access templates (Preview)](../conditional-access/concept-conditional-access-policy-common.md#conditional-access-templates-preview).
+Organizations can choose to deploy risk-based policies in Conditional Access using the steps outlined below or using the [Conditional Access templates](../conditional-access/concept-conditional-access-policy-common.md#conditional-access-templates).
Before organizations enable remediation policies, they may want to [investigate](howto-identity-protection-investigate-risk.md) and [remediate](howto-identity-protection-remediate-unblock.md) any active risks.
active-directory End User Experiences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/end-user-experiences.md
For more information on Azure AD My Apps, see the [introduction to My Apps](http
## Microsoft 365 application launcher
-For organizations that have deployed Microsoft 365, applications assigned to users through Azure AD will also appear in the Office 365 portal at [https://portal.office.com/myapps](https://portal.office.com/myapps). It makes it convenient for users in an organization to launch their apps without using a second portal. Microsoft 365 application launcher is the recommended app launching solution for organizations using Microsoft 365.
+Microsoft 365 application launcher is the recommended app launching solution for organizations using Microsoft 365.
For more information about the Office 365 application launcher, see [Have your app appear in the Office 365 app launcher](/previous-versions/office/office-365-api/).
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Previously updated : 07/05/2023 Last updated : 07/26/2023
Assign the Yammer Administrator role to users who need to do the following tasks
- View announcements in the Message center, but not security announcements
- View service health
-[Learn more](/yammer/manage-yammer-users/manage-yammer-admins)
+[Learn more](/Viva/engage/eac-key-admin-roles-permissions)
> [!div class="mx-tableFixed"]
> | Actions | Description |
All custom roles | | | :heavy_check_mark: | :heavy_check_mark:
- [Assign Azure AD roles to groups](groups-assign-role.md) - [Understand the different roles](../../role-based-access-control/rbac-and-directory-admin-roles.md)-- [Assign a user as an administrator of an Azure subscription](../../role-based-access-control/role-assignments-portal-subscription-admin.md)
+- [Assign a user as an administrator of an Azure subscription](../../role-based-access-control/role-assignments-portal-subscription-admin.md)
active-directory 10000Ftplans Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/10000ftplans-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
c. In the **Sign-on URL** text box, type the URL: `https://rm.smartsheet.com`
- > [!NOTE]
- > The value for **Identifier** is different if you have a custom domain. Contact [10,000ft Plans Client support team](https://www.10000ft.com/plans/support) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy icon to copy **App Federation Metadata Url**. Save it on your computer.

   ![Screenshot of SAML Signing Certificate, with copy icon highlighted](common/copy-metadataurl.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
In this section, a user called Britta Simon is created in 10,000ft Plans. 10,000ft Plans supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in 10,000ft Plans, a new one is created after authentication.
-> [!NOTE]
-> If you need to create a user manually, you need to contact the [10,000ft Plans Client support team](https://www.10000ft.com/plans/support).
-
## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
active-directory Figma Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/figma-provisioning-tutorial.md
# Tutorial: Configure Figma for automatic user provisioning
-The objective of this tutorial is to demonstrate the steps to be performed in Figma and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to Figma.
+The objective of this tutorial is to demonstrate the steps to be performed in Figma and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision user accounts to Figma.
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
For more information on how to read the Azure AD provisioning logs, see [Reporti
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Mural Identity Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mural-identity-tutorial.md
e. In the **Claim mapping** section, fill in the following fields.
f. Click **Test single sign-on** to test the configuration and **Save** it.

> [!NOTE]
-> For more information on how to configure the SSO at MURAL, please follow [this](https://support.mural.co/articles/6224385-mural-s-azure-ad-integration) support page.
+> For more information on how to configure the SSO at MURAL, please follow [this](https://support.mural.co/s/article/configure-sso-with-mural-and-azure-ad) support page.
### Create Mural Identity test user
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure Mural Identity you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure Mural Identity you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Skydeskemail Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skydeskemail-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
In the **Sign-on URL** text box, type a URL using the following pattern: `https://mail.skydesk.jp/portal/<companyname>`
- > [!NOTE]
- > The value is not real. Update the value with the actual Sign-On URL. Contact [SkyDesk Email Client support team](https://www.skydesk.jp/apps/support/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.

   ![The Certificate download link](common/certificatebase64.png)
Click on **User Access** from the left panel in SkyDesk Email and then enter you
![Screenshot shows User Access selected from Control Panel.](./media/skydeskemail-tutorial/create-users.png)
-> [!NOTE]
-> If you need to create bulk users, you need to contact the [SkyDesk Email Client support team](https://www.skydesk.jp/apps/support/).
-
## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
active-directory Tyeexpress Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tyeexpress-tutorial.md
To configure Azure AD single sign-on with T&E Express, perform the following ste
b. In the **Reply URL** text box, type a URL using the following pattern: `https://<domain>.tyeexpress.com/authorize/samlConsume.aspx`
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Here we suggest you to use the unique value of string in the Identifier. Contact [T&E Express Client support team](https://www.tyeexpress.com/contacto.aspx) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.

   ![The Certificate download link](common/metadataxml.png)
When you click the T&E Express tile in the Access Panel, you should be automatic
- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) -- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-create-trust-user-assigned-managed-identity.md
Use the following values from your Azure AD Managed Identity for your GitHub wor
[![Screenshot that demonstrates how to copy the managed identity ID and subscription ID from Azure portal.](./media/workload-identity-federation-create-trust-user-assigned-managed-identity/copy-managed-identity-id.png)](./media/workload-identity-federation-create-trust-user-assigned-managed-identity/copy-managed-identity-id.png#lightbox)

-- `AZURE_TENANT_ID` the **Directory (tenant) ID**. Learn [how to find your Azure Active Directory tenant ID](../fundamentals/active-directory-how-to-find-tenant.md).
+- `AZURE_TENANT_ID` the **Directory (tenant) ID**. Learn [how to find your Azure Active Directory tenant ID](/azure/active-directory-b2c/tenant-management-read-tenant-name).
#### Entity type examples
ai-services Concept Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-face-detection.md
The Detection_03 model currently has the most accurate landmark detection. The e
Attributes are a set of features that can optionally be detected by the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API. The following attributes can be detected:

* **Accessories**. Indicates whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory.
-* **Age**. The estimated age in years of a particular face.
* **Blur**. The blurriness of the face in the image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
-* **Emotion**. A list of emotions with their detection confidence for the given face. Confidence scores are normalized, and the scores across all emotions add up to one. The emotions returned are happiness, sadness, neutral, anger, contempt, disgust, surprise, and fear.
* **Exposure**. The exposure of the face in the image. This attribute returns a value between zero and one and an informal rating of underExposure, goodExposure, or overExposure.
-* **Facial hair**. The estimated facial hair presence and the length for the given face.
-* **Gender**. The estimated gender of the given face. Possible values are male, female, and genderless.
* **Glasses**. Whether the given face has eyeglasses. Possible values are NoGlasses, ReadingGlasses, Sunglasses, and Swimming Goggles.
-* **Hair**. The hair type of the face. This attribute shows whether the hair is visible, whether baldness is detected, and what hair colors are detected.
* **Head pose**. The face's orientation in 3D space. This attribute is described by the roll, yaw, and pitch angles in degrees, which are defined according to the [right-hand rule](https://en.wikipedia.org/wiki/Right-hand_rule). The order of three angles is roll-yaw-pitch, and each angle's value range is from -180 degrees to 180 degrees. 3D orientation of the face is estimated by the roll, yaw, and pitch angles in order. See the following diagram for angle mappings:

  ![A head with the pitch, roll, and yaw axes labeled](./media/headpose.1.jpg)

  For more information on how to use these values, see the [Head pose how-to guide](./how-to/use-headpose.md).
-* **Makeup**. Indicates whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup.
* **Mask**. Indicates whether the face is wearing a mask. This attribute returns a possible mask type, and a Boolean value to indicate whether nose and mouth are covered.
* **Noise**. The visual noise detected in the face image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
* **Occlusion**. Indicates whether there are objects blocking parts of the face. This attribute returns a Boolean value for eyeOccluded, foreheadOccluded, and mouthOccluded.
-* **Smile**. The smile expression of the given face. This value is between zero for no smile and one for a clear smile.
* **QualityForRecognition**. The overall image quality, indicating whether the image is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. Only "high" quality images are recommended for person enrollment, and quality at or above "medium" is recommended for identification scenarios.

>[!NOTE]
> The availability of each attribute depends on the detection model specified. The QualityForRecognition attribute also depends on the recognition model, as it is currently only available when using a combination of detection model detection_01 or detection_03, and recognition model recognition_03 or recognition_04.
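As an illustration of how the quality ratings above might be consumed, the sketch below filters a Detect response for enrollment- and identification-quality faces. The response shape is abbreviated and the helper names are hypothetical:

```python
# Abbreviated shape of a Face - Detect response (illustrative only).
faces = [
    {"faceId": "a1", "faceAttributes": {"qualityForRecognition": "high"}},
    {"faceId": "b2", "faceAttributes": {"qualityForRecognition": "medium"}},
    {"faceId": "c3", "faceAttributes": {"qualityForRecognition": "low"}},
]

QUALITY_RANK = {"low": 0, "medium": 1, "high": 2}

def faces_for_enrollment(faces):
    """Only 'high' quality is recommended for person enrollment."""
    return [f["faceId"] for f in faces
            if f["faceAttributes"]["qualityForRecognition"] == "high"]

def faces_for_identification(faces):
    """Quality at or above 'medium' is recommended for identification."""
    return [f["faceId"] for f in faces
            if QUALITY_RANK[f["faceAttributes"]["qualityForRecognition"]]
               >= QUALITY_RANK["medium"]]

print(faces_for_enrollment(faces))      # ['a1']
print(faces_for_identification(faces))  # ['a1', 'b2']
```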
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/language-support.md
> For **profanity terms** detection, use the [ISO 639-3 code](http://www-01.sil.org/iso639-3/codes.asp) of the supported languages listed in this article, or leave it empty.
-| Language detection | Profanity | OCR | Auto-correction |
-| -- |-|--||
-| Arabic (Romanized) | Afrikaans | Arabic | Arabic |
-| Balinese | Albanian | Chinese (Simplified) | Danish |
-| Bengali | Amharic | Chinese (Traditional) | Dutch |
-| Buginese | Arabic | Czech | English |
-| Buhid | Armenian | Danish | Finnish |
-| Carian | Assamese | Dutch | French |
-| Chinese (Simplified) | Azerbaijani | English | Greek (modern) |
-| Chinese (Traditional) | Bangla - Bangladesh | Finnish | Italian |
-| Church (Slavic) | Bangla - India | French | Korean |
-| Coptic | Basque | German | Norwegian |
-| Czech | Belarusian | Greek (modern) | Polish |
-| Dhivehi | Bosnian - Cyrillic | Hungarian | Portuguese |
-| Dutch | Bosnian - Latin | Italian | Romanian |
-| English (Creole) | Breton [non-GeoPol] | Japanese | Russian |
-| Farsi | Bulgarian | Korean | Slovak |
-| French | Catalan | Norwegian | Spanish |
-| German | Central Kurdish | Polish | Turkish |
-| Greek | Cherokee | Portuguese | |
-| Haitian | Chinese (Simplified) | Romanian | |
-| Hebrew | Chinese (Traditional) - Hong Kong SAR | Russian | |
-| Hindi | Chinese (Traditional) - Taiwan | Serbian Cyrillic | |
-| Hmong | Croatian | Serbian Latin | |
-| Hungarian | Czech | Slovak | |
-| Italian | Danish | Spanish | |
-| Japanese | Dari | Swedish | |
-| Korean | Dutch | Turkish | |
-| Kurdish (Arabic) | English | | |
-| Kurdish (Latin) | Estonian | | |
-| Lepcha | Filipino | | |
-| Limbu | Finnish | | |
-| Lu | French | | |
-| Lycian | | |
-| Lydian | Georgian | | |
-| Mycenaean (Greek) | German | | |
-| Nko | Greek | | |
-| Norwegian (Bokmal) | Gujarati | | |
-| Norwegian (Nynorsk) | Hausa | | |
-| Old (Persian) | Hebrew | | |
-| Pashto | Hindi | | |
-| Polish | Hungarian | | |
-| Portuguese | Icelandic | | |
-| Punjabi | Igbo | | |
-| Rejang | Indonesian | | |
-| Russian | Inuktitut | | |
-| Santali | Irish | | |
-| Sasak | isiXhosa | | |
-| Saurashtra | isiZulu | | |
-| Serbian (Cyrillic) | Italian | | |
-| Serbian (Latin) | Japanese | | |
-| Sinhala | Kannada | | |
-| Slovenian | Kazakh | | |
-| Spanish | Khmer | | |
-| Swedish | K'iche | | |
-| Sylheti | Kinyarwanda | | |
-| Syriac | Kiswahili | | |
-| Tagbanwa | Konkani | | |
-| Tai (Nua) | Korean | | |
-| Tamashek | Kyrgyz | | |
-| Turkish | Lao | | |
-| Ugaritic | Latvian | | |
-| Uzbek (Cyrillic) | Lithuanian | | |
-| Uzbek (Latin) | Luxembourgish | | |
-| Vai | Macedonian | | |
-| Yi | Malay | | |
-| Zhuang, Chuang | Malayalam | | |
-| | Maltese | | |
-| | Maori | | |
-| | Marathi | | |
-| | Mongolian | | |
-| | Nepali | | |
-| | Norwegian (Bokmål) | | |
-| | Norwegian (Nynorsk) | | |
-| | Odia | | |
-| | Pashto | | |
-| | Persian | | |
-| | Polish | | |
-| | Portuguese - Brazil | | |
-| | Portuguese - Portugal | | |
-| | Pulaar | | |
-| | Punjabi | | |
-| | Punjabi (Pakistan) | | |
-| | Quechua (Peru) | | |
-| | Romanian | | |
-| | Russian | | |
-| | Serbian (Cyrillic) | | |
-| | Serbian (Cyrillic, Bosnia and Herzegovina) | | |
-| | Serbian (Latin) | | |
-| | Sesotho | | |
-| | Sesotho sa Leboa | | |
-| | Setswana | | |
-| | Sindhi | | |
-| | Sinhala | | |
-| | Slovak | | |
-| | Slovenian | | |
-| | Spanish | | |
-| | Swedish | | |
-| | Tajik | | |
-| | Tamil | | |
-| | Tatar | | |
-| | Telugu | | |
-| | Thai | | |
-| | Tigrinya | | |
-| | Turkish | | |
-| | Turkmen | | |
-| | Ukrainian | | |
-| | Urdu | | |
-| | Uyghur | | |
-| | Uzbek | | |
-| | Valencian | | |
-| | Vietnamese | | |
-| | Wolof | | |
-| | Yoruba | | |
+| Language | Language detection | Profanity | OCR | Auto-correction |
+| - |-|--|--|-|
+|Afrikaans | |✔️ | | |
+|Albanian | |✔️ | | |
+|Amharic | |✔️ | | |
+|Arabic | |✔️ |✔️ | ✔️|
+|Arabic (Romanized) |✔️ | | | |
+|Armenian | |✔️ | | |
+|Assamese | |✔️ | | |
+|Azerbaijani | |✔️ | | |
+|Bangla - Bangladesh | |✔️ | | |
+|Bangla - India | |✔️ | | |
+|Balinese |✔️ | | | |
+|Basque | |✔️ | | |
+|Belarusian | |✔️ | | |
+|Bengali | ✔️| | | |
+|Bosnian - Cyrillic | |✔️ | | |
+|Bosnian - Latin | |✔️ | | |
+|Buginese |✔️ | | | |
+|Buhid |✔️ | | | |
+|Bulgarian | |✔️ | | |
+|Breton (non-GeoPol) | |✔️ | | |
+|Carian |✔️ | | | |
+|Catalan | |✔️ | | |
+|Central Kurdish | |✔️ | | |
+|Cherokee | |✔️ | | |
+|Chinese (Simplified) |✔️ |✔️ |✔️ | |
+|Chinese (Traditional) |✔️ | |✔️ | |
+|Chinese (Traditional) - Hong Kong SAR | |✔️ | | |
+|Chinese (Traditional) - Taiwan | |✔️ | | |
+|Church (Slavic) |✔️ | | | |
+|Coptic |✔️ | | | |
+|Croatian | |✔️ | | |
+|Czech |✔️ |✔️ |✔️ | |
+|Danish | |✔️ |✔️ |✔️ |
+|Dari |✔️ |✔️ | | |
+|Dhivehi |✔️ | | | |
+|Dutch |✔️ |✔️ |✔️ |✔️ |
+|English | |✔️ |✔️ |✔️ |
+|English (Creole) |✔️ | | | |
+|Estonian | |✔️ | | |
+|Filipino | |✔️ | | |
+|Finnish | |✔️ |✔️ |✔️ |
+|French | ✔️|✔️ |✔️ |✔️ |
+|Georgian | |✔️ | | |
+|German |✔️ |✔️ |✔️ | |
+|Greek |✔️ |✔️ | | |
+|Greek (modern) | | |✔️ |✔️ |
+|Gujarati | |✔️ | | |
+|Haitian |✔️ | | | |
+|Hausa | |✔️ | | |
+|Hebrew |✔️ |✔️ | | |
+|Hindi |✔️ |✔️ | | |
+|Hmong |✔️ | | | |
+|Hungarian |✔️ |✔️ |✔️ | |
+|Icelandic | |✔️ | | |
+|Igbo | |✔️ | | |
+|Indonesian | |✔️ | | |
+|Inuktitut | |✔️ | | |
+|Irish | |✔️ | | |
+|isiXhosa | |✔️ | | |
+|isiZulu | |✔️ | | |
+|Italian |✔️ |✔️ |✔️ |✔️ |
+|Japanese |✔️ |✔️ |✔️ | |
+|Kannada | |✔️ | | |
+|Kazakh | |✔️ | | |
+|Khmer | |✔️ | | |
+|K'iche | |✔️ | | |
+|Kinyarwanda | |✔️ | | |
+|Kiswahili | |✔️ | | |
+|Konkani | |✔️ | | |
+|Korean |✔️ |✔️ |✔️ |✔️ |
+|Kurdish (Arabic) |✔️ | | | |
+|Kurdish (Latin) |✔️ | | | |
+|Kyrgyz | |✔️ | | |
+|Lao | |✔️ | | |
+|Latvian | |✔️ | | |
+|Lepcha |✔️ | | | |
+|Limbu |✔️ | | | |
+|Lithuanian | |✔️ | | |
+|Lu |✔️ | | | |
+|Luxembourgish | |✔️ | | |
+|Lycian |✔️ | | | |
+|Lydian |✔️ | | | |
+|Macedonian | |✔️ | | |
+|Malay | |✔️ | | |
+|Malayalam | |✔️ | | |
+|Maltese | |✔️ | | |
+|Maori | |✔️ | | |
+|Marathi | |✔️ | | |
+|Mongolian | |✔️ | | |
+|Mycenaean (Greek) |✔️ | | | |
+|Nepali | |✔️ | | |
+|Nko |✔️ | | | |
+|Norwegian | | |✔️ |✔️ |
+|Norwegian (Bokmal) |✔️ |✔️ | | |
+|Norwegian (Nynorsk) |✔️ |✔️ | | |
+|Odia | |✔️ | | |
+|Pashto |✔️ |✔️ | | |
+|Persian |✔️ |✔️ | | |
+|Polish |✔️ |✔️ |✔️ |✔️ |
+|Portuguese - Brazil |✔️ |✔️ |✔️ |✔️ |
+|Portuguese - Portugal |✔️ |✔️ |✔️ |✔️ |
+|Pulaar | |✔️ | | |
+|Punjabi |✔️ |✔️ | | |
+|Punjabi (Pakistan) | |✔️ | | |
+|Quechua (Peru) | |✔️ | | |
+|Rejang |✔️ | | | |
+|Romanian | |✔️ |✔️ |✔️ |
+|Russian |✔️ |✔️ |✔️ |✔️ |
+|Santali |✔️ | | | |
+|Sasak |✔️ | | | |
+|Saurashtra |✔️ | | | |
+|Serbian (Cyrillic) |✔️ |✔️ |✔️ | |
+|Serbian (Cyrillic, Bosnia and Herzegovina) | |✔️ | | |
+|Serbian (Latin) |✔️ |✔️ |✔️ | |
+|Sesotho | |✔️ | | |
+|Sesotho sa Leboa | |✔️ | | |
+|Setswana | |✔️ | | |
+|Sindhi | |✔️ | | |
+|Sinhala |✔️ |✔️ | | |
+|Slovak | |✔️ |✔️ |✔️ |
+|Slovenian |✔️ |✔️ | | |
+|Spanish |✔️ |✔️ |✔️ |✔️ |
+|Swedish |✔️ |✔️ |✔️ | |
+|Sylheti |✔️ | | | |
+|Syriac |✔️ | | | |
+|Tagbanwa |✔️ | | | |
+|Tai (Nua) |✔️ | | | |
+|Tajik | |✔️ | | |
+|Tamashek |✔️ | | | |
+|Tamil | |✔️ | | |
+|Tatar | |✔️ | | |
+|Telugu | |✔️ | | |
+|Thai | |✔️ | | |
+|Tigrinya | |✔️ | | |
+|Turkish |✔️ |✔️ |✔️ |✔️ |
+|Turkmen | |✔️ | | |
+|Ugaritic |✔️ | | | |
+|Ukrainian | |✔️ | | |
+|Urdu | |✔️ | | |
+|Uyghur | |✔️ | | |
+|Uzbek (Cyrillic) |✔️ |✔️ | | |
+|Uzbek (Latin) |✔️ |✔️ | | |
+|Valencian | |✔️ | | |
+|Vai |✔️ | | | |
+|Vietnamese | |✔️ | | |
+|Wolof | |✔️ | | |
+|Yi |✔️ | | | |
+|Yoruba | |✔️ | | |
+|Zhuang, Chuang |✔️ | | | |
+
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available wit
Previously updated : 07/21/2023 Last updated : 07/25/2023
The DALL-E models, currently in preview, generate images from text prompts that
## Model summary table and region availability

> [!IMPORTANT]
-> South Central US and East US are temporarily unavailable for creating new resources and deployments due to high demand.
+> Due to high demand:
+>
+> - South Central US is temporarily unavailable for creating new resources and deployments.
+> - East US is temporarily unavailable for new deployments of GPT-4 version 0314 models.
### GPT-4 models
GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can als
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
| --- | --- | --- | --- | --- |
| `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo` (0613) | East US, France Central, Japan East, North Central US, UK South | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo-16k` (0613) | East US, France Central, Japan East, North Central US, UK South | N/A | 16,384 | Sep 2021 |
+| `gpt-35-turbo` (0613) | Canada East, East US, France Central, Japan East, North Central US, UK South | N/A | 4,096 | Sep 2021 |
+| `gpt-35-turbo-16k` (0613) | Canada East, East US, France Central, Japan East, North Central US, UK South | N/A | 16,384 | Sep 2021 |
<sup>1</sup> Version `0301` of gpt-35-turbo will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior.
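One practical use of the max-request limits in the table above is routing between the 4,096- and 16,384-token variants. The helper below is a sketch, not part of any Azure OpenAI SDK, and assumes token counts are computed elsewhere:

```python
# Max request (tokens) per the model table above.
MODEL_LIMITS = {
    "gpt-35-turbo": 4096,
    "gpt-35-turbo-16k": 16384,
}

def pick_chat_model(prompt_tokens: int, completion_tokens: int) -> str:
    """Route to the smallest gpt-35-turbo variant whose max-request
    limit covers prompt plus requested completion tokens."""
    needed = prompt_tokens + completion_tokens
    for model, limit in sorted(MODEL_LIMITS.items(), key=lambda kv: kv[1]):
        if needed <= limit:
            return model
    raise ValueError(f"request of {needed} tokens exceeds all model limits")

print(pick_chat_model(3000, 500))   # gpt-35-turbo
print(pick_chat_model(9000, 2000))  # gpt-35-turbo-16k
```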
These models can only be used with Embedding API requests.
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
| --- | --- | --- | --- | --- |
-| text-embedding-ada-002 (version 2) | East US, South Central US, West Europe | N/A |8,191 | Sep 2021 |
+| text-embedding-ada-002 (version 2) | Canada East, East US, South Central US, West Europe | N/A |8,191 | Sep 2021 |
| text-embedding-ada-002 (version 1) | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 |

### DALL-E models (Preview)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
keywords:
- Azure OpenAI now [supports arrays with up to 16 inputs](./how-to/switching-endpoints.md#azure-openai-embeddings-multiple-input-support) per API request with text-embedding-ada-002 Version 2.
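Given the 16-input cap per request mentioned above, callers typically chunk larger input lists before sending them. The batching helper below is a generic sketch (it doesn't call the service):

```python
def batch_inputs(texts, max_per_request=16):
    """Split a list of input strings into request-sized batches;
    Azure OpenAI embeddings accept up to 16 inputs per request."""
    return [texts[i:i + max_per_request]
            for i in range(0, len(texts), max_per_request)]

batches = batch_inputs([f"doc-{n}" for n in range(40)])
print([len(b) for b in batches])  # [16, 16, 8]
```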
+### New regions
+
+- Azure OpenAI is now also available in the Canada East, Japan East, and North Central US regions. Check the [models page](concepts/models.md) for the latest information on model availability in each region.
+
## June 2023

### Use Azure OpenAI on your own data (preview)
ai-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md
Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
aks Howto Deploy Java Quarkus App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-quarkus-app.md
+
+ Title: "Deploy Quarkus on Azure Kubernetes Service"
+description: Shows how to quickly stand up Quarkus on Azure Kubernetes Service.
++++ Last updated : 07/26/2023+
+#CustomerIntent: As a developer, I want to deploy a simple CRUD Quarkus app on AKS so that I can start iterating it into a proper LOB app.
+# external contributor: danieloh30
++
+# Deploy a Java application with Quarkus on an Azure Kubernetes Service cluster
+
+This article shows you how to quickly deploy Red Hat Quarkus on Azure Kubernetes Service (AKS) with a simple CRUD application. The application is a "to do list" with a JavaScript front end and a REST endpoint. Azure Database for PostgreSQL provides the persistence layer for the app. The article shows you how to test your app locally and deploy it to AKS.
+
+## Prerequisites
+
+- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- Azure Cloud Shell has all of these prerequisites preinstalled. For more information, see [Quickstart for Azure Cloud Shell](/azure/cloud-shell/quickstart).
+- If you're running the commands in this guide locally (instead of using Azure Cloud Shell), complete the following steps:
+ - Prepare a local machine with a Unix-like operating system installed (for example, Ubuntu, macOS, or Windows Subsystem for Linux).
+ - Install a Java SE implementation (for example, [Microsoft build of OpenJDK](/java/openjdk)).
+ - Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
+ - Install [Docker](https://docs.docker.com/get-docker/) or [Podman](https://podman.io/docs/installation) for your OS.
+ - Install [jq](https://jqlang.github.io/jq/download/).
+ - Install [cURL](https://curl.se/download.html).
+ - Install the [Quarkus CLI](https://quarkus.io/guides/cli-tooling).
+- Azure CLI for Unix-like environments. This article requires only the Bash variant of Azure CLI.
+ - [!INCLUDE [azure-cli-login](../../includes/azure-cli-login.md)]
+ - This article requires at least version 2.31.0 of Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
+
+## Create the app project
+
+Use the following command to clone the sample Java project for this article. The sample is on [GitHub](https://github.com/Azure-Samples/quarkus-azure).
+
+```bash
+git clone https://github.com/Azure-Samples/quarkus-azure
+cd quarkus-azure
+git checkout 2023-07-17
+cd aks-quarkus
+```
+
+If you see a message about being in *detached HEAD* state, this message is safe to ignore. Because this article doesn't require any commits, detached HEAD state is appropriate.
+
+## Test your Quarkus app locally
+
+The steps in this section show you how to run the app locally.
+
+Quarkus supports the automatic provisioning of unconfigured services in development and test mode. Quarkus refers to this capability as dev services. Let's say you include a Quarkus feature, such as connecting to a database service. You want to test the app, but haven't yet fully configured the connection to a real database. Quarkus automatically starts a stub version of the relevant service and connects your application to it. For more information, see [Dev Services Overview](https://quarkus.io/guides/dev-services#databases) in the Quarkus documentation.
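+
+Dev services are enabled by default in dev and test mode. If you ever need to opt out - for example, to point dev mode at a real database that you configure yourself - you can turn them off with a standard Quarkus property in *application.properties*. This setting is shown for reference only; this article doesn't require it:
+
+```properties
+# Turn off automatic dev services provisioning (enabled by default in dev/test mode)
+quarkus.devservices.enabled=false
+```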
+
+Make sure your container environment, Docker or Podman, is running and use the following command to enter Quarkus dev mode:
+
+```azurecli-interactive
+quarkus dev
+```
+
+Instead of `quarkus dev`, you can accomplish the same thing with Maven by using `mvn quarkus:dev`.
+
+You may be asked whether you want to send telemetry about your usage of Quarkus dev mode. Answer however you like.
+
+Quarkus dev mode enables live reload with background compilation. If you modify any aspect of your app source code and refresh your browser, you can see the changes. If there are any issues with compilation or deployment, an error page lets you know. Quarkus dev mode listens for a debugger on port 5005. If you want to wait for the debugger to attach before running, pass `-Dsuspend` on the command line. If you don't want the debugger at all, you can use `-Ddebug=false`.
+
+The output should look like the following example:
+
+```output
+__ ____ __ _____ ___ __ ____ ______
+ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
+ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
+--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
+INFO [io.quarkus] (Quarkus Main Thread) quarkus-todo-demo-app-aks 1.0.0-SNAPSHOT on JVM (powered by Quarkus 3.2.0.Final) started in 3.377s. Listening on: http://localhost:8080
+
+INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
+INFO [io.quarkus] (Quarkus Main Thread) Installed features: [agroal, cdi, hibernate-orm, hibernate-orm-panache, hibernate-validator, jdbc-postgresql, narayana-jta, resteasy-reactive, resteasy-reactive-jackson, smallrye-context-propagation, vertx]
+
+--
+Tests paused
+Press [e] to edit command line args (currently ''), [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options>
+```
+
+Press <kbd>w</kbd> on the terminal where Quarkus dev mode is running. The <kbd>w</kbd> key opens your default web browser to show the `Todo` application. You can also access the application GUI at `http://localhost:8080` directly.
++
+Try selecting a few todo items in the todo list. The UI indicates selection with a strikethrough text style. You can also add a new todo item to the todo list by typing *Verify Todo apps* and pressing <kbd>ENTER</kbd>, as shown in the following screenshot:
++
+Access the RESTful API (`/api`) to get all todo items stored in the local PostgreSQL database:
+
+```azurecli-interactive
+curl --verbose http://localhost:8080/api | jq .
+```
+
+The output should look like the following example:
+
+```output
+* Connected to localhost (127.0.0.1) port 8080 (#0)
+> GET /api HTTP/1.1
+> Host: localhost:8080
+> User-Agent: curl/7.88.1
+> Accept: */*
+>
+< HTTP/1.1 200 OK
+< content-length: 664
+< Content-Type: application/json;charset=UTF-8
+<
+{ [664 bytes data]
+100 664 100 664 0 0 13278 0 --:--:-- --:--:-- --:--:-- 15441
+* Connection #0 to host localhost left intact
+[
+ {
+ "id": 1,
+ "title": "Introduction to Quarkus Todo App",
+ "completed": false,
+ "order": 0,
+ "url": null
+ },
+ {
+ "id": 2,
+ "title": "Quarkus on Azure App Service",
+ "completed": false,
+ "order": 1,
+ "url": "https://learn.microsoft.com/en-us/azure/developer/java/eclipse-microprofile/deploy-microprofile-quarkus-java-app-with-maven-plugin"
+ },
+ {
+ "id": 3,
+ "title": "Quarkus on Azure Container Apps",
+ "completed": false,
+ "order": 2,
+ "url": "https://learn.microsoft.com/en-us/training/modules/deploy-java-quarkus-azure-container-app-postgres/"
+ },
+ {
+ "id": 4,
+ "title": "Quarkus on Azure Functions",
+ "completed": false,
+ "order": 3,
+ "url": "https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-first-quarkus"
+ },
+ {
+ "id": 5,
+ "title": "Verify Todo apps",
+ "completed": false,
+ "order": 5,
+ "url": null
+ }
+]
+```
+
+Press <kbd>q</kbd> to exit Quarkus dev mode.
+
+## Create the Azure resources to run the Quarkus app
+
+The steps in this section show you how to create the following Azure resources to run the Quarkus sample app:
+
+- Azure Database for PostgreSQL
+- Azure Container Registry (ACR)
+- Azure Kubernetes Service (AKS)
+
+Some of these resources must have unique names within the scope of the Azure subscription. To ensure this uniqueness, you can use the *initials, sequence, date, suffix* pattern. To apply this pattern, name your resources by listing your initials, a sequence number, today's date, and a resource-specific suffix - for example, `rg` for "resource group". Use the following commands to define some environment variables to use later:
+
+```azurecli-interactive
+export UNIQUE_VALUE=<your unique value, such as ejb010717>
+export RESOURCE_GROUP_NAME=${UNIQUE_VALUE}rg
+export LOCATION=<your desired Azure region for deploying your resources. For example, eastus>
+export REGISTRY_NAME=${UNIQUE_VALUE}reg
+export DB_SERVER_NAME=${UNIQUE_VALUE}db
+export CLUSTER_NAME=${UNIQUE_VALUE}aks
+export AKS_NS=${UNIQUE_VALUE}ns
+```
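+
+If you'd rather generate a value that follows this pattern automatically, the following sketch derives one from your initials, a sequence number, and today's date. The `ejb` initials and the `01` sequence are placeholders; substitute your own:
+
+```bash
+# Compose a unique value: initials + sequence number + date (MMDD).
+# "ejb" and "01" are placeholders for illustration.
+INITIALS=ejb
+export UNIQUE_VALUE=${INITIALS}01$(date +%m%d)
+echo $UNIQUE_VALUE
+```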
+
+### Create an Azure Database for PostgreSQL
+
+Azure Database for PostgreSQL is a managed service to run, manage, and scale highly available PostgreSQL databases in the Azure cloud. This section directs you to a separate quickstart that shows you how to create a single Azure Database for PostgreSQL server and connect to it. However, when you follow the steps in the quickstart, you need to use the settings in the following table to customize the database deployment for the sample Quarkus app. Replace the environment variables with their actual values when filling out the fields in the Azure portal.
+
+| Setting | Value | Description |
+|:|:-|:-|
+| Resource group | `${RESOURCE_GROUP_NAME}` | Select **Create new**. The deployment creates this new resource group. |
+| Server name | `${DB_SERVER_NAME}` | This value forms part of the hostname for the database server. |
+| Location | `${LOCATION}` | Select a location from the dropdown list. Take note of the location. You must use this same location for other Azure resources you create. |
+| Admin username | *quarkus* | The sample code assumes this value. |
+| Password | *Secret123456* | The sample code assumes this value. |
+
+With these value substitutions in mind, follow the steps in [Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portal](/azure/postgresql/quickstart-create-server-database-portal) up to the "Configure a firewall rule" section. Then, in the "Configure a firewall rule" section, be sure to select **Yes** for **Allow access to Azure services**, then select **Save**. If you neglect this step, your Quarkus app can't access the database and fails to start.
+
+After you complete the steps in the quickstart through the "Configure a firewall rule" section, including the step to allow access to Azure services, return to this article.
+
+### Create a Todo database in PostgreSQL
+
+The PostgreSQL server that you created earlier is empty. It doesn't have any database that you can use with the Quarkus application. Create a new database called `todo` by using the following command:
+
+```azurecli-interactive
+az postgres db create \
+ --resource-group ${RESOURCE_GROUP_NAME} \
+ --name todo \
+ --server-name ${DB_SERVER_NAME}
+```
+
+You must use `todo` as the name of the database because the sample code assumes that database name.
+
+If the command is successful, the output looks similar to the following example:
+
+```output
+{
+ "charset": "UTF8",
+ "collation": "English_United States.1252",
+ "id": "/subscriptions/REDACTED/resourceGroups/ejb010718rg/providers/Microsoft.DBforPostgreSQL/servers/ejb010718db/databases/todo",
+ "name": "todo",
+ "resourceGroup": "ejb010718rg",
+ "type": "Microsoft.DBforPostgreSQL/servers/databases"
+}
+```
+
+### Create a Microsoft Azure Container Registry instance
+
+Because Quarkus is a cloud native technology, it has built-in support for creating containers that run in Kubernetes. Kubernetes requires a container registry from which to pull the container images it runs. AKS has built-in support for Azure Container Registry (ACR).
+
+Use the [az acr create](/cli/azure/acr#az-acr-create) command to create the ACR instance. The following example creates an ACR instance named with the value of your environment variable `${REGISTRY_NAME}`:
+
+```azurecli-interactive
+az acr create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --location ${LOCATION} \
+ --name $REGISTRY_NAME \
+ --sku Basic \
+ --admin-enabled
+```
+
+After a short time, you should see JSON output that contains the following lines:
+
+```output
+ "provisioningState": "Succeeded",
+ "publicNetworkAccess": "Enabled",
+ "resourceGroup": "<YOUR_RESOURCE_GROUP>",
+```
+
+### Connect your Docker to the ACR instance
+
+Sign in to the ACR instance so that you can push an image to it. Use the following commands to sign in and verify the connection:
+
+```azurecli-interactive
+export LOGIN_SERVER=$(az acr show \
+ --name $REGISTRY_NAME \
+ --query 'loginServer' \
+ --output tsv)
+echo $LOGIN_SERVER
+export USER_NAME=$(az acr credential show \
+ --name $REGISTRY_NAME \
+ --query 'username' \
+ --output tsv)
+echo $USER_NAME
+export PASSWORD=$(az acr credential show \
+ --name $REGISTRY_NAME \
+ --query 'passwords[0].value' \
+ --output tsv)
+echo $PASSWORD
+docker login $LOGIN_SERVER -u $USER_NAME -p $PASSWORD
+```
+
+If you're using Podman instead of Docker, make the necessary changes to the command.
+
+If you've signed in to the ACR instance successfully, you should see `Login Succeeded` at the end of the command output.
+
+### Create an AKS cluster
+
+Use the [az aks create](/cli/azure/aks#az-aks-create) command to create an AKS cluster. The following example creates a cluster named with the value of your environment variable `${CLUSTER_NAME}` with one node. The cluster is connected to the ACR instance you created in a preceding step. This command takes several minutes to complete.
+
+```azurecli-interactive
+az aks create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --location ${LOCATION} \
+ --name $CLUSTER_NAME \
+ --attach-acr $REGISTRY_NAME \
+ --node-count 1 \
+ --generate-ssh-keys \
+ --enable-managed-identity
+```
+
+After a few minutes, the command completes and returns JSON-formatted information about the cluster, including the following output:
+
+```output
+ "nodeResourceGroup": "MC_<your resource_group_name>_<your cluster name>_<your region>",
+ "privateFqdn": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "<your resource group name>",
+```
+
+### Connect to the AKS cluster
+
+To manage a Kubernetes cluster, you use `kubectl`, the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli](/cli/azure/aks#az-aks-install-cli) command, as shown in the following example:
+
+```azurecli-interactive
+az aks install-cli
+```
+
+For more information about `kubectl`, see [Command line tool (kubectl)](https://kubernetes.io/docs/reference/kubectl/overview/) in the Kubernetes documentation.
+
+To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials](/cli/azure/aks#az-aks-get-credentials) command, as shown in the following example. This command downloads credentials and configures the Kubernetes CLI to use them.
+
+```azurecli-interactive
+az aks get-credentials \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $CLUSTER_NAME \
+ --overwrite-existing \
+ --admin
+```
+
+Successful output includes text similar to the following example:
+
+```output
+Merged "ejb010718aks-admin" as current context in /Users/edburns/.kube/config
+```
+
+You might find it useful to alias `k` to `kubectl`. If so, use the following command:
+
+```azurecli-interactive
+alias k=kubectl
+```
+
+To verify the connection to your cluster, use the `kubectl get` command to return a list of the cluster nodes, as shown in the following example:
+
+```azurecli-interactive
+kubectl get nodes
+```
+
+The following example output shows the single node created in the previous steps. Make sure that the status of the node is **Ready**:
+
+```output
+NAME STATUS ROLES AGE VERSION
+aks-nodepool1-xxxxxxxx-yyyyyyyyyy Ready agent 76s v1.23.8
+```
+
+### Create a new namespace in AKS
+
+Use the following command to create a new namespace in your Kubernetes service for your Quarkus app:
+
+```azurecli-interactive
+kubectl create namespace ${AKS_NS}
+```
+
+The output should look like the following example:
+
+```output
+namespace/<your namespace> created
+```
+
+### Customize the cloud native configuration
+
+As a cloud native technology, Quarkus offers the ability to automatically configure resources for standard Kubernetes, Red Hat OpenShift, and Knative. For more information, see the [Quarkus Kubernetes guide](https://quarkus.io/guides/deploying-to-kubernetes#kubernetes), [Quarkus OpenShift guide](https://quarkus.io/guides/deploying-to-kubernetes#openshift) and [Quarkus Knative guide](https://quarkus.io/guides/deploying-to-kubernetes#knative). Developers can deploy the application to a target Kubernetes cluster by applying the generated manifests.
+
+To generate the appropriate Kubernetes resources, use the following command to add the `quarkus-kubernetes` and `container-image-jib` extensions in your local terminal:
+
+```azurecli-interactive
+quarkus ext add kubernetes container-image-jib
+```
+
+Quarkus modifies the POM to ensure these extensions are listed as `<dependencies>`. If asked to install something called `JBang`, answer *yes* and allow it to be installed.
+
+The output should look like the following example:
+
+```output
+[SUCCESS] ✅ Extension io.quarkus:quarkus-kubernetes has been installed
+[SUCCESS] ✅ Extension io.quarkus:quarkus-container-image-jib has been installed
+```
+
+To verify the extensions are added, you can run `git diff` and examine the output.
+
+As a cloud native technology, Quarkus supports the notion of configuration profiles. Quarkus has the following three built-in profiles:
+
+- `dev` - Activated when in development mode
+- `test` - Activated when running tests
+- `prod` - The default profile when not running in development or test mode
+
+Quarkus supports any number of named profiles, as needed.
+
+The remaining steps in this section direct you to uncomment and customize values in the *src/main/resources/application.properties* file. Ensure that all lines starting with `# %prod.` are uncommented by removing the leading `#`.
+
+The `%prod.` prefix indicates that these properties are active when running in the `prod` profile. For more information on configuration profiles, see the [Quarkus documentation](https://access.redhat.com/search/?q=Quarkus+Using+configuration+profiles).
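+
+Rather than removing each leading `#` by hand, you can apply the uncommenting with a `sed` substitution. The following sketch runs the substitution on a single sample line to show the effect:
+
+```bash
+# The substitution removes the leading "# " from lines that start with "# %prod."
+echo '# %prod.quarkus.datasource.db-kind=postgresql' \
+  | sed 's/^# %prod\./%prod./'
+# prints: %prod.quarkus.datasource.db-kind=postgresql
+```
+
+Applied to the whole file from the *aks-quarkus* directory, the same substitution is `sed -i.bak 's/^# %prod\./%prod./' src/main/resources/application.properties`, which edits the file in place and keeps a *.bak* backup.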
+
+#### Database configuration
+
+Add the following database configuration variables. Replace `<DB_SERVER_NAME_VALUE>` with the actual value of the `${DB_SERVER_NAME}` environment variable.
+
+```properties
+# Database configurations
+%prod.quarkus.datasource.db-kind=postgresql
+%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://<DB_SERVER_NAME_VALUE>.postgres.database.azure.com:5432/todo
+%prod.quarkus.datasource.jdbc.driver=org.postgresql.Driver
+%prod.quarkus.datasource.username=quarkus@<DB_SERVER_NAME_VALUE>
+%prod.quarkus.datasource.password=Secret123456
+%prod.quarkus.hibernate-orm.database.generation=drop-and-create
+```
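+
+If you prefer not to edit the placeholder by hand, a `sed` substitution can fill it in from the environment variable. The following sketch shows the effect on a single sample line; `ejb010718db` stands in for your actual server name:
+
+```bash
+# Fill the <DB_SERVER_NAME_VALUE> placeholder from the environment variable.
+DB_SERVER_NAME=ejb010718db   # placeholder value for illustration
+echo '%prod.quarkus.datasource.username=quarkus@<DB_SERVER_NAME_VALUE>' \
+  | sed "s/<DB_SERVER_NAME_VALUE>/${DB_SERVER_NAME}/g"
+# prints: %prod.quarkus.datasource.username=quarkus@ejb010718db
+```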
+
+#### Kubernetes configuration
+
+Add the following Kubernetes configuration variables. Make sure to set `service-type` to `load-balancer` to access the app externally.
+
+```properties
+# AKS configurations
+%prod.quarkus.kubernetes.deployment-target=kubernetes
+%prod.quarkus.kubernetes.service-type=load-balancer
+```
+
+#### Container image configuration
+
+As a cloud native technology, Quarkus supports generating OCI container images compatible with Docker and Podman. Add the following container-image variables. Replace `<LOGIN_SERVER_VALUE>` and `<USER_NAME_VALUE>` with the actual values of the `${LOGIN_SERVER}` and `${USER_NAME}` environment variables, respectively.
+
+```properties
+# Container Image Build
+%prod.quarkus.container-image.build=true
+%prod.quarkus.container-image.registry=<LOGIN_SERVER_VALUE>
+%prod.quarkus.container-image.group=<USER_NAME_VALUE>
+%prod.quarkus.container-image.name=todo-quarkus-aks
+%prod.quarkus.container-image.tag=1.0
+```
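+
+These properties combine into the full reference of the image that Quarkus builds, in the form `registry/group/name:tag`. The following sketch shows the composition with placeholder values:
+
+```bash
+# Placeholder values for illustration; your LOGIN_SERVER and USER_NAME differ.
+LOGIN_SERVER=myregistry.azurecr.io
+USER_NAME=myregistry
+IMAGE="${LOGIN_SERVER}/${USER_NAME}/todo-quarkus-aks:1.0"
+echo $IMAGE
+# prints: myregistry.azurecr.io/myregistry/todo-quarkus-aks:1.0
+```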
+
+### Build the container image and push it to ACR
+
+Now, use the following command to build the application itself. This command uses the Kubernetes and Jib extensions to build the container image.
+
+```azurecli-interactive
+quarkus build --no-tests
+```
+
+The output should end with `BUILD SUCCESS`. The Kubernetes manifest files are generated in *target/kubernetes*, as shown in the following example:
+
+```output
+tree target/kubernetes
+target/kubernetes
+├── kubernetes.json
+└── kubernetes.yml
+
+0 directories, 2 files
+```
+
+You can also verify that the container image was generated by using the `docker` or `podman` command-line interface (CLI). The output looks similar to the following example:
+
+```output
+docker images | grep todo
+<LOGIN_SERVER_VALUE>/<USER_NAME_VALUE>/todo-quarkus-aks 1.0 b13c389896b7 18 minutes ago 420MB
+```
+
+Push the container image to ACR by using the following commands:
+
+```azurecli-interactive
+export TODO_QUARKUS_TAG=$(docker images | grep todo-quarkus-aks | head -n1 | cut -d " " -f1)
+echo ${TODO_QUARKUS_TAG}
+docker push ${TODO_QUARKUS_TAG}:1.0
+```
+
+The output should look similar to the following example:
+
+```output
+The push refers to repository [<LOGIN_SERVER_VALUE>/<USER_NAME_VALUE>/todo-quarkus-aks]
+dfd615499b3a: Pushed
+56f5cf1aa271: Pushed
+4218d39b228e: Pushed
+b0538737ed64: Pushed
+d13845d85ee5: Pushed
+60609ec85f86: Pushed
+1.0: digest: sha256:0ffd70d6d5bb3a4621c030df0d22cf1aa13990ca1880664d08967bd5bab1f2b6 size: 1995
+```
+
+Now that you've pushed the app to ACR, you can tell AKS to run the app.
+
+## Deploy the Quarkus app to AKS
+
+The steps in this section show you how to run the Quarkus sample app on the Azure resources you've created.
+
+### Use kubectl apply to deploy the Quarkus app to AKS
+
+Deploy the Kubernetes resources using `kubectl` on the command line, as shown in the following example:
+
+```azurecli-interactive
+kubectl apply -f target/kubernetes/kubernetes.yml -n ${AKS_NS}
+```
+
+The output should look like the following example:
+
+```output
+deployment.apps/quarkus-todo-demo-app-aks created
+```
+
+Verify the app is running by using the following command:
+
+```azurecli-interactive
+kubectl -n $AKS_NS get pods
+```
+
+If the value of the `STATUS` field shows anything other than `Running`, troubleshoot and resolve the problem before continuing. It may help to examine the pod logs by using the following command:
+
+```azurecli-interactive
+kubectl -n $AKS_NS logs $(kubectl -n $AKS_NS get pods | grep quarkus-todo-demo-app-aks | cut -d " " -f1)
+```
+
+Get the `EXTERNAL-IP` to access the Todo application by using the following command:
+
+```azurecli-interactive
+kubectl get svc -n ${AKS_NS}
+```
+
+The output should look like the following example:
+
+```output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+quarkus-todo-demo-app-aks LoadBalancer 10.0.236.101 20.12.126.200 80:30963/TCP 37s
+```
+
+You can use the following command to save the value of `EXTERNAL-IP` to an environment variable as a fully qualified URL:
+
+```azurecli-interactive
+export QUARKUS_URL=http://$(kubectl get svc -n ${AKS_NS} | grep quarkus-todo-demo-app-aks | cut -d " " -f10)
+echo $QUARKUS_URL
+```
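+
+The `cut -d " " -f10` extraction depends on the exact column spacing of the `kubectl get svc` output. As an alternative, `awk` splits on runs of whitespace, so the field number doesn't depend on padding. The following sketch parses a sample output line like the one shown previously:
+
+```bash
+# awk's $4 is the EXTERNAL-IP column regardless of how many spaces separate fields.
+SVC_LINE='quarkus-todo-demo-app-aks   LoadBalancer   10.0.236.101   20.12.126.200   80:30963/TCP   37s'
+EXTERNAL_IP=$(echo "$SVC_LINE" | awk '{print $4}')
+echo "http://${EXTERNAL_IP}"
+# prints: http://20.12.126.200
+```
+
+Against the live cluster, the equivalent is `kubectl get svc -n ${AKS_NS} | grep quarkus-todo-demo-app-aks | awk '{print $4}'`.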
+
+Open a new web browser to the value of `${QUARKUS_URL}`. Then, add a new todo item with the text `Deployed the Todo app to AKS`. Also, select the `Introduction to Quarkus Todo App` item as complete.
++
+Access the RESTful API (`/api`) to get all todo items stored in the Azure PostgreSQL database, as shown in the following example:
+
+```azurecli-interactive
+curl --verbose ${QUARKUS_URL}/api | jq .
+```
+
+The output should look like the following example:
+
+```output
+* Connected to 20.237.68.225 (20.237.68.225) port 80 (#0)
+> GET /api HTTP/1.1
+> Host: 20.237.68.225
+> User-Agent: curl/7.88.1
+> Accept: */*
+>
+< HTTP/1.1 200 OK
+< content-length: 828
+< Content-Type: application/json;charset=UTF-8
+<
+[
+ {
+ "id": 2,
+ "title": "Quarkus on Azure App Service",
+ "completed": false,
+ "order": 1,
+ "url": "https://learn.microsoft.com/en-us/azure/developer/java/eclipse-microprofile/deploy-microprofile-quarkus-java-app-with-maven-plugin"
+ },
+ {
+ "id": 3,
+ "title": "Quarkus on Azure Container Apps",
+ "completed": false,
+ "order": 2,
+ "url": "https://learn.microsoft.com/en-us/training/modules/deploy-java-quarkus-azure-container-app-postgres/"
+ },
+ {
+ "id": 4,
+ "title": "Quarkus on Azure Functions",
+ "completed": false,
+ "order": 3,
+ "url": "https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-first-quarkus"
+ },
+ {
+ "id": 5,
+ "title": "Deployed the Todo app to AKS",
+ "completed": false,
+ "order": 5,
+ "url": null
+ },
+ {
+ "id": 1,
+ "title": "Introduction to Quarkus Todo App",
+ "completed": true,
+ "order": 0,
+ "url": null
+ }
+]
+```
+
+### Verify the database has been updated using Azure Cloud Shell
+
+Open Azure Cloud Shell in the Azure portal by selecting the **Cloud Shell** icon, as shown in the following screenshot:
++
+Run the following command locally and paste the result into Azure Cloud Shell. The `echo` expands the environment variables locally, because they aren't defined in the Cloud Shell session:
+
+```azurecli-interactive
+echo psql --host=${DB_SERVER_NAME}.postgres.database.azure.com --port=5432 --username=quarkus@${DB_SERVER_NAME} --dbname=todo
+```
+
+When asked for the password, use the value you used when you created the database.
+
+Use the following query to get all the todo items:
+
+```azurecli-interactive
+select * from todo;
+```
+
+The output should look similar to the following example, and should include the same items in the Todo app GUI shown previously:
++
+If you see `MORE` in the output, type <kbd>q</kbd> to exit the pager.
+
+Enter *\q* to exit from the `psql` program and return to the Cloud Shell.
+
+## Clean up resources
+
+To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, container service, container registry, and all related resources. The `git reset` and `docker rmi` commands in the following example also clean up local artifacts.
+
+```azurecli-interactive
+git reset --hard
+docker rmi ${TODO_QUARKUS_TAG}:1.0
+docker rmi postgres
+az group delete --name $RESOURCE_GROUP_NAME --yes --no-wait
+```
+
+You may also want to use `docker rmi` to delete the `testcontainers` images that Quarkus dev mode generated.
+
+## Next steps
+
+- [Azure Kubernetes Service](https://azure.microsoft.com/free/services/kubernetes-service/)
+- [Deploy serverless Java apps with Quarkus on Azure Functions](/azure/azure-functions/functions-create-first-quarkus)
+- [Quarkus](https://quarkus.io/)
+- [Jakarta EE on Azure](/azure/developer/java/ee)
aks Kubelet Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubelet-logs.md
Last updated 05/09/2023
When operating an Azure Kubernetes Service (AKS) cluster, you may need to review logs to troubleshoot a problem. Azure portal has a built-in capability that allows you to view logs for AKS [main components][aks-main-logs] and [cluster containers][azure-container-logs]. Occasionally, you may need to get *kubelet* logs from AKS nodes for troubleshooting purposes. This article shows you how you can use `journalctl` to view *kubelet* logs on an AKS node.
+Alternatively, customers can collect kubelet logs using the [syslog collection feature in Azure Monitor - Container Insights](https://aka.ms/CISyslog).
## Before you begin
aks Tutorial Kubernetes Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md
To help simplify steps to configure the identities required, the steps below def
export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g "${RESOURCE_GROUP}" --query "oidcIssuerProfile.issuerUrl" -otsv)" ```
+ The variable should contain the Issuer URL similar to the following example:
+
+ ```output
+ https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000/
+ ```
+
+ By default, the Issuer is set to use the base URL `https://{region}.oic.prod-aks.azure.com`, where the value for `{region}` matches the location the AKS cluster is deployed in.
+ ## Create an Azure Key Vault and secret 1. Create an Azure Key Vault in resource group you created in this tutorial using the [`az keyvault create`][az-keyvault-create] command.
aks Managed Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-azure-ad.md
Title: AKS-managed Azure Active Directory integration description: Learn how to configure Azure AD for your Azure Kubernetes Service (AKS) clusters. Previously updated : 07/05/2023 Last updated : 07/25/2023
Azure AD integrated clusters using a Kubernetes version newer than version 1.24
> [!NOTE] > If you receive the message **error: The Azure auth plugin has been removed.**, you need to run the command `kubelogin convert-kubeconfig` to convert the kubeconfig format manually.
+>
+> For more information, you can refer to [Azure Kubelogin Known Issues][azure-kubelogin-known-issues].
## Troubleshoot access issues with AKS-managed Azure AD
If you're permanently blocked by not having access to a valid Azure AD group wit
<!-- LINKS - external --> [aks-arm-template]: /azure/templates/microsoft.containerservice/managedclusters [kubelogin]: https://github.com/Azure/kubelogin
+[azure-kubelogin-known-issues]: https://azure.github.io/kubelogin/known-issues.html
<!-- LINKS - Internal --> [aks-concepts-identity]: concepts-identity.md
aks Open Ai Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-ai-quickstart.md
Now that the application is deployed, you can deploy the Python-based microservi
value: "" - name: OPENAI_API_KEY value: ""
- resources: {}
+ resources:
+ requests:
+ cpu: 20m
+ memory: 46Mi
+ limits:
+ cpu: 30m
+ memory: 50Mi
apiVersion: v1 kind: Service
Now that the application is deployed, you can deploy the Python-based microservi
value: "" - name: OPENAI_ORG_ID value: ""
- resources: {}
+ resources:
+ requests:
+ cpu: 20m
+ memory: 46Mi
+ limits:
+ cpu: 30m
+ memory: 50Mi
apiVersion: v1 kind: Service
Now that the application is deployed, you can deploy the Python-based microservi
1. In store admin, click on the products tab, then select **Add Products**. 1. When the ai-service is running successfully, you should see the Ask OpenAI button next to the description field. Fill in the name, price, and keywords, then click Ask OpenAI to generate a product description. Then click save product. See the picture for an example of adding a new product.
+ :::image type="content" source="media/ai-walkthrough/ai-generate-description.png" alt-text="Screenshot of how to use openAI to generate a product description.":::
+
1. You can now see the new product you created on Store Admin used by sellers. In the picture, you can see Jungle Monkey Chew Toy is added.
+ :::image type="content" source="media/ai-walkthrough/new-product-store-admin.png" alt-text="Screenshot viewing the new product in the store admin page.":::
+
1. You can also see the new product you created on Store Front used by buyers. In the picture, you can see Jungle Monkey Chew Toy is added. Remember to get the IP address of store front by using [kubectl get service][kubectl-get].
+ :::image type="content" source="media/ai-walkthrough/new-product-store-front.png" alt-text="Screenshot viewing the new product in the store front page.":::
## Next steps Now that you've seen how to add OpenAI functionality to an AKS application, learn more about what you can do with generative AI for your use cases. Here are some resources to get started:
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
aks Use Oidc Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-oidc-issuer.md
Title: Create an OpenID Connect provider for your Azure Kubernetes Service (AKS) cluster description: Learn how to configure the OpenID Connect (OIDC) provider for a cluster in Azure Kubernetes Service (AKS) Previously updated : 04/28/2023 Last updated : 07/26/2023 -- # Create an OpenID Connect provider on Azure Kubernetes Service (AKS) [OpenID Connect][open-id-connect-overview] (OIDC) extends the OAuth 2.0 authorization protocol for use as an additional authentication protocol issued by Azure Active Directory (Azure AD). You can use OIDC to enable single sign-on (SSO) between your OAuth-enabled applications, on your Azure Kubernetes Service (AKS) cluster, by using a security token called an ID token. With your AKS cluster, you can enable OpenID Connect (OIDC) Issuer, which allows Azure Active Directory (Azure AD) or other cloud provider identity and access management platform, to discover the API server's public signing keys.
AKS rotates the key automatically and periodically. If you don't want to wait, y
In this article, you learn how to create, update, and manage the OIDC Issuer for your cluster.
-> [!Important]
-> After enabling OIDC issuer on the cluster, it's not supported to disable it.
+> [!IMPORTANT]
+> After enabling OIDC issuer on the cluster, it's not supported to disable it.
## Prerequisites
To get the OIDC Issuer URL, run the [az aks show][az-aks-show] command. Replace
az aks show -n myAKScluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv ```
+By default, the Issuer is set to use the base URL `https://{region}.oic.prod-aks.azure.com`, where the value for `{region}` matches the location the AKS cluster is deployed in.
+ ## Rotate the OIDC key To rotate the OIDC key, run the [az aks oidc-issuer][az-aks-oidc-issuer] command. Replace the default values for the cluster name and the resource group name.
az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup
> [!IMPORTANT] > Once you rotate the key, the old key (key1) expires after 24 hours. This means that both the old key (key1) and the new key (key2) are valid within the 24-hour period. If you want to invalidate the old key (key1) immediately, you need to rotate the OIDC key twice. Then key2 and key3 are valid, and key1 is invalid.
-## Check the OIDC keys
+## Check the OIDC keys
### Get the OIDC Issuer URL+ To get the OIDC Issuer URL, run the [az aks show][az-aks-show] command. Replace the default values for the cluster name and the resource group name. ```azurecli-interactive
The output should resemble the following:
https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000/ ```
+By default, the Issuer is set to use the base URL `https://{region}.oic.prod-aks.azure.com/{uuid}`, where the value for `{region}` matches the location the AKS cluster is deployed in. The value `{uuid}` represents the OIDC key.
+ ### Get the discovery document
-To get the discovery document, copy the URL `https://(OIDC issuer URL).well-known/openid-configuration` and open it in browser.
+To get the discovery document, copy the URL `https://(OIDC issuer URL).well-known/openid-configuration` and open it in a browser.
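Because the issuer URL already ends in a slash, the discovery URL can be assembled by appending `.well-known/openid-configuration` directly. A minimal sketch, assuming a placeholder issuer URL (the real value comes from `az aks show`):

```shell
# Placeholder issuer URL; the GUIDs are not a real cluster.
AKS_OIDC_ISSUER="https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000/"
# The issuer URL already ends in a slash, so append without adding another.
DISCOVERY_URL="${AKS_OIDC_ISSUER}.well-known/openid-configuration"
echo "$DISCOVERY_URL"
# curl -s "$DISCOVERY_URL"   # fetches the document; requires network access
```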
The output should resemble the following:
aks Use Ultra Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-ultra-disks.md
Title: Enable Ultra Disk support on Azure Kubernetes Service (AKS) description: Learn how to enable and configure Ultra Disks in an Azure Kubernetes Service (AKS) cluster Previously updated : 04/10/2023 Last updated : 07/26/2023
This feature can only be set at cluster creation or when creating a node pool.
### Limitations - Azure ultra disks require node pools deployed in availability zones and regions that support these disks, and are only supported by specific VM series. Review the corresponding table under the [Ultra disk limitations][ultra-disk-limitations] section for more information.-- Ultra disks can't be used with some features and functionality, such as availability sets or Azure Disk Encryption. Review the [Ultra disk limitations][ultra-disk-limitations] for the latest information. -- The supported size range for ultra disks is between *100* and *1500*.
+- Ultra disks can't be used with some features and functionality, such as availability sets or Azure Disk Encryption. Review the [Ultra disk limitations][ultra-disk-limitations] for the latest information.
## Create a cluster that can use ultra disks
Once the persistent volume claim has been created and the disk successfully prov
[ultra-disk-limitations]: ../virtual-machines/disks-types.md#ultra-disk-limitations [azure-disk-volume]: azure-disk-csi.md [operator-best-practices-storage]: operator-best-practices-storage.md
-[use-tags]: use-tags.md
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workl
description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity. Previously updated : 05/24/2023 Last updated : 07/26/2023 # Deploy and configure workload identity on an Azure Kubernetes Service (AKS) cluster
To get the OIDC Issuer URL and save it to an environmental variable, run the fol
export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g "${RESOURCE_GROUP}" --query "oidcIssuerProfile.issuerUrl" -otsv)" ```
+The variable should contain the Issuer URL similar to the following example:
+
+```output
+https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000/
+```
+
+By default, the Issuer is set to use the base URL `https://{region}.oic.prod-aks.azure.com/{uuid}`, where the value for `{region}` matches the location the AKS cluster is deployed in. The value `{uuid}` represents the OIDC key.
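Given that URL shape, the region can be sanity-checked with plain shell parameter expansion. A sketch, assuming a placeholder issuer URL (the real value is in `AKS_OIDC_ISSUER` from the previous step):

```shell
# Placeholder issuer URL; in practice this comes from `az aks show`.
AKS_OIDC_ISSUER="https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000/"
REGION="${AKS_OIDC_ISSUER#https://}"   # strip the scheme
REGION="${REGION%%.*}"                 # keep everything before the first dot
echo "$REGION"
```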
+ ## Create a managed identity Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity.
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
Title: Migrate your Azure Kubernetes Service (AKS) pod to use workload identity
description: In this Azure Kubernetes Service (AKS) article, you learn how to configure your Azure Kubernetes Service pod to authenticate with workload identity. Previously updated : 05/23/2023 Last updated : 07/26/2023 # Migrate from pod managed-identity to workload identity
If you don't have a managed identity created and assigned to your pod, perform t
export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv)" ```
+ The variable should contain the Issuer URL similar to the following example:
+
+ ```output
+ https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000/
+ ```
+
+ By default, the Issuer is set to use the base URL `https://{region}.oic.prod-aks.azure.com/{uuid}`, where the value for `{region}` matches the location the AKS cluster is deployed in. The value `{uuid}` represents the OIDC key.
+ ## Create Kubernetes service account If you don't have a dedicated Kubernetes service account created for this application, perform the following steps to create and then annotate it with the client ID of the managed identity created in the previous step. Use the [az aks get-credentials][az-aks-get-credentials] command and replace the values for the cluster name and the resource group name.
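The annotation step above can be sketched as a generated manifest. The service account name, namespace, and client ID below are placeholders (assumptions), and applying the manifest requires the cluster credentials fetched with `az aks get-credentials`:

```shell
# Placeholder client ID of the managed identity created earlier.
USER_ASSIGNED_CLIENT_ID="00000000-0000-0000-0000-000000000000"
# Write a service account manifest carrying the workload identity annotation.
cat <<EOF > workload-identity-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workload-identity-sa
  namespace: default
  annotations:
    azure.workload.identity/client-id: ${USER_ASSIGNED_CLIENT_ID}
EOF
grep "azure.workload.identity/client-id" workload-identity-sa.yaml
# kubectl apply -f workload-identity-sa.yaml   # requires cluster access
```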
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md
Title: Deploy files to App Service description: Learn to deploy various app packages or discrete libraries, static files, or startup scripts to Azure App Service Previously updated : 08/13/2021- Last updated : 07/21/2023+
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
application-gateway Alb Controller Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/alb-controller-release-notes.md
Previously updated : 07/24/2023 Last updated : 07/25/2023
Instructions for new or existing deployments of ALB Controller are found in the
- [Upgrade existing ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md#for-existing-deployments) ## Latest Release (Recommended)
-July 24, 2023 - 0.4.023961 - Improved Ingress support
+July 25, 2023 - 0.4.023971 - Ingress + Gateway co-existence improvements
## Release history
+July 24, 2023 - 0.4.023961 - Improved Ingress support
+ July 24, 2023 - 0.4.023921 - Initial release of ALB Controller * Minimum supported Kubernetes version: v1.25
application-gateway Quickstart Deploy Application Gateway For Containers Alb Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-deploy-application-gateway-for-containers-alb-controller.md
Previously updated : 07/24/2023 Last updated : 07/25/2023
You need to complete the following tasks prior to deploying Application Gateway
```azurecli-interactive az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME helm install alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
- --version 0.4.023961 \
+ --version 0.4.023971 \
--set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv) ```
You need to complete the following tasks prior to deploying Application Gateway
```azurecli-interactive az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
- --version 0.4.023961 \
+ --version 0.4.023971 \
--set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv) ```
application-gateway Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/log-analytics.md
Previously updated : 11/14/2019 Last updated : 07/24/2023 # Use Log Analytics to examine Application Gateway Web Application Firewall (WAF) Logs
-Once your Application Gateway WAF is operational, you can enable logs to inspect what is happening with each request. Firewall logs give insight to what the WAF is evaluating, matching, and blocking. With Log Analytics, you can examine the data inside the firewall logs to give even more insights. For more information about creating a Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md). For more information about log queries, see [Overview of log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
+Once your Application Gateway WAF is operational, you can enable logs to inspect what is happening with each request. Firewall logs give insight to what the WAF is evaluating, matching, and blocking. With Log Analytics, you can examine the data inside the firewall logs to give even more insights. For more information about log queries, see [Overview of log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
+
+## Prerequisites
+
+* An Azure account with an active subscription is required. If you don't already have an account, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* An Azure Web Application Firewall with logs enabled. For more information, see [Azure Web Application Firewall on Azure Application Gateway](../web-application-firewall/ag/ag-overview.md).
+* A Log Analytics workspace. For more information about creating a Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
## Import WAF logs
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.NETWORK" and Category == "ApplicationGatewayFirewallLog" ```
-This will look similar to the following query:
+This looks similar to the following query:
-![Log Analytics query](media/log-analytics/log-query.png)
You can drill down into the data, and plot graphs or create visualizations from here. See the following queries as a starting point:
AzureDiagnostics
Once you create a query, you can add it to your dashboard. Select **Pin to dashboard** in the top right of the Log Analytics workspace. With the previous four queries pinned to an example dashboard, this is the data you can see at a glance:
-![Screenshot shows an Azure dashboard where you can add your query.](media/log-analytics/dashboard.png)
## Next steps
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
automanage Virtual Machines Custom Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/virtual-machines-custom-profile.md
Title: Create a custom profile in Azure Automanage for VMs description: Learn how to create a custom profile in Azure Automanage and select your services and settings.-+ Previously updated : 08/01/2022- Last updated : 07/01/2023+
Sign in to the [Azure portal](https://portal.azure.com/).
:::image type="content" source="media\virtual-machine-custom-profile\create-custom-profile.png" alt-text="Fill out custom profile details.":::
-5. Adjust the profile with the desired services and settings and click **Create**.
+5. Adjust the profile with the desired services and settings and select **Create**.
## Create a custom profile using Azure Resource Manager Templates
-The following ARM template will create an Automanage custom profile. Details on the ARM template and steps on how to deploy are located in the ARM template deployment [section](#arm-template-deployment).
+The following ARM template creates an Automanage custom profile. Details on the ARM template and steps on how to deploy are located in the ARM template deployment [section](#arm-template-deployment).
> [!NOTE] > If you want to use a specific log analytics workspace, specify the ID of the workspace like this: "/subscriptions/**subscriptionId**/resourceGroups/**resourceGroupName**/providers/Microsoft.OperationalInsights/workspaces/**workspaceName**"
The following ARM template will create an Automanage custom profile. Details on
"Backup/RetentionPolicy/DailySchedule/RetentionDuration/DurationType": "Days", "BootDiagnostics/Enable": true, "ChangeTrackingAndInventory/Enable": true,
+ "DefenderForCloud/Enable": true,
"LogAnalytics/Enable": true, "LogAnalytics/Reprovision": "[parameters('LogAnalyticsBehavior')]", "LogAnalytics/Workspace": "[parameters('logAnalyticsWorkspace')]", "UpdateManagement/Enable": true, "VMInsights/Enable": true, "WindowsAdminCenter/Enable": true,
- "GuestConfiguration/Enable": true,
- "DefenderForCloud/Enable": true,
"Tags/ResourceGroup": { "foo": "rg" },
The following ARM template will create an Automanage custom profile. Details on
``` ### ARM template deployment
-This ARM template will create a custom configuration profile that you can assign to your specified machine.
+This ARM template creates a custom configuration profile that you can assign to your specified machine.
The `customProfileName` value is the name of the custom configuration profile that you would like to create.
The `location` value is the region where you would like to store this custom con
The `azureSecurityBaselineAssignmentType` is the audit mode that you can choose for the Azure server security baseline. Your options are
-* ApplyAndAutoCorrect : This setting will apply the Azure security baseline through the Guest Configuration extension, and if any setting within the baseline drifts, we'll auto-remediate the setting so it stays compliant.
-* ApplyAndMonitor : This setting will apply the Azure security baseline through the Guest Configuration extention when you first assign this profile to each machine. After it's applied, the Guest Configuration service will monitor the server baseline and report any drift from the desired state. However, it will not auto-remdiate.
-* Audit : This setting will install the Azure security baseline using the Guest Configuration extension. You'll be able to see where your machine is out of compliance with the baseline, but noncompliance won't be automatically remediated.
+* ApplyAndAutoCorrect : This setting applies the Azure security baseline through the Guest Configuration extension, and if any setting within the baseline drifts, we'll auto-remediate the setting so it stays compliant.
+* ApplyAndMonitor : This setting applies the Azure security baseline through the Guest Configuration extension when you first assign this profile to each machine. After it's applied, the Guest Configuration service monitors the server baseline and reports any drift from the desired state. However, it doesn't auto-remediate.
+* Audit : This setting installs the Azure security baseline using the Guest Configuration extension. You're able to see where your machine is out of compliance with the baseline, but noncompliance isn't automatically remediated.
You can also specify an existing log analytics workspace by adding this setting to the configuration section of properties below: * "LogAnalytics/Workspace": "/subscriptions/**subscriptionId**/resourceGroups/**resourceGroupName**/providers/Microsoft.OperationalInsights/workspaces/**workspaceName**" * "LogAnalytics/Reprovision": false
-Specify your existing workspace in the `LogAnalytics/Workspace` line. Set the `LogAnalytics/Reprovision` setting to true if you would like this log analytics workspace to be used in all cases. This means that any machine with this custom profile will use this workspace, even it is already connected to one. By default, the `LogAnalytics/Reprovision` is set to false. If your machine is already connected to a workspace, then that workspace will continue to be used. If it's not connected to a workspace, then the workspace specified in `LogAnalytics\Workspace` will be used.
+Specify your existing workspace in the `LogAnalytics/Workspace` line. Set the `LogAnalytics/Reprovision` setting to true if you would like this log analytics workspace to be used in all cases. Any machine with this custom profile then uses this workspace, even if it's already connected to one. By default, `LogAnalytics/Reprovision` is set to false. If your machine is already connected to a workspace, then that workspace is still used. If it's not connected to a workspace, then the workspace specified in `LogAnalytics/Workspace` is used.
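Assembled, the two workspace settings sit in the profile's configuration block like the following sketch (the workspace resource ID segments are placeholders):

```json
"configuration": {
    "LogAnalytics/Enable": true,
    "LogAnalytics/Workspace": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>",
    "LogAnalytics/Reprovision": false
}
```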
Also, you can add tags to resources specified in the custom profile like below:
Also, you can add tags to resources specified in the custom profile like below:
}, "Tags/RecoveryVault/Behavior": "Preserve" ```
-The `Tags/Behavior` can be set either to Preserve or Replace. If the resource you are tagging already has the same tag key in the key/value pair, you can replace that key with the specified value in the configuration profile by using the *Replace* behavior. By default, the behavior is set to *Preserve*, meaning that the tag key that is already associated with that resource will be retained and not overwritten by the key/value pair specified in the configuration profile.
+The `Tags/Behavior` can be set either to Preserve or Replace. If the resource you're tagging already has the same tag key in the key/value pair, you can replace that key with the specified value in the configuration profile by using the *Replace* behavior. By default, the behavior is set to *Preserve*, meaning that the tag key that is already associated with that resource is retained and not overwritten by the key/value pair specified in the configuration profile.
Follow these steps to deploy the ARM template: 1. Save this ARM template as `azuredeploy.json`
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|--|--|--|--|--| | [Unity XT](https://www.dell.com/en-us/dt/storage/unity.htm) |1.24.3|1.15.0_2023-01-10|16.0.816.19223 |Not validated| | [PowerStore T](https://www.dell.com/en-us/dt/storage/powerstore-storage-appliance.htm) |1.24.3|1.15.0_2023-01-10|16.0.816.19223 |Not validated|
-| [PowerFlex](https://www.dell.com/en-us/dt/storage/powerflex.htm) |1.21.5|1.4.1_2022-03-08|15.0.2255.119 | 12.3 (Ubuntu 12.3-1) |
+| [PowerFlex](https://www.dell.com/en-us/dt/storage/powerflex.htm) |1.25.0 | 1.21.0_2023-07-11 | 16.0.5100.7242 | 14.5 (Ubuntu 20.04) |
| [PowerStore X](https://www.dell.com/en-us/dt/storage/powerstore-storage-appliance/powerstore-x-series.htm)|1.20.6|1.0.0_2021-07-30|15.0.2148.140 | 12.3 (Ubuntu 12.3-1) | ### Hitachi
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|
+| [OpenShift 4.13.4](https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html) | 1.26.5 | 1.21.0_2023-07-11 | 16.0.5100.7242 | 14.5 (Ubuntu 20.04) |
| OpenShift 4.10.16 | 1.23.5 | 1.11.0_2022-09-13 | 16.0.312.4243 | 12.3 (Ubuntu 12.3-1)| ### VMware
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023 #
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Provider name | Distribution name | Version | | | -- | - |
-| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.9.43](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.23](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html), 4.11.0-rc.6 |
+| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.9.43](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.23](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html), 4.11.0-rc.6, [4.13.4](https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html) |
| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKGm 2.2; upstream K8s v1.25.7+vmware.2 <br> TKGm 2.1.0; upstream K8s v1.24.9+vmware.1 <br> TKGm 1.6.0; upstream K8s v1.23.8+vmware.2 <br>TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5+vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 | | Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.24](https://ubuntu.com/kubernetes/docs/1.24/components) | | SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.3.13](https://github.com/rancher/rke/releases/tag/v1.3.13); Kubernetes versions: 1.24.2, 1.23.8 |
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
Each trigger and binding extension also has its own minimum version requirement,
| Service | Trigger | Input binding | Output binding | |-|-|-|-| | [Azure Blobs][blob-sdk-types] | **Generally Available** | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ |
-| [Azure Queues][queue-sdk-types] | **Preview support** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
+| [Azure Queues][queue-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
| [Azure Service Bus][servicebus-sdk-types] | **Preview support<sup>2</sup>** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
-| [Azure Event Hubs][eventhub-sdk-types] | **Preview support** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
+| [Azure Event Hubs][eventhub-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
| [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>3</sup>_ | **Preview support** | _SDK types not recommended.<sup>1</sup>_ | | [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Preview support** | _SDK types not recommended.<sup>1</sup>_ | | [Azure Event Grid][eventgrid-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
azure-functions Functions Bindings Return Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-return-value.md
description: Learn to manage return values for Azure Functions
ms.devlang: csharp, fsharp, java, javascript, powershell, python Previously updated : 01/14/2019 Last updated : 07/25/2023
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Using the Azure Function return value
-This article explains how return values work inside a function.
+This article explains how return values work inside a function. In languages that have a return value, you can bind a function [output binding](./functions-triggers-bindings.md#binding-direction) to the return value.
-In languages that have a return value, you can bind a function [output binding](./functions-triggers-bindings.md#binding-direction) to the return value:
-* In a C# class library, apply the output binding attribute to the method return value.
-* In Java, apply the output binding annotation to the function method.
-* In other languages, set the `name` property in *function.json* to `$return`.
+Set the `name` property in *function.json* to `$return`. If there are multiple output bindings, use the return value for only one of them.
-If there are multiple output bindings, use the return value for only one of them.
-In C# and C# script, alternative ways to send data to an output binding are `out` parameters and [collector objects](functions-reference-csharp.md#writing-multiple-output-values).
-# [C#](#tab/csharp)
+How return values are used depends on the C# mode you're using in your function app:
+
+# [In-process](#tab/in-process)
++
+In a C# class library, apply the output binding attribute to the method return value. In C# and C# script, alternative ways to send data to an output binding are `out` parameters and [collector objects](functions-reference-csharp.md#writing-multiple-output-values).
Here's C# code that uses the return value for an output binding, followed by an async example:
public static Task<string> Run([QueueTrigger("inputqueue")]WorkItem input, ILogg
} ```
-# [C# Script](#tab/csharp-script)
-
-Here's the output binding in the *function.json* file:
-
-```json
-{
- "name": "$return",
- "type": "blob",
- "direction": "out",
- "path": "output-container/{id}"
-}
-```
-
-Here's the C# script code, followed by an async example:
-
-```cs
-public static string Run(WorkItem input, ILogger log)
-{
- string json = string.Format("{{ \"id\": \"{0}\" }}", input.Id);
- log.LogInformation($"C# script processed queue message. Item={json}");
- return json;
-}
-```
-
-```cs
-public static Task<string> Run(WorkItem input, ILogger log)
-{
- string json = string.Format("{{ \"id\": \"{0}\" }}", input.Id);
- log.LogInformation($"C# script processed queue message. Item={json}");
- return Task.FromResult(json);
-}
-```
-
-# [F#](#tab/fsharp)
+# [Isolated process](#tab/isolated-process)
-Here's the output binding in the *function.json* file:
+See [Output bindings in the .NET worker guide](./dotnet-isolated-process-guide.md#output-bindings) for details and examples.
-```json
-{
- "name": "$return",
- "type": "blob",
- "direction": "out",
- "path": "output-container/{id}"
-}
-```
-
-Here's the F# code:
+
-```fsharp
-let Run(input: WorkItem, log: ILogger) =
- let json = String.Format("{{ \"id\": \"{0}\" }}", input.Id)
- log.LogInformation(sprintf "F# script processed queue message '%s'" json)
- json
-```
-# [JavaScript](#tab/javascript)
Here's the output binding in the *function.json* file:
module.exports = function (context, input) {
return json; } ```
-# [PowerShell](#tab/PowerShell)
+++ Here's the output binding in the *function.json* file:
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
}) ```
-# [Python](#tab/python)
+ Here's the output binding in the *function.json* file:
def main(input: azure.functions.InputStream) -> str:
}) ```
-# [Java](#tab/java)
+++
+Apply the output binding annotation to the function method. If there are multiple output bindings, use the return value for only one of them.
+ Here's Java code that uses the return value for an output binding:
public static String run(
} ``` -+ ## Next steps
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
If you require faster or more reliable blob processing, you should instead imple
+ Change your binding definition to consume [blob events](../storage/blobs/storage-blob-event-overview.md) instead of polling the container. You can do this in one of two ways: + Add the `source` parameter with a value of `EventGrid` to your binding definition and create an event subscription on the same container. For more information, see [Tutorial: Trigger Azure Functions on blob containers using an event subscription](./functions-event-grid-blob-trigger.md). + Replace the Blob Storage trigger with an [Event Grid trigger](functions-bindings-event-grid-trigger.md) using an event subscription on the same container. For more information, see the [Image resize with Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md) tutorial.
-+ Consider creating a [queue message](../storage/queues/storage-dotnet-how-to-use-queues.md) when you create the blob. Then use a [queue trigger](functions-bindings-storage-queue.md) instead of a blob trigger to process the blob.
++ Consider creating a [queue message](/azure/storage/queues/storage-quickstart-queues-dotnet?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli) when you create the blob. Then use a [queue trigger](functions-bindings-storage-queue.md) instead of a blob trigger to process the blob. + Switch your hosting to use an App Service plan with Always On enabled, which may result in increased costs. ## Blob receipts
azure-functions Functions Node Upgrade V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-upgrade-v4.md
# Upgrade to version 4 of the Node.js programming model for Azure Functions
-This article discusses the differences between version 3 and version 4 of the Node.js programming model and how to upgrade an existing v3 app. If you want to create a brand new v4 app instead of upgrading an existing v3 app, see the tutorial for either [VS Code](./create-first-function-cli-node.md) or [Azure Functions Core Tools](./create-first-function-vs-code-node.md). This article uses "TIP" sections to highlight the most important concrete actions you should take to upgrade your app.
+This article discusses the differences between version 3 and version 4 of the Node.js programming model and how to upgrade an existing v3 app. If you want to create a new v4 app instead of upgrading an existing v3 app, see the tutorial for either [Visual Studio Code (VS Code)](./create-first-function-cli-node.md) or [Azure Functions Core Tools](./create-first-function-vs-code-node.md). This article uses "tip" alerts to highlight the most important concrete actions that you should take to upgrade your app.
-Version 4 was designed with the following goals in mind:
+Version 4 is designed to provide Node.js developers with the following benefits:
-- Provide a familiar and intuitive experience to Node.js developers
-- Make the file structure flexible with support for full customization
-- Switch to a code-centric approach for defining function configuration
+- Provide a familiar and intuitive experience to Node.js developers.
+- Make the file structure flexible with support for full customization.
+- Switch to a code-centric approach for defining function configuration.
[!INCLUDE [Programming Model Considerations](../../includes/functions-nodejs-model-considerations.md)]
Version 4 of the Node.js programming model requires the following minimum versio
- [Azure Functions Runtime](./functions-versions.md) v4.16+
- [Azure Functions Core Tools](./functions-run-local.md) v4.0.5095+ (if running locally)
-## Enable v4 programming model
+## Enable the v4 programming model
-The following application setting is required to run the v4 programming model while it is in preview:
-- Name: `AzureWebJobsFeatureFlags`
-- Value: `EnableWorkerIndexing`
-
-If you're running locally using [Azure Functions Core Tools](functions-run-local.md), you should add this setting to your `local.settings.json` file. If you're running in Azure, follow these steps with the tool of your choice:
+To indicate that your function code is using the v4 model, you need to set the `EnableWorkerIndexing` flag on the `AzureWebJobsFeatureFlags` application setting. When you're running locally, add `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing` to your *local.settings.json* file. When you're running in Azure, you add this application setting by using the tool of your choice.
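For local development, the flag goes in the `Values` section of the settings file. Here's a minimal *local.settings.json* sketch; the worker runtime entry is a typical default for Node.js apps, not something this flag requires:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "AzureWebJobsFeatureFlags": "EnableWorkerIndexing"
  }
}
```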
# [Azure CLI](#tab/azure-cli-set-indexing-flag)
Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOUR
# [VS Code](#tab/vs-code-set-indexing-flag)
-1. Make sure you have the [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed
-1. Press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`.
-1. Choose your subscription and function app when prompted
-1. For the name, type `AzureWebJobsFeatureFlags` and press <kbd>Enter</kbd>.
-1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>.
+1. Make sure you have the [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed.
+1. Select the <kbd>F1</kbd> key to open the command palette. In the command palette, search for and select **Azure Functions: Add New Setting**.
+1. Choose your subscription and function app when prompted.
+1. For the name, type **AzureWebJobsFeatureFlags** and select the <kbd>Enter</kbd> key.
+1. For the value, type **EnableWorkerIndexing** and select the <kbd>Enter</kbd> key.
## Include the npm package
-For the first time, the [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package contains the primary source code that backs the Node.js programming model. In previous versions, that code shipped directly in Azure and the npm package only had the TypeScript types. Moving forward, you need to include this package for both TypeScript and JavaScript apps. You _can_ include the package for existing v3 apps, but it isn't required.
+In v4, the [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package contains the primary source code that backs the Node.js programming model. In previous versions, that code shipped directly in Azure and the npm package had only the TypeScript types. You now need to include this package for both TypeScript and JavaScript apps. You _can_ include the package for existing v3 apps, but it isn't required.
> [!TIP]
-> Make sure the `@azure/functions` package is listed in the `dependencies` section (not `devDependencies`) of your `package.json` file. You can install v4 with the command
+> Make sure the `@azure/functions` package is listed in the `dependencies` section (not `devDependencies`) of your *package.json* file. You can install v4 by using the following command:
> ```
> npm install @azure/functions@preview
> ```

## Set your app entry point
-In v4 of the programming model, you can structure your code however you want. The only files you need at the root of your app are `host.json` and `package.json`. Otherwise, you define the file structure by setting the `main` field in your `package.json` file. The `main` field can be set to a single file or multiple files by using a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)). Common values for the `main` field may be:
-- TypeScript
- - `dist/src/index.js`
- - `dist/src/functions/*.js`
-- JavaScript
- - `src/index.js`
- - `src/functions/*.js`
+In v4 of the programming model, you can structure your code however you want. The only files that you need at the root of your app are *host.json* and *package.json*.
+
+Otherwise, you define the file structure by setting the `main` field in your *package.json* file. You can set the `main` field to a single file or multiple files by using a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)). Common values for the `main` field might be:
+
+- TypeScript:
+ - `dist/src/index.js`
+ - `dist/src/functions/*.js`
+- JavaScript:
+ - `src/index.js`
+ - `src/functions/*.js`
> [!TIP]
-> Make sure you define a `main` field in your `package.json` file
+> Make sure you define a `main` field in your *package.json* file.
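As an illustration, a *package.json* for the JavaScript layout above might look like the following. This is a minimal sketch: the package name and version range are placeholders, and the `main` glob should match your own file structure:

```json
{
  "name": "my-function-app",
  "version": "1.0.0",
  "main": "src/functions/*.js",
  "dependencies": {
    "@azure/functions": "^4.0.0"
  }
}
```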
## Switch the order of arguments
-The trigger input is now the first argument to your function handler instead of the invocation context. The invocation context, now the second argument, was simplified in v4 and isn't as required as the trigger input - it can be left off if you aren't using it.
+The trigger input, instead of the invocation context, is now the first argument to your function handler. The invocation context, now the second argument, is simplified in v4 and is optional: you can leave it off if you aren't using it.
> [!TIP]
-> Switch the order of your arguments. For example if you are using an http trigger, switch `(context, request)` to either `(request, context)` or just `(request)` if you aren't using the context.
+> Switch the order of your arguments. For example, if you're using an HTTP trigger, switch `(context, request)` to either `(request, context)` or just `(request)` if you aren't using the context.
## Define your function in code
-Say goodbye to `function.json` files! All of the configuration that was previously specified in a `function.json` file is now defined directly in your TypeScript or JavaScript files. In addition, many properties now have a default so that you don't have to specify them every time.
+You no longer have to create and maintain those separate *function.json* configuration files. You can now fully define your functions directly in your TypeScript or JavaScript files. In addition, many properties now have defaults so that you don't have to specify them every time.
# [v4](#tab/v4)
module.exports = async function (context, req) {
> [!TIP]
-> Move the config from your `function.json` file to your code. The type of the trigger will correspond to a method on the `app` object in the new model. For example, if you use an `httpTrigger` type in `function.json`, you will now call `app.http()` in your code to register the function. If you use `timerTrigger`, you will now call `app.timer()` and so on.
-
+> Move the configuration from your *function.json* file to your code. The type of the trigger corresponds to a method on the `app` object in the new model. For example, if you use an `httpTrigger` type in *function.json*, call `app.http()` in your code to register the function. If you use `timerTrigger`, call `app.timer()`.
## Review your usage of context
-The `context` object has been simplified to reduce duplication and make it easier to write unit tests. For example, we streamlined the primary input and output so that they're only accessed as the argument and return value of your function handler. The primary input and output can't be accessed on the `context` object anymore, but you must still access _secondary_ inputs and outputs on the `context` object. For more information about secondary inputs and outputs, see the [Node.js developer guide](./functions-reference-node.md#extra-inputs-and-outputs).
+In v4, the `context` object is simplified to reduce duplication and to make writing unit tests easier. For example, we streamlined the primary input and output so that they're accessed only as the argument and return value of your function handler.
+
+You can't access the primary input and output on the `context` object anymore, but you must still access _secondary_ inputs and outputs on the `context` object. For more information about secondary inputs and outputs, see the [Node.js developer guide](./functions-reference-node.md#extra-inputs-and-outputs).
### Get the primary input as an argument
-The primary input is also called the "trigger" and is the only required input or output. You must have one and only one trigger.
+The primary input is also called the *trigger* and is the only required input or output. You must have one (and only one) trigger.
# [v4](#tab/v4)
-v4 only supports one way of getting the trigger input, as the first argument.
+Version 4 supports only one way of getting the trigger input, as the first argument:
```javascript
async function helloWorld1(request, context) {
async function helloWorld1(request, context) {
# [v3](#tab/v3)
-v3 supports several different ways of getting the trigger input.
+Version 3 supports several ways of getting the trigger input:
```javascript
async function helloWorld1(context, request) {
async function helloWorld1(context, request) {
# [v4](#tab/v4)
-v4 only supports one way of setting the primary output, through the return value.
+Version 4 supports only one way of setting the primary output, through the return value:
```javascript
return {
return {
# [v3](#tab/v3)
-v3 supports several different ways of setting the primary output.
+Version 3 supports several ways of setting the primary output:
```javascript
// Option 1
context.done(null, {
});

// Option 3, but you can't use this option with any async code:
context.res.send(`Hello, ${name}!`);
-// Option 4, if "name" in "function.json" is "res":
+// Option 4, if "name" in function.json is "res":
context.bindings.res = { body: `Hello, ${name}!` }
-// Option 5, if "name" in "function.json" is "$return":
+// Option 5, if "name" in function.json is "$return":
return { body: `Hello, ${name}!` };
return {
> [!TIP]
-> Make sure you are always returning the output in your function handler, instead of setting it with the `context` object.
+> Make sure you always return the output in your function handler, instead of setting it with the `context` object.
### Create a test context
-v3 doesn't support creating an invocation context outside of the Azure Functions runtime, making it difficult to author unit tests. v4 allows you to create an instance of the invocation context, although the information during tests isn't detailed unless you add it yourself.
+Version 3 doesn't support creating an invocation context outside the Azure Functions runtime, so authoring unit tests can be difficult. Version 4 allows you to create an instance of the invocation context, although the information during tests isn't detailed unless you add it yourself.
# [v4](#tab/v4)
Not possible.
## Review your usage of HTTP types
-The http request and response types are now a subset of the [fetch standard](https://developer.mozilla.org/docs/Web/API/fetch) instead of being types unique to Azure Functions. The types use Node.js's [`undici`](https://undici.nodejs.org/) package, which follows the fetch standard and is [currently being integrated](https://github.com/nodejs/undici/issues/1737) into Node.js core.
+The HTTP request and response types are now a subset of the [fetch standard](https://developer.mozilla.org/docs/Web/API/fetch). They're no longer unique to Azure Functions.
+
+The types use the [`undici`](https://undici.nodejs.org/) package in Node.js. This package follows the fetch standard and is [currently being integrated](https://github.com/nodejs/undici/issues/1737) into Node.js core.
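Because the new types follow the fetch standard, you can explore the same access patterns outside Azure Functions by using the global `Request` type that Node.js 18+ provides through `undici`. The following is a standalone sketch, not Azure Functions code; the URL and payload are made up for illustration:

```javascript
// A minimal sketch (not Azure Functions code): in Node.js 18+, the
// fetch-standard Request type is available globally via undici, so you
// can try the same body, header, and query access patterns that the v4
// HttpRequest type exposes.
const request = new Request('http://localhost/api/hello?name=Azure', {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({ greeting: 'hi' }),
  duplex: 'half' // Node's undici may require this when a body is supplied
});

async function main() {
  // Query parameters come from the URL, as in v4's request.query
  const name = new URL(request.url).searchParams.get('name');
  // Headers use the fetch-standard Headers interface
  const contentType = request.headers.get('content-type');
  // The body is read with a typed method, such as json() or text()
  const body = await request.json();
  console.log(name, contentType, body.greeting);
}

main();
```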
### HttpRequest

# [v4](#tab/v4)
-- _**Body**_. You can access the body using a method specific to the type you would like to receive:
- ```javascript
+
+- *Body*. You can access the body by using a method specific to the type that you want to receive:
+
+ ```javascript
const body = await request.text();
const body = await request.json();
const body = await request.formData();
const body = await request.arrayBuffer();
const body = await request.blob();
```
-- _**Header**_:
+
+- *Header*:
+ ```javascript
const header = request.headers.get('content-type');
```
-- _**Query param**_:
+
+- *Query parameter*:
+ ```javascript
const name = request.query.get('name');
```

# [v3](#tab/v3)
-- _**Body**_. You can access the body in several ways, but the type returned isn't always consistent:
+
+- *Body*. You can access the body in several ways, but the type returned isn't always consistent:
+ ```javascript
// returns a string, object, or Buffer
const body = request.body;
The http request and response types are now a subset of the [fetch standard](htt
// returns an object representing a form
const body = await request.parseFormBody();
```
-- _**Header**_. A header can be retrieved in several different ways:
+
+- *Header*. You can retrieve a header in several ways:
+ ```javascript
const header = request.get('content-type');
const header = request.headers.get('content-type');
const header = context.bindingData.headers['content-type'];
```
-- _**Query param**_:
+
+- *Query parameter*:
+ ```javascript
const name = request.query.name;
```
+

### HttpResponse

# [v4](#tab/v4)
-- _**Status**_:
+
+- *Status*:
+ ```javascript
return { status: 200 };
```
-- _**Body**_:
+
+- *Body*:
+ ```javascript
return { body: "Hello, world!" };
```
-- _**Header**_. You can set the header in two ways, depending if you're using the `HttpResponse` class or `HttpResponseInit` interface:
+
+- *Header*. You can set the header in two ways, depending on whether you're using the `HttpResponse` class or the `HttpResponseInit` interface:
+ ```javascript
const response = new HttpResponse();
response.headers.set('content-type', 'application/json');
return response;
```
+
```javascript
return {
    headers: { 'content-type': 'application/json' }
The http request and response types are now a subset of the [fetch standard](htt
```

# [v3](#tab/v3)
-- _**Status**_. A status can be set in several different ways:
+
+- *Status*. You can set a status in several ways:
+ ```javascript
context.res.status(200);
context.res = { status: 200}
The http request and response types are now a subset of the [fetch standard](htt
return { status: 200};
return { statusCode: 200 };
```
-- _**Body**_. A body can be set in several different ways:
+
+- *Body*. You can set a body in several ways:
+ ```javascript
context.res.send("Hello, world!");
context.res.end("Hello, world!");
context.res = { body: "Hello, world!" }
return { body: "Hello, world!" };
```
-- _**Header**_. A header can be set in several different ways:
+
+- *Header*. You can set a header in several ways:
+ ```javascript
response.set('content-type', 'application/json');
response.setHeader('content-type', 'application/json');
The http request and response types are now a subset of the [fetch standard](htt
> [!TIP]
-> Update any logic using the http request or response types to match the new methods. If you are using TypeScript, you should receive build errors if you use old methods.
+> Update any logic that uses the HTTP request or response types to match the new methods. If you're using TypeScript, you'll get build errors if you use old methods.
-## Troubleshooting
+## Troubleshoot
-If you see the following error, make sure you [set the `EnableWorkerIndexing` flag](#enable-v4-programming-model) and you're using the minimum version of all [requirements](#requirements):
+If you get the following error, make sure that you [set the `EnableWorkerIndexing` flag](#enable-the-v4-programming-model) and that you're using the minimum version of all [requirements](#requirements):
> No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
-If you see the following error, make sure you're using Node.js version 18.x:
+If you get the following error, make sure that you're using Node.js version 18.x:
> System.Private.CoreLib: Exception while executing function: Functions.httpTrigger1. System.Private.CoreLib: Result: Failure > Exception: undici_1.Request is not a constructor
-For any other issues or feedback, feel free to file an issue on our [GitHub repo](https://github.com/Azure/azure-functions-nodejs-library/issues).
+For any other problems or to give feedback, file an issue in the [Azure Functions Node.js repository](https://github.com/Azure/azure-functions-nodejs-library/issues).
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
The `#load` directive works only with *.csx* files, not with *.cs* files.
## Binding to method return value
-You can use a method return value for an output binding, by using the name `$return` in *function.json*. For examples, see [Triggers and bindings](./functions-bindings-return-value.md).
+You can use a method return value for an output binding, by using the name `$return` in *function.json*.
+
+```json
+{
+ "name": "$return",
+ "type": "blob",
+ "direction": "out",
+ "path": "output-container/{id}"
+}
+```
+
+Here's the C# script code using the return value, followed by an async example:
+
+```csharp
+public static string Run(WorkItem input, ILogger log)
+{
+ string json = string.Format("{{ \"id\": \"{0}\" }}", input.Id);
+ log.LogInformation($"C# script processed queue message. Item={json}");
+ return json;
+}
+```
+
+```csharp
+public static Task<string> Run(WorkItem input, ILogger log)
+{
+ string json = string.Format("{{ \"id\": \"{0}\" }}", input.Id);
+ log.LogInformation($"C# script processed queue message. Item={json}");
+ return Task.FromResult(json);
+}
+```
Use the return value only if a successful function execution always results in a return value to pass to the output binding. Otherwise, use `ICollector` or `IAsyncCollector`, as shown in the following section.
azure-functions Functions Reference Fsharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-fsharp.md
F# for Azure Functions is a solution for easily running small pieces of code, or
This article assumes that you've already read the [Azure Functions developer reference](functions-reference.md).
-## How .fsx works
+## How an F# script works
An `.fsx` file is an F# script. It can be thought of as an F# project that's contained in a single file. The file contains both the code for your program (in this case, your Azure Function) and directives for managing dependencies. When you use an `.fsx` file for an Azure Function, commonly required assemblies are automatically included for you, allowing you to focus on the function rather than "boilerplate" code.

## Folder structure
-The folder structure for an F# script project looks like the following:
+The folder structure for an F# script project adheres to the following pattern:
```
FunctionsProject
FunctionsProject
There's a shared [host.json](functions-host-json.md) file that can be used to configure the function app. Each function has its own code file (.fsx) and binding configuration file (function.json).
-The binding extensions required in [version 2.x and later versions](functions-versions.md) of the Functions runtime are defined in the `extensions.csproj` file, with the actual library files in the `bin` folder. When developing locally, you must [register binding extensions](./functions-bindings-register.md#extension-bundles). When developing functions in the Azure portal, this registration is done for you.
+The binding extensions required in [version 2.x and later versions](functions-versions.md) of the Functions runtime are defined in the `extensions.csproj` file, with the actual library files in the `bin` folder. When you're developing locally, you must [register binding extensions](./functions-bindings-register.md#extension-bundles). When you're developing functions in the Azure portal, this registration is done for you.
## Binding to arguments

Each binding supports some set of arguments, as detailed in the [Azure Functions triggers and bindings developer reference](functions-triggers-bindings.md). For example, one of the argument bindings a blob trigger supports is a POCO, which can be expressed using an F# record:
let Run(input: string, item: byref<Item>) =
    item <- result
```
+You can use a method return value for an output binding, by using the name `$return` in *function.json*:
+
+```json
+{
+ "name": "$return",
+ "type": "blob",
+ "direction": "out",
+ "path": "output-container/{id}"
+}
+```
+
+Here's the F# code that uses the return value:
+
+```fsharp
+let Run(input: WorkItem, log: ILogger) =
+ let json = String.Format("{{ \"id\": \"{0}\" }}", input.Id)
+ log.LogInformation(sprintf "F# script processed queue message '%s'" json)
+ json
+```
+

## Logging
-To log output to your [streaming logs](../app-service/troubleshoot-diagnostic-logs.md) in F#, your function should take an argument of type [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger). For consistency, we recommend this argument is named `log`. For example:
+To log output to your [streaming logs](../app-service/troubleshoot-diagnostic-logs.md) in F#, your function should take an argument of type [`ILogger`](/dotnet/api/microsoft.extensions.logging.ilogger). For consistency, we recommend this argument is named `log`. For example:
```fsharp let Run(blob: string, output: byref<string>, log: ILogger) =
The following assemblies are automatically added by the Azure Functions hosting
* `System.Web.Http`
* `System.Net.Http.Formatting`
-In addition, the following assemblies are special cased and may be referenced by simplename (e.g. `#r "AssemblyName"`):
+In addition, the following assemblies are special cased and may be referenced by simple name (e.g. `#r "AssemblyName"`):
* `Newtonsoft.Json` * `Microsoft.WindowsAzure.Storage`
In addition, the following assemblies are special cased and may be referenced by
If you need to reference a private assembly, you can upload the assembly file into a `bin` folder relative to your function and reference it by using the file name (e.g. `#r "MyAssembly.dll"`). For information on how to upload files to your function folder, see the following section on package management.

## Editor Prelude
-An editor that supports F# Compiler Services will not be aware of the namespaces and assemblies that Azure Functions automatically includes. As such, it can be useful to include a prelude that helps the editor find the assemblies you are using, and to explicitly open namespaces. For example:
+An editor that supports F# Compiler Services won't be aware of the namespaces and assemblies that Azure Functions automatically includes. As such, it can be useful to include a prelude that helps the editor find the assemblies you're using, and to explicitly open namespaces. For example:
```fsharp #if !COMPILED
When Azure Functions executes your code, it processes the source with `COMPILED`
<a name="package"></a>

## Package management
-To use NuGet packages in an F# function, add a `project.json` file to the function's folder in the function app's file system. Here is an example `project.json` file that adds a NuGet package reference to `Microsoft.ProjectOxford.Face` version 1.1.0:
+To use NuGet packages in an F# function, add a `project.json` file to the function's folder in the function app's file system. Here's an example `project.json` file that adds a NuGet package reference to `Microsoft.ProjectOxford.Face` version 1.1.0:
```json
{
You may wish to put automatically referenced assemblies in your editor prelude,
### How to add a `project.json` file to your Azure Function

1. Begin by making sure your function app is running, which you can do by opening your function in the Azure portal. This also gives access to the streaming logs where package installation output will be displayed.
-2. To upload a `project.json` file, use one of the methods described in [how to update function app files](functions-reference.md#fileupdate). If you are using [Continuous Deployment for Azure Functions](functions-continuous-deployment.md), you can add a `project.json` file to your staging branch in order to experiment with it before adding it to your deployment branch.
-3. After the `project.json` file is added, you will see output similar to the following example in your function's streaming log:
+2. To upload a `project.json` file, use one of the methods described in [how to update function app files](functions-reference.md#fileupdate). If you're using [Continuous Deployment for Azure Functions](functions-continuous-deployment.md), you can add a `project.json` file to your staging branch in order to experiment with it before adding it to your deployment branch.
+3. After the `project.json` file is added, you'll see output similar to the following example in your function's streaming log:
```
2016-04-04T19:02:48.745 Restoring packages.
let Run(timer: TimerInfo, log: ILogger) =
    log.LogInformation("Site = " + GetEnvironmentVariable("WEBSITE_SITE_NAME"))
```
-## Reusing .fsx code
+## Reusing F# script code
You can use code from other `.fsx` files by using a `#load` directive. For example:

`run.fsx`
let mylog(log: ILogger, text: string) =
    log.LogInformation(text);
```
-Paths provides to the `#load` directive are relative to the location of your `.fsx` file.
+Paths provided to the `#load` directive are relative to the location of your `.fsx` file.
* `#load "logger.fsx"` loads a file located in the function folder. * `#load "package\logger.fsx"` loads a file located in the `package` folder in the function folder.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false
Previously updated : 06/23/2023
Last updated : 07/26/2023

# Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Monitor](../../azure-monitor/index.yml) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; |
| [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure OpenAI](../../ai-services/openai/index.yml) | &#x2705; | &#x2705; |
| [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; |
| [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; |
| [Azure Red Hat OpenShift](../../openshift/index.yml) | &#x2705; | &#x2705; |
azure-large-instances What Is Azure Large Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-large-instances/what-is-azure-large-instances.md
Last updated 06/01/2023
# What is Azure Large Instances?
-While Microsoft Azure offers a cloud infrastructure with a wide range of integrated cloud services to meet your business needs,
-in some cases, you may need to run services on Azure large servers without a virtualization layer. You may also require root access and control over the operating system (OS). To meet these needs, Azure offers Azure Large Instances for several high-value, mission-critical applications.
+While Microsoft Azure offers a cloud infrastructure with a wide range of integrated cloud services to meet your business needs, in some cases, you may need to run services on Azure large servers without a virtualization layer. You may also require root access and control over the operating system (OS). To meet these needs, Azure offers Azure Large Instances for several high-value, mission-critical applications.
Azure Large Instances comprises dedicated large compute instances with the following key features:
Storage and compute units assigned to different tenants cannot see each other or
The Linux OS version for Azure Large Instances is Red Hat Enterprise Linux (RHEL) 8.4.

>[!Note]
-> Remember,Check properties of an instance Azure Large Instances is a BYOL model.
-
-Microsoft loads base image with RHEL 8.4, but customers can choose to upgrade to newer versions in collaboration with Microsoft team.
+> Remember, Azure Large Instances is a BYOL model. Microsoft loads the base image with RHEL 8.4, but customers can choose to upgrade to newer versions in collaboration with the Microsoft team.
## Storage
azure-maps Add Tile Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-tile-layer-map-ios.md
map.layers.insertLayer(
)
```
-The following screenshot shows the above code overlaying a web-mapping tile service of imagery from the [U.S. Geological Survey (USGS) National Map](https://viewer.nationalmap.gov/services/) on top of a map, below the roads and labels.
+The following screenshot shows the above code overlaying a web-mapping tile service of imagery from the U.S. Geological Survey (USGS) National Map on top of a map, below the roads and labels.
:::image type="content" source="./media/ios-sdk/Add-tile-layer-to-map-ios/wmts.png" alt-text="This image shows the above code overlaying a web-mapping tile service of imagery from the U.S. Geological Survey (USGS) National Map on top of a map, below the roads and labels.":::
azure-maps How To Add Tile Layer Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-tile-layer-android-map.md
map.layers.add(layer, "transit")
::: zone-end
-The following screenshot shows the above code overlaying a web-mapping tile service of imagery from the [U.S. Geological Survey (USGS) National Map](https://viewer.nationalmap.gov/services/) on top of a map, below the roads and labels.
+The following screenshot shows the above code overlaying a web-mapping tile service of imagery from the U.S. Geological Survey (USGS) National Map on top of a map, below the roads and labels.
![Android map displaying WMTS tile layer](media/how-to-add-tile-layer-android-map/android-tile-layer-wmts.jpg)
azure-maps Map Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-tile-layer.md
A web-mapping tile service (WMTS) is an Open Geospatial Consortium (OGC) standar
For a fully functional sample that shows how to create a tile layer that points to a Web Mapping Tile Service (WMTS), see the [WMTS Tile Layer] sample in the [Azure Maps Samples]. For the source code for this sample, see [WMTS Tile Layer source code].
-The following screenshot shows the [WMTS Tile Layer] sample overlaying a web-mapping tile service of imagery from the [U.S. Geological Survey (USGS) National Map] on top of a map, below roads and labels.
+The following screenshot shows the WMTS Tile Layer sample overlaying a web-mapping tile service of imagery from the U.S. Geological Survey (USGS) National Map on top of a map, below roads and labels.
:::image type="content" source="./media/map-add-tile-layer/wmts-tile-layer.png" alt-text="A screenshot of a map with a tile layer that points to a Web Mapping Tile Service (WMTS) overlay.":::
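Under the hood, a tile layer fills in the `{z}`/`{x}`/`{y}` placeholders of its tile URL template for each tile it requests. A minimal sketch of that substitution, assuming an illustrative template URL (not a real endpoint):

```javascript
// Sketch of how a tile layer resolves its tileUrl template: the {z}/{x}/{y}
// placeholders are replaced per tile request. Template URL is illustrative.
function expandTileUrl(template, z, x, y) {
  return template
    .replace('{z}', String(z))
    .replace('{x}', String(x))
    .replace('{y}', String(y));
}

const template = 'https://example.com/tiles/{z}/{y}/{x}.png';
const url = expandTileUrl(template, 4, 5, 6);
// url → 'https://example.com/tiles/4/6/5.png'
```

The same substitution happens for every visible tile at the current zoom level, which is why a single template string is enough to describe the whole service.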
See the following articles for more code samples to add to your maps:
[OpenSeaMap project]: https://openseamap.org/index.php
[U.S. Geological Survey (USGS)]: https://mrdata.usgs.gov/
-[U.S. Geological Survey (USGS) National Map]:https://viewer.nationalmap.gov/services
+[U.S. Geological Survey (USGS) National Map]:https://viewer.nationalmap.gov/services
azure-maps Tutorial Iot Hub Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-iot-hub-maps.md
Title: 'Tutorial: Implement IoT spatial analytics | Microsoft Azure Maps'
+ Title: 'Tutorial: Implement IoT spatial analytics'
+ description: Tutorial on how to integrate IoT Hub with Microsoft Azure Maps service APIs
If you don't have an Azure subscription, create a [free account] before you begi
This tutorial uses the [Postman] application, but you can choose a different API development environment.
+>[!IMPORTANT]
+> In the URL examples, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
+ ## Use case: rental car tracking
+Let's say that a car rental company wants to log location information, distance traveled, and running state for its rental cars. The company also wants to store this information whenever a car leaves the correct authorized geographic region.
The following figure highlights the geofence area in blue. The rental car's rout
## Create an Azure storage account
-To store car violation tracking data, create a [general-purpose v2 storage account] in your resource group. If you haven't created a resource group, follow the directions in [create resource groups][resource group]. In this tutorial, you'll name your resource group *ContosoRental*.
+To store car violation tracking data, create a [general-purpose v2 storage account] in your resource group. If you haven't created a resource group, follow the directions in [create resource groups][resource group]. Name your resource group *ContosoRental*.
To create a storage account, follow the instructions in [create a storage account]. In this tutorial, name the storage account *contosorentalstorage*, but in general you can name it anything you like.
When you successfully create your storage account, you then need to create a con
## Upload a geofence
-Next, use the [Postman] app to [upload the geofence] to Azure Maps. The geofence defines the authorized geographical area for our rental vehicle. You'll be using the geofence in your Azure function to determine whether a car has moved outside the geofence area.
+Next, use the [Postman] app to [upload the geofence] to Azure Maps. The geofence defines the authorized geographical area for our rental vehicle. Use the geofence in your Azure function to determine whether a car has moved outside the geofence area.
Follow these steps to upload the geofence by using the Azure Maps Data Upload API:
1. Open the Postman app, select **New** again. In the **Create New** window, select **HTTP Request**, and enter a request name for the request.
-2. Select the **POST** HTTP method in the builder tab, and enter the following URL to upload the geofence to the Data Upload API. Make sure to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
+2. Select the **POST** HTTP method in the builder tab, and enter the following URL to upload the geofence to the Data Upload API.
```HTTP
https://us.atlas.microsoft.com/mapData?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&dataFormat=geojson
Follow these steps to upload the geofence by using the Azure Maps Data Upload AP
https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0
```
-5. To check the status of the API call, create a **GET** HTTP request on the `status URL`. You'll need to append your subscription key to the URL for authentication. The **GET** request should like the following URL:
+5. To check the status of the API call, create a **GET** HTTP request on the `status URL`. Add your subscription key to the URL for authentication. The **GET** request should look like the following URL:
```HTTP
https://us.atlas.microsoft.com/mapData/{operationId}/status?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
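The upload and status URLs above are ordinary query-string URLs. As a hedged sketch (`buildUrl` is a hypothetical helper, not part of the tutorial), assembling one from its parameters looks like this:

```javascript
// Hypothetical helper: join a base endpoint with URL-encoded query parameters,
// mirroring the shape of the Data Upload URL shown in the steps above.
function buildUrl(base, params) {
  const query = Object.entries(params)
    .map(([k, v]) => `${k}=${encodeURIComponent(v)}`)
    .join('&');
  return `${base}?${query}`;
}

const subscriptionKey = 'abc123'; // placeholder for your real Azure Maps key
const uploadUrl = buildUrl('https://us.atlas.microsoft.com/mapData', {
  'subscription-key': subscriptionKey,
  'api-version': '2.0',
  dataFormat: 'geojson',
});
// uploadUrl → 'https://us.atlas.microsoft.com/mapData?subscription-key=abc123&api-version=2.0&dataFormat=geojson'
```

Encoding each value with `encodeURIComponent` keeps the request valid even when a key or parameter contains reserved characters.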
Follow these steps to upload the geofence by using the Azure Maps Data Upload AP
## Create an IoT hub
-IoT Hub enables secure and reliable bi-directional communication between an IoT application and the devices it manages. For this tutorial, you want to get information from your in-vehicle device to determine the location of the rental car. In this section, you create an IoT hub within the *ContosoRental* resource group. This hub will be responsible for publishing your device telemetry events.
+IoT Hub enables secure and reliable bi-directional communication between an IoT application and the devices it manages. For this tutorial, you want to get information from your in-vehicle device to determine the location of the rental car. In this section, you create an IoT hub within the *ContosoRental* resource group. This hub is responsible for publishing your device telemetry events.
To create an IoT hub in the *ContosoRental* resource group, follow the steps in [create an IoT hub].
## Register a device in your IoT hub
-Devices can't connect to the IoT hub unless they're registered in the IoT hub identity registry. Here, you'll create a single device with the name, *InVehicleDevice*. To create and register the device within your IoT hub, follow the steps in [register a new device in the IoT hub]. Make sure to copy the primary connection string of your device. You'll need it later.
+Devices can't connect to the IoT hub unless they're registered in the IoT hub identity registry. Create a single device with the name, *InVehicleDevice*. To create and register the device within your IoT hub, follow the steps in [register a new device in the IoT hub]. Make sure to copy the primary connection string of your device. You'll need it later.
## Create a function and add an Event Grid subscription
Azure Functions is a serverless compute service that allows you to run small pieces of code ("functions"), without the need to explicitly provision or manage compute infrastructure. To learn more, see [Azure Functions].
-A function is triggered by a certain event. Here, you'll create a function that is triggered by an Event Grid trigger. Create the relationship between trigger and function by creating an event subscription for IoT Hub device telemetry events. When a device telemetry event occurs, your function is called as an endpoint, and receives the relevant data for the device you previously registered in IoT Hub.
+A function is triggered by a certain event. Create a function triggered by an Event Grid trigger. Create the relationship between trigger and function by creating an event subscription for IoT Hub device telemetry events. When a device telemetry event occurs, your function is called as an endpoint, and receives the relevant data for the device you previously registered in IoT Hub.
-Here's the [C# script] code that your function will contain.
+Here's the [C# script] code that your function contains.
Now, set up your Azure function.
:::image type="content" source="./media/tutorial-iot-hub-maps/function-create.png" alt-text="Screenshot of create a function.":::
-1. Give the function a name. In this tutorial, you'll use the name, *GetGeoFunction*, but in general you can use any name you like. Select **Create function**.
+1. Give the function a name. In this tutorial, use the name *GetGeoFunction*, but in general you can use any name you like. Select **Create function**.
1. In the left menu, select the **Code + Test** pane. Copy and paste the [C# script] into the code window.
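The function's job is to decide whether the reported position lies inside the geofence; the tutorial's C# script delegates that to the Spatial Geofence Get API. Purely as an illustration of the kind of check the service performs (not the tutorial's actual code), a ray-casting point-in-polygon test can be sketched as:

```javascript
// Illustrative only: the tutorial calls the Spatial Geofence Get API rather
// than testing the polygon locally. Ray-casting check for a simple ring.
function insideGeofence(point, ring) {
  // ring: array of [lon, lat] vertices; point: [lon, lat]
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const [xi, yi] = ring[i];
    const [xj, yj] = ring[j];
    const intersects =
      (yi > point[1]) !== (yj > point[1]) &&
      point[0] < ((xj - xi) * (point[1] - yi)) / (yj - yi) + xi;
    if (intersects) inside = !inside; // toggle on each edge crossing
  }
  return inside;
}

const fence = [[0, 0], [10, 0], [10, 10], [0, 10]];
// insideGeofence([5, 5], fence) → true; insideGeofence([15, 5], fence) → false
```

Using the service instead of a local check also gives you distance-to-boundary and time-window handling, which a plain point-in-polygon test does not.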
In your example scenario, you only want to receive messages when the rental car
## Send telemetry data to IoT Hub
-When your Azure function is running, you can now send telemetry data to the IoT hub, which will route it to Event Grid. Use a C# application to simulate location data for an in-vehicle device of a rental car. To run the application, you need [.NET Core SDK 3.1] on your development computer. Follow these steps to send simulated telemetry data to the IoT hub:
+When your Azure function is running, you can now send telemetry data to the IoT hub, which routes it to Event Grid. Use a C# application to simulate location data for an in-vehicle device of a rental car. To run the application, you need [.NET Core SDK 3.1] on your development computer. Follow these steps to send simulated telemetry data to the IoT hub:
1. If you haven't done so already, download the [rentalCarSimulation] C# project.
When your Azure function is running, you can now send telemetry data to the IoT
dotnet run ```
- Your local terminal should look like the one below.
+ Your local terminal should look like the following screenshot.
:::image type="content" source="./media/tutorial-iot-hub-maps/terminal.png" alt-text="Screenshot of terminal output.":::
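The simulated messages the rentalCarSimulation project sends are JSON telemetry payloads. As a rough sketch only (the field names below are illustrative, not the sample's actual schema):

```javascript
// Hypothetical telemetry payload builder; field names are illustrative and do
// not claim to match the rentalCarSimulation sample's schema.
function makeTelemetry(deviceId, lon, lat, distanceKm) {
  return JSON.stringify({
    deviceId,
    location: { type: 'Point', coordinates: [lon, lat] }, // GeoJSON point
    distanceTraveledKm: distanceKm,
    timestamp: new Date().toISOString(),
  });
}

const message = makeTelemetry('InVehicleDevice', -122.13, 47.64, 12.5);
```

Each message like this is what IoT Hub routes to Event Grid, which in turn invokes the function as an endpoint.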
There are no resources that require cleanup.
To learn more about how to send device-to-cloud telemetry, and the other way around, see:
> [!div class="nextstepaction"]
-> [Send telemetry from a device](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp)
+> [Send telemetry from a device]
+[.NET Core SDK 3.1]: https://dotnet.microsoft.com/download/dotnet/3.1
+[Azure certified devices]: https://devicecatalog.azure.com/
+[Azure Functions]: ../azure-functions/functions-overview.md
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Azure Maps REST APIs]: /rest/api/maps/spatial/getgeofence
+[C# script]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/blob/master/src/Azure%20Function/run.csx
+[create a storage account]: ../storage/common/storage-account-create.md?tabs=azure-portal
+[Create an Azure storage account]: #create-an-azure-storage-account
+[create an IoT hub]: ../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp#create-an-iot-hub
[free account]: https://azure.microsoft.com/free/
-[resource group]: ../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups
-[rentalCarSimulation]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/tree/master/src/rentalCarSimulation
-[Postman]: https://www.postman.com/
-[Plug and Play schema for geospatial data]: https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v1-preview/schemas/geospatial.md
-[Spatial Geofence Get API]: /rest/api/maps/spatial/getgeofence
-[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse
[general-purpose v2 storage account]: ../storage/common/storage-account-overview.md
-[create a storage account]: ../storage/common/storage-account-create.md?tabs=azure-portal
-[upload the geofence]: ./geofence-geojson.md
+[Get Geofence]: /rest/api/maps/spatial/getgeofence
+[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse
+[IoT Hub message routing]: ../iot-hub/iot-hub-devguide-routing-query-syntax.md
+[IoT Plug and Play]: ../iot-develop/index.yml
[Open the JSON data file]: https://raw.githubusercontent.com/Azure-Samples/iothub-to-azure-maps-geofencing/master/src/Data/geofence.json?token=AKD25BYJYKDJBJ55PT62N4C5LRNN4
-[create an IoT hub]: ../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp#create-an-iot-hub
+[Plug and Play schema for geospatial data]: https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v1-preview/schemas/geospatial.md
+[Postman]: https://www.postman.com/
[register a new device in the IoT hub]: ../iot-hub/iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub
-[Azure Functions]: ../azure-functions/functions-overview.md
-[C# script]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/blob/master/src/Azure%20Function/run.csx
-[Create an Azure storage account]: #create-an-azure-storage-account
+[rentalCarSimulation]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/tree/master/src/rentalCarSimulation
+[resource group]: ../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups
+[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse
+[Send telemetry from a device]: ../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp
+[Spatial Geofence Get API]: /rest/api/maps/spatial/getgeofence
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[Upload a geofence]: #upload-a-geofence
+[upload the geofence]: ./geofence-geojson.md
[Use IoT Hub message routing]: ../iot-hub/iot-hub-devguide-messages-d2c.md
-[IoT Hub message routing]: ../iot-hub/iot-hub-devguide-routing-query-syntax.md
-[.NET Core SDK 3.1]: https://dotnet.microsoft.com/download/dotnet/3.1
-[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse
-[Get Geofence]: /rest/api/maps/spatial/getgeofence
-[Azure Maps REST APIs]: /rest/api/maps/spatial/getgeofence
-[IoT Plug and Play]: ../iot-develop/index.yml
-[Azure certified devices]: https://devicecatalog.azure.com/
azure-maps Tutorial Route Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-route-location.md
# Tutorial: How to display route directions using Azure Maps Route service and Map control
-This tutorial shows you how to use the Azure Maps [Route service API] and [Map control] to display route directions from start to end point. In this tutorial, you'll learn how to:
+This tutorial shows you how to use the Azure Maps [Route service API] and [Map control] to display route directions from start to end point. This tutorial demonstrates how to:
> [!div class="checklist"]
-> * Create and display the Map control on a web page.
+>
+> * Create and display the Map control on a web page.
> * Define the display rendering of the route by defining [Symbol layers] and [Line layers].
> * Create and add GeoJSON objects to the Map to represent start and end points.
> * Get route directions from start and end points using the [Get Route directions API].
The following steps show you how to create and display the Map control in a web
4. Save your changes to the file and open the HTML page in a browser. The map shown is the most basic map that you can make by calling `atlas.Map` using your Azure Maps account subscription key.
- :::image type="content" source="./media/tutorial-route-location/basic-map.png" alt-text="A screenshot showing the most basic map that you can make by calling atlas.Map using your Azure Maps account key.":::
+ :::image type="content" source="./media/tutorial-route-location/basic-map.png" alt-text="A screenshot showing the most basic map that you can make by calling `atlas.Map` using your Azure Maps account key.":::
## Define route display rendering
-In this tutorial, you'll render the route using a line layer. The start and end points are rendered using a symbol layer. For more information on adding line layers, see [Add a line layer to a map](map-add-line-layer.md). To learn more about symbol layers, see [Add a symbol layer to a map].
+In this tutorial, the route is rendered using a line layer. The start and end points are rendered using a symbol layer. For more information on adding line layers, see [Add a line layer to a map]. To learn more about symbol layers, see [Add a symbol layer to a map].
1. In the `GetMap` function, after initializing the map, add the following JavaScript code.
In this tutorial, you'll render the route using a line layer. The start and end
Some things to know about the above JavaScript:
- * This code implements the Map control's `ready` event handler. The rest of the code in this tutorial are placed inside the `ready` event handler.
+ * This code implements the Map control's `ready` event handler. The rest of the code in this tutorial is placed inside the `ready` event handler.
* In the map control's `ready` event handler, a data source is created to store the route from start to end point.
* To define how the route line is rendered, a line layer is created and attached to the data source. To ensure that the route line doesn't cover up the road labels, we've passed a second parameter with the value of `'labels'`.
- Next, a symbol layer is created and attached to the data source. This layer specifies how the start and end points are rendered.Expressions have been added to retrieve the icon image and text label information from properties on each point object. To learn more about expressions, see [Data-driven style expressions].
+ Next, a symbol layer is created and attached to the data source. This layer specifies how the start and end points are rendered. Expressions have been added to retrieve the icon image and text label information from properties on each point object. To learn more about expressions, see [Data-driven style expressions].
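The point objects the expressions read from are ordinary GeoJSON point features carrying icon and title properties. A minimal sketch, with illustrative property names (the tutorial's own code defines the actual ones):

```javascript
// Sketch of a GeoJSON point feature of the kind the symbol layer's expressions
// read. The 'title'/'icon' property names are illustrative assumptions.
function makeRoutePoint(lon, lat, title, icon) {
  return {
    type: 'Feature',
    geometry: { type: 'Point', coordinates: [lon, lat] },
    properties: { title, icon },
  };
}

const start = makeRoutePoint(-122.13, 47.64, 'Redmond', 'pin-round-blue');
const end = makeRoutePoint(-122.3, 47.6, 'Seattle', 'pin-blue');
```

Because the icon and title live on each feature's properties, one symbol layer can render both points differently without extra layers.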
-2. Next, set the start point at Microsoft, and the end point at a gas station in Seattle. Do this by appending the following code in the Map control's `ready` event handler:
+2. Next, set the start point at Microsoft, and the end point at a gas station in Seattle. The start and end points are created by appending the following code in the Map control's `ready` event handler:
```JavaScript
//Create the GeoJSON objects which represent the start and end points of the route.
This section shows you how to use the Azure Maps Route Directions API to get rou
The next tutorial shows you how to create a route query with restrictions, like mode of travel or type of cargo. You can then display multiple routes on the same map.
> [!div class="nextstepaction"]
-> [Find routes for different modes of travel](./tutorial-prioritized-routes.md)
+> [Find routes for different modes of travel]
-[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Route service API]: /rest/api/maps/route
-[Map control]: ./how-to-use-map-control.md
-[Symbol layers]: map-add-pin.md
-[Line layers]: map-add-line-layer.md
-[Get Route directions API]: /rest/api/maps/route/getroutedirections
-[route tutorial]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Route
-[Route to a destination]: https://samples.azuremaps.com/?sample=route-to-a-destination
-[atlas]: /javascript/api/azure-maps-control/atlas
-[atlas.Map]: /javascript/api/azure-maps-control/atlas.map
+[Add a line layer to a map]: map-add-line-layer.md
[Add a symbol layer to a map]: map-add-pin.md
+[atlas.Map]: /javascript/api/azure-maps-control/atlas.map
+[atlas]: /javascript/api/azure-maps-control/atlas
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
[Data-driven style expressions]: data-driven-style-expressions-web-sdk.md
+[Find routes for different modes of travel]: tutorial-prioritized-routes.md
[GeoJSON Point objects]: https://en.wikipedia.org/wiki/GeoJSON
-[setCamera(CameraOptions | CameraBoundsOptions & AnimationOptions)]: /javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-
+[Get Route directions API]: /rest/api/maps/route/getroutedirections
+[Line layers]: map-add-line-layer.md
+[Map control]: ./how-to-use-map-control.md
[MapControlCredential]: /javascript/api/azure-maps-rest/atlas.service.mapcontrolcredential
[pipeline]: /javascript/api/azure-maps-rest/atlas.service.pipeline
+[Route service API]: /rest/api/maps/route
+[Route to a destination]: https://samples.azuremaps.com/?sample=route-to-a-destination
+[route tutorial]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Route
[routeURL]: /javascript/api/azure-maps-rest/atlas.service.routeurl
+[setCamera(CameraOptions | CameraBoundsOptions & AnimationOptions)]: /javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Symbol layers]: map-add-pin.md
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
The Map Control API is a convenient client library. This API allows you to easil
Some things to know regarding the above JavaScript:
- * This is the core of the `GetMap` function, which initializes the Map Control API for your Azure Maps account key.
+ * This code is the core of the `GetMap` function; it initializes the Map Control API for your Azure Maps account key.
* `atlas` is the namespace that contains the API and related visual components.
* `atlas.Map` provides the control for a visual and interactive web map.
4. Save your changes to the file and open the HTML page in a browser. The map shown is the most basic map that you can make by calling `atlas.Map` using your account key.
- :::image type="content" source="./media/tutorial-search-location/basic-map.png" alt-text="A screen shot showing the most basic map that you can make by calling atlas.Map using your Azure Maps account key.":::
+ :::image type="content" source="./media/tutorial-search-location/basic-map.png" alt-text="A screenshot showing the most basic map that you can make by calling atlas.Map using your Azure Maps account key.":::
5. In the `GetMap` function, after initializing the map, add the following JavaScript code.
The Map Control API is a convenient client library. This API allows you to easil
* A `ready` event is added to the map, which fires when the map resources finish loading and the map is ready to be accessed.
* In the map `ready` event handler, a data source is created to store result data.
- * A symbol layer is created and attached to the data source. This layer specifies how the result data in the data source should be rendered. In this case, the result is rendered with a dark blue round pin icon, centered over the results coordinate, that allows other icons to overlap.
+ * A symbol layer is created and attached to the data source. This layer specifies how the result data in the data source should be rendered. In this case, the result is rendered with a dark blue round pin icon, centered over the result's coordinate, allowing other icons to overlap.
* The result layer is added to the map layers.
<a id="usesearch"></a>
This section shows how to use the Maps [Search API] to find a point of interest
3. Save the **MapSearch.html** file and refresh your browser. You should see the map centered on Seattle with round-blue pins for locations of gas stations in the area.
- :::image type="content" source="./media/tutorial-search-location/pins-map.png" alt-text="A screen shot showing the map resulting from the search, which is a map showing Seattle with round-blue pins at locations of gas stations.":::
+ :::image type="content" source="./media/tutorial-search-location/pins-map.png" alt-text="A screenshot showing the map resulting from the search, which is a map showing Seattle with round-blue pins at locations of gas stations.":::
4. You can see the raw data that the map is rendering by entering the following HTTPRequest in your browser. Replace `<Your Azure Maps Subscription Key>` with your subscription key.
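The raw response contains a `results` array with a `position` for each match. As a hedged sketch of extracting pin coordinates (the `results[].position.{lat,lon}` shape follows the Search API's documented JSON, but treat it as an assumption here):

```javascript
// Pull [lon, lat] pairs out of a Search API-style response. The response shape
// (results[].position.{lat,lon}) is assumed, not verified against a live call.
function toPinPositions(response) {
  return response.results.map(r => [r.position.lon, r.position.lat]);
}

const sample = { results: [{ position: { lat: 47.64, lon: -122.13 } }] };
// toPinPositions(sample) → [[-122.13, 47.64]]
```

Note the `[lon, lat]` ordering: GeoJSON and the map's data source expect longitude first, while the service reports `lat`/`lon` as named fields.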
At this point, the MapSearch page can display the locations of points of interes
The map that we've made so far only looks at the longitude/latitude data for the search results. However, the raw JSON that the Maps Search service returns contains additional information about each gas station, including the name and street address. You can incorporate that data into the map with interactive popup boxes.
-1. Add the following lines of code in the map `ready` event handler after the code to query the fuzzy search service. This code creates an instance of a Popup and add a mouseover event to the symbol layer.
+1. Add the following lines of code in the map `ready` event handler after the code to query the fuzzy search service. This code creates an instance of a Popup and adds a mouseover event to the symbol layer.
```javascript
// Create a popup but leave it closed so we can update it and display it later.
The map that we've made so far only looks at the longitude/latitude data for the
3. Save the file and refresh your browser. Now the map in the browser shows information popups when you hover over any of the search pins.
- :::image type="content" source="./media/tutorial-search-location/popup-map.png" alt-text="A screen shot of a map with information popups that appear when you hover over a search pin.":::
+ :::image type="content" source="./media/tutorial-search-location/popup-map.png" alt-text="A screenshot of a map with information popups that appear when you hover over a search pin.":::
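The popup content is built from each result's properties. As an illustration only (the `poi` and `address` field names follow the Search API response, as an assumption; the helper itself is hypothetical):

```javascript
// Hypothetical helper: format a search result's name and street address into
// the HTML a popup displays. Field names (poi.name, address.freeformAddress)
// follow the Search API response shape, assumed here.
function popupHtml(result) {
  const name = result.poi && result.poi.name ? result.poi.name : 'Unknown';
  const addr = result.address ? result.address.freeformAddress : '';
  return `<div style="padding:10px"><b>${name}</b><br/>${addr}</div>`;
}

const html = popupHtml({
  poi: { name: 'Shell' },
  address: { freeformAddress: '123 Main St, Seattle, WA' },
});
```

The mouseover handler would pass this string to the popup's content option and position the popup at the hovered pin's coordinate.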
* For the completed code used in this tutorial, see the [search tutorial] on GitHub.
* To view this sample live, see [Search for points of interest] on the **Azure Maps Code Samples** site.
The map that we've made so far only looks at the longitude/latitude data for the
The next tutorial demonstrates how to display a route between two locations.
> [!div class="nextstepaction"]
-> [Route to a destination](./tutorial-route-location.md)
+> [Route to a destination]
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[free account]: https://azure.microsoft.com/free/
+[Fuzzy Search service]: /rest/api/maps/search/get-search-fuzzy
[manage authentication in Azure Maps]: how-to-manage-authentication.md
-[search tutorial]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Search
-[Search for points of interest]: https://samples.azuremaps.com/?sample=search-for-points-of-interest
[MapControlCredential]: /javascript/api/azure-maps-rest/atlas.service.mapcontrolcredential
[pipeline]: /javascript/api/azure-maps-rest/atlas.service.pipeline
-[searchURL]: /javascript/api/azure-maps-rest/atlas.service.searchurl
+[Route to a destination]: tutorial-route-location.md
[Search API]: /rest/api/maps/search
-[Fuzzy Search service]: /rest/api/maps/search/get-search-fuzzy
+[Search for points of interest]: https://samples.azuremaps.com/?sample=search-for-points-of-interest
+[search tutorial]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Search
+[searchURL]: /javascript/api/azure-maps-rest/atlas.service.searchurl
[setCamera]: /javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
The Log Analytics agent for Linux is composed of multiple packages. The release
Package | Version | Description
-- | -- | --
-omsagent | 1.14.19 | The Log Analytics agent for Linux.
-omsconfig | 1.1.1 | Configuration agent for the Log Analytics agent.
-omi | 1.6.9 | Open Management Infrastructure (OMI), a lightweight CIM Server. *OMI requires root access to run a cron job necessary for the functioning of the service*.
-scx | 1.6.9 | OMI CIM providers for operating system performance metrics.
+omsagent | 1.16.0 | The Log Analytics agent for Linux.
+omsconfig | 1.2.0 | Configuration agent for the Log Analytics agent.
+omi | 1.7.1 | Open Management Infrastructure (OMI), a lightweight CIM Server. *OMI requires root access to run a cron job necessary for the functioning of the service*.
+scx | 1.7.1 | OMI CIM providers for operating system performance metrics.
apache-cimprov | 1.0.1 | Apache HTTP Server performance monitoring provider for OMI. Only installed if Apache HTTP Server is detected.
mysql-cimprov | 1.0.1 | MySQL Server performance monitoring provider for OMI. Only installed if MySQL/MariaDB server is detected.
docker-cimprov | 1.0.0 | Docker provider for OMI. Only installed if Docker is detected.
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Amazon Linux 2 | X | X | |
| CentOS Linux 8 | X | X | |
| CentOS Linux 7 | X<sup>3</sup> | X | X |
-| CentOS Linux 6 | | X | |
| CBL-Mariner 2.0 | X<sup>3,4</sup> | | |
| Debian 11 | X<sup>3</sup> | | |
| Debian 10 | X | X | |
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| OpenSUSE 15 | X | | |
| Oracle Linux 8 | X | X | |
| Oracle Linux 7 | X | X | X |
-| Oracle Linux 6 | | X | |
-| Oracle Linux 6.4+ | | X | X |
+| Oracle Linux 6.4+ | | | X |
| Red Hat Enterprise Linux Server 9+ | X | | |
| Red Hat Enterprise Linux Server 8.6 | X<sup>3</sup> | X<sup>2</sup> | X<sup>2</sup> |
| Red Hat Enterprise Linux Server 8+ | X | X<sup>2</sup> | X<sup>2</sup> |
| Red Hat Enterprise Linux Server 7 | X | X | X |
-| Red Hat Enterprise Linux Server 6.7+ | | X | X |
-| Red Hat Enterprise Linux Server 6 | | X | |
+| Red Hat Enterprise Linux Server 6.7+ | | | X |
| Rocky Linux 8 | X | X | |
| SUSE Linux Enterprise Server 15 SP4 | X<sup>3</sup> | | |
| SUSE Linux Enterprise Server 15 SP3 | X | | |
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommend that you always update to the latest version, or opt in to the
| Release Date | Release notes | Windows | Linux |
|:|:|:|:|
| July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.</li><li>MetricExtension updated to 2.2023.609.2051</li></ul> |1.18.0| Coming Soon|
-| June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncomliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li></ul></li></ul>|1.17.0 |1.27.2|
+| June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncompliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li><li>Fix for authenticated proxy (1.27.3)</li><li>Fix regression in VM Insights (1.27.4)</li></ul></li></ul>|1.17.0 |1.27.4|
| May 2023 | **Windows** <ul><li>Enable Large Event support for all regions.</li><li>Update to TroubleShooter 1.4.0.</li><li>Fixed issue where the Event Log subscription became invalid and would not resubscribe.</li><li>AMA: Fixed issue with Large Event sending data that was too large. Also affecting Custom Log.</li></ul> **Linux** <ul><li>Support for CIS and SELinux [hardening](./agents-overview.md)</li><li>Include Ubuntu 22.04 (Jammy) in azure-mdsd package publishing</li><li>Move storage SDK patch to build container</li><li>Add system Telegraf counters to AMA</li><li>Drop msgpack and syslog data if not configured in active configuration</li><li>Limit the events sent to Public ingestion pipeline</li><li>**Fixes** <ul><li>Fix mdsd crash in init when in persistent mode</li><li>Remove FdClosers from ProtocolListeners to avoid a race condition</li><li>Fix sed regex special character escaping issue in rpm macro for Centos 7.3.Maipo</li><li>Fix latency and future timestamp issue</li><li>Install AMA syslog configs only if customer is opted in for syslog in DCR</li><li>Fix heartbeat time check</li><li>Skip unnecessary cleanup in fatal signal handler</li><li>Fix case where fast-forwarding may cause intervals to be skipped</li><li>Fix comma separated custom log paths with fluent</li><li>Fix to prevent events folder growing too large and filling the disk</li><li>Hotfix (1.26.3) for Syslog</li></ul></li></ul> | 1.16.0.0 | 1.26.2 1.26.3<sup>Hotfix</sup>| | Apr 2023 | **Windows** <ul><li>AMA: Enable Large Event support based on Region.</li><li>AMA: Upgrade to FluentBit version 2.0.9</li><li>Update Troubleshooter to 1.3.1</li><li>Update ME version to 2.2023.331.1521</li><li>Updating package version for AzSecPack 4.26 release</li></ul>|1.15.0| Coming soon| | Mar 2023 | **Windows** <ul><li>Text file collection improvements to handle high rate logging and continuous tailing of longer lines</li><li>VM Insights fixes for collecting metrics from non-English OS</li></ul> | 1.14.0.0 | Coming soon |
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm.md
For more information on how to troubleshoot syslog issues with Azure Monitor Age
6. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA fails to collect syslog events' and **Problem type** as 'I need help with Azure Monitor Linux Agent'. ## Troubleshooting issues on Arc-enabled server
-If after checking basic troubleshooting steps you don't see the Azure Monitor Agent emitting logs or find **'Failed to get MSI token from IMDS endpoint'** errors in `mdsd.err` log file, it's likely `syslog` user isn't a member of the group `himds`. Add `syslog` user to `himds` user group if the user isn't a member of this group. Create user `syslog` and the group `syslog`, if necessary, and make sure that the user is in that group. For more information check out Azure Arc-enabled server authentication requirements [here](../../azure-arc/servers/managed-identity-authentication.md).
+If, after checking the basic troubleshooting steps, you don't see the Azure Monitor Agent emitting logs, or you find **'Failed to get MSI token from IMDS endpoint'** errors in the `/var/opt/microsoft/azuremonitoragent/log/mdsd.err` log file, it's likely that the `syslog` user isn't a member of the `himds` group. Add the `syslog` user to the `himds` user group if it isn't already a member. Create the user `syslog` and the group `syslog`, if necessary, and make sure that the user is in that group. For more information, see the [Azure Arc-enabled server authentication requirements](../../azure-arc/servers/managed-identity-authentication.md).
[!INCLUDE [azure-monitor-agent-file-a-ticket](../../../includes/azure-monitor-agent/azure-monitor-agent-file-a-ticket.md)]
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Here is a comparison between client installer and VM extension for Azure Monitor
```cli msiexec /i AzureMonitorAgentClientSetup.msi /qn ```
-4. To install with custom file paths or [network proxy settings](./azure-monitor-agent-overview.md#proxy-configuration), use the command below with the values from the following table:
+4. To install with custom file paths, [network proxy settings](./azure-monitor-agent-overview.md#proxy-configuration), or in a non-public cloud, use the command below with the values from the following table:
```cli msiexec /i AzureMonitorAgentClientSetup.msi /qn DATASTOREDIR="C:\example\folder" ```
Here is a comparison between client installer and VM extension for Azure Monitor
| PROXYUSEAUTH | Set to "true" if proxy requires authentication | | PROXYUSERNAME | Set to Proxy username. PROXYUSE and PROXYUSEAUTH must be set to "true" | | PROXYPASSWORD | Set to Proxy password. PROXYUSE and PROXYUSEAUTH must be set to "true" |
+ | CLOUDENV | Set to the target cloud: "Azure Commercial", "Azure China", "Azure US Gov", "Azure USNat", or "Azure USSec" |
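As an illustration only, a complete silent install that combines proxy and cloud settings might look like the sketch below. The `PROXYADDRESS` parameter name and every value shown are placeholders; substitute the parameters and values that apply to your environment:

```cli
msiexec /i AzureMonitorAgentClientSetup.msi /qn PROXYUSE="true" PROXYADDRESS="http://proxy.contoso.com:8080" PROXYUSEAUTH="true" PROXYUSERNAME="proxyuser" PROXYPASSWORD="proxypassword" CLOUDENV="Azure Commercial"
```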
-5. Verify successful installation:
+6. Verify successful installation:
 - Open **Control Panel** -> **Programs and Features** OR **Settings** -> **Apps** -> **Apps & Features** and ensure you see 'Azure Monitor Agent' listed - Open **Services** and confirm 'Azure Monitor Agent' is listed and shows as **Running**.
-6. Proceed to create the monitored object that you'll associate data collection rules to, for the agent to actually start operating.
+7. Proceed to create the monitored object that you'll associate data collection rules to, for the agent to actually start operating.
> [!NOTE] > The agent installed with the client installer currently doesn't support updating local agent settings once it is installed. Uninstall and reinstall AMA to update above settings.
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
For the full Azure CLI documentation for this command, see the [Azure CLI docume
### Azure PowerShell
-The `Update-AzApplicationInsights` PowerShell command doesn't currently support migrating a classic Application Insights resource to workspace based. To create a workspace-based resource with PowerShell, use the following Azure Resource Manager templates and deploy them with PowerShell.
+Beginning with version 8.0 of [Azure PowerShell](https://learn.microsoft.com/powershell/azure/what-is-azure-powershell), you can use the `Update-AzApplicationInsights` PowerShell cmdlet to migrate a classic Application Insights resource to workspace-based.
+
+To use this cmdlet, specify the name and resource group of the Application Insights resource that you want to update. Use the `IngestionMode` and `WorkspaceResourceId` parameters to migrate your classic instance to workspace-based. For more information on the parameters and syntax of this cmdlet, see [Update-AzApplicationInsights](https://learn.microsoft.com/powershell/module/az.applicationinsights/update-azapplicationinsights).
+
+#### Example
+
+```powershell
+# Get the resource ID of the Log Analytics workspace
+$workspaceResourceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName "rgName" -Name "laName").ResourceId
+
+# Update the Application Insights resource with the workspace parameter
+Update-AzApplicationInsights -Name "aiName" -ResourceGroupName "rgName" -IngestionMode LogAnalytics -WorkspaceResourceId $workspaceResourceId
+```
### Azure Resource Manager templates
azure-monitor Custom Operations Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md
For tracing information, see [Distributed tracing and correlation through Azure
### Azure Storage queue
-The following example shows how to track the [Azure Storage queue](../../storage/queues/storage-dotnet-how-to-use-queues.md) operations and correlate telemetry between the producer, the consumer, and Azure Storage.
+The following example shows how to track the [Azure Storage queue](/azure/storage/queues/storage-quickstart-queues-dotnet?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli) operations and correlate telemetry between the producer, the consumer, and Azure Storage.
The Storage queue has an HTTP API. All calls to the queue are tracked by the Application Insights Dependency Collector for HTTP requests. It's configured by default on ASP.NET and ASP.NET Core applications. With other kinds of applications, see the [Console applications documentation](./console.md).
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
This article walks you through migrating from [instrumentation keys](separate-re
1. Configure the Application Insights SDK by following [How to set connection strings](sdk-connection-string.md#set-a-connection-string). > [!IMPORTANT]
-> Using both a connection string and instrumentation key isn't recommended. Whichever was set last takes precedence. Also, using both could lead to [missing data](#missing-data).
+> Don't use both a connection string and an instrumentation key. Whichever is set last takes precedence, and using both could result in [missing data](#missing-data).
## Migration at scale
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
Telemetry emitted by these Azure SDKs is automatically collected by default:
#### [Node.js](#tab/nodejs)
-The following OpenTelemetry Instrumentation libraries are included as part of Azure Monitor Application Insights Distro.
+The following OpenTelemetry instrumentation libraries are included as part of the Azure Monitor Application Insights Distro. For more information, see the [officially supported instrumentations](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry#officially-supported-instrumentations).
Requests - [HTTP/HTTPS](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http) <sup>[2](#FOOTNOTETWO)</sup>
Logs
Examples of using the Python logging library can be found on [GitHub](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry/samples/logging).
+Telemetry emitted by Azure SDKs is automatically [collected](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry#azure-core-distributed-tracing) by default.
+ **Footnotes** - <a name="FOOTNOTEONE">1</a>: Supports automatic reporting of *unhandled/uncaught* exceptions - <a name="FOOTNOTETWO">2</a>: Supports OpenTelemetry Metrics - <a name="FOOTNOTETHREE">3</a>: By default, logging is only collected at INFO level or higher. To change this setting, see the [configuration options](./java-standalone-config.md#autocollected-logging).-- <a name="FOOTNOTEFOUR">4</a>: By default, logging is only collected at WARNING level or higher.
+- <a name="FOOTNOTEFOUR">4</a>: By default, logging is only collected when that logging is performed at the WARNING level or higher.
> [!NOTE] > The Azure Monitor OpenTelemetry Distros include custom mapping and logic to automatically emit [Application Insights standard metrics](standard-metrics.md).
Other OpenTelemetry Instrumentations are available [here](https://github.com/ope
``` ### [Python](#tab/python)
-Currently unavailable.
+
+To add a community instrumentation library (not officially supported or included in the Azure Monitor Distro), you can instrument directly with the instrumentations. For the list of community instrumentation libraries, see the [OpenTelemetry Python contrib repository](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation).
+
+> [!NOTE]
+> Instrumenting a [supported instrumentation library](./opentelemetry-add-modify.md?tabs=python#included-instrumentation-libraries) manually with `instrument()` in conjunction with the distro's `configure_azure_monitor()` isn't recommended. This isn't a supported scenario, and you may get undesired behavior for your telemetry.
+
+```python
+from azure.monitor.opentelemetry import configure_azure_monitor
+from opentelemetry.instrumentation.sqlalchemy import SQLAlchemyInstrumentor
+from sqlalchemy import create_engine, text
+
+configure_azure_monitor()
+
+engine = create_engine("sqlite:///:memory:")
+# SQLAlchemy instrumentation is not officially supported by this package
+# However, you can use the OpenTelemetry instrument() method manually in
+# conjunction with configure_azure_monitor
+SQLAlchemyInstrumentor().instrument(
+ engine=engine,
+)
+
+# Database calls using the SqlAlchemy library will be automatically captured
+with engine.connect() as conn:
+ result = conn.execute(text("select 'hello world'"))
+ print(result.all())
+
+```
logHandler.trackEvent({
#### [Python](#tab/python)
-The Python [logging](https://docs.python.org/3/howto/logging.html) library is [autoinstrumented](#logs). You can attach custom dimensions to your logs by passing a dictionary into the `extra` argument of your logs.
+The Python [logging](https://docs.python.org/3/howto/logging.html) library is [autoinstrumented](./opentelemetry-add-modify.md?tabs=python#included-instrumentation-libraries). You can attach custom dimensions to your logs by passing a dictionary into the `extra` argument of your logs.
```python ...
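A minimal, self-contained sketch of this pattern (the logger name and dimension keys below are illustrative, not from the original sample):

```python
import logging

# Hypothetical logger name; any module logger works the same way.
logger = logging.getLogger("contoso.orders")

# Dimensions passed through `extra` are attached to the log record and,
# with the Azure Monitor Distro configured, surface as customDimensions.
logger.warning("order processed", extra={"order_id": "12345", "region": "westus"})
```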
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
export OTEL_SERVICE_NAME="my-helloworld-service"
You may want to enable sampling to reduce your data ingestion volume, which reduces your cost. Azure Monitor provides a custom *fixed-rate* sampler that populates events with a "sampling ratio", which Application Insights converts to "ItemCount". The *fixed-rate* sampler ensures accurate experiences and event counts. The sampler is designed to preserve your traces across services, and it's interoperable with older Application Insights SDKs. For more information, see [Learn More about sampling](sampling.md#brief-summary). > [!NOTE]
-> Metrics are unaffected by sampling.
+> Metrics and Logs are unaffected by sampling.
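As an aside on the mechanics, the ratio-to-count conversion described above can be sketched as follows (an illustration of the arithmetic only, not the exporter's actual code):

```python
def item_count(sampling_ratio: float) -> int:
    # Each retained event carries the sampling ratio it was kept at;
    # Application Insights converts that ratio into ItemCount, the number
    # of original events the retained event represents: 1 / ratio.
    return round(1 / sampling_ratio)

# A 25% fixed-rate sample keeps 1 in 4 events.
print(item_count(0.25))
```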
#### [ASP.NET Core](#tab/aspnetcore)
For more information about OpenTelemetry SDK configuration, see the [OpenTelemet
### [Python](#tab/python)
-For more information about OpenTelemetry SDK configuration, see the [OpenTelemetry documentation](https://opentelemetry.io/docs/concepts/sdk-configuration).
+For more information about OpenTelemetry SDK configuration, see the [OpenTelemetry documentation](https://opentelemetry.io/docs/concepts/sdk-configuration). For details on the Distro's configuration options, see [Azure Monitor Distro usage](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry#usage).
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
To enable Azure Monitor Application Insights, you make a minor modification to y
Add `UseAzureMonitor()` to your application startup. Depending on your version of .NET, it is in either your `startup.cs` or `program.cs` class. ```csharp
+using Azure.Monitor.OpenTelemetry.AspNetCore;
+ var builder = WebApplication.CreateBuilder(args); builder.Services.AddOpenTelemetry().UseAzureMonitor();
azure-monitor Container Insights Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-authentication.md
https://aka.ms/enable-monitoring-msi-syslog-terraform
- **workspace_resource_id**: Use the resource ID of your Log Analytics workspace. - **workspace_region**: Use the location of your Log Analytics workspace. - **resource_tag_values**: Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name match `MSCI-<clusterName>-<clusterRegion>` and this resource is created in the same resource group as the AKS clusters. For first time onboarding, you can set the arbitrary tag values. 4. Run `terraform init -upgrade` to initialize the Terraform deployment. 5. Run `terraform plan -out main.tfplan` to initialize the Terraform deployment. 6. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure.
azure-monitor Kql Machine Learning Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/kql-machine-learning-azure-monitor.md
Previously updated : 07/01/2022 Last updated : 07/26/2023 # Customer intent: As a data analyst, I want to use the native machine learning capabilities of Azure Monitor Logs to gain insights from my log data without having to export data outside of Azure Monitor.
Learn more about:
- [Log queries in Azure Monitor](log-query-overview.md). - [How to use Kusto queries](/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor).-- [Analyze logs in Azure Monitor with KQL](/training/modules/analyze-logs-with-kql/)
+- [Analyze logs in Azure Monitor with KQL](/training/modules/analyze-logs-with-kql/)
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
Data is sent to Storage Accounts as it reaches Azure Monitor and exported to des
Blobs are stored in 5-minute folders in the following path structure: *WorkspaceResourceId=/subscriptions/subscription-id/resourcegroups/\<resource-group\>/providers/microsoft.operationalinsights/workspaces/\<workspace\>/y=\<four-digit numeric year\>/m=\<two-digit numeric month\>/d=\<two-digit numeric day\>/h=\<two-digit 24-hour clock hour\>/m=\<two-digit 60-minute clock minute\>/PT05M.json*. Appends to blobs are limited to 50-K writes. More blobs will be added in the folder as *PT05M_#.json**, where '#' is the incremental blob count. > [!NOTE]
-> Appends to blobs are written based on the "TimeGenerated" field and occur when receiving source data. Data arriving to Azure Monitor with delay, or retried following destinations throttling, is written to blobs according to its TimeGenerate.
+> Appends to blobs are written based on the "TimeGenerated" field and occur when receiving source data. Data arriving to Azure Monitor with delay, or retried following destinations throttling, is written to blobs according to its TimeGenerated.
The format of blobs in a Storage Account is in [JSON lines](/previous-versions/azure/azure-monitor/essentials/resource-logs-blob-format), where each record is delimited by a new line, with no outer records array and no commas between JSON records.
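To make the folder convention above concrete, here's a short sketch that computes the 5-minute folder path for a given timestamp (`blob_folder` is a hypothetical helper for illustration, not part of any Azure SDK):

```python
from datetime import datetime, timezone

def blob_folder(workspace_resource_id: str, ts: datetime) -> str:
    # Blobs land in 5-minute folders: round the minute down to a multiple of 5.
    minute = ts.minute - ts.minute % 5
    return (f"WorkspaceResourceId={workspace_resource_id}"
            f"/y={ts.year:04d}/m={ts.month:02d}/d={ts.day:02d}"
            f"/h={ts.hour:02d}/m={minute:02d}/PT05M.json")

print(blob_folder(
    "/subscriptions/sub-id/resourcegroups/rg/providers/"
    "microsoft.operationalinsights/workspaces/ws",
    datetime(2023, 7, 26, 14, 37, tzinfo=timezone.utc)))
```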
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-design.md
If your private link setup was created before April 19, 2021, it won't reach the
### Collect custom logs and IIS log over a private link Storage accounts are used in the ingestion process of custom logs. By default, service-managed storage accounts are used. To ingest custom logs on private links, you must use your own storage accounts and associate them with Log Analytics workspaces.
-For more information on how to connect your own storage account, see [Customer-owned storage accounts for log ingestion](private-storage.md) and specifically [Use private links](private-storage.md#use-private-links) and [Link storage accounts to your Log Analytics workspace](private-storage.md#link-storage-accounts-to-your-log-analytics-workspace).
+For more information on how to connect your own storage account, see [Customer-owned storage accounts for log ingestion](private-storage.md) and specifically [Use private links](private-storage.md#private-links) and [Link storage accounts to your Log Analytics workspace](private-storage.md#link-storage-accounts-to-your-log-analytics-workspace).
### Automation If you use Log Analytics solutions that require an Azure Automation account (such as Update Management, Change Tracking, or Inventory), you should also create a private link for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md).
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-storage.md
Title: Use customer-managed storage accounts in Azure Monitor Log Analytics
-description: Use your own Azure Storage account for Azure Monitor Log Analytics scenarios.
+ Title: Use customer-managed storage accounts in Azure Monitor Logs
+description: Use your own Azure Storage account to ingest logs into Azure Monitor Logs.
Last updated 04/04/2022
-# Use customer-managed storage accounts in Azure Monitor Log Analytics
+# Use customer-managed storage accounts in Azure Monitor Logs
-Log Analytics relies on Azure Storage in various scenarios. This use is typically managed automatically. But some cases require you to provide and manage your own storage account, which is also known as a customer-managed storage account. This article covers the use of customer-managed storage for WAD/LAD logs, Azure Private Link, and customer-managed key (CMK) encryption.
+Azure Monitor Logs relies on Azure Storage in various scenarios. Azure Monitor typically manages this type of storage automatically, but some cases require you to provide and manage your own storage account, also known as a customer-managed storage account. This article describes the use cases and requirements for setting up customer-managed storage for Azure Monitor Logs and explains how to link a storage account to a Log Analytics workspace.
> [!NOTE]
-> We recommend that you don't take a dependency on the contents that Log Analytics uploads to customer-managed storage because formatting and content might change.
+> We recommend that you don't take a dependency on the contents that Azure Monitor Logs uploads to customer-managed storage because formatting and content might change.
-## Ingest Azure Diagnostics extension logs (WAD/LAD)
-The Azure Diagnostics extension agents (also called WAD and LAD for Windows and Linux agents, respectively) collect various operating system logs and store them on a customer-managed storage account. You can then ingest these logs into Log Analytics to review and analyze them.
-
-### Collect Azure Diagnostics extension logs from your storage account
-Connect the storage account to your Log Analytics workspace as a storage data source by using the [Azure portal](../agents/diagnostics-extension-logs.md#collect-logs-from-azure-storage). You can also call the [Storage Insights API](/rest/api/loganalytics/storage-insights/create-or-update).
-
-Supported data types are:
-
-* [Syslog](../agents/data-sources-syslog.md)
-* [Windows events](../agents/data-sources-windows-events.md)
-* Azure Service Fabric
-* [Event Tracing for Windows (ETW) events](../agents/data-sources-event-tracing-windows.md)
-* [IIS logs](../agents/data-sources-iis-logs.md)
-
-## Use private links
+## Private links
Customer-managed storage accounts are used to ingest custom logs when private links are used to connect to Azure Monitor resources. The ingestion process of these data types first uploads logs to an intermediary Azure Storage account, and only then ingests them to a workspace.
-> [!IMPORTANT]
-> Collection of IIS logs isn't supported with private links.
-
-### Use a customer-managed storage account over a private link
-
-Meet the following requirements.
-
-#### Workspace requirements
-When you connect to Azure Monitor over a private link, Log Analytics agents are only able to send logs to workspaces accessible over a private link. This requirement means you should:
+### Workspace requirements
+When you connect to Azure Monitor over a private link, Azure Monitor Agent can only send logs to workspaces accessible over a private link. This requirement means you should:
* Configure an Azure Monitor Private Link Scope (AMPLS) object. * Connect it to your workspaces.
When you connect to Azure Monitor over a private link, Log Analytics agents are
For more information on the AMPLS configuration procedure, see [Use Azure Private Link to securely connect networks to Azure Monitor](./private-link-security.md).
-#### Storage account requirements
-For the storage account to successfully connect to your private link, it must:
+### Storage account requirements
+For the storage account to connect to your private link, it must:
* Be located on your virtual network or a peered network and connected to your virtual network over a private link. * Be located on the same region as the workspace it's linked to.
-* Allow Azure Monitor to access the storage account. If you chose to allow only select networks to access your storage account, select the exception **Allow trusted Microsoft services to access this storage account**.
+* Allow Azure Monitor to access the storage account. To allow only specific networks to access your storage account, select the exception **Allow trusted Microsoft services to access this storage account**.
![Screenshot that shows Storage account trust Microsoft services.](./media/private-storage/storage-trust.png) If your workspace handles traffic from other networks, configure the storage account to allow incoming traffic coming from the relevant networks/internet.
-Coordinate the TLS version between the agents and the storage account. We recommend that you send data to Log Analytics by using TLS 1.2 or higher. Review the [platform-specific guidance](./data-security.md#sending-data-securely-using-tls-12). If required, [configure your agents to use TLS 1.2](../agents/agent-windows.md#configure-agent-to-use-tls-12). If that's not possible, configure the storage account to accept TLS 1.0.
+Coordinate the TLS version between the agents and the storage account. We recommend that you send data to Azure Monitor Logs by using TLS 1.2 or higher. Review the [platform-specific guidance](./data-security.md#sending-data-securely-using-tls-12). If necessary, [configure your agents to use TLS 1.2](../agents/agent-windows.md#configure-agent-to-use-tls-12). If that's not possible, configure the storage account to accept TLS 1.0.
-### Use a customer-managed storage account for CMK data encryption
-Azure Storage encrypts all data at rest in a storage account. By default, it uses Microsoft-managed keys (MMKs) to encrypt the data. However, Azure Storage also allows you to use CMKs from Azure Key Vault to encrypt your storage data. You can either import your own keys into Key Vault or use the Key Vault APIs to generate keys.
+## Customer-managed key data encryption
+Azure Storage encrypts all data at rest in a storage account. By default, it uses Microsoft-managed keys (MMKs) to encrypt the data. However, Azure Storage also allows you to use customer-managed keys (CMKs) from Azure Key Vault to encrypt your storage data. You can either import your own keys into Key Vault or use the Key Vault APIs to generate keys.
-#### CMK scenarios that require a customer-managed storage account
+### CMK scenarios that require a customer-managed storage account
A customer-managed storage account is required for: * Encrypting log-alert queries with CMKs. * Encrypting saved queries with CMKs.
-#### Apply CMKs to customer-managed storage accounts
+### Apply CMKs to customer-managed storage accounts
Follow this guidance to apply CMKs to customer-managed storage accounts.
-##### Storage account requirements
+#### Storage account requirements
The storage account and the key vault must be in the same region, but they also can be in different subscriptions. For more information about Azure Storage encryption and key management, see [Azure Storage encryption for data at rest](../../storage/common/storage-service-encryption.md).
-##### Apply CMKs to your storage accounts
+#### Apply CMKs to your storage accounts
To configure your Azure Storage account to use CMKs with Key Vault, use the [Azure portal](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json), [PowerShell](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json), or the [Azure CLI](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json). ## Link storage accounts to your Log Analytics workspace
The applicable `dataSourceType` values are:
Follow this guidance to manage your linked storage accounts. ### Create or modify a link
-When you link a storage account to a workspace, Log Analytics will start using it instead of the storage account owned by the service. You can:
+When you link a storage account to a workspace, Azure Monitor Logs starts using it instead of the storage account owned by the service. You can:
* Register multiple storage accounts to spread the load of logs between them. * Reuse the same storage account for multiple workspaces. ### Unlink a storage account
-To stop using a storage account, unlink the storage from the workspace. Unlinking all storage accounts from a workspace means Log Analytics will attempt to rely on service-managed storage accounts. If your network has limited access to the internet, these storage accounts might not be available and any scenario that relies on storage will fail.
+To stop using a storage account, unlink the storage from the workspace. When you unlink all storage accounts from a workspace, Azure Monitor Logs uses service-managed storage accounts. If your network has limited access to the internet, these storage accounts might not be available and any scenario that relies on storage will fail.
### Replace a storage account To replace a storage account used for ingestion:
To replace a storage account used for ingestion:
Follow this guidance to maintain your storage accounts. #### Manage log retention
-When you use your own storage account, retention is up to you. Log Analytics won't delete logs stored on your private storage. Instead, you should set up a policy to handle the load according to your preferences.
+When you use your own storage account, retention is up to you. Azure Monitor Logs doesn't delete logs stored on your private storage. Instead, you should set up a policy to handle the load according to your preferences.
#### Consider load Storage accounts can handle a certain load of read and write requests before they start throttling requests. For more information, see [Scalability and performance targets for Azure Blob Storage](../../storage/common/scalability-targets-standard-account.md).
Storage accounts can handle a certain load of read and write requests before the
Throttling affects the time it takes to ingest logs. If your storage account is overloaded, register another storage account to spread the load between them. To monitor your storage account's capacity and performance, review its [Insights in the Azure portal](../../storage/common/storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json). ### Related charges
-Storage accounts are charged by the volume of stored data, the type of storage, and the type of redundancy. For more information, see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs) and [Azure Table Storage pricing](https://azure.microsoft.com/pricing/details/storage/tables).
+You're charged for storage accounts based on the volume of stored data, the type of storage, and the type of redundancy. For more information, see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs) and [Azure Table Storage pricing](https://azure.microsoft.com/pricing/details/storage/tables).
## Next steps
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-vm.md
If your application runs in Azure Service Fabric, Azure Cloud Services, Azure Vi
## Before you begin

- [Enable Application Insights in your web app](../app/asp-net.md).
-- Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.3.5 or above in your app.
+- Include the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package version 1.4.2 or above in your app.
## Configure snapshot collection for ASP.NET applications
-The default Snapshot Debugger configuration is mostly empty and all settings are optional. You can customize the Snapshot Debugger configuration added to [ApplicationInsights.config](../app/configuration-with-applicationinsights-config.md).
+When you add the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package to your application, the `SnapshotCollectorTelemetryProcessor` should be added automatically to the `TelemetryProcessors` section of [ApplicationInsights.config](../app/configuration-with-applicationinsights-config.md).
+
+If you don't see `SnapshotCollectorTelemetryProcessor` in ApplicationInsights.config, or if you want to customize the Snapshot Debugger configuration, you may edit it by hand. However, these edits may get overwritten if you later upgrade to a newer version of the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package.
The following example shows a configuration equivalent to the default configuration:

```xml
<TelemetryProcessors>
- <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
+ <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
    <!-- The default is true, but you can disable Snapshot Debugging by setting it to false -->
    <IsEnabled>true</IsEnabled>
    <!-- Snapshot Debugging is usually disabled in developer mode, but you can enable it by setting this to true. -->
    <ProvideAnonymousTelemetry>true</ProvideAnonymousTelemetry>
    <!-- The limit on the number of failed requests to request snapshots before the telemetry processor is disabled. -->
    <FailedRequestLimit>3</FailedRequestLimit>
- </Add>
+ </Add>
</TelemetryProcessors>
```

Snapshots are collected _only_ on exceptions reported to Application Insights. In some cases (for example, older versions of the .NET platform), you might need to [configure exception collection](../app/asp-net-exceptions.md#exceptions) to see exceptions with snapshots in the portal.
-
-## Configure snapshot collection for applications using ASP.NET Core
+## Configure snapshot collection for ASP.NET Core applications or Worker Services
### Prerequisites
-Create a new class called `SnapshotCollectorTelemetryProcessorFactory` to add and configure the Snapshot Collector's telemetry processor.
-
-```csharp
-using Microsoft.ApplicationInsights.AspNetCore;
-using Microsoft.ApplicationInsights.Extensibility;
-using Microsoft.ApplicationInsights.SnapshotCollector;
-using Microsoft.Extensions.Options;
-
-internal class SnapshotCollectorTelemetryProcessorFactory : ITelemetryProcessorFactory
-{
- private readonly IServiceProvider _serviceProvider;
+Your application should already reference one of the following Application Insights NuGet packages:
+- [Microsoft.ApplicationInsights.AspNetCore](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore)
+- [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService)
- public SnapshotCollectorTelemetryProcessorFactory(IServiceProvider serviceProvider) =>
- _serviceProvider = serviceProvider;
+### Add the NuGet package
- public ITelemetryProcessor Create(ITelemetryProcessor next)
- {
- IOptions<SnapshotCollectorConfiguration> snapshotConfigurationOptions = _serviceProvider.GetRequiredService<IOptions<SnapshotCollectorConfiguration>>();
- return new SnapshotCollectorTelemetryProcessor(next, configuration: snapshotConfigurationOptions.Value);
- }
-}
-```
+Add the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package to your app.
-Add the `SnapshotCollectorConfiguration` and `SnapshotCollectorTelemetryProcessorFactory` services to `Program.cs`:
+### Update the services collection
+In your application's startup code, where services are configured, add a call to the `AddSnapshotCollector` extension method. It's a good idea to add this line immediately after the call to `AddApplicationInsightsTelemetry`. For example:
```csharp
-using Microsoft.ApplicationInsights.AspNetCore;
-using Microsoft.ApplicationInsights.SnapshotCollector;
- var builder = WebApplication.CreateBuilder(args);
+// Add services to the container.
builder.Services.AddApplicationInsightsTelemetry();
-builder.Services.AddSnapshotCollector(config => builder.Configuration.Bind(nameof(SnapshotCollectorConfiguration), config));
-builder.Services.AddSingleton<ITelemetryProcessorFactory>(sp => new SnapshotCollectorTelemetryProcessorFactory(sp));
+builder.Services.AddSnapshotCollector();
+```
+
+### Configure the Snapshot Collector
+For most situations, the default settings are sufficient. If not, customize the settings by adding the following code before the call to `AddSnapshotCollector()`:
+```csharp
+using Microsoft.ApplicationInsights.SnapshotCollector;
+...
+builder.Services.Configure<SnapshotCollectorConfiguration>(builder.Configuration.GetSection("SnapshotCollector"));
```
-If needed, customize the Snapshot Debugger configuration by adding a `SnapshotCollectorConfiguration` section to *appsettings.json*. The following example shows a configuration equivalent to the default configuration:
+Next, add a `SnapshotCollector` section to *appsettings.json* where you can override the defaults. The following example shows a configuration equivalent to the default configuration:
```json
{
- "SnapshotCollectorConfiguration": {
+ "SnapshotCollector": {
"IsEnabledInDeveloperMode": false, "ThresholdForSnapshotting": 1, "MaximumSnapshotsRequired": 3,
  }
}
```
+If you need to customize the Snapshot Collector's behavior manually, without using *appsettings.json*, use the overload of `AddSnapshotCollector` that takes a delegate. For example:
+```csharp
+builder.Services.AddSnapshotCollector(config => config.IsEnabledInDeveloperMode = true);
+```
+ ## Configure snapshot collection for other .NET applications
-Snapshots are collected only on exceptions that are reported to Application Insights. You might need to modify your code to report them. The exception handling code depends on the structure of your application. Here's an example:
+Snapshots are collected only on exceptions that are reported to Application Insights. For ASP.NET and ASP.NET Core applications, the Application Insights SDK automatically reports unhandled exceptions that escape a controller method or endpoint route handler. For other applications, you might need to modify your code to report them. The exception handling code depends on the structure of your application. Here's an example:
```csharp
TelemetryClient _telemetryClient = new TelemetryClient();
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This section provides references for solutions for Linux OSS applications and da
* [Use Teamcenter PLM with Azure NetApp Files](/azure/architecture/example-scenario/manufacturing/teamcenter-plm-netapp-files)
* [Siemens Teamcenter baseline architecture](/azure/architecture/example-scenario/manufacturing/teamcenter-baseline)
+* [Migrate Product Lifecycle Management (PLM) to Azure](/industry/manufacturing/architecture/ra-migrate-plm-azure)
### Machine Learning

* [Cloudera Machine Learning](https://docs.cloudera.com/machine-learning/cloud/requirements-azure/topics/ml-requirements-azure.html)
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/overview.md
There are some important factors to consider when defining your resource group:
To ensure state consistency for the resource group, all [control plane operations](./control-plane-and-data-plane.md) are routed through the resource group's location. When selecting a resource group location, we recommend that you select a location close to where your control operations originate. Typically, this location is the one closest to your current location. This routing requirement only applies to control plane operations for the resource group. It doesn't affect requests that are sent to your applications.
- If a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you can't update them. This condition doesn't apply to global resources like Azure Content Delivery Network, Azure DNS, Azure DNS Private Zones, Azure Traffic Manager, and Azure Front Door.
+ If a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you can't update them.
For more information about building reliable applications, see [Designing reliable Azure applications](/azure/architecture/checklist/resiliency-per-service).
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
Currently, by default, new Bastion deployments don't support zone redundancies.
Yes, [Azure AD guest accounts](../active-directory/external-identities/what-is-b2b.md) can be granted access to Bastion and can connect to virtual machines. However, Azure AD guest users can't connect to Azure VMs via Azure AD authentication. Non-guest users are supported via Azure AD authentication. For more information about Azure AD authentication for Azure VMs (for non-guest users), see [Log in to a Windows virtual machine in Azure by using Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md).
+### <a name="shareable-links-domains"></a>Are custom domains supported with Bastion shareable links?
+
+No, custom domains are not supported with Bastion shareable links. Users will receive a certificate error upon trying to add specific domains in the CN/SAN of the Bastion host certificate.
+
## <a name="vm"></a>VM features and connection FAQs

### <a name="roles"></a>Are any roles required to access a virtual machine?
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
cloud-services-extended-support Configure Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/configure-scaling.md
Conditions can be configured to enable Cloud Services (extended support) deploym
Consider the following information when configuring scaling of your Cloud Service deployments:

- Scaling impacts core usage. Larger role instances consume more cores and you can only scale within the core limit of your subscription. For more information, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
-- Scaling based on queue messaging threshold is supported. For more information, see [Get started with Azure Queue storage](../storage/queues/storage-dotnet-how-to-use-queues.md).
+- Scaling based on queue messaging threshold is supported. For more information, see [Get started with Azure Queue storage](/azure/storage/queues/storage-quickstart-queues-dotnet?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli).
- To ensure high availability of your Cloud Service (extended support) applications, deploy two or more role instances.
- Custom autoscale can only occur when all roles are in a **Ready** state.
Consider the following information when configuring scaling of your Cloud Servic
## Next steps

- Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
+- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cloud-services Cloud Services How To Scale Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-scale-portal.md
You should consider the following information before you configure scaling for y
Larger role instances use more cores. You can scale an application only within the limit of cores for your subscription. For example, say your subscription has a limit of 20 cores. If you run an application with two medium-sized cloud services (a total of 4 cores), you can only scale up other cloud service deployments in your subscription by the remaining 16 cores. For more information about sizes, see [Cloud Service Sizes](cloud-services-sizes-specs.md).
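The core-limit arithmetic in the example above can be sketched as follows (a medium role instance uses 2 cores):

```python
# Core budget from the example above: a subscription limited to 20 cores,
# running two medium-sized cloud services (2 cores each, 4 cores total).
subscription_core_limit = 20
medium_instance_cores = 2
medium_services = 2

cores_in_use = medium_instance_cores * medium_services   # 4 cores in use
cores_remaining = subscription_core_limit - cores_in_use  # cores left for other deployments
print(cores_remaining)  # → 16
```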
-* You can scale based on a queue message threshold. For more information about how to use queues, see [How to use the Queue Storage Service](../storage/queues/storage-dotnet-how-to-use-queues.md).
+* You can scale based on a queue message threshold. For more information about how to use queues, see [How to use the Queue Storage Service](/azure/storage/queues/storage-quickstart-queues-dotnet?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli).
* You can also scale other resources associated with your subscription.
cloud-services Cloud Services Python How To Use Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-how-to-use-service-management.md
sms.delete_deployment('myhostedservice', 'v1')
```

## <a name="CreateStorageService"> </a>Create a storage service
-A [storage service](../storage/common/storage-account-create.md) gives you access to Azure [blobs](../storage/blobs/storage-quickstart-blobs-python.md), [tables](../cosmos-db/table-storage-how-to-use-python.md), and [queues](../storage/queues/storage-python-how-to-use-queue-storage.md). To create a storage service, you need a name for the service (between 3 and 24 lowercase characters and unique within Azure). You also need a description, a label (up to 100 characters, automatically encoded to base64), and a location. The following example shows how to create a storage service by specifying a location:
+A [storage service](../storage/common/storage-account-create.md) gives you access to Azure [blobs](../storage/blobs/storage-quickstart-blobs-python.md), [tables](../cosmos-db/table-storage-how-to-use-python.md), and [queues](/azure/storage/queues/storage-quickstart-queues-python?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli). To create a storage service, you need a name for the service (between 3 and 24 lowercase characters and unique within Azure). You also need a description, a label (up to 100 characters, automatically encoded to base64), and a location. The following example shows how to create a storage service by specifying a location:
```python
from azure import *
cloud-services Cloud Services Python Ptvs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-ptvs.md
For more details about using Azure services from your web and worker roles, such
* [Blob Service][Blob Service]
* [Table Service][Table Service]
-* [Queue Service][Queue Service]
+* [Queue Service](/azure/storage/queues/storage-quickstart-queues-python?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli)
* [Service Bus Queues][Service Bus Queues] * [Service Bus Topics][Service Bus Topics]
communication-services Advisor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advisor-overview.md
Title: Use Azure Advisor for Azure Communication Services
description: Learn about Azure Advisor offerings for Azure Communication Services. - - Last updated 10/10/2022
communication-services Chat Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/chat-logs.md
Title: Azure Communication Services chat logs description: Learn about logging for Azure Communication Services chat.-+ --+ Last updated 03/21/2023
communication-services Network Traversal Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/network-traversal-logs.md
Title: Azure Communication Services Network Traversal logs description: Learn about logging for Azure Communication Services Network Traversal.-+ --+ Last updated 03/21/2023
communication-services Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-flows.md
description: Learn about call flows in Azure Communication Services.
- Last updated 06/30/2021
communication-services Network Diagnostic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/network-diagnostic.md
Title: Developer tools - Network Diagnostics Tool for Azure Communication Services description: Conceptual documentation outlining the capabilities provided by the Network Test Tool.-+ --+ Last updated 11/16/2022
communication-services Real Time Inspection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/real-time-inspection.md
Title: Developer Tools - Azure Communication Services Communication Monitoring description: Conceptual documentation outlining the capabilities provided by the Communication Monitoring tool.-+ --+ Last updated 03/29/2022
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/privacy.md
Title: Region availability and data residency for Azure Communication Services description: Learn about data residency, and privacy related matters on Azure Communication Services-+
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-interop.md
Title: Teams interoperability description: Teams interoperability-+ Last updated 06/30/2021
communication-services Pre Call Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/pre-call-diagnostics.md
Title: Azure Communication Services Pre-Call diagnostics description: Overview of Pre-Call Diagnostic APIs-+ --+ Last updated 04/01/2021
communication-services Record Every Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/record-every-call.md
Title: Record a call when it starts description: In this how-to document, you can learn how to record a call through Azure Communication Services once it starts.-+ -+ Last updated 03/01/2023
communication-services Local Testing Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/event-grid/local-testing-event-grid.md
Title: Test your Event Grid handler locally description: In this how-to document, you can learn how to locally test your Event Grid handler for Azure Communication Services events with Postman.-+ -+ Last updated 02/09/2023
communication-services View Events Request Bin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/event-grid/view-events-request-bin.md
Title: Validate Azure Communication Services events description: In this how-to document, you can learn how to validate Azure Communication Services events with RequestBin or Azure Event Viewer.-+ -+ Last updated 02/09/2023
communication-services Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/call-recording/bring-your-own-storage.md
Refer to this example of the event schema.
For more information, see the following articles:

-- Download our [Java](https://github.com/Azure-Samples/communication-services-java-quickstarts/tree/main/ServerRecording) and [.NET](https://github.com/Azure-Samples/communication-services-dotnet-quickstarts/tree/main/ServerRecording) call recording sample apps
+- Download our [Java](https://github.com/Azure-Samples/communication-services-java-quickstarts/tree/main/ServerRecording) call recording sample app
- Learn more about [Call Recording](../../../concepts/voice-video-calling/call-recording.md)
- Learn more about [Call Automation](../../../concepts/call-automation/call-automation.md)
communication-services Receive Sms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/receive-sms.md
Title: Quickstart - Receive and Reply to SMS description: "In this quickstart, you'll learn how to receive an SMS message by using Azure Communication Services."-+ -+ Last updated 02/09/2023
communication-services Get Started Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-call-recording.md
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles:

-- Download our [Java](https://github.com/Azure-Samples/communication-services-java-quickstarts/tree/main/ServerRecording), [.NET](https://github.com/Azure-Samples/communication-services-dotnet-quickstarts/tree/main/ServerRecording), [Python](https://github.com/Azure-Samples/communication-services-python-quickstarts/tree/main/call-recording), and [JavaScript](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/call-recording) call recording sample apps
+- Download our [Java](https://github.com/Azure-Samples/communication-services-java-quickstarts/tree/main/ServerRecording), [Python](https://github.com/Azure-Samples/communication-services-python-quickstarts/tree/main/call-recording), and [JavaScript](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/call-recording) call recording sample apps
- Learn more about [Call Recording](../../concepts/voice-video-calling/call-recording.md)
- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md)
communication-services Get Started Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop.md
Title: Quickstart - Teams interop on Azure Communication Services description: In this quickstart, you learn how to join a Teams meeting with the Azure Communication Calling SDK.-+ Last updated 06/30/2021
communication-services Events Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/events-playbook.md
Title: Build a custom event management platform with Microsoft Teams, Graph and Azure Communication Services description: Learn how to use Microsoft Teams, Graph and Azure Communication Services to build a custom event management platform.-+ --+ Last updated 03/31/2022
communication-services Sms Url Shortener https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/sms-url-shortener.md
Title: Tutorial - Send shortener links through SMS with Azure Communication Services description: Learn how to use the Azure URL Shortener sample to send short links through SMS.-+ -+ Last updated 03/8/2023
communication-services Trusted Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/trusted-service-tutorial.md
Title: Build a trusted user access service using Azure Functions in Azure Communication Services description: Learn how to create a trusted user access service for Communication Services with Azure Functions-+ --+ Last updated 06/30/2021
communication-services Click To Call Widget https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/widgets/click-to-call-widget.md
Title: Tutorial - Embed a Teams call widget into your web application description: Learn how to use Azure Communication Services to embed a calling widget into your web application.-+ --+ Last updated 04/17/2023
communications-gateway Manage Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/manage-enterprise.md
+
+ Title: Manage an enterprise in Azure Communications Gateway's Number Management Portal
+description: Learn how to manage enterprises and numbers for Operator Connect and Teams Phone Mobile with Azure Communications Gateway's API Bridge Number Management Portal.
++++ Last updated : 07/17/2023+++
+# Manage an enterprise in Azure Communications Gateway's Number Management Portal
+
+Azure Communications Gateway's Number Management Portal enables you to manage enterprise customers and their numbers through the Azure portal.
+
+The Operator Connect and Teams Phone Mobile programs don't allow you to use the Operator Connect portal for provisioning after you've launched your service in the Teams Admin Center. The Number Management Portal is a simple alternative that you can use until you've finished integrating with the Operator Connect APIs.
+
+> [!IMPORTANT]
+> You must have selected Azure Communications Gateway's API Bridge option to use the Number Management Portal.
+
+## Prerequisites
+
+Confirm that you have [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for the Project Synergy enterprise application and **Reader** access to the Azure portal for your subscription. If you don't have these permissions, ask your administrator to set them up by following [Set up user roles for Azure Communications Gateway](provision-user-roles.md).
+
+If you're assigning new numbers to an enterprise customer:
+
+* You must know the numbers you need to assign (as E.164 numbers). Each number must:
+ * Contain only digits (0-9), with an optional `+` at the start.
+ * Include the country code.
+ * Be up to 19 characters long.
+* You must have completed any internal procedures for assigning numbers.
+* You need to know the following information for each range of numbers.
+
+|Information for each range of numbers |Notes |
+|||
+|Calling profile |One of the Calling Profiles created by Microsoft for you.|
+|Intended usage | Individuals (calling users), applications or conference calls.|
+|Capabilities |Which types of call to allow (for example, inbound calls or outbound calls).|
+|Civic address | A physical location for emergency calls. The enterprise must have configured this address in the Teams Admin Center. Only required for individuals (calling users) and only if you don't allow the enterprise to update the address.|
+|Location | A description of the location for emergency calls. The enterprise must have configured this location in the Teams Admin Center. Only required for individuals (calling users) and only if you don't allow the enterprise to update the address.|
+|Whether the enterprise can update the civic address or location | If you don't allow the enterprise to update the civic address or location, you must specify a civic address or location. You can specify an address or location and also allow the enterprise to update it.|
+|Country | The country for the number. Required only if you're uploading a North American Toll-Free number; otherwise optional.|
+|Ticket number (optional) |The ID of any ticket or other request that you want to associate with this range of numbers. Up to 64 characters. |
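
As a pre-upload sanity check, the number format rules above (digits only, optional leading `+`, up to 19 characters) can be sketched as a small shell function. `is_valid_e164` is a hypothetical helper for illustration, not part of the Number Management Portal:

```shell
# Hypothetical helper: check a number against the rules above
# (only digits 0-9, optional leading +, at most 19 characters).
is_valid_e164() {
  case "$1" in
    +[0-9]*) digits="${1#+}" ;;
    [0-9]*)  digits="$1" ;;
    *)       return 1 ;;
  esac
  # enforce the length limit and reject any non-digit characters
  [ "${#1}" -le 19 ] && [ -z "${digits//[0-9]/}" ]
}

is_valid_e164 "+14255550100" && echo "+14255550100 looks valid"
is_valid_e164 "1425-555-0100" || echo "1425-555-0100 is rejected"
```

The exact validation your upload needs may differ; treat this as a sketch of the stated rules only.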
+
+## 1. Go to your Communications Gateway resource
+
+1. Sign in to the [Azure portal](https://azure.microsoft.com/).
+1. In the search bar at the top of the page, search for your Communications Gateway resource.
+1. Select your Communications Gateway resource.
+
+## 2. Select an enterprise customer to manage
+
+When an enterprise customer uses the Teams Admin Center to request service, the Operator Connect APIs create a **consent**. This consent represents the relationship between you and the enterprise.
+
+The Number Management Portal allows you to update the status of these consents. Finding the consent for an enterprise is also the easiest way to manage numbers for an enterprise.
+
+1. From the overview page for your Communications Gateway resource, select **Consents** in the sidebar.
+1. Find the enterprise that you want to manage.
+1. If you need to change the status of the relationship, select **Update Relationship Status** from the menu for the enterprise. Set the new status. For example, if you're agreeing to provide service to a customer, set the status to **Agreement signed**. If you set the status to **Consent Declined** or **Contract Terminated**, you must provide a reason.
+
+## 3. Manage numbers for the enterprise
+
+Assigning numbers to an enterprise allows IT administrators at the enterprise to allocate those numbers to their users.
+
+1. Go to the number management page for the enterprise.
+ * If you followed [2. Select an enterprise customer to manage](#2-select-an-enterprise-customer-to-manage), select **Manage numbers** from the menu.
+ * Otherwise, select **Numbers** in the sidebar and search for the enterprise using the enterprise's Azure Active Directory tenant ID.
+1. To add new numbers for an enterprise:
+ 1. Select **Upload numbers**.
+ 1. Fill in the fields based on the information you determined in [Prerequisites](#prerequisites). These settings apply to all the numbers you upload in the **Telephone numbers** section.
+ 1. In **Telephone numbers**, upload the numbers, as a comma-separated list.
+ 1. Select **Review + upload** and **Upload**. Uploading creates an order for uploading numbers over the Operator Connect API.
+ 1. Wait 30 seconds, then refresh the order status. When the order status is **Complete**, the numbers are available to the enterprise. You might need to refresh more than once.
+1. To remove numbers from an enterprise:
+ 1. Select the numbers.
+ 1. Select **Release numbers**.
+ 1. Wait 30 seconds, then refresh the order status. When the order status is **Complete**, the numbers have been removed.
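
The "wait 30 seconds, then refresh" steps above follow a generic polling pattern, sketched below. `get_order_status` is a hypothetical stub standing in for refreshing the order status in the portal:

```shell
# Hypothetical stub: stands in for refreshing the order status in the portal.
get_order_status() { echo "Complete"; }

# Poll a bounded number of times, pausing between checks,
# until the order reports Complete.
poll_until_complete() {
  attempts=0
  while [ "$attempts" -lt 5 ]; do
    [ "$(get_order_status)" = "Complete" ] && return 0
    attempts=$((attempts + 1))
    sleep 30
  done
  return 1
}

poll_until_complete && echo "order complete"
```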
+
+## 4. View civic addresses for an enterprise
+
+You can view civic addresses for an enterprise. The enterprise configures the details of each civic address, so you can't configure these details.
+
+1. Go to the civic address page for the enterprise.
+ * If you followed [2. Select an enterprise customer to manage](#2-select-an-enterprise-customer-to-manage), select **Civic addresses** from the menu.
+ * Otherwise, select **Civic addresses** in the sidebar and search for the enterprise using the enterprise's Azure Active Directory tenant ID.
+1. View the civic addresses. You can see the address, the company name, the description and whether the address was validated when the enterprise configured the address.
+1. Optionally, select an individual address to view additional information provided by the enterprise (for example, the ELIN information).
+
+## Next steps
+
+Learn more about [the metrics you can use to monitor calls](monitoring-azure-communications-gateway-data-reference.md).
communications-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/overview.md
Azure Communications Gateway can use the Operator Connect APIs to upload informa
### API Bridge Number Management Portal
-Operator Connect and Teams Phone Mobile require API integration between your IT systems and Microsoft Teams for flow-through provisioning and automation. After your deployment has been certified and launched, you must not use the Operator Connect portal for provisioning. You can use Azure Communications Gateway's Number Management Portal instead. This portal enables you to pass the certification process and sell Operator Connect or Teams Phone Mobile services while you carry out a custom API integration project
+Operator Connect and Teams Phone Mobile require API integration between your IT systems and Microsoft Teams for flow-through provisioning and automation. After your deployment has been certified and launched, you must not use the Operator Connect portal for provisioning. You can use Azure Communications Gateway's Number Management Portal instead. This Azure portal feature enables you to pass the certification process and sell Operator Connect or Teams Phone Mobile services while you carry out a custom API integration project.
The Number Management Portal is available as part of the optional API Bridge feature.
communications-gateway Provision User Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/provision-user-roles.md
Your staff might need different user roles, depending on the tasks they need to
| Deploying Azure Communications Gateway |**Contributor** access to your subscription| | Raising support requests |**Owner**, **Contributor** or **Support Request Contributor** access to your subscription or a custom role with `Microsoft.Support/*` access at the subscription level| |Monitoring logs and metrics | **Reader** access to your subscription|
-|Using the API Bridge Number Management Portal|**NumberManagement.Read**, **NumberManagement.Write**, **PartnerSettings.Read**, and **PartnerSettings.Write** permissions for the Project Synergy enterprise application and **Reader** permissions to the Azure portal for your subscription|
+|Using the API Bridge Number Management Portal| [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for the Project Synergy enterprise application and **Reader** permissions to the Azure portal for your subscription|
## 2. Configure user roles
You need to use the Azure portal to configure user roles.
### 2.2 Assign a user role 1. Follow the steps in [Assign a user role using the Azure portal](../role-based-access-control/role-assignments-portal.md) to assign the permissions you determined in [1. Understand the user roles required for Azure Communications Gateway](#1-understand-the-user-roles-required-for-azure-communications-gateway).
-1. If you're managing access to the API Bridge Number Management Portal, follow [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md) to assign **NumberManagement.Read**, **NumberManagement.Write**, **PartnerSettings.Read**, and **PartnerSettings.Write** permissions for each user in the Project Synergy application.
+1. If you're managing access to the API Bridge Number Management Portal, follow [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md) to assign [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for each user in the Project Synergy application.
## Next steps
connectors Connectors Create Api Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md
ms.suite: integration Previously updated : 06/13/2023 Last updated : 07/25/2023 tags: connectors
The Service Bus connector has different versions, based on [logic app workflow t
|--|-|-| | **Consumption** | Multi-tenant Azure Logic Apps | Managed connector (Standard class). For more information, review the following documentation: <br><br>- [Service Bus managed connector reference](/connectors/servicebus/) <br>- [Managed connectors in Azure Logic Apps](managed.md) | | **Consumption** | Integration service environment (ISE) | Managed connector (Standard class) and ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [Service Bus managed connector reference](/connectors/servicebus/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector (Azure-hosted) and built-in connector, which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in version usually provides better performance, capabilities, pricing, and so on. <br><br>For more information, review the following documentation: <br><br>- [Service Bus managed connector reference](/connectors/servicebus/) <br>- [Service Bus built-in connector operations](#built-in-connector-operations) section later in this article <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector (Azure-hosted) and built-in connector, which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in version usually provides better performance, capabilities, pricing, and so on. <br><br>**Note**: Service Bus built-in connector triggers follow the [*polling trigger*](introduction.md#triggers) pattern, which means that the trigger continually checks for messages in the queue or topic subscription. <br><br>For more information, review the following documentation: <br><br>- [Service Bus managed connector reference](/connectors/servicebus/) <br>- [Service Bus built-in connector operations](/azure/logic-apps/connectors/built-in/reference/servicebus) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
## Prerequisites
The built-in Service Bus connector is a stateless connector, by default. To run
-<a name="built-in-connector-operations"></a>
+<a name="built-in-connector-app-settings"></a>
-## Service Bus built-in connector operations
+## Service Bus built-in connector app settings
-The Service Bus built-in connector is available only for Standard logic app workflows and provides the following triggers and actions:
+In a Standard logic app resource, the Service Bus built-in connector includes app settings that control various thresholds, such as timeout for sending messages and number of message senders per processor core in the message pool. For more information, review [Reference for app settings - local.settings.json](../logic-apps/edit-app-settings-host-settings.md#reference-local-settings-json).
-| Trigger | Description |
-|-- |-|
-| When messages are available in a queue | Start a workflow when one or more messages are available in a queue. |
-| When messages are available in a topic subscription | Start a workflow when one or more messages are available in a topic subscription. |
+<a name="read-messages-dead-letter-queues"></a>
-These Service Bus triggers follow the *polling trigger* pattern, which means that the trigger continually checks for messages in the queue or topic subscription. For more general information about polling triggers, review [Triggers](introduction.md#triggers).
+## Read messages from dead-letter queues with Service Bus built-in triggers
-| Action | Description |
-|--|-|
-| Send message | Send a message to a queue or topic. |
-| Send multiple messages | Send more than one message to a queue or topic. |
+In Standard workflows, to read a message from a dead-letter queue in a queue or a topic subscription, follow these steps using the specified triggers:
-<a name="built-in-connector-app-settings"></a>
+1. In your blank workflow, based on your scenario, add the Service Bus *built-in* connector trigger named **When messages are available in a queue** or **When messages are available in a topic subscription (peek-lock)**.
-## Service Bus built-in connector app settings
+1. In the trigger, set the following parameter values to specify your queue or topic subscription's default dead-letter queue, which you can access like any other queue:
-In a Standard logic app resource, the Service Bus built-in connector includes app settings that control various thresholds, such as timeout for sending messages and number of message senders per processor core in the message pool. For more information, review [Reference for app settings - local.settings.json](../logic-apps/edit-app-settings-host-settings.md#reference-local-settings-json).
+ * **When messages are available in a queue** trigger: Set the **Queue name** parameter to **queuename/$deadletterqueue**.
+
+ * **When messages are available in a topic subscription (peek-lock)** trigger: Set the **Topic name** parameter to **topicname/Subscriptions/subscriptionname/$deadletterqueue**.
+
+ For more information, see [Service Bus dead-letter queues overview](../service-bus-messaging/service-bus-dead-letter-queues.md#path-to-the-dead-letter-queue).
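
The dead-letter paths described above can be built from the queue name, or from the topic and subscription names. The names here (`orders`, `events`, `audit`) are illustrative:

```shell
# Illustrative names only; substitute your own entities.
queue="orders"
topic="events"
subscription="audit"

# Queue dead-letter path: <queue>/$deadletterqueue
queue_dlq="${queue}/\$deadletterqueue"

# Topic subscription dead-letter path:
# <topic>/Subscriptions/<subscription>/$deadletterqueue
topic_dlq="${topic}/Subscriptions/${subscription}/\$deadletterqueue"

echo "$queue_dlq"
echo "$topic_dlq"
```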
## Troubleshooting
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
A resource's `properties` object has the following properties:
||||| | `daprAIInstrumentationKey` | The Application Insights instrumentation key used by Dapr. | string | No | | `appLogsConfiguration` | The environment's logging configuration. | Object | No |
+| `peerAuthentication` | How to enable mTLS encryption. | Object | No |
### <a name="container-apps-environment-examples"></a>Examples
container-apps Ingress Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-overview.md
Container Apps supports IP restrictions for ingress. You can create rules to eit
Azure Container Apps provides built-in authentication and authorization features to secure your external ingress-enabled container app. For more information, see [Authentication and authorization in Azure Container Apps](authentication.md).
-You can configure your app to support client certificates (mTLS) for authentication and traffic encryption. For more information, see [Configure client certificates](client-certificate-authorization.md)
+You can configure your app to support client certificates (mTLS) for authentication and traffic encryption. For more information, see [Configure client certificates](client-certificate-authorization.md).
+For details on how to use mTLS for environment level network encryption, see the [networking overview](./networking.md#mtls).
## Traffic splitting
container-apps Network Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/network-proxy.md
Requests that come in to ports `80` and `443` are internally routed to the appro
## Security - HTTP requests are automatically redirected to HTTPS-- Envoy terminates TLS after crossing its boundary
- - Envoy sends requests to apps over HTTP in plain text
-- mTLS is only available when using Dapr
- - When you use Dapr service invocation APIs, mTLS is enabled. However, because Envoy terminates mTLS, inbound calls from Envoy to Dapr-enabled container apps isn't encrypted.
+ - You can disable this by setting `allowInsecure` to `true` in the ingress configuration
+- TLS terminates at the ingress
+ - You can enable [Environment level network encryption](networking.md#mtls) for full end-to-end encryption for requests between the ingress and an app and between different apps
HTTPS, gRPC, and HTTP/2 all follow the same architectural model.
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
With the workload profiles environment (preview), you can fully secure your ingr
- Integrate your Container Apps with an Application Gateway. For steps, see [here](./waf-app-gateway.md). - Configure UDR to route all traffic through Azure Firewall. For steps, see [here](./user-defined-routes.md).
+## <a name="mtls"></a> Environment level network encryption - preview
+
+Azure Container Apps supports environment level network encryption using mutual transport layer security (mTLS). When end-to-end encryption is required, mTLS will encrypt data transmitted between applications within an environment. Applications within a Container Apps environment are automatically authenticated. However, Container Apps currently does not support authorization for access control between applications using the built-in mTLS.
+
+When your apps are communicating with a client outside of the environment, two-way authentication with mTLS is supported. To learn more, see [Configure client certificates](client-certificate-authorization.md).
+
+> [!NOTE]
+> Enabling mTLS for your applications may increase response latency and reduce maximum throughput in high-load scenarios.
+
+# [Azure CLI](#tab/azure-cli)
+
+You can enable mTLS using the following commands.
+
+On create:
+```azurecli
+az containerapp env create \
+ --name <environment-name> \
+ --resource-group <resource-group> \
+ --location <location> \
+ --enable-mtls
+```
+
+For an existing environment:
+```azurecli
+az containerapp env update \
+ --name <environment-name> \
+ --resource-group <resource-group> \
+ --enable-mtls
+```
+
+# [ARM template](#tab/arm-template)
+
+You can enable mTLS in the ARM template for Container Apps environments using the following configuration.
+
+```json
+{
+ ...
+ "properties": {
+ "peerAuthentication":{
+ "mtls": {
+        "enabled": true
+ }
+ }
+ ...
+  }
+}
+```
++ ## DNS - **Custom DNS**: If your VNet uses a custom DNS server instead of the default Azure-provided DNS server, configure your DNS server to forward unresolved DNS queries to `168.63.129.16`. [Azure recursive resolvers](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) use this IP address to resolve requests. When configuring your NSG or Firewall, don't block the `168.63.129.16` address; otherwise, your Container Apps environment won't function.
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 07/18/2023 Last updated : 07/25/2023 # Azure Policy built-in definitions for Azure Container Instances
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry
description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
cosmos-db Social Media Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/social-media-apps.md
Creating feeds is just a matter of creating documents that can hold a list of po
You could have a "latest" stream with posts ordered by creation date. Or you could have a "hottest" stream with those posts with more likes in the last 24 hours. You could even implement a custom stream for each user based on logic like followers and interests. It would still be a list of posts. It's a matter of how to build these lists, but the reading performance stays unhindered. Once you acquire one of these lists, you issue a single query to Azure Cosmos DB using the [IN keyword](sql-query-keywords.md#in) to get pages of posts at a time.
-The feed streams could be built using [Azure App Services'](https://azure.microsoft.com/services/app-service/) background processes: [Webjobs](../app-service/webjobs-create.md). Once a post is created, background processing can be triggered by using [Azure Storage](https://azure.microsoft.com/services/storage/) [Queues](../storage/queues/storage-dotnet-how-to-use-queues.md) and Webjobs triggered using the [Azure Webjobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki), implementing the post propagation inside streams based on your own custom logic.
+The feed streams could be built using [Azure App Services'](https://azure.microsoft.com/services/app-service/) background processes: [Webjobs](../app-service/webjobs-create.md). Once a post is created, background processing can be triggered by using [Azure Storage](https://azure.microsoft.com/services/storage/) [Queues](/azure/storage/queues/storage-quickstart-queues-dotnet?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli) and Webjobs triggered using the [Azure Webjobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki), implementing the post propagation inside streams based on your own custom logic.
Points and likes over a post can be processed in a deferred manner using this same technique to create an eventually consistent environment.
data-factory How To Change Data Capture Resource With Schema Evolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-change-data-capture-resource-with-schema-evolution.md
Last updated 07/21/2023
-# How to capture changed data with Schema evolution from Azure SQL DB to Delta sink using a Change Data Capture (CDC) resource
+# How to capture changed data with schema evolution from Azure SQL DB to Delta sink using a Change Data Capture (CDC) resource
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)] In this tutorial, you will use the Azure Data Factory user interface (UI) to create a new Change Data Capture (CDC) resource that picks up changed data from an Azure SQL Database source to Delta Lake stored in Azure Data Lake Storage (ADLS) Gen2 in real-time showcasing the support of schema evolution. The configuration pattern in this tutorial can be modified and expanded upon.
In this tutorial, you follow these steps:
> [!NOTE] > To enable Change Data Capture (CDC) with schema evolution in SQL Azure Database source, we should choose watermark column-based tables rather than native SQL CDC enabled tables.
-8. Once you've selected a folder path, select **Continue** to set your data target.
+8. Once you've selected the source table(s), select **Continue** to set your data target.
:::image type="content" source="media/adf-cdc/change-data-capture-resource-107.png" alt-text="Screenshot of the continue button in the guided process to proceed to select data targets.":::
In this tutorial, you follow these steps:
## Make dynamic schema changes at source
-1. Now you can proceed to make schema level changes to the source tables. For this tutorial, we will use the Alter table T-SQL to add a new column to the source table.
+1. Now you can proceed to make schema level changes to the source tables. For this tutorial, we will use the Alter table T-SQL to add a new column "PersonalEmail" to the source table.
:::image type="content" source="media/adf-cdc/change-data-capture-resource-125.png" alt-text="Screenshot of Alter command in Azure Data Studio.":::
-2. You can validate that the new column as been added to the existing table.
+2. You can validate that the new column "PersonalEmail" has been added to the existing table.
:::image type="content" source="media/adf-cdc/change-data-capture-resource-126.png" alt-text="Screenshot of the new table design."::: ## Validate schema changes at target Delta
-1. Validate change data with schema changes have landed at the Delta sink. For this tutorial, you can see the new column has been added to the sink.
+1. Validate change data with schema changes have landed at the Delta sink. For this tutorial, you can see the new column "PersonalEmail" has been added to the sink.
:::image type="content" source="media/adf-cdc/change-data-capture-resource-128.png" alt-text="Screenshot of actual Delta file with schema change." lightbox="media/adf-cdc/change-data-capture-resource-128.png":::
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 07/18/2023 Last updated : 07/25/2023 # Azure Policy built-in definitions for Data Factory
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
data-lake-store Data Lake Store Copy Data Azure Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-copy-data-azure-storage-blob.md
> >
-Data Lake Storage Gen1 provides a command-line tool, [AdlCopy](https://www.microsoft.com/download/details.aspx?id=50358), to copy data from the following sources:
+Data Lake Storage Gen1 provides a command-line tool, AdlCopy, to copy data from the following sources:
* From Azure Storage blobs into Data Lake Storage Gen1. You can't use AdlCopy to copy data from Data Lake Storage Gen1 to Azure Storage blobs. * Between two Data Lake Storage Gen1 accounts.
Before you begin this article, you must have the following:
* **Azure Storage blobs** container with some data. * **A Data Lake Storage Gen1 account**. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md) * **Data Lake Analytics account (optional)** - See [Get started with Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-get-started-portal.md) for instructions on how to create a Data Lake Analytics account.
-* **AdlCopy tool**. Install the [AdlCopy tool](https://www.microsoft.com/download/details.aspx?id=50358).
+* **AdlCopy tool**. Install the AdlCopy tool.
## Syntax of the AdlCopy tool
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Previously updated : 07/18/2023 Last updated : 07/25/2023
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | |-|-|:--:|-| | **Anomalous network protocol usage**<br>(AzureDNS_ProtocolAnomaly) | Analysis of DNS transactions from %{CompromisedEntity} detected anomalous protocol usage. Such traffic, while possibly benign, may indicate abuse of this common protocol to bypass network traffic filtering. Typical related attacker activity includes copying remote administration tools to a compromised host and exfiltrating user data from it. | Exfiltration | - |
-| **Anonymity network activity**<br>(AzureDNS_DarkWeb) | Analysis of DNS transactions from %{CompromisedEntity} detected anonymity network activity. Such activity, while possibly legitimate user behavior, is frequently employed by attackers to evade tracking and fingerprinting of network communications. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
-| **Anonymity network activity using web proxy**<br>(AzureDNS_DarkWebProxy) | Analysis of DNS transactions from %{CompromisedEntity} detected anonymity network activity. Such activity, while possibly legitimate user behavior, is frequently employed by attackers to evade tracking and fingerprinting of network communications. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
-| **Attempted communication with suspicious sinkholed domain**<br>(AzureDNS_SinkholedDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected request for sinkholed domain. Such activity, while possibly legitimate user behavior, is frequently an indication of the download or execution of malicious software. Typical related attacker activity is likely to include the download and execution of further malicious software or remote administration tools. | Exfiltration | - |
-| **Communication with possible phishing domain**<br>(AzureDNS_PhishingDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected a request for a possible phishing domain. Such activity, while possibly benign, is frequently performed by attackers to harvest credentials to remote services. Typical related attacker activity is likely to include the exploitation of any credentials on the legitimate service. | Exfiltration | - |
-| **Communication with suspicious algorithmically generated domain**<br>(AzureDNS_DomainGenerationAlgorithm) | Analysis of DNS transactions from %{CompromisedEntity} detected possible usage of a domain generation algorithm. Such activity, while possibly benign, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
+| **Anonymity network activity**<br>(AzureDNS_DarkWeb) | Analysis of DNS transactions from %{CompromisedEntity} detected anonymity network activity. Such activity, while possibly legitimate user behavior, is frequently employed by attackers to evade tracking and fingerprinting of network communications. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Low |
+| **Anonymity network activity using web proxy**<br>(AzureDNS_DarkWebProxy) | Analysis of DNS transactions from %{CompromisedEntity} detected anonymity network activity. Such activity, while possibly legitimate user behavior, is frequently employed by attackers to evade tracking and fingerprinting of network communications. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Low |
+| **Attempted communication with suspicious sinkholed domain**<br>(AzureDNS_SinkholedDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected a request for a sinkholed domain. Such activity, while possibly legitimate user behavior, is frequently an indication of the download or execution of malicious software. Typical related attacker activity is likely to include the download and execution of further malicious software or remote administration tools. | Exfiltration | Medium |
+| **Communication with possible phishing domain**<br>(AzureDNS_PhishingDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected a request for a possible phishing domain. Such activity, while possibly benign, is frequently performed by attackers to harvest credentials to remote services. Typical related attacker activity is likely to include the exploitation of any credentials on the legitimate service. | Exfiltration | Low |
+| **Communication with suspicious algorithmically generated domain**<br>(AzureDNS_DomainGenerationAlgorithm) | Analysis of DNS transactions from %{CompromisedEntity} detected possible usage of a domain generation algorithm. Such activity, while possibly benign, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Low |
| **Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with suspicious domain was detected by analyzing DNS transactions from your resource and comparing against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access | Medium |
-| **Communication with suspicious random domain name**<br>(AzureDNS_RandomizedDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected usage of a suspicious randomly generated domain name. Such activity, while possibly benign, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
-| **Digital currency mining activity**<br>(AzureDNS_CurrencyMining) | Analysis of DNS transactions from %{CompromisedEntity} detected digital currency mining activity. Such activity, while possibly legitimate user behavior, is frequently performed by attackers following compromise of resources. Typical related attacker activity is likely to include the download and execution of common mining tools. | Exfiltration | - |
-| **Network intrusion detection signature activation**<br>(AzureDNS_SuspiciousDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected a known malicious network signature. Such activity, while possibly legitimate user behavior, is frequently an indication of the download or execution of malicious software. Typical related attacker activity is likely to include the download and execution of further malicious software or remote administration tools. | Exfiltration | - |
-| **Possible data download via DNS tunnel**<br>(AzureDNS_DataInfiltration) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
-| **Possible data exfiltration via DNS tunnel**<br>(AzureDNS_DataExfiltration) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
-| **Possible data transfer via DNS tunnel**<br>(AzureDNS_DataObfuscation) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
+| **Communication with suspicious random domain name**<br>(AzureDNS_RandomizedDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected usage of a suspicious randomly generated domain name. Such activity, while possibly benign, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Low |
+| **Digital currency mining activity**<br>(AzureDNS_CurrencyMining) | Analysis of DNS transactions from %{CompromisedEntity} detected digital currency mining activity. Such activity, while possibly legitimate user behavior, is frequently performed by attackers following compromise of resources. Typical related attacker activity is likely to include the download and execution of common mining tools. | Exfiltration | Low |
+| **Network intrusion detection signature activation**<br>(AzureDNS_SuspiciousDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected a known malicious network signature. Such activity, while possibly legitimate user behavior, is frequently an indication of the download or execution of malicious software. Typical related attacker activity is likely to include the download and execution of further malicious software or remote administration tools. | Exfiltration | Medium |
+| **Possible data download via DNS tunnel**<br>(AzureDNS_DataInfiltration) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Low |
+| **Possible data exfiltration via DNS tunnel**<br>(AzureDNS_DataExfiltration) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Low |
+| **Possible data transfer via DNS tunnel**<br>(AzureDNS_DataObfuscation) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Low |
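Several of the alerts above (AzureDNS_DomainGenerationAlgorithm, AzureDNS_RandomizedDomain) flag algorithmically generated domain names. Defender's actual detection models aren't public, but the underlying idea can be illustrated with a naive entropy heuristic; the 3.5-bit threshold and 12-character minimum below are illustrative assumptions, not product behavior:

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy in bits per character of a domain label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Naive DGA heuristic: long, high-entropy leftmost label.

    Real detectors combine many more signals (n-gram models,
    query volume, NXDOMAIN rates); this is only a sketch.
    """
    label = domain.split(".")[0]
    return len(label) >= 12 and label_entropy(label) > threshold
```

Dictionary words score low ("microsoft" has many repeated, predictable letters), while random labels approach the maximum entropy for their alphabet, which is why entropy is a common first-pass signal.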
## <a name="alerts-azurestorage"></a>Alerts for Azure Storage
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
Agentless scanning for VMs provides vulnerability assessment and software invent
| Supported use cases:| :::image type="icon" source="./media/icons/yes-icon.png"::: Vulnerability assessment (powered by Defender Vulnerability Management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Software inventory (powered by Defender Vulnerability Management)<br />:::image type="icon" source="./media/icons/yes-icon.png"::: Secret scanning (Preview) |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts |
| Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux |
-| Instance types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs) |
+| Instance and disk types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/no-icon.png"::: Unmanaged disks<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs) |
| Encryption: | **Azure**<br>:::image type="icon" source="./medi) with platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – other scenarios using platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – customer-managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - PMK<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - CMK |

## How agentless scanning for VMs works
The scanning environment where disks are analyzed is regional, volatile, isolate
This article explains how agentless scanning works and how it helps you collect data from your machines.

-- Learn more about how to [enable vulnerability assessment with agentless scanning](enable-vulnerability-assessment-agentless.md).
+- Learn more about how to [enable agentless scanning for VMs](enable-vulnerability-assessment-agentless.md).
- Check out Defender for Cloud's [common questions](faq-data-collection-agents.yml) for more information on agentless scanning for machines.
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Estimated date for change |
|--|--|
| [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | June 2023|
-| [Changes to the Defender for DevOps recommendations environment source and resource ID](#changes-to-the-defender-for-devops-recommendations-environment-source-and-resource-id) | July 2023 |
-| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | July 2023 |
| [General availability release of agentless container posture in Defender CSPM](#general-availability-ga-release-of-agentless-container-posture-in-defender-cspm) | July 2023 |
+| [Changes to the Defender for DevOps recommendations environment source and resource ID](#changes-to-the-defender-for-devops-recommendations-environment-source-and-resource-id) | August 2023 |
+| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | August 2023 |
| [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) | August 2023 |
| [Update naming format of Azure Center for Internet Security standards in regulatory compliance](#update-naming-format-of-azure-center-for-internet-security-standards-in-regulatory-compliance) | August 2023 |
| [Preview alerts for DNS servers to be deprecated](#preview-alerts-for-dns-servers-to-be-deprecated) | August 2023 |
The `Key Vaults should have purge protection enabled` recommendation is deprecat
See the [full index of Azure Policy built-in policy definitions for Key Vault](../key-vault/policy-reference.md)
-### Changes to the Defender for DevOps recommendations environment source and resource ID
+### General Availability (GA) release of Agentless Container Posture in Defender CSPM
**Estimated date for change: July 2023**
+The new Agentless Container Posture capabilities are set for General Availability (GA) as part of the Defender CSPM (Cloud Security Posture Management) plan.
+
+Learn more about [Agentless Containers Posture in Defender CSPM](concept-agentless-containers.md).
+
+### Changes to the Defender for DevOps recommendations environment source and resource ID
+
+**Estimated date for change: August 2023**
+
The Security DevOps recommendations will be updated to align with the overall Microsoft Defender for Cloud features and experience. Affected recommendations will point to a new recommendation source environment and have an updated resource ID.

Security DevOps recommendations impacted:
The recommendations page's experience will have minimal impact and deprecated as
### DevOps Resource Deduplication for Defender for DevOps
-**Estimated date for change: July 2023**
+**Estimated date for change: August 2023**
To improve the Defender for DevOps user experience and enable further integration with Defender for Cloud's rich set of capabilities, Defender for DevOps will no longer support duplicate instances of a DevOps organization to be onboarded to an Azure tenant.
If you don't have an instance of a DevOps organization onboarded more than once
Customers will have until July 31, 2023 to resolve this issue. After this date, only the most recent DevOps Connector created where an instance of the DevOps organization exists will remain onboarded to Defender for DevOps. For example, if Organization Contoso exists in both connectorA and connectorB, and connectorB was created after connectorA, then connectorA will be removed from Defender for DevOps.
-### General Availability (GA) release of Agentless Container Posture in Defender CSPM
-
-**Estimated date for change: July 2023**
-
-The new Agentless Container Posture capabilities are set for General Availability (GA) as part of the Defender CSPM (Cloud Security Posture Management) plan.
-
-Learn more about [Agentless Containers Posture in Defender CSPM](concept-agentless-containers.md).
-
### Business model and pricing updates for Defender for Cloud plans

**Estimated date for change: August 2023**
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md
Microsoft tracks the regulatory standards themselves and automatically improves
## What regulatory compliance standards are available in Defender for Cloud?

By default:
-- Azure subscriptions get the Microsoft cloud security benchmark assigned. This is the Microsoft-authored, cloud specific guidelines for security and compliance best practices based on common compliance frameworks. [Learn more about Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
-- AWS accounts get the AWS Foundational Security Best Practices assigned. This is the AWS-specific guideline for security and compliance best practices based on common compliance frameworks.
-- GCP projects get the "GCP Default" standard assigned.
+- Azure subscriptions get the **Microsoft cloud security benchmark** assigned. This is the Microsoft-authored, cloud-specific guideline for security and compliance best practices based on common compliance frameworks. [Learn more about Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
+- AWS accounts get the **AWS Foundational Security Best Practices** standard assigned. This is the AWS-specific guideline for security and compliance best practices based on common compliance frameworks.
+- GCP projects get the **GCP Default** standard assigned.
If a subscription, account, or project has *any* Defender plan enabled, additional standards can be applied.

- **Available regulatory standards**:
-| Standards for Azure subscriptions | Standards for AWS accounts | Standards for GCP projects |
-| | - | |
-| - PCI-DSS v3.2.1 **(deprecated)** | - CIS 1.2.0 | - CIS 1.1.0, 1.2.0 |
-| - PCI DSS v4 | - CIS 1.5.0 | - PCI DSS 3.2.1 |
-| - SOC TSP | - PCI DSS 3.2.1 | - NIST 800 53 |
-| - SOC 2 Type 2 | - AWS Foundational Security Best Practices | - ISO 27001 |
-| - ISO 27001:2013 |||
-| - Azure CIS 1.1.0 |||
-| - Azure CIS 1.3.0 |||
-| - Azure CIS 1.4.0 |||
-| - NIST SP 800-53 R4 |||
-| - NIST SP 800-53 R5 |||
-| - NIST SP 800 171 R2 |||
-| - CMMC Level 3 |||
-| - FedRAMP H |||
-| - FedRAMP M |||
-| - HIPAA/HITRUST |||
-| - SWIFT CSP CSCF v2020 |||
-| - UK OFFICIAL and UK NHS |||
-| - Canada Federal PBMM |||
-| - New Zealand ISM Restricted |||
-| - New Zealand ISM Restricted v3.5 |||
-| - Australian Government ISM Protected |||
-| - RMIT Malaysia |||
+| Standards for Azure subscriptions | Standards for AWS accounts | Standards for GCP projects |
+| - | - | - |
+| PCI-DSS v3.2.1 **(deprecated)** | CIS 1.2.0 | CIS 1.1.0 |
+| PCI DSS v4 | CIS 1.5.0 | CIS 1.2.0 |
+| SOC TSP | PCI DSS v3.2.1 | PCI DSS v3.2.1 |
+| SOC 2 Type 2 | | NIST 800-53 |
+| ISO 27001:2013 | | ISO 27001 |
+| Azure CIS 1.1.0 |||
+| Azure CIS 1.3.0 |||
+| Azure CIS 1.4.0 |||
+| NIST SP 800-53 R4 |||
+| NIST SP 800-53 R5 |||
+| NIST SP 800 171 R2 |||
+| CMMC Level 3 |||
+| FedRAMP H |||
+| FedRAMP M |||
+| HIPAA/HITRUST |||
+| SWIFT CSP CSCF v2020 |||
+| UK OFFICIAL and UK NHS |||
+| Canada Federal PBMM |||
+| New Zealand ISM Restricted |||
+| New Zealand ISM Restricted v3.5 |||
+| Australian Government ISM Protected |||
+| RMIT Malaysia |||
> [!TIP]
> Standards are added to the dashboard as they become available. This table might not contain recently added standards.
deployment-environments Concept Environments Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-key-concepts.md
Last updated 04/25/2023
Learn about the key concepts and components of Azure Deployment Environments. This knowledge can help you more effectively deploy environments for your scenarios.
+This diagram shows the key components of Deployment Environments and how they relate to each other. You can learn more about each component in the following sections.
+
+
## Dev centers

A dev center is a collection of projects that require similar settings. Dev centers enable platform engineers to:
An environment is a collection of Azure resources on which your application is d
in Azure Deployment Environments, you use [managed identities](../active-directory/managed-identities-azure-resources/overview.md) to provide elevation-of-privilege capabilities. Identities can help you provide self-serve capabilities to your development teams without giving them access to the target subscriptions in which the Azure resources are created.
-The managed identity that's attached to the dev center needs to be granted appropriate access to connect to the catalogs. You should grant owner access to the target deployment subscriptions that are configured at the project level. The Azure Deployment Environments service will use the specific managed identity to perform the deployment on behalf of the developer.
+The managed identity that's attached to the dev center needs to be granted appropriate access to connect to the catalogs. You should grant owner access to the target deployment subscriptions that are configured at the project level. The Azure Deployment Environments service uses the specific managed identity to perform the deployment on behalf of the developer.
## Dev center environment types
-You can define the types of environments that development teams can create: for example, dev, test, sandbox, pre-production, or production. Azure Deployment Environments provides the flexibility to name the environment types according to the nomenclature that your enterprise uses. You can configure settings for various environment types based on the specific needs of the development teams.
+You can define the types of environments that development teams can create: for example, dev, test, sandbox, preproduction, or production. Azure Deployment Environments provides the flexibility to name the environment types according to the nomenclature that your enterprise uses. You can configure settings for various environment types based on the specific needs of the development teams.
## Project environment types
Project environment types are a subset of the environment types that you configu
Project environment types allow you to automatically apply the right set of policies on environments and help abstract the Azure governance-related concepts from your development teams. The service also provides the flexibility to preconfigure:

-- The [managed identity](concept-environments-key-concepts.md#identities) that will be used to perform the deployment.
+- The [managed identity](concept-environments-key-concepts.md#identities) that is used to perform the deployment.
- The access levels that the development teams will get after a specific environment is created.

## Catalogs
Deployment environments scan the specified folder of the repository to find [env
## Environment definitions
-An environment definition is a combination of an IaC template and a manifest file. The template defines the environment, and the manifest provides metadata about the template. Your development teams will use the items that you provide in the catalog to create environments in Azure.
+An environment definition is a combination of an IaC template and a manifest file. The template defines the environment, and the manifest provides metadata about the template. Your development teams use the items that you provide in the catalog to create environments in Azure.
> [!NOTE]
> Azure Deployment Environments uses Azure Resource Manager (ARM) templates.
-## ARM templates
+### ARM templates
[ARM templates](../azure-resource-manager/templates/overview.md) help you implement the IaC for your Azure solutions by defining the infrastructure and configuration for your project, the resources to deploy, and the properties of those resources.
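For orientation, here is a minimal illustrative ARM template of the kind an environment definition might reference: it deploys a single storage account with a parameterized name. The resource type, API version, and SKU are example choices, not anything Deployment Environments requires:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageName": {
      "type": "string",
      "metadata": { "description": "Globally unique storage account name." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "[parameters('storageName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

In an environment definition, a template like this sits in the catalog repository alongside its manifest file, which supplies the metadata (name, description, parameters) that developers see when they create an environment.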
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
Last updated 04/25/2023
This quickstart shows you how to create and configure a dev center in Azure Deployment Environments.
-A platform engineering team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
+A platform engineering team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications. To learn more about the components of Azure Deployment Environments, see [Key concepts for Azure Deployment Environments](concept-environments-key-concepts.md).
The following diagram shows the steps you perform in this quickstart to configure a dev center for Azure Deployment Environments in the Azure portal.
The following diagram shows the steps you perform in this quickstart to configur
First, you create a dev center to organize your deployment environments resources. Next, you create a key vault to store the GitHub personal access token (PAT) that is used to grant Azure access to your GitHub repository. Then, you attach an identity to the dev center and assign that identity access to the key vault. Then, you add a catalog that stores your IaC templates to the dev center. Finally, you create environment types to define the types of environments that development teams can create.
-The following diagram shows the steps you perform in the [Create and configure a project quickstart](quickstart-create-and-configure-projects.md) to configure a project associated with a dev center for Deployment Environments in the Azure portal.
+The following diagram shows the steps you perform in the [Create and configure a project quickstart](quickstart-create-and-configure-projects.md) to configure a project associated with a dev center for Deployment Environments.
:::image type="content" source="media/quickstart-create-and-configure-devcenter/dev-box-build-stages-1b.png" alt-text="Diagram showing the stages required to configure a project for Deployment Environments.":::
energy-data-services Resources Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/resources-partner-solutions.md
This article highlights Microsoft partners with software solutions officially su
| Partner | Description | Website/Product link |
| - | -- | -- |
-| Bluware | Bluware enables energy companies to explore the full value of seismic data for exploration, carbon capture, wind farms, and geothermal workflows. Bluware technology on Azure Data Manager for Energy is increasing workflow productivity utilizing the power of Azure. Bluware's flagship seismic deep learning solution, InteractivAI&trade;, drastically improves the effectiveness of interpretation workflows. The interactive experience reduces seismic interpretation time by 10 times from weeks to hours and provides full control over interpretation results. | [Bluware technologies on Azure](https://go.bluware.com/bluware-on-azure-markeplace) [Bluware Products and Evaluation Packages](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bluwarecorp1581537274084.bluwareazurelisting)|
-| Katalyst | Katalyst Data Management&reg; provides the only integrated, end-to-end subsurface data management solution for the oil and gas industry. Over 160 employees operate in North America, Europe and Asia-Pacific, dedicated to enabling digital transformation and optimizing the value of geotechnical information for exploration, production, and M&A activity. |[Katalyst Data Management solution](https://www.katalystdm.com/seismic-news/katalyst-announces-sub-surface-data-management-solution-powered-by-microsoft-energy-data-services/) |
-| Interica | Interica OneView&trade; harnesses the power of application connectors to extract rich metadata from live projects discovered across the organization. IOV scans automatically discover content and extract detailed metadata at the sub-element level. Quickly and easily discover data across multiple file systems and data silos, and clearly determine which projects contain selected data objects to inform business decisions. Live data discovery enables businesses to see a complete holistic view of subsurface project landscapes for improved time to decisions, more efficient data search, and effective storage management. | [Accelerate Azure Data Manager for Energy adoption with Interica OneView&trade;](https://www.petrosys.com.au/interica-oneview-connecting-to-microsoft-data-services/) [Interica OneView&trade;](https://www.petrosys.com.au/assets/Interica_OneView_Accelerate_MEDS_Azure_adoption.pdf) [Interica OneView&trade; connecting to Microsoft Data Services](https://youtu.be/uPEOo3H01w4)|
+| Bluware | Bluware enables you to explore the full value of seismic data for exploration, carbon capture, wind farms, and geothermal workflows. Bluware technology on Azure Data Manager for Energy is increasing workflow productivity utilizing the power of Azure. Bluware's flagship seismic deep learning solution, InteractivAI&trade; drastically improves the effectiveness of interpretation workflows. The interactive experience reduces seismic interpretation time by ten times from weeks to hours and provides full control over interpretation results. | [Bluware technologies on Azure](https://go.bluware.com/bluware-on-azure-markeplace) [Bluware Products and Evaluation Packages](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bluwarecorp1581537274084.bluwareazurelisting)|
+| Katalyst | Katalyst Data Management&reg; provides the only integrated, end-to-end subsurface data management solution for the oil and gas industry. Over 160 employees operate in North America, Europe, and Asia-Pacific, dedicated to enabling digital transformation and optimizing the value of geotechnical information for exploration, production, and M&A activity. |[Katalyst Data Management solution](https://www.katalystdm.com/seismic-news/katalyst-announces-sub-surface-data-management-solution-powered-by-microsoft-energy-data-services/) |
+| Interica | Interica OneView&trade; harnesses the power of application connectors to extract rich metadata from live projects discovered across the organization. IOV scans automatically discover content and extract detailed metadata at the sub-element level. Quickly and easily discover data across multiple file systems and data silos and determine which projects contain selected data objects to inform business decisions. Live data discovery enables businesses to see a holistic view of subsurface project landscapes for improved time to decisions, more efficient data search, and effective storage management. | [Accelerate Azure Data Manager for Energy adoption with Interica OneView&trade;](https://www.petrosys.com.au/interica-oneview-connecting-to-microsoft-data-services/) [Interica OneView&trade;](https://www.petrosys.com.au/assets/Interica_OneView_Accelerate_MEDS_Azure_adoption.pdf) [Interica OneView&trade; connecting to Microsoft Data Services](https://youtu.be/uPEOo3H01w4)|
+|Aspentech|AspenTech and Microsoft are working together to accelerate your digital transformation by optimizing assets to run safer, greener, longer, and faster. With Microsoft's end-to-end solutions and AspenTech's deep domain expertise, we provide capital-intensive industries with a scalable, trusted data environment that delivers the insights you need to optimize assets, performance, and reliability. As partners, we are innovating to achieve operational excellence and empowering the workforce by unlocking new efficiency, safety, sustainability, and profitability levels.|[Help your energy customers transform with new Microsoft Azure Data Manager for Energy](https://blogs.partner.microsoft.com/partner/help-your-energy-customers-transform-with-new-microsoft-energy-data-services/)|
## Next steps To learn more about Azure Data Manager for Energy, visit > [!div class="nextstepaction"]
-> [What is Azure Data Manager for Energy?](overview-microsoft-energy-data-services.md)
+> [What is Azure Data Manager for Energy?](overview-microsoft-energy-data-services.md)
event-grid Mqtt Publish And Subscribe Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-cli.md
In this article, you use the Azure CLI to do the following tasks:
- You need an X.509 client certificate to generate the thumbprint and authenticate the client connection. ## Generate sample client certificate and thumbprint
-If you don't already have a certificate, you can create a sample certificate using the [step CLI](https://smallstep.com/docs/step-cli/installation/). Consider installing manually for Windows. After a successful installation of Step, you should open a command prompt in your user profile folder (Win+R type %USERPROFILE%).
+If you don't already have a certificate, you can create a sample certificate using the [step CLI](https://smallstep.com/docs/step-cli/installation/). On Windows, consider installing it manually.
After you install Step, open a command prompt in your user profile folder (press Win+R and type %USERPROFILE%).
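If the step CLI isn't an option, here's an alternative sketch (not from this article) that uses openssl to create a self-signed client certificate and print its thumbprint. The subject name `client1-authnID` is a placeholder, and whether Event Grid expects a SHA-1 or SHA-256 thumbprint depends on the client validation scheme you configure:

```shell
# Create a self-signed client certificate and private key
# (the subject name is a placeholder -- use your client authentication name)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout client1.key -out client1.pem -subj "/CN=client1-authnID"

# Print the SHA-256 certificate thumbprint with the colons removed
openssl x509 -in client1.pem -noout -fingerprint -sha256 | cut -d= -f2 | tr -d ':'
```

The second command strips the `sha256 Fingerprint=` label and the colon separators, leaving a bare hex string you can paste into the client configuration.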
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
event-hubs Event Hubs About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-about.md
The following sections describe key features of the Azure Event Hubs service:
## Fully managed PaaS Event Hubs is a fully managed Platform-as-a-Service (PaaS) with little configuration or management overhead, so you focus on your business solutions. [Event Hubs for Apache Kafka ecosystems](azure-event-hubs-kafka-overview.md) gives you the PaaS Kafka experience without having to manage, configure, or run your clusters.
+## Event Hubs for Apache Kafka
+[Event Hubs for Apache Kafka ecosystems](azure-event-hubs-kafka-overview.md) furthermore enables [Apache Kafka (1.0 and later)](https://kafka.apache.org/) clients and applications to talk to Event Hubs. You don't need to set up, configure, and manage your own Kafka and Zookeeper clusters or use some Kafka-as-a-Service offering not native to Azure.
+
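As an illustrative sketch (not part of this article), an existing Kafka client usually needs only its connection properties pointed at the Event Hubs Kafka endpoint on port 9093; `NAMESPACE` and the connection string below are placeholders:

```properties
bootstrap.servers=NAMESPACE.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="$ConnectionString" \
  password="<your Event Hubs namespace connection string>";
```

With properties like these in place, standard Kafka producers and consumers (1.0 and later) can generally work against Event Hubs without code changes.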
+## Schema Registry in Azure Event Hubs
+[Azure Schema Registry](schema-registry-overview.md) in Event Hubs provides a centralized repository for managing schemas of event streaming applications. Azure Schema Registry comes free with every Event Hubs namespace, and it integrates seamlessly with your Kafka applications or Event Hubs SDK-based applications.
+
+It ensures data compatibility and consistency across event producers and consumers, enabling seamless schema evolution, validation, and governance, and promoting efficient data exchange and interoperability.
+ ## Support for real-time and batch processing Ingest, buffer, store, and process your stream in real time to get actionable insights. Event Hubs uses a [partitioned consumer model](event-hubs-scalability.md#partitions), enabling multiple applications to process the stream concurrently and letting you control the speed of processing. Azure Event Hubs also integrates with [Azure Functions](../azure-functions/index.yml) for a serverless architecture.
With Event Hubs, you can start with data streams in megabytes, and grow to gigab
## Rich ecosystem With a broad ecosystem available for the industry-standard AMQP 1.0 protocol and SDKs available in various languages: [.NET](https://github.com/Azure/azure-sdk-for-net/), [Java](https://github.com/Azure/azure-sdk-for-java/), [Python](https://github.com/Azure/azure-sdk-for-python/), [JavaScript](https://github.com/Azure/azure-sdk-for-js/), you can easily start processing your streams from Event Hubs. All supported client languages provide low-level integration. The ecosystem also provides you with seamless integration with Azure services like Azure Stream Analytics and Azure Functions and thus enables you to build serverless architectures.
-## Event Hubs for Apache Kafka
-[Event Hubs for Apache Kafka ecosystems](azure-event-hubs-kafka-overview.md) furthermore enables [Apache Kafka (1.0 and later)](https://kafka.apache.org/) clients and applications to talk to Event Hubs. You don't need to set up, configure, and manage your own Kafka and Zookeeper clusters or use some Kafka-as-a-Service offering not native to Azure.
## Event Hubs premium and dedicated Event Hubs **premium** caters to high-end streaming needs that require superior performance, better isolation with predictable latency and minimal interference in a managed multitenant PaaS environment. On top of all the features of the standard offering, the premium tier offers several extra features such as [dynamic partition scale up](dynamically-add-partitions.md), extended retention, and [customer-managed-keys](configure-customer-managed-key.md). For more information, see [Event Hubs Premium](event-hubs-premium-overview.md).
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Chief Telecom](https://www.chief.com.tw/)** |Supported |Supported | Hong Kong<br/>Taipei | | **China Mobile International** |Supported |Supported | Hong Kong<br/>Hong Kong2<br/>Singapore | | **China Telecom Global** |Supported |Supported | Hong Kong<br/>Hong Kong2 |
-| **[China Unicom Global](https://cloudbond.chinaunicom.cn/home-en)** | Supported | Supported | Frankfurt<br/>Hong Kong<br/>Singapore2<br/>Tokyo2 |
| **Chunghwa Telecom** |Supported |Supported | Taipei | | **Claro** |Supported |Supported | Miami | | **Cloudflare** |Supported |Supported | Los Angeles |
firewall Long Running Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/long-running-sessions.md
Azure Firewall scales in/out based on throughput and CPU usage. Scale in is perf
### Firewall maintenance
-The Azure Firewall engineering team updates the firewall on an as-needed basis (usually every month), generally during night time hours in the local time-zone for that region. Updates include security patches, bug fixes, and new feature roll outs that are applied by configuring the firewall in a [rolling update mode](https://azure.microsoft.com/blog/deployment-strategies-defined/). The firewall instances are put in a drain mode before reimaging them to give short-lived sessions time to drain. Long running sessions remaining on an instance after the drain period are dropped during the restart.
+The Azure Firewall engineering team updates the firewall on an as-needed basis (usually every month), generally during nighttime hours in the local time zone for that region. Updates include security patches, bug fixes, and new feature rollouts that are applied by configuring the firewall in a [rolling update mode](https://blog.itaysk.com/2017/11/20/deployment-strategies-defined#rolling-upgrade). The firewall instances are put in a drain mode before reimaging them to give short-lived sessions time to drain. Long running sessions remaining on an instance after the drain period are dropped during the restart.
### Idle timeout
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 07/18/2023 Last updated : 07/25/2023
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 07/18/2023 Last updated : 07/25/2023
guides Azure Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/guides/developer/azure-developer-guide.md
Most applications must store data, so however you decide to host your applicatio
> **When to use**: When your app stores nonrelational data, such as key-value pairs (tables), blobs, files shares, or messages (queues). >
- > **Get started**: Choose from one of these types of storage: [blobs](../../storage/blobs/storage-quickstart-blobs-dotnet.md), [tables](../../cosmos-db/tutorial-develop-table-dotnet.md), [queues](../../storage/queues/storage-dotnet-how-to-use-queues.md), or [files](../../storage/files/storage-dotnet-how-to-use-files.md).
+ > **Get started**: Choose from one of these types of storage: [blobs](../../storage/blobs/storage-quickstart-blobs-dotnet.md), [tables](../../cosmos-db/tutorial-develop-table-dotnet.md), [queues](/azure/storage/queues/storage-quickstart-queues-dotnet?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli), or [files](../../storage/files/storage-dotnet-how-to-use-files.md).
* **Azure SQL Database**: An Azure-based version of the Microsoft SQL Server engine for storing relational tabular data in the cloud. SQL Database provides predictable performance, scalability with no downtime, business continuity, and data protection.
guides Azure Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/guides/operations/azure-operations-guide.md
For more information, see [Get started with Azure Table storage](../../cosmos-db
#### Queue storage Azure Queue storage provides cloud messaging between application components. In designing applications for scale, application components are often decoupled so that they can scale independently. Queue storage delivers asynchronous messaging for communication between application components, whether they are running in the cloud, on the desktop, on an on-premises server, or on a mobile device. Queue storage also supports managing asynchronous tasks and building process workflows.
-For more information, see [Get started with Azure Queue storage](../../storage/queues/storage-dotnet-how-to-use-queues.md).
+For more information, see [Get started with Azure Queue storage](/azure/storage/queues/storage-quickstart-queues-dotnet?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli).
### Deploying a storage account
hdinsight Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/disk-encryption.md
Beginning with the November 2020 release, HDInsight supports the creation of clu
For clusters created before the November 2020 release, you will have to perform key rotation manually using the versioned key URI.
+### VM types that support disk encryption
+
+| Size | vCPU | Memory: GiB |
+|-|--|-|
+| Standard_D4a_v4 | 4 | 16 |
+| Standard_D8a_v4 | 8 | 32 |
+| Standard_D16a_v4 | 16 | 64 |
+| Standard_D32a_v4 | 32 | 128 |
+| Standard_D48a_v4 | 48 | 192 |
+| Standard_D64a_v4 | 64 | 256 |
+| Standard_D96a_v4 | 96 | 384 |
+| Standard_E64is_v3 | 64 | 432 |
+| Standard_E20s_V3 | 20 | 160 |
+| Standard_E2s_V3 | 2 | 16 |
+| Standard_E2a_v4 | 2 | 16 |
+| Standard_E4a_v4 | 4 | 32 |
+| Standard_E8a_v4 | 8 | 64 |
+| Standard_E16a_v4 | 16 | 128 |
+| Standard_E20a_v4 | 20 | 160 |
+| Standard_E32a_v4 | 32 | 256 |
+| Standard_E48a_v4 | 48 | 384 |
+| Standard_E64a_v4 | 64 | 512 |
+| Standard_E96a_v4 | 96 | 672 |
+| Standard_DS3_v2 | 4 | 14 |
+| Standard_DS4_v2 | 8 | 28 |
+| Standard_DS5_v2 | 16 | 56 |
+| Standard_DS12_v2 | 4 | 28 |
+| Standard_DS13_v2 | 8 | 56 |
+| Standard_DS14_v2 | 16 | 112 |
+ #### Using the Azure portal During cluster creation, you can either use a versioned key, or a versionless key in the following way:
hdinsight General Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/general-guidelines.md
If the url auth is enabled:
* If the access is for this url, then Ranger will check if the user is in the allow list. * Ranger won't check any of the fine grained policies.
+### Manage Ranger audit logs
+
+To prevent Ranger audit logs from consuming too much disk space on your hn0 headnode, you can change the number of days to retain the logs.
+
+1. Sign in to the **Ambari UI**.
+2. Navigate to **Services** > **Ranger** > **Configs** > **Advanced** > **Advanced ranger-solr-configuration**.
+3. Change the 'Max Retention Days' to 7 days or less.
+4. Select **Save** and restart affected components for the change to take effect.
+
+### Use a custom Ranger DB
+
+We recommend deploying an external Ranger DB to use with your ESP cluster for high availability of Ranger metadata, which ensures that policies are available even if the cluster is unavailable. Since an external database is customer-managed, you'll also have the ability to tune the DB size and share the database across multiple ESP clusters. You can specify your [external Ranger DB during the ESP cluster creation](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters) process using the Azure portal, Azure Resource Manager, Azure CLI, etc.
+
+### Set Ranger user sync to run daily
+
+HDInsight ESP clusters are configured for Ranger to synchronize AD users automatically every hour. This hourly user sync can cause extra load on the AD instance. For this reason, we recommend that you change the Ranger user sync interval to 24 hours.
+
+1. Sign in to the **Ambari UI**.
+2. Navigate to **Services** > **Ranger** > **Configs** > **Advanced** > **ranger-ugsync-site**
+3. Set property **ranger.usersync.sleeptimeinmillisbetweensynccycle** to 86400000 (24h in milliseconds).
+4. Select **Save** and restart affected components for the change to take effect.
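The value in step 3 is just 24 hours expressed in milliseconds, which you can sanity-check with shell arithmetic:

```shell
# 24 hours * 60 minutes * 60 seconds * 1000 milliseconds
echo $((24 * 60 * 60 * 1000))   # prints 86400000
```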
+ ## Resource groups Use a new resource group for each cluster so that you can distinguish between cluster resources.
Use a new resource group for each cluster so that you can distinguish between cl
Azure AD DS is required for secure clusters to join a domain. HDInsight can't depend on on-premises domain controllers or custom domain controllers, as it introduces too many fault points, credential sharing, DNS permissions, and so on. For more information, see [Azure AD DS FAQs](../../active-directory-domain-services/faqs.yml).
+### Choose correct Azure AD DS SKU
+
+When creating your managed domain, [you can choose from different SKUs](/azure/active-directory-domain-services/administration-concepts#azure-ad-ds-skus) that offer varying levels of performance and features. The number of ESP clusters and other applications that will use the Azure AD DS instance for authentication requests determines which SKU is appropriate for your organization. If you notice high CPU on your managed domain or your business requirements change, you can upgrade your SKU.
### Azure AD DS instance * Create the instance with the `.onmicrosoft.com` domain. This way, there won't be multiple DNS servers serving the domain.
HDInsight can't depend on on-premises domain controllers or custom domain contro
* Configure the DNS for the virtual network properly (the Azure AD DS domain name should resolve without any hosts file entries). * If you're restricting outbound traffic, make sure that you have read through the [firewall support in HDInsight](../hdinsight-restrict-outbound-traffic.md)
+### Consider Azure AD DS replica sets
+
+When you create an Azure AD DS managed domain, you define a unique namespace, and two domain controllers (DCs) are then deployed into your selected Azure region. This deployment of DCs is known as a replica set. [Adding replica sets](/azure/active-directory-domain-services/tutorial-create-replica-set) provides resiliency and ensures availability of authentication services, which is critical for Azure HDInsight clusters.
+
+### Configure scoped user/group synchronization
+
+When you enable [Azure Active Directory Domain Services (Azure AD DS) for your ESP cluster](/azure/hdinsight/domain-joined/apache-domain-joined-create-configure-enterprise-security-cluster), you can choose to synchronize all users and groups from Azure AD or scoped groups and their members. We recommend that you choose "Scoped" synchronization for the best performance.
+
+[Scoped synchronization](/azure/active-directory-domain-services/scoped-synchronization) can be modified with different group selections or converted to "All" users and groups if needed. You can't change the synchronization type from "All" to "Scoped" unless you delete and recreate the Azure AD DS instance.
+ ### Properties synced from Azure AD to Azure AD DS * Azure AD connect syncs from on-premises to Azure AD.
For more information, see [Azure AD UserPrincipalName population](../../active-d
* Azure AD to Azure AD DS sync is automatic (latencies are under 20 minutes). * Password hashes are synced only when there's a changed password. When you enable password hash sync, all existing passwords don't get synced automatically as they're stored irreversibly. When you change the password, password hashes get synced.
+### Set Ambari LDAP sync to run daily
+
+The process of syncing new LDAP users to Ambari is automatically configured to run every hour. Running this sync every hour can cause excess load on the cluster's headnodes and the AD instance. For improved performance, we recommend changing the /opt/startup_scripts/start_ambari_ldap_sync.py script that runs the Ambari LDAP sync to run once a day. This script runs through a crontab job and is stored in the directory "/etc/cron.hourly/" on the cluster headnodes.
+
+To make it run once a day, perform the following steps:
+
+1. ssh to hn0
+2. Move the script to the cron daily folder: `sudo mv /etc/cron.hourly/ambarildapsync /etc/cron.daily/ambarildapsync`
+3. Apply the change in the crontab job: `sudo service cron reload`
+4. ssh to hn1 and repeat steps 2 and 3
+
+If needed, you can [use the Ambari REST API to manually synchronize new users and groups](/azure/hdinsight/hdinsight-sync-aad-users-to-cluster#use-the-apache-ambari-rest-api-to-synchronize-users) immediately.
+ ### Computer objects location Each cluster is associated with a single OU. An internal user is provisioned in the OU. All the nodes are domain joined into the same OU.
Most common reasons:
For a full list of the Ambari properties that affect your HDInsight cluster configuration, see [Ambari LDAP Authentication Setup](https://ambari.apache.org/1.2.1/installing-hadoop-using-ambari/content/ambari-chap2-4.html).
+### Generate domain user keytab(s)
+
+All service keytabs are automatically generated for you during the ESP cluster creation process. To enable secure communication between the cluster and other services and/or jobs that require authentication, you can generate a keytab for your domain username.
+
+Use the `ktutil` tool on one of the cluster VMs to create a Kerberos keytab:
+
+```
+ktutil
+ktutil: addent -password -p <username>@<DOMAIN.COM> -k 1 -e aes256-cts-hmac-sha1-96
+Password for <username>@<DOMAIN.COM>: <password>
+ktutil: wkt <username>.keytab
+ktutil: q
+```
+
+If your TenantName & DomainName are different, you need to add a SALT value using the -s option. Check the HDInsight FAQ page to [determine the proper SALT value when creating a Kerberos keytab](/azure/hdinsight/hdinsight-faq#how-do-i-create-a-keytab-for-an-hdinsight-esp-cluster-).
+
+### LDAP certificate renewal
+
+HDInsight automatically renews the certificates for the managed identities you use for clusters with the Enterprise Security Package (ESP). However, there is a limitation when different managed identities are used for Azure AD DS and ADLS Gen2 that could cause the renewal process to fail. Follow the two recommendations below to ensure that your certificates can be renewed successfully:
+
+- If you use different managed identities for ADLS Gen2 and Azure AD DS clusters, then both of them should have the **Storage Blob Data Owner** and **HDInsight Domain Services Contributor** roles assigned to them.
+- HDInsight clusters require public IPs for certificate updates and other maintenance so **any policies that deny public IP on the cluster should be removed**.
+ ## Next steps * [Enterprise Security Package configurations with Azure Active Directory Domain Services in HDInsight](./apache-domain-joined-configure-using-azure-adds.md)
hdinsight Apache Hbase Accelerated Writes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-accelerated-writes.md
To create a new HBase cluster with the Accelerated Writes feature, follow the st
:::image type="content" source="./media/apache-hbase-accelerated-writes/azure-portal-create-hbase-wals.png" alt-text="Enable accelerated writes option for HDInsight Apache HBase" border="true":::
-## Other considerations
+## Verify Accelerated Writes feature was enabled
+
+You can use the Azure portal to verify whether the Accelerated Writes feature is enabled on an HBase cluster.
+
+1. Search for your HBase cluster in the Azure portal.
+2. Select the **Cluster Size** blade.
+3. **Premium disks per worker node** will be displayed.
+
+## Scaling HBase clusters
To preserve data durability, create a cluster with a minimum of three worker nodes. Once created, you can't scale down the cluster to less than three worker nodes.
hdinsight Hbase Troubleshoot Hbase Hbck Inconsistencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-hbase-hbck-inconsistencies.md
Last updated 08/28/2022
# Scenario: `hbase hbck` command returns inconsistencies in Azure HDInsight
-This article describes troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters.
+This article describes troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters. If you're using hbase-2.x, see [How to use Apache HBase HBCK2 tool](./how-to-use-hbck2-tool.md).
## Issue: Region is not in `hbase:meta`
hdinsight Hbase Troubleshoot Start Fails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-start-fails.md
Last updated 12/21/2022
This article describes troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters.
-## Scenario: Master startup cannot progress, in holding-pattern until region comes online
+## Scenario: `Master startup cannot progress, in holding-pattern until region comes online`
### Issue
hbase:namespace,,1546588612000.0000010bc582e331e3080d5913a97000. is NOT online;
### Cause
-HMaster will check for the WAL directory on the region servers before bringing back the **OPEN** regions online. In this case, if that directory was not present, it was not getting started
+HMaster checks the WAL directory on the region servers before bringing the **OPEN** regions back online. If that directory isn't present, HMaster doesn't start.
### Resolution
HMaster will check for the WAL directory on the region servers before bringing b
2. Restart the HMaster service from the Ambari UI.
+If you're using hbase-2.x, see [how to use the HBCK2 tool to assign the namespace and meta tables](how-to-use-hbck2-tool.md#assign-and-unassign).
+ ## Scenario: Atomic renaming failure ### Issue
HMaster does a basic list command on the WAL folders. If at any time, HMaster se
### Resolution
-Check the call stack and try to determine which folder might be causing the problem (for instance, it might be the WAL folder or the .tmp folder). Then, in Cloud Explorer or by using HDFS commands, try to locate the problem file. Usually, this is a `*-renamePending.json` file. (The `*-renamePending.json` file is a journal file that's used to implement the atomic rename operation in the WASB driver. Due to bugs in this implementation, these files can be left over after process crashes, and so on.) Force-delete this file either in Cloud Explorer or by using HDFS commands.
+Check the call stack and try to determine which folder might be causing the problem (for instance, it might be the WAL folder or the .tmp folder). Then, in Azure Storage Explorer or by using HDFS commands, try to locate the problem file. Usually, this file is called `*-renamePending.json`. (The `*-renamePending.json` file is a journal file that's used to implement the atomic rename operation in the WASB driver. Due to bugs in this implementation, these files can be left over after process crashes, and so on.) Force-delete this file either in Azure Storage Explorer or by using HDFS commands.
-Sometimes, there might also be a temporary file named something like `$$$.$$$` at this location. You have to use HDFS `ls` command to see this file; you cannot see the file in Cloud Explorer. To delete this file, use the HDFS command `hdfs dfs -rm /\<path>\/\$\$\$.\$\$\$`.
+Sometimes, there might also be a temporary file named something like `$$$.$$$` at this location. You have to use HDFS `ls` command to see this file; you can't see the file in Azure Storage Explorer. To delete this file, use the HDFS command `hdfs dfs -rm /\<path>\/\$\$\$.\$\$\$`.
After you've run these commands, HMaster should start immediately.
After you've run these commands, HMaster should start immediately.
### Issue
-You might see a message that indicates that the `hbase: meta` table is not online. Running `hbck` might report that `hbase: meta table replicaId 0 is not found on any region.` In the HMaster logs, you might see the message: `No server address listed in hbase: meta for region hbase: backup <region name>`.
+You might see a message that indicates that the `hbase: meta` table isn't online. Running `hbck` might report that `hbase: meta table replicaId 0 is not found on any region.` In the HMaster logs, you might see the message: `No server address listed in hbase: meta for region hbase: backup <region name>`.
### Cause
-HMaster could not initialize after restarting HBase.
+HMaster couldn't initialize after restarting HBase.
### Resolution
HMaster could not initialize after restarting HBase.
-## Scenario: java.io.IOException: Timedout
+## Scenario: `java.io.IOException: Timedout`
### Issue
HMaster times out with fatal exception similar to: `java.io.IOException: Timedou
### Cause
-You might experience this issue if you have many tables and regions that have not been flushed when you restart your HMaster services. The time-out is a known defect with HMaster. General cluster startup tasks can take a long time. HMaster shuts down if the namespace table isnΓÇÖt yet assigned. The lengthy startup tasks happen where large amount of unflushed data exists and a timeout of five minutes is not sufficient.
+You might experience this issue if you have many tables and regions that haven't been flushed when you restart your HMaster services. The time-out is a known defect with HMaster. General cluster startup tasks can take a long time. HMaster shuts down if the namespace table isn't yet assigned. The lengthy startup tasks happen where a large amount of unflushed data exists and a timeout of five minutes isn't sufficient.
### Resolution
Nodes reboot periodically. From the region server logs you may see entries simil
### Cause: zookeeper session timeout
-Long `regionserver` JVM GC pause. The pause will cause `regionserver` to be unresponsive and not able to send heart beat to HMaster within the zk session timeout 40s. HMaster will believe `regionserver` is dead and will abort the `regionserver` and restart.
+Long `regionserver` JVM GC pause. The pause causes `regionserver` to be unresponsive and not able to send heart beat to HMaster within the zookeeper session timeout 40s. HMaster believes `regionserver` is dead, aborts the `regionserver` and restarts.
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
hdinsight Apache Spark Use With Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-use-with-data-lake-store.md
In this article, you use [Jupyter Notebook](https://jupyter.org/) available with
> [!NOTE] > You do not need to perform this step if you have created the HDInsight cluster with Data Lake Storage as default storage. The cluster creation process adds some sample data in the Data Lake Storage account that you specify while creating the cluster. Skip to the section Use HDInsight Spark cluster with Data Lake Storage.
-If you created an HDInsight cluster with Data Lake Storage as additional storage and Azure Storage Blob as default storage, you should first copy over some sample data to the Data Lake Storage account. You can use the sample data from the Azure Storage Blob associated with the HDInsight cluster. You can use the [ADLCopy tool](https://www.microsoft.com/download/details.aspx?id=50358) to do so. Download and install the tool from the link.
+If you created an HDInsight cluster with Data Lake Storage as additional storage and Azure Storage Blob as default storage, you should first copy over some sample data to the Data Lake Storage account. You can use the sample data from the Azure Storage Blob associated with the HDInsight cluster, and copy it with the [AdlCopy tool](https://www.microsoft.com/download/details.aspx?id=50358).
1. Open a command prompt and navigate to the directory where AdlCopy is installed, typically `%HOMEPATH%\Documents\adlcopy`.
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
healthcare-apis Selectable Search Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/selectable-search-parameters.md
-# selectable search parameter
+# Selectable search parameter capability
Searching for resources is fundamental to FHIR. Each resource in FHIR carries information as a set of elements, and search parameters work to query the information in these elements. As the FHIR service in Azure Health Data Services is provisioned, inbuilt search parameters are enabled by default. During the ingestion of data in the FHIR service, specific properties from FHIR resources are extracted and indexed with these search parameters to enable efficient searches. The selectable search parameter functionality allows you to enable or disable inbuilt search parameters. By enabling only the search parameters you need, you can store more resources in the allocated storage space and improve performance.
healthcare-apis Device Messages Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md
Previously updated : 07/24/2023 Last updated : 07/26/2023
You complete the steps by using Visual Studio Code with the Azure IoT Hub extens
```json
{
+ "PatientId": "patient1",
  "HeartRate": 78,
  "RespiratoryRate": 12,
  "HeartRateVariability": 30,
You complete the steps by using Visual Studio Code with the Azure IoT Hub extens
After you select **Send**, it might take up to five minutes for the FHIR resources to be available in the FHIR service.

> [!IMPORTANT]
- > To avoid device spoofing in device-to-cloud (D2C) messages, Azure IoT Hub enriches all device messages with additional properties before routing them to the event hub. For example: **Properties**: `iothub-creation-time-utc` and **SystemProperties**: `iothub-connection-device-id`. For more information, see [Anti-spoofing properties](../../iot-hub/iot-hub-devguide-messages-construct.md#anti-spoofing-properties) and [How to use IotJsonPathContent templates with the MedTech service device mapping](how-to-use-iotjsonpathcontent-templates.md).
+ > To avoid device spoofing in device-to-cloud (D2C) messages, Azure IoT Hub enriches all device messages with additional properties before routing them to the event hub. For example: **SystemProperties**: `iothub-connection-device-id` and **Properties**: `iothub-creation-time-utc`. For more information, see [Anti-spoofing properties](../../iot-hub/iot-hub-devguide-messages-construct.md#anti-spoofing-properties) and [How to use IotJsonPathContent templates with the MedTech service device mapping](how-to-use-iotjsonpathcontent-templates.md).
>
> You do not want to send this example device message to your IoT hub as the enrichments will be duplicated by the IoT hub and cause an error with your MedTech service. This is only an example of how your device messages are enriched by the IoT hub.
>
healthcare-apis How To Use Iotjsonpathcontent Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iotjsonpathcontent-templates.md
The IoT hub enriches and routes the device message to the event hub before the M
{
  "Body": {
    "PatientId": "patient1",
- "HeartRate": 78
+ "HeartRate": "78"
  },
  "SystemProperties": {
    "iothub-enqueuedtime": "2023-07-25T20:41:26.046Z",
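Putting the excerpt above back together, a minimal Python sketch shows how a dotted JSONPath-style expression, like the ones IotJsonPathContent templates use, resolves against the enriched message. The field names come from the excerpt; the device id value is an assumption for illustration.

```python
# Enriched device message, reassembled from the excerpt above.
enriched_message = {
    "Body": {
        "PatientId": "patient1",
        "HeartRate": "78",
    },
    "SystemProperties": {
        "iothub-enqueuedtime": "2023-07-25T20:41:26.046Z",
        "iothub-connection-device-id": "sample-device",  # assumed value, not from the article
    },
}

def resolve(path: str, message: dict):
    """Resolve a simple dotted path like '$.Body.HeartRate' against a nested dict."""
    node = message
    for part in path.removeprefix("$.").split("."):
        node = node[part]
    return node

print(resolve("$.Body.HeartRate", enriched_message))                         # "78"
print(resolve("$.SystemProperties.iothub-connection-device-id", enriched_message))
```

Note that the anti-spoofing properties (`iothub-connection-device-id`) live alongside the body, which is why templates address them with full paths from the message root.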
healthcare-apis Overview Of Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-samples.md
Previously updated : 07/17/2023 Last updated : 07/25/2023
Each MedTech service scenario-based sample contains the following resources:
[Conversions using functions](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/medtech-service-mappings/calculatedcontent/conversions-using-functions)
+## IotJsonPathContent
+
+[Single device message into multiple resources](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/medtech-service-mappings/iotjsonpathcontent/single-device-message-into-multiple-resources)
+
## Next steps

In this article, you learned about the MedTech service scenario-based mappings samples.
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
key-vault Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/azure-policy.md
# Integrate Azure Key Vault with Azure Policy
-[Azure Policy](../../governance/policy/index.yml) is a governance tool that gives users the ability to audit and manage their Azure environment at scale. Azure Policy provides the ability to place guardrails on Azure resources to ensure they are compliant with assigned policy rules. It allows users to perform audit, real-time enforcement, and remediation of their Azure environment. The results of audits performed by policy will be available to users in a compliance dashboard where they will be able to see a drill down of which resources and components are compliant and which are not. For more information, see the [Overview of the Azure Policy service](../../governance/policy/overview.md).
+[Azure Policy](../../governance/policy/index.yml) is a governance tool that gives users the ability to audit and manage their Azure environment at scale. Azure Policy provides the ability to place guardrails on Azure resources to ensure they're compliant with assigned policy rules. It allows users to perform audit, real-time enforcement, and remediation of their Azure environment. The results of audits performed by policy will be available to users in a compliance dashboard where they'll be able to see a drill down of which resources and components are compliant and which aren't. For more information, see the [Overview of the Azure Policy service](../../governance/policy/overview.md).
Example Usage Scenarios:
-- You want to improve the security posture of your company by implementing requirements around minimum key sizes and maximum validity periods of certificates in your company's key vaults but you don't know which teams will be compliant and which are not.
-- You currently don't have a solution to perform an audit across your organization, or you are conducting manual audits of your environment by asking individual teams within your organization to report their compliance. You are looking for a way to automate this task, perform audits in real time, and guarantee the accuracy of the audit.
+- You want to improve the security posture of your company by implementing requirements around minimum key sizes and maximum validity periods of certificates in your company's key vaults but you don't know which teams will be compliant and which aren't.
+- You currently don't have a solution to perform an audit across your organization, or you're conducting manual audits of your environment by asking individual teams within your organization to report their compliance. You're looking for a way to automate this task, perform audits in real time, and guarantee the accuracy of the audit.
- You want to enforce your company security policies and stop individuals from creating self-signed certificates, but you don't have an automated way to block their creation.
- You want to relax some requirements for your test teams, but you want to maintain tight controls over your production environment. You need a simple automated way to separate enforcement of your resources.
- You want to be sure that you can roll-back enforcement of new policies in the event of a live-site issue. You need a one-click solution to turn off enforcement of the policy.
-- You are relying on a 3rd party solution for auditing your environment and you want to use an internal Microsoft offering.
+- You're relying on a 3rd party solution for auditing your environment and you want to use an internal Microsoft offering.
## Types of policy effects and guidance
-When enforcing a policy, you can determine its effect over the resulting evaluation. Each policy definition allows you to choose one of multiple effects. Therefore, policy enforcement may behave differently depending on the type of operation you are evaluating. In general, the effects for policies that integrate with Key Vault include:
+When enforcing a policy, you can determine its effect over the resulting evaluation. Each policy definition allows you to choose one of multiple effects. Therefore, policy enforcement may behave differently depending on the type of operation you're evaluating. In general, the effects for policies that integrate with Key Vault include:
-- [**Audit**](../../governance/policy/concepts/effects.md#audit): when the effect of a policy is set to `Audit`, the policy will not cause any breaking changes to your environment. It will only alert you to components such as certificates that do not comply with the policy definitions within a specified scope, by marking these components as non-compliant in the policy compliance dashboard. Audit is default if no policy effect is selected.
+- [**Audit**](../../governance/policy/concepts/effects.md#audit): when the effect of a policy is set to `Audit`, the policy won't cause any breaking changes to your environment. It will only alert you to components such as certificates that don't comply with the policy definitions within a specified scope, by marking these components as non-compliant in the policy compliance dashboard. Audit is the default if no policy effect is selected.
-- [**Deny**](../../governance/policy/concepts/effects.md#deny): when the effect of a policy is set to `Deny`, the policy will block the creation of new components such as certificates as well as block new versions of existing components that do not comply with the policy definition. Existing non-compliant resources within a key vault are not affected. The 'audit' capabilities will continue to operate.
+- [**Deny**](../../governance/policy/concepts/effects.md#deny): when the effect of a policy is set to `Deny`, the policy will block the creation of new components such as certificates as well as block new versions of existing components that don't comply with the policy definition. Existing non-compliant resources within a key vault aren't affected. The 'audit' capabilities will continue to operate.
-- [**Disabled**](../../governance/policy/concepts/effects.md#disabled): when the effect of a policy is set to `Disabled`, the policy will still be evaluated but enforcement will not take effect, thus being compliant for the condition with `Disabled` effect. This is useful to disable the policy for a specific condition as opposed to all conditions.
+- [**Disabled**](../../governance/policy/concepts/effects.md#disabled): when the effect of a policy is set to `Disabled`, the policy will still be evaluated but enforcement won't take effect, thus being compliant for the condition with `Disabled` effect. This is useful to disable the policy for a specific condition as opposed to all conditions.
-- [**Modify**](../../governance/policy/concepts/effects.md#modify): when the effect of a policy is set to `Modify`, you can perform addition of resource tags, such as adding the `Deny` tag to a network. This is useful to disable access to a public network for Azure Key Vault managed HSM. It is necessary to [configure a manage identity](../../governance/policy/how-to/remediate-resources.md?tabs=azure-portal#configure-the-managed-identity) for the policy definition via the `roleDefinitionIds` parameter to utilize the `Modify` effect.
+- [**Modify**](../../governance/policy/concepts/effects.md#modify): when the effect of a policy is set to `Modify`, you can perform addition of resource tags, such as adding the `Deny` tag to a network. This is useful to disable access to a public network for Azure Key Vault managed HSM. It's necessary to [configure a managed identity](../../governance/policy/how-to/remediate-resources.md?tabs=azure-portal#configure-the-managed-identity) for the policy definition via the `roleDefinitionIds` parameter to utilize the `Modify` effect.
-- [**DeployIfNotExists**](../../governance/policy/concepts/effects.md#deployifnotexists): when the effect of a policy is set to `DeployIfNotExists`, a deployment template is executed when the condition is met. This can be used to configure diagnostic settings for Key Vault to log analytics workspace. It is necessary to [configure a manage identity](../../governance/policy/how-to/remediate-resources.md?tabs=azure-portal#configure-the-managed-identity) for the policy definition via the `roleDefinitionIds` parameter to utilize the `DeployIfNotExists` effect.
+- [**DeployIfNotExists**](../../governance/policy/concepts/effects.md#deployifnotexists): when the effect of a policy is set to `DeployIfNotExists`, a deployment template is executed when the condition is met. This can be used to configure diagnostic settings for Key Vault to send logs to a Log Analytics workspace. It's necessary to [configure a managed identity](../../governance/policy/how-to/remediate-resources.md?tabs=azure-portal#configure-the-managed-identity) for the policy definition via the `roleDefinitionIds` parameter to utilize the `DeployIfNotExists` effect.
-- [**AuditIfNotExists**](../../governance/policy/concepts/effects.md#deployifnotexists): when the effect of a policy is set to `AuditIfNotExists`, you can identify resources that lack the properties specified in the details of the policy condition. This is useful to identify key vaults that have no resource logs enabled. It is necessary to [configure a manage identity](../../governance/policy/how-to/remediate-resources.md?tabs=azure-portal#configure-the-managed-identity) for the policy definition via the `roleDefinitionIds` parameter to utilize the `DeployIfNotExists` effect.
+- [**AuditIfNotExists**](../../governance/policy/concepts/effects.md#auditifnotexists): when the effect of a policy is set to `AuditIfNotExists`, you can identify resources that lack the properties specified in the details of the policy condition. This is useful to identify key vaults that have no resource logs enabled. It's necessary to [configure a managed identity](../../governance/policy/how-to/remediate-resources.md?tabs=azure-portal#configure-the-managed-identity) for the policy definition via the `roleDefinitionIds` parameter to utilize the `AuditIfNotExists` effect.
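As a rough illustration of how the simpler effects differ, here is a toy Python model of evaluating a resource request under each one. This is a sketch of the behavior summarized above, not how the Azure Policy engine is implemented.

```python
# Toy model of the Audit / Deny / Disabled effects described above.
def evaluate(effect: str, compliant: bool) -> dict:
    """Return whether the operation is allowed and how the component
    shows up on the compliance dashboard."""
    if effect == "Disabled":
        # Still evaluated, but enforcement doesn't take effect.
        return {"allowed": True, "dashboard": "compliant"}
    if compliant:
        return {"allowed": True, "dashboard": "compliant"}
    if effect == "Audit":
        # No breaking change; the non-compliant component is only flagged.
        return {"allowed": True, "dashboard": "non-compliant"}
    if effect == "Deny":
        # New components, or new versions, that don't comply are blocked.
        return {"allowed": False, "dashboard": "non-compliant"}
    raise ValueError(f"effect not modeled in this sketch: {effect}")

print(evaluate("Audit", compliant=False))   # allowed, but flagged
print(evaluate("Deny", compliant=False))    # blocked
```

The remediation-capable effects (`Modify`, `DeployIfNotExists`, `AuditIfNotExists`) additionally act on the resource or its sub-resources, which is why they require a managed identity.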
## Available Built-In Policy Definitions
Using the Azure Policy service, you can govern the migration to the RBAC permiss
#### Network Access
-Reduce the risk of data leakage by restricting public network access, enabling [Azure Private Link](https://azure.microsoft.com/products/private-link/) connections, creating private DNS zones to override DNS resolution for a private endpoint, and enabling [firewall protection](network-security.md) so that the key vault is not accessible by default to any public IP.
+Reduce the risk of data leakage by restricting public network access, enabling [Azure Private Link](https://azure.microsoft.com/products/private-link/) connections, creating private DNS zones to override DNS resolution for a private endpoint, and enabling [firewall protection](network-security.md) so that the key vault isn't accessible by default to any public IP.
| Policy | Effects |
|--|--|
Promote the use of short-lived certificates to mitigate undetected attacks, by m
| [Certificates should have the specified lifetime action triggers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12ef42cb-9903-4e39-9c26-422d29570417) | Effects: Audit (_Default_), Deny, Disabled

> [!NOTE]
-> It is recommended to apply [the certificate expiration policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff772fb64-8e40-40ad-87bc-7706e1949427) multiple times with different expiration thresholds, for example, at 180, 90, 60, and 30-day thresholds.
+> It's recommended to apply [the certificate expiration policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff772fb64-8e40-40ad-87bc-7706e1949427) multiple times with different expiration thresholds, for example, at 180, 90, 60, and 30-day thresholds.
#### Certificate Authority
Restrict the type of your key vault's certificates to be RSA, ECC, or HSM-backed
#### HSM-backed keys
-An HSM is a hardware security module that stores keys. An HSM provides a physical layer of protection for cryptographic keys. The cryptographic key cannot leave a physical HSM which provides a greater level of security than a software key. Some organizations have compliance requirements that mandate the use of HSM keys. You can use this policy to audit any keys stored in your Key Vault that is not HSM backed. You can also use this policy to block the creation of new keys that are not HSM backed. This policy will apply to all key types, including RSA and ECC.
+An HSM is a hardware security module that stores keys. An HSM provides a physical layer of protection for cryptographic keys. The cryptographic key can't leave a physical HSM, which provides a greater level of security than a software key. Some organizations have compliance requirements that mandate the use of HSM keys. You can use this policy to audit any keys stored in your Key Vault that aren't HSM backed. You can also use this policy to block the creation of new keys that aren't HSM backed. This policy applies to all key types, including RSA and ECC.
| Policy | Effects |
|--|--|
An HSM is a hardware security module that stores keys. An HSM provides a physica
#### Lifecycle of Keys
-With lifecycle management built-ins you can flag or block keys that do not have an expiration date, get alerts whenever delays in key rotation may result in an outage, prevent the creation of new keys that are close to their expiration date, limit the lifetime and active status of keys to drive key rotation, and preventing keys from being active for more than a specified number of days.
+With lifecycle management built-ins you can flag or block keys that don't have an expiration date, get alerts whenever delays in key rotation may result in an outage, prevent the creation of new keys that are close to their expiration date, limit the lifetime and active status of keys to drive key rotation, and prevent keys from being active for more than a specified number of days.
| Policy | Effects |
|--|--|
+| [Keys should have a rotation policy ensuring that their rotation is scheduled within the specified number of days after creation](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd8cf8476-a2ec-4916-896e-992351803c44) | Audit (_Default_), Disabled
| [Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0) | Audit (_Default_), Deny, Disabled
| [**[Preview]**: Managed HSM keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d478a74-21ba-4b9f-9d8f-8e6fced0eec5) | Audit (_Default_), Deny, Disabled
| [Keys should have more than the specified number of days before expiration](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5ff38825-c5d8-47c5-b70e-069a21955146) | Audit (_Default_), Deny, Disabled
With lifecycle management built-ins you can flag or block keys that do not have
#### Key Attributes
-Restrict the type of your Key Vault's keys to be RSA, ECC, or HSM-backed. If you use elliptic curve cryptography or ECC keys, you can customize and select curve names such as P-256, P-256K, P-384, and P-521. If you use RSA keys, you can mandate the use of a minimum key size for current and new keys to be 2048 bits, 3072 bits, or 4096 bits. Keep in mind that using RSA keys with smaller key sizes is not a secure design practice, thus it is recommended to block the creation of new keys that do not meet the minimum size requirement.
+Restrict the type of your Key Vault's keys to be RSA, ECC, or HSM-backed. If you use elliptic curve cryptography or ECC keys, you can customize and select curve names such as P-256, P-256K, P-384, and P-521. If you use RSA keys, you can mandate the use of a minimum key size for current and new keys to be 2048 bits, 3072 bits, or 4096 bits. Keep in mind that using RSA keys with smaller key sizes isn't a secure design practice, so it's recommended to block the creation of new keys that don't meet the minimum size requirement.
| Policy | Effects |
|--|--|
Restrict the type of your Key Vault's keys to be RSA, ECC, or HSM-backed. If you
#### Lifecycle of Secrets
-With lifecycle management built-ins you can flag or block secrets that do not have an expiration date, get alerts whenever delays in secret rotation may result in an outage, prevent the creation of new keys that are close to their expiration date, limit the lifetime and active status of keys to drive key rotation, and preventing keys from being active for more than a specified number of days.
+With lifecycle management built-ins you can flag or block secrets that don't have an expiration date, get alerts whenever delays in secret rotation may result in an outage, prevent the creation of new secrets that are close to their expiration date, limit the lifetime and active status of secrets to drive secret rotation, and prevent secrets from being active for more than a specified number of days.
| Policy | Effects |
|--|--|
You manage a key vault used by multiple teams that contains 100 certificates, an
1. You assign the **Certificates should have the specified maximum validity period** policy, specify that the maximum validity period of a certificate is 24 months, and set the effect of the policy to "audit".
1. You view the [compliance report on the Azure portal](#view-compliance-results), and discover that 20 certificates are non-compliant and valid for > 2 years, and the remaining certificates are compliant.
-1. You contact the owners of these certificates and communicate the new security requirement that certificates cannot be valid for longer than 2 years. Some teams respond and 15 of the certificates were renewed with a maximum validity period of 2 years or less. Other teams do not respond, and you still have 5 non-compliant certificates in your key vault.
-1. You change the effect of the policy you assigned to "deny". The 5 non-compliant certificates are not revoked, and they continue to function. However, they cannot be renewed with a validity period that is greater than 2 years.
+1. You contact the owners of these certificates and communicate the new security requirement that certificates can't be valid for longer than 2 years. Some teams respond and 15 of the certificates were renewed with a maximum validity period of 2 years or less. Other teams don't respond, and you still have 5 non-compliant certificates in your key vault.
+1. You change the effect of the policy you assigned to "deny". The 5 non-compliant certificates aren't revoked, and they continue to function. However, they can't be renewed with a validity period that is greater than 2 years.
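The arithmetic of this scenario can be sketched in a few lines of Python. The numbers come from the scenario itself; the `renewal_allowed` helper is an illustrative stand-in for the policy's behavior, not a Key Vault API.

```python
# Sketch of the audit-then-deny scenario above: a 24-month maximum-validity
# policy over a vault of 100 certificates, 20 of which are valid for > 2 years.
MAX_VALIDITY_MONTHS = 24

def renewal_allowed(effect: str, requested_months: int) -> bool:
    """Under 'deny', non-compliant renewals are blocked; 'audit' never blocks."""
    if effect == "deny":
        return requested_months <= MAX_VALIDITY_MONTHS
    return True

validities = [36] * 20 + [24] * 80        # months; 20 non-compliant certificates
flagged = sum(v > MAX_VALIDITY_MONTHS for v in validities)
print(flagged)                            # 20 flagged by the audit

remaining = flagged - 15                  # 15 renewed after owners were contacted
print(remaining)                          # 5 still non-compliant

# After switching to "deny": existing certificates keep functioning,
# but can't be renewed with a validity period greater than 2 years.
print(renewal_allowed("deny", 36))        # False
print(renewal_allowed("deny", 24))        # True
```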
## Enabling and managing a key vault policy through the Azure portal
The policy evaluation of existing components in a vault may take up to 1 hour (a
If the compliance results show up as "Not Started" it may be due to the following reasons: -- The policy valuation has not completed yet. Initial evaluation latency can take up to 2 hours in the worst-case scenario.
+- The policy evaluation hasn't completed yet. Initial evaluation latency can take up to 2 hours in the worst-case scenario.
- There are no key vaults in the scope of the policy assignment.
- There are no key vaults with certificates within the scope of the policy assignment.
key-vault Move Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/move-subscription.md
> Make sure you understand the impact of this change and follow the guidance in this article carefully before deciding to move key vault to a new subscription.
> If you are using Managed Service Identities (MSI) please read the post-move instructions at the end of the document.
-[Azure Key Vault](overview.md) is automatically tied to the default [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) tenant ID for the subscription in which it is created. You can find tenant ID associated with your subscription by following this [guide](../../active-directory/fundamentals/active-directory-how-to-find-tenant.md). All access policy entries and roles assignments are also tied to this tenant ID. If you move your Azure subscription from tenant A to tenant B, your existing key vaults will be inaccessible by the service principals (users and applications) in tenant B. To fix this issue, you need to:
+[Azure Key Vault](overview.md) is automatically tied to the default [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) tenant ID for the subscription in which it is created. You can find the tenant ID associated with your subscription by following this [guide](/azure/active-directory-b2c/tenant-management-read-tenant-name). All access policy entries and role assignments are also tied to this tenant ID. If you move your Azure subscription from tenant A to tenant B, your existing key vaults will be inaccessible by the service principals (users and applications) in tenant B. To fix this issue, you need to:
> [!NOTE] > If Key Vault is created through [Azure Lighthouse](../../lighthouse/overview.md), it is tied to managing tenant id instead. Azure Lighthouse is only supported by vault access policy permission model.
For more information about Azure Key Vault and Azure Active Directory, see
- [About Azure Key Vault](overview.md)
- [What is Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md)
-- [How to find tenant ID](../../active-directory/fundamentals/active-directory-how-to-find-tenant.md)
+- [How to find tenant ID](/azure/active-directory-b2c/tenant-management-read-tenant-name)
## Limitations
key-vault Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/whats-new.md
Here's what's new with Azure Key Vault. New features and improvements are also announced on the [Azure updates Key Vault channel](https://azure.microsoft.com/updates/?category=security&query=Key%20vault).
+## July 2023
+
+A built-in policy is now available to govern the key rotation configuration in Azure Key Vault. With this policy, you can audit existing keys in your key vaults to ensure that all keys are configured for rotation and comply with your organization's standards.
+
+For more information, see [Configure key rotation governance](../keys/how-to-configure-key-rotation.md#configure-key-rotation-policy-governance).
+
## June 2023

Key Vault enforces TLS 1.2 or higher for enhanced security. If you're still using an older TLS version, see [Enable support for TLS 1.2 in your environment](/troubleshoot/azure/active-directory/enable-support-tls-environment/#why-this-change-is-being-made) to update your clients and ensure uninterrupted access to Key Vault services. You can monitor the TLS version used by clients by monitoring Key Vault logs with the sample Kusto query [here](monitor-key-vault.md#sample-kusto-queries).
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/how-to-configure-key-rotation.md
Key rotation policy can also be configured using ARM templates.
```
+## Configure key rotation policy governance
+
+Using the Azure Policy service, you can govern the key lifecycle and ensure that all keys are configured to rotate within a specified number of days.
+
+### Create and assign policy definition
+
+1. Navigate to the Policy resource in the Azure portal.
+1. Select **Assignments** under **Authoring** on the left side of the Azure Policy page.
+1. Select **Assign policy** at the top of the page. This button opens the Policy assignment page.
+1. Enter the following information:
+ - Define the scope of the policy by choosing the subscription and resource group over which the policy will be enforced. Select the scope by choosing the three-dot button on the **Scope** field.
+ - Select the name of the policy definition: "[Keys should have a rotation policy ensuring that their rotation is scheduled within the specified number of days after creation](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd8cf8476-a2ec-4916-896e-992351803c44)"
+ - Go to the **Parameters** tab at the top of the page and define the desired effect of the policy (Audit, or Disabled).
+1. Fill out any additional fields. Navigate between the tabs by selecting the **Previous** and **Next** buttons at the bottom of the page.
+1. Select **Review + create**
+1. Select **Create**
+
+Once the built-in policy is assigned, it can take up to 24 hours to complete the scan. After the scan is completed, you can view the compliance results.
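To make the intent of this policy concrete, here is a toy compliance check in Python. It is not the Azure Policy engine; the 90-day threshold and key names are illustrative stand-ins for the policy assignment's configurable parameter and your real keys.

```python
# Toy check mirroring the built-in policy's intent: a key is compliant when a
# rotation policy schedules rotation within the specified number of days.
def is_compliant(rotation_after_days, max_days=90):
    """max_days is a stand-in for the policy assignment's parameter."""
    return rotation_after_days is not None and rotation_after_days <= max_days

# Hypothetical keys: rotation interval in days, or None if no rotation policy is set.
keys = {"key-a": 30, "key-b": 365, "key-c": None}
for name, days in keys.items():
    print(name, "compliant" if is_compliant(days) else "non-compliant")
```

A key with no rotation policy at all (`key-c`) is reported non-compliant, just like one whose rotation interval exceeds the threshold.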
+
+
## Resources

- [Monitoring Key Vault with Azure Event Grid](../general/event-grid-overview.md)
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/hsm-protected-keys-byok.md
Refer to your HSM vendor's documentation to download and install the BYOK tool.
Transfer the BYOK file to your connected computer.

> [!NOTE]
-> Importing RSA 1,024-bit keys is not supported. Currently, importing an Elliptic Curve (EC) key is not supported.
+> Importing RSA 1,024-bit keys is not supported. Importing EC-HSM P256K keys is supported.
>
> **Known issue**: Importing an RSA 4K target key from Luna HSMs is only supported with firmware 7.4.0 or newer.
key-vault Multi Region Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/multi-region-replication.md
The following regions are supported as primary regions (Regions where you can re
- Switzerland West

> [!NOTE]
-> US Central, US East, West US 2, Switzerland North, West Europe, Central India, Canada Central, Canada East, Japan West, Qatar Central and US West Central cannot be extended as a secondary region at this time.
+> US Central, US East, West US 2, Switzerland North, West Europe, Central India, Canada Central, Canada East, Japan West, Qatar Central, Poland Central and US West Central cannot be extended as a secondary region at this time.
## Billing
The [Managed HSM soft-delete feature](soft-delete-overview.md) allows recovery o
## Private link behavior with Multi-region replication
-The [Azure Private Link feature](private-link.md) allows you to access the Managed HSM service over a private endpoint in your virtual network. You would configure private endpoint on the Managed HSM in the primary region just as you would when not using the multi-region replication feature. For the Managed HSM in the secondary region, it is recommended to create another private endpoint once the Managed HSM in the primary region is replicated to the Manged HSM in the secondary region. This will redirect client requests to the Managed HSM closest to the client location.
+The [Azure Private Link feature](private-link.md) allows you to access the Managed HSM service over a private endpoint in your virtual network. You would configure a private endpoint on the Managed HSM in the primary region just as you would when not using the multi-region replication feature. For the Managed HSM in the secondary region, it is recommended to create another private endpoint once the Managed HSM in the primary region is replicated to the Managed HSM in the secondary region. This will redirect client requests to the Managed HSM closest to the client location.
Some example scenarios are described below, with a Managed HSM in a primary region (UK South) and another Managed HSM in a secondary region (US West Central).
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
lab-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md
Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
load-testing How To Export Test Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-export-test-results.md
timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success
``` ## Access and download load test results
+>[!IMPORTANT]
+>For load tests with more than 45 engine instances or a test run duration greater than 3 hours, the results file isn't available for download. You can configure a [JMeter Backend Listener](#export-test-results-using-jmeter-backend-listeners) to export the results to a data store of your choice.
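+The downloaded results file follows the JMeter JTL CSV format whose header appears above. A minimal sketch of post-processing such a file in Python (the rows and the column subset are illustrative, not real test output):

```python
import csv
import io

# Illustrative rows in the JMeter JTL (CSV) results format; only the first
# eight columns from the standard header are shown
sample = """timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success
1690000000000,120,Home,200,OK,tg1-1,text,true
1690000000100,250,Home,500,Internal Server Error,tg1-2,text,false
1690000000200,90,Home,200,OK,tg1-3,text,true
"""

def summarize(jtl_text):
    """Return (error_rate, average_elapsed_ms) for a JTL CSV string."""
    rows = list(csv.DictReader(io.StringIO(jtl_text)))
    failures = sum(1 for r in rows if r["success"].lower() != "true")
    avg_elapsed = sum(int(r["elapsed"]) for r in rows) / len(rows)
    return failures / len(rows), avg_elapsed
```

+The same approach extends to percentile latencies or per-label breakdowns once the CSV is loaded.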
# [Azure portal](#tab/portal) To download the test results for a test run in the Azure portal:
When you run a load test as part of your CI/CD pipeline, Azure Load Testing gene
:::image type="content" source="./media/how-to-export-test-results/azure-pipelines-run-summary.png" alt-text="Screenshot that shows the Azure Pipelines workflow summary page, highlighting the test results in the Stages section." lightbox="./media/how-to-export-test-results/azure-pipelines-run-summary.png":::
+## Export test results using JMeter Backend Listeners
+You can use [JMeter Backend Listeners](https://jmeter.apache.org/usermanual/component_reference.html#Backend_Listener) to export test results to databases such as InfluxDB or MySQL, or to monitoring tools such as Azure Application Insights.
+
+You can use the backend listeners available by default in JMeter, backend listeners from [jmeter-plugins.org](https://jmeter-plugins.org), or a custom backend listener in the form of a Java archive (JAR) file.
+
+A sample JMeter script that uses a [backend listener for Azure Application Insights](https://github.com/adrianmo/jmeter-backend-azure) is available in the [azure-load-testing-samples repository](https://github.com/Azure-Samples/azure-load-testing-samples/tree/main/jmeter-backend-listeners).
+
+The following code snippet shows an example of a backend listener for Azure Application Insights in a JMX file:
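+A minimal sketch of such a listener element in a JMX file, assuming the community jmeter-backend-azure plugin linked above (the `classname` and argument names are assumptions taken from that plugin; verify them against the plugin's README, and replace the placeholder value):

```xml
<BackendListener guiclass="BackendListenerGui" testclass="BackendListener" testname="Azure Application Insights Listener" enabled="true">
  <elementProp name="arguments" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" enabled="true">
    <collectionProp name="Arguments.arguments">
      <!-- Connection detail for the target Application Insights resource (placeholder value) -->
      <elementProp name="instrumentationKey" elementType="Argument">
        <stringProp name="Argument.name">instrumentationKey</stringProp>
        <stringProp name="Argument.value">your-instrumentation-key</stringProp>
      </elementProp>
    </collectionProp>
  </elementProp>
  <!-- Listener implementation class from the plugin JAR uploaded alongside the test script -->
  <stringProp name="classname">io.github.adrianmo.jmeter.backendlistener.azure.AzureBackendClient</stringProp>
</BackendListener>
```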
## Next steps
load-testing Resource Jmeter Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-jmeter-support.md
The following table lists the Apache JMeter features and their support in Azure
| Configuration elements | All configuration elements are supported. | Example: [Read data from a CSV file](./how-to-read-csv-data.md) | | JMeter properties | Azure Load Testing supports uploading a single user properties file per load test to override JMeter configuration settings or add custom properties.<br/>System properties files aren't supported. | [Configure JMeter user properties](./how-to-configure-user-properties.md) | | Plugins | Azure Load Testing lets you use plugins from https://jmeter-plugins.org, or upload a Java archive (JAR) file with your own plugin code.<br/>The [Web Driver sampler](https://jmeter-plugins.org/wiki/WebDriverSampler/) and any plugins that use backend listeners aren't supported. | [Customize a load test with plugins](./how-to-use-jmeter-plugins.md) |
-| Listeners | Azure Load Testing ignores all [Results Collectors](https://jmeter.apache.org/api/org/apache/jmeter/reporters/ResultCollector.html), which includes visualizers such as the [results tree](https://jmeter.apache.org/usermanual/component_reference.html#View_Results_Tree) or [graph results](https://jmeter.apache.org/usermanual/component_reference.html#Graph_Results).<br/>[Backend listeners](https://jmeter.apache.org/usermanual/component_reference.html#Backend_Listener) aren't supported. | |
+| Listeners | Azure Load Testing ignores all [Results Collectors](https://jmeter.apache.org/api/org/apache/jmeter/reporters/ResultCollector.html), which includes visualizers such as the [results tree](https://jmeter.apache.org/usermanual/component_reference.html#View_Results_Tree) or [graph results](https://jmeter.apache.org/usermanual/component_reference.html#Graph_Results). | |
| Dashboard report | The Azure Load Testing dashboard shows the client metrics, and optionally the server-side metrics. <br/>You can export the load test results to use them in a reporting tool or [generate the JMeter dashboard](https://jmeter.apache.org/usermanual/generating-dashboard.html#report) on your local machine.| [Export test results](./how-to-export-test-results.md) | | Test fragments| Not supported. | |
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
The following limits apply on a per-region, per-subscription basis.
| Resource | Default limit | Maximum limit | |||| | Concurrent engine instances | 5-100 <sup>1</sup> | 1000 |
-| Engine instances per test run | 1-45 <sup>1</sup> | 45 |
+| Engine instances per test run | 1-45 <sup>1</sup> | 400 |
<sup>1</sup> If you aren't already at the maximum limit, you can request an increase. We aren't currently able to approve increase requests past our maximum limitations stated above. To request an increase for your default limit, contact Azure Support. Default limits vary by offer category type.
The following limits apply on a per-region, per-subscription basis.
| Resource | Default limit | Maximum limit | |||| | Concurrent test runs | 5-25 <sup>2</sup> | 1000 |
-| Test duration | 3 hours | |
+| Test duration | 3 hours <sup>2</sup> | 24 hours |
| Tests per resource | 10000 | | | Test runs per test | 5000 | | | File uploads per test | 1000 | |
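The engine-instance limit above determines how much load a single test run can generate. A minimal sketch of the sizing arithmetic (the 250 threads-per-engine figure is an assumption based on common JMeter sizing guidance, not a limit stated here):

```python
def total_virtual_users(engine_instances: int, users_per_engine: int) -> int:
    """Total concurrent virtual users a test run can drive."""
    return engine_instances * users_per_engine

# With the default 45-engine maximum and an assumed 250 threads per engine:
print(total_virtual_users(45, 250))  # → 11250
```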
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023 ms.suite: integration
machine-learning How To Use Foundation Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-foundation-models.md
Each model can be evaluated for the specific inference task that the model can b
1. Select **Finish** in the Evaluate wizard to submit your evaluation job. Once the job completes, you can view evaluation metrics for the model. Based on the evaluation metrics, you might decide if you would like to finetune the model using your own training data. Additionally, you can decide if you would like to register the model and deploy it to an endpoint.
-**Advanced Evaluation Parameters:**
-
-* The Evaluate UI wizard, allows you to perform basic evaluation by providing your own test data. Additionally, there are several advanced evaluation parameters described [in this reference page](https://github.com/Azure/azureml-assets/blob/main/training/model_evaluation/components/evaluate_model/README.md), such as evaluation config. Each of these settings has default values, but can be customized via code based samples, if needed.
- ### Evaluating using code based samples To enable users to get started with model evaluation, we have published samples (both Python notebooks and CLI examples) in the [Evaluation samples in azureml-examples git repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/evaluation). Each model card also links to evaluation samples for corresponding tasks
You can invoke the finetune settings form by selecting on the **Finetune** butto
3. Select **Finish** in the finetune form to submit your finetuning job. Once the job completes, you can view evaluation metrics for the finetuned model. You can then register the finetuned model output by the finetuning job and deploy this model to an endpoint for inferencing.
-**Advanced finetuning parameters:**
-
-The finetuning feature, allows you to perform basic finetuning by providing your own training data. Additionally, there are several advanced finetuning parameters, such as learning rate, epochs, batch size, etc., described in the Readme file for each task [here](https://github.com/Azure/azureml-assets/tree/main/training/finetune_acft_hf_nlp/components/finetune). Each of these settings has default values, but can be customized via code based samples, if needed.
- ### Finetuning using code based samples Currently, Azure Machine Learning supports finetuning models for the following language tasks:
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-workspace-vnet.md
Azure Machine Learning supports storage accounts configured to use either a priv
> [!TIP] > When using a service endpoint, you can also disable public access. For more information, see [disallow public read access](../../storage/blobs/anonymous-read-access-configure.md#allow-or-disallow-public-read-access-for-a-storage-account). + ## Secure Azure Key Vault
validate=False)
To enable network isolation for Azure Monitor and the Application Insights instance for the workspace, use the following steps:
-1. Upgrade the Application Insights instance for your workspace. For steps on how to upgrade, see [Migrate to workspace-based Application Insights resources](/azure/azure-monitor/app/convert-classic-resource).
+1. Open your Application Insights resource in the Azure portal. The __Overview__ tab may or may not have a Workspace property. If it _doesn't_ have the property, perform step 2. If it _does_, proceed directly to step 3.
- > [!TIP]
- > New workspaces create a workspace-based Application Insights resource by default.
+ > [!TIP]
+ > New workspaces create a workspace-based Application Insights resource by default. If your workspace was created recently, you don't need to perform step 2.
+
+1. Upgrade the Application Insights instance for your workspace. For steps on how to upgrade, see [Migrate to workspace-based Application Insights resources](/azure/azure-monitor/app/convert-classic-resource).
1. Create an Azure Monitor Private Link Scope and add the Application Insights instance from step 1 to the scope. For steps on how to do this, see [Configure your Azure Monitor private link](/azure/azure-monitor/logs/private-link-configure).
This article is part of a series on securing an Azure Machine Learning workflow.
* [Tutorial: Create a secure workspace](../tutorial-create-secure-workspace.md) * [Tutorial: Create a secure workspace using a template](../tutorial-create-secure-workspace-template.md) * [API platform network isolation](../how-to-configure-network-isolation-with-v2.md)
managed-instance-apache-cassandra Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-cli.md
cqlsh $host 9042 -u cassandra -p $initial_admin_password --ssl
### Connecting from an application
-As with CQLSH, connecting from an application using one of the supported [Apache Cassandra client drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html) requires SSL encryption to be enabled, and certification verification to be disabled. See samples for connecting to Azure Managed Instance for Apache Cassandra using [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started) and [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started).
+As with CQLSH, connecting from an application using one of the supported [Apache Cassandra client drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html) requires SSL encryption to be enabled and certificate verification to be disabled. See samples for connecting to Azure Managed Instance for Apache Cassandra using [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started), [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started), and [Python](https://github.com/Azure-Samples/azure-cassandra-mi-python-v4-getting-started).
Disabling certificate verification is recommended because certificate verification won't work unless you map the IP addresses of your cluster nodes to the appropriate domain. If you have an internal policy that mandates SSL certificate verification for any application, you can facilitate this by adding entries like `10.0.1.5 host1.managedcassandra.cosmos.azure.com` in your hosts file for each node. If you take this approach, you also need to add new entries whenever you scale up nodes.
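The driver-side configuration described above can be sketched in Python (a sketch assuming the DataStax cassandra-driver; the host and credentials are placeholders):

```python
import ssl

def insecure_ssl_context():
    """TLS context with encryption enabled but certificate verification
    disabled, for when node IPs aren't mapped to certificate hostnames."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False          # must be disabled before verify_mode
    ctx.verify_mode = ssl.CERT_NONE     # certificate verification off
    return ctx

# With the DataStax driver installed (pip install cassandra-driver), the
# context is passed to the Cluster; host and password are placeholders:
#   from cassandra.cluster import Cluster
#   from cassandra.auth import PlainTextAuthProvider
#   auth = PlainTextAuthProvider("cassandra", "<initial_admin_password>")
#   cluster = Cluster(["<node-ip>"], port=9042,
#                     ssl_context=insecure_ssl_context(), auth_provider=auth)
#   session = cluster.connect()
```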
managed-instance-apache-cassandra Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-portal.md
cqlsh $host 9042 -u cassandra -p $initial_admin_password --ssl
``` ### Connecting from an application
-As with CQLSH, connecting from an application using one of the supported [Apache Cassandra client drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html) requires SSL encryption to be enabled, and certification verification to be disabled. See samples for connecting to Azure Managed Instance for Apache Cassandra using [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started) and [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started).
+As with CQLSH, connecting from an application using one of the supported [Apache Cassandra client drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html) requires SSL encryption to be enabled and certificate verification to be disabled. See samples for connecting to Azure Managed Instance for Apache Cassandra using [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started), [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started), and [Python](https://github.com/Azure-Samples/azure-cassandra-mi-python-v4-getting-started).
Disabling certificate verification is recommended because certificate verification won't work unless you map the IP addresses of your cluster nodes to the appropriate domain. If you have an internal policy that mandates SSL certificate verification for any application, you can facilitate this by adding entries like `10.0.1.5 host1.managedcassandra.cosmos.azure.com` in your hosts file for each node. If you take this approach, you also need to add new entries whenever you scale up nodes.
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 07/18/2023 Last updated : 07/25/2023 # Azure Policy built-in definitions for Azure Database for MariaDB
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
ms. Previously updated : 12/12/2022 Last updated : 07/26/2023
On machines you want to migrate, you need to install the Mobility service agent.
``` 2. Run the Mobility Service Installer: ```
- UnifiedAgent.exe /Role "MS" /Platform "VmWare" /Silent
+ UnifiedAgent.exe /Role "MS" /Platform "VmWare" /Silent /CSType CSLegacy
``` 3. Register the agent with the replication appliance: ```
On machines you want to migrate, you need to install the Mobility service agent.
``` 2. Run the installer script: ```
- sudo ./install -r MS -v VmWare -q
+ sudo ./install -r MS -v VmWare -q -c CSLegacy
``` 3. Register the agent with the replication appliance: ```
- /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i <replication appliance IP address> -P <Passphrase File Path>
+ /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i <replication appliance IP address> -P <Passphrase File Path> -c CSLegacy
``` ## Replicate machines
mysql How To Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-cli.md
A read replica server can be created using the following command:
az mysql flexible-server replica create --replica-name mydemoreplicaserver --source-server mydemoserver --resource-group myresourcegroup ```
+> [!IMPORTANT]
+>When using the CLI to create an in-region read replica from a source server with private access, the source server's network settings are carried over. The private access input parameters, such as `private-dns-zone`, `subnet`, and `vnet`, are ignored, and the in-region read replica is created with the same private access settings as the source server.
+ > [!NOTE] > Read replicas are created with the same server configuration as the source. The replica server configuration can be changed after it has been created. The replica server is always created in the same resource group, same location and same subscription as the source server. If you want to create a replica server to a different resource group or different subscription, you can [move the replica server](../../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation. It is recommended that the replica server's configuration should be kept at equal or greater values than the source to ensure the replica is able to keep up with the source.
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 07/18/2023 Last updated : 07/25/2023 # Azure Policy built-in definitions for Azure Database for MySQL
network-watcher Azure Monitor Agent With Connection Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/azure-monitor-agent-with-connection-monitor.md
Title: Monitor network connectivity using Azure Monitor Agent
-description: This article describes how to monitor network connectivity in Connection Monitor by using Azure Monitor Agent.
-
+ Title: Monitor network connectivity using Azure Monitor agent
+
+description: Learn how to use Azure Monitor agent to monitor network connectivity with Network Watcher connection monitor.
+ Previously updated : 10/27/2022 Last updated : 07/26/2023 #Customer intent: I need to monitor a connection by using Azure Monitor agent.
-# Monitor network connectivity using Azure Monitor Agent with connection monitor
+# Monitor network connectivity using Azure Monitor agent with connection monitor
-Connection Monitor supports the Azure Monitor Agent extension, which eliminates any dependency on the legacy Log Analytics agent.
+Connection monitor supports the Azure Monitor agent extension, which eliminates any dependency on the legacy Log Analytics agent.
-With Azure Monitor Agent, a single agent consolidates all the features necessary to address all connectivity logs and metrics data collection needs across Azure and on-premises machines compared to running various monitoring agents.
+Azure Monitor agent consolidates all the features necessary to address connectivity logs and metrics data collection needs across Azure and on-premises machines compared to running various monitoring agents.
-Azure Monitor Agent provides the following benefits:
+Azure Monitor agent provides the following benefits:
* Enhanced security and performance capabilities * Effective cost savings with efficient data collection * Ease of troubleshooting, with simpler data collection management for the Log Analytics agent
-[Learn more about Azure Monitor Agent](../azure-monitor/agents/agents-overview.md).
+For more information, see [Azure Monitor agent](../azure-monitor/agents/agents-overview.md).
+
+To start using connection monitor for monitoring, follow these steps:
-To start using Connection Monitor for monitoring, do the following:
* Install monitoring agents * Create a connection monitor * Analyze monitoring data and set alerts * Diagnose issues in your network
-The following sections provide details on the installation of monitoring agents. For more information, see [Monitor network connectivity by using Connection Monitor](connection-monitor-overview.md).
+The following sections provide details on the installation of monitoring agents. For more information, see [Monitor network connectivity using Connection monitor](connection-monitor-overview.md).
## Install monitoring agents for Azure and non-Azure resources
-Connection Monitor relies on lightweight executable files to run connectivity checks. It supports connectivity checks from both Azure and on-premises environments. The executable file that you use depends on whether your virtual machine (VM) is hosted on Azure or on-premises.
+Connection monitor relies on lightweight executable files to run connectivity checks. It supports connectivity checks from both Azure and on-premises environments. The executable file that you use depends on whether your virtual machine (VM) is hosted on Azure or on-premises.
### Agents for Azure virtual machines and scale sets
-To install agents for Azure virtual machines and Virtual Machine Scale Sets, see the "Agents for Azure virtual machines and Virtual Machine Scale Sets" section of [Monitor network connectivity by using Connection Monitor](connection-monitor-overview.md#agents-for-azure-virtual-machines-and-virtual-machine-scale-sets).
+To install agents for Azure virtual machines and Virtual Machine Scale Sets, see the "Agents for Azure virtual machines and Virtual Machine Scale Sets" section of [Monitor network connectivity using Connection monitor](connection-monitor-overview.md#agents-for-azure-virtual-machines-and-virtual-machine-scale-sets).
### Agents for on-premises machines
-To make Connection Monitor recognize your on-premises machines as sources for monitoring, do the following:
+To make connection monitor recognize your on-premises machines as sources for monitoring, follow these steps:
* Enable your hybrid endpoints to [Azure Arc-enabled servers](../azure-arc/overview.md). * Connect hybrid machines by installing the [Azure Connected Machine agent](../azure-arc/servers/overview.md) on each machine.
- This agent doesn't deliver any other functionality, and it doesn't replace Azure Monitor Agent. The Azure Connected Machine agent simply enables you to manage the Windows and Linux machines that are hosted outside of Azure on your corporate network or other cloud providers.
+ This agent doesn't deliver any other functionality, and it doesn't replace Azure Monitor agent. The Azure Connected Machine agent simply enables you to manage the Windows and Linux machines that are hosted outside of Azure on your corporate network or other cloud providers.
+
+* [Install Azure Monitor agent](../azure-monitor/agents/agents-overview.md) to enable the Network Watcher extension.
+
+ The agent collects monitoring logs and data from the hybrid sources and delivers them to Azure Monitor.
+
+### Enable the Network Performance Monitor solution for on-premises machines
+
+To enable the Network Performance Monitor solution for on-premises machines, follow these steps:
+
+1. In the Azure portal, go to **Network Watcher**.
+
+1. Under **Monitoring**, select **Network Performance Monitor**. A list of workspaces with the Network Performance Monitor solution enabled is displayed, filtered by **Subscriptions**.
+
+1. To add the Network Performance Monitor solution in a new workspace, select **Add NPM**.
+
+1. In **Enable Non-Azure**, select the subscription and workspace in which you want to enable the solution, and then select **Create**.
+
+ After you've enabled the solution, the workspace takes a couple of minutes to be displayed.
-* [Install Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) to enable the Network Watcher extension.
+Unlike Log Analytics agents, the Network Performance Monitor solution can be configured to send data only to a single Log Analytics workspace.
- The agent collects monitoring logs and data from the hybrid sources and delivers it to Azure Monitor.
+If you wish to skip the installation process for enabling the Network Watcher extension, you can proceed with creating the connection monitor and allow auto-enablement of the monitoring solution on your on-premises machines.
## Coexistence with other agents
-Azure Monitor Agent can coexist with, or run side by side on the same machine as, the legacy Log Analytics agents. You can continue to use their existing functionality during evaluation or migration.
+Azure Monitor agent can coexist with, or run side by side on the same machine with, the legacy Log Analytics agent. You can continue to use their existing functionality during evaluation or migration.
-Although this coexistence allows you to begin the transition, there are certain limitations. Keep in mind the following considerations:
+Although this coexistence allows you to begin the transition, there are certain limitations that you need to consider:
-* Do not collect duplicate data, because it could alter query results and affect downstream features such as alerts, dashboards, or workbooks.
+* Don't collect duplicate data, because it could alter query results and affect downstream features such as alerts, dashboards, or workbooks.
- For example, the VM insights feature uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents. If you install Azure Monitor Agent and create a data collection rule for the same events and performance data, it will result in duplicate data. Ensure that you're not collecting the same data from both agents. If you are collecting duplicate data, make sure that it's coming from different machines or going to separate destinations.
+ For example, the VM insights feature uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents. If you install Azure Monitor agent and create a data collection rule for the same events and performance data, it will result in duplicate data. Ensure that you're not collecting the same data from both agents. If you're collecting duplicate data, make sure that it's coming from different machines or going to separate destinations.
* Data duplication would also generate more charges for data ingestion and retention.
network-watcher Connection Monitor Create Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-portal.md
In the Azure portal, to create a test group in a connection monitor, specify val
:::image type="content" source="./media/connection-monitor-2-preview/arc-endpoint.png" alt-text="Screenshot of Azure Arc-enabled and Azure Monitor Agent-enabled hosts.":::
- If you need to add Network Performance Monitor to your workspace, get it from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/solarwinds.solarwinds-orion-network-performance-monitor?tab=Overview). For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](/previous-versions/azure/azure-monitor/insights/solutions). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines).
+ If you need to add Network Performance Monitor to your workspace, get it from Azure Marketplace. For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](/previous-versions/azure/azure-monitor/insights/solutions). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines).
Under **Create Connection Monitor**, on the **Basics** pane, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created. You can also change the **Group by** selector to group by agents.
network-watcher Network Watcher Security Group View Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-powershell.md
The scenario covered in this article retrieves the configured and effective secu
The first step is to retrieve the Network Watcher instance. This variable is passed to the `Get-AzNetworkWatcherSecurityGroupView` cmdlet. ```powershell
-$networkWatcher = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq "WestCentralUS" }
+$nw = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq "WestCentralUS" }
+$networkWatcher = Get-AzNetworkWatcher -Name $nw.Name -ResourceGroupName $nw.ResourceGroupName
``` ## Get a VM
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
orbital Modem Chain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/modem-chain.md
# How to configure the RF chain
-You have the flexibility to choose between managed modem or virtual RF functionality using the Azure Orbital Ground Station service. These operational modes are specified on a per channel basis in the contact profile. See [ground station contact profile](concepts-contact-profile.md) to learn more about channels and links.
+You have the flexibility to choose between managed modem or virtual RF functionality using the Azure Orbital Ground Station service. These operational modes are specified on a per-channel basis in the contact profile. See [ground station contact profile](concepts-contact-profile.md) to learn more about channels and links.
## Managed modems vs virtual RF delivery
-We recommend taking advantage of Orbital Ground Station's managed modem functionality if possible. The modem is managed by the service and is inserted between your endpoint and the incoming or outgoing virtual RF stream for each pass. You can specify the modem setup using a modem configuration file or apply one of the in-built named modem configurations for commonly used public satellites such as Aqua.
+We recommend taking advantage of Azure Orbital Ground Station's managed modem functionality, if possible. The modem is managed by the service and is inserted between your endpoint and the incoming or outgoing virtual RF stream for each pass. You can specify the modem setup using your modem configuration file or apply one of the in-built named modem configurations for commonly used public satellites such as Aqua.
-Use virtual RF delivery if you wish to have tighter control on the modem setup or bring your own modem to the resource group. Orbital Ground Station will connect to your channel endpoint specified in the contact profile.
+Virtual RF delivery can be used if you wish to have tighter control on the modem setup or bring your own modem to the Azure resource group. Azure Orbital Ground Station will connect to your channel endpoint that is specified in the contact profile.
## How to configure your channels
The table below shows you how to configure the modem or virtual RF parameters.
| decodingConfiguration | Null (not used) |

> [!NOTE]
-> Endpoint specified for the channel will apply to whichever option is selected. Please review [how to prepare network](prepare-network.md) for more details on setting up endpoints.
+> The endpoint specified for the channel will apply to whichever option is selected. Please review [how to prepare network](prepare-network.md) for more details on setting up endpoints.
-### For full-duplex cases
+### Full-duplex cases
Use the same modem config file in uplink and downlink channels for full-duplex communications in the same band.

### How to input the modem config
-You can enter the modem config when creating a contact profile object or add it in later. Modifications to existing modem configs are also allowed.
+You can enter your existing modem config when creating a contact profile object or add it in later. Modifications to existing modem configs are also allowed.
-#### Entering the modem config using the API
+#### Entering the modem config using the Azure Orbital Ground Station API
Enter the modem config as a JSON escaped string from the desired modem config file when using the API.
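To illustrate the escaping, a hypothetical channel fragment might look like the following (the channel name and XML content are placeholders, not a real modem config; note that quotes inside the XML are backslash-escaped so the whole config fits in one JSON string):

```json
{
  "name": "contoso-downlink-channel",
  "demodulationConfiguration": "<ModemConfiguration version=\"1.0\"><!-- modem settings elided --></ModemConfiguration>"
}
```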
-#### Entering the modem config using the portal
+#### Entering the modem config using the Azure Portal
Select 'Raw XML' and then **paste the modem config raw (without JSON escapement)** into the field shown below when entering channel details using the portal.

:::image type="content" source="media/azure-ground-station-modem-config-portal-entry.png" alt-text="Screenshot of entering a modem configuration into the contact profile object." lightbox="media/azure-ground-station-modem-config-portal-entry.png":::

### Named modem configuration
-We currently support the following named modem configurations.
+We currently support the following named modem configurations:
| **Public Satellite Service** | **Named modem string** | **Note** |
|--|--|--|
-| Aqua Direct Broadcast | aqua_direct_broadcast | This is NASA AQUA's 15-Mbps direct broadcast service |
-| Aqua Direct Playback | aqua_direct_playback | This is NASA's AQUA's 150-Mbps direct broadcast service |
-| Terra Direct Broadcast | terra_direct_broadcast | This is NASA Terra's 13.125-Mbps direct broadcast service |
+| Aqua Direct Broadcast | aqua_direct_broadcast | This is NASA Aqua 15-Mbps direct broadcast service |
+| Aqua Direct Playback | aqua_direct_playback | This is NASA Aqua 150-Mbps direct broadcast service |
+| Terra Direct Broadcast | terra_direct_broadcast | This is NASA Terra 13.125-Mbps direct broadcast service |
| SNPP Direct Broadcast | snpp_direct_broadcast | This is NASA SNPP 15-Mbps direct broadcast service |
| JPSS-1 Direct Broadcast | jpss-1_direct_broadcast | This is NASA JPSS-1 15-Mbps direct broadcast service |

> [!NOTE]
> We recommend using the Aqua Direct Broadcast modem configuration when testing with Aqua.
>
-> Orbital does not have control over the downlink schedules for these public satellites. NASA conducts their own operations which may interrupt the downlink availabilities.
+> Azure Orbital Ground Station does not have control over the downlink schedules for these public satellites. NASA conducts their own operations which may interrupt downlink availabilities.
| **Spacecraft Title** | **Aqua** |**Suomi NPP**|**JPSS-1/NOAA-20**| **Terra** |
| : | :-: | :-: | :-: | :-: |
| `direction:` | Downlink, | Downlink, | Downlink, | Downlink, |
| `polarization:` | RHCP | RHCP | RHCP | RHCP |
-#### Specifying a named modem configuration using the API
+#### Specifying a named modem configuration using the Azure Orbital Ground Station API
Enter the named modem string into the demodulationConfiguration parameter when using the API.

```javascript
}
```
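For example, using the Aqua named modem string from the table above, the relevant channel properties could be sketched as follows (a minimal fragment, assuming the parameter names shown in the channel configuration table; `decodingConfiguration` is left null because it isn't used with named modems):

```json
{
  "demodulationConfiguration": "aqua_direct_broadcast",
  "decodingConfiguration": null
}
```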
-#### Specifying a named modem configuration using the portal
+#### Specifying a named modem configuration using the Azure Portal
Select 'Preset Named Modem Configuration' and choose a configuration as shown below when entering channel details using the portal.
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/troubleshoot.md
To find the offer in the Azure Marketplace, use the following steps:
1. Search for _Apache Kafka on Confluent Cloud_.
1. Select the application tile.
-If the offer isn't displayed, contact [Confluent support](https://support.confluent.io). Your Azure Active Directory tenant ID must be on the list of allowed tenants. To learn how to find your tenant ID, see [How to find your Azure Active Directory tenant ID](../../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
+If the offer isn't displayed, contact [Confluent support](https://support.confluent.io). Your Azure Active Directory tenant ID must be on the list of allowed tenants. To learn how to find your tenant ID, see [How to find your Azure Active Directory tenant ID](/azure/active-directory-b2c/tenant-management-read-tenant-name).
## Purchase errors
peering-service Customer Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/customer-walkthrough.md
Previously updated : 07/23/2023 Last updated : 07/26/2023

# Azure Peering Service customer walkthrough

This section explains the steps to optimize your prefixes with an Internet Service Provider (ISP) or Internet Exchange Provider (IXP) who is a Peering Service partner.
-The complete list of Peering Service providers can be found here: [Peering Service partners](location-partners.md)
+See [Peering Service partners](location-partners.md) for a complete list of Peering Service providers.
## Activate the prefix
To activate the prefix, follow these steps:
1. Review the settings, and then select **Create**.
-## FAQs:
+## Frequently asked questions (FAQ)
**Q.** Will Microsoft re-advertise my prefixes to the Internet?
postgresql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compute-storage.md
You can monitor your I/O consumption in the Azure portal or by using Azure CLI c
### Maximum IOPS for your configuration
-|SKU Name |Storage Size in GiB |32 |64 |128 |256 |512 |1,024|2,048|4,096|8,192 |16,384|32768 |
+|SKU Name |Storage Size in GiB |32 |64 |128 |256 |512 |1,024|2,048|4,096|8,192 |16,384|32767 |
|||||-|-|--|--|--|--|||
| |Maximum IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
|**Burstable** | | | | | | | | | | | | |
When marked with a \*, IOPS are limited by the VM type you selected. Otherwise I
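For capacity planning, the size-to-IOPS ceilings in the table above can be captured in a small lookup. This is a sketch only, values are copied from the table, and the effective IOPS may be further capped by the VM type as noted:

```python
# Maximum IOPS ceiling per provisioned storage size (GiB), per the table above.
# The actual limit may be lower when the selected VM type caps IOPS.
MAX_IOPS = {
    32: 120, 64: 240, 128: 500, 256: 1100, 512: 2300,
    1024: 5000, 2048: 7500, 4096: 7500,
    8192: 16000, 16384: 18000, 32767: 20000,
}

def iops_ceiling(storage_gib: int) -> int:
    """Return the maximum IOPS for a provisioned storage size in GiB.

    Raises KeyError for sizes that aren't offered.
    """
    return MAX_IOPS[storage_gib]
```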
### Maximum I/O bandwidth (MiB/sec) for your configuration
-|SKU Name |Storage Size, GiB |32 |64 |128 |256 |512 |1,024 |2,048 |4,096 |8,192 |16,384|37,768|
+|SKU Name |Storage Size, GiB |32 |64 |128 |256 |512 |1,024 |2,048 |4,096 |8,192 |16,384|32,767|
||-| | |- |- |-- |-- |-- |-- |||
| |**Storage Bandwidth, MiB/sec** |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |900 |
|**Burstable** | | | | | | | | | | | | |
postgresql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-portal.md
Follow these steps to restart your flexible server.
initiated.

> [!NOTE]
-> Using custom RBAC role to restart server please make sure that in addition to Microsoft.DBforPostgreSQL/flexibleServers/restart/action permission this role also has Microsoft.DbforPostgreSQL/servers/write permission granted to it.
+> When using a custom RBAC role to restart the server, make sure that in addition to the Microsoft.DBforPostgreSQL/flexibleServers/restart/action permission, the role also has the Microsoft.DbforPostgreSQL/servers/read permission granted to it.
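As an illustration, a custom role definition carrying both permissions might be sketched like this (the role name, description, and subscription scope are placeholders; operation strings are taken from the note above):

```json
{
  "Name": "PostgreSQL Flexible Server Restart Operator",
  "IsCustom": true,
  "Description": "Can restart an Azure Database for PostgreSQL flexible server.",
  "Actions": [
    "Microsoft.DBforPostgreSQL/flexibleServers/restart/action",
    "Microsoft.DbforPostgreSQL/servers/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```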
## Next steps - Learn about [business continuity](./concepts-business-continuity.md)
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 07/18/2023 Last updated : 07/25/2023

# Azure Policy built-in definitions for Azure Database for PostgreSQL
private-5g-core Azure Private 5G Core Release Notes 2305 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2305.md
The following table provides a summary of issues fixed in this release.
|No. |Feature | Issue | Workaround/comments |
|--|--|--|--|
- | 1 | Local Dashboards | When a web proxy is enabled on the Azure Stack Edge appliance that the packet core is running on and Azure Active Directory is used to authenticate access to AP5GC Local Dashboards, the traffic to Azure Active Directory does not transmit via the web proxy. If there is a firewall blocking traffic that does not go via the web proxy then enabling Azure Active Directory will cause the packet core install to fail. | Disable Azure Active Directory and use password based authentication to authenticate access to AP5GC Local Dashboards instead.
-Description |
+ | 1 | Local Dashboards | When a web proxy is enabled on the Azure Stack Edge appliance that the packet core is running on and Azure Active Directory is used to authenticate access to AP5GC Local Dashboards, the traffic to Azure Active Directory does not transmit via the web proxy. If there is a firewall blocking traffic that does not go via the web proxy then enabling Azure Active Directory will cause the packet core install to fail. | Disable Azure Active Directory and use password based authentication to authenticate access to AP5GC Local Dashboards instead. |
| 2 | Reboot | AP5GC may intermittently fail to recover after the underlying platform is rebooted and may require another reboot to recover. | Not applicable. |

## Known issues from previous releases
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
sap Bom Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bom-prepare.md
To generate a BOM with permalinks:
The following sample is a small part of an example BOM file for S/4HANA 1909 SP2.
-You can find multiple complete, usable BOM files in the [GitHub repository](https://github.com/Azure/sap-automation/tree/main/training-materials/WORKSPACES/BOMS) folder.
- ```yml step|BOM Content
sap Get Sap Installation Media https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/get-sap-installation-media.md
Next, upload the SAP software files to the storage account:
1. [HANA_2_00_059_v0004ms.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/archives/HANA_2_00_059_v0004ms/HANA_2_00_059_v0004ms.yaml)
- 1. [SWPM20SP13_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml)
-
- 1. [SUM20SP15_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml)
1. For S/4HANA 2020 SPS 03:
    1. [S42020SPS03_v0003ms.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)
    1. [HANA_2_00_064_v0001ms.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
- 1. [SWPM20SP13_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml)
-
- 1. [SUM20SP15_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml)
-
1. For S/4HANA 2021 ISS 00:
    1. [S4HANA_2021_ISS_v0001ms.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)
    1. [HANA_2_00_064_v0001ms.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
- 1. [SWPM20SP13_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml)
-
- 1. [SUM20SP15_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml)
-
1. Depending on your SAP version, go to the folder **S41909SPS03_v0011ms** or **S42020SPS03_v0003ms** or **S4HANA_2021_ISS_v0001ms**.
1. Create a subfolder named **templates**.
Next, upload the SAP software files to the storage account:
1. [HANA_2_00_059_v0004ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/archives/HANA_2_00_059_v0004ms/HANA_2_00_059_v0004ms.yaml)
- 1. [SWPM20SP13_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml)
-
- 1. [SUM20SP15_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml)
-
-
1. For S/4HANA 2020 SPS 03:
    1. [S42020SPS03_v0003ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)
    1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
- 1. [SWPM20SP13_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml)
-
- 1. [SUM20SP15_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml)
-
1. For S/4HANA 2021 ISS 00:
    1. [S4HANA_2021_ISS_v0001ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)
    1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
- 1. [SWPM20SP13_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml)
-
- 1. [SUM20SP15_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SUM20SP15_latest/SUM20SP15_latest.yaml)
1. Repeat the previous step for the main and dependent BOM files.
1. Upload all the packages that you downloaded to the `archives` folder. Don't rename the files.
sap Dbms Guide Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-oracle.md
Previously updated : 07/13/2023 Last updated : 07/25/2023
Installing or migrating existing SAP on Oracle systems to Azure, the following d
For information about which Oracle versions and corresponding OS versions are supported for running SAP on Oracle on Azure Virtual Machines, see SAP Note [<u>2039619</u>](https://launchpad.support.sap.com/#/notes/2039619).
-General information about running SAP Business Suite on Oracle can be found in the [<u>SAP on Oracle community page</u>](https://www.sap.com/community/topic/oracle.html). SAP on Oracle on Azure is only supported on Oracle Linux (and not Suse or Red Hat). Oracle RAC isn't supported on Azure because RAC would require Multicast networking.
+General information about running SAP Business Suite on Oracle can be found in the [<u>SAP on Oracle community page</u>](https://www.sap.com/community/topic/oracle.html). SAP on Oracle on Azure is only supported on Oracle Linux (and not Suse or Red Hat) for application and database servers.
+ASCS/ERS servers can use RHEL/SUSE because Oracle client isn't installed or used on these VMs. Application Servers (PAS/AAS) shouldn't be installed on these VMs. Refer to SAP Note [3074643 - OLNX: FAQ: if Pacemaker for Oracle Linux is supported in SAP Environment](https://me.sap.com/notes/3074643). Oracle RAC isn't supported on Azure because RAC would require Multicast networking.
## Storage configuration
Checklist for Oracle Automatic Storage Management:
1. All SAP on Oracle on Azure systems are running **ASM** including Development, QAS and Production. Small, Medium and Large databases
2. [**ASMLib**](https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/about-oracle-asm-with-oracle-asmlib.html)
- is used and not UDEV. UDEV is required for multiple SANs, a scenario that does not exist on Azure
+ is used and not UDEV. UDEV is required for multiple SANs, a scenario that doesn't exist on Azure
3. ASM should be configured for **External Redundancy**. Azure Premium SSD storage has built in triple redundancy. Azure Premium SSD matches the reliability and integrity of any other storage solution. For optional safety customers can consider **Normal Redundancy** for the Log Disk Group
4. No Mirror Log is required for ASM [888626 - Redo log layout for high-end systems](https://launchpad.support.sap.com/#/notes/888626)
5. ASM Disk Groups configured as per Variant 1, 2 or 3 below
Documentation is available with:
Run an Oracle AWR report as the first step when troubleshooting a performance problem. Disk performance metrics are detailed in the AWR report.
-Disk performance can be monitored from inside Oracle Enterprise Manager and via external tools. Documentation which might help is available here:
+Disk performance can be monitored from inside Oracle Enterprise Manager and via external tools. Documentation that might help is available here:
- [Using Views to Display Oracle ASM Information](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/views-asm-info.html#GUID-23E1F0D8-ECF5-4A5A-8C9C-11230D2B4AD4)
- [ASMCMD Disk Group Management Commands (oracle.com)](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/asmcmd-diskgroup-commands.html#GUID-55F7A91D-2197-467C-9847-82A3308F0392)
OS level monitoring tools can't monitor ASM disks as there is no recognizable fi
### Training Resources on Oracle Automatic Storage Management (ASM)
-Oracle DBAs that are not familiar with Oracle ASM follow the training materials and resources here:
+Oracle DBAs who aren't familiar with Oracle ASM can follow the training materials and resources here:
- [Sap on Oracle with ASM on Microsoft Azure - Part1 - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-oracle-with-asm-on-microsoft-azure-part1/ba-p/1865024)
- [Oracle19c DB \[ ASM \] installation on \[ Oracle Linux 8.3 \] \[ Grid \| ASM \| UDEV \| OEL 8.3 \] \[ VMware \] - YouTube](https://www.youtube.com/watch?v=pRJgiuT-S2M)
- [ASM Administrator's Guide (oracle.com)](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/automatic-storage-management-administrators-guide.pdf)
-- [Oracle for SAP Technology Update (April 2022)](https://www.oracle.com/a/ocom/docs/ora4sap-technology-update-5112158.pdf)
+- [Oracle for SAP Development Update (May 2022)](https://www.oracle.com/a/ocom/docs/sap-on-oracle-dev-update.pdf)
- [Performance and Scalability Considerations for Disk Groups (oracle.com)](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/performance-scability-diskgroup.html#GUID-BC6544D7-6D59-42B3-AE1F-4201D3459ADD)
- [Migrating to Oracle ASM with Oracle Enterprise Manager](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/admin-asm-em.html#GUID-002546C0-7D5F-46E9-B3AD-CDCFF25AFEA0)
- [Using RMAN to migrate to ASM \| The Oracle Mentor (wordpress.com)](https://theoraclementor.wordpress.com/2013/07/07/using-rman-to-migrate-to-asm/)
or other backup tools.
## SAP on Oracle on Azure with LVM
-ASM is the default recommendation from Oracle for all SAP systems of any size on Azure. Performance, Reliability and Support are better for customers using ASM. Oracle provide documentation and training for DBAs to transition to ASM and every customer who has migrated to ASM has been pleased with the benefits. In cases where the Oracle DBA team doesn't follow the recommendation from Oracle, Microsoft and SAP to use ASM the following LVM configuration should be used.
+ASM is the default recommendation from Oracle for all SAP systems of any size on Azure. Performance, Reliability and Support are better for customers using ASM. Oracle provides documentation and training for DBAs to transition to ASM and every customer who has migrated to ASM has been pleased with the benefits. In cases where the Oracle DBA team doesn't follow the recommendation from Oracle, Microsoft and SAP to use ASM the following LVM configuration should be used.
Note that when creating LVM volumes, the "-i" option must be used to evenly distribute data across the number of disks in the LVM group.
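As a hypothetical example with four data disks (the volume group name, logical volume name, and stripe size are placeholders, not recommendations), the striping could look like:

```shell
# -i 4 stripes the logical volume evenly across the 4 physical volumes
# in the volume group; -I sets the stripe size in KiB.
sudo lvcreate -i 4 -I 256 -l 100%FREE -n lv_oracle_data vg_oracle_data
```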
SAP on Oracle on Azure also supports Windows. The recommendations for Windows de
1. The following Windows releases are recommended:
   - Windows Server 2022 (only from Oracle Database 19.13.0 on)
   - Windows Server 2019 (only from Oracle Database 19.5.0 on)
-2. There is no support for ASM on Windows. Windows Storage Spaces should be used to aggregate disks for optimal performance
+2. There's no support for ASM on Windows. Windows Storage Spaces should be used to aggregate disks for optimal performance
3. Install the Oracle Home on a dedicated independent disk (don't install Oracle Home on the C: Drive)
4. All disks must be formatted NTFS
5. Follow the Windows Tuning guide from Oracle and enable large pages, lock pages in memory and other Windows specific settings
-At the time, of writing ASM for Windows customers on Azure isn't supported. SWPM for Windows does not support ASM currently. VLDB SAP on Oracle migrations to Azure have required ASM and have therefore selected Oracle Linux.
+At the time of writing, ASM for Windows customers on Azure isn't supported. SWPM for Windows doesn't support ASM currently. VLDB SAP on Oracle migrations to Azure have required ASM and have therefore selected Oracle Linux.
## Storage Configurations for SAP on Oracle on Windows
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 06/29/2023 Last updated : 07/25/2023
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- July 25, 2023: Adding reference to SAP Note #3074643 to [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md)
- July 13, 2023: Clarifying differences in zonal replication between NFS on AFS and ANF in table in [Azure Storage types for SAP workload](./planning-guide-storage.md)
- July 13, 2023: Statement that 512byte and 4096 sector size for Premium SSD v2 do not show any performance difference in [SAP HANA Azure virtual machine Ultra Disk storage configurations](./hana-vm-ultra-disk.md)
-- July 13, 2023: Replaced links in ANF section of [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md#) to new ANF related documentation
+- July 13, 2023: Replaced links in ANF section of [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md) to new ANF related documentation
- July 11, 2023: Add a note about Azure NetApp Files application volume group for SAP HANA in [HA for HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md), [HANA scale-out with standby node with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [HA for HANA Scale-out HA on SLES](sap-hana-high-availability-scale-out-hsr-suse.md), [HA for HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) and [HA for HANA scale-out on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md)
- June 29, 2023: Update important considerations and sizing information in [HA for HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md)
- June 26, 2023: Update important considerations and sizing information in [HA for HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md) and [HANA scale-out with standby node with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md).
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
search Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-dotnet.md
Previously updated : 01/04/2023 Last updated : 07/26/2023

# C# samples for Azure Cognitive Search
Code samples from the Azure SDK development team demonstrate API usage. You can
| [FieldBuilderIgnore](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/search/Azure.Search.Documents/samples/Sample04_FieldBuilderIgnore.md) | Demonstrates a technique for working with unsupported data types. |
| [Indexing documents (push model)](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/search/Azure.Search.Documents/samples/Sample05_IndexingDocuments.md) | "Push" model indexing, where you send a JSON payload to an index on a service. |
| [Encryption key sample](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/search/Azure.Search.Documents/samples/Sample06_EncryptedIndex.md) | Demonstrates using a customer-managed encryption key to add an extra layer of protection over sensitive content. |
+| [Vector search sample](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/search/Azure.Search.Documents/samples/Sample07_VectorSearch.md) | Shows you how to index a vector field and perform vector search using the Azure SDK for .NET. Vector search is in preview. A beta version of Azure.Search.Documents provides support for this preview feature. |
## Doc samples
Code samples from the Cognitive Search team demonstrate features and workflows.
| [DotNetHowTo](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo) | [How to use the .NET client library](search-howto-dotnet-sdk.md) | Steps through the basic workflow, but in more detail and with discussion of API usage. |
| [DotNetHowToSynonyms](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToSynonyms) | [Example: Add synonyms in C#](search-synonyms-tutorial-sdk.md) | Synonym lists are used for query expansion, providing matchable terms that are external to an index. |
| [DotNetToIndexers](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToIndexers) | [Tutorial: Index Azure SQL data](search-indexer-tutorial.md) | Shows how to configure an Azure SQL indexer that has a schedule, field mappings, and parameters. |
-| [DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK) | [How to configure customer-managed keys for data encryption](search-security-manage-encryption-keys.md) | Shows how to create objects that are encrypted with a customer key. |
+| [DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK) | [How to configure customer-managed keys for data encryption](search-security-manage-encryption-keys.md) | Shows how to create objects that are encrypted with a Customer Key. |
| [multiple-data-sources](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/multiple-data-sources) | [Tutorial: Index from multiple data sources](tutorial-multiple-data-sources.md) | Merges content from two data sources into one search index. |
| [Optimize-data-indexing](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/optimize-data-indexing) | [Tutorial: Optimize indexing with the push API](tutorial-optimize-indexing-push-api.md) | Demonstrates optimization techniques for pushing data into a search index. |
| [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/tutorial-ai-enrichment) | [Tutorial: AI-generated searchable content from Azure blobs](cognitive-search-tutorial-blob-dotnet.md) | Shows how to configure an indexer and skillset. |
| [create-mvc-app](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/create-mvc-app) | [Tutorial: Add search to an ASP.NET Core (MVC) app](tutorial-csharp-create-mvc-app.md) | While most samples are console applications, this MVC sample uses a web page to front the sample Hotels index, demonstrating basic search, pagination, and other server-side behaviors.|
+## Accelerators
+
+An accelerator is an end-to-end solution that includes code and documentation that you can adapt for your own implementation of a specific scenario.
+
+| Samples | Repository | Description |
+|||-|
+| [Search + QnA Maker Accelerator](https://github.com/Azure-Samples/search-qna-maker-accelerator) | [search-qna-maker-accelerator](https://github.com/Azure-Samples/search-qna-maker-accelerator)| A [solution](https://techcommunity.microsoft.com/t5/azure-ai/qna-with-azure-cognitive-search/ba-p/2081381) combining the power of Search and QnA Maker. See the live [demo site](https://aka.ms/qnaWithAzureSearchDemo). |
+| [Knowledge Mining Solution Accelerator](/samples/azure-samples/azure-search-knowledge-mining/azure-search-knowledge-mining/) | [azure-search-knowledge-mining](https://github.com/azure-samples/azure-search-knowledge-mining/tree/main/) | Includes templates, support files, and analytical reports to help you prototype an end-to-end knowledge mining solution. |
+
+## Demos
+
+A demo repo provides proof-of-concept source code for examples or scenarios shown in demonstrations. Demo solutions aren't designed for adaptation by customers.
+
+| Samples | Repository | Description |
+|||-|
+| [Covid-19 search app](https://github.com/liamc) | [covid19search](https://github.com/liamca/covid19search) | Source code repository for the Cognitive Search based [Covid-19 Search App](https://covid19search.azurewebsites.net/) |
+| [JFK demo](https://github.com/Microsoft/AzureSearch_JFK_Files/blob/master/README.md) | [AzureSearch_JFK_Files](https://github.com/Microsoft/AzureSearch_JFK_Files) | Learn more about the [JFK solution](https://www.microsoft.com/ai/ai-lab-jfk-files). |
## Other samples

The following samples are also published by the Cognitive Search team, but aren't referenced in documentation. Associated readme files provide usage instructions.
-| Samples | Description |
-||-|
-| [Check storage](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/check-storage-usage/README.md) | Invokes an Azure function that checks search service storage on a schedule. |
-| [Export an index](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/export-dat) | C# console app that partitions and export a large index. |
-| [Query multiple services](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/multiple-search-services) | Issue a single query across multiple search services and combine the results into a single page. |
-| [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) | Source code demonstrating indexer connections and indexing of Azure Data Lake Gen2 files and folders that are secured through Azure AD and role-based access controls. |
-| [azure-search-power-skills](https://github.com/Azure-Samples/azure-search-power-skills) | Source code for consumable custom skills that you can incorporate in your won solutions. |
-| [Knowledge Mining Solution Accelerator](/samples/azure-samples/azure-search-knowledge-mining/azure-search-knowledge-mining/) | Includes templates, support files, and analytical reports to help you prototype an end-to-end knowledge mining solution. |
-| [Covid-19 Search App repository](https://github.com/liamca/covid19search) | Source code repository for the Cognitive Search based [Covid-19 Search App](https://covid19search.azurewebsites.net/) |
-| [JFK](https://github.com/Microsoft/AzureSearch_JFK_Files) | Learn more about the [JFK solution](https://www.microsoft.com/ai/ai-lab-jfk-files). |
-| [Search + QnA Maker Accelerator](https://github.com/Azure-Samples/search-qna-maker-accelerator) | A [solution](https://techcommunity.microsoft.com/t5/azure-ai/qna-with-azure-cognitive-search/ba-p/2081381) combining the power of Search and QnA Maker. See the live [demo site](https://aka.ms/qnaWithAzureSearchDemo). |
+| Samples | Repository | Description |
+|||-|
+| [Check storage](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/check-storage-usage/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Invokes an Azure function that checks search service storage on a schedule. |
+| [Export an index](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/export-dat) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | C# console app that partitions and exports a large index. |
+| [Backup and restore an index](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/index-backup-restore/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | C# console app that copies an index from one service to another, and in the process, creates JSON files on your computer with the index schema and documents.|
+| [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/master/data-lake-gen2-acl-indexing/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Source code demonstrating indexer connections and indexing of Azure Data Lake Gen2 files and folders that are secured through Azure AD and role-based access controls. |
+| [Search aggregations](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/search-aggregations/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Proof-of-concept source code that demonstrates how to obtain aggregations from a search index and then filter by them. |
+| [Query multiple services](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/multiple-search-services) | [azure-search-dotnet-samples](https://github.com/Azure-Samples/azure-search-dotnet-samples) | Issue a single query across multiple search services and combine the results into a single page. |
+| [Power Skills](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/README.md) | [azure-search-power-skills](https://github.com/Azure-Samples/azure-search-power-skills) | Source code for consumable custom skills that you can incorporate in your own solutions. |
search Search Howto Index Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-azure-data-lake-storage.md
In this article, learn how to configure an [**indexer**](search-indexer-overview
This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to indexing from ADLS Gen2. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
-For a code sample in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub.
+For a code sample in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub.
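The three-part workflow above can be sketched as the JSON bodies of the three REST requests. Here is a minimal Python sketch that only composes the payloads; the data source, index, and indexer names, the container, and the connection string are hypothetical placeholders, not values from the article:

```python
def indexer_workflow_payloads():
    """Compose the three request bodies: data source, index, indexer."""
    data_source = {
        "name": "adlsgen2-ds",  # hypothetical name
        "type": "adlsgen2",
        "credentials": {"connectionString": "<storage-connection-string>"},
        "container": {"name": "my-container"},  # hypothetical container
    }
    index = {
        "name": "my-index",
        "fields": [
            {"name": "id", "type": "Edm.String", "key": True},
            {"name": "content", "type": "Edm.String", "searchable": True},
        ],
    }
    # Submitting the Create Indexer request is what triggers data extraction.
    indexer = {
        "name": "my-indexer",
        "dataSourceName": data_source["name"],
        "targetIndexName": index["name"],
    }
    return data_source, index, indexer
```

Each payload would be POSTed to the corresponding `datasources`, `indexes`, and `indexers` endpoint in turn; the indexer body ties the other two together by name.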
## Prerequisites
You can now [run the indexer](search-howto-run-reset-indexers.md), [monitor stat
+ [Change detection and deletion detection](search-howto-index-changed-deleted-blobs.md)
+ [Index large data sets](search-howto-large-index.md)
-+ [C# Sample: Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md)
++ [C# Sample: Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/master/data-lake-gen2-acl-indexing/README.md)
search Search Howto Managed Identities Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-storage.md
You can use a system-assigned managed identity or a user-assigned managed identi
* You should be familiar with [indexer concepts](search-indexer-overview.md) and [configuration](search-howto-indexing-azure-blob-storage.md).

> [!TIP]
-> For a code example in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub.
+> For a code example in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub.
## Create the data source
Azure storage accounts can be further secured using firewalls and virtual networ
* [Azure Blob indexer](search-howto-indexing-azure-blob-storage.md)
* [Azure Data Lake Storage Gen2 indexer](search-howto-index-azure-data-lake-storage.md)
* [Azure Table indexer](search-howto-indexing-azure-tables.md)
-* [C# Example: Index Data Lake Gen2 using Azure AD (GitHub)](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md)
+* [C# Example: Index Data Lake Gen2 using Azure AD (GitHub)](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/master/data-lake-gen2-acl-indexing/README.md)
search Search Howto Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-reindex.md
The following table lists the modifications that require an index rebuild.
| Assign an analyzer to a field | [Analyzers](search-analyzers.md) are defined in an index and then assigned to fields. You can add a new analyzer definition to an index at any time, but you can only *assign* an analyzer when the field is created. This is true for both the **analyzer** and **indexAnalyzer** properties. The **searchAnalyzer** property is an exception (you can assign this property to an existing field). | | Update or delete an analyzer definition in an index | You can't delete or change an existing analyzer configuration (analyzer, tokenizer, token filter, or char filter) in the index unless you rebuild the entire index. | | Add a field to a suggester | If a field already exists and you want to add it to a [Suggesters](index-add-suggesters.md) construct, you must rebuild the index. |
-| Switch tiers | In-place upgrades aren't supported. If you require more capacity, you must create a new service and rebuild your indexes from scratch. To help automate this process, you can use the **index-backup-restore** sample code in this [Azure Cognitive Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-samples). This app will back up your index to a series of JSON files, and then recreate the index in a search service you specify.|
+| Switch tiers | In-place upgrades aren't supported. If you require more capacity, you must create a new service and rebuild your indexes from scratch. To help automate this process, you can use the **index-backup-restore** sample code in this [Azure Cognitive Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-utilities). This app will back up your index to a series of JSON files, and then recreate the index in a search service you specify.|
## Modifications with no rebuild requirement
search Search Import Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-import-data-portal.md
Previously updated : 10/21/2022 Last updated : 07/25/2023

# Import data wizard in Azure Cognitive Search
The wizard isn't without limitations. Constraints are summarized as follows:
+ A [knowledge store](knowledge-store-concept-intro.md), which can be created by the wizard, is limited to a few default projections and uses a default naming convention. If you want to customize names or projections, you'll need to create the knowledge store through REST API or the SDKs.
-+ Public access to all networks must be enabled on the supported data source while the wizard is used, since the portal won't be able to access the data source during setup if public access is disabled. This means that if your data source has a firewall enabled, you must disable it, run the Import Data wizard and then enable it after wizard setup is completed. If this isn't an option, you can create Azure Cognitive Search data source, indexer, skillset and index through REST API or the SDKs.
++ Public access to all networks must be enabled on the supported data source while the wizard is used, since the portal won't be able to access the data source during setup if public access is disabled. This means that if your data source has a firewall enabled or you have set up a shared private link, you must disable them, run the Import Data wizard, and then re-enable them after wizard setup is completed. If this isn't an option, you can create the Azure Cognitive Search data source, indexer, skillset, and index through the REST APIs or the SDKs.

## Workflow
search Search Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-reliability.md
If you don't use indexers, you would use your application code to push objects a
## Back up and restore alternatives
-A business continuity strategy for the data layer usually includes a restore-from-backup step. Because Azure Cognitive Search isn't a primary data storage solution, Microsoft doesn't provide a formal mechanism for self-service backup and restore. However, you can use the **index-backup-restore** sample code in this [Azure Cognitive Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-samples) to back up your index definition and snapshot to a series of JSON files, and then use these files to restore the index, if needed. This tool can also move indexes between service tiers.
+A business continuity strategy for the data layer usually includes a restore-from-backup step. Because Azure Cognitive Search isn't a primary data storage solution, Microsoft doesn't provide a formal mechanism for self-service backup and restore. However, you can use the **index-backup-restore** sample code in this [Azure Cognitive Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-utilities) to back up your index definition and snapshot to a series of JSON files, and then use these files to restore the index, if needed. This tool can also move indexes between service tiers.
Otherwise, your application code used for creating and populating an index is the de facto restore option if you delete an index by mistake. To rebuild an index, you would delete it (assuming it exists), recreate the index in the service, and reload by retrieving data from your primary data store.
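The back-up-to-JSON-files approach can be sketched as a paging loop. In this minimal Python sketch the search call is injected as a function so the paging logic stands alone; in the real sample this would be a query against the live index, and all names here are illustrative:

```python
import json

def backup_documents(search_page, page_size=1000):
    """Page through all documents via an injected search function.

    `search_page(skip, top)` is a stand-in for a real search call;
    it returns a list of documents, or an empty list when exhausted.
    """
    pages, skip = [], 0
    while True:
        docs = search_page(skip, page_size)
        if not docs:
            break
        pages.append(docs)
        skip += len(docs)
    return pages

def write_backup(pages, prefix="index-backup"):
    """One JSON file per page, mirroring the series-of-files layout."""
    for i, docs in enumerate(pages):
        with open(f"{prefix}-{i}.json", "w") as f:
            json.dump({"value": docs}, f)
```

The real service limits how deep skip-based paging can go, which is one reason the actual sample partitions the index; this sketch only shows the collect-and-write shape, not those service-specific constraints.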
security Threat Modeling Tool Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-getting-started.md
What Ricardo just showed Cristina is a DFD, short for **[Data Flow Diagram](http
## Analyzing threats
-Once he clicks on the analysis view from the icon menu selection (file with magnifying glass), he is taken to a list of generated threats the Threat Modeling Tool found based on the default template, which uses the SDL approach called **[STRIDE (Spoofing, Tampering, Info Disclosure, Repudiation, Denial of Service and Elevation of Privilege)](https://en.wikipedia.org/wiki/STRIDE_(security))**. The idea is that software comes under a predictable set of threats, which can be found using these 6 categories.
+Once he clicks on the analysis view from the icon menu selection (file with magnifying glass), he is taken to a list of generated threats the Threat Modeling Tool found based on the default template, which uses the SDL approach called **[STRIDE (Spoofing, Tampering, Repudiation, Info Disclosure, Denial of Service and Elevation of Privilege)](https://en.wikipedia.org/wiki/STRIDE_(security))**. The idea is that software comes under a predictable set of threats, which can be found using these 6 categories.
This approach is like securing your house by ensuring each door and window has a locking mechanism in place before adding an alarm system or chasing after the thief.
The approach to threat modeling we've presented here is substantially simpler th
## Next Steps
-Send your questions, comments and concerns to tmtextsupport@microsoft.com. **[Download](https://aka.ms/threatmodelingtool)** the Threat Modeling Tool to get started.
+Send your questions, comments and concerns to tmtextsupport@microsoft.com. **[Download](https://aka.ms/threatmodelingtool)** the Threat Modeling Tool to get started.
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
Microsoft Sentinel is a scalable, cloud-native, security information event manag
For more information, see the [Microsoft Sentinel product documentation](../../sentinel/overview.md).
-The following tables display the current Microsoft Sentinel feature availability in Azure and Azure Government.
-
-| Feature | Azure | Azure Government |
-| -- | -- | - |
-| **Incidents** | | |
-|- [Automation rules](../../sentinel/automate-incident-handling-with-automation-rules.md) | Public Preview | Public Preview |
-| - [Cross-tenant/Cross-workspace incidents view](../../sentinel/multiple-workspace-view.md) |GA | GA |
-| - [Entity insights](../../sentinel/enable-entity-behavior-analytics.md) | GA | Public Preview |
-|- [SOC incident audit metrics](../../sentinel/manage-soc-with-incident-metrics.md) | GA | GA |
-| - [Incident advanced search](../../sentinel/investigate-cases.md#search-for-incidents) |GA |GA |
-| - [Microsoft 365 Defender incident integration](../../sentinel/microsoft-365-defender-sentinel-integration.md) | GA | GA |
-| - [Microsoft Teams integrations](../../sentinel/collaborate-in-microsoft-teams.md) |Public Preview |Not Available |
-|- [Bring Your Own ML (BYO-ML)](../../sentinel/bring-your-own-ml.md) | Public Preview | Public Preview |
-|- [Search large datasets](../../sentinel/investigate-large-datasets.md) | Public Preview | Not Available |
-|- [Restore historical data](../../sentinel/investigate-large-datasets.md) | Public Preview | Not Available |
-| **Notebooks** | | |
-|- [Notebooks](../../sentinel/notebooks.md) | GA | GA |
-| - [Notebook integration with Azure Synapse](../../sentinel/notebooks-with-synapse.md) | Public Preview | Not Available|
-| **Watchlists** | | |
-|- [Watchlists](../../sentinel/watchlists.md) | GA | GA |
-|- [Large watchlists from Azure Storage](../../sentinel/watchlists.md) | Public Preview | Not Available |
-|- [Watchlist templates](../../sentinel/watchlists.md) | Public Preview | Not Available |
-| **Workspace Manager** | | |
-| - [Workspace manager](../../sentinel/workspace-manager.md) | Public Preview | Public Preview |
-| **Hunting** | | |
-| - [Hunting](../../sentinel/hunting.md) | GA | GA |
-| - [Hunts](../../sentinel/hunts.md) | Public Preview | Not Available |
-| **Content and content management** | | |
-| - [Content hub](../../sentinel/sentinel-solutions.md) and [solutions](../../sentinel/sentinel-solutions-catalog.md) | Public Preview | Public Preview |
-| - [Repositories](../../sentinel/ci-cd.md?tabs=github) | Public Preview | Not Available |
-| **Data collection** | | |
-| - [Advanced SIEM Information Model (ASIM)](../../sentinel/normalization.md) | Public Preview | Not Available |
-| **Threat intelligence support** | | |
-| - [Threat Intelligence - TAXII data connector](../../sentinel/understand-threat-intelligence.md) | GA | GA |
-| - [Threat Intelligence Platform data connector](../../sentinel/understand-threat-intelligence.md) | Public Preview | Not Available |
-| - [Threat Intelligence Research Blade](https://techcommunity.microsoft.com/t5/azure-sentinel/what-s-new-threat-intelligence-menu-item-in-public-preview/ba-p/1646597) | GA | GA |
-| - [Add indicators in bulk to threat intelligence by file](../../sentinel/indicators-bulk-file-import.md) | Public Preview | Not Available |
-| - [URL Detonation](https://techcommunity.microsoft.com/t5/azure-sentinel/using-the-new-built-in-url-detonation-in-azure-sentinel/ba-p/996229) | Public Preview | Not Available |
-| - [Threat Intelligence workbook](/azure/architecture/example-scenario/data/sentinel-threat-intelligence) | GA | GA |
-| - [GeoLocation and WhoIs data enrichment](../../sentinel/work-with-threat-indicators.md) | Public Preview | Not Available |
-| - [Threat intelligence matching analytics](../../sentinel/work-with-threat-indicators.md) | Public Preview |Not Available |
-|**Detection support** | | |
-| - [Fusion](../../sentinel/fusion.md)<br>Advanced multistage attack detections <sup>[1](#footnote1)</sup> | GA | GA |
-| - [Fusion detection for ransomware](../../sentinel/fusion.md#fusion-for-ransomware) | Public Preview | Not Available |
-| - [Fusion for emerging threats](../../sentinel/fusion.md#fusion-for-emerging-threats) | Public Preview |Not Available |
-| - [Anomalous Windows File Share Access Detection](../../sentinel/fusion.md) | Public Preview | Not Available |
-| - [Anomalous RDP Login Detection](../../sentinel/configure-connector-login-detection.md)<br>Built-in ML detection | Public Preview | Not Available |
-| - [Anomalous SSH login detection](../../sentinel/connect-syslog.md#configure-the-syslog-connector-for-anomalous-ssh-login-detection)<br>Built-in ML detection | Public Preview | Not Available |
-| **Domain solution content** | | |
-| - [Apache Log4j Vulnerability Detection](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview |
-| - [Cybersecurity Maturity Model Certification (CMMC)](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview |
-| - [Microsoft Defender for IoT](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview |
-| - [Maturity Model for Event Log Management M2131](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview |
-| - [Microsoft Insider Risk Management (IRM)](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview |
-| - [Microsoft Sentinel Deception](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview |
-| - [Zero Trust (TIC3.0)](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview |
-| **Azure service connectors** | | |
-| - [Azure Activity Logs](../../sentinel/data-connectors/azure-activity.md) | GA | GA |
-| - [Azure Active Directory](../../sentinel/connect-azure-active-directory.md) | GA | GA |
-| - [Azure ADIP](../../sentinel/data-connectors/azure-active-directory-identity-protection.md) | GA | GA |
-| - [Azure DDoS Protection](../../sentinel/data-connectors/azure-ddos-protection.md) | GA | GA |
-| - [Microsoft Purview](../../sentinel/data-connectors/microsoft-purview.md) | Public Preview | Not Available |
-| - [Microsoft Defender for Cloud](../../sentinel/connect-azure-security-center.md) | GA | GA |
-| - [Microsoft Defender for IoT](../../sentinel/data-connectors/microsoft-defender-for-iot.md) | GA | GA |
-| - [Microsoft Insider Risk Management](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
-| - [Azure Firewall](../../sentinel/data-connectors/azure-firewall.md) | GA | GA |
-| - [Azure Information Protection](../../sentinel/data-connectors/azure-information-protection.md) | Public Preview | Not Available |
-| - [Azure Key Vault](../../sentinel/data-connectors/azure-key-vault.md) | Public Preview | Not Available |
-| - [Azure Kubernetes Services (AKS)](../../sentinel/data-connectors/azure-kubernetes-service-aks.md) | Public Preview | Not Available |
-| - [Azure WAF](../../sentinel/data-connectors/azure-web-application-firewall-waf.md) | GA | GA |
-| - [Microsoft Defender for Cloud](../../sentinel/connect-azure-security-center.md) | GA | GA |
-| - [Microsoft Insider Risk Management](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
-| **Windows connectors** | | |
-| - [Windows Firewall](../../sentinel/data-connectors/windows-firewall.md) | GA | GA |
-| - [Windows Security Events](/azure/sentinel/connect-windows-security-events) | GA | GA |
-| **[External connectors](https://azuremarketplace.microsoft.com/marketplace/apps?filters=solution-templates&page=1&search=sentinel)** | | |
-| - Agari Phishing Defense and Brand Protection | Public Preview | Public Preview |
-| - AI Analyst Darktrace | Public Preview | Public Preview |
-| - AI Vectra Detect | Public Preview | Public Preview |
-| - Akamai Security Events | Public Preview | Public Preview |
-| - Alcide kAudit | Public Preview | Not Available |
-| - Alsid for Active Directory | Public Preview | Not Available |
-| - Apache HTTP Server | Public Preview | Not Available |
-| - Arista Networks | Public Preview | Not Available |
-| - Armorblox | Public Preview | Not Available |
-| - Aruba ClearPass | Public Preview | Public Preview |
-| - AWS | GA | GA |
-| - Barracuda CloudGen Firewall | GA | GA |
-| - Barracuda Web App Firewall | GA | GA |
-| - BETTER Mobile Threat Defense MTD | Public Preview | Not Available |
-| - Beyond Security beSECURE | Public Preview | Not Available |
-| - Blackberry CylancePROTECT | Public Preview | Public Preview |
-| - Box | Public Preview | Not Available |
-| - Broadcom Symantec DLP | Public Preview | Public Preview |
-| - Check Point | GA | GA |
-| - Cisco ACI | Public Preview | Not Available |
-| - Cisco ASA | GA | GA |
-| - Cisco Duo Security | Public Preview | Not Available |
-| - Cisco ISE | Public Preview | Not Available |
-| - Cisco Meraki | Public Preview | Public Preview |
-| - Cisco Secure Email Gateway / ESA | Public Preview | Not Available |
-| - Cisco Umbrella | Public Preview | Public Preview |
-| - Cisco UCS | Public Preview | Public Preview |
-| - Cisco Firepower EStreamer | Public Preview | Public Preview |
-| - Cisco Web Security Appliance (WSA) | Public Preview | Not Available |
-| - Citrix Analytics WAF | GA | GA |
-| - Cloudflare | Public Preview | Not Available |
-| - [Common Event Format (CEF)](../../sentinel/connect-common-event-format.md) | GA | GA |
-| - Contrast Security | Public Preview | Not Available |
-| - CrowdStrike | Public Preview | Not Available |
-| - CyberArk Enterprise Password Vault (EPV) Events | Public Preview | Public Preview |
-| - Digital Guardian | Public Preview | Not Available |
-| - ESET Enterprise Inspector | Public Preview | Not Available |
-| - Eset Security Management Center| Public Preview | Not Available |
-| - ExtraHop Reveal(x) | GA | GA |
-| - F5 BIG-IP | GA | GA |
-| - F5 Networks | GA | GA |
-| - FireEye NX (Network Security) | Public Preview | Not Available |
-| - Flare Systems Firework| Public Preview | Not Available |
-| - Forcepoint NGFW | Public Preview | Public Preview |
-| - Forcepoint CASB | Public Preview | Public Preview |
-| - Forcepoint DLP | Public Preview | Not Available |
-| - Forescout| Public Preview | Not Available |
-| - ForgeRock Common Audit for CEF| Public Preview | Public Preview |
-| - Fortinet | GA | GA |
-| - Google Cloud Platform DNS | Public Preview | Not Available |
-| - Google Cloud Platform | Public Preview | Not Available |
-| - Google Workspace (G Suite) | Public Preview | Not Available |
-| - Illusive Attack Management System| Public Preview | Public Preview |
-| - Imperva WAF Gateway | Public Preview | Public Preview |
-| - InfoBlox Cloud| Public Preview | Not Available |
-| - Infoblox NIOS | Public Preview | Public Preview |
-| - Juniper IDP | Public Preview | Not Available |
-| - Juniper SRX | Public Preview | Public Preview |
-| - Kaspersky AntiVirus | Public Preview | Not Available |
-| - Lookout Mobile Threat Defense| Public Preview | Not Available |
-| - McAfee ePolicy | Public Preview | Not Available |
-| - McAfee Network Security Platform | Public Preview | Not Available |
-| - Morphisec UTPP | Public Preview | Public Preview |
-| - Netskope | Public Preview | Public Preview |
-| - NXLog Windows DNS | Public Preview | Not Available |
-| - NXLog LinuxAudit | Public Preview | Not Available |
-| - Okta Single Sign On | Public Preview | Public Preview |
-| - Onapsis Platform | Public Preview | Public Preview |
-| - One Identity Safeguard | GA | GA |
-| - Oracle Cloud Infrastructure| Public Preview | Not Available |
-| - Oracle Database Audit| Public Preview | Not Available |
-| - Orca Security Alerts | Public Preview | Not Available |
-| - Palo Alto Networks | GA | GA |
-| - Perimeter 81 Activity Logs | GA | Not Available |
-| - Ping Identity | Public Preview | Not Available |
-| - Proofpoint On Demand Email Security| Public Preview | Not Available |
-| - Proofpoint TAP | Public Preview | Public Preview |
-| - Pulse Connect Secure | Public Preview | Public Preview |
-| - Qualys Vulnerability Management | Public Preview | Public Preview |
-| - Rapid7 | Public Preview | Not Available |
-| - RSA SecurID | Public Preview | Not Available |
-| - Salesforce Service Cloud | Public Preview | Not Available |
-| - [SAP (Microsoft Sentinel Solution for SAP)](../../sentinel/sap/deployment-overview.md) | GA | GA |
-| - Semperis | Public Preview | Not Available |
-| - Senserva Pro | Public Preview | Not Available |
-| - Slack Audit | Public Preview | Not Available |
-| - SonicWall Firewall | Public Preview | Public Preview |
-| - Sonrai Security | Public Preview | Not Available |
-| - Sophos Cloud Optix | Public Preview | Not Available |
-| - Sophos XG Firewall | Public Preview | Public Preview |
-| - Squadra Technologies secRMM | GA | GA |
-| - Squid Proxy | Public Preview | Not Available |
-| - Symantec Integrated Cyber Defense Exchange | GA | GA |
-| - Symantec ProxySG | Public Preview | Public Preview |
-| - Symantec VIP | Public Preview | Public Preview |
-| - [Syslog](../../sentinel/connect-syslog.md) | GA | GA |
-| - Tenable | Public Preview | Not Available |
-| - Thycotic Secret Server | Public Preview | Public Preview |
-| - Trend Micro Deep Security | GA | GA |
-| - Trend Micro TippingPoint | Public Preview | Public Preview |
-| - Trend Micro XDR | Public Preview | Not Available |
-| - Ubiquiti | Public Preview | Not Available |
-| - vArmour | Public Preview | Not Available |
-| - Vectra | Public Preview | Not Available |
-| - VMware Carbon Black Endpoint Standard | Public Preview | Public Preview |
-| - VMware ESXi | Public Preview | Public Preview |
-| - WireX Network Forensics Platform | Public Preview | Public Preview |
-| - Zeek Network (Corelight) | Public Preview | Not Available |
-| - Zimperium Mobile Threat Defense | Public Preview | Not Available |
-| - Zscaler | GA | GA |
-
-<sup><a name="footnote1"></a>1</sup> SSH and RDP detections are not supported for sovereign clouds because the Databricks ML platform is not available.
+For Microsoft Sentinel feature availability in Azure, Azure Government, and Azure China 21 Vianet, see [Microsoft Sentinel feature support for Azure clouds](../../sentinel/feature-availability.md).
### Microsoft Purview Data Connectors
sentinel Connect Google Cloud Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-google-cloud-platform.md
You can set up the GCP environment in one of two ways:
terraform apply
```
-1. Type your Microsoft tenant ID. Learn how to [find your tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
+1. Type your Microsoft tenant ID. Learn how to [find your tenant ID](/azure/active-directory-b2c/tenant-management-read-tenant-name).
1. When asked if a workload Identity Pool has already been created for Azure, type *yes* or *no*.
1. When asked if you want to create the resources listed, type *yes*.
1. Save the resource parameters for later use.
sentinel Argos Cloud Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/argos-cloud-security.md
Enter the information into the [ARGOS Sentinel](https://app.argos-security.io/ac
New detections will automatically be forwarded.
-[Learn more about the integration](https://www.argos-security.io/resources#integrations)
+[Learn more about the integration](https://argos-security.io/faq/)
sentinel Digital Shadows Searchlight Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/digital-shadows-searchlight-using-azure-function.md
Set the `DigitalShadowsURL` value to: `https://api.searchlight.app/v1`
Set the `HighVariabilityClassifications` value to: `exposed-credential,marked-document`
Set the `ClassificationFilterOperation` value to: `exclude` for the exclude function app or `include` for the include function app
> Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to the [Azure Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
+ - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: ``` https://CustomerId.ods.opinsights.azure.us ```.
4. Once all application settings have been entered, click **Save**.
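Taken together, the settings above might look like the following application-settings sketch. The first three values come from the steps above; the `DigitalShadowsKey` setting name, the vault name, and the secret path are hypothetical placeholders shown only to illustrate the Key Vault reference form:

```json
{
  "DigitalShadowsURL": "https://api.searchlight.app/v1",
  "HighVariabilityClassifications": "exposed-credential,marked-document",
  "ClassificationFilterOperation": "exclude",
  "DigitalShadowsKey": "@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/<secret-name>/<version>)"
}
```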
sentinel Holm Security Asset Data Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/holm-security-asset-data-using-azure-function.md
To integrate with Holm Security Asset Data (using Azure Functions) make sure you
**STEP 1 - Configuration steps for the Holm Security API**
- [Follow these instructions](https://support.holmsecurity.com/hc/en-us/articles/360027651591-How-do-I-set-up-an-API-token-) to create an API authentication token.
+ [Follow these instructions](https://support.holmsecurity.com/knowledge/how-do-i-set-up-an-api-token) to create an API authentication token.
**STEP 2 - Use the below deployment option to deploy the connector and the associated Azure Function**
sentinel Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/feature-availability.md
Title: Cloud feature availability in Microsoft Sentinel
+ Title: Microsoft Sentinel feature support for Azure clouds
description: This article describes feature availability in Microsoft Sentinel across different Azure environments.--++ Previously updated : 02/02/2023+ Last updated : 07/25/2023
-# Cloud feature availability in Microsoft Sentinel
+# Microsoft Sentinel feature support for Azure clouds
-This article describes feature availability in Microsoft Sentinel across different Azure environments.
+This article describes the features available in Microsoft Sentinel across different Azure environments. Features are listed as GA (generally available), public preview, or shown as not available.
## Analytics
-|Feature |Feature stage |Azure commercial |Azure China 21Vianet |
-|||||
-|[Analytics rules health](monitor-analytics-rule-integrity.md) |Public Preview |&#x2705; |&#10060; |
-|[MITRE ATT&CK dashboard](mitre-coverage.md) |Public Preview |&#x2705; |&#10060; |
-|[NRT rules](near-real-time-rules.md) |Public Preview |&#x2705; |&#x2705; |
-|[Recommendations](detection-tuning.md) |Public Preview |&#x2705; |&#10060; |
-|[Scheduled](detect-threats-built-in.md) and [Microsoft rules](create-incidents-from-alerts.md) |GA |&#x2705; |&#x2705; |
+|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
+||||||
+|[Analytics rules health](monitor-analytics-rule-integrity.md) |Public preview |&#x2705; |&#10060; |&#10060; |
+|[MITRE ATT&CK dashboard](mitre-coverage.md) |Public preview |&#x2705; |&#10060; |&#10060; |
+|[NRT rules](near-real-time-rules.md) |Public preview |&#x2705; |&#x2705; |&#x2705; |
+|[Recommendations](detection-tuning.md) |Public preview |&#x2705; |&#x2705; |&#10060; |
+|[Scheduled](detect-threats-built-in.md) and [Microsoft rules](create-incidents-from-alerts.md) |GA |&#x2705; |&#x2705; |&#x2705; |
## Content and content management
-|Feature |Feature stage |Azure commercial |Azure China 21Vianet |
-|||||
-|[Content hub](sentinel-solutions.md) and [solutions](sentinel-solutions-catalog.md) |Public preview |&#x2705; |&#10060; |
-|[Repositories](ci-cd.md?tabs=github) |Public preview |&#x2705; |&#10060; |
-|[Workbooks](monitor-your-data.md) |GA |&#x2705; |&#x2705; |
+|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
+||||||
+|[Content hub](sentinel-solutions.md) and [solutions](sentinel-solutions-catalog.md) |GA |&#x2705; |&#x2705; |&#x2705; |
+|[Repositories](ci-cd.md?tabs=github) |Public preview |&#x2705; |&#10060; |&#10060; |
+|[Workbooks](monitor-your-data.md) |GA |&#x2705; |&#x2705; |&#x2705; |
## Data collection
-|Feature |Feature stage |Azure commercial |Azure China 21Vianet |
-|||||
-|[Amazon Web Services](connect-aws.md?tabs=ct) |GA |&#x2705; |&#10060; |
-|[Amazon Web Services S3 (Preview)](connect-aws.md?tabs=s3) |Public Preview |&#x2705; |&#10060; |
-|[Azure Active Directory](connect-azure-active-directory.md) |GA |&#x2705; |&#x2705; <sup>[1](#logsavailable)</sup> |
-|[Azure Active Directory Identity Protection](connect-services-api-based.md) |GA |&#x2705; |&#10060; |
-|[Azure Activity](data-connectors/azure-activity.md) |GA |&#x2705; |&#x2705; |
-|[Azure DDoS Protection](connect-services-diagnostic-setting-based.md) |GA |&#x2705; |&#10060; |
-|[Azure Firewall](data-connectors/azure-firewall.md) |GA |&#x2705; |&#x2705; |
-|[Azure Information Protection (Preview)](data-connectors/azure-information-protection.md) |Deprecated |&#10060; |&#10060; |
-|[Azure Key Vault](data-connectors/azure-key-vault.md) |Public Preview |&#x2705; |&#x2705; |
-|[Azure Kubernetes Service (AKS)](data-connectors/azure-kubernetes-service-aks.md) |Public Preview |&#x2705; |&#x2705; |
-|[Azure SQL Databases](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-sql-solution-query-deep-dive/ba-p/2597961) |GA |&#x2705; |&#x2705; |
-|[Azure Web Application Firewall (WAF)](data-connectors/azure-web-application-firewall-waf.md) |GA |&#x2705; |&#x2705; |
-|[Cisco ASA](data-connectors/cisco-asa.md) |GA |&#x2705; |&#x2705; |
-|[Codeless Connectors Platform](create-codeless-connector.md?tabs=deploy-via-arm-template%2Cconnect-via-the-azure-portal) |Public Preview |&#x2705; |&#10060; |
-|[Common Event Format (CEF)](connect-common-event-format.md) |GA |&#x2705; |&#x2705; |
-|[Common Event Format (CEF) via AMA (Preview)](connect-cef-ama.md) |Public Preview |&#x2705; |&#x2705; |
-|[Data Connectors health](monitor-data-connector-health.md#use-the-sentinelhealth-data-table-public-preview) |Public Preview |&#x2705; |&#10060; |
-|[DNS](data-connectors/dns.md) |Public Preview |&#x2705; |&#x2705; |
-|[GCP Pub/Sub Audit Logs](connect-google-cloud-platform.md) |Public Preview |&#x2705; |&#10060; |
-|[Microsoft 365 Defender](connect-microsoft-365-defender.md?tabs=MDE) |GA |&#x2705; |&#10060; |
-|[Microsoft Purview Insider Risk Management (Preview)](sentinel-solutions-catalog.md#domain-solutions) |Public Preview |&#x2705; |&#10060; |
-|[Microsoft Defender for Cloud](connect-defender-for-cloud.md) |GA |&#x2705; |&#x2705; |
-|[Microsoft Defender for IoT](connect-services-api-based.md) |GA |&#x2705; |&#10060; |
-|[Microsoft Power BI (Preview)](data-connectors/microsoft-powerbi.md) |Public Preview |&#x2705; |&#10060; |
-|[Microsoft Project (Preview)](data-connectors/microsoft-project.md) |Public Preview |&#x2705; |&#10060; |
-|[Microsoft Purview (Preview)](connect-services-diagnostic-setting-based.md) |Public Preview |&#x2705; |&#10060; |
-|[Microsoft Purview Information Protection](connect-microsoft-purview.md) |Public Preview |&#x2705; |&#10060; |
-|[Office 365](connect-services-api-based.md) |GA |&#x2705; |&#x2705; |
-|[Security Events via Legacy Agent](connect-services-windows-based.md#log-analytics-agent-legacy) |GA |&#x2705; |&#x2705; |
-|[Syslog](connect-syslog.md) |GA |&#x2705; |&#x2705; |
-|[Windows DNS Events via AMA (Preview)](connect-dns-ama.md) |Public Preview |&#x2705; |&#10060; |
-|[Windows Firewall](data-connectors/windows-firewall.md) |GA |&#x2705; |&#x2705; |
-|[Windows Forwarded Events](connect-services-windows-based.md) |GA |&#x2705; |&#x2705; |
-|[Windows Security Events via AMA](connect-services-windows-based.md) |GA |&#x2705; |&#x2705; |
+|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
+||||||
+|[Amazon Web Services](connect-aws.md?tabs=ct) |GA |&#x2705; |&#10060; |&#10060; |
+|[Amazon Web Services S3 (Preview)](connect-aws.md?tabs=s3) |Public preview |&#x2705; |&#10060; |&#10060; |
+|[Azure Active Directory](connect-azure-active-directory.md) |GA |&#x2705; |&#x2705;|&#x2705; <sup>[1](#logsavailable)</sup> |
+|[Azure Active Directory Identity Protection](connect-services-api-based.md) |GA |&#x2705;| &#x2705; |&#10060; |
+|[Azure Activity](data-connectors/azure-activity.md) |GA |&#x2705;| &#x2705;|&#x2705; |
+|[Azure DDoS Protection](connect-services-diagnostic-setting-based.md) |GA |&#x2705;| &#x2705;|&#10060; |
+|[Azure Firewall](data-connectors/azure-firewall.md) |GA |&#x2705;| &#x2705;|&#x2705; |
+|[Azure Information Protection (Preview)](data-connectors/azure-information-protection.md) |Deprecated |&#10060; |&#10060; |&#10060; |
+|[Azure Key Vault](data-connectors/azure-key-vault.md) |Public preview |&#x2705; |&#x2705;|&#x2705; |
+|[Azure Kubernetes Service (AKS)](data-connectors/azure-kubernetes-service-aks.md) |Public preview |&#x2705;| &#x2705;|&#x2705; |
+|[Azure SQL Databases](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-sql-solution-query-deep-dive/ba-p/2597961) |GA |&#x2705; |&#x2705;|&#x2705; |
+|[Azure Web Application Firewall (WAF)](data-connectors/azure-web-application-firewall-waf.md) |GA |&#x2705; |&#x2705;|&#x2705; |
+|[Cisco ASA](data-connectors/cisco-asa.md) |GA |&#x2705; |&#x2705;|&#x2705; |
+|[Codeless Connectors Platform](create-codeless-connector.md?tabs=deploy-via-arm-template%2Cconnect-via-the-azure-portal) |Public preview |&#x2705; |&#10060;|&#10060; |
+|[Common Event Format (CEF)](connect-common-event-format.md) |GA |&#x2705; |&#x2705;|&#x2705; |
+|[Common Event Format (CEF) via AMA (Preview)](connect-cef-ama.md) |Public preview |&#x2705;|&#10060; |&#x2705; |
+|[DNS](data-connectors/dns.md) |Public preview |&#x2705;| &#10060; |&#x2705; |
+|[GCP Pub/Sub Audit Logs](connect-google-cloud-platform.md) |Public preview |&#x2705; |&#10060; |&#10060; |
+|[Microsoft 365 Defender](connect-microsoft-365-defender.md?tabs=MDE) |GA |&#x2705;| &#x2705;|&#10060; |
+|[Microsoft Purview Insider Risk Management (Preview)](sentinel-solutions-catalog.md#domain-solutions) |Public preview |&#x2705; |&#x2705;|&#10060; |
+|[Microsoft Defender for Cloud](connect-defender-for-cloud.md) |GA |&#x2705; |&#x2705; |&#x2705;|
+|[Microsoft Defender for IoT](connect-services-api-based.md) |GA |&#x2705;|&#x2705;|&#10060; |
+|[Microsoft Power BI (Preview)](data-connectors/microsoft-powerbi.md) |Public preview |&#x2705; |&#x2705;|&#10060; |
+|[Microsoft Project (Preview)](data-connectors/microsoft-project.md) |Public preview |&#x2705; |&#x2705;|&#10060; |
+|[Microsoft Purview (Preview)](connect-services-diagnostic-setting-based.md) |Public preview |&#x2705;|&#10060; |&#10060; |
+|[Microsoft Purview Information Protection](connect-microsoft-purview.md) |Public preview |&#x2705;| &#10060;|&#10060; |
+|[Office 365](connect-services-api-based.md) |GA |&#x2705;|&#x2705; |&#x2705; |
+|[Security Events via Legacy Agent](connect-services-windows-based.md#log-analytics-agent-legacy) |GA |&#x2705; |&#x2705;|&#x2705; |
+|[Syslog](connect-syslog.md) |GA |&#x2705;| &#x2705;|&#x2705; |
+|[Windows DNS Events via AMA (Preview)](connect-dns-ama.md) |Public preview |&#x2705; |&#10060;|&#10060; |
+|[Windows Firewall](data-connectors/windows-firewall.md) |GA |&#x2705; |&#x2705;|&#x2705; |
+|[Windows Forwarded Events](connect-services-windows-based.md) |GA |&#x2705;|&#x2705; |&#x2705; |
+|[Windows Security Events via AMA](connect-services-windows-based.md) |GA |&#x2705; |&#x2705;|&#x2705; |
<sup><a name="logsavailable"></a>1</sup> Supports only sign-in logs and audit logs.

## Hunting
-|Feature |Feature stage |Azure commercial |Azure China 21Vianet |
-|||||
-|[Hunting blade](hunting.md) |GA |&#x2705; |&#x2705; |
-|[Restore historical data](restore.md) |GA |&#x2705; |&#x2705; |
-|[Search large datasets](search-jobs.md) |GA |&#x2705; |&#x2705; |
+|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
+||||||
+|[Bookmarks](bookmarks.md) |GA |&#x2705; |&#x2705; |&#x2705; |
+|[Hunts](hunts.md) |Public preview|&#x2705; |&#10060; |&#10060; |
+|[Livestream](livestream.md) |GA |&#x2705; |&#x2705; |&#x2705; |
+|[Queries](hunts.md) |GA |&#x2705; |&#x2705; |&#x2705; |
+|[Restore historical data](restore.md) |GA |&#x2705; |&#x2705; |&#x2705; |
+|[Search large datasets](search-jobs.md) |GA |&#x2705; |&#x2705; |&#x2705; |
## Incidents
-|Feature |Feature stage |Azure commercial |Azure China 21Vianet |
-|||||
-|[Add entities to threat intelligence](add-entity-to-threat-intelligence.md?tabs=incidents) |Public Preview |&#x2705; |&#10060; |
-|[Advanced and/or conditions](add-advanced-conditions-to-automation-rules.md) |Public Preview |&#x2705; |&#x2705; |
-|[Automation rules](automate-incident-handling-with-automation-rules.md) |Public Preview |&#x2705; |&#x2705; |
-|[Automation rules health](monitor-automation-health.md) |Public Preview |&#x2705; |&#10060; |
-|[Create incidents manually](create-incident-manually.md) |Public Preview |&#x2705; |&#x2705; |
-|[Cross-tenant/Cross-workspace incidents view](multiple-workspace-view.md) |GA |&#x2705; |&#x2705; |
-|[Incident advanced search](investigate-cases.md#search-for-incidents) |GA |&#x2705; |&#x2705; |
-|[Incident tasks](incident-tasks.md) |Public Preview |&#x2705; |&#x2705; |
-|[Microsoft 365 Defender incident integration](microsoft-365-defender-sentinel-integration.md#working-with-microsoft-365-defender-incidents-in-microsoft-sentinel-and-bi-directional-sync) |Public Preview |&#x2705; |&#10060; |
-|[Microsoft Teams integrations](collaborate-in-microsoft-teams.md) |Public Preview |&#x2705; |&#10060; |
-|[Playbook template gallery](use-playbook-templates.md) |Public Preview |&#x2705; |&#10060; |
-|[Run playbooks on entities](respond-threats-during-investigation.md) |Public Preview |&#x2705; |&#10060; |
-|[Run playbooks on incidents](automate-responses-with-playbooks.md) |Public Preview |&#x2705; |&#x2705; |
-|[SOC incident audit metrics](manage-soc-with-incident-metrics.md) |GA |&#x2705; |&#x2705; |
+|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
+||||||
+|[Add entities to threat intelligence](add-entity-to-threat-intelligence.md?tabs=incidents) |Public preview |&#x2705; |&#x2705; |&#10060; |
+|[Advanced and/or conditions](add-advanced-conditions-to-automation-rules.md) |GA |&#x2705; |&#x2705;| &#x2705; |
+|[Automation rules](automate-incident-handling-with-automation-rules.md) |GA |&#x2705; |&#x2705;| &#x2705; |
+|[Automation rules health](monitor-automation-health.md) |Public preview |&#x2705; |&#x2705;| &#10060; |
+|[Create incidents manually](create-incident-manually.md) |GA |&#x2705; |&#x2705;| &#x2705; |
+|[Cross-tenant/Cross-workspace incidents view](multiple-workspace-view.md) |GA |&#x2705; |&#x2705; |&#x2705; |
+|[Incident advanced search](investigate-cases.md#search-for-incidents) |GA |&#x2705; |&#x2705;| &#x2705; |
+|[Incident tasks](incident-tasks.md) |Public preview |&#x2705; |&#x2705;| &#x2705; |
+|[Microsoft 365 Defender incident integration](microsoft-365-defender-sentinel-integration.md#working-with-microsoft-365-defender-incidents-in-microsoft-sentinel-and-bi-directional-sync) |GA |&#x2705; |&#x2705;| &#10060; |
+|[Microsoft Teams integrations](collaborate-in-microsoft-teams.md) |Public preview |&#x2705; |&#x2705;| &#10060; |
+|[Playbook template gallery](use-playbook-templates.md) |Public preview |&#x2705; |&#x2705;| &#10060; |
+|[Run playbooks on entities](respond-threats-during-investigation.md) |Public preview |&#x2705; |&#x2705; |&#10060; |
+|[Run playbooks on incidents](automate-responses-with-playbooks.md) |Public preview |&#x2705; |&#x2705;| &#x2705; |
+|[SOC incident audit metrics](manage-soc-with-incident-metrics.md) |GA |&#x2705; |&#x2705;| &#x2705; |
## Machine Learning
-|Feature |Feature stage |Azure commercial |Azure China 21Vianet |
-|||||
-|[Anomalous RDP login detection - built-in ML detection](configure-connector-login-detection.md) |Public Preview |&#x2705; |&#x2705; |
-|[Anomalous SSH login detection - built-in ML detection](connect-syslog.md#configure-the-syslog-connector-for-anomalous-ssh-login-detection) |Public Preview |&#x2705; |&#x2705; |
-|[Bring Your Own ML (BYO-ML)](bring-your-own-ml.md) |Public Preview |&#x2705; |&#10060; |
-|[Fusion](fusion.md) - advanced multistage attack detections <sup>[1](#partialga)</sup> |GA |&#x2705; |&#x2705; |
-|[Fusion detection for ransomware](fusion.md#fusion-for-ransomware) |Public Preview |&#x2705; |&#x2705; |
-|[Fusion for emerging threats](fusion.md#fusion-for-emerging-threats) |Public Preview |&#x2705; |&#x2705; |
+|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
+||||||
+|[Anomalous RDP login detection - built-in ML detection](configure-connector-login-detection.md) |Public preview |&#x2705; |&#x2705; |&#10060; |
+|[Anomalous SSH login detection - built-in ML detection](connect-syslog.md#configure-the-syslog-connector-for-anomalous-ssh-login-detection) |Public preview |&#x2705; |&#x2705; |&#10060; |
+|[Fusion](fusion.md) - advanced multistage attack detections <sup>[1](#partialga)</sup> |GA |&#x2705; |&#x2705; |&#x2705; |
<sup><a name="partialga"></a>1</sup> Partially GA: The ability to disable specific findings from vulnerability scans is in public preview.

## Normalization
-|Feature |Feature stage |Azure commercial |Azure China 21Vianet |
-|||||
-|[Advanced Security Information Model (ASIM)](normalization.md) |Public Preview |&#x2705; |&#x2705; |
+|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
+||||||
+|[Advanced Security Information Model (ASIM)](normalization.md) |Public preview |&#x2705; |&#x2705; |&#x2705; |
## Notebooks
-|Feature |Feature stage |Azure commercial |Azure China 21Vianet |
-|||||
-|[Notebooks](notebooks.md) |GA |&#x2705; |&#x2705; |
-|[Notebook integration with Azure Synapse](notebooks-with-synapse.md) |Public Preview |&#x2705; |&#x2705; |
+|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
+||||||
+|[Notebooks](notebooks.md) |GA |&#x2705;|&#x2705; |&#x2705; |
+|[Notebook integration with Azure Synapse](notebooks-with-synapse.md) |Public preview |&#x2705;|&#x2705; |&#x2705; |
## SAP
-|Feature |Feature stage |Azure commercial |Azure China 21Vianet |
-|||||
-|[Threat protection for SAP](sap/deployment-overview.md)<sup>[1](#sap)</sup> |GA |&#x2705; |&#x2705; |
-
-<sup><a name="sap"></a>1</sup> Deploy SAP security content [via GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP).
+|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
+||||||
+|[Threat protection for SAP](sap/deployment-overview.md) |GA |&#x2705;|&#x2705; |&#x2705; |
## Threat intelligence support
-|Feature |Feature stage |Azure commercial |Azure China 21Vianet |
-|||||
-|[GeoLocation and WhoIs data enrichment](work-with-threat-indicators.md) |Public Preview |&#x2705; |&#10060; |
-|[Import TI from flat file](indicators-bulk-file-import.md) |Public Preview |&#x2705; |&#x2705; |
-|[Threat intelligence matching analytics](use-matching-analytics-to-detect-threats.md) |Public Preview |&#x2705; |&#10060; |
-|[Threat Intelligence Platform data connector](understand-threat-intelligence.md) |Public Preview |&#x2705; |&#x2705; |
-|[Threat Intelligence Research blade](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-threat-intelligence-menu-item-in-public-preview/ba-p/1646597) |GA |&#x2705; |&#x2705; |
-|[Threat Intelligence - TAXII data connector](understand-threat-intelligence.md) |GA |&#x2705; |&#x2705; |
-|[Threat Intelligence workbook](/azure/architecture/example-scenario/data/sentinel-threat-intelligence) |GA |&#x2705; |&#x2705; |
-|[URL detonation](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/using-the-new-built-in-url-detonation-in-azure-sentinel/ba-p/996229) |Public Preview |&#x2705; |&#10060; |
+|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
+||||||
+|[GeoLocation and WhoIs data enrichment](work-with-threat-indicators.md) |Public preview |&#x2705; |&#10060; |&#10060; |
+|[Import TI from flat file](indicators-bulk-file-import.md) |Public preview |&#x2705; |&#x2705; |&#x2705; |
+|[Threat Intelligence Platform data connector](understand-threat-intelligence.md) |Public preview |&#x2705; |&#10060; |&#10060; |
+|[Threat Intelligence Research page](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-threat-intelligence-menu-item-in-public-preview/ba-p/1646597) |GA |&#x2705; |&#x2705; |&#x2705; |
+|[Threat Intelligence - TAXII data connector](understand-threat-intelligence.md) |GA |&#x2705; |&#x2705; |&#x2705; |
+|[Microsoft Defender for Threat Intelligence connector](connect-mdti-data-connector.md) |Public preview |&#x2705; |&#10060; |&#10060; |
+|[Microsoft Defender Threat intelligence matching analytics](use-matching-analytics-to-detect-threats.md) |Public preview |&#x2705; |&#10060; |&#10060; |
+|[Threat Intelligence workbook](/azure/architecture/example-scenario/data/sentinel-threat-intelligence) |GA |&#x2705; |&#x2705; |&#x2705; |
+|[URL detonation](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/using-the-new-built-in-url-detonation-in-azure-sentinel/ba-p/996229) |Public preview |&#x2705;|&#10060; |&#10060; |
+|[Threat Intelligence Upload Indicators API](connect-threat-intelligence-upload-api.md) |Public preview |&#x2705; |&#10060; |&#10060; |
## UEBA
-|Feature |Feature stage |Azure commercial |Azure China 21Vianet |
-|||||
-|[Active Directory sync via MDI](enable-entity-behavior-analytics.md#how-to-enable-user-and-entity-behavior-analytics) |Public preview |&#x2705; |&#10060; |
-|[Azure resource entity pages](entity-pages.md) |Public Preview |&#x2705; |&#10060; |
-|[Entity insights](identify-threats-with-entity-behavior-analytics.md) |GA |&#x2705; |&#x2705; |
-|[Entity pages](entity-pages.md) |GA |&#x2705; |&#x2705; |
-|[Identity info table data ingestion](investigate-with-ueba.md) |GA |&#x2705; |&#x2705; |
-|[IoT device entity page](/azure/defender-for-iot/organizations/iot-advanced-threat-monitoring#investigate-further-with-iot-device-entities) |Public Preview |&#x2705; |&#10060; |
-|[Peer/Blast radius enrichments](identify-threats-with-entity-behavior-analytics.md#what-is-user-and-entity-behavior-analytics-ueba) |Public preview |&#x2705; |&#10060; |
-|[SOC-ML anomalies](soc-ml-anomalies.md#what-are-customizable-anomalies) |GA |&#x2705; |&#10060; |
-|[UEBA anomalies](soc-ml-anomalies.md#ueba-anomalies) |GA |&#x2705; |&#10060; |
-|[UEBA enrichments\insights](investigate-with-ueba.md) |GA |&#x2705; |&#x2705; |
+|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
+||||||
+|[Active Directory sync via MDI](enable-entity-behavior-analytics.md#how-to-enable-user-and-entity-behavior-analytics) |Public preview |&#x2705; |&#x2705; |&#10060; |
+|[Azure resource entity pages](entity-pages.md) |Public preview |&#x2705; |&#x2705; |&#10060; |
+|[Entity insights](identify-threats-with-entity-behavior-analytics.md) |GA |&#x2705; |&#x2705; |&#x2705; |
+|[Entity pages](entity-pages.md) |GA |&#x2705; |&#x2705; |&#x2705; |
+|[Identity info table data ingestion](investigate-with-ueba.md) |GA |&#x2705;|&#x2705; |&#x2705; |
+|[IoT device entity page](/azure/defender-for-iot/organizations/iot-advanced-threat-monitoring#investigate-further-with-iot-device-entities) |Public preview |&#x2705;|&#x2705; |&#10060; |
+|[Peer/Blast radius enrichments](identify-threats-with-entity-behavior-analytics.md#what-is-user-and-entity-behavior-analytics-ueba) |Public preview |&#x2705;|&#10060; |&#10060; |
+|[SOC-ML anomalies](soc-ml-anomalies.md#what-are-customizable-anomalies) |GA |&#x2705; |&#x2705; |&#10060; |
+|[UEBA anomalies](soc-ml-anomalies.md#ueba-anomalies) |GA |&#x2705; |&#x2705; |&#10060; |
+|[UEBA enrichments\insights](investigate-with-ueba.md) |GA |&#x2705; |&#x2705; |&#x2705; |
## Watchlists
-|Feature |Feature stage |Azure commercial |Azure China 21Vianet |
-|||||
-|[Large watchlists from Azure Storage](watchlists.md) |Public Preview |&#x2705; |&#10060; |
-|[Watchlists](watchlists.md) |GA |&#x2705; |&#x2705; |
-|[Watchlist templates](watchlist-schemas.md) |Public Preview |&#x2705; |&#10060; |
+|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
+||||||
+|[Large watchlists from Azure Storage](watchlists.md) |Public preview |&#x2705; |&#10060; |&#10060; |
+|[Watchlists](watchlists.md) |GA |&#x2705; |&#x2705; |&#x2705; |
+|[Watchlist templates](watchlist-schemas.md) |Public preview |&#x2705;|&#10060; |&#10060; |
## Next steps
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Previously updated : 07/18/2023 Last updated : 07/25/2023 # Azure Policy built-in definitions for Azure Service Fabric
service-fabric Service Fabric Cluster Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-capacity.md
Because each node type is a distinct scale set, it can be scaled up or down inde
Each cluster requires one **primary node type**, which runs critical system services that provide Service Fabric platform capabilities. Although it's possible to also use primary node types to run your applications, it's recommended to dedicate them solely to running system services.
-**Non-primary node types** can be used to define application roles (such as *front-end* and *back-end* services) and to physically isolate services within a cluster. Service Fabric clusters can have zero or more non-primary node types.
+**Nonprimary node types** can be used to define application roles (such as *front-end* and *back-end* services) and to physically isolate services within a cluster. Service Fabric clusters can have zero or more nonprimary node types.
The primary node type is configured using the `isPrimary` attribute under the node type definition in the Azure Resource Manager deployment template. See the [NodeTypeDescription object](/azure/templates/microsoft.servicefabric/clusters#nodetypedescription-object) for the full list of node type properties. For example usage, open any *AzureDeploy.json* file in [Service Fabric cluster samples](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/) and *Find on Page* search for the `nodeTypes` object.

### Node type planning considerations
-The number of initial nodes types depends upon the purpose of you cluster and the applications and services running on it. Consider the following questions:
+The number of initial node types depends on the purpose of your cluster and the applications and services running on it. Consider the following questions:
* ***Does your application have multiple services, and do any of them need to be public or internet facing?***
- Typical applications contain a front-end gateway service that receives input from a client, and one or more back-end services that communicate with the front-end services, with separate networking between the front-end and back-end services. These cases typically require three node types: one primary node type, and two non-primary node types (one each for the front and back-end service).
+ Typical applications contain a front-end gateway service that receives input from a client, and one or more back-end services that communicate with the front-end services, with separate networking between the front-end and back-end services. These cases typically require three node types: one primary node type, and two nonprimary node types (one each for the front-end and back-end services).
* ***Do the services that make up your application have different infrastructure needs such as greater RAM or higher CPU cycles?***
- Often, front-end service can run on smaller VMs (VM sizes like D2) that have ports open to the internet. Computationally intensive back-end services might need to run on larger VMs (with VM sizes like D4, D6, D15) that are not internet-facing. Defining different node types for these services allow you to make more efficient and secure use of underlying Service Fabric VMs, and enables them to scale them independently. For more on estimating the amount of resources you'll need, see [Capacity planning for Service Fabric applications](service-fabric-capacity-planning.md)
+ Often, front-end services can run on smaller VMs (VM sizes like D2) that have ports open to the internet. Computationally intensive back-end services might need to run on larger VMs (with VM sizes like D4, D6, D15) that aren't internet-facing. Defining different node types for these services allows you to make more efficient and secure use of the underlying Service Fabric VMs, and enables you to scale them independently. For more on estimating the amount of resources you'll need, see [Capacity planning for Service Fabric applications](service-fabric-capacity-planning.md).
* ***Will any of your application services need to scale out beyond 100 nodes?***
The number of initial nodes types depends upon the purpose of you cluster and th
Service Fabric supports clusters that span across [Availability Zones](../availability-zones/az-overview.md) by deploying node types that are pinned to specific zones, ensuring high-availability of your applications. Availability Zones require additional node type planning and minimum requirements. For details, see [Topology for spanning a primary node type across Availability Zones](service-fabric-cross-availability-zones.md#topology-for-spanning-a-primary-node-type-across-availability-zones).
-When determining the number and properties of node types for the initial creation of your cluster, keep in mind that you can always add, modify, or remove (non-primary) node types once your cluster is deployed. [Primary node types can also be scaled up or down](service-fabric-scale-up-primary-node-type.md) in running clusters, though to do so you will need to create a new node type, move the workload over, and then remove the original primary node type.
+When determining the number and properties of node types for the initial creation of your cluster, keep in mind that you can always add, modify, or remove (nonprimary) node types once your cluster is deployed. [Primary node types can also be scaled up or down](service-fabric-scale-up-primary-node-type.md) in running clusters, though to do so you'll need to create a new node type, move the workload over, and then remove the original primary node type.
A further consideration for your node type properties is durability level, which determines privileges a node type's VMs have within Azure infrastructure. Use the size of VMs you choose for your cluster and the instance count you assign for individual node types to help determine the appropriate durability tier for each of your node types, as described next.
The table below lists Service Fabric durability tiers, their requirements, and a
| Durability tier | Required minimum number of VMs | Supported VM Sizes | Updates you make to your virtual machine scale set | Updates and maintenance initiated by Azure |
| - | - | - | -- | - |
-| Gold | 5 | Full-node sizes dedicated to a single customer (for example, L32s, GS5, G5, DS15_v2, D15_v2) | Can be delayed until approved by the Service Fabric cluster | Can be paused for 2 hours per upgrade domain to allow additional time for replicas to recover from earlier failures |
-| Silver | 5 | VMs of single core or above with at least 50 GB of local SSD | Can be delayed until approved by the Service Fabric cluster | Cannot be delayed for any significant period of time |
-| Bronze | 1 | VMs with at least 50 GB of local SSD | Will not be delayed by the Service Fabric cluster | Cannot be delayed for any significant period of time |
+| Gold | 5 | Full-node sizes dedicated to a single customer - [available VM sizes](../virtual-machines/isolation.md) | Can be delayed until approved by the Service Fabric cluster | Can be paused for 2 hours per upgrade domain to allow additional time for replicas to recover from earlier failures |
+| Silver | 5 | VMs of single core or above with at least 50 GB of local SSD | Can be delayed until approved by the Service Fabric cluster | Can't be delayed for any significant period of time |
+| Bronze | 1 | VMs with at least 50 GB of local SSD | Will not be delayed by the Service Fabric cluster | Can't be delayed for any significant period of time |
> [!NOTE]
-> The above mentioned minimum number of VMs is a necessary requirement for each durability level. We have validations in-place which will prevent creation or modification of existing virtual machine scalesets which do not meet these requirements.
+> The above-mentioned minimum number of VMs is a necessary requirement for each durability level. We have validations in place that prevent creation or modification of existing virtual machine scale sets that don't meet these requirements.
> [!WARNING]
> With Bronze durability, automatic OS image upgrade isn't available. While [Patch Orchestration Application](service-fabric-patch-orchestration-application.md) (intended only for non-Azure hosted clusters) is *not recommended* for Silver or greater durability levels, it is your only option to automate Windows updates with respect to Service Fabric upgrade domains.
Node types running with Bronze durability obtain no privileges. This means that infrastructure jobs that impact your stateful workloads won't be stopped or delayed.
### Silver and Gold
-Use Silver or Gold durability for all node types that host stateful services you expect to scale-in frequently, and where you wish deployment operations be delayed and capacity to be reduced in favor of simplifying the process. Scale-out scenarios should not affect your choice of the durability tier.
+Use Silver or Gold durability for all node types that host stateful services you expect to scale in frequently, and where you want deployment operations to be delayed and capacity to be reduced in favor of simplifying the process. Scale-out scenarios shouldn't affect your choice of the durability tier.
#### Advantages
* Deployments to virtual machine scale sets and other related Azure resources can time out, be delayed, or be blocked entirely by problems in your cluster or at the infrastructure level.
* Increases the number of [replica lifecycle events](service-fabric-reliable-services-lifecycle.md) (for example, primary swaps) due to automated node deactivations during Azure infrastructure operations.
-* Takes nodes out of service for periods of time while Azure platform software updates or hardware maintenance activities are occurring. You may see nodes with status Disabling/Disabled during these activities. This reduces the capacity of your cluster temporarily, but should not impact the availability of your cluster or applications.
+* Takes nodes out of service for periods of time while Azure platform software updates or hardware maintenance activities are occurring. You may see nodes with status Disabling/Disabled during these activities. This reduces the capacity of your cluster temporarily, but shouldn't impact the availability of your cluster or applications.
#### Best practices for Silver and Gold durability node types
Follow these recommendations for managing node types with Silver or Gold durability:
* Keep your cluster and applications healthy at all times, and make sure that applications respond to all [Service replica lifecycle events](service-fabric-reliable-services-lifecycle.md) (like replica in build is stuck) in a timely fashion.
* Adopt safer ways to make a VM size change (scale up/down). Changing the VM size of a virtual machine scale set requires careful planning and caution. For details, see [Scale up a Service Fabric node type](service-fabric-scale-up-primary-node-type.md).
-* Maintain a minimum count of five nodes for any virtual machine scale set that has durability level of Gold or Silver enabled. Your cluster will enter error state if you scale in below this threshold, and you'll need to manually clean up state (`Remove-ServiceFabricNodeState`) for the removed nodes.
+* Maintain a minimum count of five nodes for any virtual machine scale set that has durability level of Gold or Silver enabled. Your cluster will enter error state if you scale in below this threshold, and you'll need to manually clean up the state (`Remove-ServiceFabricNodeState`) for the removed nodes.
* Each virtual machine scale set with durability level Silver or Gold must map to its own node type in the Service Fabric cluster. Mapping multiple virtual machine scale sets to a single node type will prevent coordination between the Service Fabric cluster and the Azure infrastructure from working properly.
-* Do not delete random VM instances, always use virtual machine scale set scale in feature. The deletion of random VM instances has a potential of creating imbalances in the VM instance spread across [upgrade domains](service-fabric-cluster-resource-manager-cluster-description.md#upgrade-domains) and [fault domains](service-fabric-cluster-resource-manager-cluster-description.md#fault-domains). This imbalance could adversely affect the systems ability to properly load balance among the service instances/Service replicas.
-* If using Autoscale, set the rules such that scale in (removing of VM instances) operations are done only one node at a time. Scaling in more than one instance at a time is not safe.
+* Don't delete random VM instances; always use the virtual machine scale set scale-in feature. The deletion of random VM instances can create imbalances in the VM instance spread across [upgrade domains](service-fabric-cluster-resource-manager-cluster-description.md#upgrade-domains) and [fault domains](service-fabric-cluster-resource-manager-cluster-description.md#fault-domains). This imbalance could adversely affect the system's ability to properly load balance among the service instances/Service replicas.
+* If using Autoscale, set the rules such that scale-in (removal of VM instances) operations are done only one node at a time. Scaling in more than one instance at a time isn't safe.
* If deleting or deallocating VMs on the primary node type, never reduce the count of allocated VMs below what the reliability tier requires. These operations will be blocked indefinitely in a scale set with a durability level of Silver or Gold.

### Changing durability levels
Within certain constraints, node type durability level can be adjusted:

* Node types with durability levels of Silver or Gold can't be downgraded to Bronze.
-* Downgrading node types with durability level of Gold to Silver is not supported.
+* Downgrading node types with durability level of Gold to Silver isn't supported.
* Upgrading from Bronze to Silver or Gold can take a few hours.
* When changing durability level, be sure to update it in both the Service Fabric extension configuration in your virtual machine scale set resource and in the node type definition in your Service Fabric cluster resource. These values must match.
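As a sketch of what "update it in both places" means in a Resource Manager template, the durability level appears once in the Service Fabric extension of the scale set and once in the matching node type of the cluster resource. The node type name `nt1vm` and other values below are illustrative placeholders, not required names:

```json
// Service Fabric extension settings in the Microsoft.Compute/virtualMachineScaleSets resource
"settings": {
  "clusterEndpoint": "[reference(parameters('clusterName')).clusterEndpoint]",
  "nodeTypeRef": "nt1vm",
  "durabilityLevel": "Silver"
}

// Matching node type in the Microsoft.ServiceFabric/clusters resource
"nodeTypes": [
  {
    "name": "nt1vm",
    "durabilityLevel": "Silver",
    "isPrimary": true
  }
]
```

Treat the two values as a single setting changed in one deployment; if they drift apart, coordination between the cluster and the Azure infrastructure may not work properly.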
The reliability tier can take the following values:
* **Silver** - System services run with target replica set count of five
* **Bronze** - System services run with target replica set count of three
-Here is the recommendation on choosing the reliability tier. The number of seed nodes is also set to the minimum number of nodes for a reliability tier.
+Here's the recommendation on choosing the reliability tier. The number of seed nodes is also set to the minimum number of nodes for a reliability tier.
| **Number of nodes** | **Reliability Tier** |
| --- | --- |
-| 1 | *Do not specify the `reliabilityLevel` parameter: the system calculates it.* |
+| 1 | *Don't specify the `reliabilityLevel` parameter: the system calculates it.* |
| 3 | Bronze |
| 5 or 6 | Silver |
| 7 or 8 | Gold |
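As a sketch of where this fits, the tier is expressed through the `reliabilityLevel` property of the cluster resource in a Resource Manager template. The resource name and API version below are placeholders:

```json
{
  "type": "Microsoft.ServiceFabric/clusters",
  "apiVersion": "2021-06-01",
  "name": "mysfcluster",
  "properties": {
    "reliabilityLevel": "Silver"
  }
}
```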
The capacity needs of your cluster will be determined by your specific workload and reliability requirements.
**For production workloads, the recommended VM size (SKU) is [Standard D2_V2](../virtual-machines/dv2-dsv2-series.md) (or equivalent) with a minimum of 50 GB of local SSD, 2 cores, and 4 GiB of memory.** A minimum of 50 GB local SSD is recommended; however, some workloads (such as those running Windows containers) will require larger disks.
-By default, local SSD is configured to 64 GB. This can be configured in the MaxDiskQuotaInMB setting of the Diagnostics section of cluster settings.
+By default, local SSD is configured to 64 GB. The size can be configured in the MaxDiskQuotaInMB setting of the Diagnostics section of cluster settings.
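As an illustration, overriding this value goes through the `fabricSettings` array of the cluster resource; the 5,120 MB value shown here is just an example, not a recommendation:

```json
"fabricSettings": [
  {
    "name": "Diagnostics",
    "parameters": [
      {
        "name": "MaxDiskQuotaInMB",
        "value": "5120"
      }
    ]
  }
]
```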
For instructions on how to adjust the cluster settings of a cluster hosted in Azure, see [Upgrade the configuration of a cluster in Azure](./service-fabric-cluster-config-upgrade-azure.md#customize-cluster-settings-using-resource-manager-templates)
For instructions on how to adjust the cluster settings of a standalone cluster h
When choosing other [VM sizes](../virtual-machines/sizes-general.md) for production workloads, keep in mind the following constraints:

-- Partial / single core VM sizes like Standard A0 are not supported.
-- *A-series* VM sizes are not supported for performance reasons.
-- Low-priority VMs are not supported.
-- [B-Series Burstable SKU's](../virtual-machines/sizes-b-series-burstable.md) are not supported.
+- Partial / single core VM sizes like Standard A0 aren't supported.
+- *A-series* VM sizes aren't supported for performance reasons.
+- Low-priority VMs aren't supported.
+- [B-Series Burstable SKUs](../virtual-machines/sizes-b-series-burstable.md) aren't supported.
#### Primary node type

**Production workloads** on Azure require a minimum of five primary nodes (VM instances) and reliability tier of Silver. It's recommended to dedicate the cluster primary node type to system services, and use placement constraints to deploy your application to secondary node types.
-**Test workloads** in Azure can run a minimum of one or three primary nodes. To configure a one node cluster, be sure that the `reliabilityLevel` setting is completely omitted in your Resource Manager template (specifying empty string value for `reliabilityLevel` is not sufficient). If you set up the one node cluster set up with Azure portal, this configuration is done automatically.
+**Test workloads** in Azure can run a minimum of one or three primary nodes. To configure a one-node cluster, be sure that the `reliabilityLevel` setting is omitted in your Resource Manager template (specifying an empty string value for `reliabilityLevel` isn't sufficient). If you set up the one-node cluster with the Azure portal, this configuration is done automatically.
> [!WARNING]
-> One-node clusters run with a special configuration without reliability and where scale out is not supported.
+> One-node clusters run with a special configuration without reliability and where scale out isn't supported.
-#### Non-primary node types
+#### Nonprimary node types
-The minimum number of nodes for a non-primary node type depends on the specific [durability level](#durability-characteristics-of-the-cluster) of the node type. You should plan the number of nodes (and durability level) based on the number of replicas of applications or services that you want to run for the node type, and depending on whether the workload is stateful or stateless. Keep in mind you can increase or decrease the number of VMs in a node type anytime after you have deployed the cluster.
+The minimum number of nodes for a nonprimary node type depends on the specific [durability level](#durability-characteristics-of-the-cluster) of the node type. You should plan the number of nodes (and durability level) based on the number of replicas of applications or services that you want to run for the node type, and depending on whether the workload is stateful or stateless. Keep in mind you can increase or decrease the number of VMs in a node type anytime after you have deployed the cluster.
##### Stateful workloads
For stateful production workloads using Service Fabric [reliable collections or
##### Stateless workloads
-For stateless production workloads, the minimum supported non-primary node type size is three to maintain quorum, however a node type size of five is recommended.
+For stateless production workloads, the minimum supported nonprimary node type size is three to maintain quorum; however, a node type size of five is recommended.
## Next steps
service-fabric Service Fabric Cluster Fabric Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-fabric-settings.md
This article describes the various fabric settings for your Service Fabric cluster that you can customize.
There are three different upgrade policies:

-- **Dynamic** – Changes to a dynamic configuration do not cause any process restarts of either Service Fabric processes or your service host processes.
-- **Static** – Changes to a static configuration will cause the Service Fabric node to restart in order to consume the change. Services on the nodes will be restarted.
+- **Dynamic** – Changes to a dynamic configuration don't cause any process restarts of either Service Fabric processes or your service host processes.
+- **Static** – Changes to a static configuration cause the Service Fabric node to restart in order to consume the change. Services on the nodes are restarted.
- **NotAllowed** – These settings cannot be modified. Changing these settings requires that the cluster be destroyed and a new cluster created.

The following is a list of Fabric settings that you can customize, organized by section.
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
| --- | --- | --- | --- |
-|ApplicationCertificateValidationPolicy|string, default is "None"|Static| This does not validate the server certificate; succeed the request. Refer to config ServiceCertificateThumbprints for the comma-separated list of thumbprints of the remote certs that the reverse proxy can trust. Refer to config ServiceCommonNameAndIssuer for the subject name and issuer thumbprint of the remote certs that the reverse proxy can trust. To learn more, see [Reverse proxy secure connection](service-fabric-reverseproxy-configure-secure-communication.md#secure-connection-establishment-between-the-reverse-proxy-and-services). |
+|ApplicationCertificateValidationPolicy|string, default is "None"|Static| The default, "None", doesn't validate the server certificate; the request succeeds. Refer to config ServiceCertificateThumbprints for the comma-separated list of thumbprints of the remote certs that the reverse proxy can trust. Refer to config ServiceCommonNameAndIssuer for the subject name and issuer thumbprint of the remote certs that the reverse proxy can trust. To learn more, see [Reverse proxy secure connection](service-fabric-reverseproxy-configure-secure-communication.md#secure-connection-establishment-between-the-reverse-proxy-and-services). |
|BodyChunkSize |Uint, default is 16384 |Dynamic| Gives the size of the chunk in bytes used to read the body. |
|CrlCheckingFlag|uint, default is 0x40000000 |Dynamic| Flags for application/service certificate chain validation; e.g. CRL checking 0x10000000 CERT_CHAIN_REVOCATION_CHECK_END_CERT 0x20000000 CERT_CHAIN_REVOCATION_CHECK_CHAIN 0x40000000 CERT_CHAIN_REVOCATION_CHECK_CHAIN_EXCLUDE_ROOT 0x80000000 CERT_CHAIN_REVOCATION_CHECK_CACHE_ONLY Setting to 0 disables CRL checking Full list of supported values is documented by dwFlags of CertGetCertificateChain: https://msdn.microsoft.com/library/windows/desktop/aa376078(v=vs.85).aspx |
|DefaultHttpRequestTimeout |Time in seconds, default is 120 |Dynamic|Specify timespan in seconds. Gives the default request timeout for the http requests being processed in the http app gateway. |
-|ForwardClientCertificate|bool, default is FALSE|Dynamic|When set to false, reverse proxy will not request for the client certificate. When set to true, reverse proxy will request for the client certificate during the TLS handshake and forward the base64 encoded PEM format string to the service in a header named X-Client-Certificate. The service can fail the request with appropriate status code after inspecting the certificate data. If this is true and client does not present a certificate, reverse proxy will forward an empty header and let the service handle the case. Reverse proxy will act as a transparent layer. To learn more, see [Set up client certificate authentication](service-fabric-reverseproxy-configure-secure-communication.md#setting-up-client-certificate-authentication-through-the-reverse-proxy). |
+|ForwardClientCertificate|bool, default is FALSE|Dynamic|When set to false, the reverse proxy doesn't request the client certificate. When set to true, the reverse proxy requests the client certificate during the TLS handshake and forwards the base64-encoded PEM format string to the service in a header named X-Client-Certificate. The service can fail the request with an appropriate status code after inspecting the certificate data. If this is true and the client doesn't present a certificate, the reverse proxy forwards an empty header and lets the service handle the case. The reverse proxy acts as a transparent layer. To learn more, see [Set up client certificate authentication](service-fabric-reverseproxy-configure-secure-communication.md#setting-up-client-certificate-authentication-through-the-reverse-proxy). |
|GatewayAuthCredentialType |string, default is "None" |Static| Indicates the type of security credentials to use at the http app gateway endpoint. Valid values are None/X509. |
|GatewayX509CertificateFindType |string, default is "FindByThumbprint" |Dynamic| Indicates how to search for certificate in the store specified by GatewayX509CertificateStoreName. Supported values: FindByThumbprint; FindBySubjectName. |
-|GatewayX509CertificateFindValue | string, default is "" |Dynamic| Search filter value used to locate the http app gateway certificate. This certificate is configured on the https endpoint and can also be used to verify the identity of the app if needed by the services. FindValue is looked up first; and if that does not exist; FindValueSecondary is looked up. |
-|GatewayX509CertificateFindValueSecondary | string, default is "" |Dynamic|Search filter value used to locate the http app gateway certificate. This certificate is configured on the https endpoint and can also be used to verify the identity of the app if needed by the services. FindValue is looked up first; and if that does not exist; FindValueSecondary is looked up.|
+|GatewayX509CertificateFindValue | string, default is "" |Dynamic| Search filter value used to locate the http app gateway certificate. This certificate is configured on the https endpoint and can also be used to verify the identity of the app if needed by the services. FindValue is looked up first; and if that doesn't exist; FindValueSecondary is looked up. |
+|GatewayX509CertificateFindValueSecondary | string, default is "" |Dynamic|Search filter value used to locate the http app gateway certificate. This certificate is configured on the https endpoint and can also be used to verify the identity of the app if needed by the services. FindValue is looked up first; and if that doesn't exist; FindValueSecondary is looked up.|
|GatewayX509CertificateStoreName |string, default is "My" |Dynamic| Name of X.509 certificate store that contains certificate for http app gateway. |
|HttpRequestConnectTimeout|TimeSpan, default is Common::TimeSpan::FromSeconds(5)|Dynamic|Specify timespan in seconds. Gives the connect timeout for the http requests being sent from the http app gateway. |
|IgnoreCrlOfflineError|bool, default is TRUE|Dynamic|Whether to ignore CRL offline error for application/service certificate verification. |
|IsEnabled |Bool, default is false |Static| Enables/Disables the HttpApplicationGateway. HttpApplicationGateway is disabled by default and this config needs to be set to enable it. |
|NumberOfParallelOperations | Uint, default is 5000 |Static|Number of reads to post to the http server queue. This controls the number of concurrent requests that can be satisfied by the HttpGateway. |
-|RemoveServiceResponseHeaders|string, default is "Date; Server"|Static|Semi colon/ comma-separated list of response headers that will be removed from the service response; before forwarding it to the client. If this is set to empty string; pass all the headers returned by the service as-is. i.e do not overwrite the Date and Server |
+|RemoveServiceResponseHeaders|string, default is "Date; Server"|Static|Semicolon/comma-separated list of response headers that are removed from the service response before forwarding it to the client. If this is set to an empty string, pass all the headers returned by the service as-is, i.e., don't overwrite the Date and Server headers. |
|ResolveServiceBackoffInterval |Time in seconds, default is 5 |Dynamic|Specify timespan in seconds. Gives the default back-off interval before retrying a failed resolve service operation. |
|SecureOnlyMode|bool, default is FALSE|Dynamic| SecureOnlyMode: true: Reverse Proxy will only forward to services that publish secure endpoints. false: Reverse Proxy can forward requests to secure/non-secure endpoints. To learn more, see [Reverse proxy endpoint selection logic](service-fabric-reverseproxy-configure-secure-communication.md#endpoint-selection-logic-when-services-expose-secure-as-well-as-unsecured-endpoints). |
|ServiceCertificateThumbprints|string, default is ""|Dynamic|The comma-separated list of thumbprints of the remote certs that the reverse proxy can trust. To learn more, see [Reverse proxy secure connection](service-fabric-reverseproxy-configure-secure-communication.md#secure-connection-establishment-between-the-reverse-proxy-and-services). |
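Since IsEnabled is off by default, a hedged sketch of enabling the http app gateway via `fabricSettings` follows. It assumes the section name `ApplicationGateway/Http`, which isn't stated in the table above; note that IsEnabled is a Static setting, so nodes restart to consume the change:

```json
"fabricSettings": [
  {
    "name": "ApplicationGateway/Http",
    "parameters": [
      { "name": "IsEnabled", "value": "true" },
      { "name": "DefaultHttpRequestTimeout", "value": "180" }
    ]
  }
]
```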
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
| --- | --- | --- | --- |
|DeployedState |wstring, default is L"Disabled" |Static |2-stage removal of CSS. |
|EnableSecretMonitoring|bool, default is FALSE |Static |Must be enabled to use Managed KeyVaultReferences. Default may become true in the future. For more information, see [KeyVaultReference support for Azure-deployed Service Fabric Applications](./service-fabric-keyvault-references.md)|
-|SecretMonitoringInterval|TimeSpan, default is Common::TimeSpan::FromMinutes(15) |Static |The rate at which Service Fabric will poll Key Vault for changes when using Managed KeyVaultReferences. This rate is a best effort, and changes in Key Vault may be reflected in the cluster earlier or later than the interval. For more information, see [KeyVaultReference support for Azure-deployed Service Fabric Applications](./service-fabric-keyvault-references.md) |
+|SecretMonitoringInterval|TimeSpan, default is Common::TimeSpan::FromMinutes(15) |Static |The rate at which Service Fabric polls Key Vault for changes when using Managed KeyVaultReferences. This rate is a best effort, and changes in Key Vault may be reflected in the cluster earlier or later than the interval. For more information, see [KeyVaultReference support for Azure-deployed Service Fabric Applications](./service-fabric-keyvault-references.md) |
|UpdateEncryptionCertificateTimeout |TimeSpan, default is Common::TimeSpan::MaxValue |Static |Specify timespan in seconds. The default has changed to TimeSpan::MaxValue; but overrides are still respected. May be deprecated in the future. |

## CentralSecretService/Replication
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
| --- | --- | --- | --- |
|ReplicationBatchSendInterval|TimeSpan, default is Common::TimeSpan::FromSeconds(15)|Static|Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before force sending a batch.|
-|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This will reduce network traffic.|
+|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This reduces network traffic.|
## ClusterManager
|FabricUpgradeHealthCheckInterval |Time in seconds, default is 60 |Dynamic|The frequency of health status check during a monitored Fabric upgrade |
|FabricUpgradeStatusPollInterval |Time in seconds, default is 60 |Dynamic|The frequency of polling for Fabric upgrade status. This value determines the rate of update for any GetFabricUpgradeProgress call |
|ImageBuilderTimeoutBuffer |Time in seconds, default is 3 |Dynamic|Specify timespan in seconds. The amount of time to allow for Image Builder specific timeout errors to return to the client. If this buffer is too small; then the client times out before the server and gets a generic timeout error. |
-|InfrastructureTaskHealthCheckRetryTimeout | Time in seconds, default is 60 |Dynamic|Specify timespan in seconds. The amount of time to spend retrying failed health checks while post-processing an infrastructure task. Observing a passed health check will reset this timer. |
-|InfrastructureTaskHealthCheckStableDuration | Time in seconds, default is 0|Dynamic| Specify timespan in seconds. The amount of time to observe consecutive passed health checks before post-processing of an infrastructure task finishes successfully. Observing a failed health check will reset this timer. |
+|InfrastructureTaskHealthCheckRetryTimeout | Time in seconds, default is 60 |Dynamic|Specify timespan in seconds. The amount of time to spend retrying failed health checks while post-processing an infrastructure task. Observing a passed health check resets this timer. |
+|InfrastructureTaskHealthCheckStableDuration | Time in seconds, default is 0|Dynamic| Specify timespan in seconds. The amount of time to observe consecutive passed health checks before post-processing of an infrastructure task finishes successfully. Observing a failed health check resets this timer. |
|InfrastructureTaskHealthCheckWaitDuration |Time in seconds, default is 0|Dynamic| Specify timespan in seconds. The amount of time to wait before starting health checks after post-processing an infrastructure task. |
|InfrastructureTaskProcessingInterval | Time in seconds, default is 10 |Dynamic|Specify timespan in seconds. The processing interval used by the infrastructure task processing state machine. |
|MaxCommunicationTimeout |Time in seconds, default is 600 |Dynamic|Specify timespan in seconds. The maximum timeout for internal communications between ClusterManager and other system services (i.e.; Naming Service; Failover Manager and etc.). This timeout should be smaller than global MaxOperationTimeout (as there might be multiple communications between system components for each client operation). |
|QuorumLossWaitDuration |Time in seconds, default is MaxValue |Not Allowed| Specify timespan in seconds. The QuorumLossWaitDuration for ClusterManager. |
|ReplicaRestartWaitDuration |Time in seconds, default is (60.0 \* 30)|Not Allowed|Specify timespan in seconds. The ReplicaRestartWaitDuration for ClusterManager. |
|ReplicaSetCheckTimeoutRollbackOverride |Time in seconds, default is 1200 |Dynamic| Specify timespan in seconds. If ReplicaSetCheckTimeout is set to the maximum value of DWORD; then it's overridden with the value of this config for the purposes of rollback. The value used for roll-forward is never overridden. |
-|SkipRollbackUpdateDefaultService | Bool, default is false |Dynamic|The CM will skip reverting updated default services during application upgrade rollback. |
+|SkipRollbackUpdateDefaultService | Bool, default is false |Dynamic|The CM skips reverting updated default services during application upgrade rollback. |
|StandByReplicaKeepDuration | Time in seconds, default is (3600.0 \* 2)|Not Allowed|Specify timespan in seconds. The StandByReplicaKeepDuration for ClusterManager. |
|TargetReplicaSetSize |Int, default is 7 |Not Allowed|The TargetReplicaSetSize for ClusterManager. |
|UpgradeHealthCheckInterval |Time in seconds, default is 60 |Dynamic|The frequency of health status checks during monitored application upgrades |
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
| --- | --- | --- | --- |
|ReplicationBatchSendInterval|TimeSpan, default is Common::TimeSpan::FromSeconds(15)|Static|Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before force sending a batch.|
-|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This will reduce network traffic.|
+|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This reduces network traffic.|
## Common

| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
| --- | --- | --- | --- |
|AllowCreateUpdateMultiInstancePerNodeServices |Bool, default is false |Dynamic|Allows creation of multiple stateless instances of a service per node. This feature is currently in preview. |
-|EnableAuxiliaryReplicas |Bool, default is false |Dynamic|Enable creation or update of auxiliary replicas on services. If true; upgrades from SF version 8.1+ to lower targetVersion will be blocked. |
+|EnableAuxiliaryReplicas |Bool, default is false |Dynamic|Enable creation or update of auxiliary replicas on services. If true; upgrades from SF version 8.1+ to a lower targetVersion are blocked. |
|PerfMonitorInterval |Time in seconds, default is 1 |Dynamic|Specify timespan in seconds. Performance monitoring interval. Setting to 0 or negative value disables monitoring. |

## DefragmentationEmptyNodeDistributionPolicy
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
| --- | --- | --- | --- |
-|AdminOnlyHttpAudit |Bool, default is true | Dynamic | Exclude HTTP requests which do not impact the state of the cluster from auditing. Currently; only requests of "GET" type are excluded; but this is subject to change. |
+|AdminOnlyHttpAudit |Bool, default is true | Dynamic | Exclude HTTP requests that don't impact the state of the cluster from auditing. Currently; only requests of "GET" type are excluded; but this is subject to change. |
|AppDiagnosticStoreAccessRequiresImpersonation |Bool, default is true | Dynamic |Whether or not impersonation is required when accessing diagnostic stores on behalf of the application. |
|AppEtwTraceDeletionAgeInDays |Int, default is 3 | Dynamic |Number of days after which we delete old ETL files containing application ETW traces. |
|ApplicationLogsFormatVersion |Int, default is 0 | Dynamic |Version for application logs format. Supported values are 0 and 1. Version 1 includes more fields from the ETW event record than version 0. |
|AuditHttpRequests |Bool, default is false | Dynamic | Turn HTTP auditing on or off. The purpose of auditing is to see the activities that have been performed against the cluster; including who initiated the request. Note that this is best-attempt logging; and trace loss may occur. HTTP requests with "User" authentication are not recorded. |
-|CaptureHttpTelemetry|Bool, default is true | Dynamic | Turn HTTP telemetry on or off. The purpose of telemetry is for Service Fabric to be able to capture telemetry data to help plan future work and identify problem areas. Telemetry does not record any personal data or the request body. Telemetry captures all HTTP requests unless otherwise configured. |
+|CaptureHttpTelemetry|Bool, default is true | Dynamic | Turn HTTP telemetry on or off. The purpose of telemetry is for Service Fabric to be able to capture telemetry data to help plan future work and identify problem areas. Telemetry doesn't record any personal data or the request body. Telemetry captures all HTTP requests unless otherwise configured. |
|ClusterId |String | Dynamic |The unique ID of the cluster. This is generated when the cluster is created. |
|ConsumerInstances |String | Dynamic |The list of DCA consumer instances. |
|DiskFullSafetySpaceInMB |Int, default is 1024 | Dynamic |Remaining disk space in MB to protect from use by DCA. |
The following is a list of Fabric settings that you can customize, organized by
|ForwarderPoolStartPort|Int, default is 16700|Static|The start address for the forwarding pool that is used for recursive queries.|
|InstanceCount|int, default is -1|Static|Default value is -1 which means that DnsService is running on every node. OneBox needs this to be set to 1 since DnsService uses well known port 53, so it cannot have multiple instances on the same machine.|
|IsEnabled|bool, default is FALSE|Static|Enables/Disables DnsService. DnsService is disabled by default and this config needs to be set to enable it. |
-|PartitionPrefix|string, default is "--"|Static|Controls the partition prefix string value in DNS queries for partitioned services. The value: <ul><li>Should be RFC-compliant as it will be part of a DNS query.</li><li>Should not contain a dot, '.', as dot interferes with DNS suffix behavior.</li><li>Should not be longer than 5 characters.</li><li>Cannot be an empty string.</li><li>If the PartitionPrefix setting is overridden, then PartitionSuffix must be overridden, and vice-versa.</li></ul>For more information, see [Service Fabric DNS Service.](service-fabric-dnsservice.md).|
-|PartitionSuffix|string, default is ""|Static|Controls the partition suffix string value in DNS queries for partitioned services. The value: <ul><li>Should be RFC-compliant as it will be part of a DNS query.</li><li>Should not contain a dot, '.', as dot interferes with DNS suffix behavior.</li><li>Should not be longer than 5 characters.</li><li>If the PartitionPrefix setting is overridden, then PartitionSuffix must be overridden, and vice-versa.</li></ul>For more information, see [Service Fabric DNS Service.](service-fabric-dnsservice.md). |
-|RecursiveQueryParallelMaxAttempts|Int, default is 0|Static|The number of times parallel queries will be attempted. Parallel queries are executed after the max attempts for serial queries have been exhausted.|
+|PartitionPrefix|string, default is "--"|Static|Controls the partition prefix string value in DNS queries for partitioned services. The value: <ul><li>Should be RFC-compliant as it will be part of a DNS query.</li><li>Shouldn't contain a dot, '.', as dot interferes with DNS suffix behavior.</li><li>Shouldn't be longer than 5 characters.</li><li>Cannot be an empty string.</li><li>If the PartitionPrefix setting is overridden, then PartitionSuffix must be overridden, and vice-versa.</li></ul>For more information, see [Service Fabric DNS Service](service-fabric-dnsservice.md).|
+|PartitionSuffix|string, default is ""|Static|Controls the partition suffix string value in DNS queries for partitioned services. The value: <ul><li>Should be RFC-compliant as it will be part of a DNS query.</li><li>Shouldn't contain a dot, '.', as dot interferes with DNS suffix behavior.</li><li>Shouldn't be longer than 5 characters.</li><li>If the PartitionPrefix setting is overridden, then PartitionSuffix must be overridden, and vice-versa.</li></ul>For more information, see [Service Fabric DNS Service](service-fabric-dnsservice.md). |
+|RecursiveQueryParallelMaxAttempts|Int, default is 0|Static|The number of times parallel queries are attempted. Parallel queries are executed after the max attempts for serial queries have been exhausted.|
|RecursiveQueryParallelTimeout|TimeSpan, default is Common::TimeSpan::FromSeconds(5)|Static|The timeout value in seconds for each attempted parallel query.|
-|RecursiveQuerySerialMaxAttempts|Int, default is 2|Static|The number of serial queries that will be attempted, at most. If this number is higher than the number of forwarding DNS servers, querying will stop once all the servers have been attempted exactly once.|
+|RecursiveQuerySerialMaxAttempts|Int, default is 2|Static|The number of serial queries that are attempted, at most. If this number is higher than the number of forwarding DNS servers, querying stops once all the servers have been attempted exactly once.|
|RecursiveQuerySerialTimeout|TimeSpan, default is Common::TimeSpan::FromSeconds(5)|Static|The timeout value in seconds for each attempted serial query.|
|TransientErrorMaxRetryCount|Int, default is 3|Static|Controls the number of times SF DNS will retry when a transient error occurs while calling SF APIs (e.g. when retrieving names and endpoints).|
|TransientErrorRetryIntervalInMillis|Int, default is 0|Static|Sets the delay in milliseconds between retries for when SF DNS calls SF APIs.|
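To make the PartitionPrefix/PartitionSuffix settings above more concrete, here is a minimal sketch of how a DNS query name for a partitioned service could be assembled. The function name and the exact label layout are illustrative assumptions; the authoritative scheme is defined by the Service Fabric DNS Service article linked in the table.

```python
# Hypothetical sketch: how PartitionPrefix and PartitionSuffix could wrap a
# partition name inside the first label of a service's DNS name.
# This is NOT the product's implementation, just an illustration of the
# prefix/suffix settings described in the table above.
def partitioned_dns_name(service_dns_name: str, partition_name: str,
                         prefix: str = "--", suffix: str = "") -> str:
    # Defaults mirror the table: PartitionPrefix "--", PartitionSuffix "".
    first_label, _, rest = service_dns_name.partition(".")
    label = f"{first_label}{prefix}{partition_name}{suffix}"
    return f"{label}.{rest}" if rest else label

print(partitioned_dns_name("svc.app", "p0"))  # svc--p0.app
```

Note how the constraints in the table follow from this shape: a dot inside the prefix or suffix would split the label and interfere with DNS suffix handling, and the whole label must stay RFC-compliant.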
The following is a list of Fabric settings that you can customize, organized by
| | | | |
|ActivationMaxFailureCount |Int, default is 10 |Dynamic|This is the maximum count for which system will retry failed activation before giving up. |
|ActivationMaxRetryInterval |Time in seconds, default is 300 |Dynamic|Specify timespan in seconds. Max retry interval for Activation. On every continuous failure the retry interval is calculated as Min( ActivationMaxRetryInterval; Continuous Failure Count * ActivationRetryBackoffInterval). |
-|ActivationRetryBackoffInterval |Time in seconds, default is 5 |Dynamic|Specify timespan in seconds. Backoff interval on every activation failure;On every continuous activation failure the system will retry the activation for up to the MaxActivationFailureCount. The retry interval on every try is a product of continuous activation failure and the activation back-off interval. |
+|ActivationRetryBackoffInterval |Time in seconds, default is 5 |Dynamic|Specify timespan in seconds. Backoff interval on every activation failure; On every continuous activation failure the system will retry the activation for up to the MaxActivationFailureCount. The retry interval on every try is a product of continuous activation failure and the activation back-off interval. |
|EnableRestartManagement |Bool, default is false |Dynamic|This is to enable server restart. |
|EnableServiceFabricAutomaticUpdates |Bool, default is false |Dynamic|This is to enable fabric automatic update via Windows Update. |
|EnableServiceFabricBaseUpgrade |Bool, default is false |Dynamic|This is to enable base update for server. |
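The activation retry formula stated in the table (Min(ActivationMaxRetryInterval; Continuous Failure Count * ActivationRetryBackoffInterval)) can be sketched directly; the function name is illustrative, and the defaults mirror the table values:

```python
# Sketch of the documented activation retry interval:
#   interval = min(ActivationMaxRetryInterval,
#                  failure_count * ActivationRetryBackoffInterval)
def activation_retry_interval(failure_count: int,
                              backoff_s: float = 5.0,       # ActivationRetryBackoffInterval
                              max_interval_s: float = 300.0  # ActivationMaxRetryInterval
                              ) -> float:
    # Linear backoff per continuous failure, capped at the max interval.
    return min(max_interval_s, failure_count * backoff_s)

for failures in (1, 10, 100):
    print(failures, activation_retry_interval(failures))
# 1 failure -> 5 s, 10 failures -> 50 s, 100 failures -> capped at 300 s
```

The same Min(max; count * backoff) pattern reappears for DeploymentMaxRetryInterval/DeploymentRetryBackoffInterval later in this section.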
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
| | | | |
|ReplicationBatchSendInterval|TimeSpan, default is Common::TimeSpan::FromSeconds(15)|Static|Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before force sending a batch.|
-|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This will reduce network traffic.|
+|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This reduces network traffic.|
## FailoverManager
The following is a list of Fabric settings that you can customize, organized by
|ReplicaDropWaitDurationInSeconds|int, default is 600|Static|This parameter is used when the data loss api is called. It controls how long the system will wait for a replica to get dropped after remove replica is internally invoked on it. |
|ReplicaRestartWaitDuration |Time in seconds, default is 60 minutes|Static|Specify timespan in seconds. The ReplicaRestartWaitDuration for FaultAnalysisService. |
|StandByReplicaKeepDuration| Time in seconds, default is (60*24*7) minutes |Static|Specify timespan in seconds. The StandByReplicaKeepDuration for FaultAnalysisService. |
-|StoredActionCleanupIntervalInSeconds | Int, default is 3600 |Static|This is how often the store will be cleaned up. Only actions in a terminal state; and that completed at least CompletedActionKeepDurationInSeconds ago will be removed. |
+|StoredActionCleanupIntervalInSeconds | Int, default is 3600 |Static|This is how often the store is cleaned up. Only actions in a terminal state that completed at least CompletedActionKeepDurationInSeconds ago are removed. |
|StoredChaosEventCleanupIntervalInSeconds | Int, default is 3600 |Static|This is how often the store will be audited for cleanup; if the number of events is more than 30000; the cleanup will kick in. |
|TargetReplicaSetSize |Int, default is 0 |Static|NOT_PLATFORM_UNIX_START The TargetReplicaSetSize for FaultAnalysisService. |
The following is a list of Fabric settings that you can customize, organized by
|EnableImageStoreHealthReporting |bool, default is TRUE |Static|Config to determine whether file store service should report its health. |
|FreeDiskSpaceNotificationSizeInKB|int64, default is 25\*1024 |Dynamic|The size of free disk space below which health warning may occur. Minimum values of this config and FreeDiskSpaceNotificationThresholdPercentage config are used to determine sending of the health warning. |
|FreeDiskSpaceNotificationThresholdPercentage|double, default is 0.02 |Dynamic|The percentage of free disk space below which health warning may occur. Minimum value of this config and FreeDiskSpaceNotificationInMB config are used to determine sending of health warning. |
-|GenerateV1CommonNameAccount| bool, default is TRUE|Static|Specifies whether to generate an account with user name V1 generation algorithm. Starting with Service Fabric version 6.1; an account with v2 generation is always created. The V1 account is necessary for upgrades from/to versions that do not support V2 generation (prior to 6.1).|
+|GenerateV1CommonNameAccount| bool, default is TRUE|Static|Specifies whether to generate an account with user name V1 generation algorithm. Starting with Service Fabric version 6.1; an account with v2 generation is always created. The V1 account is necessary for upgrades from/to versions that don't support V2 generation (prior to 6.1).|
|MaxCopyOperationThreads | Uint, default is 0 |Dynamic| The maximum number of parallel files that secondary can copy from primary. '0' == number of cores. |
|MaxFileOperationThreads | Uint, default is 100 |Static| The maximum number of parallel threads allowed to perform FileOperations (Copy/Move) in the primary. '0' == number of cores. |
|MaxRequestProcessingThreads | Uint, default is 200 |Static|The maximum number of parallel threads allowed to process requests in the primary. '0' == number of cores. |
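The two FreeDiskSpaceNotification* settings above combine as described: the smaller of the absolute threshold and the percentage threshold decides when a health warning fires. A minimal sketch, with an illustrative function name and the table's defaults as assumptions:

```python
# Sketch of the documented FileStoreService low-disk warning rule:
# warn when free space drops below the MINIMUM of the absolute threshold
# (FreeDiskSpaceNotificationSizeInKB, default 25*1024 KB) and the percentage
# threshold (FreeDiskSpaceNotificationThresholdPercentage, default 0.02).
def should_warn_low_disk(free_kb: int, total_kb: int,
                         notify_size_kb: int = 25 * 1024,
                         notify_pct: float = 0.02) -> bool:
    threshold_kb = min(notify_size_kb, notify_pct * total_kb)
    return free_kb < threshold_kb

# On a 10 GB volume the percentage threshold (~205 MB) exceeds the absolute
# one (25 MB), so the 25 MB value governs.
print(should_warn_low_disk(20000, 10 * 1024 * 1024))  # True
print(should_warn_low_disk(30000, 10 * 1024 * 1024))  # False
```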
The following is a list of Fabric settings that you can customize, organized by
|ApplicationHostCloseTimeout| TimeSpan, default is Common::TimeSpan::FromSeconds(120)|Dynamic| Specify timespan in seconds. When Fabric exit is detected in a self-activated process; FabricRuntime closes all of the replicas in the user's host (applicationhost) process. This is the timeout for the close operation. |
| CnsNetworkPluginCnmUrlPort | wstring, default is L"48080" | Static | Azure cnm api url port |
| CnsNetworkPluginCnsUrlPort | wstring, default is L"10090" | Static | Azure cns url port |
-|ContainerServiceArguments|string, default is "-H localhost:2375 -H npipe://"|Static|Service Fabric (SF) manages docker daemon (except on windows client machines like Win10). This configuration allows user to specify custom arguments that should be passed to docker daemon when starting it. When custom arguments are specified, Service Fabric do not pass any other argument to Docker engine except '--pidfile' argument. Hence users should not specify '--pidfile' argument as part of their customer arguments. Also, the custom arguments should ensure that docker daemon listens on default name pipe on Windows (or Unix domain socket on Linux) for Service Fabric to be able to communicate with it.|
+|ContainerServiceArguments|string, default is "-H localhost:2375 -H npipe://"|Static|Service Fabric (SF) manages the docker daemon (except on windows client machines like Win10). This configuration allows the user to specify custom arguments that should be passed to the docker daemon when starting it. When custom arguments are specified, Service Fabric doesn't pass any other argument to the Docker engine except the '--pidfile' argument. Hence users shouldn't specify '--pidfile' as part of their custom arguments. Also, the custom arguments should ensure that the docker daemon listens on the default named pipe on Windows (or Unix domain socket on Linux) for Service Fabric to be able to communicate with it.|
|ContainerServiceLogFileMaxSizeInKb|int, default is 32768|Static|Maximum file size of log file generated by docker containers. Windows only.|
|ContainerImageDownloadTimeout|int, number of seconds, default is 1200 (20 mins)|Dynamic|Number of seconds before download of image times out.|
-|ContainerImagesToSkip|string, image names separated by vertical line character, default is ""|Static|Name of one or more container images that should not be deleted. Used with the PruneContainerImages parameter.|
+|ContainerImagesToSkip|string, image names separated by vertical line character, default is ""|Static|Name of one or more container images that shouldn't be deleted. Used with the PruneContainerImages parameter.|
|ContainerServiceLogFileNamePrefix|string, default is "sfcontainerlogs"|Static|File name prefix for log files generated by docker containers. Windows only.|
|ContainerServiceLogFileRetentionCount|int, default is 10|Static|Number of log files generated by docker containers before log files are overwritten. Windows only.|
|CreateFabricRuntimeTimeout|TimeSpan, default is Common::TimeSpan::FromSeconds(120)|Dynamic| Specify timespan in seconds. The timeout value for the sync FabricCreateRuntime call |
The following is a list of Fabric settings that you can customize, organized by
|DeploymentMaxRetryInterval| TimeSpan, default is Common::TimeSpan::FromSeconds(3600)|Dynamic| Specify timespan in seconds. Max retry interval for the deployment. On every continuous failure the retry interval is calculated as Min( DeploymentMaxRetryInterval; Continuous Failure Count * DeploymentRetryBackoffInterval) |
|DeploymentRetryBackoffInterval| TimeSpan, default is Common::TimeSpan::FromSeconds(10)|Dynamic|Specify timespan in seconds. Back-off interval for the deployment failure. On every continuous deployment failure the system will retry the deployment for up to the MaxDeploymentFailureCount. The retry interval is a product of continuous deployment failure and the deployment backoff interval. |
|DisableContainers|bool, default is FALSE|Static|Config for disabling containers - used instead of DisableContainerServiceStartOnContainerActivatorOpen which is deprecated config |
-|DisableDockerRequestRetry|bool, default is FALSE |Dynamic| By default SF communicates with DD (docker dameon) with a timeout of 'DockerRequestTimeout' for each http request sent to it. If DD does not responds within this time period; SF resends the request if top level operation still has remaining time. With Hyper-V container; DD sometimes take much more time to bring up the container or deactivate it. In such cases DD request times out from SF perspective and SF retries the operation. Sometimes this seems to adds more pressure on DD. This config allows to disable this retry and wait for DD to respond. |
+|DisableDockerRequestRetry|bool, default is FALSE |Dynamic| By default SF communicates with DD (docker daemon) with a timeout of 'DockerRequestTimeout' for each http request sent to it. If DD doesn't respond within this time period; SF resends the request if the top level operation still has remaining time. With Hyper-V containers; DD sometimes takes much more time to bring up the container or deactivate it. In such cases the DD request times out from the SF perspective and SF retries the operation. Sometimes this seems to add more pressure on DD. This config allows you to disable this retry and wait for DD to respond. |
|DisableLivenessProbes | wstring, default is L"" | Static | Config to disable Liveness probes in cluster. You can specify any non-empty value for SF to disable probes. |
|DisableReadinessProbes | wstring, default is L"" | Static | Config to disable Readiness probes in cluster. You can specify any non-empty value for SF to disable probes. |
-|DnsServerListTwoIps | Bool, default is FALSE | Static | This flags adds the local dns server twice to help alleviate intermittent resolve issues. |
+|DnsServerListTwoIps | Bool, default is FALSE | Static | This flag adds the local dns server twice to help alleviate intermittent resolve issues. |
| DockerTerminateOnLastHandleClosed | bool, default is TRUE | Static | By default if FabricHost is managing the 'dockerd' (based on: SkipDockerProcessManagement == false) this setting configures what happens when either FabricHost or dockerd crash. When set to `true` if either process crashes all running containers will be forcibly terminated by the HCS. If set to `false` the containers will continue to keep running. Note: Previous to 8.0 this behavior was unintentionally the equivalent of `false`. The default setting of `true` here is what we expect to happen by default moving forward for our cleanup logic to be effective on restart of these processes. |
| DoNotInjectLocalDnsServer | bool, default is FALSE | Static | Prevents the runtime from injecting the local IP as DNS server for containers. |
|EnableActivateNoWindow| bool, default is FALSE|Dynamic| The activated process is created in the background without any console. |
The following is a list of Fabric settings that you can customize, organized by
|IsDefaultContainerRepositoryPasswordEncrypted|bool, default is FALSE|Static|Whether the DefaultContainerRepositoryPassword is encrypted or not.|
|LinuxExternalExecutablePath|string, default is "/usr/bin/" |Static|The primary directory of external executable commands on the node.|
|NTLMAuthenticationEnabled|bool, default is FALSE|Static| Enables support for using NTLM by the code packages that are running as other users so that the processes across machines can communicate securely. |
-|NTLMAuthenticationPasswordSecret|SecureString, default is Common::SecureString("")|Static|Is an encrypted has that is used to generate the password for NTLM users. Has to be set if NTLMAuthenticationEnabled is true. Validated by the deployer. |
+|NTLMAuthenticationPasswordSecret|SecureString, default is Common::SecureString("")|Static|An encrypted hash that is used to generate the password for NTLM users. Must be set if NTLMAuthenticationEnabled is true. Validated by the deployer. |
|NTLMSecurityUsersByX509CommonNamesRefreshInterval|TimeSpan, default is Common::TimeSpan::FromMinutes(3)|Dynamic|Specify timespan in seconds. The periodic interval at which Hosting scans for new certificates to be used for FileStoreService NTLM configuration. |
|NTLMSecurityUsersByX509CommonNamesRefreshTimeout|TimeSpan, default is Common::TimeSpan::FromMinutes(4)|Dynamic| Specify timespan in seconds. The timeout for configuring NTLM users using certificate common names. The NTLM users are needed for FileStoreService shares. |
-|PruneContainerImages|bool, default is FALSE|Dynamic| Remove unused application container images from nodes. When an ApplicationType is unregistered from the Service Fabric cluster, the container images that were used by this application will be removed on nodes where it was downloaded by Service Fabric. The pruning runs every hour, so it may take up to one hour (plus time to prune the image) for images to be removed from the cluster.<br>Service Fabric will never download or remove images not related to an application. Unrelated images that were downloaded manually or otherwise must be removed explicitly.<br>Images that should not be deleted can be specified in the ContainerImagesToSkip parameter.|
+|PruneContainerImages|bool, default is FALSE|Dynamic| Remove unused application container images from nodes. When an ApplicationType is unregistered from the Service Fabric cluster, the container images that were used by this application will be removed on nodes where they were downloaded by Service Fabric. The pruning runs every hour, so it may take up to one hour (plus time to prune the image) for images to be removed from the cluster.<br>Service Fabric will never download or remove images not related to an application. Unrelated images that were downloaded manually or otherwise must be removed explicitly.<br>Images that shouldn't be deleted can be specified in the ContainerImagesToSkip parameter.|
|RegisterCodePackageHostTimeout|TimeSpan, default is Common::TimeSpan::FromSeconds(120)|Dynamic| Specify timespan in seconds. The timeout value for the FabricRegisterCodePackageHost sync call. This is applicable for only multi code package application hosts like FWP |
|RequestTimeout|TimeSpan, default is Common::TimeSpan::FromSeconds(30)|Dynamic| Specify timespan in seconds. This represents the timeout for communication between the user's application host and Fabric process for various hosting related operations such as factory registration; runtime registration. |
|RunAsPolicyEnabled| bool, default is FALSE|Static| Enables running code packages as local user other than the user under which fabric process is running. In order to enable this policy Fabric must be running as SYSTEM or as user who has SeAssignPrimaryTokenPrivilege. |
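As a sketch of how the PruneContainerImages and ContainerImagesToSkip settings above could be paired, here is a hypothetical `fabricSettings` fragment for the Hosting section; the image names are placeholders, not recommended values:

```json
{
  "name": "Hosting",
  "parameters": [
    { "name": "PruneContainerImages", "value": "true" },
    { "name": "ContainerImagesToSkip", "value": "repo.example.com/base-os|repo.example.com/runtime" }
  ]
}
```

Per the table, images listed in ContainerImagesToSkip (separated by the vertical line character) survive the hourly pruning pass that PruneContainerImages enables.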
The following is a list of Fabric settings that you can customize, organized by
| | | | |
|ActiveListeners |Uint, default is 50 |Static| Number of reads to post to the http server queue. This controls the number of concurrent requests that can be satisfied by the HttpGateway. |
|HttpGatewayHealthReportSendInterval |Time in seconds, default is 30 |Static|Specify timespan in seconds. The interval at which the Http Gateway sends accumulated health reports to the Health Manager. |
-|HttpStrictTransportSecurityHeader|string,default is ""|Dynamic| Specify the HTTP Strict Transport Security header value to be included in every response sent by the HttpGateway. When set to empty string; this header will not be included in the gateway response.|
+|HttpStrictTransportSecurityHeader|string, default is ""|Dynamic| Specify the HTTP Strict Transport Security header value to be included in every response sent by the HttpGateway. When set to empty string; this header will not be included in the gateway response.|
|IsEnabled|Bool, default is false |Static| Enables/Disables the HttpGateway. HttpGateway is disabled by default. |
|MaxEntityBodySize |Uint, default is 4194304 |Dynamic|Gives the maximum size of the body that can be expected from an http request. Default value is 4MB. Httpgateway will fail a request if it has a body of size > this value. Minimum read chunk size is 4096 bytes. So this has to be >= 4096. |
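To illustrate the HttpStrictTransportSecurityHeader setting above, a hypothetical `fabricSettings` fragment for the HttpGateway section; the header value shown is a common example HSTS policy, not a recommendation:

```json
{
  "name": "HttpGateway",
  "parameters": [
    { "name": "HttpStrictTransportSecurityHeader", "value": "max-age=31536000; includeSubDomains" }
  ]
}
```

With this value set, the gateway includes `Strict-Transport-Security: max-age=31536000; includeSubDomains` in every response; leaving it as the default empty string omits the header.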
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
| | | | |
-|AutomaticMemoryConfiguration |Int, default is 1 |Dynamic|Flag that indicates if the memory settings should be automatically and dynamically configured. If zero then the memory configuration settings are used directly and do not change based on system conditions. If one then the memory settings are configured automatically and may change based on system conditions. |
+|AutomaticMemoryConfiguration |Int, default is 1 |Dynamic|Flag that indicates if the memory settings should be automatically and dynamically configured. If zero then the memory configuration settings are used directly and don't change based on system conditions. If one then the memory settings are configured automatically and may change based on system conditions. |
|MaximumDestagingWriteOutstandingInKB | Int, default is 0 |Dynamic|The number of KB to allow the shared log to advance ahead of the dedicated log. Use 0 to indicate no limit. |
|SharedLogId |string, default is "" |Static|Unique guid for shared log container. Use "" if using default path under fabric data root. |
|SharedLogPath |string, default is "" |Static|Path and file name to location to place shared log container. Use "" for using default path under fabric data root. |
The following is a list of Fabric settings that you can customize, organized by
## ManagedIdentityTokenService

| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
| | | | |
-|IsEnabled|bool, default is FALSE|Static|Flag controlling the presence and status of the Managed Identity Token Service in the cluster;this is a prerequisite for using the managed identity functionality of Service Fabric applications.|
+|IsEnabled|bool, default is FALSE|Static|Flag controlling the presence and status of the Managed Identity Token Service in the cluster; this is a prerequisite for using the managed identity functionality of Service Fabric applications.|
| RunInStandaloneMode |bool, default is FALSE |Static|The RunInStandaloneMode for ManagedIdentityTokenService. |
| StandalonePrincipalId |wstring, default is "" |Static|The StandalonePrincipalId for ManagedIdentityTokenService. |
| StandaloneSendX509 |bool, default is FALSE |Static|The StandaloneSendX509 for ManagedIdentityTokenService. |
The following is a list of Fabric settings that you can customize, organized by
|AzureStorageMaxWorkerThreads | Int, default is 25 |Dynamic|The maximum number of worker threads in parallel. |
|AzureStorageOperationTimeout | Time in seconds, default is 6000 |Dynamic|Specify timespan in seconds. Time out for xstore operation to complete. |
|CleanupApplicationPackageOnProvisionSuccess|bool, default is true |Dynamic|Enables or disables the automatic cleanup of application package on successful provision. |
-|CleanupUnusedApplicationTypes|Bool, default is FALSE |Dynamic|This configuration if enabled, allows to automatically unregister unused application type versions skipping the latest three unused versions, thereby trimming the disk space occupied by image store. The automatic cleanup will be triggered at the end of successful provision for that specific app type and also runs periodically once a day for all the application types. Number of unused versions to skip is configurable using parameter "MaxUnusedAppTypeVersionsToKeep". <br/> *Best practice is to use `true`.*
+|CleanupUnusedApplicationTypes|Bool, default is FALSE |Dynamic|This configuration, if enabled, automatically unregisters unused application type versions, skipping the latest three unused versions, thereby trimming the disk space occupied by the image store. The automatic cleanup is triggered at the end of a successful provision for that specific app type and also runs periodically once a day for all application types. The number of unused versions to skip is configurable using the parameter "MaxUnusedAppTypeVersionsToKeep". <br/> *Best practice is to use `true`.*
|DisableChecksumValidation | Bool, default is false |Static| This configuration allows us to enable or disable checksum validation during application provisioning. |
|DisableServerSideCopy | Bool, default is false |Static|This configuration enables or disables server-side copy of application package on the ImageStore during application provisioning. |
|ImageCachingEnabled | Bool, default is true |Static|This configuration allows us to enable or disable caching. |
The following is a list of Fabric settings that you can customize, organized by
## MetricLoadStickinessForSwap

| **Parameter** | **Allowed Values** |**Upgrade Policy**| **Guidance or Short Description** |
| | | | |
-|PropertyGroup|KeyDoubleValueMap, default is None|Dynamic|Determines the part of the load that sticks with replica when swapped It takes value between 0 (load doesn't stick with replica) and 1 (load sticks with replica - default) |
+|PropertyGroup|KeyDoubleValueMap, default is None|Dynamic|Determines the part of the load that sticks with replica when swapped. It takes value between 0 (load doesn't stick with replica) and 1 (load sticks with replica - default) |
## Naming/Replication
The following is a list of Fabric settings that you can customize, organized by
|AffinityConstraintPriority | Int, default is 0 | Dynamic|Determines the priority of affinity constraint: 0: Hard; 1: Soft; negative: Ignore. |
|ApplicationCapacityConstraintPriority | Int, default is 0 | Dynamic|Determines the priority of capacity constraint: 0: Hard; 1: Soft; negative: Ignore. |
|AutoDetectAvailableResources|bool, default is TRUE|Static|This config triggers auto detection of available resources on the node (CPU and Memory). When this config is set to true; we read real capacities and correct them if the user specified bad node capacities or didn't define them at all. If this config is set to false; we trace a warning that the user specified bad node capacities; but we don't correct them; meaning that the user wants the capacities specified as greater than the node really has; or, if capacities are undefined, unlimited capacity is assumed. |
-|BalancingDelayAfterNewNode | Time in seconds, default is 120 |Dynamic|Specify timespan in seconds. Do not start balancing activities within this period after adding a new node. |
-|BalancingDelayAfterNodeDown | Time in seconds, default is 120 |Dynamic|Specify timespan in seconds. Do not start balancing activities within this period after a node down event. |
+|BalancingDelayAfterNewNode | Time in seconds, default is 120 |Dynamic|Specify timespan in seconds. Don't start balancing activities within this period after adding a new node. |
+|BalancingDelayAfterNodeDown | Time in seconds, default is 120 |Dynamic|Specify timespan in seconds. Don't start balancing activities within this period after a node down event. |
|BlockNodeInUpgradeConstraintPriority | Int, default is -1 |Dynamic|Determines the priority of capacity constraint: 0: Hard; 1: Soft; negative: Ignore |
|CapacityConstraintPriority | Int, default is 0 | Dynamic|Determines the priority of capacity constraint: 0: Hard; 1: Soft; negative: Ignore. |
|ConsecutiveDroppedMovementsHealthReportLimit | Int, default is 20 | Dynamic|Defines the number of consecutive times that ResourceBalancer-issued Movements are dropped before diagnostics are conducted and health warnings are emitted. Negative: No Warnings Emitted under this condition. |
-|ConstraintFixPartialDelayAfterNewNode | Time in seconds, default is 120 |Dynamic| Specify timespan in seconds. DDo not Fix FaultDomain and UpgradeDomain constraint violations within this period after adding a new node. |
-|ConstraintFixPartialDelayAfterNodeDown | Time in seconds, default is 120 |Dynamic| Specify timespan in seconds. Do not Fix FaultDomain and UpgradeDomain constraint violations within this period after a node down event. |
+|ConstraintFixPartialDelayAfterNewNode | Time in seconds, default is 120 |Dynamic| Specify timespan in seconds. Don't Fix FaultDomain and UpgradeDomain constraint violations within this period after adding a new node. |
+|ConstraintFixPartialDelayAfterNodeDown | Time in seconds, default is 120 |Dynamic| Specify timespan in seconds. Don't Fix FaultDomain and UpgradeDomain constraint violations within this period after a node down event. |
|ConstraintViolationHealthReportLimit | Int, default is 50 |Dynamic| Defines the number of times constraint violating replica has to be persistently unfixed before diagnostics are conducted and health reports are emitted. |
|DecisionOperationalTracingEnabled | bool, default is FALSE |Dynamic| Config that enables CRM Decision operational structural trace in the event store. |
|DetailedConstraintViolationHealthReportLimit | Int, default is 200 |Dynamic| Defines the number of times constraint violating replica has to be persistently unfixed before diagnostics are conducted and detailed health reports are emitted. |
The following is a list of Fabric settings that you can customize, organized by
|PLBRefreshGap | Time in seconds, default is 1 |Dynamic| Specify timespan in seconds. Defines the minimum amount of time that must pass before PLB refreshes state again. | |PreferredLocationConstraintPriority | Int, default is 2| Dynamic|Determines the priority of preferred location constraint: 0: Hard; 1: Soft; 2: Optimization; negative: Ignore | |PreferredPrimaryDomainsConstraintPriority| Int, default is 1 | Dynamic| Determines the priority of preferred primary domain constraint: 0: Hard; 1: Soft; negative: Ignore |
-|PreferUpgradedUDs|bool,default is FALSE|Dynamic|Turns on and off logic which prefers moving to already upgraded UDs. Starting with SF 7.0, the default value for this parameter is changed from TRUE to FALSE.|
+|PreferUpgradedUDs|bool, default is FALSE|Dynamic|Turns on and off logic which prefers moving to already upgraded UDs. Starting with SF 7.0, the default value for this parameter is changed from TRUE to FALSE.|
|PreventTransientOvercommit | Bool, default is false | Dynamic|Determines whether PLB should immediately count on resources that will be freed up by the initiated moves. By default; PLB can initiate move out and move in on the same node which can create transient overcommit. Setting this parameter to true will prevent those kinds of overcommits and on-demand defrag (also known as placementWithMove) will be disabled. | |ScaleoutCountConstraintPriority | Int, default is 0 |Dynamic| Determines the priority of scaleout count constraint: 0: Hard; 1: Soft; negative: Ignore. | |SubclusteringEnabled|Bool, default is FALSE | Dynamic |Acknowledge subclustering when calculating standard deviation for balancing |
-|SubclusteringReportingPolicy| Int, default is 1 |Dynamic|Defines how and if the subclustering health reports are sent: 0: Do not report; 1: Warning; 2: OK |
+|SubclusteringReportingPolicy| Int, default is 1 |Dynamic|Defines how and if the subclustering health reports are sent: 0: Don't report; 1: Warning; 2: OK |
|SwapPrimaryThrottlingAssociatedMetric | string, default is ""|Static| The associated metric name for this throttling. | |SwapPrimaryThrottlingEnabled | Bool, default is false|Dynamic| Determine whether the swap-primary throttling is enabled. | |SwapPrimaryThrottlingGlobalMaxValue | Int, default is 0 |Dynamic| The maximum number of swap-primary replicas allowed globally. | |TraceCRMReasons |Bool, default is true |Dynamic|Specifies whether to trace reasons for CRM issued movements to the operational events channel. | |UpgradeDomainConstraintPriority | Int, default is 1| Dynamic|Determines the priority of upgrade domain constraint: 0: Hard; 1: Soft; negative: Ignore. | |UseMoveCostReports | Bool, default is false | Dynamic|Instructs the LB to ignore the cost element of the scoring function; resulting in a potentially large number of moves for better balanced placement. |
-|UseSeparateAuxiliaryLoad | Bool, default is true | Dynamic|Setting which determines if PLB should use different load for auxiliary on each node If UseSeparateAuxiliaryLoad is turned off: - Reported load for auxiliary on one node will result in overwriting load for each auxiliary (on all other nodes) If UseSeparateAuxiliaryLoad is turned on: - Reported load for auxiliary on one node will take effect only on that auxiliary (no effect on auxiliaries on other nodes) - If replica crash happens - new replica is created with average load of all the rest auxiliaries - If PLB moves existing replica - load goes with it. |
-|UseSeparateAuxiliaryMoveCost | Bool, default is false | Dynamic|Setting which determines if PLB should use different move cost for auxiliary on each node If UseSeparateAuxiliaryMoveCost is turned off: - Reported move cost for auxiliary on one node will result in overwritting move cost for each auxiliary (on all other nodes) If UseSeparateAuxiliaryMoveCost is turned on: - Reported move cost for auxiliary on one node will take effect only on that auxiliary (no effect on auxiliaries on other nodes) - If replica crash happens - new replica is created with default move cost specified on service level - If PLB moves existing replica - move cost goes with it. |
+|UseSeparateAuxiliaryLoad | Bool, default is true | Dynamic|Setting which determines if PLB should use different load for auxiliary on each node. If UseSeparateAuxiliaryLoad is turned off: - Reported load for auxiliary on one node will result in overwriting load for each auxiliary (on all other nodes) If UseSeparateAuxiliaryLoad is turned on: - Reported load for auxiliary on one node will take effect only on that auxiliary (no effect on auxiliaries on other nodes) - If replica crash happens - new replica is created with the average load of the remaining auxiliaries - If PLB moves existing replica - load goes with it. |
+|UseSeparateAuxiliaryMoveCost | Bool, default is false | Dynamic|Setting which determines if PLB should use different move cost for auxiliary on each node. If UseSeparateAuxiliaryMoveCost is turned off: - Reported move cost for auxiliary on one node will result in overwriting move cost for each auxiliary (on all other nodes) If UseSeparateAuxiliaryMoveCost is turned on: - Reported move cost for auxiliary on one node will take effect only on that auxiliary (no effect on auxiliaries on other nodes) - If replica crash happens - new replica is created with default move cost specified on service level - If PLB moves existing replica - move cost goes with it. |
|UseSeparateSecondaryLoad | Bool, default is true | Dynamic|Setting which determines if separate load should be used for secondary replicas. |
-|UseSeparateSecondaryMoveCost | Bool, default is true | Dynamic|Setting which determines if PLB should use different move cost for secondary on each node. If UseSeparateSecondaryMoveCost is turned off: - Reported move cost for secondary on one node will result in overwritting move cost for each secondary (on all other nodes) If UseSeparateSecondaryMoveCost is turned on: - Reported move cost for secondary on one node will take effect only on that secondary (no effect on secondaries on other nodes) - If replica crash happens - new replica is created with default move cost specified on service level - If PLB moves existing replica - move cost goes with it. |
+|UseSeparateSecondaryMoveCost | Bool, default is true | Dynamic|Setting which determines if PLB should use different move cost for secondary on each node. If UseSeparateSecondaryMoveCost is turned off: - Reported move cost for secondary on one node will result in overwriting move cost for each secondary (on all other nodes) If UseSeparateSecondaryMoveCost is turned on: - Reported move cost for secondary on one node will take effect only on that secondary (no effect on secondaries on other nodes) - If replica crash happens - new replica is created with default move cost specified on service level - If PLB moves existing replica - move cost goes with it. |
|ValidatePlacementConstraint | Bool, default is true |Dynamic| Specifies whether or not the PlacementConstraint expression for a service is validated when a service's ServiceDescription is updated. | |ValidatePrimaryPlacementConstraintOnPromote| Bool, default is TRUE |Dynamic|Specifies whether or not the PlacementConstraint expression for a service is evaluated for primary preference on failover. | |VerboseHealthReportLimit | Int, default is 20 | Dynamic|Defines the number of times a replica has to go unplaced before a health warning is reported for it (if verbose health reporting is enabled). |
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy**| **Guidance or Short Description** | | | | | |
-|BatchAcknowledgementInterval|TimeSpan, default is Common::TimeSpan::FromMilliseconds(15)|Static|Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before sending back an acknowledgement. Other operations received during this time period will have their acknowledgements sent back in a single message-> reducing network traffic but potentially reducing the throughput of the replicator.|
+|BatchAcknowledgementInterval|TimeSpan, default is Common::TimeSpan::FromMilliseconds(15)|Static|Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before sending back an acknowledgment. Other operations received during this time period will have their acknowledgments sent back in a single message, reducing network traffic but potentially reducing the throughput of the replicator.|
|MaxCopyQueueSize|uint, default is 1024|Static|This maximum value defines the initial size for the queue which maintains replication operations. Note that it must be a power of 2. If during runtime the queue grows to this size, operations will be throttled between the primary and secondary replicators.| |MaxPrimaryReplicationQueueMemorySize|uint, default is 0|Static|This is the maximum value of the primary replication queue in bytes.| |MaxPrimaryReplicationQueueSize|uint, default is 1024|Static|This is the maximum number of operations that could exist in the primary replication queue. Note that it must be a power of 2.|
The following is a list of Fabric settings that you can customize, organized by
|AADLoginEndpoint|string, default is ""|Static|Azure Active Directory Login Endpoint, default Azure Commercial, specified for non-default environment such as Azure Government "https:\//login.microsoftonline.us" | |AADTenantId|string, default is ""|Static|Tenant ID (GUID) | |AcceptExpiredPinnedClusterCertificate|bool, default is FALSE|Dynamic|Flag indicating whether to accept expired cluster certificates declared by thumbprint. Applies only to cluster certificates; so as to keep the cluster alive. |
-|AdminClientCertThumbprints|string, default is ""|Dynamic|Thumbprints of certificates used by clients in admin role. It is a comma-separated name list. |
+|AdminClientCertThumbprints|string, default is ""|Dynamic|Thumbprints of certificates used by clients in admin role. It's a comma-separated name list. |
|AADTokenEndpointFormat|string, default is ""|Static|Azure Active Directory Token Endpoint, default Azure Commercial, specified for non-default environment such as Azure Government "https:\//login.microsoftonline.us/{0}" | |AdminClientClaims|string, default is ""|Dynamic|All possible claims expected from admin clients; the same format as ClientClaims; this list internally gets added to ClientClaims; so no need to also add the same entries to ClientClaims. |
-|AdminClientIdentities|string, default is ""|Dynamic|Windows identities of fabric clients in admin role; used to authorize privileged fabric operations. It is a comma-separated list; each entry is a domain account name or group name. For convenience; the account that runs fabric.exe is automatically assigned admin role; so is group ServiceFabricAdministrators. |
+|AdminClientIdentities|string, default is ""|Dynamic|Windows identities of fabric clients in admin role; used to authorize privileged fabric operations. It's a comma-separated list; each entry is a domain account name or group name. For convenience; the account that runs fabric.exe is automatically assigned admin role; so is group ServiceFabricAdministrators. |
|AppRunAsAccountGroupX509Folder|string, default is /home/sfuser/sfusercerts |Static|Folder where AppRunAsAccountGroup X509 certificates and private keys are located | |CertificateExpirySafetyMargin|TimeSpan, default is Common::TimeSpan::FromMinutes(43200)|Static|Specify timespan in seconds. Safety margin for certificate expiration; certificate health report status changes from OK to Warning when expiration is closer than this. Default is 30 days. | |CertificateHealthReportingInterval|TimeSpan, default is Common::TimeSpan::FromSeconds(3600 * 8)|Static|Specify timespan in seconds. Specify interval for certificate health reporting; default to 8 hours; setting to 0 disables certificate health reporting |
-|ClientCertThumbprints|string, default is ""|Dynamic|Thumbprints of certificates used by clients to talk to the cluster; cluster uses this authorize incoming connection. It is a comma-separated name list. |
+|ClientCertThumbprints|string, default is ""|Dynamic|Thumbprints of certificates used by clients to talk to the cluster; cluster uses this to authorize incoming connection. It's a comma-separated name list. |
|ClientClaimAuthEnabled|bool, default is FALSE|Static|Indicates if claim-based authentication is enabled on clients; setting this true implicitly sets ClientRoleEnabled. | |ClientClaims|string, default is ""|Dynamic|All possible claims expected from clients for connecting to gateway. This is an 'OR' list: ClaimsEntry \|\| ClaimsEntry \|\| ClaimsEntry ... each ClaimsEntry is an "AND" list: ClaimType=ClaimValue && ClaimType=ClaimValue && ClaimType=ClaimValue ... |
-|ClientIdentities|string, default is ""|Dynamic|Windows identities of FabricClient; naming gateway uses this to authorize incoming connections. It is a comma-separated list; each entry is a domain account name or group name. For convenience; the account that runs fabric.exe is automatically allowed; so are group ServiceFabricAllowedUsers and ServiceFabricAdministrators. |
+|ClientIdentities|string, default is ""|Dynamic|Windows identities of FabricClient; naming gateway uses this to authorize incoming connections. It's a comma-separated list; each entry is a domain account name or group name. For convenience; the account that runs fabric.exe is automatically allowed; so are group ServiceFabricAllowedUsers and ServiceFabricAdministrators. |
|ClientRoleEnabled|bool, default is FALSE|Static|Indicates if client role is enabled; when set to true; clients are assigned roles based on their identities. For V2; enabling this means clients not in AdminClientCommonNames/AdminClientIdentities can only execute read-only operations. | |ClusterCertThumbprints|string, default is ""|Dynamic|Thumbprints of certificates allowed to join the cluster; a comma-separated name list. | |ClusterCredentialType|string, default is "None"|Not Allowed|Indicates the type of security credentials to use in order to secure the cluster. Valid values are "None/X509/Windows" |
-|ClusterIdentities|string, default is ""|Dynamic|Windows identities of cluster nodes; used for cluster membership authorization. It is a comma-separated list; each entry is a domain account name or group name |
-|ClusterSpn|string, default is ""|Not Allowed|Service principal name of the cluster; when fabric runs as a single domain user (gMSA/domain user account). It is the SPN of lease listeners and listeners in fabric.exe: federation listeners; internal replication listeners; runtime service listener and naming gateway listener. This should be left empty when fabric runs as machine accounts; in which case connecting side compute listener SPN from listener transport address. |
+|ClusterIdentities|string, default is ""|Dynamic|Windows identities of cluster nodes; used for cluster membership authorization. It's a comma-separated list; each entry is a domain account name or group name |
+|ClusterSpn|string, default is ""|Not Allowed|Service principal name of the cluster; when fabric runs as a single domain user (gMSA/domain user account). It's the SPN of lease listeners and listeners in fabric.exe: federation listeners; internal replication listeners; runtime service listener and naming gateway listener. This should be left empty when fabric runs as machine accounts; in which case the connecting side computes the listener SPN from the listener transport address. |
|CrlCheckingFlag|uint, default is 0x40000000|Dynamic|Default certificate chain validation flag; may be overridden by component-specific flag; e.g. Federation/X509CertChainFlags 0x10000000 CERT_CHAIN_REVOCATION_CHECK_END_CERT 0x20000000 CERT_CHAIN_REVOCATION_CHECK_CHAIN 0x40000000 CERT_CHAIN_REVOCATION_CHECK_CHAIN_EXCLUDE_ROOT 0x80000000 CERT_CHAIN_REVOCATION_CHECK_CACHE_ONLY. Setting to 0 disables CRL checking. Full list of supported values is documented by dwFlags of CertGetCertificateChain: https://msdn.microsoft.com/library/windows/desktop/aa376078(v=vs.85).aspx | |CrlDisablePeriod|TimeSpan, default is Common::TimeSpan::FromMinutes(15)|Dynamic|Specify timespan in seconds. How long CRL checking is disabled for a given certificate after encountering offline error; if CRL offline error can be ignored. | |CrlOfflineHealthReportTtl|TimeSpan, default is Common::TimeSpan::FromMinutes(1440)|Dynamic|Specify timespan in seconds. |
-|DisableFirewallRuleForDomainProfile| bool, default is TRUE |Static| Indicates if firewall rule should not be enabled for domain profile |
-|DisableFirewallRuleForPrivateProfile| bool, default is TRUE |Static| Indicates if firewall rule should not be enabled for private profile |
-|DisableFirewallRuleForPublicProfile| bool, default is TRUE | Static|Indicates if firewall rule should not be enabled for public profile |
+|DisableFirewallRuleForDomainProfile| bool, default is TRUE |Static| Indicates if firewall rule shouldn't be enabled for domain profile |
+|DisableFirewallRuleForPrivateProfile| bool, default is TRUE |Static| Indicates if firewall rule shouldn't be enabled for private profile |
+|DisableFirewallRuleForPublicProfile| bool, default is TRUE | Static|Indicates if firewall rule shouldn't be enabled for public profile |
| EnforceLinuxMinTlsVersion | bool, default is FALSE | Static | If set to true; only TLS version 1.2+ is supported. If false; support earlier TLS versions. Applies to Linux only | | EnforcePrevalidationOnSecurityChanges | bool, default is FALSE| Dynamic | Flag controlling the behavior of cluster upgrade upon detecting changes of its security settings. If set to 'true', the cluster upgrade will attempt to ensure that at least one of the certificates matching any of the presentation rules can pass a corresponding validation rule. The pre-validation is executed before the new settings are applied to any node, but runs only on the node hosting the primary replica of the Cluster Manager service at the time of initiating the upgrade. The default is currently set to 'false'; starting with release 7.1, the setting will be set to 'true' for new Azure Service Fabric clusters.|
-|FabricHostSpn| string, default is "" |Static| Service principal name of FabricHost; when fabric runs as a single domain user (gMSA/domain user account) and FabricHost runs under machine account. It is the SPN of IPC listener for FabricHost; which by default should be left empty since FabricHost runs under machine account |
+|FabricHostSpn| string, default is "" |Static| Service principal name of FabricHost; when fabric runs as a single domain user (gMSA/domain user account) and FabricHost runs under machine account. It's the SPN of IPC listener for FabricHost; which by default should be left empty since FabricHost runs under machine account |
|IgnoreCrlOfflineError|bool, default is FALSE|Dynamic|Whether to ignore CRL offline error when server-side verifies incoming client certificates | |IgnoreSvrCrlOfflineError|bool, default is TRUE|Dynamic|Whether to ignore CRL offline error when client side verifies incoming server certificates; default to true. Attacks with revoked server certificates require compromising DNS; harder than with revoked client certificates. | |ServerAuthCredentialType|string, default is "None"|Static|Indicates the type of security credentials to use in order to secure the communication between FabricClient and the Cluster. Valid values are "None/X509/Windows" |
-|ServerCertThumbprints|string, default is ""|Dynamic|Thumbprints of server certificates used by cluster to talk to clients; clients use this to authenticate the cluster. It is a comma-separated name list. |
+|ServerCertThumbprints|string, default is ""|Dynamic|Thumbprints of server certificates used by cluster to talk to clients; clients use this to authenticate the cluster. It's a comma-separated name list. |
|SettingsX509StoreName| string, default is "MY"| Dynamic|X509 certificate store used by fabric for configuration protection | |UseClusterCertForIpcServerTlsSecurity|bool, default is FALSE|Static|Whether to use cluster certificate to secure IPC Server TLS transport unit | |X509Folder|string, default is /var/lib/waagent|Static|Folder where X509 certificates and private keys are located |
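Parameters in the Security section, like those above, are typically applied through the `fabricSettings` block of the cluster resource in an ARM template. The following is a minimal sketch, not a complete template: the parameter names come from the table above, and the thumbprint values are placeholders you would replace with your own certificate thumbprints.

```json
{
  "fabricSettings": [
    {
      "name": "Security",
      "parameters": [
        { "name": "ClusterCredentialType", "value": "X509" },
        { "name": "ServerAuthCredentialType", "value": "X509" },
        { "name": "ClusterCertThumbprints", "value": "<cluster-cert-thumbprint>" },
        { "name": "AdminClientCertThumbprints", "value": "<admin-client-cert-thumbprint>" }
      ]
    }
  ]
}
```

Because the listed parameters are marked Dynamic, a configuration upgrade with this fragment should not require node restarts; Static parameters in the same section would.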
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** | | | | | |ActivateNode |string, default is "Admin" |Dynamic| Security configuration for activating a node. |
-|BlockAccessToWireServer|bool, default is FALSE| Static |Blocks access to ports of the WireServer endpoint from Docker containers deployed as Service Fabric applications. This parameter is supported for Service Fabric clusters deployed on Azure Virtual Machines, Windows and Linux, and defaults to 'false' (access is permitted).|
-|CancelTestCommand |string, default is "Admin" |Dynamic| Cancels a specific TestCommand - if it is in flight. |
+|CancelTestCommand |string, default is "Admin" |Dynamic| Cancels a specific TestCommand - if it's in flight. |
|CodePackageControl |string, default is "Admin" |Dynamic| Security configuration for restarting code packages. | |CreateApplication |string, default is "Admin" | Dynamic|Security configuration for application creation. | |CreateComposeDeployment|string, default is "Admin"| Dynamic|Creates a compose deployment described by compose files |
The following is a list of Fabric settings that you can customize, organized by
|GetUpgradesPendingApproval |string, default is "Admin" |Dynamic| Induces GetUpgradesPendingApproval on a partition. | |GetUpgradeStatus |string, default is "Admin\|\|User" |Dynamic| Security configuration for polling application upgrade status. | |InternalList |string, default is "Admin" | Dynamic|Security configuration for image store client file list operation (internal). |
-|InvokeContainerApi|string,default is "Admin"|Dynamic|Invoke container API |
+|InvokeContainerApi|string, default is "Admin"|Dynamic|Invoke container API |
|InvokeInfrastructureCommand |string, default is "Admin" |Dynamic| Security configuration for infrastructure task management commands. | |InvokeInfrastructureQuery |string, default is "Admin\|\|User" | Dynamic|Security configuration for querying infrastructure tasks. | |List |string, default is "Admin\|\|User" | Dynamic|Security configuration for image store client file list operation. |
The following is a list of Fabric settings that you can customize, organized by
|ServiceNotifications |string, default is "Admin\|\|User" |Dynamic| Security configuration for event-based service notifications. | |SetUpgradeOrchestrationServiceState|string, default is "Admin"| Dynamic|Induces SetUpgradeOrchestrationServiceState on a partition | |StartApprovedUpgrades |string, default is "Admin" |Dynamic| Induces StartApprovedUpgrades on a partition. |
-|StartChaos |string, default is "Admin" |Dynamic| Starts Chaos - if it is not already started. |
+|StartChaos |string, default is "Admin" |Dynamic| Starts Chaos - if it's not already started. |
|StartClusterConfigurationUpgrade |string, default is "Admin" |Dynamic| Induces StartClusterConfigurationUpgrade on a partition. | |StartInfrastructureTask |string, default is "Admin" | Dynamic|Security configuration for starting infrastructure tasks. | |StartNodeTransition |string, default is "Admin" |Dynamic| Security configuration for starting a node transition. |
The following is a list of Fabric settings that you can customize, organized by
|NodesToBeRemoved|string, default is ""| Dynamic |The nodes which should be removed as part of configuration upgrade. (Only for Standalone Deployments)| |ServiceRunAsAccountName |String | Not Allowed |The account name under which to run fabric host service. | |SkipContainerNetworkResetOnReboot|bool, default is FALSE|NotAllowed|Whether to skip resetting container network on reboot.|
-|SkipFirewallConfiguration |Bool, default is false | Dynamic |Specifies if firewall settings need to be set by the system or not. This applies only if you are using windows firewall. If you are using third party firewalls, then you must open the ports for the system and applications to use |
+|SkipFirewallConfiguration |Bool, default is false | Dynamic |Specifies if firewall settings need to be set by the system or not. This applies only if you're using Windows Firewall. If you're using third-party firewalls, then you must open the ports for the system and applications to use |
## TokenValidationService
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** | | | | | |
-|BatchAcknowledgementInterval | Time in seconds, default is 0.015 | Static | Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before sending back an acknowledgement. Other operations received during this time period will have their acknowledgements sent back in a single message-> reducing network traffic but potentially reducing the throughput of the replicator. |
+|BatchAcknowledgementInterval | Time in seconds, default is 0.015 | Static | Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before sending back an acknowledgment. Other operations received during this time period will have their acknowledgments sent back in a single message, reducing network traffic but potentially reducing the throughput of the replicator. |
|MaxCopyQueueSize |Uint, default is 16384 | Static |This maximum value defines the initial size for the queue which maintains replication operations. Note that it must be a power of 2. If during runtime the queue grows to this size, operations will be throttled between the primary and secondary replicators. | |MaxPrimaryReplicationQueueMemorySize |Uint, default is 0 | Static |This is the maximum value of the primary replication queue in bytes. | |MaxPrimaryReplicationQueueSize |Uint, default is 8192 | Static |This is the maximum number of operations that could exist in the primary replication queue. Note that it must be a power of 2. |
The following is a list of Fabric settings that you can customize, organized by
|MaxSecondaryReplicationQueueSize |Uint, default is 16384 | Static |This is the maximum number of operations that could exist in the secondary replication queue. Note that it must be a power of 2. | |ReplicatorAddress |string, default is "localhost:0" | Static | The endpoint, in the form of a string 'IP:Port', which is used by the Windows Fabric Replicator to establish connections with other replicas in order to send/receive operations. | |ReplicationBatchSendInterval|TimeSpan, default is Common::TimeSpan::FromMilliseconds(15) | Static | Specify timespan in seconds. Determines the amount of time that the replicator waits after receiving an operation before force sending a batch.|
-|ShouldAbortCopyForTruncation |bool, default is FALSE | Static | Allow pending log truncation to go through during copy. With this enabled the copy stage of builds can be cancelled if the log is full and they are block truncation. |
+|ShouldAbortCopyForTruncation |bool, default is FALSE | Static | Allow pending log truncation to go through during copy. With this enabled, the copy stage of builds can be canceled if the log is full and they are blocking truncation. |
## Transport | **Parameter** | **Allowed Values** |**Upgrade policy** |**Guidance or Short Description** | | | | | | |ConnectionOpenTimeout|TimeSpan, default is Common::TimeSpan::FromSeconds(60)|Static|Specify timespan in seconds. Time out for connection setup on both incoming and accepting side (including security negotiation in secure mode) | |FrameHeaderErrorCheckingEnabled|bool, default is TRUE|Static|Default setting for error checking on frame header in non-secure mode; component setting overrides this. |
-|MessageErrorCheckingEnabled|bool,default is TRUE|Static|Default setting for error checking on message header and body in non-secure mode; component setting overrides this. |
+|MessageErrorCheckingEnabled|bool, default is TRUE|Static|Default setting for error checking on message header and body in non-secure mode; component setting overrides this. |
|ResolveOption|string, default is "unspecified"|Static|Determines how FQDN is resolved. Valid values are "unspecified/ipv4/ipv6". | |SendTimeout|TimeSpan, default is Common::TimeSpan::FromSeconds(300)|Dynamic|Specify timespan in seconds. Send timeout for detecting stuck connection. TCP failure reports are not reliable in some environments. This may need to be adjusted according to available network bandwidth and size of outbound data (\*MaxMessageSize\/\*SendQueueSizeLimit). |
The following is a list of Fabric settings that you can customize, organized by
## UserServiceMetricCapacities | **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** | | | | | |
-|PropertyGroup| UserServiceMetricCapacitiesMap, default is None | Static | A collection of user services resource governance limits Needs to be static as it affects AutoDetection logic |
+|PropertyGroup| UserServiceMetricCapacitiesMap, default is None | Static | A collection of user services resource governance limits. Needs to be static as it affects Auto-Detection logic |
## Next steps For more information, see [Upgrade the configuration of an Azure cluster](service-fabric-cluster-config-upgrade-azure.md) and [Upgrade the configuration of a standalone cluster](service-fabric-cluster-config-upgrade-windows-server.md).
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Replication appliance / Configuration server** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | |
-[Rollup 67](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | 9.54.6682.1 | 9.54.6682.1 / 5.1.8095.0 | 9.54.6682.1 | 5.23.0428.1 (VMware) & 5.1.8095.0 (Hyper-V) | 2.0.9261.0
+[Rollup 67](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | 9.54.6682.1 | 9.54.6682.1 / 5.1.8095.0 | 9.54.6682.1 | 5.23.0428.1 (VMware) & 5.1.8095.0 (Hyper-V) | 2.0.9261.0 (VMware) & 2.0.9260.0 (Hyper-V)
[Rollup 66](https://support.microsoft.com/en-us/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 9.53.6615.1 | 9.53.6615.1 / 5.1.8095.0 | 9.53.6615.1 | 5.1.8103.0 (Modernized VMware), 5.1.8095.0 (Hyper-V) & 5.23.0210.5 (Classic VMware) | 2.0.9260.0 [Rollup 65](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 9.52.6522.1 | 9.52.6522.1 / 5.1.7870.0 | 9.52.6522.1 | 5.1.7870.0 (VMware) & 5.1.7882.0 (Hyper-V) | 2.0.9259.0 [Rollup 64](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 9.51.6477.1 | 9.51.6477.1 / 5.1.7802.0 | 9.51.6477.1 | 5.1.7802.0 | 2.0.9257.0
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Locate the installer files for the server's operating system using the followi
```cmd
- .\UnifiedAgentInstaller.exe /Platform vmware /Silent /Role MS /CSType CSPrime /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery"
+ .\UnifiedAgentInstaller.exe /Platform vmware /Silent /Role MS /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery"
``` Once the installation is complete, copy the string that is generated alongside the parameter *Agent Config Input*. This string is required to [generate the Mobility Service configuration file](#generate-mobility-service-configuration-file).
Locate the installer files for the server's operating system using the followi
4. After successfully installing, register the source machine with the above appliance using the following command: ```cmd
- "C:\Program Files (x86)\Microsoft Azure Site Recovery\agent\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime /CredentialLessDiscovery true
+ "C:\Program Files (x86)\Microsoft Azure Site Recovery\agent\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CredentialLessDiscovery true
``` #### Installation settings
Syntax | `.\UnifiedAgentInstaller.exe /Platform vmware /Role MS /CSType CSPrime
`/InstallLocation`| Optional. Specifies the Mobility service installation location (any folder). `/Platform` | Mandatory. Specifies the platform on which the Mobility service is installed: <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs.<br/><br/> If you're treating Azure VMs as physical machines, specify **VMware**. `/Silent`| Optional. Specifies whether to run the installer in silent mode.
-`/CSType`| Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy)
+`/CSType`| Optional. Used to define the modernized or legacy architecture (CSPrime or CSLegacy). Defaults to the modernized architecture (CSPrime).
#### Registration settings
Setting | Details
| Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime /CredentialLessDiscovery true` `/SourceConfigFilePath` | Mandatory. Full file path of the Mobility Service configuration file. Use any valid folder.
-`/CSType` | Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy).
+`/CSType` | Optional. Used to define the modernized or legacy architecture (CSPrime or CSLegacy). Defaults to the modernized architecture (CSPrime).
`/CredentialLessDiscovery` | Optional. Specifies whether credential-less discovery is performed.
Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath
2. To install, use the below command: ```bash
- sudo ./install -q -r MS -v VmWare -c CSPrime
+ sudo ./install -q -r MS -v VmWare
``` Once the installation is complete, copy the string that is generated alongside the parameter *Agent Config Input*. This string is required to [generate the Mobility Service configuration file](#generate-mobility-service-configuration-file).
Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath
3. After successfully installing, register the source machine with the above appliance using the following command: ```bash
- <InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -c CSPrime -S config.json -q
+ <InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -S config.json -q
``` #### Installation settings Setting | Details |
- Syntax | `./install -q -r MS -v VmWare -c CSPrime`
+ Syntax | `./install -q -r MS -v VmWare`
`-r` | Mandatory. Installation parameter. Specifies whether the Mobility service (MS) should be installed. `-d` | Optional. Specifies the Mobility service installation location: `/usr/local/ASR`. `-v` | Mandatory. Specifies the platform on which Mobility service is installed. <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs. `-q` | Optional. Specifies whether to run the installer in silent mode.
- `-c` | Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy).
+ `-c` | Optional. Used to define the modernized or legacy architecture (CSPrime or CSLegacy). Defaults to the modernized architecture (CSPrime).
#### Registration settings Setting | Details |
- Syntax | `<InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -c CSPrime -S config.json -q -D true`
+ Syntax | `<InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -S config.json -q -D true`
`-S` | Mandatory. Full file path of the Mobility Service configuration file. Use any valid folder.
- `-c` | Mandatory. Used to define modernized and legacy architecture. (CSPrime or CSLegacy).
+ `-c` | Optional. Used to define the modernized or legacy architecture (CSPrime or CSLegacy). Defaults to the modernized architecture (CSPrime).
`-q` | Optional. Specifies whether to run the installer in silent mode. `-D` | Optional. Specifies whether credential-less discovery is performed.
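The optional `-c` setting above can be illustrated with a small wrapper that shows how the flag changes the install command line (a hedged sketch; the helper function name is illustrative, and only the documented flags are used):

```bash
#!/usr/bin/env bash
# Hedged sketch: assemble the Mobility service install command line.
# An empty CSType argument means the modernized (CSPrime) architecture by default.
build_install_cmd() {
  local cstype="$1"
  local cmd="./install -q -r MS -v VmWare"
  if [ -n "$cstype" ]; then
    cmd="$cmd -c $cstype"
  fi
  echo "$cmd"
}

build_install_cmd            # modernized architecture (default, no -c needed)
build_install_cmd CSLegacy   # legacy architecture must be requested explicitly
```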
See information about [upgrading the mobility services](upgrade-mobility-service
- Run this command to install the agent. ```cmd
- UnifiedAgent.exe /Role "MS" /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery" /Platform "VmWare" /Silent
+ UnifiedAgent.exe /Role "MS" /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery" /Platform "VmWare" /Silent /CSType CSLegacy
``` - Run these commands to register the agent with the configuration server.
See information about [upgrading the mobility services](upgrade-mobility-service
Setting | Details |
-Syntax | `UnifiedAgent.exe /Role \<MS/MT> /InstallLocation \<Install Location> /Platform "VmWare" /Silent`
+Syntax | `UnifiedAgent.exe /Role \<MS/MT> /InstallLocation \<Install Location> /Platform "VmWare" /Silent /CSType CSLegacy`
Setup logs | `%ProgramData%\ASRSetupLogs\ASRUnifiedAgentInstaller.log` `/Role` | Mandatory installation parameter. Specifies whether the mobility service (MS) or master target (MT) should be installed. `/InstallLocation`| Optional parameter. Specifies the Mobility service installation location (any folder). `/Platform` | Mandatory. Specifies the platform on which the Mobility service is installed: <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs.<br/><br/> If you're treating Azure VMs as physical machines, specify **VMware**. `/Silent`| Optional. Specifies whether to run the installer in silent mode.
+`/CSType` | Required. Used to define the modernized or legacy architecture (CSPrime or CSLegacy). Specify **CSLegacy** here; if the parameter is omitted, the modernized architecture is assumed.
#### Registration settings Setting | Details
Agent configuration logs | `%ProgramData%\ASRSetupLogs\ASRUnifiedAgentConfigurat
2. Install as follows (root account is not required, but root permissions are required): ```bash
- sudo ./install -r MS -v VmWare -d <Install Location> -q
+ sudo ./install -r MS -v VmWare -d <Install Location> -q -c CSLegacy
``` 3. After the installation is finished, the Mobility service must be registered to the configuration server. Run the following command to register the Mobility service with the configuration server.
Agent configuration logs | `%ProgramData%\ASRSetupLogs\ASRUnifiedAgentConfigurat
Setting | Details |
-Syntax | `./install -r MS -v VmWare [-d <Install Location>] [-q]`
+Syntax | `./install -r MS -v VmWare [-d <Install Location>] [-q] -c CSLegacy`
`-r` | Mandatory installation parameter. Specifies whether the mobility service (MS) or master target (MT) should be installed. `-d` | Optional parameter. Specifies the Mobility service installation location: `/usr/local/ASR`. `-v` | Mandatory. Specifies the platform on which Mobility service is installed. <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs. `-q` | Optional. Specifies whether to run the installer in silent mode.
+`-c` | Required. Used to define the modernized or legacy architecture (CSPrime or CSLegacy). Specify **CSLegacy** here; if the parameter is omitted, the modernized architecture is assumed.
#### Registration settings Setting | Details |
-Syntax | `cd /usr/local/ASR/Vx/bin`</br> `UnifiedAgentConfigurator.sh -i \<CSIP> -P \<PassphraseFilePath>`
+Syntax | `cd /usr/local/ASR/Vx/bin`</br> `UnifiedAgentConfigurator.sh -i \<CSIP> -P \<PassphraseFilePath> -c CSLegacy`
`-i` | Mandatory parameter. `<CSIP>` specifies the configuration server's IP address. Use any valid IP address. `-P` | Mandatory. Full file path of the file in which the passphrase is saved. [Learn more](./vmware-azure-manage-configuration-server.md#generate-configuration-server-passphrase).
+`-c` | Required. Used to define the modernized or legacy architecture (CSPrime or CSLegacy). Specify **CSLegacy** here; if the parameter is omitted, the modernized architecture is assumed.
## Azure Virtual Machine agent
spring-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md
Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
storage-mover Agent Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-deploy.md
Previously updated : 03/27/2023 Last updated : 07/25/2023 <!--
REVIEW Engineering: not reviewed
EDIT PASS: started Initial doc score: 83
-Current doc score: 96 (2026 words and 10 false-positive issues)
+Current doc score: 96 (2038 words and 10 false-positive issues)
!######################################################## --> # Deploy an Azure Storage Mover agent
-The Azure Storage Mover service utilizes agents to perform the migration jobs you configure in the service. An agent is a virtual machine-based migration appliance that runs on a virtualization host. Ideally, your virtualization host is located as near as possible to the source storage to be migrated.
+The Azure Storage Mover service utilizes agents to perform the migration jobs you configure in the service. An agent is a virtual machine-based migration appliance that runs on a virtualization host. Ideally, your virtualization host is located as near as possible to the source storage to be migrated. Storage Mover can support multiple agents.
-Because the agent is essentially a migration appliance, you interact with it through an agent-local administrative shell. The shell limits the operations you can perform on this machine, though network configuration and troubleshooting tasks are accessible.
+Because an agent is essentially a migration appliance, you interact with it through an agent-local administrative shell. The shell limits the operations you can perform on this machine, though network configuration and troubleshooting tasks are accessible.
Use of the agent in migrations is managed through Azure. Both Azure PowerShell and CLI are supported, and graphical interaction is available within the Azure portal. The agent is made available as a disk image compatible with new Windows Hyper-V virtual machines (VM).
This article guides you through the steps necessary to successfully deploy a Sto
## Prerequisites -- A capable Windows Hyper-V host on which to run the agent VM. See the [Recommended compute and memory resources](#recommended-compute-and-memory-resources) section in this article for details about resource requirements for the agent VM.
+- A capable Windows Hyper-V host on which to run the agent VM.<br/> See the [Recommended compute and memory resources](#recommended-compute-and-memory-resources) section in this article for details about resource requirements for the agent VM.
> [!NOTE] > At present, Windows Hyper-V is the only supported virtualization environment for your agent VM. Other virtualization environments have not been tested and are not supported.
-## Download the agent VM image
-
-The image is hosted on Microsoft Download Center as a zip file. Download the file at [https://aka.ms/StorageMover/agent](https://aka.ms/StorageMover/agent) and extract the agent virtual hard disk (VHD) image to your virtualization host.
- ## Determine required resources for the VM Like every VM, the agent requires available compute, memory, network, and storage space resources on the host. Although overall data size may affect the time required to complete a migration, it's generally the number of files and folders that drives resource requirements.
Like every VM, the agent requires available compute, memory, network, and storag
The agent requires unrestricted internet connectivity.
-There's no single network configuration option that works for every environment. However, the simplest configuration involves the deployment of an external virtual switch. The external switch type is connected to a physical adapter and allows your host operating system (OS) to share its connection with all your virtual machines (VMs). This switch allows communication between your physical network, the management operating system, and the virtual adapters on your virtual machines. This approach is fine for a test environment, but may not be suitable for a production server.
+Although no single network configuration option works for every environment, the simplest configuration involves the deployment of an external virtual switch. The external switch type is connected to a physical adapter and allows your host operating system (OS) to share its connection with all your virtual machines (VMs). This switch allows communication between your physical network, the management operating system, and the virtual adapters on your virtual machines. This approach may be acceptable for a test environment, but is likely not sufficient for a production server.
After the switch is created, ensure that both the management and agent VMs are on the same switch. On the WAN link firewall, outbound TCP port 443 must be open. Keep in mind that connectivity interruptions are to be expected when changing network configurations.
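As a quick sanity check that outbound TCP port 443 is open from the network behind the switch, the following hedged sketch can be run from a Linux machine (it assumes `bash` for the `/dev/tcp` pseudo-device and coreutils `timeout`; the endpoint shown is illustrative):

```bash
# Hedged sketch: probe outbound TCP 443 reachability.
# Prints "open" if a connection succeeds within 5 seconds, "blocked" otherwise.
check_443() {
  if timeout 5 bash -c "exec 3<>/dev/tcp/$1/443" 2>/dev/null; then
    echo "open"
  else
    echo "blocked"
  fi
}

check_443 management.azure.com
```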
You can get help with [creating a virtual switch for Hyper-V virtual machines](/
**Number of items** *refers to the total number of files and folders in the source.* > [!IMPORTANT]
-> While agent VMs below minimal specs may work for your migration, they may not perform optimally.
+> While agent VMs below minimal specs may work for your migration, they may not perform optimally and are not supported.
The [Performance targets](performance-targets.md) article contains test results from different source namespaces and VM resources.
The [Performance targets](performance-targets.md) article contains test results
At a minimum, the agent image needs 20 GiB of local storage. The amount required may increase if a large number of small files are cached during a migration.
+## Download the agent VM image
+
+The image is hosted on Microsoft Download Center as a zip file. Download the file at [https://aka.ms/StorageMover/agent](https://aka.ms/StorageMover/agent) and extract the agent virtual hard disk (VHD) image to your virtualization host.
+ ## Create the agent VM 1. Create a new VM to host the agent. Open **Hyper-V Manager**. In the **Actions** pane, select **New** and **Virtual Machine...** to launch the **New Virtual Machine Wizard**.
The agent is delivered with a default user account and password. Immediately aft
## Bandwidth throttling
-A general consideration when deploying new machines in a network is the amount of bandwidth they use. Any Azure Storage Mover agent uses all available network bandwidth on the local network (source share to agent) and the WAN link (agent to Azure Storage).
+Take time to consider the amount of bandwidth a new machine uses before you deploy it to your network. An Azure Storage Mover agent communicates with a source share using the local network, and the Azure Storage service on the wide area network (WAN) link. In both cases, the agent uses all available network bandwidth.
> [!IMPORTANT] > The current Azure Storage Mover agent does not support bandwidth throttling schedules.
-If bandwidth throttling is important to you, consider creating a local virtual network (VNet) with network quality of service (QoS) settings and an internet connection. Then expose the agent to the internet through this VNet. An unauthenticated network proxy server can also be configured locally on the agent.
+If bandwidth throttling is important to you, create a local virtual network (VNet) with network quality of service (QoS) settings and an internet connection. This approach allows you to expose the agent through the VNet, and to locally configure an unauthenticated network proxy server on the agent if needed.
## Decommissioning an agent
-When you no longer need a specific storage mover agent, you can decommission it.
-During public review, decommissioning is a two-step process:
+When you no longer need a specific storage mover agent, you can decommission it. Decommissioning is a two-step process:
-1. The agent needs to be unregistered from the storage mover resource.
-1. Stop and delete the agent VM on your virtualization host.
+1. Unregister the agent from the storage mover resource.
+1. Stop the agent VM on your virtualization host and then delete it.
Decommissioning an agent starts with unregistering the agent. There are three options to start the unregistration process:
You can unregister an agent using the administrative shell of the agent VM. The
2) Network configuration 3) Service and job status 4) Unregister
-5) Open restricted shell
-6) Collect support bundle
-7) Restart agent
+5) Collect support bundle
+6) Restart agent
+7) Disk Cleanup
8) Exit xdmsh> 4
storage-mover Agent Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-register.md
Previously updated : 03/14/2023 Last updated : 07/24/2023 <!--
REVIEW Engineering: not reviewed
EDIT PASS: COMPLETE Initial doc score: 86
-Current doc score: 100 (1654 words and 0 issues)
+Current doc score: 100 (1669 words and 0 issues)
!######################################################## --> # How to register an Azure Storage Mover agent
-The Azure Storage Mover service utilizes agents that carry out the migration jobs you configure in the service. The agent is a virtual machine / appliance that you run on a virtualization host, close to the source storage.
+The Azure Storage Mover service utilizes agents that carry out the migration jobs you configure in the service. The agent is a virtual machine-based appliance that you run on a virtualization host, close to the source storage.
You need to register an agent to create a trust relationship with your Storage Mover resource. This trust enables your agent to securely receive migration jobs and report progress. Agent registration can occur over either the public or private endpoint of your Storage Mover resource. A private endpoint, also known as the private link to a resource, can be deployed in an Azure virtual network (VNet).
-You can connect to an Azure VNET from other networks, like an on-premises corporate network. This type of connection is made through a VPN connection such as Azure Express Route. To learn more about this approach, refer to the [Azure ExpressRoute documentation](/azure/expressroute/) and [Azure Private Link](/azure/private-link) documentation.
+You can connect to an Azure VNET from other networks, such as an on-premises corporate network. This type of connection is made through a VPN connection such as Azure Express Route. To learn more about this approach, refer to the [Azure ExpressRoute](/azure/expressroute/) and [Azure Private Link](/azure/private-link) documentation.
-[!IMPORTANT] Currently, Storage Mover can be configured to route migration data from the agent to the destination storage account over Private Link. Hybrid Compute heartbeats and certificates can also be routed to a private Azure Arc service endpoint in your virtual network (VNet). Some Storage Mover traffic can't be routed through Private Link and is routed over the public endpoint of a storage mover resource. This data includes control messages, progress telemetry, and copy logs.
+> [!IMPORTANT]
+> Currently, Storage Mover can be configured to route migration data from the agent to the destination storage account over Private Link. Hybrid Compute heartbeats and certificates can also be routed to a private Azure Arc service endpoint in your virtual network (VNet). Some Storage Mover traffic can't be routed through Private Link and is routed over the public endpoint of a storage mover resource. This data includes control messages, progress telemetry, and copy logs.
In this article, you learn how to successfully register a previously deployed Storage Mover agent virtual machine (VM). ## Prerequisites
-There are two prerequisites before you can register an Azure Storage Mover agent:
+There are two prerequisites you need to complete before you can register an Azure Storage Mover agent:
-1. You need to have an Azure Storage Mover resource deployed. <br />Follow the steps in the *[Create a storage mover resource](storage-mover-create.md)* article to deploy this resource in an Azure subscription and region of your choice.
+1. **You need to have an Azure Storage Mover resource deployed.** <br />Follow the steps in the *[Create a storage mover resource](storage-mover-create.md)* article to deploy this resource in an Azure subscription and region of your choice.
-1. You need to deploy the Azure Storage Mover agent VM. <br /> Follow the steps in the [Azure Storage Mover agent VM deployment](agent-deploy.md) article to run the agent VM and to get it connected to the internet.
+1. **You need to deploy the Azure Storage Mover agent VM.** <br /> Follow the steps in the [Azure Storage Mover agent VM deployment](agent-deploy.md) article to create the agent VM and to connect it to the internet.
## Registration overview :::image type="content" source="media/agent-register/agent-registration-title.png" alt-text="Image showing three components. The storage mover agent, deployed on-premises and close to the source data to be migrated. The storage mover cloud resource, deployed in an Azure resource group. And finally, a line connecting the two." lightbox="media/agent-register/agent-registration-title-large.png":::
-Registration creates trust between the agent and the cloud resource. It allows you to remotely manage the agent and to give it migration jobs to execute.
+The agent registration process creates a trust between the agent and the Storage Mover cloud resource. The trust allows you to remotely manage the agent and to assign it migration jobs to execute.
Registration is always initiated from the agent. In the interest of security, only the agent can establish trust by reaching out to the Storage Mover service. The registration procedure utilizes your Azure credentials and permissions on the storage mover resource you've previously deployed. If you don't have a storage mover cloud resource or an agent VM deployed yet, refer to the [prerequisites section](#prerequisites). ## Step 1: Connect to the agent VM
-The agent VM is an appliance. It offers an administrative shell that limits which operations you can perform on this machine. When you connect to this VM, for instance directly from your Hyper-V host, you'd see that shell loaded and can interact with it directly.
+The agent VM is an appliance. It offers an administrative shell that limits the operations you can perform on this machine. When you connect to the agent, the shell loads and provides you with options that allow you to interact with it directly. However, the agent VM is a Linux-based appliance, and copy and paste functionality often doesn't work within the default Hyper-V window.
+
+Rather than use the Hyper-V window, use an SSH connection instead. This approach provides the following advantages:
-However, the agent VM is a Linux based appliance and copy/paste often doesn't work within the default Hyper-V window. Use an SSH connection instead. Advantages of an SSH connection are:
- You can connect to the agent VM's shell from any management machine and don't need to be logged into the Hyper-V host. - Copy / paste is fully supported.
However, the agent VM is a Linux based appliance and copy/paste often doesn't wo
## Step 2: Test network connectivity
-Your agent needs to be connected to the internet.
+Your agent needs to be connected to the internet.
When logged into the administrative shell, you can test the agent's connectivity state:
storage Data Lake Storage Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-best-practices.md
If you want to store your logs for both near real-time query and long term reten
If you want to access your logs through another query engine such as Splunk, you can configure your diagnostic settings to send logs to an event hub and ingest logs from the event hub to your chosen destination.
-Azure Storage logs in Azure Monitor can be enabled through the Azure portal, PowerShell, the Azure CLI, and Azure Resource Manager templates. For at-scale deployments, Azure Policy can be used with full support for remediation tasks. For more information, see [Azure/Community-Policy](https://github.com/Azure/Community-Policy/tree/master/Policies/Storage/deploy-storage-monitoring-log-analytics) and [ciphertxt/AzureStoragePolicy](https://github.com/ciphertxt/AzureStoragePolicy).
+Azure Storage logs in Azure Monitor can be enabled through the Azure portal, PowerShell, the Azure CLI, and Azure Resource Manager templates. For at-scale deployments, Azure Policy can be used with full support for remediation tasks. For more information, see [ciphertxt/AzureStoragePolicy](https://github.com/ciphertxt/AzureStoragePolicy).
## See also
storage Versioning Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-enable.md
For more information about deploying resources with templates in the Azure porta
+> [!IMPORTANT]
+> Currently, configuring retention creates a rule in the lifecycle management policy that deletes older versions based on the retention period you set. Once this rule exists, the setting is no longer visible in the data protection options. To change the retention period, delete the rule; the setting then becomes visible for editing again. If you already have another rule that deletes versions, this setting also doesn't appear.
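The generated rule is an ordinary lifecycle management rule that deletes previous blob versions after the retention period. A hedged sketch of what such a rule looks like (the rule name and 30-day period are illustrative, not the exact rule the portal creates):

```bash
# Hedged sketch: a lifecycle management rule of the kind the retention setting
# creates. Rule name and 30-day period are illustrative.
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "delete-old-versions",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "version": { "delete": { "daysAfterCreationGreaterThan": 30 } }
        },
        "filters": { "blobTypes": [ "blockBlob" ] }
      }
    }
  ]
}
EOF
# Apply with Azure CLI (requires credentials; placeholders must be filled in):
# az storage account management-policy create \
#   --account-name <account> --resource-group <rg> --policy @policy.json
```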
+ ## List blob versions To display a blob's versions, use the Azure portal, PowerShell, or Azure CLI. You can also list a blob's versions using one of the Blob Storage SDKs.
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
storage Files Data Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-data-protection-overview.md
description: Learn how to protect your data in Azure Files. Understand the conce
Previously updated : 06/19/2023 Last updated : 07/26/2023
Azure Files gives you many tools to protect your data, including soft delete, share snapshots, Azure Backup, and Azure File Sync. This article describes how to protect your data in Azure Files, and the concepts and processes involved with backup and recovery of Azure file shares.
+ :::column:::
+ <iframe width="560" height="315" src="https://www.youtube.com/embed/TOHaNJpAOfc" title="How Azure Files can help protect against ransomware and accidental data loss" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
+ :::column-end:::
+ :::column:::
+ Watch this video to learn how Azure Files advanced data protection helps enterprises stay protected against ransomware and accidental data loss while delivering greater business continuity.
+ :::column-end:::
+ ## Why you should protect your data For Azure Files, data protection refers to protecting the storage account, file shares, and data within them from being deleted or modified, and for restoring data after it's been deleted or modified.
storage Migrate Files Between Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/migrate-files-between-shares.md
Title: Migrate files between SMB Azure file shares
description: Learn how to migrate files from one SMB Azure file share to another using common migration tools. Previously updated : 07/11/2023 Last updated : 07/26/2023
Follow these steps to migrate using Robocopy, a command-line file copy utility t
1. Deploy a Windows virtual machine (VM) in Azure in the same region as your source file share. Keeping the data and networking in Azure will be fast and avoid outbound data transfer charges. For optimal performance, we recommend a multi-core VM type with at least 56 GiB of memory, for example **Standard_DS5_v2**.
-1. Mount both the source and target file shares to the VM. Mount them using the storage account key to make sure the VM has access to all the files.
+1. Mount both the source and target file shares to the VM. Be sure to mount them using the storage account key to make sure the VM has access to all the files. Don't use a domain identity.
-1. Run this command at the Windows command prompt:
+1. Run this command at the Windows command prompt. Optionally, you can include flags for logging features as a best practice (/NP, /NFL, /NDL, /UNILOG).
```console
- robocopy <source> <target> /mir /copyall /mt:16 /DCOPY:DAT
+ robocopy <source> <target> /MIR /COPYALL /MT:16 /R:2 /W:1 /B /IT /DCOPY:DAT
``` If your source share was mounted as s:\ and target was t:\ the command looks like this: ```console
- robocopy s:\ t:\ /mir /copyall /mt:16 /DCOPY:DAT
+ robocopy s:\ t:\ /MIR /COPYALL /MT:16 /R:2 /W:1 /B /IT /DCOPY:DAT
``` You can run the command while your source is still online, but be aware that any I/O will work against the throttle limits on your existing share.
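Note that Robocopy doesn't follow the usual zero/nonzero exit convention: its exit code is a bit mask, where values below 8 mean the run succeeded (files may have been copied or extras detected) and 8 or higher means at least one copy failed. A hedged sketch of that check (shown in bash for brevity; the same comparison works in a batch file against `%ERRORLEVEL%`):

```bash
# Hedged sketch: Robocopy exit codes are bit flags; 0-7 indicate success,
# 8 and above indicate at least one failed copy or mismatch.
interpret_robocopy_exit() {
  if [ "$1" -lt 8 ]; then echo "success"; else echo "failure"; fi
}

interpret_robocopy_exit 1   # files were copied successfully
interpret_robocopy_exit 8   # some files or directories could not be copied
```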
Follow these steps to migrate using Robocopy, a command-line file copy utility t
1. After the command completes for the second time, you can redirect your application to the new share. - ## See also - [Migrate to Azure file shares using RoboCopy](storage-files-migration-robocopy.md)
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
As an alternative to CNAME, you can use DFS Namespaces with SMB Azure file shares. To learn more, see [How to use DFS Namespaces with Azure Files](files-manage-namespaces.md).
- As a workaround for mounting the file share, see the instructions in [Mount the file share from a non-domain-joined VM](storage-files-identity-ad-ds-mount-file-share.md#mount-the-file-share-from-a-non-domain-joined-vm).
+ As a workaround for mounting the file share, see the instructions in [Mount the file share from a non-domain-joined VM or a VM joined to a different AD domain](storage-files-identity-ad-ds-mount-file-share.md#mount-the-file-share-from-a-non-domain-joined-vm-or-a-vm-joined-to-a-different-ad-domain).
* <a id="ad-vm-subscription"></a> **Can I access Azure file shares with Azure AD credentials from a VM under a different subscription?**
storage Storage Files Identity Ad Ds Mount File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-mount-file-share.md
net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName>
If you run into issues, see [Unable to mount Azure file shares with AD credentials](/troubleshoot/azure/azure-storage/files-troubleshoot-smb-authentication?toc=/azure/storage/files/toc.json#unable-to-mount-azure-file-shares-with-ad-credentials).
-## Mount the file share from a non-domain-joined VM
+## Mount the file share from a non-domain-joined VM or a VM joined to a different AD domain
Non-domain-joined VMs or VMs that are joined to a different AD domain than the storage account can access Azure file shares if they have line-of-sight to the domain controllers and provide explicit credentials. The user accessing the file share must have an identity and credentials in the AD domain that the storage account is joined to.
net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /use
## Next steps
-If the identity you created in AD DS to represent the storage account is in a domain or OU that enforces password rotation, you might need to [update the password of your storage account identity in AD DS](storage-files-identity-ad-ds-update-password.md).
+If the identity you created in AD DS to represent the storage account is in a domain or OU that enforces password rotation, you might need to [update the password of your storage account identity in AD DS](storage-files-identity-ad-ds-update-password.md).
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
Before you enable AD DS authentication for Azure file shares, make sure you've c
- Domain-join an on-premises machine or an Azure VM to on-premises AD DS. For information about how to domain-join, refer to [Join a Computer to a Domain](/windows-server/identity/ad-fs/deployment/join-a-computer-to-a-domain).
- If a machine isn't domain joined, you can still use AD DS for authentication if the machine has line of sight to the on-premises AD domain controller and the user provides explicit credentials. For more information, see [Mount the file share from a non-domain-joined VM](storage-files-identity-ad-ds-mount-file-share.md#mount-the-file-share-from-a-non-domain-joined-vm).
+ If a machine isn't domain joined, you can still use AD DS for authentication if the machine has line of sight to the on-premises AD domain controller and the user provides explicit credentials. For more information, see [Mount the file share from a non-domain-joined VM or a VM joined to a different AD domain](storage-files-identity-ad-ds-mount-file-share.md#mount-the-file-share-from-a-non-domain-joined-vm-or-a-vm-joined-to-a-different-ad-domain).
- Select or create an Azure storage account. For optimal performance, we recommend that you deploy the storage account in the same region as the client from which you plan to access the share. Then, [mount the Azure file share](storage-how-to-use-files-windows.md) with your storage account key. Mounting with the storage account key verifies connectivity.
storage Storage Queues Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-queues-introduction.md
Queue Storage contains the following components:
## Next steps - [Create a storage account](../common/storage-account-create.md?toc=/azure/storage/queues/toc.json)-- [Get started with Queue Storage using .NET](storage-dotnet-how-to-use-queues.md)-- [Get started with Queue Storage using Java](storage-java-how-to-use-queue-storage.md)-- [Get started with Queue Storage using Python](storage-python-how-to-use-queue-storage.md)-- [Get started with Queue Storage using Node.js](storage-nodejs-how-to-use-queues.md)
+- [Get started with Queue Storage using .NET](/azure/storage/queues/storage-quickstart-queues-dotnet?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli)
+- [Get started with Queue Storage using Java](/azure/storage/queues/storage-quickstart-queues-java?tabs=powershell%2Cpasswordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli)
+- [Get started with Queue Storage using Python](/azure/storage/queues/storage-quickstart-queues-python?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli)
+- [Get started with Queue Storage using Node.js](/azure/storage/queues/storage-quickstart-queues-nodejs?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli)
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/analytics/partner-overview.md
This article highlights Microsoft partner companies that are integrated with Azu
![Informatica company logo](./media/informatica-logo.png) |**Informatica**<br>Informatica's enterprise-scale, cloud-native data management platform automates and accelerates the discovery, delivery, quality, and governance of enterprise data on Azure. AI-powered, metadata-driven data integration, and data quality and governance capabilities enable you to modernize analytics and accelerate your move to a data warehouse or to a data lake on Azure.|[Partner page](https://www.informatica.com/azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.annualiics?tab=Overview)|
![Qlik company logo](./media/qlik-logo.png) |**Qlik**<br>Qlik helps accelerate BI and ML initiatives with a scalable data integration and automation solution. Qlik also goes beyond migration tools to help drive agility throughout the data and analytics process with automated data pipelines and a governed, self-service catalog.|[Partner page](https://www.qlik.com/us/products/technology/qlik-microsoft-azure-migration)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik_data_integration_platform)|
![Starburst logo](./media/starburst-logo.jpg) |**Starburst**<br>Starburst unlocks the value of data by making it fast and easy to access anywhere. Starburst queries data across any database, making it actionable for data-driven organizations. With Starburst, teams can prevent vendor lock-in, and use the existing tools that work for their business.|[Partner page](https://www.starburst.io/platform/deployment-options/starburst-on-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/starburstdatainc1582306810515.starburst-enterprise)|
-![Striim company logo](./media/striim-logo.png) |**Striim**<br>Striim enables continuous data movement and in-stream transformations from a wide variety of sources into multiple Azure solutions including Azure Synapse Analytics, Azure Cosmos DB, and Azure cloud databases. The Striim solution enables Azure Data Lake Storage customers to quickly build streaming data pipelines. Customers can choose their desired data latency (real-time, micro-batch, or batch) and enrich the data with more context. These pipelines can then support any application or big data analytics solution, including Azure SQL Data Warehouse and Azure Databricks. |[Partner page](https://www.striim.com/partners/striim-for-microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/striim.azurestorageintegration?tab=overview)|
+![Striim company logo](./media/striim-logo.png) |**Striim**<br>Striim enables continuous data movement and in-stream transformations from a wide variety of sources into multiple Azure solutions including Azure Synapse Analytics, Azure Cosmos DB, and Azure cloud databases. The Striim solution enables Azure Data Lake Storage customers to quickly build streaming data pipelines. Customers can choose their desired data latency (real-time, micro-batch, or batch) and enrich the data with more context. These pipelines can then support any application or big data analytics solution, including Azure SQL Data Warehouse and Azure Databricks. |[Partner page](https://www.striim.com/partners/striim-and-microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/striim.azurestorageintegration?tab=overview)|
![Talend company logo](./media/talend-logo.png) |**Talend**<br>Talend Data Fabric is a platform that brings together multiple integration and governance capabilities. Using a single unified platform, Talend delivers complete, clean, and uncompromised data in real time. The Talend Trust Score helps assess the reliability of any data set. |[Partner page](https://www.talend.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/talend.talendclouddi)| ![Unravel](./media/unravel-logo.png) |**Unravel Data**<br>Unravel Data provides observability and automatic management through a single pane of glass. AI-powered recommendations proactively improve reliability, speed, and resource allocations of your data pipelines and jobs. Unravel connects easily with Azure Databricks, HDInsight, Azure Data Lake Storage, and more through the Azure Marketplace or Unravel SaaS service. Unravel Data also helps migrate to Azure by providing an assessment of your environment. This assessment uncovers usage details, dependency maps, cost, and effort needed for a fast move with less risk.|[Partner page](https://www.unraveldata.com/azure-databricks/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/unravel-data.unravel4databrickssubscriptionasaservice?tab=Overview) |![Wandisco company logo](./medi) is tightly integrated with Azure. Besides having an Azure portal deployment experience, it also uses role-based access control, Azure Active Directory, Azure Policy enforcement, and Activity log integration. With Azure Billing integration, you don't need to add a vendor contract or get more vendor approvals.<br><br>Accelerate the replication of Hadoop data between multiple sources and targets for any data architecture. With LiveData Cloud Services, your data will be available for Azure Databricks, Synapse Analytics, and HDInsight as soon as it lands, with guaranteed 100% data consistency. 
|[Partner page](https://www.wandisco.com/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/wandisco.ldma?tab=Overview)|
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/container-solutions/partner-overview.md
This article highlights Microsoft partner solutions that enable automation, data
| ![Robin.io company logo](./media/robin-logo.png) |**Robin.io**<br>Robin.io provides an application and data management platform that enables enterprises and 5G service providers to deliver complex application pipelines as a service.<br><br>Robin Cloud Native Storage (CNS) brings advanced data management capabilities to Azure Kubernetes Service. Robin CNS seamlessly integrates with Azure Disk Storage to simplify management of stateful applications. Developers and DevOps teams can deploy Robin CNS as a standard Kubernetes operator on AKS. Robin Cloud Native Storage helps simplify data management operations such as BCDR and cloning of entire applications. |[Partner page](https://robin.io/robin-cloud-native-storage-for-microsoft-aks/)| | ![NetApp company logo](./media/astra-logo.jpg) |**NetApp**<br>NetApp is a global cloud-led, data-centric software company that empowers organizations to lead with data in the age of accelerated digital transformation.<br><br>NetApp Astra Control Service is a fully managed service that makes it easier for customers to manage, protect, and move their data-rich containerized workloads running on Kubernetes within and across public clouds and on-premises. Astra Control provides persistent container storage with Azure NetApp Files offering advanced application-aware data management functionality (like snapshot-revert, backup-restore, activity log, and active cloning) for data protection, disaster recovery, data audit, and migration use-cases for your modern apps. |[Partner page](https://cloud.netapp.com/astra)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/netapp.netapp-astra-acs)| | ![Rackware company logo](./media/rackware-logo.png) |**Rackware**<br>RackWare provides an intelligent highly automated Hybrid Cloud Management Platform that extends across physical and virtual environments.<br><br>RackWare SWIFT is a converged disaster recovery, backup and migration solution for Kubernetes and OpenShift. 
It is a cross-platform, cross-cloud and cross-version solution that enables you to move and protect your stateful Kubernetes applications from any on-premises or cloud environment to Azure Kubernetes Service (AKS) and Azure Storage.|[Partner page](https://www.rackwareinc.com/rackware-swift-microsoft-azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=rackware%20swift&page=1&filters=virtual-machine-images)|
-| ![Ondat company logo](./media/ondat-logo.png) |**Ondat**<br>Ondat, formerly StorageOS, provides an agnostic platform to run any data service anywhere, while ensuring industry-leading levels of application performance, availability and security.<br><br>Ondat cloud native storage solution delivers persistent container storage for your stateful applications in production. Fast, scalable, software-based block storage, Ondat delivers high availability, rapid application failover, replication, encryption of data in-transit & at-rest, data reduction with access controls and native Kubernetes integration.|[Partner page](https://www.ondat.io/datasheets/ondat-aks) |
Are you a storage partner but your solution is not listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu).

## Next steps
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
virtual-desktop Customize Rdp Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-rdp-properties.md
Title: Customize RDP properties with PowerShell - Azure
-description: How to customize RDP Properties for Azure Virtual Desktop with PowerShell cmdlets.
+ Title: Customize RDP properties - Azure
+description: How to customize RDP Properties for Azure Virtual Desktop.
Previously updated : 08/24/2022 Last updated : 07/26/2023
RDP files have the following properties by default:
|VideoPlayback|Enabled| |EnableCredssp|Enabled|
->[!NOTE]
+>[!IMPORTANT]
>- Multi-monitor mode is only enabled for Desktop application groups and will be ignored for RemoteApp application groups.
+>
+>- All default RDP file properties are exposed in the Azure portal.
+>
>- A null CustomRdpProperty field will apply all default RDP properties to your host pool. An empty CustomRdpProperty field won't apply any default RDP properties to your host pool.
+>
+>- If you also configure device redirection settings using Group Policy objects (GPOs), the settings in the GPOs will override the RDP properties you specify on the host pool.
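For reference, a custom RDP property string is a semicolon-separated list of `name:type:value` entries, where the type is `i` for integer values and `s` for string values. The specific properties and values below are purely illustrative, not a recommended configuration:

```
audiocapturemode:i:1;audiomode:i:0;use multimon:i:1
```

Setting the field to this string would enable audio capture redirection, play audio on the local computer, and allow multiple displays, while leaving all other properties at their defaults.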
## Prerequisites
virtual-desktop Client Features Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-web.md
To transfer files between your local device and your remote session:
1. For the prompt **Access local resources**, check the box for **File transfer**, then select **Allow**.
-1. Once you're remote session has started, an extra icon will appear in the Remote Desktop Web client taskbar for **Upload new file** (the upwards arrow icon). Selecting this will open a file explorer window on your local device.
+1. Once your remote session has started, open **File Explorer**, then select **This PC**.
-1. Browse to and select files you want to upload to the remote session. You can select multiple files by holding down the <kbd>CTRL</kbd> key on your keyboard for Windows, or the <kbd>Command</kbd> key for macOS, then select **Open**. There is a file size limit of 255MB.
+1. You'll see a redirected drive called **Remote Desktop Virtual Drive on RDWebClient**. Inside this drive are two folders: **Uploads** and **Downloads**.
-1. In your remote session, open **File Explorer**, then select **This PC**.
+ - **Downloads** prompts your local browser to download any files you copy to this folder.
+ - **Uploads** contains the files you uploaded through the Remote Desktop Web client.
-1. You'll see a redirected drive called **Remote Desktop Virtual Drive on RDWebClient**. Inside this drive are two folders: **Uploads** and **Downloads**. **Uploads** contains the files you uploaded through the Remote Desktop Web client.
-
-1. To transfer files from your remote session to your local device, copy and paste files to the **Downloads** folder. Before the paste can complete, the Remote Desktop Web client will prompt you **Are you sure you want to download *N* file(s)?**. Select **Confirm**. Your browser will download the files in its normal way.
+1. To download from your remote session to your local device, copy and paste files to the **Downloads** folder. Before the paste can complete, the Remote Desktop Web client will prompt you **Are you sure you want to download *N* file(s)?**. Select **Confirm**. Your browser will download the files in its normal way.
If you don't want to see this prompt every time you download files from the current browser, check the box for **Don't ask me again on this browser** before confirming.
+1. To upload files from your local device to your remote session, use the button in the Remote Desktop Web client taskbar for **Upload new file** (the upwards arrow icon). Selecting this will open a file explorer window on your local device.
+
+   Browse to and select files you want to upload to the remote session. You can select multiple files by holding down the <kbd>CTRL</kbd> key on your keyboard for Windows, or the <kbd>Command</kbd> key for macOS, then select **Open**. There is a file size limit of 255 MB.
+
> [!IMPORTANT]
> - We recommend using *Copy* rather than *Cut* when transferring files from your remote session to your local device as an issue with the network connection can cause the files to be lost.
>
> - Uploaded files are available in a remote session until you sign out of the Remote Desktop Web client.
+>
+> - Don't download files directly from your browser in a remote session to the **Remote Desktop Virtual Drive on RDWebClient\Downloads** folder as it triggers your local browser to download the file before it is ready. Download files in a remote session to a different folder, then copy and paste them to the **Remote Desktop Virtual Drive on RDWebClient\Downloads** folder.
### Clipboard
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Wi
In this release, we've made the following changes: -- Fixed an issue where the client doesn't auto-reconnect when the Gateway WebSocket connection shuts down normally.
+- Fixed an issue where the client doesn't auto-reconnect when the gateway WebSocket connection shuts down normally.
## Updates for version 1.2.4485
Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Wi
In this release, we've made the following changes: -- Added a new RDP file property called "allowed security protocols." This property restricts the list of security protocols the client can negotiate.
+- Added a new RDP file property called *allowed security protocols*. This property restricts the list of security protocols the client can negotiate.
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. -- Accessibility improvements -
+- Accessibility improvements:
- Narrator now describes the toggle button in the display settings side panel as *toggle button* instead of *button*. - Control types for text now correctly say that they're *text* and not *custom*. - Fixed an issue where Narrator didn't read the error message that appears after the user selects **Delete**.
- - Added heading-level description to subscribe with URL.
--- Dialog improvements -
- - Updated File and URI Launch Dialog Error Handling to be more specific and user-friendly.
+ - Added heading-level description to **Subscribe with URL**.
+- Dialog improvements:
+ - Updated **file** and **URI launch** dialog error handling messages to be more specific and user-friendly.
- The client now displays an error message after unsuccessfully checking for updates instead of incorrectly notifying the user that the client is up to date.
- - Fixed an issue where, after having been automatically reconnected to the remote session, the Connection Information dialog gave inconsistent information about identity verification.
+ - Fixed an issue where, after having been automatically reconnected to the remote session, the **connection information** dialog gave inconsistent information about identity verification.
## Updates for version 1.2.4419
In this release, we've made the following change:
In this release, we've made the following changes: -- Fixed an issue where the narrator was announcing the Tenant Expander button as "on" or "off" instead of "expanded" or ΓÇ£collapsed."
+- Fixed an issue where the narrator was announcing the **tenant expander** button as **on** or **off** instead of **expanded** or **collapsed**.
- Fixed an issue where the text size didn't change when the user adjusted the text size system setting. - Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
In this release, we've made the following changes:
- The client now shows an error message when the user tries to open a connection from the UI, but the connection doesn't launch. - Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. - Updates to Teams for Azure Virtual Desktop, including the following:
- - The new hardware encoding feature increases the video quality (resolution and framerate) of the outgoing camera during Teams calls. Because this feature uses the underlying hardware on the PC and not just software, we're being extra careful to ensure broad compatibility before turning the feature on by default for all users. Therefore, this feature is currently off by default. To get an early preview of the feature, you can enable it on your local machine by creating a registry key at **Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\Terminal Server Client\Default\AddIns\WebRTC Redirector\\(DWORD)UseHardwareEncoding** and setting it to **1**. To disable the feature, set the key to **0**.
+ - The new hardware encoding feature increases the video quality (resolution and framerate) of the outgoing camera during Teams calls. Because this feature uses the underlying hardware on the PC and not just software, we're being extra careful to ensure broad compatibility before turning the feature on by default for all users. Therefore, this feature is currently off by default. To get an early preview of the feature, you can enable it on your local machine by creating a registry key at **Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\Terminal Server Client\Default\AddIns\WebRTC Redirector\UseHardwareEncoding** as a **DWORD** value and setting it to **1**. To disable the feature, set the key to **0**.
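If you prefer not to edit the registry by hand, the same setting can be captured in a .reg file and imported. The key path and value name come from the note above; the file itself is a sketch, not an official artifact:

```
Windows Registry Editor Version 5.00

; Enables the hardware encoding preview for Teams camera redirection.
; Set the dword to 00000000 to disable the feature again.
[HKEY_CURRENT_USER\SOFTWARE\Microsoft\Terminal Server Client\Default\AddIns\WebRTC Redirector]
"UseHardwareEncoding"=dword:00000001
```

Double-clicking the file (or running `reg import`) merges the value into the current user's hive.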
## Updates for version 1.2.3130
In this release, we've made the following changes:
- Improved Narrator application experience. - Accessibility improvements.-- Fixed a regression that prevented subsequent connections after reconnecting to an existing session with the group policy object (GPO) "User Configuration\Administrative Templates\System\Ctrl+Alt+Del Options\Remove Lock Computer" enabled.
+- Fixed a regression that prevented subsequent connections after reconnecting to an existing session with the group policy object (GPO) **User Configuration\Administrative Templates\System\Ctrl+Alt+Del Options\Remove Lock Computer** enabled.
- Added an error message for when a user selects a credential type for smart card or Windows Hello for Business but the required smart card redirection is disabled in the RDP file. - Improved diagnostic for User Data Protocol (UDP)-based Remote Desktop Protocol (RDP) transport protocols. - Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
In this release, we've made the following changes:
In this release, we've made the following changes: - Fixed an issue where Narrator didn't announce grid or list views correctly.-- Fixed an issue where the msrdc.exe process might take a long time to exit after closing the last Azure Virtual Desktop connection if customers have set a very short token expiration policy.
+- Fixed an issue where the `msrdc.exe` process might take a long time to exit after closing the last Azure Virtual Desktop connection if customers have set a very short token expiration policy.
- Updated the error message that appears when users are unable to subscribe to their feed. - Updated the disconnect dialog boxes that appear when the user locks their remote session or puts their local computer in sleep mode to be only informational. - Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
In this release, we've made the following change:
In this release, we've made the following changes: -- The client also updates in the background when the auto-update feature is enabled, no remote connection is active, and MSRDCW.exe isn't running.
+- The client also updates in the background when the auto-update feature is enabled, no remote connection is active, and `msrdcw.exe` isn't running.
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. - Fixed an ICE inversion parameter issue that prevented some Teams calls from connecting.
In this release, we've made the following change:
In this release, we've made the following changes: -- Fixed an issue that caused the client to crash when users selected "Disconnect all sessions" in the system tray.
+- Fixed an issue that caused the client to crash when users selected **Disconnect all sessions** in the system tray.
- Fixed an issue where the client wouldn't switch to full screen on a single monitor with a docking station. - Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. - Updates to Teams on Azure Virtual Desktop, including the following:
In this release, we've made the following changes:
In this release, we've made the following changes: - Added the Experience Monitor access point to the system tray icon.-- Fixed an issue where entering an email address into the "Subscribe to a Workplace" tab caused the application to stop responding.
+- Fixed an issue where entering an email address into the **Subscribe to a Workplace** tab caused the application to stop responding.
- Fixed an issue where the client sometimes didn't send Event Hubs and Diagnostics events. - Updates to Teams on Azure Virtual Desktop, including: - Improved audio and video sync performance and added hardware accelerated decode that decreases CPU utilization on the client.
In this release, we've made the following changes:
- Fixed an issue where single sign-on (SSO) didn't work on Windows 7. - Fixed the connection failure that happened when calling or joining a Teams call while another app has an audio stream opened in exclusive mode and when media optimization for Teams is enabled. - Fixed a failure to enumerate audio or video devices in Teams when media optimization for Teams is enabled.-- Added a "Need help with settings?" link to the desktop settings page.-- Fixed an issue with the "Subscribe" button that happened when using high-contrast dark themes.
+- Added a **Need help with settings?** link to the desktop settings page.
+- Fixed an issue with the **Subscribe** button that happened when using high-contrast dark themes.
## Updates for version 1.2.1275
In this release, we've made the following changes:
In this release, we've made the following changes: -- Renamed the "Update" action for Workspaces to "Refresh" for consistency with other Remote Desktop clients.
+- Renamed the **Update** action for Workspaces to **Refresh** for consistency with other Remote Desktop clients.
- You can now refresh a Workspace directly from its context menu. - Manually refreshing a Workspace now ensures all local content is updated. - You can now reset the client's user data from the About page without needing to uninstall the app.-- You can also reset the client's user data using msrdcw.exe /reset with an optional /f parameter to skip the prompt.
+- You can also reset the client's user data using `msrdcw.exe /reset` with an optional `/f` parameter to skip the prompt.
- We now automatically look for a client update when navigating to the About page. - Updated the color of the buttons for consistency.
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Previously updated : 07/18/2023 Last updated : 07/25/2023 # Azure Policy built-in definitions for Azure Virtual Machine Scale Sets
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
Previously updated : 11/22/2022 Last updated : 07/25/2023 # Azure Virtual Machine Scale Set automatic OS image upgrades
The following example describes how to set automatic OS upgrades on a scale set
"disableAutomaticRollback": false } },
+ },
"imagePublisher": { "type": "string", "defaultValue": "MicrosoftWindowsServer"
The following example describes how to set automatic OS upgrades on a scale set
"imageOSVersion": { "type": "string", "defaultValue": "latest"
- }
-}
+ }
```
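Pieced together, the upgrade-policy portion of such a template follows this shape. This is a sketch of the relevant fragment only, assuming it sits inside the scale set's `properties`; the property names match the `Microsoft.Compute/virtualMachineScaleSets` resource schema:

```json
"upgradePolicy": {
  "mode": "Automatic",
  "automaticOSUpgradePolicy": {
    "enableAutomaticOSUpgrade": true,
    "disableAutomaticRollback": false
  }
}
```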
properties: {
        enableAutomaticOSUpgrade: true       }     }
+}
```

## Using Application Health Probes
virtual-machines Oms Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-linux.md
The following table provides a mapping of the version of the Log Analytics VM ex
| Log Analytics Linux VM extension version | Log Analytics Agent bundle version | |--|--|
+| 1.16.0 | [1.16.0](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.16.0-0) |
| 1.14.23 | [1.14.23](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.23-0) | | 1.14.20 | [1.14.20](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.20-0) | | 1.14.19 | [1.14.19](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.19-0) |
The following JSON shows the schema for the Log Analytics agent extension. The e
"properties": { "publisher": "Microsoft.EnterpriseCloud.Monitoring", "type": "OmsAgentForLinux",
- "typeHandlerVersion": "1.13",
+ "typeHandlerVersion": "1.16",
"autoUpgradeMinorVersion": true, "settings": { "workspaceId": "myWorkspaceId",
The following JSON shows the schema for the Log Analytics agent extension. The e
| apiVersion | 2018-06-01 | | publisher | Microsoft.EnterpriseCloud.Monitoring | | type | OmsAgentForLinux |
-| typeHandlerVersion | 1.13 |
+| typeHandlerVersion | 1.16 |
| workspaceId (e.g) | 6f680a37-00c6-41c7-a93f-1437e3462574 | | workspaceKey (e.g) | z4bU3p1/GrnWpQkky4gdabWXAhbWSTz70hm4m2Xt92XI+rSRgE8qVvRhsGo9TXffbrTahyrwv35W0pOqQAU7uQ== |
The following example assumes the VM extension is nested inside the virtual mach
"properties": { "publisher": "Microsoft.EnterpriseCloud.Monitoring", "type": "OmsAgentForLinux",
- "typeHandlerVersion": "1.13",
+ "typeHandlerVersion": "1.16",
"settings": { "workspaceId": "myWorkspaceId", "skipDockerProviderInstall": true
When placing the extension JSON at the root of the template, the resource name i
"properties": { "publisher": "Microsoft.EnterpriseCloud.Monitoring", "type": "OmsAgentForLinux",
- "typeHandlerVersion": "1.13",
+ "typeHandlerVersion": "1.16",
"settings": { "workspaceId": "myWorkspaceId", "skipDockerProviderInstall": true
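The template snippets above pin `typeHandlerVersion` 1.16. As a sketch of the same install without a template, the extension can also be deployed with the Azure CLI; the resource group, VM name, and workspace values below are placeholders:

```azurecli
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --name OmsAgentForLinux \
  --version 1.16 \
  --settings '{"workspaceId": "myWorkspaceId", "skipDockerProviderInstall": true}' \
  --protected-settings '{"workspaceKey": "myWorkspaceKey"}'
```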
virtual-machines Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/storage-performance.md
Lasv3 and Lsv2-series VMs use AMD EPYC&trade; server processors based on the Zen
* Avoid mixing NVMe admin commands (for example, NVMe SMART info query) with NVMe I/O commands during active workloads. Lsv3, Lasv3, and Lsv2 NVMe devices are backed by Hyper-V NVMe Direct technology, which switches into "slow mode" whenever any NVMe admin commands are pending. Lsv3, Lasv3, and Lsv2 users might see a dramatic drop in NVMe I/O performance if that happens.
* Lsv2 users shouldn't rely on device NUMA information (all 0) reported from within the VM for data drives to decide the NUMA affinity for their apps. For better performance, spread workloads across CPUs if possible.
* The maximum supported queue depth per I/O queue pair for Lsv3, Lasv3, and Lsv2 VM NVMe devices is 1024. Limit (synthetic) benchmarking workloads to queue depth 1024 or lower to avoid triggering queue-full conditions, which can reduce performance.
-* The best performance is obtained when I/O is done directly to each of the raw NVMe devices with no partitioning, no file systems, no RAID config, etc. Before starting a testing session, ensure the configuration is in a known fresh/clean state by running `blkdiscard` on each of the NVMe devices.
+* The best performance is obtained when I/O is done directly to each of the raw NVMe devices with no partitioning, no file systems, no RAID config, etc. Before starting a testing session, ensure the configuration is in a known fresh/clean state by running `blkdiscard` on each of the NVMe devices. To obtain the most consistent performance during benchmarking, it's recommended to precondition the NVMe devices before testing by issuing random writes to all of the devices' LBAs twice as defined in the SNIA Solid State Storage Enterprise Performance Test Specification.
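As a minimal sketch, the cleanup step of a benchmarking session might look like the following. The device names are placeholders (confirm yours with `lsblk`), and the loop prints the commands rather than running them until you remove `echo`:

```shell
# Placeholder device list -- verify with lsblk before use.
devices="/dev/nvme0n1 /dev/nvme1n1"

# Dry run: prints each command. Remove 'echo' (and run as root) to execute.
for dev in $devices; do
  echo blkdiscard "$dev"
done

# Per the SNIA spec, follow the discard with two full passes of random
# writes to all LBAs (for example, fio --rw=randwrite --loops=2) before measuring.
```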
## Utilizing local NVMe storage
-Local storage on the 1.92 TB NVMe disk on all Lsv3, Lasv3, and Lsv2 VMs is ephemeral. During a successful standard reboot of the VM, the data on the local NVMe disk persists. The data doesn't persist on the NVMe if the VM is redeployed, de-allocated, or deleted. Data doesn't persist if another issue causes the VM, or the hardware it's running on, to become unhealthy. When scenario happens, any data on the old host is securely erased.
+Local storage on the 1.92 TB NVMe disk on all Lsv3, Lasv3, and Lsv2 VMs is ephemeral. During a successful standard reboot of the VM, the data on the local NVMe disk persists. The data doesn't persist on the NVMe if the VM is redeployed, deallocated, or deleted. Data doesn't persist if another issue causes the VM, or the hardware it's running on, to become unhealthy. When this scenario happens, any data on the old host is securely erased.
There are also cases when the VM needs to be moved to a different host machine, for example, during a planned maintenance operation. Planned maintenance operations and some hardware failures can be anticipated with [Scheduled Events](scheduled-events.md). Use Scheduled Events to stay updated on any predicted maintenance and recovery operations.
Scenarios that maintain data on local NVMe disks include:
- The VM is running and healthy.
- The VM is rebooted in place (by you or Azure).
-- The VM is paused (stopped without de-allocation).
+- The VM is paused (stopped without deallocation).
- Most of the planned maintenance servicing operations.

Scenarios that securely erase data to protect the customer include:

-- The VM is redeployed, stopped (de-allocated), or deleted (by you).
+- The VM is redeployed, stopped (deallocated), or deleted (by you).
- The VM becomes unhealthy and has to be service healed to another node due to a hardware issue.
- A few of the planned maintenance servicing operations that require the VM to be reallocated to another host for servicing.
Much like any other VM, use the [Portal](quick-create-portal.md), [Azure CLI](qu
### Does a single NVMe disk failure cause all VMs on the host to fail?
-If a disk failure is detected on the hardware node, the hardware is in a failed state. When this problem occurs, all VMs on the node are automatically de-allocated and moved to a healthy node. For Lsv3, Lasv3, and Lsv2-series VMs, this problem means that the customer's data on the failing node is also securely erased. The customer needs to recreate the data on the new node.
+If a disk failure is detected on the hardware node, the hardware is in a failed state. When this problem occurs, all VMs on the node are automatically deallocated and moved to a healthy node. For Lsv3, Lasv3, and Lsv2-series VMs, this problem means that the customer's data on the failing node is also securely erased. The customer needs to recreate the data on the new node.
### Do I need to change the blk_mq settings?
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/18/2023 Last updated : 07/25/2023
virtual-wan Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md
You can also find the latest Azure Virtual WAN updates and subscribe to the RSS
|Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs|[Fortinet NGFW](https://www.fortinet.com/products/next-generation-firewall)|General Availability of [Fortinet NGFW](https://aka.ms/fortinetngfwdocumentation) and [Fortinet SD-WAN/NGFW dual-role](https://aka.ms/fortinetdualroledocumentation) NVAs.|May 2023|Same limitations as routing intent. Doesn't support internet inbound scenario.|
|Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs|[Check Point CloudGuard Network Security for Azure Virtual WAN](https://www.checkpoint.com/cloudguard/microsoft-azure-security/wan/)|General Availability of [Check Point CloudGuard Network Security NVA deployable from Azure Marketplace](https://sc1.checkpoint.com/documents/IaaS/WebAdminGuides/EN/CP_CloudGuard_Network_for_Azure_vWAN_AdminGuide/Content/Topics-Azure-vWAN/Introduction.htm) within the Virtual WAN hub in all Azure regions.|May 2023|Same limitations as routing intent. Doesn't support internet inbound scenario.|
|Feature|Software-as-a-service|Palo Alto Networks Cloud NGFW|Public preview of [Palo Alto Networks Cloud NGFW](https://aka.ms/pancloudngfwdocs), the first software-as-a-service security offering deployable within the Virtual WAN hub.|May 2023|Palo Alto Networks Cloud NGFW is only deployable in newly created Virtual WAN hubs in some Azure regions. See [Limitations of Palo Alto Networks Cloud NGFW](how-to-palo-alto-cloud-ngfw.md) for a full list of limitations.|
-| Feature| Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs| [Fortinet SD-WAN](https://docs.fortinet.com/document/fortigate-public-cloud/7.2.2/azure-vwan-sd-wan-deployment-guide/12818/deployment-overview)| General availability of Fortinet SD-WAN solution in Virtual WAN. Next-Generation Firewall use cases in preview.| October 2022| SD-WAN solution generally available. Next Generation Firewall use cases in preview.|
|Feature |Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs| [Versa SD-WAN](about-nva-hub.md#partners)|Preview of Versa SD-WAN.|November 2021| |
|Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs|[Cisco Viptela, Barracuda and VMware (Velocloud) SD-WAN](about-nva-hub.md#partners) |General Availability of SD-WAN solutions in Virtual WAN.|June/July 2021| |
vpn-gateway Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant.md
The steps in this article require an Azure AD tenant. If you don't have an Azure
* User account

The global administrator account will be used to grant consent to the Azure VPN app registration. The user account can be used to test OpenVPN authentication.
-1. Assign one of the accounts the **Global administrator** role. For steps, see [Assign administrator and non-administrator roles to users with Azure Active Directory](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+1. Assign one of the accounts the **Global administrator** role. For steps, see [Assign administrator and non-administrator roles to users with Azure Active Directory](/azure/active-directory-b2c/tenant-management-read-tenant-name).
## Authorize the Azure VPN application
web-application-firewall Waf Front Door Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-best-practices.md
Title: Best practices for Web Application Firewall on Azure Front Door
-description: In this article, you learn about the best practices for using the web application firewall with Azure Front Door.
+ Title: Best practices for Azure Web Application Firewall in Azure Front Door
+description: In this article, you learn about the best practices for using Azure Web Application Firewall in Azure Front Door.
-# Best practices for Web Application Firewall (WAF) on Azure Front Door
+# Best practices for Azure Web Application Firewall in Azure Front Door
-This article summarizes best practices for using the web application firewall (WAF) on Azure Front Door.
+This article summarizes best practices for using Azure Web Application Firewall in Azure Front Door.
## General best practices
+This section discusses general best practices.
+ ### Enable the WAF
-For internet-facing applications, we recommend you enable a web application firewall (WAF) and configure it to use managed rules. When you use a WAF and Microsoft-managed rules, your application is protected from a range of attacks.
+For internet-facing applications, we recommend that you enable a web application firewall (WAF) and configure it to use managed rules. When you use a WAF and Microsoft-managed rules, your application is protected from a range of attacks.
### Tune your WAF The rules in your WAF should be tuned for your workload. If you don't tune your WAF, it might accidentally block requests that should be allowed. Tuning might involve creating [rule exclusions](waf-front-door-exclusion.md) to reduce false positive detections.
-While you tune your WAF, consider using [detection mode](waf-front-door-policy-settings.md#waf-mode), which logs requests and the actions the WAF would normally take, but doesn't actually block any traffic.
+While you tune your WAF, consider using [detection mode](waf-front-door-policy-settings.md#waf-mode). This mode logs requests and the actions the WAF would normally take, but it doesn't actually block any traffic.
-For more information, see [Tuning Web Application Firewall (WAF) for Azure Front Door](waf-front-door-tuning.md).
+For more information, see [Tune Azure Web Application Firewall for Azure Front Door](waf-front-door-tuning.md).
### Use prevention mode
-After you've tuned your WAF, you should configure it to [run in prevention mode](waf-front-door-policy-settings.md#waf-mode). By running in prevention mode, you ensure the WAF actually blocks requests that it detects are malicious. Running in detection mode is useful while you tune and configure your WAF, but provides no protection.
+After you tune your WAF, configure it to [run in prevention mode](waf-front-door-policy-settings.md#waf-mode). By running in prevention mode, you ensure that the WAF blocks requests that it detects are malicious. Running in detection mode is useful while you tune and configure your WAF, but it provides no protection.
### Define your WAF configuration as code
-When you tune your WAF for your application workload, you typically create a set of rule exclusions to reduce false positive detections. If you manually configure these exclusions by using the Azure portal, then when you upgrade your WAF to use a newer ruleset version, you need to reconfigure the same exceptions against the new ruleset version. This process can be time-consuming and error-prone.
+When you tune your WAF for your application workload, you typically create a set of rule exclusions to reduce false positive detections. If you manually configure these exclusions by using the Azure portal, when you upgrade your WAF to use a newer rule-set version, you need to reconfigure the same exceptions against the new rule-set version. This process can be time consuming and error prone.
+
+Instead, consider defining your WAF rule exclusions and other configuration as code, such as by using the Azure CLI, Azure PowerShell, Bicep, or Terraform. When you need to update your WAF rule-set version, you can easily reuse the same exclusions.
-Instead, consider defining your WAF rule exclusions and other configuration as code, such as by using the Azure CLI, Azure PowerShell, Bicep or Terraform. Then, when you need to update your WAF ruleset version, you can easily reuse the same exclusions.
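The exclusions-as-code advice above can be sketched with the Azure CLI; the policy name, resource group, and selector below are placeholders, and the command would typically live in a version-controlled deployment script:

```azurecli
az network front-door waf-policy managed-rules exclusion add \
  --policy-name myWafPolicy \
  --resource-group myResourceGroup \
  --type Microsoft_DefaultRuleSet \
  --version 2.0 \
  --match-variable RequestHeaderNames \
  --operator StartsWith \
  --value user
```

Because the exclusion is expressed as a repeatable command rather than a portal setting, rerunning the script against a newer rule-set version reapplies the same exclusion.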
+## Managed rule-set best practices
-## Managed ruleset best practices
+This section discusses best practices for rule sets.
### Enable default rule sets
-Microsoft's default rule sets are designed to protect your application by detecting and blocking common attacks. The rules are based on a various sources including the OWASP top 10 attack types and information from Microsoft Threat Intelligence.
+Microsoft's default rule sets are designed to protect your application by detecting and blocking common attacks. The rules are based on various sources, including the OWASP top-10 attack types and information from Microsoft Threat Intelligence.
For more information, see [Azure-managed rule sets](afds-overview.md#azure-managed-rule-sets).
Bots are responsible for a significant proportion of traffic to web applications
For more information, see [Bot protection rule set](afds-overview.md#bot-protection-rule-set).
-### Use the latest ruleset versions
+### Use the latest rule set versions
Microsoft regularly updates the managed rules to take account of the current threat landscape. Ensure that you regularly check for updates to Azure-managed rule sets.
-For more information, see [Web Application Firewall DRS rule groups and rules](waf-front-door-drs.md).
+For more information, see [Azure Web Application Firewall DRS rule groups and rules](waf-front-door-drs.md).
## Rate limiting best practices
+This section discusses best practices for rate limiting.
+ ### Add rate limiting
-Front Door's WAF enables you to control the number of requests allowed from each client's IP address over a period of time. It's a good practice to add rate limiting to reduce the impact of clients accidentally or intentionally sending large amounts of traffic to your service, such as during a [*retry storm*](/azure/architecture/antipatterns/retry-storm/).
+The Azure Front Door WAF enables you to control the number of requests allowed from each client's IP address over a period of time. It's a good practice to add rate limiting to reduce the effect of clients accidentally or intentionally sending large amounts of traffic to your service, such as during a [retry storm](/azure/architecture/antipatterns/retry-storm/).
For more information, see the following resources:
-- [What is rate limiting for Azure Front Door Service?](waf-front-door-rate-limit.md).
-- [Configure a Web Application Firewall rate limit rule using Azure PowerShell](waf-front-door-rate-limit-configure.md).
-- [Why do additional requests above the threshold configured for my rate limit rule get passed to my backend server?](waf-faq.yml#why-do-additional-requests-above-the-threshold-configured-for-my-rate-limit-rule-get-passed-to-my-backend-server-)
+
+- [What is rate limiting for Azure Front Door?](waf-front-door-rate-limit.md).
+- [Configure an Azure Web Application Firewall rate limit rule by using Azure PowerShell](waf-front-door-rate-limit-configure.md).
+- [Why do additional requests above the threshold configured for my rate limit rule get passed to my back-end server?](waf-faq.yml#why-do-additional-requests-above-the-threshold-configured-for-my-rate-limit-rule-get-passed-to-my-backend-server-)
### Use a high threshold for rate limits
-It's usually a good practice to set your rate limit threshold to be quite high. For example, if you know that a single client IP address might send around 10 requests to your server each minute, consider specifying a threshold of 20 requests per minute.
-
-High rate limit thresholds avoid blocking legitimate traffic, while still providing protection against extremely high numbers of requests that might overwhelm your infrastructure.
+Usually it's good practice to set your rate limit threshold to be quite high. For example, if you know that a single client IP address might send around 10 requests to your server each minute, consider specifying a threshold of 20 requests per minute.
+
+High rate-limit thresholds avoid blocking legitimate traffic. These thresholds still provide protection against extremely high numbers of requests that might overwhelm your infrastructure.
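The 20-requests-per-minute example above might be sketched with the Azure CLI as follows. The policy and rule names are placeholders, and `--defer` holds the rule server-side until a match condition is attached:

```azurecli
az network front-door waf-policy rule create \
  --policy-name myWafPolicy \
  --resource-group myResourceGroup \
  --name RateLimitRule1 \
  --rule-type RateLimitRule \
  --rate-limit-threshold 20 \
  --rate-limit-duration 1 \
  --priority 100 \
  --action Block \
  --defer

az network front-door waf-policy rule match-condition add \
  --policy-name myWafPolicy \
  --resource-group myResourceGroup \
  --name RateLimitRule1 \
  --match-variable RequestUri \
  --operator Contains \
  --values "/"
```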
## Geo-filtering best practices
+This section discusses best practices for geo-filtering.
+ ### Geo-filter traffic
-Many web applications are designed for users within a specific geographic region. If this situation applies to your application, consider implementing geo-filtering to block requests that come from outside of the countries/regions you expect to receive traffic from.
+Many web applications are designed for users within a specific geographic region. If this situation applies to your application, consider implementing geo-filtering to block requests that come from outside of the countries or regions from which you expect to receive traffic.
-For more information, see [What is geo-filtering on a domain for Azure Front Door Service?](waf-front-door-tutorial-geo-filtering.md).
+For more information, see [What is geo-filtering on a domain for Azure Front Door?](waf-front-door-tutorial-geo-filtering.md).
### Specify the unknown (ZZ) location
-Some IP addresses aren't mapped to locations in our dataset. When an IP address can't be mapped to a location, the WAF assigns the traffic to the unknown (ZZ) country/region. To avoid blocking valid requests from these IP addresses, consider allowing the unknown (ZZ) country/region through your geo-filter.
+Some IP addresses aren't mapped to locations in our dataset. When an IP address can't be mapped to a location, the WAF assigns the traffic to the unknown (ZZ) country or region. To avoid blocking valid requests from these IP addresses, consider allowing the unknown (ZZ) country or region through your geo-filter.
-For more information, see [What is geo-filtering on a domain for Azure Front Door Service?](waf-front-door-tutorial-geo-filtering.md).
+For more information, see [What is geo-filtering on a domain for Azure Front Door?](waf-front-door-tutorial-geo-filtering.md).
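As a sketch, a geo-filter that blocks traffic from outside the expected countries or regions while still allowing the unknown (ZZ) location might look like the following with the Azure CLI (policy name, rule name, and country codes are placeholders):

```azurecli
az network front-door waf-policy rule create \
  --policy-name myWafPolicy \
  --resource-group myResourceGroup \
  --name GeoFilter \
  --rule-type MatchRule \
  --priority 200 \
  --action Block \
  --defer

# Negated GeoMatch: block anything NOT from the listed locations,
# keeping ZZ in the list so unmapped IP addresses aren't blocked.
az network front-door waf-policy rule match-condition add \
  --policy-name myWafPolicy \
  --resource-group myResourceGroup \
  --name GeoFilter \
  --match-variable SocketAddr \
  --operator GeoMatch \
  --negate true \
  --values "US" "ZZ"
```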
## Logging
+This section discusses logging.
+ ### Add diagnostic settings to save your WAF's logs
-Front Door's WAF integrates with Azure Monitor. It's important to save the WAF logs to a destination like Log Analytics. You should review the WAF logs regularly. Reviewing logs helps you to [tune your WAF policies to reduce false-positive detections](#tune-your-waf), and to understand whether your application has been the subject of attacks.
+The Azure Front Door WAF integrates with Azure Monitor. It's important to save the WAF logs to a destination like Log Analytics. You should review the WAF logs regularly. Reviewing logs helps you to [tune your WAF policies to reduce false-positive detections](#tune-your-waf) and to understand whether your application has been the subject of attacks.
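A sketch of such a diagnostic setting with the Azure CLI follows. The resource IDs are placeholders, and the `FrontdoorWebApplicationFirewallLog` category assumes the classic Azure Front Door resource; verify the category names for your tier:

```azurecli
az monitor diagnostic-settings create \
  --name waf-logs \
  --resource $frontDoorResourceId \
  --workspace $logAnalyticsWorkspaceId \
  --logs '[{"category": "FrontdoorWebApplicationFirewallLog", "enabled": true}]'
```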
For more information, see [Azure Web Application Firewall monitoring and logging](waf-front-door-monitor.md).

### Send logs to Microsoft Sentinel
-Microsoft Sentinel is a security information and event management (SIEM) system, which imports logs and data from multiple sources to understand the threat landscape for your web application and overall Azure environment. Front Door's WAF logs should be imported into Microsoft Sentinel or another SIEM so that your internet-facing properties are included in its analysis. For Microsoft Sentinel, use the Azure WAF connector to easily import your WAF logs.
+Microsoft Sentinel is a security information and event management (SIEM) system, which imports logs and data from multiple sources to understand the threat landscape for your web application and overall Azure environment. Azure Front Door WAF logs should be imported into Microsoft Sentinel or another SIEM so that your internet-facing properties are included in its analysis. For Microsoft Sentinel, use the Azure WAF connector to easily import your WAF logs.
-For more information, see [Using Microsoft Sentinel with Azure Web Application Firewall](../waf-sentinel.md).
+For more information, see [Use Microsoft Sentinel with Azure Web Application Firewall](../waf-sentinel.md).
## Next steps
-Learn how to [create a Front Door WAF policy](waf-front-door-create-portal.md).
+Learn how to [create an Azure Front Door WAF policy](waf-front-door-create-portal.md).
web-application-firewall Waf Front Door Exclusion Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-exclusion-configure.md
Title: Configure WAF exclusion lists for Front Door
-description: Learn how to configure a WAF exclusion list for an existing Front Door endpoint.
+ Title: Configure WAF exclusion lists for Azure Front Door
+description: Learn how to configure a web application firewall (WAF) exclusion list for an existing Azure Front Door endpoint.
zone_pivot_groups: web-application-firewall-configuration
-# Configure Web Application Firewall exclusion lists
+# Configure web application firewall exclusion lists
-Sometimes the Front Door Web Application Firewall (WAF) might block a legitimate request. As part of tuning your WAF, you can configure the WAF to allow the request for your application. WAF exclusion lists allow you to omit specific request attributes from a WAF evaluation. The rest of the request is evaluated as normal. For more information about exclusion lists, see [Web Application Firewall (WAF) with Front Door exclusion lists](waf-front-door-exclusion.md).
+Sometimes Azure Web Application Firewall in Azure Front Door might block a legitimate request. As part of tuning your web application firewall (WAF), you can configure the WAF to allow the request for your application. WAF exclusion lists allow you to omit specific request attributes from a WAF evaluation. The rest of the request is evaluated as normal. For more information about exclusion lists, see [Azure Web Application Firewall with Azure Front Door exclusion lists](waf-front-door-exclusion.md).
-An exclusion list can be configured by using [Azure PowerShell](/powershell/module/az.frontdoor/New-AzFrontDoorWafManagedRuleExclusionObject), the [Azure CLI](/cli/azure/network/front-door/waf-policy/managed-rules/exclusion#az-network-front-door-waf-policy-managed-rules-exclusion-add), the [REST API](/rest/api/frontdoorservice/webapplicationfirewall/policies/createorupdate), Bicep, ARM templates, and the Azure portal.
+An exclusion list can be configured by using [Azure PowerShell](/powershell/module/az.frontdoor/New-AzFrontDoorWafManagedRuleExclusionObject), the [Azure CLI](/cli/azure/network/front-door/waf-policy/managed-rules/exclusion#az-network-front-door-waf-policy-managed-rules-exclusion-add), the [REST API](/rest/api/frontdoorservice/webapplicationfirewall/policies/createorupdate), Bicep, Azure Resource Manager templates, and the Azure portal.
## Scenario

Suppose you've created an API. Your clients send requests to your API that include headers with names like `userid` and `user-id`.
-While tuning your WAF, you've noticed that some legitimate requests have been blocked because the user headers included character sequences that the WAF detected as SQL injection attacks. Specifically, rule ID 942230 detects the request headers and blocks the requests. [Rule 942230 is part of the SQLI rule group.](waf-front-door-drs.md#drs942-20)
+While tuning your WAF, you notice that some legitimate requests were blocked because the user headers included character sequences that the WAF detected as SQL injection attacks. Specifically, rule ID 942230 detects the request headers and blocks the requests. [Rule 942230 is part of the SQLI rule group.](waf-front-door-drs.md#drs942-20)
You decide to create an exclusion to allow these legitimate requests to pass through without the WAF blocking them.
You decide to create an exclusion to allow these legitimate requests to pass thr
## Create an exclusion
-1. Open your Front Door WAF policy.
+1. Open your Azure Front Door WAF policy.
-1. Select **Managed rules**, and then select **Manage exclusions** on the toolbar.
+1. Select **Managed rules** > **Manage exclusions**.
- :::image type="content" source="../media/waf-front-door-exclusion-configure/managed-rules-exclusion.png" alt-text="Screenshot of the Azure portal showing the WAF policy's managed rules page, with the 'Manage exclusions' button highlighted." :::
+ :::image type="content" source="../media/waf-front-door-exclusion-configure/managed-rules-exclusion.png" alt-text="Screenshot that shows the WAF policy's Managed rules page in the Azure portal, with the Manage exclusions button highlighted." :::
-1. Select the **Add** button.
+1. Select **Add**.
- :::image type="content" source="../media/waf-front-door-exclusion-configure/exclusion-add.png" alt-text="Screenshot of the Azure portal showing the exclusion list, with the Add button highlighted." :::
+ :::image type="content" source="../media/waf-front-door-exclusion-configure/exclusion-add.png" alt-text="Screenshot that shows the Azure portal with the exclusion list Add button." :::
-1. Configure the exclusion's **Applies to** section as follows:
+1. Configure the exclusion's **Applies to** section:
| Field | Value |
|-|-|
You decide to create an exclusion to allow these legitimate requests to pass thr
| Rule group | SQLI |
| Rule | 942230 Detects conditional SQL injection attempts |
-1. Configure the exclusion match conditions as follows:
+1. Configure the exclusion match conditions:
| Field | Value |
|-|-|
| Match variable | Request header name |
| Operator | Starts with |
- | Selector | user |
+ | Selector | User |
1. Review the exclusion, which should look like the following screenshot:
- :::image type="content" source="../media/waf-front-door-exclusion-configure/exclusion-details.png" alt-text="Screenshot of the Azure portal showing the exclusion configuration." :::
+ :::image type="content" source="../media/waf-front-door-exclusion-configure/exclusion-details.png" alt-text="Screenshot that shows the exclusion configuration in the Azure portal." :::
This exclusion applies to any request headers that start with the word `user`. The match condition is case insensitive, so headers that start with `User` are also covered by the exclusion. If WAF rule 942230 detects a risk in these header values, it ignores the header and moves on.
$exclusion = New-AzFrontDoorWafManagedRuleOverrideObject `
Use the [New-AzFrontDoorWafRuleGroupOverrideObject](/powershell/module/az.frontdoor/new-azfrontdoorwafrulegroupoverrideobject) cmdlet to create a rule group override, which applies the exclusion to the appropriate rule group.
-The example below uses the SQLI rule group, because that group contains rule ID 942230.
+The following example uses the SQLI rule group because that group contains rule ID 942230.
```azurepowershell $ruleGroupOverride = New-AzFrontDoorWafRuleGroupOverrideObject `
$ruleGroupOverride = New-AzFrontDoorWafRuleGroupOverrideObject `
Use the [New-AzFrontDoorWafManagedRuleObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmanagedruleobject) cmdlet to configure the managed rule set, including the rule group override that you created in the previous step.
-The example below configures the DRS 2.0 rule set with the rule group override and its exclusion.
+The following example configures the DRS 2.0 rule set with the rule group override and its exclusion.
```azurepowershell $managedRuleSet = New-AzFrontDoorWafManagedRuleObject `
$managedRuleSet = New-AzFrontDoorWafManagedRuleObject `
## Apply the managed rule set configuration to the WAF profile
-Use the [Update-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/update-azfrontdoorwafpolicy) cmdlet to update your WAF policy to include the configuration you created above. Ensure that you use the correct resource group name and WAF policy name for your own environment.
+Use the [Update-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/update-azfrontdoorwafpolicy) cmdlet to update your WAF policy to include the configuration you created. Ensure that you use the correct resource group name and WAF policy name for your own environment.
```azurepowershell Update-AzFrontDoorWafPolicy `
Update-AzFrontDoorWafPolicy `
## Create an exclusion
-Use the [`az network front-door waf-policy managed-rules exclusion add`](/cli/azure/network/front-door/waf-policy/managed-rules/exclusion) command to update your WAF policy to add a new exclusion.
+Use the [`az network front-door waf-policy managed-rules exclusion add`](/cli/azure/network/front-door/waf-policy/managed-rules/exclusion) command to update your WAF policy to add a new exclusion.
The exclusion identifies request headers that start with the word `user`. The match condition is case insensitive, so headers that start with `User` are also covered by the exclusion.
az network front-door waf-policy managed-rules exclusion add \
## Example Bicep file
-The following example Bicep file shows how to do the following steps:
+The following example Bicep file shows how to:
-- Create a Front Door WAF policy.
+- Create an Azure Front Door WAF policy.
- Enable the DRS 2.0 rule set.
- Configure an exclusion for rule 942230, which exists within the SQLI rule group.

This exclusion applies to any request headers that start with the word `user`. The match condition is case insensitive, so headers that start with `User` are also covered by the exclusion. If WAF rule 942230 detects a risk in these header values, it ignores the header and moves on.
resource wafPolicy 'Microsoft.Network/frontDoorWebApplicationFirewallPolicies@20
## Next steps

-- Learn more about [Front Door](../../frontdoor/front-door-overview.md).
+Learn more about [Azure Front Door](../../frontdoor/front-door-overview.md).
web-application-firewall Waf Front Door Rate Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-rate-limit.md
Title: Web application firewall rate limiting for Azure Front Door
-description: Learn how to use Web Application Firewall (WAF) rate limiting protecting your web applications from malicious attacks.
+description: Learn how to use web application firewall rate limiting to protect your web applications from malicious attacks.
Last updated 04/20/2023
-# What is rate limiting for Azure Front Door Service?
+# What is rate limiting for Azure Front Door?
-Rate limiting enables you to detect and block abnormally high levels of traffic from any socket IP address. The socket IP address is the address of the client that initiated the TCP connection to Front Door. Typically, the socket IP address is the IP address of the user, but it might also be the IP address of a proxy server or another device that sits between the user and the Front Door. By using the web application firewall (WAF) with Azure Front Door, you can mitigate some types of denial of service attacks. Rate limiting also protects you against clients that have accidentally been misconfigured to send large volumes of requests in a short time period.
+Rate limiting enables you to detect and block abnormally high levels of traffic from any socket IP address.
+By using Azure Web Application Firewall in Azure Front Door, you can mitigate some types of denial-of-service attacks. Rate limiting also protects you against clients that have accidentally been misconfigured to send large volumes of requests in a short time period.
-Rate limits can be defined at the socket IP address level or the remote address level. If you have multiple clients accessing your Front Door from different socket IP addresses, they'll each have their own rate limits applied. The socket IP address is the source IP address the WAF sees. If your user is behind a proxy, socket IP address is often the proxy server address. Remote address is the original client IP that is usually sent via the X-Forwarded-For request header.
+The socket IP address is the address of the client that initiated the TCP connection to Azure Front Door. Typically, the socket IP address is the IP address of the user, but it might also be the IP address of a proxy server or another device that sits between the user and Azure Front Door.
+
+You can define rate limits at the socket IP address level or the remote address level. If you have multiple clients that access Azure Front Door from different socket IP addresses, they each have their own rate limits applied. The socket IP address is the source IP address the web application firewall (WAF) sees. If your user is behind a proxy, the socket IP address is often the proxy server address. The remote address is the original client IP that's usually sent via the `X-Forwarded-For` request header.
## Configure a rate limit policy

Rate limiting is configured by using [custom WAF rules](./waf-front-door-custom-rules.md).
-When you configure a rate limit rule, you specify the *threshold*: the number of web requests allowed from each socket IP address within a time period of either one minute or five minutes.
+When you configure a rate limit rule, you specify the *threshold*. The threshold is the number of web requests that are allowed from each socket IP address within a time period of either one minute or five minutes.
+
+You also must specify at least one *match condition*, which tells Azure Front Door when to activate the rate limit. You can configure multiple rate limits that apply to different paths within your application.
+
+If you need to apply a rate limit rule to all your requests, consider using a match condition like the following example:
-You also must specify at least one *match condition*, which tells Front Door when to activate the rate limit. You can configure multiple rate limits that apply to different paths within your application.
-If you need to apply a rate limit rule to all of your requests, consider using a match condition like the following example:
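The example itself is not included in this excerpt, but based on the surrounding description, a rule with such a match condition might look roughly like the following Bicep fragment. The rule name, priority, and threshold are illustrative placeholders, and the property names are assumptions to verify against the custom rules schema.

```bicep
// Sketch of a rate limit custom rule whose match condition matches every
// request: any request with a Host header longer than zero characters.
// Name, priority, and threshold values are illustrative assumptions.
{
  name: 'rateLimitAllRequests'
  priority: 1
  ruleType: 'RateLimitRule'
  rateLimitDurationInMinutes: 1
  rateLimitThreshold: 100
  action: 'Block'
  matchConditions: [
    {
      matchVariable: 'RequestHeader'
      selector: 'Host'
      operator: 'GreaterThan'
      matchValue: [
        '0'
      ]
    }
  ]
}
```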
+The preceding match condition identifies all requests with a `Host` header of a length greater than `0`. Because all valid HTTP requests for Azure Front Door contain a `Host` header, this match condition has the effect of matching all HTTP requests.
+## Rate limits and Azure Front Door servers
-The match condition above identifies all requests with a `Host` header of length greater than 0. Because all valid HTTP requests for Front Door contain a `Host` header, this match condition has the effect of matching all HTTP requests.
+Requests from the same client often arrive at the same Azure Front Door server. In that case, requests are blocked as soon as the rate limit is reached for each client IP address.
-## Rate limits and Front Door servers
+It's possible that requests from the same client might arrive at a different Azure Front Door server that hasn't refreshed the rate limit counters yet. For example, the client might open a new TCP connection for each request, and each TCP connection could be routed to a different Azure Front Door server.
-Requests from the same client often arrive at the same Front Door server. In that case, you see requests are blocked as soon as the rate limit is reached for each of the client IP addresses.
+If the threshold is low enough, the first request to the new Azure Front Door server could pass the rate limit check. So, for a low threshold (for example, less than about 200 requests per minute), you might see some requests above the threshold get through.
-However, it's possible that requests from the same client might arrive at a different Front Door server that hasn't refreshed the rate limit counters yet. For example, the client might open a new TCP connection for each request. If the threshold is low enough, the first request to the new Front Door server could pass the rate limit check. So, for a low threshold (for example, less than about 200 requests per minute), you may see some requests above the threshold get through.
+Keep a few considerations in mind while you determine threshold values and time windows for rate limiting:
-A few considerations to keep in mind while determining threshold values and time windows for rate limiting:
-- Larger window size and smaller thresholds are most effective in preventing against DDoS attacks.
-- Setting larger time window sizes (for example, 5 minutes over 1 minute) and larger thresholds values (for example, 200 over 100) tend to be more accurate in enforcing close to rate limits thresholds than using the shorter time window sizes and lower thresholds values.
+- Larger window size and smaller thresholds are most effective in preventing against DDoS attacks.
+- Setting larger time window sizes (for example, five minutes instead of one minute) and larger threshold values (for example, 200 instead of 100) tends to enforce rates closer to the configured thresholds than using shorter time windows and lower threshold values.
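As a sketch, a rate limit rule like the one described in this article might be created with the Azure CLI roughly as follows. The resource names are placeholders, and the exact flag names are assumptions to check against `az network front-door waf-policy rule create --help`.

```azurecli
# Placeholder names; flag names are assumptions to verify against the CLI reference.
az network front-door waf-policy rule create \
  --resource-group <resource-group-name> \
  --policy-name <waf-policy-name> \
  --name ratelimitrule \
  --rule-type RateLimitRule \
  --rate-limit-duration 1 \
  --rate-limit-threshold 100 \
  --priority 1 \
  --action Block \
  --defer

# Add the match condition that makes the rule apply to all requests.
az network front-door waf-policy rule match-condition add \
  --resource-group <resource-group-name> \
  --policy-name <waf-policy-name> \
  --name ratelimitrule \
  --match-variable RequestHeader.Host \
  --operator GreaterThan \
  --values 0
```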
## Next steps

-- [Configure rate limiting on your Front Door WAF](waf-front-door-rate-limit-configure.md)
-- Review [Rate limiting best practices](waf-front-door-best-practices.md#rate-limiting-best-practices)
+- Configure [rate limiting on your Azure Front Door WAF](waf-front-door-rate-limit-configure.md).
+- Review [rate limiting best practices](waf-front-door-best-practices.md#rate-limiting-best-practices).
web-application-firewall Waf Front Door Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-tuning.md
Title: Tuning Web Application Firewall (WAF) for Azure Front Door
-description: In this article, you learn about how to tune the WAF for Front Door.
+ Title: Tune Azure Web Application Firewall for Azure Front Door
+description: In this article, you learn how to tune Azure Web Application Firewall for Azure Front Door.
zone_pivot_groups: front-door-tiers
-# Tuning Web Application Firewall (WAF) for Azure Front Door
-
-The Microsoft-managed Default Rule Set is based on the [OWASP Core Rule Set (CRS)](https://github.com/SpiderLabs/owasp-modsecurity-crs/tree/v3.1/dev) and includes Microsoft Threat Intelligence Collection rules. It is often expected that WAF rules need to be tuned to suit the specific needs of the application or organization using the WAF. This is commonly achieved by defining rule exclusions, creating custom rules, and even disabling rules that may be causing issues or false positives. There are a few things you can do if requests that should pass through your Web Application Firewall (WAF) are blocked.
+# Tune Azure Web Application Firewall for Azure Front Door
-> [!Note]
->
-> Managed Rule Set is not available for Azure Front Door Standard SKU. For more information about the different tier SKUs, refer to [Feature comparison between tiers](../../frontdoor/standard-premium/tier-comparison.md#feature-comparison-between-tiers)
+The Microsoft-managed default rule set is based on the [OWASP Core Rule Set](https://github.com/SpiderLabs/owasp-modsecurity-crs/tree/v3.1/dev) and includes Microsoft Threat Intelligence collection rules.
-First, ensure you've read the [Front Door WAF overview](afds-overview.md) and the [WAF Policy for Front Door](waf-front-door-create-portal.md) documents. Also, make sure you've enabled [WAF monitoring and logging](waf-front-door-monitor.md). These articles explain how the WAF functions, how the WAF rule sets work, and how to access WAF logs.
-
-## Understanding WAF logs
-
-The purpose of WAF logs is to show every request that is matched or blocked by the WAF. It is a collection of all evaluated requests that are matched or blocked. If you notice that the WAF blocks a request that it shouldn't (a false positive), you can do a few things. First, narrow down, and find the specific request. If desired, you can [configure a custom response message](./waf-front-door-configure-custom-response-code.md) to include the `trackingReference` field to easily identify the event and perform a log query on that specific value. Look through the logs to find the specific URI, timestamp, or client IP of the request. When you find the related log entries, you can begin to act on false positives.
-
-For example, say you have a legitimate traffic containing the string `1=1` that you want to pass through your WAF. Here's what the request looks like:
+It's often expected that web application firewall (WAF) rules must be tuned to suit the specific needs of the application or organization that's using the WAF. Organizations commonly achieve tuning by taking one of the following actions:
+
+- Defining rule exclusions.
+- Creating custom rules.
+- Disabling rules that might be causing issues or false positives.
+
+This article describes what you can do if requests that should pass through your WAF are blocked.
+
+> [!NOTE]
+> The Microsoft-managed rule set isn't available for the Azure Front Door Standard SKU. For more information about the different tier SKUs, see [Feature comparison between tiers](../../frontdoor/standard-premium/tier-comparison.md#feature-comparison-between-tiers).
+
+Read the [Azure Front Door WAF overview](afds-overview.md) and the [WAF Policy for Azure Front Door](waf-front-door-create-portal.md) documents. Also, enable [WAF monitoring and logging](waf-front-door-monitor.md). These articles explain how the WAF functions, how the WAF rule sets work, and how to access WAF logs.
+
+## Understand WAF logs
+
+The purpose of WAF logs is to show every request that's matched or blocked by the WAF. It's a collection of all evaluated requests that are matched or blocked. If you notice that the WAF blocks a request that it shouldn't (a false positive), you can do a few things.
+
+First, narrow down and find the specific request. You can [configure a custom response message](./waf-front-door-configure-custom-response-code.md) to include the `trackingReference` field to easily identify the event and perform a log query on that specific value. Look through the logs to find the specific URI, timestamp, or client IP of the request. When you find the related log entries, you can act on false positives.
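For example, a Log Analytics query along the following lines can surface the events for a given tracking reference. The tracking reference value is a placeholder, and the table and column names assume the classic `AzureDiagnostics` schema used elsewhere in this article.

```kusto
AzureDiagnostics
| where Category == 'FrontdoorWebApplicationFirewallLog'
| where trackingReference_s == '<your-tracking-reference>'
```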
+
+For example, say you have legitimate traffic that contains the string `1=1` that you want to pass through your WAF. Here's what the request looks like:
```
POST http://afdwafdemosite.azurefd.net/api/Feedbacks HTTP/1.1
Content-Length: 55

UserId=20&captchaId=7&captchaId=15&comment="1=1"&rating=3
```
-If you try the request, the WAF blocks traffic that contains your *1=1* string in any parameter or field. This is a string often associated with a SQL injection attack. You can look through the logs and see the timestamp of the request and the rules that blocked/matched.
+If you try the request, the WAF blocks traffic that contains your `1=1` string in any parameter or field. This string is often associated with a SQL injection attack. You can look through the logs and see the timestamp of the request and the rules that blocked or matched.
-In the following example, we explore a log entry generated due to a rule match. The following Log Analytics query can be used to find requests that have been blocked within the last 24 hours:
+The following example shows a log entry that was generated based on a rule match. You can use the following Log Analytics query to find requests that were blocked within the last 24 hours.
::: zone pivot="front-door-standard-premium"
```
AzureDiagnostics
| where TimeGenerated > ago(1d)
| where action_s == 'Block'
```
-
+ ::: zone-end
-
-In the `requestUri` field, you can see the request was made to `/api/Feedbacks/` specifically. Going further, we find the rule ID `942110` in the `ruleName` field. Knowing the rule ID, you could go to the [OWASP ModSecurity Core Rule Set Official Repository](https://github.com/coreruleset/coreruleset) and search by that [rule ID](https://github.com/coreruleset/coreruleset/blob/v3.1/dev/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf) to review its code and understand exactly what this rule matches on.
-
-Then, by checking the `action` field, we see that this rule is set to block requests upon matching, and we confirm that the request was in fact blocked by the WAF because the `policyMode` is set to `prevention`.
-
-Now, let's check the information in the `details` field. This is where you can see the `matchVariableName` and the `matchVariableValue` information. We learn that this rule was triggered because someone input *1=1* in the `comment` field of the web app.
+
+In the `requestUri` field, you can see the request was made to `/api/Feedbacks/` specifically. Going further, find the rule ID `942110` in the `ruleName` field. Knowing the rule ID, you could go to the [OWASP ModSecurity Core Rule Set official repository](https://github.com/coreruleset/coreruleset) and search by that [rule ID](https://github.com/coreruleset/coreruleset/blob/v3.1/dev/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf) to review its code and understand exactly what this rule matches on.
+
+Then, by checking the `action` field, you can see that this rule is set to block requests upon matching. You can confirm that the request was blocked by the WAF because the `policyMode` is set to `prevention`.
+
+Now, check the information in the `details` field. This field is where you can see the `matchVariableName` and the `matchVariableValue` information. This rule was triggered because someone input `1=1` in the `comment` field of the web app.
::: zone pivot="front-door-standard-premium"
Now, let's check the information in the `details` field. This is where you can s
```
::: zone-end
-
-There is also value in checking the access logs to expand your knowledge about a given WAF event. Below we review the log that was generated as a response to the event above.
-
-You can see these are related logs based on the `trackingReference` value being the same. Amongst various fields that provide general insight, such as `userAgent` and `clientIP`, we call attention to the `httpStatusCode` and `httpStatusDetails` fields. Here, we can confirm that the client has received an HTTP 403 response, which absolutely confirms this request was denied and blocked.
+
+There's also value in checking the access logs to expand your knowledge about a given WAF event. Next, review the log that was generated as a response to the preceding event.
+
+You can see these logs are related because the `trackingReference` value is the same. Among various fields that provide general insight, such as `userAgent` and `clientIP`, notice the `httpStatusCode` and `httpStatusDetails` fields. Here, you can see that the client received an HTTP 403 response, which confirms that this request was denied and blocked.
::: zone pivot="front-door-standard-premium"
You can see these are related logs based on the `trackingReference` value being
::: zone-end
-## Resolving false positives
-
-To make an informed decision about handling a false positive, it's important to familiarize yourself with the technologies your application uses. For example, say there isn't a SQL server in your technology stack, and you are getting false positives related to those rules. Disabling those rules doesn't necessarily weaken your security.
+## Resolve false positives
-With this information, and the knowledge that rule 942110 is the one that matched the `1=1` string in our example, we can do a few things to stop this legitimate request from being blocked:
-
-* Use exclusion lists
- * See [Web Application Firewall (WAF) with Front Door Service exclusion lists](waf-front-door-exclusion.md) for more information about exclusion lists.
-* Change WAF actions
- * See [WAF Actions](afds-overview.md#waf-actions) for more information about what actions can be taken when a request matches a rule's conditions.
-* Use custom rules
- * See [Custom rules for Web Application Firewall with Azure Front Door](waf-front-door-custom-rules.md) for more information about custom rules.
-* Disable rules
+To make an informed decision about handling a false positive, it's important to familiarize yourself with the technologies your application uses. For example, say there isn't a SQL server in your technology stack, and you're getting false positives related to those rules. Disabling those rules doesn't necessarily weaken your security.
+
+With this information, and the knowledge that rule 942110 is the one that matched the `1=1` string in the example, you can do a few things to stop this legitimate request from being blocked:
+
+* **Use exclusion lists.** For more information about exclusion lists, see [Azure Web Application Firewall with Azure Front Door exclusion lists](waf-front-door-exclusion.md).
+* **Change WAF actions.** For more information about what actions can be taken when a request matches a rule's conditions, see [WAF Actions](afds-overview.md#waf-actions).
+* **Use custom rules.** For more information about custom rules, see [Custom rules for Azure Web Application Firewall with Azure Front Door](waf-front-door-custom-rules.md).
+* **Disable rules.**
> [!TIP]
-> When selecting an approach to allow legitimate requests through the WAF, try to make this as narrow as you can. For example, it's better to use an exclusion list than disabling a rule entirely.
+> When you select an approach to allow legitimate requests through the WAF, try to make it as narrow as you can. For example, it's better to use an exclusion list than to disable a rule entirely.
+
+### Use exclusion lists
+
+One benefit of using an exclusion list is that only the match variable you select to exclude will no longer be inspected for that given request. That is, you can choose between specific request headers, request cookies, query string arguments, or request body post arguments to be excluded if a certain condition is met, as opposed to excluding the whole request from being inspected. The other nonspecified variables of the request are inspected normally.
+
+Exclusions are a global setting. The configured exclusion applies to all traffic that passes through your WAF, not just a specific web app or URI. For example, this could be a concern if `1=1` is a valid request in the body for a certain web app, but not for others under the same WAF policy.
+
+If it makes sense to use different exclusion lists for different applications, consider using different WAF policies for each application and applying them to each application's front end.
+
+When you configure exclusion lists for managed rules, you can choose to exclude:
-### Using exclusion lists
+- All rules within a rule set.
+- All rules within a rule group.
+- An individual rule.
-One benefit of using an exclusion list is that only the match variable you select to exclude will no longer be inspected for that given request. That is, you can choose between specific request headers, request cookies, query string arguments, or request body post arguments to be excluded if a certain condition is met, as opposed to excluding the whole request from being inspected. The other non-specified variables of the request will still be inspected normally.
-
-It's important to consider that exclusions are a global setting. This means that the configured exclusion will apply to all traffic passing through your WAF, not just a specific web app or URI. For example, this could be a concern if *1=1* is a valid request in the body for a certain web app, but not for others under the same WAF policy. If it makes sense to use different exclusion lists for different applications, consider using different WAF policies for each application and applying them to each application's frontend.
-
-When configuring exclusion lists for managed rules, you can choose to exclude all rules within a rule set, all rules within a rule group, or an individual rule. An exclusion list can be configured using [PowerShell](/powershell/module/az.frontdoor/New-AzFrontDoorWafManagedRuleExclusionObject), [Azure CLI](/cli/azure/network/front-door/waf-policy/managed-rules/exclusion), [REST API](/rest/api/frontdoorservice/webapplicationfirewall/policies/createorupdate), or the Azure portal.
+You can configure an exclusion list by using [PowerShell](/powershell/module/az.frontdoor/New-AzFrontDoorWafManagedRuleExclusionObject), the [Azure CLI](/cli/azure/network/front-door/waf-policy/managed-rules/exclusion), the [REST API](/rest/api/frontdoorservice/webapplicationfirewall/policies/createorupdate), Bicep, Azure Resource Manager templates, or the Azure portal.
-* Exclusions at a rule level
- * Applying exclusions at a rule level means that the specified exclusions will not be analyzed against that individual rule only, while it will still be analyzed by all other rules in the rule set. This is the most granular level for exclusions, and it can be used to fine-tune the managed rule set based on the information you find in the WAF logs when troubleshooting an event.
-* Exclusions at rule group level
- * Applying exclusions at a rule group level means that the specified exclusions will not be analyzed against that specific set of rule types. For example, selecting *SQLI* as an excluded rule group indicates the defined request exclusions would not be inspected by any of the SQLI-specific rules, but it would still be inspected by rules in other groups, such as *PHP*, *RFI*, or *XSS*. This type of exclusion can be useful when we are sure the application is not susceptible to specific types of attacks. For example, an application that doesn't have any SQL databases could have all *SQLI* rules excluded without it being detrimental to its security level.
-* Exclusions at rule set level
- * Applying exclusions at a rule set level means that the specified exclusions will not be analyzed against any of the security rules available in that rule set. This is a comprehensive exclusion, so it should be used carefully.
+* **Exclusions at a rule level:** Applying exclusions at a rule level means that the specified exclusions won't be analyzed against that individual rule only. It will still be analyzed by all other rules in the rule set. This is the most granular level for exclusions. You can use it to fine-tune the managed rule set based on the information you find in the WAF logs when you troubleshoot an event.
+* **Exclusions at a rule group level:** Applying exclusions at a rule group level means that the specified exclusions won't be analyzed against that specific set of rule types. For example, selecting **SQLI** as an excluded rule group indicates the defined request exclusions won't be inspected by any of the SQLI-specific rules. It will still be inspected by rules in other groups, such as **PHP**, **RFI**, or **XSS**. This type of exclusion can be useful when you're sure the application isn't susceptible to specific types of attacks. For example, an application that doesn't have any SQL databases could have all SQLI rules excluded without it being detrimental to its security level.
+* **Exclusions at a rule set level:** Applying exclusions at a rule set level means that the specified exclusions won't be analyzed against any of the security rules available in that rule set. This exclusion is comprehensive, so use it carefully.
-In this example, we will be performing an exclusion at the most granular level (applying exclusion to a single rule) and we are looking to exclude the match variable **Request body post args name** that contains `comment`. This is apparent because you can see the match variable details in the firewall log: `"matchVariableName": "PostParamValue:comment"`. The attribute is `comment`. You can also find this attribute name a few other ways, see [Finding request attribute names](#finding-request-attribute-names).
+In this example, you perform an exclusion at the most granular level by applying an exclusion to a single rule. You want to exclude the match variable **Request body post args name** that contains `comment`. You can see the match variable details in the firewall log: `"matchVariableName": "PostParamValue:comment"`. The attribute is `comment`. You can also find this attribute name a few other ways. For more information, see [Find request attribute names](#find-request-attribute-names).
-![Exclusion rules](../media/waf-front-door-tuning/exclusion-rules.png)
+![Screenshot that shows exclusion rules.](../media/waf-front-door-tuning/exclusion-rules.png)
-![Rule exclusion for specific rule](../media/waf-front-door-tuning/exclusion-rule.png)
+![Screenshot that shows rule exclusion for a specific rule.](../media/waf-front-door-tuning/exclusion-rule.png)
-Occasionally, there are cases where specific parameters get passed into the WAF in a manner that may not be intuitive. For example, there is a token that gets passed when authenticating using Azure Active Directory. This token, `__RequestVerificationToken`, usually gets passed in as a request cookie. However, in some cases where cookies are disabled, this token is also passed in as a request post argument. For this reason, to address Azure AD token false positives, you need to ensure that `__RequestVerificationToken` is added to the exclusion list for both `RequestCookieNames` and `RequestBodyPostArgsNames`.
+Occasionally, there are cases where specific parameters get passed into the WAF in a manner that might not be intuitive. For example, a token gets passed when you authenticate by using Azure Active Directory (Azure AD). The token `__RequestVerificationToken` usually gets passed in as a request cookie.
-Exclusions on a field name (*selector*) means that the value will no longer be evaluated by the WAF. However, the field name itself continues to be evaluated and in rare cases it may match a WAF rule and trigger an action.
+In some cases where cookies are disabled, this token is also passed in as a request post argument. For this reason, to address Azure AD token false positives, you must ensure that `__RequestVerificationToken` is added to the exclusion list for both `RequestCookieNames` and `RequestBodyPostArgsNames`.
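As a sketch, those two exclusions might look like the following fragment of a managed rule set configuration in Bicep. The property names are assumptions to verify against the WAF policy schema.

```bicep
// Exclude the Azure AD request verification token from inspection both as a
// request cookie and as a request body post argument (hedged property names).
exclusions: [
  {
    matchVariable: 'RequestCookieNames'
    selectorMatchOperator: 'Equals'
    selector: '__RequestVerificationToken'
  }
  {
    matchVariable: 'RequestBodyPostArgsNames'
    selectorMatchOperator: 'Equals'
    selector: '__RequestVerificationToken'
  }
]
```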
-![Rule exclusion for rule set](../media/waf-front-door-tuning/exclusion-rule-selector.png)
+Exclusions on a field name (**Selector**) mean that the value is no longer evaluated by the WAF. The field name itself continues to be evaluated, and in rare cases it might match a WAF rule and trigger an action.
-### Changing WAF actions
+![Screenshot that shows rule exclusion for a rule set.](../media/waf-front-door-tuning/exclusion-rule-selector.png)
-Another way of handling the behavior of WAF rules is by choosing the action it will take when a request matches a rule's conditions. The available actions are: [Allow, Block, Log, and Redirect](afds-overview.md#waf-actions).
+### Change WAF actions
-In this example, we changed the default action *Block* to the *Log* action on rule 942110. This will cause the WAF to log the request and continue evaluating the same request against the remaining lower priority rules.
+Another way to handle the behavior of WAF rules is by choosing the action it takes when a request matches a rule's conditions. The available actions are [Allow, Block, Log, and Redirect](afds-overview.md#waf-actions).
-![WAF actions](../media/waf-front-door-tuning/actions.png)
+In this example, the default action **Block** is changed to the **Log** action on rule 942110. This action causes the WAF to log the request and continue evaluating the same request against the remaining lower priority rules.
-After performing the same request, we can refer back to the logs and we will see that this request was a match on rule ID 942110, and that the `action_s` field now indicates *Log* instead of *Block*. We then expanded the log query to include the `trackingReference_s` information and see what else has happened with this request.
+![Screenshot that shows WAF actions.](../media/waf-front-door-tuning/actions.png)
-![Log showing multiple rule matches](../media/waf-front-door-tuning/actions-log.png)
+After you perform the same request, you can refer back to the logs and see that this request was a match on rule ID 942110. The `action_s` field now indicates **Log** instead of **Block**. The log query was then expanded to include the `trackingReference_s` information to see what else happened with this request.
-Interestingly, we see a different SQLI rule match occurs milliseconds after rule ID 942110 was processed. The same request matched on rule ID 942310, and this time the default action *Block* was triggered.
+![Screenshot that shows a log showing multiple rule matches.](../media/waf-front-door-tuning/actions-log.png)
-Another advantage of using the *Log* action during WAF tuning or troubleshooting is that you can identify if multiple rules within a specific rule group are matching and blocking a given request. You can then create your exclusions at the appropriate level, i.e. at the rule or rule group level.
+Now you can see a different SQLI rule match that occurs milliseconds after rule ID 942110 was processed. The same request matched on rule ID 942310, and this time the default action **Block** was triggered.
-### Using custom rules
+Another advantage of using the **Log** action during WAF tuning or troubleshooting is that you can identify if multiple rules within a specific rule group are matching and blocking a given request. You can then create your exclusions at the appropriate level, that is, at the rule or rule group level.
-Once you have identified what is causing a WAF rule match, you can use custom rules to adjust how the WAF responds to the event. Custom rules are processed before managed rules, they can contain more than one condition, and their actions can be [Allow, Deny, Log or Redirect](afds-overview.md#waf-actions). When there is a rule match, the WAF engine stops processing. This means other custom rules with lower priority and managed rules are no longer executed.
+### Use custom rules
-In the example below, we created a custom rule with two conditions. The first condition is looking for the `comment` value in the request body. The second condition is looking for the `/api/Feedbacks/` value in the request URI.
+After you identify what's causing a WAF rule match, you can use custom rules to adjust how the WAF responds to the event. Custom rules are processed before managed rules. They can contain more than one condition, and their actions can be [Allow, Deny, Log, or Redirect](afds-overview.md#waf-actions).
-Using a custom rule allows you to be the most granular when fine-tuning your WAF rules and for dealing with false positives. In this case, we're not taking action only based on the `comment` request body value, which could exist across multiple sites or apps under the same WAF policy. By including another condition to also match on a particular request URI `/api/Feedbacks/`, we ensure this custom rule truly applies to this explicit use case that we vetted out. This ensures that the same attack, if performed against different conditions, would still be inspected and prevented by the WAF engine.
+> [!WARNING]
+> When a request matches a custom rule, the WAF engine stops processing the request. Managed rules won't be processed for this request and neither will other custom rules with a lower priority.
-![Log](../media/waf-front-door-tuning/custom-rule.png)
+The following example shows a custom rule with two conditions. The first condition looks for the `comment` value in the request body. The second condition looks for the `/api/Feedbacks/` value in the request URI.
-When exploring the log, you can see that the `ruleName_s` field contains the name given to the custom rule we created: `redirectcomment`. In the `action_s` field, you can see that the *Redirect* action was taken for this event. In the `details_matches_s` field, we can see the details for both conditions were matched.
+By using a custom rule, you can be the most granular so that you can fine-tune your WAF rules and deal with false positives. In this case, you're not taking action only based on the `comment` request body value, which could exist across multiple sites or apps under the same WAF policy.
-### Disabling rules
+When you include another condition to also match on a particular request URI `/api/Feedbacks/`, you ensure this custom rule truly applies to this explicit use case that you vetted out. In this way, the same attack, if performed against different conditions, is still inspected and prevented by the WAF engine.
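As a rough sketch, the same two-condition custom rule could be created with the Azure CLI. The policy and resource group names here (`MyWafPolicy`, `MyResourceGroup`) are placeholders, and the exact parameter names may differ by CLI version; check the `az network front-door waf-policy rule` reference before running.

```azurecli
# Create the custom rule shell with the Redirect action.
# --defer caches the object locally until the last command posts it.
az network front-door waf-policy rule create \
  --policy-name MyWafPolicy \
  --resource-group MyResourceGroup \
  --name redirectcomment \
  --priority 1 \
  --rule-type MatchRule \
  --action Redirect \
  --defer

# First condition: the request body contains the string "comment".
az network front-door waf-policy rule match-condition add \
  --policy-name MyWafPolicy \
  --resource-group MyResourceGroup \
  --name redirectcomment \
  --match-variable RequestBody \
  --operator Contains \
  --values "comment" \
  --defer

# Second condition: the request URI contains /api/Feedbacks/.
az network front-door waf-policy rule match-condition add \
  --policy-name MyWafPolicy \
  --resource-group MyResourceGroup \
  --name redirectcomment \
  --match-variable RequestUri \
  --operator Contains \
  --values "/api/Feedbacks/"
```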
-Another way to get around a false positive is to disable the rule that matched on the input the WAF thought was malicious. Since you've parsed the WAF logs and have narrowed the rule down to 942110, you can disable it in the Azure portal. See [Customize Web Application Firewall rules using the Azure portal](../ag/application-gateway-customize-waf-rules-portal.md#disable-rule-groups-and-rules).
-
-Disabling a rule is a benefit when you are sure that all requests meeting that specific condition are in fact legitimate requests, or when you are sure the rule simply does not apply to your environment (such as, disabling a SQL injection rule because you have non-SQL backends).
-
-However, disabling a rule is a global setting that applies to all frontend hosts associated to the WAF policy. When you choose to disable a rule, you may be leaving vulnerabilities exposed without protection or detection for any other frontend hosts associated to the WAF policy.
-
-If you want to use Azure PowerShell to disable a managed rule, see the [`PSAzureManagedRuleOverride`](/powershell/module/az.frontdoor/new-azfrontdoorwafmanagedruleoverrideobject) object documentation. If you want to use Azure CLI, see the [`az network front-door waf-policy managed-rules override`](/cli/azure/network/front-door/waf-policy/managed-rules/override) documentation.
+![Screenshot that shows a log.](../media/waf-front-door-tuning/custom-rule.png)
-![WAF rules](../media/waf-front-door-tuning/waf-rules.png)
+When you explore the log, you can see that the `ruleName_s` field contains the name given to the custom rule `redirectcomment`. In the `action_s` field, you can see that the **Redirect** action was taken for this event. In the `details_matches_s` field, you can see the details for both conditions were matched.
+
+### Disable rules
+
+Another way to get around a false positive is to disable the rule that matched the input the WAF thought was malicious. Because you parsed the WAF logs and narrowed the rule down to 942110, you can disable it in the Azure portal. For more information, see [Customize Azure Web Application Firewall rules by using the Azure portal](../ag/application-gateway-customize-waf-rules-portal.md#disable-rule-groups-and-rules).
+
+Disabling a rule is a benefit when you're sure that all requests meeting that specific condition are legitimate requests, or when you're sure the rule doesn't apply to your environment (such as disabling a SQL injection rule because you have non-SQL back ends).
+
+Disabling a rule is a global setting that applies to all front-end hosts associated to the WAF policy. When you choose to disable a rule, you might be leaving vulnerabilities exposed without protection or detection for any other front-end hosts associated to the WAF policy.
+
+If you want to use Azure PowerShell to disable a managed rule, see the [`PSAzureManagedRuleOverride`](/powershell/module/az.frontdoor/new-azfrontdoorwafmanagedruleoverrideobject) object documentation. If you want to use the Azure CLI, see the [`az network front-door waf-policy managed-rules override`](/cli/azure/network/front-door/waf-policy/managed-rules/override) documentation.
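For illustration, disabling rule 942110 from the Azure CLI might look like the following sketch. It assumes the `Microsoft_DefaultRuleSet` managed rule set and the `SQLI` rule group, with placeholder policy and resource group names; verify the rule set type and group name against your policy before running.

```azurecli
# Sketch: disable managed rule 942110 for the whole WAF policy.
# Remember this applies to all front-end hosts associated to the policy.
az network front-door waf-policy managed-rules override add \
  --policy-name MyWafPolicy \
  --resource-group MyResourceGroup \
  --type Microsoft_DefaultRuleSet \
  --rule-group-id SQLI \
  --rule-id 942110 \
  --disabled true
```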
+
+![Screenshot that shows WAF rules.](../media/waf-front-door-tuning/waf-rules.png)
> [!TIP]
-> It's a good idea to document any changes you make to your WAF policy. Include example requests to illustrate the false positive detection, and clearly explain why you added a custom rule, disabled a rule or ruleset, or added an exception. This documentation can be helpful if you redesign your application in the future and need to verify that your changes are still valid. It can also help if you are ever audited or need to justify why you have reconfigured the WAF policy from its default settings.
+> Document any changes you make to your WAF policy. Include example requests to illustrate the false positive detection. Explain why you added a custom rule, disabled a rule or rule set, or added an exception. If you redesign your application in the future, you might need to verify that your changes are still valid. Or you might be audited or need to justify why you reconfigured the WAF policy from its default settings.
+
+## Find request fields
-## Finding request fields
+By using a browser proxy like [Fiddler](https://www.telerik.com/fiddler), you can inspect individual requests and determine what specific fields of a webpage are called. This technique is helpful when you need to exclude certain fields from inspection by using exclusion lists in the WAF.
-Using a browser proxy like [Fiddler](https://www.telerik.com/fiddler), you can inspect individual requests and determine what specific fields of a web page are called. This is helpful when we need to exclude certain fields from inspection using exclusion lists in WAF.
+### Find request attribute names
-### Finding request attribute names
-
-In this example, you can see the field where the `1=1` string was entered is called `comment`. This data was passed in the body of a POST request.
+In this example, the field where the `1=1` string was entered is called `comment`. This data was passed in the body of a POST request.
-![Fiddler request showing body](../media/waf-front-door-tuning/fiddler-request-attribute-name.png)
+![Screenshot that shows the body of a Fiddler request.](../media/waf-front-door-tuning/fiddler-request-attribute-name.png)
-This is a field you can exclude. To learn more about exclusion lists, See [Web application firewall exclusion lists](./waf-front-door-exclusion.md). You can exclude the evaluation in this case by configuring the following exclusion:
+You can exclude this field. To learn more about exclusion lists, see [Web application firewall exclusion lists](./waf-front-door-exclusion.md). You can exclude the evaluation in this case by configuring the following exclusion:
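The exclusion shown in the screenshot could be sketched with the Azure CLI as follows, assuming the `Microsoft_DefaultRuleSet` managed rule set and placeholder policy and resource group names. A policy-wide exclusion like this applies to all rules in the rule set; you can also scope it to a rule group or rule.

```azurecli
# Sketch: skip managed-rule inspection of the "comment" request body post arg.
az network front-door waf-policy managed-rules exclusion add \
  --policy-name MyWafPolicy \
  --resource-group MyResourceGroup \
  --type Microsoft_DefaultRuleSet \
  --match-variable RequestBodyPostArgNames \
  --operator Equals \
  --value comment
```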
-![Exclusion rule](../media/waf-front-door-tuning/fiddler-request-attribute-name-exclusion.png)
+![Screenshot that shows an exclusion rule.](../media/waf-front-door-tuning/fiddler-request-attribute-name-exclusion.png)
-You can also examine the firewall logs to get the information to see what you need to add to the exclusion list. To enable logging, see [Monitoring metrics and logs in Azure Front Door](./waf-front-door-monitor.md).
+You can also examine the firewall logs to get the information to see what you need to add to the exclusion list. To enable logging, see [Monitor metrics and logs in Azure Front Door](./waf-front-door-monitor.md).
::: zone pivot="front-door-standard-premium"
-Examine the firewall log in the `PT1H.json` file for the hour that the request you want to inspect occurred. `PT1H.json` files are available in the storage account containers where the `FrontDoorWebApplicationFirewallLog` and the `FrontDoorAccessLog` diagnostic logs are stored.
+Examine the firewall log in the `PT1H.json` file for the hour that the request you want to inspect occurred. The `PT1H.json` files are available in the storage account containers where the `FrontDoorWebApplicationFirewallLog` and the `FrontDoorAccessLog` diagnostic logs are stored.
::: zone-end
::: zone pivot="front-door-classic"
-Examine the firewall log in the `PT1H.json` file for the hour that the request you want to inspect occurred. `PT1H.json` files are available in the storage account containers where the `FrontdoorWebApplicationFirewallLog` and the `FrontdoorAccessLog` diagnostic logs are stored.
+Examine the firewall log in the `PT1H.json` file for the hour that the request you want to inspect occurred. The `PT1H.json` files are available in the storage account containers where the `FrontdoorWebApplicationFirewallLog` and the `FrontdoorAccessLog` diagnostic logs are stored.
::: zone-end
-In this example, you can see the rule that blocked the request (with the same Transaction Reference) and occurred at the exact same time:
+In this example, you can see the rule that blocked the request (with the same Transaction Reference) and that occurred at the same time.
::: zone pivot="front-door-standard-premium"
In this example, you can see the rule that blocked the request (with the same Tr
::: zone-end
-With your knowledge of how the Azure-managed rule sets work (see [Web Application Firewall on Azure Front Door](afds-overview.md)) you know that the rule with the *action: Block* property is blocking based on the data matched in the request body. You can see in the details that it matched a pattern (`1=1`), and the field is named `comment`. Follow the same previous steps to exclude the request body post args name that contains `comment`.
+With your knowledge of how the Azure-managed rule sets work, you know that the rule with the `action: Block` property is blocking based on the data matched in the request body. (For more information, see [Azure Web Application Firewall in Azure Front Door](afds-overview.md).) You can see in the details that it matched a pattern (`1=1`) and the field is named `comment`. Follow the same previous steps to exclude the request body post args name that contains `comment`.
-### Finding request header names
+### Find request header names
-Fiddler is a useful tool once again to find request header names. In the following screenshot, you can see the headers for this GET request, which include Content-Type, User-Agent, and so on. You can also use request headers to create exclusions and custom rules in WAF.
+Fiddler is a useful tool to find request header names. The following screenshot shows the headers for this GET request, which include `Content-Type` and `User-Agent`. You can also use request headers to create exclusions and custom rules in the WAF.
-![Fiddler request showing header](../media/waf-front-door-tuning/fiddler-request-header-name.png)
+![Screenshot that shows the header of a Fiddler request.](../media/waf-front-door-tuning/fiddler-request-header-name.png)
-Another way to view request and response headers is to look inside the developer tools of your browser, such as Edge or Chrome. You can press F12 or right-click -> **Inspect** -> **Developer Tools**, and select the **Network** tab. Load a web page, and click the request you want to inspect.
+Another way to view request and response headers is to look inside the developer tools of your browser, such as Microsoft Edge or Chrome. You can select F12 or right-click **Inspect** > **Developer Tools**. Select the **Network** tab. Load a webpage and select the request you want to inspect.
-![Network inspector request](../media/waf-front-door-tuning/network-inspector-request.png)
+![Screenshot that shows a Network inspector request.](../media/waf-front-door-tuning/network-inspector-request.png)
-### Finding request cookie names
+### Find request cookie names
-If the request contains cookies, the Cookies tab can be selected to view them in Fiddler. Cookie information can also be used to create exclusions or custom rules in WAF.
+If the request contains cookies, select the **Cookies** tab to view them in Fiddler. Cookie information can also be used to create exclusions or custom rules in the WAF.
## Anomaly scoring rule
-If you see rule ID 949110 during the process of tuning your WAF, this indicates that the request was blocked by the [anomaly scoring](waf-front-door-drs.md#anomaly-scoring-mode) process.
+If you see rule ID 949110 during the process of tuning your WAF, its presence indicates that the request was blocked by the [anomaly scoring](waf-front-door-drs.md#anomaly-scoring-mode) process.
-Review the other WAF log entries for the same request, by searching for the log entries with the same tracking reference. Look at each of the rules that were triggered, and tune each rule by following the guidance throughout this article.
+Review the other WAF log entries for the same request by searching for the log entries with the same tracking reference. Look at each of the rules that were triggered. Tune each rule by following the guidance in this article.
## Next steps
-- Learn about [Azure web application firewall](../overview.md).
-- Learn how to [create a Front Door](../../frontdoor/quickstart-create-front-door.md).
+- Learn about [Azure Web Application Firewall](../overview.md).
+- Learn how to [create an instance of Azure Front Door](../../frontdoor/quickstart-create-front-door.md).