Updates from: 04/28/2023 01:11:27
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Enable Authentication Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-python-web-app.md
This article uses [Python 3.9+](https://www.python.org/) and [Flask 2.1](https:/
# [macOS](#tab/macos)
- ```bash
+ ```zsh
python3 -m venv .venv
source .venv/bin/activate
```

# [Windows](#tab/windows)
- ```bash
+ ```cmd
py -3 -m venv .venv
.venv\scripts\activate
```
This article uses [Python 3.9+](https://www.python.org/) and [Flask 2.1](https:/
1. Update pip in the virtual environment by running the following command in the terminal:
- ```bash
+ ```
python -m pip install --upgrade pip
```
This article uses [Python 3.9+](https://www.python.org/) and [Flask 2.1](https:/
# [macOS](#tab/macos)
- ```bash
+ ```zsh
export FLASK_ENV=development
```

# [Windows](#tab/windows)
- ```bash
+ ```cmd
set FLASK_ENV=development
```
python -m pip install -r requirements.txt
# [macOS](#tab/macos)
-```bash
+```zsh
python -m pip install -r requirements.txt
```

# [Windows](#tab/windows)
-```bash
+```cmd
py -m pip install -r requirements.txt
```
python -m flask run --host localhost --port 5000
# [macOS](#tab/macos)
-```bash
+```zsh
python -m flask run --host localhost --port 5000
```

# [Windows](#tab/windows)
-```bash
+```cmd
py -m flask run --host localhost --port 5000
```
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md
You must be assigned one of the following roles to view or manage device setting
- **Additional local administrators on Azure AD joined devices**: This setting allows you to select the users who are granted local administrator rights on a device. These users are added to the Device Administrators role in Azure AD. Global Administrators in Azure AD and device owners are granted local administrator rights by default. This option is a premium edition capability available through products like Azure AD Premium and Enterprise Mobility + Security.
+- **Enable Azure AD Local Administrator Password Solution (LAPS) (preview)**: LAPS manages local account passwords on Windows devices and provides a way to securely manage and retrieve the built-in local administrator password. With the cloud version of LAPS, customers can enable storage and rotation of local administrator passwords for both Azure AD joined and hybrid Azure AD joined devices. To learn how to manage LAPS in Azure AD, see [the overview article](howto-manage-local-admin-passwords.md).
- **Restrict non-admin users from recovering the BitLocker key(s) for their owned devices (preview)**: In this preview, admins can block self-service BitLocker key access to the registered owner of the device. Default users without the BitLocker read permission will be unable to view or copy their BitLocker key(s) for their owned devices.
active-directory Troubleshoot Mac Sso Extension Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-mac-sso-extension-plugin.md
Use the following steps to check the operating system (OS) version on the macOS
1. From the macOS device, open Terminal from the **Applications** -> **Utilities** folder.
1. When the Terminal opens, type **sw_vers** at the prompt, and look for a result like the following:
- ```bash
+ ```zsh
% sw_vers
ProductName: macOS
ProductVersion: 13.0.1
Once deployed the **Microsoft Enterprise SSO Extension for Apple devices** suppo
1. When the **Spotlight Search** appears, type **Terminal** and hit **return**.
1. When the Terminal opens, type **`osascript -e 'id of app "<appname>"'`** at the prompt. Some examples follow:
- ```bash
+ ```zsh
% osascript -e 'id of app "Safari"'
com.apple.Safari
```
During troubleshooting it may be useful to reproduce a problem while tailing the
1. When the **Spotlight Search** appears, type **Terminal** and hit **return**.
1. When the Terminal opens, type:
- ```bash
+ ```zsh
tail -F ~/Library/Containers/com.microsoft.CompanyPortalMac.ssoextension/Data/Library/Caches/Logs/Microsoft/SSOExtension/*
```

> [!NOTE]
> The trailing /* indicates that multiple logs will be tailed should any exist.
- ```
+ ```output
% tail -F ~/Library/Containers/com.microsoft.CompanyPortalMac.ssoextension/Data/Library/Caches/Logs/Microsoft/SSOExtension/*
==> /Users/<username>/Library/Containers/com.microsoft.CompanyPortalMac.ssoextension/Data/Library/Caches/Logs/Microsoft/SSOExtension/SSOExtension 2022-12-25--13-11-52-855.log <==
2022-12-29 14:49:59:281 | I | TID=783491 MSAL 1.2.4 Mac 13.0.1 [2022-12-29 19:49:59] Handling SSO request, requested operation:
active-directory Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/code-samples.md
You can bulk-invite external users to an organization from email addresses that
3. Sign in to your tenant
- ```powershell
+ ```azurepowershell-interactive
$cred = Get-Credential
Connect-AzureAD -Credential $cred
```

4. Run the PowerShell cmdlet
- ```powershell
+ ```azurepowershell-interactive
$invitations = import-csv C:\data\invitations.csv
$messageInfo = New-Object Microsoft.Open.MSGraph.Model.InvitedUserMessageInfo
$messageInfo.customizedMessageBody = "Hey there! Check this out. I created an invitation through PowerShell"
active-directory Multi Tenant User Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-scenarios.md
This scenario requires automatic synchronization and identity management to conf
This section describes three techniques for automating account provisioning in the automated scenario.
-#### Technique 1: Use the [built-in cross-tenant synchronization capability in Azure AD](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/seamless-application-access-and-lifecycle-management-for-multi/ba-p/3728752)
+#### Technique 1: Use the [built-in cross-tenant synchronization capability in Azure AD](../multi-tenant-organizations/cross-tenant-synchronization-overview.md)
This approach only works when all tenants that you need to synchronize are in the same cloud instance (such as Commercial to Commercial).

#### Technique 2: Provision accounts with Microsoft Identity Manager
-Use an external Identity and Access Management (IAM) solution such as [Microsoft Identity Manager](https://microsoft.sharepoint-df.com/microsoft-identity-manager/microsoft-identity-manager-2016) (MIM) as a synchronization engine.
+Use an external Identity and Access Management (IAM) solution such as [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) (MIM) as a synchronization engine.
This advanced deployment uses MIM as a synchronization engine. MIM calls the [Microsoft Graph API](https://developer.microsoft.com/graph) and [Exchange Online PowerShell](/powershell/exchange/exchange-online/exchange-online-powershell?view=exchange-ps&preserve-view=true). Alternative implementations can include the cloud-hosted [Active Directory Synchronization Service](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) (ADSS) managed service offering from [Microsoft Industry Solutions](https://www.microsoft.com/industrysolutions). You can also build a solution from scratch with non-Microsoft IAM offerings (such as SailPoint, Omada, and Okta).
active-directory Multilateral Federation Solution One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multilateral-federation-solution-one.md
The following are some of the trade-offs of using this solution:
* **Subscription required for Cirrus Bridge** - An annual subscription is required for the Cirrus Bridge. The subscription fee is based on anticipated annual authentication usage of the bridge.
+## Migration resources
+
+The following are resources to help with your migration to this solution architecture.
+
+| Migration Resource | Description |
+| - | - |
+| [Resources for migrating applications to Azure Active Directory (Azure AD)](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure AD |
+| [Azure AD custom claims provider](../develop/custom-claims-provider-overview.md) | This article provides an overview of the Azure AD custom claims provider |
+| [Custom security attributes documentation](../fundamentals/custom-security-attributes-manage.md) | This article describes how to manage access to custom security attributes |
+| [Azure AD SSO integration with Cirrus Identity Bridge](../saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md) | Tutorial to integrate Cirrus Identity Bridge for Azure AD with Azure AD |
+| [Cirrus Identity Bridge Overview](https://blog.cirrusidentity.com/documentation/azure-bridge-setup-rev-6.0) | Link to the documentation for the Cirrus Identity Bridge |
+| [Azure MFA deployment considerations](../authentication/howto-mfa-getstarted.md) | Link to guidance for configuring multi-factor authentication (MFA) using Azure AD |
+
## Next steps

See these other multilateral federation articles:
active-directory Multilateral Federation Solution Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multilateral-federation-solution-three.md
The following are some of the trade-offs of using this solution:
* **Significant ongoing staff allocation** - IT staff must maintain infrastructure and software for the authentication solution. Any staff attrition might introduce risk.
+## Migration resources
+
+The following are resources to help with your migration to this solution architecture.
+
+| Migration Resource | Description |
+| - | - |
+| [Resources for migrating applications to Azure Active Directory (Azure AD)](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure AD |
+
## Next steps

See these related multilateral federation articles:
active-directory Multilateral Federation Solution Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multilateral-federation-solution-two.md
The following are some of the trade-offs of using this solution:
denominator (optimize for security controls, but at the expense of user friction) with limited ability to make granular decisions.
+## Migration resources
+
+The following are resources to help with your migration to this solution architecture.
+
+| Migration Resource | Description |
+| - | - |
+| [Resources for migrating applications to Azure Active Directory (Azure AD)](../manage-apps/migration-resources.md) | List of resources to help you migrate application access and authentication to Azure AD |
+| [Configuring Shibboleth as SAML Proxy](https://shibboleth.atlassian.net/wiki/spaces/KB/pages/1467056889/Using+SAML+Proxying+in+the+Shibboleth+IdP+to+connect+with+Azure+AD) | Link to a Shibboleth article that describes how to use the SAML proxying feature to connect Shibboleth IdP to Azure AD |
+| [Azure MFA deployment considerations](../authentication/howto-mfa-getstarted.md) | Link to guidance for configuring multi-factor authentication (MFA) using Azure AD |
+
## Next steps

See these other multilateral federation articles:
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
To use Azure PowerShell locally for this article instead of using Cloud Shell:
1. Sign in to Azure.
- ```azurepowershell
+ ```azurepowershell-interactive
Connect-AzAccount
```

1. Install the [latest version of PowerShellGet](/powershell/gallery/powershellget/install-powershellget).
- ```azurepowershell
+ ```azurepowershell-interactive
Install-Module -Name PowerShellGet -AllowPrerelease
```
To use Azure PowerShell locally for this article instead of using Cloud Shell:
1. Install the prerelease version of the `Az.ManagedServiceIdentity` module to perform the user-assigned managed identity operations in this article.
- ```azurepowershell
+ ```azurepowershell-interactive
Install-Module -Name Az.ManagedServiceIdentity -AllowPrerelease
```
In this article, you learn how to create, list, and delete a user-assigned manag
1. If you're running locally, sign in to Azure through the Azure CLI.
- ```
+ ```azurecli-interactive
az login
```
active-directory Qs Configure Cli Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md
To assign a user-assigned identity to a VM during its creation, your account nee
3. Create a VM using [az vm create](/cli/azure/vm/#az-vm-create). The following example creates a VM associated with the new user-assigned identity, as specified by the `--assign-identity` parameter, with the specified `--role` and `--scope`. Be sure to replace the `<RESOURCE GROUP>`, `<VM NAME>`, `<USER NAME>`, `<PASSWORD>`, `<USER ASSIGNED IDENTITY NAME>`, `<ROLE>`, and `<SUBSCRIPTION>` parameter values with your own values.

```azurecli-interactive
- az vm create --resource-group <RESOURCE GROUP> --name <VM NAME> --image UbuntuLTS --admin-username <USER NAME> --admin-password <PASSWORD> --assign-identity <USER ASSIGNED IDENTITY NAME> --role <ROLE> --scope <SUBSCRIPTION>
+ az vm create --resource-group <RESOURCE GROUP> --name <VM NAME> --image <SKU Linux Image> --admin-username <USER NAME> --admin-password <PASSWORD> --assign-identity <USER ASSIGNED IDENTITY NAME> --role <ROLE> --scope <SUBSCRIPTION>
```

### Assign a user-assigned managed identity to an existing Azure VM
active-directory Qs Configure Cli Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vmss.md
This section walks you through creation of a virtual machine scale set and assig
3. [Create](/cli/azure/vmss/#az-vmss-create) a virtual machine scale set. The following example creates a virtual machine scale set associated with the new user-assigned managed identity, as specified by the `--assign-identity` parameter, with the specified `--role` and `--scope`. Be sure to replace the `<RESOURCE GROUP>`, `<VMSS NAME>`, `<USER NAME>`, `<PASSWORD>`, `<USER ASSIGNED IDENTITY>`, `<ROLE>`, and `<SUBSCRIPTION>` parameter values with your own values.

```azurecli-interactive
- az vmss create --resource-group <RESOURCE GROUP> --name <VMSS NAME> --image UbuntuLTS --admin-username <USER NAME> --admin-password <PASSWORD> --assign-identity <USER ASSIGNED IDENTITY> --role <ROLE> --scope <SUBSCRIPTION>
+ az vmss create --resource-group <RESOURCE GROUP> --name <VMSS NAME> --image <SKU Linux Image> --admin-username <USER NAME> --admin-password <PASSWORD> --assign-identity <USER ASSIGNED IDENTITY> --role <ROLE> --scope <SUBSCRIPTION>
```

### Assign a user-assigned managed identity to an existing virtual machine scale set
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
The user assigned managed identity should be specified using its [resourceID](./
# [Azure CLI](#tab/azure-cli)

```azurecli
-az vm create --resource-group <MyResourceGroup> --name <myVM> --image UbuntuLTS --admin-username <USER NAME> --admin-password <PASSWORD> --assign-identity <USER ASSIGNED IDENTITY NAME>
+az vm create --resource-group <MyResourceGroup> --name <myVM> --image <SKU Linux Image> --admin-username <USER NAME> --admin-password <PASSWORD> --assign-identity <USER ASSIGNED IDENTITY NAME>
```

[Configure managed identities for Azure resources on a VM using the Azure CLI](qs-configure-cli-windows-vm.md#user-assigned-managed-identity)
active-directory Cross Tenant Synchronization Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md
$smssignin = Get-MgUserAuthenticationPhoneMethod -UserId $userId
Users in scope fail to provision. The provisioning log details include the following error message:

```
-The provisioning service was forbidden from performing an operation on Azure Active Directory, which is unusual.
-A simultaneous change to the target object may have occurred, in which case, the operation might succeed when it is retried.
-Alternatively, the target of the operation, or one of its properties, may be mastered on-premises, in which case,
-the provisioning service is not permitted to update it, and the corresponding source entry should be removed from the provisioning service's scope.
-Otherwise, authorizations may have been customized in such a way as to prevent the provisioning service from modifying the target object or one of its properties;
-if so, then, again, the corresponding source entry should be removed from scope.
-This operation was retried 0 times.
+Guest invitations not allowed for your company. Contact your company administrator for more details.
```

**Cause**
active-directory Alvao Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alvao-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and ALVAO](../app-provisioning/customize-application-attributes.md).

## Step 2. Configure ALVAO to support provisioning with Azure AD
-Contact ALVAO support to configure ALVAO to support provisioning with Azure AD.
+1. Find your **Tenant SCIM Endpoint URL**, which is in the form `{ALVAO REST API address}/scim`, for example, `https://app.contoso.com/alvaorestapi/scim`.
+1. Generate a new **Secret Token** in **WebApp - Administration - Settings - [Active Directory and Azure Active Directory](https://doc.alvao.com/en/11.1/list-of-windows/alvao-webapp/administration/settings/activedirectory)** and copy its value.
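The endpoint from step 1 is a standard SCIM 2.0 endpoint, so provisioning requests target resource paths such as `/Users`. As an illustration only (the helper is hypothetical, and the `/Users` path and attribute names come from the SCIM specification, not from ALVAO documentation), a minimal sketch of the URL and user payload a SCIM client would send:

```python
def build_scim_user_request(rest_api_address: str, user_name: str) -> tuple[str, dict]:
    """Build the SCIM /Users URL and a minimal SCIM 2.0 user payload."""
    url = f"{rest_api_address.rstrip('/')}/scim/Users"
    payload = {
        # Core user schema URN from RFC 7643.
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "active": True,
    }
    return url, payload

# Example using the address pattern from the article.
url, payload = build_scim_user_request("https://app.contoso.com/alvaorestapi", "b.simon@contoso.com")
print(url)  # https://app.contoso.com/alvaorestapi/scim/Users
```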
## Step 3. Add ALVAO from the Azure AD application gallery
This section guides you through the steps to configure the Azure AD provisioning
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
+ > [!NOTE]
+ > For advanced settings, see:
+ > * [Mapping SCIM attributes to user fields](https://doc.alvao.com/en/11.1/alvao-asset-management/implementation/users/authentication/aad/provisioning/person-attribute-mapping)
+ > * [Mapping SCIM attributes to object properties](https://doc.alvao.com/en/11.1/alvao-asset-management/implementation/users/authentication/aad/provisioning/object-attribute-mapping)
1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to ALVAO**.
active-directory Code42 Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/code42-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* A Code42 tenant with Identity Management enabled.
* A Code42 user account with [Customer Cloud Admin](https://support.code42.com/Administrator/Cloud/Monitoring_and_managing/Roles_reference#Customer_Cloud_Admin) permission.
-> [!NOTE]
-> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
-
## Step 1. Plan your provisioning deployment
1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Locus Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/locus-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Locus
+description: Learn how to configure single sign-on between Azure Active Directory and Locus.
+
+Last updated: 04/26/2023
+
+# Azure Active Directory SSO integration with Locus
+
+In this article, you learn how to integrate Locus with Azure Active Directory (Azure AD). Locus is a real-world ready dispatch management platform for last-mile excellence. When you integrate Locus with Azure AD, you can:
+
+* Control in Azure AD who has access to Locus.
+* Enable your users to be automatically signed-in to Locus with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Locus in a test environment. Locus supports **SP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Locus, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Locus single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Locus application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Locus from the Azure AD gallery
+
+Add Locus from the Azure AD application gallery to configure single sign-on with Locus. For more information on how to add an application from the gallery, see [Quickstart: Add an application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Locus** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:locus-aws-us-east-1:<ConnectionName>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://accounts.locus-dashboard.com/login/callback?connection=<ConnectionName>`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<ClientId>.locus-dashboard.com/#/login/sso?clientId=<ClientId>&connection=<ConnectionName>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Locus Client support team](mailto:platform-oncall@locus.sh) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
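Once the Locus support team provides the real values, the three patterns above expand mechanically from two inputs. A hypothetical sketch of that substitution (the helper and the `contoso` values are invented placeholders, not real tenant values):

```python
def build_locus_urls(connection_name: str, client_id: str) -> dict:
    """Fill the Basic SAML Configuration patterns shown above."""
    return {
        "identifier": f"urn:auth0:locus-aws-us-east-1:{connection_name}",
        "reply_url": f"https://accounts.locus-dashboard.com/login/callback?connection={connection_name}",
        "sign_on_url": (f"https://{client_id}.locus-dashboard.com/#/login/sso"
                        f"?clientId={client_id}&connection={connection_name}"),
    }

# Invented example values for illustration only.
urls = build_locus_urls("contoso-connection", "contoso")
print(urls["identifier"])  # urn:auth0:locus-aws-us-east-1:contoso-connection
```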
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+## Configure Locus SSO
+
+To configure single sign-on on the **Locus** side, you need to send the **App Federation Metadata Url** to the [Locus support team](mailto:platform-oncall@locus.sh). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Locus test user
+
+In this section, you create a user called Britta Simon at Locus. Work with [Locus support team](mailto:platform-oncall@locus.sh) to add the users in the Locus platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. You're redirected to the Locus Sign-on URL, where you can initiate the login flow.
+
+* Go to the Locus Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Locus tile in My Apps, you're redirected to the Locus Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Locus you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Proactis Rego Invoice Capture Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proactis-rego-invoice-capture-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Proactis Rego Invoice Capture
+description: Learn how to configure single sign-on between Azure Active Directory and Proactis Rego Invoice Capture.
+
+Last updated: 04/26/2023
+
+# Azure Active Directory SSO integration with Proactis Rego Invoice Capture
+
+In this article, you learn how to integrate Proactis Rego Invoice Capture with Azure Active Directory (Azure AD). With Proactis AP automation, you can capture all invoices and convert them into eInvoices, validate their accuracy, check for duplicates and a valid supplier, and then transfer them into your finance system. When you integrate Proactis Rego Invoice Capture with Azure AD, you can:
+
+* Control in Azure AD who has access to Proactis Rego Invoice Capture.
+* Enable your users to be automatically signed-in to Proactis Rego Invoice Capture with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Proactis Rego Invoice Capture in a test environment. Proactis Rego Invoice Capture supports **SP** and **IDP** initiated single sign-on.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Proactis Rego Invoice Capture, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Proactis Rego Invoice Capture single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Proactis Rego Invoice Capture application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Proactis Rego Invoice Capture from the Azure AD gallery
+
+Add Proactis Rego Invoice Capture from the Azure AD application gallery to configure single sign-on with Proactis Rego Invoice Capture. For more information on how to add an application from the gallery, see [Quickstart: Add an application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Proactis Rego Invoice Capture** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type one of the following URLs:
+
+ | **Identifier** |
+ |-|
+ | `https://eu-p5.proactiscloud.com` |
+ | `https://eu-p5-uat.proactiscloud.com` |
+ | `https://us-p5-icmanaged.proactiscloud.com` |
+ | `https://us-p5-icmanageduat.proactiscloud.com` |
+ | `https://hosted.proactiscapture.com` |
+ | `https://hosteduat.proactiscapture.com` |
+ | `https://managed.proactiscapture.com` |
+ | `https://manageduat.proactiscapture.com` |
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://manageduat.proactiscapture.com/SSO/<CustomerName>/AssertionConsumerService` |
+ | `https://managed.proactiscapture.com/SSO/<CustomerName>/AssertionConsumerService` |
+ | `https://hosteduat.proactiscapture.com/SSO/<CustomerName>/AssertionConsumerService` |
+ | `https://hosted.proactiscapture.com/SSO/<CustomerName>/AssertionConsumerService` |
+ | `https://us-p5-icmanageduat.proactiscloud.com/SSO/<CustomerName>/AssertionConsumerService` |
+ | `https://us-p5-icmanaged.proactiscloud.com/SSO/<CustomerName>/AssertionConsumerService` |
+ | `https://eu-p5-uat.proactiscloud.com/SSO/<CustomerName>/AssertionConsumerService` |
+ | `https://eu-p5.proactiscloud.com/SSO/<CustomerName>/AssertionConsumerService` |
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using one of the following patterns:
+
+ | **Sign on URL** |
+ |-|
+ | `https://manageduat.proactiscapture.com/SSO/<CustomerName>` |
+ | `https://managed.proactiscapture.com/SSO/<CustomerName>` |
+ | `https://hosteduat.proactiscapture.com/SSO/<CustomerName>`|
+ | `https://hosted.proactiscapture.com/SSO/<CustomerName>` |
+ | `https://us-p5-icmanageduat.proactiscloud.com/SSO/<CustomerName>` |
+ | `https://us-p5-icmanaged.proactiscloud.com/SSO/<CustomerName>` |
+ | `https://eu-p5-uat.proactiscloud.com/SSO/<CustomerName>` |
+ | `https://eu-p5.proactiscloud.com/SSO/<CustomerName>` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Proactis Rego Invoice Capture Client support team](mailto:support@proactis.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificate-base64-download.png "Certificate")
+
+1. On the **Set up Proactis Rego Invoice Capture** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
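
The Identifier, Sign on URL, and Reply URL values above follow one pattern per hosting environment: the Sign on URL appends `/SSO/<CustomerName>` to the base host, and the Reply URL appends `/AssertionConsumerService` to that. A minimal sketch of the relationship, using the hypothetical customer name `Contoso` (substitute the values supplied by the Proactis support team):

```python
# Sketch of how the Proactis Rego Invoice Capture SAML URLs relate.
# "Contoso" is a hypothetical customer name, not a real value.
def build_proactis_urls(base_host: str, customer_name: str) -> dict:
    """Derive the Sign on URL and Reply URL from the Identifier host."""
    sign_on_url = f"{base_host}/SSO/{customer_name}"
    return {
        "identifier": base_host,
        "sign_on_url": sign_on_url,
        "reply_url": f"{sign_on_url}/AssertionConsumerService",
    }

urls = build_proactis_urls("https://managed.proactiscapture.com", "Contoso")
print(urls["reply_url"])
```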
+
+## Configure Proactis Rego Invoice Capture SSO
+
+To configure single sign-on on the **Proactis Rego Invoice Capture** side, you need to send the **Certificate (PEM)** and the appropriate copied URLs from the Azure portal to the [Proactis Rego Invoice Capture support team](mailto:support@proactis.com). The support team configures this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Proactis Rego Invoice Capture test user
+
+In this section, you create a user called Britta Simon at Proactis Rego Invoice Capture. Work with [Proactis Rego Invoice Capture support team](mailto:support@proactis.com) to add the users in the Proactis Rego Invoice Capture platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Proactis Rego Invoice Capture Sign-on URL, where you can initiate the login flow.
+
+* Go to the Proactis Rego Invoice Capture Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Proactis Rego Invoice Capture instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Proactis Rego Invoice Capture tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the Proactis Rego Invoice Capture instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Proactis Rego Invoice Capture, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Securetransport Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/securetransport-tutorial.md
+
+ Title: Azure Active Directory SSO integration with SecureTransport
+description: Learn how to configure single sign-on between Azure Active Directory and SecureTransport.
++++++++ Last updated : 04/26/2023++++
+# Azure Active Directory SSO integration with SecureTransport
+
+In this article, you learn how to integrate SecureTransport with Azure Active Directory (Azure AD). SecureTransport is a highly scalable and resilient multi-protocol MFT gateway, with fault tolerance and high availability to meet the critical file transfer needs of any small or large organization. When you integrate SecureTransport with Azure AD, you can:
+
+* Control in Azure AD who has access to SecureTransport.
+* Enable your users to be automatically signed-in to SecureTransport with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for SecureTransport in a test environment. SecureTransport supports **SP** initiated single sign-on.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with SecureTransport, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* SecureTransport single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the SecureTransport application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add SecureTransport from the Azure AD gallery
+
+Add SecureTransport from the Azure AD application gallery to configure single sign-on with SecureTransport. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **SecureTransport** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type one of the following values:
+
+ | User type | value |
+ |-|-|
+ | Admin | `st.sso.admin`|
+ | End-user | `st.sso.enduser` |
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | User type | URL |
+ |-|-|
+ | Admin | `https://<SecureTransport_Address>:<PORT>/saml2/sso/post/j_security_check`|
+ | End-user | `https://<SecureTransport_Address>:<PORT>/saml2/sso/post` |
+
+ c. In the **Sign on URL** textbox, type a URL using one of the following patterns:
+
+ | User type | URL |
+ |-|-|
+ | Admin | `https://<SecureTransport_Address>:<PORT>` |
+ | End-user | `https://<SecureTransport_Address>:<PORT>` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Reply URL and Sign on URL. Contact [SecureTransport Client support team](mailto:support@axway.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Your SecureTransport application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example. The default value of **Unique User Identifier** is **user.userprincipalname**, but SecureTransport expects it to be mapped to the user's display name. For that, you can use the **user.displayname** attribute from the list, or the appropriate attribute value based on your organization's configuration.
+
+ ![Screenshot shows the image of token attributes configuration.](common/default-attributes.png "Image")
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up SecureTransport** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
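
The admin and end-user configurations above differ only in the Identifier value and the Reply URL path; both use the same base address. A minimal sketch, assuming a hypothetical SecureTransport host and port:

```python
# Sketch of the admin and end-user SAML values for SecureTransport.
# "st.example.com" and 8444 are hypothetical placeholders.
def securetransport_saml_values(host: str, port: int) -> dict:
    base = f"https://{host}:{port}"
    return {
        "admin": {
            "identifier": "st.sso.admin",
            "reply_url": f"{base}/saml2/sso/post/j_security_check",
            "sign_on_url": base,
        },
        "end_user": {
            "identifier": "st.sso.enduser",
            "reply_url": f"{base}/saml2/sso/post",
            "sign_on_url": base,
        },
    }

values = securetransport_saml_values("st.example.com", 8444)
print(values["admin"]["reply_url"])
```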
+
+## Configure SecureTransport SSO
+
+To configure single sign-on on the **SecureTransport** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [SecureTransport support team](mailto:support@axway.com). The support team configures this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create SecureTransport test user
+
+In this section, you create a user called Britta Simon at SecureTransport. Work with [SecureTransport support team](mailto:support@axway.com) to add the users in the SecureTransport platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the SecureTransport Sign-on URL, where you can initiate the login flow.
+
+* Go to the SecureTransport Sign-on URL directly and initiate the login flow from there.
+
+* You can also use Microsoft My Apps. When you click the SecureTransport tile in My Apps, you're redirected to the SecureTransport Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure SecureTransport, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Sign In Enterprise Host Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sign-in-enterprise-host-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Sign In Enterprise Host Provisioning for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Sign In Enterprise Host Provisioning.
++
+writer: twimmers
+
+ms.assetid: 9032d0da-f472-4e8d-a14d-d84f472411ee
++++ Last updated : 04/27/2023+++
+# Tutorial: Configure Sign In Enterprise Host Provisioning for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Sign In Enterprise Host Provisioning and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Sign In Enterprise Host Provisioning](https://signinenterprise.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Sign In Enterprise Host Provisioning.
+> * Remove users in Sign In Enterprise Host Provisioning when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and Sign In Enterprise Host Provisioning.
+> * Provision groups and group memberships in Sign In Enterprise Host Provisioning.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Sign In Enterprise Host Provisioning with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Sign In Enterprise Host Provisioning](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Sign In Enterprise Host Provisioning to support provisioning with Azure AD
+Contact Sign In Enterprise Host support to configure Sign In Enterprise Host to support provisioning with Azure AD.
+
+## Step 3. Add Sign In Enterprise Host Provisioning from the Azure AD application gallery
+
+Add Sign In Enterprise Host Provisioning from the Azure AD application gallery to start managing provisioning to Sign In Enterprise Host Provisioning. If you have previously set up Sign In Enterprise Host Provisioning for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Sign In Enterprise Host Provisioning
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Sign In Enterprise Host Provisioning based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Sign In Enterprise Host Provisioning in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Sign In Enterprise Host Provisioning**.
+
+ ![Screenshot of the Sign In Enterprise Host Provisioning link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Sign In Enterprise Host Provisioning Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Sign In Enterprise Host Provisioning. If the connection fails, ensure your Sign In Enterprise Host Provisioning account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Sign In Enterprise Host Provisioning**.
+
+1. Review the user attributes that are synchronized from Azure AD to Sign In Enterprise Host Provisioning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Sign In Enterprise Host Provisioning for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Sign In Enterprise Host Provisioning API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+   |Attribute|Type|Supported for filtering|Required by Sign In Enterprise Host Provisioning|
+   |---|---|---|---|
+   |userName|String|&check;|&check;|
+   |active|Boolean|||
+   |emails[type eq "work"].value|String||&check;|
+   |name.givenName|String||&check;|
+   |name.familyName|String||&check;|
+   |phoneNumbers[type eq "work"].value|String|||
+   |phoneNumbers[type eq "mobile"].value|String|||
+   |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|||
+   |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|||
+   |emails[type eq "other"].value|String|||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Sign In Enterprise Host Provisioning**.
+
+1. Review the group attributes that are synchronized from Azure AD to Sign In Enterprise Host Provisioning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Sign In Enterprise Host Provisioning for update operations. Select the **Save** button to commit any changes.
+
+   |Attribute|Type|Supported for filtering|Required by Sign In Enterprise Host Provisioning|
+   |---|---|---|---|
+   |displayName|String|&check;|&check;|
+   |members|Reference|||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Sign In Enterprise Host Provisioning, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Sign In Enterprise Host Provisioning by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
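
The user attribute mappings in the table above correspond to a SCIM 2.0 user resource. A minimal sketch of the payload shape the provisioning service sends, with entirely hypothetical values:

```python
import json

# Sketch of a SCIM 2.0 user resource using the attributes from the mapping
# table above. All values are hypothetical examples, not real account data.
ENTERPRISE_SCHEMA = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"

user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", ENTERPRISE_SCHEMA],
    "userName": "britta.simon@contoso.com",  # matching attribute
    "active": True,
    "name": {"givenName": "Britta", "familyName": "Simon"},
    "emails": [
        {"type": "work", "value": "britta.simon@contoso.com"},
        {"type": "other", "value": "britta@example.org"},
    ],
    "phoneNumbers": [
        {"type": "work", "value": "+1 425 555 0100"},
        {"type": "mobile", "value": "+1 425 555 0101"},
    ],
    ENTERPRISE_SCHEMA: {"organization": "Contoso", "department": "Finance"},
}

print(json.dumps(user, indent=2))
```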
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Usertesting Saml Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/usertesting-saml-tutorial.md
+
+ Title: Azure Active Directory SSO integration with UserTesting
+description: Learn how to configure single sign-on between Azure Active Directory and UserTesting.
++++++++ Last updated : 04/26/2023++++
+# Azure Active Directory SSO integration with UserTesting
+
+In this article, you learn how to integrate UserTesting with Azure Active Directory (Azure AD). UserTesting is a platform for getting rapid customer feedback on almost any customer experience you can imagine, including websites, mobile apps, prototypes, and real-world experiences. When you integrate UserTesting with Azure AD, you can:
+
+* Control in Azure AD who has access to UserTesting.
+* Enable your users to be automatically signed-in to UserTesting with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for UserTesting in a test environment. UserTesting supports **SP** and **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with UserTesting, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* UserTesting single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the UserTesting application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add UserTesting from the Azure AD gallery
+
+Add UserTesting from the Azure AD application gallery to configure single sign-on with UserTesting. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **UserTesting** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://www.okta.com/saml2/service-provider/<Account_Name>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+   `https://auth.usertesting.com/sso/saml2/<ID>`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ a. In the **Sign on URL** textbox, type the URL:
+ `https://app.usertesting.com/users/sso_sign_in`
+
+   b. In the **Relay State** textbox, type the URL:
+ `https://app.usertesting.com/sessions/from_idp`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [UserTesting Client support team](mailto:support@usertesting.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up UserTesting** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
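
Putting the values above together: a minimal sketch of the UserTesting SAML configuration, assuming a hypothetical Okta account name and tenant ID (use the actual values from the UserTesting support team):

```python
# Sketch of the UserTesting SAML configuration values.
# ACCOUNT_NAME and TENANT_ID are hypothetical placeholders.
ACCOUNT_NAME = "contoso"
TENANT_ID = "a1b2c3"

config = {
    "identifier": f"https://www.okta.com/saml2/service-provider/{ACCOUNT_NAME}",
    "reply_url": f"https://auth.usertesting.com/sso/saml2/{TENANT_ID}",
    # The next two values apply only to SP initiated mode:
    "sign_on_url": "https://app.usertesting.com/users/sso_sign_in",
    "relay_state": "https://app.usertesting.com/sessions/from_idp",
}

print(config["identifier"])
```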
+
+## Configure UserTesting SSO
+
+To configure single sign-on on the **UserTesting** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [UserTesting support team](mailto:support@usertesting.com). The support team configures this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create UserTesting test user
+
+In this section, you create a user called Britta Simon at UserTesting. Work with [UserTesting support team](mailto:support@usertesting.com) to add the users in the UserTesting platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the UserTesting Sign-on URL, where you can initiate the login flow.
+
+* Go to the UserTesting Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the UserTesting instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the UserTesting tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the UserTesting instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure UserTesting, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Standards Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/standards-overview.md
To learn more about supported compliance frameworks, see [Azure compliance offer
## Next steps
+* See the Standards documentation: [Implement identity standards with Azure Active Directory](index.yml)
* [Configure Azure Active Directory to achieve NIST authenticator assurance levels](nist-overview.md)
* [Configure Azure Active Directory to meet FedRAMP High Impact level](configure-azure-active-directory-for-fedramp-high-impact.md)
active-directory Linkedin Employment Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/linkedin-employment-verification.md
# LinkedIn employment verification
-If your organization wants its employees get verified on LinkedIn, you need to follow these few steps:
+If your organization wants its employees to get verified on LinkedIn, you need to follow these few steps:
1. Set up your Microsoft Entra Verified ID service by following these [instructions](verifiable-credentials-configure-tenant.md).
1. [Create](how-to-use-quickstart-verifiedemployee.md#create-a-verified-employee-credential) a Verified ID Employee credential.
advisor Advisor How To Improve Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-how-to-improve-reliability.md
You can evaluate the reliability posture of your applications, assess risks a
:::image type="content" source="media/advisor-reliability-workbook.png#lightbox" alt-text="Screenshot of the Azure Advisor reliability workbook template.":::
+> [!NOTE]
+> The workbook is intended as guidance only and doesn't represent a service-level guarantee.
+
## Next steps
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AKS) description: Learn about and deploy the Container Storage Interface (CSI) drivers for Azure Disks and Azure Files in an Azure Kubernetes Service (AKS) cluster Previously updated : 03/30/2023 Last updated : 04/27/2023
The CSI storage driver support on AKS allows you to natively use:
> [!NOTE] > It is recommended to delete the corresponding PersistentVolumeClaim object instead of the PersistentVolume object when deleting a CSI volume. The external provisioner in the CSI driver will react to the deletion of the PersistentVolumeClaim and based on its reclamation policy, it will issue the DeleteVolume call against the CSI volume driver commands to delete the volume. The PersistentVolume object will then be deleted.
->
+>
> Azure Disks CSI driver v2 (preview) improves scalability and reduces pod failover latency. It uses shared disks to provision attachment replicas on multiple cluster nodes and integrates with the pod scheduler to ensure a node with an attachment replica is chosen on pod failover. Azure Disks CSI driver v2 (preview) also provides the ability to fine tune performance. If you're interested in participating in the preview, submit a request: [https://aka.ms/DiskCSIv2Preview](https://aka.ms/DiskCSIv2Preview). This preview version is provided without a service level agreement, and you can occasionally expect breaking changes while in preview. The preview version isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Prerequisites - You need the Azure CLI version 2.42 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. - If the open-source CSI Blob storage driver is installed on your cluster, uninstall it before enabling the Azure Blob storage driver.
+- To enforce the Azure Policy for AKS [policy definition][azure-policy-aks-definition] **Kubernetes clusters should use Container Storage Interface(CSI) driver StorageClass**, the Azure Policy add-on needs to be enabled on new and existing clusters. For an existing cluster, review the [Learn Azure Policy for Kubernetes][learn-azure-policy-kubernetes] to enable it.
## Enable CSI storage drivers on an existing cluster
To review the migration options for your storage classes and upgrade your cluste
[azure-disk-csi]: azure-disk-csi.md [azure-files-csi]: azure-files-csi.md [migrate-from-in-tree-csi-drivers]: csi-migrate-in-tree-volumes.md
+[learn-azure-policy-kubernetes]: ../governance/policy/concepts/policy-for-kubernetes.md
+[azure-policy-aks-definition]: ../governance/policy/samples/built-in-policies.md#kubernetes
aks Gpu Multi Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-multi-instance.md
nvgfd/gpu-feature-discovery
### Confirm multi-instance GPU capability As an example, if you used MIG1g as the GPU instance profile, confirm the node has multi-instance GPU capability by running: ```
-kubectl describe mignode
+kubectl describe node mignode
``` If you're using single strategy, you'll see: ```
aks Istio Deploy Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-addon.md
Confirm the `istiod` pod has a status of `Running`. For example:
``` NAME READY STATUS RESTARTS AGE
-istiod-asm-1-17-74f7f7c46c-xfdtl 2/2 Running 0 2m
+istiod-asm-1-17-74f7f7c46c-xfdtl 1/1 Running 0 2m
``` ## Enable sidecar injection
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
Title: Connect to Azure Kubernetes Service (AKS) cluster nodes description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks. Previously updated : 11/3/2022 Last updated : 04/26/2023+ #Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
This article shows you how to create a connection to an AKS node and update the
## Before you begin
-This article assumes you have an SSH key. If not, you can create an SSH key using [macOS or Linux][ssh-nix] or [Windows][ssh-windows]. Make sure you save the key pair in an OpenSSH format, other formats like .ppk are not supported.
+This article assumes you have an SSH key. If not, you can create an SSH key using [macOS or Linux][ssh-nix] or [Windows][ssh-windows]. Make sure you save the key pair in an OpenSSH format, other formats like .ppk aren't supported.
You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
To create the SSH connection to the Windows Server node from another node, use t
> [!IMPORTANT] >
-> The following steps for creating the SSH connection to the Windows Server node from another node can only be used if you created your AKS cluster using the Azure CLI and the `--generate-ssh-keys` parameter. If you didn't use this method to create your cluster, you'll use a password instead of an SSH key. To do this, see [Create the SSH connection to a Windows node using a password](#create-the-ssh-connection-to-a-windows-node-using-a-password)
+> The following steps for creating the SSH connection to the Windows Server node from another node can only be used if you created your AKS cluster using the Azure CLI and the `--generate-ssh-keys` parameter. If you didn't use this method to create your cluster, use a password instead of an SSH key. To do this, see [Create the SSH connection to a Windows node using a password](#create-the-ssh-connection-to-a-windows-node-using-a-password)
Open a new terminal window and use the `kubectl get pods` command to get the name of the pod started by `kubectl debug`.
aksnpwin000000 Ready agent 87s v1.19.9 10.240.0.
In the above example, *10.240.0.67* is the internal IP address of the Windows Server node.
-Create an SSH connection to the Windows Server node using the internal IP address, and connect to port 22 through port 2022 on your development computer. The default username for AKS nodes is *azureuser*. Accept the prompt to continue with the connection. You are then provided with the bash prompt of your Windows Server node:
+Create an SSH connection to the Windows Server node using the internal IP address, and connect to port 22 through port 2022 on your development computer. The default username for AKS nodes is *azureuser*. Accept the prompt to continue with the connection. You're then provided with the bash prompt of your Windows Server node:
```bash ssh -o 'ProxyCommand ssh -p 2022 -W %h:%p azureuser@127.0.0.1' azureuser@10.240.0.67
kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
> [!NOTE] > Updating the SSH key is supported on Azure virtual machine scale sets with AKS clusters.
-Use the [az aks update][az-aks-update] command to update the SSH key on the cluster. This operation will update the key on all node pools. You can either specify the key or a key file using the `--ssh-key-value` argument.
+Use the [az aks update][az-aks-update] command to update the SSH key on the cluster. This operation updates the key on all node pools. You can either specify the key or a key file using the `--ssh-key-value` argument.
```azurecli az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value <new SSH key value or SSH key file>
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md
description: Learn how to create an RDP connection with Azure Kubernetes Service (AKS) cluster Windows Server nodes for troubleshooting and maintenance tasks. Previously updated : 07/06/2022+ Last updated : 04/26/2023 #Customer intent: As a cluster operator, I want to learn how to use RDP to connect to nodes in an AKS cluster to perform maintenance or troubleshoot a problem.
Last updated 07/06/2022
Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you may need to access an AKS Windows Server node. This access could be for maintenance, log collection, or other troubleshooting operations. You can access the AKS Windows Server nodes using RDP. For security purposes, the AKS nodes aren't exposed to the internet.
-Alternatively, if you want to SSH to your AKS Windows Server nodes, you'll need access to the same key-pair that was used during cluster creation. Follow the steps in [SSH into Azure Kubernetes Service (AKS) cluster nodes][ssh-steps].
+Alternatively, if you want to SSH to your AKS Windows Server nodes, you need access to the same key-pair that was used during cluster creation. Follow the steps in [SSH into Azure Kubernetes Service (AKS) cluster nodes][ssh-steps].
This article shows you how to create an RDP connection with an AKS node using their private IP addresses.
The following example creates a virtual machine named *myVM* in the *myResourceG
### [Azure CLI](#tab/azure-cli)
-You'll need to get the subnet ID used by your Windows Server node pool. The commands below will query for the following information:
+You need to get the subnet ID used by your Windows Server node pool and query for:
* The cluster's node resource group * The virtual network * The subnet's name
Record the public IP address of the virtual machine. You'll use this address in
### [Azure PowerShell](#tab/azure-powershell)
-You'll need to get the subnet ID used by your Windows Server node pool. The commands below will query for the following information:
+You need to get the subnet ID used by your Windows Server node pool and query for:
* The cluster's node resource group * The virtual network * The subnet's name and address prefix
The following example output shows the VM has been successfully created and disp
13.62.204.18 ```
-Record the public IP address of the virtual machine. You'll use this address in a later step.
+Record the public IP address of the virtual machine and use the address in a later step.
Connect to the public IP address of the virtual machine you created earlier usin
![Image of connecting to the virtual machine using an RDP client](media/rdp/vm-rdp.png)
-After you've connected to your virtual machine, connect to the *internal IP address* of the Windows Server node you want to troubleshoot using an RDP client from within your virtual machine.
+After you have connected to your virtual machine, connect to the *internal IP address* of the Windows Server node you want to troubleshoot using an RDP client from within your virtual machine.
![Image of connecting to the Windows Server node using an RDP client](media/rdp/node-rdp.png)
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
You can retrieve this information using the Azure CLI command: [az keyvault list
To disable the Azure AD workload identity on the AKS cluster where it's been enabled and configured, you can run the following command: ```azurecli
-az aks update --resource-group myResourceGroup --name myAKSCluster --enable-workload-identity false
+az aks update --resource-group myResourceGroup --name myAKSCluster --disable-workload-identity
``` ## Next steps
api-management Authentication Basic Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-basic-policy.md
Use the `authentication-basic` policy to authenticate with a backend service usi
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+### Usage notes
+
+- This policy can only be used once in a policy section.
+ ## Example ```xml
api-management Authentication Certificate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-certificate-policy.md
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Examples
api-management Authentication Managed Identity Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-managed-identity-policy.md
Both system-assigned identity and any of the multiple user-assigned identities c
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Examples
api-management Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-policy.md
Use the `cache-lookup` policy to perform cache lookup and return a valid cached
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ### Usage notes
-When using `vary-by-query-parameter`, you might want to declare the parameters in the rewrite-uri template or set the attribute `copy-unmatched-params` to `false`. By deactivating this flag, parameters that aren't declared are sent to the backend.
+* When using `vary-by-query-parameter`, you might want to declare the parameters in the rewrite-uri template or set the attribute `copy-unmatched-params` to `false`. By deactivating this flag, parameters that aren't declared are sent to the backend.
+* This policy can only be used once in a policy section.
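The `vary-by-query-parameter` usage note above can be sketched as a policy fragment. This is an illustrative sketch only: the parameter name (`version`) and rewrite template path are assumptions, not taken from the article.

```xml
<!-- Illustrative sketch; "version" and the template path are assumed names. -->
<inbound>
    <base />
    <!-- Vary the cache key by the "version" query parameter. -->
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false">
        <vary-by-query-parameter>version</vary-by-query-parameter>
    </cache-lookup>
    <!-- Declare the same parameter in the rewrite-uri template, or set
         copy-unmatched-params="false", per the usage note above. -->
    <rewrite-uri template="/v2/items?version={version}" copy-unmatched-params="false" />
</inbound>
```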
+ ## Examples
api-management Cache Lookup Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-value-policy.md
Use the `cache-lookup-value` policy to perform cache lookup by key and return a
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Cache Remove Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-remove-value-policy.md
The `cache-remove-value` policy deletes a cached item identified by its key. The key ca
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-policy.md
The `cache-store` policy caches responses according to the specified cache setti
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) outbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+### Usage notes
+
+- This policy can only be used once in a policy section.
++ ## Examples ### Example with corresponding cache-lookup policy
api-management Cache Store Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-value-policy.md
The `cache-store-value` policy performs cache storage by key. The key can have an arbit
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Choose Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/choose-policy.md
The `choose` policy must contain at least one `<when/>` element. The `<otherwise
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Examples
api-management Cors Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cors-policy.md
The `cors` policy adds cross-origin resource sharing (CORS) support to an operat
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ### Usage notes * You may configure the `cors` policy at more than one scope (for example, at the product scope and the global scope). Ensure that the `base` element is configured at the operation, API, and product scopes to inherit needed policies at the parent scopes. * Only the `cors` policy is evaluated on the `OPTIONS` request during preflight. Remaining configured policies are evaluated on the approved request.
+* This policy can only be used once in a policy section.
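The preflight behavior described in the usage notes can be illustrated with a minimal `cors` policy sketch; the origin URL and method list here are assumptions for illustration, not values from the article.

```xml
<!-- Minimal sketch; the origin and methods are illustrative assumptions. -->
<cors allow-credentials="true">
    <allowed-origins>
        <!-- Only listed origins receive CORS response headers. -->
        <origin>https://contoso.example</origin>
    </allowed-origins>
    <allowed-methods>
        <method>GET</method>
        <method>POST</method>
    </allowed-methods>
    <allowed-headers>
        <header>*</header>
    </allowed-headers>
</cors>
```

On a preflight `OPTIONS` request, only this policy is evaluated; the remaining configured policies run on the approved request.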
## About CORS
api-management Emit Metric Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/emit-metric-policy.md
The `emit-metric` policy sends custom metrics in the specified format to Applica
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Find And Replace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/find-and-replace-policy.md
The `find-and-replace` policy finds a request or response substring and replaces
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Forward Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/forward-request-policy.md
The `forward-request` policy forwards the incoming request to the backend servic
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) backend-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Examples
api-management Get Authorization Context Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-authorization-context-policy.md
class Authorization
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption ### Usage notes
api-management Include Fragment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/include-fragment-policy.md
The policy inserts the policy fragment as-is at the location you select in the p
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Invoke Dapr Binding Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/invoke-dapr-binding-policy.md
The policy assumes that Dapr runtime is running in a sidecar container in the sa
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) self-hosted ### Usage notes
api-management Ip Filter Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/ip-filter-policy.md
The `ip-filter` policy filters (allows/denies) calls from specific IP addresses
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ### Usage notes
api-management Json To Xml Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/json-to-xml-policy.md
The `json-to-xml` policy converts a request or response body from JSON to XML.
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Jsonp Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/jsonp-policy.md
The `jsonp` policy adds JSON with padding (JSONP) support to an operation or an
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) outbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted+
+### Usage notes
+
+- This policy can only be used once in a policy section.
+ ## Example ```xml
api-management Limit Concurrency Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/limit-concurrency-policy.md
The `limit-concurrency` policy prevents enclosed policies from executing by more
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Log To Eventhub Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/log-to-eventhub-policy.md
The `log-to-eventhub` policy sends messages in the specified format to an event
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ### Usage notes
api-management Mock Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mock-response-policy.md
The `mock-response` policy, as the name implies, is used to mock APIs and operat
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ### Usage notes
api-management Proxy Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/proxy-policy.md
The `proxy` policy allows you to route requests forwarded to backends via an HTT
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Publish To Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-to-dapr-policy.md
The policy assumes that Dapr runtime is running in a sidecar container in the sa
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) self-hosted ### Usage notes
api-management Quota By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quota-by-key-policy.md
To understand the difference between rate limits and quotas, [see Rate limits an
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, self-hosted ### Usage notes
api-management Rate Limit By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-by-key-policy.md
To understand the difference between rate limits and quotas, [see Rate limits an
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, self-hosted ### Usage notes
api-management Redirect Content Urls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/redirect-content-urls-policy.md
The `redirect-content-urls` policy rewrites (masks) links in the response body s
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+### Usage notes
+
+- This policy can only be used once in a policy section.
++ ## Example ```xml
api-management Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/retry-policy.md
The `retry` policy may contain any other policies as its child elements.
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Examples
api-management Return Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/return-response-policy.md
The `return-response` policy cancels pipeline execution and returns either a def
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Rewrite Uri Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rewrite-uri-policy.md
This policy can be used when a human and/or browser-friendly URL should be trans
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ### Usage notes
api-management Send One Way Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-one-way-request-policy.md
The `send-one-way-request` policy sends the provided request to the specified UR
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Send Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-request-policy.md
The `send-request` policy sends the provided request to the specified URL, waiti
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Set Backend Service Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-dapr-policy.md
The policy assumes that Dapr runs in a sidecar container in the same pod as the
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) self-hosted ### Usage notes
api-management Set Backend Service Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-policy.md
Use the `set-backend-service` policy to redirect an incoming request to a differ
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, backend-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ### Usage notes
api-management Set Body Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-body-policy.md
OriginalUrl.
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ### Usage notes
api-management Set Graphql Resolver Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-graphql-resolver-policy.md
The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) backend-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated ### Usage notes
api-management Set Header Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-header-policy.md
The `set-header` policy assigns a value to an existing HTTP response and/or requ
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ### Usage notes
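A minimal hypothetical sketch of the `set-header` policy (the header name is illustrative, not from the linked article); `context.RequestId` is the built-in request identifier available in policy expressions:

```xml
<set-header name="X-Request-Id" exists-action="override">
    <value>@(context.RequestId.ToString())</value>
</set-header>
```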
api-management Set Method Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-method-policy.md
The value of the element specifies the HTTP method, such as `POST`, `GET`, and s
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
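A minimal sketch of `set-method` in the inbound section, rewriting the incoming call to a `POST` (an illustrative choice, not from the linked article):

```xml
<inbound>
    <set-method>POST</set-method>
</inbound>
```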
api-management Set Query Parameter Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-query-parameter-policy.md
The `set-query-parameter` policy adds, replaces value of, or deletes request que
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, backend-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Examples
api-management Set Status Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-status-policy.md
The `set-status` policy sets the HTTP status code to the specified value.
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
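As a minimal illustrative sketch, forcing a `401` response status (the code and reason are example values):

```xml
<set-status code="401" reason="Unauthorized" />
```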
api-management Set Variable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-variable-policy.md
The `set-variable` policy declares a [context](api-management-policy-expressions
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Allowed types
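A minimal sketch of `set-variable` declaring a context variable from a policy expression (the variable name is hypothetical):

```xml
<set-variable name="requestTime" value="@(DateTime.UtcNow.ToString())" />
```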
api-management Trace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/trace-policy.md
The `trace` policy adds a custom trace into the request tracing output in the te
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ### Usage notes
api-management Validate Client Certificate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-client-certificate-policy.md
For more information about custom CA certificates and certificate authorities, s
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Validate Content Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-content-policy.md
The policy validates the following content in the request or response against th
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted [!INCLUDE [api-management-validation-policy-common](../../includes/api-management-validation-policy-common.md)]
api-management Validate Headers Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-headers-policy.md
The `validate-headers` policy validates the response headers against the API sch
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) outbound, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+### Usage notes
+
+- This policy can only be used once in a policy section.
+ [!INCLUDE [api-management-validation-policy-common](../../includes/api-management-validation-policy-common.md)]
api-management Validate Jwt Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md
The `validate-jwt` policy enforces existence and validity of a supported JSON we
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ### Usage notes
api-management Validate Parameters Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-parameters-policy.md
The `validate-parameters` policy validates the header, query, or path parameters
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+### Usage notes
+
+- This policy can only be used once in a policy section.
+ [!INCLUDE [api-management-validation-policy-common](../../includes/api-management-validation-policy-common.md)]
api-management Validate Status Code Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-status-code-policy.md
The `validate-status-code` policy validates the HTTP status codes in responses a
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) outbound, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+### Usage notes
+
+- This policy can only be used once in a policy section.
+ [!INCLUDE [api-management-validation-policy-common](../../includes/api-management-validation-policy-common.md)] ## Example
api-management Wait Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/wait-policy.md
May contain as child elements only `send-request`, `cache-lookup-value`, and `ch
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
api-management Xml To Json Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/xml-to-json-policy.md
The `xml-to-json` policy converts a request or response body from XML to JSON. T
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ## Example
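A minimal sketch of the `xml-to-json` policy with its three attributes — conversion style, when to apply, and whether to honor the request's `Accept` header (the chosen values are illustrative):

```xml
<xml-to-json kind="javascript-friendly" apply="always" consider-accept-header="false" />
```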
api-management Xsl Transform Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/xsl-transform-policy.md
The `xsl-transform` policy applies an XSL transformation to XML in the request o
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+### Usage notes
+
+- This policy can only be used once in a policy section.
+ ## Examples ### Transform request body
applied-ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-add-on-capabilities.md
Previously updated : 03/03/2023 Last updated : 04/25/2023 monikerRange: 'form-recog-3.0.0' recommendations: false
recommendations: false
# Azure Form Recognizer add-on capabilities (preview)
-**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
+**This article applies to:** ![Form Recognizer checkmark](medi) supported by Form Recognizer REST API version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)**.
> [!NOTE] >
applied-ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-classifier.md
Previously updated : 03/03/2023 Last updated : 04/25/2023 monikerRange: 'form-recog-3.0.0'
recommendations: false
# Custom classification model
-**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
+**This article applies to:** ![Form Recognizer checkmark](medi) supported by Form Recognizer REST API version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)**.
> [!IMPORTANT] >
applied-ai-services Concept Query Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-query-fields.md
Previously updated : 03/07/2023 Last updated : 04/25/2023 monikerRange: 'form-recog-3.0.0' recommendations: false
recommendations: false
# Azure Form Recognizer query field extraction (preview)
-**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
+**This article applies to:** ![Form Recognizer checkmark](medi) supported by Form Recognizer REST API version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)**.
> [!IMPORTANT] >
applied-ai-services Build A Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-classifier.md
Previously updated : 03/30/2023 Last updated : 04/25/2023 monikerRange: 'form-recog-3.0.0' recommendations: false
recommendations: false
# Build and train a custom classification model (preview)
+**This article applies to:** ![Form Recognizer checkmark](../medi) supported by Form Recognizer REST API version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)**.
> [!IMPORTANT] >
applied-ai-services Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-overview.md
implementation("com.azure:azure-ai-formrecognizer:4.0.6")
### [JavaScript](#tab/javascript) ```javascript
-npm i @azure/ai-form-recognizer
+npm i @azure/ai-form-recognizer@4.0.0
``` ### [Python](#tab/python) ```python
-pip install azure-ai-formrecognizer
+pip install azure-ai-formrecognizer==3.2.0
```- ### 2. Import the SDK client library into your application
applied-ai-services Sdk Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-preview.md
Last updated 04/25/2023
+monikerRange: 'form-recog-3.0.0'
recommendations: false
recommendations: false
# Form Recognizer SDK (public preview)
-**This article applies to:** ![Form Recognizer checkmark](media/yes-icon.png) **Form Recognizer version 2023-02-28-preview**.
+**The SDKs referenced in this article are supported by:** ![Form Recognizer checkmark](media/yes-icon.png) **Form Recognizer REST API version 2023-02-28-preview**.
> [!IMPORTANT] >
azure-app-configuration Enable Dynamic Configuration Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-app.md
ms.devlang: java Previously updated : 05/02/2022 Last updated : 04/11/2023
App Configuration has two libraries for Spring.
-* `azure-spring-cloud-appconfiguration-config` requires Spring Boot and takes a dependency on `spring-cloud-context`.
-* `azure-spring-cloud-appconfiguration-config-web` requires Spring Web along with Spring Boot, and also adds support for automatic checking of configuration refresh.
+* `spring-cloud-azure-appconfiguration-config` requires Spring Boot and takes a dependency on `spring-cloud-context`.
+* `spring-cloud-azure-appconfiguration-config-web` requires Spring Web along with Spring Boot, and also adds support for automatic checking of configuration refresh.
Both libraries support manual triggering to check for refreshed configuration values.
-Refresh allows you to update your configuration values without having to restart your application, though it will cause all beans in the `@RefreshScope` to be recreated. It checks for any changes to configured triggers, including metadata. By default, the minimum amount of time between checks for changes, refresh interval, is set to 30 seconds.
+Refresh allows you to update your configuration values without having to restart your application, though it causes all beans in the `@RefreshScope` to be recreated. It checks for any changes to configured triggers, including metadata. By default, the minimum amount of time between checks for changes, the refresh interval, is set to 30 seconds.
-`azure-spring-cloud-appconfiguration-config-web`'s automated refresh is triggered based on activity, specifically Spring Web's `ServletRequestHandledEvent`. If a `ServletRequestHandledEvent` is not triggered, `azure-spring-cloud-appconfiguration-config-web`'s automated refresh will not trigger a refresh even if the cache expiration time has expired.
+`spring-cloud-azure-appconfiguration-config-web`'s automated refresh is triggered based on activity, specifically Spring Web's `ServletRequestHandledEvent`. If a `ServletRequestHandledEvent` is not triggered, `spring-cloud-azure-appconfiguration-config-web`'s automated refresh does not trigger a refresh even if the cache expiration time has expired.
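The monitoring behavior described above is opt-in per store. A minimal sketch of the relevant properties, assuming the 4.x property names (the sentinel key name is hypothetical):

```properties
# Enable change monitoring for the first configured store.
spring.cloud.azure.appconfiguration.stores[0].monitoring.enabled=true
# Minimum time between checks for changes; 30s is the default if omitted.
spring.cloud.azure.appconfiguration.stores[0].monitoring.refresh-interval=30s
# Watched key; updating it signals the library to reload configuration.
spring.cloud.azure.appconfiguration.stores[0].monitoring.trigger[0].key=sentinel
```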
## Use manual refresh To use manual refresh, start with a Spring Boot app that uses App Configuration, such as the app you create by following the [Spring Boot quickstart for App Configuration](quickstart-java-spring-app.md).
-App Configuration exposes `AppConfigurationRefresh` which can be used to check if the cache is expired and if it is expired trigger a refresh.
+App Configuration exposes `AppConfigurationRefresh`, which can be used to check whether the cache has expired and, if it has, trigger a refresh.
1. Update HelloController to use `AppConfigurationRefresh`.
App Configuration exposes `AppConfigurationRefresh` which can be used to check i
} ```
- `AppConfigurationRefresh`'s `refreshConfigurations()` returns a `Future` that is true if a refresh has been triggered, and false if not. False means either the cache expiration time hasn't expired, there was no change, or another thread is currently checking for a refresh.
+ `AppConfigurationRefresh`'s `refreshConfigurations()` returns a `Mono` that is true if a refresh has been triggered, and false if not. False means either the cache expiration time hasn't expired, there was no change, or another thread is currently checking for a refresh.
1. Update `bootstrap.properties` to enable refresh
App Configuration exposes `AppConfigurationRefresh` which can be used to check i
||| | /application/config.message | Hello - Updated |
-1. Update the sentinel key you created earlier to a new value. This change will trigger the application to refresh all configuration keys once the refresh interval has passed.
+1. Update the sentinel key you created earlier to a new value. This change triggers the application to refresh all configuration keys once the refresh interval has passed.
| Key | Value | |||
App Configuration exposes `AppConfigurationRefresh` which can be used to check i
To use automated refresh, start with a Spring Boot app that uses App Configuration, such as the app you create by following the [Spring Boot quickstart for App Configuration](quickstart-java-spring-app.md).
-Then, open the *pom.xml* file in a text editor and add a `<dependency>` for `azure-spring-cloud-appconfiguration-config-web` using the following code.
+Then, open the *pom.xml* file in a text editor and add a `<dependency>` for `spring-cloud-azure-appconfiguration-config-web` using the following code.
**Spring Boot** ```xml <dependency> <groupId>com.azure.spring</groupId>
- <artifactId>azure-spring-cloud-appconfiguration-config-web</artifactId>
- <version>2.6.0</version>
+ <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
+ <version>4.7.0</version>
</dependency> ```
-> [!NOTE]
-> If you need support for older dependencies see our [previous library](https://github.com/Azure/azure-sdk-for-jav).
- 1. Update `bootstrap.properties` to enable refresh ```properties
Then, open the *pom.xml* file in a text editor and add a `<dependency>` for `azu
||| | /application/config.message | Hello - Updated |
-1. Update the sentinel key you created earlier to a new value. This change will trigger the application to refresh all configuration keys once the refresh interval has passed.
+1. Update the sentinel key you created earlier to a new value. This change triggers the application to refresh all configuration keys once the refresh interval has passed.
| Key | Value | |||
azure-app-configuration Enable Dynamic Configuration Java Spring Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-push-refresh.md
ms.devlang: java Previously updated : 05/07/2022 Last updated : 04/11/2023 #Customer intent: I want to use push refresh to dynamically update my app to use the latest configuration data in App Configuration.
In this tutorial, you learn how to:
```xml <dependency> <groupId>com.azure.spring</groupId>
- <artifactId>azure-spring-cloud-appconfiguration-config-web</artifactId>
- <version>2.6.0</version>
+ <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
+ <version>4.7.0</version>
</dependency> <!-- Adds the Ability to Push Refresh -->
Event Grid Web Hooks require validation on creation. You can validate by followi
1. Update your `pom.xml` under the `azure-webapp-maven-plugin`'s `configuration` add
-```xml
-<appSettings>
- <AppConfigurationConnectionString>${AppConfigurationConnectionString}</AppConfigurationConnectionString>
-</appSettings>
-```
+ ```xml
+ <appSettings>
+ <AppConfigurationConnectionString>${AppConfigurationConnectionString}</AppConfigurationConnectionString>
+ </appSettings>
+ ```
1. Run the following command to build the console app:
Event Grid Web Hooks require validation on creation. You can validate by followi
In this tutorial, you enabled your Java app to dynamically refresh configuration settings from App Configuration. For further questions see the [reference documentation](https://go.microsoft.com/fwlink/?linkid=2180917), it has all of the details on how the Spring Cloud Azure App Configuration library works. To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
+> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
azure-app-configuration Howto Convert To The New Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-convert-to-the-new-spring-boot.md
ms.devlang: java
Previously updated : 05/02/2022 Last updated : 04/11/2023 # Convert to new App Configuration Spring Boot library
-A new version of the App Configuration library for Spring Boot is now available. The version introduces new features such as Push refresh, but also a number of breaking changes. These changes aren't backwards compatible with configuration setups that were using the previous library version. For the following topics.
+A new version of the App Configuration library for Spring Boot is now available. The version introduces new features such as Azure Spring global properties, but also some breaking changes. These changes aren't backwards compatible with configuration setups that were using the previous library version. For the following topics:
* Group and Artifact Ids
-* Spring Profiles
-* Configuration loading and reloading
+* Package path renamed
+* Classes renamed
* Feature flag loading
+* Possible conflicts with Azure Spring global properties
this article provides a reference on the changes and the actions needed to migrate to the new library version.
All of the Azure Spring Boot libraries have had their Group and Artifact IDs upd
```xml <dependency> <groupId>com.azure.spring</groupId>
- <artifactId>azure-spring-cloud-appconfiguration-config</artifactId>
- <version>2.6.0</version>
+ <artifactId>spring-cloud-azure-appconfiguration-config</artifactId>
+ <version>4.7.0</version>
</dependency> <dependency> <groupId>com.azure.spring</groupId>
- <artifactId>azure-spring-cloud-appconfiguration-config-web</artifactId>
- <version>2.6.0</version>
+ <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
+ <version>4.7.0</version>
+</dependency>
+<dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-feature-management</artifactId>
+ <version>4.7.0</version>
+</dependency>
+<dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-feature-management-web</artifactId>
+ <version>4.7.0</version>
</dependency> ```
-## Use of Spring Profiles
-
-In the previous release, Spring Profiles were used as part of the configuration so they could match the format of the configuration files. For example,
-
-```properties
-/<application name>_dev/config.message
-```
-
-This has been changed so the default label(s) in a query are the Spring Profiles with the following format, with a label that matches the Spring Profile:
-
-```properties
-/application/config.message
-```
+> [!NOTE]
+> Version 4.7.0 is the first 4.x version of the library; it matches the versioning of the other Spring Cloud Azure libraries.
- To convert to the new format, you can run the bellow commands with your store name:
+As of version 4.7.0, the App Configuration and Feature Management libraries are part of the spring-cloud-azure-dependencies BOM. With the BOM, you no longer need to specify the version of each library in your project; the BOM manages the library versions automatically.
-```azurecli
-az appconfig kv export -n your-stores-name -d file --format properties --key /application_dev* --prefix /application_dev/ --path convert.properties --skip-features --yes
-az appconfig kv import -n your-stores-name -s file --format properties --label dev --prefix /application/ --path convert.properties --skip-features --yes
+```xml
+<dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-dependencies</artifactId>
+ <version>4.7.0</version>
+ <type>pom</type>
+</dependency>
```
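Once the BOM is in place, the individual dependencies can omit their `<version>` elements. A minimal sketch, assuming the BOM is declared in `<dependencyManagement>` with `<scope>import</scope>` per standard Maven BOM usage:

```xml
<!-- Version is resolved by the spring-cloud-azure-dependencies BOM. -->
<dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
</dependency>
```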
-or use the Import/Export feature in the portal.
+## Package path renamed
-When you are completely moved to the new version, you can remove the old keys by running:
+The package paths for the `spring-cloud-azure-feature-management` and `spring-cloud-azure-feature-management-web` libraries have been renamed from `com.azure.spring.cloud.feature.manager` to `com.azure.spring.cloud.feature.management` and `com.azure.spring.cloud.feature.management.web`, respectively.
-```azurecli
-az appconfig kv delete -n ConversionTest --key /application_dev/*
-```
+## Classes renamed
-This command will list all of the keys you are about to delete so you can verify no unexpected keys will be removed. Keys can also be deleted in the portal.
+* `ConfigurationClientBuilderSetup` has been renamed to `ConfigurationClientCustomizer` and its `setup` method has been renamed to `customize`
+* `SecretClientBuilderSetup` has been renamed to `SecretClientCustomizer` and its `setup` method has been renamed to `customize`
+* `AppConfigurationCredentialProvider` and `KeyVaultCredentialProvider` have been removed. Instead, you can use [Azure Spring common configuration properties](/azure/developer/java/spring-framework/configuration) or modify the credentials using `ConfigurationClientCustomizer`/`SecretClientCustomizer`.
-## Which configurations are loaded
+## Feature flag loading
-The default case of loading configuration matching `/application/*` hasn't changed. The change is that `/${spring.application.name}/*` will not be used in addition automatically anymore unless set. Instead, to use `/${spring.application.name}/*` you can use the new Selects configuration.
+Feature flags now support loading using multiple key/label filters.
```properties
-spring.cloud.azure.appconfiguration.stores[0].selects[0].key-filter=/${spring.application.name}/*
+spring.cloud.azure.appconfiguration.stores[0].feature-flags.enable
+spring.cloud.azure.appconfiguration.stores[0].feature-flags.selects[0].key-filter
+spring.cloud.azure.appconfiguration.stores[0].feature-flags.selects[0].label-filter
+spring.cloud.azure.appconfiguration.stores[0].monitoring.feature-flag-refresh-interval
```
-## Configuration reloading
+> [!NOTE]
+> The property `spring.cloud.azure.appconfiguration.stores[0].feature-flags.label` has been removed. Instead, you can use `spring.cloud.azure.appconfiguration.stores[0].feature-flags.selects[0].label-filter` to specify a label filter.
-The monitoring of all configuration stores is now disabled by default. A new configuration has been added to the library to allow config stores to have monitoring enabled. In addition, cache-expiration has been renamed to refresh-interval and has also been changed to be per config store. Also if monitoring of a config store is enabled at least one watched key is required to be configured, with an optional label.
+## Possible conflicts with Azure Spring global properties
-```properties
-spring.cloud.azure.appconfiguration.stores[0].monitoring.enabled
-spring.cloud.azure.appconfiguration.stores[0].monitoring.refresh-interval
-spring.cloud.azure.appconfiguration.stores[0].monitoring.trigger[0].key
-spring.cloud.azure.appconfiguration.stores[0].monitoring.trigger[0].label
-```
+[Azure Spring common configuration properties](/azure/developer/java/spring-framework/configuration) enable you to customize your connections to Azure services. The new App Configuration library picks up any global or App Configuration setting configured with Azure Spring common configuration properties. Your connection to App Configuration will change if those configurations have been set for another Azure Spring library.
-There has been no change to how the refresh-interval works, the change is renaming the configuration to clarify functionality. The requirement of a watched key makes sure that when configurations are being changed the library will not attempt to load the configurations until all changes are done.
+> [!NOTE]
+> You can override this by using `ConfigurationClientCustomizer`/`SecretClientCustomizer` to modify the clients.
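For example, service principal credentials configured through the global properties would be picked up by the App Configuration library. This is a sketch; the placeholder values are yours to supply, and the property names are the documented Azure Spring common credential properties:

```properties
# Azure Spring global (common) credential properties; values are placeholders
spring.cloud.azure.credential.client-id=<client-id>
spring.cloud.azure.credential.client-secret=<client-secret>
spring.cloud.azure.profile.tenant-id=<tenant-id>
```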
-## Feature flag loading
-
-By default, loading of feature flags is now disabled. In addition, Feature Flags now have a label filter as well as a refresh-interval.
-
-```properties
-spring.cloud.azure.appconfiguration.stores[0].feature-flags.enable
-spring.cloud.azure.appconfiguration.stores[0].feature-flags.label-filter
-spring.cloud.azure.appconfiguration.stores[0].monitoring.feature-flag-refresh-interval
-```
+> [!WARNING]
+> You may now run into an issue where more than one connection method is provided, because Azure Spring global properties automatically pick up credentials, such as environment variables, and use them to connect to Azure services. This can cause problems if you're using a different connection method, such as managed identity, and the global properties override it.
azure-app-configuration Quickstart Feature Flag Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-spring-boot.md
ms.devlang: java Previously updated : 03/20/2023 Last updated : 04/11/2023 #Customer intent: As a Spring Boot developer, I want to use feature flags to control feature availability quickly and confidently.
To create a new Spring Boot project:
```xml <dependency> <groupId>com.azure.spring</groupId>
- <artifactId>azure-spring-cloud-appconfiguration-config-web</artifactId>
- <version>2.6.0</version>
+ <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
+ <version>4.7.0</version>
</dependency> <dependency> <groupId>com.azure.spring</groupId>
- <artifactId>azure-spring-cloud-feature-management-web</artifactId>
- <version>2.4.0</version>
+ <artifactId>spring-cloud-azure-feature-management-web</artifactId>
+ <version>4.7.0</version>
</dependency> <dependency> <groupId>org.springframework.boot</groupId>
To create a new Spring Boot project:
``` > [!NOTE]
-> * If you need to support an older version of Spring Boot see our [old appconfiguration library](https://github.com/Azure/azure-sdk-for-jav).
> * There is a non-web Feature Management Library that doesn't have a dependency on spring-web. Refer to GitHub's [documentation](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/spring-cloud-azure-feature-management) for differences. ## Connect to an App Configuration store
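With the 4.x artifacts above, connecting to a store and enabling feature flag loading can be sketched in `bootstrap.properties` as follows. The environment variable name is an assumption; substitute however you supply the store's connection string:

```properties
# Connection string supplied via an environment variable (name is an assumption)
spring.cloud.azure.appconfiguration.stores[0].connection-string=${APP_CONFIGURATION_CONNECTION_STRING}
spring.cloud.azure.appconfiguration.stores[0].feature-flags.enable=true
```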
To create a new Spring Boot project:
import org.springframework.stereotype.Controller; import org.springframework.ui.Model;
- import com.azure.spring.cloud.feature.manager.FeatureManager;
+ import com.azure.spring.cloud.feature.management.FeatureManager;
import org.springframework.web.bind.annotation.GetMapping;
To create a new Spring Boot project:
</header> <div class="container body-content"> <h1 class="mt-5">Welcome</h1>
- <p>Learn more about <a href="https://github.com/Azure/azure-sdk-for-jav">Feature Management with Spring Cloud Azure</a></p>
+ <p>Learn more about <a href="https://github.com/Azure/azure-sdk-for-jav">Feature Management with Spring Cloud Azure</a></p>
</div> <footer class="footer">
azure-app-configuration Quickstart Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-java-spring-app.md
ms.devlang: java Previously updated : 02/22/2023 Last updated : 04/11/2023 #Customer intent: As a Java Spring developer, I want to manage all my app settings in one place.
To install the Spring Cloud Azure Config starter module, add the following depen
```xml <dependency> <groupId>com.azure.spring</groupId>
- <artifactId>azure-spring-cloud-appconfiguration-config</artifactId>
- <version>2.11.0</version>
+ <artifactId>spring-cloud-azure-appconfiguration-config</artifactId>
+ <version>4.7.0</version>
</dependency> ```
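A minimal `bootstrap.properties` for the 4.x starter might look like the following sketch. The environment variable name is an assumption; use whatever mechanism you prefer to keep the connection string out of source control:

```properties
# Store connection string read from an environment variable (name is an assumption)
spring.cloud.azure.appconfiguration.stores[0].connection-string=${APP_CONFIGURATION_CONNECTION_STRING}
```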
-> [!NOTE]
-> If you need to support an older version of Spring Boot, see our [old library](https://github.com/Azure/azure-sdk-for-jav).
- ### Code the application To use the Spring Cloud Azure Config starter to have your application communicate with the App Configuration store that you create, configure the application by using the following steps.
azure-app-configuration Use Feature Flags Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-spring-boot.md
ms.devlang: java Previously updated : 05/02/2022 Last updated : 04/11/2023
The easiest way to connect your Spring Boot application to App Configuration is
```xml <dependency> <groupId>com.azure.spring</groupId>
- <artifactId>azure-spring-cloud-feature-management-web</artifactId>
- <version>2.6.0</version>
+ <artifactId>spring-cloud-azure-feature-management-web</artifactId>
+ <version>4.7.0</version>
</dependency> ```
-> [!NOTE]
-> If you need to support an older version of Spring Boot see our [old library](https://github.com/Azure/azure-sdk-for-jav).
- ## Feature flag declaration Each feature flag has two parts: a name and a list of one or more filters that are used to evaluate if a feature's state is *on* (that is, when its value is `True`). A filter defines a use case for when a feature should be turned on.
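In App Configuration, a feature flag is stored as a JSON key value. A minimal declaration with no filters, which simply evaluates its `enabled` state, looks roughly like this (the flag name `feature-a` is illustrative):

```json
{
  "id": "feature-a",
  "description": "",
  "enabled": true,
  "conditions": {
    "client_filters": []
  }
}
```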
public String index(Model model) {
} ```
-When an MVC controller or action is blocked because the controlling feature flag is *off*, a registered `IDisabledFeaturesHandler` interface is called. The default `IDisabledFeaturesHandler` interface returns a 404 status code to the client with no response body.
+When an MVC controller or action is blocked because the controlling feature flag is *off*, a registered `DisabledFeaturesHandler` interface is called. The default `DisabledFeaturesHandler` interface returns a 404 status code to the client with no response body.
## MVC filters
public class FeatureFlagFilter implements Filter {
@Override public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
- if(!featureManager.isEnabledAsync("feature-a").block()) {
+ if(!featureManager.isEnabled("feature-a")) {
chain.doFilter(request, response); return; }
public String getOldFeature() {
## Next steps
-In this tutorial, you learned how to implement feature flags in your Spring Boot application by using the `azure-spring-cloud-feature-management-web` libraries. For further questions see the [reference documentation](https://go.microsoft.com/fwlink/?linkid=2180917), it has all of the details on how the Spring Cloud Azure App Configuration library works.For more information about feature management support in Spring Boot and App Configuration, see the following resources:
+In this tutorial, you learned how to implement feature flags in your Spring Boot application by using the `spring-cloud-azure-feature-management-web` libraries. For further questions, see the [reference documentation](https://go.microsoft.com/fwlink/?linkid=2180917), which covers the details of how the Spring Cloud Azure App Configuration library works. For more information about feature management support in Spring Boot and App Configuration, see the following resources:
* [Spring Boot feature flag sample code](./quickstart-feature-flag-spring-boot.md) * [Manage feature flags](./manage-feature-flags.md)
azure-app-configuration Use Key Vault References Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-spring-boot.md
To add a secret to the vault, you need to take just a few additional steps. In t
} ```
-1. Create a new file called *AzureCredentials.java* and add the code below.
-
- ```java
- package com.example.demo;
-
- import com.azure.core.credential.TokenCredential;
- import com.azure.identity.EnvironmentCredentialBuilder;
- import com.azure.spring.cloud.config.AppConfigurationCredentialProvider;
- import com.azure.spring.cloud.config.KeyVaultCredentialProvider;
-
- public class AzureCredentials implements AppConfigurationCredentialProvider, KeyVaultCredentialProvider{
-
- @Override
- public TokenCredential getKeyVaultCredential(String uri) {
- return getCredential();
- }
-
- @Override
- public TokenCredential getAppConfigCredential(String uri) {
- return getCredential();
- }
-
- private TokenCredential getCredential() {
- return new EnvironmentCredentialBuilder().build();
- }
-
- }
- ```
-
-1. Create a new file called *AppConfiguration.java*. And add the code below.
-
- ```java
- package com.example.demo;
-
- import org.springframework.context.annotation.Bean;
- import org.springframework.context.annotation.Configuration;
-
- @Configuration
- public class AppConfiguration {
-
- @Bean
- public AzureCredentials azureCredentials() {
- return new AzureCredentials();
- }
- }
- ```
-
-1. Create a new file in your resources META-INF directory called *spring.factories* and add the code below.
-
- ```factories
- org.springframework.cloud.bootstrap.BootstrapConfiguration=\
- com.example.demo.AppConfiguration
- ```
- 1. Build your Spring Boot application with Maven and run it, for example: ```shell
azure-arc Backup Controller Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/backup-controller-database.md
+
+ Title: Backup controller database
+description: Explains how to backup the controller database for Azure Arc-enabled data services
++++++ Last updated : 04/26/2023+++
+# Backup controller database
+
+When you deploy Azure Arc data services, the Azure Arc Data Controller is one of the most critical components of the deployment. The data controller:
+
+- Provisions and deprovisions resources
+- Orchestrates most of the activities for Azure Arc-enabled SQL Managed Instance
+- Captures the billing and usage information of each Arc SQL managed instance
+
+All information, such as the inventory of all Arc SQL managed instances, billing, usage, and the current state of those instances, is stored in a database called `controller` under the SQL Server instance that is deployed into the `controldb-0` pod.
+
+This article explains how to back up the controller database.
+
+The following steps are needed to back up the `controller` database:
+
+1. Retrieve the credentials for the secret
+1. Decode the base64 encoded credentials
+1. Use the decoded credentials to connect to the SQL instance hosting the controller database, and issue the `BACKUP` command
+
+## Retrieve the credentials for the secret
+
+`controller-db-rw-secret` is the secret that holds the credentials for the `controldb-rw-user` user account that can be used to connect to the SQL instance.
+Run the following command to retrieve the secret contents:
+
+```bash
+kubectl get secret controller-db-rw-secret --namespace [namespace] -o yaml
+```
+
+For example:
+
+```bash
+kubectl get secret controller-db-rw-secret --namespace arcdataservices -o yaml
+```
+
+## Decode the base64 encoded credentials
+
+The yaml output of the secret `controller-db-rw-secret` contains base64 encoded `password` and `username` values. You can use any base64 decoder tool to decode the contents of the `password`.
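For example, on a machine with the `base64` utility available, the encoded value can be decoded in one line. The jsonpath extraction in the comment assumes the namespace used earlier, and the encoded string below is a made-up sample rather than a real credential:

```bash
# Extract just the encoded password field (assumed namespace):
#   kubectl get secret controller-db-rw-secret -n arcdataservices -o jsonpath='{.data.password}'
# Decode a sample base64-encoded value:
echo 'TXlQQHNzdzByZCE=' | base64 -d
# prints: MyP@ssw0rd!
```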
+
+## Back up the database
+
+With the decoded credentials, run the following command, which issues a T-SQL `BACKUP` statement to back up the controller database.
+
+```bash
+kubectl exec controldb-0 -n contosons -c mssql-server -- /opt/mssql-tools/bin/sqlcmd -S localhost -U controldb-rw-user -P "<password>" -Q "BACKUP DATABASE [controller] TO DISK = N'/var/opt/controller.bak' WITH NOFORMAT, NOINIT, NAME = N'Controldb-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10, CHECKSUM"
+```
+
+Once the backup is created, you can move the `controller.bak` file to remote storage for recovery purposes.
+
+> [!TIP]
+> Back up the controller database before and after any custom resource changes such as creating or deleting an Arc-enabled SQL Managed Instance.
+
+## Next steps
+
+[Azure Data Studio dashboards](azure-data-studio-dashboards.md)
azure-arc Sizing Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/sizing-guidance.md
Each SQL managed instance must have the following minimum resource requests and
|Service tier|General Purpose|Business Critical| ||||
-|CPU request|Minimum: 1<br/> Maximum: 24<br/> Default: 2|Minimum: 1<br/> Maximum: unlimited<br/> Default: 4|
-|CPU limit|Minimum: 1<br/> Maximum: 24<br/> Default: 2|Minimum: 1<br/> Maximum: unlimited<br/> Default: 4|
+|CPU request|Minimum: 1<br/> Maximum: 24<br/> Default: 2|Minimum: 3<br/> Maximum: unlimited<br/> Default: 4|
+|CPU limit|Minimum: 1<br/> Maximum: 24<br/> Default: 2|Minimum: 3<br/> Maximum: unlimited<br/> Default: 4|
|Memory request|Minimum: `2Gi`<br/> Maximum: `128Gi`<br/> Default: `4Gi`|Minimum: `2Gi`<br/> Maximum: unlimited<br/> Default: `4Gi`| |Memory limit|Minimum: `2Gi`<br/> Maximum: `128Gi`<br/> Default: `4Gi`|Minimum: `2Gi`<br/> Maximum: unlimited<br/> Default: `4Gi`|
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
Title: "Azure RBAC for Azure Arc-enabled Kubernetes clusters" Previously updated : 03/13/2023 Last updated : 04/27/2023 description: "Use Azure RBAC for authorization checks on Azure Arc-enabled Kubernetes clusters."
description: "Use Azure RBAC for authorization checks on Azure Arc-enabled Kuber
# Use Azure RBAC for Azure Arc-enabled Kubernetes clusters
-Kubernetes [ClusterRoleBinding and RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) object types help to define authorization in Kubernetes natively. By using this feature, you can use Azure Active Directory (Azure AD) and role assignments in Azure to control authorization checks on the cluster. This means that you can use Azure role assignments to granularly control who can read, write, and delete Kubernetes objects like deployment, pod, and service.
+Kubernetes [ClusterRoleBinding and RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) object types help to define authorization in Kubernetes natively. By using this feature, you can use Azure Active Directory (Azure AD) and role assignments in Azure to control authorization checks on the cluster. Azure role assignments let you granularly control which users can read, write, and delete Kubernetes objects such as deployment, pod, and service.
For a conceptual overview of this feature, see [Azure RBAC on Azure Arc-enabled Kubernetes](conceptual-azure-rbac.md).
For a conceptual overview of this feature, see [Azure RBAC on Azure Arc-enabled
## Set up Azure AD applications
-### [AzureCLI >= v2.37](#tab/AzureCLI)
+### [Azure CLI >= v2.37](#tab/AzureCLI)
#### Create a server application
For a conceptual overview of this feature, see [Azure RBAC on Azure Arc-enabled
az rest --method PATCH --headers "Content-Type=application/json" --uri https://graph.microsoft.com/v1.0/applications/${CLIENT_OBJECT_ID}/ --body '{"api":{"requestedAccessTokenVersion": 1}}' ```
-### [AzureCLI < v2.37](#tab/AzureCLI236)
+### [Azure CLI < v2.37](#tab/AzureCLI236)
#### Create a server application
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --featur
> > Use `--skip-azure-rbac-list` with the preceding command for a comma-separated list of usernames, emails, and OpenID connections undergoing authorization checks by using Kubernetes native `ClusterRoleBinding` and `RoleBinding` objects instead of Azure RBAC.
-### Generic cluster where no reconciler is running on the apiserver specification
+### Generic cluster where no reconciler is running on the `apiserver` specification
1. SSH into every master node of the cluster and take the following steps:
Owners of the Azure Arc-enabled Kubernetes resource can use either built-in role
| Role | Description | |||
-| [Azure Arc Kubernetes Viewer](../../role-based-access-control/built-in-roles.md#azure-arc-kubernetes-viewer) | Allows read-only access to see most objects in a namespace. This role doesn't allow viewing secrets. This is because `read` permission on secrets would enable access to `ServiceAccount` credentials in the namespace. These credentials would in turn allow API access through that `ServiceAccount` value (a form of privilege escalation). |
+| [Azure Arc Kubernetes Viewer](../../role-based-access-control/built-in-roles.md#azure-arc-kubernetes-viewer) | Allows read-only access to see most objects in a namespace. This role doesn't allow viewing secrets, because `read` permission on secrets would enable access to `ServiceAccount` credentials in the namespace. These credentials would in turn allow API access through that `ServiceAccount` value (a form of privilege escalation). |
| [Azure Arc Kubernetes Writer](../../role-based-access-control/built-in-roles.md#azure-arc-kubernetes-writer) | Allows read/write access to most objects in a namespace. This role doesn't allow viewing or modifying roles or role bindings. However, this role allows accessing secrets and running pods as any `ServiceAccount` value in the namespace, so it can be used to gain the API access levels of any `ServiceAccount` value in the namespace. | | [Azure Arc Kubernetes Admin](../../role-based-access-control/built-in-roles.md#azure-arc-kubernetes-admin) | Allows admin access. It's intended to be granted within a namespace through `RoleBinding`. If you use it in `RoleBinding`, it allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. This role doesn't allow write access to resource quota or to the namespace itself. | | [Azure Arc Kubernetes Cluster Admin](../../role-based-access-control/built-in-roles.md#azure-arc-kubernetes-cluster-admin) | Allows superuser access to execute any action on any resource. When you use it in `ClusterRoleBinding`, it gives full control over every resource in the cluster and in all namespaces. When you use it in `RoleBinding`, it gives full control over every resource in the role binding's namespace, including the namespace itself.|
After the proxy process is running, you can open another tab in your console to
### Use a shared kubeconfig file
+Using a shared kubeconfig requires slightly different steps depending on your Kubernetes version.
+
+### [Kubernetes version >= 1.26](#tab/kubernetes-latest)
+ 1. Run the following command to set the credentials for the user: ```console
After the proxy process is running, you can open another tab in your console to
name: azure ```
+> [!NOTE]
+> [Exec plugin](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins) is a Kubernetes authentication strategy that allows `kubectl` to execute an external command to receive user credentials to send to `apiserver`. Starting with Kubernetes version 1.26, the default Azure authorization plugin is no longer included in `client-go` and `kubectl`. With later versions, in order to use the exec plugin to receive user credentials you must use [Azure Kubelogin](https://azure.github.io/kubelogin/), a `client-go` credential (exec) plugin that implements Azure authentication.
+
+4. Install Azure Kubelogin:
+
+ - For Windows or Mac, follow the [Azure Kubelogin installation instructions](https://azure.github.io/kubelogin/install.html#installation).
+ - For Linux or Ubuntu, download the [latest version of kubelogin](https://github.com/Azure/kubelogin/releases), then run the following commands:
+
+ ```bash
+    # Set KUBELOGIN_VERSION to the release you downloaded, for example: export KUBELOGIN_VERSION=v0.0.28
+    curl -LO https://github.com/Azure/kubelogin/releases/download/"$KUBELOGIN_VERSION"/kubelogin-linux-amd64.zip
+
+ unzip kubelogin-linux-amd64.zip
+
+ sudo mv bin/linux_amd64/kubelogin /usr/local/bin/
+
+ sudo chmod +x /usr/local/bin/kubelogin
+ ```
+
+5. [Convert](https://azure.github.io/kubelogin/cli/convert-kubeconfig.html) the kubeconfig to the appropriate [login mode](https://azure.github.io/kubelogin/concepts/login-modes.html) by using kubelogin. For example, for [device code login](https://azure.github.io/kubelogin/concepts/login-modes/devicecode.html) with an Azure Active Directory user, the commands are as follows:
+
+ ```bash
+ export KUBECONFIG=/path/to/kubeconfig
+
+ kubelogin convert-kubeconfig
+ ```
+
+### [Kubernetes < v1.26](#tab/Kubernetes-earlier)
+
+1. Run the following command to set the credentials for the user:
+
+ ```console
+ kubectl config set-credentials <testuser>@<mytenant.onmicrosoft.com> \
+ --auth-provider=azure \
+ --auth-provider-arg=environment=AzurePublicCloud \
+ --auth-provider-arg=client-id=<clientApplicationId> \
+ --auth-provider-arg=tenant-id=<tenantId> \
+ --auth-provider-arg=apiserver-id=<serverApplicationId>
+ ```
+
+1. Open the *kubeconfig* file that you created earlier. Under `contexts`, verify that the context associated with the cluster points to the user credentials that you created in the previous step. To set the current context to these user credentials, run the following command:
+
+ ```console
+ kubectl config set-context --current=true --user=<testuser>@<mytenant.onmicrosoft.com>
+ ```
+
+1. Add the **config-mode** setting under `user` > `config`:
+
+ ```console
+ name: testuser@mytenant.onmicrosoft.com
+ user:
+ auth-provider:
+ config:
+ apiserver-id: $SERVER_APP_ID
+ client-id: $CLIENT_APP_ID
+ environment: AzurePublicCloud
+ tenant-id: $TENANT_ID
+ config-mode: "1"
+ name: azure
+ ```
+++ ## Send requests to the cluster 1. Run any `kubectl` command. For example:
Access the cluster again. For example, run the `kubectl get nodes` command to vi
kubectl get nodes ```
-Follow the instructions to sign in again. An error message states that you're successfully logged in, but your admin requires the device that's requesting access to be managed by Azure AD to access the resource. Follow these steps:
+Follow the instructions to sign in again. An error message states that you're successfully logged in, but your admin requires the device that's requesting access to be managed by Azure AD in order to access the resource. Follow these steps:
1. In the Azure portal, go to **Azure Active Directory**. 1. Select **Enterprise applications**. Then under **Activity**, select **Sign-ins**.
After you've made the assignments, verify that just-in-time access is working by
kubectl get nodes ```
-Note the authentication requirement and follow the steps to authenticate. If authentication is successful, you should see output similar to the following:
+Note the authentication requirement and follow the steps to authenticate. If authentication is successful, you should see output similar to this:
```output To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AAAAAAAAA to authenticate.
node-3 Ready agent 6m33s v1.18.14
## Refresh the secret of the server application
-If the secret for the server application's service principal has expired, you will need to rotate it.
+If the secret for the server application's service principal has expired, you'll need to rotate it.
```azurecli SERVER_APP_SECRET=$(az ad sp credential reset --name "${SERVER_APP_ID}" --credential-description "ArcSecret" --query password -o tsv) ```
-Update the secret on the cluster. Please add any optional parameters you configured when this command was originally run.
+Update the secret on the cluster. Include any optional parameters you configured when the command was originally run.
```azurecli az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features azure-rbac --app-id "${SERVER_APP_ID}" --app-secret "${SERVER_APP_SECRET}"
azure-arc Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/deploy-cli.md
- # Azure Arc resource bridge (preview) deployment command overview [Azure CLI](/cli/azure/install-azure-cli) is required to deploy the Azure Arc resource bridge. When deploying Arc resource bridge with a corresponding partner product, the Azure CLI commands may be combined into an automation script, along with additional provider-specific commands. To learn about installing Arc resource bridge with a corresponding partner product, see:
Once the `create` command initiates the connection, it will return in the termin
## `az arcappliance show`
-The `show` command gets the status of the Arc resource bridge and ARM resource information. It can be used to check the progress of the connection between the ARM resource and on-premises appliance VM.
+The `show` command gets the status of the Arc resource bridge and ARM resource information. It can be used to check the progress of the connection between the ARM resource and on-premises appliance VM.
While the Arc resource bridge is connecting the ARM resource to the on-premises VM, the resource bridge progresses through the following stages: `ProvisioningState` may be `Creating`, `Created`, `Failed`, `Deleting`, or `Succeeded`.
-`Status` transitions between `WaitingForHeartbeat` -> `Validating` -> `Connected` -> `Running`.
+`Status` transitions between `WaitingForHeartbeat` -> `Validating` -> `Connecting` -> `Connected` -> `Running`.
+
+- `WaitingForHeartbeat`: Azure is waiting to receive a signal from the appliance VM.
+
+- `Validating`: The appliance VM is checking Azure services for connectivity and serviceability.
+
+- `Connecting`: The appliance VM is syncing on-premises resources to Azure.
+
+- `Connected`: The appliance VM has completed the sync of on-premises resources to Azure.
+
+- `Running`: The appliance VM and Azure have completed the hybrid sync, and Arc resource bridge is now operational.
Successful Arc resource bridge creation results in `ProvisioningState = Succeeded` and `Status = Running`.
If a deployment fails, run this command to clean up the environment before you a
- Explore the full list of [Azure CLI commands and required parameters](/cli/azure/arcappliance) for Arc resource bridge. - Get [troubleshooting tips for Arc resource bridge](troubleshoot-resource-bridge.md).++
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md
The management machine has the following requirements:
- [Azure CLI x64](/cli/azure/install-azure-cli-windows?tabs=azure-cli) installed. - Open communication to Control Plane IP (`controlplaneendpoint` parameter in `createconfig` command).-- Open communication to Appliance VM IP (`k8snodeippoolstart` parameter in `createconfig` command).-- Open communication to the reserved Appliance VM IP for upgrade (`k8snodeippoolend` parameter in `createconfig` command).
+- Open communication to Appliance VM IP (`k8snodeippoolstart` parameter in `createconfig` command; may be referred to in partner products as Start Range IP, RB IP Start, or VM IP 1).
+- Open communication to the reserved Appliance VM IP for upgrade (`k8snodeippoolend` parameter in `createconfig` command; may be referred to as End Range IP, RB IP End, or VM IP 2).
- Internal and external DNS resolution. The DNS server must resolve internal names, such as the vCenter endpoint for vSphere or cloud agent service endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses that are [required URLs](network-requirements.md#outbound-connectivity) for deployment. - If using a proxy, the proxy server configuration on the management machine must allow the machine to have internet access and to connect to [required URLs](network-requirements.md#outbound-connectivity) needed for deployment, such as the URL to download OS images. ## Appliance VM requirements
-Arc resource bridge consists of an appliance VM that is deployed on-premises. The appliance VM has visibility into the on-premises infrastructure and can tag on-premises resources (guest management) for availability in Azure Resource Manager (ARM). The appliance VM is assigned an IP address from the `k8snodeippoolstart` parameter in the `createconfig` command.
+Arc resource bridge consists of an appliance VM that is deployed on-premises. The appliance VM has visibility into the on-premises infrastructure and can tag on-premises resources (guest management) for projection into Azure Resource Manager (ARM). The appliance VM is assigned an IP address from the `k8snodeippoolstart` parameter in the `createconfig` command (may be referred to in partner products as Start Range IP, RB IP Start, or VM IP 1).
The appliance VM has the following requirements:
The appliance VM has the following requirements:
## Reserved appliance VM IP requirements
-Arc resource bridge reserves an additional IP address to be used for the appliance VM upgrade. During upgrade, a new appliance VM is created with the reserved appliance VM IP. Once the new appliance VM is created, the old appliance VM is deleted, and its IP address becomes reserved for a future upgrade. The reserved appliance VM IP is assigned an IP address from the `k8snodeippoolend` parameter in the `az arcappliance createconfig` command.
+Arc resource bridge reserves an additional IP address to be used for the appliance VM upgrade. During upgrade, a new appliance VM is created with the reserved appliance VM IP. Once the new appliance VM is created, the old appliance VM is deleted, and its IP address becomes reserved for a future upgrade. The reserved appliance VM IP is assigned an IP address from the `k8snodeippoolend` parameter in the `az arcappliance createconfig` command (may be referred to as End Range IP, RB IP End, or VM IP 2).
The reserved appliance VM IP has the following requirements:
The control plane IP has the following requirements:
## User account and credentials
-Arc resource bridge may require a separate user account with the necessary roles to view and manage resources in the on-premises infrastructure (such as Arc-enabled VMware vSphere or Arc-enabled SCVMM). If so, during creation of the configuration files, the `username` and `password` parameters will be required. The account credentials are then stored in a configuration file locally within the appliance VM.
+Arc resource bridge may require a separate user account with the necessary roles to view and manage resources in the on-premises infrastructure (for example, Arc-enabled VMware vSphere). If so, during creation of the configuration files, the `username` and `password` parameters will be required. The account credentials are then stored in a configuration file locally within the appliance VM.
If the user account is set to periodically change passwords, [the credentials must be immediately updated on the resource bridge](maintenance.md#update-credentials-in-the-appliance-vm). This user account may also be set with a lockout policy to protect the on-premises infrastructure, in case the credentials aren't updated and the resource bridge makes multiple attempts to use expired credentials to access the on-premises control center.
There are several different types of configuration files, based on the on-premis
### Appliance configuration files
-Three configuration files are created when the `createconfig` command completes (or the equivalent commands used by Azure Stack HCI and AKS hybrid): resource.yaml, appliance.yaml and infra.yaml.
+Three configuration files are created when the `createconfig` command completes (or the equivalent commands used by Azure Stack HCI and AKS hybrid): <resourcename>-resource.yaml, <resourcename>-appliance.yaml and <resourcename>-infra.yaml.
By default, these files are generated in the current CLI directory when `createconfig` completes. These files should be saved in a secure location on the management machine, because they're required for maintaining the appliance VM. Because the configuration files reference each other, all three files must be stored in the same location. If the files are moved from their original location at deployment, open the files to check that the reference paths to the configuration files are accurate.
When deploying Arc resource bridge with AKS on Azure Stack HCI (AKS Hybrid), the
- Understand [network requirements for Azure Arc resource bridge (preview)](network-requirements.md). - Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about features and benefits. - Learn about [security configuration and considerations for Azure Arc resource bridge (preview)](security-overview.md).++
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md
To secure the storage for an existing function app:
| Setting name | Value | Comment | |-|-|-| | `AzureWebJobsStorage`| Storage connection string | This is the connection string for a secured storage account. |
- | `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | Storage connection string | This is the connection string for a secured storage account. |
- | `WEBSITE_CONTENTSHARE` | File share | The name of the file share created in the secured storage account where the project deployment files reside. |
+ | `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | Storage connection string | This is the connection string for a secured storage account. This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. |
+ | `WEBSITE_CONTENTSHARE` | File share | The name of the file share created in the secured storage account where the project deployment files reside. This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. |
| `WEBSITE_CONTENTOVERVNET` | 1 | A value of 1 enables your function app to scale when you have your storage account restricted to a virtual network. You should enable this setting when restricting your storage account to a virtual network. | 1. Select **Save** to save the application settings. Changing app settings causes the app to restart.
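As a rough sketch of the plan-dependent requirements in the table above (the helper name and structure are invented for illustration, not part of any Functions API):

```python
# Hypothetical helper: which storage-related app settings must be present,
# per hosting plan, when the storage account is restricted to a virtual network.
REQUIRED_ALWAYS = ["AzureWebJobsStorage", "WEBSITE_CONTENTOVERVNET"]
REQUIRED_WHEN_DYNAMIC = ["WEBSITE_CONTENTAZUREFILECONNECTIONSTRING", "WEBSITE_CONTENTSHARE"]

def required_settings(plan: str) -> list:
    # Consumption and Premium plan apps are dynamically scaled by Functions
    # and need the content share settings; Dedicated plan apps don't.
    if plan in ("Consumption", "Premium"):
        return REQUIRED_ALWAYS + REQUIRED_WHEN_DYNAMIC
    return list(REQUIRED_ALWAYS)
```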
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubCon
# [Isolated process](#tab/isolated-process/fixed-delay)
-Retry policies aren't yet supported when they're running in an isolated worker process.
+```csharp
+[Function("EventHubsFunction")]
+[FixedDelayRetry(5, "00:00:10")]
+[EventHubOutput("dest", Connection = "EventHubConnectionAppSetting")]
+public static string Run([EventHubTrigger("src", Connection = "EventHubConnectionAppSetting")] string[] input,
+ FunctionContext context)
+{
+// ...
+}
+ ```
+|Property | Description |
+||-|
+|MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
+|DelayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.|
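The two retry knobs in the table behave the same way across languages; as a runtime-agnostic sketch (plain Python, not the Functions host itself), a fixed-delay retry loop looks like:

```python
import time

def run_with_fixed_delay(fn, max_retry_count, delay_seconds):
    """Call fn; on failure, retry up to max_retry_count more times, sleeping
    a constant delay between attempts. max_retry_count = -1 retries forever."""
    retries = 0
    while True:
        try:
            return fn()
        except Exception:
            if max_retry_count != -1 and retries >= max_retry_count:
                raise  # retry budget exhausted; surface the error
            retries += 1
            time.sleep(delay_seconds)
```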
# [C# script](#tab/csharp-script/fixed-delay)
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
You can make GET requests from a browser passing data in the query string. For a
#### Non-HTTP triggered functions
-For all functions other than HTTP and Event Grid triggers, you can test your functions locally using REST by calling a special endpoint called an _administration endpoint_. Calling this endpoint with an HTTP POST request on the local server triggers the function.
+For all functions other than HTTP and Event Grid triggers, you can test your functions locally using REST by calling a special endpoint called an _administration endpoint_. Calling this endpoint with an HTTP POST request on the local server triggers the function. You can call the `functions` administrator endpoint (`http://localhost:{port}/admin/functions/`) to get URLs for all available functions, both HTTP triggered and non-HTTP triggered.
+
+When running locally, authentication and authorization are bypassed. However, when you try to call the same administrator endpoints on your function app in Azure, you must provide an access key. To learn more, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
+
+>[!IMPORTANT]
+>Access keys are valuable shared secrets. When used locally, they must be securely stored outside of source control. Because authentication and authorization aren't required by Functions when running locally, you should avoid using and storing access keys unless your scenarios require it.
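For illustration, the administration endpoint URL follows the pattern described above; a hypothetical helper for building it (the default Core Tools port 7071 and the function name are assumptions):

```python
def admin_function_url(function_name, host="localhost", port=7071):
    """Build the local administration endpoint URL that triggers a
    non-HTTP function when called with an HTTP POST."""
    return "http://{}:{}/admin/functions/{}".format(host, port, function_name)
```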
To test Event Grid triggered functions locally, see [Local testing with viewer web app](event-grid-how-tos.md#local-testing-with-viewer-web-app).
curl --request POST -H "Content-Type:application/json" --data "{'input':'sample
```
-The administrator endpoint also provides a list of all (HTTP triggered and non-HTTP triggered) functions on `http://localhost:{port}/admin/functions/`.
-
-When you call an administrator endpoint on your function app in Azure, you must provide an access key. To learn more, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
- ## <a name="publish"></a>Publish to Azure The Azure Functions Core Tools supports two types of deployment:
azure-functions Python Memory Profiler Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/python-memory-profiler-reference.md
Title: Memory profiling on Python apps in Azure Functions
-description: Learn how to profile Python apps memory usage and identify memory bottleneck.
+ Title: Memory profiling of Python apps in Azure Functions
+description: Learn how to profile the memory usage of Python apps and identify memory bottlenecks.
Previously updated : 3/22/2021 Last updated : 4/11/2023 ms.devlang: python # Profile Python apps memory usage in Azure Functions
-During development or after deploying your local Python function app project to Azure, it's a good practice to analyze for potential memory bottlenecks in your functions. Such bottlenecks can decrease the performance of your functions and lead to errors. The following instruction show you how to use the [memory-profiler](https://pypi.org/project/memory-profiler) Python package, which provides line-by-line memory consumption analysis of your functions as they execute.
+During development or after deploying your local Python function app project to Azure, it's a good practice to analyze for potential memory bottlenecks in your functions. Such bottlenecks can decrease the performance of your functions and lead to errors. The following instructions show you how to use the [memory-profiler](https://pypi.org/project/memory-profiler) Python package, which provides line-by-line memory consumption analysis of your functions as they execute.
> [!NOTE]
-> Memory profiling is intended only for memory footprint analysis on development environment. Please do not apply the memory profiler on production function apps.
+> Memory profiling is intended only for memory footprint analysis in development environments. Please do not apply the memory profiler on production function apps.
## Prerequisites Before you start developing a Python function app, you must meet these requirements:
-* [Python 3.6.x or above](https://www.python.org/downloads/release/python-374/). To check the full list of supported Python versions in Azure Functions, please visit [Python developer guide](functions-reference-python.md#python-version).
+* [Python 3.7 or above](https://www.python.org/downloads). To check the full list of supported Python versions in Azure Functions, see the [Python developer guide](functions-reference-python.md#python-version).
-* The [Azure Functions Core Tools](functions-run-local.md#v2) version 3.x.
+* The [Azure Functions Core Tools](functions-run-local.md#v2), version 4.x or greater. Check your version with `func --version`. To learn about updating, see [Azure Functions Core Tools on GitHub](https://github.com/Azure/azure-functions-core-tools).
* [Visual Studio Code](https://code.visualstudio.com/) installed on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
Before you start developing a Python function app, you must meet these requireme
## Memory profiling process
-1. In your requirements.txt, add `memory-profiler` to ensure the package will be bundled with your deployment. If you are developing on your local machine, you may want to [activate a Python virtual environment](create-first-function-cli-python.md#create-venv) and do a package resolution by `pip install -r requirements.txt`.
+1. In your requirements.txt, add `memory-profiler` to ensure the package is bundled with your deployment. If you're developing on your local machine, you may want to [activate a Python virtual environment](create-first-function-cli-python.md#create-venv) and do a package resolution by `pip install -r requirements.txt`.
-2. In your function script (usually \_\_init\_\_.py), add the following lines above the `main()` function. This will ensure the root logger reports the child logger names, so that the memory profiling logs are distinguishable by the prefix `memory_profiler_logs`.
+2. In your function script (for example, *\_\_init\_\_.py* for the Python v1 programming model and *function_app.py* for the v2 model), add the following lines above the `main()` function. These lines ensure the root logger reports the child logger names, so that the memory profiling logs are distinguishable by the prefix `memory_profiler_logs`.
```python import logging
Before you start developing a Python function app, you must meet these requireme
root_logger.handlers[0].setFormatter(logging.Formatter("%(name)s: %(message)s")) profiler_logstream = memory_profiler.LogFile('memory_profiler_logs', True)
-3. Apply the following decorator above any functions that need memory profiling. This does not work directly on the trigger entrypoint `main()` method. You need to create subfunctions and decorate them. Also, due to a memory-profiler known issue, when applying to an async coroutine, the coroutine return value will always be None.
+3. Apply the following decorator above any functions that need memory profiling. The decorator doesn't work directly on the trigger entrypoint `main()` method. You need to create subfunctions and decorate them. Also, due to a memory-profiler known issue, when applying to an async coroutine, the coroutine return value is always `None`.
```python @memory_profiler.profile(stream=profiler_logstream)
-4. Test the memory profiler on your local machine by using azure Functions Core Tools command `func host start`. This should generate a memory usage report with file name, line of code, memory usage, memory increment, and the line content in it.
+4. Test the memory profiler on your local machine by using Azure Functions Core Tools command `func host start`. When you invoke the functions, they should generate a memory usage report. The report contains file name, line of code, memory usage, memory increment, and the line content in it.
-5. To check the memory profiling logs on an existing function app instance in Azure, you can query the memory profiling logs in recent invocations by pasting the following Kusto queries in Application Insights, Logs.
+5. To check the memory profiling logs on an existing function app instance in Azure, you can query the memory profiling logs for recent invocations with [Kusto](/azure/azure-monitor/logs/log-query-overview) queries in Application Insights, Logs.
+ :::image type="content" source="media/python-memory-profiler-reference/application-insights-query.png" alt-text="Screenshot showing the query memory usage of a Python app in Application Insights.":::
-```text
-traces
-| where timestamp > ago(1d)
-| where message startswith_cs "memory_profiler_logs:"
-| parse message with "memory_profiler_logs: " LineNumber " " TotalMem_MiB " " IncreMem_MiB " " Occurences " " Contents
-| union (
+ ```kusto
traces | where timestamp > ago(1d)
- | where message startswith_cs "memory_profiler_logs: Filename: "
- | parse message with "memory_profiler_logs: Filename: " FileName
- | project timestamp, FileName, itemId
-)
-| project timestamp, LineNumber=iff(FileName != "", FileName, LineNumber), TotalMem_MiB, IncreMem_MiB, Occurences, Contents, RequestId=itemId
-| order by timestamp asc
-```
-
+ | where message startswith_cs "memory_profiler_logs:"
+ | parse message with "memory_profiler_logs: " LineNumber " " TotalMem_MiB " " IncreMem_MiB " " Occurrences " " Contents
+ | union (
+ traces
+ | where timestamp > ago(1d)
+ | where message startswith_cs "memory_profiler_logs: Filename: "
+ | parse message with "memory_profiler_logs: Filename: " FileName
+ | project timestamp, FileName, itemId
+ )
+ | project timestamp, LineNumber=iff(FileName != "", FileName, LineNumber), TotalMem_MiB, IncreMem_MiB, Occurrences, Contents, RequestId=itemId
+ | order by timestamp asc
+ ```
+
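The `parse` steps in the query above split each trace message on spaces; as a rough cross-check, the same extraction in plain Python (field names mirror the query; this helper is an illustration, not part of the profiler) could look like:

```python
def parse_profiler_message(message):
    """Extract the fields the Kusto query parses from a single
    memory_profiler trace message; return None for unrelated messages."""
    prefix = "memory_profiler_logs: "
    if not message.startswith(prefix):
        return None
    body = message[len(prefix):]
    if body.startswith("Filename: "):
        return {"FileName": body[len("Filename: "):]}
    # Remaining messages carry: line number, total MiB, increment MiB,
    # occurrence count, then the source line contents.
    line_no, total_mib, incre_mib, occurrences, contents = body.split(" ", 4)
    return {"LineNumber": line_no, "TotalMem_MiB": total_mib,
            "IncreMem_MiB": incre_mib, "Occurrences": occurrences,
            "Contents": contents}
```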
## Example
-Here is an example of performing memory profiling on an asynchronous and a synchronous HTTP triggers, named "HttpTriggerAsync" and "HttpTriggerSync" respectively. We will build a Python function app that simply sends out GET requests to the Microsoft's home page.
+Here's an example of performing memory profiling on an asynchronous and a synchronous HTTP trigger, named "HttpTriggerAsync" and "HttpTriggerSync" respectively. We'll build a Python function app that simply sends out GET requests to Microsoft's home page.
### Create a Python function app A Python function app should follow Azure Functions specified [folder structure](functions-reference-python.md#folder-structure). To scaffold the project, we recommend using the Azure Functions Core Tools by running the following commands:
+# [v1](#tab/v1)
+ ```bash func init PythonMemoryProfilingDemo --python cd PythonMemoryProfilingDemo
func new -l python -t HttpTrigger -n HttpTriggerAsync -a anonymous
func new -l python -t HttpTrigger -n HttpTriggerSync -a anonymous ```
+# [v2](#tab/v2)
+
+```bash
+func init PythonMemoryProfilingDemov2 --python -m v2
+cd PythonMemoryProfilingDemov2
+```
+
+For the Python V2 programming model, triggers and bindings are created as decorators within the Python file itself, the *function_app.py* file. For information on how to create a new function with the new programming model, see the [Azure Functions Python developer guide](https://aka.ms/pythonprogrammingmodel). `func new` isn't supported for the preview of the V2 Python programming model.
+++ ### Update file contents
-The requirements.txt defines the packages that will be used in our project. Besides the Azure Functions SDK and memory-profiler, we introduce `aiohttp` for asynchronous HTTP requests and `requests` for synchronous HTTP calls.
+The *requirements.txt* defines the packages that are used in our project. Besides the Azure Functions SDK and memory-profiler, we introduce `aiohttp` for asynchronous HTTP requests and `requests` for synchronous HTTP calls.
```text # requirements.txt
aiohttp
requests ```
-We also need to rewrite the asynchronous HTTP trigger `HttpTriggerAsync/__init__.py` and configure the memory profiler, root logger format, and logger streaming binding.
+Create the asynchronous HTTP trigger.
+
+# [v1](#tab/v1)
+
+Replace the code in the asynchronous HTTP trigger *HttpTriggerAsync/\_\_init\_\_.py* with the following code, which configures the memory profiler, root logger format, and logger streaming binding.
```python # HttpTriggerAsync/__init__.py
profiler_logstream = memory_profiler.LogFile('memory_profiler_logs', True)
async def main(req: func.HttpRequest) -> func.HttpResponse: await get_microsoft_page_async('https://microsoft.com') return func.HttpResponse(
- f"Microsoft Page Is Loaded",
+ f"Microsoft page loaded.",
status_code=200 )
async def get_microsoft_page_async(url: str):
# GitHub Issue: https://github.com/pythonprofilers/memory_profiler/issues/289 ```
-For synchronous HTTP trigger, please refer to the following `HttpTriggerSync/__init__.py` code section:
+# [v2](#tab/v2)
+
+Replace the code in the *function_app.py* file with the following code, which configures the memory profiler, root logger format, and logger streaming binding.
+
+```python
+# function_app.py
+import azure.functions as func
+import logging
+import aiohttp
+import requests
+import memory_profiler
+
+app = func.FunctionApp()
+
+# Update root logger's format to include the logger name. Ensure logs generated
+# from memory profiler can be filtered by "memory_profiler_logs" prefix.
+root_logger = logging.getLogger()
+root_logger.handlers[0].setFormatter(logging.Formatter("%(name)s: %(message)s"))
+profiler_logstream = memory_profiler.LogFile('memory_profiler_logs', True)
+
+@app.function_name(name="HttpTriggerAsync")
+@app.route(route="HttpTriggerAsync", auth_level=func.AuthLevel.ANONYMOUS)
+async def test_function(req: func.HttpRequest) -> func.HttpResponse:
+ await get_microsoft_page_async('https://microsoft.com')
+ return func.HttpResponse(f"Microsoft page loaded.")
+
+@memory_profiler.profile(stream=profiler_logstream)
+async def get_microsoft_page_async(url: str):
+ async with aiohttp.ClientSession() as client:
+ async with client.get(url) as response:
+ await response.text()
+ # @memory_profiler.profile does not support return for coroutines.
+ # All returns become None in the parent functions.
+ # GitHub Issue: https://github.com/pythonprofilers/memory_profiler/issues/289
+```
+++
+Create the synchronous HTTP trigger.
+
+# [v1](#tab/v1)
+
+Replace the code in the synchronous HTTP trigger *HttpTriggerSync/\_\_init\_\_.py* with the following code.
```python # HttpTriggerSync/__init__.py
profiler_logstream = memory_profiler.LogFile('memory_profiler_logs', True)
def main(req: func.HttpRequest) -> func.HttpResponse: content = profile_get_request('https://microsoft.com') return func.HttpResponse(
- f"Microsoft Page Response Size: {len(content)}",
+ f"Microsoft page response size: {len(content)}",
status_code=200 )
def profile_get_request(url: str):
return response.content ```
+# [v2](#tab/v2)
+
+Add this code to the bottom of the existing *function_app.py* file.
+
+```python
+@app.function_name(name="HttpTriggerSync")
+@app.route(route="HttpTriggerSync", auth_level=func.AuthLevel.ANONYMOUS)
+def test_function(req: func.HttpRequest) -> func.HttpResponse:
+ content = profile_get_request('https://microsoft.com')
+ return func.HttpResponse(f"Microsoft page response size: {len(content)}")
+
+@memory_profiler.profile(stream=profiler_logstream)
+def profile_get_request(url: str):
+ response = requests.get(url)
+ return response.content
+```
+++ ### Profile Python function app in local development environment
-After making all the above changes, there are a few more steps to initialize a Python virtual envionment for Azure Functions runtime.
+After you make the above changes, there are a few more steps to initialize a Python virtual environment for Azure Functions runtime.
1. Open a Windows PowerShell or any Linux shell as you prefer. 2. Create a Python virtual environment by `py -m venv .venv` in Windows, or `python3 -m venv .venv` in Linux.
-3. Activate the Python virutal environment with `.venv\Scripts\Activate.ps1` in Windows PowerShell or `source .venv/bin/activate` in Linux shell.
-4. Restore the Python dependencies with `pip install requirements.txt`
+3. Activate the Python virtual environment with `.venv\Scripts\Activate.ps1` in Windows PowerShell or `source .venv/bin/activate` in Linux shell.
+4. Restore the Python dependencies with `pip install -r requirements.txt`
5. Start the Azure Functions runtime locally with Azure Functions Core Tools `func host start` 6. Send a GET request to `https://localhost:7071/api/HttpTriggerAsync` or `https://localhost:7071/api/HttpTriggerSync`.
-7. It should show a memory profiling report similiar to below section in Azure Functions Core Tools.
+7. It should show a memory profiling report similar to the following section in Azure Functions Core Tools.
```text Filename: <ProjectRoot>\HttpTriggerAsync\__init__.py
- Line # Mem usage Increment Occurences Line Contents
+ Line # Mem usage Increment Occurrences Line Contents
============================================================ 19 45.1 MiB 45.1 MiB 1 @memory_profiler.profile 20 async def get_microsoft_page_async(url: str):
azure-maps How To Use Image Templates Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-image-templates-web-sdk.md
Title: Image templates in the Azure Maps Web SDK | Microsoft Azure Maps description: Learn how to add image icons and pattern-filled polygons to maps by using the Azure Maps Web SDK. View available image and fill pattern templates.--++ Last updated 8/6/2019 -+ # How to use image templates Images can be used with HTML markers and various layers within the Azure Maps web SDK:
+- Symbol layers can render points on the map with an image icon. Symbols can also be rendered along a line's path.
+- Polygon layers can be rendered with a fill pattern image.
+- HTML markers can render points using images and other HTML elements.
In order to ensure good performance with layers, load the images into the map image sprite resource before rendering. The [IconOptions](/javascript/api/azure-maps-control/atlas.iconoptions) of the SymbolLayer preloads a couple of marker images in a handful of colors into the map image sprite by default. These marker images and more are available as SVG templates. They can be used to create images with custom scales, or customized with a primary and secondary color. In total, there are 42 image templates provided: 27 symbol icons and 15 polygon fill patterns.
The following code shows how to create an image from one of the built-in templat
```javascript map.imageSprite.createFromTemplate('myTemplatedIcon', 'marker-flat', 'teal', '#fff').then(function () {
- //Add a symbol layer that uses the custom created icon.
- map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
- iconOptions: {
- image: 'myTemplatedIcon'
- }
- }));
+ //Add a symbol layer that uses the custom created icon.
+ map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
+ iconOptions: {
+ image: 'myTemplatedIcon'
+ }
+ }));
}); ```
map.imageSprite.createFromTemplate('myTemplatedIcon', 'marker-flat', 'teal', '#f
Once an image template is loaded into the map image sprite, it can be rendered as a symbol in a symbol layer by referencing the image resource ID in the `image` option of the `iconOptions`.
-The following sample renders a symbol layer using the `marker-flat` image template with a teal primary color and a white secondary color.
+The following sample renders a symbol layer using the `marker-flat` image template with a teal primary color and a white secondary color.
<br/>
The following sample renders a symbol layer using the `marker-flat` image templa
## Use an image template along a lines path
-Once an image template is loaded into the map image sprite, it can be rendered along the path of a line by adding a LineString to a data source and using a symbol layer with a `lineSpacing`option and by referencing the ID of the image resource in the `image` option of th `iconOptions`.
+Once an image template is loaded into the map image sprite, it can be rendered along the path of a line by adding a LineString to a data source and using a symbol layer with a `lineSpacing` option and by referencing the ID of the image resource in the `image` option of the `iconOptions`.
-The following sample renders a pink line on the map and uses a symbol layer using the `car` image template with a dodger blue primary color and a white secondary color.
+The following sample renders a pink line on the map and uses a symbol layer using the `car` image template with a dodger blue primary color and a white secondary color.
<br/>
The following sample renders a polygon layer using the `dot` image template with
</iframe> > [!TIP]
-> Setting the secondary color of fill patterns makes it easier to see the underlying map will still providing the primary pattern.
+> Setting the secondary color of fill patterns makes it easier to see the underlying map while still providing the primary pattern.
## Use an image template with an HTML marker
The following sample uses the `marker-arrow` template with a red primary color,
(<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe> - > [!TIP] > Image templates can be used outside of the map too. The getImageTemplate function returns an SVG string that has placeholders; `{color}`, `{secondaryColor}`, `{scale}`, `{text}`. Replace these placeholder values to create a valid SVG string. You can then either add the SVG string directly to the HTML DOM or convert it into a data URI and insert it into an image tag. For example:
+>
> ```JavaScript > //Retrieve an SVG template and replace the placeholder values. > var svg = atlas.getImageTemplate('marker').replace(/{color}/, 'red').replace(/{secondaryColor}/, 'white').replace(/{text}/, '').replace(/{scale}/, 1);
The following sample uses the `marker-arrow` template with a red primary color,
## Create custom reusable templates
-If your application uses the same icon with different icons or if you are creating a module that adds additional image templates, you can easily add and retrieve these icons from the Azure Maps web SDK. Use the following static functions on the `atlas` namespace.
+If your application uses the same icon within different modules or if you're creating a module that adds more image templates, you can easily add and retrieve these icons from the Azure Maps web SDK. Use the following static functions on the `atlas` namespace.
-| Name | Return Type | Description |
-|-|-|-|
+| Name | Return Type | Description |
+||-|-|
| `addImageTemplate(templateName: string, template: string, override: boolean)` | | Adds a custom SVG image template to the atlas namespace. | | `getImageTemplate(templateName: string, scale?: number)`| string | Retrieves an SVG template by name. | | `getAllImageTemplateNames()` | string[] | Retrieves the names of all image templates. | SVG image templates support the following placeholder values:
-| Placeholder | Description |
-|-|-|
-| `{color}` | The primary color. |
-| `{secondaryColor}` | The secondary color. |
-| `{scale}` | The SVG image is converted to an png image when added to the map image sprite. This placeholder can be used to scale a template before it is converted to ensure it renders clearly. |
+| Placeholder | Description |
+|-|--|
+| `{color}` | The primary color. |
+| `{secondaryColor}` | The secondary color. |
+| `{scale}` | The SVG image is converted to an png image when added to the map image sprite. This placeholder can be used to scale a template before it's converted to ensure it renders clearly. |
| `{text}` | The location to render text when used with an HTML Marker. |
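The substitution the SDK performs on these placeholders is plain string replacement; a minimal sketch in Python (the tiny SVG template below is invented for illustration):

```python
def fill_image_template(svg, color, secondary_color, scale=1, text=""):
    # Replace the four documented placeholders in an SVG template string.
    return (svg.replace("{color}", color)
               .replace("{secondaryColor}", secondary_color)
               .replace("{scale}", str(scale))
               .replace("{text}", text))

# Hypothetical minimal template, just to show the substitution.
template = '<svg><circle fill="{color}" stroke="{secondaryColor}"/><text>{text}</text></svg>'
```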
-The following example shows how to take an SVG template, and add it to the Azure Maps web SDK as a reusable icon template.
+The following example shows how to take an SVG template, and add it to the Azure Maps web SDK as a reusable icon template.
<br/>
This table lists all image templates currently available within the Azure Maps w
:::column-end::: :::row-end::: - **Polygon fill pattern templates** :::row:::
This table lists all image templates currently available within the Azure Maps w
**Preloaded image icons**
-The map preloads a set of icons into the maps image sprite using the `marker`, `pin`, and `pin-round` templates. These icon names and their color values are listed in the table below.
+The map preloads a set of icons into the maps image sprite using the `marker`, `pin`, and `pin-round` templates. These icon names and their color values are listed in the following table.
| icon name | color | secondaryColor | |--|-|-|
The map preloads a set of icons into the maps image sprite using the `marker`, `
| `pin-round-darkblue` | `#003963` | `#ffffff` | | `pin-round-red` | `#ef4c4c` | `#ffffff` | - ## Try it now tool With the following tool, you can render the different built-in image templates in various ways and customize the primary and secondary colors and scale.
Learn more about the classes and methods used in this article:
> [ImageSpriteManager](/javascript/api/azure-maps-control/atlas.imagespritemanager) > [!div class="nextstepaction"]
-> [atlas namespace](/javascript/api/azure-maps-control/atlas#functions
-)
+> [atlas namespace](/javascript/api/azure-maps-control/atlas#functions)
See the following articles for more code samples where image templates can be used:
See the following articles for more code samples where image templates can be us
> [Add a polygon layer](map-add-shape.md) > [!div class="nextstepaction"]
-> [Add HTML Makers](map-add-bubble-layer.md)
+> [Add HTML Markers](map-add-bubble-layer.md)
azure-monitor Agent Linux Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux-troubleshoot.md
description: Describe the symptoms, causes, and resolution for the most common i
Previously updated : 10/21/2021- Last updated : 04/25/2023+
azure-monitor Agent Windows Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows-troubleshoot.md
If the query returns results, you need to determine if a particular data type is
|8000 |HealthService |This event will specify if a workflow related to performance, event, or other data type collected is unable to forward to the service for ingestion to the workspace. | Event ID 2136 from source HealthService is written together with this event and can indicate the agent is unable to communicate with the service. Possible reasons might be misconfiguration of the proxy and authentication settings, network outage, or the network firewall or proxy doesn't allow TCP traffic from the computer to the service.| |10102 and 10103 |Health Service Modules |Workflow couldn't resolve the data source. |This issue can occur if the specified performance counter or instance doesn't exist on the computer or is incorrectly defined in the workspace data settings. If this is a user-specified [performance counter](data-sources-performance-counters.md#configure-performance-counters), verify the information specified follows the correct format and exists on the target computers. | |26002 |Health Service Modules |Workflow couldn't resolve the data source. |This issue can occur if the specified Windows event log doesn't exist on the computer. This error can be safely ignored if the computer isn't expected to have this event log registered. Otherwise, if this is a user-specified [event log](data-sources-windows-events.md#configure-windows-event-logs), verify the information specified is correct. |
+
+## Pinned Certificate Issues with Older Microsoft Monitoring Agents - Breaking Change
+
+*Root CA Change Overview*
+
+As of 30 June 2023, the Log Analytics back end will no longer accept connections from Microsoft Monitoring Agents (MMAs) that reference an outdated root certificate. These are MMA versions released before the Winter 2020 release (Log Analytics agent) and before SCOM 2019 UR3 (SCOM). Any version at Bundle: 10.20.18053 / Extension: 1.0.18053.0 or greater, and any version above SCOM 2019 UR3, isn't affected. Any older agent will break and stop uploading to Log Analytics.
+
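The affected-version check at the heart of this change can be sketched in a few lines (a minimal illustration using the fix version quoted above; `needs_upgrade` is a hypothetical helper, not part of any agent tooling):

```python
# Minimal sketch: is an installed MMA version older than the fixed bundle?
# Fix version taken from the text above (Bundle 10.20.18053).
FIX_VERSION = (10, 20, 18053, 0)

def parse_version(s: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in s.split("."))

def needs_upgrade(installed: str) -> bool:
    """True when the installed agent predates the fixed bundle version."""
    return parse_version(installed) < FIX_VERSION

print(needs_upgrade("10.20.18029.0"))  # pre-Winter-2020 agent -> True
print(needs_upgrade("10.20.18053.0"))  # fixed bundle -> False
```

The PowerShell scripts later in this section perform the same comparison with `[version]"10.20.18053.0"`.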
+*What exactly is changing?*
+
+As part of an ongoing security effort across various Azure services, Azure Log Analytics will be officially switching from the Baltimore CyberTrust CA Root to the [DigiCert Global G2 CA Root](https://www.digicert.com/kb/digicert-root-certificates.htm). This change will impact TLS communications with Log Analytics if the new DigiCert Global G2 CA Root certificate is missing from the OS, or the application is referencing the old Baltimore Root CA. **What this means is that Log Analytics will no longer accept connections from MMA that use this old root CA after it's retired.**
+
+*Solution products*
+
+You may have received the breaking change notification even if you haven't personally installed the Microsoft Monitoring Agent. That's because various Azure products use the Microsoft Monitoring Agent. If you're using one of these products, you may be affected because they rely on the Windows Log Analytics agent. For the products linked below, there may be specific instructions that require you to upgrade to the latest agent.
+
+- VM Insights
+- [System Center Operations Manager (SCOM)](/system-center/scom/deploy-upgrade-agents)
+- [System Center Service Manager (SCSM)](/system-center/scsm/upgrade-service-manager)
+- [Microsoft Defender for Server](/microsoft-365/security/defender-endpoint/update-agent-mma-windows)
+- [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/update-agent-mma-windows)
+- Azure Sentinel
+- [Azure Automation Agent-based Hybrid Worker](../../automation/automation-windows-hrw-install.md#update-log-analytics-agent-to-latest-version)
+- [Azure Automation Change Tracking and Inventory](../../automation/change-tracking/overview.md?tabs=python-2#update-log-analytics-agent-to-latest-version)
+- [Azure Automation Update Management](../../automation/update-management/overview.md#update-windows-log-analytics-agent-to-latest-version)
++
+*Identifying and Remediating Breaking Agents*
+
+For deployments with a limited number of agents, we highly recommend upgrading your agent per node via [these management instructions](https://aka.ms/MMA-Upgrade).
+
+For deployments with multiple nodes, we've written a script that detects any affected MMAs per subscription and then upgrades them to the latest version. The scripts need to be run sequentially, starting with UpdateMMA.ps1 and then UpgradeMMA.ps1. Depending on the machine, each script may take a while. PowerShell 7 or greater is required to avoid a timeout.
+
+*UpdateMMA.ps1*
+This script will go through the VMs in your subscription, check for installed MMAs, and then generate a .csv file of agents that need to be upgraded.
+
+*UpgradeMMA.ps1*
+This script will use the .csv file generated by UpdateMMA.ps1 to upgrade all breaking MMAs.
+
+Both of these scripts may take a while to complete.
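The two-phase flow (inventory to a .csv file, then upgrade from that file) can be sketched without any Azure dependencies (hypothetical agent data for illustration only; the real scripts below query the Azure CLI):

```python
import csv
import io

# Phase 1 (UpdateMMA.ps1's role): write an inventory of installed agents.
FIX_VERSION = (10, 20, 18053, 0)
agents = [
    {"Name": "vm-01", "Version": "10.20.18029.0"},  # older than the fix
    {"Name": "vm-02", "Version": "10.20.18064.0"},  # already fixed
]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Name", "Version"])
writer.writeheader()
writer.writerows(agents)

# Phase 2 (UpgradeMMA.ps1's role): read the inventory back and pick
# only the agents whose version predates the fixed bundle.
buf.seek(0)
to_upgrade = [row["Name"] for row in csv.DictReader(buf)
              if tuple(int(p) for p in row["Version"].split(".")) < FIX_VERSION]
print(to_upgrade)  # ['vm-01']
```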
+
+# [UpdateMMA](#tab/UpdateMMA)
+
+```powershell
+# UpdateMMA.ps1
+# This script is to be run per subscription, the customer has to set the az subscription before running this within the terminal scope.
+# This script uses parallel processing, modify the $parallelThrottleLimit parameter to either increase or decrease the number of parallel processes
+# PS> .\UpdateMMA.ps1 GetInventory
+# The above command will generate a csv file with the details of VMs and VMSS that require MMA upgrade.
+# The customer can modify the csv by adding/removing rows if needed
+# Update the MMA by running the script again and passing the csv file as parameter as shown below:
+# PS> .\UpdateMMA.ps1 Upgrade
+# If you don't want to check the inventory, then run the script with an additional -no-inventory-check
+# PS> .\UpdateMMA.ps1 GetInventory & .\UpdateMMA.ps1 Upgrade
++
+# This version of the script requires PowerShell version >= 7 in order to improve performance via ForEach-Object -Parallel
+# https://docs.microsoft.com/powershell/scripting/whats-new/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7.1
+if ($PSVersionTable.PSVersion.Major -lt 7)
+{
+ Write-Host "This script requires PowerShell version 7 or newer to run. Please see https://docs.microsoft.com/powershell/scripting/whats-new/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7.1."
+ exit 1
+}
+
+$parallelThrottleLimit = 16
+$mmaFixVersion = [version]"10.20.18053.0"
+
+function GetVmsWithMMAInstalled
+{
+ param(
+ $fileName
+ )
+
+ $vmList = az vm list --show-details --query "[?powerState=='VM running'].{ResourceGroup:resourceGroup, VmName:name}" | ConvertFrom-Json
+
+ if(!$vmList)
+ {
+        Write-Host "Cannot get the VM list. This script can only detect running VMs."
+ return
+ }
+
+ $vmsCount = $vmList.Length
+
+ $vmParallelThrottleLimit = $parallelThrottleLimit
+ if ($vmsCount -lt $vmParallelThrottleLimit)
+ {
+ $vmParallelThrottleLimit = $vmsCount
+ }
+
+ if($vmsCount -eq 1)
+ {
+ $vmGroups += ,($vmList[0])
+ }
+ else
+ {
+ # split the vm's into batches to do parallel processing
+ for ($i = 0; $i -lt $vmsCount; $i += $vmParallelThrottleLimit)
+ {
+ $vmGroups += , ($vmList[$i..($i + $vmParallelThrottleLimit - 1)])
+ }
+ }
+
+    Write-Host "Detected $vmsCount VMs running in this subscription."
+ $hash = [hashtable]::Synchronized(@{})
+ $hash.One = 1
+
+ $vmGroups | Foreach-Object -ThrottleLimit $parallelThrottleLimit -Parallel {
+ $len = $using:vmsCount
+ $hash = $using:hash
+ $_ | ForEach-Object {
+ $percent = 100 * $hash.One++ / $len
+ Write-Progress -Activity "Getting VM Inventory" -PercentComplete $percent
+ $vmName = $_.VmName
+ $resourceGroup = $_.ResourceGroup
+ $responseJson = az vm run-command invoke --command-id RunPowerShellScript --name $vmName -g $resourceGroup --scripts '@UpgradeMMA.ps1' --parameters "functionName=GetMMAVersion" --output json | ConvertFrom-Json
+ if($responseJson)
+ {
+ $mmaVersion = $responseJson.Value[0].message
+ if ($mmaVersion)
+ {
+ $extensionName = az vm extension list -g $resourceGroup --vm-name $vmName --query "[?name == 'MicrosoftMonitoringAgent'].name" | ConvertFrom-Json
+ if ($extensionName)
+ {
+ $installType = "Extension"
+ }
+ else
+ {
+ $installType = "Installer"
+ }
+ $csvObj = New-Object -TypeName PSObject -Property @{
+ 'Name' = $vmName
+ 'Resource_Group' = $resourceGroup
+ 'Resource_Type' = "VM"
+ 'Install_Type' = $installType
+ 'Version' = $mmaVersion
+ "Instance_Id" = ""
+ }
+ $csvObj | Export-Csv $using:fileName -Append -Force
+ }
+ }
+ }
+ }
+}
+
+function GetVmssWithMMAInstalled
+{
+ param(
+ $fileName
+ )
+
+ # get the vmss list which are successfully provisioned
+ $vmssList = az vmss list --query "[?provisioningState=='Succeeded'].{ResourceGroup:resourceGroup, VmssName:name}" | ConvertFrom-Json
+
+ $vmssCount = $vmssList.Length
+ Write-Host "Detected $vmssCount Vmss running in this subscription."
+ $hash = [hashtable]::Synchronized(@{})
+ $hash.One = 1
+
+ $vmssList | Foreach-Object -ThrottleLimit $parallelThrottleLimit -Parallel {
+        $len = $using:vmssCount
+ $hash = $using:hash
+ $percent = 100 * $hash.One++ / $len
+ Write-Progress -Activity "Getting VMSS Inventory" -PercentComplete $percent
+ $vmssName = $_.VmssName
+ $resourceGroup = $_.ResourceGroup
+
+ # get running vmss instance ids
+ $vmssInstanceIds = az vmss list-instances --resource-group $resourceGroup --name $vmssName --expand instanceView --query "[?instanceView.statuses[1].displayStatus=='VM running'].instanceId" | ConvertFrom-Json
+ if ($vmssInstanceIds.Length -gt 0)
+ {
+ $isMMAExtensionInstalled = az vmss extension list -g $resourceGroup --vmss-name $vmssName --query "[?name == 'MicrosoftMonitoringAgent'].name" | ConvertFrom-Json
+ if ($isMMAExtensionInstalled )
+ {
+ # check an instance in vmss, if it needs an MMA upgrade. Since the extension is installed at VMSS level, checking for bad version in 1 instance should be fine.
+ $responseJson = az vmss run-command invoke --command-id RunPowerShellScript --name $vmssName -g $resourceGroup --instance-id $vmssInstanceIds[0] --scripts '@UpgradeMMA.ps1' --parameters "functionName=GetMMAVersion" --output json | ConvertFrom-Json
+ $mmaVersion = $responseJson.Value[0].message
+ if ($mmaVersion)
+ {
+ $csvObj = New-Object -TypeName PSObject -Property @{
+ 'Name' = $vmssName
+ 'Resource_Group' = $resourceGroup
+ 'Resource_Type' = "VMSS"
+ 'Install_Type' = "Extension"
+ 'Version' = $mmaVersion
+ "Instance_Id" = ""
+ }
+ $csvObj | Export-Csv $using:fileName -Append -Force
+ }
+ }
+ else
+ {
+ foreach ($instanceId in $vmssInstanceIds)
+ {
+ $responseJson = az vmss run-command invoke --command-id RunPowerShellScript --name $vmssName -g $resourceGroup --instance-id $instanceId --scripts '@UpgradeMMA.ps1' --parameters "functionName=GetMMAVersion" --output json | ConvertFrom-Json
+ $mmaVersion = $responseJson.Value[0].message
+ if ($mmaVersion)
+ {
+ $csvObj = New-Object -TypeName PSObject -Property @{
+ 'Name' = $vmssName
+ 'Resource_Group' = $resourceGroup
+ 'Resource_Type' = "VMSS"
+ 'Install_Type' = "Installer"
+ 'Version' = $mmaVersion
+ "Instance_Id" = $instanceId
+ }
+ $csvObj | Export-Csv $using:fileName -Append -Force
+ }
+ }
+ }
+ }
+ }
+}
+
+function Upgrade
+{
+ param(
+ $fileName = "MMAInventory.csv"
+ )
+ Import-Csv $fileName | ForEach-Object -ThrottleLimit $parallelThrottleLimit -Parallel {
+ $mmaVersion = [version]$_.Version
+ if($mmaVersion -lt $using:mmaFixVersion)
+ {
+ if ($_.Install_Type -eq "Extension")
+ {
+ if ($_.Resource_Type -eq "VMSS")
+ {
+ # if the extension is installed with a custom name, provide the name using the flag: --extension-instance-name <extension name>
+ az vmss extension set --name MicrosoftMonitoringAgent --publisher Microsoft.EnterpriseCloud.Monitoring --force-update --vmss-name $_.Name --resource-group $_.Resource_Group --no-wait --output none
+ }
+ else
+ {
+ # if the extension is installed with a custom name, provide the name using the flag: --extension-instance-name <extension name>
+ az vm extension set --name MicrosoftMonitoringAgent --publisher Microsoft.EnterpriseCloud.Monitoring --force-update --vm-name $_.Name --resource-group $_.Resource_Group --no-wait --output none
+ }
+ }
+ else {
+ if ($_.Resource_Type -eq "VMSS")
+ {
+ az vmss run-command invoke --command-id RunPowerShellScript --name $_.Name -g $_.Resource_Group --instance-id $_.Instance_Id --scripts '@UpgradeMMA.ps1' --parameters "functionName=UpgradeMMA" --output none
+ }
+ else
+ {
+ az vm run-command invoke --command-id RunPowerShellScript --name $_.Name -g $_.Resource_Group --scripts '@UpgradeMMA.ps1' --parameters "functionName=UpgradeMMA" --output none
+ }
+ }
+ }
+ }
+}
+
+function GetInventory
+{
+ param(
+ $fileName = "MMAInventory.csv"
+ )
+
+ # create a new file
+ New-Item -Name $fileName -ItemType File -Force
+ GetVmsWithMMAInstalled $fileName
+ GetVmssWithMMAInstalled $fileName
+}
+
+switch ($args.Count)
+{
+ 0 {
+        Write-Host "No arguments were provided."
+ Write-Host "To get the Inventory: Run the script as: PS> .\UpdateMMA.ps1 GetInventory"
+ Write-Host "To update MMA from Inventory: Run the script as: PS> .\UpdateMMA.ps1 Upgrade"
+        Write-Host "To do both steps together: PS> .\UpdateMMA.ps1 GetInventory & .\UpdateMMA.ps1 Upgrade"
+ }
+ 1 {
+ $funcname = $args[0]
+ Invoke-Expression "& $funcname"
+ }
+ 2 {
+ $funcname = $args[0]
+ $funcargs = $args[1]
+ Invoke-Expression "& $funcname $funcargs"
+ }
+}
+```
+
+# [UpgradeMMA](#tab/UpgradeMMA)
+
+```powershell
+#UpgradeMMA.ps1
+
+param(
+ $functionName
+)
+
+$mmaLatestVersion32bitDownloadUrl = "https://go.microsoft.com/fwlink/?LinkId=828604"
+$mmaLatestVersion64bitDownloadUrl = "https://go.microsoft.com/fwlink/?LinkId=828603"
+$mmaName = 'Microsoft Monitoring Agent'
+$mmaFixVersion = [version]"10.20.18053.0"
+$regPath = 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\*'
+
+function GetMMAVersion
+{
+ $mmaVersion = (Get-ItemProperty $regPath | Where-Object { $_.DisplayName -eq $mmaName }).DisplayVersion
+ return $mmaVersion
+}
+
+function MMAUpgradeRequirementCheck
+{
+ $mmaVersion = [version](GetMMAVersion)
+ if ($mmaVersion -and ($mmaVersion -lt $mmaFixVersion))
+ {
+ return $TRUE
+ }
+ return $FALSE
+}
+
+function GetMMAUpgradeUrl
+{
+ $osArchitecture = (Get-WmiObject Win32_OperatingSystem).OSArchitecture
+ if ($osArchitecture -eq "64-bit")
+ {
+ $newMMADownloadUrl = $mmaLatestVersion64bitDownloadUrl
+ }
+ else
+ {
+ $newMMADownloadUrl = $mmaLatestVersion32bitDownloadUrl
+ }
+
+ return $newMMADownloadUrl
+}
+
+function UpgradeMMA
+{
+ $mmaUpgradeRequired = MMAUpgradeRequirementCheck
+ if ($mmaUpgradeRequired)
+ {
+ $mmaDownloadUrl = GetMMAUpgradeUrl
+ if ($mmaDownloadUrl)
+ {
+ $downloadedFile = "MMASetup.exe"
+ # Download mma exe files
+ Invoke-WebRequest "$mmaDownloadUrl" -OutFile $downloadedFile
+ if(Test-Path $PSScriptRoot\MMA)
+ {
+ Remove-Item $PSScriptRoot\MMA -Recurse -Force
+ }
+ # Extract MMA exe file
+ Start-Process -Wait -NoNewWindow -FilePath "$PSScriptRoot\$downloadedFile" -ArgumentList "/c /t:$PSScriptRoot\MMA"
+ # Run Setup.exe
+ Start-Process -Wait -NoNewWindow -FilePath "MMA\Setup.exe" -ArgumentList "/qn /l*v AgentUpgrade.log AcceptEndUserLicenseAgreement=1"
+ }
+ }
+}
+
+if ($functionName -eq "GetMMAVersion")
+{
+ GetMMAVersion
+}
+elseif ($functionName -eq "UpgradeMMA" )
+{
+ UpgradeMMA
+}
+else
+{
+ return "Wrong parameters"
+}
+```
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
Azure Monitor Agent supports connecting by using direct proxies, Log Analytics g
## Virtual network service tags
-Azure Monitor Agent supports [Azure virtual network service tags](../../virtual-network/service-tags-overview.md). Both *AzureMonitor* and *AzureResourceManager* tags are required.
+Azure Monitor Agent supports [Azure virtual network service tags](../../virtual-network/service-tags-overview.md). Both *AzureMonitor* and *AzureResourceManager* tags are required.
+
+Azure virtual network service tags can be used to define network access controls on [network security groups](../../virtual-network/network-security-groups-overview.md#security-rules), [Azure Firewall](../../firewall/service-tags.md), and user-defined routes. Use service tags in place of specific IP addresses when you create security rules and routes. For scenarios where Azure virtual network service tags can't be used, the firewall requirements are given below.
## Firewall requirements
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
For more information, see [Application monitoring for Azure App Service and Java
For more information, see [Monitoring Azure Functions with Azure Monitor Application Insights](./monitor-functions.md#distributed-tracing-for-java-applications-preview).
+## Azure Spring Apps
+
+For more information, see [Use Application Insights Java In-Process Agent in Azure Spring Apps](../../spring-apps/how-to-application-insights.md).
+ ## Containers ### Docker entry point
azure-monitor Opencensus Python Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md
You can find a Flask sample application that tracks requests in the [Azure Monit
## Track FastAPI applications
-OpenCensus doesn't have an extension for FastAPI. To write your own FastAPI middleware:
- 1. The following dependencies are required: - [fastapi](https://pypi.org/project/fastapi/) - [uvicorn](https://pypi.org/project/uvicorn/) In a production setting, we recommend that you deploy [uvicorn with gunicorn](https://www.uvicorn.org/deployment/#gunicorn).
-1. Add [FastAPI middleware](https://fastapi.tiangolo.com/tutorial/middleware/). Make sure that you set the span kind server: `span.span_kind = SpanKind.SERVER`.
-
-1. Run your application. Calls made to your FastAPI application should be automatically tracked. Telemetry should be logged directly to Azure Monitor.
-
- ```python
- # Opencensus imports
- from opencensus.ext.azure.trace_exporter import AzureExporter
- from opencensus.trace.samplers import ProbabilitySampler
- from opencensus.trace.tracer import Tracer
- from opencensus.trace.span import SpanKind
- from opencensus.trace.attributes_helper import COMMON_ATTRIBUTES
- # FastAPI imports
- from fastapi import FastAPI, Request
- # uvicorn
- import uvicorn
-
- app = FastAPI()
-
- HTTP_URL = COMMON_ATTRIBUTES['HTTP_URL']
- HTTP_STATUS_CODE = COMMON_ATTRIBUTES['HTTP_STATUS_CODE']
-
- exporter=AzureExporter(connection_string='<your-appinsights-connection-string-here>')
- sampler=ProbabilitySampler(1.0)
-
- # fastapi middleware for opencensus
- @app.middleware("http")
- async def middlewareOpencensus(request: Request, call_next):
- tracer = Tracer(exporter=exporter, sampler=sampler)
- with tracer.span("main") as span:
- span.span_kind = SpanKind.SERVER
+2. Download and install `opencensus-ext-fastapi` from [PyPI](https://pypi.org/project/opencensus-ext-fastapi/).
- response = await call_next(request)
+ `pip install opencensus-ext-fastapi`
- tracer.add_attribute_to_current_span(
- attribute_key=HTTP_STATUS_CODE,
- attribute_value=response.status_code)
- tracer.add_attribute_to_current_span(
- attribute_key=HTTP_URL,
- attribute_value=str(request.url))
+3. Instrument your application with the `fastapi` middleware.
- return response
+ ```python
+ from fastapi import FastAPI
+ from opencensus.ext.fastapi.fastapi_middleware import FastAPIMiddleware
- @app.get("/")
- async def root():
- return "Hello World!"
+ app = FastAPI(__name__)
+ app.add_middleware(FastAPIMiddleware)
- if __name__ == '__main__':
- uvicorn.run("example:app", host="127.0.0.1", port=5000, log_level="info")
+ @app.get('/')
+ def hello():
+ return 'Hello World!'
```
+4. Run your application. Calls made to your FastAPI application should be automatically tracked. Telemetry should be logged directly to Azure Monitor.
+ ## Next steps * [Application Map](./app-map.md)
azure-monitor Autoscale Predictive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-predictive.md
# Use predictive autoscale to scale out before load demands in virtual machine scale sets
-*Predictive autoscale* uses machine learning to help manage and scale Azure Virtual Machine Scale Sets with cyclical workload patterns. It forecasts the overall CPU load to your virtual machine scale set, based on your historical CPU usage patterns. It predicts the overall CPU load by observing and learning from historical usage. This process ensures that scale-out occurs in time to meet the demand.
+Predictive autoscale uses machine learning to help manage and scale Azure Virtual Machine Scale Sets with cyclical workload patterns. It forecasts the overall CPU load to your virtual machine scale set, based on your historical CPU usage patterns. It predicts the overall CPU load by observing and learning from historical usage. This process ensures that scale-out occurs in time to meet the demand.
Predictive autoscale needs a minimum of 7 days of history to provide predictions. The most accurate results come from 15 days of historical data.
Predictive autoscale adheres to the scaling boundaries you've set for your virtu
:::image type="content" source="media/autoscale-predictive/predictive-charts-6.png" alt-text="Screenshot that shows three charts for predictive autoscale." lightbox="media/autoscale-predictive/predictive-charts-6.png":::
- - The top chart shows an overlaid comparison of actual versus predicted total CPU percentage. The time span of the graph shown is from the last 24 hours to the next 24 hours.
- - The middle chart shows the number of instances running at specific times over the last 24 hours.
- - The bottom chart shows the current Average CPU utilization over the last 24 hours.
+ - The top chart shows an overlaid comparison of actual versus predicted total CPU percentage. The time span of the graph shown is from the last 7 days to the next 24 hours.
+ - The middle chart shows the maximum number of instances running over the last 7 days.
+ - The bottom chart shows the current Average CPU utilization over the last 7 days.
## Enable using an Azure Resource Manager template
azure-monitor Autoscale Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-using-powershell.md
+
+ Title: Configure autoscale using PowerShell
+description: Configure autoscale for a Virtual Machine Scale Set using PowerShell
+++ Last updated : 01/05/2023+++
+# Customer intent: As a user or dev ops administrator, I want to use powershell to set up autoscale so I can scale my VMSS.
+++
+# Configure autoscale with PowerShell
+
+Autoscale settings help ensure that you have the right amount of resources running to handle the fluctuating load of your application. You can configure autoscale using the Azure portal, Azure CLI, PowerShell, or ARM and Bicep templates.
+
+This article shows you how to configure autoscale for a Virtual Machine Scale Set with PowerShell, using the following steps:
+
+- Create a scale set that you can autoscale
+- Create rules to scale in and scale out
+- Create a profile that uses your rules
+- Apply the autoscale settings
+- Update your autoscale settings with notifications
+
+## Prerequisites
+
+To configure autoscale using PowerShell, you need an Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free).
+
+## Set up your environment
+
+```azurepowershell
+#Set the subscription Id, VMSS name, and resource group name
+$subscriptionId = (Get-AzContext).Subscription.Id
+$resourceGroupName="rg-powershell-autoscale"
+$vmssName="vmss-001"
+```
+
+## Create a Virtual Machine Scale Set
+
+Create a scale set using the following cmdlets. Set the `$resourceGroupName` and `$vmssName` variables to suit your environment.
+
+```azurepowershell
+# create a new resource group
+New-AzResourceGroup -ResourceGroupName $resourceGroupName -Location "EastUS"
+
+# Create login credentials for the VMSS
+$vmPassword = ConvertTo-SecureString "ChangeThisPassword1" -AsPlainText -Force
+$vmCred = New-Object System.Management.Automation.PSCredential('azureuser', $vmPassword)
++
+New-AzVmss `
+ -ResourceGroupName $resourceGroupName `
+ -Location "EastUS" `
+ -VMScaleSetName $vmssName `
+ -Credential $vmCred `
+ -VirtualNetworkName "myVnet" `
+ -SubnetName "mySubnet" `
+ -PublicIpAddressName "myPublicIPAddress" `
+ -LoadBalancerName "myLoadBalancer" `
+ -OrchestrationMode "Flexible"
+
+```
+
+## Create autoscale settings
+
+To create autoscale settings using PowerShell, follow this sequence:
+
+1. Create rules using `New-AzAutoscaleScaleRuleObject`
+1. Create a profile using `New-AzAutoscaleProfileObject`
+1. Create the autoscale settings using `New-AzAutoscaleSetting`
+1. Update the settings using `Update-AzAutoscaleSetting`
+
+### Create rules
+
+Create scale-in and scale-out rules, then associate them with a profile.
+Rules are created using the [`New-AzAutoscaleScaleRuleObject`](https://learn.microsoft.com/powershell/module/az.monitor/new-azautoscalescaleruleobject) cmdlet.
+
+The following PowerShell script creates two rules.
+
+- Scale out when Percentage CPU exceeds 70%
+- Scale in when Percentage CPU is less than 30%
+
+```azurepowershell
+
+$rule1=New-AzAutoscaleScaleRuleObject `
+ -MetricTriggerMetricName "Percentage CPU" `
+ -MetricTriggerMetricResourceUri "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssName" `
+ -MetricTriggerTimeGrain ([System.TimeSpan]::New(0,1,0)) `
+ -MetricTriggerStatistic "Average" `
+ -MetricTriggerTimeWindow ([System.TimeSpan]::New(0,5,0)) `
+ -MetricTriggerTimeAggregation "Average" `
+ -MetricTriggerOperator "GreaterThan" `
+ -MetricTriggerThreshold 70 `
+ -MetricTriggerDividePerInstance $false `
+ -ScaleActionDirection "Increase" `
+ -ScaleActionType "ChangeCount" `
+ -ScaleActionValue 1 `
+ -ScaleActionCooldown ([System.TimeSpan]::New(0,5,0))
++
+$rule2=New-AzAutoscaleScaleRuleObject `
+ -MetricTriggerMetricName "Percentage CPU" `
+ -MetricTriggerMetricResourceUri "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssName" `
+ -MetricTriggerTimeGrain ([System.TimeSpan]::New(0,1,0)) `
+ -MetricTriggerStatistic "Average" `
+ -MetricTriggerTimeWindow ([System.TimeSpan]::New(0,5,0)) `
+ -MetricTriggerTimeAggregation "Average" `
+ -MetricTriggerOperator "LessThan" `
+ -MetricTriggerThreshold 30 `
+ -MetricTriggerDividePerInstance $false `
+ -ScaleActionDirection "Decrease" `
+ -ScaleActionType "ChangeCount" `
+ -ScaleActionValue 1 `
+ -ScaleActionCooldown ([System.TimeSpan]::New(0,5,0))
+```
+The table below describes the parameters used in the `New-AzAutoscaleScaleRuleObject` cmdlet.
+
+|Parameter| Description|
+|||
+|`MetricTriggerMetricName` |Sets the autoscale trigger metric
+|`MetricTriggerMetricResourceUri`| Specifies the resource that the `MetricTriggerMetricName` metric belongs to. `MetricTriggerMetricResourceUri` can be any resource and not just the resource that's being scaled. For example, you can scale your Virtual Machine Scale Sets based on metrics created by a load balancer, database, or the scale set itself. The `MetricTriggerMetricName` must exist for the specified `MetricTriggerMetricResourceUri`.
+|`MetricTriggerTimeGrain`|The sampling frequency of the metric that the rule monitors. `MetricTriggerTimeGrain` must be one of the predefined values for the specified metric and must be between 12 hours and 1 minute. For example, `MetricTriggerTimeGrain` = *PT1M* means that the metrics are sampled every 1 minute and aggregated using the aggregation method specified in `MetricTriggerStatistic`.
+|`MetricTriggerTimeAggregation` | The aggregation method within the timeGrain period. For example, statistic = "Average" and timeGrain = "PT1M" means that the metrics are aggregated every 1 minute by taking the average.
+|`MetricTriggerStatistic` |The aggregation method used to aggregate the sampled metrics. For example, TimeAggregation = "Average" aggregates the sampled metrics by taking the average.
+|`MetricTriggerTimeWindow` | The amount of time that the autoscale engine looks back to aggregate the metric. This value must be greater than the delay in metric collection, which varies by resource. It must be between 5 minutes and 12 hours. For example, 10 minutes means that every time autoscale runs, it queries metrics for the past 10 minutes. This feature allows your metrics to stabilize and avoids reacting to transient spikes.
+|`MetricTriggerThreshold`|Defines the value of the metric that triggers a scale event.
+|`MetricTriggerOperator` |Specifies the logical comparison operator to use when evaluating the metric value.
+|`MetricTriggerDividePerInstance`| When set to `true` divides the trigger metric by the total number of instances. For example, If message count is 300 and there are 5 instances running, the calculated metric value is 60 messages per instance. This property isn't applicable for all metrics.
+| `ScaleActionDirection`| Specify scaling in or out. Valid values are `Increase` and `Decrease`.
+|`ScaleActionType` |Scale by a specific number of instances, scale to a specific instance count, or scale by percentage of the current instance count. Valid values include `ChangeCount`, `ExactCount`, and `PercentChangeCount`.
+|`ScaleActionCooldown`| The minimum amount of time to wait between scale operations. This allows the metrics to stabilize and avoids [flapping](./autoscale-flapping.md). For example, if `ScaleActionCooldown` is 10 minutes and a scale operation just occurred, autoscale won't attempt to scale again for 10 minutes.
++
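To make the interplay of these parameters concrete, here's a minimal sketch of one rule evaluation (illustrative only; `evaluate_rule` is a hypothetical function, not how the autoscale engine is actually implemented):

```python
from statistics import mean

def evaluate_rule(window_samples, operator, threshold, cooldown_elapsed):
    """Sketch of one autoscale rule evaluation.

    window_samples: metric values gathered over MetricTriggerTimeWindow,
    already sampled at MetricTriggerTimeGrain.
    """
    # MetricTriggerTimeAggregation = "Average"
    value = mean(window_samples)
    if operator == "GreaterThan":
        triggered = value > threshold     # compare against MetricTriggerThreshold
    elif operator == "LessThan":
        triggered = value < threshold
    else:
        raise ValueError(f"unsupported operator: {operator}")
    # ScaleActionCooldown: suppress the action until the cooldown has elapsed
    return triggered and cooldown_elapsed

# Average CPU over the window is 75%, above the 70% threshold -> scale out
print(evaluate_rule([70, 72, 80, 78], "GreaterThan", 70, True))   # True
# Same metrics, but a scale action just happened -> wait out the cooldown
print(evaluate_rule([70, 72, 80, 78], "GreaterThan", 70, False))  # False
```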
+### Create a default autoscale profile and associate the rules
+
+After defining the scale rules, create a profile. The profile specifies the default, upper, and lower instance count limits and the times that the associated rules can be applied. Use the [`New-AzAutoscaleProfileObject`](https://learn.microsoft.com/powershell/module/az.monitor/new-azautoscaleprofileobject) cmdlet to create a new autoscale profile. Because this is a default profile, it doesn't have any schedule parameters. The default profile is active when no other profile is active.
+
+```azurepowershell
+$defaultProfile=New-AzAutoscaleProfileObject `
+ -Name "default" `
+ -CapacityDefault 1 `
+ -CapacityMaximum 10 `
+ -CapacityMinimum 1 `
+ -Rule $rule1, $rule2
+```
+
+The table below describes the parameters used in the `New-AzAutoscaleProfileObject` cmdlet.
+
+|Parameter|Description|
+|||
+|`CapacityDefault`| The number of instances that are used if metrics aren't available for evaluation. The default is only used if the current instance count is lower than the default.
+| `CapacityMaximum` |The maximum number of instances for the resource. The maximum number of instances is further limited by the number of cores that are available in the subscription.
+| `CapacityMinimum` |The minimum number of instances for the resource.
+|`FixedDateEnd`| The end time for the profile in ISO 8601 format.
+|`FixedDateStart` |The start time for the profile in ISO 8601 format.
+| `Rule` |A collection of rules that provide the triggers and parameters for the scaling action when this profile is active. A maximum of 10 comma-separated rules can be specified.
+|`RecurrenceFrequency` | How often the scheduled profile takes effect. This value must be `week`.
+|`ScheduleDay`| A collection of days that the profile takes effect on when specifying a recurring schedule. Possible values are Sunday through Saturday. For more information on recurring schedules, see [Add a recurring profile using PowerShell](./autoscale-multiprofile.md?tabs=powershell#add-a-recurring-profile-using-powershell)
+|`ScheduleHour`| A collection of hours that the profile takes effect on. Values supported are 0 to 23.
+|`ScheduleMinute`| A collection of minutes at which the profile takes effect.
+|`ScheduleTimeZone` |The timezone for the hours of the profile.
+
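The precedence between a fixed-date profile and the default profile can be illustrated with a small sketch (`active_profile` is a hypothetical helper; the real engine also evaluates recurring schedules):

```python
from datetime import datetime

def active_profile(profiles, default_name, now):
    """Return the name of the profile in effect at `now`.

    A profile whose FixedDateStart/FixedDateEnd window contains `now`
    wins; otherwise the default profile applies.
    """
    for p in profiles:
        if p["start"] <= now < p["end"]:
            return p["name"]
    return default_name

# Window matching the high-demand-day example later in this article
high_demand = {
    "name": "High-demand-day",
    "start": datetime(2023, 12, 31, 13, 0),
    "end": datetime(2023, 12, 31, 14, 0),
}

print(active_profile([high_demand], "default", datetime(2023, 12, 31, 13, 30)))
print(active_profile([high_demand], "default", datetime(2023, 6, 1, 9, 0)))
```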
+### Apply the autoscale settings
+
+After defining the rules and profile, apply the autoscale settings using [`New-AzAutoscaleSetting`](https://learn.microsoft.com/powershell/module/az.monitor/new-azautoscalesetting). To update an existing autoscale setting, use [`Update-AzAutoscaleSetting`](https://learn.microsoft.com/powershell/module/az.monitor/add-azautoscalesetting).
+
+```azurepowershell
+New-AzAutoscaleSetting `
+ -Name vmss-autoscalesetting1 `
+ -ResourceGroupName $resourceGroupName `
+ -Location eastus `
+ -Profile $defaultProfile `
+ -Enabled `
+ -PropertiesName "vmss-autoscalesetting1" `
+ -TargetResourceUri "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssName"
+```
+
+### Add notifications to your autoscale settings
+
+Add notifications to your scale setting to trigger a webhook or send email notifications when a scale event occurs.
+For more information on webhook notifications, see [`New-AzAutoscaleWebhookNotificationObject`](https://learn.microsoft.com/powershell/module/az.monitor/new-azautoscalewebhooknotificationobject).
+
+Set a webhook using the following cmdlet:
+```azurepowershell
+
+ $webhook1=New-AzAutoscaleWebhookNotificationObject -Property @{} -ServiceUri "http://contoso.com/webhook1"
+```
+
+Configure the notification using the webhook and set up email notification using the [`New-AzAutoscaleNotificationObject`](https://learn.microsoft.com/powershell/module/az.monitor/new-azautoscalenotificationobject) cmdlet:
+
+```azurepowershell
+
+ $notification1=New-AzAutoscaleNotificationObject `
+ -EmailCustomEmail "jason@contoso.com" `
+ -EmailSendToSubscriptionAdministrator $true `
+ -EmailSendToSubscriptionCoAdministrator $true `
+ -Webhook $webhook1
+```
+
+Update your autoscale settings to apply the notification:
+
+```azurepowershell
+
+Update-AzAutoscaleSetting `
+ -Name vmss-autoscalesetting1 `
+ -ResourceGroupName $resourceGroupName `
+ -Profile $defaultProfile `
+ -Notification $notification1 `
+ -TargetResourceUri "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssName"
+
+```
+
+## Review your autoscale settings
+
+To review your autoscale settings, load the settings into a variable by using `Get-AzAutoscaleSetting`, and then output the variable as follows:
+
+```azurepowershell
+ $autoscaleSetting=Get-AzAutoscaleSetting -ResourceGroupName $resourceGroupName -Name vmss-autoscalesetting1
+ $autoscaleSetting | Select-Object -Property *
+```
+
+Get your autoscale history by using `Get-AzAutoscaleHistory`:
+```azurepowershell
+Get-AzAutoscaleHistory -ResourceId "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssName"
+```
+
+## Scheduled and recurring profiles
+
+### Add a scheduled profile for a special event
+
+Set up autoscale profiles to scale differently for specific events. For example, for a day when demand will be higher than usual, create a profile with increased maximum and minimum instance limits.
+
+The following example uses the same rules as the default profile defined above, but sets new instance limits for a specific date. You can also configure different rules to be used with the new profile.
+
+```azurepowershell
+$highDemandDay=New-AzAutoscaleProfileObject `
+ -Name "High-demand-day" `
+ -CapacityDefault 7 `
+ -CapacityMaximum 30 `
+ -CapacityMinimum 5 `
+ -FixedDateEnd ([System.DateTime]::Parse("2023-12-31T14:00:00Z")) `
+ -FixedDateStart ([System.DateTime]::Parse("2023-12-31T13:00:00Z")) `
+ -FixedDateTimeZone "UTC" `
+ -Rule $rule1, $rule2
+
+Update-AzAutoscaleSetting `
+ -Name vmss-autoscalesetting1 `
+ -ResourceGroupName $resourceGroupName `
+ -Profile $defaultProfile, $highDemandDay `
+ -Notification $notification1 `
+ -TargetResourceUri "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssName"
+
+```
+
+### Add a recurring scheduled profile
+
+Recurring profiles let you schedule a scaling profile that repeats each week. For example, scale to a single instance on the weekend from Friday night to Monday morning.
+
+While scheduled profiles have a start and end date, recurring profiles don't have an end time. A profile remains active until the next profile's start time. Therefore, when you create a recurring profile, you must also create a recurring default profile that starts when you want the previous recurring profile to finish.
+
+For example, to configure a weekend profile that starts on Friday night and ends on Monday morning, create a profile that starts on Friday night, and then create a recurring profile with your default settings that starts on Monday morning.
+
+The following script creates a weekend profile and an additional default profile to end the weekend profile.
+```azurepowershell
+$fridayProfile=New-AzAutoscaleProfileObject `
+ -Name "Weekend" `
+ -CapacityDefault 1 `
+ -CapacityMaximum 1 `
+ -CapacityMinimum 1 `
+ -RecurrenceFrequency week `
+ -ScheduleDay "Friday" `
+ -ScheduleHour 22 `
+ -ScheduleMinute 00 `
+ -ScheduleTimeZone "Pacific Standard Time" `
+ -Rule $rule1, $rule2
+
+$defaultRecurringProfile=New-AzAutoscaleProfileObject `
+ -Name "default recurring profile" `
+ -CapacityDefault 2 `
+ -CapacityMaximum 10 `
+ -CapacityMinimum 2 `
+ -RecurrenceFrequency week `
+ -ScheduleDay "Monday" `
+ -ScheduleHour 00 `
+ -ScheduleMinute 00 `
+ -ScheduleTimeZone "Pacific Standard Time" `
+ -Rule $rule1, $rule2
+
+New-AzAutoscaleSetting `
+ -Location eastus `
+ -Name vmss-autoscalesetting1 `
+ -ResourceGroupName $resourceGroupName `
+ -Profile $defaultRecurringProfile, $fridayProfile `
+ -Notification $notification1 `
+ -TargetResourceUri "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssName"
+
+```
+
+For more information on scheduled profiles, see [Autoscale with multiple profiles](./autoscale-multiprofile.md).
+
+## Other autoscale commands
+
+For a complete list of PowerShell cmdlets for autoscale, see the [PowerShell Module Browser](https://learn.microsoft.com/powershell/module/?term=azautoscale).
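+
+For example, to remove only the autoscale setting created in this tutorial while keeping the scale set, you can use `Remove-AzAutoscaleSetting`. This is a minimal sketch; the parameter names are assumed to follow the same pattern as the other autoscale cmdlets in this article:
+
+```azurepowershell
+# Deletes only the autoscale setting; the scale set itself is unaffected.
+Remove-AzAutoscaleSetting `
+    -ResourceGroupName $resourceGroupName `
+    -Name "vmss-autoscalesetting1"
+```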
+
+## Clean up resources
+
+To clean up the resources you created in this tutorial, delete the resource group that you created.
+The following cmdlet deletes the resource group and all of its resources.
+```azurepowershell
+
+Remove-AzResourceGroup -Name $resourceGroupName
+
+```
+
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
This article describes [Cost optimization](/azure/architecture/framework/cost/)
| Recommendation | Benefit | |:|:|
-| Configure VM agents to collect only important events. | Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. See [Monitor virtual machines with Azure Monitor: Workloads](vm/monitor-virtual-machine-data-collection.md#controlling-costs) for guidance on data to collect and strategies for using [XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to limit it.|
+| Configure VM agents to collect only important events. | Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. See [Monitor virtual machines with Azure Monitor: Workloads](vm/monitor-virtual-machine-data-collection.md#control-costs) for guidance on data to collect and strategies for using [XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to limit it.|
| Ensure that VMs aren't sending duplicate data. | Any configuration that uses multiple agents on a single machine or where you multi-home agents to send data to multiple workspaces may incur charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to make sure you aren't collecting duplicate data. If you're migrating between agents, continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data. | | Use transformations to filter unnecessary data from collected events. | [Transformations](essentials/data-collection-transformations.md) can be used in data collection rules to remove unnecessary data or even entire columns from events collected from the virtual machine which can significantly reduce the cost for their ingestion and retention. |
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
The following table summarizes the access modes:
| Who is each model intended for? | Central administration.<br>Administrators who need to configure data collection and users who need access to a wide variety of resources. Also currently required for users who need to access logs for resources outside of Azure. | Application teams.<br>Administrators of Azure resources being monitored. Allows them to focus on their resource without filtering. | | What does a user require to view logs? | Permissions to the workspace.<br>See "Workspace permissions" in [Manage access using workspace permissions](./manage-access.md#azure-rbac). | Read access to the resource.<br>See "Resource permissions" in [Manage access using Azure permissions](./manage-access.md#azure-rbac). Permissions can be inherited from the resource group or subscription or directly assigned to the resource. Permission to the logs for the resource will be automatically assigned. The user doesn't require access to the workspace.| | What is the scope of permissions? | Workspace.<br>Users with access to the workspace can query all logs in the workspace from tables they have permissions to. See [Set table-level read access](./manage-access.md#set-table-level-read-access). | Azure resource.<br>Users can query logs for specific resources, resource groups, or subscriptions they have access to in any workspace, but they can't query logs for other resources. |
-| How can a user access logs? | On the **Azure Monitor** menu, select **Logs**.<br><br>Select **Logs** from **Log Analytics workspaces**.<br><br>From Azure Monitor [workbooks](../best-practices-analysis.md#azure-workbooks). | Select **Logs** on the menu for the Azure resource. Users will have access to data for that resource.<br><br>Select **Logs** on the **Azure Monitor** menu. Users will have access to data for all resources they have access to.<br><br>Select **Logs** from **Log Analytics workspaces**. Users will have access to data for all resources they have access to.<br><br>From Azure Monitor [workbooks](../best-practices-analysis.md#azure-workbooks). |
+| How can a user access logs? | On the **Azure Monitor** menu, select **Logs**.<br><br>Select **Logs** from **Log Analytics workspaces**.<br><br>From Azure Monitor [workbooks](../best-practices-analysis.md#azure-workbooks). | Select **Logs** on the menu for the Azure resource. Users will have access to data for that resource.<br><br>Select **Logs** on the **Azure Monitor** menu. Users will have access to data for all resources they have access to.<br><br>Select **Logs** from **Log Analytics workspaces**, if users have access to the workspace.<br><br>From Azure Monitor [workbooks](../best-practices-analysis.md#azure-workbooks). |
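+
+In resource context, granting log access is an ordinary Azure role assignment on the resource; in workspace context, it's a role assignment on the workspace. The following is a hedged sketch using `New-AzRoleAssignment`; the user and the workspace resource ID are placeholders:
+
+```azurepowershell
+# Placeholder values. "Log Analytics Reader" grants read access to data in the workspace.
+$workspaceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
+New-AzRoleAssignment `
+    -SignInName "user@contoso.com" `
+    -RoleDefinitionName "Log Analytics Reader" `
+    -Scope $workspaceId
+```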
## Access control mode
azure-monitor Profiler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-overview.md
Title: Profile production apps in Azure with Application Insights Profiler
-description: Identify the hot path in your web server code with a low-footprint profiler
+description: Identify the hot path in your web server code with a low-footprint profiler.
ms.contributor: charles.weininger Last updated 07/15/2022
# Profile production applications in Azure with Application Insights Profiler
-Diagnosing performance issues can prove difficult, especially when your application is running on production environment in the cloud. The cloud is dynamic, with machines coming and going, user input and other conditions constantly changing, and the potential for high scale. Slow responses in your application could be caused by infrastructure, framework, or application code handling the request in the pipeline.
+Diagnosing performance issues can be difficult, especially when your application is running on a production environment in the cloud. The cloud is dynamic. Machines come and go, and user input and other conditions are constantly changing. There's also potential for high scale. Slow responses in your application could be caused by infrastructure, framework, or application code handling the request in the pipeline.
-With Application Insights Profiler, you can capture and view performance traces for your application in all these dynamic situations, automatically at-scale, without negatively affecting your end users. The Profiler captures the following information so you can easily identify performance issues while your app is running in Azure:
+With Application Insights Profiler, you can capture and view performance traces for your application in all these dynamic situations. The process occurs automatically at scale and doesn't negatively affect your users. Profiler captures the following information so that you can easily identify performance issues while your app is running in Azure:
-- The median, fastest, and slowest response times for each web request made by your customers.-- Helps you identify the ΓÇ£hotΓÇ¥ code path spending the most time handling a particular web request.
+- Identifies the median, fastest, and slowest response times for each web request made by your customers.
+- Helps you identify the "hot" code path spending the most time handling a particular web request.
-Enable the Profiler on all of your Azure applications to catch issues early and prevent your customers from being widely impacted. When you enable the Profiler, it will gather data with these triggers:
+Enable the Profiler on all your Azure applications to catch issues early and prevent your customers from being widely affected. When you enable Profiler, it gathers data with these triggers:
-- **Sampling Trigger**: starts the Profiler randomly about once an hour for 2 minutes.-- **CPU Trigger**: starts the Profiler when the CPU usage percentage is over 80%.-- **Memory Trigger**: starts the Profiler when memory usage is above 80%.
+- **Sampling trigger**: Starts Profiler randomly about once an hour for two minutes.
+- **CPU trigger**: Starts Profiler when the CPU usage percentage is over 80 percent.
+- **Memory trigger**: Starts Profiler when memory usage is above 80 percent.
Each of these triggers can be configured, enabled, or disabled on the [Configure Profiler page](./profiler-settings.md#trigger-settings). ## Overhead and sampling algorithm
-Profiler randomly runs two minutes/hour on each virtual machine hosting the application with Profiler enabled for capturing traces. When Profiler is running, it adds from 5-15% CPU overhead to the server.
+Profiler randomly runs two minutes per hour on each virtual machine hosting the application with Profiler enabled for capturing traces. When Profiler is running, it adds from 5 percent to 15 percent CPU overhead to the server.
## Supported in Profiler
-Profiler works with .NET applications deployed on the following Azure services. View specific instructions for enabling Profiler for each service type in the links below.
+Profiler works with .NET applications deployed on the following Azure services. View specific instructions for enabling Profiler for each service type in the following links.
| Compute platform | .NET (>= 4.6) | .NET Core | Java | | - | - | | - | | [Azure App Service](profiler.md) | Yes | Yes | No |
-| [Azure Virtual Machines and virtual machine scale sets for Windows](profiler-vm.md) | Yes | Yes | No |
-| [Azure Virtual Machines and virtual machine scale sets for Linux](profiler-aspnetcore-linux.md) | No | Yes | No |
+| [Azure Virtual Machines and Virtual Machine Scale Sets for Windows](profiler-vm.md) | Yes | Yes | No |
+| [Azure Virtual Machines and Virtual Machine Scale Sets for Linux](profiler-aspnetcore-linux.md) | No | Yes | No |
| [Azure Cloud Services](profiler-cloudservice.md) | Yes | Yes | N/A | | [Azure Container Instances for Windows](profiler-containers.md) | No | Yes | No | | [Azure Container Instances for Linux](profiler-containers.md) | No | Yes | No |
Profiler works with .NET applications deployed on the following Azure services.
| Azure Spring Cloud | N/A | No | No | | [Azure Service Fabric](profiler-servicefabric.md) | Yes | Yes | No |
-If you've enabled Profiler but aren't seeing traces, check our [Troubleshooting guide](profiler-troubleshooting.md).
+If you've enabled Profiler but aren't seeing traces, see the [Troubleshooting guide](profiler-troubleshooting.md).
## Limitations -- **Data retention**: The default data retention period is five days. -- **Profiling web apps**:
- - While you can use the Profiler at no extra cost, your web app must be hosted in the basic tier of the Web Apps feature of Azure App Service, at minimum.
- - You can only attach 1 profiler to each web app.
+- **Data retention**: The default data retention period is five days.
+- **Profiling web apps**:
+ - Although you can use Profiler at no extra cost, your web app must be hosted in the basic tier of the Web Apps feature of Azure App Service, at minimum.
+ - You can attach only one profiler to each web app.
## Next steps Learn how to enable Profiler on your Azure service: - [Azure App Service](./profiler.md) - [Azure Functions app](./profiler-azure-functions.md)-- [Cloud Service](./profiler-cloudservice.md)-- [Service Fabric app](./profiler-servicefabric.md)-- [Azure Virtual Machine](./profiler-vm.md)
+- [Azure Cloud Services](./profiler-cloudservice.md)
+- [Azure Service Fabric app](./profiler-servicefabric.md)
+- [Azure Virtual Machines](./profiler-vm.md)
- [ASP.NET Core application hosted in Linux on Azure App Service](./profiler-aspnetcore-linux.md) - [ASP.NET Core application running in containers](./profiler-containers.md)
azure-monitor Profiler Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-settings.md
Title: Configure Application Insights Profiler | Microsoft Docs
-description: Use the Azure Application Insights Profiler settings pane to see Profiler status and start profiling sessions
+description: Use the Application Insights Profiler settings pane to see Profiler status and start profiling sessions
ms.contributor: Charles.Weininger Last updated 08/09/2022
Last updated 08/09/2022
# Configure Application Insights Profiler
-Once you've enabled the Application Insights Profiler, you can:
+After you enable Application Insights Profiler, you can:
-- Start a new profiling session-- Configure Profiler triggers-- View recent profiling sessions
+- Start a new profiling session.
+- Configure Profiler triggers.
+- View recent profiling sessions.
-To open the Azure Application Insights Profiler settings pane, select **Performance** from the pane on the left within your Application Insights page.
+To open the Application Insights Profiler settings pane, select **Performance** on the left pane on your Application Insights page.
-View profiler traces across your Azure resources via two methods:
+You can view Profiler traces across your Azure resources via two methods:
-**Profiler button**
+- By the **Profiler** button:
-Select the **Profiler** button from the top menu.
+ Select **Profiler**.
+ :::image type="content" source="./media/profiler-overview/profiler-button-inline.png" alt-text="Screenshot that shows the Profiler button on the Performance pane." lightbox="media/profiler-settings/profiler-button.png":::
-**By operation**
+- By operation:
-1. Select an operation from the **Operation name** list ("Overall" is highlighted by default).
-1. Select the **Profiler traces** button.
+ 1. Select an operation from the **Operation name** list. **Overall** is highlighted by default.
+ 1. Select **Profiler traces**.
- :::image type="content" source="./media/profiler-settings/operation-entry-inline.png" alt-text="Screenshot of selecting operation and Profiler traces to view all profiler traces." lightbox="media/profiler-settings/operation-entry.png":::
+ :::image type="content" source="./media/profiler-settings/operation-entry-inline.png" alt-text="Screenshot that shows selecting operation and Profiler traces to view all Profiler traces." lightbox="media/profiler-settings/operation-entry.png":::
-1. Select one of the requests from the list to the left.
-1. Select **Configure Profiler**.
+ 1. Select one of the requests from the list on the left.
+ 1. Select **Configure Profiler**.
- :::image type="content" source="./media/profiler-settings/configure-profiler-inline.png" alt-text="Screenshot of the overall selection and clicking Profiler traces to view all profiler traces." lightbox="media/profiler-settings/configure-profiler.png":::
+ :::image type="content" source="./media/profiler-settings/configure-profiler-inline.png" alt-text="Screenshot that shows the overall selection and clicking Profiler traces to view all profiler traces." lightbox="media/profiler-settings/configure-profiler.png":::
-Once within the Profiler, you can configure and view the Profiler. The **Application Insights Profiler** page has these features:
+Within Profiler, you can configure settings and view recent profiling sessions. The **Application Insights Profiler** page has the following features.
| Feature | Description | |-|-|
-Profile Now | Starts profiling sessions for all apps that are linked to this instance of Application Insights.
-Triggers | Allows you to configure triggers that cause the profiler to run.
-Recent profiling sessions | Displays information about past profiling sessions, which you can sort using the filters at the top of the page.
+**Profile now** | Starts profiling sessions for all apps that are linked to this instance of Application Insights.
+**Triggers** | Allows you to configure triggers that cause Profiler to run.
+**Recent profiling sessions** | Displays information about past profiling sessions, which you can sort by using the filters at the top of the page.
-## Profile Now
-Select **Profile Now** to start a profiling session on demand. When you click this link, all profiler agents that are sending data to this Application Insights instance will start to capture a profile. After 5 to 10 minutes, the profile session will show in the list below.
+## Profile now
+Select **Profile now** to start a profiling session on demand. When you select this link, all Profiler agents that are sending data to this Application Insights instance start to capture a profile. After 5 to 10 minutes, the profile session is shown in the list.
-To manually trigger a profiler session, you'll need, at minimum, *write* access on your role for the Application Insights component. In most cases, you get write access automatically. If you're having issues, you'll need the "Application Insights Component Contributor" subscription scope role added. [See more about role access control with Azure Monitoring](../app/resources-roles-access-control.md).
+To manually trigger a Profiler session, you need, at minimum, *write* access on your role for the Application Insights component. In most cases, you get write access automatically. If you're having issues, you need the **Application Insights Component Contributor** subscription scope role added. For more information, see [Resources, roles, and access control in Application Insights](../app/resources-roles-access-control.md).
-## Trigger Settings
+## Trigger settings
-Select the Triggers button on the menu bar to open the CPU, Memory, and Sampling trigger settings pane.
+Select **Triggers** to open the **Trigger Settings** pane that has the **CPU**, **Memory**, and **Sampling** trigger tabs.
-**CPU or Memory triggers**
+### CPU or Memory triggers
-You can set up a trigger to start profiling when the percentage of CPU or Memory use hits the level you set.
+You can set up a trigger to start profiling when the percentage of CPU or memory use hits the level you set.
| Setting | Description | |-|-|
-On / Off Button | On: profiler can be started by this trigger; Off: profiler won't be started by this trigger.
-Memory threshold | When this percentage of memory is in use, the profiler will be started.
-Duration | Sets the length of time the profiler will run when triggered.
-Cooldown | Sets the length of time the profiler will wait before checking for the memory or CPU usage again after it's triggered.
+On/Off button | On: Starts Profiler. Off: Doesn't start Profiler.
+Memory threshold | When this percentage of memory is in use, Profiler is started.
+Duration | Sets the length of time Profiler runs when triggered.
+Cooldown | Sets the length of time Profiler waits before checking for the memory or CPU usage again after it's triggered.
-**Sampling trigger**
+### Sampling trigger
-Unlike CPU or memory triggers, the Sampling trigger isn't triggered by an event. Instead, it's triggered randomly to get a truly random sample of your application's performance. You can:
+Unlike the CPU or Memory triggers, the Sampling trigger isn't started by an event. Instead, it's triggered randomly to get a truly random sample of your application's performance.
+You can:
- Turn this trigger off to disable random sampling.-- Set how often profiling will occur and the duration of the profiling session.
+- Set how often profiling occurs and the duration of the profiling session.
| Setting | Description | |-|-|
-On / Off Button | On: profiler can be started by this trigger; Off: profiler won't be started by this trigger.
-Sample rate | The rate at which the profiler can occur. </br> <ul><li>The **Normal** setting collects data 5% of the time, which is about 2 minutes per hour.</li><li>The **High** setting profiles 50% of the time.</li><li>The **Maximum** setting profiles 75% of the time.</li></ul> </br> Normal is recommended for production environments.
-Duration | Sets the length of time the profiler will run when triggered.
+On/Off button | On: Starts Profiler. Off: Doesn't start Profiler.
+Sample rate | The rate at which Profiler runs. </br> <ul><li>The **Normal** setting collects data 5% of the time, which is about 2 minutes per hour.</li><li>The **High** setting profiles 50% of the time.</li><li>The **Maximum** setting profiles 75% of the time.</li></ul> </br> We recommend the **Normal** setting for production environments.
+Duration | Sets the length of time Profiler runs when triggered.
-## Recent Profiling Sessions
-This section of the Profiler page displays recent profiling session information. A profiling session represents the time taken by the profiler agent while profiling one of the machines hosting your application. Open the profiles from a session by clicking on one of the rows. For each session, we show:
+## Recent profiling sessions
+This section of the **Profiler** page displays recent profiling session information. A profiling session represents the time taken by the Profiler agent while profiling one of the machines that hosts your application. Open the profiles from a session by selecting one of the rows. For each session, we show the following settings.
| Setting | Description | |-|-|
-Triggered by | How the session was started, either by a trigger, Profile Now, or default sampling.
+Triggered by | How the session was started, either by a trigger, Profile now, or default sampling.
App Name | Name of the application that was profiled.
-Machine Instance | Name of the machine the profiler agent ran on.
+Machine Instance | Name of the machine the Profiler agent ran on.
Timestamp | Time when the profile was captured.
-CPU % | Percentage of CPU that was being used while the profiler was running.
-Memory % | Percentage of memory that was being used while the profiler was running.
+CPU % | Percentage of CPU used while Profiler was running.
+Memory % | Percentage of memory used while Profiler was running.
## Next steps [Enable Profiler and view traces](profiler-overview.md?toc=/azure/azure-monitor/toc.json)
azure-monitor Profiler Trackrequests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-trackrequests.md
Title: Write code to track requests with Azure Application Insights | Microsoft Docs
+ Title: Write code to track requests with Application Insights | Microsoft Docs
description: Write code to track requests with Application Insights so you can get profiles for your requests.
# Write code to track requests with Application Insights
-Azure Application Insights needs to track requests for your application in order to provide profiles for your application on the Performance page in the Azure portal.
+Application Insights needs to track requests for your application to provide profiles for your application on the **Performance** page in the Azure portal.
-For applications built on already-instrumented frameworks (like ASP.NET and ASP.NET Core)S, Application Insights can automatically track requests.
+For applications built on already-instrumented frameworks (like ASP.NET and ASP.NET Core), Application Insights can automatically track requests.
-But for other applications (like Azure Cloud Services worker roles and Service Fabric stateless APIs), you need to track requests with code that tells Application Insights where your requests begin and end. Requests telemetry is then sent to Application Insights, which you can view on the Performance page. Profiles are collected for those requests.
+For other applications (like Azure Cloud Services worker roles and Azure Service Fabric stateless APIs), you need to track requests with code that tells Application Insights where your requests begin and end. Requests telemetry is then sent to Application Insights, which you can view on the **Performance** page. Profiles are collected for those requests.
To manually track requests:
- 1. Early in the application lifetime, add the following code:
+ 1. Early in the application lifetime, add the following code:
```csharp using Microsoft.ApplicationInsights.Extensibility;
To manually track requests:
TelemetryConfiguration.Active.InstrumentationKey = "00000000-0000-0000-0000-000000000000"; ```
- For more information about this global instrumentation key configuration, see [Use Service Fabric with Application Insights](https://github.com/Azure-Samples/service-fabric-dotnet-getting-started/blob/dev/appinsights/ApplicationInsights.md).
+ For more information about this global instrumentation key configuration, see [Use Service Fabric with Application Insights](https://github.com/Azure-Samples/service-fabric-dotnet-getting-started/blob/dev/appinsights/ApplicationInsights.md).
1. For any piece of code that you want to instrument, add a `StartOperation<RequestTelemetry>` **using** statement around it, as shown in the following example:
To manually track requests:
} ```
- Calling `StartOperation<RequestTelemetry>` within another `StartOperation<RequestTelemetry>` scope isn't supported. You can use `StartOperation<DependencyTelemetry>` in the nested scope instead. For example:
+ Calling `StartOperation<RequestTelemetry>` within another `StartOperation<RequestTelemetry>` scope isn't supported. You can use `StartOperation<DependencyTelemetry>` in the nested scope instead. For example:
```csharp using (var getDetailsOperation = client.StartOperation<RequestTelemetry>("GetProductDetails"))
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-troubleshooting.md
It will display a status page similar to:
#### Manual installation
-When you configure Profiler, updates are made to the web app's settings. If necessary, you can [apply the updates manually](./profiler.md#verify-always-on-setting-is-enabled).
+When you configure Profiler, updates are made to the web app's settings. If necessary, you can [apply the updates manually](./profiler.md#verify-the-always-on-setting-is-enabled).
#### Too many active profiling sessions
azure-monitor Profiler Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-vm.md
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-In this article, you learn how to run Application Insights Profiler on your Azure virtual machine (VM) or Azure virtual machine scale set via three different methods. Using any of these methods, you will:
+In this article, you learn how to run Application Insights Profiler on your Azure virtual machine (VM) or Azure virtual machine scale set via three different methods. With any of these methods, you:
- Configure the Azure Diagnostics extension to run Profiler.-- Install the Application Insights SDK onto a VM.
+- Install the Application Insights SDK on a VM.
- Deploy your application.-- View Profiler traces via the Application Insights instance on Azure portal.
+- View Profiler traces via the Application Insights instance in the Azure portal.
-## Pre-requisites
+## Prerequisites
-- A functioning [ASP.NET Core application](/aspnet/core/getting-started)
+You need:
+
+- A functioning [ASP.NET Core application](/aspnet/core/getting-started).
- An [Application Insights resource](../app/create-workspace-resource.md).
-- Review the Azure Resource Manager templates for the Azure Diagnostics extension:
+- To review the Azure Resource Manager templates (ARM templates) for the Azure Diagnostics extension:
- [VM](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachine.json)
- [Virtual machine scale set](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachineScaleSet.json)
-## Add Application Insights SDK to your application
+## Add the Application Insights SDK to your application
1. Open your ASP.NET Core project in Visual Studio.
1. Select **Project** > **Add Application Insights Telemetry**.
-1. Select **Azure Application Insights**, then click **Next**.
+1. Select **Azure Application Insights** > **Next**.
-1. Select the subscription where your Application Insights resource lives, then click **Next**.
+1. Select the subscription where your Application Insights resource lives and select **Next**.
-1. Select where to save connection string, then click **Next**.
+1. Select where to save the connection string and select **Next**.
1. Select **Finish**.

> [!NOTE]
-> For full instructions, including enabling Application Insights on your ASP.NET Core application without Visual Studio, see the [Application Insights for ASP.NET Core applications](../app/asp-net-core.md).
+> For full instructions, including how to enable Application Insights on your ASP.NET Core application without Visual Studio, see [Application Insights for ASP.NET Core applications](../app/asp-net-core.md).
## Confirm the latest stable release of the Application Insights SDK
In this article, you learn how to run Application Insights Profiler on your Azur
1. Select **Microsoft.ApplicationInsights.AspNetCore**.
-1. In the side pane, select the latest version of the SDK from the dropdown.
+1. On the side pane, select the latest version of the SDK from the dropdown.
1. Select **Update**.
- :::image type="content" source="../app/media/asp-net-core/update-nuget-package.png" alt-text="Screenshot of where to select the Application Insights package for update.":::
+ :::image type="content" source="../app/media/asp-net-core/update-nuget-package.png" alt-text="Screenshot that shows where to select the Application Insights package for update.":::
## Enable Profiler
-You can enable Profiler by any of the following three ways:
+You can enable Profiler in any of three ways:
-- Within your ASP.NET Core application using an Azure Resource Manager template and Visual Studio (recommended).
-- Using a PowerShell command via the Azure CLI.
-- Using Azure Resource Explorer.
+- Within your ASP.NET Core application by using an Azure Resource Manager template and Visual Studio. We recommend this method.
+- By using a PowerShell command via the Azure CLI.
+- By using Azure Resource Explorer.
# [Visual Studio and ARM template](#tab/vs-arm)

### Install the Azure Diagnostics extension
-1. Choose which Azure Resource Manager template to use:
+1. Choose which ARM template to use:
- [VM](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachine.json)
- - [Virtual machine scale set](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachineScaleSet.json).
+ - [Virtual machine scale set](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachineScaleSet.json)
1. In the template, locate the resource of type `extension`.
-1. In Visual Studio, navigate to the `arm.json` file in your ASP.NET Core application that was added when you installed the Application Insights SDK.
+1. In Visual Studio, go to the `arm.json` file in your ASP.NET Core application that was added when you installed the Application Insights SDK.
1. Add the resource type `extension` from the template to the `arm.json` file to set up a VM or virtual machine scale set with Azure Diagnostics.
-1. Within the `WadCfg` tag, add your Application Insights instrumentation key to the `MyApplicationInsightsProfilerSink`.
-
- ```json
- "WadCfg": {
- "SinksConfig": {
- "Sink": [
- {
- "name": "MyApplicationInsightsProfilerSink",
- "ApplicationInsightsProfiler": "YOUR_APPLICATION_INSIGHTS_INSTRUMENTATION_KEY"
+1. Within the `WadCfg` tag, add your Application Insights instrumentation key to `MyApplicationInsightsProfilerSink`.
+
+
+ ```json
+ "WadCfg": {
+ "SinksConfig": {
+ "Sink": [
+ {
+ "name": "MyApplicationInsightsProfilerSink",
+ "ApplicationInsightsProfiler": "YOUR_APPLICATION_INSIGHTS_INSTRUMENTATION_KEY"
+ }
+ ]
}
- ]
- }
- }
- ```
+ }
+ ```
1. Deploy your application.
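The sink entry shown above can also be added programmatically before deployment. The following Python sketch is illustrative (the helper name is not part of any Azure SDK); it inserts the Profiler sink into a `WadCfg` configuration dictionary:

```python
import json

def add_profiler_sink(config, instrumentation_key):
    """Insert the Application Insights Profiler sink into a WadCfg dict."""
    sinks = (
        config.setdefault("WadCfg", {})
        .setdefault("SinksConfig", {})
        .setdefault("Sink", [])
    )
    sinks.append({
        "name": "MyApplicationInsightsProfilerSink",
        "ApplicationInsightsProfiler": instrumentation_key,
    })
    return config

# Placeholder value; use your Application Insights instrumentation key.
config = add_profiler_sink({}, "YOUR_APPLICATION_INSIGHTS_INSTRUMENTATION_KEY")
print(json.dumps(config, indent=2))
```

The same shape works whether you start from an empty dict or from a config exported from an existing VM.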
You can enable Profiler by any of the following three ways:
The following PowerShell commands are an approach for existing VMs that touches only the Azure Diagnostics extension.

> [!NOTE]
-> If you deploy the VM again, the sink will be lost. You'll need to update the config you use when deploying the VM to preserve this setting.
+> If you deploy the VM again, the sink will be lost. You need to update the config you use when you deploy the VM to preserve this setting.
-### Install Application Insights via Azure Diagnostics config
+### Install Application Insights via the Azure Diagnostics config
1. Export the currently deployed Azure Diagnostics config to a file:
The following PowerShell commands are an approach for existing VMs that touch on
If the intended application is running through [IIS](https://www.microsoft.com/web/downloads/platform.aspx), enable the `IIS Http Tracing` Windows feature:
-1. Establish remote access to the environment.
+1. Establish remote access to the environment.
-1. Use the [Add Windows features](/iis/configuration/system.webserver/tracing/) window, or run the following command in PowerShell (as administrator):
+1. Use the [Add Windows features](/iis/configuration/system.webserver/tracing/) window, or run the following command in PowerShell (as administrator):
```powershell
Enable-WindowsOptionalFeature -FeatureName IIS-HttpTracing -Online -All
```
- If establishing remote access is a problem, you can use the [Azure CLI](/cli/azure/get-started-with-azure-cli) to run the following command:
+ If establishing remote access is a problem, you can use the [Azure CLI](/cli/azure/get-started-with-azure-cli) to run the following command:
```cli
az vm run-command invoke -g MyResourceGroupName -n MyVirtualMachineName --command-id RunPowerShellScript --scripts "Enable-WindowsOptionalFeature -FeatureName IIS-HttpTracing -Online -All"
```
If the intended application is running through [IIS](https://www.microsoft.com/w
# [Azure Resource Explorer](#tab/azure-resource-explorer)
-### Set Profiler sink using Azure Resource Explorer
+### Set the Profiler sink by using Azure Resource Explorer
-Since the Azure portal doesn't provide a way to set the Application Insights Profiler sink, you can use [Azure Resource Explorer](https://resources.azure.com) to set the sink.
+Because the Azure portal doesn't provide a way to set the Application Insights Profiler sink, you can use [Azure Resource Explorer](https://resources.azure.com) to set the sink.
> [!NOTE]
-> If you deploy the VM again, the sink will be lost. You'll need to update the config you use when deploying the VM to preserve this setting.
+> If you deploy the VM again, the sink will be lost. You need to update the config you use when you deploy the VM to preserve this setting.
-1. Verify the Microsoft Azure Diagnostics extension is installed by viewing the extensions installed for your virtual machine.
+1. Verify that the Microsoft Azure Diagnostics extension is installed by viewing the extensions installed for your virtual machine.
- :::image type="content" source="./media/profiler-vm/wad-extension.png" alt-text="Screenshot of checking if WAD extension is installed.":::
+ :::image type="content" source="./media/profiler-vm/wad-extension.png" alt-text="Screenshot that shows checking if the WAD extension is installed.":::
1. Find the VM Diagnostics extension for your VM:
- 1. Go to [https://resources.azure.com](https://resources.azure.com).
- 1. Expand **subscriptions** and find the subscription holding the resource group with your VM.
- 1. Drill down to your VM extensions by selecting your resource group, followed by **Microsoft.Compute** > **virtualMachines** > **[your virtual machine]** > **extensions**.
+ 1. Go to [Azure Resource Explorer](https://resources.azure.com).
+ 1. Expand **subscriptions** and find the subscription that holds the resource group with your VM.
+ 1. Drill down to your VM extensions by selecting your resource group. Then select **Microsoft.Compute** > **virtualMachines** > **[your virtual machine]** > **extensions**.
- :::image type="content" source="./media/profiler-vm/azure-resource-explorer.png" alt-text="Screenshot of navigating to WAD config in Azure Resource Explorer.":::
+ :::image type="content" source="./media/profiler-vm/azure-resource-explorer.png" alt-text="Screenshot that shows going to WAD config in Azure Resource Explorer.":::
-1. Add the Application Insights Profiler sink to the `SinksConfig` node under WadCfg. If you don't already have a `SinksConfig` section, you may need to add one. To add the sink:
+1. Add the Application Insights Profiler sink to the `SinksConfig` node under `WadCfg`. If you don't already have a `SinksConfig` section, you might need to add one. To add the sink:
- - Specify the proper Application Insights iKey in your settings.
- - Switch the explorers mode to Read/Write in the upper right corner.
- - Press the blue **Edit** button.
+ - Specify the proper Application Insights iKey in your settings.
+ - Switch the Explorer mode to **Read/Write** in the upper-right corner.
+ - Select **Edit**.
- :::image type="content" source="./media/profiler-vm/resource-explorer-sinks-config.png" alt-text="Screenshot of adding Application Insights Profiler sink.":::
+ :::image type="content" source="./media/profiler-vm/resource-explorer-sinks-config.png" alt-text="Screenshot that shows adding the Application Insights Profiler sink.":::
- ```json
- "WadCfg": {
- "SinksConfig": {
- "Sink": [
- {
- "name": "MyApplicationInsightsProfilerSink",
- "ApplicationInsightsProfiler": "YOUR_APPLICATION_INSIGHTS_INSTRUMENTATION_KEY"
+ ```json
+ "WadCfg": {
+ "SinksConfig": {
+ "Sink": [
+ {
+ "name": "MyApplicationInsightsProfilerSink",
+ "ApplicationInsightsProfiler": "YOUR_APPLICATION_INSIGHTS_INSTRUMENTATION_KEY"
+ }
+ ]
}
- ]
- }
- }
- ```
-
+ }
+ ```
-1. When you're done editing the config, press **PUT**.
+1. After you've finished editing the config, select **PUT**.
-1. If the `put` is successful, a green check will appear in the middle of the screen.
+1. If the `put` is successful, a green check mark appears in the middle of the screen.
- :::image type="content" source="./media/profiler-vm/resource-explorer-put.png" alt-text="Screenshot of sending the put request to apply changes.":::
+ :::image type="content" source="./media/profiler-vm/resource-explorer-put.png" alt-text="Screenshot that shows sending the PUT request to apply changes.":::
## Can Profiler run on on-premises servers?
-Currently, Application Insights Profiler is not supported for on-premises servers.
+Currently, Application Insights Profiler isn't supported for on-premises servers.
## Next steps
-Learn how to...
> [!div class="nextstepaction"]
> [Generate load and view Profiler traces](./profiler-data.md)
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler.md
# Enable Profiler for Azure App Service apps
-Application Insights Profiler is pre-installed as part of the App Services runtime. You can run Profiler on ASP.NET and ASP.NET Core apps running on Azure App Service using Basic service tier or higher. Follow these steps even if you've included the App Insights SDK in your application at build time.
+Application Insights Profiler is preinstalled as part of the Azure App Service runtime. You can run Profiler on ASP.NET and ASP.NET Core apps running on App Service by using the Basic service tier or higher. Follow these steps even if you've included the Application Insights SDK in your application at build time.
To enable Profiler on Linux, walk through the [ASP.NET Core Azure Linux web apps instructions](profiler-aspnetcore-linux.md).

> [!NOTE]
-> Codeless installation of Application Insights Profiler follows the .NET Core support policy.
+> Codeless installation of Application Insights Profiler follows the .NET Core support policy.
> For more information about supported runtime, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).

## Prerequisites

-- An [Azure App Services ASP.NET/ASP.NET Core app](../../app-service/quickstart-dotnetcore.md).
-- [Application Insights resource](/previous-versions/azure/azure-monitor/app/create-new-resource) connected to your App Service app.
+You need:
+
+- An [Azure App Service ASP.NET/ASP.NET Core app](../../app-service/quickstart-dotnetcore.md).
+- An [Application Insights resource](/previous-versions/azure/azure-monitor/app/create-new-resource) connected to your App Service app.
-## Verify "Always On" setting is enabled
+## Verify the Always on setting is enabled
-1. In the Azure portal, navigate to your App Service.
-1. Under **Settings** in the left side menu, select **Configuration**.
+1. In the Azure portal, go to your App Service instance.
+1. Under **Settings** on the left pane, select **Configuration**.
- :::image type="content" source="./media/profiler/configuration-menu.png" alt-text="Screenshot of selecting Configuration from the left side menu.":::
+ :::image type="content" source="./media/profiler/configuration-menu.png" alt-text="Screenshot that shows selecting Configuration on the left pane.":::
1. Select the **General settings** tab.
-1. Verify **Always On** > **On** is selected.
+1. Verify that **Always on** > **On** is selected.
- :::image type="content" source="./media/profiler/always-on.png" alt-text="Screenshot of the General tab on the Configuration pane and showing the Always On being enabled.":::
+ :::image type="content" source="./media/profiler/always-on.png" alt-text="Screenshot that shows the General tab on the Configuration pane with Always on enabled.":::
-1. Select **Save** if you've made changes.
+1. Select **Save** if you made changes.
## Enable Application Insights and Profiler
+The following sections show you how to enable Application Insights for the same subscription or different subscriptions.
+ ### For Application Insights and App Service in the same subscription
-If your Application Insights resource is in the same subscription as your App Service:
+If your Application Insights resource is in the same subscription as your instance of App Service:
-1. Under **Settings** in the left side menu, select **Application Insights**.
+1. Under **Settings** on the left pane, select **Application Insights**.
- :::image type="content" source="./media/profiler/app-insights-menu.png" alt-text="Screenshot of selecting Application Insights from the left side menu.":::
+ :::image type="content" source="./media/profiler/app-insights-menu.png" alt-text="Screenshot that shows selecting Application Insights on the left pane.":::
1. Under **Application Insights**, select **Enable**.
-1. Verify you've connected an Application Insights resource to your app.
+1. Verify that you connected an Application Insights resource to your app.
- :::image type="content" source="./media/profiler/enable-app-insights.png" alt-text="Screenshot of enabling App Insights on your app.":::
+ :::image type="content" source="./media/profiler/enable-app-insights.png" alt-text="Screenshot that shows enabling Application Insights on your app.":::
1. Scroll down and select the **.NET** or **.NET Core** tab, depending on your app.
-1. Verify **Collection Level** > **Recommended** is selected.
-1. Under **Profiler**, select **On**.
- - If you chose the **Basic** collection level earlier, the Profiler setting is disabled.
-1. Select **Apply**, then **Yes** to confirm.
+1. Verify that **Collection level** > **Recommended** is selected.
+1. Under **Profiler**, select **On**.
- :::image type="content" source="./media/profiler/enable-profiler.png" alt-text="Screenshot of enabling Profiler on your app.":::
+ If you chose the **Basic** collection level earlier, the Profiler setting is disabled.
+1. Select **Apply** > **Yes** to confirm.
+
+ :::image type="content" source="./media/profiler/enable-profiler.png" alt-text="Screenshot that shows enabling Profiler on your app.":::
### For Application Insights and App Service in different subscriptions
-If your Application Insights resource is in a different subscription from your App Service, you'll need to enable Profiler manually by creating app settings for your Azure App Service. You can automate the creation of these settings using a template or other means. The settings needed to enable the Profiler:
+If your Application Insights resource is in a different subscription from your instance of App Service, you need to enable Profiler manually by creating app settings for your App Service instance. You can automate the creation of these settings by using a template or other means. Here are the settings you need to enable Profiler.
-|App Setting | Value |
+|App setting | Value |
||-|
|APPINSIGHTS_INSTRUMENTATIONKEY | iKey for your Application Insights resource |
|APPINSIGHTS_PROFILERFEATURE_VERSION | 1.0.0 |
|DiagnosticServices_EXTENSION_VERSION | ~3 |
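As a sketch, the same three settings expressed as a dictionary that you could feed to whatever tooling applies app settings for you; the instrumentation key value is a placeholder:

```python
# The three app settings from the table above. The instrumentation key
# value is a placeholder for your Application Insights resource's iKey.
profiler_app_settings = {
    "APPINSIGHTS_INSTRUMENTATIONKEY": "<your-instrumentation-key>",
    "APPINSIGHTS_PROFILERFEATURE_VERSION": "1.0.0",
    "DiagnosticServices_EXTENSION_VERSION": "~3",
}

# Example: render as KEY=VALUE pairs, the shape most CLI tools accept.
settings_args = " ".join(f"{k}={v}" for k, v in profiler_app_settings.items())
print(settings_args)
```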
-Set these values using:
-- [Azure Resource Manager Templates](../app/azure-web-apps-net-core.md#app-service-application-settings-with-azure-resource-manager)
+Set these values by using:
+- [Azure Resource Manager templates](../app/azure-web-apps-net-core.md#app-service-application-settings-with-azure-resource-manager)
- [Azure PowerShell](/powershell/module/az.websites/set-azwebapp)
- [Azure CLI](/cli/azure/webapp/config/appsettings)

## Enable Profiler for regional clouds
-Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
+Currently, the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
-|App Setting | US Government Cloud | China Cloud |
+|App setting | US Government Cloud | China Cloud |
|||-|
|ApplicationInsightsProfilerEndpoint | `https://profiler.monitor.azure.us` | `https://profiler.monitor.azure.cn` |
|ApplicationInsightsEndpoint | `https://dc.applicationinsights.us` | `https://dc.applicationinsights.azure.cn` |

## Enable Azure Active Directory authentication for profile ingestion
-Application Insights Profiler supports Azure AD authentication for profiles ingestion. For all profiles of your application to be ingested, your application must be authenticated and provide the required application settings to the Profiler agent.
+Application Insights Profiler supports Azure Active Directory (Azure AD) authentication for profile ingestion. For all profiles of your application to be ingested, your application must be authenticated and provide the required application settings to the Profiler agent.
-Profiler only supports Azure AD authentication when you reference and configure Azure AD using the [Application Insights SDK](../app/asp-net-core.md#configure-the-application-insights-sdk) in your application.
+Profiler only supports Azure AD authentication when you reference and configure Azure AD by using the [Application Insights SDK](../app/asp-net-core.md#configure-the-application-insights-sdk) in your application.
-To enable Azure AD for profiles ingestion:
+To enable Azure AD for profile ingestion:
-1. Create and add the managed identity to authenticate against your Application Insights resource to your App Service.
+1. Create a managed identity for your App Service instance to authenticate against your Application Insights resource:
- a. [System-Assigned Managed identity documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-system-assigned-identity)
+ 1. [System-assigned managed identity documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-system-assigned-identity)
- b. [User-Assigned Managed identity documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-user-assigned-identity)
+ 1. [User-assigned managed identity documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-user-assigned-identity)
1. [Configure and enable Azure AD](../app/azure-ad-authentication.md?tabs=net#configure-and-enable-azure-ad-based-authentication) in your Application Insights resource.
-1. Add the following application setting to let the Profiler agent know which managed identity to use:
+1. Add the following application setting to let the Profiler agent know which managed identity to use.
- For System-Assigned Identity:
+ - For system-assigned identity:
- | App Setting | Value |
- | -- | |
- | APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD` |
+ | App setting | Value |
+ | -- | |
+ | APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD` |
- For User-Assigned Identity:
+ - For user-assigned identity:
- | App Setting | Value |
- | - | -- |
- | APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD;ClientId={Client id of the User-Assigned Identity}` |
+ | App setting | Value |
+ | - | -- |
+ | APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD;ClientId={Client id of the User-Assigned Identity}` |
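The two table values differ only in the optional `ClientId` part. Here's a small Python sketch (the helper name is illustrative, not part of any SDK) that builds the setting value for either identity type:

```python
from typing import Optional

def build_auth_string(client_id: Optional[str] = None) -> str:
    """Build APPLICATIONINSIGHTS_AUTHENTICATION_STRING.

    Omit client_id for a system-assigned identity; pass the client ID
    of the user-assigned identity otherwise.
    """
    if client_id is None:
        return "Authorization=AAD"
    return f"Authorization=AAD;ClientId={client_id}"

print(build_auth_string())  # system-assigned identity
print(build_auth_string("11111111-2222-3333-4444-555555555555"))  # user-assigned
```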
## Disable Profiler

To stop or restart Profiler for an individual app's instance:
-1. Under **Settings** in the left side menu, select **WebJobs**.
+1. Under **Settings** on the left pane, select **WebJobs**.
- :::image type="content" source="./media/profiler/web-jobs-menu.png" alt-text="Screenshot of selecting web jobs from the left side menu.":::
+ :::image type="content" source="./media/profiler/web-jobs-menu.png" alt-text="Screenshot that shows selecting web jobs on the left pane.":::
1. Select the webjob named `ApplicationInsightsProfiler3`.
-1. Click **Stop** from the top menu.
+1. Select **Stop**.
- :::image type="content" source="./media/profiler/stop-web-job.png" alt-text="Screenshot of selecting stop for stopping the webjob.":::
+ :::image type="content" source="./media/profiler/stop-web-job.png" alt-text="Screenshot that shows selecting stop for stopping the webjob.":::
1. Select **Yes** to confirm.

We recommend that you have Profiler enabled on all your apps to discover any performance issues as early as possible.
-Profiler's files can be deleted when using WebDeploy to deploy changes to your web application. You can prevent the deletion by excluding the App_Data folder from being deleted during deployment.
+You can delete Profiler's files when you use WebDeploy to deploy changes to your web application. You can prevent the deletion by excluding the *App_Data* folder from being deleted during deployment.
## Next steps
-Learn how to...
+> [!div class="nextstepaction"]
+> [Generate load and view Profiler traces](./profiler-data.md)
azure-monitor Monitor Virtual Machine Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-data-collection.md
# Monitor virtual machines with Azure Monitor: Collect data
-This article is part of the guide [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes how to configure collection of data once you've deployed the Azure Monitor agent to your Azure and hybrid virtual machines in Azure Monitor.
+This article is part of the guide [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes how to configure collection of data after you deploy Azure Monitor Agent to your Azure and hybrid virtual machines in Azure Monitor.
-This article provides guidance on collecting the most common types of telemetry from virtual machines. The exact configuration that you choose will depend on the workloads that you run on your machines. Included in each section are sample log query alerts that you can use with that data.
+This article provides guidance on collecting the most common types of telemetry from virtual machines. The exact configuration that you choose depends on the workloads that you run on your machines. Included in each section are sample log query alerts that you can use with that data.
-- See [Monitor virtual machines with Azure Monitor: Analyze monitoring data](monitor-virtual-machine-analyze.md) for more information about analyzing telemetry collected from your virtual machines. -- See [Monitor virtual machines with Azure Monitor: Alerts](monitor-virtual-machine-alerts.md) for more information about using telemetry collected from your virtual machines to create alerts in Azure Monitor.
+- For more information about analyzing telemetry collected from your virtual machines, see [Monitor virtual machines with Azure Monitor: Analyze monitoring data](monitor-virtual-machine-analyze.md).
+- For more information about using telemetry collected from your virtual machines to create alerts in Azure Monitor, see [Monitor virtual machines with Azure Monitor: Alerts](monitor-virtual-machine-alerts.md).
> [!NOTE]
> This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md).

## Data collection rules
-Data collection from the Azure Monitor agent is defined by one or more [data collection rules (DCR)](../essentials/data-collection-rule-overview.md) stored in your Azure subscription and are associated with your virtual machines.
+Data collection from Azure Monitor Agent is defined by one or more [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) that are stored in your Azure subscription and associated with your virtual machines.
-For virtual machines, DCRs will define data such as events and performance counters to collect and specify the Log Analytics workspaces that data should be sent to. The DCR can also use [transformations](../essentials/data-collection-transformations.md) to filter out unwanted data and to add calculated columns. A single machine can be associated with multiple DCRs, and a single DCR can be associated with multiple machines. DCRs are delivered to any machines they're associated with where they're processed by the Azure Monitor agent.
+For virtual machines, DCRs define data such as events and performance counters to collect and specify the Log Analytics workspaces where data should be sent. The DCR can also use [transformations](../essentials/data-collection-transformations.md) to filter out unwanted data and to add calculated columns. A single machine can be associated with multiple DCRs, and a single DCR can be associated with multiple machines. DCRs are delivered to any machines they're associated with where Azure Monitor Agent processes them.
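To make this concrete, here's a rough sketch of the general shape of a DCR that collects one performance counter and routes it to a Log Analytics workspace. This is illustrative rather than a complete resource definition; the counter name, destination name, and workspace resource ID are placeholders:

```json
{
  "properties": {
    "dataSources": {
      "performanceCounters": [
        {
          "name": "cpuCounter",
          "streams": [ "Microsoft-Perf" ],
          "samplingFrequencyInSeconds": 60,
          "counterSpecifiers": [ "\\Processor(_Total)\\% Processor Time" ]
        }
      ]
    },
    "destinations": {
      "logAnalytics": [
        {
          "name": "myWorkspace",
          "workspaceResourceId": "/subscriptions/<id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Microsoft-Perf" ],
        "destinations": [ "myWorkspace" ]
      }
    ]
  }
}
```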
### View data collection rules
-You can view the DCRs in your Azure subscription from **Data Collection Rules** in the **Monitor** menu in the Azure portal. DCRs support other data collection scenarios in Azure Monitor, so all of your DCRs won't necessarily be for virtual machines.
+You can view the DCRs in your Azure subscription from **Data Collection Rules** on the **Monitor** menu in the Azure portal. DCRs support other data collection scenarios in Azure Monitor, so not all of your DCRs are necessarily for virtual machines.
:::image type="content" source="../essentials/media/data-collection-rule-overview/view-data-collection-rules.png" lightbox="../essentials/media/data-collection-rule-overview/view-data-collection-rules.png" alt-text="Screenshot that shows DCRs in the Azure portal.":::

### Create data collection rules
-There are multiple methods to create data collection rules depending on the data collection scenario. In some cases, the Azure portal will walk you through the configuration while other scenarios will require you to edit the DCR directly. When you configure VM insights, it will create a preconfigured DCR for you automatically. The sections below identify common data to collect and how to configure data collection.
+There are multiple methods to create DCRs depending on the data collection scenario. In some cases, the Azure portal walks you through the configuration. Other scenarios require you to edit a DCR directly. When you configure VM insights, it creates a preconfigured DCR for you automatically. The following sections identify common data to collect and how to configure data collection.
-In some cases, you may need to [edit an existing DCR](../essentials/data-collection-rule-edit.md) to add functionality. For example, you may use the Azure portal to create a DCR that collects Windows or Syslog events. You then want to add a transformation to that DCR to filter out columns in the events that you don't want to collect.
+In some cases, you might need to [edit an existing DCR](../essentials/data-collection-rule-edit.md) to add functionality. For example, you might use the Azure portal to create a DCR that collects Windows or Syslog events. You then want to add a transformation to that DCR to filter out columns in the events that you don't want to collect.
-As your environment matures and grows in complexity, you should implement a strategy for organizing your DCRs to assist in their management. See [Best practices for data collection rule creation and management in Azure Monitor](../essentials/data-collection-rule-best-practices.md) for guidance on different strategies.
+As your environment matures and grows in complexity, you should implement a strategy for organizing your DCRs to help their management. For guidance on different strategies, see [Best practices for data collection rule creation and management in Azure Monitor](../essentials/data-collection-rule-best-practices.md).
-## Controlling costs
-Since your Azure Monitor cost is dependent on how much data you collect, you should ensure that you're not collecting any more than you need to meet your monitoring requirements. Your configuration will be a balance between your budget and how much insight you want into the operation of your virtual machines.
+## Control costs
+Because your Azure Monitor cost is dependent on how much data you collect, ensure that you're not collecting more than you need to meet your monitoring requirements. Your configuration is a balance between your budget and how much insight you want into the operation of your virtual machines.
[!INCLUDE [azure-monitor-cost-optimization](../../../includes/azure-monitor-cost-optimization.md)]
-A typical virtual machine will generate between 1GB and 3GB of data per month, but this data size is highly dependent on the configuration of the machine itself, the workloads running on it, and the configuration of your data collection rules. Before you configure data collection across your entire virtual machine environment, you should begin collection on some representative machines to better predict your expected costs when deployed across your environment. Use log queries in [Data volume by computer](../logs/analyze-usage.md#data-volume-by-computer) to determine the amount of billable data collected for each machine and adjust accordingly.
-
-Each data source that you collect may have a different method for filtering out unwanted data. You can also use [transformations](../essentials/data-collection-transformations.md) to implement more granular filtering and also to filter data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data.
-
+A typical virtual machine generates between 1 GB and 3 GB of data per month. This data size depends on the configuration of the machine, the workloads running on it, and the configuration of your DCRs. Before you configure data collection across your entire virtual machine environment, begin collection on some representative machines to better predict your expected costs when deployed across your environment. Use log queries in [Data volume by computer](../logs/analyze-usage.md#data-volume-by-computer) to determine the amount of billable data collected for each machine and adjust accordingly.
+Each data source that you collect might have a different method for filtering out unwanted data. You can use [transformations](../essentials/data-collection-transformations.md) to implement more granular filtering and also to filter data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data.
## Default data collection
-Azure Monitor will automatically perform the following data collection without requiring any additional configuration.
+Azure Monitor automatically performs the following data collection without requiring any other configuration.
### Platform metrics
-Platform metrics for Azure virtual machines include important host metrics such as CPU, network, and disk utilization. They can be viewed on the [Overview page](monitor-virtual-machine-analyze.md#single-machine-experience), analyzed with [metrics explorer](../essentials/tutorial-metrics.md) for the machine in the Azure portal and used for [metric alerts](tutorial-monitor-vm-alert-recommended.md).
+Platform metrics for Azure virtual machines include important host metrics such as CPU, network, and disk utilization. They can be:
+
+- Viewed on the [Overview page](monitor-virtual-machine-analyze.md#single-machine-experience).
+- Analyzed with [metrics explorer](../essentials/tutorial-metrics.md) for the machine in the Azure portal.
+- Used for [metric alerts](tutorial-monitor-vm-alert-recommended.md).
### Activity log
-The [Activity log](../essentials/activity-log.md) is collected automatically and includes the recent activity of the machine, such as any configuration changes and when it was stopped and started. You can view the platform metrics and Activity log collected for each virtual machine host in the Azure portal.
+The [activity log](../essentials/activity-log.md) is collected automatically. It includes the recent activity of the machine, such as any configuration changes and when it was stopped and started. You can view the platform metrics and activity log collected for each virtual machine host in the Azure portal.
-You can [view the Activity log](../essentials/activity-log.md#view-the-activity-log) for an individual machine or for all resources in a subscription. You should [create a diagnostic setting](../essentials/diagnostic-settings.md) to send this data into the same Log Analytics workspace used by your Azure Monitor agent to analyze it with the other monitoring data collected for the virtual machine. There's no cost for ingestion or retention of Activity log data.
+You can [view the activity log](../essentials/activity-log.md#view-the-activity-log) for an individual machine or for all resources in a subscription. [Create a diagnostic setting](../essentials/diagnostic-settings.md) to send this data into the same Log Analytics workspace used by Azure Monitor Agent to analyze it with the other monitoring data collected for the virtual machine. There's no cost for ingestion or retention of activity log data.
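Once such a diagnostic setting is in place, activity log records land in the `AzureActivity` table of the workspace. As a sketch (the VM name `myvm` is hypothetical), a query like the following surfaces recent administrative operations against a machine:

```kusto
// Recent administrative operations against a hypothetical VM named "myvm".
// Assumes a diagnostic setting routes the activity log to the workspace.
AzureActivity
| where CategoryValue == "Administrative"
| where _ResourceId has "myvm"   // hypothetical VM name
| project TimeGenerated, OperationNameValue, ActivityStatusValue, Caller
| order by TimeGenerated desc
```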
### VM availability information in Azure Resource Graph
-[Azure Resource Graph](../../governance/resource-graph/overview.md) is an Azure service that allows you to use the same KQL query language used in log queries to query your Azure resources at scale with complex filtering, grouping, and sorting by resource properties. You can use [VM health annotations](../../service-health/resource-health-vm-annotation.md) to Azure Resource Graph (ARG) for detailed failure attribution and downtime analysis.
+With [Azure Resource Graph](../../governance/resource-graph/overview.md), you can use the same Kusto Query Language used in log queries to query your Azure resources at scale with complex filtering, grouping, and sorting by resource properties. You can use [VM health annotations](../../service-health/resource-health-vm-annotation.md) to Resource Graph for detailed failure attribution and downtime analysis.
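As an illustration, this Resource Graph query (run in Azure Resource Graph Explorer, not Log Analytics) is one way to list the current availability state of each VM from the `HealthResources` table; treat it as a sketch rather than a definitive report:

```kusto
// Current availability state of each VM, from the HealthResources table
// in Azure Resource Graph (run in Resource Graph Explorer).
healthresources
| where type == "microsoft.resourcehealth/availabilitystatuses"
| extend availabilityState = tostring(properties.availabilityState)
| project targetResourceId = tolower(tostring(properties.targetResourceId)), availabilityState
```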
-See [Monitor virtual machines with Azure Monitor: Analyze monitoring data](monitor-virtual-machine-analyze.md) for details on what data is collected and how to view it.
+For information on what data is collected and how to view it, see [Monitor virtual machines with Azure Monitor: Analyze monitoring data](monitor-virtual-machine-analyze.md).
### VM insights
-When you enable VM insights, then it will create a data collection rule, with the **_MSVMI-_** prefix that collects the following information. You can use this same DCR with other machines as opposed to creating a new one for each VM.
+When you enable VM insights, it creates a DCR with the *_MSVMI-_* prefix that collects the following information. You can use this same DCR with other machines as opposed to creating a new one for each VM.
-- Common performance counters for the client operating system are sent to the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table in the Log Analytics workspace. Counter names will be normalized to use the same common name regardless of the operating system type. See [How to query logs from VM insights](vminsights-log-query.md#performance-records) for a list of performance counters that are collected.
-- If you specified processes and dependencies to be collected, then the following tables are populated:
+- Common performance counters for the client operating system are sent to the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table in the Log Analytics workspace. Counter names are normalized to use the same common name regardless of the operating system type. For a list of performance counters that are collected, see [How to query logs from VM insights](vminsights-log-query.md#performance-records).
+- If you specified processes and dependencies to be collected, the following tables are populated:
- - [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport) - Traffic for open server ports on the machine
- - [VMComputer](/azure/azure-monitor/reference/tables/vmcomputer) - Inventory data for the machine
- - [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) - Traffic for inbound and outbound connections to and from the machine
- - [VMProcess](/azure/azure-monitor/reference/tables/vmprocess) - Processes running on the machine
-
-By default, [VM insights](../vm/vminsights-overview.md) will not enable collection of processes and dependencies to save data ingestion costs. This data is required for the map feature and will also deploy the dependency agent to the machine. [Enable this collection](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent) if you want to use this feature.
--
+ - [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport): Traffic for open server ports on the machine
+ - [VMComputer](/azure/azure-monitor/reference/tables/vmcomputer): Inventory data for the machine
+ - [VMConnection](/azure/azure-monitor/reference/tables/vmconnection): Traffic for inbound and outbound connections to and from the machine
+ - [VMProcess](/azure/azure-monitor/reference/tables/vmprocess): Processes running on the machine
+By default, [VM insights](../vm/vminsights-overview.md) doesn't enable collection of processes and dependencies, to save data ingestion costs. This data is required for the Map feature. Enabling this collection also deploys the dependency agent to the machine. [Enable this collection](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent) if you want to use this feature.
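The performance counters that VM insights collects can then be queried from the `InsightsMetrics` table. A minimal sketch:

```kusto
// Hourly average of available memory per computer, from the
// InsightsMetrics table populated by VM insights.
InsightsMetrics
| where Namespace == "Memory" and Name == "AvailableMB"
| summarize avg(Val) by bin(TimeGenerated, 1h), Computer
```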
## Collect Windows and Syslog events
-The operating system and applications in virtual machines will often write to the Windows Event Log or Syslog. You may create an alert as soon as a single event is found or wait for a series of matching events within a particular time window. You may also collect events for later analysis such as identifying particular trends over time, or for performing troubleshooting after a problem occurs.
+The operating system and applications in virtual machines often write to the Windows event log or Syslog. You might create an alert as soon as a single event is found or wait for a series of matching events within a particular time window. You might also collect events for later analysis, such as identifying particular trends over time, or for performing troubleshooting after a problem occurs.
-See [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) for guidance on creating a DCR to collect Windows and Syslog events. This will allow you to quickly create a DCR using the most common Windows event logs and Syslog facilities filtering by event level. For more granular filtering by criteria such as event ID, you can create a custom filter using [XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries). You can further filter the collected data by [editing the DCR](../essentials/data-collection-rule-edit.md) to add a [transformation](../essentials/data-collection-transformations.md).
+For guidance on how to create a DCR to collect Windows and Syslog events, see [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md). You can quickly create a DCR by using the most common Windows event logs and Syslog facilities filtering by event level.
-Use the following guidance as a recommended starting point for event collection. Modify the DCR settings to filter unneeded events and add additional events depending on your requirements.
+For more granular filtering by criteria such as event ID, you can create a custom filter by using [XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries). You can further filter the collected data by [editing the DCR](../essentials/data-collection-rule-edit.md) to add a [transformation](../essentials/data-collection-transformations.md).
+Use the following guidance as a recommended starting point for event collection. Modify the DCR settings to filter unneeded events and add other events depending on your requirements.
| Source | Strategy |
|:|:|
-| Windows events | Collect at least **Critical**, **Error**, and **Warning** events for the **System** and **Application** logs to support alerting. Add **Information** events to analyze trends and support troubleshooting. **Verbose** events will rarely be useful and typically shouldn't be collected. |
-| Syslog events | Collect at least **LOG_WARNING** events for each facility to support alerting. Add **Information** events to analyze trends and support troubleshooting. **LOG_DEBUG** events will rarely be useful and typically shouldn't be collected. |
+| Windows events | Collect at least **Critical**, **Error**, and **Warning** events for the **System** and **Application** logs to support alerting. Add **Information** events to analyze trends and support troubleshooting. **Verbose** events are rarely useful and typically shouldn't be collected. |
+| Syslog events | Collect at least **LOG_WARNING** events for each facility to support alerting. Add **Information** events to analyze trends and support troubleshooting. **LOG_DEBUG** events are rarely useful and typically shouldn't be collected. |
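As a sketch of the strategy above, a log query alert rule for Windows error events might use a query like this (the log names and bin size are illustrative):

```kusto
// Count of Windows error events per computer in 15-minute bins,
// limited to the System and Application logs, for use in a log alert rule.
Event
| where EventLog in ("System", "Application")
| where EventLevelName == "Error"
| summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
```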
+### Sample log queries: Windows events
-### Sample log queries - Windows events
-
-| Query | Description |
+| Query | Description |
|:|:|
-| `Event` | All Windows events. |
-| `Event | where EventLevelName == "Error"` |All Windows events with severity of error. |
-| `Event | summarize count() by Source` |Count of Windows events by source. |
-| `Event | where EventLevelName == "Error" | summarize count() by Source` |Count of Windows error events by source. |
+| `Event` | All Windows events |
+| `Event | where EventLevelName == "Error"` |All Windows events with severity of error |
+| `Event | summarize count() by Source` |Count of Windows events by source |
+| `Event | where EventLevelName == "Error" | summarize count() by Source` |Count of Windows error events by source |
-### Sample log queries - Syslog events
+### Sample log queries: Syslog events
-| Query | Description |
-|: |: |
+| Query | Description |
+|:|:|
| `Syslog` |All Syslogs |
| `Syslog | where SeverityLevel == "error"` |All Syslog records with severity of error |
| `Syslog | summarize AggregatedValue = count() by Computer` |Count of Syslog records by computer |
| `Syslog | summarize AggregatedValue = count() by Facility` |Count of Syslog records by facility |
-
-## Collect performance counters
-Performance data from the client can be sent to either [Azure Monitor Metrics](../essentials/data-platform-metrics.md) or [Azure Monitor Logs](../logs/data-platform-logs.md), and you'll typically send them to both destinations. If you enabled VM insights, then a common set of performance counters is collected in Logs to support its performance charts. You can't modify this set of counters, but you can create additional DCRs to collect additional counters and send them to different destinations.
+## Collect performance counters
+Performance data from the client can be sent to either [Azure Monitor Metrics](../essentials/data-platform-metrics.md) or [Azure Monitor Logs](../logs/data-platform-logs.md), and you typically send them to both destinations. If you enabled VM insights, a common set of performance counters is collected in Logs to support its performance charts. You can't modify this set of counters, but you can create other DCRs to collect more counters and send them to different destinations.
There are multiple reasons why you would want to create a DCR to collect guest performance:

- You aren't using VM insights, so client performance data isn't already being collected.
-- Collect additional performance counters that aren't being collected by VM insights.
+- Collect other performance counters that VM insights isn't collecting.
- Collect performance counters from other workloads running on your client.
- Send performance data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) where you can use them with metrics explorer and metrics alerts.
-See [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) for guidance on creating a DCR to collect performance counters. This will allow you to quickly create a DCR using the most common counters. For more granular filtering by criteria such as event ID, you can create a custom filter using [XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries).
+For guidance on creating a DCR to collect performance counters, see [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md). You can quickly create a DCR by using the most common counters. For more granular filtering by criteria such as event ID, you can create a custom filter by using [XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries).
> [!NOTE]
-> You may choose to combine performance and event collection in the same data collection rule.
--
+> You might choose to combine performance and event collection in the same DCR.
| Destination | Description |
|:|:|
-| Metrics | Host metrics are automatically sent to Azure Monitor Metrics, and you can use a DCR to collect client metrics so they can be analyzed together with [metrics explorer](../essentials/metrics-getting-started.md) or used with [metrics alerts](../alerts/alerts-create-new-alert-rule.md?tabs=metric). This data is stored for 93 days. |
-| Logs | Performance data stored in Azure Monitor Logs can be stored for extended periods and can be analyzed along with your event data using [log queries](../logs/log-query-overview.md) with [Log Analytics](../logs/log-analytics-overview.md) or [log query alerts](../alerts/alerts-create-new-alert-rule.md?tabs=log). You can also corelate data using complex logic across multiple machines, regions, and subscriptions.<br><br>Performance data is sent to the following tables:<br>VM insights - [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics)<br>Other performance data - [Perf](/azure/azure-monitor/reference/tables/perf) |
+| Metrics | Host metrics are automatically sent to Azure Monitor Metrics. You can use a DCR to collect client metrics so that they can be analyzed together with [metrics explorer](../essentials/metrics-getting-started.md) or used with [metrics alerts](../alerts/alerts-create-new-alert-rule.md?tabs=metric). This data is stored for 93 days. |
+| Logs | Performance data stored in Azure Monitor Logs can be stored for extended periods. The data can be analyzed along with your event data by using [log queries](../logs/log-query-overview.md) with [Log Analytics](../logs/log-analytics-overview.md) or [log query alerts](../alerts/alerts-create-new-alert-rule.md?tabs=log). You can also correlate data by using complex logic across multiple machines, regions, and subscriptions.<br><br>Performance data is sent to the following tables:<br>- VM insights: [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics)<br>- Other performance data: [Perf](/azure/azure-monitor/reference/tables/perf) |
### Sample log queries
-The following samples use the `Perf` table with custom performance data. For details on performance data collected by VM insights, see [How to query logs from VM insights](../vm/vminsights-log-query.md#performance-records).
+The following samples use the `Perf` table with custom performance data. For information on performance data collected by VM insights, see [How to query logs from VM insights](../vm/vminsights-log-query.md#performance-records).
-| Query | Description |
-|: |:|
+| Query | Description |
+|:|:|
| `Perf` | All Performance data |
| `Perf | where Computer == "MyComputer"` |All Performance data from a particular computer |
| `Perf | where CounterName == "Current Disk Queue Length"` |All Performance data for a particular counter |
| `Perf | where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName == "_Total" | summarize AVGCPU = avg(CounterValue) by Computer` |Average CPU Utilization across all computers |
-| `Perf | where CounterName == "% Processor Time" | summarize AggregatedValue = max(CounterValue) by Computer` |Maximum CPU Utilization across all computers |
-| `Perf | where ObjectName == "LogicalDisk" and CounterName == "Current Disk Queue Length" and Computer == "MyComputerName" | summarize AggregatedValue = avg(CounterValue) by InstanceName` |Average Current Disk Queue length across all the instances of a given computer |
+| `Perf | where CounterName == "% Processor Time" | summarize AggregatedValue = max(CounterValue) by Computer` |Maximum CPU Utilization across all computers |
+| `Perf | where ObjectName == "LogicalDisk" and CounterName == "Current Disk Queue Length" and Computer == "MyComputerName" | summarize AggregatedValue = avg(CounterValue) by InstanceName` |Average Current Disk Queue length across all the instances of a given computer |
| `Perf | where CounterName == "Disk Transfers/sec" | summarize AggregatedValue = percentile(CounterValue, 95) by Computer` |95th Percentile of Disk Transfers/Sec across all computers |
| `Perf | where CounterName == "% Processor Time" and InstanceName == "_Total" | summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 1h), Computer` |Hourly average of CPU usage across all computers |
| `Perf | where Computer == "MyComputer" and CounterName startswith_cs "%" and InstanceName == "_Total" | summarize AggregatedValue = percentile(CounterValue, 70) by bin(TimeGenerated, 1h), CounterName` | Hourly 70 percentile of every % percent counter for a particular computer |
| `Perf | where CounterName == "% Processor Time" and InstanceName == "_Total" and Computer == "MyComputer" | summarize ["min(CounterValue)"] = min(CounterValue), ["avg(CounterValue)"] = avg(CounterValue), ["percentile75(CounterValue)"] = percentile(CounterValue, 75), ["max(CounterValue)"] = max(CounterValue) by bin(TimeGenerated, 1h), Computer` |Hourly average, minimum, maximum, and 75-percentile CPU usage for a specific computer |
-| | |
| `Perf | where ObjectName == "MSSQL$INST2:Databases" and InstanceName == "master"` | All Performance data from the Database performance object for the master database from the named SQL Server instance INST2. |

## Collect text logs
Some applications write events to a text log stored on the virtual machine. Create a [custom table and DCR](../agents/data-collection-text-log.md) to collect this data. You define the location of the text log, its detailed configuration, and the schema of the custom table. There's a cost for the ingestion and retention of this data in the workspace.

### Sample log queries
-The column names used here are for example only. The column names for your log will most likely be different.
+The column names used here are examples only. The column names for your log will most likely be different.
-| Query | Description |
-|: |: |
+| Query | Description |
+|:|:|
| `MyApp_CL | summarize count() by code` | Count the number of events by code. |
| `MyApp_CL | where status == "Error" | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)` | Create an alert rule on any error event. |
-
--
## Collect IIS logs
-IIS running on Windows machines writes logs to a text file. Configure IIS log collection using [Collect IIS logs with Azure Monitor Agent](../agents/data-collection-iis.md). There's a cost for the ingestion and retention of this data in the workspace. Records from the IIS log are stored in the [W3CIISLog](/azure/azure-monitor/reference/tables/w3ciislog) table in the Log Analytics workspace. There's a cost for the ingestion and retention of this data in the workspace.
+IIS running on Windows machines writes logs to a text file. Configure IIS log collection by using [Collect IIS logs with Azure Monitor Agent](../agents/data-collection-iis.md). There's a cost for the ingestion and retention of this data in the workspace.
-### Sample log queries
+Records from the IIS log are stored in the [W3CIISLog](/azure/azure-monitor/reference/tables/w3ciislog) table in the Log Analytics workspace. There's a cost for the ingestion and retention of this data in the workspace.
+### Sample log queries
-| Query | Description |
-|: |: |
+| Query | Description |
+|:|:|
| `W3CIISLog | where csHost=="www.contoso.com" | summarize count() by csUriStem` | Count the IIS log entries by URL for the host www.contoso.com. |
| `W3CIISLog | summarize sum(csBytes) by Computer` | Review the total bytes received by each IIS machine. |
-
## Monitor a service or daemon
-To monitor the status of a Windows service or Linux daemon, enable the [Change Tracking and Inventory](../../automation/change-tracking/overview.md) solution in [Azure Automation](../../automation/automation-intro.md).
+To monitor the status of a Windows service or Linux daemon, enable the [Change Tracking and Inventory](../../automation/change-tracking/overview.md) solution in [Azure Automation](../../automation/automation-intro.md).
+ Azure Monitor has no ability on its own to monitor the status of a service or daemon. There are some possible methods to use, such as looking for events in the Windows event log, but this method is unreliable. You can also look for the process associated with the service running on the machine from the [VMProcess](/azure/azure-monitor/reference/tables/vmprocess) table populated by VM insights. This table only updates every hour, which isn't typically sufficient if you want to use this data for alerting.

> [!NOTE]
> The Change Tracking and Analysis solution is different from the [Change Analysis](vminsights-change-analysis.md) feature in VM insights. This feature is in public preview and not yet included in this scenario.
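As a sketch of the `VMProcess` approach and its limitation, the following query reports when a given process (here a hypothetical `w3wp`) last checked in per machine. Because the table updates only about hourly, treat the result as informational rather than a fast alerting signal:

```kusto
// Last time a hypothetical process "w3wp" was reported per computer.
// VMProcess updates roughly hourly, so this lags real process state.
VMProcess
| where ExecutableName =~ "w3wp"
| summarize LastSeen = max(TimeGenerated) by Computer
| order by LastSeen desc
```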
-For different options to enable the Change Tracking solution on your virtual machines, see [Enable Change Tracking and Inventory](../../automation/change-tracking/overview.md#enable-change-tracking-and-inventory). This solution includes methods to configure virtual machines at scale. You'll have to [create an Azure Automation account](../../automation/quickstarts/create-azure-automation-account-portal.md) to support the solution.
+For different options to enable the Change Tracking solution on your virtual machines, see [Enable Change Tracking and Inventory](../../automation/change-tracking/overview.md#enable-change-tracking-and-inventory). This solution includes methods to configure virtual machines at scale. You have to [create an Azure Automation account](../../automation/quickstarts/create-azure-automation-account-portal.md) to support the solution.
When you enable Change Tracking and Inventory, two new tables are created in your Log Analytics workspace. Use these tables for logs queries and log query alert rules.
| [ConfigurationChange](/azure/azure-monitor/reference/tables/configurationchange) | Changes to in-guest configuration data |
| [ConfigurationData](/azure/azure-monitor/reference/tables/configurationdata) | Last reported state for in-guest configuration data |
-
### Sample log queries
- **List all services and daemons that have recently started.**
  | sort by Computer, SvcName
  ```

-- **Alert when a specific service stops.**
-Use this query in a log alert rule.
+- **Alert when a specific service stops.** Use this query in a log alert rule.
  ```kusto
  ConfigurationData
Use this query in a log alert rule.
  | summarize AggregatedValue = count() by Computer, SvcName, SvcDisplayName, SvcState, bin(TimeGenerated, 15m)
  ```

-- **Alert when one of a set of services stops.**
-Use this query in a log alert rule.
+- **Alert when one of a set of services stops.** Use this query in a log alert rule.
  ```kusto
  let services = dynamic(["omskd","cshost","schedule","wuauserv","heathservice","efs","wsusservice","SrmSvc","CertSvc","wmsvc","vpxd","winmgmt","netman","smsexec","w3svc","sms_site_vss_writer","ccmexe","spooler","eventsystem","netlogon","kdc","ntds","lsmserv","gpsvc","dns","dfsr","dfs","dhcp","DNSCache","dmserver","messenger","w32time","plugplay","rpcss","lanmanserver","lmhosts","eventlog","lanmanworkstation","wnirm","mpssvc","dhcpserver","VSS","ClusSvc","MSExchangeTransport","MSExchangeIS"]);
Use this query in a log alert rule.
Port monitoring verifies that a machine is listening on a particular port. Two potential strategies for port monitoring are described here.

### Dependency agent tables
-If you're using VM insights with Processes and dependencies collection enabled, you can use [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) and [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport) to analyze connections and ports on the machine. The VMBoundPort table is updated every minute with each process running on the computer and the port it's listening on. You can create a log query alert similar to the missing heartbeat alert to find processes that have stopped or to alert when the machine isn't listening on a particular port.
+If you're using VM insights with **Processes and dependencies collection** enabled, you can use [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) and [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport) to analyze connections and ports on the machine. The `VMBoundPort` table is updated every minute with each process running on the computer and the port it's listening on. You can create a log query alert similar to the missing heartbeat alert to find processes that have stopped or to alert when the machine isn't listening on a particular port.
-
-- **Review the count of ports open on your VMs, which is useful for assessing which VMs have configuration and security vulnerabilities.**
+- **Review the count of ports open on your VMs to assess which VMs have configuration and security vulnerabilities.**
  ```kusto
  VMBoundPort
  | order by OpenPorts desc
  ```

-- **List the bound ports on your VMs, which is useful for assessing which VMs have configuration and security vulnerabilities.**
+- **List the bound ports on your VMs to assess which VMs have configuration and security vulnerabilities.**
```kusto VMBoundPort | distinct Computer, Port, ProcessName ``` - - **Analyze network activity by port to determine how your application or service is configured.** ```kusto
### Connection Manager
The [Connection Monitor](../../network-watcher/connection-monitor-overview.md) feature of [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) is used to test connections to a port on a virtual machine. A test verifies that the machine is listening on the port and that it's accessible on the network.
-Connection Manager requires the Network Watcher extension on the client machine initiating the test. It doesn't need to be installed on the machine being tested. For details, see [Tutorial - Monitor network communication using the Azure portal](../../network-watcher/connection-monitor.md).
-There's an extra cost for Connection Manager. For details, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
+Connection Manager requires the Network Watcher extension on the client machine initiating the test. It doesn't need to be installed on the machine being tested. For more information, see [Tutorial: Monitor network communication using the Azure portal](../../network-watcher/connection-monitor.md).
+There's an extra cost for Connection Manager. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
## Run a process on a local machine
-Monitoring of some workloads requires a local process. An example is a PowerShell script that runs on the local machine to connect to an application and collect or process data. You can use [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md), which is part of [Azure Automation](../../automation/automation-intro.md), to run a local PowerShell script. There's no direct charge for Hybrid Runbook Worker, but there is a cost for each runbook that it uses.
-
-The runbook can access any resources on the local machine to gather required data. It can't send data directly to Azure Monitor or create an alert. To create an alert, have the runbook write an entry to a custom log and then configure that log to be collected by Azure Monitor. Create a log query alert rule that fires on that log entry.
-
+Monitoring of some workloads requires a local process. An example is a PowerShell script that runs on the local machine to connect to an application and collect or process data. You can use [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md), which is part of [Azure Automation](../../automation/automation-intro.md), to run a local PowerShell script. There's no direct charge for Hybrid Runbook Worker, but there's a cost for each runbook that it uses.
+The runbook can access any resources on the local machine to gather required data. It can't send data directly to Azure Monitor or create an alert. To create an alert, have the runbook write an entry to a custom log. Then configure that log to be collected by Azure Monitor. Create a log query alert rule that fires on that log entry.
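For example, if the runbook wrote its results to a custom log table, a log query alert rule along these lines could fire on the entry. The table and column names here are hypothetical placeholders (custom log tables carry a `_CL` suffix, and string columns an `_s` suffix):

```Kusto
// Hypothetical custom log table populated by the local runbook.
MyRunbookResults_CL
| where TimeGenerated > ago(15m)
| where Status_s == "Failed"
```

A log query alert rule built on this query would fire whenever the runbook records a failure entry within the evaluation window.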
## Next steps * [Analyze monitoring data collected for virtual machines](monitor-virtual-machine-analyze.md) * [Create alerts from collected data](monitor-virtual-machine-alerts.md)-
azure-monitor Tutorial Monitor Vm Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-guest.md
Title: Tutorial - Collect guest logs and metrics from Azure virtual machine
-description: Create data collection rule to collect guest logs and metrics from Azure virtual machine.
+ Title: 'Tutorial: Collect guest logs and metrics from an Azure virtual machine'
+description: Create a data collection rule to collect guest logs and metrics from an Azure virtual machine.
Last updated 12/03/2022
-# Tutorial: Collect guest logs and metrics from Azure virtual machine
-To monitor the guest operating system and workloads on an Azure virtual machine, you need to install the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and create a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) that specifies which data to collect. VM insights will install the agent and collection performance data, but you need to create additional data collection rules to collect log data such as Windows event log and syslog. VM insights also doesn't send guest performance data to Azure Monitor Metrics where it can be analyzed with metrics explorer and used with metrics alerts.
+# Tutorial: Collect guest logs and metrics from an Azure virtual machine
+To monitor the guest operating system and workloads on an Azure virtual machine, install [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) and create a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) that specifies which data to collect. VM insights installs the agent and collects performance data, but you need to create more DCRs to collect log data such as Windows event logs and Syslog. VM insights also doesn't send guest performance data to Azure Monitor Metrics, where it can be analyzed with metrics explorer and used with metrics alerts.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a data collection rule that sends guest performance data to Azure Monitor Metrics and log events to Azure Monitor Logs.
+> * Create a DCR that sends guest performance data to Azure Monitor Metrics and log events to Azure Monitor Logs.
> * View guest logs in Log Analytics. > * View guest metrics in metrics explorer. ## Prerequisites
-To complete this tutorial you need the following:
--- An Azure virtual machine to monitor.
+To complete this tutorial, you need an Azure virtual machine to monitor.
> [!IMPORTANT]
-> This tutorial does not require VM insights to be enabled for the virtual machine. The Azure Monitor agent will be installed on the VM if it isn't already installed.
+> This tutorial doesn't require VM insights to be enabled for the virtual machine. Azure Monitor Agent is installed on the VM if it isn't already installed.
-## Create data collection rule
-[Data collection rules](../essentials/data-collection-rule-overview.md) in Azure Monitor define data to collect and where it should be sent. When you define the data collection rule using the Azure portal, you specify the virtual machines it should be applied to. The Azure Monitor agent will automatically be installed on any virtual machines that don't already have it.
+## Create a data collection rule
+[Data collection rules](../essentials/data-collection-rule-overview.md) in Azure Monitor define data to collect and where it should be sent. When you define the DCR by using the Azure portal, you specify the virtual machines it should be applied to. Azure Monitor Agent is automatically installed on any virtual machines that don't already have it.
> [!NOTE]
-> You must currently install the Azure Monitor agent from **Monitor** menu in the Azure portal. This functionality is not yet available from the virtual machine's menu.
+> You must currently install Azure Monitor Agent from the **Monitor** menu in the Azure portal. This functionality isn't yet available from the virtual machine's menu.
-From the **Monitor** menu in the Azure portal, select **Data Collection Rules** and then **Create** to create a new data collection rule.
+On the **Monitor** menu in the Azure portal, select **Data Collection Rules**. Then select **Create** to create a new DCR.
-On the **Basics** tab, provide a **Rule Name** which is the name of the rule displayed in the Azure portal. Select a **Subscription**, **Resource Group**, and **Region** where the DCR and its associations will be stored. These do not need to be the same as the resources being monitored. The **Platform Type** defines the options that are available as you define the rest of the DCR. Select *Windows* or *Linux* if it will be associated only those resources or *Custom* if it will be associated with both types.
+On the **Basics** tab, enter a **Rule Name**, which is the name of the rule displayed in the Azure portal. Select a **Subscription**, **Resource Group**, and **Region** where the DCR and its associations are stored. These resources don't need to be the same as the resources being monitored. The **Platform Type** defines the options that are available as you define the rest of the DCR. Select **Windows** or **Linux** if the rule is associated with only those resources, or select **Custom** if it's associated with both types.
## Select resources
-On the **Resources** tab, identify one or more virtual machines that the data collection rule will apply to. The Azure Monitor agent will be installed on any that don't already have it. Click **Add resources** and select either your virtual machines or the resource group or subscription where your virtual machine is located. The data collection rule will apply to all virtual machines in the selected scope.
+On the **Resources** tab, identify one or more virtual machines to which the DCR applies. Azure Monitor Agent is installed on any VMs that don't already have it. Select **Add resources** and select either your virtual machines or the resource group or subscription where your virtual machine is located. The DCR applies to all virtual machines in the selected scope.
## Select data sources
-A single data collection rule can have multiple data sources. For this tutorial, we'll use the same rule to collect both guest metrics and guest logs. We'll send metrics to both to Azure Monitor Metrics and to Azure Monitor Logs so that they can be analyzed both with metrics explorer and Log Analytics.
+A single DCR can have multiple data sources. For this tutorial, we use the same rule to collect both guest metrics and guest logs. We send metrics to Azure Monitor Metrics and to Azure Monitor Logs so that they can both be analyzed with metrics explorer and Log Analytics.
-On the **Collect and deliver** tab, click **Add data source**. For the **Data source type**, select **Performance counters**. Leave the **Basic** setting and select the counters that you want to collect. **Custom** allows you to select individual metric values.
+On the **Collect and deliver** tab, select **Add data source**. For the **Data source type**, select **Performance counters**. Leave the **Basic** setting and select the counters that you want to collect. Use **Custom** to select individual metric values.
-Select the **Destination** tab. **Azure Monitor Metrics** should already be listed. Click **Add destination** to add another. Select **Azure Monitor Logs** for the **Destination type**. Select your Log Analytics workspace for the **Account or namespace**. Click **Add data source** to save the data source.
+Select the **Destination** tab. **Azure Monitor Metrics** should already be listed. Select **Add destination** to add another. Select **Azure Monitor Logs** for **Destination type**. Select your Log Analytics workspace for **Account or namespace**. Select **Add data source** to save the data source.
-Click **Add data source** again to add logs to the data collection rule. For the **Data source type**, select **Windows event logs** or **Linux syslog**. Select the types of log data that you want to collect.
+Select **Add data source** again to add logs to the DCR. For the **Data source type**, select **Windows event logs** or **Linux syslog**. Select the types of log data that you want to collect.
-Select the **Destination** tab. **Azure Monitor Logs** should already be selected for the **Destination type**. Select your Log Analytics workspace for the **Account or namespace**. If you don't already have a workspace, then you can select the default workspace for your subscription, which will automatically be created. Click **Add data source** to save the data source.
+Select the **Destination** tab. **Azure Monitor Logs** should already be selected for **Destination type**. Select your Log Analytics workspace for **Account or namespace**. If you don't already have a workspace, you can select the default workspace for your subscription, which is automatically created. Select **Add data source** to save the data source.
-Click **Review + create** to create the data collection rule and install the Azure Monitor agent on the selected virtual machines.
+Select **Review + create** to create the DCR and install Azure Monitor Agent on the selected virtual machines.
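Behind the portal experience, these steps produce a DCR resource. The following is a heavily abbreviated, hand-written sketch of what a rule with one performance counter and one Windows event log data source can look like; the property names follow the public DCR schema, but the resource IDs, counter, and names are illustrative placeholders:

```json
{
  "location": "eastus",
  "properties": {
    "dataSources": {
      "performanceCounters": [
        {
          "name": "perfCounters",
          "streams": [ "Microsoft-Perf" ],
          "samplingFrequencyInSeconds": 60,
          "counterSpecifiers": [ "\\Processor Information(_Total)\\% Processor Time" ]
        }
      ],
      "windowsEventLogs": [
        {
          "name": "eventLogs",
          "streams": [ "Microsoft-Event" ],
          "xPathQueries": [ "System!*[System[(Level=1 or Level=2 or Level=3)]]" ]
        }
      ]
    },
    "destinations": {
      "logAnalytics": [
        {
          "name": "myWorkspace",
          "workspaceResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
        }
      ],
      "azureMonitorMetrics": { "name": "azureMonitorMetrics-default" }
    },
    "dataFlows": [
      { "streams": [ "Microsoft-Perf" ], "destinations": [ "azureMonitorMetrics-default", "myWorkspace" ] },
      { "streams": [ "Microsoft-Event" ], "destinations": [ "myWorkspace" ] }
    ]
  }
}
```

Note how the `dataFlows` section routes the performance counter stream to both destinations, matching the dual-destination configuration made on the **Collect and deliver** tab.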
-## Viewing logs
-Data is retrieved from a Log Analytics workspace using a log query written in Kusto Query Language (KQL). While a set of pre-created queries are available for virtual machines, we'll use a simple query to have a look at the events that we're collecting.
+## View logs
+Data is retrieved from a Log Analytics workspace by using a log query written in Kusto Query Language. Although a set of precreated queries is available for virtual machines, we use a simple query to look at the events that we're collecting.
-Select **Logs** from your virtual machines's menu. Log Analytics opens with an empty query window with the scope set to that machine. Any queries will include only records collected from that machine.
+Select **Logs** from your virtual machine's menu. Log Analytics opens with an empty query window with the scope set to that machine. Any queries include only records collected from that machine.
> [!NOTE]
-> The **Queries** window may open when you open Log Analytics. This includes pre-created queries that you can use. For now, close this window since we're going to manually create a simple query.
-
+> The **Queries** window might open when you open Log Analytics. It includes precreated queries that you can use. For now, close this window because we're going to manually create a simple query.
-In the empty query window, type either `Event` or `Syslog` depending on whether your machine is running Windows or Linux and then click **Run**. The events collected within the **Time range** are displayed.
+In the empty query window, enter either **Event** or **Syslog** depending on whether your machine is running Windows or Linux. Then select **Run**. The events collected within the **Time range** are displayed.
> [!NOTE]
-> If the query doesn't return any data, then you may need wait a few minutes until events are created on the virtual machine to be collected. You may also need to modify the data source in the data collection rule to include additional categories of events.
+> If the query doesn't return any data, you might need to wait a few minutes until events are created on the virtual machine to be collected. You might also need to modify the data source in the DCR to include other categories of events.
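Once the basic query returns data, you might narrow the results instead of scanning everything. For example, the following variation counts error events by source (Windows shown; on Linux, substitute the `Syslog` table and its `SeverityLevel` column):

```Kusto
Event
| where EventLevelName == "Error"
| summarize count() by Source
```

This is the same pattern you'd use as the starting point for a log query alert rule on collected events.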
For a tutorial on using Log Analytics to analyze log data, see [Log Analytics tutorial](../logs/log-analytics-tutorial.md). For a tutorial on creating alert rules from log data, see [Tutorial: Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md). ## View guest metrics
-You can view metrics for your host virtual machine with metrics explorer without a data collection rule just like [any other Azure resource](../essentials/tutorial-metrics.md). With the data collection rule though, you can use metrics explorer to view guest metrics in addition to host metrics.
+You can view metrics for your host virtual machine with metrics explorer without a DCR, just like [any other Azure resource](../essentials/tutorial-metrics.md). With the DCR, you can use metrics explorer to view guest metrics in addition to host metrics.
-Select **Metrics** from your virtual machines's menu. Metrics explorer opens with the scope set to your virtual machine. Click **Metric Namespace**, and select **Virtual Machine Guest**.
+Select **Metrics** from your virtual machine's menu. Metrics explorer opens with the scope set to your virtual machine. Select **Metric Namespace** > **Virtual Machine Guest**.
> [!NOTE]
-> If you don't see **Virtual Machine Guest**, you may just need to wait a few more minutes for the agent to be deployed and data to begin collecting.
--
+> If you don't see **Virtual Machine Guest**, you might need to wait a few minutes for the agent to deploy and data to begin collecting.
-The available guest metrics are displayed. Select a **Metric** to add to the chart.
-
-You can get a complete tutorial on viewing and analyzing metric data using metrics explorer in [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md) and on creating metrics alerts in [Tutorial: Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md).
+The available guest metrics are displayed. Select a metric to add to the chart.
+For a tutorial on how to view and analyze metric data by using metrics explorer, see [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md). For a tutorial on how to create metrics alerts, see [Tutorial: Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md).
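Because the DCR in this tutorial sends performance counters to the Log Analytics workspace as well as to Azure Monitor Metrics, you can cross-check a guest metric against the raw collected samples with a log query. The computer name and object name here are illustrative placeholders:

```Kusto
Perf
| where Computer == "computer-name" and ObjectName == "Processor"
| summarize avg(CounterValue) by CounterName, bin(TimeGenerated, 5m)
```

If the counters you selected in the DCR appear here but not in metrics explorer yet, the difference is usually only ingestion latency on the metrics pipeline.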
## Next steps [Recommended alerts](tutorial-monitor-vm-alert-recommended.md) and the [VM Availability metric](tutorial-monitor-vm-alert-availability.md) alert from the virtual machine host but don't have any visibility into the guest operating system and its workloads. Now that you're collecting guest metrics for the virtual machine, you can create metric alerts based on guest metrics such as logical disk space. > [!div class="nextstepaction"] > [Create a metric alert in Azure Monitor](../alerts/tutorial-metric-alert.md)--
azure-monitor Vminsights Enable Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-hybrid.md
Last updated 06/08/2022
# Enable VM insights for a hybrid virtual machine This article describes how to enable VM insights for a virtual machine outside of Azure, including on-premises and other cloud environments.
-> [!IMPORTANT]
-> The recommended method of enabling hybrid VMs is first enabling [Azure Arc for servers](../../azure-arc/servers/overview.md) so that the VMs can be enabled for VM insights using processes similar to Azure VMs. This article describes how to onboard hybrid VMs if you choose not to use Azure Arc.
+The recommended method of enabling hybrid VMs is to first enable [Azure Arc for servers](../../azure-arc/servers/overview.md) so that the VMs can be enabled for VM insights by using processes similar to Azure VMs. This article describes how to onboard hybrid VMs if you choose not to use Azure Arc.
[!INCLUDE [monitoring-limits](../../../includes/azure-monitor-vminsights-agent.md)] - ## Prerequisites - [Create and configure a Log Analytics workspace](./vminsights-configure-workspace.md).-- See [Supported operating systems](./vminsights-enable-overview.md#supported-operating-systems) to ensure that the operating system of the virtual machine or virtual machine scale set you're enabling is supported. -
+- See [Supported operating systems](./vminsights-enable-overview.md#supported-operating-systems) to ensure that the operating system of the virtual machine or virtual machine scale set you're enabling is supported.
## Overview
-Virtual machines outside of Azure require the same Log Analytics agent and Dependency agent that are used for Azure VMs. Since you can't use VM extensions to install the agents though, you must manually install them in the guest operating system or have them installed through some other method.
+Virtual machines outside of Azure require the same Log Analytics agent and Dependency agent that are used for Azure VMs. Because you can't use VM extensions to install the agents, you must manually install them in the guest operating system or have them installed through some other method.
-See [Connect Windows computers to Azure Monitor](../agents/agent-windows.md) or [Connect Linux computers to Azure Monitor](../agents/agent-linux.md) for details on deploying the Log Analytics agent. Details for the Dependency agent are provided in this article.
+For information on how to deploy the Log Analytics agent, see [Connect Windows computers to Azure Monitor](../agents/agent-windows.md) or [Connect Linux computers to Azure Monitor](../agents/agent-linux.md). Details for the Dependency agent are provided in this article.
## Firewall requirements
-Firewall requirements for the Log Analytics agent are provided in [Log Analytics agent overview](../agents/log-analytics-agent.md#network-requirements). The VM insights Map Dependency agent doesn't transmit any data itself, and it doesn't require any changes to firewalls or ports. The Map data is always transmitted by the Log Analytics agent to the Azure Monitor service, either directly or through the [Operations Management Suite gateway](../../azure-monitor/agents/gateway.md) if your IT security policies don't allow computers on the network to connect to the internet.
+Firewall requirements for the Log Analytics agent are provided in [Log Analytics agent overview](../agents/log-analytics-agent.md#network-requirements). The VM insights Map Dependency agent doesn't transmit any data itself, and it doesn't require any changes to firewalls or ports.
+The Map data is always transmitted by the Log Analytics agent to the Azure Monitor service. Data is transmitted either directly or through the [Operations Management Suite gateway](../../azure-monitor/agents/gateway.md) if your IT security policies don't allow computers on the network to connect to the internet.
## Dependency agent >[!NOTE]
->The following information described in this section is also applicable to the [Service Map solution](./service-map.md).
+>The following information described in this section also applies to the [Service Map solution](./service-map.md).
You can download the Dependency agent from these locations:
You can download the Dependency agent from these locations:
| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.16.22650 | BE537D4396625ADD93B8C1D5AF098AE9D9472D8A20B2682B32920C5517F1C041 | | [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.16.22650 | FF86D821BA845833C9FE5F6D5C8A5F7A60617D3AD7D84C75143F3E244ABAAB74 | - ## Install the Dependency agent on Windows
-You can install the Dependency agent manually on Windows computers by running `InstallDependencyAgent-Windows.exe`. If you run this executable file without any options, it starts a setup wizard that you can follow to install the agent interactively. You require *Administrator* privileges on the guest OS to install or uninstall the agent.
+You can install the Dependency agent manually on Windows computers by running `InstallDependencyAgent-Windows.exe`. If you run this executable file without any options, it starts a setup wizard that you can follow to install the agent interactively. You require Administrator privileges on the guest OS to install or uninstall the agent.
The following table highlights the parameters that are supported by setup for the agent from the command line.
Invoke-WebRequest "https://aka.ms/dependencyagentwindows" -OutFile InstallDepend
.\InstallDependencyAgent-Windows.exe /S ``` - ## Install the Dependency agent on Linux The Dependency agent is installed on Linux servers from *InstallDependencyAgent-Linux64.bin*, a shell script with a self-extracting binary. You can run the file by using `sh` or add execute permissions to the file itself.
Files for the Dependency agent are placed in the following directories:
| Service executable files | /opt/microsoft/dependency-agent/bin/microsoft-dependency-agent<br>/opt/microsoft/dependency-agent/bin/microsoft-dependency-agent-manager | | Binary storage files | /var/opt/microsoft/dependency-agent/storage |
-### Shell script
+### Shell script
Use the following sample shell script to download and install the agent: ```
sudo sh InstallDependencyAgent-Linux64.bin -s
## Desired State Configuration
-To deploy the Dependency agent using Desired State Configuration (DSC), you can use the xPSDesiredStateConfiguration module with the following example code:
+To deploy the Dependency agent by using Desired State Configuration (DSC), you can use the `xPSDesiredStateConfiguration` module with the following example code:
```powershell configuration VMInsights {
configuration VMInsights {
} ``` -- ## Troubleshooting
-### VM doesn't appear on the map
+This section offers troubleshooting tips for common issues.
-If your Dependency agent installation succeeded, but you don't see your computer on the map, diagnose the problem by following these steps.
+### VM doesn't appear on the map
-1. Is the Dependency agent installed successfully? You can validate this by checking to see if the service is installed and running.
+If your Dependency agent installation succeeded but you don't see your computer on the map, diagnose the problem by following these steps:
- **Windows**: Look for the service named "Microsoft Dependency agent."
+1. Is the Dependency agent installed successfully? Check to see if the service is installed and running.
- **Linux**: Look for the running process "microsoft-dependency-agent."
+ - **Windows**: Look for the service named "Microsoft Dependency agent."
+ - **Linux**: Look for the running process "microsoft-dependency-agent."
-2. Are you on the [Free pricing tier of Log Analytics](/previous-versions/azure/azure-monitor/insights/solutions)? The Free plan allows for up to five unique computers. Any subsequent computers won't show up on the map, even if the prior five are no longer sending data.
+1. Are you on the [Free pricing tier of Log Analytics](/previous-versions/azure/azure-monitor/insights/solutions)? The Free plan allows for up to five unique computers. Any subsequent computers won't show up on the map, even if the prior five are no longer sending data.
-3. Is the computer sending log and perf data to Azure Monitor Logs? Perform the following query for your computer:
+1. Is the computer sending log and perf data to Azure Monitor Logs? Perform the following query for your computer:
```Kusto Usage | where Computer == "computer-name" | summarize sum(Quantity), any(QuantityUnit) by DataType ```
- Did it return one or more results? Is the data recent? If so, your Log Analytics agent is operating correctly and communicating with the service. If not, check the agent on your server: [Log Analytics agent for Windows troubleshooting](../agents/agent-windows-troubleshoot.md) or [Log Analytics agent for Linux troubleshooting](../agents/agent-linux-troubleshoot.md).
+ Did it return one or more results? Is the data recent? If so, your Log Analytics agent is operating correctly and communicating with the service. If not, check the agent on your server. See [Log Analytics agent for Windows troubleshooting](../agents/agent-windows-troubleshoot.md) or [Log Analytics agent for Linux troubleshooting](../agents/agent-linux-troubleshoot.md).
#### Computer appears on the map but has no processes
-If you see your server on the map, but it has no process or connection data, that indicates that the Dependency agent is installed and running, but the kernel driver didn't load.
-
-Check the C:\Program Files\Microsoft Dependency Agent\logs\wrapper.log file (Windows) or /var/opt/microsoft/dependency-agent/log/service.log file (Linux). The last lines of the file should indicate why the kernel didn't load. For example, the kernel might not be supported on Linux if you updated your kernel.
+If you see your server on the map but it has no process or connection data, the Dependency agent is installed and running, but the kernel driver didn't load.
+Check the *C:\Program Files\Microsoft Dependency Agent\logs\wrapper.log* file (Windows) or */var/opt/microsoft/dependency-agent/log/service.log* file (Linux). The last lines of the file should indicate why the kernel didn't load. For example, the kernel might not be supported on Linux if you updated your kernel.
## Next steps Now that monitoring is enabled for your virtual machines, this information is available for analysis with VM insights. - To view discovered application dependencies, see [View VM insights Map](vminsights-maps.md).- - To identify bottlenecks and overall utilization with your VM's performance, see [View Azure VM performance](vminsights-performance.md).
azure-monitor Vminsights Enable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-policy.md
Title: Enable VM insights by using Azure Policy
-description: Describes how you enable VM insights for multiple Azure virtual machines or Virtual Machine Scale Sets using Azure Policy.
+description: This article describes how you enable VM insights for multiple Azure virtual machines or virtual machine scale sets by using Azure Policy.
Last updated 12/13/2022
# Enable VM insights by using Azure Policy
-[Azure Policy](../../governance/policy/overview.md) lets you set and enforce requirements for all new resources you create and resources you modify. VM insights policy initiatives, which are predefined sets of policies created for VM insights, install the agents required for VM insights and enable monitoring on all new virtual machines in your Azure environment. This article explains how to enable VM insights for Azure virtual machines, Virtual Machine Scale Sets, and hybrid virtual machines connected with Azure Arc using predefined VM insights policy initiates.
+[Azure Policy](../../governance/policy/overview.md) lets you set and enforce requirements for all new resources you create and resources you modify. VM insights policy initiatives, which are predefined sets of policies created for VM insights, install the agents required for VM insights and enable monitoring on all new virtual machines in your Azure environment.
+
+This article explains how to enable VM insights for Azure virtual machines, virtual machine scale sets, and hybrid virtual machines connected with Azure Arc by using predefined VM insights policy initiatives.
> [!NOTE] > For information about how to use Azure Policy with Azure virtual machine scale sets and how to work with Azure Policy directly to enable Azure virtual machines, see [Deploy Azure Monitor at scale using Azure Policy](../best-practices.md). ## VM insights initiatives
-VM insights policy initiatives install Azure Monitor Agent and Dependency Agent on new virtual machines in your Azure environment. Assign these initiatives to a management group, subscription, or resource group to install the agents on Windows or Linux Azure virtual machines in the defined scope automatically.
+VM insights policy initiatives install Azure Monitor Agent and the Dependency agent on new virtual machines in your Azure environment. Assign these initiatives to a management group, subscription, or resource group to install the agents on Windows or Linux Azure virtual machines in the defined scope automatically.
-The initiatives apply to new machines you create and machines you modify, but not to existing VMs.
+The initiatives apply to new machines you create and machines you modify, but not to existing VMs.
|Name |Description |
-|:|:|
-| Enable Azure Monitor for VMs with Azure Monitoring Agent (AMA) | Installs Azure Monitor Agent and Dependency agent on Azure VMs. |
-| Enable Azure Monitor for VMSS with Azure Monitoring Agent (AMA) | Installs Azure Monitor Agent and Dependency agent on Azure Virtual Machine Scale Sets. |
-| Enable Azure Monitor for Hybrid VMs with AMA | Installs Azure Monitor Agent and Dependency agent on hybrid VMs connected with Azure Arc. |
-| Legacy - Enable Azure Monitor for VMs | Installs the Log Analytics agent and Dependency agent on Azure Virtual Machine Scale Sets. |
-| Legacy - Enable Azure Monitor for virtual machine scale sets | Installs the Log Analytics agent and Dependency agent on Azure Virtual Machine Scale Sets. |
+|--|--|
+| Enable Azure Monitor for VMs with Azure Monitoring Agent | Installs Azure Monitor Agent and the Dependency agent on Azure VMs. |
+| Enable Azure Monitor for virtual machine scale sets with Azure Monitoring Agent | Installs Azure Monitor Agent and the Dependency agent on virtual machine scale sets. |
+| Enable Azure Monitor for Hybrid VMs with Azure Monitoring Agent | Installs Azure Monitor Agent and the Dependency agent on hybrid VMs connected with Azure Arc. |
+| Legacy: Enable Azure Monitor for VMs | Installs the Log Analytics agent and the Dependency agent on Azure VMs. |
+| Legacy: Enable Azure Monitor for virtual machine scale sets | Installs the Log Analytics agent and the Dependency agent on virtual machine scale sets. |
## Assign a VM insights policy initiative To assign a VM insights policy initiative to a subscription or management group from the Azure portal:
-1. Search for and open **Policy**.
+1. Search for and open **Policy**.
1. Select **Assignments** > **Assign initiative**. :::image type="content" source="media/vminsights-enable-policy/vm-insights-assign-initiative.png" lightbox="media/vminsights-enable-policy/vm-insights-assign-initiative.png" alt-text="Screenshot that shows the Policy Assignments screen with the Assign initiative button highlighted.":::
- This opens the **Assign initiative** screen.
+ The **Assign initiative** screen appears.
- [![Assign initiative](media/vminsights-enable-policy/assign-initiative.png)](media/vminsights-enable-policy/assign-initiative.png#lightbox)
+ [![Screenshot that shows Assign initiative.](media/vminsights-enable-policy/assign-initiative.png)](media/vminsights-enable-policy/assign-initiative.png#lightbox)
1. Configure the initiative assignment:
- 1. In the **Scope** field, select the management group or subscription to which you'll assign the initiative.
+ 1. In the **Scope** field, select the management group or subscription to which you'll assign the initiative.
1. (Optional) Select **Exclusions** to exclude specific resources from the initiative assignment. For example, if your scope is a management group, you might specify a subscription in that management group to be excluded from the assignment.
- 1. Select the ellipsis (...) next to **Initiative assignment** to launch the policy definition picker, and select one of the VM insights initiatives.
- 1. (Optional) Change the **Assignment name** and add a **Description**.
- 1. On the **Parameters** tab, select a **Log Analytics workspace** to which all virtual machines in the assignment will send data. For virtual machines to send data to different workspaces, create multiple assignments, each with their own scope.
+ 1. Select the ellipsis (**...**) next to **Initiative assignment** to start the policy definition picker. Select one of the VM insights initiatives.
+ 1. (Optional) Change the **Assignment name** and add a **Description**.
+ 1. On the **Parameters** tab, select a **Log Analytics workspace** to which all virtual machines in the assignment will send data. For virtual machines to send data to different workspaces, create multiple assignments, each with their own scope.
If you're assigning a legacy initiative, the workspace must have the *VMInsights* solution installed, as described in [Configure Log Analytics workspace for VM insights](vminsights-configure-workspace.md).
- [![Workspace](media/vminsights-enable-policy/assignment-workspace.png)](media/vminsights-enable-policy/assignment-workspace.png#lightbox)
+ [![Screenshot that shows a workspace.](media/vminsights-enable-policy/assignment-workspace.png)](media/vminsights-enable-policy/assignment-workspace.png#lightbox)
> [!NOTE]
- > If you select a workspace that's not within the scope of the assignment, grant *Log Analytics Contributor* permissions to the policy assignment's Principal ID. Otherwise, you might get a deployment failure like `The client '343de0fe-e724-46b8-b1fb-97090f7054ed' with object id '343de0fe-e724-46b8-b1fb-97090f7054ed' does not have authorization to perform action 'microsoft.operationalinsights/workspaces/read' over scope ...`
+ > If you select a workspace that's not within the scope of the assignment, grant *Log Analytics Contributor* permissions to the policy assignment's principal ID. Otherwise, you might get a deployment failure like:
+ >
+ > `The client '343de0fe-e724-46b8-b1fb-97090f7054ed' with object id '343de0fe-e724-46b8-b1fb-97090f7054ed' does not have authorization to perform action 'microsoft.operationalinsights/workspaces/read' over scope ...`
-1. Select **Review + Create** to review the initiative assignment details and select **Create** to create the assignment.
+1. Select **Review + create** to review the initiative assignment details. Select **Create** to create the assignment.
- Don't create a remediation task at this point because you'll probably need multiple remediation tasks to enable existing virtual machines. For more information about creating remediation tasks, see [Remediate compliance results](#create-a-remediation-task).
+ Don't create a remediation task at this point because you'll probably need multiple remediation tasks to enable existing virtual machines. For more information about how to create remediation tasks, see [Remediate compliance results](#create-a-remediation-task).
-## Review compliance for a VM insights policy initiative
+## Review compliance for a VM insights policy initiative
-After you assign an initiative, you can review and manage compliance for the initiative across your management groups and subscriptions.
+After you assign an initiative, you can review and manage compliance for the initiative across your management groups and subscriptions.
To see how many virtual machines exist in each of the management groups or subscriptions and their compliance status:
-1. Search for and open **Azure Monitor**.
-1. Select **Virtual machines** > **Overview** > **Other onboarding options** and then **Enable** under **Enable using policy**.
+1. Search for and open **Azure Monitor**.
+1. Select **Virtual machines** > **Overview** > **Other onboarding options**. Then under **Enable using policy**, select **Enable**.
- :::image type="content" source="media/vminsights-enable-policy/other-onboarding-options.png" lightbox="media/vminsights-enable-policy/other-onboarding-options.png" alt-text="Screenshot showing other onboarding options page of VM insights with the Enable using policy option.":::
+ :::image type="content" source="media/vminsights-enable-policy/other-onboarding-options.png" lightbox="media/vminsights-enable-policy/other-onboarding-options.png" alt-text="Screenshot that shows other onboarding options page of VM insights with the Enable using policy option.":::
- This opens the **Azure Monitor for VMs Policy Coverage** page.
+ The **Azure Monitor for VMs Policy Coverage** page appears.
- [![VM insights Manage Policy page](media/vminsights-enable-policy/manage-policy-page-01.png)](media/vminsights-enable-policy/manage-policy-page-01.png#lightbox)
+ [![Screenshot that shows the VM insights Azure Monitor for VMs Policy Coverage page.](media/vminsights-enable-policy/manage-policy-page-01.png)](media/vminsights-enable-policy/manage-policy-page-01.png#lightbox)
The following table describes the compliance information presented on the **Azure Monitor for VMs Policy Coverage** page.
- | Function | Description |
- |-|-|
+ | Function | Description |
+ |-|-|
| **Scope** | Management group or subscription to which the initiative applies.|
- | **Role** | Your role in the scope. The role can be **Reader**, **Owner**, **Contributor**, or blank if you have access to the subscription but not to the management group it belongs to. Your role determines which data you can see and whether you can assign policies or initiatives (owner), edit them, or view compliance. |
- | **Total VMs** | Total number of VMs in the scope, regardless of their status. For a management group, this is the sum total of VMs in all related subscriptions or child management groups. |
+ | **My Role** | Your role in the scope. The role can be Reader, Owner, Contributor, or blank if you have access to the subscription but not to the management group to which it belongs. Your role determines which data you can see and whether you can assign policies or initiatives (owner), edit them, or view compliance. |
+ | **Total VMs** | Total number of VMs in the scope, regardless of their status. For a management group, this number is the sum total of VMs in all related subscriptions or child management groups. |
| **Assignment Coverage** | Percentage of VMs covered by the initiative. When you assign the initiative, the scope you select in the assignment could be the scope listed or a subset of it. For instance, if you create an assignment for a subscription (initiative scope) and not a management group (coverage scope), the value of **Assignment Coverage** indicates the VMs in the initiative scope divided by the VMs in coverage scope. In another case, you might exclude some VMs, resource groups, or a subscription from the policy scope. If the value is blank, it indicates that either the policy or initiative doesn't exist or you don't have permission.|
- | **Assignment Status** | **Success** - Azure Monitor Agent or Log Analytics agent and Dependency agent deployed on all machines in scope.<br>**Warning** - The subscription isn't under a management group.<br>**Not Started** - A new assignment was added.<br>**Lock** - You don't have sufficient privileges to the management group.<br>**Blank** - No VMs exist or a policy isn't assigned. |
- | **Compliant VMs** | Number of VMs that have both Azure Monitor Agent or Log Analytics agent and Dependency agent installed. This is blank if there are no assignments, no VMs in the scope, or if you don't have the relevant permissions. |
+ | **Assignment Status** | **Success**: Azure Monitor Agent or the Log Analytics agent and Dependency agent deployed on all machines in scope.<br>**Warning**: The subscription isn't under a management group.<br>**Not Started**: A new assignment was added.<br>**Lock**: You don't have sufficient privileges to the management group.<br>**Blank**: No VMs exist or a policy isn't assigned. |
+ | **Compliant VMs** | Number of VMs that have both Azure Monitor Agent or Log Analytics agent and Dependency agent installed. This field is blank if there are no assignments, no VMs in the scope, or if you don't have the relevant permissions. |
| **Compliance** | The overall compliance number is the sum of distinct compliant resources divided by the sum of all distinct resources. |
- | **Compliance State** | **Compliant** - All VMs in the scope have the Azure Monitor Agent or Log Analytics agent and Dependency agent deployed to them, or any new VMs in the scope haven't yet been evaluated.<br>**Non-compliant** - There are VMs that aren't enabled and may need remediation.<br>**Not Started** - A new assignment was added.<br>**Lock** - You don't have sufficient privileges to the management group.<br>**Blank** - No policy assigned. |
-
-1. Select the ellipsis (...) > **View Compliance**.
+ | **Compliance State** | **Compliant**: All VMs in the scope have Azure Monitor Agent or the Log Analytics agent and Dependency agent deployed to them, or any new VMs in the scope haven't yet been evaluated.<br>**Noncompliant**: There are VMs that aren't enabled and might need remediation.<br>**Not Started**: A new assignment was added.<br>**Lock**: You don't have sufficient privileges to the management group.<br>**Blank**: No policy assigned. |
- [![View compliance](media/vminsights-enable-policy/view-compliance.png)](media/vminsights-enable-policy/view-compliance.png#lightbox)
-
- This opens the **Compliance** page, which lists assignments that match the specified filter and indicates whether they're compliant.
-
- [![Policy compliance for Azure VMs](./media/vminsights-enable-policy/policy-view-compliance.png)](./media/vminsights-enable-policy/policy-view-compliance.png#lightbox)
-
-1. Select an assignment to view its details. This opens the **Initiative compliance** page, which lists the policy definitions in the initiative and whether each is in compliance.
-
- [![Compliance details](media/vminsights-enable-policy/compliance-details.png)](media/vminsights-enable-policy/compliance-details.png#lightbox)
+1. Select the ellipsis (**...**) > **View Compliance**.
+
+ [![Screenshot that shows View Compliance.](media/vminsights-enable-policy/view-compliance.png)](media/vminsights-enable-policy/view-compliance.png#lightbox)
+
+ The **Compliance** page appears. It lists assignments that match the specified filter and indicates whether they're compliant.
- Policy definitions are considered non-compliant if:
+ [![Screenshot that shows Policy compliance for Azure VMs.](./media/vminsights-enable-policy/policy-view-compliance.png)](./media/vminsights-enable-policy/policy-view-compliance.png#lightbox)
- * Azure Monitor Agent, Log Analytics agent, or Dependency agent aren't deployed. Create a remediation task to mitigate.
+1. Select an assignment to view its details. The **Initiative compliance** page appears. It lists the policy definitions in the initiative and whether each is in compliance.
+
+ [![Screenshot that shows Compliance details.](media/vminsights-enable-policy/compliance-details.png)](media/vminsights-enable-policy/compliance-details.png#lightbox)
+
+ Policy definitions are considered noncompliant if:
+
+ * Azure Monitor Agent, the Log Analytics agent, or the Dependency agent aren't deployed. Create a remediation task to mitigate.
 * VM image (OS) isn't identified in the policy definition. Policies can only verify well-known Azure VM images. Check the documentation to see whether the VM OS is supported.
 * Some VMs in the initiative scope are connected to a Log Analytics workspace other than the one that's specified in the policy assignment.
-1. Select a policy definition to open the **Policy compliance** page.
+1. Select a policy definition to open the **Policy compliance** page.
## Create a remediation task If your assignment doesn't show 100% compliance, create remediation tasks to evaluate and enable existing VMs. You'll most likely need to create multiple remediation tasks, one for each policy definition. You can't create a remediation task for an initiative. To create a remediation task:
-
-1. From the **Initiative compliance** page, select **Create Remediation Task**.
- [![Policy compliance details](media/vminsights-enable-policy/policy-compliance-details.png)](media/vminsights-enable-policy/policy-compliance-details.png#lightbox)
+1. On the **Initiative compliance** page, select **Create Remediation Task**.
- This opens the **New remediation task** page.
-
- [![New remediation task](media/vminsights-enable-policy/new-remediation-task.png)](media/vminsights-enable-policy/new-remediation-task.png#lightbox)
+ [![Screenshot that shows Policy compliance details.](media/vminsights-enable-policy/policy-compliance-details.png)](media/vminsights-enable-policy/policy-compliance-details.png#lightbox)
-1. Review **Remediation settings** and **Resources to remediate** and modify as necessary, then select **Remediate** to create the task.
+ The **New remediation task** page appears.
- Once the remediation tasks are complete, your VMs should be compliant with agents installed and enabled for VM insights.
+ [![Screenshot that shows the New remediation task page.](media/vminsights-enable-policy/new-remediation-task.png)](media/vminsights-enable-policy/new-remediation-task.png#lightbox)
+
+1. Review **Remediation settings** and **Resources to remediate** and modify as necessary. Then select **Remediate** to create the task.
+
+ After the remediation tasks are finished, your VMs should be compliant with agents installed and enabled for VM insights.
## Track remediation tasks
-To track the progress of remediation tasks, select **Remediate** from the **Policy** menu and select the **Remediation tasks** tab.
+To track the progress of remediation tasks, on the **Policy** menu, select **Remediation** and select the **Remediation tasks** tab.
-[![Screenshot shows the Policy Remediation pane for Monitor | Virtual Machines.](media/vminsights-enable-policy/remediation.png)](media/vminsights-enable-policy/remediation.png#lightbox)
+[![Screenshot that shows the Policy Remediation page for Monitor | Virtual Machines.](media/vminsights-enable-policy/remediation.png)](media/vminsights-enable-policy/remediation.png#lightbox)
-
## Next steps
-Learn how to:
+Learn how to:
- [View VM insights Map](vminsights-maps.md) to see application dependencies.
-- [View Azure VM performance](vminsights-performance.md) to identify bottlenecks and overall utilization of your VM's performance.
+- [View Azure VM performance](vminsights-performance.md) to identify bottlenecks and overall utilization of your VM's performance.
azure-monitor Vminsights Optout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-optout.md
Last updated 06/08/2022
# Disable monitoring of your VMs in VM insights
-After you enable monitoring of your virtual machines (VMs), you can later choose to disable monitoring in VM insights. This article shows how to disable monitoring for one or more VMs.
+After you enable monitoring of your virtual machines (VMs), you can later choose to disable monitoring in VM insights. This article shows how to disable monitoring for one or more VMs.
Currently, VM insights doesn't support selective disabling of VM monitoring. Your Log Analytics workspace might support VM insights and other solutions. It might also collect other monitoring data. If your Log Analytics workspace provides these services, you need to understand the effect and methods of disabling monitoring before you start.
VM insights relies on the following components to deliver its experience:
* A Log Analytics workspace, which stores monitoring data from VMs and other sources. * A collection of performance counters configured in the workspace. The collection updates the monitoring configuration on all VMs connected to the workspace.
-* `VMInsights`, which is a monitoring solution configured in the workspace. This solution updates the monitoring configuration on all VMs connected to the workspace.
-* `MicrosoftMonitoringAgent` (for Windows) or `OmsAgentForLinux` (for Linux), and `DependencyAgent`, which are Azure VM extensions. These extensions collect and send data to the workspace.
+* The `VMInsights` monitoring solution is configured in the workspace. This solution updates the monitoring configuration on all VMs connected to the workspace.
+* Azure VM extensions `MicrosoftMonitoringAgent` (for Windows) or `OmsAgentForLinux` (for Linux) and `DependencyAgent`. These extensions collect and send data to the workspace.
As you prepare to disable monitoring of your VMs, keep these considerations in mind: * If you evaluated with a single VM and used the preselected default Log Analytics workspace, you can disable monitoring by uninstalling the Dependency agent from the VM and disconnecting the Log Analytics agent from this workspace. This approach is appropriate if you intend to use the VM for other purposes and decide later to reconnect it to a different workspace.
-* If you selected a preexisting Log Analytics workspace that supports other monitoring solutions and data collection from other sources, you can remove solution components from the workspace without interrupting or affecting your workspace.
+* If you selected a preexisting Log Analytics workspace that supports other monitoring solutions and data collection from other sources, you can remove solution components from the workspace without interrupting or affecting your workspace.
>[!NOTE]
-> After removing the solution components from your workspace, you might continue to see performance and map data for your Azure VMs. Data will eventually stop appearing in the **Performance** and **Map** views. The **Enable** option will be available from the selected Azure VM so you can re-enable monitoring in the future.
+> After you remove the solution components from your workspace, you might continue to see performance and map data for your Azure VMs. Data eventually stops appearing in the **Performance** and **Map** views. The **Enable** option is available from the selected Azure VM so that you can reenable monitoring in the future.
## Remove VM insights completely
-If you still need the Log Analytics workspace, follow these steps to completely remove VM insights. You'll remove the `VMInsights` solution from the workspace.
+If you still need the Log Analytics workspace, you can remove the `VMInsights` solution from the workspace.
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the Azure portal, select **All services**. In the list of resources, type **Log Analytics**. As you begin typing, the list filters suggestions based on your input. Select **Log Analytics**.
-3. In your list of Log Analytics workspaces, select the workspace you chose when you enabled VM insights.
-4. On the left, select **Legacy solutions**.
-5. In the list of solutions, select **VMInsights(workspace name)**. On the **Overview** page for the solution, select **Delete**. When prompted to confirm, select **Yes**.
+1. In the Azure portal, select **All services**. In the list of resources, enter **Log Analytics**. As you begin typing, the list filters suggestions based on your input. Select **Log Analytics**.
+1. In your list of Log Analytics workspaces, select the workspace you chose when you enabled VM insights.
+1. On the left, select **Legacy solutions**.
+1. In the list of solutions, select **VMInsights(workspace name)**. On the **Overview** page for the solution, select **Delete**. When you're prompted to confirm, select **Yes**.
-## Disable monitoring and keep the workspace
+## Disable monitoring and keep the workspace
-If your Log Analytics workspace still needs to support monitoring from other sources, following these steps to disable monitoring on the VM that you used to evaluate VM insights. For Azure VMs, you'll remove the dependency agent VM extension and the Log Analytics agent VM extension for Windows or Linux directly from the VM.
+If your Log Analytics workspace still needs to support monitoring from other sources, you can disable monitoring on the VM that you used to evaluate VM insights. For Azure VMs, you remove the dependency agent VM extension and the Log Analytics agent VM extension for Windows or Linux directly from the VM.
>[!NOTE]
->Don't remove the Log Analytics agent if:
+>Don't remove the Log Analytics agent if:
>
-> * Azure Automation manages the VM to orchestrate processes or to manage configuration or updates.
-> * Microsoft Defender for Cloud manages the VM for security and threat detection.
+> * Azure Automation manages the VM to orchestrate processes or to manage configuration or updates.
+> * Microsoft Defender for Cloud manages the VM for security and threat detection.
>
-> If you do remove the Log Analytics agent, you will prevent those services and solutions from proactively managing your VM.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the Azure portal, select **Virtual Machines**.
-3. From the list, select a VM.
-4. On the left, select **Extensions**. On the **Extensions** page, select **DependencyAgent**.
-5. On the extension properties page, select **Uninstall**.
-6. On the **Extensions** page, select **MicrosoftMonitoringAgent**. On the extension properties page, select **Uninstall**.
+> If you do remove the Log Analytics agent, you'll prevent those services and solutions from proactively managing your VM.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the Azure portal, select **Virtual Machines**.
+1. From the list, select a VM.
+1. On the left, select **Extensions**. On the **Extensions** page, select **DependencyAgent**.
+1. On the extension properties page, select **Uninstall**.
+1. On the **Extensions** page, select **MicrosoftMonitoringAgent**. On the extension properties page, select **Uninstall**.
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 02/23/2023 Last updated : 04/27/2023 # Resource limits for Azure NetApp Files
For limits and constraints related to Azure NetApp Files network features, see [
## Determine if a directory is approaching the limit size <a name="directory-limit"></a>
-You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB).
+You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB). If you reach the maximum size limit for a single directory for Azure NetApp Files, the error `No space left on device` occurs.
For a 320-MB directory, the number of blocks is 655360, with each block size being 512 bytes. (That is, 320x1024x1024/512.) This number translates to approximately 4 million files maximum for a 320-MB directory. However, the actual number of maximum files might be lower, depending on factors such as the number of files with non-ASCII characters in the directory. As such, you should use the `stat` command as follows to determine whether your directory is approaching its limit.
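The same arithmetic can be scripted from a client. The sketch below is an illustration, not part of the Azure NetApp Files tooling: it assumes a Linux client, where `os.stat` reports 512-byte blocks (matching the 320x1024x1024/512 calculation above), and the directory path you pass is your own mount point.

```python
import os

# Directory metadata limit: 320 MB expressed in 512-byte blocks.
MAX_BLOCKS = 320 * 1024 * 1024 // 512   # 655360 blocks

def directory_block_usage(path):
    """Return (used_blocks, fraction_of_limit) for a directory.

    On Linux, st_blocks counts 512-byte blocks, so this mirrors the
    320x1024x1024/512 calculation described above.
    """
    used = os.stat(path).st_blocks
    return used, used / MAX_BLOCKS
```

For example, `directory_block_usage("/mnt/anf-volume/dir")` (a hypothetical mount path) returns the block count and how close the directory is to the 320-MB metadata limit.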
azure-resource-manager Control Plane Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/control-plane-metrics.md
Title: Control plane metrics in Azure Monitor
description: Azure Resource Manager metrics in Azure Monitor | Traffic and latency observability for subscription-level control plane requests Previously updated : 12/01/2021 Last updated : 04/26/2023 # Azure Resource Manager metrics in Azure Monitor When you create and manage resources in Azure, your requests are orchestrated through Azure's [control plane](./control-plane-and-data-plane.md), Azure Resource Manager. This article describes how to monitor the volume and latency of control plane requests made to Azure.
-With these metrics, you can observe traffic and latency for control plane requests throughout your subscriptions. For example, you can now figure out when your requests have been throttled or failed by filtering for specific status codes. We've dug into this below in [examining throttled requests](#examining-throttled-requests) and [examining server errors](#examining-server-errors).
+With these metrics, you can observe traffic and latency for control plane requests throughout your subscriptions. For example, you can figure out when your requests have been throttled or have failed by filtering for specific status codes, as shown in [examining throttled requests](#examining-throttled-requests) and [examining server errors](#examining-server-errors).
The metrics are available for up to three months (93 days) and only track synchronous requests. For a scenario like a VM creation, the metrics do not represent the performance or reliability of the long running asynchronous operation.
curl --location --request GET 'https://management.azure.com/subscriptions/000000
--header 'Authorization: bearer {{bearerToken}}' ```
-This will return the definition for the metrics schema. Notably, this schema includes the dimensions you can filter on with the Monitor API:
+This snippet returns the definition for the metrics schema. Notably, this schema includes the dimensions you can filter on with the Monitor API:
| Dimension Name | Description | | - | -- |
Then, after selecting **Apply**, you can visualize your Traffic or Latency contr
### Query traffic and latency control plane metrics via REST API
-After you are authenticated with Azure, you can make a request to retrieve control plane metrics for your subscription. In the script shared below, please replace "00000000-0000-0000-0000-000000000000" with your subscription ID.
-
-The request below will retrieve the average request latency (in seconds) and the total request count for the 2 day timespan, broken down by 1 day intervals:
+After you're authenticated with Azure, you can make a request to retrieve control plane metrics for your subscription. In the script, replace "00000000-0000-0000-0000-000000000000" with your subscription ID. The script retrieves the average request latency (in seconds) and the total request count for the two-day timespan, broken down by one-day intervals:
```bash curl --location --request GET "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/microsoft.insights/metrics?api-version=2021-05-01&interval=P1D&metricnames=Latency&metricnamespace=microsoft.resources/subscriptions&region=global&aggregation=average,count&timespan=2021-11-01T00:00:00Z/2021-11-03T00:00:00Z" \ --header "Authorization: bearer {{bearerToken}}" ```
-In the case of Azure Resource Manager metrics, you can retrieve the traffic count by using the Latency metric and including the 'count' aggregation. You'll see the JSON response for the request below:
+In the case of Azure Resource Manager metrics, you can retrieve the traffic count by using the Latency metric and including the 'count' aggregation. You'll see a JSON response for the request:
```Json {
curl --location --request GET 'https://management.azure.com/subscriptions/000000
--header 'Authorization: bearer {{bearerToken}}' ```
-You can also accomplish generic server errors filtering within portal by setting the filter property to 'StatusCodeClass' and the value to '5xx', similar to what was done in the throttling example above.
+You can also accomplish generic server errors filtering within portal by setting the filter property to 'StatusCodeClass' and the value to '5xx', similar to what was done in the throttling example.
## Next steps
azure-vmware Set Up Backup Server For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/set-up-backup-server-for-azure-vmware-solution.md
Title: Set up Azure Backup Server for Azure VMware Solution
description: Set up your Azure VMware Solution environment to back up virtual machines using Azure Backup Server. Previously updated : 04/20/2023 Last updated : 04/27/2023 # Set up Azure Backup Server for Azure VMware Solution
This article helps you prepare your Azure VMware Solution environment to back up
- **Folder-level auto protection:** vCenter Server lets you organize your VMs into Virtual Machine folders. Azure Backup Server detects these folders. You can use it to protect VMs at the folder level, including all subfolders. During the protection of folders, Azure Backup Server protects the VMs in that folder and protects VMs added later. Azure Backup Server detects new VMs daily, protecting them automatically. As you organize your VMs in recursive folders, Azure Backup Server automatically detects and protects the new VMs deployed in the recursive folders. - **Azure Backup Server continues to protect vMotioned VMs within the cluster:** As VMs are vMotioned for dynamic resource load balancing within the cluster, Azure Backup Server automatically detects and continues VM protection. - **Recover necessary files faster:** Azure Backup Server can recover files or folders from a Windows VM without recovering the entire VM.-- **Application Consistent Backups:** If the *VMware Tools* isn't installed, a crash consistent backup will be executed. When the *VMware Tools* is installed with Microsoft Windows virtual machines, all applications that support VSS freeze and thaw operations will support application consistent backups. When the *VMware Tools* is installed with Linux virtual machines, application consistent snapshots are supported by calling the pre and post scripts.
+- **Application Consistent Backups:** If the *VMware Tools* aren't installed, a crash-consistent backup is executed. When the *VMware Tools* are installed on Microsoft Windows virtual machines, all applications that support VSS freeze and thaw operations support application-consistent backups. When the *VMware Tools* are installed on Linux virtual machines, application-consistent snapshots are supported by calling the pre and post scripts.
## Limitations
azure-web-pubsub Howto Troubleshoot Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-troubleshoot-common-issues.md
+
+ Title: "Troubleshooting guide for Azure Web PubSub Service"
+description: Learn how to troubleshoot common issues for Web PubSub
+++ Last updated : 04/28/2023+
+ms.devlang: csharp
++
+# Troubleshooting guide for common issues
+
+This article provides troubleshooting guidance for some of the common issues that customers might encounter. You can check for the listed errors when you turn on the [`live trace tool`](./howto-troubleshoot-resource-logs.md#capture-resource-logs-by-using-the-live-trace-tool) or collect resource logs from [Azure Monitor](./howto-troubleshoot-resource-logs.md#capture-resource-logs-with-azure-monitor).
++
+## 404 from HttpHandlerUnexpectedResponse
+
+### Possible errors
+
+`Sending message during operation hub:<your-hub>,event:connect,type:sys,category:connections,requestType:Connect got unexpected response with status code 404.`
+
+### Root cause
+
+This error indicates that the event is registered in the Web PubSub settings but fails to get a response from the registered upstream URL.
+
+### Troubleshooting guide
+
+- Check whether your upstream server function or method works correctly.
+- Check whether this event is intended to be registered. If not, remove it from the hub settings on the Web PubSub side.
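To confirm that the upstream itself responds, a minimal stand-in handler can help isolate the problem. The sketch below uses only the Python standard library; the `/eventhandler` path is a hypothetical value, so use whatever path your hub's event handler URL template actually registers. A request to any other path reproduces the `404` that Web PubSub reports.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical path; it must match the URL template registered in the
# Web PubSub hub settings.
EVENT_PATH = "/eventhandler"

class UpstreamStub(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != EVENT_PATH:
            # A path mismatch is what surfaces as the 404 in
            # HttpHandlerUnexpectedResponse.
            self.send_response(404)
            self.end_headers()
            return
        body_len = int(self.headers.get("Content-Length", 0))
        self.rfile.read(body_len)   # consume the CloudEvents payload
        self.send_response(200)     # accept the event
        self.end_headers()

    def log_message(self, *args):   # silence default request logging
        pass

# To run locally:
#   HTTPServer(("localhost", 8080), UpstreamStub).serve_forever()
```

If this stub returns `200` but your real upstream doesn't, the problem is in the upstream routing or framework configuration rather than in the Web PubSub settings.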
+
+## 500 from HttpHandlerUnexpectedResponse
+
+### Possible errors
+
+- `Sending message during operation handshake got unexpected response with status code 500. Detail: Get error from upstream: 'Request is denied as target server is invalid'`
+- `Sending message during operation hub:<your-hub>,event:connect,type:sys,category:connections,requestType:Connect got unexpected response with status code 500.`
+
+### Root cause
+
+This error indicates that the event request got a `500` response from the registered upstream.
+
+### Troubleshooting guide
+
+- Check the upstream side logs to investigate whether any errors occurred while handling the reported event.
+
+## AbuseProtectionResponseMissingAllowedOrigin
+
+### Possible errors
+
+- `Abuse protection for 'https://<upstream-host>/<upstream-path>' missing allowed origins: .`
+
+### Root cause
+
+Web PubSub follows the [CloudEvents Abuse Protection](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection) to validate the upstream webhook. Every registered upstream webhook URL is validated. The `WebHook-Request-Origin` request header is set to the service domain name `<web-pubsub-name>.webpubsub.azure.com`, and the service expects the response to contain a `WebHook-Allowed-Origin` header with this domain name or `*`.
+
+### Troubleshooting guide
+
+Review the upstream side code to ensure that when the upstream receives the `OPTIONS` preflight request from the Web PubSub service, the response correctly contains the expected `WebHook-Allowed-Origin` header and value.
+
+Alternatively, you can use a convenience server SDK, which handles `Abuse Protection` for you automatically:
+
+- [@azure/web-pubsub-express for JavaScript](https://www.npmjs.com/package/@azure/web-pubsub-express)
+- [Microsoft.Azure.WebPubSub.AspNetCore for C#](https://www.nuget.org/packages/Microsoft.Azure.WebPubSub.AspNetCore)
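If you handle the preflight yourself instead of using an SDK, the validation rule above can be sketched as a small helper. This is a minimal illustration only, not the service's or any SDK's actual implementation; `ALLOWED_ORIGINS` and `preflight_response` are hypothetical names for this example:

```python
# Minimal sketch of the Abuse Protection handshake an upstream must satisfy.
# The header names come from the CloudEvents webhook spec; the allow-list is
# a hypothetical configuration value for this example.
ALLOWED_ORIGINS = {"<web-pubsub-name>.webpubsub.azure.com"}  # or {"*"}

def preflight_response(request_headers: dict) -> tuple:
    """Return (status, response_headers) for an OPTIONS validation request."""
    origin = request_headers.get("WebHook-Request-Origin", "")
    if "*" in ALLOWED_ORIGINS or origin in ALLOWED_ORIGINS:
        # Echoing the origin (or "*") tells the service the upstream accepts it.
        return 200, {"WebHook-Allowed-Origin": origin or "*"}
    return 403, {}
```

A response without the `WebHook-Allowed-Origin` header (or with the wrong value) causes the validation failures described in the next sections.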
+
+## 401 Unauthorized from AbuseProtectionResponseInvalidStatusCode
+
+### Possible errors
+
+- `Abuse protection for 'https://<upstream-host>/<upstream-path>' failed: 401.`
+
+### Root cause
+
+This error indicates that the `Abuse Protection` request got a `401` response from the registered upstream URL. For more information, see [`Abuse Protection`](./howto-develop-eventhandler.md#upstream-and-validation).
+
+### Troubleshooting guide
+
+- Check whether any authentication is enabled on the upstream side; for example, check that the `App Keys` for a `WebPubSubTrigger` Azure Function are set correctly. See this [example](./quickstart-serverless.md?#configure-the-web-pubsub-service-event-handler).
+- Check the upstream side logs to investigate how the `Abuse Protection` request is processed.
+
+## Client connection drops
+
+When the client is connected to Azure Web PubSub, the persistent connection between the client and Azure Web PubSub can sometimes drop for various reasons. This section describes several possible causes of such connection drops and provides guidance on how to identify the root cause.
+
+You can check the metric `Connection Close Count` from Azure portal.
+
+### Possible reasons and root cause
+
+| Reason | Root cause |
+|--|--|
+| Normal | Closed by the client |
+| ClosedByAppServer | Closed by a server-triggered REST API call like [`CloseConnection`](/rest/api/webpubsub/dataplane/web-pub-sub/close-connection?tabs=HTTP) |
+| ServiceReload | Closed by the service due to regular maintenance or backend autoscaling |
+| PingTimeout | Closed by the service because the client is unhealthy and the service doesn't receive regular pings |
+| SlowClient | Closed by the service because the client can't receive buffered messages fast enough |
+
+### Troubleshooting guide
+
+`PingTimeout` and `SlowClient` indicate that some clients can't keep up with the current traffic load. Consider controlling the message sending speed and investigating [client traces](./howto-troubleshoot-network-trace.md) to see whether client-side performance can be improved.
+
+## ConnectionCountLimitReached
+
+Each Web PubSub tier has a hard limit on concurrent connections. This error indicates that your traffic exceeds the supported connection count. For more information about pricing, see [Web PubSub pricing](https://azure.microsoft.com/pricing/details/web-pubsub/).
+
+### Solution
+
+Scale up to a paid tier (Standard or Premium) to get at least 1,000 connections, or scale out to more units that support more connections.
+
azure-web-pubsub Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/key-concepts.md
Previously updated : 07/28/2022 Last updated : 04/28/2023
Here are some important terms used by the service:
[!INCLUDE [Terms](includes/terms.md)]
+> [!IMPORTANT]
+> `Hub`, `Group`, and `UserId` are important concepts when you manage clients and send messages. They're required parameters in various REST API calls as plain text, so __DO NOT__ put sensitive information, such as credentials or bearer tokens, in these fields; doing so carries a high risk of leaks.
+ ## Workflow A typical workflow using the service is shown below:
backup Backup Azure Microsoft Azure Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-microsoft-azure-backup.md
Title: Use Azure Backup Server to back up workloads description: In this article, learn how to prepare your environment to protect and back up workloads using Microsoft Azure Backup Server (MABS). Previously updated : 03/01/2023 Last updated : 04/27/2023
After you've downloaded all the files, select **MicrosoftAzureBackupInstaller.ex
![Setup extracting files for install](./media/backup-azure-microsoft-azure-backup/extract/03.png)
-Once the extraction process complete, check the box to launch the freshly extracted *setup.exe* to begin installing Microsoft Azure Backup Server and select the **Finish** button.
+Once the extraction process completes, check the box to launch the freshly extracted *setup.exe* to begin installing Microsoft Azure Backup Server and select the **Finish** button.
### Installing the software package
-1. Select **Microsoft Azure Backup** to launch the setup wizard.
+1. Select **Microsoft Azure Backup Server** to launch the setup wizard.
- ![Microsoft Azure Backup Setup Wizard](./media/backup-azure-microsoft-azure-backup/launch-screen2.png)
-2. On the Welcome screen, select the **Next** button. This takes you to the *Prerequisite Checks* section. On this screen, select **Check** to determine if the hardware and software prerequisites for Azure Backup Server have been met. If all prerequisites are met successfully, you'll see a message indicating that the machine meets the requirements. Select the **Next** button.
+ :::image type="content" source="./media/backup-azure-microsoft-azure-backup/launch-setup-wizard.png" alt-text="Screenshot shows Microsoft Azure Backup Setup Wizard.":::
+2. On the **Welcome** screen, select **Next**.
- ![Azure Backup Server - Welcome and Prerequisites check](./media/backup-azure-microsoft-azure-backup/prereq/prereq-screen2.png)
+ This takes you to the *Prerequisite Checks* section. On this screen, select **Check** to determine if the hardware and software prerequisites for Azure Backup Server have been met. If all prerequisites are met successfully, you'll see a message indicating that the machine meets the requirements. Select the **Next** button.
+
+ :::image type="content" source="./media/backup-azure-microsoft-azure-backup/prereq/welcome-screen.png" alt-text="Screenshot shows Azure Backup Server welcome and prerequisites check.":::
3. The Azure Backup Server installation package comes bundled with the appropriate SQL Server binaries needed. When starting a new Azure Backup Server installation, pick the option **Install new Instance of SQL Server with this Setup** and select the **Check and Install** button. Once the prerequisites are successfully installed, select **Next**. >[!NOTE]
Once the extraction process complete, check the box to launch the freshly extrac
>If you wish to use your own SQL server, the supported SQL Server versions are SQL Server 2022 and 2019. All SQL Server versions should be Standard or Enterprise 64-bit. >Azure Backup Server won't work with a remote SQL Server instance. The instance being used by Azure Backup Server needs to be local. If you're using an existing SQL server for MABS, the MABS setup only supports the use of *named instances* of SQL server.
- ![Azure Backup Server - SQL check](./media/backup-azure-microsoft-azure-backup/sql/01.png)
+ :::image type="content" source="./media/backup-azure-microsoft-azure-backup/sql/install-new-instance-of-sql-server.png" alt-text="Screenshot shows Azure Backup Server SQL check.":::
If a failure occurs with a recommendation to restart the machine, do so and select **Check Again**. If there are any SQL configuration issues, reconfigure SQL according to the SQL guidelines and retry to install/upgrade MABS using the existing instance of SQL.
Once the extraction process complete, check the box to launch the freshly extrac
4. Provide a location for the installation of Microsoft Azure Backup server files and select **Next**.
- ![Provide location for installation of files](./media/backup-azure-microsoft-azure-backup/space-screen.png)
+ :::image type="content" source="./media/backup-azure-microsoft-azure-backup/space-screen.png" alt-text="Screenshot shows how to provide location for installation of files.":::
The scratch location is a requirement for back up to Azure. Ensure the scratch location is at least 5% of the data planned to be backed up to the cloud. For disk protection, separate disks need to be configured once the installation completes. For more information about storage pools, see [Prepare data storage](/system-center/dpm/plan-long-and-short-term-data-storage).
Once the extraction process complete, check the box to launch the freshly extrac
5. Provide a strong password for restricted local user accounts and select **Next**.
- ![Provide strong password](./media/backup-azure-microsoft-azure-backup/security-screen.png)
+ :::image type="content" source="./media/backup-azure-microsoft-azure-backup/security-screen.png" alt-text="Screenshot shows how to provide strong password.":::
6. Select whether you want to use *Microsoft Update* to check for updates and select **Next**. > [!NOTE]
Once the extraction process complete, check the box to launch the freshly extrac
> >
- ![Microsoft Update Opt-In](./media/backup-azure-microsoft-azure-backup/update-opt-screen2.png)
+ :::image type="content" source="./media/backup-azure-microsoft-azure-backup/update-opt-screen2.png" alt-text="Screenshot shows the Microsoft Update Opt-In page.":::
7. Review the *Summary of Settings* and select **Install**.
- ![Summary of settings](./media/backup-azure-microsoft-azure-backup/summary-screen.png)
+ :::image type="content" source="./media/backup-azure-microsoft-azure-backup/summary-screen.png" alt-text="Screenshot shows the summary of settings.":::
+ 8. The installation happens in phases. In the first phase, the Microsoft Azure Recovery Services Agent is installed on the server. The wizard also checks for Internet connectivity. If Internet connectivity is available, you can continue with the installation. If not, you need to provide proxy details to connect to the Internet. >[!Important]
Once the extraction process complete, check the box to launch the freshly extrac
The next step is to configure the Microsoft Azure Recovery Services Agent. As a part of the configuration, you'll have to provide your vault credentials to register the machine to the Recovery Services vault. You'll also provide a passphrase to encrypt/decrypt the data sent between Azure and your premises. You can automatically generate a passphrase or provide your own minimum 16-character passphrase. Continue with the wizard until the agent has been configured.
- ![Register Server Wizard](./media/backup-azure-microsoft-azure-backup/mars/04.png)
+ :::image type="content" source="./media/backup-azure-microsoft-azure-backup/mars/register-server-wizard.png" alt-text="Screenshot shows the Register Server Wizard.":::
9. Once registration of the Microsoft Azure Backup server successfully completes, the overall setup wizard proceeds to the installation and configuration of SQL Server and the Azure Backup Server components. Once the SQL Server component installation completes, the Azure Backup Server components are installed.
- ![Azure Backup Server setup progress](./media/backup-azure-microsoft-azure-backup/final-install/venus-installation-screen.png)
+ :::image type="content" source="./media/backup-azure-microsoft-azure-backup/final-install/venus-installation-screen.png" alt-text="Screenshot shows the Azure Backup Server setup progress.":::
When the installation step has completed, the product's desktop icons will have been created as well. Double-click the icon to launch the product.
backup Backup Azure Sql Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-mabs.md
-# Back up SQL Server to Azure by using Azure Backup Server
+# Back up SQL Server to Azure using Azure Backup Server
This article describes how to back up and restore SQL Server to Azure by using Microsoft Azure Backup Server (MABS).
To protect SQL Server databases in Azure, first create a backup policy:
1. Select **Next**. MABS shows the overall storage space available. It also shows the potential disk space utilization.
- ![Screenshot shows how to set up disk allocation in MABS.](./media/backup-azure-backup-sql/pg-storage.png)
+ :::image type="content" source="./media/backup-azure-backup-sql/postgresql-storage-inline.png" alt-text="Screenshot shows how to set up disk allocation in MABS." lightbox="./media/backup-azure-backup-sql/postgresql-storage-expanded.png":::
*Total data size* is the size of the data you want to back up, and disk space to be provisioned on DPM is the space that MABS recommends for the protection group. DPM chooses the ideal backup volume based on the settings. However, you can edit the backup volume choices in the disk allocation details. For the workloads, select the preferred storage in the dropdown menu. The edits change the values for *Total Storage* and *Free Storage* in the **Available Disk Storage** pane. *Underprovisioned space* is the amount of storage that DPM suggests you add to the volume for continuous smooth backups.
backup Backup Mabs Install Azure Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-install-azure-stack.md
Azure Backup Server shares code with Data Protection Manager. You'll see referen
> Azure Backup Server won't work with a remote SQL Server instance. The instance used by Azure Backup Server must be local. >
- ![Azure Backup Server - SQL settings](./media/backup-mabs-install-azure-stack/mabs-install-wizard-sql-install-9.png)
+ :::image type="content" source="./media/backup-azure-microsoft-azure-backup/sql/install-new-instance-of-sql-server.png" alt-text="Screenshot shows Azure Backup Server SQL check.":::
After checking, if the virtual machine has the necessary prerequisites to install Azure Backup Server, select **Next**.
Azure Backup Server shares code with Data Protection Manager. You'll see referen
6. On the **Security Settings** screen, provide a strong password for restricted local user accounts and select **Next**.
- ![Security settings screen](./media/backup-mabs-install-azure-stack/mabs-install-wizard-security-12.png)
+ :::image type="content" source="./media/backup-azure-microsoft-azure-backup/update-opt-screen2.png" alt-text="Screenshot shows the Microsoft Update Opt-In page.":::
7. On the **Microsoft Update Opt-In** screen, select whether you want to use *Microsoft Update* to check for updates and select **Next**.
bastion Shareable Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md
By default, users in your org will have only read access to shared links. If a u
## Considerations * Shareable Links isn't currently supported for peered VNETs across tenants.
-* Shareable Links isn't supported for national clouds during preview.
+* Shareable Links isn't currently supported over Virtual WAN.
+* Shareable Links doesn't support connections to on-premises or non-Azure VMs and VMSS.
* The Standard SKU is required for this feature. ## Prerequisites
batch Batch Parallel Node Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-parallel-node-tasks.md
Title: Run tasks concurrently to maximize usage of Batch compute nodes
-description: Increase efficiency and lower costs by using fewer compute nodes and running tasks in parallel on each node in an Azure Batch pool
+description: Learn how to increase efficiency and lower costs by using fewer compute nodes and parallelism in an Azure Batch pool.
Previously updated : 04/13/2021 Last updated : 04/10/2023 ms.devlang: csharp
You can maximize resource usage on a smaller number of compute nodes in your poo
While some scenarios work best with all of a node's resources dedicated to a single task, certain workloads may see shorter job times and lower costs when multiple tasks share those resources. Consider the following scenarios: -- **Minimize data transfer** for tasks that are able to share data. You can dramatically reduce data transfer charges by copying shared data to a smaller number of nodes, then executing tasks in parallel on each node. This especially applies if the data to be copied to each node must be transferred between geographic regions.-- **Maximize memory usage** for tasks which require a large amount of memory, but only during short periods of time, and at variable times during execution. You can employ fewer, but larger, compute nodes with more memory to efficiently handle such spikes. These nodes will have multiple tasks running in parallel on each node, but each task can take advantage of the nodes' plentiful memory at different times.
+- **Minimize data transfer** for tasks that are able to share data. You can dramatically reduce data transfer charges by copying shared data to a smaller number of nodes, then executing tasks in parallel on each node. This strategy especially applies if the data to be copied to each node must be transferred between geographic regions.
+- **Maximize memory usage** for tasks that require a large amount of memory, but only during short periods of time, and at variable times during execution. You can employ fewer, but larger, compute nodes with more memory to efficiently handle such spikes. These nodes have multiple tasks running in parallel on each node, but each task can take advantage of the nodes' plentiful memory at different times.
- **Mitigate node number limits** when inter-node communication is required within a pool. Currently, pools configured for inter-node communication are limited to 50 compute nodes. If each node in such a pool is able to execute tasks in parallel, a greater number of tasks can be executed simultaneously. - **Replicate an on-premises compute cluster**, such as when you first move a compute environment to Azure. If your current on-premises solution executes multiple tasks per compute node, you can increase the maximum number of node tasks to more closely mirror that configuration.
While some scenarios work best with all of a node's resources dedicated to a sin
As an example, imagine a task application with CPU and memory requirements such that [Standard\_D1](../cloud-services/cloud-services-sizes-specs.md#d-series) nodes are sufficient. However, in order to finish the job in the required time, 1,000 of these nodes are needed.
-Instead of using Standard\_D1 nodes that have 1 CPU core, you could use [Standard\_D14](../cloud-services/cloud-services-sizes-specs.md#d-series) nodes that have 16 cores each, and enable parallel task execution. This means that 16 times fewer nodes could be used--instead of 1,000 nodes, only 63 would be required. If large application files or reference data are required for each node, job duration and efficiency are improved, since the data is copied to only 63 nodes.
+Instead of using Standard\_D1 nodes that have one CPU core, you could use [Standard\_D14](../cloud-services/cloud-services-sizes-specs.md#d-series) nodes that have 16 cores each, and enable parallel task execution. You could potentially use 16 times fewer nodes: instead of 1,000 nodes, only 63 would be required. If large application files or reference data are required for each node, job duration and efficiency are improved, since the data is copied to only 63 nodes.
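The node-count arithmetic above is a simple back-of-envelope calculation, sketched here for illustration (the variable names are invented for this example):

```python
import math

# Back-of-envelope node sizing: replace many single-core nodes with
# fewer multi-core nodes running tasks in parallel.
single_core_nodes = 1000   # Standard_D1 nodes (1 core each)
cores_per_big_node = 16    # Standard_D14 nodes

# Round up: a partially used node still counts as a whole node.
big_nodes_needed = math.ceil(single_core_nodes / cores_per_big_node)
print(big_nodes_needed)  # 63
```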
## Enable parallel task execution
You configure compute nodes for parallel task execution at the pool level. With
> [!NOTE] > You can set the `taskSlotsPerNode` element and [TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool) property only at pool creation time. They can't be modified after a pool has already been created.
-Azure Batch allows you to set task slots per node up to (4x) the number of node cores. For example, if the pool is configured with nodes of size "Large" (four cores), then `taskSlotsPerNode` may be set to 16. However, regardless of how many cores the node has, you can't have more than 256 task slots per node. For details on the number of cores for each of the node sizes, see [Sizes for Cloud Services](../cloud-services/cloud-services-sizes-specs.md). For more information on service limits, see [Quotas and limits for the Azure Batch service](batch-quota-limit.md).
+Azure Batch allows you to set task slots per node up to (4x) the number of node cores. For example, if the pool is configured with nodes of size "Large" (four cores), then `taskSlotsPerNode` may be set to 16. However, regardless of how many cores the node has, you can't have more than 256 task slots per node. For details on the number of cores for each of the node sizes, see [Sizes for Cloud Services (classic)](../cloud-services/cloud-services-sizes-specs.md). For more information on service limits, see [Batch service quotas and limits](batch-quota-limit.md).
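The limit described above (up to four slots per core, never more than 256 per node) can be expressed as a small helper; the function name here is hypothetical, chosen only to illustrate the rule:

```python
def max_task_slots_per_node(node_cores: int) -> int:
    """Upper bound Batch allows for taskSlotsPerNode on a node of this size."""
    return min(4 * node_cores, 256)

# A "Large" node with four cores allows up to 16 slots...
assert max_task_slots_per_node(4) == 16
# ...but even very large nodes are capped at 256 slots.
assert max_task_slots_per_node(96) == 256
```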
> [!TIP]
-> Be sure to take into account the `taskSlotsPerNode` value when you construct an [autoscale formula](/rest/api/batchservice/pool/enableautoscale) for your pool. For example, a formula that evaluates `$RunningTasks` could be dramatically affected by an increase in tasks per node. For more information, see [Automatically scale compute nodes in an Azure Batch pool](batch-automatic-scaling.md).
+> Be sure to take into account the `taskSlotsPerNode` value when you construct an [autoscale formula](/rest/api/batchservice/pool/enableautoscale) for your pool. For example, a formula that evaluates `$RunningTasks` could be dramatically affected by an increase in tasks per node. For more information, see [Create an automatic formula for scaling compute nodes in a Batch pool](batch-automatic-scaling.md).
## Specify task distribution
When enabling concurrent tasks, it's important to specify how you want the tasks
By using the [CloudPool.TaskSchedulingPolicy](/dotnet/api/microsoft.azure.batch.cloudpool.taskschedulingpolicy) property, you can specify that tasks should be assigned evenly across all nodes in the pool ("spreading"). Or you can specify that as many tasks as possible should be assigned to each node before tasks are assigned to another node in the pool ("packing").
-As an example, consider the pool of [Standard\_D14](../cloud-services/cloud-services-sizes-specs.md#d-series) nodes (in the example above) that is configured with a [CloudPool.TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool.taskslotspernode) value of 16. If the [CloudPool.TaskSchedulingPolicy](/dotnet/api/microsoft.azure.batch.cloudpool.taskschedulingpolicy) is configured with a [ComputeNodeFillType](/dotnet/api/microsoft.azure.batch.common.computenodefilltype) of *Pack*, it would maximize usage of all 16 cores of each node and allow an [autoscaling pool](batch-automatic-scaling.md) to remove unused nodes (nodes without any tasks assigned) from the pool. This minimizes resource usage and saves money.
+As an example, consider the pool of [Standard\_D14](../cloud-services/cloud-services-sizes-specs.md#d-series) nodes (in the previous example) that is configured with a [CloudPool.TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool.taskslotspernode) value of 16. If the [CloudPool.TaskSchedulingPolicy](/dotnet/api/microsoft.azure.batch.cloudpool.taskschedulingpolicy) is configured with a [ComputeNodeFillType](/dotnet/api/microsoft.azure.batch.common.computenodefilltype) of *Pack*, it would maximize usage of all 16 cores of each node and allow an [autoscaling pool](batch-automatic-scaling.md) to remove unused nodes (nodes without any tasks assigned) from the pool. Autoscaling minimizes resource usage and can save money.
## Define variable slots per task
-A task can be defined with [CloudTask.RequiredSlots](/dotnet/api/microsoft.azure.batch.cloudtask.requiredslots) property, specifying how many slots it requires to run on a compute node. The default value is 1. You can set variable task slots if your tasks have different weights regarding to resource usage on the compute node. This lets each compute node have a reasonable number of concurrent running tasks without overwhelming system resources like CPU or memory.
+A task can be defined with the [CloudTask.RequiredSlots](/dotnet/api/microsoft.azure.batch.cloudtask.requiredslots) property, specifying how many slots it requires to run on a compute node. The default value is 1. You can set variable task slots if your tasks have different weights associated with their resource usage on the compute node. Variable task slots let each compute node have a reasonable number of concurrent running tasks without overwhelming system resources like CPU or memory.
-For example, for a pool with property `taskSlotsPerNode = 8`, you can submit multi-core required CPU-intensive tasks with `requiredSlots = 8`, while other tasks can be set to `requiredSlots = 1`. When this mixed workload is scheduled, the CPU-intensive tasks will run exclusively on their compute nodes, while other tasks can run concurrently (up to eight tasks at once) on other nodes. This helps you balance your workload across compute nodes and improve resource usage efficiency.
+For example, for a pool with property `taskSlotsPerNode = 8`, you can submit CPU-intensive tasks that require multiple cores with `requiredSlots = 8`, while other tasks can be set to `requiredSlots = 1`. When this mixed workload is scheduled, the CPU-intensive tasks run exclusively on their compute nodes, while other tasks can run concurrently (up to eight tasks at once) on other nodes. This approach helps you balance your workload across compute nodes and improve resource usage efficiency.
-Be sure you don't specify a task's `requiredSlots` to be greater than the pool's `taskSlotsPerNode`. This will result in the task never being able to run. The Batch Service doesn't currently validate this conflict when you submit tasks because a job may not have a pool bound at submission time, or it could be changed to a different pool by disabling/re-enabling.
+Be sure you don't specify a task's `requiredSlots` to be greater than the pool's `taskSlotsPerNode`, or the task never runs. The Batch service doesn't currently validate this conflict when you submit tasks, because a job may not have a pool bound at submission time, or it could change to a different pool by disabling and re-enabling.
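Because the service doesn't validate this conflict at submission time, a client-side check can catch it early. The following is a hypothetical sketch in Python (the Batch APIs themselves are .NET and REST); `validate_required_slots` is an invented helper name:

```python
def validate_required_slots(required_slots: int, task_slots_per_node: int) -> None:
    """Raise early if a task could never be scheduled on the target pool."""
    if required_slots > task_slots_per_node:
        raise ValueError(
            f"requiredSlots={required_slots} exceeds the pool's "
            f"taskSlotsPerNode={task_slots_per_node}; the task would never run."
        )

validate_required_slots(8, 8)   # OK: fills one node exactly
# validate_required_slots(9, 8) would raise ValueError
```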
> [!TIP] > When using variable task slots, it's possible that large tasks with more required slots can temporarily fail to be scheduled because not enough slots are available on any compute node, even when there are still idle slots on some nodes. You can raise the job priority for these tasks to increase their chance to compete for available slots on nodes. >
-> The Batch service emits the [TaskScheduleFailEvent](batch-task-schedule-fail-event.md) when it fails to schedule a task to run, and keeps retrying the scheduling until required slots become available. You can listen to that event to detect potential task scheduling issues and mitigate accordingly.
+> The Batch service emits the [TaskScheduleFailEvent](batch-task-schedule-fail-event.md) when it fails to schedule a task to run and keeps retrying the scheduling until required slots become available. You can listen to that event to detect potential task scheduling issues and mitigate accordingly.
## Batch .NET example
The following [Batch .NET](/dotnet/api/microsoft.azure.batch) API code snippets
### Create a pool with multiple task slots per node
-This code snippet shows a request to create a pool that contains four nodes, with four task slots allowed per node. It specifies a task scheduling policy that will fill each node with tasks prior to assigning tasks to another node in the pool.
+This code snippet shows a request to create a pool that contains four nodes, with four task slots allowed per node. It specifies a task scheduling policy that fills each node with tasks prior to assigning tasks to another node in the pool.
For more information on adding pools by using the Batch .NET API, see [BatchClient.PoolOperations.CreatePool](/dotnet/api/microsoft.azure.batch.pooloperations.createpool).
pool.Commit();
### Create a task with required slots
-This code snippet creates a task with non-default `requiredSlots`. This task will only run when there are enough free slots available on a compute node.
+This code snippet creates a task with nondefault `requiredSlots`. This task runs only when there are enough free slots available on a compute node.
```csharp CloudTask task = new CloudTask(taskId, taskCommandLine)
CloudTask task = new CloudTask(taskId, taskCommandLine)
### List compute nodes with counts for running tasks and slots
-This code snippet lists all compute nodes in the pool, and prints out the counts for running tasks and task slots per node.
+This code snippet lists all compute nodes in the pool and prints the counts for running tasks and task slots per node.
```csharp ODATADetailLevel nodeDetail = new ODATADetailLevel(selectClause: "id,runningTasksCount,runningTaskSlotsCount");
For more information on adding pools by using the REST API, see [Add a pool to a
### Create a task with required slots
-This snippet shows a request to add a task with non-default `requiredSlots`. This task will only run when there are enough free slots available on the compute node.
+This snippet shows a request to add a task with nondefault `requiredSlots`. This task only runs when there are enough free slots available on the compute node.
```json {
The [ParallelTasks](https://github.com/Azure/azure-batch-samples/tree/master/CSh
This C# console application uses the [Batch .NET](/dotnet/api/microsoft.azure.batch) library to create a pool with one or more compute nodes. It executes a configurable number of tasks on those nodes to simulate a variable load. Output from the application shows which nodes executed each task. The application also provides a summary of the job parameters and duration.
-As an example, below is the summary portion of the output from two different runs of the ParallelTasks sample application. Job durations shown here don't include pool creation time, since each job was submitted to a previously created pool whose compute nodes were in the *Idle* state at submission time.
+The following example shows the summary portion of the output from two different runs of the ParallelTasks sample application. Job durations shown here don't include pool creation time, since each job was submitted to a previously created pool whose compute nodes were in the *Idle* state at submission time.
The first execution of the sample application shows that with a single node in the pool and the default setting of one task per node, the job duration is over 30 minutes.
-```
+```console
Nodes: 1 Node size: large Task slots per node: 1
Tasks: 32
Duration: 00:30:01.4638023 ```
-The second run of the sample shows a significant decrease in job duration. This is because the pool was configured with four tasks per node, allowing for parallel task execution to complete the job in nearly a quarter of the time.
+The second run of the sample shows a significant decrease in job duration. This reduction is because the pool was configured with four tasks per node, allowing for parallel task execution to complete the job in nearly a quarter of the time.
-```
+```console
Nodes: 1 Node size: large Task slots per node: 4
Duration: 00:08:48.2423500
## Next steps -- Try the [Batch Explorer](https://azure.github.io/BatchExplorer/) Heat Map. Batch Explorer is a free, rich-featured, standalone client tool to help create, debug, and monitor Azure Batch applications. When you're executing the [ParallelTasks](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/ParallelTasks) sample application, the Batch Explorer Heat Map feature lets you easily visualize the execution of parallel tasks on each node.-- Explore [Azure Batch samples on GitHub](https://github.com/Azure/azure-batch-samples).-- Learn more about [Batch task dependencies](batch-task-dependencies.md).
+- [Batch Explorer](https://azure.github.io/BatchExplorer/)
+- [Azure Batch samples on GitHub](https://github.com/Azure/azure-batch-samples)
+- [Create task dependencies to run tasks that depend on other tasks](batch-task-dependencies.md)
batch Quick Run Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-run-dotnet.md
Title: Quickstart - Run your first Azure Batch job with the .NET API
-description: "In this quickstart, you run an Azure Batch sample job and tasks from a C# application with the Batch .NET client library."
+ Title: 'Quickstart: Use .NET to create a pool and run a job'
+description: Follow this quickstart to run a C# app that uses the Batch .NET client library to create and run Batch pools, nodes, jobs, and tasks.
Previously updated : 05/25/2021 Last updated : 04/20/2023 ms.devlang: csharp
-# Quickstart: Run your first Azure Batch job with the .NET API
+# Quickstart: Use .NET to create a Batch pool and run a job
-Get started with Azure Batch by running a job from a C# application built on the Azure Batch .NET API. The app uploads several input data files to Azure storage and then creates a pool of Batch compute nodes (virtual machines). Then, it creates a sample job that runs tasks to process each input file on the pool using a basic command.
+This quickstart shows you how to get started with Azure Batch by running a C# app that uses the [Azure Batch .NET API](/dotnet/api/overview/azure/batch). The .NET app:
-After completing this quickstart, you'll understand the key concepts of the Batch service and be ready to try Batch with more realistic workloads at larger scale.
+> [!div class="checklist"]
+> - Uploads several input data files to an Azure Storage blob container to use for Batch task processing.
+> - Creates a pool of two virtual machines (VMs), or compute nodes, running Windows Server.
+> - Creates a job that runs tasks on the nodes to process each input file by using a Windows command line.
+> - Displays the output files that the tasks return.
-![Diagram showing an overview of the Azure Batch app workflow.](./media/quick-run-dotnet/sampleapp.png)
+After you complete this quickstart, you understand the [key concepts of the Batch service](batch-service-workflow-features.md) and are ready to use Batch with more realistic, larger scale workloads.
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- A Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
+- A Batch account with a linked Azure Storage account. You can create the accounts by using any of the following methods: [Azure CLI](quick-create-cli.md) | [Azure portal](quick-create-portal.md) | [Bicep](quick-create-bicep.md) | [ARM template](quick-create-template.md) | [Terraform](quick-create-terraform.md).
-- [Visual Studio 2017 or later](https://www.visualstudio.com/vs), or [.NET Core 2.1 SDK](https://dotnet.microsoft.com/download/dotnet/2.1) for Linux, macOS, or Windows.
+- [Visual Studio 2019](https://www.visualstudio.com/vs) or later, or [.NET 6.0](https://dotnet.microsoft.com/download/dotnet) or later, for Linux or Windows.
-## Sign in to Azure
+## Run the app
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+To complete this quickstart, you download or clone the app, provide your account values, build and run the app, and verify the output.
+### Download or clone the app
-## Download the sample
+Download or clone the [Azure Batch .NET Quickstart](https://github.com/Azure-Samples/batch-dotnet-quickstart) app from GitHub. Use the following command to clone the app repo with a Git client:
-[Download or clone the sample app](https://github.com/Azure-Samples/batch-dotnet-quickstart) from GitHub. To clone the sample app repo with a Git client, use the following command:
-
-```
+```cmd
git clone https://github.com/Azure-Samples/batch-dotnet-quickstart.git ```
-Navigate to the directory that contains the Visual Studio solution file `BatchDotNetQuickstart.sln`.
+### Provide your account information
+
+The app needs to use your Batch and Storage account names, account key values, and Batch account endpoint. You can get this information from the Azure portal, Azure APIs, or command-line tools.
+
+To get your account information from the [Azure portal](https://portal.azure.com):
+
+ 1. From the Azure Search bar, search for and select your Batch account name.
+ 1. On your Batch account page, select **Keys** from the left navigation.
+ 1. On the **Keys** page, copy the following values:
+
+ - **Batch account**
+ - **Account endpoint**
+ - **Primary access key**
+ - **Storage account name**
+ - **Key1**
-Open the solution file in Visual Studio, and update the credential strings in `Program.cs` with the values you obtained for your accounts. For example:
+Navigate to your downloaded *batch-dotnet-quickstart* folder and edit the credential strings in *Program.cs* to provide the values you copied:
```csharp // Batch account credentials
-private const string BatchAccountName = "mybatchaccount";
-private const string BatchAccountKey = "xxxxxxxxxxxxxxxxE+yXrRvJAqT9BlXwwo1CwF+SwAYOxxxxxxxxxxxxxxxx43pXi/gdiATkvbpLRl3x14pcEQ==";
-private const string BatchAccountUrl = "https://mybatchaccount.mybatchregion.batch.azure.com";
+private const string BatchAccountName = "<batch account>";
+private const string BatchAccountKey = "<primary access key>";
+private const string BatchAccountUrl = "<account endpoint>";
// Storage account credentials
-private const string StorageAccountName = "mystorageaccount";
-private const string StorageAccountKey = "xxxxxxxxxxxxxxxxy4/xxxxxxxxxxxxxxxxfwpbIC5aAWA8wDu+AFXZB827Mt9lybZB1nUcQbQiUrkPtilK5BQ==";
+private const string StorageAccountName = "<storage account name>";
+private const string StorageAccountKey = "<key1>";
```
+>[!IMPORTANT]
+>Exposing account keys in the app source isn't recommended for production use. You should restrict access to credentials and refer to them in your code by using variables or a configuration file. It's best to store Batch and Storage account keys in Azure Key Vault.
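As a minimal illustration of that pattern (a Python sketch with hypothetical variable names; the quickstart app itself is C#), credentials can be read from environment variables instead of being hardcoded in source:

```python
import os

def load_batch_credentials():
    # Read account values from the environment so keys never appear in source.
    # A missing variable raises KeyError, which fails fast at startup.
    return {
        "batch_account_name": os.environ["BATCH_ACCOUNT_NAME"],
        "batch_account_key": os.environ["BATCH_ACCOUNT_KEY"],
        "batch_account_url": os.environ["BATCH_ACCOUNT_URL"],
    }
```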
-## Build and run the app
+### Build and run the app and view output
-To see the Batch workflow in action, build and run the application in Visual Studio, or at the command line with the `dotnet build` and `dotnet run` commands. After running the application, review the code to learn what each part of the application does. For example, in Visual Studio:
+To see the Batch workflow in action, build and run the application in Visual Studio. You can also use the `dotnet build` and `dotnet run` commands at a command line.
-- Right-click the solution in Solution Explorer, and click **Build Solution**.
+In Visual Studio:
-- Confirm the restoration of any NuGet packages, if you're prompted. If you need to download missing packages, ensure the [NuGet Package Manager](https://docs.nuget.org/consume/installing-nuget) is installed.
+1. Open the *BatchDotNetQuickstart.sln* file, right-click the solution in **Solution Explorer**, and select **Build**. If prompted, use [NuGet Package Manager](https://docs.nuget.org/consume/installing-nuget) to update or restore NuGet packages.
-When you run the sample application, the console output is similar to the following. During execution, you experience a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes are started. Tasks are queued to run as soon as the first compute node is running. Go to your Batch account in the [Azure portal](https://portal.azure.com) to monitor the pool, compute nodes, job, and tasks.
+1. Once the build completes, select **BatchDotNetQuickstart** in the top menu bar to run the app.
-```
-Sample start: 11/16/2018 4:02:54 PM
+Typical run time with the default configuration is approximately five minutes. Initial pool node setup takes the most time. To rerun the job, delete the job from the previous run, but don't delete the pool. On a preconfigured pool, the job completes in a few seconds.
+
+The app returns output similar to the following example:
+
+```output
+Sample start: 11/16/2022 4:02:54 PM
Container [input] created. Uploading file taskdata0.txt to container [input]...
Adding 3 tasks to job [DotNetQuickstartJob]...
Monitoring all tasks for 'Completed' state, timeout in 00:30:00... ```
-After tasks complete, you see output similar to the following for each task:
+There's a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes start. As tasks are created, Batch queues them to run on the pool. As soon as the first compute node is available, the first task runs on the node. You can monitor node, task, and job status from your Batch account page in the Azure portal.
-```
+After each task completes, you see output similar to the following example:
+
+```output
Printing task output. Task: Task0 Node: tvm-2850684224_3-20171205t000401z Standard out:
-Batch processing began with mainframe computers and punch cards. Today it still plays a central role in business, engineering, science, and other pursuits that require running lots of automated tasks....
+Batch processing began with mainframe computers and punch cards. Today it still plays a central role...
stderr: ... ```
-Typical execution time is approximately 5 minutes when you run the application in its default configuration. Initial pool setup takes the most time. To run the job again, delete the job from the previous run and don't delete the pool. On a preconfigured pool, the job completes in a few seconds.
- ## Review the code
-The .NET app in this quickstart does the following:
--- Uploads three small text files to a blob container in your Azure storage account. These files are inputs for processing by Batch.-- Creates a pool of compute nodes running Windows Server.-- Creates a job and three tasks to run on the nodes. Each task processes one of the input files using a Windows command line. -- Displays files returned by the tasks.-
-See the file `Program.cs` and the following sections for details.
-
-### Preliminaries
+Review the code to understand the steps in the [Azure Batch .NET Quickstart](https://github.com/Azure-Samples/batch-dotnet-quickstart).
-To interact with a storage account, the app uses the Azure Storage Client Library for .NET. It creates a reference to the account with [CloudStorageAccount](/dotnet/api/microsoft.azure.storage.cloudstorageaccount), and from that creates a [CloudBlobClient](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient).
+### Create service clients and upload resource files
-```csharp
-CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
-```
+1. To interact with the storage account, the app uses the Azure Storage Client Library for .NET to create a reference to the account with [CloudStorageAccount](/dotnet/api/microsoft.azure.storage.cloudstorageaccount), and from that creates a [CloudBlobClient](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient).
-The app uses the `blobClient` reference to create a container in the storage account and to upload data files to the container. The files in storage are defined as Batch [ResourceFile](/dotnet/api/microsoft.azure.batch.resourcefile) objects that Batch can later download to compute nodes.
+ ```csharp
+ CloudBlobClient blobClient = CreateCloudBlobClient(StorageAccountName, StorageAccountKey);
+ ```
-```csharp
-List<string> inputFilePaths = new List<string>
-{
- "taskdata0.txt",
- "taskdata1.txt",
- "taskdata2.txt"
-};
+1. The app uses the `blobClient` reference to create a container in the storage account and upload data files to the container. The files in storage are defined as Batch [ResourceFile](/dotnet/api/microsoft.azure.batch.resourcefile) objects that Batch can later download to the compute nodes.
-List<ResourceFile> inputFiles = new List<ResourceFile>();
+ ```csharp
+ List<string> inputFilePaths = new List<string>
+ {
+ "taskdata0.txt",
+ "taskdata1.txt",
+ "taskdata2.txt"
+ };
+
+ List<ResourceFile> inputFiles = new List<ResourceFile>();
+
+ foreach (string filePath in inputFilePaths)
+ {
+ inputFiles.Add(UploadFileToContainer(blobClient, inputContainerName, filePath));
+ }
+ ```
-foreach (string filePath in inputFilePaths)
-{
- inputFiles.Add(UploadFileToContainer(blobClient, inputContainerName, filePath));
-}
-```
+1. The app creates a [BatchClient](/dotnet/api/microsoft.azure.batch.batchclient) object to create and manage Batch pools, jobs, and tasks. The Batch client uses shared key authentication. Batch also supports Azure Active Directory (Azure AD) authentication.
-The app creates a [BatchClient](/dotnet/api/microsoft.azure.batch.batchclient) object to create and manage pools, jobs, and tasks in the Batch service. The Batch client in the sample uses shared key authentication. (Batch also supports Azure Active Directory authentication.)
-
-```csharp
-BatchSharedKeyCredentials cred = new BatchSharedKeyCredentials(BatchAccountUrl, BatchAccountName, BatchAccountKey);
-
-using (BatchClient batchClient = BatchClient.Open(cred))
-...
-```
+ ```csharp
+ BatchSharedKeyCredentials cred = new BatchSharedKeyCredentials(BatchAccountUrl, BatchAccountName, BatchAccountKey);
+
+ using (BatchClient batchClient = BatchClient.Open(cred))
+ ...
+ ```
### Create a pool of compute nodes
-To create a Batch pool, the app uses the [BatchClient.PoolOperations.CreatePool](/dotnet/api/microsoft.azure.batch.pooloperations.createpool) method to set the number of nodes, VM size, and a pool configuration. Here, a [VirtualMachineConfiguration](/dotnet/api/microsoft.azure.batch.virtualmachineconfiguration) object specifies an [ImageReference](/dotnet/api/microsoft.azure.batch.imagereference) to a Windows Server image published in the Azure Marketplace. Batch supports a wide range of Linux and Windows Server images in the Azure Marketplace, as well as custom VM images.
+To create a Batch pool, the app uses the [BatchClient.PoolOperations.CreatePool](/dotnet/api/microsoft.azure.batch.pooloperations.createpool) method to set the number of nodes, VM size, and pool configuration. The following [VirtualMachineConfiguration](/dotnet/api/microsoft.azure.batch.virtualmachineconfiguration) object specifies an [ImageReference](/dotnet/api/microsoft.azure.batch.imagereference) to a Windows Server Marketplace image. Batch supports a wide range of Windows Server and Linux Marketplace OS images, and also supports custom VM images.
-The number of nodes (`PoolNodeCount`) and VM size (`PoolVMSize`) are defined constants. The sample by default creates a pool of two *Standard_A1_v2* nodes. The size suggested offers a good balance of performance versus cost for this quick example.
+The node count (`PoolNodeCount`) and VM size (`PoolVMSize`) are defined constants. The app creates a pool of two Standard_A1_v2 nodes. This size offers a good balance of performance versus cost for this quickstart.
The [Commit](/dotnet/api/microsoft.azure.batch.cloudpool.commit) method submits the pool to the Batch service.
private static void CreateBatchPool(BatchClient batchClient, VirtualMachineConfi
### Create a Batch job
-A Batch job is a logical grouping of one or more tasks. A job includes settings common to the tasks, such as priority and the pool to run tasks on. The app uses the [BatchClient.JobOperations.CreateJob](/dotnet/api/microsoft.azure.batch.joboperations.createjob) method to create a job on your pool.
+A Batch job is a logical grouping of one or more tasks. The job includes settings common to the tasks, such as priority and the pool to run tasks on.
-The [Commit](/dotnet/api/microsoft.azure.batch.cloudjob.commit) method submits the job to the Batch service. Initially the job has no tasks.
+The app uses the [BatchClient.JobOperations.CreateJob](/dotnet/api/microsoft.azure.batch.joboperations.createjob) method to create a job on your pool. The [Commit](/dotnet/api/microsoft.azure.batch.cloudjob.commit) method submits the job to the Batch service. Initially the job has no tasks.
```csharp try
try
### Create tasks
-The app creates a list of [CloudTask](/dotnet/api/microsoft.azure.batch.cloudtask) objects. Each task processes an input `ResourceFile` object using a [CommandLine](/dotnet/api/microsoft.azure.batch.cloudtask.commandline) property. In the sample, the command line runs the Windows `type` command to display the input file. This command is a simple example for demonstration purposes. When you use Batch, the command line is where you specify your app or script. Batch provides several ways to deploy apps and scripts to compute nodes.
+Batch provides several ways to deploy apps and scripts to compute nodes. This app creates a list of [CloudTask](/dotnet/api/microsoft.azure.batch.cloudtask) objects. Each task processes an input `ResourceFile` object by using a [CommandLine](/dotnet/api/microsoft.azure.batch.cloudtask.commandline) property. The Batch command line is where you specify your app or script.
-Then, the app adds tasks to the job with the [AddTask](/dotnet/api/microsoft.azure.batch.joboperations.addtask) method, which queues them to run on the compute nodes.
+The command line in the following code runs the Windows `type` command to display the input files. Then, the app adds each task to the job with the [AddTask](/dotnet/api/microsoft.azure.batch.joboperations.addtask) method, which queues the task to run on the compute nodes.
```csharp for (int i = 0; i < inputFiles.Count; i++)
batchClient.JobOperations.AddTask(JobId, tasks);
### View task output
-The app creates a [TaskStateMonitor](/dotnet/api/microsoft.azure.batch.taskstatemonitor) to monitor the tasks to make sure they complete. Then, the app uses the [CloudTask.ComputeNodeInformation](/dotnet/api/microsoft.azure.batch.cloudtask.computenodeinformation) property to display the `stdout.txt` file generated by each completed task. When the task runs successfully, the output of the task command is written to `stdout.txt`:
+The app creates a [TaskStateMonitor](/dotnet/api/microsoft.azure.batch.taskstatemonitor) to monitor the tasks and make sure they complete. When each task runs successfully, its output writes to *stdout.txt*. The app then uses the [CloudTask.ComputeNodeInformation](/dotnet/api/microsoft.azure.batch.cloudtask.computenodeinformation) property to display the *stdout.txt* file for each completed task.
```csharp foreach (CloudTask task in completedtasks)
foreach (CloudTask task in completedtasks)
## Clean up resources
-The app automatically deletes the storage container it creates, and gives you the option to delete the Batch pool and job. You are charged for the pool while the nodes are running, even if no jobs are scheduled. When you no longer need the pool, delete it. After you delete the pool, all task output on the nodes is deleted.
+The app automatically deletes the storage container it creates, and gives you the option to delete the Batch pool and job. Pools and nodes incur charges while the nodes are running, even if they aren't running jobs. If you no longer need the pool, delete it.
-When no longer needed, delete the resource group, Batch account, and storage account. To do so in the Azure portal, select the resource group for the Batch account and click **Delete resource group**.
+When you no longer need your Batch account and storage account, you can delete the resource group that contains them. In the Azure portal, select **Delete resource group** at the top of the resource group page. On the **Delete a resource group** screen, enter the resource group name, and then select **Delete**.
## Next steps
-In this quickstart, you ran a small app built using the Batch .NET API to create a Batch pool and a Batch job. The job ran sample tasks, and downloaded output created on the nodes. Now that you understand the key concepts of the Batch service, you can try Batch with more realistic workloads at larger scale. To learn more about Azure Batch, and walk through a parallel workload with a real-world application, continue to the Batch .NET tutorial.
+In this quickstart, you ran an app that uses the Batch .NET API to create a Batch pool, nodes, job, and tasks. The job uploaded resource files to a storage container, ran tasks on the nodes, and displayed output from the nodes.
+
+Now that you understand the key concepts of the Batch service, you're ready to use Batch with more realistic, larger scale workloads. To learn more about Azure Batch and walk through a parallel workload with a real-world application, continue to the Batch .NET tutorial.
> [!div class="nextstepaction"] > [Process a parallel workload with .NET](tutorial-parallel-dotnet.md)
cloud-shell Persisting Shell Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/persisting-shell-storage.md
description: Walkthrough of how Azure Cloud Shell persists files. ms.contributor: jahelmic Previously updated : 11/14/2022 Last updated : 04/25/2023 tags: azure-resource-manager Title: Persist files in Azure Cloud Shell+ # Persist files in Azure Cloud Shell
cognitive-services Use Case Dwell Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/use-case-dwell-time.md
# Overview: Monitor dwell time in front of displays with Spatial Analysis
-Spatial Analysis can provide real-time information about how long customers spend in front of a display in a retail store. The service monitors the length of time customers spend in a zone you specif. You can use this information to track customer engagement with promotions/displays within a store or understand customers' preference toward specific products.
+Spatial Analysis can provide real-time information about how long customers spend in front of a display in a retail store. The service monitors the length of time customers spend in a zone you specify. You can use this information to track customer engagement with promotions/displays within a store or understand customers' preference toward specific products.
:::image type="content" source="media/use-cases/dwell-time.jpg" alt-text="Photo of a person in a warehouse with stacks of boxes.":::
cognitive-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/authoring.md
In this example, we will add a new source to an existing project. You can also r
| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.| | `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.| | `PROJECT-NAME` | The name of project where you would like to update sources.|
+|`METHOD`| PATCH |
### Example query
curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: applic
"sourceContentStructureKind": "semistructured" } }
-]' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/sources?api-version=2021-10-01'
+]' -i 'https://{LanguageServiceName}.cognitiveservices.azure.com/language/query-knowledgebases/projects/{projectName}/sources?api-version=2021-10-01'
``` A successful call to update a source results in an `Operation-Location` header being returned which can be used to check the status of the import job. In many of our examples, we haven't needed to look at the response headers and thus haven't always been displaying them. To retrieve the response headers our curl command uses `-i`. Without this parameter prior to the endpoint address, the response to this command would appear empty as if no response occurred.
cognitive-services Prompt Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/prompt-engineering.md
As you develop more complex prompts, it's helpful to keep this fundamental behav
### Prompt components
-When using the Completion API while there is no differentiation between different parts of the prompt, it can still be useful for learning and discussion to identify underlying prompt components. With the [Chat Completion API](../how-to/chatgpt.md) there are distinct sections of the prompt that are sent to the API in the form of an array of dictionaries with associated roles: system, user, and assistant. This guidance will focus more generally on how to think about prompt construction rather than providing prescriptive guidance that is specific to one API over another.
+While the Completion API doesn't differentiate between different parts of the prompt, it can still be useful for learning and discussion to identify underlying prompt components. With the [Chat Completion API](../how-to/chatgpt.md) there are distinct sections of the prompt that are sent to the API in the form of an array of dictionaries with associated roles: system, user, and assistant. This guidance will focus more generally on how to think about prompt construction rather than providing prescriptive guidance that is specific to one API over another.
-It is also important to understand that while there could be other valid ways to dissect prompts, the goal of this breakdown is to provide a relatively simple way to think about prompt construction. With the Completion API, all the components are optional, but at least one must be present and most prompts include more than one component. There can be some grey area between components as well. The order presented below roughly corresponds to how commonly each component is used, from most to least.
+It's also important to understand that while there could be other valid ways to dissect prompts, the goal of this breakdown is to provide a relatively simple way to think about prompt construction. With the Completion API, all the components are optional, but at least one must be present and most prompts include more than one component. There can be some grey area between components as well. The order presented below roughly corresponds to how commonly each component is used, from most to least.
#### Instructions
GPT models can also handle primary content that is structured. In the example be
Successful prompts often rely on the practice of "one-shot" or "few-shot" learning. This refers to the inclusion of one or more examples of the desired behavior of the model, typically by including input and output pairs. This is not learning in the sense that the model is permanently changed, but rather that the examples better condition the model to respond as desired for only the current inference. The use of prompts with no examples is sometimes referred to as "zero-shot" learning. Note that with the Chat Completion API, few-shot learning examples are typically added to the messages array in the form of example user/assistant interactions after the initial system message.
-| Learning Type| Prompt| Completion|
-|- |-|--|
-| Headline: Coach confident injury won't derail Warriors<br>Topic:| The coach is confident that the injury won't derail the Warriors' season. The team is still focused on their goals and that they will continue to work hard to achieve them.|
-| Headline: Twins' Correa to use opt-out, test free agency<br>Topic: Baseball<br>Headline: Qatar World Cup to have zones for sobering up<br>Topic: Soccer<br>Headline: Yates: Fantasy football intel for Week 6<br>Topic: Football<br>Headline: Coach confident injury won't derail Warriors<br>Topic: | Basketball |
+| Learning Type| Prompt| Completion|
+|- |-|--|
+| Zero-shot | Headline: Coach confident injury won't derail Warriors<br>Topic:| The coach is confident that the injury won't derail the Warriors' season. The team is still focused on their goals and that they will continue to work hard to achieve them.|
+| Few-shot | Headline: Twins' Correa to use opt-out, test free agency<br>Topic: Baseball<br>Headline: Qatar World Cup to have zones for sobering up<br>Topic: Soccer<br>Headline: Yates: Fantasy football intel for Week 6<br>Topic: Football<br>Headline: Coach confident injury won't derail Warriors<br>Topic: | Basketball |
The example above illustrates the utility of few-shot learning. Without the examples, the model seems to be guessing at the desired behavior, while the examples cleanly show the model how to operate. This also demonstrates the power of the model: it can infer the category of label that is wanted, even without a 'basketball' label in the examples.
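The messages-array form of few-shot learning described above can be sketched as follows (a hypothetical payload shape; the headline/topic pairs are taken from the table, and no API call is made):

```python
# Few-shot examples appear as prior user/assistant turns after the system
# message; the final user turn carries the actual query to be completed.
messages = [
    {"role": "system", "content": "Classify each news headline into a topic."},
    {"role": "user", "content": "Headline: Twins' Correa to use opt-out, test free agency\nTopic:"},
    {"role": "assistant", "content": "Baseball"},
    {"role": "user", "content": "Headline: Qatar World Cup to have zones for sobering up\nTopic:"},
    {"role": "assistant", "content": "Soccer"},
    {"role": "user", "content": "Headline: Coach confident injury won't derail Warriors\nTopic:"},
]
```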
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
Previously updated : 04/24/2023 Last updated : 04/25/2023
The following sections provide you with a quick guide to the quotas and limits t
|--|--| | OpenAI resources per region per Azure subscription | 3 | | Requests per minute per model* | Davinci-models (002 and later): 120 <br> ChatGPT model (preview): 300 <br> GPT-4 models (preview): 18 <br> All other models: 300 |
-| Tokens per minute per model* | Davinci-models (002 and later): 40,000 <br> ChatGPT model: 120,000 <br> All other models: 120,000 |
+| Tokens per minute per model* | Davinci-models (002 and later): 40,000 <br> ChatGPT model: 120,000 <br> GPT-4 8k model: 10,000 <br> GPT-4 32k model: 32,000 <br> All other models: 120,000 |
| Max fine-tuned model deployments* | 2 | | Ability to deploy same model to multiple deployments | Not allowed | | Total number of training jobs per resource | 100 |
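The request-per-minute and token-per-minute quotas in the table apply simultaneously, so the effective request rate is set by whichever limit binds first. A quick sketch of that arithmetic (a hypothetical helper, not part of any SDK):

```python
def effective_requests_per_minute(rpm_quota, tpm_quota, avg_tokens_per_request):
    # Throughput is capped by both quotas; the smaller cap wins.
    token_limited = tpm_quota // avg_tokens_per_request
    return min(rpm_quota, token_limited)

# ChatGPT model: 300 RPM and 120,000 TPM. At roughly 1,000 tokens per request,
# the token quota limits you to 120 requests per minute, under the 300 RPM cap;
# at roughly 100 tokens per request, the RPM quota binds instead.
```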
To minimize issues related to throttling, it's a good idea to use the following
The next sections describe specific cases of adjusting quotas.
-### How to request an increase to the transactions-per-minute, number of fine-tuned models deployed or token per minute quotas.
+### How to request increases to the default quotas and limits
-If you need to increase the limit, you can apply for a quota increase here: <https://aka.ms/oai/quotaincrease>
+At this time, due to overwhelming demand, we can't accept any new resource or quota increase requests.
-### How to request an increase to the number of resources per region
-
-If you need to increase the number of resources, you can apply for a resource increase here: <https://aka.ms/oai/resourceincrease>
> [!NOTE] > Ensure that you thoroughly assess your current resource utilization, approaching its full capacity. Be aware that we will not grant additional resources if efficient usage of existing resources is not observed.
communication-services Enable Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/enable-closed-captions.md
Closed captions are a textual representation of a voice or video conversation that is displayed to users in real-time. Azure Communication Services Closed captions offer developers the ability to allow users to select when they wish to toggle captions on or off. These captions are only available during the call/meeting for the user that has selected to enable captions, ACS does **not** store these captions anywhere. Closed captions can be accessed through Azure Communication Services client-side SDKs for Web, Windows, iOS and Android.
-In this document we're going to be looking at specifically Teams interoperability scenarios. For example, an Azure Communication Services user joins a Teams meeting and enabling captions or two Microsoft 365 users using Azure Communication Calling SDK to join a call or meeting.
+In this document, we look specifically at Teams interoperability scenarios: for example, an Azure Communication Services user joining a Teams meeting and enabling captions, or two Microsoft 365 users using the Azure Communication Calling SDK to join a call or meeting.
## Supported scenarios
In this document we're going to be looking at specifically Teams interoperabilit
*Usage of translations through Teams generated captions requires the organizer to have assigned a Teams Premium license, or in the case of Microsoft 365 users they must have a Teams premium license. More information about Teams Premium can be found [here](https://www.microsoft.com/microsoft-teams/premium#tabx93f55452286a4264a2778ef8902fb81a).*
-In scenarios where there's a Teams user on a Teams client or a Microsoft 365 user with ACS SDKs in the call, the developer can use Teams caption. This allows developers to work with the Teams captioning technology that may already be familiar with today. With Teams captions developers are limited to what their Teams license allows. Basic captions allows only one spoken and one caption language for the call. With Teams premium license developers can use the translation functionality offered by Teams to provide one spoken language for the call and translated caption languages on a per user basis. In a Teams interop scenario, captions enabled through ACS follows the same policies that are defined in Teams for [meetings](/powershell/module/skype/set-csteamsmeetingpolicy) and [calls](/powershell/module/skype/set-csteamscallingpolicy).
+In scenarios where there's a Teams user on a Teams client or a Microsoft 365 user with ACS SDKs in the call, the developer can use Teams captions. This allows developers to work with the Teams captioning technology that they may already be familiar with today. With Teams captions, developers are limited to what their Teams license allows. Basic captions allow only one spoken and one caption language for the call. With a Teams Premium license, developers can use the translation functionality offered by Teams to provide one spoken language for the call and translated caption languages on a per-user basis. In a Teams interop scenario, captions enabled through ACS follow the same policies that are defined in Teams for [meetings](/powershell/module/skype/set-csteamsmeetingpolicy) and [calls](/powershell/module/skype/set-csteamscallingpolicy).
## Common use cases
Provide translation – Use the translation functions provided to provide transl
## Privacy concerns
-Closed captions are only available during the call or meeting for the participant that has selected to enable captions, ACS does not store these captions anywhere. Many countries and states have laws and regulations that apply to storing of data. It is your responsibility to use the closed captions in compliance with the law should you choose to store any of the data generated through closed captions. You must obtain consent from the parties involved in a manner that complies with the laws applicable to each participant.
+Closed captions are only available during the call or meeting for the participant that has selected to enable captions; ACS doesn't store these captions anywhere. Many countries and states have laws and regulations that apply to the storing of data. It is your responsibility to use closed captions in compliance with the law should you choose to store any of the data generated through closed captions. You must obtain consent from the parties involved in a manner that complies with the laws applicable to each participant.
Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chats. It is your responsibility to ensure that the users of your application are notified when closed captions are enabled in a Teams call or meeting and being stored. Microsoft indicates to you via the Azure Communication Services API that recording or closed captions have commenced, and you must communicate this fact, in real time, to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred due to your failure to comply with this obligation.
+## Known limitations
+- The closed captions feature isn't supported on Firefox.
 ## Next steps
-- Learn how to use [closed captions for Teams interopability](../../how-tos/calling-sdk/closed-captions-teams-interop-how-to.md).
+- Learn how to use [closed captions for Teams interoperability](../../how-tos/calling-sdk/closed-captions-teams-interop-how-to.md).
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md
Publishing locations for individual SDK packages are detailed below.
| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat)| [NuGet](https://www.NuGet.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases)| [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | -|
| SMS| [npm](https://www.npmjs.com/package/@azure/communication-sms) | [NuGet](https://www.NuGet.org/packages/Azure.Communication.Sms)| [PyPi](https://pypi.org/project/azure-communication-sms/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-sms) | -| -| -|
| Email| [npm](https://www.npmjs.com/package/@azure/communication-email) | [NuGet](https://www.NuGet.org/packages/Azure.Communication.Email)| [PyPi](https://pypi.org/project/azure-communication-email/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-email) | -| -| -|
-| Calling| [npm](https://www.npmjs.com/package/@azure/communication-calling) | [NuGet](https://www.NuGet.org/packages/Azure.Communication.Calling) | -| - | [GitHub](https://github.com/Azure/Communication/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/)| -|
+| Calling| [npm](https://www.npmjs.com/package/@azure/communication-calling) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Calling.WindowsClient) | -| - | [GitHub](https://github.com/Azure/Communication/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/)| -|
|Call Automation||[NuGet](https://www.NuGet.org/packages/Azure.Communication.CallAutomation/)||[Maven](https://search.maven.org/artifact/com.azure/azure-communication-callautomation) |
|Network Traversal| [npm](https://www.npmjs.com/package/@azure/communication-network-traversal)|[NuGet](https://www.NuGet.org/packages/Azure.Communication.NetworkTraversal/) | [PyPi](https://pypi.org/project/azure-communication-networktraversal/) | [Maven](https://search.maven.org/search?q=a:azure-communication-networktraversal) | -|- | - |
| UI Library| [npm](https://www.npmjs.com/package/@azure/communication-react) | - | - | - | [GitHub](https://github.com/Azure/communication-ui-library-ios) | [GitHub](https://github.com/Azure/communication-ui-library-android) | [GitHub](https://github.com/Azure/communication-ui-library), [Storybook](https://azure.github.io/communication-ui-library/?path=/story/overview--page) |
communication-services Add Azure Managed Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-azure-managed-domains.md
Last updated 03/31/2023
+ # Quickstart: How to add Azure Managed Domains to Email Communication Service

In this quick start, you learn how to provision the Azure Managed domain in Azure Communication Services to send email.
In this quick start, you learn about how to provision the Azure Managed domain i
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet/). - An Azure Email Communication Services Resource created and ready to provision the domains [Get started with Creating Email Communication Resource](../../quickstarts/email/create-email-communication-resource.md)
+## Azure Managed Domains vs. Custom Domains
+
+Before provisioning an Azure Managed Domain, review the following table to determine which domain type is most appropriate for your particular use case.
+
+| | [Azure Managed Domains](./add-azure-managed-domains.md) | [Custom Domains](./add-custom-verified-domains.md) |
+||||
+|**Pros:** | - Setup is quick & easy<br/>- No domain verification required<br /> | - Emails are sent from your own domain |
+|**Cons:** | - Sender domain is not personalized and cannot be changed | - Requires verification of domain records <br /> - Longer setup for verification |
++ ## Provision Azure Managed Domain 1. Go the overview page of the Email Communications Service resource that you created earlier.
You can optionally configure your MailFrom address to be something other than th
## Next steps
-* [Get started with create and manage Email Communication Service in Azure Communication Service](../../quickstarts/email/create-email-communication-resource.md)
- * [Get started by connecting Email Communication Service with a Azure Communication Service resource](../../quickstarts/email/connect-email-communication-resource.md)
+* [How to send an email using Azure Communication Service](../../quickstarts/email/send-email.md)
+ The following documents may be interesting to you: - Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)-- How to send emails with custom verified domains?[Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)
+- To learn how to send emails with custom verified domains, see [Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)
communication-services Add Custom Verified Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-custom-verified-domains.md
In this quick start, you learn about how to add a custom domain and verify in Az
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet/). - An Azure Email Communication Services Resource created and ready to provision the domains [Get started with Creating Email Communication Resource](../../quickstarts/email/create-email-communication-resource.md)
+## Azure Managed Domains vs. Custom Domains
+
+Before provisioning a Custom Domain, review the following table to determine which domain type is most appropriate for your particular use case.
+
+| | [Azure Managed Domains](./add-azure-managed-domains.md) | [Custom Domains](./add-custom-verified-domains.md) |
+||||
+|**Pros:** | - Setup is quick & easy<br/>- No domain verification required<br /> | - Emails are sent from your own domain |
+|**Cons:** | - Sender domain is not personalized and cannot be changed | - Requires verification of domain records <br /> - Longer setup for verification |
+ ## Provision custom domain To provision a custom domain, you need to:
You can optionally configure your MailFrom address to be something other than th
## Next steps
-* [Get started with create and manage Email Communication Service in Azure Communication Service](../../quickstarts/email/create-email-communication-resource.md)
- * [Get started by connecting Email Communication Service with a Azure Communication Service resource](../../quickstarts/email/connect-email-communication-resource.md)
+* [How to send an email using Azure Communication Service](../../quickstarts/email/send-email.md)
+ The following documents may be interesting to you:
communication-services Add Multiple Senders https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-multiple-senders.md
# Quickstart: How to add and remove Multiple Sender Addresses to Email Communication Service

In this quick start, you learn how to add and remove multiple sender addresses in Azure Communication Services to send email.

## Prerequisites
communication-services Manually Poll For Email Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email-advanced/manually-poll-for-email-status.md
+
+ Title: Quickstart - Manually poll for email status when sending email
+
+description: Learn how to manually poll for email status while sending email using Azure Communication Services.
++++ Last updated : 04/07/2023++++
+# Quickstart: Manually poll for email status when sending email
+
+In this quick start, you'll learn how to manually poll for email status while sending email using our Email SDKs.
++
+## Troubleshooting
+
+### Email Delivery
+
+To troubleshoot issues related to email delivery, you can [get the status of the email delivery](../handle-email-events.md) to capture delivery details.
+
+> [!IMPORTANT]
+> The success result returned by polling for the status of the send operation only confirms that the email has been successfully sent out for delivery. For additional information about the status of the delivery on the recipient end, see [how to handle email events](../handle-email-events.md).
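The SDK-specific snippets for this quickstart are pulled in from include files above. The manual-polling pattern the article describes can be sketched generically; the `get_status` callable here is a hypothetical stand-in for however your SDK of choice reports the send operation's state, not the Email SDK's actual API:

```python
import time

def poll_until_done(get_status, interval: float = 2.0, timeout: float = 60.0) -> str:
    """Poll get_status() until it reports a terminal state, or raise on timeout."""
    terminal = {"Succeeded", "Failed", "Canceled"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in terminal:
            return status
        time.sleep(interval)  # back off between polls instead of spinning
    raise TimeoutError("send operation did not reach a terminal state in time")

# Simulated operation that completes on the third poll
statuses = iter(["NotStarted", "Running", "Succeeded"])
print(poll_until_done(lambda: next(statuses), interval=0.01))  # Succeeded
```

The real SDKs wrap this loop in a poller object, but the shape — poll, check for a terminal state, sleep, repeat until a deadline — is the same.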
+
+### Email Throttling
+
+If your application appears to hang, it could be because email sending is being throttled. You can [handle this through logging or by implementing a custom policy](../send-email-advanced/throw-exception-when-tier-limit-reached.md).
+
+> [!NOTE]
+> This sandbox setup is meant to help developers start building the application. You can gradually request an increase in sending volume once the application is ready to go live. If you require sending a volume of messages that exceeds the rate limits, submit a support request to raise your desired sending limit.
+
+## Clean up Azure Communication Service resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+In this quick start, you learned how to manually poll for status when sending email using Azure Communication Services.
+
+You may also want to:
+
+ - Learn how to [send email to multiple recipients](./send-email-to-multiple-recipients.md)
+ - Learn more about [sending email with attachments](./send-email-with-attachments.md)
+ - Familiarize yourself with [email client library](../../../concepts/email/sdk-features.md)
communication-services Send Email To Multiple Recipients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email-advanced/send-email-to-multiple-recipients.md
+
+ Title: Quickstart - Send email to multiple recipients using Azure Communication Service
+
+description: Learn how to send email to multiple recipients using Azure Communication Services.
++++ Last updated : 04/07/2023++
+zone_pivot_groups: acs-js-csharp-java-python
++
+# Quickstart: Send email to multiple recipients
+
+In this quick start, you'll learn how to send email to multiple recipients using our Email SDKs.
+++++
+## Troubleshooting
+
+### Email Delivery
+
+To troubleshoot issues related to email delivery, you can [get the status of the email delivery](../handle-email-events.md) to capture delivery details.
+
+> [!IMPORTANT]
+> The success result returned by polling for the status of the send operation only confirms that the email has been successfully sent out for delivery. For additional information about the status of the delivery on the recipient end, see [how to handle email events](../handle-email-events.md).
+
+### Email Throttling
+
+If your application appears to hang, it could be because email sending is being throttled. You can [handle this through logging or by implementing a custom policy](../send-email-advanced/throw-exception-when-tier-limit-reached.md).
+
+> [!NOTE]
+> This sandbox setup is meant to help developers start building the application. You can gradually request an increase in sending volume once the application is ready to go live. If you require sending a volume of messages that exceeds the rate limits, submit a support request to raise your desired sending limit.
+
+## Clean up Azure Communication Service resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+In this quick start, you learned how to send email to multiple recipients using Azure Communication Services.
+
+You may also want to:
+
+ - Learn how to [manually poll for email status](./manually-poll-for-email-status.md)
+ - Learn more about [sending email with attachments](./send-email-with-attachments.md)
+ - Familiarize yourself with [email client library](../../../concepts/email/sdk-features.md)
communication-services Send Email With Attachments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email-advanced/send-email-with-attachments.md
+
+ Title: Quickstart - Send email with attachments using Azure Communication Service
+
+description: Learn how to send an email message with attachments using Azure Communication Services.
++++ Last updated : 04/07/2023++
+zone_pivot_groups: acs-js-csharp-java-python
++
+# Quickstart: Send email with attachments
+
+In this quick start, you'll learn how to send email with attachments using our Email SDKs.
+++++
+## Troubleshooting
+
+### Email Delivery
+
+To troubleshoot issues related to email delivery, you can [get the status of the email delivery](../handle-email-events.md) to capture delivery details.
+
+> [!IMPORTANT]
+> The success result returned by polling for the status of the send operation only confirms that the email has been successfully sent out for delivery. For additional information about the status of the delivery on the recipient end, see [how to handle email events](../handle-email-events.md).
+
+### Email Throttling
+
+If your application appears to hang, it could be because email sending is being throttled. You can [handle this through logging or by implementing a custom policy](../send-email-advanced/throw-exception-when-tier-limit-reached.md).
+
+> [!NOTE]
+> This sandbox setup is meant to help developers start building the application. You can gradually request an increase in sending volume once the application is ready to go live. If you require sending a volume of messages that exceeds the rate limits, submit a support request to raise your desired sending limit.
+
+## Clean up Azure Communication Service resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+In this quick start, you learned how to send email with attachments using Azure Communication Services.
+
+You may also want to:
+
+ - Learn how to [manually poll for email status](./manually-poll-for-email-status.md)
+ - Learn more about [sending email to multiple recipients](./send-email-to-multiple-recipients.md)
+ - Familiarize yourself with [email client library](../../../concepts/email/sdk-features.md)
communication-services Throw Exception When Tier Limit Reached https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email-advanced/throw-exception-when-tier-limit-reached.md
+
+ Title: Quickstart - Throw an exception when email sending tier limit is reached using Azure Communication Service
+
+description: Learn how to throw an exception when sending tier limit is reached using Azure Communication Services.
++++ Last updated : 04/07/2023++
+zone_pivot_groups: acs-js-csharp-java-python
++
+# Quickstart: Throw an exception when email sending tier limit is reached
+
+In this quick start, you'll learn how to throw an exception when the email sending tier limit is reached using our Email SDKs.
+++++
+## Troubleshooting
+
+### Email Delivery
+
+To troubleshoot issues related to email delivery, you can [get the status of the email delivery](../handle-email-events.md) to capture delivery details.
+
+> [!IMPORTANT]
+> The success result returned by polling for the status of the send operation only confirms that the email has been successfully sent out for delivery. For additional information about the status of the delivery on the recipient end, see [how to handle email events](../handle-email-events.md).
+
+### Email Throttling
+
+If your application appears to hang, it could be because email sending is being throttled. You can [handle this through logging or by implementing a custom policy](../send-email-advanced/throw-exception-when-tier-limit-reached.md).
+
+> [!NOTE]
+> This sandbox setup is meant to help developers start building the application. You can gradually request an increase in sending volume once the application is ready to go live. If you require sending a volume of messages that exceeds the rate limits, submit a support request to raise your desired sending limit.
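The SDK snippets for this quickstart are included from shared files above. As an illustration of the "custom policy" idea — failing fast with an exception rather than letting sends silently queue — here's a hypothetical client-side sliding-window guard. The class name, exception, and limits are illustrative only; they aren't part of the Email SDK:

```python
import time
from collections import deque

class TierLimitExceededError(Exception):
    """Raised when the local sending budget for the current window is spent."""

class SendingTierGuard:
    """Tracks send timestamps in a sliding window and raises once the budget is spent."""

    def __init__(self, max_sends: int, window_seconds: float):
        self.max_sends = max_sends
        self.window = window_seconds
        self._sent = deque()  # monotonic timestamps of recent sends

    def check(self) -> None:
        """Record one send, or raise if the window's budget is already exhausted."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window
        while self._sent and now - self._sent[0] > self.window:
            self._sent.popleft()
        if len(self._sent) >= self.max_sends:
            raise TierLimitExceededError("send budget exhausted for this window")
        self._sent.append(now)

# Call guard.check() before each SDK send; the third call within the window raises
guard = SendingTierGuard(max_sends=2, window_seconds=60)
guard.check()
guard.check()
```

A production policy would read the actual tier limits from your resource configuration and likely log or retry rather than raise unconditionally; the sketch only shows the exception-on-limit mechanic the article's title refers to.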
+
+## Clean up Azure Communication Service resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+In this quick start, you learned how to throw an exception when the email sending tier limit is reached using Azure Communication Services.
+
+You may also want to:
+
+ - Learn how to [send email to multiple recipients](./send-email-to-multiple-recipients.md)
+ - Learn more about [sending email with attachments](./send-email-with-attachments.md)
+ - Familiarize yourself with [email client library](../../../concepts/email/sdk-features.md)
communication-services Use Email Object Model For Payload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email-advanced/use-email-object-model-for-payload.md
+
+ Title: Quickstart - Use the email object model to send the email payload using Azure Communication Service
+
+description: Learn how to use the email object model to send the email payload using Azure Communication Services.
++++ Last updated : 04/07/2023++++
+# Quickstart: Use the email object model to send the email payload
+
+In this quick start, you'll learn how to use the email object model to send the email payload using our Email SDKs.
++
+## Troubleshooting
+
+### Email Delivery
+
+To troubleshoot issues related to email delivery, you can [get the status of the email delivery](../handle-email-events.md) to capture delivery details.
+
+> [!IMPORTANT]
+> The success result returned by polling for the status of the send operation only confirms that the email has been successfully sent out for delivery. For additional information about the status of the delivery on the recipient end, see [how to handle email events](../handle-email-events.md).
+
+### Email Throttling
+
+If your application appears to hang, it could be because email sending is being throttled. You can [handle this through logging or by implementing a custom policy](../send-email-advanced/throw-exception-when-tier-limit-reached.md).
+
+> [!NOTE]
+> This sandbox setup is meant to help developers start building the application. You can gradually request an increase in sending volume once the application is ready to go live. If you require sending a volume of messages that exceeds the rate limits, submit a support request to raise your desired sending limit.
+
+## Clean up Azure Communication Service resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+In this quick start, you learned how to use the email object model to send the email payload using Azure Communication Services.
+
+You may also want to:
+
+ - Learn how to [send email to multiple recipients](./send-email-to-multiple-recipients.md)
+ - Learn more about [sending email with attachments](./send-email-with-attachments.md)
+ - Familiarize yourself with [email client library](../../../concepts/email/sdk-features.md)
+
communication-services Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email.md
In this quick start, you'll learn how to send email using our Email SDKs.
## Troubleshooting
-To troubleshoot issues related to email delivery, you can get the status of the email delivery to capture delivery details.
+### Email Delivery
+
+To troubleshoot issues related to email delivery, you can [get the status of the email delivery](./handle-email-events.md) to capture delivery details.
+
+> [!IMPORTANT]
+> The success result returned by polling for the status of the send operation only confirms that the email has been successfully sent out for delivery. For additional information about the status of the delivery on the recipient end, see [how to handle email events](./handle-email-events.md).
+
+### Email Throttling
+
+If your application appears to hang, it could be because email sending is being throttled. You can [handle this through logging or by implementing a custom policy](./send-email-advanced/throw-exception-when-tier-limit-reached.md).
+
+> [!NOTE]
+> This sandbox setup is meant to help developers start building the application. You can gradually request an increase in sending volume once the application is ready to go live. If you require sending a volume of messages that exceeds the rate limits, submit a support request to raise your desired sending limit.
## Clean up Azure Communication Service resources
confidential-computing Tdx Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/tdx-confidential-vm-overview.md
Title: DCesv5 and ECesv5 series confidential VMs
+ Title: Preview of DCesv5 & ECesv5 confidential VMs
description: Learn about Azure DCesv5 and ECesv5 series confidential virtual machines (confidential VMs). These series are for tenants with high security and confidentiality requirements.
Last updated 4/25/2023
-# DCesv5 and ECesv5 series confidential VMs
+# Preview of DCesv5 & ECesv5 confidential VMs
Starting with the 4th Gen Intel® Xeon® Scalable processors, Azure has begun supporting VMs backed by an all-new hardware-based Trusted Execution Environment called [Intel® Trust Domain Extensions (TDX)](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html#inpage-nav-2). Organizations can use these VMs to seamlessly bring confidential workloads to the cloud without any code changes to their applications.
Some of the benefits of Confidential VMs with Intel TDX include:
- Ability to retrieve raw hardware evidence and submit for judgment to attestation provider, including open-sourcing our client application. - Support for [Microsoft Azure Attestation](https://learn.microsoft.com/azure/attestation) (coming soon) backed by high availability zonal capabilities and disaster recovery capabilities. - Support for operator-independent remote attestation with [Intel Project Amber](http://projectamber.intel.com/).-- Support for Ubuntu 22.04, SUSE Linux Enterprise Server 15 SP5 and SUSE Linux Enterprise Server for SAP 15 SP5.
+- Support for Ubuntu 22.04, SUSE Linux Enterprise Server 15 SP5, and SUSE Linux Enterprise Server for SAP 15 SP5.
## See also
container-apps Azure Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-overview.md
Previously updated : 03/20/2023 Last updated : 04/27/2023
No. Apps can't be assigned managed identities when running in Azure Arc. If your
### Are there any scaling limits?
-All applications deployed with Azure Container Apps on Azure Arc-enabled Kubernetes are able to scale within the limits of the underlying Kubernetes cluster. If the cluster runs out of available compute resources (CPU and memory primarily), then applications will scale to the number of instances of the application that Kubernetes can schedule with available resource.
+All applications deployed with Azure Container Apps on Azure Arc-enabled Kubernetes are able to scale within the limits of the underlying Kubernetes cluster. If the cluster runs out of available compute resources (primarily CPU and memory), applications scale to the number of instances that Kubernetes can schedule with the available resources.
### What logs are collected?
ARM64 based clusters aren't supported at this time.
- Upgrade of KEDA to 2.9.1 - Upgrade of Dapr to 1.9.5 - Increase Envoy Controller resource limits to 200 m CPU
+ - Increase Container App Controller resource limits to 1-GB memory
- Reduce EasyAuth sidecar resource limits to 50 m CPU - Resolve KEDA error logging for missing metric values ### Container Apps extension v1.0.50 (March 2023)
+
- Updated logging images in sync with Public Cloud
+### Container Apps extension v1.5.1 (April 2023)
+
+ - New versioning number format
+ - Upgrade of Dapr to 1.10.4
+ - Maintain scale of Envoy after deployments of new revisions
+ - Changed when default startup probes are added to a container: if the developer doesn't define both startup and readiness probes, default startup probes are added
+ - Addition of the CONTAINER_APP_REPLICA_NAME environment variable to custom containers
+ - Improved performance when multiple revisions are stopped
+ ## Next steps [Create a Container Apps connected environment (Preview)](azure-arc-enable-cluster.md)
container-registry Container Registry Get Started Docker Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-docker-cli.md
Both commands return `Login Succeeded` once completed.
First, pull a public Nginx image to your local computer. This example pulls an image from Microsoft Container Registry. ```
-docker pull mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+docker pull mcr.microsoft.com/oss/nginx/nginx:stable
``` ## Run the container locally
docker pull mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
Execute the following [docker run](https://docs.docker.com/engine/reference/run/) command to start a local instance of the Nginx container interactively (`-it`) on port 8080. The `--rm` argument specifies that the container should be removed when you stop it. ```
-docker run -it --rm -p 8080:80 mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+docker run -it --rm -p 8080:80 mcr.microsoft.com/oss/nginx/nginx:stable
``` Browse to `http://localhost:8080` to view the default web page served by Nginx in the running container. You should see a page similar to the following:
To stop and remove the container, press `Control`+`C`.
Use [docker tag](https://docs.docker.com/engine/reference/commandline/tag/) to create an alias of the image with the fully qualified path to your registry. This example specifies the `samples` namespace to avoid clutter in the root of the registry. ```
-docker tag mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine myregistry.azurecr.io/samples/nginx
+docker tag mcr.microsoft.com/oss/nginx/nginx:stable myregistry.azurecr.io/samples/nginx
``` For more information about tagging with namespaces, see the [Repository namespaces](container-registry-best-practices.md#repository-namespaces) section of [Best practices for Azure Container Registry](container-registry-best-practices.md).
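The fully qualified target name used above follows the pattern `<login server>/<namespace>/<repository>:<tag>`; a minimal sketch of how it's composed (the registry login server is a placeholder, and the `docker` commands are commented out since they require a running Docker daemon):

```shell
# Hypothetical values; substitute your own registry login server.
REGISTRY=myregistry.azurecr.io
NAMESPACE=samples
SOURCE_IMAGE=mcr.microsoft.com/oss/nginx/nginx:stable
TARGET_IMAGE="$REGISTRY/$NAMESPACE/nginx:stable"
echo "$TARGET_IMAGE"
# docker tag "$SOURCE_IMAGE" "$TARGET_IMAGE"   # create the alias
# docker push "$TARGET_IMAGE"                  # push it to your registry
```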
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
Title: Build, Sign and Verify a container image using notation and certificate in Azure Key Vault description: In this tutorial you'll learn to create a signing certificate, build a container image, remote sign image with notation and Azure Key Vault, and then verify the container image using the Azure Container Registry.-+ Previously updated : 12/12/2022 Last updated : 4/23/2023 # Build, sign, and verify container images using Notary and Azure Key Vault (Preview)
-The Azure Key Vault (AKV) is used to store a signing key that can be utilized by **notation** with the notation AKV plugin (azure-kv) to sign and verify container images and other artifacts. The Azure Container Registry (ACR) allows you to attach these signatures using the **az** or **oras** CLI commands.
+The Azure Key Vault (AKV) is used to store a signing key that can be utilized by [notation](http://notaryproject.dev/) with the notation AKV plugin (azure-kv) to sign and verify container images and other artifacts. The Azure Container Registry (ACR) allows you to attach these signatures using the **az** or **oras** CLI commands.
-The signed containers enable users to assure deployments are built from a trusted entity and verify artifact hasn't been tampered with since their creation. The signed artifact ensures integrity and authenticity before the user pulls an artifact into any environment and avoid attacks.
+The signed image enables users to assure that deployments are built from a trusted entity and to verify that an artifact hasn't been tampered with since its creation. The signed artifact ensures integrity and authenticity before the user pulls it into any environment, helping to avoid attacks.
In this tutorial:
In this tutorial:
## Install the notation CLI and AKV plugin
-1. Install notation v1.0.0-rc.1 with plugin support on a Linux environment. You can also download the package for other environments from the [release page](https://github.com/notaryproject/notation/releases/tag/v1.0.0-rc.1).
+1. Install notation v1.0.0-rc.4 on a Linux environment. You can also download the package for other environments by following the [Notation installation guide](https://notaryproject.dev/docs/installation/cli/).
```bash # Download, extract and install
- curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.1/notation_1.0.0-rc.1_linux_amd64.tar.gz
+ curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.4/notation_1.0.0-rc.4_linux_amd64.tar.gz
tar xvzf notation.tar.gz # Copy the notation cli to the desired bin directory in your PATH
In this tutorial:
2. Install the notation Azure Key Vault plugin for remote signing and verification. > [!NOTE]
- > The plugin directory varies depending upon the operating system being used. The directory path below assumes Ubuntu.
- > Please read the [notation config article](https://github.com/notaryproject/notaryproject.dev/blob/main/content/en/docs/how-to/directory-structure.md) for more information.
+ > The plugin directory varies depending upon the operating system being used. The directory path below assumes Ubuntu. Please read the [Notation directory structure for system configuration](https://notaryproject.dev/docs/concepts/directory-structure/) for more information.
```bash # Create a directory for the plugin
In this tutorial:
# Download the plugin curl -Lo notation-azure-kv.tar.gz \
- https://github.com/Azure/notation-azure-kv/releases/download/v0.5.0-rc.1/notation-azure-kv_0.5.0-rc.1_Linux_amd64.tar.gz
+ https://github.com/Azure/notation-azure-kv/releases/download/v0.6.0/notation-azure-kv_0.6.0_Linux_amd64.tar.gz
# Extract to the plugin directory tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv ```
-3. List the available plugins and verify that the plugin is available.
+3. List the available plugins.
```bash notation plugin ls
In this tutorial:
## Store the signing certificate in AKV
-If you have an existing certificate, upload it to AKV. For more information on how to use your own signing key, see the [signing certificate requirements.](https://github.com/notaryproject/notaryproject/blob/v1.0.0-rc.1/specs/signature-specification.md)
+If you have an existing certificate, upload it to AKV. For more information on how to use your own signing key, see the [signing certificate requirements.](https://github.com/Azure/notation-azure-kv/blob/release-0.6/docs/ca-signed-workflow.md)
Otherwise create an x509 self-signed certificate storing it in AKV for remote signing using the steps below. ### Create a self-signed certificate (Azure CLI) 1. Create a certificate policy file.
- Once the certificate policy file is executed as below, it creates a valid signing certificate compatible with **notation** in AKV. The EKU listed is for code-signing, but isn't required for notation to sign artifacts. The subject is used later as trust identity that user tursts during verification.
+ Once the certificate policy file is executed as below, it creates a valid signing certificate compatible with **notation** in AKV. The EKU listed is for code signing, but isn't required for notation to sign artifacts. The subject is used later as the trust identity that the user trusts during verification.
```bash cat <<EOF > ./my_policy.json
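The body of the policy file is elided in this digest; the following is an illustrative sketch only, with field values modeled on the Azure Key Vault certificate policy schema (the subject is a placeholder, and the exact values in the tutorial may differ):

```shell
# Illustrative certificate policy sketch; not the tutorial's exact file.
# The subject below is a placeholder for the $CERT_SUBJECT used later.
cat <<EOF > ./my_policy.json
{
  "issuerParameters": {
    "name": "Self"
  },
  "keyProperties": {
    "exportable": false,
    "keySize": 2048,
    "keyType": "RSA",
    "reuseKey": true
  },
  "x509CertificateProperties": {
    "ekus": ["1.3.6.1.5.5.7.3.3"],
    "keyUsage": ["digitalSignature"],
    "subject": "CN=example.com",
    "validityInMonths": 12
  }
}
EOF
```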
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
az keyvault certificate download --file $CERT_PATH --id $CERT_ID --encoding PEM ```
-5. Add a signing key referencing the key id.
+5. Add a signing key referencing the key ID.
```bash notation key add $KEY_NAME --plugin azure-kv --id $KEY_ID
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
```bash notation key ls ```
-
+ 7. Add the downloaded public certificate to a named trust store for signature verification.
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
notation cert add --type $STORE_TYPE --store $STORE_NAME $CERT_PATH ```
-8. List the certificate to confirm
+8. List the certificate to confirm.
```bash notation cert ls
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
az acr build -r $ACR_NAME -t $IMAGE $IMAGE_SOURCE ```
-2. Authenticate with your individual Azure AD identity to use an ACR token.
+2. Authenticate with your individual Azure AD identity to use an ACR token.
```azure-cli export USER_NAME="00000000-0000-0000-0000-000000000000"
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
notation login -u $USER_NAME -p $PASSWORD $REGISTRY ```
+> [!NOTE]
+> Currently, `notation` relies on [Docker Credential Store](https://docs.docker.com/engine/reference/commandline/login/#credentials-store) for authentication. Notation requires additional configuration on Linux. If `notation login` is failing, you can configure the Docker Credential Store or Notation environment variables by following the guide [Authenticate with OCI-compliant registries](https://notaryproject.dev/docs/how-to/registry-authentication/).
+ 3. Sign the container image with the [COSE](https://datatracker.ietf.org/doc/html/rfc8152) signature format using the signing key added in previous step. ```bash
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
notation ls $IMAGE ```
-## View the graph of artifacts with the ORAS CLI (optional)
-
-ACR support for OCI artifacts enables a linked graph of supply chain artifacts that can be viewed through the ORAS CLI or the Azure CLI.
-
-1. Signed images can be view with the ORAS CLI.
-
- ```bash
- oras login -u $USER_NAME -p $PASSWORD $REGISTRY
- oras discover -o tree $IMAGE
- ```
-
-## View the graph of artifacts with the Azure CLI (optional)
-
-1. List the manifest details for the container image.
-
- ```azure-cli
- az acr manifest show-metadata $IMAGE -o jsonc
- ```
-
-2. Generates a result, showing the `digest` representing the notary v2 signature.
-
- ```json
- {
- "changeableAttributes": {
- "deleteEnabled": true,
- "listEnabled": true,
- "readEnabled": true,
- "writeEnabled": true
- },
- "createdTime": "2022-05-13T23:15:54.3478293Z",
- "digest": "sha256:effba96d9b7092a0de4fa6710f6e73bf8c838e4fbd536e95de94915777b18613",
- "lastUpdateTime": "2022-05-13T23:15:54.3478293Z",
- "name": "v1",
- "quarantineState": "Passed",
- "signed": false
- }
- ```
- ## Verify the container image 1. Configure trust policy before verification. The trust policy is a JSON document named `trustpolicy.json`, which is stored under the notation configuration directory. Users who verify signed artifacts from a registry use the trust policy to specify trusted identities that sign the artifacts, and the level of signature verification to use.
- Use the following command to configure trust policy for this tutorial. Upon successful execution of the command, one trust policy named `wabbit-networks-images` is created. This trust policy applies to all the artifacts stored in repositories defined in `$REGISTRY/$REPO`. The trust identity that user trusts has the x509 subject `$CERT_SUBJECT` from previous step, and stored under trust store named `$STORE_NAME` of type `$STORE_TYPE`. See [Trust store and trust policy specification](https://notaryproject.dev/docs/concepts/trust-store-trust-policy-specification/) for details.
+ Use the following command to configure the trust policy. Upon successful execution of the command, one trust policy named `wabbit-networks-images` is created. This trust policy applies to all the artifacts stored in repositories defined in `$REGISTRY/$REPO`. The trust identity that the user trusts has the x509 subject `$CERT_SUBJECT` from the previous step, and is stored under the trust store named `$STORE_NAME` of type `$STORE_TYPE`. See [Trust store and trust policy specification](https://notaryproject.dev/docs/concepts/trust-store-trust-policy-specification/) for details.
```bash
- cat <<EOF > $HOME/.config/notation/trustpolicy.json
+ cat <<EOF > ./trustpolicy.json
{ "version": "1.0", "trustPolicies": [
ACR support for OCI artifacts enables a linked graph of supply chain artifacts t
} EOF ```+
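The middle of the trust policy document is elided in this digest; as a hedged sketch, a complete `trustpolicy.json` following the Notation trust policy schema would look roughly like this, reusing the tutorial's environment variables (`$REGISTRY/$REPO`, `$STORE_TYPE`, `$STORE_NAME`, `$CERT_SUBJECT`):

```shell
# Sketch of the trust policy; field names follow the Notation trust policy
# schema, with the tutorial's environment variables substituted in.
cat <<EOF > ./trustpolicy.json
{
    "version": "1.0",
    "trustPolicies": [
        {
            "name": "wabbit-networks-images",
            "registryScopes": [ "$REGISTRY/$REPO" ],
            "signatureVerification": { "level": "strict" },
            "trustStores": [ "$STORE_TYPE:$STORE_NAME" ],
            "trustedIdentities": [ "x509.subject: $CERT_SUBJECT" ]
        }
    ]
}
EOF
```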
+3. Use `notation policy` to import the trust policy configuration from a JSON file that we created previously.
+
+ ```bash
+ notation policy import ./trustpolicy.json
+ notation policy show
+ ```
-2. The notation command can also help to ensure the container image hasn't been tampered with since build time by comparing the `sha` with what is in the registry.
+4. The notation command can also help to ensure the container image hasn't been tampered with since build time by comparing the `sha` with what is in the registry.
```bash notation verify $IMAGE ```
- Upon successful verification of the image using the trust policy, the sha256 digest of the verified image is returned in a successful output messages.
+ Upon successful verification of the image using the trust policy, the sha256 digest of the verified image is returned in a successful output message.
## Next steps
-See [Enforce policy to only deploy signed container images to Azure Kubernetes Service (AKS) utilizing **ratify** and **gatekeeper**.](https://github.com/Azure/notation-azure-kv/blob/main/docs/nv2-sign-verify-aks.md)
+See [Ratify on Azure: Allow only signed images to be deployed on AKS with Notation and Ratify](https://github.com/deislabs/ratify/blob/main/docs/examples/ratify-verify-azure-cmd.md).
cosmos-db How To Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-container-copy.md
Title: Create and manage intra-account container copy jobs in Azure Cosmos DB description: Learn how to create, monitor, and manage container copy jobs within an Azure Cosmos DB account using CLI commands.-+ Last updated 08/01/2022-+
cosmos-db Intra Account Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/intra-account-container-copy.md
Title: Intra-account container copy jobs description: Copy container data between containers within an account in Azure Cosmos DB.--++
The rate of container copy job progress is determined by these factors:
Container copy jobs don't work with accounts having following capabilities enabled. You will need to disable these features before running the container copy jobs. -- [Disable local auth](https://learn.microsoft.com/azure/cosmos-db/how-to-setup-rbac#use-azure-resource-manager-templates)-- [Private endpoint / IP Firewall enabled](https://learn.microsoft.com/azure/cosmos-db/how-to-configure-firewall#allow-requests-from-global-azure-datacenters-or-other-sources-within-azure). You will need to provide access to connections within public Azure datacenters to run container copy jobs.-- [Merge partition](https://learn.microsoft.com/azure/cosmos-db/merge).
+- [Disable local auth](how-to-setup-rbac.md#use-azure-resource-manager-templates)
+- [Private endpoint / IP Firewall enabled](how-to-configure-firewall.md#allow-requests-from-global-azure-datacenters-or-other-sources-within-azure). You will need to provide access to connections within public Azure datacenters to run container copy jobs.
+- [Merge partition](merge.md).
### Account Configurations
cosmos-db Choose Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/choose-model.md
Title: Choose between RU-based and vCore-based models description: Choose whether the RU-based or vCore-based option for Azure Cosmos DB for MongoDB is ideal for your workload.--++
cosmos-db Integrations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/integrations-overview.md
Title: Integrations overview in Azure Cosmos DB for MongoDB description: Learn how to integrate Azure Cosmos DB for MongoDB account with other Azure services.-+ Last updated 07/25/2022-+ # Integrate Azure Cosmos DB for MongoDB with Azure services
cosmos-db Migrate Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/migrate-databricks.md
Title: Migrate from MongoDB to Azure Cosmos DB for MongoDB, using Databricks and Spark description: Learn how to use Databricks Spark to migrate large datasets from MongoDB instances to Azure Cosmos DB.--++
cosmos-db Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md
Title: Compatibility and feature support description: Review Azure Cosmos DB for MongoDB vCore supported features and syntax including; commands, query support, datatypes, aggregation, and operators.--++
cosmos-db Migration Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/migration-options.md
Title: Migrate data from MongoDB description: Learn about the various options to migrate your data from other MongoDB sources to Azure Cosmos DB for MongoDB vCore.--++
cosmos-db Change Feed Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-processor.md
Title: Change feed processor in Azure Cosmos DB
-description: Learn how to use the Azure Cosmos DB change feed processor to read the change feed, the components of the change feed processor
+description: Learn how to use the Azure Cosmos DB Change Feed Processor to read the change feed, the components of the change feed processor
ms.devlang: csharp Previously updated : 04/05/2022 Last updated : 04/26/2023
The main benefit of change feed processor library is its fault-tolerant behavior
There are four main components of implementing the change feed processor:
-1. **The monitored container:** The monitored container has the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container.
+* **The monitored container:** The monitored container has the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container.
-1. **The lease container:** The lease container acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.
+* **The lease container:** The lease container acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.
-1. **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it could be represented by a VM, a kubernetes pod, an Azure App Service instance, an actual physical machine. It has a unique identifier referenced as the *instance name* throughout this article.
+* **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it could be represented by a VM, a kubernetes pod, an Azure App Service instance, an actual physical machine. It has a unique identifier referenced as the *instance name* throughout this article.
-1. **The delegate:** The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.
+* **The delegate:** The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.
-To further understand how these four elements of change feed processor work together, let's look at an example in the following diagram. The monitored container stores documents and uses 'City' as the partition key. We see that the partition key values are distributed in ranges (each range representing a [physical partition](../partitioning-overview.md#physical-partitions)) that contain items.
+To further understand how these four elements of the change feed processor work together, let's look at an example in the following diagram. The monitored container stores items and uses 'City' as the partition key. The partition key values are distributed in ranges (each range representing a [physical partition](../partitioning-overview.md#physical-partitions)) that contain items.
There are two compute instances, and the change feed processor assigns different ranges to each instance to maximize compute distribution; each instance has a unique name. Each range is being read in parallel and its progress is maintained separately from other ranges in the lease container through a *lease* document. The combination of the leases represents the current state of the change feed processor.
The point of entry is always the monitored container, from a `Container` instanc
[!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=DefineProcessor)]
-Where the first parameter is a distinct name that describes the goal of this processor and the second name is the delegate implementation that will handle changes.
+Where the first parameter is a distinct name that describes the goal of this processor and the second name is the delegate implementation that handles changes.
An example of a delegate would be: - [!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=Delegate)]
-Afterwards, you define the compute instance name or unique identifier with `WithInstanceName`, this should be unique and different in each compute instance you are deploying, and finally which is the container to maintain the lease state with `WithLeaseContainer`.
+Afterwards, you define the compute instance name or unique identifier with `WithInstanceName`, which should be unique and different in each compute instance you're deploying, and finally the container that maintains the lease state with `WithLeaseContainer`.
-Calling `Build` will give you the processor instance that you can start by calling `StartAsync`.
+Calling `Build` gives you the processor instance that you can start by calling `StartAsync`.
## Processing life cycle
The normal life cycle of a host instance is:
## Error handling
-The change feed processor is resilient to user code errors. That means that if your delegate implementation has an unhandled exception (step #4), the thread processing that particular batch of changes will be stopped, and a new thread will be created. The new thread will check which was the latest point in time the lease store has for that range of partition key values, and restart from there, effectively sending the same batch of changes to the delegate. This behavior will continue until your delegate processes the changes correctly and it's the reason the change feed processor has an "at least once" guarantee.
+The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread processing that particular batch of changes stops, and a new thread is eventually created. The new thread checks the latest point in time the lease store has saved for that range of partition key values, and restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly, and it's the reason the change feed processor has an "at least once" guarantee.
> [!NOTE] > There is only one scenario where a batch of changes will not be retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. On those cases, the retry would use the [initial starting configuration](#starting-time), which might or might not include the last batch.
-To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message queue might be another Azure Cosmos DB container. The exact data store does not matter, simply that the unprocessed changes are persisted.
+To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message queue might be another Azure Cosmos DB container. The exact data store doesn't matter, simply that the unprocessed changes are persisted.
In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed or use the [life cycle notifications](#life-cycle-notifications) to detect underlying failures.
The change feed processor lets you hook to relevant events in its [life cycle](#
A single change feed processor deployment unit consists of one or more compute instances with the same `processorName` and lease container configuration but different instance name each. You can have many deployment units where each one has a different business flow for the changes and each deployment unit consisting of one or more instances.
-For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
+For example, you might have one deployment unit that triggers an external API anytime there's a change in your container. Another deployment unit might move data, in real time, each time there's a change. When a change happens in your monitored container, all your deployment units get notified.
## Dynamic scaling As mentioned before, within a deployment unit you can have one or more compute instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are:
-1. All instances should have the same lease container configuration.
-1. All instances should have the same `processorName`.
-1. Each instance needs to have a different instance name (`WithInstanceName`).
+* All instances should have the same lease container configuration.
+* All instances should have the same `processorName`.
+* Each instance needs to have a different instance name (`WithInstanceName`).
-If these three conditions apply, then the change feed processor will distribute all the leases in the lease container across all running instances of that deployment unit and parallelize compute using an equal distribution algorithm. One lease can only be owned by one instance at a given time, so the number of instances should not be greater than the number of leases.
+If these three conditions apply, then the change feed processor distributes all the leases in the lease container across all running instances of that deployment unit and parallelizes compute using an equal distribution algorithm. A lease is owned by one instance at a given time, so the number of instances shouldn't be greater than the number of leases.
The number of instances can grow and shrink, and the change feed processor will dynamically adjust the load by redistributing accordingly. Moreover, the change feed processor can dynamically adjust to container scale due to throughput or storage increases. When your container grows, the change feed processor transparently handles these scenarios by dynamically increasing the leases and distributing the new leases among existing instances.
-## Change feed and provisioned throughput
-
-Change feed read operations on the monitored container will consume [request units](../request-units.md). Make sure your monitored container is not experiencing [throttling](troubleshoot-request-rate-too-large.md), otherwise you will experience delays in receiving change feed events on your processors.
-
-Operations on the lease container (updating and maintaining state) consume [request units](../request-units.md). The higher the number of instances using the same lease container, the higher the potential request units consumption will be. Make sure your lease container is not experiencing [throttling](troubleshoot-request-rate-too-large.md), otherwise you will experience delays in receiving change feed events on your processors, in some cases where throttling is high, the processors might stop processing completely.
- ## Starting time
-By default, when a change feed processor starts the first time, it will initialize the leases container, and start its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor was initialized for the first time won't be detected.
+By default, when a change feed processor starts the first time, it initializes the leases container and starts its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor was initialized for the first time aren't detected.
### Reading from a previous date and time
It's possible to initialize the change feed processor to read changes starting a
The change feed processor will be initialized for that specific date and time and start reading the changes that happened after.
-> [!NOTE]
-> Starting the change feed processor at a specific date and time is not supported in multi-region write accounts.
- ### Reading from the beginning In other scenarios like data migrations or analyzing the entire history of a container, we need to read the change feed from **the beginning of that container's lifetime**. To do that, we can use `WithStartTime` on the builder extension, but passing `DateTime.MinValue.ToUniversalTime()`, which would generate the UTC representation of the minimum `DateTime` value, like so:
An example of a delegate implementation would be:
> In the above we pass a variable `options` of type `ChangeFeedProcessorOptions`, which can be used to set various values including `setStartFromBeginning`: > [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=ChangeFeedProcessorOptions)]
-We assign this to a `changeFeedProcessorInstance`, passing parameters of compute instance name (`hostName`), the monitored container (here called `feedContainer`) and the `leaseContainer`. We then start the change feed processor:
+We assign the result of `buildChangeFeedProcessor()` to a `changeFeedProcessorInstance`, passing parameters of compute instance name (`hostName`), the monitored container (here called `feedContainer`) and the `leaseContainer`. We then start the change feed processor:
[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=StartChangeFeedProcessor)]
The normal life cycle of a host instance is:
## Error handling
-The change feed processor is resilient to user code errors. That means that if your delegate implementation has an unhandled exception (step #4), the thread processing that particular batch of changes will be stopped, and a new thread will be created. The new thread will check which was the latest point in time the lease store has for that range of partition key values, and restart from there, effectively sending the same batch of changes to the delegate. This behavior will continue until your delegate processes the changes correctly and it's the reason the change feed processor has an "at least once" guarantee.
+The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread processing that particular batch of changes is stopped, and a new thread is created. The new thread checks the latest point in time the lease store has saved for that range of partition key values, and restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly, and it's the reason the change feed processor has an "at least once" guarantee.
> [!NOTE] > There is only one scenario where a batch of changes will not be retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. On those cases, the retry would use the [initial starting configuration](#starting-time), which might or might not include the last batch.
-To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message might be another Azure Cosmos DB container. The exact data store does not matter, simply that the unprocessed changes are persisted.
+To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message store. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message store might be another Azure Cosmos DB container. The exact data store doesn't matter; what matters is that the unprocessed changes are persisted.
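The dead-letter pattern above can be sketched in a few lines. This is a language-agnostic illustration (the actual SDKs are Java and .NET); `handle` and `dead_letter` are hypothetical stand-ins for your delegate logic and errored-message store:

```python
def process_batch(changes, handle, dead_letter):
    """Apply the delegate `handle` to each change; persist failures instead of
    letting the processor retry the same batch forever.
    `handle` and `dead_letter` are hypothetical stand-ins for your delegate
    logic and errored-message store (for example, another container)."""
    for doc in changes:
        try:
            handle(doc)  # your delegate's per-document processing
        except Exception:
            # Keep the unprocessed change so it can be inspected and replayed
            # later, while the processor continues with future changes.
            dead_letter.append(doc)
```

Because the failed documents are persisted rather than re-thrown, the processor checkpoints the batch and moves on, so one bad document can't block the change feed.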
In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed.
-<!-- ## Life-cycle notifications
-
-The change feed processor lets you hook to relevant events in its [life cycle](#processing-life-cycle), you can choose to be notified to one or all of them. The recommendation is to at least register the error notification:
-
-* Register a handler for `WithLeaseAcquireNotification` to be notified when the current host acquires a lease to start processing it.
-* Register a handler for `WithLeaseReleaseNotification` to be notified when the current host releases a lease and stops processing it.
-* Register a handler for `WithErrorNotification` to be notified when the current host encounters an exception during processing, being able to distinguish if the source is the user delegate (unhandled exception) or an error the processor is encountering trying to access the monitored container (for example, networking issues).
-
-[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartWithNotifications)] -->
- ## Deployment unit A single change feed processor deployment unit consists of one or more compute instances with the same lease container configuration and the same `leasePrefix`, but a different `hostName` each. You can have many deployment units, where each one has a different business flow for the changes and each deployment unit consists of one or more instances.
-For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
+For example, you might have one deployment unit that triggers an external API anytime there's a change in your container. Another deployment unit might move data, in real time, each time there's a change. When a change happens in your monitored container, all your deployment units get notified.
## Dynamic scaling As mentioned before, within a deployment unit you can have one or more compute instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are:
-1. All instances should have the same lease container configuration.
-1. All instances should have the same value set in `options.setLeasePrefix` (or none set at all).
-1. Each instance needs to have a different `hostName`.
+* All instances should have the same lease container configuration.
+* All instances should have the same value set in `options.setLeasePrefix` (or none set at all).
+* Each instance needs to have a different `hostName`.
-If these three conditions apply, then the change feed processor will distribute all the leases in the lease container across all running instances of that deployment unit and parallelize compute using an equal distribution algorithm. One lease can only be owned by one instance at a given time, so the number of instances should not be greater than the number of leases.
+If these three conditions apply, then the change feed processor distributes all the leases in the lease container across all running instances of that deployment unit and parallelizes compute using an equal distribution algorithm. A lease is owned by one instance at a given time, so the number of instances shouldn't be greater than the number of leases.
The number of instances can grow and shrink, and the change feed processor will dynamically adjust the load by redistributing accordingly. Deployment units can share the same lease container, but they should each have a different `leasePrefix`. Moreover, the change feed processor can dynamically adjust to containers scale due to throughput or storage increases. When your container grows, the change feed processor transparently handles these scenarios by dynamically increasing the leases and distributing the new leases among existing instances.
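The equal-distribution idea described above can be illustrated with a small sketch. This is not the SDK's actual algorithm, only a round-robin illustration of the invariants it maintains: every lease is owned by exactly one host, and host lease counts differ by at most one.

```python
def distribute_leases(leases, host_names):
    """Illustrative round-robin distribution of leases across the running
    instances of one deployment unit. Each lease ends up owned by exactly one
    host, and the counts are balanced. A sketch, not the SDK's algorithm."""
    assignment = {host: [] for host in host_names}
    for i, lease in enumerate(sorted(leases)):
        assignment[host_names[i % len(host_names)]].append(lease)
    return assignment
```

With four leases and two hosts, each host owns two leases; adding a third host would trigger a redistribution so no host owns more than two.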
-## Change feed and provisioned throughput
-
-Change feed read operations on the monitored container will consume [request units](../request-units.md). Make sure your monitored container is not experiencing [throttling](troubleshoot-request-rate-too-large.md), otherwise you will experience delays in receiving change feed events on your processors.
-
-Operations on the lease container (updating and maintaining state) consume [request units](../request-units.md). The higher the number of instances using the same lease container, the higher the potential request units consumption will be. Make sure your lease container is not experiencing [throttling](troubleshoot-request-rate-too-large.md), otherwise you will experience delays in receiving change feed events on your processors, in some cases where throttling is high, the processors might stop processing completely.
- ## Starting time
-By default, when a change feed processor starts the first time, it will initialize the leases container, and start its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor was initialized for the first time won't be detected.
+By default, when a change feed processor starts the first time, it initializes the leases container and starts its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor was initialized for the first time won't be detected.
### Reading from a previous date and time It's possible to initialize the change feed processor to read changes starting at a **specific date and time** by setting `setStartTime` in `options`. The change feed processor will be initialized for that specific date and time and will start reading the changes that happened afterward.
-> [!NOTE]
-> Starting the change feed processor at a specific date and time is not supported in multi-region write accounts.
- ### Reading from the beginning
-In our above sample, we set `setStartFromBeginning` to `false`, which is the same as the default value. In other scenarios like data migrations or analyzing the entire history of a container, we need to read the change feed from **the beginning of that container's lifetime**. To do that, we can set `setStartFromBeginning` to `true`. The change feed processor will be initialized and start reading changes from the beginning of the lifetime of the container.
+In our sample, we set `setStartFromBeginning` to `false`, which is the same as the default value. In other scenarios like data migrations or analyzing the entire history of a container, we need to read the change feed from **the beginning of that container's lifetime**. To do that, we can set `setStartFromBeginning` to `true`. The change feed processor will be initialized and start reading changes from the beginning of the lifetime of the container.
> [!NOTE] > These customization options only work to set up the starting point in time of the change feed processor. Once the leases container is initialized for the first time, changing them has no effect.
+## Change feed and provisioned throughput
+
+Change feed read operations on the monitored container consume [request units](../request-units.md). Make sure your monitored container isn't experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise, you'll see delays in receiving change feed events on your processors.
+
+Operations on the lease container (updating and maintaining state) consume [request units](../request-units.md). The higher the number of instances using the same lease container, the higher the potential request unit consumption. Make sure your lease container isn't experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise, you'll see delays in receiving change feed events, and when throttling is high, the processors might stop processing completely.
+ ## Sharing the lease container You can share the lease container across multiple [deployment units](#deployment-unit); each deployment unit would listen to a different monitored container or have a different `processorName`. With this configuration, each deployment unit would maintain an independent state on the lease container. Review the [request unit consumption on the lease container](#change-feed-and-provisioned-throughput) to make sure the provisioned throughput is enough for all the deployment units.
+## Advanced lease configuration
+
+There are three key configurations that can affect the change feed processor behavior. In all cases, they affect the [request unit consumption on the lease container](#change-feed-and-provisioned-throughput). These configurations can be changed during the creation of the change feed processor but should be used carefully:
+
+* Lease Acquire: By default every 17 seconds. A host periodically checks the state of the lease store and considers acquiring leases as part of the [dynamic scaling](#dynamic-scaling) process. This process is done by executing a Query on the lease container. Reducing this value makes rebalancing and acquiring leases faster but increases [request unit consumption on the lease container](#change-feed-and-provisioned-throughput).
+* Lease Expiration: By default 60 seconds. Defines the maximum amount of time that a lease can exist without any renewal activity before it's acquired by another host. When a host crashes, the leases it owned are picked up by other hosts after this period of time plus the configured renewal interval. Reducing this value makes recovering after a host crash faster, but the expiration value should never be lower than the renewal interval.
+* Lease Renewal: By default every 13 seconds. A host owning a lease periodically renews it even if there are no new changes to consume. This process is done by executing a Replace on the lease. Reducing this value lowers the time required to detect leases lost by a host crashing but increases [request unit consumption on the lease container](#change-feed-and-provisioned-throughput).
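These three intervals combine into a rough worst-case recovery time after a host crash. The sketch below just does the arithmetic implied by the descriptions above (expiration plus renewal interval before the lease is considered lost, plus one acquire pass for a surviving host to pick it up); the default values match the text:

```python
def worst_case_takeover_seconds(lease_expiration=60, renewal_interval=13,
                                acquire_interval=17):
    """Rough upper bound (in seconds) on how long a crashed host's leases can
    go unprocessed: the lease must first be considered expired (expiration
    plus the renewal interval, per the descriptions above), and then a
    surviving host must notice it on its next lease-acquire pass."""
    return lease_expiration + renewal_interval + acquire_interval
```

With the defaults, that's about 90 seconds; lowering the expiration shortens recovery at the cost of more request unit consumption on the lease container.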
++ ## Where to host the change feed processor The change feed processor can be hosted in any platform that supports long running processes or tasks:
The change feed processor can be hosted in any platform that supports long runni
* A serverless function in [Azure Functions](/azure/architecture/best-practices/background-jobs#azure-functions). * An [ASP.NET hosted service](/aspnet/core/fundamentals/host/hosted-services).
-While change feed processor can run in short lived environments, because the lease container maintains the state, the startup cycle of these environments will add delay to receiving the notifications (due to the overhead of starting the processor every time the environment is started).
+While the change feed processor can run in short-lived environments (because the lease container maintains the state), the startup cycle of these environments adds delay to receiving the notifications, due to the overhead of starting the processor every time the environment is started.
## Additional resources
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
Title: View your Azure usage summary details and download reports for EA enrollm
description: This article explains how enterprise administrators of direct and indirect Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Previously updated : 02/28/2023 Last updated : 04/27/2023
This article explains how enterprise administrators of direct and indirect Enter
> [!NOTE] > We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md). >
-> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+> As of February 20, 2023 indirect EA customers can't manage their billing account in the EA portal. Instead, they must use the Azure portal.
> > This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
The following table lists the terms and descriptions shown on the Usage + Charge
In the past, when a reservation refund was required, Microsoft manually reviewed closed bills - sometimes going back multiple years. The manual review sometimes led to issues. To resolve the issues, the refund review process is changing to a forward-looking review that doesn't require reviewing closed bills.
-The new review process is being deployed in phases. The current phase began on March 1, 2023. In this phase, Microsoft is addressing only refunds that result in an overage. For example, an overage that generates a credit note.
+The new review process is being deployed in phases. The current phase begins on May 1, 2023. In this phase, Microsoft is addressing only refunds that result in an overage. For example, an overage that generates a credit note.
To better understand the change, let's look at a detailed example of the old process. Assume that a reservation was bought in February 2022 with an overage credit (no Azure prepayment or Monetary Commitment was involved). You decided to return the reservation in August 2022. Refunds use the same payment method as the purchase. So, you received a credit note in August 2022 for the February 2022 billing period. However, the credit amount reflects the month of purchase. In this example, that's February 2022. The refund results in the change to the service overage and total charges.
cost-management-billing Ea Billing Administration Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-billing-administration-partners.md
Title: EA billing administration for partners in the Azure portal
description: This article explains the common tasks that a partner administrator accomplishes in the Azure portal to manage indirect enterprise agreements. Previously updated : 04/24/2023 Last updated : 04/26/2023
This article explains the common tasks that a partner administrator accomplishes in the Azure portal https://portal.azure.com to manage indirect EAs. An indirect EA is one where a customer signs an agreement with a Microsoft partner. The partner administrator manages their indirect EAs on behalf of their customers.
+You can watch the [EA Billing administration in the Azure portal for Partners](https://www.youtube.com/playlist?list=PLeZrVF6SXmso87z4YJ5KKrCCYMuiK47d0) series of videos on YouTube.
+ ## Access the Azure portal The partner organization is referred to as the **billing account** in the Azure portal. Partner administrators can sign in to the Azure portal to view and manage their partner organization. The partner organization contains their customer's enrollments. However, the partner doesn't have an enrollment of their own. A customer's enrollment is shown in the Azure portal as a **billing profile**.
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
Azure Reservations help you save money by committing to one-year or three-years
## Who can buy a reservation
-To buy a reservation, you must have owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P) or Microsoft Customer Agreement. Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations) to purchase Azure Reservations.
+To buy a reservation, you must have the owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P), Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P), or Microsoft Customer Agreement. Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations) to purchase Azure Reservations. You can't purchase a reservation if you have a custom role that mimics the owner role or reservation purchaser role on an Azure subscription; you must use the built-in owner or built-in reservation purchaser role.
Enterprise Agreement (EA) customers can limit purchases to EA admins by disabling the **Add Reserved Instances** option in the EA Portal. Direct EA customers can now disable Reserved Instance setting in [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to Policies menu to change settings.
databox-online Azure Stack Edge Reset Reactivate Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-reset-reactivate-device.md
Previously updated : 03/17/2023 Last updated : 04/27/2023
databox-online Azure Stack Edge Return Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-return-device.md
Previously updated : 10/28/2021 Last updated : 04/27/2023
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
| Alert | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | |--|-|:--:|-|
-| **DDoS Attack detected for Public IP** | DDoS Attack detected for Public IP (IP address) and being mitigated. | Probing | High |
-| **DDoS Attack mitigated for Public IP** | DDoS Attack mitigated for Public IP (IP address). | Probing | Low |
+| **DDoS Attack detected for Public IP**<br>(NETWORK_DDOS_DETECTED) | DDoS Attack detected for Public IP (IP address) and being mitigated. | Probing | High |
+| **DDoS Attack mitigated for Public IP**<br>(NETWORK_DDOS_MITIGATED) | DDoS Attack mitigated for Public IP (IP address). | Probing | Low |
## <a name="alerts-fusion"></a>Security incident alerts
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
The optional Defender CSPM plan, provides advanced posture management capabiliti
### Plan pricing > [!NOTE]
-> The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on May 1 2023. Billing will apply for compute, database, and storage resources. Billable workloads will be VMs, Storage Accounts, OSS DBs, and SQL PaaS & Servers on Machines. When billing starts, existing Microsoft Defender for Cloud customers will receive automatically applied discounts for Defender CSPM.
+> The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on August 1 2023. Billing will apply for compute, database, and storage resources. Billable workloads will be VMs, Storage Accounts, OSS DBs, and SQL PaaS & Servers on Machines. When billing starts, existing Microsoft Defender for Cloud customers will receive automatically applied discounts for Defender CSPM.
Microsoft Defender CSPM protects across all your multicloud workloads, but billing only applies for Servers, Databases and Storage accounts at $15/billable resource/month. If you have one of the following plans enabled, you will receive a discount.
Refer to the following table:
|Defender for Containers | 10% | **$13.50/** Compute or Data workload / month |Defender for DBs / Defender for Storage | 5% | **$14.25/** Compute or Data workload / month
-## Plan Availability
+## Plan availability
Learn more about [Defender CSPM pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
The following table summarizes each plan and their cloud availability.
| Feature | Foundational CSPM capabilities | Defender CSPM | Cloud availability | |--|--|--|--| | [Security recommendations to fix misconfigurations and weaknesses](review-security-recommendations.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png":::| Azure, AWS, GCP, on-premises |
-| Asset inventory | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| [Asset inventory](asset-inventory.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
| [Secure score](secure-score-security-controls.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | Data visualization and reporting with Azure Workbooks | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
-| Data exporting | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
-| Workflow automation | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
-| Remediation tracking | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| [Data exporting](export-to-siem.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| [Workflow automation](workflow-automation.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| Tools for remediation | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
| Microsoft Cloud Security Benchmark | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | | [Governance](governance-rules.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | [Regulatory compliance](concept-regulatory-compliance.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
The following table summarizes each plan and their cloud availability.
| [Agentless scanning for machines](concept-agentless-data-collection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | | [Agentless discovery for Kubernetes](concept-agentless-containers.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure | | [Agentless vulnerability assessments for container images](defender-for-containers-vulnerability-assessment-azure.md), including registry scanning (\* Up to 20 unique images per billable resource) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
-| Sensitive data discovery | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| Data flows discovery | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| [Data aware security posture](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
| EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
The following table summarizes each plan and their cloud availability.
## Next steps
-Learn about Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
+Learn about Defender for Cloud's [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
defender-for-cloud Episode Eight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eight.md
Title: Microsoft Defender for IoT description: Learn how Defender for IoT discovers devices to monitor and how it fits in the Microsoft Security portfolio. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Microsoft Defender for IoT
defender-for-cloud Episode Eighteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eighteen.md
Title: Defender for Azure Cosmos DB | Defender for Cloud in the Field
description: Learn about Defender for Cloud integration with Azure Cosmos DB. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Defender for Azure Cosmos DB | Defender for Cloud in the Field
defender-for-cloud Episode Eleven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eleven.md
Title: Threat landscape for Defender for Containers description: Learn about the new detections that are available for different attacks and how Defender for Containers can help to quickly identify malicious activities in containers. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Threat landscape for Defender for Containers
defender-for-cloud Episode Fifteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-fifteen.md
Title: Remediate security recommendations with governance
description: Learn about the new governance feature in Defender for Cloud, and how to drive security posture improvement. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Remediate security recommendations with governance
defender-for-cloud Episode Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-five.md
Title: Microsoft Defender for Servers description: Learn all about Microsoft Defender for Servers. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Microsoft Defender for Servers
defender-for-cloud Episode Four https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-four.md
Title: Security posture management improvements in Microsoft Defender for Cloud description: Learn how to manage your security posture with Microsoft Defender for Cloud. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Security posture management improvements in Microsoft Defender for Cloud
defender-for-cloud Episode Fourteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-fourteen.md
Title: Defender for Servers deployment in AWS and GCP
description: Learn about the capabilities available for Defender for Servers deployment within AWS and GCP. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Defender for Servers deployment in AWS and GCP
defender-for-cloud Episode Nine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-nine.md
Title: Microsoft Defender for Containers in a multicloud environment description: Learn about Microsoft Defender for Containers implementation in AWS and GCP. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Microsoft Defender for Containers in a Multicloud Environment
defender-for-cloud Episode Nineteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-nineteen.md
Title: Defender for DevOps | Defender for Cloud in the Field
description: Learn about Defender for Cloud integration with Defender for DevOps. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Defender for DevOps | Defender for Cloud in the Field
defender-for-cloud Episode One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-one.md
Title: New AWS connector in Microsoft Defender for Cloud description: Learn all about the new AWS connector in Microsoft Defender for Cloud. Previously updated : 01/24/2023 Last updated : 04/27/2023 # New AWS connector in Microsoft Defender for Cloud
defender-for-cloud Episode Seven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-seven.md
Title: New GCP connector in Microsoft Defender for Cloud description: Learn all about the new GCP connector in Microsoft Defender for Cloud. Previously updated : 01/24/2023 Last updated : 04/27/2023 # New GCP connector in Microsoft Defender for Cloud
defender-for-cloud Episode Seventeen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-seventeen.md
Title: Defender for Cloud integration with Microsoft Entra | Defender for Cloud
description: Learn about Defender for Cloud integration with Microsoft Entra. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Defender for Cloud integration with Microsoft Entra | Defender for Cloud in the Field
defender-for-cloud Episode Six https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-six.md
Title: Lessons learned from the field with Microsoft Defender for Cloud description: Learn how Microsoft Defender for Cloud is used to fill the gap between cloud security posture management and cloud workload protection. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Lessons learned from the field with Microsoft Defender for Cloud
defender-for-cloud Episode Sixteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-sixteen.md
Title: Defender for Servers integration with Microsoft Defender for Endpoint
description: Learn about the integration between Defender for Servers and Microsoft Defender for Endpoint Previously updated : 01/24/2023 Last updated : 04/27/2023 # Defender for Servers integration with Microsoft Defender for Endpoint
defender-for-cloud Episode Ten https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-ten.md
Title: Protecting containers in GCP with Defender for Containers description: Learn how to use Defender for Containers, to protect Containers that are located in Google Cloud Projects. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Protecting containers in GCP with Defender for Containers
defender-for-cloud Episode Thirteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirteen.md
Title: Defender for Storage description: Learn about the capabilities available in Defender for Storage. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Defender for Storage
defender-for-cloud Episode Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-three.md
Title: Microsoft Defender for Containers description: Learn how about Microsoft Defender for Containers. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Microsoft Defender for Containers
defender-for-cloud Episode Twelve https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twelve.md
Title: Enhanced workload protection features in Defender for Servers description: Learn about the enhanced capabilities available in Defender for Servers, for VMs that are located in GCP, AWS and on-premises. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Enhanced workload protection features in Defender for Servers
defender-for-cloud Episode Twenty Eight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-eight.md
Title: Zero Trust and Defender for Cloud | Defender for Cloud in the Field
description: Learn about Zero Trust best practices and Zero Trust visibility and analytics tools Previously updated : 04/20/2023 Last updated : 04/27/2023 # Zero Trust and Defender for Cloud | Defender for Cloud in the field
defender-for-cloud Episode Twenty Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-five.md
Title: AWS ECR coverage in Defender for Containers | Defender for Cloud in the f
description: Learn about AWS ECR coverage in Defender for Containers Previously updated : 01/24/2023 Last updated : 04/27/2023 # AWS ECR Coverage in Defender for Containers | Defender for Cloud in the field
defender-for-cloud Episode Twenty Four https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-four.md
Title: Enhancements in Defender for SQL vulnerability assessment | Defender for
description: Learn about Enhancements in Defender for SQL Vulnerability Assessment Previously updated : 01/24/2023 Last updated : 04/27/2023 # Enhancements in Defender for SQL vulnerability assessment | Defender for Cloud in the field
defender-for-cloud Episode Twenty Nine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-nine.md
Title: Security policy enhancements in Defender for Cloud | Defender for Cloud i
description: Learn about security policy enhancements and dashboard in Defender for Cloud Previously updated : 04/23/2023 Last updated : 04/27/2023 # Security policy enhancements in Defender for Cloud
defender-for-cloud Episode Twenty One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-one.md
Title: Latest updates in the regulatory compliance dashboard | Defender for Clou
description: Learn about the latest updates in the regulatory compliance dashboard Previously updated : 01/24/2023 Last updated : 04/27/2023 # Latest updates in the regulatory compliance dashboard | Defender for Cloud in the Field
defender-for-cloud Episode Twenty Seven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-seven.md
Title: Demystifying Defender for Servers | Defender for Cloud in the field
description: Learn about different deployment options in Defender for Servers Previously updated : 04/19/2023 Last updated : 04/27/2023 # Demystifying Defender for Servers | Defender for Cloud in the field
defender-for-cloud Episode Twenty Six https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-six.md
Title: Governance capability improvements in Defender for Cloud | Defender for C
description: Learn about the need for governance and new at scale governance capability Previously updated : 02/15/2023 Last updated : 04/27/2023 # Governance capability improvements in Defender for Cloud | Defender for Cloud in the field
defender-for-cloud Episode Twenty Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-three.md
Title: Defender Threat Intelligence | Defender for Cloud in the field
description: Learn about Microsoft Defender Threat Intelligence (Defender TI) Previously updated : 01/24/2023 Last updated : 04/27/2023 # Defender Threat Intelligence | Defender for Cloud in the Field
defender-for-cloud Episode Twenty Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-two.md
Title: Defender EASM | Defender for Cloud in the field
description: Learn about Microsoft Defender External Attack Surface Management (Defender EASM) Previously updated : 01/24/2023 Last updated : 04/27/2023 # Defender EASM | Defender for Cloud in the Field
-**Episode description**: In this episode of Defender for Cloud in the Field, Jamil Mirza joins Yuri Diogenes to talk about Microsoft Defender External Attack Surface Management (Defender EASM). Jamil explains how Defender EASM continuously discovers and maps your digital attack surface to provide an external view of your online infrastructure. Jamil also covers the integration with Defender for Cloud, how it works, and he demonstrates different capabilities available in Defender EASM..
+**Episode description**: In this episode of Defender for Cloud in the Field, Jamil Mirza joins Yuri Diogenes to talk about Microsoft Defender External Attack Surface Management (Defender EASM). Jamil explains how Defender EASM continuously discovers and maps your digital attack surface to provide an external view of your online infrastructure. Jamil also covers the integration with Defender for Cloud, how it works, and he demonstrates different capabilities available in Defender EASM.
<br> <br> <iframe src="https://aka.ms/docs/player?id=5a3e2eab-52ce-4527-94e0-baae1b9cc81d" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
defender-for-cloud Episode Twenty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty.md
Title: Cloud security explorer and attack path analysis | Defender for Cloud in
description: Learn about cloud security explorer and attack path analysis. Previously updated : 04/13/2023 Last updated : 04/27/2023 # Cloud security explorer and attack path analysis | Defender for Cloud in the Field
defender-for-cloud Episode Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-two.md
Title: Integrate Azure Purview with Microsoft Defender for Cloud description: Learn how to integrate Azure Purview with Microsoft Defender for Cloud. Previously updated : 01/24/2023 Last updated : 04/27/2023 # Integrate Microsoft Purview with Microsoft Defender for Cloud
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 04/24/2023 Last updated : 04/27/2023 # What's new in Microsoft Defender for Cloud?
Updates in April include:
- [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) - [New Azure Active Directory authentication-related recommendations for Azure Data Services](#new-azure-active-directory-authentication-related-recommendations-for-azure-data-services) - [Two recommendations related to missing Operating System (OS) updates were released to GA](#two-recommendations-related-to-missing-operating-system-os-updates-were-released-to-ga)
+- [Defender for APIs (Preview)](#defender-for-apis-preview)
+ ### Agentless Container Posture in Defender CSPM (Preview) The new Agentless Container Posture (Preview) capabilities are available as part of the Defender CSPM (Cloud Security Posture Management) plan.
The new recommendation `System updates should be installed on your machines (pow
The prerequisite recommendation ([Enable the periodic assessment property](../update-center/assessment-options.md#periodic-assessment)) will have a negative effect on your Secure Score. You can remediate the effect with the available [Fix button](implement-security-recommendations.md).
+### Defender for APIs (Preview)
+
+Microsoft Defender for Cloud announces that the new Defender for APIs plan is available in preview.
+
+Defender for APIs offers full lifecycle protection, detection, and response coverage for APIs.
+
+Defender for APIs helps you to gain visibility into business-critical APIs. You can investigate and improve your API security posture, prioritize vulnerability fixes, and quickly detect active real-time threats.
+
+Learn more about [Defender for APIs](defender-for-apis-introduction.md).
+ ## March 2023 Updates in March include:
defender-for-cloud Support Matrix Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-servers.md
For information about when recommendations are generated for each of these solut
## Next steps -- Learn how [Defender for Cloud collects data using the Log Analytics Agent](monitoring-components.md#log-analytics-agent).
+- Learn how [Defender for Cloud collects data using the Log Analytics agent](monitoring-components.md#log-analytics-agent).
- Learn how [Defender for Cloud manages and safeguards data](data-security.md).
defender-for-iot Quickstart Onboard Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/quickstart-onboard-iot-hub.md
You can create a hub in the Azure portal. For all new IoT hubs, Defender for IoT
**To create an IoT Hub**:
-1. Follow the steps in [this article](../../iot-hub/iot-hub-create-through-portal.md#create-an-iot-hub).
+1. Follow the steps to [create an IoT hub using the Azure portal](../../iot-hub/iot-hub-create-through-portal.md#create-an-iot-hub).
1. Under the **Management** tab, ensure that **Defender for IoT** is set to **On** (it's set to **On** by default).
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
If you turn off alerts that are referenced in other places, such as [alert forwa
Defender for IoT alerts use the following severity levels: -- **Critical**: Indicates a malicious attack that should be handled immediately.
+| Azure portal | OT sensor | Description |
+||||
+| **High** | **Critical** | Indicates a malicious attack that should be handled immediately. |
+| **Medium** | **Major** | Indicates a security threat that's important to address. |
+| **Low** | **Minor**, **Warning** | Indicates some deviation from the baseline behavior that might contain a security threat, or contains no security threats. |
-- **Major**: Indicates a security threat that's important to address.--- **Minor**: Indicates some deviation from the baseline behavior that might contain a security threat.--- **Warning**: Indicates some deviation from the baseline behavior with no security threats.
+Alert severities on this page are listed by the severity shown in the Azure portal.
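
As a minimal illustration of the severity table above (all names here are hypothetical, not part of any Defender for IoT API), the sensor-to-portal mapping can be sketched as a simple lookup:

```python
# Map OT sensor alert severities to their Azure portal equivalents,
# per the severity table above. Names are illustrative only.
SENSOR_TO_PORTAL = {
    "Critical": "High",
    "Major": "Medium",
    "Minor": "Low",
    "Warning": "Low",
}

def portal_severity(sensor_severity: str) -> str:
    """Return the Azure portal severity for a given OT sensor severity."""
    return SENSOR_TO_PORTAL[sensor_severity]
```

Note that the mapping isn't one-to-one: both **Minor** and **Warning** on the OT sensor surface as **Low** in the Azure portal.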
## Supported alert types
Policy engine alerts describe detected deviations from learned baseline behavior
| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques | |--|--|--|--|--|
-| **Beckhoff Software Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
-| **Database Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. <br><br> Threshold: 2 sign-in failures in 5 minutes | Major | Authentication | **Tactics:** <br> - Lateral Movement <br> - Collection <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0811: Data from Information Repositories|
-| **Emerson ROC Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
-| **External address within the network communicated with Internet** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device |
-| **Field Device Discovered Unexpectedly** | A new source device was detected on the network but hasn't been authorized. | Major | Discovery | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Firmware Change Detected** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
-| **Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
-| **Foxboro I/A Unauthorized Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **FTP Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | Authentication | **Tactics:** <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0869: Standard Application Layer Protocol |
-| **Function Code Raised Unauthorized Exception [*](#ot-alerts-turned-off-by-default)** | A source device (secondary) returned an exception to a destination device (primary). | Major | Command Failures | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0835: Manipulate I/O Image |
-| **GOOSE Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Honeywell Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
-| **Illegal HTTP Communication [*](#ot-alerts-turned-off-by-default)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0846: Remote System Discovery |
-| **Internet Access Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Major | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device |
-| **Mitsubishi Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
-| **Modbus Address Range Violation** | A primary device requested access to a new secondary memory address. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Modbus Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
-| **New Activity Detected - CIP Class** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0888: Remote System Information Discovery |
-| **New Activity Detected - CIP Class Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **New Activity Detected - CIP PCCC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **New Activity Detected - CIP Symbol** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **New Activity Detected - EtherNet/IP I/O Connection** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0846: Remote System Discovery <br> - T0835: Manipulate I/O Image |
-| **New Activity Detected - EtherNet/IP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **New Activity Detected - GSM Message Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - CommandAndControl <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
-| **New Activity Detected - LonTalk Command Codes** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Impair Process Control <br><br> **Techniques:** <br> - T0861 - Point & Tag Identification <br> - T0855: Unauthorized Command Message |
-| **New Port Discovery** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Warning | Discovery | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
-| **New Activity Detected - LonTalk Network Variable** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **New Activity Detected - Ovation Data Request** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Discovery <br><br> **Techniques:** <br> - T0801: Monitor Process State <br> - T0888: Remote System Information Discovery |
-| **New Activity Detected - Read/Write Command (AMS Index Group)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **New Activity Detected - Read/Write Command (AMS Index Offset)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **New Activity Detected - Unauthorized DeltaV Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **New Activity Detected - Unauthorized DeltaV ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **New Activity Detected - Unauthorized RPC Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **New Activity Detected - Using AMS Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter <br> - T0821: Modify Controller Tasking |
-| **New Activity Detected - Using Siemens SICAM Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **New Activity Detected - Using Suitelink Protocol command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **New Activity Detected - Using Suitelink Protocol sessions** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **New Activity Detected - Using Yokogawa VNetIP Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **New Asset Detected** | A new source device was detected on the network but hasn't been authorized. <br><br>This alert applies to devices discovered in OT subnets. New devices discovered in IT subnets don't trigger an alert.| Major | Discovery | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **New LLDP Device Configuration** | A new source device was detected on the network but hasn't been authorized. | Major | Configuration Changes | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Omron FINS Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **S7 Plus PLC Firmware Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
-| **Sampled Values Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Suspicion of Illegal Integrity Scan [*](#ot-alerts-turned-off-by-default)** | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Toshiba Computer Link Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Minor | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **Unauthorized ABB Totalflow File Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **Unauthorized ABB Totalflow Register Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **Unauthorized Access to Siemens S7 Data Block** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Warning | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Initial Access <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0811: Data from Information Repositories |
-| **Unauthorized Access to Siemens S7 Plus Object** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking <br> - T0809: Data Destruction |
-| **Unauthorized Access to Wonderware Tag** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Impair Process Control <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0855: Unauthorized Command Message |
-| **Unauthorized BACNet Object Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **Unauthorized BACNet Route** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **Unauthorized Database Login [*](#ot-alerts-turned-off-by-default)** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication | **Tactics:** <br> - Lateral Movement <br> - Persistence <br> - Collection <br><br> **Techniques:** <br> - T0859: Valid Accounts <br> - T0811: Data from Information Repositories |
-| **Unauthorized Database Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Initial Access <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0811: Data from Information Repositories |
-| **Unauthorized Emerson ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **Unauthorized GE SRTP File Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Lateral Movement <br> - Persistence <br><br> **Techniques:** <br> - T0801: Monitor Process State <br> - T0859: Valid Accounts |
-| **Unauthorized GE SRTP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **Unauthorized GE SRTP System Memory Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br> - Impair Process Control <br><br> **Techniques:** <br> - T0846: Remote System Discovery <br> - T0855: Unauthorized Command Message |
-| **Unauthorized HTTP Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0822: External Remote Services <br> - T0869: Standard Application Layer Protocol |
-| **Unauthorized HTTP SOAP Action [*](#ot-alerts-turned-off-by-default)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br> - Execution <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol <br> - T0871: Execution through API |
-| **Unauthorized HTTP User Agent [*](#ot-alerts-turned-off-by-default)** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
-| **Unauthorized Internet Connectivity Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device |
-| **Unauthorized Mitsubishi MELSEC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **Unauthorized MMS Program Access** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Programming | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **Unauthorized MMS Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **Unauthorized Multicast/Broadcast Connection** | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication isn't authorized. | Critical | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Unauthorized Name Query** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Unauthorized OPC UA Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Unauthorized OPC UA Request/Response** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Unauthorized Operation was detected by a User Defined Rule** | Traffic was detected between two devices. This activity is unauthorized, based on a Custom Alert Rule defined by a user. | Major | Custom Alerts | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Unauthorized PLC Configuration Read** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning | Configuration Changes | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0801: Monitor Process State |
-| **Unauthorized PLC Configuration Write** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Persistence <br> - Impact <br><br> **Techniques:** <br> - T0839: Module Firmware <br> - T0831: Manipulation of Control <br> - T0889: Modify Program |
-| **Unauthorized PLC Program Upload** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Programming | **Tactics:** <br> - Impair Process Control <br> - Persistence <br> - Collection <br><br> **Techniques:** <br> - T0839: Module Firmware <br> - T0845: Program Upload |
-| **Unauthorized PLC Programming** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical | Programming | **Tactics:** <br> - Impair Process Control <br> - Persistence <br> - Lateral Movement <br><br> **Techniques:** <br> - T0839: Module Firmware <br> - T0889: Modify Program <br> - T0843: Program Download |
-| **Unauthorized Profinet Frame Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Unauthorized SAIA S-Bus Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **Unauthorized Siemens S7 Execution of Control Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0809: Data Destruction |
-| **Unauthorized Siemens S7 Execution of User Defined Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0836: Modify Parameter <br> - T0863: User Execution |
-| **Unauthorized Siemens S7 Plus Block Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br> - Execution <br><br> **Techniques:** <br> - T0803: Block Command Message <br> - T0889: Modify Program <br> - T0821: Modify Controller Tasking |
-| **Unauthorized Siemens S7 Plus Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0863: User Execution |
-| **Unauthorized SMB Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Persistence <br><br> **Techniques:** <br> - T0886: Remote Services <br> - T0859: Valid Accounts |
-| **Unauthorized SNMP Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Command And Control <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0885: Commonly Used Port |
-| **Unauthorized SSH Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Remote Access | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0886: Remote Services <br> - T0869: Standard Application Layer Protocol |
-| **Unauthorized Windows Process** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Execution <br> - Privilege Escalation <br> - Command And Control <br><br> **Techniques:** <br> - T0841: Hooking <br> - T0885: Commonly Used Port |
-| **Unauthorized Windows Service** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Unauthorized Operation was detected by a User Defined Rule** | New traffic parameters were detected. This parameter combination violates a user-defined rule. | Major | Custom Alerts | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Unpermitted Modbus Schneider Electric Extension** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **Unpermitted Usage of ASDU Types** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **Unpermitted Usage of DNP3 Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Unpermitted Usage of Internal Indication (IIN) [*](#ot-alerts-turned-off-by-default)** | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't been authorized as learned traffic on your network. | Major | Illegal Commands | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Unpermitted Usage of Modbus Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Beckhoff Software Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Medium | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Database Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. <br><br> Threshold: 2 sign-in failures in 5 minutes | Medium | Authentication | **Tactics:** <br> - Lateral Movement <br> - Collection <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0811: Data from Information Repositories |
+| **Emerson ROC Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Medium | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **External address within the network communicated with Internet** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | High | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device |
+| **Field Device Discovered Unexpectedly** | A new source device was detected on the network but hasn't been authorized. | Medium | Discovery | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Firmware Change Detected** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Medium | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Medium | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Foxboro I/A Unauthorized Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **FTP Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Medium | Authentication | **Tactics:** <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0869: Standard Application Layer Protocol |
+| **Function Code Raised Unauthorized Exception [*](#ot-alerts-turned-off-by-default)** | A source device (secondary) returned an exception to a destination device (primary). | Medium | Command Failures | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0835: Manipulate I/O Image |
+| **GOOSE Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Low | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Honeywell Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Medium | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Illegal HTTP Communication [*](#ot-alerts-turned-off-by-default)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0846: Remote System Discovery |
+| **Internet Access Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Medium | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device |
+| **Mitsubishi Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Medium | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Modbus Address Range Violation** | A primary device requested access to a new secondary memory address. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Modbus Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Medium | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **New Activity Detected - CIP Class** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0888: Remote System Information Discovery |
+| **New Activity Detected - CIP Class Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **New Activity Detected - CIP PCCC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **New Activity Detected - CIP Symbol** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - EtherNet/IP I/O Connection** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0846: Remote System Discovery <br> - T0835: Manipulate I/O Image |
+| **New Activity Detected - EtherNet/IP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **New Activity Detected - GSM Message Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **New Activity Detected - LonTalk Command Codes** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Impair Process Control <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0855: Unauthorized Command Message |
+| **New Activity Detected - LonTalk Network Variable** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **New Port Discovery** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Low | Discovery | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
+| **New Activity Detected - Ovation Data Request** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Discovery <br><br> **Techniques:** <br> - T0801: Monitor Process State <br> - T0888: Remote System Information Discovery |
+| **New Activity Detected - Read/Write Command (AMS Index Group)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - Read/Write Command (AMS Index Offset)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - Unauthorized DeltaV Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **New Activity Detected - Unauthorized DeltaV ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **New Activity Detected - Unauthorized RPC Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **New Activity Detected - Using AMS Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter <br> - T0821: Modify Controller Tasking |
+| **New Activity Detected - Using Siemens SICAM Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - Using Suitelink Protocol command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **New Activity Detected - Using Suitelink Protocol sessions** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **New Activity Detected - Using Yokogawa VNetIP Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **New Asset Detected** | A new source device was detected on the network but hasn't been authorized. <br><br>This alert applies to devices discovered in OT subnets. New devices discovered in IT subnets don't trigger an alert.| Medium | Discovery | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **New LLDP Device Configuration** | A new source device was detected on the network but hasn't been authorized. | Medium | Configuration Changes | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Omron FINS Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **S7 Plus PLC Firmware Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Medium | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Sampled Values Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Low | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Suspicion of Illegal Integrity Scan [*](#ot-alerts-turned-off-by-default)** | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Medium | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Toshiba Computer Link Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Low | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized ABB Totalflow File Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized ABB Totalflow Register Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized Access to Siemens S7 Data Block** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Low | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Initial Access <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0811: Data from Information Repositories |
+| **Unauthorized Access to Siemens S7 Plus Object** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking <br> - T0809: Data Destruction |
+| **Unauthorized Access to Wonderware Tag** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Impair Process Control <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0855: Unauthorized Command Message |
+| **Unauthorized BACNet Object Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized BACNet Route** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized Database Login [*](#ot-alerts-turned-off-by-default)** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Medium | Authentication | **Tactics:** <br> - Lateral Movement <br> - Persistence <br> - Collection <br><br> **Techniques:** <br> - T0859: Valid Accounts <br> - T0811: Data from Information Repositories |
+| **Unauthorized Database Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Initial Access <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0811: Data from Information Repositories |
+| **Unauthorized Emerson ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized GE SRTP File Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Lateral Movement <br> - Persistence <br><br> **Techniques:** <br> - T0801: Monitor Process State <br> - T0859: Valid Accounts |
+| **Unauthorized GE SRTP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized GE SRTP System Memory Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br> - Impair Process Control <br><br> **Techniques:** <br> - T0846: Remote System Discovery <br> - T0855: Unauthorized Command Message |
+| **Unauthorized HTTP Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0822: External Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Unauthorized HTTP SOAP Action [*](#ot-alerts-turned-off-by-default)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br> - Execution <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol <br> - T0871: Execution through API |
+| **Unauthorized HTTP User Agent [*](#ot-alerts-turned-off-by-default)** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Medium | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **Unauthorized Internet Connectivity Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | High | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device |
+| **Unauthorized Mitsubishi MELSEC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized MMS Program Access** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Medium | Programming | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized MMS Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
+| **Unauthorized Multicast/Broadcast Connection** | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication isn't authorized. | High | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unauthorized Name Query** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unauthorized OPC UA Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unauthorized OPC UA Request/Response** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unauthorized Operation was detected by a User Defined Rule** | Traffic was detected between two devices. This activity is unauthorized, based on a Custom Alert Rule defined by a user. | Medium | Custom Alerts | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unauthorized PLC Configuration Read** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Low | Configuration Changes | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0801: Monitor Process State |
+| **Unauthorized PLC Configuration Write** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Medium | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Persistence <br> - Impact <br><br> **Techniques:** <br> - T0839: Module Firmware <br> - T0831: Manipulation of Control <br> - T0889: Modify Program |
+| **Unauthorized PLC Program Upload** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Medium | Programming | **Tactics:** <br> - Impair Process Control <br> - Persistence <br> - Collection <br><br> **Techniques:** <br> - T0839: Module Firmware <br> - T0845: Program Upload |
+| **Unauthorized PLC Programming** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | High | Programming | **Tactics:** <br> - Impair Process Control <br> - Persistence <br> - Lateral Movement <br><br> **Techniques:** <br> - T0839: Module Firmware <br> - T0889: Modify Program <br> - T0843: Program Download |
+| **Unauthorized Profinet Frame Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unauthorized SAIA S-Bus Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Unauthorized Siemens S7 Execution of Control Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0809: Data Destruction |
+| **Unauthorized Siemens S7 Execution of User Defined Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0836: Modify Parameter <br> - T0863: User Execution |
+| **Unauthorized Siemens S7 Plus Block Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br> - Execution <br><br> **Techniques:** <br> - T0803: Block Command Message <br> - T0889: Modify Program <br> - T0821: Modify Controller Tasking |
+| **Unauthorized Siemens S7 Plus Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0863: User Execution |
+| **Unauthorized SMB Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Medium | Authentication | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Persistence <br><br> **Techniques:** <br> - T0886: Remote Services <br> - T0859: Valid Accounts |
+| **Unauthorized SNMP Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Command And Control <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0885: Commonly Used Port |
+| **Unauthorized SSH Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Remote Access | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0886: Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Unauthorized Windows Process** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Medium | Abnormal Communication Behavior | **Tactics:** <br> - Execution <br> - Privilege Escalation <br> - Command And Control <br><br> **Techniques:** <br> - T0841: Hooking <br> - T0885: Commonly Used Port |
+| **Unauthorized Windows Service** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Medium | Abnormal Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Unauthorized Operation was detected by a User Defined Rule** | New traffic parameters were detected. This parameter combination violates a user-defined rule. | Medium | | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unpermitted Modbus Schneider Electric Extension** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Unpermitted Usage of ASDU Types** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior |**Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Unpermitted Usage of DNP3 Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Unpermitted Usage of Internal Indication (IIN) [*](#ot-alerts-turned-off-by-default)** | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't been authorized as learned traffic on your network. | Medium | Illegal Commands | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unpermitted Usage of Modbus Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Medium | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
## Anomaly engine alerts
Anomaly engine alerts describe detected anomalies in network activity.
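Many of the anomaly alerts below fire on a rate threshold, phrased as "Threshold: N events in T time" (for example, **Excessive Login Attempts**: 20 sign-in attempts in 1 minute). As an illustration only — this is a minimal sliding-window sketch, not the Defender for IoT anomaly engine, and `SlidingWindowThreshold` is a hypothetical helper name — such a detector can be modeled like this:

```python
from collections import deque


class SlidingWindowThreshold:
    """Illustrative sliding-window counter: flags when at least
    `threshold` events occur within `window_seconds`.
    (Hypothetical sketch, not the product's implementation.)"""

    def __init__(self, threshold: int, window_seconds: float):
        self.threshold = threshold
        self.window = window_seconds
        self.events = deque()  # timestamps of events still inside the window

    def record(self, timestamp: float) -> bool:
        """Record one event; return True if the threshold is breached."""
        self.events.append(timestamp)
        # Evict events that have aged out of the time window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold


# Example: the "20 sign-in attempts in 1 minute" threshold.
detector = SlidingWindowThreshold(threshold=20, window_seconds=60)
```

The same pattern fits the other thresholded rows (50 sessions in 1 minute, 10 restarts in 1 hour, and so on) by changing the two constructor parameters.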
| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
|--|--|--|--|--|
-| **Abnormal Exception Pattern in Slave [*](#ot-alerts-turned-off-by-default)** | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. <br><br> Threshold: 20 exceptions in 1 hour | Minor | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0806: Brute Force I/O |
-| **Abnormal HTTP Header Length [*](#ot-alerts-turned-off-by-default)** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0869: Standard Application Layer Protocol |
-| **Abnormal Number of Parameters in HTTP Header [*](#ot-alerts-turned-off-by-default)** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0869: Standard Application Layer Protocol |
-| **Abnormal Periodic Behavior In Communication Channel** | A change in the frequency of communication between the source and destination devices was detected. | Minor | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Abnormal Termination of Applications [*](#ot-alerts-turned-off-by-default)** | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 20 stop commands in 3 hours | Major | Abnormal Communication Behavior | **Tactics:** <br> - Persistence <br> - Impact <br><br> **Techniques:** <br> - T0889: Modify Program <br> - T0831: Manipulation of Control |
-| **Abnormal Traffic Bandwidth [*](#ot-alerts-turned-off-by-default)** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Abnormal Traffic Bandwidth Between Devices [*](#ot-alerts-turned-off-by-default)** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 2 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **ARP Address Scan Detected [*](#ot-alerts-turned-off-by-default)** | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. <br><br> Threshold: 40 scans in 6 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br> - Collection <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0830: Man in the Middle |
-| **ARP Spoofing [*](#ot-alerts-turned-off-by-default)** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0830: Man in the Middle |
-| **Excessive Login Attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 20 sign-in attempts in 1 minute | Critical | Authentication | **Tactics:** <br> - LateralMovement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O |
-| **Excessive Number of Sessions** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 50 sessions in 1 minute | Critical | Abnormal Communication Behavior | **Tactics:** <br> - Lateral Movement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O |
-| **Excessive Restart Rate of an Outstation [*](#ot-alerts-turned-off-by-default)** | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 10 restarts in 1 hour | Major | Restart/ Stop Commands | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0806: Brute Force I/O |
-| **Excessive SMB login attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 10 sign-in attempts in 10 minutes | Critical | Authentication | **Tactics:** <br> - Persistence <br> - Execution <br> - LateralMovement <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0853: Scripting <br> - T0859: Valid Accounts |
-| **ICMP Flooding [*](#ot-alerts-turned-off-by-default)** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Collection <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0830: Man in the Middle |
-| **Illegal HTTP Header Content [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - LateralMovement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Inactive Communication Channel [*](#ot-alerts-turned-off-by-default)** | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of installed program and verify that it's configured properly. <br><br> Threshold: 1 minute | Warning | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
-| **Long Duration Address Scan Detected [*](#ot-alerts-turned-off-by-default)** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 10 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Password Guessing Attempt Detected** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 100 attempts in 1 minute | Critical | Authentication | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O |
-| **PLC Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 10 scans in 2 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Port Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 25 scans in 2 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Unexpected message length** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. <br><br> Threshold: text length - 32768 | Critical | Abnormal Communication Behavior | **Tactics:** <br> - InitialAccess <br> - LateralMovement <br><br> **Techniques:** <br> - T0869: Exploitation of Remote Services |
-| **Unexpected Traffic for Standard Port [*](#ot-alerts-turned-off-by-default)** | Traffic was detected on a device using a port reserved for another protocol. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Command And Control <br> - Discovery <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol <br> - T0842: Network Sniffing |
+| **Abnormal Exception Pattern in Slave [*](#ot-alerts-turned-off-by-default)** | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. <br><br> Threshold: 20 exceptions in 1 hour | Low | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0806: Brute Force I/O |
+| **Abnormal HTTP Header Length [*](#ot-alerts-turned-off-by-default)** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | High | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Abnormal Number of Parameters in HTTP Header [*](#ot-alerts-turned-off-by-default)** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | High | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Abnormal Periodic Behavior In Communication Channel** | A change in the frequency of communication between the source and destination devices was detected. | Low | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Abnormal Termination of Applications [*](#ot-alerts-turned-off-by-default)** | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 20 stop commands in 3 hours | Medium | Abnormal Communication Behavior | **Tactics:** <br> - Persistence <br> - Impact <br><br> **Techniques:** <br> - T0889: Modify Program <br> - T0831: Manipulation of Control |
+| **Abnormal Traffic Bandwidth [*](#ot-alerts-turned-off-by-default)** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Low | Bandwidth Anomalies | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Abnormal Traffic Bandwidth Between Devices [*](#ot-alerts-turned-off-by-default)** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Low | Bandwidth Anomalies | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 2 minutes | High | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **ARP Address Scan Detected [*](#ot-alerts-turned-off-by-default)** | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as a valid ARP scanning address. <br><br> Threshold: 40 scans in 6 minutes | High | Scan | **Tactics:** <br> - Discovery <br> - Collection <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0830: Man in the Middle |
+| **ARP Spoofing [*](#ot-alerts-turned-off-by-default)** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Low | Abnormal Communication Behavior | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0830: Man in the Middle |
+| **Excessive Login Attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 20 sign-in attempts in 1 minute | High | Authentication | **Tactics:** <br> - Lateral Movement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O |
+| **Excessive Number of Sessions** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 50 sessions in 1 minute | High | Abnormal Communication Behavior | **Tactics:** <br> - Lateral Movement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O |
+| **Excessive Restart Rate of an Outstation [*](#ot-alerts-turned-off-by-default)** | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 10 restarts in 1 hour | Medium | Restart/Stop Commands | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0806: Brute Force I/O |
+| **Excessive SMB login attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 10 sign-in attempts in 10 minutes | High | Authentication | **Tactics:** <br> - Persistence <br> - Execution <br> - Lateral Movement <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0853: Scripting <br> - T0859: Valid Accounts |
+| **ICMP Flooding [*](#ot-alerts-turned-off-by-default)** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Low | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Collection <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0830: Man in the Middle |
+| **Illegal HTTP Header Content [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | High | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Inactive Communication Channel [*](#ot-alerts-turned-off-by-default)** | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of the installed program and verify that it's configured properly. <br><br> Threshold: 1 minute | Low | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
+| **Long Duration Address Scan Detected [*](#ot-alerts-turned-off-by-default)** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same class B subnet in 10 minutes | High | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Password Guessing Attempt Detected** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 100 attempts in 1 minute | High | Authentication | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O |
+| **PLC Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 10 scans in 2 minutes | High | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Port Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 25 scans in 2 minutes | High | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unexpected message length** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. <br><br> Threshold: text length - 32768 | High | Abnormal Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Unexpected Traffic for Standard Port [*](#ot-alerts-turned-off-by-default)** | Traffic was detected on a device using a port reserved for another protocol. | Medium | Abnormal Communication Behavior | **Tactics:** <br> - Command And Control <br> - Discovery <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol <br> - T0842: Network Sniffing |
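Several of the rows above pair a detection with a rate threshold (for example, 40 ARP scans in 6 minutes, or 25 port scans in 2 minutes). As a rough illustration of how such window-based thresholds can be evaluated, the following Python sketch counts distinct targets per source over a sliding time window. The class, parameter names, and sample addresses are hypothetical and not part of Defender for IoT.

```python
from collections import deque

class ScanDetector:
    """Illustrative sliding-window detector (not product code): alert when
    one source touches more than `threshold` distinct targets within
    `window_sec` seconds."""

    def __init__(self, threshold=40, window_sec=360):  # "40 scans in 6 minutes"
        self.threshold = threshold
        self.window_sec = window_sec
        self.events = {}  # source -> deque of (timestamp, target)

    def observe(self, ts, source, target):
        q = self.events.setdefault(source, deque())
        q.append((ts, target))
        while q and ts - q[0][0] > self.window_sec:  # expire events outside the window
            q.popleft()
        return len({t for _, t in q}) > self.threshold  # True -> raise alert

# One source probing 50 distinct addresses in under a minute crosses the threshold.
detector = ScanDetector()
alerted = any(detector.observe(t, "10.0.0.5", f"10.0.0.{t}") for t in range(50))
```

The same pattern generalizes to the other thresholded rows by changing what is counted (sessions, sign-in attempts, restart commands) and the window length.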
## Protocol violation engine alerts
Protocol engine alerts describe detected deviations in the packet structure, or
| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
|--|--|--|--|--|
-| **Excessive Malformed Packets In a Single Session [*](#ot-alerts-turned-off-by-default)** | An abnormal number of malformed packets sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. <br><br> Threshold: 2 malformed packets in 10 minutes | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0806: Brute Force I/O |
-| **Firmware Update** | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
-| **Function Code Not Supported by Outstation** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **Illegal BACNet message** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **Illegal Connection Attempt on Port 0** | A source device attempted to connect to destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **Illegal DNP3 Operation** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Illegal MODBUS Operation (Exception Raised by Master)** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Illegal MODBUS Operation (Function Code Zero) [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Major | Illegal Commands |**Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Illegal Protocol Version [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - LateralMovement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0820: Remote Services <br> - T0836: Modify Parameter |
-| **Incorrect Parameter Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **Initiation of an Obsolete Function Code (Initialize Data)** | The source device initiated an invalid request. | Minor | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **Initiation of an Obsolete Function Code (Save Config)** | The source device initiated an invalid request. | Minor | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **Master Requested an Application Layer Confirmation** | The source device initiated an invalid request. | Warning | Illegal Commands | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
-| **Modbus Exception** | A source device (secondary) returned an exception to a destination device (primary). | Major | Illegal Commands | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
-| **Slave Device Received Illegal ASDU Type** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Slave Device Received Illegal Command Cause of Transmission** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **Slave Device Received Illegal Common Address** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **Slave Device Received Illegal Data Address Parameter [*](#ot-alerts-turned-off-by-default)** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **Slave Device Received Illegal Data Value Parameter [*](#ot-alerts-turned-off-by-default)** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **Slave Device Received Illegal Function Code [*](#ot-alerts-turned-off-by-default)** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **Slave Device Received Illegal Information Object Address** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **Unknown Object Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **Usage of a Reserved Function Code** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Usage of Improper Formatting by Outstation [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Warning | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **Usage of Reserved Status Flags (IIN)** | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Warning | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Excessive Malformed Packets In a Single Session [*](#ot-alerts-turned-off-by-default)** | An abnormal number of malformed packets sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. <br><br> Threshold: 2 malformed packets in 10 minutes | Medium | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0806: Brute Force I/O |
+| **Firmware Update** | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Low | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Function Code Not Supported by Outstation** | The destination device received an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Illegal BACNet message** | The source device initiated an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Illegal Connection Attempt on Port 0** | A source device attempted to connect to destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Low | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Illegal DNP3 Operation** | The source device initiated an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Illegal MODBUS Operation (Exception Raised by Master)** | The source device initiated an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Illegal MODBUS Operation (Function Code Zero) [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Medium | Illegal Commands |**Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Illegal Protocol Version [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0820: Remote Services <br> - T0836: Modify Parameter |
+| **Incorrect Parameter Sent to Outstation** | The destination device received an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Initiation of an Obsolete Function Code (Initialize Data)** | The source device initiated an invalid request. | Low | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Initiation of an Obsolete Function Code (Save Config)** | The source device initiated an invalid request. | Low | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Master Requested an Application Layer Confirmation** | The source device initiated an invalid request. | Low | Illegal Commands | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **Modbus Exception** | A source device (secondary) returned an exception to a destination device (primary). | Medium | Illegal Commands | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
+| **Slave Device Received Illegal ASDU Type** | The destination device received an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Command Cause of Transmission** | The destination device received an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Common Address** | The destination device received an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Data Address Parameter [*](#ot-alerts-turned-off-by-default)** | The destination device received an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Data Value Parameter [*](#ot-alerts-turned-off-by-default)** | The destination device received an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Function Code [*](#ot-alerts-turned-off-by-default)** | The destination device received an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Information Object Address** | The destination device received an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Unknown Object Sent to Outstation** | The destination device received an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Usage of a Reserved Function Code** | The source device initiated an invalid request. | Medium | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Usage of Improper Formatting by Outstation [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Low | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Usage of Reserved Status Flags (IIN)** | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Low | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
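The "Illegal MODBUS Operation (Function Code Zero)" row above flags requests whose PDU carries function code 0, a value the Modbus specification doesn't define. The following Python sketch shows what such a check can look like: the frame layout is standard Modbus/TCP (MBAP header followed by the PDU), while the function name and return strings are hypothetical, not Defender for IoT internals.

```python
import struct

def check_modbus_request(frame: bytes):
    """Hypothetical validator: return a violation string, or None if the
    request looks well-formed."""
    if len(frame) < 8:
        return "malformed: frame shorter than MBAP header + function code"
    # MBAP header: transaction ID (2B), protocol ID (2B), length (2B), unit ID (1B)
    tx_id, proto_id, length, unit_id = struct.unpack(">HHHB", frame[:7])
    if proto_id != 0:
        return "malformed: protocol identifier must be 0 for Modbus"
    function_code = frame[7]  # first byte of the PDU
    if function_code == 0:
        return "alert: illegal function code zero"
    return None

# A minimal Read Holding Registers request (function code 3) passes;
# the same frame with function code 0 is flagged.
good = bytes.fromhex("000100000006010300000001")
bad  = bytes.fromhex("000100000006010000000001")
```

Real protocol-violation engines validate far more than this one field (lengths, exception codes, parameter ranges), but the shape of the check, parse the header, then test each PDU field against the specification, is the same.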
## Malware engine alerts
Malware engine alerts describe detected malicious network activity.
| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
|--|--|--|--|--|
-| **Connection Attempt to Known Malicious IP** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br>Triggered by both OT and Enterprise IoT network sensors. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy |
-| **Invalid SMB Message (DoublePulsar Backdoor Implant)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - LateralMovement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Malicious Domain Name Request** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br>Triggered by both OT and Enterprise IoT network sensors. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy |
-| **Malware Test File Detected - EICAR AV Success** | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly. Demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Suspicion of Conficker Malware** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Impact <br><br> **Techniques:** <br> - T0826: Loss of Availability <br> - T0828: Loss of Productivity and Revenue <br> - T0847: Replication Through Removable Media |
-| **Suspicion of Denial Of Service Attack** | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. <br><br> Threshold: 3000 attempts in 1 minute | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
-| **Suspicion of Malicious Activity** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
-| **Suspicion of Malicious Activity (BlackEnergy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
-| **Suspicion of Malicious Activity (DarkComet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
-| **Suspicion of Malicious Activity (Duqu)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
-| **Suspicion of Malicious Activity (Flame)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Collection <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information <br> - T0811: Data from Information Repositories |
-| **Suspicion of Malicious Activity (Havex)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Collection <br> - Discovery <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0846: Remote System Discovery <br> - T0814: Denial of Service |
-| **Suspicion of Malicious Activity (Karagany)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
-| **Suspicion of Malicious Activity (LightsOut)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Evasion <br><br> **Techniques:** <br> - T0849: Masquerading |
-| **Suspicion of Malicious Activity (Name Queries)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0884: Connection Proxy |
-| **Suspicion of Malicious Activity (Poison Ivy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Suspicion of Malicious Activity (Regin)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0882: Theft of Operational Information |
-| **Suspicion of Malicious Activity (Stuxnet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0818: Engineering Workstation Compromise <br> - T0866: Exploitation of Remote Services <br> - T0831: Manipulation of Control |
-| **Suspicion of Malicious Activity (WannaCry) [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0867: Lateral Tool Transfer |
-| **Suspicion of NotPetya Malware - Illegal SMB Parameters Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Suspicion of NotPetya Malware - Illegal SMB Transaction Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
-| **Suspicion of Remote Code Execution with PsExec** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br> - Initial Access <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Suspicion of Remote Windows Service Management [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0822: External Remote Services |
-| **Suspicious Executable File Detected on Endpoint** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Evasion <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0851: Rootkit |
-| **Suspicious Traffic Detected [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Backup Activity with Antivirus Signatures** | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | Backup | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
+| **Connection Attempt to Known Malicious IP** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br>Triggered by both OT and Enterprise IoT network sensors. | High | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy |
+| **Invalid SMB Message (DoublePulsar Backdoor Implant)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Malicious Domain Name Request** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br>Triggered by both OT and Enterprise IoT network sensors. | High | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy |
+| **Malware Test File Detected - EICAR AV Success** | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware; it's used to confirm that antivirus software is installed correctly, demonstrate what happens when a virus is found, and verify internal procedures and responses to a detection. Antivirus software should detect EICAR as if it were a real virus. | High | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Suspicion of Conficker Malware** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Medium | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Impact <br><br> **Techniques:** <br> - T0826: Loss of Availability <br> - T0828: Loss of Productivity and Revenue <br> - T0847: Replication Through Removable Media |
+| **Suspicion of Denial Of Service Attack** | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. <br><br> Threshold: 3000 attempts in 1 minute | High | Suspicion of Malicious Activity | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
+| **Suspicion of Malicious Activity** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | High | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
+| **Suspicion of Malicious Activity (BlackEnergy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malware | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **Suspicion of Malicious Activity (DarkComet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
+| **Suspicion of Malicious Activity (Duqu)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
+| **Suspicion of Malicious Activity (Flame)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malware | **Tactics:** <br> - Collection <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information <br> - T0811: Data from Information Repositories |
+| **Suspicion of Malicious Activity (Havex)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malware | **Tactics:** <br> - Collection <br> - Discovery <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0846: Remote System Discovery <br> - T0814: Denial of Service |
+| **Suspicion of Malicious Activity (Karagany)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
+| **Suspicion of Malicious Activity (LightsOut)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malware | **Tactics:** <br> - Evasion <br><br> **Techniques:** <br> - T0849: Masquerading |
+| **Suspicion of Malicious Activity (Name Queries)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | High | Suspicion of Malicious Activity | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0884: Connection Proxy |
+| **Suspicion of Malicious Activity (Poison Ivy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Suspicion of Malicious Activity (Regin)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0882: Theft of Operational Information |
+| **Suspicion of Malicious Activity (Stuxnet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0818: Engineering Workstation Compromise <br> - T0866: Exploitation of Remote Services <br> - T0831: Manipulation of Control |
+| **Suspicion of Malicious Activity (WannaCry) [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Medium | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0867: Lateral Tool Transfer |
+| **Suspicion of NotPetya Malware - Illegal SMB Parameters Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Suspicion of NotPetya Malware - Illegal SMB Transaction Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malware | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
+| **Suspicion of Remote Code Execution with PsExec** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br> - Initial Access <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Suspicion of Remote Windows Service Management [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0822: External Remote Services |
+| **Suspicious Executable File Detected on Endpoint** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | High | Suspicion of Malicious Activity | **Tactics:** <br> - Evasion <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0851: Rootkit |
+| **Suspicious Traffic Detected [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | High | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Backup Activity with Antivirus Signatures** | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Low | Backup | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
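The EICAR detection described in the table above can be exercised safely: the 68-byte EICAR test string is published by EICAR and is harmless by design. As an illustrative sketch only (the file path and function name are hypothetical, not part of Defender for IoT), a test file can be produced like this:

```python
# The standard 68-byte EICAR antivirus test string (public, not malware).
# AV products detect it as if it were a real virus, which is its purpose.
EICAR = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
    r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

def write_eicar(path: str) -> int:
    """Write the test string to `path` and return the byte count (68)."""
    data = EICAR.encode("ascii")
    with open(path, "wb") as f:
        f.write(data)
    return len(data)
```

Transferring such a file between two monitored devices is what triggers the **Malware Test File Detected - EICAR AV Success** alert; expect local antivirus to quarantine the file immediately.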
## Operational engine alerts
Operational engine alerts describe detected operational incidents, or malfunctions.
| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
|--|--|--|--|--|
-| **An S7 Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
-| **BACNet Operation Failed** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **Bad MMS Device State** | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, partially operational, or not operational at all. | Major | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
-| **Change of Device Configuration [*](#ot-alerts-turned-off-by-default)** | A configuration change was detected on a source device. | Minor | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Continuous Event Buffer Overflow at Outstation [*](#ot-alerts-turned-off-by-default)** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. <br><br> Threshold: 3 occurrences in 10 minutes | Major | Buffer Overflow | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br> - Persistence <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0806: Brute Force I/O <br> - T0839: Module Firmware |
-| **Controller Reset** | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
-| **Controller Stop** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
-| **Device Failed to Receive a Dynamic IP Address** | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident | Major | Command Failures | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Device is Suspected to be Disconnected (Unresponsive)** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: 8 attempts in 5 minutes | Major | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
-| **EtherNet/IP CIP Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **EtherNet/IP Encapsulation Protocol Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0801: Monitor Process State |
-| **Event Buffer Overflow in Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major | Buffer Overflow | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br> - Persistence <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0839: Module Firmware |
-| **Expected Backup Operation Did Not Occur** | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. <br><br> Threshold: 100 seconds | Major | Backup | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0809: Data Destruction |
-| **GE SRTP Command Failure** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **GE SRTP Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
-| **GOOSE Control Block Requires Further Configuration** | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0803: Block Command Message <br> - T0821: Modify Controller Tasking |
-| **GOOSE Dataset Configuration was Changed [*](#ot-alerts-turned-off-by-default)** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Honeywell Controller Unexpected Status** | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning | Operational Issues | **Tactics:** <br> - Evasion <br> - Execution <br><br> **Techniques:** <br> - T0858: Change Operating Mode |
-| **HTTP Client Error [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Warning | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
-| **Illegal IP Address** | System detected traffic between a source device and an IP address that is an invalid address. This may indicate wrong configuration or an attempt to generate illegal traffic. | Minor | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Impair Process Control <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0836: Modify Parameter |
-| **Master-Slave Authentication Error** | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Minor | Authentication | **Tactics:** <br> - Lateral Movement <br> - Persistence <br><br> **Techniques:** <br> - T0859: Valid Accounts |
-| **MMS Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **No Traffic Detected on Sensor Interface** | A sensor stopped detecting network traffic on a network interface. | Critical | Sensor Traffic | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
-| **OPC UA Server Raised an Event That Requires User's Attention** | An OPC UA server sent an event notification to a client. This type of event requires user attention | Major | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0838: Modify Alarm Settings |
-| **OPC UA Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **Outstation Restarted** | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0816: Device Restart/Shutdown |
-| **Outstation Restarts Frequently** | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. <br><br> Threshold: 2 restarts in 10 minutes | Minor | Restart/ Stop Commands | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0816: Device Restart/Shutdown |
-| **Outstation's Configuration Changed** | A configuration change was detected on a source device. | Major | Configuration Changes | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
-| **Outstation's Corrupted Configuration Detected** | This DNP3 source device (outstation) reported a corrupted configuration. | Major | Configuration Changes | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0809: Data Destruction |
-| **Profinet DCP Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **Profinet Device Factory Reset** | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
-| **RPC Operation Failed [*](#ot-alerts-turned-off-by-default)** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **Sampled Values Message Dataset Configuration was Changed [*](#ot-alerts-turned-off-by-default)** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Slave Device Unrecoverable Failure [*](#ot-alerts-turned-off-by-default)** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Command Failures | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
-| **Suspicion of Hardware Problems in Outstation** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0881: Service Stop |
-| **Suspicion of Unresponsive MODBUS Device** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: Minimum of 1 valid response for a minimum of 3 requests within 5 minutes | Minor | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
-| **Traffic Detected on Sensor Interface** | A sensor resumed detecting network traffic on a network interface. | Warning | Sensor Traffic | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **PLC Operating Mode Changed** | The operating mode on this PLC changed. The new mode may indicate that the PLC is not secure. Leaving the PLC in an unsecure operating mode may allow adversaries to perform malicious activities on it, such as a program download. If the PLC is compromised, devices and processes that interact with it may be impacted. This may affect overall system security and safety. | Warning | Configuration changes | **Tactics:** <br> - Execution <br> - Evasion <br><br> **Techniques:** <br> - T0858: Change Operating Mode |
+| **An S7 Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Low | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **BACNet Operation Failed** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Medium | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Bad MMS Device State** | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, may be only partially operational, or may not be operational at all. | Medium | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
+| **Change of Device Configuration [*](#ot-alerts-turned-off-by-default)** | A configuration change was detected on a source device. | Low | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Continuous Event Buffer Overflow at Outstation [*](#ot-alerts-turned-off-by-default)** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. <br><br> Threshold: 3 occurrences in 10 minutes | Medium | Buffer Overflow | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br> - Persistence <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0806: Brute Force I/O <br> - T0839: Module Firmware |
+| **Controller Reset** | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Low | Restart/ Stop Commands | **Tactics:** <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **Controller Stop** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Low | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **Device Failed to Receive a Dynamic IP Address** | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident. | Medium | Command Failures | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Device is Suspected to be Disconnected (Unresponsive)** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: 8 attempts in 5 minutes | Medium | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
+| **EtherNet/IP CIP Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Medium | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **EtherNet/IP Encapsulation Protocol Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Medium | Command Failures | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0801: Monitor Process State |
+| **Event Buffer Overflow in Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Medium | Buffer Overflow | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br> - Persistence <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0839: Module Firmware |
+| **Expected Backup Operation Did Not Occur** | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. <br><br> Threshold: 100 seconds | Medium | Backup | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0809: Data Destruction |
+| **GE SRTP Command Failure** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Medium | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **GE SRTP Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Low | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **GOOSE Control Block Requires Further Configuration** | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Medium | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0803: Block Command Message <br> - T0821: Modify Controller Tasking |
+| **GOOSE Dataset Configuration was Changed [*](#ot-alerts-turned-off-by-default)** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Low | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Honeywell Controller Unexpected Status** | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Low | Operational Issues | **Tactics:** <br> - Evasion <br> - Execution <br><br> **Techniques:** <br> - T0858: Change Operating Mode |
+| **HTTP Client Error [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Low | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **Illegal IP Address** | System detected traffic between a source device and an IP address that is an invalid address. This may indicate wrong configuration or an attempt to generate illegal traffic. | Low | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Impair Process Control <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0836: Modify Parameter |
+| **Master-Slave Authentication Error** | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Low | Authentication | **Tactics:** <br> - Lateral Movement <br> - Persistence <br><br> **Techniques:** <br> - T0859: Valid Accounts |
+| **MMS Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Medium | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **No Traffic Detected on Sensor Interface** | A sensor stopped detecting network traffic on a network interface. | High | Sensor Traffic | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
+| **OPC UA Server Raised an Event That Requires User's Attention** | An OPC UA server sent an event notification to a client. This type of event requires user attention. | Medium | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0838: Modify Alarm Settings |
+| **OPC UA Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Medium | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Outstation Restarted** | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Low | Restart/ Stop Commands | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0816: Device Restart/Shutdown |
+| **Outstation Restarts Frequently** | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. <br><br> Threshold: 2 restarts in 10 minutes | Low | Restart/ Stop Commands | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0816: Device Restart/Shutdown |
+| **Outstation's Configuration Changed** | A configuration change was detected on a source device. | Medium | Configuration Changes | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
+| **Outstation's Corrupted Configuration Detected** | This DNP3 source device (outstation) reported a corrupted configuration. | Medium | Configuration Changes | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0809: Data Destruction |
+| **Profinet DCP Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Medium | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Profinet Device Factory Reset** | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Low | Restart/ Stop Commands | **Tactics:** <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
+| **RPC Operation Failed [*](#ot-alerts-turned-off-by-default)** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Medium | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Sampled Values Message Dataset Configuration was Changed [*](#ot-alerts-turned-off-by-default)** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Low | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Slave Device Unrecoverable Failure [*](#ot-alerts-turned-off-by-default)** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Medium | Command Failures | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
+| **Suspicion of Hardware Problems in Outstation** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Medium | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0881: Service Stop |
+| **Suspicion of Unresponsive MODBUS Device** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: Minimum of 1 valid response for a minimum of 3 requests within 5 minutes | Low | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
+| **Traffic Detected on Sensor Interface** | A sensor resumed detecting network traffic on a network interface. | Low | Sensor Traffic | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **PLC Operating Mode Changed** | The operating mode on this PLC changed. The new mode may indicate that the PLC is not secure. Leaving the PLC in an unsecure operating mode may allow adversaries to perform malicious activities on it, such as a program download. If the PLC is compromised, devices and processes that interact with it may be impacted. This may affect overall system security and safety. | Low | Configuration changes | **Tactics:** <br> - Execution <br> - Evasion <br><br> **Techniques:** <br> - T0858: Change Operating Mode |
## Next steps
defender-for-iot Virtual Management Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-management-hyper-v.md
This procedure describes how to create a virtual machine for your on-premises ma
1. Enter a name for the virtual machine and select **Next**.
-1. Select **Generation** and set it to **Generation 1** or **Generation 2**, and then select **Next**.
+1. Select **Generation** and set it to **Generation 1**, and then select **Next**.
1. Specify the [memory allocation for your organization's needs](../ot-appliance-sizing.md), and then select **Next**.
defender-for-iot Install Software On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/install-software-on-premises-management-console.md
Title: Install Microsoft Defender for IoT on-premises management console software - Microsoft Defender for IoT description: Learn how to install Microsoft Defender for IoT on-premises management console software. Use this article if you're reinstalling software on a pre-configured appliance, or if you've chosen to install software on your own appliances. Previously updated : 12/13/2022 Last updated : 04/18/2023
The installation process takes about 20 minutes. After the installation, the sys
- **Virtual mount** – use iLO for HPE appliances, or iDRAC for Dell appliances to boot the ISO file.
-1. Select your preferred language for the installation process. For example:
+1. The initial console window lists installation languages. Select the language you want to use. For example:
:::image type="content" source="../media/tutorial-install-components/on-prem-language-select.png" alt-text="Screenshot of selecting your preferred language for the installation process.":::
-1. From the options displayed, select the management release you want to install based on the hardware profile you're using.
+1. The console lists a series of installation options. Select the option that best matches your requirements.
-1. Define the following network properties as prompted:
+ The installation wizard starts running. This step takes several minutes to complete, and includes system reboots.
- - For the **Configure management network interface** prompt: For Dell appliances, enter `eth0` and `eth1`. For HP appliances, enter `enu1` and `enu2`, or `possible value`.
+ When complete, a screen similar to the following appears, prompting you to enter your management interface:
- - For the **Configure management network IP address**, **Configure subnet mask**, **Configure DNS**, and **Configure default gateway IP address** prompts, enter the relevant values for each item.
+ :::image type="content" source="../media/tutorial-install-components/on-prem-first-steps-install.png" alt-text="Screenshot of the management interface prompt.":::
-1. **(Optional)** To install a secondary Network Interface Card (NIC), define a hardware profile, and network properties as prompted.
+1. At each prompt, enter the following values:
- For the **Configure sensor monitoring interface**, enter `eth1` or `possible value`. For other prompts, enter the relevant values for each item.
+ |Prompt |Value |
+ |||
+ |`configure management network interface` | Enter your management interface. For the following appliances, enter specific values:<br><br> - **Dell**: Enter `eth0, eth1`<br> - **HP**: Enter `enu1, enu2` <br><br> Other appliances may have different options. |
+ |`configure management network IP address` | Enter the on-premises management console's IP address. |
+ |`configure subnet mask` | Enter the on-premises management console's subnet mask address. |
+ |`configure DNS` | Enter the on-premises management console's DNS address. |
+ |`configure default gateway IP address` | Enter the IP address for the on-premises management console's default gateway. |
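Before entering these values, it can help to confirm that the management IP address, subnet mask, and default gateway are mutually consistent. The following is a minimal sketch using Python's standard `ipaddress` module; all addresses shown are hypothetical placeholders, not values from this article:

```python
import ipaddress

# Hypothetical values for the installation prompts; replace with your own.
management_ip = "10.1.0.10"      # configure management network IP address
subnet_mask = "255.255.255.0"    # configure subnet mask
default_gateway = "10.1.0.1"     # configure default gateway IP address

# Derive the subnet from the management IP and mask (strict=False keeps host bits).
network = ipaddress.ip_network(f"{management_ip}/{subnet_mask}", strict=False)

# The default gateway must sit on the same subnet as the management interface.
assert ipaddress.ip_address(default_gateway) in network
print(network)  # 10.1.0.0/24
```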
- For example:
+1. (Optional) Enhance the security of your on-premises management console by adding a secondary NIC dedicated to attached sensors within an IP address range. When you use a secondary NIC, the first is dedicated to end users, and the secondary supports the configuration of a gateway for routed networks.
- :::image type="content" source="../media/tutorial-install-components/on-prem-secondary-nic-install.png" alt-text="Screenshot that shows the Secondary NIC install questions.":::
+ If you're installing a secondary Network Interface Card (NIC), enter the following details for the sensor's monitoring interface as prompted:
- If you choose not to install the secondary NIC now, you can [do so at a later time](../how-to-manage-the-on-premises-management-console.md#add-a-secondary-nic-after-installation).
+ | Prompt |Value |
+ |||
+ |`configure sensor monitoring interface` | Enter `eth1` or another value as needed for your system. |
+ |`configure an IP address for the sensor monitoring interface` | Enter the secondary NIC's IP address. |
+ |`configure a subnet mask for the sensor monitoring interface` | Enter the secondary NIC's subnet mask address. |
-1. Accept the settings and continue by entering `Y`.
+ If you choose not to install the secondary NIC now, you can [do so at a later time](#add-a-secondary-nic-after-installation-optional).
-1. <a name="users"></a>After about 10 minutes, the two sets of credentials appear. For example:
+1. When prompted, enter `Y` to accept the settings. The installation process runs for about 10 minutes.
- :::image type="content" source="../media/tutorial-install-components/credentials-screen.png" alt-text="Screenshot of the credentials that appear that must be copied as they won't be presented again.":::
+1. <a name="users"></a>When the installation process is complete, an appliance ID is displayed with a set of credentials for the *cyberx* privileged user. Save the credentials carefully as they won't be displayed again.
- Save the usernames and passwords, you'll need these credentials to access the platform the first time you use it.
+ When you're ready, press **ENTER** to continue. An appliance ID is displayed with a set of credentials for the *support* privileged user. Save these credentials carefully as well, as they won't be displayed again either.
For more information, see [Default privileged on-premises users](../roles-on-premises.md#default-privileged-on-premises-users).
-1. Select **Enter** to continue.
+1. When you're ready, press **ENTER** to continue.
+
+ The installation is complete and you're prompted to sign in. Sign in using one of the privileged user credentials you saved from the previous step. At this point, you can also browse to the on-premises management console's IP address in a browser and sign in there.
## Configure network adapters for a VM deployment
After deploying an on-premises management console sensor on a [virtual appliance
|Adapters |Description | ||| |**Single network adapter** | To use a single network adapter, add **Network adapter 1** to connect to the on-premises management console UI and any connected OT sensors. |
- |**Secondary NIC** | To use a secondary NIC in addition to your main network adapter, add: <br> <br> - **Network adapter 1** to connect to the on-premises management console UI <br> - **Network adapter 2**, to connect to connected OT sensors |
+ |<a name="add-a-secondary-nic-after-installation-optional"></a>**Secondary NIC** | To use a secondary NIC in addition to your main network adapter, add: <br> <br> - **Network adapter 1** to connect to the on-premises management console UI <br> - **Network adapter 2**, to connect to connected OT sensors |
For more information, see:
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
Title: Onboard and activate a virtual OT sensor - Microsoft Defender for IoT. description: This tutorial describes how to set up a virtual OT network sensor to monitor your OT network traffic. Previously updated : 07/11/2022 Last updated : 04/18/2023 # Tutorial: Onboard and activate a virtual OT sensor
-This tutorial describes how to set up your network for OT system security monitoring, using a virtual, cloud-connected sensor, on a virtual machine (VM), using a trial subscription of Microsoft Defender for IoT.
+This tutorial describes the basics of setting up a Microsoft Defender for IoT OT sensor, using a trial subscription of Microsoft Defender for IoT and a virtual machine.
+
+For a full, end-to-end deployment, make sure to follow steps to plan and prepare your system, and also fully calibrate and fine-tune your settings. For more information, see [Deploy Defender for IoT for OT monitoring](ot-deploy/ot-deploy-path.md).
> [!NOTE]
-> If you're looking to set up security monitoring for enterprise IoT systems, see [Enable Enterprise IoT security in Defender for Endpoint](eiot-defender-for-endpoint.md) and [Enhance IoT security monitoring with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md).
+> If you're looking to set up security monitoring for enterprise IoT systems, see [Enable Enterprise IoT security in Defender for Endpoint](eiot-defender-for-endpoint.md).
In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Download software for a virtual sensor > * Create a VM for the sensor > * Install the virtual sensor software
-> * Configure a SPAN port
+> * Configure a virtual SPAN port
> * Verify your cloud connection > * Onboard and activate the virtual sensor
Before you can start using your Defender for IoT sensor, you'll need to onboard
|**Site** | Define the site where you want to associate your sensor, or select **Create site** to create a new site. Define a display name for your site and optional tags to help identify the site later. | |**Zone** | Define the zone where you want to deploy your sensor, or select **Create zone** to create a new one. |
+ For more information, see [Plan OT sites and zones](best-practices/plan-corporate-monitoring.md#plan-ot-sites-and-zones).
+ 1. Select **Register** to add your sensor to Defender for IoT. A success message is displayed and your activation file is automatically downloaded. The activation file is unique for your sensor and contains instructions about your sensor's management mode. [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
This procedure describes how to use the sensor activation file downloaded from D
Your sensor is activated and onboarded to Defender for IoT. In the **Sites and sensors** page, you can see that the **Sensor status** column shows a green check mark, and lists the status as **OK**. -- ## Next steps
-After your OT sensor is connected, continue with any of the following to start analyzing your data:
--- [View assets from the Azure portal](how-to-manage-device-inventory-for-organizations.md)--- [Manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)--- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)--- [Detect threats with Microsoft Sentinel](../../sentinel/iot-solution.md?toc=/azure/defender-for-iot/organizations/toc.json&bc=/azure/defender-for-iot/breadcrumb/toc.json)
-For more information, see:
--- [Defender for IoT installation](how-to-install-software.md)-- [Microsoft Defender for IoT system architecture](architecture.md)
+> [!div class="step-by-step"]
+> [Full deployment path for OT monitoring](ot-deploy/ot-deploy-path.md)
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-scenario-status.md
The following table describes the current status of Database Migration Service s
| Target | Source | Support | Status | | - | - |:-:|:-:|
-| **Azure SQL Database** | SQL Server <sup>1</sup> | ✔ | Preview |
-| | Amazon RDS SQL Server | ✔ | Preview |
-| | Oracle | X | |
-| **Azure SQL Database Managed Instance** | SQL Server <sup>1</sup> | ✔ | GA |
-| | Amazon RDS SQL Server | X | |
-| | Oracle | X | |
-| **Azure SQL VM** | SQL Server <sup>1</sup> | ✔ | GA |
-| | Amazon RDS SQL Server | X | |
-| | Oracle | X | |
-| **Azure Cosmos DB** | MongoDB | ✔ | GA |
-| **Azure Database for MySQL - Single Server** | MySQL | ✔ | GA |
-| | Amazon RDS MySQL | ✔ | GA |
-| | Azure Database for MySQL <sup>2</sup> | ✔ | GA |
-| **Azure Database for MySQL - Flexible Server** | MySQL | ✔ | GA |
-| | Amazon RDS MySQL | ✔ | GA |
-| | Azure Database for MySQL <sup>2</sup> | ✔ | GA |
-| **Azure Database for PostgreSQL - Single Server** | PostgreSQL | X |
-| | Amazon RDS PostgreSQL | X | |
-| **Azure Database for PostgreSQL - Flexible Server** | PostgreSQL | X |
-| | Amazon RDS PostgreSQL | X | |
-| **Azure Database for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | X |
-| | Amazon RDS PostgreSQL | X | |
+| **Azure SQL Database** | SQL Server <sup>1</sup> | Yes | GA |
+| | Amazon RDS SQL Server | Yes | GA |
+| | Oracle | No | |
+| **Azure SQL Database Managed Instance** | SQL Server <sup>1</sup> | Yes | GA |
+| | Amazon RDS SQL Server | Yes | GA |
+| | Oracle | No | |
+| **Azure SQL VM** | SQL Server <sup>1</sup> | Yes | GA |
+| | Amazon RDS SQL Server | Yes | GA |
+| | Oracle | No | |
+| **Azure Cosmos DB** | MongoDB | Yes | GA |
+| **Azure Database for MySQL - Single Server** | MySQL | Yes | GA |
+| | Amazon RDS MySQL | Yes | GA |
+| | Azure Database for MySQL <sup>2</sup> | Yes | GA |
+| **Azure Database for MySQL - Flexible Server** | MySQL | Yes | GA |
+| | Amazon RDS MySQL | Yes | GA |
+| | Azure Database for MySQL <sup>2</sup> | Yes | GA |
+| **Azure Database for PostgreSQL - Single Server** | PostgreSQL | No |
+| | Amazon RDS PostgreSQL | No | |
+| **Azure Database for PostgreSQL - Flexible Server** | PostgreSQL | No |
+| | Amazon RDS PostgreSQL | No | |
+| **Azure Database for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | No |
+| | Amazon RDS PostgreSQL | No | |
<sup>1</sup> Offline migrations through the Azure SQL Migration extension for Azure Data Studio are supported for Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, and Azure SQL Database. For more information, see [Migrate databases by using the Azure SQL Migration extension for Azure Data Studio](migration-using-azure-data-studio.md).
The following table describes the current status of Database Migration Service s
| Target | Source | Support | Status | | - | - |:-:|:-:|
-| **Azure SQL Database** | SQL Server <sup>1</sup>| X | |
-| | Amazon RDS SQL | X | |
-| | Oracle | X | |
-| **Azure SQL Database MI** | SQL Server <sup>1</sup>| ✔ | GA |
-| | Amazon RDS SQL | X | |
-| | Oracle | X | |
-| **Azure SQL VM** | SQL Server <sup>1</sup>| ✔ | GA |
-| | Amazon RDS SQL | X | |
-| | Oracle | X | |
-| **Azure Cosmos DB** | MongoDB | ✔ | GA |
-| **Azure Database for MySQL - Flexible Server** | Azure Database for MySQL - Single Server | ✔ | Preview |
-| | MySQL | ✔ | Preview |
-| | Amazon RDS MySQL | ✔ | Preview |
-| **Azure Database for PostgreSQL - Single Server** | PostgreSQL | ✔ | GA |
-| | Azure Database for PostgreSQL - Single Server <sup>2</sup> | ✔ | GA |
-| | Amazon RDS PostgreSQL | ✔ | GA |
-| **Azure Database for PostgreSQL - Flexible Server** | PostgreSQL | ✔ | GA |
-| | Azure Database for PostgreSQL - Single Server <sup>2</sup> | ✔ | GA |
-| | Amazon RDS PostgreSQL | ✔ | GA |
-| **Azure Database for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | ✔ | GA |
-| | Amazon RDS PostgreSQL | ✔ | GA |
+| **Azure SQL Database** | SQL Server <sup>1</sup>| No | |
+| | Amazon RDS SQL | No | |
+| | Oracle | No | |
+| **Azure SQL Database MI** | SQL Server <sup>1</sup>| Yes | GA |
+| | Amazon RDS SQL | Yes | GA |
+| | Oracle | No | |
+| **Azure SQL VM** | SQL Server <sup>1</sup>| Yes | GA |
+| | Amazon RDS SQL | Yes | GA|
+| | Oracle | No | |
+| **Azure Cosmos DB** | MongoDB | Yes | GA |
+| **Azure Database for MySQL - Flexible Server** | Azure Database for MySQL - Single Server | Yes | Preview |
+| | MySQL | Yes | Preview |
+| | Amazon RDS MySQL | Yes | Preview |
+| **Azure Database for PostgreSQL - Single Server** | PostgreSQL | Yes | GA |
+| | Azure Database for PostgreSQL - Single Server <sup>2</sup> | Yes | GA |
+| | Amazon RDS PostgreSQL | Yes | GA |
+| **Azure Database for PostgreSQL - Flexible Server** | PostgreSQL | Yes | GA |
+| | Azure Database for PostgreSQL - Single Server <sup>2</sup> | Yes | GA |
+| | Amazon RDS PostgreSQL | Yes | GA |
+| **Azure Database for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | Yes | GA |
+| | Amazon RDS PostgreSQL | Yes | GA |
<sup>1</sup> Online migrations (minimal downtime) through the Azure SQL Migration extension for Azure Data Studio are supported for Azure SQL Managed Instance and SQL Server on Azure Virtual Machines targets. For more information, see [Migrate databases by using the Azure SQL Migration extension for Azure Data Studio](migration-using-azure-data-studio.md).
event-hubs Authenticate Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authenticate-application.md
Azure provides the following Azure built-in roles for authorizing access to Even
- [Azure Event Hubs Data Sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender): Use this role to give access to Event Hubs resources. - [Azure Event Hubs Data Receiver](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-receiver): Use this role to give receiving access to Event Hubs resources.
-For Schema Registry built-in roles, see [Schema Registry roles](schema-registry-overview.md#azure-role-based-access-control).
+For Schema Registry built-in roles, see [Schema Registry roles](schema-registry-concepts.md#azure-role-based-access-control).
> [!IMPORTANT] > Our preview release supported adding Event Hubs data access privileges to Owner or Contributor role. However, data access privileges for Owner and Contributor role are no longer honored. If you are using the Owner or Contributor role, switch to using the Azure Event Hubs Data Owner role.
event-hubs Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authorize-access-azure-active-directory.md
Azure provides the following Azure built-in roles for authorizing access to Even
| [Azure Event Hubs Data sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender) | Use this role to give the send access to Event Hubs resources. | | [Azure Event Hubs Data receiver](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-receiver) | Use this role to give the consuming/receiving access to Event Hubs resources. |
-For Schema Registry built-in roles, see [Schema Registry roles](schema-registry-overview.md#azure-role-based-access-control).
+For Schema Registry built-in roles, see [Schema Registry roles](schema-registry-concepts.md#azure-role-based-access-control).
## Resource scope Before you assign an Azure role to a security principal, determine the scope of access that the security principal should have. Best practices dictate that it's always best to grant only the narrowest possible scope.
event-hubs Create Schema Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/create-schema-registry.md
This article shows you how to create a schema group with schemas in a schema reg
> [!NOTE] > - The feature isn't available in the **basic** tier.
-> - Make sure that you are a member of one of these roles: **Owner**, **Contributor**, or **Schema Registry Contributor**. For details about role-based access control, see [Schema Registry overview](schema-registry-overview.md#azure-role-based-access-control).
+> - Make sure that you are a member of one of these roles: **Owner**, **Contributor**, or **Schema Registry Contributor**. For details about role-based access control, see [Schema Registry overview](schema-registry-concepts.md#azure-role-based-access-control).
> - If the event hub is in a **virtual network**, you won't be able to create schemas in the Azure portal unless you access the portal from a VM in the same virtual network.
event-hubs Event Hubs Kafka Connect Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-tutorial.md
Last updated 11/03/2022
[Apache Kafka Connect](https://kafka.apache.org/documentation/#connect) is a framework to connect and import/export data from/to any external system such as MySQL, HDFS, and file system through a Kafka cluster. This tutorial walks you through using Kafka Connect framework with Event Hubs. > [!NOTE]
-> This feature is currently in Preview.
+> This feature is currently in Preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
> [!WARNING] > Use of the Apache Kafka Connect framework and its connectors is **not eligible for product support through Microsoft Azure**.
This section walks you through spinning up FileStreamSource and FileStreamSink c
```bash curl -s -X POST -H "Content-Type: application/json" --data '{"name": "file-source","config": {"connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector","tasks.max":"1","topic":"connect-quickstart","file": "{YOUR/HOME/PATH}/connect-quickstart/input.txt"}}' http://localhost:8083/connectors ```
- You should see the Event Hub `connect-quickstart` on your Event Hubs instance after running the above command.
+ You should see the event hub `connect-quickstart` on your Event Hubs instance after running the above command.
4. Check status of source connector. ```bash curl -s http://localhost:8083/connectors/file-source/status
This section walks you through spinning up FileStreamSource and FileStreamSink c
``` ### Cleanup
-Kafka Connect creates Event Hub topics to store configurations, offsets, and status that persist even after the Connect cluster has been taken down. Unless this persistence is desired, it is recommended that these topics are deleted. You may also want to delete the `connect-quickstart` Event Hub that were created during the course of this walkthrough.
+Kafka Connect creates Event Hubs topics to store configurations, offsets, and status that persist even after the Connect cluster has been taken down. Unless this persistence is desired, we recommend that you delete these topics. You may also want to delete the `connect-quickstart` event hub that was created during the course of this walkthrough.
## Next steps
event-hubs Event Hubs Quickstart Kafka Enabled Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md
Title: 'Quickstart: Use Apache Kafka with Azure Event Hubs'
description: 'This quickstart shows you how to stream data into and from Azure Event Hubs using the Apache Kafka protocol.' Last updated 02/07/2023++ # Quickstart: Stream data with Azure Event Hubs and Apache Kafka
-This quickstart shows you how to stream data into and from Azure Event Hubs using the Apache Kafka protocol. You'll not change any code in the sample Kafka producer or consumer apps. You just update the configurations that the clients use to point to an Event Hubs namespace, which exposes a Kafka endpoint. You also don't build and use a Kafka cluster on your own. Instead, you'll use the Event Hubs namespace with the Kafka endpoint.
+This quickstart shows you how to stream data into and from Azure Event Hubs using the Apache Kafka protocol. You won't change any code in the sample Kafka producer or consumer apps. You just update the configurations that the clients use to point to an Event Hubs namespace, which exposes a Kafka endpoint. You also don't build and use a Kafka cluster on your own. Instead, you use the Event Hubs namespace with the Kafka endpoint.
> [!NOTE] > This sample is available on [GitHub](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/quickstart/java)
Azure Event Hubs supports using Azure Active Directory (Azure AD) to authorize r
1. Select the **Azure subscription** that has the VM. 1. For **Managed identity**, select **Virtual machine** 1. Select your virtual machine's managed identity.
- 1. Click **Select** at the bottom of the page.
+ 1. Select **Select** at the bottom of the page.
:::image type="content" source="./media/event-hubs-quickstart-kafka-enabled-event-hubs/add-vm-identity.png" alt-text="Screenshot showing the Add role assignment -> Select managed identities page."::: 1. Select **Review + Assign**. :::image type="content" source="./media/event-hubs-quickstart-kafka-enabled-event-hubs/review-assign.png" alt-text="Screenshot showing the Add role assignment page with role assigned to VM's managed identity.":::
-1. Restart the VM and log in back to the VM for which you configured the managed identity.
+1. Restart the VM and sign back in to the VM for which you configured the managed identity.
1. Clone the [Azure Event Hubs for Kafka repository](https://github.com/Azure/azure-event-hubs-for-kafka). 1. Navigate to `azure-event-hubs-for-kafka/tutorials/oauth/java/managedidentity/consumer`. 1. Switch to the `src/main/resources/` folder, and open `consumer.config`. Replace `namespacename` with the name of your Event Hubs namespace.
event-hubs Schema Registry Client Side Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-client-side-enforcement.md
+
+ Title: Client-side schema enforcement - Schema Registry
+description: This article provides information on using schemas in a schema registry when publishing or consuming events from Azure Event Hubs.
+ Last updated : 04/26/2023++++
+# Client-side schema enforcement
+The information flow when you use schema registry is the same for all protocols that you use to publish or consume events from Azure Event Hubs.
+
+The following diagram shows how the information flows when event producers and consumers use Schema Registry with the **Kafka** protocol using **Avro** serialization.
++
+### Producer
+
+1. The Kafka producer application uses `KafkaAvroSerializer` to serialize event data using the specified schema. The producer application provides details of the schema registry endpoint and other optional parameters that are required for schema validation.
+1. The serializer looks for the schema in the schema registry to serialize event data. If it finds the schema, the corresponding schema ID is returned. You can configure the producer application to auto-register the schema with the schema registry if it doesn't exist.
+1. The serializer then prepends the schema ID to the serialized data that's published to Event Hubs.
+
+### Consumer
+
+1. The Kafka consumer application uses `KafkaAvroDeserializer` to deserialize data that it receives from the event hub.
+1. The deserializer uses the schema ID (prepended by the producer) to retrieve the schema from the schema registry.
+1. The deserializer uses the schema to deserialize event data that it receives from the event hub.
+1. The schema registry client uses caching to prevent redundant schema registry lookups in the future.
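The producer and consumer flows above reduce to a simple framing protocol: the schema ID travels with each payload, and the registry is consulted only on a cache miss. The following is a rough, self-contained sketch; the in-memory registry, the ID format, and the JSON payloads are stand-ins for the real service and Avro encoding:

```python
import hashlib
import json

registry = {}      # schema ID -> schema document (stand-in for the registry)
schema_cache = {}  # consumer-side cache to avoid repeated registry lookups

def register(schema: dict) -> str:
    """Auto-register a schema and return its ID (a hash stands in for a real ID)."""
    schema_id = hashlib.sha256(json.dumps(schema, sort_keys=True).encode()).hexdigest()[:32]
    registry[schema_id] = schema
    return schema_id

def serialize(event: dict, schema: dict) -> bytes:
    """Producer side: prepend the schema ID to the serialized event data."""
    return register(schema).encode() + json.dumps(event).encode()

def deserialize(payload: bytes) -> dict:
    """Consumer side: read the schema ID, fetch and cache the schema, decode."""
    schema_id, body = payload[:32].decode(), payload[32:]
    if schema_id not in schema_cache:   # cache miss -> registry lookup
        schema_cache[schema_id] = registry[schema_id]
    return json.loads(body)

schema = {"type": "record", "name": "Order", "fields": [{"name": "id", "type": "string"}]}
assert deserialize(serialize({"id": "order-1"}, schema)) == {"id": "order-1"}
```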
event-hubs Schema Registry Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-concepts.md
+
+ Title: Azure Schema Registry Concepts
+description: This article explains concepts for Azure Schema Registry in Azure Event Hubs.
+ Last updated : 04/26/2023++++
+# Schema Registry in Azure Event Hubs
+Schema Registry in Azure Event Hubs provides you with a repository to use and manage schemas in schema-driven event streaming scenarios.
+
+## Schema Registry components
+
+An Event Hubs namespace can host schema groups alongside event hubs (or Kafka topics). It hosts a schema registry that can have multiple schema groups. Although it's hosted in Azure Event Hubs, the schema registry can be used universally with all Azure messaging services and any other message or event broker. Each schema group is a separately securable repository for a set of schemas. Groups can be aligned with a particular application or an organizational unit.
++
+### Schema groups
+A schema group is a logical group of similar schemas based on your business criteria. A schema group can hold multiple versions of a schema. The compatibility enforcement setting on a schema group can help ensure that newer schema versions are backward compatible.
+
+The security boundary imposed by the grouping mechanism helps ensure that trade secrets don't inadvertently leak through metadata in situations where the namespace is shared among multiple partners. It also allows application owners to manage schemas independently of other applications that share the same namespace.
+
+### Schemas
+Schemas define the contract between producers and consumers. A schema defined in an Event Hubs schema registry helps manage the contract outside of event data, thus removing the payload overhead. A schema has a name, a type (for example, record or array), a compatibility mode (none, forward, backward, full), and a serialization type (only Avro for now). You can create multiple versions of a schema, and retrieve and use a specific version of a schema.
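The properties just listed can be pictured as a small data model. This is purely illustrative Python; the class and field names are invented, not an SDK type:

```python
from dataclasses import dataclass, field

# Illustrative model of a registry schema: a name, a type, a compatibility
# mode, a serialization type, and a set of retrievable versions.
@dataclass
class Schema:
    name: str
    schema_type: str                 # for example, "record" or "array"
    compatibility: str               # "None", "Backward", "Forward", or "Full"
    serialization: str = "Avro"      # only Avro for now
    versions: dict = field(default_factory=dict)  # version number -> definition

    def add_version(self, definition: str) -> int:
        """Store a new version and return its version number."""
        version = max(self.versions, default=0) + 1
        self.versions[version] = definition
        return version

order = Schema("Order", "record", "Backward")
order.add_version('{"type": "record", "name": "Order", "fields": []}')
assert order.add_version('{"type": "record", "name": "Order", "fields": [{"name": "id", "type": "string"}]}') == 2
```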
+
+### Schema formats
+Schema formats determine how a schema is structured and defined, with each format outlining specific guidelines and syntax for defining the structure of the events used in event streaming.
+
+#### Avro schema
+[Avro](https://avro.apache.org/) is a popular data serialization system that uses a compact binary format and provides schema evolution capabilities.
+
+To learn more about using Avro schema format with Event Hubs Schema Registry, see:
+- [How to use schema registry with Kafka and Avro](schema-registry-kafka-java-send-receive-quickstart.md)
+- [How to use Schema Registry with the Event Hubs .NET SDK (AMQP) and Avro](schema-registry-dotnet-send-receive-quickstart.md)
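To make the format concrete, here's a hypothetical Avro record schema (the record and field names are invented for illustration), parsed with Python's standard `json` module:

```python
import json

# A hypothetical Avro record schema; not a schema from this article.
order_schema = json.loads("""
{
  "type": "record",
  "name": "Order",
  "namespace": "com.example.events",
  "fields": [
    {"name": "id", "type": "string"},
    {"name": "amount", "type": "double"},
    {"name": "note", "type": ["null", "string"], "default": null}
  ]
}
""")

# Every record must carry these fields; "note" is optional via its default.
field_names = [f["name"] for f in order_schema["fields"]]
print(field_names)  # ['id', 'amount', 'note']
```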
+
+#### JSON Schema (Preview)
+[JSON Schema](https://json-schema.org/) is a standardized way of defining the structure and data types of the events. JSON Schema enables the confident and reliable use of the JSON data format in event streaming.
+
+To learn more about using JSON schema format with Event Hubs Schema Registry, see:
+- [How to use schema registry with Kafka and JSON Schema](schema-registry-json-schema-kafka.md)
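For comparison, a JSON Schema describing the same hypothetical `Order` event might look like the following sketch. Python's standard library has no JSON Schema validator, so this example hand-checks only the `required` keyword; a real consumer would use a proper validator or the registry-aware serializers linked above:

```python
import json

# An illustrative JSON Schema for a hypothetical Order event (assumed names).
order_json_schema = json.loads("""
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Order",
  "type": "object",
  "properties": {
    "id": {"type": "string"},
    "amount": {"type": "number"},
    "description": {"type": "string"}
  },
  "required": ["id", "amount"]
}
""")

def missing_required(event, schema):
    """Return required properties absent from the event (toy check only)."""
    return [k for k in schema["required"] if k not in event]

print(missing_required({"id": "o-1"}, order_json_schema))  # ['amount']
```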
+
+## Schema evolution
+Schemas need to evolve with the business requirements of producers and consumers. Azure Schema Registry supports schema evolution by introducing compatibility modes at the schema group level. When you create a schema group, you specify the compatibility mode of the schemas that you include in that schema group. When you update a schema, the change must comply with the assigned compatibility mode; only then does the update create a new version of the schema.
+
+> [!NOTE]
+> Schema evolution is supported only for the Avro schema format.
+
+Azure Schema Registry for Event Hubs supports the following compatibility modes.
+
+### Backward compatibility
+Backward compatibility mode allows consumer code that uses a new version of a schema to process messages encoded with the old version of the schema. When you use backward compatibility mode in a schema group, the following changes to a schema are allowed:
+
+- Delete fields.
+- Add optional fields.
+
+### Forward compatibility
+Forward compatibility mode allows consumer code that uses an old version of a schema to read messages encoded with the new version of the schema. Forward compatibility mode allows the following changes to a schema:
+- Add fields.
+- Delete optional fields.
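The two rule sets above can be sketched as a toy checker. This is not how the registry itself validates updates (real checking is schema-aware, per the Avro resolution rules); here a schema is reduced to a dict mapping each field name to whether the field is optional:

```python
# Toy sketch of the backward/forward rules above (not the registry's logic).
# A "schema" here is just {field_name: is_optional}.
def allowed_changes(old, new, mode):
    added = [f for f in new if f not in old]
    deleted = [f for f in old if f not in new]
    if mode == "backward":
        # May delete fields; added fields must be optional.
        return all(new[f] for f in added)
    if mode == "forward":
        # May add fields; deleted fields must have been optional.
        return all(old[f] for f in deleted)
    return True  # "none": no compatibility checks

v1 = {"id": False, "amount": False, "legacy_code": False}
v2 = {"id": False, "amount": False, "description": True}  # optional field added

print(allowed_changes(v1, v2, "backward"))  # True: deleted a field, added an optional one
print(allowed_changes(v1, v2, "forward"))   # False: the deleted field was required
```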
+
+### No compatibility
+When the ``None`` compatibility mode is used, the schema registry doesn't do any compatibility checks when you update schemas.
+
+## Client SDKs
+
+You can use one of the following libraries to include an Avro serializer, which you can use to serialize and deserialize payloads containing Schema Registry schema identifiers and Avro-encoded data.
+
+- [.NET - Microsoft.Azure.Data.SchemaRegistry.ApacheAvro](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/schemaregistry/Microsoft.Azure.Data.SchemaRegistry.ApacheAvro)
+- [Java - azure-data-schemaregistry-avro](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/schemaregistry/azure-data-schemaregistry-apacheavro)
+- [Python - azure-schemaregistry-avroserializer](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/schemaregistry/azure-schemaregistry-avroencoder/)
+- [JavaScript - @azure/schema-registry-avro](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/schemaregistry/schema-registry-avro)
+- [Apache Kafka](https://github.com/Azure/azure-schema-registry-for-kafka/) - Run Kafka-integrated Apache Avro serializers and deserializers backed by Azure Schema Registry. The Java client's Apache Kafka client serializer for the Azure Schema Registry can be used in any Apache Kafka scenario and with any Apache Kafka® based deployment or cloud service.
+- **Azure CLI** - For an example of adding a schema to a schema group using CLI, see [Adding a schema to a schema group using CLI](https://github.com/Azure/azure-event-hubs/tree/master/samples/Management/CLI/AddschematoSchemaGroups).
+- **PowerShell** - For an example of adding a schema to a schema group using PowerShell, see [Adding a schema to a schema group using PowerShell](https://github.com/Azure/azure-event-hubs/tree/master/samples/Management/PowerShell/AddingSchematoSchemagroups).
++
+## Limits
+For Event Hubs limits (for example, the number of schema groups in a namespace), see [Event Hubs quotas and limits](event-hubs-quotas.md).
+
+## Azure role-based access control
+When accessing the schema registry programmatically, you need to register an application in Azure Active Directory (Azure AD) and add the security principal of the application to one of the following Azure role-based access control (Azure RBAC) roles:
+
+| Role | Description |
+| - | -- |
+| Owner | Read, write, and delete Schema Registry groups and schemas. |
+| Contributor | Read, write, and delete Schema Registry groups and schemas. |
+| [Schema Registry Reader](../role-based-access-control/built-in-roles.md#schema-registry-reader-preview) | Read and list Schema Registry groups and schemas. |
+| [Schema Registry Contributor](../role-based-access-control/built-in-roles.md#schema-registry-contributor-preview) | Read, write, and delete Schema Registry groups and schemas. |
+
+For instructions on registering an application using the Azure portal, see [Register an app with Azure AD](../active-directory/develop/quickstart-register-app.md). Note down the client ID (application ID), tenant ID, and the secret to use in the code.
+
+## Next steps
+
+- To learn how to create a schema registry using the Azure portal, see [Create an Event Hubs schema registry using the Azure portal](create-schema-registry.md).
+- See the following **Schema Registry Avro client library** samples.
+ - [.NET](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/schemaregistry/Microsoft.Azure.Data.SchemaRegistry.ApacheAvro/tests/Samples)
+ - [Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/schemaregistry/azure-data-schemaregistry-apacheavro/src/samples)
+ - [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/schemaregistry/schema-registry-avro/samples)
+ - [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/schemaregistry/azure-schemaregistry-avroencoder/samples)
+ - [Kafka Avro Integration for Azure Schema Registry](https://github.com/Azure/azure-schema-registry-for-kafka/tree/master/csharp/avro/samples)
event-hubs Schema Registry Dotnet Send Receive Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-dotnet-send-receive-quickstart.md
Title: 'Quickstart: Validate schema when sending or receiving events'
+ Title: 'Validate schema when sending or receiving events'
description: In this quickstart, you create a .NET Core application that sends/receives events to/from Azure Event Hubs with schema validation using Schema Registry. Previously updated : 03/20/2023 Last updated : 04/26/2023 ms.devlang: csharp-++
-# Quickstart: Validate schema when sending and receiving events - AMQP and .NET
+# Validate using an Avro schema when streaming events using Event Hubs .NET SDKs (AMQP)
In this quickstart, you learn how to send events to and receive events from an event hub with schema validation using the **Azure.Messaging.EventHubs** .NET library. > [!NOTE]
Add your user account to the **Schema Registry Reader** role at the namespace le
Install-Package Azure.ResourceManager.Compute ``` 1. Authenticate producer applications to connect to Azure via Visual Studio as shown [here](/dotnet/api/overview/azure/identity-readme#authenticating-via-visual-studio).
-1. Sign-in to Azure using the user account that's a member of the `Schema Registry Reader` role at the namespace level. For information about schema registry roles, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md#azure-role-based-access-control).
+1. Sign-in to Azure using the user account that's a member of the `Schema Registry Reader` role at the namespace level. For information about schema registry roles, see [Azure Schema Registry in Event Hubs](schema-registry-concepts.md#azure-role-based-access-control).
### Code generation using the Avro schema 1. Use the same content you used to create the schema to create a file named ``Order.avsc``. Save the file in the project or solution folder.
This section shows how to write a .NET Core console application that receives ev
Install-Package Azure.ResourceManager.Compute ``` 1. Authenticate producer applications to connect to Azure via Visual Studio as shown [here](/dotnet/api/overview/azure/identity-readme#authenticating-via-visual-studio).
-1. Sign-in to Azure using the user account that's a member of the `Schema Registry Reader` role at the namespace level. For information about schema registry roles, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md#azure-role-based-access-control).
+1. Sign-in to Azure using the user account that's a member of the `Schema Registry Reader` role at the namespace level. For information about schema registry roles, see [Azure Schema Registry in Event Hubs](schema-registry-concepts.md#azure-role-based-access-control).
1. Add the `Order.cs` file you generated as part of creating the producer app to the **OrderConsumer** project. 1. Right-click **OrderConsumer** project, and select **Set as Startup project**.
event-hubs Schema Registry Json Schema Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-json-schema-kafka.md
+
+ Title: Use JSON Schema with Apache Kafka applications
+description: This article provides information on how to use JSON Schema in Schema Registry with Apache Kafka applications.
+ Last updated : 04/26/2023
+ms.devlang: scala
++++
+# Use JSON Schema with Apache Kafka applications (Preview)
+This tutorial walks you through a scenario where you use JSON Schemas to serialize and deserialize events using Azure Schema Registry in Event Hubs.
+
+In this use case, a Kafka producer application uses a JSON schema stored in Azure Schema Registry to serialize events and publish them to a Kafka topic/event hub in Azure Event Hubs. The Kafka consumer deserializes the events that it consumes from Event Hubs by using the schema ID of the event and the JSON schema stored in Azure Schema Registry.
+
+> [!NOTE]
+> This feature is currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
+## Prerequisites
+If you're new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md) before you do this quickstart.
+
+To complete this quickstart, you need the following prerequisites:
+- If you don't have an **Azure subscription**, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- In your development environment, install the following components:
+ * [Java Development Kit (JDK) 1.7+](/azure/developer/java/fundamentals/java-support-on-azure).
+ * [Download](https://maven.apache.org/download.cgi) and [install](https://maven.apache.org/install.html) a Maven binary archive.
+ * [Git](https://www.git-scm.com/)
+- Clone the [Azure Schema Registry for Kafka](https://github.com/Azure/azure-schema-registry-for-kafka.git) repository.
+
+## Create an event hub
+Follow instructions from the quickstart: [Create an Event Hubs namespace and an event hub](event-hubs-create.md) to create an Event Hubs namespace and an event hub. Then, follow instructions from [Get the connection string](event-hubs-get-connection-string.md) to get a connection string to your Event Hubs namespace.
+
+Note down the following settings that you use in the current quickstart:
+- Connection string for the Event Hubs namespace
+- Name of the event hub
+
+## Create a schema
+Follow instructions from [Create schemas using Schema Registry](create-schema-registry.md) to create a schema group and a schema.
+
+1. Create a schema group named **contoso-sg** using the Schema Registry portal. Use *JSON Schema* as the serialization type.
+1. In that schema group, create a new JSON schema with schema nam