Updates from: 10/09/2023 01:08:59
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
+ # Configure authentication session management with Conditional Access In complex deployments, organizations might have a need to restrict authentication sessions. Some scenarios might include:
Sign-in frequency defines the time period before a user is asked to sign in agai
The Microsoft Entra ID default configuration for user sign-in frequency is a rolling window of 90 days. Asking users for credentials often seems like a sensible thing to do, but it can backfire: users that are trained to enter their credentials without thinking can unintentionally supply them to a malicious credential prompt.
-It might sound alarming not to ask a user to sign back in; in reality, any violation of IT policies will revoke the session. Some examples include (but aren't limited to) a password change, a noncompliant device, or a disabled account. You can also explicitly [revoke users' sessions using PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken). The Microsoft Entra ID default configuration comes down to "don't ask users to provide their credentials if the security posture of their sessions hasn't changed".
+It might sound alarming not to ask a user to sign back in; in reality, any violation of IT policies will revoke the session. Some examples include (but aren't limited to) a password change, a noncompliant device, or a disabled account. You can also explicitly [revoke users' sessions using Microsoft Graph PowerShell](/powershell/module/microsoft.graph.users.actions/revoke-mgusersigninsession). The Microsoft Entra ID default configuration comes down to "don't ask users to provide their credentials if the security posture of their sessions hasn't changed".
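As a rough sketch, the same revocation can be issued directly against Microsoft Graph; the user ID, token, and required permission below are placeholders and assumptions based on the public Graph docs:

```bash
# Sketch: revoke a user's sign-in sessions via Microsoft Graph.
# {USER_ID} and {ACCESS_TOKEN} are placeholders; the caller needs an
# appropriate permission such as User.ReadWrite.All.
curl -X POST "https://graph.microsoft.com/v1.0/users/{USER_ID}/revokeSignInSessions" \
  -H "Authorization: Bearer {ACCESS_TOKEN}" \
  -H "Content-Length: 0"
```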
The sign-in frequency setting works with apps that have implemented OAuth2 or OIDC protocols according to the standards. Most Microsoft native apps for Windows, Mac, and mobile, including the following web applications, comply with the setting.
We factor for five minutes of clock skew, so that we don't prompt users more o
## Next steps * If you're ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).+
active-directory Entitlement Management Access Package First https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-first.md
An *access package* is a bundle of resources that a team or project needs and is
![Screenshot of the access package lifecycle tab](./media/entitlement-management-access-package-first/new-access-package-lifecycle.png)
-1. Skip the **Custom extensions (Preview)** step.
+1. Skip the **Custom extensions** step.
1. Select **Next** to open the **Review + Create** tab.
active-directory Entitlement Management Custom Teams Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-custom-teams-extension.md
To create a Logic App and custom extension in a catalog, you'd follow these step
1. In the left menu, select **Catalogs**.
-1. Select the catalog for which you want to add a custom extension and then in the left menu, select **Custom Extensions (Preview)**.
+1. Select the catalog for which you want to add a custom extension and then in the left menu, select **Custom Extensions**.
1. In the header navigation bar, select **Add a Custom Extension**.
This custom extension to the linked Logic App now appears in your Custom Extensi
## Configuring the Logic App
-1. The custom extension you created appears under the **Custom Extensions (Preview)** tab. Select the "*Logic app*" in the custom extension; this redirects you to a page where you can configure the logic app.
+1. The custom extension you created appears under the **Custom Extensions** tab. Select the "*Logic app*" in the custom extension; this redirects you to a page where you can configure the logic app.
:::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-configure-logic-app.png" alt-text="Screenshot of the configure logic apps screen." lightbox="media/entitlement-management-servicenow-integration/entitlement-management-configure-logic-app.png"::: 1. On the left menu, select **Logic app designer**. :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-logic-app-designer.png" alt-text="Screenshot of the logic apps designer screen." lightbox="media/entitlement-management-servicenow-integration/entitlement-management-logic-app-designer.png":::
After setting up custom extensibility in the catalog, administrators can create
1. Change to the Policies tab, select the policy, and select **Edit**.
-1. In the policy settings, go to the **Custom Extensions (Preview)** tab.
+1. In the policy settings, go to the **Custom Extensions** tab.
1. In the menu below Stage, select the access package event you wish to use as a trigger for this custom extension (Logic App). For our scenario, to trigger the custom extension Logic App workflow when an access package is requested, approved, granted, or removed, select **Request is created**, **Request is approved**, **Assignment is Granted**, and **Assignment is removed**. :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-custom-extension-policy.png" alt-text="Screenshot of custom extension policies for an access package.":::
After setting up custom extensibility in the catalog, administrators can create
1. Add **Lifecycle** details.
-1. Under the Custom Extensions (Preview) tab, in the menu below Stage, select the access package event you wish to use as a trigger for this custom extension (Logic App). For our scenario, to trigger the custom extension Logic App workflow when an access package is requested, approved, granted, or removed, select **Request is created**, **Request is approved**, **Assignment is Granted**, and **Assignment is removed**.
+1. Under the Custom Extensions tab, in the menu below Stage, select the access package event you wish to use as a trigger for this custom extension (Logic App). For our scenario, to trigger the custom extension Logic App workflow when an access package is requested, approved, granted, or removed, select **Request is created**, **Request is approved**, **Assignment is Granted**, and **Assignment is removed**.
:::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-access-package-policy.png" alt-text="Screenshot of access package policy selection."::: 1. In **Review and Create**, review the summary of your access package, and make sure the details are correct, then select **Create**.
active-directory Entitlement Management Delegate Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-catalog.md
To allow delegated roles, such as catalog creators and access package managers,
![Microsoft Entra user settings - Administration portal](./media/entitlement-management-delegate-catalog/user-settings.png)
-## Manage role assignments programmatically (preview)
+## Manage role assignments programmatically
You can also view and update catalog creators and entitlement management catalog-specific role assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the Graph API to [list the role definitions](/graph/api/rbacapplication-list-roledefinitions) of entitlement management, and [list role assignments](/graph/api/rbacapplication-list-roleassignments) to those role definitions.
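A minimal sketch of those two Graph calls, assuming an `{ACCESS_TOKEN}` carrying the `EntitlementManagement.ReadWrite.All` permission and a hypothetical `{ROLE_DEFINITION_ID}`:

```bash
# Sketch: list entitlement management role definitions, then the assignments
# to one of them. {ACCESS_TOKEN} and {ROLE_DEFINITION_ID} are placeholders.
curl "https://graph.microsoft.com/v1.0/roleManagement/entitlementManagement/roleDefinitions" \
  -H "Authorization: Bearer {ACCESS_TOKEN}"

curl -G "https://graph.microsoft.com/v1.0/roleManagement/entitlementManagement/roleAssignments" \
  --data-urlencode "\$filter=roleDefinitionId eq '{ROLE_DEFINITION_ID}'" \
  -H "Authorization: Bearer {ACCESS_TOKEN}"
```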
active-directory Entitlement Management Delegate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate.md
You can view the list of catalogs currently enabled for external users in the Mi
1. If any of those catalogs have a non-zero number of access packages, those access packages may have a policy for users not in your directory.
-## Manage role assignments to entitlement management roles programmatically (preview)
+## Manage role assignments to entitlement management roles programmatically
You can also view and update catalog creators and entitlement management catalog-specific role assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the Graph API to [list the role definitions](/graph/api/rbacapplication-list-roledefinitions) of entitlement management, and [list role assignments](/graph/api/rbacapplication-list-roleassignments) to those role definitions.
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
Entitlement management can help address these challenges. To learn more about h
Here are some of the capabilities of entitlement management: - Control who can get access to applications, groups, Teams and SharePoint sites, with multi-stage approval, and ensure users don't retain access indefinitely through time-limited assignments and recurring access reviews.-- Give users access automatically to those resources, based on the user's properties like department or cost center, and remove a user's access when those properties change (preview).
+- Give users access automatically to those resources, based on the user's properties like department or cost center, and remove a user's access when those properties change.
- Delegate to non-administrators the ability to create access packages. These access packages contain resources that users can request, and the delegated access package managers can define policies with rules for which users can request, who must approve their access, and when access expires. - Select connected organizations whose users can request access. When a user who isn't yet in your directory requests access, and is approved, they're automatically invited into your directory and assigned access. When their access expires, if they have no other access package assignments, their B2B account in your directory can be automatically removed.
active-directory Entitlement Management Ticketed Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-ticketed-provisioning.md
Provide the Azure subscription, resource group details, along with the Logic App
1. In the left menu, select **Catalogs**.
-1. Select the catalog for which you want to add a custom extension and then in the left menu, select **Custom Extensions (Preview)**.
+1. Select the catalog for which you want to add a custom extension and then in the left menu, select **Custom Extensions**.
1. In the header navigation bar, select **Add a Custom Extension**.
After setting up custom extensibility in the catalog, administrators can create
1. Change to the policy tab, select the policy, and select **Edit**.
-1. In the policy settings, go to the **Custom Extensions (Preview)** tab.
+1. In the policy settings, go to the **Custom Extensions** tab.
1. In the menu below **Stage**, select the access package event you wish to use as a trigger for this custom extension (Logic App). For our scenario, to trigger the custom extension Logic App workflow when an access package has been approved, select **Request is approved**. > [!NOTE]
The IT Support team works on the ticket created above to do necessary provisions
Advance to the next article to learn how to create... > [!div class="nextstepaction"]
-> [Trigger Logic Apps with custom extensions in entitlement management (Preview)](entitlement-management-logic-apps-integration.md)
+> [Trigger Logic Apps with custom extensions in entitlement management](entitlement-management-logic-apps-integration.md)
active-directory Identity Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md
Once you've started using these identity governance features, you can easily aut
| Creating, updating and deleting AD and Microsoft Entra user accounts automatically for employees |[Plan cloud HR to Microsoft Entra user provisioning](../app-provisioning/plan-cloud-hr-provision.md)| | Updating the membership of a group, based on changes to the member user's attributes | [Create a dynamic group](../enterprise-users/groups-create-rule.md)| | Assigning licenses | [group-based licensing](../enterprise-users/licensing-groups-assign.md) |
-| Adding and removing a user's group memberships, application roles, and SharePoint site roles, based on changes to the user's attributes | [Configure an automatic assignment policy for an access package in entitlement management](entitlement-management-access-package-auto-assignment-policy.md) (preview)|
+| Adding and removing a user's group memberships, application roles, and SharePoint site roles, based on changes to the user's attributes | [Configure an automatic assignment policy for an access package in entitlement management](entitlement-management-access-package-auto-assignment-policy.md)|
| Adding and removing a user's group memberships, application roles, and SharePoint site roles, on a specific date | [Configure lifecycle settings for an access package in entitlement management](entitlement-management-access-package-lifecycle-policy.md)|
-| Running custom workflows when a user requests or receives access, or access is removed | [Trigger Logic Apps in entitlement management](entitlement-management-logic-apps-integration.md) (preview) |
+| Running custom workflows when a user requests or receives access, or access is removed | [Trigger Logic Apps in entitlement management](entitlement-management-logic-apps-integration.md) |
| Regularly having memberships of guests in Microsoft groups and Teams reviewed, and removing guest memberships that are denied |[Create an access review](create-access-review.md) | | Removing guest accounts that were denied by a reviewer |[Review and remove external users who no longer have resource access](access-reviews-external-users.md) | | Removing guest accounts that have no access package assignments |[Manage the lifecycle of external users](entitlement-management-external-users.md#manage-the-lifecycle-of-external-users) |
active-directory Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/sap.md
When a new employee is hired in your organization, you might need to trigger a w
## Check for separation of duties
-With separation-of-duties checks now available in preview in Microsoft Entra ID [entitlement management](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/ensure-compliance-using-separation-of-duties-checks-in-access/ba-p/2466939), customers can ensure that users don't take on excessive access rights:
+With separation-of-duties checks in Microsoft Entra ID [entitlement management](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/ensure-compliance-using-separation-of-duties-checks-in-access/ba-p/2466939), customers can ensure that users don't take on excessive access rights:
* Admins and access managers can prevent users from requesting additional access packages if they're already assigned to other access packages or are a member of other groups that are incompatible with the requested access. * Enterprises with critical regulatory requirements for SAP apps have a single consistent view of access controls. They can then enforce separation-of-duties checks across their financial and other business-critical applications, along with Microsoft Entra integrated applications.
-* With [Pathlock](https://pathlock.com/), integration customers can take advantage of fine-grained separation-of-duties checks with access packages in Microsoft Entra ID. Over time, this ability will help customers address Sarbanes-Oxley and other compliance requirements.
+* With integration with [Pathlock](https://pathlock.com/) and other partner products, customers can take advantage of fine-grained separation-of-duties checks with access packages in Microsoft Entra ID. Over time, this ability will help customers address Sarbanes-Oxley and other compliance requirements.
## Next steps
ai-services Batch Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/batch-inference.md
# Trigger batch inference with trained model + You can choose either the batch inference API or the streaming inference API for detection. | Batch inference API | Streaming inference API |
ai-services Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/create-resource.md
# Create an Anomaly Detector resource + The Anomaly Detector service is a cloud-based Azure AI service that uses machine-learning models to detect anomalies in your time series data. Here, you'll learn how to create an Anomaly Detector resource in the Azure portal. ## Create an Anomaly Detector resource in Azure portal
ai-services Deploy Anomaly Detection On Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/deploy-anomaly-detection-on-container-instances.md
# Deploy an Anomaly Detector univariate container to Azure Container Instances + Learn how to deploy the Azure AI services [Anomaly Detector](../anomaly-detector-container-howto.md) container to Azure [Container Instances](../../../container-instances/index.yml). This procedure demonstrates the creation of an Anomaly Detector resource. Then we discuss pulling the associated container image. Finally, we highlight the ability to exercise the orchestration of the two from a browser. Using containers can shift the developers' attention away from managing infrastructure to instead focusing on application development. [!INCLUDE [Prerequisites](../../containers/includes/container-preview-prerequisites.md)]
ai-services Deploy Anomaly Detection On Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/deploy-anomaly-detection-on-iot-edge.md
# Deploy an Anomaly Detector univariate module to IoT Edge + Learn how to deploy the Azure AI services [Anomaly Detector](../anomaly-detector-container-howto.md) module to an IoT Edge device. Once it's deployed into IoT Edge, the module runs in IoT Edge together with other modules as container instances. It exposes the exact same APIs as an Anomaly Detector container instance running in a standard docker container environment. ## Prerequisites
ai-services Identify Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/identify-anomalies.md
# How to: Use the Anomaly Detector univariate API on your time series data + The [Anomaly Detector API](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector/operations/post-timeseries-entire-detect) provides two methods of anomaly detection. You can either detect anomalies as a batch throughout your time series, or as your data is generated by detecting the anomaly status of the latest data point. The detection model returns anomaly results along with each data point's expected value, and the upper and lower anomaly detection boundaries. You can use these values to visualize the range of normal values, and anomalies in the data. ## Anomaly detection modes
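A minimal sketch of the batch (entire-series) call, assuming `{ENDPOINT}` and `{API_KEY}` placeholders from your resource; a real request needs more data points than shown:

```bash
# Sketch: detect anomalies across an entire series in one batch call.
# {ENDPOINT} and {API_KEY} are placeholders; the series below is truncated
# for brevity (the API requires a minimum of 12 points).
curl -X POST "{ENDPOINT}/anomalydetector/v1.1/timeseries/entire/detect" \
  -H "Ocp-Apim-Subscription-Key: {API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
        "granularity": "daily",
        "series": [
          { "timestamp": "2023-01-01T00:00:00Z", "value": 32.0 },
          { "timestamp": "2023-01-02T00:00:00Z", "value": 31.5 },
          { "timestamp": "2023-01-03T00:00:00Z", "value": 93.0 }
        ]
      }'
```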
ai-services Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/postman.md
# How to run the Multivariate Anomaly Detector API in Postman + This article walks you through using Postman to access the Multivariate Anomaly Detection REST API. ## Getting started
ai-services Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/prepare-data.md
# Prepare your data and upload to Storage Account + Multivariate Anomaly Detection requires training to process your data, and an Azure Storage Account to store your data for further training and inference steps. ## Data preparation
ai-services Streaming Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/streaming-inference.md
# Streaming inference with trained model + You can choose either the batch inference API or the streaming inference API for detection. | Batch inference API | Streaming inference API |
ai-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/train-model.md
# Train a Multivariate Anomaly Detection model + To test out Multivariate Anomaly Detection quickly, try the [Code Sample](https://github.com/Azure-Samples/AnomalyDetector)! For more instructions on how to run a Jupyter notebook, please refer to [Install and Run a Jupyter Notebook](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/install.html#). ## API Overview
ai-services Anomaly Detector Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/anomaly-detector-container-configuration.md
# Configure Anomaly Detector univariate containers + The **Anomaly Detector** container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the billing settings. ## Configuration settings
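As a hedged example of those arguments together (the image tag and the memory/CPU limits are assumptions; `Eula`, `Billing`, and `ApiKey` are the required billing settings named above):

```bash
# Sketch: run the univariate Anomaly Detector container with the required
# billing settings. {ENDPOINT_URI} and {API_KEY} come from your Azure resource.
docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
  mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest \
  Eula=accept \
  Billing={ENDPOINT_URI} \
  ApiKey={API_KEY}
```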
ai-services Anomaly Detector Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/anomaly-detector-container-howto.md
keywords: on-premises, Docker, container, streaming, algorithms
# Install and run Docker containers for the Anomaly Detector API + [!INCLUDE [container image location note](../containers/includes/image-location-note.md)] Containers enable you to use the Anomaly Detector API in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run an Anomaly Detector container.
ai-services Anomaly Detection Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/anomaly-detection-best-practices.md
# Best practices for using the Anomaly Detector univariate API + The Anomaly Detector API is a stateless anomaly detection service. The accuracy and performance of its results can be impacted by: * How your time series data is prepared.
ai-services Best Practices Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/best-practices-multivariate.md
keywords: anomaly detection, machine learning, algorithms
# Best practices for using the Multivariate Anomaly Detector API + This article provides guidance around recommended practices to follow when using the multivariate Anomaly Detector (MVAD) APIs. In this tutorial, you'll:
ai-services Multivariate Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/multivariate-architecture.md
keywords: anomaly detection, machine learning, algorithms
# Predictive maintenance solution with Multivariate Anomaly Detector + Many different industries need predictive maintenance solutions to reduce risks and gain actionable insights through processing data from their equipment. Predictive maintenance evaluates the condition of equipment by performing online monitoring. The goal is to perform maintenance before the equipment degrades or breaks down. Monitoring the health status of equipment can be challenging, as each component inside the equipment can generate dozens of signals. For example, vibration, orientation, and rotation. This can be even more complex when those signals have an implicit relationship, and need to be monitored and analyzed together. Defining different rules for those signals and correlating them with each other manually can be costly. Anomaly Detector's multivariate feature allows:
ai-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/troubleshoot.md
keywords: anomaly detection, machine learning, algorithms
# Troubleshoot the multivariate API + This article provides guidance on how to troubleshoot and remediate common error messages when you use the Azure AI Anomaly Detector multivariate API. ## Multivariate error codes
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/overview.md
# What is Anomaly Detector? + [!INCLUDE [Azure AI services rebrand](../includes/rebrand-note.md)] Anomaly Detector is an AI service with a set of APIs that enables you to monitor and detect anomalies in your time series data with little machine learning (ML) knowledge, using either batch validation or real-time inference.
ai-services Client Libraries Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/quickstarts/client-libraries-multivariate.md
# Quickstart: Use the Multivariate Anomaly Detector client library + ::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](../includes/quickstarts/anomaly-detector-client-library-csharp-multivariate.md)]
ai-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/quickstarts/client-libraries.md
# Quickstart: Use the Univariate Anomaly Detector client library + ::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](../includes/quickstarts/anomaly-detector-client-library-csharp.md)]
ai-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/regions.md
# Anomaly Detector service supported regions + The Anomaly Detector service provides anomaly detection technology on your time series data. The service is available in multiple regions with unique endpoints for the Anomaly Detector SDK and REST APIs. Keep in mind the following points:
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/service-limits.md
# Anomaly Detector service quotas and limits + This article contains both a quick reference and detailed description of Azure AI Anomaly Detector service quotas and limits for all pricing tiers. It also contains some best practices to help avoid request throttling. The quotas and limits apply to all the versions within Azure AI Anomaly Detector service.
ai-services Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/tutorials/azure-data-explorer.md
# Tutorial: Use Univariate Anomaly Detector in Azure Data Explorer + ## Introduction The [Anomaly Detector API](../overview.md) enables you to check and detect abnormalities in your time series data without having to know machine learning. The Anomaly Detector API's algorithms adapt by automatically finding and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API decides boundaries for anomaly detection, expected values, and which data points are anomalies.
ai-services Batch Anomaly Detection Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/tutorials/batch-anomaly-detection-powerbi.md
# Tutorial: Visualize anomalies using batch detection and Power BI (univariate) + Use this tutorial to find anomalies within a time series data set as a batch. Using Power BI Desktop, you will take an Excel file, prepare the data for the Anomaly Detector API, and visualize statistical anomalies throughout it. In this tutorial, you'll learn how to:
ai-services Multivariate Anomaly Detection Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md
# Tutorial: Use Multivariate Anomaly Detector in Azure Synapse Analytics + Use this tutorial to detect anomalies among multiple variables in Azure Synapse Analytics in very large datasets and databases. This solution is perfect for scenarios like equipment predictive maintenance. The underlying power comes from the integration with [SynapseML](https://microsoft.github.io/SynapseML/), an open-source library that aims to simplify the creation of massively scalable machine learning pipelines. It can be installed and used on any Spark 3 infrastructure including your **local machine**, **Databricks**, **Synapse Analytics**, and others. In this tutorial, you'll learn how to:
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/whats-new.md
Last updated 12/15/2022
# What's new in Anomaly Detector + Learn what's new in the service. These items include release notes, videos, blog posts, papers, and other types of information. Bookmark this page to keep up to date with the service. We have also added links to some user-generated content. Those items will be marked with the **[UGC]** tag. Some of them are hosted on websites that are external to Microsoft, and Microsoft isn't responsible for the content there. Use discretion when you refer to these resources. Contact AnomalyDetector@microsoft.com or raise an issue on GitHub if you'd like us to remove the content.
ai-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/disconnected-containers.md
Containers enable you to run Azure AI services APIs in your own environment, and
* [Sentiment Analysis](../language-service/sentiment-opinion-mining/how-to/use-containers.md) * [Key Phrase Extraction](../language-service/key-phrase-extraction/how-to/use-containers.md) * [Language Detection](../language-service/language-detection/how-to/use-containers.md)
+ * [Summarization](../language-service/summarization/how-to/use-containers.md)
* [Azure AI Vision - Read](../computer-vision/computer-vision-how-to-install-containers.md) * [Document Intelligence](../../ai-services/document-intelligence/containers/disconnected.md)
ai-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/how-to/use-containers.md
Containers enable you to host the Summarization API on your own infrastructure.
* On Windows, Docker must also be configured to support Linux containers. * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/). * A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/). For disconnected containers, the DC0 tier is required.
-* For disconnected containers,
[!INCLUDE [Gathering required parameters](../../../containers/includes/container-gathering-required-parameters.md)]
for GPU containers.
[!INCLUDE [Tip for using docker list](../../../../../includes/cognitive-services-containers-docker-list-tip.md)]
-## Download the summarization models
+## Download the summarization container models
-A prerequisite for running the summarization container is to download the models first. You can do this by running one of the following commands:
+A prerequisite for running the summarization container is to download the models first. You can do this by running one of the following commands, using a CPU container image as an example:
```bash
-docker run -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization downloadModels=ExtractiveSummarization billing={ENDPOINT_URI} apikey={API_KEY}
-docker run -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization downloadModels=AbstractiveSummarization billing={ENDPOINT_URI} apikey={API_KEY}
-docker run -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization downloadModels=ConversationSummarization billing={ENDPOINT_URI} apikey={API_KEY}
+docker run -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization:cpu downloadModels=ExtractiveSummarization billing={ENDPOINT_URI} apikey={API_KEY}
+docker run -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization:cpu downloadModels=AbstractiveSummarization billing={ENDPOINT_URI} apikey={API_KEY}
+docker run -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization:cpu downloadModels=ConversationSummarization billing={ENDPOINT_URI} apikey={API_KEY}
``` It's not recommended to download models for all skills inside the same `HOST_MODELS_PATH`, as the container loads all models inside the `HOST_MODELS_PATH`. Doing so would use a large amount of memory. It's recommended to only download the model for the skill you need in a particular `HOST_MODELS_PATH`.
In order to ensure compatibility between models and the container, re-download t
## Run the container with `docker run`
-Once the container is on the host computer, use the following command to run the container. The container will continue to run until you stop it. (note the `rai_terms=accept` part)
+Once the *Summarization* container is on the host computer, use the following `docker run` command to run the container. The container will continue to run until you stop it. Replace the placeholders below with your own values:
+
+| Placeholder | Value | Format or example |
+|-|-|-|
+| **{HOST_MODELS_PATH}** | The host computer [volume mount](https://docs.docker.com/storage/volumes/), which Docker uses to persist the model. |An example is c:\SummarizationModel where the c:\ drive is located on the host machine.|
+| **{ENDPOINT_URI}** | The endpoint for accessing the summarization API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com`|
+| **{API_KEY}** | The key for your Language resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
```bash
-docker run -p 5000:5000 -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization eula=accept rai_terms=accept billing={ENDPOINT_URI} apikey={API_KEY}
+docker run -p 5000:5000 -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization:cpu eula=accept rai_terms=accept billing={ENDPOINT_URI} apikey={API_KEY}
```
-Or if you are running a GPU container, use the this command instead.
+Or if you are running a GPU container, use this command instead.
```bash
-docker run -p 5000:5000 --gpus all -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization eula=accept rai_terms=accept billing={ENDPOINT_URI} apikey={API_KEY}
+docker run -p 5000:5000 --gpus all -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization:gpu eula=accept rai_terms=accept billing={ENDPOINT_URI} apikey={API_KEY}
``` If there is more than one GPU on the machine, replace `--gpus all` with `--gpus device={DEVICE_ID}`. - > [!IMPORTANT] > * The docker commands in the following sections use the back slash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements. > * The `Eula`, `Billing`, `rai_terms` and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
-To run the *Summarization* container, execute the following `docker run` command. Replace the placeholders below with your own values:
-
-| Placeholder | Value | Format or example |
-|-|-||
-| **{API_KEY}** | The key for your Language resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
-| **{ENDPOINT_URI}** | The endpoint for accessing the summarization API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
--
-```bash
-docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
-mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
- This command: * Runs a *Summarization* container from the container image
Use the host, `http://localhost:5000`, for container APIs.
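For a quick health check, here's a sketch of the standard container probe endpoints; these paths are common across Azure AI containers, so treat their availability on this image as an assumption:

```bash
# Sketch: probe the running container on the host.
curl http://localhost:5000/ready    # reports whether the container can accept queries
curl http://localhost:5000/status   # validates the API key without running a query
```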
## Run the container disconnected from the internet ## Stop the container
ai-services Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/cost-management.md
# Azure AI Metrics Advisor cost management + Azure AI Metrics Advisor monitors the performance of your organization's growth engines, including sales revenue and manufacturing operations. Quickly identify and fix problems through a powerful combination of monitoring in near-real time, adapting models to your scenario, and offering granular analysis with diagnostics and alerting. You will only be charged for the time series that are analyzed by the service. There's no up-front commitment or minimum fee. > [!NOTE]
ai-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/data-feeds-from-different-sources.md
# How-to: Connect different data sources + Use this article to find the settings and requirements for connecting different types of data sources to Azure AI Metrics Advisor. To learn about using your data with Metrics Advisor, see [Onboard your data](how-tos/onboard-your-data.md). ## Supported authentication types
ai-services Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/encryption.md
# Metrics Advisor service encryption of data at rest + The Metrics Advisor service automatically encrypts your data when it is persisted to the cloud. This encryption protects your data and helps you to meet your organizational security and compliance commitments. [!INCLUDE [cognitive-services-about-encryption](../../ai-services/includes/cognitive-services-about-encryption.md)]
ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/glossary.md
# Metrics Advisor glossary of common vocabulary and concepts + This document explains the technical terms used in Metrics Advisor. Use this article to learn about common concepts and objects you might encounter when using the service. ## Data feed
ai-services Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/alerts.md
# How-to: Configure alerts and get notifications using a hook + After an anomaly is detected by Metrics Advisor, an alert notification will be triggered based on alert settings, using a hook. An alert setting can be used with multiple detection configurations; various parameters are available to customize your alert rule. ## Create a hook
ai-services Anomaly Feedback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/anomaly-feedback.md
# Provide anomaly feedback + User feedback is one of the most important methods to discover defects within the anomaly detection system. Here we provide a way for users to mark incorrect detection results directly on a time series, and apply the feedback immediately. In this way, a user can teach the anomaly detection system how to do anomaly detection for a specific time series through active interactions. > [!NOTE]
ai-services Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/configure-metrics.md
# Configure metrics and fine tune detection configuration + Use this article to start configuring your Metrics Advisor instance using the web portal and fine-tune the anomaly detection results. ## Metrics
ai-services Credential Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/credential-entity.md
# How-to: Create a credential entity + When onboarding a data feed, you should select an authentication type. Some authentication types, like *Azure SQL Connection String* and *Service Principal*, need a credential entity to store credential-related information so that your credentials are managed securely. This article explains how to create a credential entity for different credential types in Metrics Advisor.
ai-services Diagnose An Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/diagnose-an-incident.md
# Diagnose an incident using Metrics Advisor + ## What is an incident? When there are anomalies detected on multiple time series within one metric at a particular timestamp, Metrics Advisor will automatically group anomalies that **share the same root cause** into one incident. An incident usually indicates a real issue; Metrics Advisor performs analysis on top of it and provides automatic root cause analysis insights.
ai-services Further Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/further-analysis.md
# Further analyze an incident and evaluate impact + ## Metrics drill down by dimensions When you're viewing incident information, you may need to get more detailed information, for example, for different dimensions, and timestamps. If your data has one or more dimensions, you can use the drill down function to get a more detailed view.
ai-services Manage Data Feeds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/manage-data-feeds.md
# How to: Manage your data feeds + This article guides you through managing your onboarded data feeds in Metrics Advisor. ## Edit a data feed
ai-services Metrics Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/metrics-graph.md
# How-to: Build a metrics graph to analyze related metrics + Each time series in Metrics Advisor is monitored separately by a model that learns from historical data to predict future trends. Anomalies will be detected if any data point falls out of the historical pattern. In some cases, however, several metrics may relate to each other, and anomalies need to be analyzed across multiple metrics. **Metrics graph** is just the tool that helps with this. For example, if you have several metrics that monitor your business from different perspectives, anomaly detection will be applied respectively. However, in the real business case, anomalies detected on multiple metrics may have a relation with each other; discovering those relations and analyzing root causes based on them would be helpful when addressing real issues. The metrics graph helps automatically correlate anomalies detected on related metrics to accelerate the troubleshooting process.
ai-services Onboard Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/onboard-your-data.md
# How-to: Onboard your metric data to Metrics Advisor + Use this article to learn about onboarding your data to Metrics Advisor. ## Data schema requirements and configuration
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/overview.md
# What is Azure AI Metrics Advisor? + [!INCLUDE [Azure AI services rebrand](../includes/rebrand-note.md)] Metrics Advisor is a part of [Azure AI services](../../ai-services/what-are-ai-services.md) that uses AI to perform data monitoring and anomaly detection in time series data. The service automates the process of applying models to your data, and provides a set of APIs and a web-based workspace for data ingestion, anomaly detection, and diagnostics - without needing to know machine learning. Developers can build AIOps, predictive maintenance, and business monitoring applications on top of the service. Use Metrics Advisor to:
ai-services Rest Api And Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/quickstarts/rest-api-and-client-library.md
# Quickstart: Use the client libraries or REST APIs to customize your solution + Get started with the Metrics Advisor REST API or client libraries. Follow these steps to install the package and try out the example code for basic tasks. Use Metrics Advisor to perform:
ai-services Web Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/quickstarts/web-portal.md
# Quickstart: Monitor your first metric by using the web portal + When you provision an instance of Azure AI Metrics Advisor, you can use the APIs and web-based workspace to interact with the service. The web-based workspace can be used as a straightforward way to quickly get started with the service. It also provides a visual way to configure settings, customize your model, and perform root cause analysis. ## Prerequisites
ai-services Enable Anomaly Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/tutorials/enable-anomaly-notification.md
Last updated 05/20/2021
# Tutorial: Enable anomaly notification in Metrics Advisor + <!-- 2. Introductory paragraph Required. Lead with a light intro that describes, in customer-friendly language, what the customer will learn, or do, or accomplish. Answer the fundamental "why
ai-services Write A Valid Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/tutorials/write-a-valid-query.md
verb.
# Tutorial: Write a valid query to onboard metrics data + <!-- 2. Introductory paragraph Required. Lead with a light intro that describes, in customer-friendly language, what the customer will learn, or do, or accomplish. Answer the fundamental "why
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/whats-new.md
# Metrics Advisor: what's new in the docs + Welcome! This page covers what's new in the Metrics Advisor docs. Check back every month for information on service changes, doc additions and updates this month. ## December 2022
ai-services Whisper Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whisper-quickstart.md
description: Use the Azure OpenAI Whisper model for speech to text. --+
ai-services Concept Active Inactive Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-active-inactive-events.md
Last updated 02/20/2020
# Defer event activation + Deferred activation of events allows you to create personalized websites or mailing campaigns, considering that the user may never actually see the page or open the email. In these scenarios, the application might need to call Rank before it even knows if the result will be used or displayed to the user at all. If the content is never shown to the user, no default Reward (typically zero) should be assumed for it to learn from. Deferred Activation allows you to use the results of a Rank call at one point in time, and decide if the Event should be learned from later on, or elsewhere in your code.
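A minimal sketch of that flow against the Rank and Activate endpoints; `{ENDPOINT}`, `{API_KEY}`, the event ID, and the feature payloads are all placeholders:

```bash
# Sketch: rank with deferred activation, then activate the event only if the
# content was actually shown to the user.
curl -X POST "{ENDPOINT}/personalizer/v1.0/rank" \
  -H "Ocp-Apim-Subscription-Key: {API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
        "eventId": "event-001",
        "deferActivation": true,
        "contextFeatures": [ { "timeOfDay": "morning" } ],
        "actions": [
          { "id": "article-a", "features": [ { "topic": "sports" } ] },
          { "id": "article-b", "features": [ { "topic": "politics" } ] }
        ]
      }'

# Later, once the result was displayed to the user:
curl -X POST "{ENDPOINT}/personalizer/v1.0/events/event-001/activate" \
  -H "Ocp-Apim-Subscription-Key: {API_KEY}" \
  -H "Content-Length: 0"
```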
ai-services Concept Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-active-learning.md
Last updated 02/20/2020
# Learning policy and settings + Learning settings determine the *hyperparameters* of the model training. Two models trained on the same data but with different learning settings will end up different. [Learning policy and settings](how-to-settings.md#configure-rewards-for-the-feedback-loop) are set on your Personalizer resource in the Azure portal.
ai-services Concept Apprentice Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-apprentice-mode.md
Last updated 07/26/2022
# Use Apprentice mode to train Personalizer without affecting your existing application + When you deploy a new Personalizer resource, it's initialized with an untrained, or blank, model. That is, it hasn't learned from any data and therefore won't perform well in practice. This is known as the "cold start" problem and is resolved over time by training the model with real data from your production environment. **Apprentice mode** is a learning behavior that helps mitigate the "cold start" problem, and allows you to gain confidence in the model _before_ it makes decisions in production, all without requiring any code change. <!--
ai-services Concept Auto Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-auto-optimization.md
Last updated 03/08/2021
# Personalizer Auto-Optimize (Preview) ## Introduction Personalizer automatic optimization saves you manual effort in keeping a Personalizer loop at its best machine learning performance, by automatically searching for improved Learning Settings used to train your models and applying them. Personalizer has strict criteria to apply new Learning Settings to ensure improvements are unlikely to introduce loss in rewards.
ai-services Concept Multi Slot Personalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-multi-slot-personalization.md
# Multi-slot personalization (Preview) + Multi-slot personalization (Preview) allows you to target content in web layouts, carousels, and lists where more than one action (such as a product or piece of content) is shown to your users. With Personalizer multi-slot APIs, you can have the AI models in Personalizer learn what user contexts and products drive certain behaviors, considering and learning from the placement in your user interface. For example, Personalizer may learn that certain products or content drive more clicks as a sidebar or a footer than as a main highlight on a page. In this article, you'll learn why multi-slot personalization improves results, how to enable it, and when to use it. This article assumes that you are familiar with the Personalizer APIs like `Rank` and `Reward`, and have a conceptual understanding of how you use it in your application. If you aren't familiar with Personalizer and how it works, review the following before you continue:
ai-services Concept Rewards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-rewards.md
# Reward scores indicate success of personalization + The reward score indicates how well the personalization choice, [RewardActionID](/rest/api/personalizer/1.0/rank/rank#response), resulted for the user. The value of the reward score is determined by your business logic, based on observations of user behavior. Personalizer trains its machine learning models by evaluating the rewards.
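A minimal sketch of reporting a reward for a previously ranked event, assuming `{ENDPOINT}` and `{API_KEY}` placeholders:

```bash
# Sketch: send a reward score (0 to 1) computed by your own business logic
# for the event returned by an earlier Rank call.
curl -X POST "{ENDPOINT}/personalizer/v1.0/events/event-001/reward" \
  -H "Ocp-Apim-Subscription-Key: {API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{ "value": 1.0 }'
```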
ai-services Concepts Exploration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-exploration.md
Last updated 08/28/2022
# Exploration + With exploration, Personalizer is able to continuously deliver good results, even as user behavior changes. When Personalizer receives a Rank call, it returns a RewardActionID that either:
ai-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-features.md
Last updated 12/28/2022
# Context and actions + Personalizer works by learning what your application should show to users in a given context. Context and actions are the two most important pieces of information that you pass into Personalizer. The **context** represents the information you have about the current user or the state of your system, and the **actions** are the options to be chosen from. ## Context
ai-services Concepts Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-offline-evaluation.md
Last updated 02/20/2020
# Offline evaluation + Offline evaluation is a method that allows you to test and assess the effectiveness of the Personalizer Service without changing your code or affecting user experience. Offline evaluation uses past data, sent from your application to the Rank and Reward APIs, to compare how different ranks have performed. Offline evaluation is performed on a date range. The range can finish as late as the current time. The beginning of the range can't be more than the number of days specified for [data retention](how-to-settings.md).
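A hedged sketch of starting an offline evaluation programmatically; the field names follow the public Evaluations API but should be checked against the current reference:

```bash
# Sketch: kick off an offline evaluation over a logged date range.
# {ENDPOINT} and {API_KEY} are placeholders.
curl -X POST "{ENDPOINT}/personalizer/v1.0/evaluations" \
  -H "Ocp-Apim-Subscription-Key: {API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "q3-baseline-comparison",
        "startTime": "2023-09-01T00:00:00Z",
        "endTime": "2023-09-30T00:00:00Z",
        "enableOfflineExperimentation": true
      }'
```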
ai-services Concepts Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-reinforcement-learning.md
Last updated 05/07/2019
# What is Reinforcement Learning? + Reinforcement Learning is an approach to machine learning that learns behaviors by getting feedback from its use. Reinforcement Learning works by:
ai-services Concepts Scalability Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-scalability-performance.md
Last updated 10/24/2019
# Scalability and Performance + High-performance and high-traffic websites and applications have two main factors to consider with Personalizer for scalability and performance: * Keeping low latency when making Rank API calls
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/encrypt-data-at-rest.md
# Encryption of data at rest in Personalizer + Personalizer is a service in Azure AI services that uses a machine learning model to provide apps with user-tailored content. When Personalizer persists data to the cloud, it encrypts that data. This encryption protects your data and helps you meet organizational security and compliance commitments. [!INCLUDE [cognitive-services-about-encryption](../includes/cognitive-services-about-encryption.md)]
ai-services How Personalizer Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-personalizer-works.md
Last updated 02/18/2020
# How Personalizer works + The Personalizer resource, your _learning loop_, uses machine learning to build the model that predicts the top action for your content. The model is trained exclusively on the data that you send to it with the **Rank** and **Reward** calls. Each loop is completely independent of the others. ## Rank and Reward APIs impact the model
ai-services How To Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-create-resource.md
# Create a Personalizer resource + A Personalizer resource is the same thing as a Personalizer learning loop. A single resource, or learning loop, is created for each subject domain or content area you have. Do not use multiple content areas in the same loop because this will confuse the learning loop and provide poor predictions. If you want Personalizer to select the best content for more than one content area of a web page, use a different learning loop for each.
ai-services How To Feature Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-feature-evaluation.md
Last updated 09/22/2022
# Evaluate feature importances + You can assess how important each feature was to Personalizer's machine learning model by conducting a _feature evaluation_ on your historical log data. Feature evaluations are useful to: * Understand which features are most or least important to the model.
ai-services How To Inference Explainability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-inference-explainability.md
Last updated 09/20/2022
# Inference Explainability + Personalizer can help you to understand which features of a chosen action are the most and least influential to the model during inference. When enabled, inference explainability includes feature scores from the underlying model in the Rank API response, so your application receives this information at the time of inference. Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to analyze whether your model is exhibiting bias toward or against certain contextual settings, users, and actions.
ai-services How To Learning Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-learning-behavior.md
Last updated 07/26/2022
# Configure the Personalizer learning behavior + [Apprentice mode](concept-apprentice-mode.md) gives you trust and confidence in the Personalizer service and its machine learning capabilities, and provides assurance that the service is sent information that can be learned from – without risking online traffic. ## Configure Apprentice mode
ai-services How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-manage-model.md
Last updated 02/20/2020
# How to manage model and learning settings + The machine-learned model and learning settings can be exported for backup in your own source control system. ## Export the Personalizer model
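As a minimal sketch, the model can also be exported over REST, assuming the v1.0 `model` endpoint; the downloaded file can then be stored in your own source control or backup system.

```powershell
# Sketch: download the current model for backup (endpoint and key are placeholders).
$endpoint = "https://<your-resource>.cognitiveservices.azure.com"
$headers  = @{ "Ocp-Apim-Subscription-Key" = "<your-resource-key>" }

Invoke-RestMethod -Method Get `
    -Uri "$endpoint/personalizer/v1.0/model" `
    -Headers $headers `
    -OutFile ".\personalizer-model.bin"
```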
ai-services How To Multi Slot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-multi-slot.md
# Get started with multi-slot for Azure AI Personalizer + Multi-slot personalization (Preview) allows you to target content in web layouts, carousels, and lists where more than one action (such as a product or piece of content) is shown to your users. With Personalizer multi-slot APIs, you can have the AI models in Personalizer learn what user contexts and products drive certain behaviors, considering and learning from the placement in your user interface. For example, Personalizer may learn that certain products or content drive more clicks as a sidebar or a footer than as a main highlight on a page. In this guide, you'll learn how to use the Personalizer multi-slot APIs.
ai-services How To Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-offline-evaluation.md
Last updated 02/20/2020
# Analyze your learning loop with an offline evaluation + Learn how to create an offline evaluation and interpret the results. Offline Evaluations allow you to measure how effective Personalizer is compared to your application's default behavior over a period of logged (historical) data, and assess how well other model configuration settings may perform for your model.
ai-services How To Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-settings.md
Last updated 04/29/2020
# Configure Personalizer learning loop + Service configuration includes how the service treats rewards, how often the service explores, how often the model is retrained, and how much data is stored. Configure the learning loop on the **Configuration** page, in the Azure portal for that Personalizer resource.
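As a sketch, the same settings can be read and updated over REST. The `configurations/service` path and the property names shown are assumptions based on the v1.0 service configuration object; confirm them in the REST reference before depending on them.

```powershell
# Sketch: inspect and update the loop's service configuration (assumed v1.0 surface).
$endpoint = "https://<your-resource>.cognitiveservices.azure.com"
$headers  = @{ "Ocp-Apim-Subscription-Key" = "<your-resource-key>" }

$config = Invoke-RestMethod -Method Get `
    -Uri "$endpoint/personalizer/v1.0/configurations/service" -Headers $headers
$config    # current reward, exploration, retraining, and retention settings

# Example change (assumed property names): explore 20% of the time, keep logs 90 days.
$config.explorationPercentage = 0.2
$config.logRetentionDays      = 90
Invoke-RestMethod -Method Put `
    -Uri "$endpoint/personalizer/v1.0/configurations/service" `
    -Headers $headers -ContentType "application/json" `
    -Body ($config | ConvertTo-Json)
```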
ai-services How To Thick Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-thick-client.md
Last updated 09/06/2022
# Get started with the local inference SDK for Azure AI Personalizer + The Personalizer local inference SDK (Preview) downloads the Personalizer model locally, and thus significantly reduces the latency of Rank calls by eliminating network calls. Every minute the client will download the most recent model in the background and use it for inference. In this guide, you'll learn how to use the Personalizer local inference SDK.
ai-services Quickstart Personalizer Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/quickstart-personalizer-sdk.md
zone_pivot_groups: programming-languages-set-six
# Quickstart: Personalizer client library + Get started with the Azure AI Personalizer client libraries to set up a basic learning loop. A learning loop is a system of decisions and feedback: an application requests a decision ranking from the service, then it uses the top-ranked choice and calculates a reward score from the outcome. It returns the reward score to the service. Over time, Personalizer uses AI algorithms to make better decisions for any given context. Follow these steps to set up a sample application. ## Example scenario
ai-services Responsible Characteristics And Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/responsible-characteristics-and-limitations.md
# Characteristics and limitations of Personalizer + Azure AI Personalizer can work in many scenarios. To understand where you can apply Personalizer, make sure the requirements of your scenario meet the [expectations for Personalizer to work](where-can-you-use-personalizer.md#expectations-required-to-use-personalizer). To understand whether Personalizer should be used and how to integrate it into your applications, see [Use Cases for Personalizer](responsible-use-cases.md). You'll find criteria and guidance on choosing use cases, designing features, and reward functions for your uses of Personalizer. Before you read this article, it's helpful to understand some background information about [how Personalizer works](how-personalizer-works.md).
ai-services Responsible Data And Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/responsible-data-and-privacy.md
# Data and privacy for Personalizer + This article provides information about what data Azure AI Personalizer uses to work, how it processes that data, and how you can control that data. It assumes basic familiarity with [what Personalizer is](what-is-personalizer.md) and [how Personalizer works](how-personalizer-works.md). Specific terms can be found in Terminology.
ai-services Responsible Guidance Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/responsible-guidance-integration.md
# Guidance for integration and responsible use of Personalizer + Microsoft works to help customers responsibly develop and deploy solutions by using Azure AI Personalizer. Our principled approach upholds personal agency and dignity by considering the AI system's: - Fairness, reliability, and safety.
ai-services Responsible Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/responsible-use-cases.md
# Use cases for Personalizer + ## What is a Transparency Note? An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, its capabilities and limitations, and how to achieve the best performance.
ai-services Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/terminology.md
Last updated 09/16/2022
# Personalizer terminology + Personalizer uses terminology from reinforcement learning. These terms are used in the Azure portal and the APIs. ## Conceptual terminology
ai-services Tutorial Use Azure Notebook Generate Loop Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/tutorial-use-azure-notebook-generate-loop-data.md
# Tutorial: Use Personalizer in Azure Notebook + This tutorial runs a Personalizer loop in an Azure Notebook, demonstrating the end-to-end life cycle of a Personalizer loop. The loop suggests which type of coffee a customer should order. The users and their preferences are stored in a user dataset. Information about the coffee is stored in a coffee dataset.
ai-services Tutorial Use Personalizer Chat Bot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/tutorial-use-personalizer-chat-bot.md
# Tutorial: Use Personalizer in .NET chat bot + Use a C# .NET chat bot with a Personalizer loop to provide the correct content to a user. This chat bot suggests a specific coffee or tea to a user. The user can accept or reject that suggestion. This gives Personalizer information to help make the next suggestion more appropriate. **In this tutorial, you learn how to:**
ai-services Tutorial Use Personalizer Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/tutorial-use-personalizer-web-app.md
# Tutorial: Add Personalizer to a .NET web app + Customize a C# .NET web app with a Personalizer loop to provide the correct content to a user based on actions (with features) and context features. **In this tutorial, you learn how to:**
ai-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/what-is-personalizer.md
keywords: personalizer, Azure AI Personalizer, machine learning
# What is Personalizer? + [!INCLUDE [Azure AI services rebrand](../includes/rebrand-note.md)] Azure AI Personalizer is an AI service that helps your applications make smarter decisions at scale using **reinforcement learning**. Personalizer processes information about the state of your application, scenario, and/or users (*contexts*), and a set of possible decisions and related attributes (*actions*) to determine the best decision to make. Feedback from your application (*rewards*) is sent to Personalizer to learn how to improve its decision-making ability in near-real time.
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/whats-new.md
Last updated 05/28/2021
# What's new in Personalizer + Learn what's new in Azure AI Personalizer. These items may include release notes, videos, blog posts, and other types of information. Bookmark this page to keep up-to-date with the service. ## Release notes
ai-services Where Can You Use Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/where-can-you-use-personalizer.md
Last updated 02/18/2020
# Where and how to use Personalizer + Use Personalizer in any situation where your application needs to select the correct action (content) to display - in order to make the experience better, achieve better business results, or improve productivity. Personalizer uses reinforcement learning to select which action (content) to show the user. The selection can vary drastically depending on the quantity, quality, and distribution of data sent to the service.
ai-services What Are Ai Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/what-are-ai-services.md
Select a service from the table below and learn how it can help you meet your de
| Service | Description | | | |
-| ![Anomaly Detector icon](media/service-icons/anomaly-detector.svg) [Anomaly Detector](./Anomaly-Detector/index.yml) | Identify potential problems early on |
+| ![Anomaly Detector icon](media/service-icons/anomaly-detector.svg) [Anomaly Detector](./Anomaly-Detector/index.yml) (retired) | Identify potential problems early on |
| ![Azure Cognitive Search icon](media/service-icons/cognitive-search.svg) [Azure Cognitive Search](../search/index.yml) | Bring AI-powered cloud search to your mobile and web apps | | ![Azure OpenAI Service icon](media/service-icons/azure.svg) [Azure OpenAI](./openai/index.yml) | Perform a wide variety of natural language tasks | | ![Bot service icon](media/service-icons/bot-services.svg) [Bot Service](/composer/) | Create bots and connect them across channels |
Select a service from the table below and learn how it can help you meet your de
| ![Immersive Reader icon](media/service-icons/immersive-reader.svg) [Immersive Reader](./immersive-reader/index.yml) | Help users read and comprehend text | | ![Language icon](media/service-icons/language.svg) [Language](./language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities | | ![Language Understanding icon](media/service-icons/luis.svg) [Language understanding](./luis/index.yml) (retired) | Understand natural language in your apps |
-| ![Metrics Advisor icon](media/service-icons/metrics-advisor.svg) [Metrics Advisor](./metrics-advisor/index.yml) | An AI service that detects unwanted contents |
-| ![Personalizer icon](media/service-icons/personalizer.svg) [Personalizer](./personalizer/index.yml) | Create rich, personalized experiences for each user |
+| ![Metrics Advisor icon](media/service-icons/metrics-advisor.svg) [Metrics Advisor](./metrics-advisor/index.yml) (retired) | An AI service that detects unwanted contents |
+| ![Personalizer icon](media/service-icons/personalizer.svg) [Personalizer](./personalizer/index.yml) (retired) | Create rich, personalized experiences for each user |
| ![QnA Maker icon](media/service-icons/luis.svg) [QnA maker](./qnamaker/index.yml) (retired) | Distill information into easy-to-navigate questions and answers | | ![Speech icon](media/service-icons/speech.svg) [Speech](./speech-service/index.yml) | Speech to text, text to speech, translation and speaker recognition | | ![Translator icon](media/service-icons/translator.svg) [Translator](./translator/index.yml) | Translate more than 100 languages and dialects |
azure-arc Plan Evaluate On Azure Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-evaluate-on-azure-virtual-machine.md
When Azure Arc-enabled servers is configured on the VM, you see two representati
## Reconfigure Azure VM
+> [!NOTE]
+> For Windows, set the following environment variable to override the check that blocks installing Arc on an Azure VM.
+> ```powershell
+> [System.Environment]::SetEnvironmentVariable("MSFT_ARC_TEST",'true', [System.EnvironmentVariableTarget]::Machine)
+> ```
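A quick way to confirm the machine-scoped variable took effect:

```powershell
# Read back the variable set above; this should print 'true'.
[System.Environment]::GetEnvironmentVariable("MSFT_ARC_TEST", [System.EnvironmentVariableTarget]::Machine)
```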
+ 1. Remove any VM extensions on the Azure VM. In the Azure portal, navigate to your Azure VM resource and from the left-hand pane, select **Extensions**. If there are any extensions installed on the VM, select each extension individually and then select **Uninstall**. Wait for all extensions to finish uninstalling before proceeding to step 2.
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md
If you want to assign temporary access and remove access for before the SAS toke
there are two options to revoke access for SAS token(s):
-1. Regenerate the key that was used by the SAS token, the primaryKey or secondaryKey of the map account.
-1. Remove the role assignment for the Managed Identity on the associated map account.
+1. Regenerate the key that was used by the SAS token, either the primaryKey or secondaryKey of the map account.
+1. Remove the role assignment for the managed identity on the associated map account.
> [!WARNING] > Deleting a managed identity used by a SAS token or revoking access control of the managed identity will cause instances of your application using the SAS token and managed identity to intentionally return `401 Unauthorized` or `403 Forbidden` from Azure Maps REST APIs which will create application disruption.
there are two options to revoke access for SAS token(s):
> To avoid disruption: > > 1. Add a second managed identity to the Map Account and grant the new managed identity the correct role assignment.
-> 1. Create a SAS token using `secondaryKey` as the `signingKey` and distribute the new SAS token to the application.
+> 1. Create a SAS token using `secondaryKey`, or a different managed identity than the previous one, as the `signingKey` and distribute the new SAS token to the application.
> 1. Regenerate the primary key, remove the managed identity from the account, and remove the role assignment for the managed identity. ### Create SAS tokens
SAS token parameters:
| Parameter Name | Example Value | Description | | : | :-- | :- |
-| signingKey | `primaryKey` | Required, the string enum value for the signingKey either `primaryKey` or `secondaryKey` is used to create the signature of the SAS. |
+| signingKey | `primaryKey` | Required. The string enum value for the signingKey, either `primaryKey`, `secondaryKey`, or managed identity, used to create the signature of the SAS. |
| principalId | `<GUID>` | Required, the principalId is the Object (principal) ID of the user-assigned managed identity attached to the map account. | | regions | `[ "eastus", "westus2", "westcentralus" ]` | Optional, the default value is `null`. The regions control which regions the SAS token can be used in with the Azure Maps REST [data-plane] API. Omitting the regions parameter allows the SAS token to be used without any constraints. When used in combination with an Azure Maps data-plane geographic endpoint like `us.atlas.microsoft.com` and `eu.atlas.microsoft.com`, it allows the application to control usage within the specified geography and prevents usage in other geographies. | | maxRatePerSecond | 500 | Required, the approximate maximum number of requests per second that the SAS token is granted. Once the limit is reached, more throughput is rate limited with HTTP status code `429 (TooManyRequests)`. |
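To illustrate these parameters together, here's a PowerShell sketch that requests a SAS token through the management plane's `listSas` action with `Invoke-AzRestMethod`. The api-version and the `start`/`expiry` fields are assumptions; check the Azure Maps management REST reference for the exact request shape.

```powershell
# Sketch: request a SAS token for an Azure Maps account (path pieces are placeholders).
$accountPath = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
               "/providers/Microsoft.Maps/accounts/<account-name>"

$sasRequest = @{
    signingKey       = "primaryKey"                      # or secondaryKey / managed identity
    principalId      = "<managed-identity-object-id>"    # object ID of the user-assigned identity
    regions          = @("eastus", "westus2")            # optional region constraint
    maxRatePerSecond = 500
    start            = (Get-Date).ToUniversalTime().ToString("o")               # assumed field
    expiry           = (Get-Date).AddDays(1).ToUniversalTime().ToString("o")    # assumed field
}

Invoke-AzRestMethod -Method POST `
    -Path "$accountPath/listSas?api-version=2023-06-01" `
    -Payload ($sasRequest | ConvertTo-Json)
```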
azure-monitor Agent Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-data-sources.md
The data that Azure Monitor collects from virtual machines with the legacy [Log Analytics](./log-analytics-agent.md) agent is defined by the data sources that you configure in the [Log Analytics workspace](../logs/data-platform-logs.md). Each data source creates records of a particular type. Each type has its own set of properties.
-![Diagram that shows log data collection.](media/agent-data-sources/overview.png)
[!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]
To configure data sources for Log Analytics agents, go to the **Log Analytics wo
Any configuration is delivered to all agents connected to that workspace. You can't exclude any connected agents from this configuration.
-[![Screenshot that shows configuring Windows events.](media/agent-data-sources/configure-events.png)](media/agent-data-sources/configure-events.png#lightbox)
## Data collection
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
Starting from agent version 1.13.27, the Linux agent will support both Python 2
If you're using an older version of the agent, you must have the virtual machine use Python 2 by default. If your virtual machine is using a distro that doesn't include Python 2 by default, then you must install it. The following sample commands will install Python 2 on different distros:
+- **Red Hat, CentOS, Oracle**:
```bash sudo yum install -y python2
For the network requirements for the Linux agent, see [Log Analytics agent overv
Regardless of the installation method used, you need the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Under the **Settings** section, select **Agents**.
-[![Screenshot that shows workspace details.](media/log-analytics-agent/workspace-details.png)](media/log-analytics-agent/workspace-details.png#lightbox)
+
+>[!NOTE]
+>While regenerating the [Log Analytics Workspace shared keys](/rest/api/loganalytics/workspace-shared-keys) is possible, the intention for this is **not** to immediately restrict access to any agents currently using those keys. Agents use the key to generate a certificate that expires after three months. Regenerating the shared keys will only prevent agents from renewing their certificates, not continuing to use those certificates until they expire.
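For reference, here's a sketch of reading the current keys and then rotating one through the management API. The `regenerateSharedKey` path, api-version, and body shape are assumptions to verify against the Workspace Shared Keys REST reference linked above.

```powershell
# Sketch: read the workspace's current shared keys, then regenerate the primary key.
Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName "my-rg" -Name "my-workspace"

$workspacePath = "/subscriptions/<subscription-id>/resourceGroups/my-rg" +
                 "/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
Invoke-AzRestMethod -Method POST `
    -Path "$workspacePath/regenerateSharedKey?api-version=2020-08-01" `
    -Payload '{"keyName": "primary"}'   # assumed body shape
```

Remember that, per the note above, existing agents keep working until their certificates expire.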
## Agent install package
To extract the agent packages from the bundle without installing the agent, run:
sudo sh ./omsagent-*.universal.x64.sh --extract ``` + ## Upgrade from a previous release
The default cache size is 10 MB but can be modified in the [omsagent.conf file](
- Review [Managing and maintaining the Log Analytics agent for Windows and Linux](agent-manage.md) to learn about how to reconfigure, upgrade, or remove the agent from the virtual machine. - Review [Troubleshooting the Linux agent](agent-linux-troubleshoot.md) if you encounter issues while you're installing or managing the agent. - Review [Agent data sources](./agent-data-sources.md) to learn about data source configuration.+
azure-monitor Agent Windows Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows-troubleshoot.md
There are several ways you can verify if the agent is successfully communicating
- Another method to identify a connectivity issue is by running the **TestCloudConnectivity** tool. The tool is installed by default with the agent in the folder *%SystemRoot%\Program Files\Microsoft Monitoring Agent\Agent*. From an elevated command prompt, go to the folder and run the tool. The tool returns the results and highlights where the test failed. For example, perhaps it was related to a particular port or URL that was blocked.
- ![Screenshot that shows TestCloudConnection tool execution results.](./media/agent-windows-troubleshoot/output-testcloudconnection-tool-01.png)
+ :::image type="content" source="./media/agent-windows-troubleshoot/output-testcloudconnection-tool-01.png" lightbox="./media/agent-windows-troubleshoot/output-testcloudconnection-tool-01.png" alt-text="Screenshot that shows TestCloudConnection tool execution results.":::
- Filter the *Operations Manager* event log by **Event sources** *Health Service Modules*, *HealthService*, and *Service Connector* and filter by **Event Level** *Warning* and *Error* to confirm whether it has written events from the following table. If it has, review the resolution steps included for each possible event.
If the query returns results, you need to determine if a particular data type is
1. Open an elevated command prompt on the computer and restart the agent service by entering `net stop healthservice && net start healthservice`. 1. Open the *Operations Manager* event log and search for **event IDs** *7023, 7024, 7025, 7028*, and *1210* from **Event source** *HealthService*. These events indicate the agent is successfully receiving configuration from Azure Monitor and they're actively monitoring the computer. The event description for event ID 1210 will also specify on the last line all of the solutions and Insights that are included in the scope of monitoring on the agent.
- ![Screenshot that shows an Event ID 1210 description.](./media/agent-windows-troubleshoot/event-id-1210-healthservice-01.png)
+ :::image type="content" source="./media/agent-windows-troubleshoot/event-id-1210-healthservice-01.png" lightbox="./media/agent-windows-troubleshoot/event-id-1210-healthservice-01.png" alt-text="Screenshot that shows an Event ID 1210 description.":::
1. Wait several minutes. If you don't see the expected data in the query results or visualization (depending on whether you're viewing the data from a solution or Insight), search the *Operations Manager* event log for **Event sources** *HealthService* and *Health Service Modules*. Filter by **Event Level** *Warning* and *Error* to confirm whether it has written events from the following table.
azure-monitor Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows.md
Last updated 06/01/2023
- # Install Log Analytics agent on Windows computers
Configure .NET Framework 4.6 or later to support secure cryptography because by
Regardless of the installation method used, you'll require the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Then in the **Settings** section, select **Agents**.
-[![Screenshot that shows workspace details.](media/log-analytics-agent/workspace-details.png)](media/log-analytics-agent/workspace-details.png#lightbox)
> [!NOTE] > You can't configure the agent to report to more than one workspace during initial setup. [Add or remove a workspace](agent-manage.md#add-or-remove-a-workspace) after installation by updating the settings from Control Panel or PowerShell.
+>[!NOTE]
+>While regenerating the [Log Analytics Workspace shared keys](/rest/api/loganalytics/workspace-shared-keys) is possible, the intention for this is **not** to immediately restrict access to any agents currently using those keys. Agents use the key to generate a certificate that expires after three months. Regenerating the shared keys will only prevent agents from renewing their certificates, not continuing to use those certificates until they expire.
+ ## Install the agent [!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]
The following steps install and configure the Log Analytics agent in Azure and A
6. On the **Azure Log Analytics** page, perform the following: 1. Paste the **Workspace ID** and **Workspace Key (Primary Key)** that you copied earlier. If the computer should report to a Log Analytics workspace in Azure Government cloud, select **Azure US Government** from the **Azure Cloud** drop-down list. 2. If the computer needs to communicate through a proxy server to the Log Analytics service, click **Advanced** and provide the URL and port number of the proxy server. If your proxy server requires authentication, type the username and password to authenticate with the proxy server and then click **Next**.
-7. Click **Next** once you have completed providing the necessary configuration settings.<br><br> ![paste Workspace ID and Primary Key](media/agent-windows/log-analytics-mma-setup-laworkspace.png)<br><br>
+7. Click **Next** once you have completed providing the necessary configuration settings.<br><br> :::image type="content" source="media/agent-windows/log-analytics-mma-setup-laworkspace.png" lightbox="media/agent-windows/log-analytics-mma-setup-laworkspace.png" alt-text="paste Workspace ID and Primary Key":::<br><br>
8. On the **Ready to Install** page, review your choices and then click **Install**. 9. On the **Configuration completed successfully** page, click **Finish**.
To retrieve the product code from the agent install package directly, you can us
1. [Import the MMAgent.ps1 configuration script](../../automation/automation-dsc-getting-started.md#import-a-configuration-into-azure-automation) into your Automation account. 1. [Assign a Windows computer or node](../../automation/automation-dsc-getting-started.md#enable-an-azure-resource-manager-vm-for-management-with-state-configuration) to the configuration. Within 15 minutes, the node checks its configuration and the agent is pushed to the node. + ## Verify agent connectivity to Azure Monitor After installation of the agent is finished, you can verify that it's successfully connected and reporting in two ways.
-From the computer in **Control Panel**, find the item **Microsoft Monitoring Agent**. Select it, and on the **Azure Log Analytics** tab, the agent should display a message stating *The Microsoft Monitoring Agent has successfully connected to the Microsoft Operations Management Suite service.*<br><br> ![Screenshot that shows the MMA connection status to Log Analytics message.](media/agent-windows/log-analytics-mma-laworkspace-status.png)
+From the computer in **Control Panel**, find the item **Microsoft Monitoring Agent**. Select it, and on the **Azure Log Analytics** tab, the agent should display a message stating *The Microsoft Monitoring Agent has successfully connected to the Microsoft Operations Management Suite service.*<br><br> :::image type="content" source="media/agent-windows/log-analytics-mma-laworkspace-status.png" lightbox="media/agent-windows/log-analytics-mma-laworkspace-status.png" alt-text="Screenshot that shows the MMA connection status to Log Analytics message.":::
You can also perform a log query in the Azure portal:
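As an alternative to the portal, a sketch using the Az.OperationalInsights module (the workspace ID placeholder is the workspace's GUID):

```powershell
# Sketch: confirm the agent is reporting by querying recent heartbeats.
$query = "Heartbeat | where TimeGenerated > ago(30m) | summarize LastHeartbeat = max(TimeGenerated) by Computer"
(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-customer-id>" -Query $query).Results
```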
The default cache size is 50 MB, but it can be configured between a minimum of 5
- Review [Managing and maintaining the Log Analytics agent for Windows and Linux](agent-manage.md) to learn about how to reconfigure, upgrade, or remove the agent from the virtual machine. - Review [Troubleshooting the Windows agent](agent-windows-troubleshoot.md) if you encounter issues while you install or manage the agent.+
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| Category | Area | Azure Monitor Agent | Log Analytics Agent | Diagnostics extension (WAD) | |:|:|:|:|:| | **Environments supported** | | | | |
-| | Azure | X | X | X |
-| | Other cloud (Azure Arc) | X | X | |
-| | On-premises (Azure Arc) | X | X | |
-| | Windows Client OS | X | | |
+| | Azure | ✓ | ✓ | ✓ |
+| | Other cloud (Azure Arc) | ✓ | ✓ | |
+| | On-premises (Azure Arc) | ✓ | ✓ | |
+| | Windows Client OS | ✓ | | |
| **Data collected** | | | | |
-| | Event Logs | X | X | X |
-| | Performance | X | X | X |
-| | File based logs | X | X | X |
-| | IIS logs | X | X | X |
-| | ETW events | | | X |
-| | .NET app logs | | | X |
-| | Crash dumps | | | X |
-| | Agent diagnostics logs | | | X |
+| | Event Logs | ✓ | ✓ | ✓ |
+| | Performance | ✓ | ✓ | ✓ |
+| | File based logs | ✓ | ✓ | ✓ |
+| | IIS logs | ✓ | ✓ | ✓ |
+| | ETW events | | | ✓ |
+| | .NET app logs | | | ✓ |
+| | Crash dumps | | | ✓ |
+| | Agent diagnostics logs | | | ✓ |
| **Data sent to** | | | | |
-| | Azure Monitor Logs | X | X | |
-| | Azure Monitor Metrics<sup>1</sup> | X (Public preview) | | X (Public preview) |
-| | Azure Storage | | | X |
-| | Event Hub | | | X |
+| | Azure Monitor Logs | ✓ | ✓ | |
+| | Azure Monitor Metrics<sup>1</sup> | ✓ (Public preview) | | ✓ (Public preview) |
+| | Azure Storage | | | ✓ |
+| | Event Hubs | | | ✓ |
| **Services and features supported** | | | | |
-| | Microsoft Sentinel | X ([View scope](./azure-monitor-agent-migration.md#migrate-additional-services-and-features)) | X | |
-| | VM Insights | X | X | |
-| | Microsoft Defender for Cloud | X (Public preview) | X | |
-| | Automation Update Management | | X | |
-| | Azure Stack HCI | X | | |
+| | Microsoft Sentinel | ✓ ([View scope](./azure-monitor-agent-migration.md#migrate-additional-services-and-features)) | ✓ | |
+| | VM Insights | ✓ | ✓ | |
+| | Microsoft Defender for Cloud | ✓ (Public preview) | ✓ | |
+| | Automation Update Management | | ✓ | |
+| | Azure Stack HCI | ✓ | | |
| | Update Manager | N/A (Public preview, independent of monitoring agents) | | |
-| | Change Tracking | X (Public preview) | X | |
-| | SQL Best Practices Assessment | X | | |
+| | Change Tracking | ✓ (Public preview) | ✓ | |
+| | SQL Best Practices Assessment | ✓ | | |
### Linux agents | Category | Area | Azure Monitor Agent | Log Analytics Agent | Diagnostics extension (LAD) | Telegraf agent | |:|:|:|:|:|:| | **Environments supported** | | | | | |
-| | Azure | X | X | X | X |
-| | Other cloud (Azure Arc) | X | X | | X |
-| | On-premises (Azure Arc) | X | X | | X |
+| | Azure | ✓ | ✓ | ✓ | ✓ |
+| | Other cloud (Azure Arc) | ✓ | ✓ | | ✓ |
+| | On-premises (Azure Arc) | ✓ | ✓ | | ✓ |
| **Data collected** | | | | | |
-| | Syslog | X | X | X | |
-| | Performance | X | X | X | X |
-| | File based logs | X | | | |
+| | Syslog | ✓ | ✓ | ✓ | |
+| | Performance | ✓ | ✓ | ✓ | ✓ |
+| | File based logs | ✓ | | | |
| **Data sent to** | | | | | |
-| | Azure Monitor Logs | X | X | | |
-| | Azure Monitor Metrics<sup>1</sup> | X (Public preview) | | | X (Public preview) |
-| | Azure Storage | | | X | |
-| | Event Hub | | | X | |
+| | Azure Monitor Logs | ✓ | ✓ | | |
+| | Azure Monitor Metrics<sup>1</sup> | ✓ (Public preview) | | | ✓ (Public preview) |
+| | Azure Storage | | | ✓ | |
+| | Event Hubs | | | ✓ | |
| **Services and features supported** | | | | | |
-| | Microsoft Sentinel | X ([View scope](./azure-monitor-agent-migration.md#migrate-additional-services-and-features)) | X | |
-| | VM Insights | X | X | |
-| | Microsoft Defender for Cloud | X (Public preview) | X | |
-| | Automation Update Management | | X | |
+| | Microsoft Sentinel | ✓ ([View scope](./azure-monitor-agent-migration.md#migrate-additional-services-and-features)) | ✓ | |
+| | VM Insights | ✓ | ✓ | |
+| | Microsoft Defender for Cloud | ✓ (Public preview) | ✓ | |
+| | Automation Update Management | | ✓ | |
| | Update Manager | N/A (Public preview, independent of monitoring agents) | | |
-| | Change Tracking | X (Public preview) | X | |
+| | Change Tracking | ✓ (Public preview) | ✓ | |
<sup>1</sup> To review other limitations of using Azure Monitor Metrics, see [quotas and limits](../essentials/metrics-custom-overview.md#quotas-and-limits). On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher.
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Operating system | Azure Monitor agent | Log Analytics agent (legacy) | Diagnostics extension | |:|::|::|::|
-| Windows Server 2022 | X | X | |
-| Windows Server 2022 Core | X | | |
-| Windows Server 2019 | X | X | X |
-| Windows Server 2019 Core | X | | |
-| Windows Server 2016 | X | X | X |
-| Windows Server 2016 Core | X | | X |
-| Windows Server 2012 R2 | X | X | X |
-| Windows Server 2012 | X | X | X |
-| Windows Server 2008 R2 SP1 | X | X | X |
-| Windows Server 2008 R2 | | | X |
-| Windows Server 2008 SP2 | | X | |
-| Windows 11 Client and Pro | X<sup>2</sup>, <sup>3</sup> | | |
-| Windows 11 Enterprise<br>(including multi-session) | X | | |
-| Windows 10 1803 (RS4) and higher | X<sup>2</sup> | | |
-| Windows 10 Enterprise<br>(including multi-session) and Pro<br>(Server scenarios only) | X | X | X |
-| Windows 8 Enterprise and Pro<br>(Server scenarios only | | X<sup>1</sup>) | |
-| Windows 7 SP1<br>(Server scenarios only) | | X<sup>1</sup>) | |
-| Azure Stack HCI | X | X | |
-| Windows IoT Enterprise | X | | |
+| Windows Server 2022 | ✓ | ✓ | |
+| Windows Server 2022 Core | ✓ | | |
+| Windows Server 2019 | ✓ | ✓ | ✓ |
+| Windows Server 2019 Core | ✓ | | |
+| Windows Server 2016 | ✓ | ✓ | ✓ |
+| Windows Server 2016 Core | ✓ | | ✓ |
+| Windows Server 2012 R2 | ✓ | ✓ | ✓ |
+| Windows Server 2012 | ✓ | ✓ | ✓ |
+| Windows Server 2008 R2 SP1 | ✓ | ✓ | ✓ |
+| Windows Server 2008 R2 | | | ✓ |
+| Windows Server 2008 SP2 | | ✓ | |
+| Windows 11 Client and Pro | ✓<sup>2</sup>, <sup>3</sup> | | |
+| Windows 11 Enterprise<br>(including multi-session) | ✓ | | |
+| Windows 10 1803 (RS4) and higher | ✓<sup>2</sup> | | |
+| Windows 10 Enterprise<br>(including multi-session) and Pro<br>(Server scenarios only) | ✓ | ✓ | ✓ |
+| Windows 8 Enterprise and Pro<br>(Server scenarios only) | | ✓<sup>1</sup> | |
+| Windows 7 SP1<br>(Server scenarios only) | | ✓<sup>1</sup> | |
+| Azure Stack HCI | ✓ | ✓ | |
+| Windows IoT Enterprise | ✓ | | |
<sup>1</sup> Running the OS on server hardware that is always connected, always on.<br> <sup>2</sup> Using the Azure Monitor agent [client installer](./azure-monitor-agent-windows-client.md).<br>
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent (legacy) <sup>1</sup> | Diagnostics extension <sup>2</sup>| |:|::|::|::|
-| AlmaLinux 8 | X<sup>3</sup> | X | |
-| Amazon Linux 2017.09 | | X | |
-| Amazon Linux 2 | X | X | |
-| CentOS Linux 8 | X | X | |
-| CentOS Linux 7 | X<sup>3</sup> | X | X |
-| CBL-Mariner 2.0 | X<sup>3,4</sup> | | |
-| Debian 11 | X<sup>3</sup> | | |
-| Debian 10 | X | X | |
-| Debian 9 | X | X | X |
-| Debian 8 | | X | |
-| OpenSUSE 15 | X | | |
-| Oracle Linux 8 | X | X | |
-| Oracle Linux 7 | X | X | X |
-| Oracle Linux 6.4+ | | | X |
-| Red Hat Enterprise Linux Server 9+ | X | | |
-| Red Hat Enterprise Linux Server 8.6+ | X<sup>3</sup> | X<sup>2</sup> | X<sup>2</sup> |
-| Red Hat Enterprise Linux Server 8.0-8.5 | X | X<sup>2</sup> | X<sup>2</sup> |
-| Red Hat Enterprise Linux Server 7 | X | X | X |
-| Red Hat Enterprise Linux Server 6.7+ | | | X |
-| Rocky Linux 8 | X | X | |
-| SUSE Linux Enterprise Server 15 SP4 | X<sup>3</sup> | | |
-| SUSE Linux Enterprise Server 15 SP3 | X | | |
-| SUSE Linux Enterprise Server 15 SP2 | X | | |
-| SUSE Linux Enterprise Server 15 SP1 | X | X | |
-| SUSE Linux Enterprise Server 15 | X | X | |
-| SUSE Linux Enterprise Server 12 | X | X | X |
-| Ubuntu 22.04 LTS | X | | |
-| Ubuntu 20.04 LTS | X<sup>3</sup> | X | X |
-| Ubuntu 18.04 LTS | X<sup>3</sup> | X | X |
-| Ubuntu 16.04 LTS | X | X | X |
-| Ubuntu 14.04 LTS | | X | X |
+| AlmaLinux 8 | ✓<sup>3</sup> | ✓ | |
+| Amazon Linux 2017.09 | | ✓ | |
+| Amazon Linux 2 | ✓ | ✓ | |
+| CentOS Linux 8 | ✓ | ✓ | |
+| CentOS Linux 7 | ✓<sup>3</sup> | ✓ | ✓ |
+| CBL-Mariner 2.0 | ✓<sup>3,4</sup> | | |
+| Debian 11 | ✓<sup>3</sup> | | |
+| Debian 10 | ✓ | ✓ | |
+| Debian 9 | ✓ | ✓ | ✓ |
+| Debian 8 | | ✓ | |
+| OpenSUSE 15 | ✓ | | |
+| Oracle Linux 8 | ✓ | ✓ | |
+| Oracle Linux 7 | ✓ | ✓ | ✓ |
+| Oracle Linux 6.4+ | | | ✓ |
+| Red Hat Enterprise Linux Server 9+ | ✓ | | |
+| Red Hat Enterprise Linux Server 8.6+ | ✓<sup>3</sup> | ✓<sup>2</sup> | ✓<sup>2</sup> |
+| Red Hat Enterprise Linux Server 8.0-8.5 | ✓ | ✓<sup>2</sup> | ✓<sup>2</sup> |
+| Red Hat Enterprise Linux Server 7 | ✓ | ✓ | ✓ |
+| Red Hat Enterprise Linux Server 6.7+ | | | ✓ |
+| Rocky Linux 8 | ✓ | ✓ | |
+| SUSE Linux Enterprise Server 15 SP4 | ✓<sup>3</sup> | | |
+| SUSE Linux Enterprise Server 15 SP3 | ✓ | | |
+| SUSE Linux Enterprise Server 15 SP2 | ✓ | | |
+| SUSE Linux Enterprise Server 15 SP1 | ✓ | ✓ | |
+| SUSE Linux Enterprise Server 15 | ✓ | ✓ | |
+| SUSE Linux Enterprise Server 12 | ✓ | ✓ | ✓ |
+| Ubuntu 22.04 LTS | ✓ | | |
+| Ubuntu 20.04 LTS | ✓<sup>3</sup> | ✓ | ✓ |
+| Ubuntu 18.04 LTS | ✓<sup>3</sup> | ✓ | ✓ |
+| Ubuntu 16.04 LTS | ✓ | ✓ | ✓ |
+| Ubuntu 14.04 LTS | | ✓ | ✓ |
<sup>1</sup> Requires Python (2 or 3) to be installed on the machine.<br> <sup>2</sup> Requires Python 2 to be installed on the machine and aliased to the `python` command.<br>
On the roadmap
| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent (legacy) <sup>1</sup> | Diagnostics extension <sup>2</sup>| |:|::|::|::|
-| CentOS Linux 7 | X | | |
-| Debian 10 | X | | |
-| Ubuntu 18 | X | | |
-| Ubuntu 20 | X | | |
-| Red Hat Enterprise Linux Server 7 | X | | |
-| Red Hat Enterprise Linux Server 8 | X | | |
+| CentOS Linux 7 | ✓ | | |
+| Debian 10 | ✓ | | |
+| Ubuntu 18 | ✓ | | |
+| Ubuntu 20 | ✓ | | |
+| Red Hat Enterprise Linux Server 7 | ✓ | | |
+| Red Hat Enterprise Linux Server 8 | ✓ | | |
<sup>1</sup> Supports only the above distros and versions
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
The Azure Monitor Agent extensions for Windows and Linux can communicate either
1. Use this flowchart to determine the values of the `Settings` and `ProtectedSettings` parameters first.
- ![Diagram that shows a flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png)
+ :::image type="content" source="media/azure-monitor-agent-overview/proxy-flowchart.png" lightbox="media/azure-monitor-agent-overview/proxy-flowchart.png" alt-text="Diagram that shows a flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.":::
> [!NOTE] > Setting Linux system proxy via environment variables such as `http_proxy` and `https_proxy` is only supported using Azure Monitor Agent for Linux version 1.24.2 and above. If you have a proxy configuration for the ARM template, follow the ARM template example below, declaring the proxy setting inside the template. Additionally, a user can set "global" environment variables that get picked up by all systemd services [via the DefaultEnvironment variable in /etc/systemd/system.conf](https://www.man7.org/linux/man-pages/man5/systemd-system.conf.5.html).
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
These initiatives above comprise individual policies that:
- Create and deploy the association to link the machine to specified data collection rule. - `Data Collection Rule Resource Id`: The Azure Resource Manager resourceId of the rule you want to associate via this policy to all machines the policy is applied to.
- ![Partial screenshot from the Azure Policy Definitions page that shows two built-in policy initiatives for configuring Azure Monitor Agent.](media/azure-monitor-agent-install/built-in-ama-dcr-initiatives.png)
+ :::image type="content" source="media/azure-monitor-agent-install/built-in-ama-dcr-initiatives.png" lightbox="media/azure-monitor-agent-install/built-in-ama-dcr-initiatives.png" alt-text="Partial screenshot from the Azure Policy Definitions page that shows two built-in policy initiatives for configuring Azure Monitor Agent.":::
#### Known issues
These initiatives above comprise individual policies that:
You can choose to use the individual policies from the preceding policy initiative to perform a single action at scale. For example, if you *only* want to automatically install the agent, use the second agent installation policy from the initiative, as shown.
-![Partial screenshot from the Azure Policy Definitions page that shows policies contained within the initiative for configuring Azure Monitor Agent.](media/azure-monitor-agent-install/built-in-ama-dcr-policy.png)
### Remediation
The initiatives or policies will apply to each virtual machine as it's created.
When you create the assignment by using the Azure portal, you have the option of creating a remediation task at the same time. For information on the remediation, see [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md).
-![Screenshot that shows initiative remediation for Azure Monitor Agent.](media/azure-monitor-agent-install/built-in-ama-dcr-remediation.png)
## Next steps
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for Windows and Linux machines, in Azure and non-Azure environments, including on-premises and third-party clouds. The [benefits of migrating to Azure Monitor Agent](../agents/azure-monitor-agent-migration.md) include enhanced security, cost-effectiveness, performance, manageability and reliability. This article explains how to use the AMA Migration Helper and DCR Config Generator tools to help automate and track the migration from Log Analytics Agent to Azure Monitor Agent.
-![Flow diagram that shows the steps involved in agent migration and how the migration tools help in generating DCRs and tracking the entire migration process.](media/azure-monitor-agent-migration/mma-to-ama-migration-steps.png)
> [!IMPORTANT] > Do not remove legacy agents being used by other [Azure solutions or services](./azure-monitor-agent-migration.md#migrate-additional-services-and-features). Use the migration helper to discover which solutions and services you use today.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Before you begin migrating from the Log Analytics agent to Azure Monitor Agent,
### Migration steps
-![Flow diagram that shows the steps involved in agent migration and how the migration tools help in generating DCRs and tracking the entire migration process.](media/azure-monitor-agent-migration/mma-to-ama-migration-steps.png)
1. Use the [DCR generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator) to convert your legacy agent configuration into [data collection rules](./data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule) automatically.<sup>1</sup>
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Here is a comparison between client installer and VM extension for Azure Monitor
## Install the agent 1. Download the Windows MSI installer for the agent using [this link](https://go.microsoft.com/fwlink/?linkid=2192409). You can also download it from **Monitor** > **Data Collection Rules** > **Create** experience on Azure portal (shown below):
- [![Diagram shows download agent link on Azure portal.](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal.png)](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal-focus.png#lightbox)
+ :::image type="content" source="media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal.png" lightbox="media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal.png" alt-text="Diagram shows download agent link on Azure portal.":::
2. Open an elevated admin command prompt window and change directory to the location where you downloaded the installer. 3. To install with **default settings**, run the following command: ```cli
You need to create a 'Monitored Object' (MO) that creates a representation for t
Currently this association is only **limited** to the Azure AD tenant scope, which means configuration applied to the AAD tenant will be applied to all devices that are part of the tenant and running the agent installed via the client installer. Agents installed as virtual machine extension will not be impacted by this. The image below demonstrates how this works:
-![Diagram shows monitored object purpose and association.](media/azure-monitor-agent-windows-client/azure-monitor-agent-monitored-object.png)
Then, proceed with the instructions below to create and associate them to a Monitored Object, using REST APIs or PowerShell commands.
$requestURL = "https://management.azure.com$RespondId/providers/microsoft.insigh
Check the 'Heartbeat' table (and other tables you configured in the rules) in the Log Analytics workspace that you specified as a destination in the data collection rule(s). The `SourceComputerId`, `Computer`, `ComputerIP` columns should all reflect the client device information respectively, and the `Category` column should say 'Azure Monitor Agent'. See example below:
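The same check can be scripted; here's a sketch with the Az.OperationalInsights module, assuming your workspace GUID:

```powershell
# Sketch: latest heartbeat per client device reported by Azure Monitor Agent.
$query = @"
Heartbeat
| where Category == 'Azure Monitor Agent'
| summarize arg_max(TimeGenerated, ComputerIP, SourceComputerId) by Computer
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-customer-id>" -Query $query).Results
```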
-[![Diagram shows agent heartbeat logs on Azure portal.](media/azure-monitor-agent-windows-client/azure-monitor-agent-heartbeat-logs.png)](media/azure-monitor-agent-windows-client/azure-monitor-agent-heartbeat-logs.png)
### Using PowerShell for offboarding ```PowerShell
azure-monitor Data Collection Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-syslog.md
Syslog is an event logging protocol that's common to Linux. You can use the Sysl
When the Azure Monitor agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent when Syslog collection is enabled in [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). Azure Monitor Agent then sends the messages to an Azure Monitor or Log Analytics workspace where a corresponding Syslog record is created in a [Syslog table](/azure/azure-monitor/reference/tables/syslog).
-![Diagram that shows Syslog collection.](media/data-sources-syslog/overview.png)
-![Diagram that shows Syslog daemon and Azure Monitor Agent communication.](media/azure-monitor-agent/linux-agent-syslog-communication.png)
The following facilities are supported with the Syslog collector: * auth
Create a *data collection rule* in the same region as your Log Analytics workspa
1. Under **Settings**, select **Data Collection Rules**. 1. Select **Create**.
- :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-data-collection-rule.png" alt-text="Screenshot that shows the Data Collection Rules pane with the Create option selected.":::
+ :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-data-collection-rule.png" lightbox="../../sentinel/media/forward-syslog-monitor-agent/create-data-collection-rule.png" alt-text="Screenshot that shows the Data Collection Rules pane with the Create option selected.":::
#### Add resources 1. Select **Add resources**. 1. Use the filters to find the virtual machine you want to use to collect logs.
- :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-scope.png" alt-text="Screenshot that shows the page to select the scope for the data collection rule. ":::
+ :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-scope.png" lightbox="../../sentinel/media/forward-syslog-monitor-agent/create-rule-scope.png" alt-text="Screenshot that shows the page to select the scope for the data collection rule. ":::
1. Select the virtual machine. 1. Select **Apply**. 1. Select **Next: Collect and deliver**.
Create a *data collection rule* in the same region as your Log Analytics workspa
1. Select **Add data source**. 1. For **Data source type**, select **Linux syslog**.
- :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-data-source.png" alt-text="Screenshot that shows the page to select the data source type and minimum log level.":::
+ :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-data-source.png" lightbox="../../sentinel/media/forward-syslog-monitor-agent/create-rule-data-source.png" alt-text="Screenshot that shows the page to select the data source type and minimum log level.":::
1. For **Minimum log level**, leave the default values **LOG_DEBUG**. 1. Select **Next: Destination**.
Create a *data collection rule* in the same region as your Log Analytics workspa
1. Select **Add destination**.
- :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-add-destination.png" alt-text="Screenshot that shows the Destination tab with the Add destination option selected.":::
+ :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-add-destination.png" lightbox="../../sentinel/media/forward-syslog-monitor-agent/create-rule-add-destination.png" alt-text="Screenshot that shows the Destination tab with the Add destination option selected.":::
1. Enter the following values: |Field |Value |
azure-monitor Use Azure Monitor Agent Troubleshooter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/use-azure-monitor-agent-troubleshooter.md
The detailed data collected by the troubleshooter include system configuration,
### Run Linux Troubleshooter 1. Log in to the machine to be diagnosed. 2. Go to the location where the troubleshooter is automatically installed: `cd /var/lib/waagent/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-{version}/ama_tst`
-3. Run the Troubleshooter: sudo sh ama_troubleshooter.sh A
+3. Run the Troubleshooter: `sudo sh ama_troubleshooter.sh -A`
There are six sections that cover different scenarios that customers have historically had issues with. By entering 1-6 or A, customers can diagnose issues with the agent. Adding an L creates a zip file that can be shared if technical support is needed.
The details for the covered scenarios are below:
|Agent custom log collection doesn't work properly|Custom log configuration being pulled / used, Log file paths is valid| ### Share Linux Logs
-To create a zip file use this command when running the troubleshooter: sudo sh ama_troubleshooter.sh A L. You'll be asked for a file location to create the zip file.
+To create a zip file, use this command when running the troubleshooter: `sudo sh ama_troubleshooter.sh -A L`. You'll be asked for a file location to create the zip file.
## Next steps - [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/annotations.md
- Title: Release annotations for Application Insights | Microsoft Docs
-description: Learn how to create annotations to track deployment or other significant events with Application Insights.
-- Previously updated : 01/24/2023---
-# Release annotations for Application Insights
-
-Annotations show where you deployed a new build or other significant events. Annotations make it easy to see whether your changes had any effect on your application's performance. They can be automatically created by the [Azure Pipelines](/azure/devops/pipelines/tasks/) build system. You can also create annotations to flag any event you want by creating them from PowerShell.
-
-## Release annotations with Azure Pipelines build
-
-Release annotations are a feature of the cloud-based Azure Pipelines service of Azure DevOps.
-
-If all the following criteria are met, the deployment task creates the release annotation automatically:
--- The resource to which you're deploying is linked to Application Insights via the `APPINSIGHTS_INSTRUMENTATIONKEY` app setting.-- The Application Insights resource is in the same subscription as the resource to which you're deploying.-- You're using one of the following Azure DevOps pipeline tasks:-
- | Task code | Task name | Versions |
- ||-|--|
- | AzureAppServiceSettings | Azure App Service Settings | Any |
- | AzureRmWebAppDeployment | Azure App Service deploy | V3 and above |
- | AzureFunctionApp | Azure Functions | Any |
- | AzureFunctionAppContainer | Azure Functions for container | Any |
- | AzureWebAppContainer | Azure Web App for Containers | Any |
- | AzureWebApp | Azure Web App | Any |
-
-> [!NOTE]
-> If you're still using the Application Insights annotation deployment task, you should delete it.
-
-### Configure release annotations
-
-If you can't use one of the deployment tasks in the previous section, you need to add an inline script task in your deployment pipeline.
-
-1. Go to a new or existing pipeline and select a task.
-
- :::image type="content" source="./media/annotations/task.png" alt-text="Screenshot that shows a task selected under Stages." lightbox="./media/annotations/task.png":::
-1. Add a new task and select **Azure CLI**.
-
- :::image type="content" source="./media/annotations/add-azure-cli.png" alt-text="Screenshot that shows adding a new task and selecting Azure CLI." lightbox="./media/annotations/add-azure-cli.png":::
-1. Specify the relevant Azure subscription. Change **Script Type** to **PowerShell** and **Script Location** to **Inline**.
-1. Add the [PowerShell script from step 2 in the next section](#create-release-annotations-with-the-azure-cli) to **Inline Script**.
-1. Add the following arguments to **Script Arguments**, replacing the angle-bracketed placeholders with your values. The `-releaseProperties` are optional.
-
- ```powershell
- -aiResourceId "<aiResourceId>" `
- -releaseName "<releaseName>" `
- -releaseProperties @{"ReleaseDescription"="<a description>";
- "TriggerBy"="<Your name>" }
- ```
-
- :::image type="content" source="./media/annotations/inline-script.png" alt-text="Screenshot of Azure CLI task settings with Script Type, Script Location, Inline Script, and Script Arguments highlighted." lightbox="./media/annotations/inline-script.png":::
-
- The following example shows metadata you can set in the optional `releaseProperties` argument by using [build](/azure/devops/pipelines/build/variables#build-variables-devops-services) and [release](/azure/devops/pipelines/release/variables#default-variablesrelease) variables.
-
- ```powershell
- -releaseProperties @{
- "BuildNumber"="$(Build.BuildNumber)";
- "BuildRepositoryName"="$(Build.Repository.Name)";
- "BuildRepositoryProvider"="$(Build.Repository.Provider)";
- "ReleaseDefinitionName"="$(Build.DefinitionName)";
- "ReleaseDescription"="Triggered by $(Build.DefinitionName) $(Build.BuildNumber)";
- "ReleaseEnvironmentName"="$(Release.EnvironmentName)";
- "ReleaseId"="$(Release.ReleaseId)";
- "ReleaseName"="$(Release.ReleaseName)";
- "ReleaseRequestedFor"="$(Release.RequestedFor)";
- "ReleaseWebUrl"="$(Release.ReleaseWebUrl)";
- "SourceBranch"="$(Build.SourceBranch)";
- "TeamFoundationCollectionUri"="$(System.TeamFoundationCollectionUri)" }
- ```
-
-1. Select **Save**.
-
-## Create release annotations with the Azure CLI
-
-You can use the `CreateReleaseAnnotation` PowerShell script to create annotations from any process you want without using Azure DevOps.
-
-1. Sign in to the [Azure CLI](/cli/azure/authenticate-azure-cli).
-
-1. Make a local copy of the following script and call it `CreateReleaseAnnotation.ps1`.
-
- ```powershell
- param(
- [parameter(Mandatory = $true)][string]$aiResourceId,
- [parameter(Mandatory = $true)][string]$releaseName,
- [parameter(Mandatory = $false)]$releaseProperties = @()
- )
-
- $annotation = @{
- Id = [GUID]::NewGuid();
- AnnotationName = $releaseName;
- EventTime = (Get-Date).ToUniversalTime().GetDateTimeFormats("s")[0];
- Category = "Deployment"; #Application Insights only displays annotations from the "Deployment" Category
- Properties = ConvertTo-Json $releaseProperties -Compress
- }
-
- $body = (ConvertTo-Json $annotation -Compress) -replace '(\\+)"', '$1$1"' -replace "`"", "`"`""
- az rest --method put --uri "$($aiResourceId)/Annotations?api-version=2015-05-01" --body "$($body) "
-
- # Use the following command for Linux Azure DevOps Hosts or other PowerShell scenarios
- # Invoke-AzRestMethod -Path "$aiResourceId/Annotations?api-version=2015-05-01" -Method PUT -Payload $body
- ```
-
- > [!NOTE]
- > Your annotations must have **Category** set to **Deployment** to appear in the Azure portal.
-
-1. Call the PowerShell script with the following code. Replace the angle-bracketed placeholders with your values. The `-releaseProperties` are optional.
-
- ```powershell
- .\CreateReleaseAnnotation.ps1 `
- -aiResourceId "<aiResourceId>" `
- -releaseName "<releaseName>" `
- -releaseProperties @{"ReleaseDescription"="<a description>";
- "TriggerBy"="<Your name>" }
- ```
-
- |Argument | Definition | Note|
- |--|--|--|
- |`aiResourceId` | The resource ID to the target Application Insights resource. | Example:<br> /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyRGName/providers/microsoft.insights/components/MyResourceName|
- |`releaseName` | The name to give the created release annotation. | |
- |`releaseProperties` | Used to attach custom metadata to the annotation. | Optional|
-
-## View annotations
-
-> [!NOTE]
-> Release annotations aren't currently available in the **Metrics** pane of Application Insights.
-
-Whenever you use the release template to deploy a new release, an annotation is sent to Application Insights. You can view annotations in the following locations:
-
-- **Performance:**
-
- :::image type="content" source="./media/annotations/performance.png" alt-text="Screenshot that shows the Performance tab with a release annotation selected to show the Release Properties tab." lightbox="./media/annotations/performance.png":::
-
-- **Failures:**
-
- :::image type="content" source="./media/annotations/failures.png" alt-text="Screenshot that shows the Failures tab with a release annotation selected to show the Release Properties tab." lightbox="./media/annotations/failures.png":::
-- **Usage:**
-
- :::image type="content" source="./media/annotations/usage-pane.png" alt-text="Screenshot that shows the Users tab bar with release annotations selected. Release annotations appear as blue arrows above the chart indicating the moment in time that a release occurred." lightbox="./media/annotations/usage-pane.png":::
-
-- **Workbooks:**
-
- In any log-based workbook query where the visualization displays time along the x-axis:
-
- :::image type="content" source="./media/annotations/workbooks-annotations.png" alt-text="Screenshot that shows the Workbooks pane with a time series log-based query with annotations displayed." lightbox="./media/annotations/workbooks-annotations.png":::
-
-To enable annotations in your workbook, go to **Advanced Settings** and select **Show annotations**.
-
-
-Select any annotation marker to open details about the release, including requestor, source control branch, release pipeline, and environment.
-
-## Release annotations by using API keys
-
-Release annotations are a feature of the cloud-based Azure Pipelines service of Azure DevOps.
-
-> [!IMPORTANT]
-> Using API keys for annotations is deprecated. We recommend using the [Azure CLI](#create-release-annotations-with-the-azure-cli) instead.
-
-### Install the annotations extension (one time)
-
-To create release annotations, install the Release Annotations extension for Azure DevOps from Visual Studio Marketplace.
-
-1. Sign in to your [Azure DevOps](https://azure.microsoft.com/services/devops/) project.
-
-1. On the **Visual Studio Marketplace** [Release Annotations extension](https://marketplace.visualstudio.com/items/ms-appinsights.appinsightsreleaseannotations) page, select your Azure DevOps organization. Select **Install** to add the extension to your Azure DevOps organization.
-
- :::image type="content" source="./media/annotations/1-install.png" lightbox="./media/annotations/1-install.png" alt-text="Screenshot that shows selecting an Azure DevOps organization and selecting Install.":::
-
-You only need to install the extension once for your Azure DevOps organization. You can now configure release annotations for any project in your organization.
-
-### Configure release annotations by using API keys
-
-Create a separate API key for each of your Azure Pipelines release templates.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and open the Application Insights resource that monitors your application. Or if you don't have one, [create a new Application Insights resource](create-workspace-resource.md).
-
-1. Open the **API Access** tab and copy the **Application Insights ID**.
-
- :::image type="content" source="./media/annotations/2-app-id.png" lightbox="./media/annotations/2-app-id.png" alt-text="Screenshot that shows under API Access, copying the Application ID.":::
-
-1. In a separate browser window, open or create the release template that manages your Azure Pipelines deployments.
-
-1. Select **Add task** and then select the **Application Insights Release Annotation** task from the menu.
-
- :::image type="content" source="./media/annotations/3-add-task.png" lightbox="./media/annotations/3-add-task.png" alt-text="Screenshot that shows selecting Add Task and Application Insights Release Annotation.":::
-
- > [!NOTE]
- > The Release Annotation task currently supports only Windows-based agents. It won't run on Linux, macOS, or other types of agents.
-
-1. Under **Application ID**, paste the Application Insights ID you copied from the **API Access** tab.
-
- :::image type="content" source="./media/annotations/4-paste-app-id.png" lightbox="./media/annotations/4-paste-app-id.png" alt-text="Screenshot that shows pasting the Application Insights ID.":::
-
-1. Back in the Application Insights **API Access** window, select **Create API Key**.
-
- :::image type="content" source="./media/annotations/5-create-api-key.png" lightbox="./media/annotations/5-create-api-key.png" alt-text="Screenshot that shows selecting the Create API Key on the API Access tab.":::
-
-1. In the **Create API key** window, enter a description, select **Write annotations**, and then select **Generate key**. Copy the new key.
-
- :::image type="content" source="./media/annotations/6-create-api-key.png" lightbox="./media/annotations/6-create-api-key.png" alt-text="Screenshot that shows in the Create API key window, entering a description, selecting Write annotations, and then selecting the Generate key.":::
-
-1. In the release template window, on the **Variables** tab, select **Add** to create a variable definition for the new API key.
-
-1. Under **Name**, enter **ApiKey**. Under **Value**, paste the API key you copied from the **API Access** tab.
-
- :::image type="content" source="./media/annotations/7-paste-api-key.png" lightbox="./media/annotations/7-paste-api-key.png" alt-text="Screenshot that shows in the Azure DevOps Variables tab, selecting Add, naming the variable ApiKey, and pasting the API key under Value.":::
-
-1. Select **Save** in the main release template window to save the template.
-
- > [!NOTE]
- > Limits for API keys are described in the [REST API rate limits documentation](/rest/api/yammer/rest-api-rate-limits).
-
-### Transition to the new release annotation
-
-To use the new release annotations:
-1. [Remove the Release Annotations extension](/azure/devops/marketplace/uninstall-disable-extensions).
-1. Remove the Application Insights Release Annotation task in your Azure Pipelines deployment.
-1. Create new release annotations with [Azure Pipelines](#release-annotations-with-azure-pipelines-build) or the [Azure CLI](#create-release-annotations-with-the-azure-cli).
-
-## Next steps
-
-* [Create work items](./diagnostic-search.md#create-work-item)
-* [Automation with PowerShell](./powershell.md)
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Application Insights provides other features including, but not limited to:
- [Live Metrics](live-stream.md): Observe activity from your deployed application in real time with no effect on the host environment.
- [Availability](availability-overview.md): Also known as synthetic transaction monitoring. Probe the external endpoints of your applications to test the overall availability and responsiveness over time.
-- [GitHub or Azure DevOps integration](work-item-integration.md): Create [GitHub](/training/paths/github-administration-products/) or [Azure DevOps](/azure/devops/) work items in the context of Application Insights data.
+- [GitHub or Azure DevOps integration](release-and-work-item-insights.md?tabs=work-item-integration): Create [GitHub](/training/paths/github-administration-products/) or [Azure DevOps](/azure/devops/) work items in the context of Application Insights data.
- [Usage](usage-overview.md): Understand which features are popular with users and how users interact and use your application.
- [Smart detection](proactive-diagnostics.md): Detect failures and anomalies automatically through proactive telemetry analysis.
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/continuous-monitoring.md
- Title: Continuous monitoring of your Azure DevOps release pipeline | Microsoft Docs
-description: This article provides instructions to quickly set up continuous monitoring with Azure Pipelines and Application Insights.
- Previously updated : 05/01/2020---
-# Add continuous monitoring to your release pipeline
-
-Azure Pipelines integrates with Application Insights to allow continuous monitoring of your Azure DevOps release pipeline throughout the software development lifecycle.
-
-With continuous monitoring, release pipelines can incorporate monitoring data from Application Insights and other Azure resources. When the release pipeline detects an Application Insights alert, the pipeline can gate or roll back the deployment until the alert is resolved. If all checks pass, deployments can proceed automatically from test all the way to production, without the need for manual intervention.
-
-## Configure continuous monitoring
-
-1. In [Azure DevOps](https://dev.azure.com), select an organization and project.
-
-1. On the left menu of the project page, select **Pipelines** > **Releases**.
-
-1. Select the dropdown arrow next to **New** and select **New release pipeline**. Or, if you don't have a pipeline yet, select **New pipeline** on the page that appears.
-
-1. On the **Select a template** pane, search for and select **Azure App Service deployment with continuous monitoring**, and then select **Apply**.
-
- :::image type="content" source="media/continuous-monitoring/001.png" lightbox="media/continuous-monitoring/001.png" alt-text="Screenshot that shows a new Azure Pipelines release pipeline.":::
-
-1. In the **Stage 1** box, select the hyperlink to **View stage tasks.**
-
- :::image type="content" source="media/continuous-monitoring/002.png" lightbox="media/continuous-monitoring/002.png" alt-text="Screenshot that shows View stage tasks.":::
-
-1. In the **Stage 1** configuration pane, fill in the following fields:
-
- | Parameter | Value |
- | - |:--|
- | **Stage name** | Provide a stage name or leave it at **Stage 1**. |
- | **Azure subscription** | Select the dropdown arrow and select the linked Azure subscription you want to use.|
- | **App type** | Select the dropdown arrow and select your app type. |
- | **App Service name** | Enter the name of your Azure App Service. |
- | **Resource Group name for Application Insights** | Select the dropdown arrow and select the resource group you want to use. |
- | **Application Insights resource name** | Select the dropdown arrow and select the Application Insights resource for the resource group you selected.
-
-1. To save the pipeline with default alert rule settings, select **Save** in the upper-right corner of the Azure DevOps window. Enter a descriptive comment and select **OK**.
-
-## Modify alert rules
-
-Out of the box, the **Azure App Service deployment with continuous monitoring** template has four alert rules: **Availability**, **Failed requests**, **Server response time**, and **Server exceptions**. You can add more rules or change the rule settings to meet your service level needs.
-
-To modify alert rule settings:
-
-In the left pane of the release pipeline page, select **Configure Application Insights Alerts**.
-
-The four default alert rules are created via an inline script:
-
-```azurecli
-$subscription = az account show --query "id";$subscription.Trim("`"");$resource="/subscriptions/$subscription/resourcegroups/"+"$(Parameters.AppInsightsResourceGroupName)"+"/providers/microsoft.insights/components/" + "$(Parameters.ApplicationInsightsResourceName)";
-az monitor metrics alert create -n 'Availability_$(Release.DefinitionName)' -g $(Parameters.AppInsightsResourceGroupName) --scopes $resource --condition 'avg availabilityResults/availabilityPercentage < 99' --description "created from Azure DevOps";
-az monitor metrics alert create -n 'FailedRequests_$(Release.DefinitionName)' -g $(Parameters.AppInsightsResourceGroupName) --scopes $resource --condition 'count requests/failed > 5' --description "created from Azure DevOps";
-az monitor metrics alert create -n 'ServerResponseTime_$(Release.DefinitionName)' -g $(Parameters.AppInsightsResourceGroupName) --scopes $resource --condition 'avg requests/duration > 5' --description "created from Azure DevOps";
-az monitor metrics alert create -n 'ServerExceptions_$(Release.DefinitionName)' -g $(Parameters.AppInsightsResourceGroupName) --scopes $resource --condition 'count exceptions/server > 5' --description "created from Azure DevOps";
-```
-
-You can modify the script to add more alert rules, adjust the alert conditions, and remove rules that don't fit your deployment purposes.
-
-## Add deployment conditions
-
-When you add deployment gates to your release pipeline, an alert that exceeds the thresholds you set prevents unwanted release promotion. After you resolve the alert, the deployment can proceed automatically.
-
-To add deployment gates:
-
-1. On the main pipeline page, under **Stages**, select the **Pre-deployment conditions** or **Post-deployment conditions** symbol, depending on which stage needs a continuous monitoring gate.
-
- :::image type="content" source="media/continuous-monitoring/004.png" lightbox="media/continuous-monitoring/004.png" alt-text="Screenshot that shows Pre-deployment conditions.":::
-
-1. In the **Pre-deployment conditions** configuration pane, set **Gates** to **Enabled**.
-
-1. Next to **Deployment gates**, select **Add**.
-
-1. Select **Query Azure Monitor alerts** from the dropdown menu. This option lets you access both Azure Monitor and Application Insights alerts.
-
- :::image type="content" source="media/continuous-monitoring/005.png" lightbox="media/continuous-monitoring/005.png" alt-text="Screenshot that shows Query Azure Monitor alerts.":::
-
-1. Under **Evaluation options**, enter the values you want for settings like **The time between re-evaluation of gates** and **The timeout after which gates fail**.
-
-## View release logs
-
-You can see deployment gate behavior and other release steps in the release logs. To open the logs:
-
-1. Select **Releases** from the left menu of the pipeline page.
-
-1. Select any release.
-
-1. Under **Stages**, select any stage to view a release summary.
-
-1. To view logs, select **View logs** in the release summary, select the **Succeeded** or **Failed** hyperlink in any stage, or hover over any stage and select **Logs**.
-
- :::image type="content" source="media/continuous-monitoring/006.png" lightbox="media/continuous-monitoring/006.png" alt-text="Screenshot that shows viewing release logs.":::
-
-## Next steps
-
-For more information about Azure Pipelines, see the [Azure Pipelines documentation](/azure/devops/pipelines).
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
See these other automation articles:
* [Create an Application Insights resource](./create-workspace-resource.md)
* [Create web tests](../alerts/resource-manager-alerts-metric.md#availability-test-with-metric-alert).
* [Send Azure Diagnostics to Application Insights](../agents/diagnostics-extension-to-application-insights.md).
-* [Create release annotations](annotations.md).
+* [Create release annotations](release-and-work-item-insights.md?tabs=release-annotations).
azure-monitor Release And Work Item Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/release-and-work-item-insights.md
+
+ Title: Release and work item insights for Application Insights
+description: Learn how to set up continuous monitoring of your release pipeline, create work items in GitHub or Azure DevOps, and track deployment or other significant events.
++ Last updated : 10/06/2023+++
+# Release and work item insights
+
+Release and work item insights are crucial for optimizing the software development lifecycle. As applications evolve, it's vital to monitor each release and its work items closely. These insights highlight performance bottlenecks and let teams address issues proactively, ensuring smooth deployment and user experience. They equip developers and stakeholders to make decisions, adjust processes, and deliver high-quality software.
+
+## [Continuous monitoring](#tab/continuous-monitoring)
+
+Azure Pipelines integrates with Application Insights to allow continuous monitoring of your Azure DevOps release pipeline throughout the software development lifecycle.
+
+With continuous monitoring, release pipelines can incorporate monitoring data from Application Insights and other Azure resources. When the release pipeline detects an Application Insights alert, the pipeline can gate or roll back the deployment until the alert is resolved. If all checks pass, deployments can proceed automatically from test all the way to production, without the need for manual intervention.
+
+## Configure continuous monitoring
+
+1. In [Azure DevOps](https://dev.azure.com), select an organization and project.
+
+1. On the left menu of the project page, select **Pipelines** > **Releases**.
+
+1. Select the dropdown arrow next to **New** and select **New release pipeline**. Or, if you don't have a pipeline yet, select **New pipeline** on the page that appears.
+
+1. On the **Select a template** pane, search for and select **Azure App Service deployment with continuous monitoring**, and then select **Apply**.
+
+ :::image type="content" source="media/release-and-work-item-insights/001.png" lightbox="media/release-and-work-item-insights/001.png" alt-text="Screenshot that shows a new Azure Pipelines release pipeline.":::
+
+1. In the **Stage 1** box, select the hyperlink to **View stage tasks.**
+
+ :::image type="content" source="media/release-and-work-item-insights/002.png" lightbox="media/release-and-work-item-insights/002.png" alt-text="Screenshot that shows View stage tasks.":::
+
+1. In the **Stage 1** configuration pane, fill in the following fields:
+
+ | Parameter | Value |
+ | - |:--|
+ | **Stage name** | Provide a stage name or leave it at **Stage 1**. |
+ | **Azure subscription** | Select the dropdown arrow and select the linked Azure subscription you want to use.|
+ | **App type** | Select the dropdown arrow and select your app type. |
+ | **App Service name** | Enter the name of your Azure App Service. |
+ | **Resource Group name for Application Insights** | Select the dropdown arrow and select the resource group you want to use. |
+ | **Application Insights resource name** | Select the dropdown arrow and select the Application Insights resource for the resource group you selected.
+
+1. To save the pipeline with default alert rule settings, select **Save** in the upper-right corner of the Azure DevOps window. Enter a descriptive comment and select **OK**.
+
+## Modify alert rules
+
+Out of the box, the **Azure App Service deployment with continuous monitoring** template has four alert rules: **Availability**, **Failed requests**, **Server response time**, and **Server exceptions**. You can add more rules or change the rule settings to meet your service level needs.
+
+To modify alert rule settings:
+
+In the left pane of the release pipeline page, select **Configure Application Insights Alerts**.
+
+The four default alert rules are created via an inline script:
+
+```azurecli
+# Build the Application Insights resource ID from the current subscription and pipeline parameters
+$subscription = az account show --query "id"
+$subscription = $subscription.Trim('"')
+$resource = "/subscriptions/$subscription/resourcegroups/" + "$(Parameters.AppInsightsResourceGroupName)" + "/providers/microsoft.insights/components/" + "$(Parameters.ApplicationInsightsResourceName)"
+az monitor metrics alert create -n 'Availability_$(Release.DefinitionName)' -g $(Parameters.AppInsightsResourceGroupName) --scopes $resource --condition 'avg availabilityResults/availabilityPercentage < 99' --description "created from Azure DevOps";
+az monitor metrics alert create -n 'FailedRequests_$(Release.DefinitionName)' -g $(Parameters.AppInsightsResourceGroupName) --scopes $resource --condition 'count requests/failed > 5' --description "created from Azure DevOps";
+az monitor metrics alert create -n 'ServerResponseTime_$(Release.DefinitionName)' -g $(Parameters.AppInsightsResourceGroupName) --scopes $resource --condition 'avg requests/duration > 5' --description "created from Azure DevOps";
+az monitor metrics alert create -n 'ServerExceptions_$(Release.DefinitionName)' -g $(Parameters.AppInsightsResourceGroupName) --scopes $resource --condition 'count exceptions/server > 5' --description "created from Azure DevOps";
+```
+
+You can modify the script to add more alert rules, adjust the alert conditions, and remove rules that don't fit your deployment purposes.
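+
+For example, here's a sketch of one more rule you could append to the script. The browser timing metric and the five-second threshold are illustrative assumptions, not part of the template:
+
+```powershell
+# Hypothetical fifth rule: alert when average page load time exceeds 5 seconds
+az monitor metrics alert create -n 'BrowserTimings_$(Release.DefinitionName)' -g $(Parameters.AppInsightsResourceGroupName) --scopes $resource --condition 'avg browserTimings/totalDuration > 5' --description "created from Azure DevOps";
+```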
+
+## Add deployment conditions
+
+When you add deployment gates to your release pipeline, an alert that exceeds the thresholds you set prevents unwanted release promotion. After you resolve the alert, the deployment can proceed automatically.
+
+To add deployment gates:
+
+1. On the main pipeline page, under **Stages**, select the **Pre-deployment conditions** or **Post-deployment conditions** symbol, depending on which stage needs a continuous monitoring gate.
+
+ :::image type="content" source="media/release-and-work-item-insights/004.png" lightbox="media/release-and-work-item-insights/004.png" alt-text="Screenshot that shows Pre-deployment conditions.":::
+
+1. In the **Pre-deployment conditions** configuration pane, set **Gates** to **Enabled**.
+
+1. Next to **Deployment gates**, select **Add**.
+
+1. Select **Query Azure Monitor alerts** from the dropdown menu. This option lets you access both Azure Monitor and Application Insights alerts.
+
+ :::image type="content" source="media/release-and-work-item-insights/005.png" lightbox="media/release-and-work-item-insights/005.png" alt-text="Screenshot that shows Query Azure Monitor alerts.":::
+
+1. Under **Evaluation options**, enter the values you want for settings like **The time between re-evaluation of gates** and **The timeout after which gates fail**.
+
+## View release logs
+
+You can see deployment gate behavior and other release steps in the release logs. To open the logs:
+
+1. Select **Releases** from the left menu of the pipeline page.
+
+1. Select any release.
+
+1. Under **Stages**, select any stage to view a release summary.
+
+1. To view logs, select **View logs** in the release summary, select the **Succeeded** or **Failed** hyperlink in any stage, or hover over any stage and select **Logs**.
+
+ :::image type="content" source="media/release-and-work-item-insights/006.png" lightbox="media/release-and-work-item-insights/006.png" alt-text="Screenshot that shows viewing release logs.":::
+
+## [Release annotations](#tab/release-annotations)
+
+Annotations show where you deployed a new build or other significant events. Annotations make it easy to see whether your changes had any effect on your application's performance. They can be automatically created by the [Azure Pipelines](/azure/devops/pipelines/tasks/) build system. You can also create annotations from PowerShell to flag any event you want.
+
+## Release annotations with Azure Pipelines build
+
+Release annotations are a feature of the cloud-based Azure Pipelines service of Azure DevOps.
+
+If all the following criteria are met, the deployment task creates the release annotation automatically:
+
+- The resource to which you're deploying is linked to Application Insights via the `APPINSIGHTS_INSTRUMENTATIONKEY` app setting (a CLI sketch follows after this list).
+- The Application Insights resource is in the same subscription as the resource to which you're deploying.
+- You're using one of the following Azure DevOps pipeline tasks:
+
+ | Task code | Task name | Versions |
+ ||-|--|
+ | AzureAppServiceSettings | Azure App Service Settings | Any |
+ | AzureRmWebAppDeployment | Azure App Service deploy | V3 and above |
+ | AzureFunctionApp | Azure Functions | Any |
+ | AzureFunctionAppContainer | Azure Functions for container | Any |
+ | AzureWebAppContainer | Azure Web App for Containers | Any |
+ | AzureWebApp | Azure Web App | Any |
+
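+The app setting link in the first criterion can be set with the Azure CLI if needed. This is a sketch under assumed names; the resource group, app name, and key value are placeholders:
+
+```powershell
+# Hypothetical names: link a web app to Application Insights via its instrumentation key
+az webapp config appsettings set --resource-group "MyRGName" --name "MyWebApp" --settings APPINSIGHTS_INSTRUMENTATIONKEY="<instrumentation-key>"
+```
+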
+> [!NOTE]
+> If you're still using the Application Insights annotation deployment task, you should delete it.
+
+### Configure release annotations
+
+If you can't use one of the deployment tasks in the previous section, you need to add an inline script task in your deployment pipeline.
+
+1. Go to a new or existing pipeline and select a task.
+
+ :::image type="content" source="./media/release-and-work-item-insights/task.png" alt-text="Screenshot that shows a task selected under Stages." lightbox="./media/release-and-work-item-insights/task.png":::
+1. Add a new task and select **Azure CLI**.
+
+ :::image type="content" source="./media/release-and-work-item-insights/add-azure-cli.png" alt-text="Screenshot that shows adding a new task and selecting Azure CLI." lightbox="./media/release-and-work-item-insights/add-azure-cli.png":::
+1. Specify the relevant Azure subscription. Change **Script Type** to **PowerShell** and **Script Location** to **Inline**.
+1. Add the [PowerShell script from step 2 in the next section](#create-release-annotations-with-the-azure-cli) to **Inline Script**.
+1. Add the following arguments to **Script Arguments**. Replace the angle-bracketed placeholders with your values. The `-releaseProperties` argument is optional.
+
+ ```powershell
+ -aiResourceId "<aiResourceId>" `
+ -releaseName "<releaseName>" `
+ -releaseProperties @{"ReleaseDescription"="<a description>";
+ "TriggerBy"="<Your name>" }
+ ```
+
+ :::image type="content" source="./media/release-and-work-item-insights/inline-script.png" alt-text="Screenshot of Azure CLI task settings with Script Type, Script Location, Inline Script, and Script Arguments highlighted." lightbox="./media/release-and-work-item-insights/inline-script.png":::
+
+ The following example shows metadata you can set in the optional `releaseProperties` argument by using [build](/azure/devops/pipelines/build/variables#build-variables-devops-services) and [release](/azure/devops/pipelines/release/variables#default-variablesrelease) variables.
+
+ ```powershell
+ -releaseProperties @{
+ "BuildNumber"="$(Build.BuildNumber)";
+ "BuildRepositoryName"="$(Build.Repository.Name)";
+ "BuildRepositoryProvider"="$(Build.Repository.Provider)";
+ "ReleaseDefinitionName"="$(Build.DefinitionName)";
+ "ReleaseDescription"="Triggered by $(Build.DefinitionName) $(Build.BuildNumber)";
+ "ReleaseEnvironmentName"="$(Release.EnvironmentName)";
+ "ReleaseId"="$(Release.ReleaseId)";
+ "ReleaseName"="$(Release.ReleaseName)";
+ "ReleaseRequestedFor"="$(Release.RequestedFor)";
+ "ReleaseWebUrl"="$(Release.ReleaseWebUrl)";
+ "SourceBranch"="$(Build.SourceBranch)";
+ "TeamFoundationCollectionUri"="$(System.TeamFoundationCollectionUri)" }
+ ```
+
+1. Select **Save**.
+
+## Create release annotations with the Azure CLI
+
+You can use the `CreateReleaseAnnotation` PowerShell script to create annotations from any process you want without using Azure DevOps.
+
+1. Sign in to the [Azure CLI](/cli/azure/authenticate-azure-cli).
+
+1. Make a local copy of the following script and call it `CreateReleaseAnnotation.ps1`.
+
+ ```powershell
+ param(
+ [parameter(Mandatory = $true)][string]$aiResourceId,
+ [parameter(Mandatory = $true)][string]$releaseName,
+ [parameter(Mandatory = $false)]$releaseProperties = @()
+ )
+
+ $annotation = @{
+ Id = [GUID]::NewGuid();
+ AnnotationName = $releaseName;
+ EventTime = (Get-Date).ToUniversalTime().GetDateTimeFormats("s")[0];
+ Category = "Deployment"; #Application Insights only displays annotations from the "Deployment" Category
+ Properties = ConvertTo-Json $releaseProperties -Compress
+ }
+
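+ # Escape quotes in the JSON payload so it survives command-line parsing by az rest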
+ $body = (ConvertTo-Json $annotation -Compress) -replace '(\\+)"', '$1$1"' -replace "`"", "`"`""
+ az rest --method put --uri "$($aiResourceId)/Annotations?api-version=2015-05-01" --body "$($body) "
+
+ # Use the following command for Linux Azure DevOps Hosts or other PowerShell scenarios
+ # Invoke-AzRestMethod -Path "$aiResourceId/Annotations?api-version=2015-05-01" -Method PUT -Payload $body
+ ```
+
+ > [!NOTE]
+ > Your annotations must have **Category** set to **Deployment** to appear in the Azure portal.
+
+1. Call the PowerShell script with the following code. Replace the angle-bracketed placeholders with your values. The `-releaseProperties` are optional.
+
+ ```powershell
+ .\CreateReleaseAnnotation.ps1 `
+ -aiResourceId "<aiResourceId>" `
+ -releaseName "<releaseName>" `
+ -releaseProperties @{"ReleaseDescription"="<a description>";
+ "TriggerBy"="<Your name>" }
+ ```
+
+ |Argument | Definition | Note|
+ |--|--|--|
+ |`aiResourceId` | The resource ID to the target Application Insights resource. | Example:<br> /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyRGName/providers/microsoft.insights/components/MyResourceName|
+ |`releaseName` | The name to give the created release annotation. | |
+ |`releaseProperties` | Used to attach custom metadata to the annotation. | Optional|
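+
+If you don't have the resource ID handy, one way to look it up is with the Azure CLI. This is a sketch; the resource group and component names are placeholders:
+
+```powershell
+# Hypothetical names: retrieve the full resource ID of an Application Insights component
+$aiResourceId = az resource show --resource-group "MyRGName" --name "MyResourceName" --resource-type "microsoft.insights/components" --query id --output tsv
+```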
+
+## View annotations
+
+> [!NOTE]
+> Release annotations aren't currently available in the **Metrics** pane of Application Insights.
+
+Whenever you use the release template to deploy a new release, an annotation is sent to Application Insights. You can view annotations in the following locations:
+
+- **Performance:**
+
+ :::image type="content" source="./media/release-and-work-item-insights/performance.png" alt-text="Screenshot that shows the Performance tab with a release annotation selected to show the Release Properties tab." lightbox="./media/release-and-work-item-insights/performance.png":::
+
+- **Failures:**
+
+ :::image type="content" source="./media/release-and-work-item-insights/failures.png" alt-text="Screenshot that shows the Failures tab with a release annotation selected to show the Release Properties tab." lightbox="./media/release-and-work-item-insights/failures.png":::
+- **Usage:**
+
+ :::image type="content" source="./media/release-and-work-item-insights/usage-pane.png" alt-text="Screenshot that shows the Users tab bar with release annotations selected. Release annotations appear as blue arrows above the chart indicating the moment in time that a release occurred." lightbox="./media/release-and-work-item-insights/usage-pane.png":::
+
+- **Workbooks:**
+
+ In any log-based workbook query where the visualization displays time along the x-axis:
+
+ :::image type="content" source="./media/release-and-work-item-insights/workbooks-annotations.png" alt-text="Screenshot that shows the Workbooks pane with a time series log-based query with annotations displayed." lightbox="./media/release-and-work-item-insights/workbooks-annotations.png":::
+
+To enable annotations in your workbook, go to **Advanced Settings** and select **Show annotations**.
+
+
+Select any annotation marker to open details about the release, including requestor, source control branch, release pipeline, and environment.
+
+## Release annotations by using API keys
+
+Release annotations are a feature of the cloud-based Azure Pipelines service of Azure DevOps.
+
+> [!IMPORTANT]
+> Using API keys for annotations is deprecated. We recommend using the [Azure CLI](#create-release-annotations-with-the-azure-cli) instead.
+
+### Install the annotations extension (one time)
+
+To create release annotations, install the Release Annotations extension for Azure DevOps from Visual Studio Marketplace.
+
+1. Sign in to your [Azure DevOps](https://azure.microsoft.com/services/devops/) project.
+
+1. On the **Visual Studio Marketplace** [Release Annotations extension](https://marketplace.visualstudio.com/items/ms-appinsights.appinsightsreleaseannotations) page, select your Azure DevOps organization. Select **Install** to add the extension to your Azure DevOps organization.
+
+ :::image type="content" source="./media/release-and-work-item-insights/1-install.png" lightbox="./media/release-and-work-item-insights/1-install.png" alt-text="Screenshot that shows selecting an Azure DevOps organization and selecting Install.":::
+
+You only need to install the extension once for your Azure DevOps organization. You can now configure release annotations for any project in your organization.
+
+### Configure release annotations by using API keys
+
+Create a separate API key for each of your Azure Pipelines release templates.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and open the Application Insights resource that monitors your application. Or if you don't have one, [create a new Application Insights resource](create-workspace-resource.md).
+
+1. Open the **API Access** tab and copy the **Application Insights ID**.
+
+ :::image type="content" source="./media/release-and-work-item-insights/2-app-id.png" lightbox="./media/release-and-work-item-insights/2-app-id.png" alt-text="Screenshot that shows under API Access, copying the Application ID.":::
+
+1. In a separate browser window, open or create the release template that manages your Azure Pipelines deployments.
+
+1. Select **Add task** and then select the **Application Insights Release Annotation** task from the menu.
+
+ :::image type="content" source="./media/release-and-work-item-insights/3-add-task.png" lightbox="./media/release-and-work-item-insights/3-add-task.png" alt-text="Screenshot that shows selecting Add Task and Application Insights Release Annotation.":::
+
+ > [!NOTE]
+ > The Release Annotation task currently supports only Windows-based agents. It won't run on Linux, macOS, or other types of agents.
+
+1. Under **Application ID**, paste the Application Insights ID you copied from the **API Access** tab.
+
+ :::image type="content" source="./media/release-and-work-item-insights/4-paste-app-id.png" lightbox="./media/release-and-work-item-insights/4-paste-app-id.png" alt-text="Screenshot that shows pasting the Application Insights ID.":::
+
+1. Back in the Application Insights **API Access** window, select **Create API Key**.
+
+ :::image type="content" source="./media/release-and-work-item-insights/5-create-api-key.png" lightbox="./media/release-and-work-item-insights/5-create-api-key.png" alt-text="Screenshot that shows selecting the Create API Key on the API Access tab.":::
+
+1. In the **Create API key** window, enter a description, select **Write annotations**, and then select **Generate key**. Copy the new key.
+
+ :::image type="content" source="./media/release-and-work-item-insights/6-create-api-key.png" lightbox="./media/release-and-work-item-insights/6-create-api-key.png" alt-text="Screenshot that shows in the Create API key window, entering a description, selecting Write annotations, and then selecting the Generate key.":::
+
+1. In the release template window, on the **Variables** tab, select **Add** to create a variable definition for the new API key.
+
+1. Under **Name**, enter **ApiKey**. Under **Value**, paste the API key you copied from the **API Access** tab.
+
+ :::image type="content" source="./media/release-and-work-item-insights/7-paste-api-key.png" lightbox="./media/release-and-work-item-insights/7-paste-api-key.png" alt-text="Screenshot that shows in the Azure DevOps Variables tab, selecting Add, naming the variable ApiKey, and pasting the API key under Value.":::
+
+1. Select **Save** in the main release template window to save the template.
+
+ > [!NOTE]
+ > Limits for API keys are described in the [REST API rate limits documentation](/rest/api/yammer/rest-api-rate-limits).
+
+### Transition to the new release annotation
+
+To use the new release annotations:
+1. [Remove the Release Annotations extension](/azure/devops/marketplace/uninstall-disable-extensions).
+1. Remove the Application Insights Release Annotation task in your Azure Pipelines deployment.
+1. Create new release annotations with [Azure Pipelines](#release-annotations-with-azure-pipelines-build) or the [Azure CLI](#create-release-annotations-with-the-azure-cli).
+
+## [Work item integration](#tab/work-item-integration)
+
+Work item integration functionality allows you to easily create work items in GitHub or Azure DevOps that have relevant Application Insights data embedded in them.
++
+The new work item integration offers the following features over [classic](#classic-work-item-integration):
+- Advanced fields like assignee, projects, or milestones.
+- Repo icons so you can differentiate between GitHub & Azure DevOps workbooks.
+- Multiple configurations for any number of repositories or work items.
+- Deployment through Azure Resource Manager templates.
+- Pre-built & customizable Keyword Query Language (KQL) queries to add Application Insights data to your work items.
+- Customizable workbook templates.
++
+## Create and configure a work item template
+
+1. To create a work item template, go to your Application Insights resource. On the left, under *Configure*, select **Work Items**, and then at the top select **Create a new template**.
+
+ :::image type="content" source="./media/release-and-work-item-insights/create-work-item-template.png" alt-text=" Screenshot of the Work Items tab with create a new template selected." lightbox="./media/release-and-work-item-insights/create-work-item-template.png":::
+
+ You can also create a work item template from the end-to-end transaction details tab, if no template currently exists. Select an event and on the right select **Create a work item**, then **Start with a workbook template**.
+
+ :::image type="content" source="./media/release-and-work-item-insights/create-template-from-transaction-details.png" alt-text=" Screenshot of end-to-end transaction details tab with create a work item, start with a workbook template selected." lightbox="./media/release-and-work-item-insights/create-template-from-transaction-details.png":::
+
+2. After you select **Create a new template**, you can choose your tracking systems, name your workbook, link to your selected tracking system, and choose a region in which to store the template (the default is the region where your Application Insights resource is located). The URL parameters are the default URL for your repository, for example, `https://github.com/myusername/reponame` or `https://mydevops.visualstudio.com/myproject`.
+
+ :::image type="content" source="./media/release-and-work-item-insights/create-workbook.png" alt-text=" Screenshot of create a new work item workbook template.":::
+
+ You can set specific work item properties directly from the template itself, including the assignee, iteration path, and projects, depending on your version control provider.
+
+## Create a work item
+
+ You can access your new template from any End-to-end transaction details that you can access from Performance, Failures, Availability, or other tabs.
+
+1. To create a work item, go to **End-to-end transaction details**, select an event, and then select **Create work item** and choose your work item template.
+
+ :::image type="content" source="./media/release-and-work-item-insights/create-work-item.png" alt-text=" Screenshot of end to end transaction details tab with create work item selected." lightbox="./media/release-and-work-item-insights/create-work-item.png":::
+
+1. A new browser tab opens to your selected tracking system. In Azure DevOps you can create a bug or task, and in GitHub you can create a new issue in your repository. A new work item is automatically created with contextual information provided by Application Insights.
+
+ :::image type="content" source="./media/release-and-work-item-insights/github-work-item.png" alt-text=" Screenshot of automatically created GitHub issue." lightbox="./media/release-and-work-item-insights/github-work-item.png":::
+
+ :::image type="content" source="./media/release-and-work-item-insights/azure-devops-work-item.png" alt-text=" Screenshot of automatically created bug in Azure DevOps." lightbox="./media/release-and-work-item-insights/azure-devops-work-item.png":::
+
+## Edit a template
+
+To edit your template, go to the **Work Items** tab under *Configure* and select the pencil icon next to the workbook you would like to update.
++
+Select edit :::image type="icon" source="./media/release-and-work-item-insights/edit-icon.png"::: in the top toolbar.
++
+You can create more than one work item configuration and have a custom workbook to meet each scenario. The workbooks can also be deployed by Azure Resource Manager, ensuring standard implementations across your environments.
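+
+A sketch of such a deployment with the Azure CLI; the resource group and template file name are placeholder assumptions, with the template exported from the workbook's ARM template view:
+
+```powershell
+# Hypothetical names: deploy a work item workbook template as an ARM deployment
+az deployment group create --resource-group "<resource-group>" --template-file "./work-item-workbook.json"
+```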
+
+## Classic work item integration
+
+1. In your Application Insights resource under *Configure* select **Work Items**.
+1. Select **Switch to Classic**, fill out the fields with your information, and authorize.
+
+ :::image type="content" source="./media/release-and-work-item-insights/classic.png" alt-text=" Screenshot of how to configure classic work items." lightbox="./media/release-and-work-item-insights/classic.png":::
+
+1. To create a work item, go to the end-to-end transaction details, select an event, and then select **Create work item (Classic)**.
+
+### Migrate to new work item integration
+
+To migrate, delete your classic work item configuration then [create and configure a work item template](#create-and-configure-a-work-item-template) to recreate your integration.
+
+To delete the classic configuration, go to your Application Insights resource, select **Work Items** under *Configure*, then select **Switch to Classic** and **Delete** at the top.
+++
+## See also
+
+* [Azure Pipelines documentation](/azure/devops/pipelines)
+* [Create work items](./diagnostic-search.md#create-work-item)
+* [Automation with PowerShell](./powershell.md)
+* [Availability test](availability-overview.md)
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md
The build version number is generated only by the Microsoft Build Engine, not by
### Release annotations
-If you use Azure DevOps, you can [get an annotation marker](../../azure-monitor/app/annotations.md) added to your charts whenever you release a new version.
+If you use Azure DevOps, you can [get an annotation marker](./release-and-work-item-insights.md?tabs=release-annotations) added to your charts whenever you release a new version.
## Frequently asked questions
Unique customizations that commonly need to be manually re-created or updated fo
- Re-create availability alerts.
- Re-create any custom Azure role-based access control settings that are required for your users to access the new resource.
- Replicate settings involving ingestion sampling, data retention, daily cap, and custom metrics enablement. These settings are controlled via the **Usage and estimated costs** pane.
-- Any integration that relies on API keys, such as [release annotations](./annotations.md) and [live metrics secure control channel](./live-stream.md#secure-the-control-channel). You need to generate new API keys and update the associated integration.
+- Any integration that relies on API keys, such as [release annotations](./release-and-work-item-insights.md?tabs=release-annotations) and [live metrics secure control channel](./live-stream.md#secure-the-control-channel). You need to generate new API keys and update the associated integration.
- Continuous export in classic resources must be configured again. - Diagnostic settings in workspace-based resources must be configured again.
azure-monitor Work Item Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/work-item-integration.md
- Title: Work Item Integration - Application Insights
-description: Learn how to create work items in GitHub or Azure DevOps with Application Insights data embedded in them.
- Previously updated : 06/27/2021---
-# Work Item Integration
-
-Work item integration functionality allows you to easily create work items in GitHub or Azure DevOps that have relevant Application Insights data embedded in them.
--
-The new work item integration offers the following features over [classic](#classic-work-item-integration):
-- Advanced fields like assignee, projects, or milestones.
-- Repo icons so you can differentiate between GitHub & Azure DevOps workbooks.
-- Multiple configurations for any number of repositories or work items.
-- Deployment through Azure Resource Manager templates.
-- Pre-built & customizable Keyword Query Language (KQL) queries to add Application Insights data to your work items.
-- Customizable workbook templates.
-
-
-## Create and configure a work item template
-
-1. To create a work item template, go to your Application Insights resource. On the left, under *Configure*, select **Work Items**, and then at the top select **Create a new template**.
-
- :::image type="content" source="./media/work-item-integration/create-work-item-template.png" alt-text=" Screenshot of the Work Items tab with create a new template selected." lightbox="./media/work-item-integration/create-work-item-template.png":::
-
- You can also create a work item template from the end-to-end transaction details tab, if no template currently exists. Select an event and on the right select **Create a work item**, then **Start with a workbook template**.
-
- :::image type="content" source="./media/work-item-integration/create-template-from-transaction-details.png" alt-text=" Screenshot of end-to-end transaction details tab with create a work item, start with a workbook template selected." lightbox="./media/work-item-integration/create-template-from-transaction-details.png":::
-
-2. After you select **Create a new template**, you can choose your tracking systems, name your workbook, link to your selected tracking system, and choose a region in which to store the template (the default is the region where your Application Insights resource is located). The URL parameters are the default URL for your repository, for example, `https://github.com/myusername/reponame` or `https://mydevops.visualstudio.com/myproject`.
-
- :::image type="content" source="./media/work-item-integration/create-workbook.png" alt-text=" Screenshot of create a new work item workbook template.":::
-
- You can set specific work item properties directly from the template itself, including the assignee, iteration path, and projects, depending on your version control provider.
-
-## Create a work item
-
- You can access your new template from any End-to-end transaction details that you can access from Performance, Failures, Availability, or other tabs.
-
-1. To create a work item, go to **End-to-end transaction details**, select an event, and then select **Create work item** and choose your work item template.
-
- :::image type="content" source="./media/work-item-integration/create-work-item.png" alt-text=" Screenshot of end to end transaction details tab with create work item selected." lightbox="./media/work-item-integration/create-work-item.png":::
-
-1. A new browser tab opens to your selected tracking system. In Azure DevOps you can create a bug or task, and in GitHub you can create a new issue in your repository. A new work item is automatically created with contextual information provided by Application Insights.
-
- :::image type="content" source="./media/work-item-integration/github-work-item.png" alt-text=" Screenshot of automatically created GitHub issue" lightbox="./media/work-item-integration/github-work-item.png":::
-
- :::image type="content" source="./media/work-item-integration/azure-devops-work-item.png" alt-text=" Screenshot of automatically create bug in Azure DevOps." lightbox="./media/work-item-integration/azure-devops-work-item.png":::
-
-## Edit a template
-
-To edit your template, go to the **Work Items** tab under *Configure* and select the pencil icon next to the workbook you would like to update.
--
-Select edit :::image type="content" source="./media/work-item-integration/edit-icon.png" lightbox="./media/work-item-integration/edit-icon.png" alt-text="edit icon"::: in the top toolbar.
--
-You can create more than one work item configuration and have a custom workbook to meet each scenario. The workbooks can also be deployed by Azure Resource Manager, ensuring standard implementations across your environments.
-
-## Classic work item integration
-
-1. In your Application Insights resource under *Configure* select **Work Items**.
-1. Select **Switch to Classic**, fill out the fields with your information, and authorize.
-
- :::image type="content" source="./media/work-item-integration/classic.png" alt-text=" Screenshot of how to configure classic work items." lightbox="./media/work-item-integration/classic.png":::
-
-1. To create a work item, go to the end-to-end transaction details, select an event, and then select **Create work item (Classic)**.
--
-### Migrate to new work item integration
-
-To migrate, delete your classic work item configuration then [create and configure a work item template](#create-and-configure-a-work-item-template) to recreate your integration.
-
-To delete the classic configuration, go to your Application Insights resource, select **Work Items** under *Configure*, then select **Switch to Classic** and **Delete** at the top.
--
-## Next steps
-[Availability test](availability-overview.md)
-
azure-monitor Monitor Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/monitor-kubernetes.md
Following are common scenarios for monitoring your application.
**Health monitoring**<br>
- Create an [Availability test](../app/availability-overview.md) in Application Insights to create a recurring test to monitor the availability and responsiveness of your application.
- Use the [SLA report](../app/sla-report.md) to calculate and report SLA for web tests.
-- Use [annotations](../app/annotations.md) to identify when a new build is deployed so that you can visually inspect any change in performance after the update.
+- Use [annotations](../app/release-and-work-item-insights.md?tabs=release-annotations) to identify when a new build is deployed so that you can visually inspect any change in performance after the update.
**Application logs**<br>
- Container insights sends stdout/stderr logs to a Log Analytics workspace. See [Resource logs](../../aks/monitor-aks-reference.md#resource-logs) for a description of the different logs and [Kubernetes Services](/azure/azure-monitor/reference/tables/tables-resourcetype#kubernetes-services) for a list of the tables each is sent to.
azure-monitor Data Collection Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md
Last updated 07/17/2023 ms.reviwer: nikeist- # Data collection transformations in Azure Monitor
The following example is a DCR for data from the Logs Ingestion API that sends d
### Combination of Azure and custom tables
-The following example is a DCR for data from the Logs Ingestion API that sends data to both the `Syslog` table and a custom table with the data in a different format. This DCR requires a separate `dataFlow` for each with a different `transformKql` and `OutputStream` for each.
+The following example is a DCR for data from the Logs Ingestion API that sends data to both the `Syslog` table and a custom table with the data in a different format. This DCR requires a separate `dataFlow` for each with a different `transformKql` and `OutputStream` for each. When using custom tables, it is important to ensure that the schema of the destination (your custom table) contains the custom columns ([how-to add or delete custom columns](../logs/create-custom-table.md#add-or-delete-a-custom-column)) that match the schema of the records you are sending. For instance, if your record has a field called SyslogMessage, but the destination custom table only has TimeGenerated and RawData, you'll receive an event in the custom table with only the TimeGenerated field populated and the RawData field will be empty. The SyslogMessage field will be dropped because the schema of the destination table doesn't contain a string field called SyslogMessage.
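+
+If you hit this mismatch, one way to fix it is to add the missing columns to the destination custom table before sending data. Here's a sketch with the Azure CLI; the workspace and table names are placeholders:
+
+```powershell
+# Hypothetical names: add a SyslogMessage column alongside the existing columns of a custom table
+az monitor log-analytics workspace table update --resource-group "<resource-group>" --workspace-name "<workspace-name>" --name "MyTable_CL" --columns TimeGenerated=datetime RawData=string SyslogMessage=string
+```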
```json
{
The following example is a DCR for data from the Logs Ingestion API that sends d
## Next steps

[Create a data collection rule](../agents/data-collection-rule-azure-monitor-agent.md) and an association to it from a virtual machine by using Azure Monitor Agent.
+
azure-monitor Computer Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/computer-groups.md
Computer groups in Azure Monitor allow you to scope [log queries](./log-query-ov
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-log-analytics-rebrand.md)]
+## Permissions required
+
+| Action | Permissions required |
+|:|:|
+| Create a computer group from a log query. | `microsoft.operationalinsights/workspaces/savedSearches/write` permissions to the Log Analytics workspace where you want to create the computer group, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example. |
+| Run a computer group's log search or use a computer group in a log query. | `Microsoft.OperationalInsights/workspaces/query/*/read` permissions to the Log Analytics workspaces you query, as provided by the [Log Analytics Reader built-in role](./manage-access.md#log-analytics-reader), for example. |
+| Delete a computer group. | `microsoft.operationalinsights/workspaces/savedSearches/delete` permissions to the Log Analytics workspace where the computer group is saved, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example. |
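+
+For example, one way to grant a user the permissions to create computer groups is a role assignment at workspace scope. This is a sketch; the assignee and workspace are placeholders:
+
+```powershell
+# Hypothetical names: assign the Log Analytics Contributor role at workspace scope
+az role assignment create --assignee "user@contoso.com" --role "Log Analytics Contributor" --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
+```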
+
## Creating a computer group

You can create a computer group in Azure Monitor using the methods in the following table. Details on each method are provided in the sections below.
azure-monitor Cross Workspace Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cross-workspace-query.md
There are two methods to query data that's stored in multiple workspaces and app
> [!IMPORTANT] > If you're using a [workspace-based Application Insights resource](../app/create-workspace-resource.md), telemetry is stored in a Log Analytics workspace with all other log data. Use the `workspace()` expression to write a query that includes applications in multiple workspaces. For multiple applications in the same workspace, you don't need a cross-workspace query.
+## Permissions required
+
+- You must have `Microsoft.OperationalInsights/workspaces/query/*/read` permissions to the Log Analytics workspaces you query, as provided by the [Log Analytics Reader built-in role](./manage-access.md#log-analytics-reader), for example.
+- To save a query, you must have `microsoft.operationalinsights/querypacks/queries/action` permissions to the query pack where you want to save the query, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example.
+ ## Cross-resource query limits * The number of Application Insights components and Log Analytics workspaces that you can include in a single query is limited to 100. * Cross-resource queries in log alerts are only supported in the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). If you're using the legacy Log Analytics Alerts API, you'll need to [switch to the current API](../alerts/alerts-log-api-switch.md).
-* References to a cross resource, such as another workspace, should be explicit and can't be parameterized. See [Identify workspace resources](#identify-workspace-resources) for examples.
+* References to a cross resource, such as another workspace, should be explicit and can't be parameterized. See [Gather identifiers for Log Analytics workspaces](?tabs=workspace-identifier#gather-identifiers-for-log-analytics-workspaces-and-application-insights-resources) for examples.
+
+## Gather identifiers for Log Analytics workspaces and Application Insights resources
-## Query across Log Analytics workspaces and from Application Insights
To reference another workspace in your query, use the [workspace](../logs/workspace-expression.md) identifier. For an app from Application Insights, use the [app](./app-expression.md) identifier.
-### Identify workspace resources
+### [Workspace identifier](#tab/workspace-identifier)
You can identify a workspace using one of these IDs:
You can identify a workspace using one of these IDs:
workspace("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/ContosoAzureHQ/providers/Microsoft.OperationalInsights/workspaces/contosoretail-it").Update | count ```
-### Identify an application
+### [App identifier](#tab/app-identifier)
The following examples return a summarized count of requests made against an app named *fabrikamapp* in Application Insights. You can identify an app using one of these IDs:
You can identify an app using one of these IDs:
app("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp").requests | count ```
-### Perform a query across multiple resources
++
+## Query across Log Analytics workspaces and from Application Insights
+
+Follow the instructions in this section to query with or without using a function.
+
+### Query without using a function
You can query multiple resources from any of your resource instances. These resources can be workspaces and apps combined. Example for a query across three workspaces:
union
| summarize dcount(Computer) by Classification ```
-## Use a cross-resource query for multiple resources
+For more information on the union, where, and summarize operators, see [union operator](/azure/data-explorer/kusto/query/unionoperator), [where operator](/azure/data-explorer/kusto/query/whereoperator), and [summarize operator](/azure/data-explorer/kusto/query/summarizeoperator).
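You can also run this kind of cross-workspace query from the command line. A sketch using the Azure CLI, with placeholder workspace identifiers:

```azurecli
# --workspace takes the workspace GUID of the primary workspace; the
# workspace() expression references the second workspace by name.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "union Update, workspace('contosoretail-it').Update | where TimeGenerated >= ago(1h) | summarize dcount(Computer) by Classification"
```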
+
+### Query by using a function
When you use cross-resource queries to correlate data from multiple Log Analytics workspaces and Application Insights components, the query can become complex and difficult to maintain. You should make use of [functions in Azure Monitor log queries](./functions.md) to separate the query logic from the scoping of the query resources. This method simplifies the query structure. The following example demonstrates how you can monitor multiple Application Insights components and visualize the count of failed requests by application name. Create a query like the following example that references the scope of Application Insights components. The `withsource= SourceApp` command adds a column that designates the application name that sent the log. [Save the query as a function](./functions.md#create-a-function) with the alias `applicationsScoping`.
azure-monitor Ingest Logs Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/ingest-logs-event-hub.md
Azure Monitor currently supports ingestion from Event Hubs in these regions:
## Collect required information
-You need your subscription ID, resource group name, workspace name, workspace resource ID, and event hub resource ID in subsequent steps:
+You need your subscription ID, resource group name, workspace name, workspace resource ID, and event hub instance resource ID in subsequent steps:
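As an alternative to the portal steps that follow, you can gather these values with the Azure CLI. A sketch with placeholder resource names:

```azurecli
# Workspace resource ID
az monitor log-analytics workspace show \
  --resource-group "my-resource-group" \
  --workspace-name "my-workspace" \
  --query id --output tsv

# Event hub instance resource ID
az eventhubs eventhub show \
  --resource-group "my-resource-group" \
  --namespace-name "my-namespace" \
  --name "my-event-hub" \
  --query id --output tsv
```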
1. Navigate to your workspace in the **Log Analytics workspaces** menu and select **Properties** and copy your **Subscription ID**, **Resource group**, and **Workspace name**. You'll need these details to create resources in this tutorial.
You need your subscription ID, resource group name, workspace name, workspace re
:::image type="content" source="media/ingest-logs-event-hub/log-analytics-workspace-id.png" lightbox="media/ingest-logs-event-hub/log-analytics-workspace-id.png" alt-text="Screenshot showing the Resource JSON screen with the workspace resource ID highlighted.":::
-1. Navigate to your event hub instance, select **JSON** to open the **Resource JSON** screen, and copy the event hub's **Resource ID**. You'll need the event hub's resource ID to associate the data collection rule with the event hub.
+1. Navigate to your event hub instance, select **JSON** to open the **Resource JSON** screen, and copy the event hub instance's **Resource ID**. You'll need the event hub instance's resource ID to associate the data collection rule with the event hub.
:::image type="content" source="media/ingest-logs-event-hub/event-hub-resource-id.png" lightbox="media/ingest-logs-event-hub/event-hub-resource-id.png" alt-text="Screenshot showing the Resource JSON screen with the event hub resource ID highlighted."::: ## Create a destination table in your Log Analytics workspace
To create a data collection rule association in the Azure portal:
1. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection rule association and then provide values for the parameters defined in the template, including: - **Region** - Populated automatically based on the resource group you select.
- - **Event Hub Resource ID** - See [Collect required information](#collect-required-information).
+ - **Event Hub Instance Resource ID** - See [Collect required information](#collect-required-information).
- **Association Name** - Give the association a name. - **Data Collection Rule ID** - Generated when you [create the data collection rule](#create-a-data-collection-rule).
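As an alternative to the template deployment above, you can create the association directly with the Azure CLI. A sketch with placeholder IDs; the command may require the `monitor-control-service` extension:

```azurecli
# Associate the data collection rule with the event hub instance.
az monitor data-collection rule association create \
  --name "my-eventhub-association" \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace>/eventhubs/<event-hub>" \
  --rule-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>"
```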
azure-monitor Move Workspace Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/move-workspace-region.md
The following procedures show how to prepare the workspace and resources for the
| summarize max(TimeGenerated) by Type ```
-After data sources are connected to the target workspace, ingested data is stored in the target workspace. Older data stays in the original workspace and is subject to the retention policy. You can perform a [cross-workspace query](./cross-workspace-query.md#perform-a-query-across-multiple-resources). If both workspaces were assigned the same name, use a qualified name (*subscriptionName/resourceGroup/componentName*) in the workspace reference.
+After data sources are connected to the target workspace, ingested data is stored in the target workspace. Older data stays in the original workspace and is subject to the retention policy. You can perform a [cross-workspace query](./cross-workspace-query.md). If both workspaces were assigned the same name, use a qualified name (*subscriptionName/resourceGroup/componentName*) in the workspace reference.
Here's an example for a query across two workspaces that have the same name:
If you want to discard the source workspace, delete the exported resources or th
## Clean up
-While new data is being ingested to your new workspace, older data in the original workspace remains available for query and is subject to the retention policy defined in the workspace. We recommend that you keep the original workspace for as long as you need older data to [query across](./cross-workspace-query.md#perform-a-query-across-multiple-resources) workspaces.
+While new data is being ingested to your new workspace, older data in the original workspace remains available for query and is subject to the retention policy defined in the workspace. We recommend that you keep the original workspace for as long as you need older data to [query across](./cross-workspace-query.md) workspaces.
If you no longer need access to older data in the original workspace:
azure-monitor Query Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-optimization.md
A query that spans more than five workspaces is considered a query that consumes
> [!IMPORTANT] > - In some multi-workspace scenarios, the CPU and data measurements won't be accurate and will represent the measurement of only a few of the workspaces.
-> - Cross workspace queries having an explicit identifier: workspace ID, or workspace Azure Resource ID, consume less resources and are more performant. See [Create a log query across multiple workspaces](./cross-workspace-query.md#identify-workspace-resources)
+> - Cross workspace queries having an explicit identifier: workspace ID, or workspace Azure Resource ID, consume less resources and are more performant. See [Gather identifiers for Log Analytics workspaces](./cross-workspace-query.md?tabs=workspace-identifier#gather-identifiers-for-log-analytics-workspaces-and-application-insights-resources)
## Parallelism Azure Monitor Logs uses large clusters of Azure Data Explorer to run queries. These clusters vary in scale and potentially get up to dozens of compute nodes. The system automatically scales the clusters according to workspace placement logic and capacity.
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
You may need to integrate Azure Monitor with other systems or to build custom so
|[API](/rest/api/monitor/)|Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. With APIs, you have unlimited possibilities to build custom solutions that integrate with Azure Monitor.| |[Azure Logic Apps](../logic-apps/logic-apps-overview.md)|Azure Logic Apps is a service you can use to automate tasks and business processes by using workflows that integrate with different systems and services with little or no code. Activities are available that read and write metrics and logs in Azure Monitor. You can use Logic Apps to [customize responses and perform other actions in response to Azure Monitor alerts](alerts/alerts-logic-apps.md). You can also perform other [more complex actions](logs/logicapp-flow-connector.md) when the Azure Monitor infrastructure doesn't already supply a built-in method.| |[Azure Functions](../azure-functions/functions-overview.md)| Similar to Azure Logic Apps, Azure Functions gives you the ability to preprocess and postprocess monitoring data as well as perform complex actions beyond the scope of typical Azure Monitor alerts. Azure Functions uses code, however, providing additional flexibility over Logic Apps.
-|Azure DevOps and GitHub | Azure Monitor Application Insights gives you the ability to create [Work Item Integration](app/work-item-integration.md) with monitoring data embedding in it. Additional options include [release annotations](app/annotations.md) and [continuous monitoring](app/continuous-monitoring.md). |
+|Azure DevOps and GitHub | Azure Monitor Application Insights gives you the ability to create [Work Item Integration](app/release-and-work-item-insights.md?tabs=work-item-integration) with monitoring data embedding in it. Additional options include [release annotations](app/release-and-work-item-insights.md?tabs=release-annotations) and [continuous monitoring](app/release-and-work-item-insights.md?tabs=continuous-monitoring). |
## Next steps
azure-resource-manager Request Limits And Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/request-limits-and-throttling.md
Title: Request limits and throttling description: Describes how to use throttling with Azure Resource Manager requests when subscription limits have been reached. Previously updated : 03/02/2023 Last updated : 10/05/2023 # Throttling Resource Manager requests
The Microsoft.Network resource provider applies the following throttle limits:
| write / delete (PUT) | 1000 per 5 minutes | | read (GET) | 10000 per 5 minutes |
-> [!NOTE]
-> **Azure DNS** and **Azure Private DNS** have a throttle limit of 500 read (GET) operations per 5 minutes.
->
+In addition to those general limits, the following limits apply to DNS operations:
+
+| DNS Zone Operation | Limit (per zone) |
+| | -- |
+| Create or Update | 40 per minute |
+| Delete | 40 per minute |
+| Get | 1000 per minute |
+| List | 60 per minute |
+| List By Resource Group | 60 per minute |
+| Update | 40 per minute |
+
+| DNS Record Set Operation | Limit (per zone) |
+| | -- |
+| Create or Update | 200 per minute |
+| Delete | 200 per minute |
+| Get | 1000 per minute |
+| List By DNS Zone | 60 per minute |
+| List By Type | 60 per minute |
+| Update | 200 per minute |
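To see how close a subscription is to these limits, you can inspect the `x-ms-ratelimit-*` response headers that Resource Manager returns on each call. A bash sketch with a placeholder subscription ID:

```bash
# Dump only the response headers from an ARM read call and filter for the
# remaining-quota headers (for example, x-ms-ratelimit-remaining-subscription-reads).
token=$(az account get-access-token --query accessToken --output tsv)
curl -sS -D - -o /dev/null \
  -H "Authorization: Bearer $token" \
  "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups?api-version=2021-04-01" \
  | grep -i "x-ms-ratelimit"
```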
### Compute throttling
azure-web-pubsub Howto Develop Reliable Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-reliable-clients.md
Last updated 01/12/2023
When WebSocket client connections drop due to intermittent network issues, messages can be lost. In a pub/sub system, publishers are decoupled from subscribers, so publishers may not detect a subscriber's dropped connection or message loss. It's crucial for clients to overcome intermittent network issues and maintain reliable message delivery. To achieve that, you can create a reliable WebSocket client with the help of reliable Azure Web PubSub subprotocols.
-> [!NOTE]
-> Reliable protocols are still in preview. Some changes are expected in the future.
- ## Reliable Protocol The Web PubSub service supports two reliable subprotocols `json.reliable.webpubsub.azure.v1` and `protobuf.reliable.webpubsub.azure.v1`. Clients must follow the publisher, subscriber, and recovery parts of the subprotocol to achieve reliability. Failing to properly implement the subprotocol may result in the message delivery not working as expected or the service terminating the client due to protocol violations. ## The Easy Way - Use Client SDK
-The simplest way to create a reliable client is to use Client SDK. Client SDK implements [Web PubSub client specification](./reference-client-specification.md) and uses `json.reliable.webpubsub.azure.v1` by default. Please refer to [PubSub with client SDK](./quickstart-use-client-sdk.md) for quick start.
+The simplest way to create a reliable client is to use the Client SDK. The Client SDK implements the [Web PubSub client specification](./reference-client-specification.md) and uses `json.reliable.webpubsub.azure.v1` by default. Refer to [Publish/subscribe among clients](./quickstarts-pubsub-among-clients.md) for a quickstart.
## The Hard Way - Implement by hand
azure-web-pubsub Reference Json Reliable Webpubsub Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-json-reliable-webpubsub-subprotocol.md
The JSON WebSocket subprotocol, `json.reliable.webpubsub.azure.v1`, enables the
This document describes the subprotocol `json.reliable.webpubsub.azure.v1`.
-> [!NOTE]
-> Reliable protocols are still in preview. Some changes are expected in the future.
- When WebSocket client connections drop due to intermittent network issues, messages can be lost. In a pub/sub system, publishers are decoupled from subscribers and may not detect a subscriber's dropped connection or message loss. To overcome intermittent network issues and maintain reliable message delivery, you can use the Azure WebPubSub `json.reliable.webpubsub.azure.v1` subprotocol to create a *Reliable PubSub WebSocket client*.
azure-web-pubsub Reference Protobuf Reliable Webpubsub Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-protobuf-reliable-webpubsub-subprotocol.md
This document describes the subprotocol `protobuf.reliable.webpubsub.azure.v1`.
When a client is using this subprotocol, both the outgoing and incoming data frames are expected to be protocol buffers (protobuf) payloads.
-> [!NOTE]
-> Reliable protocols are still in preview. Some changes are expected in future.
- ## Overview Subprotocol `protobuf.reliable.webpubsub.azure.v1` empowers the client to have a highly reliable message delivery experience under network issues and to do publish-subscribe (PubSub) directly instead of doing a round trip to the upstream server. The WebSocket connection with the `protobuf.reliable.webpubsub.azure.v1` subprotocol is called a Reliable PubSub WebSocket client.
azure-web-pubsub Resource Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/resource-faq.md
Azure Web PubSub service is more suitable for situations where:
## Where does my data reside?
-Azure Web PubSub does not store any customer data. If you use Azure Web PubSub service together with other Azure services, like Azure Storage for diagnostics, see [this white paper](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/) for guidance about how to keep data residency in Azure regions.
+Azure Web PubSub does not store any customer data. If you use Azure Web PubSub service together with other Azure services, like Azure Storage for diagnostics, see [Azure Privacy Overview (white paper)](https://go.microsoft.com/fwlink/p/?linkid=2220836) for guidance about how to keep data residency in Azure regions.
connectors Connectors Native Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-http.md
Title: Call service endpoints by using HTTP or HTTPS
-description: Send outbound HTTP or HTTPS requests to service endpoints from Azure Logic Apps.
+ Title: Call external service endpoints from workflows
+description: Send outbound HTTP or HTTPS requests to service endpoints from workflows in Azure Logic Apps.
ms.suite: integration Previously updated : 05/31/2022 Last updated : 10/06/2023 tags: connectors
-# Call service endpoints over HTTP or HTTPS from Azure Logic Apps
+# Call external service endpoints over HTTP or HTTPS from workflows in Azure Logic Apps
-With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the built-in HTTP trigger or action, you can create automated tasks and workflows that can send outbound requests to endpoints on other services and systems over HTTP or HTTPS. To receive and respond to inbound HTTPS calls instead, use the built-in [Request trigger and Response action](../connectors/connectors-native-reqres.md).
+
+This how-to guide shows how to create a logic app workflow that can send outbound requests to endpoints on other services and systems over HTTP or HTTPS. To receive and respond to inbound HTTPS calls instead, use the built-in [Request trigger and Response action](../connectors/connectors-native-reqres.md).
For example, you can monitor a service endpoint for your website by checking that endpoint on a specific schedule. When the specified event happens at that endpoint, such as your website going down, the event triggers your logic app's workflow and runs the actions in that workflow.
For information about encryption, security, and authorization for outbound calls
* Basic knowledge about how to create logic app workflows. If you're new to logic apps, see [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md)?
-* The logic app from where you want to call the target endpoint. To start with the HTTP trigger, you'll need a blank logic app workflow. To use the HTTP action, start your logic app with any trigger that you want. This example uses the HTTP trigger as the first step.
+* The logic app workflow from where you want to call the target endpoint. To start with the HTTP trigger, you have to start with a blank workflow. To use the HTTP action, start your workflow with any trigger that you want. This example uses the HTTP trigger as the first step.
<a name="http-trigger"></a>
For information about encryption, security, and authorization for outbound calls
This built-in trigger makes an HTTP call to the specified URL for an endpoint and returns a response.
-1. Sign in to the [Azure portal](https://portal.azure.com). Open your blank logic app in Logic App Designer.
-
-1. Under the designer's search box, select **Built-in**. In the search box, enter `http` as your filter. From the **Triggers** list, select the **HTTP** trigger.
+1. In the [Azure portal](https://portal.azure.com), open your logic app and blank workflow in the designer.
- ![Select HTTP trigger](./media/connectors-native-http/select-http-trigger.png)
+1. [Follow these general steps to add the built-in trigger named **HTTP** to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
- This example renames the trigger to "HTTP trigger" so that the step has a more descriptive name. Also, the example later adds an HTTP action, and both names must be unique.
+ This example renames the trigger to **HTTP trigger** so that the trigger has a more descriptive name. Also, the example later adds an HTTP action, and both names must be unique.
1. Provide the values for the [HTTP trigger parameters](../logic-apps/logic-apps-workflow-actions-triggers.md#http-trigger) that you want to include in the call to the target endpoint. Set up the recurrence for how often you want the trigger to check the target endpoint.
This built-in trigger makes an HTTP call to the specified URL for an endpoint an
1. To add other available parameters, open the **Add new parameter** list, and select the parameters that you want.
-1. Continue building your logic app's workflow with actions that run when the trigger fires.
+1. Continue building your workflow with actions that run when the trigger fires.
-1. When you're done, remember to save your logic app. On the designer toolbar, select **Save**.
+1. When you're done, remember to save your workflow. On the designer toolbar, select **Save**.
<a name="http-action"></a>
This built-in trigger makes an HTTP call to the specified URL for an endpoint an
This built-in action makes an HTTP call to the specified URL for an endpoint and returns a response.
-1. Sign in to the [Azure portal](https://portal.azure.com). Open your logic app in Logic App Designer.
-
- This example uses the HTTP trigger as the first step.
-
-1. Under the step where you want to add the HTTP action, select **New step**.
-
- To add an action between steps, move your pointer over the arrow between steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
+1. In the [Azure portal](https://portal.azure.com), open your logic app and workflow in the designer.
-1. Under **Choose an action**, select **Built-in**. In the search box, enter `http` as your filter. From the **Actions** list, select the **HTTP** action.
+ This example uses the HTTP trigger added in the previous section as the first step.
- ![Select HTTP action](./media/connectors-native-http/select-http-action.png)
+1. [Follow these general steps to add the built-in action named **HTTP** to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
- This example renames the action to "HTTP action" so that the step has a more descriptive name.
+ This example renames the action to **HTTP action** so that the step has a more descriptive name. Operation names in your workflow must be unique.
1. Provide the values for the [HTTP action parameters](../logic-apps/logic-apps-workflow-actions-triggers.md#http-action) that you want to include in the call to the target endpoint.
This built-in action makes an HTTP call to the specified URL for an endpoint and
1. To add other available parameters, open the **Add new parameter** list, and select the parameters that you want.
-1. When you're done, remember to save your logic app. On the designer toolbar, select **Save**.
+1. When you're done, remember to save your workflow. On the designer toolbar, select **Save**.
## Trigger and action outputs
Here's more information about the outputs from an HTTP trigger or action, which
| `headers` | JSON object | The headers from the request | | `body` | JSON object | The object with the body content from the request | | `status code` | Integer | The status code from the request |
-|||
| Status code | Description | |-|-|
Here's more information about the outputs from an HTTP trigger or action, which
| 403 | Forbidden | | 404 | Not Found | | 500 | Internal server error. Unknown error occurred. |
-|||
<a name="single-tenant-authentication"></a>
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
Title: Receive inbound or incoming HTTPS calls
-description: Receive and respond to HTTPS requests sent to workflows in Azure Logic Apps.
+ Title: Receive and respond to inbound HTTPS calls
+description: Receive and respond to inbound HTTPS requests received by workflows in Azure Logic Apps.
ms.suite: integration ms.reviewers: estfan, azla
Last updated 07/31/2023
tags: connectors
-# Receive incoming or inbound HTTPS calls or requests to workflows in Azure Logic Apps
+# Receive and respond to inbound HTTPS calls to workflows in Azure Logic Apps
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-This how-to guide shows how to run your logic app workflow after receiving an HTTPS call or request from another service by using the Request built-in trigger. When your workflow uses this trigger, you can then respond to the HTTPS request by using the Response built-in action.
+This how-to guide shows how to create a logic app workflow that can receive and handle an inbound HTTPS request or call from another service by using the Request built-in trigger. When your workflow uses this trigger, you can then respond to the HTTPS request by using the Response built-in action.
> [!NOTE] >
data-factory Tutorial Deploy Ssis Packages Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure.md
On the **Deployment settings** page of **Integration runtime setup** pane, you h
#### Creating SSISDB
-On the **Deployment settings** page of **Integration runtime setup** pane, if you want to deploy your packages into SSISDB (Project Deployment Model), select the **Create SSIS catalog (SSISDB) hosted by Azure SQL Database server/Managed Instance to store your projects/packages/environments/execution logs** check box. Alternatively, if you want to deploy your packages into file system, Azure Files, or SQL Server database (MSDB) hosted by Azure SQL Managed Instance (Package Deployment Model), no need to create SSISDB nor select the check box.
-
-Regardless of your deployment model, if you want to use SQL Server Agent hosted by Azure SQL Managed Instance to orchestrate/schedule your package executions, it's enabled by SSISDB, so select the check box anyway. For more information, see [Schedule SSIS package executions via Azure SQL Managed Instance Agent](./how-to-invoke-ssis-package-managed-instance-agent.md).
+On the **Deployment settings** page of the **Integration runtime setup** pane, select the **Create SSIS catalog (SSISDB) hosted by Azure SQL Database server/Managed Instance to store your projects/packages/environments/execution logs** check box in the following scenarios:
+  - Project Deployment Model: you deploy your packages into SSISDB.
+  - Regardless of deployment model, you use SQL Server Agent hosted by Azure SQL Managed Instance to orchestrate/schedule your package executions.
+
+  For more information, see [Schedule SSIS package executions via Azure SQL Managed Instance Agent](./how-to-invoke-ssis-package-managed-instance-agent.md).
+
+In the following scenario, you don't need to create SSISDB or select the check box:
+  - Package Deployment Model, and you don't use SQL Server Agent hosted by Azure SQL Managed Instance to orchestrate/schedule your package executions.
+
+  In this case, you deploy your packages into the file system, Azure Files, or the SQL Server database (MSDB) hosted by Azure SQL Managed Instance, and use a Data Factory pipeline to orchestrate/schedule your package executions.
If you select the check box, complete the following steps to bring your own database server to host SSISDB that we'll create and manage on your behalf.
defender-for-iot Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/billing.md
Enterprise IoT monitoring is billed based on the number of devices covered by yo
## Free trial
-If you would like to evaluate Defender for IoT, you can use a trial license for 60 days.
+If you would like to evaluate Defender for IoT, you can use a trial license:
- **For OT networks**, use a trial to deploy one or more Defender for IoT sensors on your network to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more. An OT trial supports a **Large** site license for 60 days. For more information, see [Start a Microsoft Defender for IoT trial](getting-started.md). -- **For Enterprise IoT networks**, use a trial to view alerts, recommendations, and vulnerabilities in Microsoft 365. An Enterprise IoT trial is not limited to a specific number of devices. For more information, see [Enable Enterprise IoT security with Defender for Endpoint](eiot-defender-for-endpoint.md).
+- **For Enterprise IoT networks**, use a 30-day trial to view alerts, recommendations, and vulnerabilities in Microsoft 365. An Enterprise IoT trial is not limited to a specific number of devices. For more information, see [Enable Enterprise IoT security with Defender for Endpoint](eiot-defender-for-endpoint.md).
## Defender for IoT devices
deployment-environments How To Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-access-environments.md
- Previously updated : 04/25/2023+ Last updated : 10/06/2023 # Create and access an environment by using the Azure CLI
Message: The environment resource was not found.
To resolve the issue, assign the correct permissions: [Give access to the development team](quickstart-create-and-configure-projects.md#give-access-to-the-development-team).
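For example, one way to grant that access is to assign the **Deployment Environments User** built-in role at the project scope. A sketch using the Azure CLI, with placeholder names:

```azurecli
# Grant a developer access to create environments in the project.
az role assignment create \
  --assignee "developer@contoso.com" \
  --role "Deployment Environments User" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DevCenter/projects/<project-name>"
```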
-## Access an environment
+### Access an environment
To access an environment:
To access an environment:
1. View the access endpoints to various resources as defined in the ARM template outputs. 1. Access the specific resources by using the endpoints.
+
+### Deploy an environment
+
+```azurecli
+az devcenter dev environment deploy-action --action-id "deploy" --dev-center-name <devcenter-name> \
+ -g <resource-group-name> --project-name <project-name> --environment-name <environment-name> --parameters <parameters-json-string>
+```
+
+### Delete an environment
+
+```azurecli
+az devcenter dev environment delete --dev-center-name <devcenter-name> --project-name <project-name> --environment-name <environment-name> --user-id "me"
+```
## Next steps
deployment-environments How To Manage Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-manage-environments.md
Previously updated : 04/25/2023 Last updated : 10/06/2023 # Manage your deployment environment
-In Azure Deployment Environments, a platform engineer gives developers access to projects and the environment types that are associated with them. After a developer has access, they can create deployment environments based on the pre-configured environment types. The permissions that the creator of the environment and the rest of team have to access the environment's resources are defined in the specific environment type.
+In Azure Deployment Environments, a platform engineer gives developers access to projects and the environment types that are associated with them. After a developer has access, they can create deployment environments based on the preconfigured environment types. The permissions that the creator of the environment and the rest of team have to access the environment's resources are defined in the specific environment type.
As a developer, you can create and manage your environments from the developer portal or by using the Azure CLI.
You can delete your environment completely when you don't need it anymore.
## Manage an environment by using the Azure CLI
-The Azure CLI provides a command-line interface for speed and efficiency when you create multiple similar environments, or for platforms where resources like memory are limited. You can use the following commands to create, list, deploy, or delete an environment.
+The Azure CLI provides a command-line interface for speed and efficiency when you create multiple similar environments, or for platforms where resources like memory are limited. You can use the `devcenter` Azure CLI extension to create, list, deploy, or delete an environment.
-To learn how to use the Deployment Environments Azure CLI extension, see [Configure Azure Deployment Environments by using the Azure CLI](https://aka.ms/CLI-reference).
+To learn how to manage your environments by using the CLI, see [Create and access an environment by using the Azure CLI](how-to-create-access-environments.md).
-### Create an environment
+For reference documentation on the `devcenter` Azure CLI extension, see [az devcenter](https://aka.ms/CLI-reference).
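If the extension isn't installed yet, you can add it before running any `az devcenter` commands. A minimal sketch:

```azurecli
# Install (or upgrade) the devcenter Azure CLI extension
az extension add --name devcenter --upgrade
```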
-```azurecli
-az devcenter dev environment create --dev-center-name <devcenter-name> \
- --project-name <project-name> --environment-name <environment-name> --environment-type <environment-type-name> \
- --environment-definition-name <environment-definition-name> catalog-name <catalog-name> \
- --parameters <deployment-parameters-json-string>
-```
+## Related content
-### List environments in a project
-
-```azurecli
-az devcenter dev environment list --dev-center-name <devcenter-name> --project-name <project-name>
-```
-
-### Deploy an environment
-
-```azurecli
-az devcenter dev environment deploy-action --action-id "deploy" --dev-center-name <devcenter-name> \
- -g <resource-group-name> --project-name <project-name> --environment-name <environment-name> --parameters <parameters-json-string>
-```
-
-### Delete an environment
-
-```azurecli
-az devcenter dev environment delete --dev-center-name <devcenter-name> --project-name <project-name> --environment-name <environment-name> --user-id "me"
-```
-
-## Next steps
--- Learn how to configure Azure Deployment Environments in [Quickstart: Create and configure a dev center](quickstart-create-and-configure-devcenter.md).-- Learn more about managing your environments by using the CLI in [Create and access an environment by using the Azure CLI](how-to-create-access-environments.md).
+- [Create and configure a dev center for Azure Deployment Environments by using the Azure CLI](how-to-create-configure-dev-center.md)
+- [Create and configure a project by using the Azure CLI](how-to-create-configure-projects.md)
logic-apps Logic Apps Http Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-http-endpoint.md
Title: Call, trigger, or nest logic apps by using Request triggers
-description: Set up HTTPS endpoints for calling, triggering, or nesting logic app workflows in Azure Logic Apps.
+ Title: Create callable or nestable workflows
+description: Set up HTTPS endpoints to call, trigger, or nest workflows in Azure Logic Apps.
Previously updated : 09/22/2022 Last updated : 10/06/2023
-# Call, trigger, or nest logic apps by using HTTPS endpoints in Azure Logic Apps
+# Create workflows that you can call, trigger, or nest using HTTPS endpoints in Azure Logic Apps
[!INCLUDE [logic-apps-sku-consumption](../../includes/logic-apps-sku-consumption.md)]
-To make your logic app callable through a URL and able to receive inbound requests from other services, you can natively expose a synchronous HTTPS endpoint by using a request-based trigger on your logic app. With this capability, you can call your logic app from other logic apps and create a pattern of callable endpoints. To set up a callable endpoint for handling inbound calls, you can use any of these trigger types:
+Some scenarios might require that you create a workflow that you can call through a URL or that can receive inbound requests from other services or workflows. For this task, you can natively expose a synchronous HTTPS endpoint for your workflow by using any of the following request-based trigger types:
* [Request](../connectors/connectors-native-reqres.md) * [HTTP Webhook](../connectors/connectors-native-webhook.md) * Managed connector triggers that have the [ApiConnectionWebhook type](../logic-apps/logic-apps-workflow-actions-triggers.md#apiconnectionwebhook-trigger) and can receive inbound HTTPS requests
-This article shows how to create a callable endpoint on your logic app by using the Request trigger and call that endpoint from another logic app. All principles apply identically to the other trigger types that you can use to receive inbound requests.
+This how-to guide shows how to create a callable endpoint for your workflow by using the Request trigger, and how to call that endpoint from another workflow. All principles apply identically to the other request-based trigger types that can receive inbound requests.
-For more information about security, authorization, and encryption for inbound calls to your logic app, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Azure Active Directory Open Authentication (Azure AD OAuth)](../active-directory/develop/index.yml), exposing your logic app with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests).
+For information about security, authorization, and encryption for inbound calls to your workflow, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Azure Active Directory Open Authentication (Azure AD OAuth)](../active-directory/develop/index.yml), exposing your logic app with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](logic-apps-securing-a-logic-app.md#secure-inbound-requests).
## Prerequisites
-* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* The logic app where you want to use the trigger to create the callable endpoint. You can start with either a blank logic app workflow or an existing logic app workflow where you can replace the current trigger. This example starts with a blank workflow. If you're new to logic apps, see [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](../logic-apps/quickstart-create-example-consumption-workflow.md).
+* The logic app workflow where you want to use the trigger to create the callable endpoint. You can start with either a blank workflow or an existing logic app workflow where you can replace the current trigger. This example starts with a blank workflow. If you're new to logic apps, see [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](../logic-apps/quickstart-create-example-consumption-workflow.md).
## Create a callable endpoint 1. In the [Azure portal](https://portal.azure.com), create a logic app resource and blank workflow in the designer.
-1. In the designer, [follow these general steps to add the **Request** trigger named **When a HTTP request is received**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
+1. [Follow these general steps to add the **Request** trigger named **When a HTTP request is received**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
1. Optionally, in the **Request Body JSON Schema** box, you can enter a JSON schema that describes the payload or data that you expect the trigger to receive.
For more information about security, authorization, and encryption for inbound c
The **Request Body JSON Schema** box now shows the generated schema.
-1. Save your logic app.
+1. Save your workflow.
The **HTTP POST URL** box now shows the generated callback URL that other services can use to call and trigger your logic app. This URL includes query parameters that specify a Shared Access Signature (SAS) key, which is used for authentication.
When you want to accept parameter values through the endpoint's URL, you have th
![Resolved "triggerOutputs()" expression](./media/logic-apps-http-endpoint/trigger-outputs-expression-token.png)
- If you save the logic app, navigate away from the designer, and return to the designer, the token shows the parameter name that you specified, for example:
+ If you save the workflow, navigate away from the designer, and return to the designer, the token shows the parameter name that you specified, for example:
![Resolved expression for parameter name](./media/logic-apps-http-endpoint/resolved-expression-parameter-token.png)
When you want to accept parameter values through the endpoint's URL, you have th
![Example response body with parameter](./media/logic-apps-http-endpoint/relative-url-with-parameter.png)
-1. Save your logic app.
+1. Save your workflow.
In the Request trigger, the callback URL is updated and now includes the relative path, for example:
When you want to accept parameter values through the endpoint's URL, you have th
> If you want to include the hash or pound symbol (**#**) in the URI, > use this encoded version instead: `%25%23`
-## Call logic app through endpoint URL
+## Call workflow through endpoint URL
-After you create the endpoint, you can trigger the logic app by sending an HTTPS request to the endpoint's full URL. Logic apps have built-in support for direct-access endpoints.
+After you create the endpoint, you can trigger the workflow by sending an HTTPS request to the endpoint's full URL. Logic app workflows have built-in support for direct-access endpoints.
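For example, here's a minimal sketch of such a call using curl; the callback URL shape and payload are illustrative placeholders standing in for the **HTTP POST URL** copied from the Request trigger:

```bash
# POST to the full callback URL from the Request trigger. The sig query
# parameter carries the SAS key that authorizes the call.
curl -i -X POST \
  -H "Content-Type: application/json" \
  -d '{"address": {"streetNumber": "00000", "streetName": "Contoso Blvd"}}' \
  "https://prod-00.westus.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke?api-version=2016-10-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<signature>"
```

The `-i` flag shows the response status: **202 Accepted** if the workflow has no Response action, or whatever status a Response action returns.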
<a name="generated-tokens"></a> ## Tokens generated from schema
-When you provide a JSON schema in the Request trigger, the workflow designer generates tokens for the properties in that schema. You can then use those tokens for passing data through your logic app workflow.
+When you provide a JSON schema in the Request trigger, the workflow designer generates tokens for the properties in that schema. You can then use those tokens for passing data through your workflow.
-For example, if you add more properties, such as `"suite"`, to your JSON schema, tokens for those properties are available for you to use in the later steps for your logic app. Here is the complete JSON schema:
+For example, if you add more properties, such as `"suite"`, to your JSON schema, tokens for those properties are available for you to use in the later steps for your workflow. Here is the complete JSON schema:
```json {
For example, if you add more properties, such as `"suite"`, to your JSON schema,
} ```
-## Create nested logic app workflows
+## Create nested workflows
You can nest a workflow inside the current workflow by adding calls to other workflows that can receive requests. To call these workflows, follow these steps: 1. In the designer, [follow these general steps to add the action named **Choose a Logic Apps workflow**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
- The designer shows the eligible logic app workflows for you to select.
+ The designer shows the eligible workflows for you to select.
-1. Select the logic app workflow to call from your current workflow.
+1. Select the workflow to call from your current workflow.
- ![Select logic app to call from current logic app](./media/logic-apps-http-endpoint/select-logic-app-to-nest.png)
+ ![Screenshot shows workflow to call from current workflow.](./media/logic-apps-http-endpoint/select-logic-app-to-nest.png)
## Reference content from an incoming request
To access specifically the `body` property, you can use the [`@triggerBody()` ex
## Respond to requests
-Sometimes you want to respond to certain requests that trigger your logic app by returning content to the caller. To construct the status code, header, and body for your response, use the Response action. This action can appear anywhere in your logic app, not just at the end of your workflow. If your logic app doesn't include a Response action, the endpoint responds *immediately* with the **202 Accepted** status.
+Sometimes you want to respond to certain requests that trigger your workflow by returning content to the caller. To construct the status code, header, and body for your response, use the Response action. This action can appear anywhere in your workflow, not just at the end of your workflow. If your workflow doesn't include a Response action, the endpoint responds *immediately* with the **202 Accepted** status.
-For the original caller to successfully get the response, all the required steps for the response must finish within the [request timeout limit](./logic-apps-limits-and-config.md) unless the triggered logic app is called as a nested logic app. If no response is returned within this limit, the incoming request times out and receives the **408 Client timeout** response.
+For the original caller to successfully get the response, all the required steps for the response must finish within the [request timeout limit](./logic-apps-limits-and-config.md) unless the triggered workflow is called as a nested workflow. If no response is returned within this limit, the incoming request times out and receives the **408 Client timeout** response.
-For nested logic app workflows, the parent workflow continues to wait for a response until all the steps are completed, regardless of how much time is required.
+For nested workflows, the parent workflow continues to wait for a response until all the steps are completed, regardless of how much time is required.
### Construct the response
Responses have these properties:
| **Status Code** | `statusCode` | The HTTPS status code to use in the response for the incoming request. This code can be any valid status code that starts with 2xx, 4xx, or 5xx. However, 3xx status codes are not permitted. | | **Headers** | `headers` | One or more headers to include in the response | | **Body** | `body` | A body object that can be a string, a JSON object, or even binary content referenced from a previous step |
-||||
-To view the JSON definition for the Response action and your logic app's complete JSON definition, on the Logic App Designer toolbar, select **Code view**.
+To view the JSON definition for the Response action and your workflow's complete JSON definition, on the designer toolbar, select **Code view**.
``` json "Response": {
To view the JSON definition for the Response action and your logic app's complet
#### Q: What about URL security?
-**A**: Azure securely generates logic app callback URLs by using [Shared Access Signature (SAS)](/rest/api/storageservices/delegate-access-with-shared-access-signature). This signature passes through as a query parameter and must be validated before your logic app can run. Azure generates the signature using a unique combination of a secret key per logic app, the trigger name, and the operation that's performed. So unless someone has access to the secret logic app key, they cannot generate a valid signature.
+**A**: Azure securely generates logic app callback URLs by using [Shared Access Signature (SAS)](/rest/api/storageservices/delegate-access-with-shared-access-signature). This signature passes through as a query parameter and must be validated before your workflow can run. Azure generates the signature using a unique combination of a secret key per logic app, the trigger name, and the operation that's performed. So unless someone has access to the secret logic app key, they cannot generate a valid signature.
> [!IMPORTANT]
-> For production and higher security systems, we strongly advise against calling your logic app directly from the browser for these reasons:
+> For production and higher security systems, we strongly advise against calling your workflow directly from the browser for these reasons:
> > * The shared access key appears in the URL. > * You can't manage security content policies due to shared domains across Azure Logic Apps customers.
-For more information about security, authorization, and encryption for inbound calls to your logic app, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Azure Active Directory Open Authentication (Azure AD OAuth)](../active-directory/develop/index.yml), exposing your logic app with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests).
+For more information about security, authorization, and encryption for inbound calls to your workflow, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Azure Active Directory Open Authentication (Azure AD OAuth)](../active-directory/develop/index.yml), exposing your logic app workflow with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests).
#### Q: Can I configure callable endpoints further?
machine-learning Get Started Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/get-started-prompt-flow.md
This article walks you through the main user journey of using Prompt flow in Azu
> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Prerequisites: Enable Prompt flow in your Azure Machine Learning workspace
+## Prerequisites
> [!IMPORTANT] > Prompt flow is **not supported** in the workspace which has data isolation enabled. The enableDataIsolation flag can only be set at the workspace creation phase and can't be updated. > >Prompt flow is **not supported** in the project workspace which was created with a workspace hub. The workspace hub is a private preview feature.
-In your Azure Machine Learning workspace, you can enable Prompt flow by turning on **Build AI solutions with Prompt flow** in the **Manage preview features** panel.
+- Enable Prompt flow in your Azure Machine Learning workspace by turning on **Build AI solutions with Prompt flow** in the **Manage preview features** panel.
+
+ :::image type="content" source="./media/get-started-prompt-flow/preview-panel.png" alt-text="Screenshot of manage preview features highlighting build AI solutions with Prompt flow button." lightbox ="./media/get-started-prompt-flow/preview-panel.png":::
+
+- Make sure the default data store in your workspace is blob type.
+
+- If you secure prompt flow with a virtual network, see [Network isolation in prompt flow](how-to-secure-prompt-flow.md) for more details.
## Setup
machine-learning How To Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-create-manage-runtime.md
After deploying a Prompt flow, the endpoint must be assigned the `AzureML Data S
### Prerequisites - You need `AzureML Data Scientist` role in the workspace to create a runtime.
+- Make sure the default data store in your workspace is blob type.
+- If you secure prompt flow with a virtual network, see [Network isolation in prompt flow](how-to-secure-prompt-flow.md) for more details.
> [!IMPORTANT] > Prompt flow is **not supported** in the workspace which has data isolation enabled. The enableDataIsolation flag can only be set at the workspace creation phase and can't be updated.
machine-learning Troubleshoot Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/troubleshoot-guidance.md
To resolve the issue, you have two options:
- Update your runtime to latest version. - Remove the old tool and re-create a new tool.
-## Why can't I upgrade my old flow?
+## No such file or directory error
Prompt flow relies on a file share to store a snapshot of the flow. If the file share has an issue, you may encounter this error. Here are some workarounds you can try:-- If you're using private storage account, please see follow [Network isolation in prompt flow](../how-to-secure-prompt-flow.md) to make sure your storage account can be accessed by your workspace.
+- If you're using a private storage account, see [Network isolation in prompt flow](../how-to-secure-prompt-flow.md) to make sure your storage account can be accessed by your workspace.
- If the storage account has public access enabled, check whether there's a datastore named `workspaceworkingdirectory` in your workspace; it should be of fileshare type. ![workspaceworkingdirectory](../media/faq/working-directory.png) - If you don't have this datastore, you need to add it to your workspace.
Go to the compute instance terminal and run `docker logs -<runtime_container_na
:::image type="content" source="../media/how-to-create-manage-runtime/ci-flow-clone-others.png" alt-text="Screenshot of don't have access error on the flow page. " lightbox = "../media/how-to-create-manage-runtime/ci-flow-clone-others.png"::: It's because you're cloning a flow from others that is using compute instance as runtime. As compute instance runtime is user isolated, you need to create your own compute instance runtime or select a managed online deployment/endpoint runtime, which can be shared with others. +
+### How do I find Python packages installed in the runtime?
+
+Follow these steps to find the Python packages installed in the runtime:
+
+- Add a Python node to your flow.
+- Put the following code in the code section.
+
+ ```python
+ from promptflow import tool
+ import subprocess
+
+ @tool
+ def list_packages(input: str) -> str:
+     # Run the pip list command and save the output to a file
+     with open('packages.txt', 'w') as f:
+         subprocess.run(['pip', 'list'], stdout=f)
+     # Return a confirmation so the declared str return type is honored
+     return 'Package list written to packages.txt'
+ ```
+- Run the flow. You can then find `packages.txt` in the flow folder.
+ :::image type="content" source="../media/faq/list-packages.png" alt-text="Screenshot of finding python packages installed in runtime. " lightbox = "../media/faq/list-packages.png":::
sap High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-pacemaker.md
op monitor interval=3600
For RHEL **8.x/9.x**, use the following command to configure the fence device: ```bash
-# If the version of pacemaker is or greater than 2.0.4-6.el8, then run following command:
+# If the pacemaker version is 2.0.4-6.el8 or greater, run the following command (see the Tip box below for details):
sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \ subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \ op monitor interval=3600
-# If the version of pacemaker is less than 2.0.4-6.el8, then run following command:
+# If the pacemaker version is less than 2.0.4-6.el8, run the following command (see the Tip box below for details):
sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \ subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
op monitor interval=3600
For RHEL **8.x/9.x**, use the following command to configure the fence device: ```bash
-# If the version of pacemaker is or greater than 2.0.4-6.el8, then run following command:
+# If the pacemaker version is 2.0.4-6.el8 or greater, run the following command (see the Tip box below for details):
sudo pcs stonith create rsc_st_azure fence_azure_arm username="login ID" password="password" \ resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \ pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \ op monitor interval=3600
-# If the version of pacemaker is less than 2.0.4-6.el8, then run following command:
+# If the pacemaker version is less than 2.0.4-6.el8, run the following command (see the Tip box below for details):
sudo pcs stonith create rsc_st_azure fence_azure_arm username="login ID" password="password" \ resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \ pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
op monitor interval=3600
If you're using a fencing device based on service principal configuration, read [Change from SPN to MSI for Pacemaker clusters using Azure fencing](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-high-availability-change-from-spn-to-msi-for/ba-p/3609278) to learn how to convert to a managed identity configuration. > [!TIP]
-> Only configure the `pcmk_delay_max` attribute in two node clusters, with pacemaker version less than 2.0.4-6.el8. For more information on preventing fence races in a two node Pacemaker cluster, see [Delaying fencing in a two node cluster to prevent fence races of "fence death" scenarios](https://access.redhat.com/solutions/54829).
+> `value1` and `value2` are integer values in seconds. Replace `value1` and `value2` with appropriate integer values that are at least 5 seconds apart, for example: `pcmk_delay_base="prod-cl1-0:0;prod-cl1-1:10"`. Only configure the `pcmk_delay_max` attribute in two-node clusters with a pacemaker version less than 2.0.4-6.el8. For pacemaker versions 2.0.4-6.el8 or greater, use `pcmk_delay_base` instead.<br> For more information on preventing fence races in a two-node Pacemaker cluster, see [Delaying fencing in a two node cluster to prevent fence races of "fence death" scenarios](https://access.redhat.com/solutions/54829).
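To make the tip concrete, here's a hedged sketch of applying a static per-node delay to an existing fence device; `rsc_st_azure` and the node names reuse the placeholder values from the commands above:

```bash
# Hedged sketch (pacemaker 2.0.4-6.el8 or greater): stagger fencing so the
# two nodes don't fence each other simultaneously. Values are placeholders.
sudo pcs stonith update rsc_st_azure \
    pcmk_delay_base="prod-cl1-0:0;prod-cl1-1:10"
```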
> [!IMPORTANT] > The monitoring and fencing operations are deserialized. As a result, if there's a longer-running monitoring operation and a simultaneous fencing event, there's no delay to the cluster failover due to the already running monitoring operation.
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md
mssparkutils.notebook.run("notebook path", <timeoutSeconds>, <parameterMap>)
For example: ```r
-mssparkutils.notebook.run("folder/Sample1", 90, {"input": 20 })
+mssparkutils.notebook.run("folder/Sample1", 90, list("input" = 20))
``` After the run finishes, you'll see a snapshot link named '**View notebook run: *Notebook Name***' in the cell output. You can select the link to see the snapshot for this specific run.
Sample1 run success with input is 10
You can run **Sample1** in another notebook and set the **input** value to 20: ```r
-exitVal <- mssparkutils.notebook.run("mssparkutils/folder/Sample1", 90, {"input": 20 })
+exitVal <- mssparkutils.notebook.run("mssparkutils/folder/Sample1", 90, list("input" = 20))
print(exitVal) ```
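For context, here's a hedged sketch of what the final cell of **Sample1** might look like to produce the exit value shown earlier; the notebook's actual content isn't given in this article:

```r
# Hedged sketch (assumed notebook content): Sample1 reads the "input"
# parameter (default 10, overridden to 20 by the parameter map) and
# returns a status string to the caller.
input <- 10
mssparkutils.notebook.exit(paste("Sample1 run success with input is", input))
```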
update-center Manage Updates Customized Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-updates-customized-images.md
Currently, scheduled patching and periodic assessment on [specialized images](..
| Images | Currently supported scenarios | Unsupported scenarios |
| --- | --- | --- |
| Azure Compute Gallery: Generalized images | - On-demand assessment </br> - On-demand patching </br> - Periodic assessment </br> - Scheduled patching | Automatic VM guest patching |
-| Azure Compute Gallery: Specialized images | - On-demand assessment </br> - On-demand patching | - Periodic assessment </br> - Scheduled patching </br> - Automatic VM guest patching |
-| Non-Azure Compute Gallery images (non-SIG) | None | - On-demand assessment </br> - On-demand patching </br> - Periodic assessment </br> - Scheduled patching </br> - Automatic VM guest patching |
+| Azure Compute Gallery: Specialized images | - On-demand assessment </br> - On-demand patching </br> - Periodic assessment (preview) </br> - Scheduled patching (preview) | Automatic VM guest patching |
+| Non-Azure Compute Gallery images (non-SIG) | - On-demand assessment </br> - On-demand patching </br> - Periodic assessment (preview) </br> - Scheduled patching (preview) | Automatic VM guest patching |
Automatic VM guest patching doesn't work on Azure Compute Gallery images even if Patch orchestration mode is set to `Azure orchestrated/AutomaticByPlatform`. You can use scheduled patching to patch the machines and define your own schedules.
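As a hedged illustration of defining your own schedule, the following sketch creates a weekly in-guest patching maintenance configuration with the `az maintenance` CLI extension; the resource group, name, and window values are placeholders:

```bash
# Hedged sketch: weekly scheduled patching window (customer-managed schedule).
# Requires the az maintenance CLI extension; all values are placeholders.
az maintenance configuration create \
    --resource-group "<resource-group>" \
    --resource-name "weekly-patching" \
    --location "eastus" \
    --maintenance-scope InGuestPatch \
    --maintenance-window-duration "03:55" \
    --maintenance-window-recur-every "Week Saturday" \
    --maintenance-window-start-date-time "2023-12-02 00:00" \
    --maintenance-window-time-zone "UTC" \
    --install-patches-reboot-setting IfRequired \
    --extension-properties InGuestPatchMode="User"
```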
update-center Whats Upcoming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-upcoming.md
Last updated 09/27/2023
The article [What's new in Azure Update Manager](whats-new.md) contains updates of feature releases. This article lists all the upcoming features for Azure Update Manager.
-## Expanded support for operating system and VM images
-
-Expanded support for [specialized images](../virtual-machines/linux/imaging.md#specialized-images), virtual machines created by Azure Migrate, Azure Backup, and Azure Site Recovery, and Azure Marketplace images are upcoming in the fourth quarter of 2023. Until then, we recommend that you continue using [Automation Update Management](../automation/update-management/overview.md) for these images. For more information, see [Support matrix for Update Manager](support-matrix.md#supported-operating-systems).
- ## Prescript and postscript
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 10/04/2023 Last updated : 10/06/2023 # What's new in the Remote Desktop client for Windows
The following table lists the current versions available for the public and Insi
| Release | Latest version | Download |
| --- | --- | --- |
-| Public | 1.2.4582 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
-| Insider | 1.2.4675 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
+| Public | 1.2.4583 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
+| Insider | 1.2.4677 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
-## Updates for version 1.2.4675 (Insider)
+## Updates for version 1.2.4677 (Insider)
*Date published: October 3, 2023*
Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Wi
- Added new parameters for multiple monitor configuration when connecting to a remote resource using the [Uniform Resource Identifier (URI) scheme](uri-scheme.md). - Added support for the following languages: Czech (Czechia), Hungarian (Hungary), Indonesian (Indonesia), Korean (Korea), Portuguese (Portugal), Turkish (Turkey). - Fixed a bug that caused a crash when using Teams Media Optimization. -- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+
+> [!NOTE]
+> This Insiders release was originally version 1.2.4675, but we've replaced it with version 1.2.4677, which fixes the [CVE-2023-5217](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-5217) security vulnerability.
+
+## Updates for version 1.2.4583
+
+*Date published: October 6, 2023*
+
+Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370)
+
+- Fixed the [CVE-2023-5217](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-5217) security vulnerability.
## Updates for version 1.2.4582
In this release, we've made the following changes:
*Date published: July 21, 2023*
-Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW17VPy), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW17Yn9), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW17VPx)
- In this release, we've made the following changes: - Fixed an issue where the client doesn't auto-reconnect when the gateway WebSocket connection shuts down normally.