Updates from: 11/21/2022 02:05:36
Service Microsoft Docs article Related commit history on GitHub Change details
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
Title: Introduction to Azure Kubernetes Service
description: Learn the features and benefits of Azure Kubernetes Service to deploy and manage container-based applications in Azure.
Previously updated : 02/24/2021
Last updated : 11/18/2022

# Azure Kubernetes Service
-Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance. Since Kubernetes masters are managed by Azure, you only manage and maintain the agent nodes. Thus, AKS is free; you only pay for the agent nodes within your clusters, not for the masters.
+Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance. When you create an AKS cluster, a control plane is automatically created and configured. This control plane is provided at no cost as a managed Azure resource abstracted from the user. You only pay for and manage the nodes attached to the AKS cluster.
You can create an AKS cluster using:
-* [The Azure CLI][aks-quickstart-cli]
-* [The Azure portal][aks-quickstart-portal]
+
+* [Azure CLI][aks-quickstart-cli]
* [Azure PowerShell][aks-quickstart-powershell]
-* Using template-driven deployment options, like [Azure Resource Manager templates][aks-quickstart-template], [Bicep](../azure-resource-manager/bicep/overview.md) and Terraform.
+* [Azure portal][aks-quickstart-portal]
+* Template-driven deployment options, like [Azure Resource Manager templates][aks-quickstart-template], [Bicep](../azure-resource-manager/bicep/overview.md), and Terraform.
-When you deploy an AKS cluster, the Kubernetes master and all nodes are deployed and configured for you. Advanced networking, Azure Active Directory (Azure AD) integration, monitoring, and other features can be configured during the deployment process.
+When you deploy an AKS cluster, you specify the number and size of the nodes, and AKS deploys and configures the Kubernetes control plane and nodes. [Advanced networking][aks-networking], [Azure Active Directory (Azure AD) integration][aad], [monitoring][aks-monitor], and other features can be configured during the deployment process.
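As a minimal sketch of the CLI path described above (the resource group, cluster name, region, and node count are placeholder examples, not values from this article):

```shell
# Create a resource group, then a two-node AKS cluster
az group create --name myResourceGroup --location eastus
az aks create --resource-group myResourceGroup --name myAKSCluster \
  --node-count 2 --generate-ssh-keys

# Download credentials so kubectl can talk to the cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```

`az aks create` accepts many more options for the networking, Azure AD, and monitoring features mentioned above; the quickstarts linked earlier walk through them.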
For more information on Kubernetes basics, see [Kubernetes core concepts for AKS][concepts-clusters-workloads].
For more information on Kubernetes basics, see [Kubernetes core concepts for AKS
## Access, security, and monitoring
-For improved security and management, AKS lets you integrate with Azure AD to:
-* Use Kubernetes role-based access control (Kubernetes RBAC).
+For improved security and management, you can integrate with [Azure AD][aad] to:
+
+* Use Kubernetes role-based access control (Kubernetes RBAC).
* Monitor the health of your cluster and resources.

### Identity and security management
You can configure an AKS cluster to integrate with Azure AD. With Azure AD integ
For more information on identity, see [Access and identity options for AKS][concepts-identity].
-To secure your AKS clusters, see [Integrate Azure Active Directory with AKS][aks-aad].
+To secure your AKS clusters, see [Integrate Azure AD with AKS][aks-aad].
### Integrated logging and monitoring
-Azure Monitor for Container Health collects memory and processor performance metrics from containers, nodes, and controllers within your AKS cluster and deployed applications. You can review both container logs and [the Kubernetes master logs][aks-master-logs], which are:
-* Stored in an Azure Log Analytics workspace.
+[Azure Monitor for Container Health][azure-monitor] collects memory and processor performance metrics from containers, nodes, and controllers within your AKS clusters and deployed applications. You can review both container logs and [the Kubernetes logs][aks-master-logs], which are:
+
+* Stored in an [Azure Log Analytics][azure-logs] workspace.
* Available through the Azure portal, Azure CLI, or a REST endpoint.
-For more information, see [Monitor Azure Kubernetes Service container health][container-health].
+For more information, see [Monitor AKS container health][container-health].
## Clusters and nodes
For more information about Kubernetes cluster, node, and node pool capabilities,
As demand for resources changes, the number of cluster nodes or pods that run your services automatically scales up or down. You can adjust both the horizontal pod autoscaler and the cluster autoscaler to respond to demand and run only the necessary resources.
-For more information, see [Scale an Azure Kubernetes Service (AKS) cluster][aks-scale].
+For more information, see [Scale an AKS cluster][aks-scale].
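As an illustrative sketch of those two scaling levers (resource and workload names are placeholders, not from this article): the cluster autoscaler is enabled per node pool with the CLI, and the horizontal pod autoscaler is created per workload with `kubectl`:

```shell
# Enable the cluster autoscaler on an existing cluster's default node pool
az aks update --resource-group myResourceGroup --name myAKSCluster \
  --enable-cluster-autoscaler --min-count 1 --max-count 5

# Create a horizontal pod autoscaler for an existing deployment
kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10
```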
### Cluster node upgrades
-AKS offers multiple Kubernetes versions. As new versions become available in AKS, you can upgrade your cluster using the Azure portal or Azure CLI. During the upgrade process, nodes are carefully cordoned and drained to minimize disruption to running applications.
+AKS offers multiple Kubernetes versions. As new versions become available in AKS, you can upgrade your cluster using the Azure portal, Azure CLI, or Azure PowerShell. During the upgrade process, nodes are carefully cordoned and drained to minimize disruption to running applications.
-To learn more about lifecycle versions, see [Supported Kubernetes versions in AKS][aks-supported versions]. For steps on how to upgrade, see [Upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
+To learn more about lifecycle versions, see [Supported Kubernetes versions in AKS][aks-supported versions]. For steps on how to upgrade, see [Upgrade an AKS cluster][aks-upgrade].
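The upgrade flow with the Azure CLI looks roughly like the following (cluster names and the version number are placeholders; check `az aks get-upgrades` for the versions actually available to your cluster):

```shell
# List Kubernetes versions available for this cluster
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade the control plane and nodes; nodes are cordoned and drained in turn
az aks upgrade --resource-group myResourceGroup --name myAKSCluster \
  --kubernetes-version 1.24.9
```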
### GPU-enabled nodes
For more information, see [Confidential computing nodes on AKS][conf-com-node].
Mariner is an open-source Linux distribution created by Microsoft, and it's now available for preview as a container host on Azure Kubernetes Service (AKS). The Mariner container host provides reliability and consistency from cloud to edge across the AKS, AKS-HCI, and Arc products. You can deploy Mariner node pools in a new cluster, add Mariner node pools to your existing Ubuntu clusters, or migrate your Ubuntu nodes to Mariner nodes.
-For more information, see [Use the Mariner container host on Azure Kubernetes Service (AKS)](use-mariner.md)
+For more information, see [Use the Mariner container host on AKS](use-mariner.md).
### Storage volume support
-To support application workloads, you can mount static or dynamic storage volumes for persistent data. Depending on the number of connected pods expected to share the storage volumes, you can use storage backed by either:
-* Azure Disks for single pod access, or
-* Azure Files for multiple, concurrent pod access.
+To support application workloads, you can mount static or dynamic storage volumes for persistent data. Depending on the number of connected pods expected to share the storage volumes, you can use storage backed by:
-For more information, see [Storage options for applications in AKS][concepts-storage].
+* [Azure Disks][azure-disk] for single pod access
+* [Azure Files][azure-files] for multiple, concurrent pod access
-Get started with dynamic persistent volumes using [Azure Disks][azure-disk] or [Azure Files][azure-files].
+For more information, see [Storage options for applications in AKS][concepts-storage].
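For illustration, a dynamically provisioned Azure Disks volume is requested with a PersistentVolumeClaim such as this minimal sketch (the claim name is an example; `managed-csi` is a storage class AKS provides by default):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk    # example name
spec:
  accessModes:
    - ReadWriteOnce           # single-pod access, matching Azure Disks
  storageClassName: managed-csi
  resources:
    requests:
      storage: 5Gi
```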
## Virtual networks and ingress
-An AKS cluster can be deployed into an existing virtual network. In this configuration, every pod in the cluster is assigned an IP address in the virtual network, and can directly communicate with:
-* Other pods in the cluster
-* Other nodes in the virtual network.
+An AKS cluster can be deployed into an existing virtual network. In this configuration, every pod in the cluster is assigned an IP address in the virtual network and can directly communicate with other pods in the cluster and other nodes in the virtual network.
-Pods can also connect to other services in a peered virtual network and to on-premises networks over ExpressRoute or site-to-site (S2S) VPN connections.
+Pods can also connect to other services in a peered virtual network and on-premises networks over ExpressRoute or site-to-site (S2S) VPN connections.
For more information, see the [Network concepts for applications in AKS][aks-networking].
For more information, see the [Network concepts for applications in AKS][aks-net
The HTTP application routing add-on helps you easily access applications deployed to your AKS cluster. When enabled, the HTTP application routing solution configures an ingress controller in your AKS cluster.
-As applications are deployed, publicly accessible DNS names are autoconfigured. The HTTP application routing sets up a DNS zone and integrates it with the AKS cluster. You can then deploy Kubernetes ingress resources as normal.
+As applications are deployed, publicly accessible DNS names are auto-configured. The HTTP application routing solution sets up a DNS zone and integrates it with the AKS cluster. You can then deploy Kubernetes ingress resources as normal.
-To get started with ingress traffic, see [HTTP application routing][aks-http-routing].
+To get started with Ingress traffic, see [HTTP application routing][aks-http-routing].
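As a minimal sketch, an ingress resource that uses the add-on's ingress class looks like the following (the resource, host, and service names are placeholders; `<CLUSTER_SPECIFIC_DNS_ZONE>` comes from the DNS zone the add-on creates):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                  # example name
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - host: my-app.<CLUSTER_SPECIFIC_DNS_ZONE>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service   # example service
                port:
                  number: 80
```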
## Development tooling integration
-Kubernetes has a rich ecosystem of development and management tools that work seamlessly with AKS. These tools include Helm and the Kubernetes extension for Visual Studio Code.
+Kubernetes has a rich ecosystem of development and management tools that work seamlessly with AKS. These tools include [Helm][helm] and the [Kubernetes extension for Visual Studio Code][k8s-extension].
Azure provides several tools that help streamline Kubernetes, such as DevOps Starter.

### DevOps Starter

DevOps Starter provides a simple solution for bringing existing code and Git repositories into Azure. DevOps Starter automatically:
-* Creates Azure resources (such as AKS);
-* Configures a release pipeline in Azure DevOps Services that includes a build pipeline for CI;
-* Sets up a release pipeline for CD; and,
-* Generates an Azure Application Insights resource for monitoring.
+
+* Creates Azure resources (such as AKS).
+* Configures a release pipeline in Azure DevOps Services that includes a build pipeline for CI.
+* Sets up a release pipeline for CD.
+* Generates an Azure Application Insights resource for monitoring.
For more information, see [DevOps Starter][azure-devops].
To create a private image store, see [Azure Container Registry][acr-docs].
## Kubernetes certification
-AKS has been CNCF-certified as Kubernetes conformant.
+AKS has been [CNCF-certified][cncf-cert] as Kubernetes conformant.
## Regulatory compliance
AKS is compliant with SOC, ISO, PCI DSS, and HIPAA. For more information, see [O
## Next steps
-Learn more about deploying and managing AKS with the Azure CLI Quickstart.
+Learn more about deploying and managing AKS.
> [!div class="nextstepaction"]
-> [Deploy an AKS Cluster using Azure CLI][aks-quickstart-cli]
+> [Cluster operator and developer best practices to build and manage applications on AKS][aks-best-practices]
<!-- LINKS - external -->
-[kubectl-overview]: https://kubernetes.io/docs/user-guide/kubectl-overview/
[compliance-doc]: https://azure.microsoft.com/overview/trusted-cloud/compliance/
+[cncf-cert]: https://www.cncf.io/certification/software-conformance/
+[k8s-extension]: https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools
<!-- LINKS - internal -->
[acr-docs]: ../container-registry/container-registry-intro.md
[aks-networking]: ./concepts-network.md
[aks-scale]: ./tutorial-kubernetes-scale.md
[aks-upgrade]: ./upgrade-cluster.md
-[azure-dev-spaces]: /previous-versions/azure/dev-spaces/
[azure-devops]: ../devops-project/overview.md
[azure-disk]: ./azure-disks-dynamic-pv.md
[azure-files]: ./azure-files-dynamic-pv.md
[concepts-identity]: concepts-identity.md
[concepts-storage]: concepts-storage.md
[conf-com-node]: ../confidential-computing/confidential-nodes-aks-overview.md
+[aad]: managed-aad.md
+[aks-monitor]: monitor-aks.md
+[azure-monitor]: ../azure-monitor/containers/containers.md
+[azure-logs]: ../azure-monitor/logs/log-analytics-overview.md
+[helm]: ./quickstart-helm.md
+[aks-best-practices]: ./best-practices.md
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
git clone https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app.g
--
+Go to the application folder:
+
+```bash
+cd msdocs-django-postgresql-sample-app
+```
+
+Create an *.env* file as shown below using the *.env.sample* file as a guide. Set the value of `DBNAME` to the name of an existing database in your local PostgreSQL instance. Set the values of `DBHOST`, `DBUSER`, and `DBPASS` as appropriate for your local PostgreSQL instance.
+
+```
+DBUSER=<db-user-name>
+DBPASS=<db-password>
+```
+Create a virtual environment for the app:
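Based on the commands shown in the previous version of this listing, this step looks like the following:

```bash
# Create and activate a virtual environment
python3 -m venv .venv           # In CMD on Windows, run "py -m venv .venv" instead
source .venv/bin/activate       # In CMD on Windows, run ".venv\scripts\activate" instead
```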
+Install the dependencies:
+
+```bash
+pip install -r requirements.txt
+```
+Run the sample application with the following commands:

### [Flask](#tab/flask)

```bash
-# Clone the sample
-git clone https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app
-cd msdocs-flask-postgresql-sample-app
-# Activate a virtual environment
-python3 -m venv .venv # In CMD on Windows, run "py -m venv .venv" instead
-.venv/scripts/activate
-# Install dependencies
-pip install -r requirements.txt
# Run database migration
flask db upgrade

# Run the app at http://127.0.0.1:5000
flask run
```

### [Django](#tab/django)

```bash
-# Clone the sample
-git clone https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app.git
-cd msdocs-django-postgresql-sample-app
-# Activate a virtual environment
-python3 -m venv .venv # In CMD on Windows, run "py -m venv .venv" instead
-.venv/scripts/activate
-# Install dependencies
-pip install -r requirements.txt
# Run database migration
python manage.py migrate

# Run the app at http://127.0.0.1:8000
applied-ai-services Use Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api.md
Previously updated : 10/07/2022 Last updated : 11/18/2022 zone_pivot_groups: programming-languages-set-formre recommendations: false
applied-ai-services Get Started Sdks Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api.md
Previously updated : 10/10/2022 Last updated : 11/18/2022 zone_pivot_groups: programming-languages-set-formre recommendations: false
- # Get started with Form Recognizer
+# Get started with Form Recognizer
::: moniker range="form-recog-3.0.0" [!INCLUDE [applies to v3.0](../includes/applies-to-v3-0.md)]
azure-arc Conceptual Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-configurations.md
keywords: "Kubernetes, Arc, Azure, containers, configuration, GitOps"
# GitOps Flux v1 configurations with Azure Arc-enabled Kubernetes
-> [!NOTE]
-> This document is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about GitOps with Flux v2](./conceptual-gitops-flux2.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
+> [!IMPORTANT]
+> The documents in this section are for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about GitOps with Flux v2](./conceptual-gitops-flux2.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
In relation to Kubernetes, GitOps is the practice of declaring the desired state of Kubernetes cluster configurations (deployments, namespaces, etc.) in a Git repository. This declaration is followed by a polling and pull-based deployment of these cluster configurations using an operator. The Git repository can contain:
azure-arc Conceptual Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-ci-cd.md
keywords: "GitOps, Kubernetes, K8s, Azure, Helm, Arc, AKS, Azure Kubernetes Serv
# CI/CD workflow using GitOps - Azure Arc-enabled Kubernetes
-> [!NOTE]
+> [!IMPORTANT]
> The workflow described in this document uses GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about CI/CD workflow using GitOps with Flux v2](./conceptual-gitops-flux2-ci-cd.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.

Modern Kubernetes deployments house multiple applications, clusters, and environments. With GitOps, you can manage these complex setups more easily, tracking the desired state of the Kubernetes environments declaratively with Git. Using common Git tooling to track cluster state, you can increase accountability, facilitate fault investigation, and enable automation to manage environments.
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md
# Tutorial: Implement CI/CD with GitOps using Azure Arc-enabled Kubernetes clusters
-> [!NOTE]
+> [!IMPORTANT]
> This tutorial uses GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial that uses GitOps with Flux v2](./tutorial-gitops-flux2-ci-cd.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.

In this tutorial, you'll set up a CI/CD solution using GitOps with Azure Arc-enabled Kubernetes clusters. Using the sample Azure Vote app, you'll:
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
# Tutorial: Deploy configurations using GitOps on an Azure Arc-enabled Kubernetes cluster
-> [!NOTE]
+> [!IMPORTANT]
> This tutorial is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.

In this tutorial, you will apply configurations using GitOps on an Azure Arc-enabled Kubernetes cluster. You'll learn how to:
azure-arc Use Azure Policy Flux 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-azure-policy-flux-2.md
Title: "Apply Flux v2 configurations at-scale using Azure Policy"
+ Title: "Deploy applications consistently at scale using Flux v2 configurations and Azure Policy"
Last updated 8/23/2022
-description: "Apply Flux v2 configurations at-scale using Azure Policy"
+description: "Use Azure Policy to apply Flux v2 configurations at scale on Azure Arc-enabled Kubernetes or AKS clusters."
keywords: "Kubernetes, K8s, Arc, AKS, Azure, containers, GitOps, Flux v2, policy"
-# Apply Flux v2 configurations at-scale using Azure Policy
+# Deploy applications consistently at scale using Flux v2 configurations and Azure Policy
You can use Azure Policy to apply Flux v2 configurations (`Microsoft.KubernetesConfiguration/fluxConfigurations` resource type) at scale on Azure Arc-enabled Kubernetes (`Microsoft.Kubernetes/connectedClusters`) or AKS (`Microsoft.ContainerService/managedClusters`) clusters. To use Azure Policy, select a built-in policy definition and create a policy assignment. You can search for **flux** to find all of the Flux v2 policy definitions. When creating the policy assignment:

1. Set the scope for the assignment.
   * The scope will be all resource groups in a subscription or management group or specific resource groups.
2. Set the parameters for the Flux v2 configuration that will be created.
To enable separation of concerns, you can create multiple policy assignments, ea
> [!TIP]
> There are built-in policy definitions for these scenarios:
+>
> * Flux extension install (required for all scenarios): `Configure installation of Flux extension on Kubernetes cluster`
> * Flux configuration using public Git repository (generally a test scenario): `Configure Kubernetes clusters with Flux v2 configuration using public Git repository`
> * Flux configuration using private Git repository with SSH auth: `Configure Kubernetes clusters with Flux v2 configuration using Git repository and SSH secrets`
To enable separation of concerns, you can create multiple policy assignments, ea
> * Flux configuration using private Bucket source and KeyVault secrets: `Configure Kubernetes clusters with Flux v2 configuration using Bucket source and secrets in KeyVault`
> * Flux configuration using private Bucket source and local K8s secret: `Configure Kubernetes clusters with specified Flux v2 Bucket source using local secrets`
-## Prerequisite
+## Prerequisites
Verify you have `Microsoft.Authorization/policyAssignments/write` permissions on the scope (subscription or resource group) where you'll create this policy assignment.
azure-arc Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-azure-policy.md
keywords: "Kubernetes, Arc, Azure, K8s, containers, GitOps, Flux v1, policy"
You can use Azure Policy to apply Flux v1 configurations (`Microsoft.KubernetesConfiguration/sourceControlConfigurations` resource type) at scale on Azure Arc-enabled Kubernetes clusters (`Microsoft.Kubernetes/connectedclusters`).
-> [!NOTE]
+> [!IMPORTANT]
> This article is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the article for using policy with Flux v2](./use-azure-policy-flux-2.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.

To use Azure Policy, select a built-in GitOps policy definition and create a policy assignment. When creating the policy assignment:
azure-arc Use Gitops With Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-gitops-with-helm.md
keywords: "GitOps, Kubernetes, K8s, Azure, Helm, Arc, AKS, Azure Kubernetes Serv
# Deploy Helm Charts using GitOps on an Azure Arc-enabled Kubernetes cluster
-> [!NOTE]
+> [!IMPORTANT]
> This article is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.

Helm is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. Similar to Linux package managers like APT and Yum, Helm is used to manage Kubernetes charts, which are packages of pre-configured Kubernetes resources.
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
A comma-delimited list of beta features to enable. Beta features enabled by thes
|Key|Sample value|
|||
-|AzureWebJobsFeatureFlags|`feature1,feature2`|
+|AzureWebJobsFeatureFlags|`feature1,feature2,EnableProxies`|
+
+Add `EnableProxies` to this list to re-enable proxies on version 4.x of the Functions runtime while you plan your migration to Azure API Management. For more information, see [Re-enable proxies in Functions v4.x](./legacy-proxies.md#re-enable-proxies-in-functions-v4x).
## AzureWebJobsKubernetesSecretName
azure-functions Functions Proxies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-proxies.md
API Management uses a policy-based model to let you control routing, security, a
When moving from Functions Proxies to using API Management, you must integrate your function app with an API Management instance, and then configure the API Management instance to behave like the previous proxy. The following section provides links to the relevant articles that help you succeed in using API Management with Azure Functions.
-If you have challenges moving from Proxies or if Azure API Management doesn't address your specific scenarios, create an issue in the [Azure Functions repository](https://github.com/Azure/Azure-Functions). Make sure to tag the issue with the label `proxy-deprecation`.
+If you have challenges moving from proxies or if Azure API Management doesn't address your specific scenarios, post a request in the [API Management feedback forum](https://feedback.azure.com/d365community/forum/e808a70c-ff24-ec11-b6e6-000d3a4f0858).
## API Management integration
azure-functions Legacy Proxies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/legacy-proxies.md
This article explains how to configure and work with Azure Functions Proxies. Wi
Standard Functions billing applies to proxy executions. For more information, see [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/).
+## Re-enable proxies in Functions v4.x
+
+After [migrating your function app to version 4.x of the Functions runtime](migrate-version-3-version-4.md), you'll need to explicitly re-enable proxies. You should still switch to integrating your function apps with [Azure API Management](functions-proxies.md#api-management-integration) as soon as possible rather than continuing to rely on proxies.
+
+Re-enabling proxies requires you to set a flag in the `AzureWebJobsFeatureFlags` application setting in one of the following ways:
+
+* If the `AzureWebJobsFeatureFlags` setting doesn't already exist, add this setting to your function app with a value of `EnableProxies`.
+* If this setting already exists, add `,EnableProxies` to the end of the existing value.
+
+[`AzureWebJobsFeatureFlags`](functions-app-settings.md#azurewebjobsfeatureflags) is a comma-delimited array of flags used to enable preview and other temporary features.
+
+To learn more about how to create and modify application settings, see [Work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
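For example, the setting can be applied in one step with the Azure CLI (the app and resource group names are placeholders; note this overwrites any existing `AzureWebJobsFeatureFlags` value, so append `,EnableProxies` instead if the setting already has flags):

```shell
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings AzureWebJobsFeatureFlags=EnableProxies
```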
+## <a name="create"></a>Create a proxy

> [!IMPORTANT]
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
If you don't see your programming language, go select it from the [top of the pa
### Runtime -- Azure Functions proxies is a legacy feature for versions 1.x through 3.x of the Azure Functions runtime. Support for Functions proxies is being returned in version 4.x so that you can successfully upgrade your function apps to the latest runtime version. As soon as possible, you should instead switch to integrating your function apps with Azure API Management. API Management lets you take advantage of a more complete set of features for defining, securing, managing, and monetizing your Functions-based APIs. For more information, see [API Management integration](functions-proxies.md#api-management-integration). For information about the pending return of proxies in version 4.x, [Monitor the App Service announcements page](https://github.com/Azure/app-service-announcements/issues).
+- Azure Functions proxies is a legacy feature for versions 1.x through 3.x of the Azure Functions runtime. Support for Functions proxies can be re-enabled in version 4.x so that you can successfully upgrade your function apps to the latest runtime version. As soon as possible, you should instead switch to integrating your function apps with Azure API Management. API Management lets you take advantage of a more complete set of features for defining, securing, managing, and monetizing your Functions-based APIs. For more information, see [API Management integration](functions-proxies.md#api-management-integration). To learn how to re-enable proxies support in Functions version 4.x, see [Re-enable proxies in Functions v4.x](legacy-proxies.md#re-enable-proxies-in-functions-v4x).
- Logging to Azure Storage using *AzureWebJobsDashboard* is no longer supported in 4.x. You should instead use [Application Insights](./functions-monitoring.md). ([#1923](https://github.com/Azure/Azure-Functions/issues/1923))
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
Upgrade to the latest release of the Log Analytics agent for Windows and Linux m
| Environment | Installation method | Upgrade method |
|--|-|-|
-| Azure VM | Log Analytics agent VM extension for Windows/Linux | The agent is automatically upgraded [after the VM model changes](../../virtual-machines/extensions/features-linux.md#how-agents-and-extensions-are-updated), unless you configured your Azure Resource Manager template to opt out by setting the property `autoUpgradeMinorVersion` to **false**. Once deployed, however, the extension won't upgrade minor versions unless redeployed, even with this property set to **true**. Only the Linux agent supports automatic update post deployment with `enableAutomaticUpgrade` property (see [Enable Auto-update for the Linux agent](#enable-auto-update-for-the-linux-agent)). Major version upgrade is always manual (see [VirtualMachineExtensionInner.AutoUpgradeMinorVersion Property](https://docs.azure.cn/dotnet/api/microsoft.azure.management.compute.fluent.models.virtualmachineextensioninner.autoupgrademinorversion?view=azure-dotnet)). |
+| Azure VM | Log Analytics agent VM extension for Windows/Linux | The agent is automatically upgraded [after the VM model changes](../../virtual-machines/extensions/features-linux.md#how-agents-and-extensions-are-updated), unless you configured your Azure Resource Manager template to opt out by setting the property `autoUpgradeMinorVersion` to **false**. Once deployed, however, the extension won't upgrade minor versions unless redeployed, even with this property set to **true**. Only the Linux agent supports automatic update post deployment with `enableAutomaticUpgrade` property (see [Enable Auto-update for the Linux agent](#enable-auto-update-for-the-linux-agent)). Major version upgrade is always manual (see [VirtualMachineExtensionInner.AutoUpgradeMinorVersion Property](/dotnet/api/microsoft.azure.management.compute.fluent.models.virtualmachineextensioninner.autoupgrademinorversion)). |
| Custom Azure VM images | Manual installation of Log Analytics agent for Windows/Linux | Updating VMs to the newest version of the agent must be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle.|
| Non-Azure VMs | Manual installation of Log Analytics agent for Windows/Linux | Updating VMs to the newest version of the agent must be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle. |
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
For details on how to create a diagnostic setting, see [Create diagnostic settin
> [!NOTE]
> * Entries in the Activity Log are system generated and can't be changed or deleted.
-> * Entries in the Activity Log are representing control plane changes like a virtual machine restart, any non related entries should be written into [Azure Resource Logs](https://learn.microsoft.com/azure/azure-monitor/essentials/resource-logs)
+> * Entries in the Activity Log represent control plane changes, like a virtual machine restart. Any unrelated entries should be written into [Azure Resource Logs](resource-logs.md)
## Retention period
The columns in the following table have been deprecated in the updated schema. T
|resourceProviderName | ResourceProvider | ResourceProviderValue |
> [!Important]
> In some cases, the values in these columns might be all uppercase. If you have a query that includes these columns, use the [=~ operator](/azure/kusto/query/datatypes-string-operators) to do a case-insensitive comparison.
The following columns have been added to `AzureActivity` in the updated schema:
- Claims_d
- Properties_d
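As a sketch (assuming activity logs are already flowing into the `AzureActivity` table of your Log Analytics workspace), a query that filters on a column whose values may be all uppercase, using `=~` for a case-insensitive comparison, and projects the new dynamic columns might look like:

```kusto
AzureActivity
// =~ does a case-insensitive match, so all-uppercase values still match
| where ResourceProvider =~ "microsoft.compute"
| project TimeGenerated, OperationNameValue, ActivityStatusValue, Claims_d, Properties_d
| take 10
```

The column names here follow the updated `AzureActivity` schema; adjust them if your workspace still uses the legacy columns.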
-## Activity log insights
-
-Activity log insights let you view information about changes to resources and resource groups in a subscription. The dashboards also present data about which users or services performed activities in the subscription and the activities' status. This article explains how to view activity log insights in the Azure portal.
-
-Before you use activity log insights, you must [enable sending logs to your Log Analytics workspace](./diagnostic-settings.md).
-
-### How do activity log insights work?
-
-Activity logs you send to a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) are stored in a table called `AzureActivity`.
-
-Activity log insights are a curated [Log Analytics workbook](../visualize/workbooks-overview.md) with dashboards that visualize the data in the `AzureActivity` table. For example, data might include which administrators deleted, updated, or created resources and whether the activities failed or succeeded.
--
-### View activity log insights: Resource group or subscription level
-
-To view activity log insights on a resource group or a subscription level:
-
-1. In the Azure portal, select **Monitor** > **Workbooks**.
-1. In the **Insights** section, select **Activity Logs Insights**.
-
- :::image type="content" source="media/activity-log/open-activity-log-insights-workbook.png" lightbox= "media/activity-log/open-activity-log-insights-workbook.png" alt-text="Screenshot that shows how to locate and open the Activity Logs Insights workbook on a scale level.":::
-
-1. At the top of the **Activity Logs Insights** page, select:
-
- 1. One or more subscriptions from the **Subscriptions** dropdown.
- 1. Resources and resource groups from the **CurrentResource** dropdown.
- 1. A time range for which to view data from the **TimeRange** dropdown.
-
-### View activity log insights on any Azure resource
-
->[!Note]
-> Currently, Application Insights resources aren't supported for this workbook.
-
-To view activity log insights on a resource level:
-
-1. In the Azure portal, go to your resource and select **Workbooks**.
-1. In the **Activity Logs Insights** section, select **Activity Logs Insights**.
-
- :::image type="content" source="media/activity-log/activity-log-resource-level.png" lightbox= "media/activity-log/activity-log-resource-level.png" alt-text="Screenshot that shows how to locate and open the Activity Logs Insights workbook on a resource level.":::
-
-1. At the top of the **Activity Logs Insights** page, select a time range for which to view data from the **TimeRange** dropdown:
-
- * **Azure Activity Log Entries** shows the count of activity log records in each activity log category.
-
- :::image type="content" source="media/activity-log/activity-logs-insights-category-value.png" lightbox= "media/activity-log/activity-logs-insights-category-value.png" alt-text="Screenshot that shows Azure activity logs by category value.":::
-
- * **Activity Logs by Status** shows the count of activity log records in each status.
-
- :::image type="content" source="media/activity-log/activity-logs-insights-status.png" lightbox= "media/activity-log/activity-logs-insights-status.png" alt-text="Screenshot that shows Azure activity logs by status.":::
-
- * At the subscription and resource group level, **Activity Logs by Resource** and **Activity Logs by Resource Provider** show the count of activity log records for each resource and resource provider.
-
- :::image type="content" source="media/activity-log/activity-logs-insights-resource.png" lightbox= "media/activity-log/activity-logs-insights-resource.png" alt-text="Screenshot that shows Azure activity logs by resource.":::
- ## Next steps
-* [Read an overview of platform logs](./platform-logs-overview.md)
-* [Review activity log event schema](activity-log-schema.md)
-* [Create a diagnostic setting to send activity logs to other destinations](./diagnostic-settings.md)
+Learn more about:
+
+* [Platform logs](./platform-logs-overview.md)
+* [Activity log event schema](activity-log-schema.md)
+* [Activity log insights](activity-log-insights.md)
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-configuration.md
If you want to turn on the scraping of the default targets that aren't enabled b
### Customizing metrics collected by default targets
By default, for all the default targets, only minimal metrics used in the default recording rules, alerts, and Grafana dashboards are ingested as described in [minimal-ingestion-profile](prometheus-metrics-scrape-configuration-minimal.md). To collect all metrics from default targets, in the configmap under `default-targets-metrics-keep-list`, set `minimalingestionprofile` to `false`.
-To filter in additional metrics for any default targets, edit the settings under `default-targets-metrics-keep-list` for the corresponding job you'd like to change.
+To filter in more metrics for any default targets, edit the settings under `default-targets-metrics-keep-list` for the corresponding job you'd like to change.
-For example, `kubelet` is the metric filtering setting for the default target kubelet. Use the following to filter IN metrics collected for the default targets using regex based filtering.
+For example, `kubelet` is the metric filtering setting for the default target kubelet. Use the following to filter IN metrics collected for the default targets using regex-based filtering.
```
kubelet = "metricX|metricY"
apiserver = "mymetric.*"
To further customize the default jobs to change properties such as collection frequency or labels, disable the corresponding default target by setting the configmap value for the target to `false`, and then apply the job using custom configmap. For details on custom configuration, see [Customize scraping of Prometheus metrics in Azure Monitor](prometheus-metrics-scrape-configuration.md#configure-custom-prometheus-scrape-jobs).
### Cluster alias
-The cluster label appended to every time series scraped will use the last part of the full AKS cluster's ARM resourceID. For example, if the resource ID is `/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-name/providers/Microsoft.ContainerService/managedClusters/clustername`, the cluster label is `clustername`.
+The cluster label appended to every time series scraped will use the last part of the full AKS cluster's ARM resourceID. For example, if the resource ID is `/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-name/providers/Microsoft.ContainerService/managedClusters/clustername`, the cluster label is `clustername`.
To override the cluster label in the time series scraped, update the setting `cluster_alias` to any string under `prometheus-collector-settings` in the `ama-metrics-settings-configmap` [configmap](https://aka.ms/azureprometheus-addon-settings-configmap). You can either create this configmap or edit an existing one.
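For illustration, the relevant fragment of `ama-metrics-settings-configmap` might look like the following sketch (the alias value is hypothetical, and the surrounding configmap structure is abbreviated; see the linked configmap for the exact layout):

```yaml
prometheus-collector-settings: |-
  cluster_alias = "my_cluster_alias"
```

After applying the configmap, every scraped time series carries `cluster="my_cluster_alias"` instead of the last segment of the AKS resource ID.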
The new label will also show up in the cluster parameter dropdown in the Grafana
> [!NOTE]
> Only alphanumeric characters are allowed. Any other characters will be replaced with `_`. This is to ensure that different components that consume this label adhere to the basic alphanumeric convention.
-### Debug mode
+### Debug mode
To view every metric that is being scraped for debugging purposes, the metrics addon agent can be configured to run in debug mode by updating the setting `enabled` to `true` under the `debug-mode` setting in `ama-metrics-settings-configmap` [configmap](https://aka.ms/azureprometheus-addon-settings-configmap). You can either create this configmap or edit an existing one. See [the Debug Mode section in Troubleshoot collection of Prometheus metrics](prometheus-metrics-troubleshoot.md#debug-mode) for more details.
+### Scrape interval settings
+To update the scrape interval settings for any target, update the duration in the `default-targets-scrape-interval-settings` setting for that target in the `ama-metrics-settings-configmap` [configmap](https://aka.ms/azureprometheus-addon-settings-configmap). The scrape intervals must be set in the correct format specified [here](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file); otherwise, the default value of 30 seconds is applied to the corresponding targets.
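As a sketch, overriding the interval for individual default targets in `default-targets-scrape-interval-settings` might look like this (the target names and surrounding configmap structure are assumptions based on the settings described above; durations use the Prometheus duration format, e.g. `30s`, `1m`):

```yaml
default-targets-scrape-interval-settings: |-
  kubelet = "60s"   # scrape kubelet once a minute
  coredns = "1m"    # equivalent duration in minutes
```

Any target whose duration is missing or malformed falls back to the 30-second default.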
+
## Configure custom Prometheus scrape jobs
You can configure the metrics addon to scrape targets other than the default ones, using the same configuration format as the [Prometheus configuration file](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file).
Follow the instructions to [create, validate, and apply the configmap](prometheu
### Advanced Setup: Configure custom Prometheus scrape jobs for the daemonset
-The `ama-metrics` replicaset pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` replicaset pod to the `ama-metrics` daemonset pod. The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap), similar to the regular configmap, can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery; otherwise each node will try to scrape all targets and will make many calls to the Kubernetes API server. The `node-exporter` config below is one of the default targets for the daemonset pods. It uses the `$NODE_IP` environment variable, which is already set for every ama-metrics addon container to target a specific port on the node:
+The `ama-metrics` replicaset pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` replicaset pod to the `ama-metrics` daemonset pod. The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap), similar to the regular configmap, can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery. Otherwise each node will try to scrape all targets and will make many calls to the Kubernetes API server. The `node-exporter` config below is one of the default targets for the daemonset pods. It uses the `$NODE_IP` environment variable, which is already set for every ama-metrics addon container to target a specific port on the node:
```yaml
- job_name: node
scrape_configs:
- <job-y> ```
-Any other unsupported sections need to be removed from the config before applying as a configmap. Otherwise the custom configuration will fail validation and won't be applied.
+Any other unsupported sections need to be removed from the config before applying as a configmap. Otherwise the custom configuration will fail validation and won't be applied.
Refer to [Apply config file](prometheus-metrics-scrape-validate.md#apply-config-file) section to create a configmap from the prometheus config.
scrape_configs:
#### Kubernetes Service Discovery config
-Targets discovered using [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) will each have different `__meta_*` labels depending on what role is specified. These can be used in the `relabel_configs` section to filter targets or replace labels for the targets.
+Targets discovered using [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) will each have different `__meta_*` labels depending on what role is specified. The labels can be used in the `relabel_configs` section to filter targets or replace labels for the targets.
See the [Prometheus examples](https://aka.ms/azureprometheus-promsampleossconfig) of scrape configs for a Kubernetes cluster.
See the [Prometheus examples](https://aka.ms/azureprometheus-promsampleossconfig
The `relabel_configs` section is applied at the time of target discovery and applies to each target for the job. Below are examples showing ways to use `relabel_configs`.
#### Adding a label
-Add a new label called `example_label` with value `example_value` to every metric of the job. Use `__address__` as the source label only because that label will always exist. This will add the label for every target of the job.
+Add a new label called `example_label` with value `example_value` to every metric of the job. Use `__address__` as the source label only because that label always exists; this adds the label to every target of the job.
```yaml
relabel_configs:
The scrape config below uses the `__meta_*` labels added from the `kubernetes_sd
To scrape certain pods, specify the port, path, and scheme through annotations for the pod and the below job will scrape only the address specified by the annotation:
- `prometheus.io/scrape`: Enable scraping for this pod
-- `prometheus.io/scheme`: If the metrics endpoint is secured, then you'll need to set this to `https` & most likely set the tls config.
+- `prometheus.io/scheme`: If the metrics endpoint is secured, then you'll need to set scheme to `https` & most likely set the tls config.
- `prometheus.io/path`: If the metrics path isn't /metrics, define it with this annotation.
- `prometheus.io/port`: Specify a single, desired port to scrape
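Putting the annotations together, a pod that should be scraped at a non-default path and port might declare them like this (a sketch; the pod name, image, and values are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                          # hypothetical
  annotations:
    prometheus.io/scrape: "true"        # enable scraping for this pod
    prometheus.io/scheme: "http"        # use https if the endpoint is secured
    prometheus.io/path: "/custom-metrics"
    prometheus.io/port: "8080"          # single port to scrape
spec:
  containers:
  - name: app
    image: my-app:latest                # hypothetical
```

The scrape job below reads these annotations via the corresponding `__meta_kubernetes_pod_annotation_*` labels.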
scrape_configs:
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
-
+
# If prometheus.io/scheme is specified, scrape with this scheme instead of http
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
  action: replace
scrape_configs:
- source_labels: [__meta_kubernetes_pod_name]
  action: replace
  target_label: kubernetes_pod_name
-
+
# [Optional] Include all pod labels as labels for each metric
- action: labelmap
  regex: __meta_kubernetes_pod_label_(.+)
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
For example, our most powerful GPT-3 model is called `text-davinci-002`, while o
## Finding what models are available
-You can easily see the models you have available for both inference and fine-tuning in your resource by using the [Models API](../reference.md#models).
+You can easily see the models you have available for both inference and fine-tuning in your resource by using the [Models API](/rest/api/cognitiveservices/azureopenai/models/list).
## Finding the right model
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
Previously updated : 06/24/2022 Last updated : 11/17/2022 recommendations: false
# Azure OpenAI REST API reference
-This article provides details on the REST API endpoints for the Azure OpenAI Service, a service in the Azure Cognitive Services suite. The REST APIs are broken up into two categories:
+This article provides details on the REST API endpoints for the Azure OpenAI Service, a service in the Azure Cognitive Services suite. The REST APIs are broken up into two categories:
* **Management APIs**: The Azure Resource Manager (ARM) provides the management layer in Azure that allows you to create, update, and delete resources in Azure. All services use a common structure for these operations. [Learn More](../../azure-resource-manager/management/overview.md)
* **Service APIs**: The Azure OpenAI service provides you with a set of REST APIs for interacting with the resources & models you deploy via the Management APIs.
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM
}
```
-## Models
-
-#### List all available models
-This API will return the list of all available models in your resource. This includes both 'base models' that are available by default and models you've created from fine-tuning jobs.
-
-```http
-GET https://{your-resource-name}.openai.azure.com/openai/models?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
--- `2022-06-01-preview`-
-#### Example request
-
-```console
-curl -X GET https://example_resource_name.openai.azure.com/openai/models?api-version=2022-06-01-preview \
- -H "api-key: YOUR_API_KEY"
-```
-
-#### Example Response
-```json
-{
- "data": [
- {
- "id": "ada",
- "status": "succeeded",
- "created_at": 1633564800,
- "updated_at": 1633564800,
- "object": "model",
- "capabilities": {
- "fine_tune": true
- },
- "deprecation": {
- "fine_tune": 1704067200,
- "inference": 1704067200
- }
- },
- {
- "id": "davinci",
- "status": "succeeded",
- "created_at": 1642809600,
- "updated_at": 1642809600,
- "object": "model",
- "capabilities": {
- "fine_tune": true
- },
- "deprecation": {
- "fine_tune": 1704067200,
- "inference": 1704067200
- }
- } ],
- "object": "list"
-}
-```
-
-#### Get information on a specific model
-
-This API will retrieve information on a specific model
-
-```http
-GET https://{your-resource-name}.openai.azure.com/openai/models/{model_id}?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```model_id``` | string | Required | ID of the model you wish to get information on. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
--- `2022-06-01-preview`-
-#### Example request
-
-```console
-curl -X GET https://example_resource_name.openai.azure.com/openai/models/ada?api-version=2022-06-01-preview \
- -H "api-key: YOUR_API_KEY"
-```
-
-#### Example response
-
-```json
-{
- "id": "ada",
- "status": "succeeded",
- "created_at": 1633564800,
- "updated_at": 1633564800,
- "object": "model",
- "capabilities": {
- "fine_tune": true
- },
- "deprecation": {
- "fine_tune": 1704067200,
- "inference": 1704067200
- }
-}
-```
-
-## Fine-tune
-
-You can create customized versions of our models using the fine-tuning APIs. These APIs allow you to create training jobs which produce new models that are available for deployment.
-
-#### List all training jobs
-
-This API will list your resource's fine-tuning jobs
-
-```http
-GET https://{your-resource-name}.openai.azure.com/openai/fine-tunes?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format. |
-
-**Supported versions**
--- `2022-06-01-preview`-
-#### Example request
-
-```console
-curl -X GET https://your_resource_name.openai.azure.com/openai/fine-tunes?api-version=2022-06-01-preview \
- -H "api-key: YOUR_API_KEY"
-```
-
-#### Example response
-
-```json
-{
- "data": [
- {
- "model": "curie",
- "fine_tuned_model": "curie.ft-573da37c1eb64047850be7c0cb59953d",
- "training_files": [
- {
- "purpose": "fine-tune",
- "filename": "training_file_name.txt",
- "id": "file-cdb57152d5bd4a7dae8da5915ce14132",
- "status": "succeeded",
- "created_at": 1645700311,
- "updated_at": 1645700314,
- "object": "file"
- }
- ],
- "validation_files": [
- {
- "purpose": "fine-tune",
- "filename": "validation_file_name.txt",
- "id": "file-cdb57152d5bd4a7dae8da5915ce14132",
- "status": "succeeded",
- "created_at": 1645700311,
- "updated_at": 1645700314,
- "object": "file"
- }
- ],
- "result_files": [
- {
- "bytes": 540,
- "purpose": "fine-tune-results",
- "filename": "results.csv",
- "id": "file-1d92e225cf6c428da8790b305b37f9c9",
- "status": "succeeded",
- "created_at": 1645704004,
- "updated_at": 1645704004,
- "object": "file"
- }
- ],
- "hyperparams": {
- "batch_size": 200,
- "learning_rate_multiplier": 0.1,
- "n_epochs": 1,
- "prompt_loss_weight": 0.1
- },
- "events": [
- {
- "created_at": 1645700414,
- "level": "info",
- "message": "Job enqueued. Waiting for jobs ahead to complete.",
- "object": "fine-tune-event"
- },
- {
- "created_at": 1645700420,
- "level": "info",
- "message": "Job started.",
- "object": "fine-tune-event"
- },
- {
- "created_at": 1645703999,
- "level": "info",
- "message": "Job succeeded.",
- "object": "fine-tune-event"
- },
- {
- "created_at": 1645704004,
- "level": "info",
- "message": "Uploaded result files: file-1d92e225cf6c428da8790b305b37f9c9",
- "object": "fine-tune-event"
- }
- ],
- "id": "ft-573da37c1eb64047850be7c0cb59953d",
- "status": "succeeded",
- "created_at": 1645700409,
- "updated_at": 1646042114,
- "object": "fine-tune"
- }
- ],
- "object": "list"
-}
-
-```
-
-#### Create a Fine tune job
-
-This API will create a new job to fine-tune a specified model with the specified dataset.
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/fine-tunes?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
--- `2022-06-01-preview`-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| model | string | yes | n/a | The name of the base model you want to fine tune. use the models API to find the list of available models for fine tuning |
-| training_file | string | yes | n/a |The ID of an uploaded file that contains training data. Your dataset must be formatted as a JSONL file, where each training example is a JSON object with the keys "prompt" and "completion". Additionally, you must upload your file with the purpose fine-tune. |
-| validation_file| string | no | null | The ID of an uploaded file that contains validation data. <br> If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. Your train and validation data should be mutually exclusive. <br><br> Your dataset must be formatted as a JSONL file, where each validation example is a JSON object with the keys "prompt" and "completion". Additionally, you must upload your file with the purpose fine-tune. |
-| batch_size | integer | no | null | The batch size to use for training. The batch size is the number of training examples used to train a single forward and backward pass. <br><br> By default, the batch size will be dynamically configured to be ~0.2% of the number of examples in the training set, capped at 256 - in general, we've found that larger batch sizes tend to work better for larger datasets.
-| learning_rate_multiplier | number (double) | no | null | The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pre-training multiplied by this value.<br><br> We recommend experimenting with values in the range 0.02 to 0.2 to see what produces the best results. |
-| n_epochs | integer | no | 4 for `ada`, `babbage`, `curie`. 1 for `davinci` | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
-| prompt_loss_weight | number (double) | no | 0.1 | The weight to use for loss on the prompt tokens. This controls how much the model tries to learn to generate the prompt (as compared to the completion, which always has a weight of 1.0), and can add a stabilizing effect to training when completions are short. <br><br> |
-| compute_classification_metrics | boolean | no | false | If set, we calculate classification-specific metrics such as accuracy and F-1 score using the validation set at the end of every epoch. |
-| classification_n_classes | integer | no | null | The number of classes in a classification task. This parameter is required for multiclass classification |
-| classification_positive_class | string | no | null | The positive class in binary classification. This parameter is needed to generate precision, recall, and F1 metrics when doing binary classification. |
-| classification_betas | array | no | null | If this is provided, we calculate F-beta scores at the specified beta values. The F-beta score is a generalization of F-1 score. This is only used for binary classification With a beta of 1 (the F-1 score), precision and recall are given the same weight. A larger beta score puts more weight on recall and less on precision. A smaller beta score puts more weight on precision and less on recall. |
-
-#### Example request
-
-```console
-curl https://your-resource-name.openai.azure.com/openai/fine-tunes?api-version=2022-06-01-preview \
- -X POST \
- -H "Content-Type: application/json" \
- -H "api-key: YOUR_API_KEY" \
- -d "{
- \"model\": \"ada\",
- \"training_file\": \"file-6ca9bd640c8e4eaa9ec922604226ab6c\",
- \"validation_file\": \"file-cbdad17806aa48e48b05fc2c44c87bf5\",
- \"hyperparams\": {
- \"batch_size\": 1,
- \"learning_rate_multiplier\": 0.1,
- \"n_epochs\": 4,
- }
- }"
-```
-
-#### Example response
-
-```json
-{
- "model": "ada",
- "training_files": [
- {
- "purpose": "fine-tune",
- "filename": "training_data_file.jsonl",
- "id": "file-63618d04c90a4c50961dacc31950e6a9",
- "status": "succeeded",
- "created_at": 1646927862,
- "updated_at": 1646927867,
- "object": "file"
- }
- ],
- "validation_files": [
- {
- "purpose": "fine-tune",
- "filename": "validation_data_file.jsonl",
- "id": "file-9a19ba124fde451aa32c7527844d48e4",
- "status": "succeeded",
- "created_at": 1646927864,
- "updated_at": 1646927867,
- "object": "file"
- }
- ],
- "hyperparams": {
- "batch_size": 10,
- "learning_rate_multiplier": 0.1,
- "n_epochs": 1,
- "prompt_loss_weight": 0.1
- },
- "events": [],
- "id": "ft-e72ba1b389f8428e9bd4aefea40610b6",
- "status": "notRunning",
- "created_at": 1646927942,
- "updated_at": 1646927942,
- "object": "fine-tune"
-}
-
-```
-
-#### Get a specific fine tuning job
-
-This API will retrieve information about a specific fine tuning job
-
-```http
-GET https://{your-resource-name}.openai.azure.com/openai/fine-tunes/{fine_tune_id}?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```fine_tune_id``` | string | Required | The ID for the fine tuning job you wish to retrieve |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
--- `2022-06-01-preview`-
-#### Example request
-
-```console
-curl https://example_resource_name.openai.azure.com/openai/fine-tunes/ft-d3f2a65d49d34e74a80f6328ba6d8d08?api-version=2022-06-01-preview \
- -H "api-key: YOUR_API_KEY"
-```
-
-#### Example response
-```json
-{
- "id": "ft-9f84568b71ff403a8b118df91128925b",
- "status": "succeeded",
- "created_at": 1645704199,
- "updated_at": 1646042114,
- "object": "fine-tune",
- "model": "ada",
- "fine_tuned_model": "ada.ft-9f84568b71ff403a8b118df91128925b",
- "training_files": [
- {
- "purpose": "fine-tune",
- "filename": "training_file_data.jsonl",
- "id": "file-cdb57152d5bd4a7dae8da5915ce14132",
- "status": "succeeded",
- "created_at": 1645700311,
- "updated_at": 1645700314,
- "object": "file"
- }
- ],
- "validation_files": [
- {
- "purpose": "fine-tune",
- "filename": "validation_file_data.jsonl",
- "id": "file-cdb57152d5bd4a7dae8da5915ce14132",
- "status": "succeeded",
- "created_at": 1645700311,
- "updated_at": 1645700314,
- "object": "file"
- }
- ],
- "result_files": [
- {
- "bytes": 541,
- "purpose": "fine-tune-results",
- "filename": "results.csv",
- "id": "file-8ed3b46d8d02479198067c1735457d76",
- "status": "succeeded",
- "created_at": 1645706224,
- "updated_at": 1645706224,
- "object": "file"
- }
- ],
- "hyperparams": {
- "batch_size": 200,
- "learning_rate_multiplier": 0.1,
- "n_epochs": 1,
- "prompt_loss_weight": 0.1
- },
- "events": [
- {
- "created_at": 1645704207,
- "level": "info",
- "message": "Job enqueued. Waiting for jobs ahead to complete.",
- "object": "fine-tune-event"
- },
- {
- "created_at": 1645704208,
- "level": "info",
- "message": "Job started.",
- "object": "fine-tune-event"
- },
- {
- "created_at": 1645706219,
- "level": "info",
- "message": "Job succeeded.",
- "object": "fine-tune-event"
- },
- {
- "created_at": 1645706224,
- "level": "info",
- "message": "Uploaded result files: file-8ed3b46d8d02479198067c1735457d76",
- "object": "fine-tune-event"
- }
- ]
- }
-```
-
-#### Delete a specific fine tuning job
-
-This API will delete a specific fine tuning job
-
-```http
-DELETE https://{your-resource-name}.openai.azure.com/openai/fine-tunes/{fine_tune_id}?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```fine_tune_id``` | string | Required | The ID for the fine tuning job you wish to delete |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported Versions**
--- `2022-06-01-preview`-
-#### Example request
-
-```console
-curl https://example_resource_name.openai.azure.com/openai/fine-tunes/ft-d3f2a65d49d34e74a80f6328ba6d8d08?api-version=2022-06-01-preview \
- -X DELETE
- -H "api-key: YOUR_API_KEY"
-```
-
-#### Retrieve events for a specific fine tuning job
-
-This API will retrieve the events associated with the specified fine tuning job. To stream events as they become available, use the query parameter "stream" and pass a true value (&stream=true)
--
-```http
-GET https://{your-resource-name}.openai.azure.com/openai/fine-tunes/{fine_tune_id}/events?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```fine_tune_id``` | string | Required | The ID for the fine tuning job you wish to stream events from |
-| ```stream``` | boolean | no | To stream events as they become available pass a true value |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
-
-- `2022-06-01-preview`
-
-#### Example request
-```console
-curl -X GET "https://your_resource_name.openai.azure.com/openai/fine-tunes/ft-d3f2a65d49d34e74a80f6328ba6d8d08/events?stream=true&api-version=2022-06-01-preview" \
- -H "api-key: YOUR_API_KEY"
-```
-
-#### Example response
-
-```json
-{
- "data": [
- {
- "created_at": 1645704207,
- "level": "info",
- "message": "Job enqueued. Waiting for jobs ahead to complete.",
- "object": "fine-tune-event"
- },
- {
- "created_at": 1645704208,
- "level": "info",
- "message": "Job started.",
- "object": "fine-tune-event"
- },
- {
- "created_at": 1645706219,
- "level": "info",
- "message": "Job succeeded.",
- "object": "fine-tune-event"
- },
- {
- "created_at": 1645706224,
- "level": "info",
- "message": "Uploaded result files: file-8ed3b46d8d02479198067c1735457d76",
- "object": "fine-tune-event"
- }
- ],
- "object": "list"
-}
-```
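The events list returned above can be post-processed on the client, for example to check whether the job has finished. A minimal sketch assuming only the response shape shown in the example; the parsing code is illustrative, not an SDK feature:

```python
import json

# Illustrative: extract event messages from an events response shaped like the
# example above. Field names come from the example response.
response_body = """
{
  "data": [
    {"created_at": 1645704207, "level": "info",
     "message": "Job enqueued. Waiting for jobs ahead to complete.",
     "object": "fine-tune-event"},
    {"created_at": 1645706219, "level": "info",
     "message": "Job succeeded.",
     "object": "fine-tune-event"}
  ],
  "object": "list"
}
"""
events = json.loads(response_body)["data"]
messages = [event["message"] for event in events]
job_done = "Job succeeded." in messages
```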
-
-#### Cancel a fine tuning job
-
-This API cancels the specified fine tuning job.
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/fine-tunes/{fine_tune_id}/cancel?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```fine_tune_id``` | string | Required | The ID of the fine tuning job you wish to cancel |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
-- `2022-06-01-preview`
-
-#### Example request
-```console
-curl -X POST https://your_resource_name.openai.azure.com/openai/fine-tunes/ft-d3f2a65d49d34e74a80f6328ba6d8d08/cancel?api-version=2022-06-01-preview \
- -H "api-key: YOUR_API_KEY"
-```
-
-#### Example response
-
-```json
-{
- "model": "ada",
- "training_files": [
- {
- "purpose": "fine-tune",
- "filename": "training_data_file.jsonl",
- "id": "file-63618d04c90a4c50961dacc31950e6a9",
- "status": "succeeded",
- "created_at": 1646927862,
- "updated_at": 1646927867,
- "object": "file"
- }
- ],
- "hyperparams": {
- "batch_size": 10,
- "learning_rate_multiplier": 0.1,
- "n_epochs": 1,
- "prompt_loss_weight": 0.1
- },
- "events": [
- {
- "created_at": 1646927881,
- "level": "info",
- "message": "Job enqueued. Waiting for jobs ahead to complete.",
- "object": "fine-tune-event"
- },
- {
- "created_at": 1646927886,
- "level": "info",
- "message": "Job started.",
- "object": "fine-tune-event"
- }
- ],
- "id": "ft-cd8414443c4243d9aa9644af1c1f4f80",
- "status": "canceled",
- "created_at": 1646927875,
- "updated_at": 1646928438,
- "object": "fine-tune"
-}
-
-```
-
-## Files
-
-#### List all files in your resource
-
-This API lists all the files that have been uploaded to the resource.
-
-```http
-GET https://{your-resource-name}.openai.azure.com/openai/files?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
-
-- `2022-06-01-preview`
-
-#### Example request
-
-```console
-curl -X GET https://example_resource_name.openai.azure.com/openai/files?api-version=2022-06-01-preview \
- -H "api-key: YOUR_API_KEY"
-```
-
-#### Example response
-
-```JSON
-{
- "data": [
- {
- "bytes": 1519036,
- "purpose": "fine-tune",
- "filename": "training_data_file.jsonl",
- "id": "file-90933867b7fe49dfab5468a87aa49bcd",
- "status": "succeeded",
- "created_at": 1646043430,
- "updated_at": 1646043436,
- "object": "file"
- },
- {
- "bytes": 387349,
- "purpose": "fine-tune",
- "filename": "validation_data_file.jsonl",
- "id": "file-c00a485713664d3f87d27de7f083a78b",
- "status": "succeeded",
- "created_at": 1646043444,
- "updated_at": 1646043447,
- "object": "file"
- }
- ],
- "object": "list"
-}
-
-```
-
-#### Upload a file
-
-This API will upload a file that contains the examples used for fine-tuning a model.
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/files?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Form data**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| purpose | string | Required | The intended purpose of the uploaded documents. Currently only 'fine-tune' is supported for fine tuning documents |
-| file | string | Required | The name of the JSON Lines file to be uploaded |
-
-**Supported versions**
-
-- `2022-06-01-preview`
-
-#### Example request
-
-```console
-curl -X POST https://example_resource_name.openai.azure.com/openai/files?api-version=2022-06-01-preview \
- -H "accept: application/json" \
- -H "Content-Type: multipart/form-data" \
- -F "purpose=fine-tune" \
-  -F "file=@training_file_name.jsonl"
-```
-
-#### Example response
-
-```json
-{
- "bytes": 405898,
- "purpose": "fine-tune",
- "filename": "training_file.jsonl",
- "id": "file-a3a7c9947c0d4a2ead8c4adddb973cc3",
- "status": "notRunning",
- "created_at": 1646928754,
- "updated_at": 1646928754,
- "object": "file"
-}
-```
-
-#### Retrieve information on a file
-
-This API returns information about the specified file.
-
-```http
-GET https://{your-resource-name}.openai.azure.com/openai/files/{file_id}?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```file_id``` | string | Required | The ID of the file you wish to retrieve |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
-
-- `2022-06-01-preview`
-
-#### Example request
-
-```console
-curl -X GET https://example_resource_name.openai.azure.com/openai/files/file-6ca9bd640c8e4eaa9ec922604226ab6c?api-version=2022-06-01-preview \
- -H "api-key: YOUR_API_KEY"
-```
-
-#### Example response
-
-```json
-{
- "bytes": 405898,
- "purpose": "fine-tune",
- "filename": "test_prepared_train.jsonl",
- "id": "file-63618d04c90a4c50961dacc31950e6a9",
- "status": "succeeded",
- "created_at": 1646927862,
- "updated_at": 1646927867,
- "object": "file"
-}
-```
-
-#### Delete a file
-
-This API deletes the specified file.
-
-```http
-DELETE https://{your-resource-name}.openai.azure.com/openai/files/{file_id}?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```file_id``` | string | Required | The ID of the file you wish to delete |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
-
-- `2022-06-01-preview`
-
-#### Example request
-```console
-curl -X DELETE https://example_resource_name.openai.azure.com/openai/files/file-6ca9bd640c8e4eaa9ec922604226ab6c?api-version=2022-06-01-preview \
- -H "api-key: YOUR_API_KEY"
-```
-
-#### Download a file
-
-This API will download the specified file.
-
-```http
-GET https://{your-resource-name}.openai.azure.com/openai/files/{file_id}/content?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```file_id``` | string | Required | The ID of the file you wish to download |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
-
-- `2022-06-01-preview`
-
-#### Example request
-```console
-curl -X GET https://example_resource_name.openai.azure.com/openai/files/file-6ca9bd640c8e4eaa9ec922604226ab6c/content?api-version=2022-06-01-preview \
- -H "api-key: YOUR_API_KEY"
-```
-
-#### Import a file from Azure Blob
-
-Import files from blob storage or other web locations. We recommend you use this option for importing large files. Large files can become unstable when uploaded through multipart forms because the requests are atomic and can't be retried or resumed.
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/files/import?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
-
-- `2022-06-01-preview`
-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| purpose | string | Yes | N/A | The intended purpose of the uploaded documents. Currently only 'fine-tune' is supported for fine tuning documents |
-| filename | string | Yes | N/A | The name of the file you wish to import |
-| content_url | string | Yes | N/A | Blob URI location. Include the SAS token if the file is non-public. |
-
-#### Example request
-
-```console
-curl -X POST https://example_resource_name.openai.azure.com/openai/files/import?api-version=2022-06-01-preview \
-  -H "api-key: YOUR_API_KEY" \
- -H "Content-Type: application/json" \
- -d "{
- \"purpose\": \"fine-tune\",
- \"filename\": \"NAME_OF_FILE\",
- \"content_url\": \"URL_TO_FILE\"
- }"
-```
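Building the request body programmatically avoids the shell escaping in the curl example above. A small sketch using Python's standard library; the placeholder values are the same ones used in the example and must be replaced with your own:

```python
import json

# Illustrative: assemble the import request body without shell escaping.
# NAME_OF_FILE and URL_TO_FILE are placeholders from the example above.
body = {
    "purpose": "fine-tune",
    "filename": "NAME_OF_FILE",
    "content_url": "URL_TO_FILE",  # include a SAS token if the blob is non-public
}
payload = json.dumps(body)
```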
-
-#### Example response
-
-```json
-{
- "purpose": "fine-tune",
- "filename": "validationfiletest.jsonl",
- "id": "file-83f408999d8f4c12af66d4e067e19736",
- "status": "notRunning",
- "created_at": 1646929498,
- "updated_at": 1646929498,
- "object": "file"
-}
-```
-
-## Deployments
-
-#### List all deployments in the resource
-
-This API will return a list of all the deployments in the resource.
-
-```http
-GET https://{your-resource-name}.openai.azure.com/openai/deployments?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
-
-- `2022-06-01-preview`
-
-
-#### Example request
-
-```console
-curl -X GET https://example_resource_name.openai.azure.com/openai/deployments?api-version=2022-06-01-preview \
- -H "api-key: YOUR_API_KEY"
-```
-
-#### Example response
-
-```json
-{
- "data": [
- {
- "model": "curie.ft-573da37c1eb64047850be7c0cb59953d",
- "scale_settings": {
- "scale_type": "standard"
- },
- "owner": "organization-owner",
- "id": "your_deployment_name",
- "status": "succeeded",
- "created_at": 1645710085,
- "updated_at": 1645710085,
- "object": "deployment"
-    }
- ],
- "object": "list"
-}
-
-```
-
-
-#### Create a new deployment
-
-This API creates a new deployment in the resource. The deployment enables you to make completions and embeddings calls with the model.
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/deployments?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
-
-- `2022-06-01-preview`
-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| model | string | Yes | N/A | The name of the model you wish to deploy. You can find the list of available models from the models API. |
-| scale_type | string | Yes | N/A | Scale configuration. The only option today is 'standard' |
-
-#### Example request
-
-```console
-curl -X POST https://example_resource_name.openai.azure.com/openai/deployments?api-version=2022-06-01-preview \
-  -H "api-key: YOUR_API_KEY" \
- -H "Content-Type: application/json" \
- -d "{
- \"model\": \"ada\",
- \"scale_settings\": {
- \"scale_type\": \"standard\"
- }
- }"
-```
-
-#### Example response
-
-```json
-{
- "model": "ada",
- "scale_settings": {
- "scale_type": "standard"
- },
- "owner": "organization-owner",
- "id": "deployment-2f9834184fd34d1a9a0f26464450db87",
- "status": "running",
- "created_at": 1646929698,
- "updated_at": 1646929698,
- "object": "deployment"
-}
-```
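The response above reports `"status": "running"`; a deployment generally needs to be polled until it reaches a terminal state before it can serve calls. A minimal sketch of that loop, with the repeated GET deployment call faked by a canned status sequence so the control flow is visible without a live resource (function names are illustrative, not part of any SDK):

```python
from typing import Iterator

# Illustrative: poll until the deployment leaves "running". `status_stream`
# stands in for repeated GET deployment calls returning the "status" field.
def wait_for_deployment(status_stream: Iterator[str]) -> str:
    for status in status_stream:
        if status in ("succeeded", "failed"):
            return status
        # real code would sleep between GET calls here
    raise RuntimeError("stream ended before a terminal status")

final_status = wait_for_deployment(iter(["running", "running", "succeeded"]))
```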
-
-
-#### Retrieve information about a deployment
-
-This API retrieves information about the specified deployment.
-
-```http
-GET https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment_id}?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deployment_id``` | string | Required | The name of the deployment you wish to retrieve |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
-
-- `2022-06-01-preview`
-
-#### Example request
-
-```console
-curl -X GET https://example_resource_name.openai.azure.com/openai/deployments/{deployment_id}?api-version=2022-06-01-preview \
- -H "api-key: YOUR_API_KEY"
-```
-#### Example response
-
-```json
-{
- "model": "ada",
- "scale_settings": {
- "scale_type": "standard"
- },
- "owner": "organization-owner",
- "id": "deployment-2f9834184fd34d1a9a0f26464450db87",
- "status": "running",
- "created_at": 1646929698,
- "updated_at": 1646929698,
- "object": "deployment"
-}
-```
-
-#### Update a deployment
-
-This API updates an existing deployment. Make sure to set the content type to `application/merge-patch+json`.
-
-```http
-PATCH https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment_id}?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deployment_id``` | string | Required | The name of the deployment you wish to update |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
--- `2022-06-01-preview`-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| model | string | Yes | N/A | The name of the model you wish to deploy. You can find the list of available models from the models API. |
-| scale_type | string | Yes | N/A | Scale configuration. The only option today is 'standard' |
-
-#### Example request
-
-```console
-curl -X PATCH https://example_resource_name.openai.azure.com/openai/deployments/my_personal_deployment?api-version=2022-06-01-preview \
-  -H "api-key: YOUR_API_KEY" \
- -H "Content-Type: application/merge-patch+json" \
- -d "{
- \"model\": \"ada\",
- \"scale_settings\": {
- \"scale_type\": \"standard\"
- }
- }"
-```
-
-#### Delete a deployment
-
-This API deletes the specified deployment.
-
-```http
-DELETE https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment_id}?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deployment_id``` | string | Required | The name of the deployment you wish to delete |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD-preview format.|
-
-**Supported versions**
-
-- `2022-06-01-preview`
-
-#### Example request
-
-```console
-curl -X DELETE https://example_resource_name.openai.azure.com/openai/deployments/{deployment_id}?api-version=2022-06-01-preview \
- -H "api-key: YOUR_API_KEY"
-```
- ## Next steps
+Learn about [managing deployments, models, and fine-tuning with the REST API](/rest/api/cognitiveservices/azureopenai/deployments/create).
Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
The below error codes are exposed by Call Automation SDK.
| Error Code | Description | Actions to take |
|--|--|--|
| 400 | Bad request | The input request is invalid. Look at the error message to determine which input is incorrect. |
+| 400 | Play Failed | Ensure your audio file is WAV, 16KHz, Mono and make sure the file url is publicly accessible. |
+| 400 | Recognize Failed | Check the error message. The message will highlight if this is due to timeout being reached or if operation was canceled. For more information about the error codes and messages you can check our how-to guide for [gathering user input](../how-tos/call-automation/recognize-action.md#event-codes). |
| 401 | Unauthorized | HMAC authentication failed. Verify whether the connection string used to create CallAutomationClient is correct. |
| 403 | Forbidden | Request is forbidden. Make sure that you have access to the resource you are trying to access. |
| 404 | Resource not found | The call you are trying to act on doesn't exist. For example, transferring a call that has already disconnected. |
| 429 | Too many requests | Retry after a delay suggested in the Retry-After header, then exponentially back off. |
| 500 | Internal server error | Retry after a delay. If it persists, raise a support ticket. |
+| 500 | Play Failed | File a support request through the Azure portal. |
+| 500 | Recognize Failed | Check error message and confirm the audio file format is valid (WAV, 16KHz, Mono), if the file format is valid then file a support request through Azure portal. |
| 502 | Bad gateway | Retry after a delay with a fresh HTTP client. |

Consider the below tips when troubleshooting certain issues.
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
In this tutorial:
## Store the signing certificate in AKV
-If you have an existing certificate, upload it to AKV. For more information on how to use your own signing key, see the [signing certificate requirements.](https://github.com/notaryproject/notaryproject/blob/main/signature-specification.md#certificate-requirements)
+If you have an existing certificate, upload it to AKV. For more information on how to use your own signing key, see the [signing certificate requirements.](https://github.com/notaryproject/notaryproject/blob/main/specs/signature-specification.md#certificate-requirements)
Otherwise, create an x509 self-signed certificate and store it in AKV for remote signing using the steps below.

### Create a self-signed certificate (Azure CLI)
databox-online Azure Stack Edge Gpu Deploy Virtual Machine High Performance Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-high-performance-network.md
Previously updated : 05/19/2022 Last updated : 11/18/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on an Azure Stack Edge Pro GPU device so that I can use it to transform data before I send it to Azure.
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-You can create and manage virtual machines (VMs) on an Azure Stack Edge Pro GPU device by using the Azure portal, templates, and Azure PowerShell cmdlets, and via the Azure CLI or Python scripts. This article describes how to create and manage a high-performance network (HPN) VM on your Azure Stack Edge Pro GPU device.
+You can create and manage virtual machines (VMs) on an Azure Stack Edge Pro GPU device by using the Azure portal, templates, and Azure PowerShell cmdlets, and via the Azure CLI or Python scripts. This article describes how to create and manage a high performance network (HPN) VM on your Azure Stack Edge Pro GPU device.
## About HPN VMs
-A non-uniform memory access (NUMA) architecture is used to increase processor speeds. In a NUMA system, CPUs are arranged in smaller systems called nodes. Each node has its own processors and memory. Processors are typically allocated memory that they are close to so the access is quicker. For more information, see [NUMA Support](/windows/win32/procthread/numa-support).
+HPN VMs are specifically designed for 5G and Multi-access Edge Computing (MEC) network functions that require high packet processing rates, low latency, and low jitter.
-On your Azure Stack Edge device, logical processors are distributed on NUMA nodes and high speed network interfaces can be attached to these nodes. An HPN VM has a dedicated set of logical processors. These processors are first picked from the NUMA node that has high speed network interface attached to it, and then picked from other nodes. An HPN VM can only use the memory of the NUMA node that is assigned to its processors.
+HPN VMs rely on a non-uniform memory access (NUMA) architecture to increase processing speeds. In a NUMA system, CPUs are arranged in smaller systems called nodes. Each node has a dedicated set of logical processors and memory. An HPN VM can use CPU from only one NUMA node.
-To run low latency and high throughput network applications on the HPN VMs deployed on your device, make sure to reserve vCPUs that reside in NUMA node 0. This node has Mellanox high speed network interfaces, Port 5 and Port 6, attached to it.
+On your Azure Stack Edge device, logical processors are distributed on NUMA nodes and high speed network interfaces can be attached to these nodes.
+
+To maximize performance by keeping processing and transmission on the same NUMA node, processors are allocated the memory closest to them, which reduces physical distance. For more information, see [NUMA Support](/windows/win32/procthread/numa-support).
+
+### vCPU reservations for Azure Stack Edge
+
+To deploy HPN VMs on Azure Stack Edge, you must reserve vCPUs on NUMA nodes. The number of vCPUs reserved determines the available vCPUs that can be assigned to the HPN VMs.
+
+For the number of cores that each HPN VM size uses, see the [Supported HPN VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
+
+In version 2210, vCPUs are automatically reserved with the maximum number of vCPUs supported on each NUMA node. If vCPUs were already reserved for HPN VMs in an earlier version, the existing reservation is carried forward to version 2210. If vCPUs weren't reserved for HPN VMs in an earlier version, upgrading to 2210 still carries forward the existing configuration.
+
+For versions 2209 and earlier, you must reserve vCPUs on NUMA nodes before you deploy HPN VMs on your device. We recommend that the vCPU reservation is done on NUMA node 0, as this node has Mellanox high speed network interfaces, Port 5 and Port 6, attached to it.
-
## HPN VM deployment workflow

The high-level summary of the HPN deployment workflow is as follows:
-1. Enable a network interface for compute on your Azure Stack Edge device. This step creates a virtual switch on the specified network interface.
-1. Enable cloud management of VMs from the Azure portal.
-1. Upload a VHD to an Azure Storage account by using Azure Storage Explorer.
-1. Use the uploaded VHD to download the VHD onto the device, and create a VM image from the VHD.
-1. Reserve vCPUs on the device for HPN VMs.
-1. Use the resources created in the previous steps:
- 1. VM image that you created.
- 1. Virtual switch associated with the network interface on which you enabled compute.
- 1. Subnet associated with the virtual switch.
+1. While configuring the network settings on your device, make sure that there's a virtual switch associated with a network interface on your device that can be used for the VM resources and VMs. We'll use the default virtual network created with the vswitch for this article. You have the option of creating and using a different virtual network, if desired.
- And create or specify the following resources inline:
- 1. VM name, choose a supported HPN VM size, sign-in credentials for the VM.
- 1. Create new data disks or attach existing data disks.
- 1. Configure static or dynamic IP for the VM. If you're providing a static IP, choose from a free IP in the subnet range of the network interface enabled for compute.
+2. Enable cloud management of VMs from the Azure portal. Download a VHD onto your device, and create a VM image from the VHD.
- Use the preceding resources to create an HPN VM.
+3. Reserve vCPUs on the device for HPN VMs with versions 2209 and earlier. For version 2210, the vCPUs are automatically reserved.
+
+4. Use the resources created in the previous steps:
+
+ 1. The VM image that you created.
+ 2. The default virtual network associated with the virtual switch. The default virtual network has the same name as the name of the virtual switch.
+ 3. The default subnet for the default virtual network.
+
+1. And create or specify the following resources:
+
+ 1. Specify a VM name, choose a supported HPN VM size, and specify sign-in credentials for the VM.
+ 1. Create new data disks or attach existing data disks.
+ 1. Configure static or dynamic IP for the VM. If you're providing a static IP, choose from a free IP in the subnet range of the default virtual network.
+
+1. Use the preceding resources to create an HPN VM.
## Prerequisites
-Before you begin to create and manage VMs on your device via the Azure portal, make sure that:
+Before you create and manage VMs on your device via the Azure portal, make sure that:
+
+### [2210](#tab/2210)
+
+- You've configured and activated your Azure Stack Edge Pro GPU device as described in [Tutorial: Activate Azure Stack Edge Pro with GPU](azure-stack-edge-gpu-deploy-activate.md).
+
+ Make sure that you've created a virtual switch. The VMs and the resources for VMs will be using this virtual switch and the associated virtual network. For more information, see [Configure a virtual switch on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches).
+
+- You have access to a VM image for the VM you intend to create. To create a VM image, you can [Get an image from Azure Marketplace](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md).
+
+- In addition to the above prerequisites for VM creation, you'll also need to check the vCPU reservation of HPN VMs.
+
+ - The default vCPU reservation uses the SkuPolicy, which reserves all vCPUs that are available for HPN VMs.
+
+  - If the vCPUs were already reserved for HPN VMs in an earlier version (for example, version 2009 or earlier), the existing reservation is carried forward to version 2210.
+
+ - For most use cases, we recommend that you use the default configuration. If needed, you can also customize the NUMA configuration for HPN VMs. To customize the configuration, use the steps provided for 2209.
+
+- Use the following steps to get information about the SkuPolicy settings on your device:
+
+ 1. [Connect to the PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
+
+
+ 1. Run the following command to see the available NUMA policies on your device:
+
+ ```powershell
+ Get-HcsNumaPolicy
+ ```
+
+ Here's an example output:
+
+ ```powershell
+ [DBE-BNVGF33.microsoftdatabox.com]: PS>Get-HcsNumaPolicy
+
+ Get-HcsNumaPolicy
+ PolicyType: AllRoot
+ HpnLpMapping:
+ CPUs: []
+
+ PolicyType: SkuPolicy
+ HpnLpMapping:
+ CPUs: [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]
+
+ [DBE-BNVGF33.microsoftdatabox.com]: PS>
+ ```
+
+ 1. Run the following command to get the vCPU reservation information on your device:
+
+ This cmdlet will output:
+ 1. HpnLpMapping: The NUMA logical processor indexes that are reserved on the machine.
+ 1. HpnCapableLpMapping: The NUMA logical processor indexes that are capable for reservation.
+    1. HpnLpAvailable: The NUMA logical processor indexes that are available for new HPN VM deployments.
+ 1. The NUMA logical processors used by HPN VMs and NUMA logical processors available for new HPN VM deployments on each NUMA node in the cluster.
+
+ ```powershell
+ Get-HcsNumaLpMapping
+ ```
+
+ Here's an example output when SkuPolicy is in effect:
+
+ ```powershell
+ [DBE-BNVGF33.microsoftdatabox.com]: PS>Get-HcsNumaLpMapping
+ Hardware:
+ { Numa Node #0 : CPUs [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23] }
+ { Numa Node #1 : CPUs [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47] }
+
+ HpnCapableLpMapping:
+ { Numa Node #0 : CPUs [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23] }
+ { Numa Node #1 : CPUs [28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47] }
+
+ BNVGF33:
+ HpnLpMapping:
+ { Numa Node #0 : CPUs [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23] }
+ { Numa Node #1 : CPUs [28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47] }
+
+ HpnLpAvailable:
+ { Numa Node #0 : CPUs [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23] }
+ { Numa Node #1 : CPUs [28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47] }
+ ```
+ Proceed to the following steps only if you want to change the current reservation, or to create a new reservation.
+
+ 1. Run the following command to set a NUMA logical processor mapping on your device. You can use the `-Custom` parameter if you want to specify a custom logical processor set. See the **2209 and earlier** tab in this article for rules when specifying a custom set.
+
+ Running this command stops running VMs, triggers a reboot, and then restarts the VMs.
+
+ ```powershell
+ Set-HcsNumaLpMapping -UseSkuPolicy
+ ```
+
+ Here's an example output:
+
+ ```powershell
+ [DBE-BNVGF33.microsoftdatabox.com]: Set-HcsNumaLpMapping -UseSkuPolicy
+ Requested Config already exists. No action needed.
+
+ [DBE-BNVGF33.microsoftdatabox.com]: PS> Set-HcsNumaLpMapping -UseAllRoot
+ Requested Configuration requires a reboot...
+ Machine will reboot in some time. Please be patient.
+ [DBE-BNVGF33.microsoftdatabox.com]: PS>
+ ```
+
+ 1. Run the following command to validate the vCPU reservation and verify that the VMs have restarted.
+
+ ```powershell
+ Get-HcsNumaLpMapping
+ ```
+
+ The output shouldn't show the indexes you set. If you see the indexes you set in the output, the `Set` command didn't complete successfully. Retry the command and if the problem persists, contact Microsoft Support.
+
+ Here's an example output.
+
+ ```powershell
+ dbe-1csphq2.microsoftdatabox.com]: PS> Get-HcsNumaLpMapping -MapType MinRootAware -NodeName 1CSPHQ2
+
+ { Numa Node #0 : CPUs [0, 1, 2, 3] }
+
+ { Numa Node #1 : CPUs [20, 21, 22, 23] }
+
+ [dbe-1csphq2.microsoftdatabox.com]:
+
+ PS>
+    ```
+
+### [2209 and earlier](#tab/2209)
- You've completed the network settings on your Azure Stack Edge Pro GPU device as described in [Step 1: Configure an Azure Stack Edge Pro GPU device](./azure-stack-edge-gpu-connect-resource-manager.md#step-1-configure-azure-stack-edge-device).
Before you begin to create and manage VMs on your device via the Azure portal, m
- You have access to a Windows or Linux VHD that you'll use to create the VM image for the VM you intend to create.
-In addition to the above prerequisites that are used for VM creation, you'll also need to configure the following prerequisite specifically for the HPN VMs:
+In addition to the above prerequisites that are used for VM creation, configure the following prerequisite specifically for the HPN VMs:
- Reserve vCPUs for HPN VMs on the Mellanox interface. Follow these steps: 1. [Connect to the PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).-
- 1. Identify all the VMs running on your device. This includes Kubernetes VMs, or any VM workloads that you may have deployed.
+ 1. Identify all the VMs running on your device, including Kubernetes VMs and any VM workloads that you may have deployed.
```powershell get-vm
In addition to the above prerequisites that are used for VM creation, you'll als
```powershell stop-vm -force
- ```
- 1. Get the `hostname` for your device. This should return a string corresponding to the device hostname.
+ ```
+ 1. Get the `hostname` for your device. This should return a string corresponding to the device hostname.
   ```powershell
   hostname
   ```
- 1. Get the logical processor indexes to reserve for HPN VMs.
+ 1. Get the logical processor indexes to reserve for HPN VMs.
```powershell
- Get-HcsNumaLpMapping -MapType HighPerformanceCapable -NodeName <Output of hostname command>
+ Get-HcsNumaLpMapping -MapType HighPerformanceCapable -NodeName <Output of hostname command>
```
- Here is an example output:
- ```powershell
- [dbe-1csphq2.microsoftdatabox.com]: PS>hostname
- 1CSPHQ2
- [dbe-1csphq2.microsoftdatabox.com]: P> Get-HcsNumaLpMapping -MapType HighPerformanceCapable -NodeName 1CSPHQ2
- { Numa Node #0 : CPUs [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] }
- { Numa Node #1 : CPUs [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39] }
 + Here's an example output:
+
+ ```powershell
 + [dbe-1csphq2.microsoftdatabox.com]: PS> hostname
 + 1CSPHQ2
 + [dbe-1csphq2.microsoftdatabox.com]: PS> Get-HcsNumaLpMapping -MapType HighPerformanceCapable -NodeName 1CSPHQ2
+ { Numa Node #0 : CPUs [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] }
+ { Numa Node #1 : CPUs [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39] }
+
+ [dbe-1csphq2.microsoftdatabox.com]: PS>
+ ```
+
 + 1. Reserve vCPUs for HPN VMs. The number of vCPUs reserved here determines the available vCPUs that could be assigned to the HPN VMs. For the number of cores that each HPN VM size uses, see the [Supported HPN VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes). On your device, Mellanox ports 5 and 6 are on NUMA node 0.
+
+ ```powershell
+ Set-HcsNumaLpMapping -CpusForHighPerfVmsCommaSeperated <Logical indexes from the Get-HcsNumaLpMapping cmdlet> -AssignAllCpusToRoot $false
+ ```
+
+ Here's an example output:
+
+ ```powershell
+ [dbe-1csphq2.microsoftdatabox.com]: PS>Set-HcsNumaLpMapping -CpusForHighPerfVmsCommaSeperated "4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39" -AssignAllCpusToRoot $false
+
+ Requested Configuration requires a reboot...
+
+ Machine will reboot in some time. Please be patient.
+
+ [dbe-1csphq2.microsoftdatabox.com]: PS>
+ ```
+
 + > [!NOTE]
 + > - You can choose to reserve all the logical indexes from both NUMA nodes shown in the example or a subset of the indexes. If you reserve a subset of indexes, pick the indexes from the device node that has a Mellanox network interface attached to it for best performance. For Azure Stack Edge Pro GPU, the NUMA node with the Mellanox network interface is #0.
 + > - The list of logical indexes must contain paired sequences of an even number followed by the next odd number, for example, (4,5)(6,7)(10,11). Attempting to set a list of numbers such as `5,6,7`, or pairs such as `4,6`, won't work.
 + > - Using two `Set-HcsNuma` commands consecutively to assign vCPUs resets the configuration. Also, don't free the CPUs using the `Set-HcsNuma` cmdlet if you have deployed an HPN VM.
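The pairing rule for the logical index list can be pre-checked before you run `Set-HcsNumaLpMapping`. A minimal sketch, assuming a Python helper on a workstation (`is_valid_hpn_cpu_list` is a hypothetical name, not a device cmdlet):

```python
# Hypothetical pre-check (not a device cmdlet) for the pairing rule above:
# the index list must consist of (even, odd) sibling pairs such as (4,5)(6,7).
def is_valid_hpn_cpu_list(indexes):
    if not indexes or len(indexes) % 2 != 0:
        return False
    pairs = zip(indexes[0::2], indexes[1::2])
    return all(a % 2 == 0 and b == a + 1 for a, b in pairs)

print(is_valid_hpn_cpu_list([4, 5, 6, 7, 10, 11]))  # True
print(is_valid_hpn_cpu_list([5, 6, 7]))             # False - odd count, starts on an odd index
print(is_valid_hpn_cpu_list([4, 6]))                # False - 6 is not 4's sibling
```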
+
+ > [!NOTE]
 + > Devices that are updated to 2210 from earlier versions keep the minroot configuration they had before the upgrade.
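The sizing implication of the reservation step above (the reserved vCPU count determines how many HPN VMs fit) can be sketched as a quick calculation. The vCPU counts used here are placeholder assumptions; see the supported HPN VM sizes documentation linked above for real values:

```python
# Hypothetical sizing aid: how many HPN VMs of a given size fit into a
# vCPU reservation. The vCPU counts are placeholders, not real size specs.
def hpn_vm_capacity(reserved_vcpus, vcpus_per_vm):
    if vcpus_per_vm <= 0:
        raise ValueError("vcpus_per_vm must be positive")
    return reserved_vcpus // vcpus_per_vm

# Example: a 32-vCPU reservation holds four 8-vCPU HPN VMs.
print(hpn_vm_capacity(32, 8))  # 4
```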
- [dbe-1csphq2.microsoftdatabox.com]: PS>
- ```
-
- 1. Reserve vCPUs for HPN VMs. The number of vCPUs reserved here determines the available vCPUs that could be assigned to the HPN VMs. For the number of cores that each HPN VM size uses, see the [Supported HPN VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes). On your device, Mellanox ports 5 and 6 are on NUMA node 0.
-
- ```powershell
- Set-HcsNumaLpMapping -CpusForHighPerfVmsCommaSeperated <Logical indexes from the Get-HcsNumaLpMapping cmdlet> -AssignAllCpusToRoot $false
- ```
 + 1. Wait for the device to finish rebooting. Once the device is running again, open a new PowerShell session. [Connect to the PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
- After this command is run, the device restarts automatically.
+ 1. Validate the vCPU reservation and verify that the VMs have restarted.
- Here is an example output:
-
- ```powershell
- [dbe-1csphq2.microsoftdatabox.com]: PS>Set-HcsNumaLpMapping -CpusForHighPerfVmsCommaSeperated "4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39" -AssignAllCpusToRoot $false
- Requested Configuration requires a reboot...
- Machine will reboot in some time. Please be patient.
- [dbe-1csphq2.microsoftdatabox.com]: PS>
- ```
-
- > [!NOTE]
- > - You can choose to reserve all the logical indexes from both NUMA nodes shown in the example or a subset of the indexes. If you choose to reserve a subset of indexes, pick the indexes from the device node that has a Mellanox network interface attached to it, for best performance. For Azure Stack Edge Pro GPU, the NUMA node with Mellanox network interface is #0.
- > - The list of logical indexes must contain a paired sequence of an odd number and an even number. For example, ((4,5)(6,7)(10,11)). Attempting to set a list of numbers such as `5,6,7` or pairs such as `4,6` will not work.
- > - Using two `Set-HcsNuma` commands consecutively to assign vCPUs will reset the configuration. Also, do not free the CPUs using the Set-HcsNuma cmdlet if you have deployed an HPN VM.
-
- 1. Wait for the device to finish rebooting. Once the device is running, open a new PowerShell session. [Connect to the PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
-
- 1. Validate the vCPU reservation.
-
- ```powershell
- Get-HcsNumaLpMapping -MapType MinRootAware -NodeName <Output of hostname command>
- ```
- The output should not show the indexes you set. If you see the indexes you set in the output, the `Set` command did not complete successfully. Retry the command and if the problem persists, contact Microsoft Support.
-
- Here is an example output.
-
- ```powershell
- [dbe-1csphq2.microsoftdatabox.com]: PS> Get-HcsNumaLpMapping -MapType MinRootAware -NodeName 1CSPHQ2
- { Numa Node #0 : CPUs [0, 1, 2, 3] }
- { Numa Node #1 : CPUs [20, 21, 22, 23] }
- [dbe-1csphq2.microsoftdatabox.com]: PS>
- ```
-
- 1. Restart the VMs that you had stopped in the earlier step.
-
- ```powershell
- start-vm
- ```
- <!-- Start-vm doesn't seem to work alone. Get-vm alone doesn't seem to return my running VM"VmId"-->
+ ```powershell
+ Get-HcsNumaLpMapping
+ ```
+
+ The output shouldn't show the indexes you set. If you see the indexes you set in the output, the `Set` command didn't complete successfully. Retry the command and if the problem persists, contact Microsoft Support.
+
+ Here's an example output.
+
+ ```powershell
 + [dbe-1csphq2.microsoftdatabox.com]: PS> Get-HcsNumaLpMapping -MapType MinRootAware -NodeName 1CSPHQ2
 + { Numa Node #0 : CPUs [0, 1, 2, 3] }
 + { Numa Node #1 : CPUs [20, 21, 22, 23] }
 + [dbe-1csphq2.microsoftdatabox.com]: PS>
+ ```
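The validation described above, that none of the reserved indexes should appear in the post-reboot mapping, can also be scripted against the captured output. A minimal sketch, assuming a Python helper on a workstation (`reservation_applied` is a hypothetical name, not part of the device tooling):

```python
import re

# Hypothetical check (not part of the device tooling) mirroring the validation
# step: the reserved logical processor indexes must be absent from the
# post-reboot Get-HcsNumaLpMapping output.
def reservation_applied(mapping_output, reserved_indexes):
    shown = set()
    for cpus in re.findall(r"CPUs \[([\d,\s]*)\]", mapping_output):
        shown.update(int(c) for c in cpus.split(",") if c.strip())
    return shown.isdisjoint(reserved_indexes)

output = "{ Numa Node #0 : CPUs [0, 1, 2, 3] }\n{ Numa Node #1 : CPUs [20, 21, 22, 23] }"
print(reservation_applied(output, {4, 5, 6, 7}))  # True - reservation took effect
print(reservation_applied(output, {2, 3}))        # False - indexes still visible
```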
+
+ 1. Restart the VMs that you had stopped in the earlier step.
+
+ ```powershell
+ start-vm
 + ```
+
+## Deploy a VM
+
+Follow these steps to create an HPN VM on your device.
+> [!NOTE]
+> Azure Stack Edge Pro 1 devices have two NUMA nodes, so you must provision HPN VMs before you provision non-HPN VMs.
+ 1. In the Azure portal of your Azure Stack Edge resource, [Add a VM image](azure-stack-edge-gpu-deploy-virtual-machine-portal.md#add-a-vm-image). You'll use this VM image to create a VM in the next step. You can choose either Gen1 or Gen2 for the VM.
+ 1. Follow all the steps in [Add a VM](azure-stack-edge-gpu-deploy-virtual-machine-portal.md#add-a-vm) with this configuration requirement.
Follow these steps to create an HPN VM on your device.
You'll use the IP address for the network interface to connect to the VM.
- > [!NOTE]
- > If the vCPUs are not reserved for HPN VMs prior to the deployment, the deployment will fail with `FabricVmPlacementErrorInsufficientNumaNodeCapacity` error.
-
+ > [!NOTE]
+ > If the vCPUs are not reserved for HPN VMs prior to the deployment, the deployment will fail with a *FabricVmPlacementErrorInsufficientNumaNodeCapacity* error.
+## Next steps
+
+- [Troubleshoot VM deployment](azure-stack-edge-gpu-troubleshoot-virtual-machine-provisioning.md)
+- [Monitor VM activity on your device](azure-stack-edge-gpu-monitor-virtual-machine-activity.md)
+- [Monitor CPU and memory on a VM](azure-stack-edge-gpu-monitor-virtual-machine-metrics.md)
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
description: This article lists the security alerts visible in Microsoft Defende
Previously updated : 07/19/2022
Last updated : 11/15/2022

# Security alerts - a reference guide
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Detected anomalous mix of upper and lower case characters in command-line** | Analysis of host data on %{Compromised Host} detected a command line with anomalous mix of upper and lower case characters. This kind of pattern, while possibly benign, is also typical of attackers trying to hide from case-sensitive or hash-based rule matching when performing administrative tasks on a compromised host. | - | Medium |
| **Detected change to a registry key that can be abused to bypass UAC** | Analysis of host data on %{Compromised Host} detected that a registry key that can be abused to bypass UAC (User Account Control) was changed. This kind of configuration, while possibly benign, is also typical of attacker activity when trying to move from unprivileged (standard user) to privileged (for example administrator) access on a compromised host. | - | Medium |
| **Detected decoding of an executable using built-in certutil.exe tool** | Analysis of host data on %{Compromised Host} detected that certutil.exe, a built-in administrator utility, was being used to decode an executable instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using a tool such as certutil.exe to decode a malicious executable that will then be subsequently executed. | - | High |
-| **Detected enabling of the WDigest UseLogonCredential registry key** | Analysis of host data detected a change in the registry key HKLM\SYSTEM\ CurrentControlSet\Control\SecurityProviders\WDigest\ "UseLogonCredential". Specifically this key has been updated to allow logon credentials to be stored in clear text in LSA memory. Once enabled an attacker can dump clear text passwords from LSA memory with credential harvesting tools such as Mimikatz. | - | Medium |
+| **Detected enabling of the WDigest UseLogonCredential registry key** | Analysis of host data detected a change in the registry key HKLM\SYSTEM\ CurrentControlSet\Control\SecurityProviders\WDigest\ "UseLogonCredential". Specifically this key has been updated to allow logon credentials to be stored in clear text in LSA memory. Once enabled, an attacker can dump clear text passwords from LSA memory with credential harvesting tools such as Mimikatz. | - | Medium |
| **Detected encoded executable in command line data** | Analysis of host data on %{Compromised Host} detected a base-64 encoded executable. This has previously been associated with attackers attempting to construct executables on-the-fly through a sequence of commands, and attempting to evade intrusion detection systems by ensuring that no individual command would trigger an alert. This could be legitimate activity, or an indication of a compromised host. | - | High |
| **Detected obfuscated command line** | Attackers use increasingly complex obfuscation techniques to evade detections that run against the underlying data. Analysis of host data on %{Compromised Host} detected suspicious indicators of obfuscation on the commandline. | - | Informational |
| **Detected Petya ransomware indicators** | Analysis of host data on %{Compromised Host} detected indicators associated with Petya ransomware. See https://aka.ms/petya-blog for more information. Review the command line associated in this alert and escalate this alert to your security team. | - | High |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Detected suspicious execution of VBScript.Encode command** | Analysis of host data on %{Compromised Host} detected the execution of VBScript.Encode command. This encodes the scripts into unreadable text, making it more difficult for users to examine the code. Microsoft threat research shows that attackers often use encoded VBscript files as part of their attack to evade detection systems. This could be legitimate activity, or an indication of a compromised host. | - | Medium |
| **Detected suspicious execution via rundll32.exe** | Analysis of host data on %{Compromised Host} detected rundll32.exe being used to execute a process with an uncommon name, consistent with the process naming scheme previously seen used by activity group GOLD when installing their first stage implant on a compromised host. | - | High |
| **Detected suspicious file cleanup commands** | Analysis of host data on %{Compromised Host} detected a combination of systeminfo commands that has previously been associated with one of activity group GOLD's methods of performing post-compromise self-cleanup activity. While 'systeminfo.exe' is a legitimate Windows tool, executing it twice in succession, followed by a delete command in the way that has occurred here is rare. | - | High |
-| **Detected suspicious file creation** | Analysis of host data on %{Compromised Host} detected creation or execution of a process which has previously indicated post-compromise action taken on a victim host by activity group BARIUM. This activity group has been known to use this technique to download additional malware to a compromised host after an attachment in a phishing doc has been opened. | - | High |
+| **Detected suspicious file creation** | Analysis of host data on %{Compromised Host} detected creation or execution of a process that has previously indicated post-compromise action taken on a victim host by activity group BARIUM. This activity group has been known to use this technique to download more malware to a compromised host after an attachment in a phishing doc has been opened. | - | High |
| **Detected suspicious named pipe communications** | Analysis of host data on %{Compromised Host} detected data being written to a local named pipe from a Windows console command. Named pipes are known to be a channel used by attackers to task and communicate with a malicious implant. This could be legitimate activity, or an indication of a compromised host. | - | High |
| **Detected suspicious network activity** | Analysis of network traffic from %{Compromised Host} detected suspicious network activity. Such traffic, while possibly benign, is typically used by an attacker to communicate with malicious servers for downloading of tools, command-and-control and exfiltration of data. Typical related attacker activity includes copying remote administration tools to a compromised host and exfiltrating user data from it. | - | Low |
| **Detected suspicious new firewall rule** | Analysis of host data detected a new firewall rule has been added via netsh.exe to allow traffic from an executable in a suspicious location. | - | Medium |
| **Detected suspicious use of Cacls to lower the security state of the system** | Attackers use myriad ways like brute force, spear phishing etc. to achieve initial compromise and get a foothold on the network. Once initial compromise is achieved they often take steps to lower the security settings of a system. Cacls, short for change access control list, is a Microsoft Windows native command-line utility often used for modifying the security permission on folders and files. The binary is often used by attackers to lower the security settings of a system, by giving Everyone full access to some of the system binaries like ftp.exe, net.exe, wscript.exe etc. Analysis of host data on %{Compromised Host} detected suspicious use of Cacls to lower the security of a system. | - | Medium |
-| **Detected suspicious use of FTP -s Switch** | Analysis of process creation data from the %{Compromised Host} detected the use of the FTP "-s:filename" switch. This switch is used to specify an FTP script file for the client to run. Malware or malicious processes are known to use this FTP switch (-s:filename) to point to a script file which is configured to connect to a remote FTP server and download additional malicious binaries. | - | Medium |
+| **Detected suspicious use of FTP -s Switch** | Analysis of process creation data from the %{Compromised Host} detected the use of the FTP "-s:filename" switch. This switch is used to specify an FTP script file for the client to run. Malware or malicious processes are known to use this FTP switch (-s:filename) to point to a script file which is configured to connect to a remote FTP server and download more malicious binaries. | - | Medium |
| **Detected suspicious use of Pcalua.exe to launch executable code** | Analysis of host data on %{Compromised Host} detected the use of pcalua.exe to launch executable code. Pcalua.exe is a component of the Microsoft Windows "Program Compatibility Assistant", which detects compatibility issues during the installation or execution of a program. Attackers are known to abuse functionality of legitimate Windows system tools to perform malicious actions, for example using pcalua.exe with the -a switch to launch malicious executables either locally or from remote shares. | - | Medium |
| **Detected the disabling of critical services** | The analysis of host data on %{Compromised Host} detected execution of the "net.exe stop" command being used to stop critical services like SharedAccess or the Windows Security app. The stopping of either of these services can be an indication of malicious behavior. | - | Medium |
| **Digital currency mining related behavior detected** | Analysis of host data on %{Compromised Host} detected the execution of a process or command normally associated with digital currency mining. | - | High |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Rare SVCHOST service group executed**<br>(VM_SvcHostRunInRareServiceGroup) | The system process SVCHOST was observed running a rare service group. Malware often uses SVCHOST to masquerade its malicious activity. | Defense Evasion, Execution | Informational |
| **Sticky keys attack detected** | Analysis of host data indicates that an attacker may be subverting an accessibility binary (for example sticky keys, onscreen keyboard, narrator) in order to provide backdoor access to the host %{Compromised Host}. | - | Medium |
| **Successful brute force attack**<br>(VM_LoginBruteForceSuccess) | Several sign in attempts were detected from the same source. Some successfully authenticated to the host.<br>This resembles a burst attack, in which an attacker performs numerous authentication attempts to find valid account credentials. | Exploitation | Medium/High |
-| **Suspect integrity level indicative of RDP hijacking** | Analysis of host data has detected the tscon.exe running with SYSTEM privileges - this can be indicative of an attacker abusing this binary in order to switch context to any other logged on user on this host; it is a known attacker technique to compromise additional user accounts and move laterally across a network. | - | Medium |
-| **Suspect service installation** | Analysis of host data has detected the installation of tscon.exe as a service: this binary being started as a service potentially allows an attacker to trivially switch to any other logged on user on this host by hijacking RDP connections; it is a known attacker technique to compromise additional user accounts and move laterally across a network. | - | Medium |
+| **Suspect integrity level indicative of RDP hijacking** | Analysis of host data has detected the tscon.exe running with SYSTEM privileges - this can be indicative of an attacker abusing this binary in order to switch context to any other logged on user on this host; it is a known attacker technique to compromise more user accounts and move laterally across a network. | - | Medium |
+| **Suspect service installation** | Analysis of host data has detected the installation of tscon.exe as a service: this binary being started as a service potentially allows an attacker to trivially switch to any other logged on user on this host by hijacking RDP connections; it's a known attacker technique to compromise more user accounts and move laterally across a network. | - | Medium |
| **Suspected Kerberos Golden Ticket attack parameters observed** | Analysis of host data detected commandline parameters consistent with a Kerberos Golden Ticket attack. | - | Medium |
| **Suspicious Account Creation Detected** | Analysis of host data on %{Compromised Host} detected creation or use of a local account %{Suspicious account name} : this account name closely resembles a standard Windows account or group name '%{Similar To Account Name}'. This is potentially a rogue account created by an attacker, so named in order to avoid being noticed by a human administrator. | - | Medium |
| **Suspicious Activity Detected**<br>(VM_SuspiciousActivity) | Analysis of host data has detected a sequence of one or more processes running on %{machine name} that have historically been associated with malicious activity. While individual commands may appear benign, the alert is scored based on an aggregation of these commands. This could either be legitimate activity, or an indication of a compromised host. | Execution | Medium |
| **Suspicious authentication activity**<br>(VM_LoginBruteForceValidUserFailed) | Although none of them succeeded, some of the accounts used were recognized by the host. This resembles a dictionary attack, in which an attacker performs numerous authentication attempts using a dictionary of predefined account names and passwords in order to find valid credentials to access the host. This indicates that some of your host account names might exist in a well-known account name dictionary. | Probing | Medium |
-| **Suspicious code segment detected** | Indicates that a code segment has been allocated by using non-standard methods, such as reflective injection and process hollowing. The alert provides additional characteristics of the code segment that have been processed to provide context for the capabilities and behaviors of the reported code segment. | - | Medium |
+| **Suspicious code segment detected** | Indicates that a code segment has been allocated by using non-standard methods, such as reflective injection and process hollowing. The alert provides more characteristics of the code segment that have been processed to provide context for the capabilities and behaviors of the reported code segment. | - | Medium |
| **Suspicious command execution**<br>(VM_SuspiciousCommandLineExecution) | Machine logs indicate a suspicious command-line execution by user %{user name}. | Execution | High |
| **Suspicious double extension file executed** | Analysis of host data indicates an execution of a process with a suspicious double extension. This extension may trick users into thinking files are safe to be opened and might indicate the presence of malware on the system. | - | High |
| **Suspicious download using Certutil detected [seen multiple times]** | Analysis of host data on %{Compromised Host} detected the use of certutil.exe, a built-in administrator utility, for the download of a binary instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using certutil.exe to download and decode a malicious executable that will then be subsequently executed. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
|Alert (alert type)|Description|MITRE tactics<br>([Learn more](#intentions))|Severity|
|-|-|:-:|--|
|**a history file has been cleared**|Analysis of host data indicates that the command history log file has been cleared. Attackers may do this to cover their traces. The operation was performed by user: '%{user name}'.|-|Medium|
-|**Access of htaccess file detected**<br>(VM_SuspectHtaccessFileAccess)|Analysis of host data on %{Compromised Host} detected possible manipulation of a htaccess file. Htaccess is a powerful configuration file that allows you to make multiple changes to a web server running the Apache Web software including basic redirect functionality, or for more advanced functions such as basic password protection. Attackers will often modify htaccess files on machines they have compromised to gain persistence.|Persistence, Defense Evasion, Execution|Medium|
+|**Access of htaccess file detected**<br>(VM_SuspectHtaccessFileAccess)|Analysis of host data on %{Compromised Host} detected possible manipulation of a htaccess file. Htaccess is a powerful configuration file that allows you to make multiple changes to a web server running the Apache Web software including basic redirect functionality, or for more advanced functions such as basic password protection. Attackers will often modify htaccess files on machines they've compromised to gain persistence.|Persistence, Defense Evasion, Execution|Medium|
|**Antimalware broad files exclusion in your virtual machine**<br>(VM_AmBroadFilesExclusion) | A files exclusion with a broad exclusion rule was detected in the antimalware extension on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Such an exclusion practically disables the antimalware protection.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | - | Medium |
|**Antimalware disabled and code execution in your virtual machine**<br>(VM_AmDisablementAndCodeExecution) | Antimalware was disabled at the same time as code execution on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers disable antimalware scanners to prevent detection while running unauthorized tools or infecting the machine with malware. | - | High |
|**Antimalware disabled in your virtual machine**<br>(VM_AmDisablement) | Antimalware was disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | Defense Evasion | Medium |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
|**Attempt to stop apt-daily-upgrade.timer service detected**<br>(VM_TimerServiceDisabled)|Analysis of host data on %{Compromised Host} detected an attempt to stop the apt-daily-upgrade.timer service. In some recent attacks, attackers have been observed stopping this service to download malicious files and grant execution privileges for their attack.|Defense Evasion|Low|
|**Behavior similar to common Linux bots detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the execution of a process normally associated with common Linux botnets. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
|**Behavior similar to common Linux bots detected**<br>(VM_CommonBot)|Analysis of host data on %{Compromised Host} detected the execution of a process normally associated with common Linux botnets.|Execution, Collection, Command and Control|Medium|
-|**Behavior similar to Fairware ransomware detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Behavior similar to Fairware ransomware detected**<br>(VM_FairwareMalware)|Analysis of host data on %{Compromised Host} detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder.|Execution|Medium|
+|**Behavior similar to Fairware ransomware detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it's normally used on discrete folders. In this case, it's being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
+|**Behavior similar to Fairware ransomware detected**<br>(VM_FairwareMalware)|Analysis of host data on %{Compromised Host} detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it's normally used on discrete folders. In this case, it's being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder.|Execution|Medium|
|**Behavior similar to ransomware detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the execution of files that resemble known ransomware that can prevent users from accessing their system or personal files, and demands ransom payment in order to regain access. This behavior was seen [x] times today on the following machines: [Machine names]|-|High|
|**Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with a suspicious domain was detected by analyzing DNS transactions from your resource and comparing against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access, Persistence, Execution, Command And Control, Exploitation | Medium |
|**Container with a miner image detected**<br>(VM_MinerInContainerImage) | Machine logs indicate execution of a Docker container that runs an image associated with digital currency mining. | Execution | High |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
|**Detected anomalous mix of upper and lower case characters in command line**|Analysis of host data on %{Compromised Host} detected a command line with an anomalous mix of upper and lower case characters. This kind of pattern, while possibly benign, is also typical of attackers trying to hide from case-sensitive or hash-based rule matching when performing administrative tasks on a compromised host.|-|Medium|
|**Detected file download from a known malicious source [seen multiple times]**<br>(VM_SuspectDownload)|Analysis of host data has detected the download of a file from a known malware source on %{Compromised Host}. This behavior was seen [x] times today on the following machines: [Machine names]|Privilege Escalation, Execution, Exfiltration, Command and Control|Medium|
|**Detected file download from a known malicious source**|Analysis of host data has detected the download of a file from a known malware source on %{Compromised Host}.|-|Medium|
-|**Detected persistence attempt [seen multiple times]**|Analysis of host data on %{Compromised Host} has detected installation of a startup script for single-user mode. It is extremely rare that any legitimate process needs to execute in that mode, so this may indicate that an attacker has added a malicious process to every run-level to guarantee persistence. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
+|**Detected persistence attempt [seen multiple times]**|Analysis of host data on %{Compromised Host} has detected installation of a startup script for single-user mode. It's extremely rare that any legitimate process needs to execute in that mode, so this may indicate that an attacker has added a malicious process to every run-level to guarantee persistence. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
|**Detected persistence attempt**<br>(VM_NewSingleUserModeStartupScript)|Host data analysis has detected that a startup script for single-user mode has been installed.<br>Because it's rare that any legitimate process would be required to run in that mode, this might indicate that an attacker has added a malicious process to every run-level to guarantee persistence.|Persistence|Medium|
|**Detected suspicious file download [seen multiple times]**|Analysis of host data has detected suspicious download of a remote file on %{Compromised Host}. This behavior was seen 10 times today on the following machines: [Machine name]|-|Low|
|**Detected suspicious file download**<br>(VM_SuspectDownloadArtifacts)|Analysis of host data has detected suspicious download of a remote file on %{Compromised Host}.|Persistence|Low|
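The anomalous-case alert above rests on a simple observable: legitimate command lines are mostly lower-case, so heavy case alternation is unusual. A minimal scoring sketch (an illustration, not the product's actual rule) could measure how often adjacent letters switch case:

```python
def case_alternation_ratio(command_line: str) -> float:
    """Fraction of adjacent letter pairs that switch between upper and lower case."""
    letters = [c for c in command_line if c.isalpha()]
    if len(letters) < 2:
        return 0.0
    switches = sum(1 for a, b in zip(letters, letters[1:])
                   if a.islower() != b.islower())
    return switches / (len(letters) - 1)
```

A normal command such as `net user administrator` scores 0, while obfuscated variants like `NeT UsEr` score close to 1; a detection would compare the score against a tuned threshold.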
|**Disabling of auditd logging [seen multiple times]**|The Linux Audit system provides a way to track security-relevant information on the system. It records as much information as possible about the events happening on your system. Disabling auditd logging could hamper discovering violations of security policies used on the system. This behavior was seen [x] times today on the following machines: [Machine names]|-|Low|
|**Executable found running from a suspicious location**<br>(VM_SuspectExecutablePath)|Analysis of host data detected an executable file on %{Compromised Host} that is running from a location in common with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host.| Execution |High|
|**Exploitation of Xorg vulnerability [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the use of Xorg with suspicious arguments. Attackers may use this technique in privilege escalation attempts. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Exposed Docker daemon on TCP socket**<br>(VM_ExposedDocker)|Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon, by anyone with access to the relevant port.|Execution, Exploitation|Medium|
+|**Exposed Docker daemon on TCP socket**<br>(VM_ExposedDocker)|Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, the Docker configuration doesn't use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon by anyone with access to the relevant port.|Execution, Exploitation|Medium|
|**Failed SSH brute force attack**<br>(VM_SshBruteForceFailed)|Failed brute force attacks were detected from the following attackers: %{Attackers}. Attackers were trying to access the host with the following user names: %{Accounts used on failed sign in to host attempts}.|Probing|Medium|
|**Fileless Attack Behavior Detected**<br>(VM_FilelessAttackBehavior.Linux)| The memory of the process specified below contains behaviors commonly used by fileless attacks.<br>Specific behaviors include: {list of observed behaviors} | Execution | Low |
|**Fileless Attack Technique Detected**<br>(VM_FilelessAttackTechnique.Linux)| The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software.<br>Specific behaviors include: {list of observed behaviors} | Execution | High |
-|**Fileless Attack Toolkit Detected**<br>(VM_FilelessAttackToolkit.Linux)| The memory of the process specified below contains a fileless attack toolkit: {ToolKitName}. Fileless attack toolkits typically do not have a presence on the filesystem, making detection by traditional anti-virus software difficult.<br>Specific behaviors include: {list of observed behaviors} | Defense Evasion, Execution | High |
+|**Fileless Attack Toolkit Detected**<br>(VM_FilelessAttackToolkit.Linux)| The memory of the process specified below contains a fileless attack toolkit: {ToolKitName}. Fileless attack toolkits typically don't have a presence on the filesystem, making detection by traditional anti-virus software difficult.<br>Specific behaviors include: {list of observed behaviors} | Defense Evasion, Execution | High |
|**Hidden file execution detected**|Analysis of host data indicates that a hidden file was executed by %{user name}. This activity could either be legitimate activity, or an indication of a compromised host.|-|Informational|
|**Indicators associated with DDOS toolkit detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
|**Indicators associated with DDOS toolkit detected**<br>(VM_KnownLinuxDDoSToolkit)|Analysis of host data on %{Compromised Host} detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity.|Persistence, Lateral Movement, Execution, Exploitation|Medium|
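The exposed-dockerd alert above comes down to one misconfiguration: a `tcp://` listener without TLS client verification. A hedged command-line check (illustrative only; real hardening should follow Docker's own guidance on protecting the daemon socket) might look like:

```python
import shlex

def dockerd_tcp_exposed(command_line: str) -> bool:
    """Flag a dockerd invocation that binds a TCP socket without --tlsverify."""
    tokens = shlex.split(command_line)
    binds_tcp = any("tcp://" in t for t in tokens)  # e.g. -H tcp://0.0.0.0:2375
    return binds_tcp and "--tlsverify" not in tokens
```

This only inspects the command line; it deliberately treats a Unix-socket-only daemon (`-H unix:///var/run/docker.sock`) as safe and a TLS-verified TCP listener as acceptable.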
|**Local host reconnaissance detected**<br>(VM_LinuxReconnaissance)|Analysis of host data on %{Compromised Host} detected the execution of a command normally associated with common Linux bot reconnaissance.|Discovery|Medium|
|**Manipulation of host firewall detected [seen multiple times]**<br>(VM_FirewallDisabled)|Analysis of host data on %{Compromised Host} detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. This behavior was seen [x] times today on the following machines: [Machine names]|Defense Evasion, Exfiltration|Medium|
|**Manipulation of host firewall detected**|Analysis of host data on %{Compromised Host} detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data.|-|Medium|
-|**MITRE Caldera agent detected**<br>(VM_MitreCalderaTools)|Machine logs indicate that the suspicious process: '%{Suspicious Process}' was running on %{Compromised Host}. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines in some way.|All |Medium|
+|**MITRE Caldera agent detected**<br>(VM_MitreCalderaTools)|Machine logs indicate that the suspicious process: '%{Suspicious Process}' was running on %{Compromised Host}. This is often associated with the MITRE 54ndc47 agent, which could be used maliciously to attack other machines in some way.|All |Medium|
|**New SSH key added [seen multiple times]**<br>(VM_SshKeyAddition)|A new SSH key was added to the authorized keys file. This behavior was seen [x] times today on the following machines: [Machine names]|Persistence|Low|
|**New SSH key added**|A new SSH key was added to the authorized keys file.|-|Low|
|**Possible attack tool detected [seen multiple times]**|Machine logs indicate that the suspicious process: '%{Suspicious Process}' was running on %{Compromised Host}. This tool is often associated with malicious users attacking other machines in some way. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
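The SSH-key alert above is, at its core, a diff of the `authorized_keys` file between two points in time. A minimal monitoring sketch under that assumption (the function name is hypothetical, and a real monitor would read the file from disk and track timestamps):

```python
def added_ssh_keys(before: str, after: str) -> list[str]:
    """Return authorized_keys entries present in `after` but not in `before`."""
    old = {line.strip() for line in before.splitlines() if line.strip()}
    return [line.strip() for line in after.splitlines()
            if line.strip() and line.strip() not in old]
```

Any non-empty entry that appears only in the newer snapshot is reported as a newly added key, which is exactly the condition the alert describes.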
|**Possible backdoor detected [seen multiple times]**|Analysis of host data has detected a suspicious file being downloaded and then run on %{Compromised Host} in your subscription. This activity has previously been associated with installation of a backdoor. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
|**Possible credential access tool detected [seen multiple times]**|Machine logs indicate a possible known credential access tool was running on %{Compromised Host}, launched by process: '%{Suspicious Process}'. This tool is often associated with attacker attempts to access credentials. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
|**Possible credential access tool detected**<br>(VM_KnownLinuxCredentialAccessTool)|Machine logs indicate a possible known credential access tool was running on %{Compromised Host}, launched by process: '%{Suspicious Process}'. This tool is often associated with attacker attempts to access credentials.|Credential Access|Medium|
-|**Possible data exfiltration [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a possible data egress condition. Attackers will often egress data from machines they have compromised. This behavior was seen [x]] times today on the following machines: [Machine names]|-|Medium|
-|**Possible data exfiltration**<br>(VM_DataEgressArtifacts)|Analysis of host data on %{Compromised Host} detected a possible data egress condition. Attackers will often egress data from machines they have compromised.|Collection, Exfiltration|Medium|
+|**Possible data exfiltration [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a possible data egress condition. Attackers will often egress data from machines they've compromised. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
+|**Possible data exfiltration**<br>(VM_DataEgressArtifacts)|Analysis of host data on %{Compromised Host} detected a possible data egress condition. Attackers will often egress data from machines they've compromised.|Collection, Exfiltration|Medium|
|**Possible exploitation of Hadoop Yarn**<br>(VM_HadoopYarnExploit)|Analysis of host data on %{Compromised Host} detected the possible exploitation of the Hadoop Yarn service.|Exploitation|Medium|
|**Possible exploitation of the mailserver detected**<br>(VM_MailserverExploitation)|Analysis of host data on %{Compromised Host} detected an unusual execution under the mail server account.|Exploitation|Medium|
|**Possible Log Tampering Activity Detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected possible removal of files that track the user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
|**Possible Log Tampering Activity Detected**<br>(VM_SystemLogRemoval)|Analysis of host data on %{Compromised Host} detected possible removal of files that track the user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files.|Defense Evasion|Medium|
-|**Possible malicious web shell detected [seen multiple times]**<br>(VM_Webshell)|Analysis of host data on %{Compromised Host} detected a possible web shell. Attackers will often upload a web shell to a machine they have compromised to gain persistence or for further exploitation. This behavior was seen [x] times today on the following machines: [Machine names]|Persistence, Exploitation|Medium|
-|**Possible malicious web shell detected**|Analysis of host data on %{Compromised Host} detected a possible web shell. Attackers will often upload a web shell to a machine they have compromised to gain persistence or for further exploitation.|-|Medium|
+|**Possible malicious web shell detected [seen multiple times]**<br>(VM_Webshell)|Analysis of host data on %{Compromised Host} detected a possible web shell. Attackers will often upload a web shell to a machine they've compromised to gain persistence or for further exploitation. This behavior was seen [x] times today on the following machines: [Machine names]|Persistence, Exploitation|Medium|
+|**Possible malicious web shell detected**|Analysis of host data on %{Compromised Host} detected a possible web shell. Attackers will often upload a web shell to a machine they've compromised to gain persistence or for further exploitation.|-|Medium|
|**Possible password change using crypt-method detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
|**Potential overriding of common files [seen multiple times]**|Analysis of host data has detected common executables being overwritten on %{Compromised Host}. Attackers will overwrite common files as a way to obfuscate their actions or for persistence. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
|**Potential overriding of common files**<br>(VM_OverridingCommonFiles)|Analysis of host data has detected common executables being overwritten on %{Compromised Host}. Attackers will overwrite common files as a way to obfuscate their actions or for persistence.|Persistence|Medium|
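Detecting overwritten common executables, as the rows above describe, is classic file-integrity monitoring: compare current content digests against a trusted baseline. A hedged sketch (the dictionaries stand in for a real baseline store and filesystem scan):

```python
import hashlib

def overwritten_files(baseline: dict[str, str],
                      current: dict[str, bytes]) -> list[str]:
    """Paths whose current content no longer matches the baseline SHA-256.

    baseline maps path -> expected sha256 hex digest;
    current maps path -> the file's current bytes.
    """
    return sorted(
        path for path, content in current.items()
        if path in baseline
        and hashlib.sha256(content).hexdigest() != baseline[path]
    )
```

Paths absent from the baseline are ignored here; a production monitor would also report new and deleted files.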
|**Successful SSH brute force attack**<br>(VM_SshBruteForceSuccess)|Analysis of host data has detected a successful brute force attack. The IP %{Attacker source IP} was seen making multiple login attempts. Successful logins were made from that IP with the following user(s): %{Accounts used to successfully sign in to host}. This means that the host may be compromised and controlled by a malicious actor.|Exploitation|High|
|**Suspect Password File Access**<br>(VM_SuspectPasswordFileAccess) | Analysis of host data has detected suspicious access to encrypted user passwords. | Persistence | Informational |
|**Suspicious Account Creation Detected**|Analysis of host data on %{Compromised Host} detected creation or use of a local account %{Suspicious account name}: this account name closely resembles a standard Windows account or group name '%{Similar To Account Name}'. This is potentially a rogue account created by an attacker, so named in order to avoid being noticed by a human administrator.|-|Medium|
-|**Suspicious compilation detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected suspicious compilation. Attackers will often compile exploits on a machine they have compromised to escalate privileges. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Suspicious compilation detected**<br>(VM_SuspectCompilation)|Analysis of host data on %{Compromised Host} detected suspicious compilation. Attackers will often compile exploits on a machine they have compromised to escalate privileges.|Privilege Escalation, Exploitation|Medium|
+|**Suspicious compilation detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected suspicious compilation. Attackers will often compile exploits on a machine they've compromised to escalate privileges. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
+|**Suspicious compilation detected**<br>(VM_SuspectCompilation)|Analysis of host data on %{Compromised Host} detected suspicious compilation. Attackers will often compile exploits on a machine they've compromised to escalate privileges.|Privilege Escalation, Exploitation|Medium|
|**Suspicious DNS Over Https**<br>(VM_SuspiciousDNSOverHttps) | Analysis of host data indicates the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | Defense Evasion, Exfiltration | Medium |
|**Suspicious failed execution of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousFailure) | Suspicious failure of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Such failures may be associated with malicious scripts run by this extension. | Execution | Medium |
|**Suspicious kernel module detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a shared object file being loaded as a kernel module. This could be legitimate activity, or an indication that one of your machines has been compromised. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
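The brute-force alerts earlier in this table describe the same underlying signal: many failed sign-in attempts from a single source IP. A minimal thresholding sketch under that assumption (the event shape and `source_ip` field name are made up for illustration):

```python
from collections import Counter

def brute_force_sources(failed_logins: list[dict],
                        threshold: int = 10) -> list[str]:
    """Source IPs with at least `threshold` failed sign-in attempts."""
    counts = Counter(event["source_ip"] for event in failed_logins)
    return sorted(ip for ip, n in counts.items() if n >= threshold)
```

A real detection would also window the events in time and correlate with any subsequent successful sign-in from the same source, which is what distinguishes the "failed" from the "successful" brute-force alerts above.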
Microsoft Defender for Containers provides security alerts on the cluster level
| **PREVIEW - Activity from infrequent country**<br>(ARM.MCAS_ActivityFromInfrequentCountry) | Activity from a location that wasn't recently or ever visited by any user in the organization has occurred.<br>This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
| **PREVIEW - Azurite toolkit run detected**<br>(ARM_Azurite) | A known cloud-environment reconnaissance toolkit run has been detected in your environment. The tool [Azurite](https://github.com/mwrlabs/Azurite) can be used by an attacker (or penetration tester) to map your subscriptions' resources and identify insecure configurations. | Collection | High |
| **PREVIEW - Impossible travel activity**<br>(ARM.MCAS_ImpossibleTravelActivity) | Two user activities (in a single or multiple sessions) have occurred, originating from geographically distant locations, within a time period shorter than the time it would have taken the user to travel from the first location to the second. This indicates that a different user is using the same credentials.<br>This detection uses a machine learning algorithm that ignores obvious false positives contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The detection has an initial learning period of seven days, during which it learns a new user's activity pattern.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
-| **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Credential Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Credential access | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Data Collection' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.Collection) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to collect data. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to collect sensitive data on resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Collection | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Defense Evasion' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.DefenseEvasion) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to evade defenses. The identified operations are designed to allow administrators to efficiently manage the security posture of their environments. While this activity may be legitimate, a threat actor might utilize such operations to avoid being detected while compromising resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Defense Evasion | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Execution' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.Execution) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation on a machine in your subscription which might indicate an attempt to execute code. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Execution | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Impact' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.Impact) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempted configuration change. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Impact | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Initial Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.InitialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access restricted resources. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to gain initial access to restricted resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Initial access | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Lateral Movement' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.LateralMovement) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to perform lateral movement. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to compromise more resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Lateral movement | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Persistence' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.Persistence) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to establish persistence. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to establish persistence in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Persistence | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Privilege Escalation' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.PrivilegeEscalation) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to escalate privileges. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to escalate privileges while compromising resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Privilege escalation | Medium |
+| **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium |
| **PREVIEW - Suspicious management session using PowerShell detected**<br>(ARM_UnusedAppPowershellPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal that doesn't regularly use PowerShell to manage the subscription environment is now using PowerShell, and performing actions that can secure persistence for an attacker. | Persistence | Medium |
| **PREVIEW - Suspicious management session using Azure portal detected**<br>(ARM_UnusedAppIbizaPersistence) | Analysis of your subscription activity logs has detected a suspicious behavior. A principal that doesn't regularly use the Azure portal (Ibiza) to manage the subscription environment (hasn't used the Azure portal to manage for the last 45 days, or a subscription that it is actively managing) is now using the Azure portal and performing actions that can secure persistence for an attacker. | Persistence | Medium |
| **Privileged custom role created for your subscription in a suspicious way (Preview)**<br>(ARM_PrivilegedRoleDefinitionCreation) | Microsoft Defender for Resource Manager detected a suspicious creation of privileged custom role definition in your subscription. This operation might have been performed by a legitimate user in your organization. Alternatively, it might indicate that an account in your organization was breached, and that the threat actor is trying to create a privileged role to use in the future to evade detection. | Privilege Escalation, Defense Evasion | Low |
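The "Impossible travel activity" row above describes a concrete geometric test: two sign-ins whose implied travel speed exceeds what any traveler could achieve. The actual detection is ML-based, but the core idea can be sketched with a great-circle distance and an assumed speed cap (both the threshold and function names here are illustrative):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
MAX_PLAUSIBLE_KMH = 1000.0  # rough airliner speed; an assumed threshold

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def impossible_travel(lat1, lon1, t1_hours, lat2, lon2, t2_hours) -> bool:
    """True if the implied speed between two sign-ins is implausibly high."""
    distance = haversine_km(lat1, lon1, lat2, lon2)
    elapsed = abs(t2_hours - t1_hours)
    if elapsed == 0:
        return distance > 1.0  # distinct locations at the same instant
    return distance / elapsed > MAX_PLAUSIBLE_KMH
```

The real detection also suppresses VPN egress points and locations other users in the organization regularly use, which this sketch does not attempt.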
| **Suspicious invocation of a high-risk 'Execution' operation detected (Preview)**<br>(ARM_AnomalousOperation.Execution) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation on a machine in your subscription which might indicate an attempt to execute code. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Execution | Medium |
| **Suspicious invocation of a high-risk 'Impact' operation detected (Preview)**<br>(ARM_AnomalousOperation.Impact) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempted configuration change. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Impact | Medium |
| **Suspicious invocation of a high-risk 'Initial Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.InitialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access restricted resources. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to gain initial access to restricted resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Initial Access | Medium |
-| **Suspicious invocation of a high-risk 'Lateral Movement' operation detected (Preview)**<br>(ARM_AnomalousOperation.LateralMovement) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to perform lateral movement. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to compromise additional resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Lateral Movement | Medium |
+| **Suspicious invocation of a high-risk 'Lateral Movement' operation detected (Preview)**<br>(ARM_AnomalousOperation.LateralMovement) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to perform lateral movement. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to compromise more resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Lateral Movement | Medium |
| **Suspicious invocation of a high-risk 'Persistence' operation detected (Preview)**<br>(ARM_AnomalousOperation.Persistence) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to establish persistence. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to establish persistence in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Persistence | Medium | | **Suspicious invocation of a high-risk 'Privilege Escalation' operation detected (Preview)**<br>(ARM_AnomalousOperation.PrivilegeEscalation) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to escalate privileges. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to escalate privileges while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Privilege Escalation | Medium | | **Usage of MicroBurst exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_MicroBurst.RunCodeOnBehalf) | Usage of MicroBurst exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. 
| Persistence, Credential Access | High | | **Usage of NetSPI techniques to maintain persistence in your Azure environment**<br>(ARM_NetSPI.MaintainPersistence) | Usage of NetSPI persistence technique to create a webhook backdoor and maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High | | **Usage of PowerZure exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_PowerZure.RunCodeOnBehalf) | PowerZure exploitation toolkit detected attempting to run code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High | | **Usage of PowerZure function to maintain persistence in your Azure environment**<br>(ARM_PowerZure.MaintainPersistence) | PowerZure exploitation toolkit detected creating a webhook backdoor to maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| **Suspicious classic role assignment detected (Preview)**<br>(ARM_AnomalousClassicRoleAssignment) | Microsoft Defender for Resource Manager identified a suspicious classic role assignment in your tenant which might indicate that an account in your organization was compromised. The identified operations are designed to provide backward compatibility with classic roles that are no longer commonly used. While this activity may be legitimate, a threat actor might utilize such assignment to grant permissions to an additional user account under their control. |  Lateral Movement, Defense Evasion | High |
+| **Suspicious classic role assignment detected (Preview)**<br>(ARM_AnomalousClassicRoleAssignment) | Microsoft Defender for Resource Manager identified a suspicious classic role assignment in your tenant which might indicate that an account in your organization was compromised. The identified operations are designed to provide backward compatibility with classic roles that are no longer commonly used. While this activity may be legitimate, a threat actor might utilize such assignment to grant permissions to another user account under their control. |  Lateral Movement, Defense Evasion | High |
## <a name="alerts-dns"></a>Alerts for DNS
Defender for Cloud's supported kill chain intents are based on [version 9 of the
| **Defense Evasion** | V7, V9 | Defense evasion consists of techniques an adversary may use to evade detection or avoid other defenses. Sometimes these actions are the same as (or variations of) techniques in other categories that have the added benefit of subverting a particular defense or mitigation. | | **Credential Access** | V7, V9 | Credential access represents techniques resulting in access to or control over system, domain, or service credentials that are used within an enterprise environment. Adversaries will likely attempt to obtain legitimate credentials from users or administrator accounts (local system administrator or domain users with administrator access) to use within the network. With sufficient access within a network, an adversary can create accounts for later use within the environment. | | **Discovery** | V7, V9 | Discovery consists of techniques that allow the adversary to gain knowledge about the system and internal network. When adversaries gain access to a new system, they must orient themselves to what they now have control of and what benefits operating from that system give to their current objective or overall goals during the intrusion. The operating system provides many native tools that aid in this post-compromise information-gathering phase. |
-| **LateralMovement** | V7, V9 | Lateral movement consists of techniques that enable an adversary to access and control remote systems on a network and could, but does not necessarily, include execution of tools on remote systems. The lateral movement techniques could allow an adversary to gather information from a system without needing additional tools, such as a remote access tool. An adversary can use lateral movement for many purposes, including remote Execution of tools, pivoting to additional systems, access to specific information or files, access to additional credentials, or to cause an effect. |
+| **LateralMovement** | V7, V9 | Lateral movement consists of techniques that enable an adversary to access and control remote systems on a network and could, but does not necessarily, include execution of tools on remote systems. The lateral movement techniques could allow an adversary to gather information from a system without needing more tools, such as a remote access tool. An adversary can use lateral movement for many purposes, including remote execution of tools, pivoting to more systems, access to specific information or files, access to more credentials, or to cause an effect. |
| **Execution** | V7, V9 | The execution tactic represents techniques that result in execution of adversary-controlled code on a local or remote system. This tactic is often used in conjunction with lateral movement to expand access to remote systems on a network. | | **Collection** | V7, V9 | Collection consists of techniques used to identify and gather information, such as sensitive files, from a target network prior to exfiltration. This category also covers locations on a system or network where the adversary may look for information to exfiltrate. | | **Command and Control** | V7, V9 | The command and control tactic represents how adversaries communicate with systems under their control within a target network. |
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
description: Learn how to configure continuous export of security alerts and rec
Previously updated : 07/31/2022 Last updated : 11/06/2022 # Continuously export Microsoft Defender for Cloud data
Continuous export can be configured and managed via the Microsoft Defender for C
You can also send the data to an [Event hub or Log Analytics workspace in a different tenant](#export-data-to-an-azure-event-hub-or-log-analytics-workspace-in-another-tenant).
-Here are some examples of options that you can only use in the the API:
+Here are some examples of options that you can only use in the API:
* **Greater volume** - You can create multiple export configurations on a single subscription with the API. The **Continuous Export** page in the Azure portal supports only one export configuration per subscription.
Here are some examples of options that you can only use in the API:
> [!TIP] > These API-only options are not shown in the Azure portal. If you use them, there'll be a banner informing you that other configurations exist.
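The automation resources that these API-only options create aren't shown in this digest. As an illustration only, a continuous export automation body might look roughly like the following sketch of the `Microsoft.Security/automations` resource type; the field names follow the published schema, but the scope, filter, and workspace values are placeholders, so check the automations REST reference for the exact shape before using it:

```json
{
  "properties": {
    "description": "Example: export high-severity alerts to a Log Analytics workspace",
    "isEnabled": true,
    "scopes": [
      {
        "description": "Scope the automation to a single subscription",
        "scopePath": "/subscriptions/<subscriptionId>"
      }
    ],
    "sources": [
      {
        "eventSource": "Alerts",
        "ruleSets": [
          {
            "rules": [
              {
                "propertyJPath": "Severity",
                "propertyType": "String",
                "expectedValue": "High",
                "operator": "Equals"
              }
            ]
          }
        ]
      }
    ],
    "actions": [
      {
        "actionType": "Workspace",
        "workspaceResourceId": "/subscriptions/<subscriptionId>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
      }
    ]
  }
}
```

A `PUT` of a body like this to the automations endpoint creates one export configuration; repeating it with different automation names is what allows multiple configurations per subscription, which the portal doesn't support.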
-Learn more about the automations API in the [REST API documentation](/rest/api/defenderforcloud/automations).
- ### [**Deploy at scale with Azure Policy**](#tab/azure-policy) ### Configure continuous export at scale using the supplied policies
To view the event schemas of the exported data types, visit the [Log Analytics t
## Export data to an Azure Event hub or Log Analytics workspace in another tenant
-You can export data to an Azure Event hub or Log Analytics workspace in a different tenant, which can help you to gather your data for central analysis.
+You can export data to an Azure Event hub or Log Analytics workspace in a different tenant, without using [Azure Lighthouse](/azure/lighthouse/overview). When collecting data into a tenant, you can analyze the data from one central location.
To export data to an Azure Event hub or Log Analytics workspace in a different tenant:
defender-for-cloud Defender For Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-glossary.md
This glossary provides a brief description of important terms and concepts for t
|**APT** | Advanced Persistent Threats | [Video: Understanding APTs](/events/teched-2012/sia303)| | **Arc-enabled Kubernetes**| Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers or clusters running on your on-premises data center.|[What is Azure Arc-enabled Logic Apps? (Preview)](../logic-apps/azure-arc-enabled-logic-apps-overview.md) |**ARM**| Azure Resource Manager-the deployment and management service for Azure.| [Azure Resource Manager Overview](../azure-resource-manager/management/overview.md)|
-|**ASB**| Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on Azure.| [Azure Security Benchmark](/azure/baselines/security-center-security-baseline) |
+|**ASB**| Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on Azure.| [Azure Security Benchmark](/security/benchmark/azure/baselines/security-center-security-baseline) |
|**Auto-provisioning**| To make sure that your server resources are secure, Microsoft Defender for Cloud uses agents installed on your servers to send information about your servers to Microsoft Defender for Cloud for analysis. You can use auto provisioning to quietly deploy the Azure Monitor Agent on your servers.| [Configure auto provision](../iot-dps/quick-setup-auto-provision.md)| ## B
This glossary provides a brief description of important terms and concepts for t
|**Zero-Trust**|A new security model that assumes breach and verifies each request as though it originated from an uncontrolled network.|[Zero-Trust Security](../security/fundamentals/zero-trust.md)| ## Next Steps
-[Microsoft Defender for Cloud-overview](overview-page.md)
+[Microsoft Defender for Cloud-overview](overview-page.md)
defender-for-cloud Episode Eighteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eighteen.md
Last updated 11/03/2022
## Recommended resources
-Learn more about [Enable Microsoft Defender for Azure Cosmos DB](/defender-for-cloud/defender-for-databases-enable-cosmos-protections.md)
+Learn more about [Enable Microsoft Defender for Azure Cosmos DB](/azure/defender-for-cloud/defender-for-databases-enable-cosmos-protections)
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
defender-for-cloud Episode Nineteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-nineteen.md
Last updated 11/08/2022
- [08:22](/shows/mdc-in-the-field/defender-for-devops#time=08m22s) - Demonstration ## Recommended resources
- - [Learn more](/defender-for-cloud/defender-for-devops-introduction.md) about Defender for DevOps.
+ - [Learn more](/azure/defender-for-cloud/defender-for-devops-introduction) about Defender for DevOps.
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity) - For more about [Microsoft Security](https://msft.it/6002T9HQY)
defender-for-cloud Episode Twenty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty.md
Last updated 11/08/2022
## Recommended resources
- - [Learn more](/defender-for-cloud/concept-attack-path.md) about Attack path.
+ - [Learn more](/azure/defender-for-cloud/concept-attack-path) about Attack path.
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity) - For more about [Microsoft Security](https://msft.it/6002T9HQY)
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
description: Learn about deploying Microsoft Defender for Endpoint from Microsof
Previously updated : 07/20/2022 Last updated : 11/20/2022 # Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint
Confirm that your machine meets the necessary requirements for Defender for Endp
> [!IMPORTANT] > Defender for Cloud's integration with Microsoft Defender for Endpoint is enabled by default. So when you enable enhanced security features, you give consent for Microsoft Defender for Servers to access the Microsoft Defender for Endpoint data related to vulnerabilities, installed software, and alerts for your endpoints.
-1. For Windows servers, make sure that your servers meet the requirements for [onboarding Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/configure-server-endpoints#windows-server-2012-r2-and-windows-server-2016)
+1. For Windows servers, make sure that your servers meet the requirements for [onboarding Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/configure-server-endpoints#windows-server-2012-r2-and-windows-server-2016).
1. If you've moved your subscription between Azure tenants, some manual preparatory steps are also required. For full details, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
You can also enable the MDE unified solution at scale through the supplied REST
This is an example request body for the PUT request to enable the MDE unified solution:
-URI: `https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.Security/settings&api-version=2022-05-01-preview`
+URI: `https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.Security/settings/WDATP_UNIFIED_SOLUTION?api-version=2022-05-01`
```json {
hdinsight Hdinsight Selecting Vm Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-selecting-vm-size.md
keywords: vm sizes, cluster sizes, cluster configuration
Previously updated : 04/27/2022 Last updated : 11/19/2022 # Selecting the right VM size for your Azure HDInsight cluster
The following table describes the cluster types and node types, which can be cre
| Kafka | All | F4 and above | no | no | | HBase | All | F4 and above | no | no | | LLAP | disabled | no | no | no |
-| Storm | disabled | no | no | no |
-| ML Service | HDI 3.6 ONLY | F4 and above | no | no |
+ To see the specifications of each F-series SKU, see [F-series VM sizes](https://azure.microsoft.com/blog/f-series-vm-size/).
machine-learning Concept Train Model Git Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-model-git-integration.md
The logged information contains text similar to the following JSON:
} ```
-### Python SDK
+### View properties
After submitting a training run, a [Job](/python/api/azure-ai-ml/azure.ai.ml.entities.job) object is returned. The `properties` attribute of this object contains the logged git information. For example, the following code retrieves the commit hash:
+# [Python SDK](#tab/python)
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] ```python job.properties["azureml.git.commit"] ```
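Note that the `azureml.git.commit` key is only present when the run was submitted from a git repository, so indexing the dictionary directly raises `KeyError` otherwise. A small defensive variant of the lookup above (a hypothetical helper, assuming `job.properties` behaves like a plain dict as shown in the snippet):

```python
def get_git_commit(job_properties):
    """Return the logged git commit hash, or None if the job wasn't
    submitted from a git repository (the key is absent in that case)."""
    return job_properties.get("azureml.git.commit")

# Stubbed example of the dict shape that job.properties returns:
props = {"azureml.git.commit": "abc123", "azureml.git.branch": "main"}
print(get_git_commit(props))  # abc123
print(get_git_commit({}))     # None
```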
+# [Azure CLI](#tab/cli)
+
+```azurecli
+az ml job show --name my_job_id --query "{GitCommit:properties."""azureml.git.commit"""}"
+```
+++ ## Next steps * [Access a compute instance terminal in your workspace](how-to-access-terminal.md)
machine-learning Migrate To V2 Assets Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-data.md
This article gives a comparison of data scenario(s) in SDK v1 and SDK v2.
|Functionality in SDK v1|Rough mapping in SDK v2| |-|-|
-|[Method/API in SDK v1](/python/api/azurzeml-core/azureml.data)|[Method/API in SDK v2](/python/api/azure-ai-ml/azure.ai.ml.entities)|
+|[Method/API in SDK v1](/python/api/azureml-core/azureml.data)|[Method/API in SDK v2](/python/api/azure-ai-ml/azure.ai.ml.entities)|
## Next steps
spring-apps How To Enterprise Application Configuration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-application-configuration-service.md
Application Configuration Service for Tanzu supports Azure DevOps, GitHub, GitLa
To manage the service settings, open the **Settings** section and add a new entry under the **Repositories** section. The following table describes properties for each entry.
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
The following list describes features and capabilities that are available in the
- **Inventory reports for blobs and containers**
- You can generate inventory reports for blobs and containers. A report for blobs can contain base blobs, snapshots, content length, blob versions and their associated properties such as creation time, last modified time. A report for containers describes containers and their associated properties such as immutability policy status, legal hold status. Currently, the report does not have an option to include Soft Deleted blobs or Soft Delete containers.
+ You can generate inventory reports for blobs and containers. A report for blobs can contain base blobs, snapshots, content length, blob versions and their associated properties such as creation time, last modified time. A report for containers describes containers and their associated properties such as immutability policy status, legal hold status.
- **Custom Schema**
storage Azure Defender Storage Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/azure-defender-storage-configure.md
Learn more about the [ARM template AzAPI reference](/azure/templates/microsoft.s
To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using PowerShell:
-1. If you don't have it already, [install the Azure Az PowerShell module](/powershell/azure/install-az-ps.md).
-1. Use the `Connect-AzAccount` cmdlet to sign in to your Azure account. Learn more about [signing in to Azure with Azure PowerShell](/powershell/azure/authenticate-azureps.md).
+1. If you don't have it already, [install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+1. Use the `Connect-AzAccount` cmdlet to sign in to your Azure account. Learn more about [signing in to Azure with Azure PowerShell](/powershell/azure/authenticate-azureps).
1. Use these commands to register your subscription to the Microsoft Defender for Cloud Resource Provider: ```powershell
To enable Microsoft Defender for Storage at the subscription level with per-tran
``` > [!TIP]
-> You can use the [`GetAzSecurityPricing` (Az_Security)](/powershell/module/az.security/get-azsecuritypricing.md) to see all of the Defender for Cloud plans that are enabled for the subscription.
+> You can use the [`GetAzSecurityPricing` (Az_Security)](/powershell/module/az.security/get-azsecuritypricing) to see all of the Defender for Cloud plans that are enabled for the subscription.
To disable the plan, set the `-PricingTier` property value to `Free`.
To enable Microsoft Defender for Storage at the subscription level with per-tran
To disable the plan, set the `-tier` property value to `free`.
-Learn more about the [`az security pricing create`](/cli/azure/security/pricing.md#az-security-pricing-create) command.
+Learn more about the [`az security pricing create`](/cli/azure/security/pricing#az-security-pricing-create) command.
#### REST API
If you want to disable Defender for Storage on the account:
To enable Microsoft Defender for Storage for a specific storage account with per-transaction pricing using PowerShell:
-1. If you don't have it already, [install the Azure Az PowerShell module](/powershell/azure/install-az-ps.md).
-1. Use the Connect-AzAccount cmdlet to sign in to your Azure account. Learn more about [signing in to Azure with Azure PowerShell](/powershell/azure/authenticate-azureps.md).
-1. Enable Microsoft Defender for Storage for the desired storage account with theΓÇ»[`Enable-AzSecurityAdvancedThreatProtection`](/powershell/module/az.security/enable-azsecurityadvancedthreatprotection.md) cmdlet:
+1. If you don't have it already, [install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+1. Use the Connect-AzAccount cmdlet to sign in to your Azure account. Learn more about [signing in to Azure with Azure PowerShell](/powershell/azure/authenticate-azureps).
+1. Enable Microsoft Defender for Storage for the desired storage account with theΓÇ»[`Enable-AzSecurityAdvancedThreatProtection`](/powershell/module/az.security/enable-azsecurityadvancedthreatprotection) cmdlet:
```powershell Enable-AzSecurityAdvancedThreatProtection -ResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/"
To enable Microsoft Defender for Storage for a specific storage account with per
Replace `<subscriptionId>`, `<resource-group>`, and `<storage-account>` with the values for your environment.
-If you want to disable per-transaction pricing for a specific storage account, use the [`Disable-AzSecurityAdvancedThreatProtection`](/powershell/module/az.security/disable-azsecurityadvancedthreatprotection.md) cmdlet:
+If you want to disable per-transaction pricing for a specific storage account, use the [`Disable-AzSecurityAdvancedThreatProtection`](/powershell/module/az.security/disable-azsecurityadvancedthreatprotection) cmdlet:
```powershell Disable-AzSecurityAdvancedThreatProtection -ResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/"
To enable Microsoft Defender for Storage for a specific storage account with per
1. If you don't have it already, [install the Azure CLI](/cli/azure/install-azure-cli). 1. Use the `az login` command to sign in to your Azure account. Learn more about [signing in to Azure with Azure CLI](/cli/azure/authenticate-azure-cli).
-1. Enable Microsoft Defender for Storage for your subscription with theΓÇ»[`az security atp storage update`](/cli/azure/security/atp/storage.md) command:
+1. Enable Microsoft Defender for Storage for your subscription with theΓÇ»[`az security atp storage update`](/cli/azure/security/atp/storage) command:
```azurecli az security atp storage update \
To enable Microsoft Defender for Storage for a specific storage account with per
``` > [!TIP]
-> You can use the [`az security atp storage show`](/cli/azure/security/atp/storage.md) command to see if Defender for Storage is enabled on an account.
+> You can use the [`az security atp storage show`](/cli/azure/security/atp/storage) command to see if Defender for Storage is enabled on an account.
-To disable Microsoft Defender for Storage for your subscription, use theΓÇ»[`az security atp storage update`](/cli/azure/security/atp/storage.md) command:
+To disable Microsoft Defender for Storage for your subscription, use theΓÇ»[`az security atp storage update`](/cli/azure/security/atp/storage) command:
```azurecli az security atp storage update \
virtual-machines Disks Upload Vhd To Managed Disk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-upload-vhd-to-managed-disk-cli.md
description: Learn how to upload a VHD to an Azure managed disk and copy a manag
Previously updated : 07/21/2022 Last updated : 11/18/2022
virtual-machines Download Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/download-vhd.md
Previously updated : 07/21/2022 Last updated : 11/18/2022 # Download a Linux VHD from Azure
virtual-machines Disks Upload Vhd To Managed Disk Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md
Title: Upload a VHD to Azure or copy a disk across regions - Azure PowerShell
description: Learn how to upload a VHD to an Azure managed disk and copy a managed disk across regions, using Azure PowerShell, via direct upload. Previously updated : 07/21/2022 Last updated : 11/18/2022 linux
virtual-machines Download Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/download-vhd.md
Previously updated : 07/21/2022 Last updated : 11/18/2022 # Download a Windows VHD from Azure
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
Application Gateway web application firewall (WAF) protects web applications fro
The Application Gateway WAF comes pre-configured with CRS 3.2 by default, but you can choose to use any other supported CRS version.
-CRS 3.2 offers a new engine and new rule sets defending against Java infections, an initial set of file upload checks, and fewer false positives compared with earlier versions of CRS. You can also [customize rules to suit your needs](application-gateway-customize-waf-rules-portal.md). Learn more about the new [Azure WAF engine](waf-engine.md).
+CRS 3.2 offers a new engine and new rule sets defending against Java injections, an initial set of file upload checks, and fewer false positives compared with earlier versions of CRS. You can also [customize rules to suit your needs](application-gateway-customize-waf-rules-portal.md). Learn more about the new [Azure WAF engine](waf-engine.md).
> [!div class="mx-imgBorder"] > ![Manages rules](../media/application-gateway-crs-rulegroups-rules/managed-rules-01.png)