Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
ai-services | Red Teaming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/red-teaming.md | Title: Introduction to red teaming large language models (LLMs) + Title: Planning red teaming for large language models (LLMs) and their applications -description: Learn about how red teaming and adversarial testing is an essential practice in the responsible development of systems and features using large language models (LLMs) +description: Learn about how red teaming and adversarial testing are an essential practice in the responsible development of systems and features using large language models (LLMs) Previously updated : 05/18/2023 Last updated : 11/03/2023 recommendations: false keywords: -# Introduction to red teaming large language models (LLMs) +# Planning red teaming for large language models (LLMs) and their applications ++This guide offers some potential strategies for planning how to set up and manage red teaming for responsible AI (RAI) risks throughout the large language model (LLM) product life cycle. ++## What is red teaming? The term *red teaming* has historically described systematic adversarial attacks for testing security vulnerabilities. With the rise of LLMs, the term has extended beyond traditional cybersecurity and evolved in common usage to describe many kinds of probing, testing, and attacking of AI systems. With LLMs, both benign and adversarial usage can produce potentially harmful outputs, which can take many forms, including harmful content such as hate speech, incitement or glorification of violence, or sexual content. -**Red teaming is an essential practice in the responsible development of systems and features using LLMs**. While not a replacement for systematic [measurement and mitigation](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context) work, red teamers help to uncover and identify harms and, in turn, enable measurement strategies to validate the effectiveness of mitigations. +## Why is RAI red teaming an important practice? ++Red teaming is a best practice in the responsible development of systems and features using LLMs. While not a replacement for systematic measurement and mitigation work, red teamers help to uncover and identify harms and, in turn, enable measurement strategies to validate the effectiveness of mitigations. -Microsoft has conducted red teaming exercises and implemented safety systems (including [content filters](content-filter.md) and other [mitigation strategies](prompt-engineering.md)) for its Azure OpenAI Service models (see this [Responsible AI Overview](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context)). However, the context of your LLM application will be unique and you also should conduct red teaming to: +While Microsoft has conducted red teaming exercises and implemented safety systems (including [content filters](./content-filter.md) and other [mitigation strategies](./prompt-engineering.md)) for its Azure OpenAI Service models (see this [Overview of responsible AI practices](/legal/cognitive-services/openai/overview)), the context of each LLM application will be unique and you also should conduct red teaming to: ++- Test the LLM base model and determine whether there are gaps in the existing safety systems, given the context of your application. -- Test the LLM base model and determine whether there are gaps in the existing safety systems, given the context of your application system. 
- Identify and mitigate shortcomings in the existing default filters or mitigation strategies.-- Provide feedback on failures so we can make improvements. -Here is how you can get started in your process of red teaming LLMs. Advance planning is critical to a productive red teaming exercise. +- Provide feedback on failures in order to make improvements. ++- Note that red teaming is not a replacement for systematic measurement. A best practice is to complete an initial round of manual red teaming before conducting systematic measurements and implementing mitigations. As highlighted above, the goal of RAI red teaming is to identify harms, understand the risk surface, and develop the list of harms that can inform what needs to be measured and mitigated. -## Getting started +Here is how you can get started and plan your process of red teaming LLMs. Advance planning is critical to a productive red teaming exercise. -### Managing your red team +## Before testing -**Assemble a diverse group of red teamers.** +### Plan: Who will do the testing -LLM red teamers should be a mix of people with diverse social and professional backgrounds, demographic groups, and interdisciplinary expertise that fits the deployment context of your AI system. For example, if you're designing a chatbot to help health care providers, medical experts can help identify risks in that domain. +**Assemble a diverse group of red teamers** -**Recruit red teamers with both benign and adversarial mindsets.** +Determine the ideal composition of red teamers in terms of people's experience, demographics, and expertise across disciplines (for example, experts in AI, social sciences, security) for your product's domain. For example, if you're designing a chatbot to help health care providers, medical experts can help identify risks in that domain. ++**Recruit red teamers with both benign and adversarial mindsets** Having red teamers with an adversarial mindset and security-testing experience is essential for understanding security risks, but red teamers who are ordinary users of your application system and haven't been involved in its development can bring valuable perspectives on harms that regular users might encounter. -**Remember that handling potentially harmful content can be mentally taxing.** +**Assign red teamers to harms and/or product features** ++- Assign RAI red teamers with specific expertise to probe for specific types of harms (for example, security subject matter experts can probe for jailbreaks, meta prompt extraction, and content related to cyberattacks). ++- For multiple rounds of testing, decide whether to switch red teamer assignments in each round to get diverse perspectives on each harm and maintain creativity. If switching assignments, allow time for red teamers to get up to speed on the instructions for their newly assigned harm. ++- In later stages, when the application and its UI are developed, you might want to assign red teamers to specific parts of the application (i.e., features) to ensure coverage of the entire application. ++- Consider how much time and effort each red teamer should dedicate (for example, those testing for benign scenarios might need less time than those testing for adversarial scenarios). 
++It can be helpful to provide red teamers with: + - Clear instructions that could include: + - An introduction describing the purpose and goal of the given round of red teaming; the product and features that will be tested and how to access them; what kinds of issues to test for; red teamers' focus areas, if the testing is more targeted; how much time and effort each red teamer should spend on testing; how to record results; and who to contact with questions. + - A file or location for recording their examples and findings, including information such as: + - The date an example was surfaced; a unique identifier for the input/output pair if available, for reproducibility purposes; the input prompt; a description or screenshot of the output. ++### Plan: What to test ++Because an application is developed using a base model, you may need to test at several different layers: ++- The LLM base model with its safety system in place to identify any gaps that may need to be addressed in the context of your application system. (Testing is usually done through an API endpoint.) ++- Your application. (Testing is best done through a UI.) ++- Both the LLM base model and your application, before and after mitigations are in place. ++The following recommendations help you choose what to test at various points during red teaming: ++- You can begin by testing the base model to understand the risk surface, identify harms, and guide the development of RAI mitigations for your product. ++- Test versions of your product iteratively with and without RAI mitigations in place to assess the effectiveness of RAI mitigations. (Note, manual red teaming might not be a sufficient assessment; use systematic measurements as well, but only after completing an initial round of manual red teaming.) ++- Conduct testing of application(s) on the production UI as much as possible because this most closely resembles real-world usage. ++When reporting results, make clear which endpoints were used for testing. When testing was done on an endpoint other than the production endpoint, consider testing again on the production endpoint or UI in future rounds. ++### Plan: How to test ++**Conduct open-ended testing to uncover a wide range of harms.** ++Letting RAI red teamers explore and document any problematic content (rather than asking them to find examples of specific harms) enables them to creatively explore a wide range of issues, uncovering blind spots in your understanding of the risk surface. ++**Create a list of harms from the open-ended testing.** ++- Consider creating a list of harms, with definitions and examples of the harms. +- Provide this list as a guideline to red teamers in later rounds of testing. ++**Conduct guided red teaming and iterate: Continue probing for harms in the list; identify new harms that surface.** ++Use a list of harms if available and continue testing for known harms and the effectiveness of their mitigations. In the process, you will likely identify new harms. Integrate these into the list and be open to shifting measurement and mitigation priorities to address the newly identified harms. ++Plan which harms to prioritize for iterative testing. Several factors can inform your prioritization, including, but not limited to, the severity of the harms and the context in which they are more likely to surface. 
++### Plan: How to record data ++**Decide what data you need to collect and what data is optional.** ++- Decide what data the red teamers will need to record (for example, the input they used; the output of the system; a unique ID, if available, to reproduce the example in the future; and other notes.) ++- Be strategic with what data you are collecting to avoid overwhelming red teamers, while not missing out on critical information. ++**Create a structure for data collection** ++A shared Excel spreadsheet is often the simplest method for collecting red teaming data. A benefit of this shared file is that red teamers can review each other's examples to gain creative ideas for their own testing and avoid duplication of data. ++## During testing -You will need to take care of your red teamers, not only by limiting the amount of time they spend on an assignment, but also by letting them know they can opt out at any time. Also, avoid burnout by switching red teamers' assignments to different focus areas. +**Plan to be on active standby while red teaming is ongoing** -### Planning your red teaming +- Be prepared to assist red teamers with instructions and access issues. +- Monitor progress on the spreadsheet and send timely reminders to red teamers. -#### Where to test ## After each round of testing -Because a system is developed using a LLM base model, you may need to test at several different layers: **Report data** -- The LLM base model with its [safety system](./content-filter.md) in place to identify any gaps that may need to be addressed in the context of your application system. (Testing is usually through an API endpoint.)-- Your application system. (Testing is usually through a UI.)-- Both the LLM base model and your application system before and after mitigations are in place.+Share a short report on a regular interval with key stakeholders that: -#### How to test +1. Lists the top identified issues. -Consider conducting iterative red teaming in at least two phases: +2. Provides a link to the raw data. -1. Open-ended red teaming, where red teamers are encouraged to discover a variety of harms. This can help you develop a taxonomy of harms to guide further testing. Note that developing a taxonomy of undesired LLM outputs for your application system is crucial to being able to measure the success of specific mitigation efforts. -2. Guided red teaming, where red teamers are assigned to focus on specific harms listed in the taxonomy while staying alert for any new harms that may emerge. Red teamers can also be instructed to focus testing on specific features of a system for surfacing potential harms. +3. Previews the testing plan for the upcoming rounds. -Be sure to: +4. Acknowledges red teamers. -- Provide your red teamers with clear instructions for what harms or system features they will be testing.-- Give your red teamers a place for recording their findings. For example, this could be a simple spreadsheet specifying the types of data that red teamers should provide, including basics such as:- - The type of harm that was surfaced. - - The input prompt that triggered the output. - - An excerpt from the problematic output. - - Comments about why the red teamer considered the output problematic. -- Maximize the effort of responsible AI red teamers who have expertise for testing specific types of harms or undesired outputs. For example, have security subject matter experts focus on jailbreaks, metaprompt extraction, and content related to aiding cyberattacks.+5. 
Provides any other relevant information. -### Reporting red teaming findings +**Differentiate between identification and measurement** -You will want to summarize and report red teaming top findings at regular intervals to key stakeholders, including teams involved in the measurement and mitigation of LLM failures so that the findings can inform critical decision making and prioritizations. +In the report, be sure to clarify that the role of RAI red teaming is to expose and raise understanding of risk surface and is not a replacement for systematic measurement and rigorous mitigation work. It is important that people do not interpret specific examples as a metric for the pervasiveness of that harm. -## Next steps +Additionally, if the report contains problematic content and examples, consider including a content warning. -[Learn about other mitigation strategies like prompt engineering](./prompt-engineering.md) +The guidance in this document is not intended to be, and should not be construed as providing, legal advice. The jurisdiction in which you're operating may have various regulatory or legal requirements that apply to your AI system. Be aware that not all of these recommendations are appropriate for every scenario and, conversely, these recommendations may be insufficient for some scenarios. |
ai-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md | You can modify the following additional settings in the **Data parameters** sect |Parameter name | Description | |||-|**Retrieved documents** | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. The default value is 3. This is the `topNDocuments` parameter in the API. | +|**Retrieved documents** | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. The default value is 5. This is the `topNDocuments` parameter in the API. | | **Strictness** | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. The default value is 3. | ## Virtual network support & private endpoint support |
ai-services | Embeddings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/embeddings.md | curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM -d '{"input": "Sample Document goes here"}' ``` -# [python](#tab/python) +# [OpenAI Python 0.28.1](#tab/python) + ```python import openai embeddings = response['data'][0]['embedding'] print(embeddings) ``` +# [OpenAI Python 1.x](#tab/python-new) ++```python +import os +from openai import AzureOpenAI ++client = AzureOpenAI( + api_key = os.getenv("AZURE_OPENAI_KEY"), + api_version = "2023-05-15", + azure_endpoint =os.getenv("AZURE_OPENAI_ENDPOINT") +) ++response = client.embeddings.create( + input = "Your text string goes here", + model= "text-embedding-ada-002" +) ++print(response.model_dump_json(indent=2)) +``` + # [C#](#tab/csharp) ```csharp using Azure; |
ai-services | Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md | The following parameters can be used inside of the `parameters` field inside of | `indexName` | string | Required | null | The search index to be used. | | `fieldsMapping` | dictionary | Optional | null | Index data column mapping. | | `inScope` | boolean | Optional | true | If set, this value will limit responses specific to the grounding data content. |-| `topNDocuments` | number | Optional | 3 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. This is the *retrieved documents* parameter in Azure OpenAI studio. | +| `topNDocuments` | number | Optional | 5 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. This is the *retrieved documents* parameter in Azure OpenAI studio. | | `queryType` | string | Optional | simple | Indicates which query option will be used for Azure Cognitive Search. Available types: `simple`, `semantic`, `vector`, `vectorSimpleHybrid`, `vectorSemanticHybrid`. | | `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only required when `queryType` is set to `semantic` or `vectorSemanticHybrid`. | | `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the "System Message" in Azure OpenAI Studio. See [Using your data](./concepts/use-your-data.md#system-message) for more information. There's a 100 token limit, which counts towards the overall token limit.| |
ai-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md | Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
ai-services | Embedded Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/embedded-speech.md | Follow these steps to install the Speech SDK for Java using Apache Maven: <dependency> <groupId>com.microsoft.cognitiveservices.speech</groupId> <artifactId>client-sdk-embedded</artifactId>- <version>1.32.1</version> + <version>1.33.0</version> </dependency> </dependencies> </project> Be sure to use the `@aar` suffix when the dependency is specified in `build.grad ``` dependencies {- implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.32.1@aar' + implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.33.0@aar' } ``` ::: zone-end dependencies { ## Models and voices -For embedded speech, you'll need to download the speech recognition models for [speech to text](speech-to-text.md) and voices for [text to speech](text-to-speech.md). Instructions will be provided upon successful completion of the [limited access review](https://aka.ms/csgate-embedded-speech) process. +For embedded speech, you need to download the speech recognition models for [speech to text](speech-to-text.md) and voices for [text to speech](text-to-speech.md). Instructions are provided upon successful completion of the [limited access review](https://aka.ms/csgate-embedded-speech) process. The following [speech to text](speech-to-text.md) models are available: de-DE, en-AU, en-CA, en-GB, en-IE, en-IN, en-NZ, en-US, es-ES, es-MX, fr-CA, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, nl-NL, pt-BR, ru-RU, sv-SE, tr-TR, zh-CN, zh-HK, and zh-TW. -All text to speech locales [here](language-support.md?tabs=tts) (except fa-IR, Persian (Iran)) are available out of box with either 1 selected female and/or 1 selected male voices. We welcome your input to help us gauge demand for additional languages and voices. +All text to speech locales [here](language-support.md?tabs=tts) (except fa-IR, Persian (Iran)) are available out of the box with 1 selected female and/or 1 selected male voice. We welcome your input to help us gauge demand for more languages and voices. ## Embedded speech configuration Hybrid speech with the `HybridSpeechConfig` object uses the cloud speech service With hybrid speech configuration for [speech to text](speech-to-text.md) (recognition models), embedded speech is used when connection to the cloud service fails after repeated attempts. Recognition may continue using the cloud service again if the connection is later resumed. -With hybrid speech configuration for [text to speech](text-to-speech.md) (voices), embedded and cloud synthesis are run in parallel and the result is selected based on which one gives a faster response. The best result is evaluated on each synthesis request. +With hybrid speech configuration for [text to speech](text-to-speech.md) (voices), embedded and cloud synthesis are run in parallel and the final result is selected based on response speed. The best result is evaluated again on each new synthesis request. ## Cloud speech For cloud speech, you use the `SpeechConfig` object, as shown in the [speech to ## Embedded voices capabilities -For embedded voices, it is essential to note that certain SSML tags may not be currently supported due to differences in the model structure. 
For detailed information regarding the unsupported SSML tags, refer to the following table. | Level 1 | Level 2 | Sub values | Support in embedded NTTS | |--|--|-|--| |
aks | Ai Toolchain Operator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ai-toolchain-operator.md | + + Title: Deploy an AI model on Azure Kubernetes Service (AKS) with the AI toolchain operator (Preview) +description: Learn how to enable the AI toolchain operator add-on on Azure Kubernetes Service (AKS) to simplify OSS AI model management and deployment. ++ Last updated : 11/01/2023+++# Deploy an AI model on Azure Kubernetes Service (AKS) with the AI toolchain operator (Preview) ++The AI toolchain operator (KAITO) is a managed add-on for AKS that simplifies the experience of running OSS AI models on your AKS clusters. The AI toolchain operator automatically provisions the necessary GPU nodes and sets up the associated inference server as an endpoint for your AI models. Using this add-on reduces your onboarding time and enables you to focus on AI model usage and development rather than infrastructure setup. ++This article shows you how to enable the AI toolchain operator add-on and deploy an AI model on AKS. +++## Before you begin ++* This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for AKS](./concepts-clusters-workloads.md). +* If you aren't familiar with Microsoft Entra Workload Identity, see the [Workload Identity overview](../active-directory/workload-identities/workload-identities-overview.md). +* For ***all hosted model inference files*** and recommended infrastructure setup, see the [KAITO GitHub repository](https://github.com/Azure/kaito). ++## Prerequisites ++* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. + * If you have multiple Azure subscriptions, make sure you select the correct subscription in which the resources will be created and charged using the [`az account set`][az-account-set] command. ++ > [!NOTE] + > The subscription you use must have GPU VM quota. ++* Azure CLI version 2.47.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). +* Helm v3 installed. For more information, see [Installing Helm](https://helm.sh/docs/intro/install/). +* The Kubernetes command-line client, kubectl, installed and configured. For more information, see [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). ++## Enable the Azure CLI preview extension ++* Enable the Azure CLI preview extension using the [`az extension add`][az-extension-add] command. ++ ```azurecli-interactive + az extension add --name aks-preview + ``` ++### Export environment variables ++* To simplify the configuration steps in this article, you can define environment variables using the following commands. Make sure to replace the placeholder values with your own. ++ ```azurecli-interactive + export AZURE_SUBSCRIPTION_ID="mySubscriptionID" + export AZURE_RESOURCE_GROUP="myResourceGroup" + export CLUSTER_NAME="myClusterName" + ``` ++## Enable the AI toolchain operator add-on on an AKS cluster ++1. Create an Azure resource group using the [`az group create`][az-group-create] command. ++ ```azurecli-interactive + az group create --name $AZURE_RESOURCE_GROUP --location eastus + ``` ++2. 
Create an AKS cluster with the AI toolchain operator add-on enabled using the [`az aks create`][az-aks-create] command with the `--enable-ai-toolchain-operator`, `--enable-workload-identity`, and `--enable-oidc-issuer` flags. ++ ```azurecli-interactive + az aks create --resource-group $AZURE_RESOURCE_GROUP --name $CLUSTER_NAME --generate-ssh-keys --enable-managed-identity --enable-workload-identity --enable-oidc-issuer --enable-ai-toolchain-operator + ``` ++ > [!NOTE] + > AKS creates a managed identity once you enable the AI toolchain operator add-on. The managed identity is used to access the AI toolchain operator workspace CRD. The AI toolchain operator workspace CRD is used to create and manage AI toolchain operator workspaces. + > + > AI toolchain operator enablement requires the enablement of workload identity and OIDC issuer. ++## Connect to your cluster ++1. Configure `kubectl` to connect to your cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. ++ ```azurecli-interactive + az aks get-credentials --resource-group $AZURE_RESOURCE_GROUP --name $CLUSTER_NAME + ``` ++2. Verify the connection to your cluster using the `kubectl get` command. ++ ```azurecli-interactive + kubectl get nodes + ``` ++3. Export environment variables for the principal ID identity and client ID identity using the following commands: ++ ```azurecli-interactive + export MC_RESOURCE_GROUP=$(az aks show --resource-group $AZURE_RESOURCE_GROUP --name $CLUSTER_NAME --query nodeResourceGroup -o tsv) + export PRINCIPAL_ID=$(az identity show --name "ai-toolchain-operator-${CLUSTER_NAME}" --resource-group "${MC_RESOURCE_GROUP}" --query 'principalId' -o tsv) + export CLIENT_ID=$(az identity show --name gpuIdentity --resource-group "${AZURE_RESOURCE_GROUP}" --subscription "${AZURE_SUBSCRIPTION_ID}" --query 'clientId' -o tsv) + ``` ++## Create a role assignment for the principal ID identity ++1. Create a new role assignment for the service principal using the [`az role assignment create`][az-role-assignment-create] command. ++ ```azurecli-interactive + az role assignment create --role "Contributor" --assignee "${PRINCIPAL_ID}" --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}/providers/Microsoft.ContainerService/managedClusters/${CLUSTER_NAME}" + ``` ++2. Get the AKS OIDC Issuer URL and export it as an environment variable using the following command: ++ ```azurecli-interactive + export AKS_OIDC_ISSUER=$(az aks show --resource-group "${AZURE_RESOURCE_GROUP}" --name "${CLUSTER_NAME}" --subscription "${AZURE_SUBSCRIPTION_ID}" --query "oidcIssuerProfile.issuerUrl" -o tsv) + ``` ++## Establish a federated identity credential ++* Create the federated identity credential between the managed identity, AKS OIDC issuer, and subject using the [`az identity federated-credential create`][az-identity-federated-credential-create] command. ++ ```azurecli-interactive + az identity federated-credential create --name "${FEDERATED_IDENTITY}" --identity-name "ai-toolchain-operator-${CLUSTER_NAME}" --resource-group "${AZURE_RESOURCE_GROUP}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"kube-system":"gpu-provisioner" --audience api://AzureADTokenExchange --subscription "${AZURE_SUBSCRIPTION_ID}" + ``` ++## Deploy a default hosted AI model ++1. Deploy the Falcon 7B model YAML file from the GitHub repository using the `kubectl apply` command. 
++ ```azurecli-interactive + kubectl apply -f https://raw.githubusercontent.com/Azure/kaito/main/examples/kaito_workspace_falcon_7b.yaml + ``` ++2. Track the live resource changes in your workspace using the `kubectl get` command. ++ ```azurecli-interactive + kubectl get workspace workspace-falcon-7b -w + ``` ++3. Check your service and get the service IP address using the `kubectl get svc` command. ++ ```azurecli-interactive + export SERVICE_IP=$(kubectl get svc workspace-falcon-7b -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + ``` ++4. Run the Falcon 7B model with a sample input of your choice using the following `curl` command: ++ ```azurecli-interactive + curl -X POST "http://${SERVICE_IP}:80/chat" -H "accept: application/json" -H "Content-Type: application/json" -d '{"prompt":"YOUR_PROMPT_HERE"}' + ``` ++## Clean up resources ++If you no longer need these resources, you can delete them to avoid incurring extra Azure charges. ++* Delete the resource group and its associated resources using the [`az group delete`][az-group-delete] command. ++ ```azurecli-interactive + az group delete --name $AZURE_RESOURCE_GROUP --yes --no-wait + ``` ++## Next steps ++For more inference model options, see the [KAITO GitHub repository](https://github.com/Azure/kaito). ++<!-- LINKS --> +[az-group-create]: /cli/azure/group#az_group_create +[az-group-delete]: /cli/azure/group#az_group_delete +[az-aks-create]: /cli/azure/aks#az_aks_create +[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials +[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create +[az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az_identity_federated_credential_create +[az-account-set]: /cli/azure/account#az_account_set +[az-extension-add]: /cli/azure/extension#az_extension_add |
aks | Best Practices Performance Scale Large | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-performance-scale-large.md | + + Title: Performance and scaling best practices for large workloads in Azure Kubernetes Service (AKS) ++description: Learn the best practices for performance and scaling for large workloads in Azure Kubernetes Service (AKS). + Last updated : 11/03/2023+++# Best practices for performance and scaling for large workloads in Azure Kubernetes Service (AKS) ++> [!NOTE] +> This article focuses on general best practices for **large workloads**. For best practices specific to **small to medium workloads**, see [Performance and scaling best practices for small to medium workloads in Azure Kubernetes Service (AKS)](./best-practices-performance-scale.md). ++As you deploy and maintain clusters in AKS, you can use the following best practices to help you optimize performance and scaling. ++Keep in mind that *large* is a relative term. Kubernetes has a multi-dimensional scale envelope, and the scale envelope for your workload depends on the resources you use. For example, a cluster with 100 nodes and thousands of pods or CRDs might be considered large. A 1,000 node cluster with 1,000 pods and various other resources might be considered small from the control plane perspective. The best signal for scale of a Kubernetes control plane is API server HTTP request success rate and latency, as that's a proxy for the amount of load on the control plane. ++In this article, you learn about: ++> [!div class="checklist"] +> +> * AKS and Kubernetes control plane scalability. +> * Kubernetes Client best practices, including backoff, watches, and pagination. +> * Azure API and platform throttling limits. +> * Feature limitations. +> * Networking and node pool scaling best practices. ++## AKS and Kubernetes control plane scalability ++In AKS, a *cluster* consists of a set of nodes (physical or virtual machines (VMs)) that run Kubernetes agents and are managed by the Kubernetes control plane hosted by AKS. While AKS optimizes the Kubernetes control plane and its components for scalability and performance, it's still bound by the upstream project limits. ++Kubernetes has a multi-dimensional scale envelope with each resource type representing a dimension. Not all resources are alike. For example, *watches* are commonly set on secrets, which result in list calls to the kube-apiserver that add cost and a disproportionately higher load on the control plane compared to resources without watches. ++The control plane manages all the resource scaling in the cluster, so the more you scale the cluster within a given dimension, the less you can scale within other dimensions. For example, running hundreds of thousands of pods in an AKS cluster impacts how much pod churn rate (pod mutations per second) the control plane can support. ++The size of the envelope is proportional to the size of the Kubernetes control plane. AKS supports two control plane tiers as part of the Base SKU: the Free tier and the Standard tier. For more information, see [Free and Standard pricing tiers for AKS cluster management][free-standard-tier]. ++> [!IMPORTANT] +> We highly recommend using the Standard tier for production or at-scale workloads. 
AKS automatically scales up the Kubernetes control plane to support the following scale limits: +> +> * Up to 5,000 nodes per AKS cluster +> * 200,000 pods per AKS cluster (with Azure CNI Overlay) ++In most cases, crossing the scale limit threshold results in degraded performance, but doesn't cause the cluster to immediately fail over. To manage load on the Kubernetes control plane, consider scaling in batches of up to 10-20% of the current scale. For example, for a 5,000 node cluster, scale in increments of 500-1,000 nodes. While AKS does autoscale your control plane, it doesn't happen instantaneously. ++You can leverage API Priority and Fairness (APF) to throttle specific clients and request types to protect the control plane during high churn and load. ++## Kubernetes clients ++Kubernetes clients are the application clients, such as operators or monitoring agents, deployed in the Kubernetes cluster that need to communicate with the kube-api server to perform read or mutate operations. It's important to optimize the behavior of these clients to minimize the load they add to the kube-api server and Kubernetes control plane. ++AKS doesn't expose control plane and API server metrics via Prometheus or through platform metrics. However, you can analyze API server traffic and client behavior through Kube Audit logs. For more information, see [Troubleshoot the Kubernetes control plane](/troubleshoot/azure/azure-kubernetes/troubleshoot-apiserver-etcd). ++LIST requests can be expensive. When working with lists that might have more than a few thousand small objects or more than a few hundred large objects, you should consider the following guidelines: ++* **Consider the number of objects (CRs) you expect to eventually exist** when defining a new resource type (CRD). +* **The load on etcd and API server primarily relies on the number of objects that exist, not the number of objects that are returned**. Even if you use a field selector to filter the list and retrieve only a small number of results, these guidelines still apply. The only exception is retrieval of a single object by `metadata.name`. +* **Avoid repeated LIST calls if possible** when your code needs to maintain an updated list of objects in memory. Instead, consider using the Informer classes provided in most Kubernetes libraries. Informers automatically combine LIST and WATCH functionalities to efficiently maintain an in-memory collection. +* **Consider whether you need strong consistency** if Informers don't meet your needs. Do you need to see the most recent data, up to the exact moment in time you issued the query? If not, set `ResourceVersion=0`. This causes the API server cache to serve your request instead of etcd. +* **If you can't use Informers or the API server cache, read large lists in chunks**. +* **Avoid listing more often than needed**. If you can't use Informers, consider how often your application lists the resources. After you read the last object in a large list, don't immediately re-query the same list. You should wait awhile instead. +* **Consider the number of running instances of your client application**. There's a big difference between having a single controller listing objects vs. having pods on each node doing the same thing. If you plan to have multiple instances of your client application periodically listing large numbers of objects, your solution won't scale to large clusters. 
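To make two of the LIST guidelines above concrete, here is a minimal sketch using the official Kubernetes Python client (the `kubernetes` package). It assumes a reachable cluster with valid kubeconfig credentials; the chunk size of 500 is an illustrative value, not an AKS recommendation.

```python
# Minimal sketch of chunked LIST reads and cache-served LISTs, using the
# official Kubernetes Python client. Assumes `pip install kubernetes`.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

# Option 1: read a large list in chunks (limit/continue) instead of one unbounded LIST.
continue_token = None
total = 0
while True:
    chunk = v1.list_pod_for_all_namespaces(limit=500, _continue=continue_token)
    total += len(chunk.items)
    continue_token = chunk.metadata._continue
    if not continue_token:  # an empty token means the final chunk was returned
        break
print(f"Listed {total} pods in chunks of up to 500")

# Option 2: when strong consistency isn't required, set resourceVersion=0 so the
# API server cache serves the request instead of etcd.
cached = v1.list_pod_for_all_namespaces(resource_version="0")
print(f"Cache-served LIST returned {len(cached.items)} pods")
```

For long-running clients, the Informer pattern mentioned above remains preferable to either option, because it maintains the in-memory collection with a single LIST followed by a WATCH rather than repeated list loops.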
++## Azure API and Platform throttling ++The load on a cloud application can vary over time based on factors such as the number of active users or the types of actions that users perform. If the processing requirements of the system exceed the capacity of the available resources, the system can become overloaded and suffer from poor performance and failures. ++To handle varying load sizes in a cloud application, you can allow the application to use resources up to a specified limit and then throttle them when the limit is reached. On Azure, throttling happens at two levels. Azure Resource Manager (ARM) throttles requests for the subscription and tenant. If the request is under the throttling limits for the subscription and tenant, ARM routes the request to the resource provider. The resource provider then applies throttling limits tailored to its operations. For more information, see [ARM throttling requests](../azure-resource-manager/management/request-limits-and-throttling.md). ++### Manage throttling in AKS ++Azure API limits are usually defined at a subscription-region combination level. For example, all clients within a subscription in a given region share API limits for a given Azure API, such as Virtual Machine Scale Sets PUT APIs. Every AKS cluster has several AKS-owned clients, such as cloud provider or cluster autoscaler, or customer-owned clients, such as Datadog or self-hosted Prometheus, that call Azure APIs. When running multiple AKS clusters in a subscription within a given region, all the AKS-owned and customer-owned clients within the clusters share a common set of API limits. Therefore, the number of clusters you can deploy in a subscription region is a function of the number of clients deployed, their call patterns, and the overall scale and elasticity of the clusters. ++Keeping the above considerations in mind, customers are typically able to deploy between 20-40 small to medium scale clusters per subscription-region. You can maximize your subscription scale using the following best practices: ++Always upgrade your Kubernetes clusters to the latest version. Newer versions contain many improvements that address performance and throttling issues. If you're using an upgraded version of Kubernetes and still see throttling due to the actual load or the number of clients in the subscription, you can try the following options: ++* **Analyze errors using AKS Diagnose and Solve Problems**: You can use [AKS Diagnose and Solve Problems](./aks-diagnostics.md) to analyze errors, identify the root cause, and get resolution recommendations. + * **Increase the Cluster Autoscaler scan interval**: If the diagnostic reports show that [Cluster Autoscaler throttling has been detected](/troubleshoot/azure/azure-kubernetes/429-too-many-requests-errors#analyze-and-identify-errors-by-using-aks-diagnose-and-solve-problems), you can [increase the scan interval](./cluster-autoscaler.md#change-the-cluster-autoscaler-settings) to reduce the number of calls to Virtual Machine Scale Sets from the Cluster Autoscaler. 
 + * **Reconfigure third-party applications to make fewer calls**: If you filter by *user agents* in the ***View request rate and throttle details*** diagnostic and see that [a third-party application, such as a monitoring application, makes a large number of GET requests](/troubleshoot/azure/azure-kubernetes/429-too-many-requests-errors#analyze-and-identify-errors-by-using-aks-diagnose-and-solve-problems), you can change the settings of these applications to reduce the frequency of the GET calls. Make sure the application clients use exponential backoff when calling Azure APIs (a sketch of this pattern appears after the Networking section below). +* **Split your clusters into different subscriptions or regions**: If you have a large number of clusters and node pools that use Virtual Machine Scale Sets, you can split them into different subscriptions or regions within the same subscription. Most Azure API limits are shared at the subscription-region level, so you can move or scale your clusters to different subscriptions or regions to get unblocked on Azure API throttling. This option is especially helpful if you expect your clusters to have high activity. There are no generic guidelines for these limits. If you want specific guidance, you can create a support ticket. ++## Feature limitations ++As you scale your AKS clusters to larger scale points, keep the following feature limitations in mind: ++* AKS supports up to a 1,000 node scale in an AKS cluster by default. While AKS doesn't prevent you from scaling further, doing so might result in degraded performance. If you want to scale beyond 1,000 nodes, you can request a limit increase. For more information, see [Best practices for creating and running AKS clusters at scale][run-aks-at-scale]. +* [Azure Network Policy Manager (Azure npm)][azure-npm] only supports up to 250 nodes. +* You can't use the Stop and Start feature with clusters that have more than 100 nodes. For more information, see [Stop and start an AKS cluster](./start-stop-cluster.md). ++## Networking ++As you scale your AKS clusters to larger scale points, keep the following networking best practices in mind: ++* Use Managed NAT for cluster egress with at least two public IPs on the NAT gateway. For more information, see [Create a managed NAT gateway for your AKS cluster][managed-nat-gateway]. +* Use Azure CNI Overlay to scale up to 200,000 pods and 5,000 nodes per cluster. For more information, see [Configure Azure CNI Overlay networking in AKS][azure-cni-overlay]. +* If your application needs direct pod-to-pod communication across clusters, use Azure CNI with dynamic IP allocation and scale up to 50,000 application pods per cluster with one routable IP per pod. For more information, see [Configure Azure CNI networking for dynamic IP allocation in AKS][azure-cni-dynamic-ip]. +* When using internal Kubernetes services behind an internal load balancer, we recommend creating an internal load balancer or service below a 750 node scale for optimal scaling performance and load balancer elasticity. +* Azure npm only supports up to 250 nodes. If you want to enforce network policies for larger clusters, consider using [Azure CNI powered by Cilium](./azure-cni-powered-by-cilium.md), which combines the robust control plane of Azure CNI with the Cilium data plane to provide high performance networking and security. 
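The throttling guidance above recommends exponential backoff for clients calling Azure APIs. The following is a minimal, illustrative sketch in Python; `ThrottledError` is a hypothetical stand-in for whatever error your SDK raises on HTTP 429 responses, and the Azure SDKs ship built-in retry policies that honor `Retry-After`, which you should prefer when available.

```python
import random
import time

class ThrottledError(Exception):
    """Hypothetical stand-in for an SDK error raised on HTTP 429 responses."""
    def __init__(self, retry_after=None):
        super().__init__("429 Too Many Requests")
        self.retry_after = retry_after  # seconds, taken from the Retry-After header

def call_with_backoff(operation, max_attempts=5, base_delay=1.0, max_delay=60.0):
    """Retry `operation` on throttling, doubling the delay on each attempt."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ThrottledError as err:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Honor the service-provided Retry-After value when present; otherwise
            # back off exponentially and add jitter so that many clients don't
            # retry in lockstep against the same subscription-region limits.
            delay = err.retry_after or min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, 1))
```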
++## Node pool scaling ++As you scale your AKS clusters to larger scale points, keep the following node pool scaling best practices in mind: ++* For system node pools, use the *Standard_D16ds_v5* SKU or an equivalent core/memory VM SKU with ephemeral OS disks to provide sufficient compute resources for kube-system pods. +* Since AKS has a limit of 1,000 nodes per node pool, we recommend creating at least five user node pools to scale up to 5,000 nodes. +* When running at-scale AKS clusters, use the cluster autoscaler whenever possible to ensure dynamic scaling of node pools based on the demand for compute resources. For more information, see [Automatically scale an AKS cluster to meet application demands][cluster-autoscaler]. +* If you're scaling beyond 1,000 nodes and are *not* using the cluster autoscaler, we recommend scaling in batches of 500-700 nodes at a time. The scaling operations should have a two-minute to five-minute wait time between scale up operations to prevent Azure API throttling. For more information, see [API management: Caching and throttling policies][throttling-policies]. ++> [!NOTE] +> You can't use [Azure Network Policy Manager (Azure NPM)][azure-npm] with clusters that have more than 500 nodes. ++<!-- LINKS - Internal > +[run-aks-at-scale]: ./operator-best-practices-run-at-scale.md +[managed-nat-gateway]: ./nat-gateway.md +[azure-cni-dynamic-ip]: ./configure-azure-cni-dynamic-ip-allocation.md +[azure-cni-overlay]: ./azure-cni-overlay.md +[free-standard-tier]: ./free-standard-pricing-tiers.md +[cluster-autoscaler]: cluster-autoscaler.md +[azure-npm]: ../virtual-network/kubernetes-network-policies.md ++<!-- LINKS - External --> +[throttling-policies]: https://azure.microsoft.com/blog/api-management-advanced-caching-and-throttling-policies/ |
aks | Best Practices Performance Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-performance-scale.md | + + Title: Performance and scaling best practices for small to medium workloads in Azure Kubernetes Service (AKS) ++description: Learn the best practices for performance and scaling for small to medium workloads in Azure Kubernetes Service (AKS). + Last updated : 11/03/2023+++# Best practices for performance and scaling for small to medium workloads in Azure Kubernetes Service (AKS) ++> [!NOTE] +> This article focuses on general best practices for **small to medium workloads**. For best practices specific to **large workloads**, see [Performance and scaling best practices for large workloads in Azure Kubernetes Service (AKS)](./best-practices-performance-scale-large.md). ++As you deploy and maintain clusters in AKS, you can use the following best practices to help you optimize performance and scaling. ++In this article, you learn about: ++> [!div class="checklist"] +> +> * Tradeoffs and recommendations for autoscaling your workloads. +> * Managing node scaling and efficiency based on your workload demands. +> * Networking considerations for ingress and egress traffic. +> * Monitoring and troubleshooting control plane and node performance. +> * Capacity planning, surge scenarios, and cluster upgrades. +> * Storage and networking considerations for data plane performance. ++## Application autoscaling vs. infrastructure autoscaling ++### Application autoscaling ++Application autoscaling is useful when dealing with cost optimization or infrastructure limitations. A well-configured autoscaler maintains high availability for your application while also minimizing costs. You only pay for the resources required to maintain availability, regardless of the demand. ++For example, if an existing node has space but not enough IPs in the subnet, it might be able to skip the creation of a new node and instead immediately start running the application on a new pod. ++#### Horizontal Pod autoscaling ++Implementing [horizontal pod autoscaling](./concepts-scale.md#horizontal-pod-autoscaler) is useful for applications with a steady and predictable resource demand. The Horizontal Pod Autoscaler (HPA) dynamically scales the number of pod replicas, which effectively distributes the load across multiple pods and nodes. This scaling mechanism is typically most beneficial for applications that can be decomposed into smaller, independent components capable of running in parallel. ++The HPA provides resource utilization metrics by default. You can also integrate custom metrics or leverage tools like the [Kubernetes Event-Driven Autoscaler (KEDA) (Preview)](./keda-about.md). These extensions allow the HPA to make scaling decisions based on multiple perspectives and criteria, providing a more holistic view of your application's performance. This is especially helpful for applications with varying complex scaling requirements. ++> [!NOTE] +> If maintaining high availability for your application is a top priority, we recommend leaving a slightly higher buffer for the minimum pod number for your HPA to account for scaling time. ++#### Vertical Pod autoscaling ++Implementing [vertical pod autoscaling](./vertical-pod-autoscaler.md) is useful for applications with fluctuating and unpredictable resource demands. 
The Vertical Pod Autoscaler (VPA) allows you to fine-tune resource requests, including CPU and memory, for individual pods, enabling precise control over resource allocation. This granularity minimizes resource waste and enhances the overall efficiency of cluster utilization. The VPA also streamlines application management by automating resource allocation, freeing up resources for critical tasks. ++> [!WARNING] +> You shouldn't use the VPA in conjunction with the HPA on the same CPU or memory metrics. This combination can lead to conflicts, as both autoscalers attempt to respond to changes in demand using the same metrics. However, you can use the VPA for CPU or memory in conjunction with the HPA for custom metrics to prevent overlap and ensure that each autoscaler focuses on distinct aspects of workload scaling. ++> [!NOTE] +> The VPA works based on historical data. We recommend waiting at least *24 hours* after deploying the VPA before applying any changes to give it time to collect recommendation data. ++### Infrastructure autoscaling ++#### Cluster autoscaling ++Implementing cluster autoscaling is useful if your existing nodes lack sufficient capacity, as it helps with scaling up and provisioning new nodes. ++When considering cluster autoscaling, the decision of when to remove a node involves a tradeoff between optimizing resource utilization and ensuring resource availability. Eliminating underutilized nodes enhances cluster utilization but might result in new workloads having to wait for resources to be provisioned before they can be deployed. It's important to find a balance between these two factors that aligns with your cluster and workload requirements and [configure the cluster autoscaler profile settings accordingly](./cluster-autoscaler.md#change-the-cluster-autoscaler-settings). ++The Cluster Autoscaler profile settings apply universally to all autoscaler-enabled node pools in your cluster. This means that any scaling actions occurring in one autoscaler-enabled node pool might impact the autoscaling behavior in another node pool. It's important to apply consistent and synchronized profile settings across all relevant node pools to ensure that the autoscaler behaves as expected. ++##### Overprovisioning ++Overprovisioning is a strategy that helps mitigate the risk of application pressure by ensuring there's an excess of readily available resources. This approach is especially useful for applications that experience highly variable loads and cluster scaling patterns that show frequent scale ups and scale downs. ++To determine the optimal amount of overprovisioning, you can use the following formula: ++```txt +(1 - buffer) / (1 + traffic) +``` ++For example, let's say you want to avoid hitting 100% CPU utilization in your cluster. You might opt for a 30% buffer to maintain a safety margin. If you anticipate an average traffic growth rate of 40%, you might consider overprovisioning by 50%, as calculated by the formula: ++```txt +(1 - 30%) / (1 + 40%) = 50% +``` ++An effective overprovisioning method involves the use of *pause pods*. Pause pods are low-priority deployments that can be easily replaced by high-priority deployments. You create low priority pods that serve the sole purpose of reserving buffer space. When a high-priority pod requires space, the pause pods are removed and rescheduled on another node or a new node to accommodate the high priority pod. 
++The following YAML shows an example pause pod manifest: ++```yml +apiVersion: scheduling.k8s.io/v1 +kind: PriorityClass +metadata: + name: overprovisioning +value: -1 +globalDefault: false +description: "Priority class used by overprovisioning." +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: overprovisioning + namespace: kube-system +spec: + replicas: 1 + selector: + matchLabels: + run: overprovisioning + template: + metadata: + labels: + run: overprovisioning + spec: + priorityClassName: overprovisioning + containers: + - name: reserve-resources + image: your-custom-pause-image + resources: + requests: + cpu: 1 + memory: 4Gi +``` ++## Node scaling and efficiency ++> **Best practice guidance**: +> +> Carefully monitor resource utilization and scheduling policies to ensure nodes are being used efficiently. ++Node scaling allows you to dynamically adjust the number of nodes in your cluster based on workload demands. It's important to understand that adding more nodes to a cluster isn't always the best solution for improving performance. To ensure optimal performance, you should carefully monitor resource utilization and scheduling policies to ensure nodes are being used efficiently. ++### Node images ++> **Best practice guidance**: +> +> Use the latest node image version to ensure that you have the latest security patches and bug fixes. ++Using the latest node image version provides the best performance experience. AKS ships performance improvements within the weekly image releases. The latest daemonset images are cached on the latest VHD image, which provides lower latency benefits for node provisioning and bootstrapping. Falling behind on updates might have a negative impact on performance, so it's important to avoid large gaps between versions. ++#### Azure Linux ++The [Azure Linux Container Host on AKS](../azure-linux/intro-azure-linux.md) uses a native AKS image and provides a single place for Linux development. Every package is built from source and validated, ensuring your services run on proven components. ++Azure Linux is lightweight, only including the necessary set of packages to run container workloads. It provides a reduced attack surface and eliminates patching and maintenance of unnecessary packages. At its base layer, it has a Microsoft-hardened kernel tuned for Azure. This image is ideal for performance-sensitive workloads and platform engineers or operators that manage fleets of AKS clusters. ++#### Ubuntu 2204 ++The [Ubuntu 2204 image](https://github.com/Azure/AKS/blob/master/CHANGELOG.md) is the default node image for AKS. It's a lightweight and efficient operating system optimized for running containerized workloads. This means that it can help reduce resource usage and improve overall performance. The image includes the latest security patches and updates, which help ensure that your workloads are protected from vulnerabilities. ++The Ubuntu 2204 image is fully supported by Microsoft, Canonical, and the Ubuntu community and can help you achieve better performance and security for your containerized workloads. ++### Virtual machines (VMs) ++> **Best practice guidance**: +> +> When selecting a VM, ensure the size and performance of the OS disk and VM SKU don't have a large discrepancy. A discrepancy in size or performance can cause performance issues and resource contention. ++Application performance is closely tied to the VM SKUs you use in your workloads. Larger and more powerful VMs generally provide better performance. 
For *mission-critical or production workloads*, we recommend using VMs with at least an 8-core CPU. VMs with newer hardware generations, like v4 and v5, can also help improve performance. Keep in mind that create and scale latency might vary depending on the VM SKUs you use. ++### Use dedicated system node pools ++For scaling performance and reliability, we recommend using a dedicated system node pool. With this configuration, the dedicated system node pool reserves space for critical system resources such as system OS daemons. Your application workload can then run in a user node pool to increase the availability of allocatable resources for your application. This configuration also helps mitigate the risk of resource competition between the system and application. ++### Create operations ++Review the extensions and add-ons you have enabled during create provisioning. Extensions and add-ons can add latency to the overall duration of create operations. If you don't need an extension or add-on, we recommend removing it to improve create latency. ++You can also use availability zones to provide a higher level of availability to protect against potential hardware failures or planned maintenance events. AKS clusters distribute resources across logical sections of underlying Azure infrastructure. Availability zones physically separate nodes from other nodes to help ensure that a single failure doesn't impact the availability of your application. Availability zones are only available in certain regions. For more information, see [Availability zones in Azure](../reliability/availability-zones-overview.md). ++## Kubernetes API server ++### LIST and WATCH operations ++Kubernetes uses the LIST and WATCH operations to interact with the Kubernetes API server and monitor information about cluster resources. These operations are fundamental to how Kubernetes performs resource management. ++**The LIST operation retrieves a list of resources that fit within certain criteria**, such as all pods in a specific namespace or all services in the cluster. This operation is useful when you want to get an overview of your cluster resources or you need to operate on multiple resources at once. ++The LIST operation can retrieve large amounts of data, especially in large clusters with multiple resources. Be mindful of the fact that making unbounded or frequent LIST calls puts a significant load on the API server and can slow down response times. ++**The WATCH operation performs real-time resource monitoring**. When you set up a WATCH on a resource, the API server sends you updates whenever there are changes to that resource. This is important for controllers, like the ReplicaSet controller, which rely on WATCH to maintain the desired state of resources. ++Be mindful of the fact that watching too many mutable resources or making too many concurrent WATCH requests can overwhelm the API server and cause excessive resource consumption. ++To avoid potential issues and ensure the stability of the Kubernetes control plane, you can use the following strategies: ++**Resource quotas** ++Implement resource quotas to limit the number of resources that can be listed or watched by a particular user or namespace to prevent excessive calls. ++**API Priority and Fairness** ++Kubernetes introduced the concept of API Priority and Fairness (APF) to prioritize and manage API requests. You can use APF in Kubernetes to protect the cluster's API server and reduce the number of `HTTP 429 Too Many Requests` responses seen by client applications.
++| Custom resource | Key features | +| --- | --- | +| PriorityLevelConfigurations | • Define different priority levels for API requests.<br/> • Specifies a unique name and assigns an integer value representing the priority level. Higher priority levels have lower integer values, indicating they're more critical.<br/> • Can use multiple to categorize requests into different priority levels based on their importance.<br/> • Allow you to specify whether requests at a particular priority level should be subject to rate limits. | +| FlowSchemas | • Define how API requests should be routed to different priority levels based on request attributes.<br/> • Specify rules that match requests based on criteria like API groups, versions, and resources.<br/> • When a request matches a given rule, the request is directed to the priority level specified in the associated PriorityLevelConfiguration.<br/> • Can use to set the order of evaluation when multiple FlowSchemas match a request to ensure that certain rules take precedence. | ++Configuring APF with PriorityLevelConfigurations and FlowSchemas enables the prioritization of critical API requests over less important requests. This ensures that essential operations don't starve or experience delays because of lower-priority requests. ++**Optimize labeling and selectors** ++When using LIST operations, optimize label selectors to narrow down the scope of the resources you want to query to reduce the amount of data returned and the load on the API server. ++### CREATE and UPDATE operations ++In Kubernetes, CREATE and UPDATE operations refer to actions that manage and modify cluster resources. ++**The CREATE operation creates new resources in the Kubernetes cluster**, such as pods, services, deployments, configmaps, and secrets. During a CREATE operation, a client, such as `kubectl` or a controller, sends a request to the Kubernetes API server to create the new resource. The API server validates the request, ensures compliance with any admission controller policies, and then creates the resource in the cluster's desired state. ++**The UPDATE operation modifies existing resources in the Kubernetes cluster**, including changes to resource specifications, like the number of replicas, container images, environment variables, or labels. During an UPDATE operation, a client sends a request to the API server to update an existing resource. The API server validates the request, applies the changes to the resource definition, and updates the cluster resource. ++CREATE and UPDATE operations can impact the performance of the Kubernetes API server under the following conditions: ++* **High concurrency**: When multiple users or applications make concurrent CREATE or UPDATE requests, it can lead to a surge in API requests arriving at the server at the same time. This can stress the API server's processing capacity and cause performance issues. +* **Complex resource definitions**: Resource definitions that are overly complex or involve multiple nested objects can increase the time it takes for the API server to validate and process CREATE and UPDATE requests, which can lead to performance degradation. +* **Resource validation and admission control**: Kubernetes enforces various admission control policies and validation checks on incoming CREATE and UPDATE requests. Large resource definitions, like ones with extensive annotations or configurations, might require more processing time.
+* **Custom controllers**: Custom controllers that watch for changes in resources, like Deployment or StatefulSet controllers, can generate a significant number of updates when scaling or rolling out changes. These updates can strain the API server's resources. ++For more information, see [Troubleshoot API server and etcd problems in AKS](/troubleshoot/azure/azure-kubernetes/troubleshoot-apiserver-etcd). ++## Data plane performance ++The Kubernetes data plane is responsible for managing network traffic between containers and services. Issues with the data plane can lead to slow response times, degraded performance, and application downtime. It's important to carefully monitor and optimize data plane configurations, such as network latency, resource allocation, container density, and network policies, to ensure your containerized applications run smoothly and efficiently. ++### Storage types ++AKS recommends and defaults to using ephemeral OS disks. Ephemeral OS disks are created on local VM storage and aren't saved to remote Azure storage like managed OS disks. They have faster reimaging and boot times, enabling faster cluster operations, and they provide lower read/write latency on the OS disk of AKS agent nodes. Ephemeral OS disks work well for stateless workloads, where applications are tolerant of individual VM failures but not of VM deployment time or individual VM reimaging instances. Only certain VM SKUs support ephemeral OS disks, so you need to ensure that your desired SKU generation and size are compatible. For more information, see [Ephemeral OS disks in Azure Kubernetes Service (AKS)](./cluster-configuration.md#use-ephemeral-os-on-new-clusters). ++If your workload is unable to use ephemeral OS disks, AKS defaults to using Premium SSD OS disks. If Premium SSD OS disks aren't compatible with your workload, AKS defaults to Standard SSD disks. Currently, the only other available OS disk type is Standard HDD. For more information, see [Storage options in Azure Kubernetes Service (AKS)](./concepts-storage.md). ++The following table provides a breakdown of suggested use cases for OS disks supported in AKS: ++| OS disk type | Key features | Suggested use cases | +| --- | --- | --- | +| Ephemeral OS disks | • Faster reimaging and boot times.<br/> • Lower read/write latency on OS disk of AKS agent nodes.<br/> • High performance and availability. | • Demanding enterprise workloads, such as SQL Server, Oracle, Dynamics, Exchange Server, MySQL, Cassandra, MongoDB, SAP Business Suite, etc.<br/> • Stateless production workloads that require high availability and low latency. | +| Premium SSD OS disks | • Consistent performance and low latency.<br/> • High availability. | • Demanding enterprise workloads, such as SQL Server, Oracle, Dynamics, Exchange Server, MySQL, Cassandra, MongoDB, SAP Business Suite, etc.<br/> • Input/output (IO) intensive enterprise workloads. | +| Standard SSD OS disks | • Consistent performance.<br/> • Better availability and latency compared to Standard HDD disks. | • Web servers.<br/> • Low input/output operations per second (IOPS) application servers.<br/> • Lightly used enterprise applications.<br/> • Dev/test workloads. | +| Standard HDD disks | • Low cost.<br/> • Exhibits variability in performance and latency. | • Backup storage.<br/> • Mass storage with infrequent access. | ++#### IOPS and throughput ++Input/output operations per second (IOPS) refers to the number of read and write operations that a disk can perform in a second.
Throughput refers to the amount of data that can be transferred in a given time period. ++OS disks are responsible for storing the operating system and its associated files, and the VMs are responsible for running the applications. When selecting a VM, ensure the size and performance of the OS disk and VM SKU don't have a large discrepancy. A discrepancy in size or performance can cause performance issues and resource contention. For example, if the OS disk is significantly undersized for the VM, it can limit the amount of space available for application data and cause the system to run out of disk space. If the OS disk has lower performance than the VM, it can become a bottleneck and limit the overall performance of the system. Make sure the size and performance are balanced to ensure optimal performance in Kubernetes. ++You can use the following steps to monitor IOPS and bandwidth meters on OS disks in the Azure portal: ++1. Navigate to the [Azure portal](https://portal.azure.com/). +2. Search for **Virtual machine scale sets** and select your virtual machine scale set. +3. Under **Monitoring**, select **Metrics**. ++Ephemeral OS disks can provide dynamic IOPS and throughput for your application, whereas managed disks have capped IOPS and throughput. For more information, see [Ephemeral OS disks for Azure VMs](../virtual-machines/ephemeral-os-disks.md). ++[Azure Premium SSD v2](../virtual-machines/disks-types.md#premium-ssd-v2) is designed for IO-intensive enterprise workloads that require sub-millisecond disk latencies and high IOPS and throughput at a low cost. It's suited for a broad range of workloads, such as SQL Server, Oracle, MariaDB, SAP, Cassandra, MongoDB, big data/analytics, gaming, and more. This disk type is the highest performing option currently available for persistent volumes. ++### Pod scheduling ++The memory and CPU resources allocated to a VM have a direct impact on the performance of the pods running on the VM. When a pod is created, it's assigned a certain amount of memory and CPU resources, which are used to run the application. If the VM doesn't have enough memory or CPU resources available, it can cause the pods to slow down or even crash. If the VM has too much memory or CPU resources available, it can cause the pods to run inefficiently, wasting resources and increasing costs. We recommend monitoring the total pod requests across your workloads against the total allocatable resources for best scheduling predictability and performance. You can also set the maximum pods per node based on your capacity planning using `--max-pods`. |
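To ground the pod scheduling guidance in the row above, here's a minimal sketch of a pod spec with explicit CPU and memory requests and limits; the pod name, image, and values are illustrative assumptions, not taken from the article, and should come from your own capacity planning.

```yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-app   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx   # any sample image
    resources:
      requests:   # what the scheduler reserves against a node's allocatable capacity
        cpu: 250m
        memory: 256Mi
      limits:   # hard caps enforced at runtime
        cpu: 500m
        memory: 512Mi
```

You can compare the sum of such requests against each node's allocatable resources with `kubectl describe node <node-name>`.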
aks | Create Node Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-node-pools.md | Title: Create node pools in Azure Kubernetes Service (AKS) description: Learn how to create multiple node pools for a cluster in Azure Kubernetes Service (AKS). Previously updated : 07/18/2023 Last updated : 11/06/2023 # Create node pools for a cluster in Azure Kubernetes Service (AKS) In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped t To support applications that have different compute or storage demands, you can create *user node pools*. System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and `konnectivity`. User node pools serve the primary purpose of hosting your application pods. For example, use more user node pools to provide GPUs for compute-intensive applications, or access to high-performance SSD storage. However, if you wish to have only one pool in your AKS cluster, you can schedule application pods on system node pools. > [!NOTE]-> This feature enables more control over creating and managing multiple node pools and requires separate commands for create/update/delete operations. Previously, cluster operations through `az aks create` or `az aks update` used the managedCluster API and were the only options to change your control plane and a single node pool. This feature exposes a separate operation set for agent pools through the agentPool API and requires use of the `az aks nodepool` command set to execute operations on an individual node pool. +> This feature enables more control over creating and managing multiple node pools and requires separate commands for *create/update/delete* (CRUD) operations. Previously, cluster operations through [`az aks create`][az-aks-create] or [`az aks update`][az-aks-update] used the managedCluster API and were the only options to change your control plane and a single node pool. This feature exposes a separate operation set for agent pools through the agentPool API and requires use of the [`az aks nodepool`][az-aks-nodepool] command set to execute operations on an individual node pool. This article shows you how to create one or more node pools in an AKS cluster. The following limitations apply when you create AKS clusters that support multip * See [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)](quotas-skus-regions.md). * You can delete system node pools if you have another system node pool to take its place in the AKS cluster. * System pools must contain at least one node, and user node pools may contain zero or more nodes.-* The AKS cluster must use the Standard SKU load balancer to use multiple node pools. The feature isn't supported with Basic SKU load balancers. +* The AKS cluster must use the Standard SKU load balancer to use multiple node pools. This feature isn't supported with Basic SKU load balancers. * The AKS cluster must use Virtual Machine Scale Sets for the nodes. * The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. * For Linux node pools, the length must be between 1-11 characters. A workload may require splitting cluster nodes into separate pools for logical i * All subnets assigned to node pools must belong to the same virtual network. 
* System pods must have access to all nodes and pods in the cluster to provide critical functionality, such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy.-* If you expand your VNET after creating the cluster, you must update your cluster before adding a subnet outside the original CIDR block. While AKS errors-out on the agent pool add, the `aks-preview` Azure CLI extension (version 0.5.66+) now supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments. This command performs an update operation without making any changes, which can recover a cluster stuck in a failed state. -* In clusters with Kubernetes version < 1.23.3, kube-proxy will SNAT traffic from new subnets, which can cause Azure Network Policy to drop the packets. +* If you expand your VNET after creating the cluster, you must update your cluster before adding a subnet outside the original CIDR block. While AKS errors-out on the agent pool add, the `aks-preview` Azure CLI extension (version 0.5.66 and higher) now supports running the [`az aks update`][az-aks-update] command with only the required `-g <resourceGroup> -n <clusterName>` arguments. This command performs an update operation without making any changes, which can recover a cluster stuck in a failed state. +* In clusters with Kubernetes version less than 1.23.3, kube-proxy SNATs traffic from new subnets, which can cause Azure Network Policy to drop the packets. * Windows nodes SNAT traffic to the new subnets until the node pool is reimaged. * Internal load balancers default to one of the node pool subnets. Beginning in Kubernetes version 1.20 and higher, you can specify `containerd` as > When using `containerd` with Windows Server 2019 node pools: > > * Both the control plane and Windows Server 2019 node pools must use Kubernetes version 1.20 or greater.-> * When you create or update a node pool to run Windows Server containers, the default value for `--node-vm-size` is *Standard_D2s_v3*, which was minimum recommended size for Windows Server 2019 node pools prior to Kubernetes 1.20. The minimum recommended size for Windows Server 2019 node pools using `containerd` is *Standard_D4s_v3*. When setting the `--node-vm-size` parameter, please check the list of [restricted VM sizes][restricted-vm-sizes]. -> * We highly recommended using [taints or labels][aks-taints] with your Windows Server 2019 node pools running `containerd` and tolerations or node selectors with your deployments to guarantee your workloads are scheduled correctly. +> * When you create or update a node pool to run Windows Server containers, the default value for `--node-vm-size` is *Standard_D2s_v3*, which was the minimum recommended size for Windows Server 2019 node pools prior to Kubernetes version 1.20. The minimum recommended size for Windows Server 2019 node pools using `containerd` is *Standard_D4s_v3*. When setting the `--node-vm-size` parameter, check the list of [restricted VM sizes][restricted-vm-sizes]. +> * We recommend using [taints or labels][aks-taints] with your Windows Server 2019 node pools running `containerd` and tolerations or node selectors with your deployments to guarantee your workloads are scheduled correctly.
### Add a Windows Server node pool with `containerd` In this article, you learned how to create multiple node pools in an AKS cluster [arm-sku-vm3]: ../virtual-machines/epsv5-epdsv5-series.md [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [az-aks-create]: /cli/azure/aks#az_aks_create+[az-aks-update]: /cli/azure/aks#az_aks_update [az-aks-delete]: /cli/azure/aks#az_aks_delete+[az-aks-nodepool]: /cli/azure/aks/nodepool [az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add [az-aks-nodepool-list]: /cli/azure/aks/nodepool#az_aks_nodepool_list [az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool#az_aks_nodepool_upgrade |
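As a concrete illustration of the `agentPool` operation set described in the row above, the following sketch adds a user node pool to an existing cluster. The resource names and VM size are placeholder assumptions.

```azurecli-interactive
# Hypothetical names and size; adjust to your environment.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name userpool1 \
    --mode User \
    --node-count 3 \
    --node-vm-size Standard_DS3_v2
```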
aks | Egress Outboundtype | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-outboundtype.md | Last updated 06/06/2023 You can customize egress for an AKS cluster to fit specific scenarios. By default, AKS provisions a standard SKU load balancer to be set up and used for egress. However, the default setup may not meet the requirements of all scenarios if public IPs are disallowed or additional hops are required for egress. This article covers the various types of outbound connectivity that are available in AKS clusters.-how + > [!NOTE] > You can now update the `outboundType` after cluster creation. This feature is in preview. See [Updating `outboundType` after cluster creation (preview)](#updating-outboundtype-after-cluster-creation-preview). |
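A hedged sketch of the preview update mentioned in the note above; the resource names are placeholders, `loadBalancer` is only one of the supported values, and the preview may require the `aks-preview` Azure CLI extension.

```azurecli-interactive
# Preview capability; check the linked section for prerequisites before running.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --outbound-type loadBalancer
```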
aks | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md | Title: Frequently asked questions for Azure Kubernetes Service (AKS) description: Find answers to some of the common questions about Azure Kubernetes Service (AKS). Previously updated : 07/20/2022 Last updated : 11/06/2023 Moving or renaming your AKS cluster and its associated resources isn't supported Most clusters are deleted upon user request. In some cases, especially cases where you bring your own Resource Group or perform cross-RG tasks, deletion can take more time or even fail. If you have an issue with deletes, double-check that you don't have locks on the RG, that any resources outside of the RG are disassociated from the RG, and so on. ## Why is my cluster create/update taking so long?+ If you have issues with create and update cluster operations, make sure you don't have any assigned policies or service constraints that may block your AKS cluster from managing resources like VMs, load balancers, tags, etc. ## Can I restore my cluster after deleting it? -No, you're unable to restore your cluster after deleting it. When you delete your cluster, the associated resource group and all its resources are deleted. If you want to keep any of your resources, move them to another resource group before deleting your cluster. If you have the **Owner** or **User Access Administrator** built-in role, you can lock Azure resources to protect them from accidental deletions and modifications. For more information, see [Lock your resources to protect your infrastructure][lock-azure-resources]. +No, you cannot restore your cluster after deleting it. When you delete your cluster, the node resource group and all its resources are also deleted. An example of the node resource group name is *MC_myResourceGroup_myAKSCluster_eastus*. ++If you want to keep any of your resources, move them to another resource group before deleting your cluster. If you want to protect against accidental deletes, you can lock the AKS managed resource group hosting your cluster resources using [Node resource group lockdown][node-resource-group-lockdown]. ## What is platform support, and what does it include? The AKS Linux Extension is an Azure VM extension that installs and configures mo - [Node-exporter](https://github.com/prometheus/node_exporter): Collects hardware telemetry from the virtual machine and makes it available using a metrics endpoint. Then, a monitoring tool, such as Prometheus, is able to scrape these metrics. - [Node-problem-detector](https://github.com/kubernetes/node-problem-detector): Aims to make various node problems visible to upstream layers in the cluster management stack. It's a systemd unit that runs on each node, detects node problems, and reports them to the cluster's API server using Events and NodeConditions.-- [Local-gadget](https://inspektor-gadget.io/docs/v0.18.1): Uses in-kernel eBPF helper programs to monitor events related to syscalls from userspace programs in a pod.+- [Local-gadget](https://inspektor-gadget.io/docs/): Uses in-kernel eBPF helper programs to monitor events related to syscalls from userspace programs in a pod.
These tools help provide observability around many node health-related problems, such as: The extension **doesn't require additional outbound access** to any URLs, IP add [az-regions]: ../availability-zones/az-region.md [pricing-tiers]: ./free-standard-pricing-tiers.md [aks-keyvault-provider]: ./csi-secrets-store-driver.md+[node-resource-group-lockdown]: cluster-configuration.md#create-an-aks-cluster-with-node-resource-group-lockdown <!-- LINKS - external --> [aks-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service [cordon-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/ [admission-controllers]: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/-[lock-azure-resources]: ../azure-resource-manager/management/lock-resources.md + |
aks | Gpu Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md | Last updated 04/10/2023 # Use GPUs for compute-intensive workloads on Azure Kubernetes Service (AKS) -Graphical processing units (GPUs) are often used for compute-intensive workloads, such as graphics and visualization workloads. AKS supports GPU-enabled Linux node pools to run compute-intensive Kubernetes workloads. For more information on available GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-skus]. For AKS node pools, we recommend a minimum size of *Standard_NC6*. The NVv4 series (based on AMD GPUs) aren't supported with AKS. +Graphics processing units (GPUs) are often used for compute-intensive workloads, such as graphics and visualization workloads. AKS supports GPU-enabled Linux node pools to run compute-intensive Kubernetes workloads. For more information on available GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-skus]. For AKS node pools, we recommend a minimum size of *Standard_NC6s_v3*. The NVv4 series (based on AMD GPUs) isn't supported with AKS. This article helps you provision nodes with schedulable GPUs on new and existing AKS clusters. Now that you updated your cluster to use the AKS GPU image, you can add a node p --cluster-name myAKSCluster \ --name gpunp \ --node-count 1 \- --node-vm-size Standard_NC6 \ + --node-vm-size Standard_NC6s_v3 \ --node-taints sku=gpu:NoSchedule \ --aks-custom-headers UseGPUDedicatedVHD=true \ --enable-cluster-autoscaler \ Now that you updated your cluster to use the AKS GPU image, you can add a node p The previous example command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings: - * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6*. + * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6s_v3*. * `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool. * `--aks-custom-headers`: Specifies a specialized AKS GPU image, *UseGPUDedicatedVHD=true*. If your GPU SKU requires generation 2 VMs, use *--aks-custom-headers UseGPUDedicatedVHD=true,usegen2vm=true* instead. * `--enable-cluster-autoscaler`: Enables the cluster autoscaler. You can deploy a DaemonSet for the NVIDIA device plugin, which runs a pod on eac --cluster-name myAKSCluster \ --name gpunp \ --node-count 1 \- --node-vm-size Standard_NC6 \ + --node-vm-size Standard_NC6s_v3 \ --node-taints sku=gpu:NoSchedule \ --enable-cluster-autoscaler \ --min-count 1 \ You can deploy a DaemonSet for the NVIDIA device plugin, which runs a pod on eac The previous example command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings: - * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6*. + * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6s_v3*. * `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool. * `--enable-cluster-autoscaler`: Enables the cluster autoscaler. * `--min-count`: Configures the cluster autoscaler to maintain a minimum of one node in the node pool. |
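Because the GPU node pools above carry a *sku=gpu:NoSchedule* taint, workloads need a matching toleration and a GPU resource request to be scheduled there. A minimal sketch; the pod name and image are illustrative assumptions:

```yml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload   # hypothetical name
spec:
  tolerations:
  - key: "sku"   # matches the sku=gpu:NoSchedule taint set on the node pool
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: cuda-app
    image: <your-gpu-image>   # replace with your CUDA-enabled image
    resources:
      limits:
        nvidia.com/gpu: 1   # served by the NVIDIA device plugin DaemonSet
```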
aks | Manage Node Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-node-pools.md | For more information, see [capacity reservation groups][capacity-reservation-gro You may need to create node pools with different VM sizes and capabilities. For example, you may create a node pool that contains nodes with large amounts of CPU or memory or a node pool that provides GPU support. In the next section, you [use taints and tolerations](#set-node-pool-taints) to tell the Kubernetes scheduler how to limit access to pods that can run on these nodes. -In the following example, we create a GPU-based node pool that uses the *Standard_NC6* VM size. These VMs are powered by the NVIDIA Tesla K80 card. For information, see [Available sizes for Linux virtual machines in Azure][vm-sizes]. +In the following example, we create a GPU-based node pool that uses the *Standard_NC6s_v3* VM size. These VMs are powered by the NVIDIA Tesla V100 card. For information, see [Available sizes for Linux virtual machines in Azure][vm-sizes]. 1. Create a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command. Specify the name *gpunodepool* and use the `--node-vm-size` parameter to specify the *Standard_NC6s_v3* size. In the following example, we create a GPU-based node pool that uses the *Standar --cluster-name myAKSCluster \ --name gpunodepool \ --node-count 1 \- --node-vm-size Standard_NC6 \ + --node-vm-size Standard_NC6s_v3 \ --no-wait ``` In the following example, we create a GPU-based node pool that uses the *Standar ... "provisioningState": "Creating", ...- "vmSize": "Standard_NC6", + "vmSize": "Standard_NC6s_v3", ... }, { |
aks | Manage Ssh Node Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-ssh-node-access.md | Last updated 11/01/2023 # Manage SSH for secure access to Azure Kubernetes Service (AKS) nodes -This article describes how to update the SSH key on your AKS clusters or node pools. +This article describes how to update the SSH key (preview) on your AKS clusters or node pools. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] This article describes how to update the SSH key on your AKS clusters or node po * You need the Azure CLI version 2.46.0 or later installed and configured. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. * This feature supports Linux, Mariner, and CBLMariner node pools on existing clusters. -## Update SSH public key on an existing AKS cluster +## Update SSH public key (preview) on an existing AKS cluster Use the [az aks update][az-aks-update] command to update the SSH public key on your cluster. This operation updates the key on all node pools. You can either specify the key or a key file using the `--ssh-key-value` argument. |
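For illustration, a sketch of the preview update command the row above describes; the cluster names and key file path are assumptions.

```azurecli-interactive
# Updates the SSH public key on all node pools in the cluster (preview).
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --ssh-key-value ~/.ssh/id_rsa.pub
```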
aks | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md | Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
aks | Tutorial Kubernetes Deploy Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-application.md | Title: Kubernetes on Azure tutorial - Deploy an application to Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you deploy a multi-container application to your cluster using images stored in Azure Container Registry. Previously updated : 10/23/2023 Last updated : 11/02/2023 #Customer intent: As a developer, I want to learn how to deploy apps to an Azure Kubernetes Service (AKS) cluster so that I can deploy and run my own applications. In this tutorial, you deployed a sample Azure application to a Kubernetes cluste In the next tutorial, you learn how to use PaaS services for stateful workloads in Kubernetes. > [!div class="nextstepaction"]-> Use PaaS services for stateful workloads in AKS +> [Use PaaS services for stateful workloads in AKS][aks-tutorial-paas] <!-- LINKS - external --> [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply |
aks | Tutorial Kubernetes Prepare Acr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-acr.md | Title: Kubernetes on Azure tutorial - Create an Azure Container Registry and build images description: In this Azure Kubernetes Service (AKS) tutorial, you create an Azure Container Registry instance and upload sample application container images. Previously updated : 10/23/2023 Last updated : 11/02/2023 #Customer intent: As a developer, I want to learn how to create and use a container registry so that I can deploy my own applications to Azure Kubernetes Service. This tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-Install Before creating an ACR instance, you need a resource group. An Azure resource group is a logical container into which you deploy and manage Azure resources. +> [!IMPORTANT] +> This tutorial uses *myResourceGroup* as a placeholder for the resource group name. If you want to use a different name, replace *myResourceGroup* with your own resource group name. + ### [Azure CLI](#tab/azure-cli) 1. Create a resource group using the [`az group create`][az-group-create] command. Before creating an ACR instance, you need a resource group. An Azure resource gr az group create --name myResourceGroup --location eastus ``` -2. Create an ACR instance using the [`az acr create`][az-acr-create] command and provide your own unique registry name. The registry name must be unique within Azure and contain 5-50 alphanumeric characters. The rest of this tutorial uses `<acrName>` as a placeholder for the container registry name. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput. +2. Create an ACR instance using the [`az acr create`][az-acr-create] command and provide your own unique registry name. The registry name must be unique within Azure and contain 5-50 alphanumeric characters. The rest of this tutorial uses an environment variable, `$ACRNAME`, as a placeholder for the container registry name. You can set this environment variable to your unique ACR name to use in future commands. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput. ```azurecli-interactive- az acr create --resource-group myResourceGroup --name <acrName> --sku Basic + az acr create --resource-group myResourceGroup --name $ACRNAME --sku Basic ``` ### [Azure PowerShell](#tab/azure-powershell) Before creating an ACR instance, you need a resource group. An Azure resource gr New-AzResourceGroup -Name myResourceGroup -Location eastus ``` -2. Create an ACR instance using the [`New-AzContainerRegistry`][new-azcontainerregistry] cmdlet and provide your own unique registry name. The registry name must be unique within Azure and contain 5-50 alphanumeric characters. The rest of this tutorial uses `<acrName>` as a placeholder for the container registry name. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput. +2. Create an ACR instance using the [`New-AzContainerRegistry`][new-azcontainerregistry] cmdlet and provide your own unique registry name. The registry name must be unique within Azure and contain 5-50 alphanumeric characters. The rest of this tutorial uses an environment variable, `$ACRNAME`, as a placeholder for the container registry name. You can set this environment variable to your unique ACR name to use in future commands. 
The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput. ```azurepowershell-interactive- New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrName> -Location eastus -Sku Basic + New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name $ACRNAME -Location eastus -Sku Basic ``` Before creating an ACR instance, you need a resource group. An Azure resource gr > In the following example, we don't build the `rabbitmq` image. This image is available from the Docker Hub public repository and doesn't need to be built or pushed to your ACR instance. ```azurecli-interactive- az acr build --registry <acrName> --image aks-store-demo/product-service:latest ./src/product-service/ - az acr build --registry <acrName> --image aks-store-demo/order-service:latest ./src/order-service/ - az acr build --registry <acrName> --image aks-store-demo/store-front:latest ./src/store-front/ + az acr build --registry $ACRNAME --image aks-store-demo/product-service:latest ./src/product-service/ + az acr build --registry $ACRNAME --image aks-store-demo/order-service:latest ./src/order-service/ + az acr build --registry $ACRNAME --image aks-store-demo/store-front:latest ./src/store-front/ ``` ## List images in registry Before creating an ACR instance, you need a resource group. An Azure resource gr * View the images in your ACR instance using the [`az acr repository list`][az-acr-repository-list] command. ```azurecli-interactive- az acr repository list --name <acrName> --output table + az acr repository list --name $ACRNAME --output table ``` The following example output lists the available images in your registry: Before creating an ACR instance, you need a resource group. An Azure resource gr * View the images in your ACR instance using the [`Get-AzContainerRegistryRepository`][get-azcontainerregistryrepository] cmdlet. ```azurepowershell-interactive- Get-AzContainerRegistryRepository -RegistryName <acrName> + Get-AzContainerRegistryRepository -RegistryName $ACRNAME ``` The following example output lists the available images in your registry: |
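Since the tutorial above switches from a `<acrName>` placeholder to a `$ACRNAME` environment variable, a minimal sketch of setting the variable before the first command; the registry name shown is hypothetical and must be globally unique.

```azurecli-interactive
# Hypothetical name; ACR names must be 5-50 alphanumeric characters and unique within Azure.
ACRNAME=myuniqueacr12345
az acr create --resource-group myResourceGroup --name $ACRNAME --sku Basic
```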
aks | Tutorial Kubernetes Upgrade Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md | Title: Kubernetes on Azure tutorial - Upgrade an Azure Kubernetes Service (AKS) cluster description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to upgrade an existing AKS cluster to the latest available Kubernetes version. Previously updated : 10/23/2023 Last updated : 11/02/2023 #Customer intent: As a developer or IT pro, I want to learn how to upgrade an Azure Kubernetes Service (AKS) cluster so that I can use the latest version of Kubernetes and features. For more information on AKS, see the [AKS overview][aks-intro]. For guidance on [aks-auto-upgrade]: ./auto-upgrade-cluster.md [auto-upgrade-node-image]: ./auto-upgrade-node-image.md [node-image-upgrade]: ./node-image-upgrade.md+[az-aks-update]: /cli/azure/aks#az_aks_update |
aks | Use Labels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-labels.md | The following labels are AKS reserved labels. *Virtual node usage* specifies if | kubernetes.azure.com/agentpool | \<agent pool name> | nodepool1 | Same | | kubernetes.io/arch | amd64 | runtime.GOARCH | N/A | | kubernetes.io/os | \<OS Type> | Linux/Windows | Same |-| node.kubernetes.io/instance-type | \<VM size> | Standard_NC6 | Virtual | +| node.kubernetes.io/instance-type | \<VM size> | Standard_NC6s_v3 | Virtual | | topology.kubernetes.io/region | \<Azure region> | westus2 | Same | | topology.kubernetes.io/zone | \<Azure zone> | 0 | Same | | kubernetes.azure.com/cluster | \<MC_RgName> | MC_aks_myAKSCluster_westus2 | Same | |
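Reserved labels like those in the table above can drive scheduling constraints. A minimal sketch, assuming a node pool of the listed instance type exists in your cluster; the pod name and image are illustrative assumptions:

```yml
apiVersion: v1
kind: Pod
metadata:
  name: instance-type-pinned   # hypothetical name
spec:
  nodeSelector:
    node.kubernetes.io/instance-type: Standard_NC6s_v3   # reserved label from the table above
  containers:
  - name: app
    image: <your-image>   # replace with your workload image
```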
aks | Virtual Nodes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes.md | Title: Use virtual nodes description: Overview of using virtual nodes with Azure Kubernetes Service (AKS) Previously updated : 01/18/2023 Last updated : 11/06/2023 Virtual nodes enable network communication between pods that run in Azure Contai Pods running in Azure Container Instances (ACI) need access to the AKS API server endpoint in order to configure networking. -## Known limitations -Virtual nodes functionality is heavily dependent on ACI's feature set. In addition to the [quotas and limits for Azure Container Instances](../container-instances/container-instances-quotas.md), the following scenarios aren't supported with virtual nodes: +## Limitations +Virtual nodes functionality is heavily dependent on ACI's feature set. In addition to the [quotas and limits for Azure Container Instances](../container-instances/container-instances-quotas.md), the following scenarios are either not supported with virtual nodes or are deployment considerations: * Using service principal to pull ACR images. [Workaround](https://github.com/virtual-kubelet/azure-aci/blob/master/README.md#private-registry) is to use [Kubernetes secrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) * [Virtual Network Limitations](../container-instances/container-instances-vnet.md) including VNet peering, Kubernetes network policies, and outbound traffic to the internet with network security groups. Virtual nodes functionality is heavily dependent on ACI's feature set. In additi * [Host aliases](https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/) * [Arguments](../container-instances/container-instances-exec.md#restrictions) for exec in ACI * [DaemonSets](concepts-clusters-workloads.md#statefulsets-and-daemonsets) won't deploy pods to the virtual nodes-* Virtual nodes support scheduling Linux pods. You can manually install the open source [Virtual Kubelet ACI](https://github.com/virtual-kubelet/azure-aci) provider to schedule Windows Server containers to ACI. +* To schedule Windows Server containers to ACI, you need to manually install the open source [Virtual Kubelet ACI](https://github.com/virtual-kubelet/azure-aci) provider. * Virtual nodes require AKS clusters with Azure CNI networking.-* Using api server authorized ip ranges for AKS. +* Using API server authorized IP ranges for AKS. * Volume mounting Azure Files share supports [General-purpose V2](../storage/common/storage-account-overview.md#types-of-storage-accounts) and [General-purpose V1](../storage/common/storage-account-overview.md#types-of-storage-accounts). However, virtual nodes currently don't support [Persistent Volumes](concepts-storage.md#persistent-volumes) and [Persistent Volume Claims](concepts-storage.md#persistent-volume-claims). Follow the instructions for mounting [a volume with Azure Files share as an inline volume](azure-csi-files-storage-provision.md#mount-file-share-as-an-inline-volume). * Using IPv6 isn't supported. * Virtual nodes don't support the [Container hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) feature. |
api-management | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md | Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
app-service | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md | Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
application-gateway | Configuration Frontend Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-frontend-ip.md | Only one public IP address and one private IP address is supported. You choose t A frontend IP address is associated to a *listener*, which checks for incoming requests on the frontend IP. >[!NOTE] -> You can create private and public listeners with the same port number (Preview feature). However, be aware of any Network Security Group (NSG) associated with the application gateway subnet. Depending on your NSG's configuration, you may need an allow-inbound rule with **Destination IP addresses** as your application gateway's Public and Private frontend IPs. When using the same port, your application gateway changes the "Destination" of the inbound flow to the frontend IPs of your gateway. +> You can create private and public listeners with the same port number. However, be aware of any Network Security Group (NSG) associated with the application gateway subnet. Depending on your NSG's configuration, you may need an allow-inbound rule with **Destination IP addresses** as your application gateway's Public and Private frontend IPs. When using the same port, your application gateway changes the "Destination" of the inbound flow to the frontend IPs of your gateway. > > **Inbound Rule**: > - Source: (as per your requirement) |
application-gateway | Configuration Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md | To use NSG with your application gateway, you will need to create or retain some ||||||| |`<as per need>`|Any|`<Subnet IP Prefix>`|`<listener ports>`|TCP|Allow| -Upon configuring **active public and private listeners** (with Rules) **with the same port number** (in Preview), your application gateway changes the "Destination" of all inbound flows to the frontend IPs of your gateway. This is true even for the listeners not sharing any port. You must thus include your gateway's frontend Public and Private IP addresses in the Destination of the inbound rule when using the same port configuration. +Upon configuring **active public and private listeners** (with Rules) **with the same port number**, your application gateway changes the "Destination" of all inbound flows to the frontend IPs of your gateway. This is true even for the listeners not sharing any port. You must thus include your gateway's frontend Public and Private IP addresses in the Destination of the inbound rule when using the same port configuration. | Source | Source ports | Destination | Destination ports | Protocol | Access | |
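A hedged Azure CLI sketch of the allow-inbound rule both rows above describe; every name, IP, port, and the priority value is a placeholder to replace with your gateway's actual frontend IPs and listener ports.

```azurecli-interactive
# Hypothetical values; include both frontend IPs in the destination when listeners share a port.
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myAppGwSubnetNsg \
    --name AllowListenerTraffic \
    --priority 200 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes Internet \
    --destination-address-prefixes <public-frontend-IP> <private-frontend-IP> \
    --destination-port-ranges 443
```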
application-gateway | Quick Create Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-portal.md | You'll create the application gateway using the tabs on the **Create application - **Name**: Enter *myVNet* for the name of the virtual network. - **Subnet name** (Application Gateway subnet): The **Subnets** grid will show a subnet named *default*. Change the name of this subnet to *myAGSubnet*.<br>The application gateway subnet can contain only application gateways. No other resources are allowed. The default IP address range provided is 10.0.0.0/24.+ + - **Subnet name** (backend server subnet): In the second row of the **Subnets** grid, enter *myBackendSubnet* in the **Subnet name** column. ++ - **Address range** (backend server subnet): In the second row of the **Subnets** grid, enter an address range that doesn't overlap with the address range of *myAGSubnet*. For example, if the address range of *myAGSubnet* is 10.0.0.0/24, enter *10.0.1.0/24* for the address range of *myBackendSubnet*. Select **OK** to close the **Create virtual network** window and save the virtual network settings. |
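The portal steps above map to the CLI as well. A sketch under the tutorial's addressing (10.0.0.0/24 for *myAGSubnet*, 10.0.1.0/24 for *myBackendSubnet*); the resource group name and the 10.0.0.0/16 address space are assumptions.

```azurecli-interactive
# Assumes a VNet address space that covers both subnets.
az network vnet create \
    --resource-group myResourceGroupAG \
    --name myVNet \
    --address-prefix 10.0.0.0/16 \
    --subnet-name myAGSubnet \
    --subnet-prefix 10.0.0.0/24

az network vnet subnet create \
    --resource-group myResourceGroupAG \
    --vnet-name myVNet \
    --name myBackendSubnet \
    --address-prefixes 10.0.1.0/24
```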
attestation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md | Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
automation | Guidance Migration Log Analytics Monitoring Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md | This article provides guidance to move from Change Tracking and Inventory using ### [Using Azure portal - for single VM](#tab/ct-single-vm) -1. Sign in to the [Azure portal](https://portal.azure.com) and select your virtual machine +1. Sign in to the [Azure portal](https://portal.azure.com) and select your virtual machine 1. Under **Operations**, select **Change tracking**. 1. Select **Configure with AMA** and in the **Configure with Azure monitor agent**, provide the **Log analytics workspace** and select **Migrate** to initiate the deployment. This article provides guidance to move from Change Tracking and Inventory using 1. On the **Onboarding to Change Tracking with Azure Monitoring** page, you can view your automation account and list of machines that are currently on Log Analytics and ready to be onboarded to Azure Monitoring Agent of Change Tracking and inventory. 1. On the **Assess virtual machines** tab, select the machines and then select **Next**. 1. On the **Assign workspace** tab, assign a new [Log Analytics workspace resource ID](#obtain-log-analytics-workspace-resource-id) to which the settings of AMA based solution should be stored and select **Next**.- + :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/assign-workspace-inline.png" alt-text="Screenshot of assigning new Log Analytics resource ID." lightbox="media/guidance-migration-log-analytics-monitoring-agent/assign-workspace-expanded.png":::- + 1. On the **Review** tab, you can review the machines that are being onboarded and the new workspace. 1. Select **Migrate** to initiate the deployment. This article provides guidance to move from Change Tracking and Inventory using :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/switch-versions-inline.png" alt-text="Screenshot that shows switching between log analytics and Azure Monitoring Agent after a successful migration." lightbox="media/guidance-migration-log-analytics-monitoring-agent/switch-versions-expanded.png"::: - ### [Using PowerShell script](#tab/ps-policy) #### Prerequisites -- Ensure to have the Windows PowerShell console installed. Follow the steps to [install Windows PowerShell](https://learn.microsoft.com/powershell/scripting/windows-powershell/install/installing-windows-powershell?view=powershell-7.3).-- We recommend that you use PowerShell version 7.1.3 or higher.+- Ensure that you have the Windows PowerShell console installed. We recommend that you use PowerShell version 7.2 or higher. Follow the steps to [Install PowerShell on Windows](/powershell/scripting/install/installing-powershell-on-windows). - Obtain Read access for the specified workspace resources. - Ensure that you have `Az.Accounts` and `Az.OperationalInsights` modules installed. The `Az.PowerShell` module is used to pull workspace agent configuration information. - Ensure that you have the Azure credentials to run `Connect-AzAccount` and `Select-AzContext` that set the context for the script to run. Follow these steps to migrate using scripts.
#### Onboard at scale Use the [script](https://github.com/mayguptMSFT/AzureMonitorCommunity/blob/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/DCR%20Config%20Generator/CTDcrGenerator/CTWorkSpaceSettingstoDCR.ps1) to migrate Change Tracking workspace settings to a data collection rule (DCR).- + #### Parameters **Parameter** | **Required** | **Description** | --- | --- | --- | `InputWorkspaceResourceId`| Yes | Resource ID of the workspace associated to Change Tracking & Inventory with Log Analytics. | `OutputWorkspaceResourceId`| Yes | Resource ID of the workspace associated to Change Tracking & Inventory with Azure Monitoring Agent. | `OutputDCRName`| Yes | Custom name of the new DCR created. | `OutputDCRLocation`| Yes | Azure location of the output workspace ID. |-`OutputDCRTemplateFolderPath`| Yes | Folder path where DCR templates are created. | +`OutputDCRTemplateFolderPath`| Yes | Folder path where DCR templates are created. | To obtain the Log Analytics Workspace resource ID, follow these steps: **For single VM and Automation Account** 1. 100 VMs per Automation Account can be migrated in one instance.-1. Any VM with > 100 file/registry settings for migration via portal isn't supported now. +1. Migration via the portal isn't currently supported for any VM with more than 100 file/registry settings. 1. Arc VM migration isn't supported with the portal; we recommend that you use the PowerShell script migration. 1. For file content changes-based settings, you must manually migrate from the LA version to the AMA version of Change Tracking & Inventory. Follow the guidance listed in [Track file contents](manage-change-tracking-monitoring-agent.md#configure-file-content-changes). 1. Alerts that you configure using the Log Analytics Workspace must be [manually configured](configure-alerts.md). To obtain the Log Analytics Workspace resource ID, follow these steps: ### [Using PowerShell script](#tab/limit-policy) 1. For file content changes-based settings, you must manually migrate from the LA version to the AMA version of Change Tracking & Inventory. Follow the guidance listed in [Track file contents](manage-change-tracking.md#track-file-contents).-1. Any VM with > 100 file/registry settings for migration via portal isn't supported now. +1. Migration via the portal isn't currently supported for any VM with more than 100 file/registry settings. 1. Alerts that you configure using the Log Analytics Workspace must be [manually configured](configure-alerts.md). After you enable management of your virtual machines using Change Tracking and I The disable method incorporates the following: - [Removes change tracking with LA agent for selected few VMs within Log Analytics Workspace](remove-vms-from-change-tracking.md). - [Removes change tracking with LA agent from the entire Log Analytics Workspace](remove-feature.md).- + ## Next steps - To enable from the Azure portal, see [Enable Change Tracking and Inventory from the Azure portal](../change-tracking/enable-vms-monitoring-agent.md). |
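For reference, a hedged sketch of invoking the migration script with the parameters documented in the table above; every value shown is a placeholder, and the script name comes from the linked repository.

```azurepowershell-interactive
# Hypothetical values; parameter names are from the table above.
.\CTWorkSpaceSettingstoDCR.ps1 `
    -InputWorkspaceResourceId "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<laWorkspace>" `
    -OutputWorkspaceResourceId "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<amaWorkspace>" `
    -OutputDCRName "ct-ama-dcr" `
    -OutputDCRLocation "eastus" `
    -OutputDCRTemplateFolderPath "C:\DcrTemplates"
```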
automation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md | Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
azure-app-configuration | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md | Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 # |
azure-arc | Deliver Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deliver-extended-security-updates.md | If any problems occur during the enablement process, see [Troubleshoot delivery There are some scenarios in which you may be eligible to receive Extended Security Updates patches at no additional cost. Two of these scenarios supported by Azure Arc include the following: -- Dev/Test (Visual Studio)-- Disaster Recovery (Entitled benefit DR instances from Software Assurance or subscription only)+- [Dev/Test (Visual Studio)](/azure/devtest/offer/overview-what-is-devtest-offer-visual-studio) +- Disaster Recovery ([Entitled benefit DR instances from Software Assurance](https://www.microsoft.com/en-us/licensing/licensing-programs/software-assurance-by-benefits) or subscription only) To qualify for these scenarios, you must have: |
azure-arc | Quick Enable Hybrid Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/quick-enable-hybrid-vm.md | Title: Quickstart - Connect hybrid machine with Azure Arc-enabled servers description: In this quickstart, you connect and register a hybrid machine with Azure Arc-enabled servers. Previously updated : 05/04/2023 Last updated : 11/03/2023 Use the Azure portal to create a script that automates the agent download and in 1. [Go to the Azure portal page for adding servers with Azure Arc](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/HybridVmAddBlade). Select the **Add a single server** tile, then select **Generate script**. - :::image type="content" source="media/quick-enable-hybrid-vm/add-single-server.png" alt-text="Screenshot of Azure portal's add server page." lightbox="media/quick-enable-hybrid-vm/add-single-server-expanded.png"::: + :::image type="content" source="media/quick-enable-hybrid-vm/add-single-server.png" alt-text="Screenshot of Azure portal's add server page." lightbox="media/quick-enable-hybrid-vm/add-single-server.png"::: > [!NOTE] > In the portal, you can also reach this page by searching for and selecting "Servers - Azure Arc" and then selecting **+Add**. -1. Review the information on the **Prerequisites** page, then select **Next**. --1. On the **Resource details** page, provide the following: +1. On the **Basics** page, provide the following: 1. Select the subscription and resource group where you want the machine to be managed within Azure. 1. For **Region**, choose the Azure region in which the server's metadata will be stored. |
azure-arc | Tutorial Assign Policy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/tutorial-assign-policy-portal.md | Follow the steps below to create a policy assignment and assign the policy defin For a partial list of available built-in policies, see [Azure Policy samples](../../../governance/policy/samples/index.md). 1. Search through the policy definitions list to find the _\[Preview]: Log Analytics extension should be installed on your Windows Azure Arc machines_- definition (if you have enabled the Azure Connected Machine agent on a Windows-based machine). For a Linux-based machine, find the corresponding _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_ policy definition. Click on that policy and click **Select**. + definition (if you have enabled the Azure Connected Machine agent on a Windows-based machine). For a Linux-based machine, find the corresponding _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_ policy definition. Click that policy, and then click **Add**. 1. The **Assignment name** is automatically populated with the policy name you selected, but you can change it. For this example, leave the policy name as is, and don't change any of the remaining options on the page. |
azure-arc | Tutorial Enable Vm Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/tutorial-enable-vm-insights.md | Sign in to the [Azure portal](https://portal.azure.com). ## Enable VM insights -1. Launch the Azure Arc service in the Azure portal by clicking **All services**, then searching for and selecting **Servers - Azure Arc**. +1. Launch the Azure Arc service in the Azure portal by clicking **All services**, then searching for and selecting **Machines - Azure Arc**. :::image type="content" source="./media/quick-enable-hybrid-vm/search-machines.png" alt-text="Screenshot of Azure portal showing search for Servers, Azure Arc." border="false"::: -1. On the **Azure Arc - Servers** page, select the connected machine you created in the [quickstart](quick-enable-hybrid-vm.md) article. +1. On the **Azure Arc - Machines** page, select the connected machine you created in the [quickstart](quick-enable-hybrid-vm.md) article. 1. From the left pane, under the **Monitoring** section, select **Insights** and then **Enable**. |
azure-arc | Manage Automatic Vm Extension Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-automatic-vm-extension-upgrade.md | Title: Automatic extension upgrade for Azure Arc-enabled servers description: Learn how to enable automatic extension upgrades for your Azure Arc-enabled servers. Previously updated : 10/14/2022 Last updated : 11/03/2023 # Automatic extension upgrade for Azure Arc-enabled servers If you continue to have trouble upgrading an extension, you can [disable automat ### Timing of automatic extension upgrades -When a new version of a VM extension is published, it becomes available for installation and manual upgrade on Arc-enabled servers. For servers that already have the extension installed and automatic extension upgrade enabled, it may take 5 - 8 weeks for every server with that extension to get the automatic upgrade. Upgrades are issued in batches across Azure regions and subscriptions, so you may see the extension get upgraded on some of your servers before others. If you need to upgrade an extension immediately, follow the guidance to manually upgrade extensions using the [Azure portal](manage-vm-extensions-portal.md#upgrade-extensions), [Azure PowerShell](manage-vm-extensions-powershell.md#upgrade-extension) or [Azure CLI](manage-vm-extensions-cli.md#upgrade-extensions). +When a new version of a VM extension is published, it becomes available for installation and manual upgrade on Arc-enabled servers. For servers that already have the extension installed and automatic extension upgrade enabled, it might take 5 - 8 weeks for every server with that extension to get the automatic upgrade. Upgrades are issued in batches across Azure regions and subscriptions, so you might see the extension get upgraded on some of your servers before others. If you need to upgrade an extension immediately, follow the guidance to manually upgrade extensions using the [Azure portal](manage-vm-extensions-portal.md#upgrade-extensions), [Azure PowerShell](manage-vm-extensions-powershell.md#upgrade-extension) or [Azure CLI](manage-vm-extensions-cli.md#upgrade-extensions). Extension versions fixing critical security vulnerabilities are rolled out much faster. These automatic upgrades happen using a specialized rollout process that can take 1 - 3 weeks to automatically upgrade every server with that extension. Azure identifies which extension version should be rolled out quickly to ensure all servers are protected. If you need to upgrade the extension immediately, follow the guidance to manually upgrade extensions using the [Azure portal](manage-vm-extensions-portal.md#upgrade-extensions), [Azure PowerShell](manage-vm-extensions-powershell.md#upgrade-extension) or [Azure CLI](manage-vm-extensions-cli.md#upgrade-extensions). Automatic extension upgrade is enabled by default when you install extensions on Use the following steps to configure automatic extension upgrades using the Azure portal: -1. Navigate to the [Azure portal](https://portal.azure.com) and type **Servers - Azure Arc** into the search bar. - :::image type="content" source="media/manage-automatic-vm-extension-upgrade/portal-search-arc-server.png" alt-text="Screenshot of Azure portal showing user typing in Servers - Azure Arc." border="true"::: -1. Select **Servers - Azure Arc** under the Services category, then select the individual server you wish to manage. -1. 
In the navigation pane, select the **Extensions** tab to see a list of all extensions installed on the server. +1. Go to the [Azure portal](https://portal.azure.com) and navigate to **Machines - Azure Arc**. +1. Select the applicable server. +1. In the left pane, select the **Extensions** tab to see a list of all extensions installed on the server. :::image type="content" source="media/manage-automatic-vm-extension-upgrade/portal-navigation-extensions.png" alt-text="Screenshot of an Azure Arc-enabled server in the Azure portal showing where to navigate to extensions." border="true"::: 1. The **Automatic upgrade** column in the table shows whether upgrades are enabled, disabled, or not supported for each extension. Select the checkbox next to the extensions for which you want automatic upgrades enabled, then select **Enable automatic upgrade** to turn on the feature. Select **Disable automatic upgrade** to turn off the feature.- :::image type="content" source="media/manage-automatic-vm-extension-upgrade/portal-enable-auto-upgrade.png" alt-text="Screenshot of Azure portal showing how to select extensions and enable automatic upgrades." border="true"::: ### [Azure CLI](#tab/azure-cli) Update-AzConnectedMachineExtension -ResourceGroup resourceGroupName -MachineName A machine managed by Arc-enabled servers can have multiple extensions with automatic extension upgrade enabled. The same machine can also have other extensions without automatic extension upgrade enabled. -If multiple extension upgrades are available for a machine, the upgrades may be batched together, but each extension upgrade is applied individually on a machine. A failure on one extension doesn't impact the other extension(s) to be upgraded. For example, if two extensions are scheduled for an upgrade, and the first extension upgrade fails, the second extension will still be upgraded. +If multiple extension upgrades are available for a machine, the upgrades might be batched together, but each extension upgrade is applied individually on a machine. A failure on one extension doesn't block the upgrade of the other extensions. For example, if two extensions are scheduled for an upgrade, and the first extension upgrade fails, the second extension will still be upgraded. ## Check automatic extension upgrade history |
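Completing the `Update-AzConnectedMachineExtension` fragment shown above, here is a minimal sketch assuming the `Az.ConnectedMachine` PowerShell module; the `-Name` and `-EnableAutomaticUpgrade` parameters are extrapolated from the fragment rather than confirmed by this excerpt, and all resource names are placeholders.

```powershell
# Enable automatic upgrade for one extension on an Arc-enabled server (placeholder names).
# A sketch, not a definitive signature; check Get-Help Update-AzConnectedMachineExtension.
Update-AzConnectedMachineExtension `
    -ResourceGroup "myResourceGroup" `
    -MachineName "myArcServer" `
    -Name "AzureMonitorWindowsAgent" `
    -EnableAutomaticUpgrade
```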
azure-arc | Manage Vm Extensions Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-portal.md | VM extensions can be applied to your Azure Arc-enabled server-managed machine vi 1. From your browser, go to the [Azure portal](https://portal.azure.com). -2. In the portal, browse to **Servers - Azure Arc** and select your hybrid machine from the list. +2. In the portal, browse to **Machines - Azure Arc** and select your machine from the list. -3. Choose **Extensions**, then select **Add**. Choose the extension you want from the list of available extensions and follow the instructions in the wizard. In this example, we will deploy the Log Analytics VM extension. +3. Choose **Extensions**, then select **Add**. - ![Select VM extension for selected machine](./media/manage-vm-extensions/add-vm-extensions.png) -- The following example shows the installation of the Log Analytics VM extension from the Azure portal: +4. Choose the extension you want from the list of available extensions and follow the instructions in the wizard. In this example, we will deploy the Log Analytics VM extension. ![Install Log Analytics VM extension](./media/manage-vm-extensions/mma-extension-config.png) To complete the installation, you are required to provide the workspace ID and primary key. If you are not familiar with how to find this information, see [obtain workspace ID and key](../../azure-monitor/agents/agent-windows.md#workspace-id-and-key). -4. After confirming the required information provided, select **Review + Create**. A summary of the deployment is displayed and you can review the status of the deployment. +5. After confirming the required information, select **Review + Create**. A summary of the deployment is displayed and you can review the status of the deployment. >[!NOTE] >While multiple extensions can be batched together and processed, they are installed serially. Once the first extension installation is complete, installation of the next extension is attempted. You can get a list of the VM extensions on your Azure Arc-enabled server from th 1. From your browser, go to the [Azure portal](https://portal.azure.com). -2. In the portal, browse to **Servers - Azure Arc** and select your hybrid machine from the list. +2. In the portal, browse to **Machines - Azure Arc** and select your machine from the list. 3. Choose **Extensions**, and the list of installed extensions is returned. You can upgrade one, or select multiple extensions eligible for an upgrade from 1. From your browser, go to the [Azure portal](https://portal.azure.com). -2. In the portal, browse to **Servers - Azure Arc** and select your hybrid machine from the list. +2. In the portal, browse to **Machines - Azure Arc** and select your hybrid machine from the list. 3. Choose **Extensions**, and review the status of extensions under the **Update available** column. You can remove one or more extensions from an Azure Arc-enabled server from the 1. From your browser, go to the [Azure portal](https://portal.azure.com). -2. In the portal, browse to **Servers - Azure Arc** and select your hybrid machine from the list. +2. In the portal, browse to **Machines - Azure Arc** and select your hybrid machine from the list. 3. Choose **Extensions**, and then select an extension from the list of installed extensions. |
azure-arc | Onboard Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md | Title: Connect hybrid machines to Azure at scale description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using a service principal. Previously updated : 05/23/2022 Last updated : 11/03/2023 The script to automate the download and installation, and to establish the conne 1. From your browser, go to the [Azure portal](https://portal.azure.com). -1. On the **Servers - Azure Arc** page, select **Add** at the upper left. +1. On the **Machines - Azure Arc** page, select **Add/Create** at the upper left, then select **Add a machine** from the drop-down menu. -1. On the **Select a method** page, select the **Add multiple servers** tile, and then select **Generate script**. +1. On the **Add servers with Azure Arc** page, select the **Add multiple servers** tile, and then select **Generate script**. -1. On the **Generate script** page, select the subscription and resource group where you want the machine to be managed within Azure. Select an Azure location where the machine metadata will be stored. This location can be the same or different, as the resource group's location. +1. On the **Basics** page, provide the following: -1. On the **Prerequisites** page, review the information and then select **Next: Resource details**. --1. On the **Resource details** page, provide the following: -- 1. In the **Resource group** drop-down list, select the resource group the machine will be managed from. - 1. In the **Region** drop-down list, select the Azure region to store the servers metadata. + 1. Select the **Subscription** and **Resource group** for the machines. + 1. In the **Region** drop-down list, select the Azure region to store the servers' metadata. 1. In the **Operating system** drop-down list, select the operating system that the script is configured to run on. 1. If the machine is communicating through a proxy server to connect to the internet, specify the proxy server IP address or the name and port number that the machine will use to communicate with the proxy server. Using this configuration, the agent communicates through the proxy server using the HTTP protocol. Enter the value in the format `http://<proxyURL>:<proxyport>`.- 1. Select **Next: Authentication**. --1. On the **Authentication** page, under the **service principal** drop-down list, select **Arc-for-servers**. Then select, **Next: Tags**. + 1. Select **Next**. + 1. In the **Authentication** section, under the **Service principal** drop-down list, select **Arc-for-servers**. Then select **Next**. 1. On the **Tags** page, review the default **Physical location tags** suggested and enter a value, or specify one or more **Custom tags** to support your standards. -1. Select **Next: Download and run script**. +1. Select **Next**. 1. On the **Download and run script** page, review the summary information, and then select **Download**. If you still need to make changes, select **Previous**. After you install the agent and configure it to connect to Azure Arc-enabled ser ![Screenshot showing a successful server connection in the Azure portal.](./media/onboard-portal/arc-for-servers-successful-onboard.png) ---------- ## Next steps - Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. |
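For context, the script generated by this flow ultimately connects each machine by running `azcmagent connect` with the service principal's credentials. A minimal sketch of that step with placeholder values; the flag names are assumptions based on current `azcmagent` releases, not taken from this excerpt.

```powershell
# Connection step the generated onboarding script performs; all values are placeholders.
& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
    --service-principal-id "<appId>" `
    --service-principal-secret "<secret>" `
    --tenant-id "<tenantId>" `
    --subscription-id "<subscriptionId>" `
    --resource-group "myResourceGroup" `
    --location "eastus"
```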
azure-arc | Onboard Update Management Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-update-management-machines.md | Title: Connect machines from Azure Automation Update Management description: In this article, you learn how to connect hybrid machines to Azure Arc managed by Automation Update Management. Previously updated : 11/01/2023 Last updated : 11/06/2023 Perform the following steps to configure the hybrid machine with Arc-enabled ser 1. From your browser, go to the [Azure portal](https://portal.azure.com). -1. Navigate to the **Servers - Azure Arc** page, and then select **Add** at the upper left. +1. Navigate to the **Machines - Azure Arc** page, select **Add/Create**, and then select **Add a machine** from the drop-down menu. -1. On the **Select a method** page, select the **Add managed servers from Update Management (preview)** tile, and then select **Add servers**. +1. On the **Add servers with Azure Arc** page, select **Add servers** from the **Add managed servers from Update Management** tile. -1. On the **Basics** page, configure the following: +1. On the **Resource details** page, configure the following: - 1. In the **Resource group** drop-down list, select the resource group the machine will be managed from. + 1. Select the **Subscription** and **Resource group** where you want the server to be managed within Azure. 1. In the **Region** drop-down list, select the Azure region to store the servers' metadata. 1. If the machine is communicating through a proxy server to connect to the internet, specify the proxy server IP address or the name and port number that the machine will use to communicate with the proxy server. Enter the value in the format `http://<proxyURL>:<proxyport>`.- 1. Select **Next: Machines**. + 1. Select **Next**. -1. On the **Machines** page, select the **Subscription** and **Automation account** from the drop-down list that has the Update Management feature enabled and includes the machines you want to onboard to Azure Arc-enabled servers. +1. On the **Servers** page, select **Add Servers**, then select the **Subscription** and **Automation account** from the drop-down list that has the Update Management feature enabled and includes the machines you want to onboard to Azure Arc-enabled servers. After specifying the Automation account, the list below returns non-Azure machines managed by Update Management for that Automation account. Both Windows and Linux machines are listed; for each one, select **add**. You can review your selection by selecting **Review selection**, and if you want to remove a machine, select **remove** from under the **Action** column. - Once you confirm your selection, select **Next: Tags**. + Once you confirm your selection, select **Next**. 1. On the **Tags** page, specify one or more **Name**/**Value** pairs to support your standards. Select **Next: Review + add**. |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
azure-arc | Prepare Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md | -With Windows Server 2012 and Windows Server 2012 R2 reaching end of support on October 10, 2023, Azure Arc-enabled servers lets you enroll your existing Windows Server 2012/2012 R2 machines in [Extended Security Updates (ESUs)](/windows-server/get-started/extended-security-updates-overview). Affording both cost flexibility and an enhanced delivery experience, Azure Arc better positions you to migrate to Azure. +With Windows Server 2012 and Windows Server 2012 R2 having reached end of support on October 10, 2023, Azure Arc-enabled servers lets you enroll your existing Windows Server 2012/2012 R2 machines in [Extended Security Updates (ESUs)](/windows-server/get-started/extended-security-updates-overview). Affording both cost flexibility and an enhanced delivery experience, Azure Arc better positions you to migrate to Azure. The purpose of this article is to help you understand the benefits and how to prepare to use Arc-enabled servers to enable delivery of ESUs. Delivering ESUs to your Windows Server 2012/2012 R2 machines provides the follow For Azure Arc-enabled servers enrolled in WS2012 ESUs enabled by Azure Arc, free access is provided to these Azure services from October 10, 2023: * [Azure Update Manager](../../update-center/overview.md) - Unified management and governance of update compliance that includes not only Azure and hybrid machines, but also ESU update compliance for all your Windows Server 2012/2012 R2 machines.+ Enrollment in ESUs does not impact Azure Update Manager. After enrollment in ESUs through Azure Arc, the server becomes eligible for ESU patches. These patches can be delivered through Azure Update Manager or any other patching solution. You'll still need to configure updates from Microsoft Updates or Windows Server Update Services. * [Azure Automation Change Tracking and Inventory](/azure/automation/change-tracking/overview?tabs=python-2) - Track changes in virtual machines hosted in Azure, on-premises, and other cloud environments. * [Azure Policy Guest Configuration](/azure/cloud-adoption-framework/manage/azure-server-management/guest-configuration-policy) - Audit the configuration settings in a virtual machine. Guest configuration supports Azure VMs natively and non-Azure physical and virtual servers through Azure Arc-enabled servers. Other Azure services through Azure Arc-enabled servers are available as well, wi * [Microsoft Sentinel](scenario-onboard-azure-sentinel.md) - Collect security-related events and correlate them with other data sources. >[!NOTE]- >Activation of ESU is planned for the third quarter of 2023. Using Azure services such as Azure Update Manager (preview) and Azure Policy to support managing ESU-eligible Windows Server 2012/2012 R2 machines are also planned for the third quarter. + >Activation of ESU is planned for the third quarter of 2023. Using Azure services such as Azure Update Manager and Azure Policy to support managing ESU-eligible Windows Server 2012/2012 R2 machines is also planned for the third quarter. ## Prepare delivery of ESUs To prepare for this new offer, you need to plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) (version 1.34 or higher) and establishing a connection to Azure. 
Windows Server 2012 Extended Security Updates supports Windows Server 2012 and R2 Standard and Datacenter editions. Windows Server 2012 Storage is not supported. -We recommend you deploy your machines to Azure Arc in preparation for when the related Azure services deliver supported functionality to manage ESU. Once these machines are onboarded to Azure Arc-enabled servers, you'll have visibility into their ESU coverage and enroll through the Azure portal or using Azure Policy one month before Windows Server 2012 end of support. Billing for this service starts from October 2023, after Windows Server 2012 end of support. +We recommend you deploy your machines to Azure Arc in preparation for when the related Azure services deliver supported functionality to manage ESU. Once these machines are onboarded to Azure Arc-enabled servers, you'll have visibility into their ESU coverage and can enroll through the Azure portal or by using Azure Policy. Billing for this service starts from October 2023 (i.e., after Windows Server 2012 end of support). |
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | Azure Arc supports the following Windows and Linux operating systems. Only x86-6 * Both Desktop and Server Core experiences are supported * Azure Editions are supported on Azure Stack HCI -The Azure Connected Machine agent can't currently be installed on systems hardened by the Center for Information Security (CIS) Benchmark. +The Azure Connected Machine agent hasn't been tested on operating systems hardened by the Center for Internet Security (CIS) Benchmark. ### Client operating system guidance |
azure-arc | Private Link Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md | Last updated 06/20/2023 # Use Azure Private Link to securely connect servers to Azure Arc -[Azure Private Link](../../private-link/private-link-overview.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints. For many services, you just set up an endpoint per resource. This means you can connect your on-premises or multi-cloud servers with Azure Arc and send all traffic over an Azure [ExpressRoute](../../expressroute/expressroute-introduction.md) or site-to-site [VPN connection](../../vpn-gateway/vpn-gateway-about-vpngateways.md) instead of using public networks. +[Azure Private Link](../../private-link/private-link-overview.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints. For many services, you just set up an endpoint per resource. This means you can connect your on-premises or multicloud servers with Azure Arc and send all traffic over an Azure [ExpressRoute](../../expressroute/expressroute-introduction.md) or site-to-site [VPN connection](../../vpn-gateway/vpn-gateway-about-vpngateways.md) instead of using public networks. Starting with Azure Arc-enabled servers, you can use a Private Link Scope model to allow multiple servers or machines to communicate with their Azure Arc resources using a single private endpoint. There are two ways you can achieve this: |Priority |150 (must be lower than any rules that block internet access) |151 (must be lower than any rules that block internet access) | |Name |AllowAADOutboundAccess |AllowAzOutboundAccess | -- Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Microsoft Entra ID and Azure using the downloadable service tag files. The [JSON file](https://www.microsoft.com/en-us/download/details.aspx?id=56519) contains all the public IP address ranges used by Microsoft Entra ID and Azure and is updated monthly to reflect any changes. Azure ADs service tag is `AzureActiveDirectory` and Azure's service tag is `AzureResourceManager`. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules.+- Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Microsoft Entra ID and Azure using the downloadable service tag files. The [JSON file](https://www.microsoft.com/en-us/download/details.aspx?id=56519) contains all the public IP address ranges used by Microsoft Entra ID and Azure and is updated monthly to reflect any changes. Azure AD's service tag is `AzureActiveDirectory` and Azure's service tag is `AzureResourceManager`. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules. See the visual diagram under the section [How it works](#how-it-works) for the network traffic flows. Once your Azure Arc Private Link Scope is created, you need to connect it with o a. Choose the **virtual network** and **subnet** that you want to connect to your Azure Arc-enabled server. - b. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. The actual DNS zones may be different from what is shown in the screenshot below. + b. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. 
The actual DNS zones might be different from what is shown in the screenshot below. > [!NOTE] > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link - including this Private Endpoint and the Private Scope configuration. Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Azure Arc-enabled servers. If you opted out of using Azure private DNS zones during private endpoint creati ### Single server scenarios -If you're only planning to use Private Links to support a few machines or servers, you may not want to update your entire network's DNS configuration. In this case, you can add the private endpoint hostnames and IP addresses to your operating systems **Hosts** file. Depending on the OS configuration, the Hosts file can be the primary or alternative method for resolving hostname to IP address. +If you're only planning to use Private Links to support a few machines or servers, you might not want to update your entire network's DNS configuration. In this case, you can add the private endpoint hostnames and IP addresses to your operating system's **Hosts** file. Depending on the OS configuration, the Hosts file can be the primary or alternative method for resolving hostname to IP address. #### Windows If you're only planning to use Private Links to support a few machines or server 1. Add the private endpoint IPs and hostnames as shown in the table from step 3 under [Manual DNS server configuration](#manual-dns-server-configuration). The hosts file requires the IP address first, followed by a space and then the hostname (see the example entries after this entry). -1. Save the file with your changes. You may need to save to another directory first, then copy the file to the original path. +1. Save the file with your changes. You might need to save to another directory first, then copy the file to the original path. #### Linux When connecting a machine or server with Azure Arc-enabled servers for the first 1. From your browser, go to the [Azure portal](https://portal.azure.com). -1. Navigate to **Servers -Azure Arc**. -1. On the **Servers - Azure Arc** page, select **Add** at the upper left. +1. Navigate to **Machines - Azure Arc**. +1. On the **Machines - Azure Arc** page, select **Add/Create** at the upper left, and then select **Add a machine** from the drop-down menu. 1. On the **Add servers with Azure Arc** page, select either the **Add a single server** or **Add multiple servers** depending on your deployment scenario, and then select **Generate script**. 1. On the **Generate script** page, select the subscription and resource group where you want the machine to be managed within Azure. Select an Azure location where the machine metadata will be stored. This location can be the same as, or different from, the resource group's location. -1. On the **Prerequisites** page, review the information and then select **Next: Resource details**. +1. On the **Basics** page, provide the following: -1. On the **Resource details** page, provide the following: -- 1. In the **Resource group** drop-down list, select the resource group the machine will be managed from. + 1. Select the **Subscription** and **Resource group** for the machine. 1. In the **Region** drop-down list, select the Azure region to store the machine or server metadata. 
1. In the **Operating system** drop-down list, select the operating system that the script is configured to run on.- 1. Under **Network Connectivity**, select **Private endpoint** and select the Azure Arc Private Link Scope created in Part 1 from the drop-down list. + 1. Under **Connectivity method**, select **Private endpoint** and select the Azure Arc Private Link Scope created in Part 1 from the drop-down list. :::image type="content" source="./media/private-link-security/arc-enabled-servers-create-script.png" alt-text="Selecting Private Endpoint connectivity option" border="true"::: When connecting a machine or server with Azure Arc-enabled servers for the first 1. On the **Download and run script** page, review the summary information, and then select **Download**. If you still need to make changes, select **Previous**. -After downloading the script, you have to run it on your machine or server using a privileged (administrator or root) account. Depending on your network configuration, you may need to download the agent from a computer with internet access and transfer it to your machine or server, and then modify the script with the path to the agent. +After downloading the script, you have to run it on your machine or server using a privileged (administrator or root) account. Depending on your network configuration, you might need to download the agent from a computer with internet access and transfer it to your machine or server, and then modify the script with the path to the agent. The Windows agent can be downloaded from [https://aka.ms/AzureConnectedMachineAgent](https://aka.ms/AzureConnectedMachineAgent) and the Linux agent can be downloaded from [https://packages.microsoft.com](https://packages.microsoft.com). Look for the latest version of the **azcmagent** under your OS distribution directory and install it with your local package manager. The script will return status messages letting you know if onboarding was successful after it completes. > [!TIP]-> Network traffic from the Azure Connected Machine agent to Microsoft Entra ID and Azure Resource Manager will continue to use public endpoints. If your server needs to communicate through a proxy server to reach these endpoints, [configure the agent with the proxy server URL](manage-agent.md#update-or-remove-proxy-settings) before connecting it to Azure. You may also need to [configure a proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) for the Azure Arc services if your private endpoint is not accessible from your proxy server. +> Network traffic from the Azure Connected Machine agent to Microsoft Entra ID and Azure Resource Manager will continue to use public endpoints. If your server needs to communicate through a proxy server to reach these endpoints, [configure the agent with the proxy server URL](manage-agent.md#update-or-remove-proxy-settings) before connecting it to Azure. You might also need to [configure a proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) for the Azure Arc services if your private endpoint is not accessible from your proxy server. 
### Configure an existing Azure Arc-enabled server For Azure Arc-enabled servers that were set up prior to your private link scope, :::image type="content" source="./media/private-link-security/select-servers-private-link-scope.png" lightbox="./media/private-link-security/select-servers-private-link-scope.png" alt-text="Selecting Azure Arc resources" border="true"::: -It may take up to 15 minutes for the Private Link Scope to accept connections from the recently associated server(s). +It might take up to 15 minutes for the Private Link Scope to accept connections from the recently associated server(s). ## Troubleshooting |
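For the Windows **Hosts** file step in the entry above, here is a minimal sketch of appending the entries from an elevated PowerShell session. The IPs and hostnames are examples only, not taken from this excerpt — use the values from your own private endpoint's DNS configuration table.

```powershell
# Append private endpoint entries to the Windows hosts file (run from an elevated session).
# Example values only; substitute the IPs and hostnames from your private endpoint configuration.
$entries = @(
    "10.1.0.4 gbl.his.arc.azure.com",
    "10.1.0.5 agentserviceapi.guestconfiguration.azure.com"
)
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value $entries
```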
azure-arc | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/disaster-recovery.md | To recover from Arc resource bridge VM deletion, you need to deploy a new resour 1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and SCVMM Azure resources. -2. Find and delete the old Arc resource bridge template from your SCVMM. +2. Find and delete the old Arc resource bridge resource under the [Resource Bridges tab from the Azure Arc center](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/resourceBridges). 3. Download the [onboarding script](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc#download-the-onboarding-script) from the Azure portal and update the following section in the script, using the same information as the original resources in Azure. To recover from Arc resource bridge VM deletion, you need to deploy a new resour 5. [Provide the inputs](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc#script-runtime) as prompted. -6. Once the script successfully finishes, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources will be manageable in Azure again. +6. On the same machine, download and run the applicable script: + - [Download the script](https://download.microsoft.com/download/6/b/4/6b4a5009-fed8-46c2-b22b-b24a4d0a06e3/arcvmm-appliance-dr.ps1) if you are running it from a Windows machine + - [Download the script](https://download.microsoft.com/download/0/5/c/05c2bcb8-87f8-4ead-9757-a87a0759071c/arcvmm-appliance-dr.sh) if you are running it from a Linux machine ++7. Once the script runs successfully, the old resource bridge is recovered and the connection to the existing Azure-enabled SCVMM resources is re-established. ## Next steps |
azure-arc | Quickstart Connect System Center Virtual Machine Manager To Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md | This QuickStart shows you how to connect your SCVMM management server to Azure A | **Requirement** | **Details** | | | | | **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. |-| **SCVMM** | You need an SCVMM management server running version 2016 or later.<br/><br/> A private cloud with minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported and VMM Static IP Pool is required. Follow [these steps](https://learn.microsoft.com/system-center/vmm/network-pool?view=sc-vmm-2022) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least four IP addresses. Dynamic IP allocation using DHCP is not supported. | +| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud with a minimum free capacity of 32 GB of RAM, 4 vCPUs, and 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported and VMM Static IP Pool is required. Follow [these steps](https://learn.microsoft.com/system-center/vmm/network-pool?view=sc-vmm-2022) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least four IP addresses. Dynamic IP allocation using DHCP is not supported. | | **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be part of the local administrator account on the SCVMM server. <br/><br/>This will be used for the ongoing operation of Azure Arc-enabled SCVMM as well as the deployment of the Arc Resource bridge VM. | | **Workstation** | The workstation will be used to run the helper script.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and internet, directly or through proxy.<br/><br/> The helper script can be run directly from the VMM server machine as well.<br/><br/> To avoid network latency issues, we recommend executing the helper script directly in the VMM server machine.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you might experience performance issues. |
azure-arc | Administer Arc Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/administer-arc-vmware.md | Title: Perform ongoing administration for Arc-enabled VMware vSphere description: Learn how to perform administrator operations related to Azure Arc-enabled VMware vSphere Previously updated : 08/18/2023 Last updated : 11/06/2023 -In this article, you learn how to perform various administrative operations related to Azure Arc-enabled VMware vSphere (preview): +In this article, you learn how to perform various administrative operations related to Azure Arc-enabled VMware vSphere: -- Upgrading the Azure Arc resource bridge (preview)+- Upgrading the Azure Arc resource bridge - Updating the credentials - Collecting logs from the Arc resource bridge |
azure-arc | Azure Arc Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/azure-arc-agent.md | Title: Azure Arc agent description: Learn about Azure Arc agent Previously updated : 10/31/2023 Last updated : 11/06/2023 |
azure-arc | Enable Guest Management At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/enable-guest-management-at-scale.md | Title: Install Arc agent at scale for your VMware VMs description: Learn how to enable guest management at scale for Arc enabled VMware vSphere VMs. Previously updated : 08/21/2023 Last updated : 11/06/2023 |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md | Title: What is Azure Arc-enabled VMware vSphere (preview)? + Title: What is Azure Arc-enabled VMware vSphere? description: Azure Arc-enabled VMware vSphere extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. Previously updated : 10/31/2023 Last updated : 11/06/2023 -# What is Azure Arc-enabled VMware vSphere (preview)? +# What is Azure Arc-enabled VMware vSphere? -Azure Arc-enabled VMware vSphere (preview) is an [Azure Arc](../overview.md) service that helps you simplify management of hybrid IT estate distributed across VMware vSphere and Azure. It does so by extending the Azure control plane to VMware vSphere infrastructure and enabling the use of Azure security, governance, and management capabilities consistently across VMware vSphere and Azure. +Azure Arc-enabled VMware vSphere is an [Azure Arc](../overview.md) service that helps you simplify management of your hybrid IT estate distributed across VMware vSphere and Azure. It does so by extending the Azure control plane to VMware vSphere infrastructure and enabling the use of Azure security, governance, and management capabilities consistently across VMware vSphere and Azure. -Arc-enabled VMware vSphere (preview) allows you to: +Arc-enabled VMware vSphere allows you to: - Discover your VMware vSphere estate (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register resources with Arc at scale. Arc-enabled VMware vSphere extends Azure's control plane (Azure Resource Manager ## How does it work? -Arc-enabled VMware vSphere provides these capabilities by integrating with your VMware vCenter Server. To connect your VMware vCenter Server to Azure Arc, you need to deploy the [Azure Arc resource bridge](../resource-bridge/overview.md) (preview) in your vSphere environment. Azure Arc resource bridge is a virtual appliance that hosts the components that communicate with your vCenter Server and Azure. +Arc-enabled VMware vSphere provides these capabilities by integrating with your VMware vCenter Server. To connect your VMware vCenter Server to Azure Arc, you need to deploy the [Azure Arc resource bridge](../resource-bridge/overview.md) in your vSphere environment. Azure Arc resource bridge is a virtual appliance that hosts the components that communicate with your vCenter Server and Azure. When a VMware vCenter Server is connected to Azure, an automatic discovery of the inventory of vSphere resources is performed. This inventory data is continuously kept in sync with the vCenter Server. You have the flexibility to start with either option, and incorporate the other ## Supported VMware vSphere versions -Azure Arc-enabled VMware vSphere (preview) currently works with vCenter Server versions 7 and 8. +Azure Arc-enabled VMware vSphere currently works with vCenter Server versions 7 and 8. > [!NOTE]-> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, we don't recommend you to use Arc-enabled VMware vSphere with it at this point. +> Azure Arc-enabled VMware vSphere supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, we don't recommend using Arc-enabled VMware vSphere with it at this point. 
## Supported regions -You can use Azure Arc-enabled VMware vSphere (preview) in these supported regions: -- Australia East-- Canada Central+You can use Azure Arc-enabled VMware vSphere in these supported regions: + - East US-- East US 2-- North Europe-- Southeast Asia+- East US 2 +- West US 2 +- West US 3 +- South Central US +- Canada Central - UK South+- North Europe - West Europe-- West US 2-- West US 3+- Sweden Central +- Southeast Asia +- Australia East For the most up-to-date information about region availability of Azure Arc-enabled VMware vSphere, see the [Azure Products by Region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc&regions=all) page. |
azure-arc | Perform Vm Ops Through Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/perform-vm-ops-through-azure.md | Title: Perform VM operations on VMware VMs through Azure description: Learn how to view the operations that you can do on VMware virtual machines and install the Log Analytics agent. Previously updated : 08/18/2023 Last updated : 11/06/2023 # Manage VMware VMs in Azure through Arc-enabled VMware vSphere -In this article, you learn how to perform various operations on the Azure Arc-enabled VMware vSphere (preview) VMs such as: +In this article, you learn how to perform various operations on the Azure Arc-enabled VMware vSphere VMs such as: - Start, stop, and restart a VM |
azure-arc | Quick Start Connect Vcenter To Arc Using Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md | Title: Connect VMware vCenter Server to Azure Arc by using the helper script description: In this quickstart, you learn how to use the helper script to connect your VMware vCenter Server instance to Azure Arc. Previously updated : 10/31/2023 Last updated : 11/06/2023 -First, the script deploys a virtual appliance called [Azure Arc resource bridge (preview)](../resource-bridge/overview.md) in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between vCenter Server and Azure Arc. +First, the script deploys a virtual appliance called [Azure Arc resource bridge](../resource-bridge/overview.md) in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between vCenter Server and Azure Arc. > [!IMPORTANT] > This article describes a way to connect a generic vCenter Server to Azure Arc. If you're trying to enable Arc for Azure VMware Solution (AVS) private cloud, please follow this guide instead - [Deploy Arc for Azure VMware Solution](../../azure-vmware/deploy-arc-for-azure-vmware-solution.md). With the Arc for AVS onboarding process you need to provide fewer inputs and Arc capabilities are better integrated into the AVS private cloud portal experience. You need a vSphere account that can: - Read all inventory. - Deploy and update VMs to all the resource pools (or clusters), networks, and VM templates that you want to use with Azure Arc. -This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere and the deployment of the Azure Arc resource bridge (preview) VM. +This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere and the deployment of the Azure Arc resource bridge VM. ### Workstation A typical onboarding that uses the script takes 30 to 60 minutes. During the pro | **Data center selection** | Select the name of the datacenter (as shown in the vSphere client) where the Azure Arc resource bridge VM should be deployed. | | **Network selection** | Select the name of the virtual network or segment to which the Azure Arc resource bridge VM must be connected. This network should allow the appliance to communicate with vCenter Server and the Azure endpoints (or internet). | | **Static IP / DHCP** | For deploying Azure Arc resource bridge, the preferred configuration is to use Static IP. Enter **n** to select static IP configuration. While not recommended, if you have a DHCP server in your network and want to use it instead, enter **y**. If you're using a DHCP server, reserve the IP address assigned to the Azure Arc Resource Bridge VM (Appliance VM IP). If you use DHCP, the cluster configuration IP address still needs to be a static IP address. </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.1**. </br> 3. **DNS servers**: IP address(es) of DNS server(s) used by Azure Arc resource bridge VM for DNS resolution. Azure Arc resource bridge VM must be able to resolve external sites, like mcr.microsoft.com and the vCenter server. </br> 4. **Start range IP**: A minimum of two available IP addresses is required. 
One IP address is for the Azure Arc resource bridge VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. Ensure the Start range IP has internet access. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. Ensure the End range IP has internet access. </br>|-| **Control Plane IP address** | Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane always requires a static IP address. Provide an IP address that meets the following requirements: <br> - The IP address must have internet access. <br> - The IP address must be within the subnet defined by IP address prefix. <br> - If you're using static IP address option for resource bridge VM IP address, the control plane IP address must be outside of the IP address range provided for the VM (Start range IP - End range IP). <br> - If there's a DHCP service on the network, the IP address must be outside of DHCP range.| +| **Control Plane IP address** | Azure Arc resource bridge runs a Kubernetes cluster, and its control plane always requires a static IP address. Provide an IP address that meets the following requirements: <br> - The IP address must have internet access. <br> - The IP address must be within the subnet defined by the IP address prefix. <br> - If you're using the static IP address option for the resource bridge VM IP address, the control plane IP address must be outside of the IP address range provided for the VM (Start range IP - End range IP). <br> - If there's a DHCP service on the network, the IP address must be outside of the DHCP range.| | **Resource pool** | Select the name of the resource pool to which the Azure Arc resource bridge VM will be deployed. | | **Data store** | Select the name of the datastore to be used for the Azure Arc resource bridge VM. | | **Folder** | Select the name of the vSphere VM and the template folder where the Azure Arc resource bridge's VM will be deployed. | |
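To make the static IP prompts in this entry concrete, here's one hypothetical, self-consistent set of answers. The hashtable keys are labels for this sketch rather than actual script parameters, and every address is a placeholder to adapt to your own network.

```powershell
# Illustrative static IP plan for the Arc resource bridge deployment (placeholder values).
$applianceNetwork = @{
    StaticIpAddressPrefix = "192.168.0.0/24"   # network address in CIDR notation
    StaticGateway         = "192.168.0.1"      # gateway inside the prefix
    DnsServers            = @("192.168.0.2")   # must resolve mcr.microsoft.com and vCenter Server
    StartRangeIp          = "192.168.0.10"     # first of the two reserved appliance IPs
    EndRangeIp            = "192.168.0.11"     # last IP of the reserved range
    ControlPlaneIp        = "192.168.0.20"     # static, outside the Start-End range and any DHCP range
}
```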
azure-arc | Quick Start Create A Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-create-a-vm.md | Title: Create a virtual machine on VMware vCenter using Azure Arc description: In this quickstart, you learn how to create a virtual machine on VMware vCenter using Azure Arc Previously updated : 10/23/2023 Last updated : 11/06/2023 |
azure-arc | Recover From Resource Bridge Deletion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion.md | Title: Perform disaster recovery operations description: Learn how to perform recovery operations for the Azure Arc resource bridge VM in Azure Arc-enabled VMware vSphere disaster scenarios. Previously updated : 08/18/2023 Last updated : 11/06/2023 # Recover from accidental deletion of resource bridge VM -In this article, you learn how to recover the Azure Arc resource bridge (preview) connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc will fail. +In this article, you learn how to recover the Azure Arc resource bridge connection to a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc fail. -## Recovering the Arc resource bridge in case of VM deletion +## Recovering the Arc resource bridge if the VM is deleted To recover from Arc resource bridge VM deletion, you need to deploy a new resource bridge with the same resource ID as the current resource bridge using the following steps. To recover from Arc resource bridge VM deletion, you need to deploy a new resour 5. [Provide the inputs](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#inputs-for-the-script) as prompted. -6. Once the script successfully finishes, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources will be manageable in Azure again. +6. Once the script successfully finishes, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources are manageable in Azure again. ## Next steps -[Troubleshoot Azure Arc resource bridge (preview) issues](../resource-bridge/troubleshoot-resource-bridge.md) +[Troubleshoot Azure Arc resource bridge issues](../resource-bridge/troubleshoot-resource-bridge.md) If the recovery steps mentioned above are unsuccessful in restoring Arc resource bridge to its original state, try one of the following channels for support: |
azure-arc | Remove Vcenter From Arc Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware.md | description: This article explains the steps to cleanly remove your VMware vCent Previously updated : 03/28/2022 Last updated : 11/06/2023 |
azure-arc | Setup And Manage Self Service Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/setup-and-manage-self-service-access.md | Title: Set up and manage self-service access to VMware resources through Azure RBAC description: Learn how to manage access to your on-premises VMware resources through Azure role-based access control (Azure RBAC). Previously updated : 08/21/2023 Last updated : 11/06/2023 # Customer intent: As a VI admin, I want to manage access to my vCenter resources in Azure so that I can keep environments secure To provision VMware VMs and change their size, add disks, change network interfa You must assign this role on each individual resource pool (or cluster or host), network, datastore, and template that a user or a group needs to access. -1. Go to the [**VMware vCenters (preview)** list in Arc center](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/vCenter). +1. Go to the [**VMware vCenters** list in Arc center](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/vCenter). 2. Search for and select your vCenter. |
azure-arc | Support Matrix For Arc Enabled Vmware Vsphere | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md | Title: Plan for deployment description: Learn about the support matrix for Arc-enabled VMware vSphere including vCenter Server versions supported, network requirements, and more. Previously updated : 10/31/2023 Last updated : 11/06/2023 # Customer intent: As a VI admin, I want to understand the support matrix for Arc-enabled VMware vSphere. -# Support matrix for Azure Arc-enabled VMware vSphere (preview) +# Support matrix for Azure Arc-enabled VMware vSphere -This article documents the prerequisites and support requirements for using [Azure Arc-enabled VMware vSphere (preview)](overview.md) to manage your VMware vSphere VMs through Azure Arc. +This article documents the prerequisites and support requirements for using [Azure Arc-enabled VMware vSphere](overview.md) to manage your VMware vSphere VMs through Azure Arc. -To use Arc-enabled VMware vSphere, you must deploy an Azure Arc resource bridge (preview) in your VMware vSphere environment. The resource bridge provides an ongoing connection between your VMware vCenter Server and Azure. Once you've connected your VMware vCenter Server to Azure, components on the resource bridge discover your vCenter inventory. You can enable them in Azure and start performing virtual hardware and guest OS operations on them using Azure Arc. +To use Arc-enabled VMware vSphere, you must deploy an Azure Arc resource bridge in your VMware vSphere environment. The resource bridge provides an ongoing connection between your VMware vCenter Server and Azure. Once you've connected your VMware vCenter Server to Azure, components on the resource bridge discover your vCenter inventory. You can then enable these inventory items in Azure and start performing virtual hardware and guest OS operations on them using Azure Arc. ## VMware vSphere requirements The following requirements must be met in order to use Azure Arc-enabled VMware ### Supported vCenter Server versions -Azure Arc-enabled VMware vSphere (preview) works with vCenter Server versions 7 and 8. +Azure Arc-enabled VMware vSphere works with vCenter Server versions 7 and 8. > [!NOTE]-> Azure Arc-enabled VMware vSphere (preview) currently supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, it's not recommended to use Arc-enabled VMware vSphere with it at this point. +> Azure Arc-enabled VMware vSphere currently supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, we don't recommend using Arc-enabled VMware vSphere with it at this time. ### Required vSphere account privileges You need a vSphere account that can: - Read all inventory. - Deploy and update VMs to all the resource pools (or clusters), networks, and VM templates that you want to use with Azure Arc. -This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere (preview) and the deployment of the Azure Arc resource bridge (preview) VM. +This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere and the deployment of the Azure Arc resource bridge VM. ### Resource bridge resource requirements |
azure-arc | Switch To New Preview Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/switch-to-new-preview-version.md | - Title: Switch to the new preview version -description: Learn to switch to the new preview version and use its capabilities - Previously updated : 08/22/2023-------# Customer intent: As a VI admin, I want to switch to the new preview version of Arc-enabled VMware vSphere and leverage the associated capabilities ---# Switch to the new preview version --On August 21, 2023, we rolled out major changes to Azure Arc-enabled VMware vSphere preview. We're now announcing a new preview. By switching to the new preview version, you can use all the Azure management services that are available for Arc-enabled Servers. --> [!NOTE] -> If you're new to Arc-enabled VMware vSphere (preview), you will be able to leverage the new capabilities by default. To get started with the new preview, see [Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](quick-start-connect-vcenter-to-arc-using-script.md). ---## Switch to the new preview version (Existing preview customer) --If you're an existing **Azure Arc-enabled VMware** customer, for VMs that are Azure-enabled, follow these steps to switch to the new preview version: -->[!Note] ->If you had enabled guest management on any of the VMs, remove [VM extensions](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-1-remove-vm-extensions) and [disconnect agents](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-2-disconnect-the-agent-from-azure-arc). --1. From your browser, go to the vCenters blade on [Azure Arc Center](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/overview) and select the vCenter resource. --2. Select all the virtual machines that are Azure enabled with the older preview version. --3. Select **Remove from Azure**. -- :::image type="VM Inventory view" source="media/switch-to-new-preview-version/vm-inventory-view-inline.png" alt-text="Screenshot of VM Inventory view." lightbox="media/switch-to-new-preview-version/vm-inventory-view-expanded.png"::: --4. After successful removal from Azure, enable the same resources again in Azure. --5. Once the resources are re-enabled, the VMs are auto switched to the new preview version. The VM resources will now be represented as **Machine - Azure Arc (VMware)**. -- :::image type=" New VM browse view" source="media/switch-to-new-preview-version/new-vm-browse-view-inline.png" alt-text="Screenshot of New VM browse view." lightbox="media/switch-to-new-preview-version/new-vm-browse-view-expanded.png"::: --## Next steps --[Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](/azure/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script). |
azure-arc | Switch To New Version Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/switch-to-new-version-vmware.md | + + Title: Switch to the new version of VMware vSphere +description: Learn to switch to the new version of VMware vSphere and use its capabilities + Last updated : 11/06/2023+++++++# Customer intent: As a VI admin, I want to switch to the new version of Arc-enabled VMware vSphere and leverage the associated capabilities. +++# Switch to the new version of VMware vSphere ++On August 21, 2023, we rolled out major changes to **Azure Arc-enabled VMware vSphere**. By switching to the new version, you can use all the Azure management services that are available for Arc-enabled Servers. ++> [!NOTE] +> If you're new to Arc-enabled VMware vSphere, you can use the new capabilities by default. To get started with the new version, see [Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](quick-start-connect-vcenter-to-arc-using-script.md). +++## Switch to the new version (Existing customer) ++If you onboarded to **Azure Arc-enabled VMware** before August 21, 2023, follow these steps for your Azure-enabled VMs to switch to the new version: ++>[!Note] +>If you had enabled guest management on any of the VMs, remove [VM extensions](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-1-remove-vm-extensions) and [disconnect agents](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-2-disconnect-the-agent-from-azure-arc). ++1. From your browser, go to the vCenters blade on [Azure Arc Center](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/overview) and select the vCenter resource. ++2. Select all the virtual machines that are Azure enabled with the older version. ++3. Select **Remove from Azure**. ++ :::image type="content" source="media/switch-to-new-version-vmware/vm-inventory-view-inline.png" alt-text="Screenshot of VM Inventory view." lightbox="media/switch-to-new-version-vmware/vm-inventory-view-expanded.png"::: ++4. After successful removal from Azure, enable the same resources again in Azure. ++5. Once the resources are re-enabled, the VMs are automatically switched to the new version. The VM resources will now be represented as **Machine - Azure Arc (VMware)**. ++ :::image type="content" source="media/switch-to-new-version-vmware/new-vm-browse-view-inline.png" alt-text="Screenshot of New VM browse view." lightbox="media/switch-to-new-version-vmware/new-vm-browse-view-expanded.png"::: + ## Next steps ++[Create a virtual machine on VMware vCenter using Azure Arc](/azure/azure-arc/vmware-vsphere/quick-start-create-a-vm). |
azure-arc | Troubleshoot Guest Management Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/troubleshoot-guest-management-issues.md | Title: Troubleshoot Guest Management Issues description: Learn about how to troubleshoot the guest management issues for Arc-enabled VMware vSphere. Previously updated : 08/18/2023 Last updated : 11/06/2023 |
azure-cache-for-redis | Cache Best Practices Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-performance.md | redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n >These numbers might change as we post newer results periodically. > +>[!IMPORTANT] +>Microsoft periodically updates the underlying VM used in cache instances. This can change the performance characteristics from cache to cache and from region to region. The example benchmarking values on this page reflect older generation cache hardware in a single region. You may see better or different results in practice. +> + ### Standard tier | Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) | |
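For context, a complete benchmarking invocation might look like the following sketch. The host name and access key are placeholders, and the flag values are illustrative rather than recommended settings; `-d 1024` approximates the 1-kB value size used in the tier tables.

```bash
# Illustrative only: placeholders for host and key; tune -t, -n, and -c for your own test.
redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey \
  -t GET -d 1024 -n 1000000 -c 50
```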
azure-cache-for-redis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md | Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
azure-functions | Durable Functions Bindings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-bindings.md | Make sure to choose your Durable Functions development language at the top of th ## Python v2 programming model -Durable Functions provides preview support of the new [Python v2 programming model](../functions-reference-python.md?pivots=python-mode-decorators). To use the v2 model, you must install the Durable Functions SDK, which is the PyPI package `azure-functions-durable`, version `1.2.2` or a later version. During the preview, you can provide feedback and suggestions in the [Durable Functions SDK for Python repo](https://github.com/Azure/azure-functions-durable-python/issues). --Using [Extension Bundles](../functions-bindings-register.md#extension-bundles) isn't currently supported for the v2 model with Durable Functions. You'll instead need to manage your extensions manually as follows: --1. Remove the `extensionBundle` section of your `host.json` file. - -1. Run the `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` command on your terminal. This installs the Durable Functions extension for your app, which allows you to use the v2 model preview. For more information, see [func extensions install](../functions-core-tools-reference.md#func-extensions-install). +Durable Functions is supported in the new [Python v2 programming model](../functions-reference-python.md?pivots=python-mode-decorators). To use the v2 model, you must install the Durable Functions SDK, which is the PyPI package `azure-functions-durable`, version `1.2.2` or a later version. You must also check `host.json` to make sure your app is referencing [Extension Bundles](../functions-bindings-register.md#extension-bundles) version 4.x to use the v2 model with Durable Functions. +You can provide feedback and suggestions in the [Durable Functions SDK for Python repo](https://github.com/Azure/azure-functions-durable-python/issues). ::: zone-end ## Orchestration trigger |
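For reference, a `host.json` that references Extension Bundles version 4.x, as described above, might look like this sketch; the bundle `id` is the standard Azure Functions bundle, and the exact version range is an assumption to verify for your app:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}
```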
azure-maps | Tutorial Search Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md | The Map Control API is a convenient client library. This API allows you to easil 4. Save your changes to the file and open the HTML page in a browser. The map shown is the most basic map that you can make by calling `atlas.Map` using your account key. - :::image type="content" source="./media/tutorial-search-location/basic-map.png" alt-text="A screenshot showing the most basic map that you can make by calling atlas.Map using your Azure Maps account key."::: + :::image type="content" source="./media/tutorial-search-location/basic-map.png" lightbox="./media/tutorial-search-location/basic-map.png" alt-text="A screenshot showing the most basic map that you can make by calling `atlas.Map` using your Azure Maps account key."::: 5. In the `GetMap` function, after initializing the map, add the following JavaScript code. This section shows how to use the Maps [Search API] to find a point of interest 3. Save the **MapSearch.html** file and refresh your browser. You should see the map centered on Seattle with round-blue pins for locations of gas stations in the area. - :::image type="content" source="./media/tutorial-search-location/pins-map.png" alt-text="A screenshot showing the map resulting from the search, which is a map showing Seattle with round-blue pins at locations of gas stations."::: + :::image type="content" source="./media/tutorial-search-location/pins-map.png" lightbox="./media/tutorial-search-location/pins-map.png" alt-text="A screenshot showing the map resulting from the search, which is a map showing Seattle with round-blue pins at locations of gas stations."::: 4. You can see the raw data that the map is rendering by entering the following HTTPRequest in your browser. Replace `<Your Azure Maps Subscription Key>` with your subscription key. The map that we've made so far only looks at the longitude/latitude data for the 3. Save the file and refresh your browser. Now the map in the browser shows information popups when you hover over any of the search pins. - :::image type="content" source="./media/tutorial-search-location/popup-map.png" alt-text="A screenshot of a map with information popups that appear when you hover over a search pin."::: + :::image type="content" source="./media/tutorial-search-location/popup-map.png" lightbox="./media/tutorial-search-location/popup-map.png" alt-text="A screenshot of a map with information popups that appear when you hover over a search pin."::: * For the completed code used in this tutorial, see the [search tutorial] on GitHub. * To view this sample live, see [Search for points of interest] on the **Azure Maps Code Samples** site. |
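For reference, the basic map in step 4 comes down to a call along the lines of the following sketch; `myMap` is an assumed container `<div>` ID, and the center coordinates are illustrative, not values taken from the tutorial:

```javascript
// Minimal sketch of initializing the Map Control with a subscription key.
var map = new atlas.Map('myMap', {
    center: [-122.33, 47.6], // [longitude, latitude] near Seattle
    zoom: 12,
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your Azure Maps Subscription Key>'
    }
});
```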
azure-maps | Web Sdk Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md | -The Azure Maps Web SDK provides a powerful canvas for rendering large spatial data sets in many different ways. In some cases, there are multiple ways to render data the same way, but depending on the size of the data set and the desired functionality, one method may perform better than others. This article highlights best practices and tips and tricks to maximize performance and create a smooth user experience. +The Azure Maps Web SDK provides a powerful canvas for rendering large spatial data sets in many different ways. In some cases, there are multiple ways to render data the same way, but depending on the size of the data set and the desired functionality, one method might perform better than others. This article highlights best practices and tips and tricks to maximize performance and create a smooth user experience. Generally, when looking to improve performance of the map, look for ways to reduce the number of layers and sources, and the complexity of the data sets and rendering styles being used. Often apps want to load the map to a specific location or style. Sometimes devel The Web SDK has two data sources, -* **GeoJSON source**: The `DataSource` class, manages raw location data in GeoJSON format locally. Good for small to medium data sets (upwards of hundreds of thousands of features). -* **Vector tile source**: The `VectorTileSource` class, loads data formatted as vector tiles for the current map view, based on the maps tiling system. Ideal for large to massive data sets (millions or billions of features). +* **GeoJSON source**: The `DataSource` class manages raw location data in GeoJSON format locally. Good for small to medium data sets (upwards of hundreds of thousands of features). +* **Vector tile source**: The `VectorTileSource` class loads data formatted as vector tiles for the current map view, based on the map's tiling system. Ideal for large to massive data sets (millions or billions of features). ### Use tile-based solutions for large datasets It's possible to store GeoJSON objects inline inside of JavaScript, however this ## Optimize rendering layers -Azure maps provides several different layers for rendering data on a map. There are many optimizations you can take advantage of to tailor these layers to your scenario the increase performances and the overall user experience. +Azure Maps provides several different layers for rendering data on a map. There are many optimizations you can take advantage of to tailor these layers to your scenario, increase performance, and improve the overall user experience. ### Create layers once and reuse them Unlike most layers in the Azure Maps Web control that use WebGL for rendering, H The [Reusing Popup with Multiple Pins] code sample shows how to create a single popup and reuse it by updating its content and position. For the source code, see [Reusing Popup with Multiple Pins sample code]. <! > [!VIDEO //codepen.io/azuremaps/embed/rQbjvK/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] --> -That said, if you only have a few points to render on the map, the simplicity of HTML markers may be preferred. Additionally, HTML markers can easily be made draggable if needed. 
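As a sketch of the single-popup reuse pattern described above: the following assumes a symbol layer variable named `symbolLayer` and features that carry a `title` property, neither of which comes from the article itself.

```javascript
// Create one popup up front and reuse it for every pin.
var popup = new atlas.Popup();

map.events.add('click', symbolLayer, function (e) {
    var shape = e.shapes[0];
    popup.setOptions({
        // 'title' is an assumed property on the feature's data.
        content: '<div style="padding:10px">' + shape.getProperties().title + '</div>',
        position: shape.getCoordinates()
    });
    popup.open(map);
});
```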
### Combine layers The symbol layer has two options that exist for both icon and text called `allow ### Cluster large point data sets -When working with large sets of data points you may find that when rendered at certain zoom levels, many of the points overlap and are only partial visible, if at all. Clustering is process of grouping points that are close together and representing them as a single clustered point. As the user zooms in the map, clusters break apart into their individual points. This can significantly reduce the amount of data that needs to be rendered, make the map feel less cluttered, and improve performance. The `DataSource` class has options for clustering data locally. Additionally, many tools that generate vector tiles also have clustering options. +When working with large sets of data points, you might find that when rendered at certain zoom levels, many of the points overlap and are only partially visible, if at all. Clustering is the process of grouping points that are close together and representing them as a single clustered point. As the user zooms in on the map, clusters break apart into their individual points. This can significantly reduce the amount of data that needs to be rendered, make the map feel less cluttered, and improve performance. The `DataSource` class has options for clustering data locally. Additionally, many tools that generate vector tiles also have clustering options. Additionally, increase the size of the cluster radius to improve performance. The larger the cluster radius, the fewer clustered points there are to keep track of and render. For more information, see [Clustering point data in the Web SDK]. ### Use weighted clustered heat maps -The heat map layer can render tens of thousands of data points easily. For larger data sets, consider enabling clustering on the data source and using a small cluster radius and use the clusters `point_count` property as a weight for the height map. When the cluster radius is only a few pixels in size, there's little visual difference in the rendered heat map. Using a larger cluster radius improves performance more but may reduce the resolution of the rendered heat map. +The heat map layer can render tens of thousands of data points easily. For larger data sets, consider enabling clustering on the data source with a small cluster radius, and use the cluster's `point_count` property as a weight for the heat map. When the cluster radius is only a few pixels in size, there's little visual difference in the rendered heat map. Using a larger cluster radius improves performance more but might reduce the resolution of the rendered heat map. ```javascript var layer = new atlas.layer.HeatMapLayer(source, null, { }); ``` var layer = new atlas.layer.BubbleLayer(source, null, { }); ``` -The above code functions fine if all features in the data source have a `myColor` property, and the value of that property is a color. This may not be an issue if you have complete control of the data in the data source and know for certain all features have a valid color in a `myColor` property. That said, to make this code safe from errors, a `case` expression can be used with the `has` expression to check that the feature has the `myColor` property. If it does, the `to-color` type expression can then be used to try to convert the value of that property to a color. If the color is invalid, a fallback color can be used. The following code demonstrates how to do this and sets the fallback color to green. 
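To illustrate the weighted-cluster pattern from the heat map section above, here's a hedged sketch; the radius values are illustrative, not tuned recommendations:

```javascript
// Cluster the source with a small radius, then weight the heat map by the
// cluster's point_count so density is preserved with far fewer rendered points.
var source = new atlas.source.DataSource(null, {
    cluster: true,
    clusterRadius: 10
});
map.sources.add(source);

map.layers.add(new atlas.layer.HeatMapLayer(source, null, {
    // Unclustered points have no point_count, so default their weight to 1.
    weight: ['case', ['has', 'point_count'], ['get', 'point_count'], 1],
    radius: 20
}));
```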
+The above code functions fine if all features in the data source have a `myColor` property, and the value of that property is a color. This might not be an issue if you have complete control of the data in the data source and know for certain all features have a valid color in a `myColor` property. That said, to make this code safe from errors, a `case` expression can be used with the `has` expression to check that the feature has the `myColor` property. If it does, the `to-color` type expression can then be used to try to convert the value of that property to a color. If the color is invalid, a fallback color can be used. The following code demonstrates how to do this and sets the fallback color to green. ```javascript var layer = new atlas.layer.BubbleLayer(source, null, { Things to check: * Ensure that you complete your authentication options in the map. Without authentication, the map loads a blank canvas and returns a 401 error in the network tab of the browser's developer tools. * Ensure that you have an internet connection.-* Check the console for errors of the browser's developer tools. Some errors may cause the map not to render. Debug your application. +* Check the console in the browser's developer tools for errors. Some errors might cause the map not to render. Debug your application. * Ensure you're using a [supported browser]. **All my data is showing up on the other side of the world, what's going on?** Things to check: **Why are icons or text in the symbol layer appearing in the wrong place?** Check that the `anchor` and the `offset` options are configured correctly to align with the part of your image or text that you want to have aligned with the coordinate on the map.-If the symbol is only out of place when the map is rotated, check the `rotationAlignment` option. By default, symbols rotate with the maps viewport, appearing upright to the user. However, depending on your scenario, it may be desirable to lock the symbol to the map's orientation by setting the `rotationAlignment` option to `map`. +If the symbol is only out of place when the map is rotated, check the `rotationAlignment` option. By default, symbols rotate with the map's viewport, appearing upright to the user. However, depending on your scenario, it might be desirable to lock the symbol to the map's orientation by setting the `rotationAlignment` option to `map`. -If the symbol is only out of place when the map is pitched/tilted, check the `pitchAlignment` option. By default, symbols stay upright in the maps viewport when the map is pitched or tilted. However, depending on your scenario, it may be desirable to lock the symbol to the map's pitch by setting the `pitchAlignment` option to `map`. +If the symbol is only out of place when the map is pitched/tilted, check the `pitchAlignment` option. By default, symbols stay upright in the map's viewport when the map is pitched or tilted. However, depending on your scenario, it might be desirable to lock the symbol to the map's pitch by setting the `pitchAlignment` option to `map`. **Why isn't any of my data appearing on the map?** Things to check: * Check the console in the browser's developer tools for errors. * Ensure that a data source has been created and added to the map, and that the data source has been connected to a rendering layer that has also been added to the map. * Add breakpoints in your code and step through it. 
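Returning to the `myColor` example, the guarded expression that paragraph describes would look roughly like the following sketch, with green as the fallback in both branches:

```javascript
var layer = new atlas.layer.BubbleLayer(source, null, {
    color: [
        'case',
        // Only read myColor when the feature actually has the property.
        ['has', 'myColor'],
        // to-color attempts the conversion; 'green' is used if it fails.
        ['to-color', ['get', 'myColor'], 'green'],
        // Fallback when the property is missing entirely.
        'green'
    ]
});
```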
Ensure data is added to the data source and the data source and layers are added to the map.-* Try removing data-driven expressions from your rendering layer. It's possible that one of them may have an error in it that is causing the issue. +* Try removing data-driven expressions from your rendering layer. One of them might have an error that's causing the issue. **Can I use the Azure Maps Web SDK in a sandboxed iframe?** |
azure-monitor | Agents Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md | View [supported operating systems for Azure Arc Connected Machine agent](../../a | Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent (legacy) <sup>1</sup> | Diagnostics extension <sup>2</sup>| |:|::|::|::|+| AlmaLinux 9 | ✓<sup>3</sup> | | | | AlmaLinux 8 | ✓<sup>3</sup> | ✓ | | | Amazon Linux 2017.09 | | ✓ | | | Amazon Linux 2 | ✓ | ✓ | | View [supported operating systems for Azure Arc Connected Machine agent](../../a | Debian 9 | ✓ | ✓ | ✓ | | Debian 8 | | ✓ | | | OpenSUSE 15 | ✓ | | |+| Oracle Linux 9 | ✓ | | | | Oracle Linux 8 | ✓ | ✓ | | | Oracle Linux 7 | ✓ | ✓ | ✓ | | Oracle Linux 6.4+ | | | ✓ | View [supported operating systems for Azure Arc Connected Machine agent](../../a | Red Hat Enterprise Linux Server 8.0-8.5 | ✓ | ✓ | ✓<sup>2</sup> | | Red Hat Enterprise Linux Server 7 | ✓ | ✓ | ✓ | | Red Hat Enterprise Linux Server 6.7+ | | | ✓ |+| Rocky Linux 9 | ✓ | | | | Rocky Linux 8 | ✓ | ✓ | | | SUSE Linux Enterprise Server 15 SP4 | ✓<sup>3</sup> | | | | SUSE Linux Enterprise Server 15 SP3 | ✓ | | | |
azure-monitor | Alerts Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-plan.md | You want to create alerts for any important information in your environment. But Typically, you'll want to alert on issues for all your critical Azure applications and resources. Use the following methods for creating alert rules at scale: -- Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. For a list of Azure services that are currently supported for this feature, see [Monitoring at scale using metric alerts in Azure Monitor](alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor).+- Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. For a list of Azure services that are currently supported for this feature, see [Supported resources for metric alerts in Azure Monitor](alerts-metric-near-real-time.md). - For metric alert rules for Azure services that don't support multiple resources, use automation tools such as the Azure CLI and PowerShell with Resource Manager templates to create the same alert rule for multiple resources. For samples, see [Resource Manager template samples for metric alert rules in Azure Monitor](resource-manager-alerts-metric.md). - To return data for multiple resources, write queries in log query alert rules. Use the **Split by dimensions** setting in the rule to create separate alerts for each resource. |
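For the scripted approach mentioned above (creating the same metric alert rule for multiple resources), one hedged Azure CLI sketch follows; the resource group, VM names, threshold, and action group are all assumptions:

```bash
# Create the same CPU alert rule for several VMs; all names are placeholders.
for vm in vm1 vm2 vm3; do
  az monitor metrics alert create \
    --name "high-cpu-$vm" \
    --resource-group myResourceGroup \
    --scopes "$(az vm show --resource-group myResourceGroup --name $vm --query id --output tsv)" \
    --condition "avg Percentage CPU > 90" \
    --window-size 5m \
    --evaluation-frequency 1m \
    --action myActionGroup
done
```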
azure-monitor | Alerts Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md | The information in this table can help you decide when to use each type of alert |Alert type |When to use |Pricing information| |||| |Metric alert|Metric data is stored in the system already pre-computed. Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Use metric alerts if the data you want to monitor is available in metric data.|Each metric alert rule is charged based on the number of time series that are monitored. |-|Log alert|You can use log alerts to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of Kusto Query Language (KQL) for data manipulation by using log alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated. More frequent query evaluation results in a higher cost. For log alerts configured for [at-scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. | +|Log alert|You can use log alerts to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of Kusto Query Language (KQL) for data manipulation by using log alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated. More frequent query evaluation results in a higher cost. For log alerts configured for at-scale monitoring using splitting by dimensions, the cost also depends on the number of time series created by the dimensions resulting from your query. | |Activity log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource like a restart, a shutdown, or the creation or deletion of a resource. Service Health alerts and Resource Health alerts let you know when there's an issue with one of your services or resources.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).| |Prometheus alerts|Prometheus alerts are used for alerting on Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). The alert rules are based on the PromQL open-source query language. |Prometheus alert rules are only charged on the data queried by the rules. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/). | |
azure-monitor | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md | These properties are client specific, so you can configure `appInsights.defaultC | correlationIdRetryIntervalMs | The time to wait before retrying to retrieve the ID for cross-component correlation. (Default is `30000`.) | | correlationHeaderExcludedDomains| A list of domains to exclude from cross-component correlation header injection. (Default. See [Config.ts](https://github.com/Microsoft/ApplicationInsights-node.js/blob/develop/Library/Config.ts).)| -## How do I customize logs collection? --By default, Application Insights Node.js SDK logs at warning level to console. --To spot and diagnose issues with Application Insights, "Self-diagnostics" can be enabled. This means collection of internal logging from the Application Insights Node.js SDK. --The following code demonstrates how to enable debug logging as well as generate telemetry for internal logs. --``` -let appInsights = require("applicationinsights"); -appInsights.setup("<YOUR_CONNECTION_STRING>") - .setInternalLogging(true, true) // Enable both debug and warning logging - .setAutoCollectConsole(true, true) // Generate Trace telemetry for winston/bunyan and console logs - .start(); - -Logs could be put into local file using APPLICATIONINSIGHTS_LOG_DESTINATION environment variable, supported values are file and file+console, a file named applicationinsights.log will be generated on tmp folder by default, including all logs, /tmp for *nix and USERDIR\\AppData\\Local\\Temp for Windows. Log directory could be configured using APPLICATIONINSIGHTS_LOGDIR environment variable. --process.env.APPLICATIONINSIGHTS_LOG_DESTINATION = "file+console"; -process.env.APPLICATIONINSIGHTS_LOGDIR = "C:\\applicationinsights\\logs"; --// Application Insights SDK setup.... -``` - ## Troubleshooting --For more information, see [Troubleshoot Application Insights monitoring of Node.js apps and services](/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-app-insights-nodejs). +For troubleshooting information, including "no data" scenarios and customizing logs, see [Troubleshoot Application Insights monitoring of Node.js apps and services](/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-app-insights-nodejs). ## Next steps |
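To illustrate setting the client-specific properties listed above, a minimal sketch follows; the values are arbitrary, not recommendations:

```javascript
let appInsights = require("applicationinsights");
appInsights.setup("<YOUR_CONNECTION_STRING>");

// Client-specific settings live on defaultClient.config; values here are arbitrary.
appInsights.defaultClient.config.correlationIdRetryIntervalMs = 60000;
appInsights.defaultClient.config.correlationHeaderExcludedDomains = ["example.com"];

appInsights.start();
```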
azure-monitor | Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md | Sampling is a feature in [Application Insights](./app-insights-overview.md). It' When metric counts are presented in the portal, they're renormalized to take into account sampling. Doing so minimizes any effect on the statistics. +> [!NOTE] +> - If you've adopted our OpenTelemetry Distro and are looking for configuration options, see [Enable Sampling](opentelemetry-configuration.md#enable-sampling). ++ ## Brief summary * There are three different types of sampling: adaptive sampling, fixed-rate sampling, and ingestion sampling. |
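As one concrete illustration of fixed-rate sampling, the classic Node.js SDK exposes it as a single configuration property; a minimal sketch, with an arbitrary percentage:

```javascript
let appInsights = require("applicationinsights");
appInsights.setup("<YOUR_CONNECTION_STRING>");

// Send roughly one out of every three telemetry items.
appInsights.defaultClient.config.samplingPercentage = 33;

appInsights.start();
```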
azure-monitor | Basic Logs Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md | All custom tables created with or migrated to the [data collection rule (DCR)-ba | Managed Lustre | [AFSAuditLogs](/azure/azure-monitor/reference/tables/AFSAuditLogs) | | Media Services | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations)<br>[AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests)<br>[AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth)<br>[AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) | | Monitor | [AzureMetricsV2](/azure/azure-monitor/reference/tables/AzureMetricsV2) |+| Network managers | [AVNMConnectivityConfigurationChange](/azure/azure-monitor/reference/tables/AVNMConnectivityConfigurationChange) | | Nexus Clusters | [NCCKubernetesLogs](/azure/azure-monitor/reference/tables/NCCKubernetesLogs)<br>[NCCVMOrchestrationLogs](/azure/azure-monitor/reference/tables/NCCVMOrchestrationLogs) | | Nexus Storage Appliances | [NCSStorageLogs](/azure/azure-monitor/reference/tables/NCSStorageLogs)<br>[NCSStorageAlerts](/azure/azure-monitor/reference/tables/NCSStorageAlerts) | | Redis cache | [ACRConnectedClientList](/azure/azure-monitor/reference/tables/ACRConnectedClientList) | |
azure-monitor | Manage Logs Tables | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-logs-tables.md | Reduce costs and analysis effort by using data collection rules to [filter out a ## View table properties +> [!NOTE] +> The table name is case sensitive. + # [Portal](#tab/azure-portal) To view and set table properties in the Azure portal: To view table properties using PowerShell, run: Invoke-AzRestMethod -Path "/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/microsoft.operationalinsights/workspaces/ContosoWorkspace/tables/Heartbeat?api-version=2021-12-01-preview" -Method GET ``` -> [!NOTE] -> The table name used in the `-Path` parameter is case sensitive. - **Sample response** ```json |
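The same endpoint also accepts updates. A hedged sketch of a `PATCH` call that changes a table property follows; the payload is an assumption meant to show the shape of the request, so verify property names against the current API version:

```powershell
$tableParams = @'
{
    "properties": {
        "totalRetentionInDays": 730
    }
}
'@

Invoke-AzRestMethod -Path "/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/microsoft.operationalinsights/workspaces/ContosoWorkspace/tables/Heartbeat?api-version=2021-12-01-preview" -Method PATCH -Payload $tableParams
```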
azure-monitor | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md | Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
azure-monitor | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md | This article lists significant changes to Azure Monitor documentation. > > !["An rss icon"](./media//whats-new/rss.png) https://aka.ms/azmon/rss +## October 2023 ++|Subservice | Article | Description | +|---|---|---| +General|[Best practices for monitoring Kubernetes with Azure Monitor](best-practices-containers.md)|New article.| General|[Estimate Azure Monitor costs](cost-estimate.md)|New article describing use of the Azure Monitor pricing calculator.| General|[Azure Monitor billing meter names](cost-meters.md)|Billing meters moved into a dedicated reference article.| General|[Azure Monitor cost and usage](cost-usage.md)|Rewritten.| Agents|[Collect logs from a text or JSON file with Azure Monitor Agent](agents/data-collection-text-log.md)|Added the ability to collect logs from a JSON file with Azure Monitor Agent.| Alerts|[Create or edit an alert rule](alerts/alerts-create-new-alert-rule.md)|Custom properties for Azure Monitor alerts are now located in the Details tab when creating or editing an alert rule. | Alerts|[Create or edit an alert rule](alerts/alerts-create-new-alert-rule.md)|Added note clarifying the limitations of setting the frequency of alert rules to one minute. | Application-Insights|[IP addresses used by Azure Monitor](app/ip-addresses.md)|A logic model diagram is available to assist with troubleshooting scenarios.| Application-Insights|[Application Insights Overview dashboard](app/overview-dashboard.md)|All of the Application Insights experiences are now defined in a manner that mirrors the Azure portal experience. We've included a logic model diagram to visually convey how Application Insights works at a high level.| Application-Insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python, and Java applications](app/opentelemetry-enable.md)|Our OpenTelemetry Distro was released for .NET, Java, Python, and Node.js as a replacement for the classic Application Insights SDKs.| Essentials|[Collect IIS logs with Azure Monitor Agent](agents/data-collection-iis.md)|Added guidance on setting up data collection endpoints based on deployment.| Logs|[Restore logs in Azure Monitor](logs/restore.md)|Updated information about the cost of restoring logs. | Logs|[Log Analytics workspace data export in Azure Monitor](logs/logs-data-export.md)|Billing for Data Export was enabled in early October 2023.| Logs|[Analyze usage in a Log Analytics workspace](logs/analyze-usage.md)|Added support for querying data volume from events directly, and by computer.| ++ ## September 2023 |Subservice | Article | Description | |
azure-netapp-files | Azure Netapp Files Solution Architectures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md | This section provides references to SAP on Azure solutions. * [SAP HANA Azure virtual machine storage configurations](../virtual-machines/workloads/sap/hana-vm-operations-storage.md) * [SAP on Azure NetApp Files Sizing Best Practices](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-netapp-files-sizing-best-practices/ba-p/3895300) * [Optimize HANA deployments with Azure NetApp Files application volume group for SAP HANA](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/optimize-hana-deployments-with-azure-netapp-files-application/ba-p/3683417)+* [Configuring Azure NetApp Files Application Volume Group (AVG) for zonal SAP HANA deployment](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/configuring-azure-netapp-files-anf-application-volume-group-avg/ba-p/3943801) * [Using Azure NetApp Files AVG for SAP HANA to deploy HANA with multiple partitions (MP)](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/using-azure-netapp-files-avg-for-sap-hana-to-deploy-hana-with/ba-p/3742747) * [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md) * [High availability of SAP HANA Scale-up with Azure NetApp Files on Red Hat Enterprise Linux](../virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-red-hat.md) |
azure-netapp-files | Backup Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md | Azure NetApp Files backup is supported for the following regions: * East US * East US 2 * France Central+* Germany North * Germany West Central * Japan East * Japan West |
azure-portal | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md | Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md | Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md | Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
azure-resource-manager | Azure Services Resource Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md | Title: Resource providers by Azure services description: Lists all resource provider namespaces for Azure Resource Manager and shows the Azure service for that namespace. Previously updated : 08/02/2023 Last updated : 11/06/2023 content_well_notification: - AI-contribution content_well_notification: # Resource providers for Azure services -This article shows how resource provider namespaces map to Azure services. If you don't know the resource provider, see [Find resource provider](#find-resource-provider). +This article connects resource provider namespaces to Azure services. If you don't know the resource provider, see [Find resource provider](#find-resource-provider). -## Match resource provider to service +## AI and machine learning resource providers -The resources providers that are marked with **- registered** are registered by default for your subscription. For more information, see [Registration](#registration). +| Resource provider namespace | Azure service | +| | - | +| Microsoft.AutonomousSystems | [Autonomous Systems](https://www.microsoft.com/ai/autonomous-systems) | +| Microsoft.BotService | [Azure Bot Service](/azure/bot-service/) | +| Microsoft.CognitiveServices | [Cognitive Services](../../ai-services/index.yml) | +| Microsoft.EnterpriseKnowledgeGraph | Enterprise Knowledge Graph | +| Microsoft.MachineLearning | [Machine Learning Studio](../../machine-learning/classic/index.yml) | +| Microsoft.MachineLearningServices | [Azure Machine Learning](../../machine-learning/index.yml) | +| Microsoft.Search | [Azure Cognitive Search](../../search/index.yml) | ++## Analytics resource providers | Resource provider namespace | Azure service | | | - |-| Microsoft.AAD | [Microsoft Entra Domain Services](../../active-directory-domain-services/index.yml) | -| Microsoft.Addons | core | -| Microsoft.App | [Azure Container Apps](../../container-apps/index.yml) | -| Microsoft.ADHybridHealthService - [registered](#registration) | [Microsoft Entra ID](../../active-directory/index.yml) | -| Microsoft.Advisor | [Azure Advisor](../../advisor/index.yml) | -| Microsoft.AlertsManagement | [Azure Monitor](../../azure-monitor/index.yml) | | Microsoft.AnalysisServices | [Azure Analysis Services](../../analysis-services/index.yml) |-| Microsoft.ApiManagement | [API Management](../../api-management/index.yml) | -| Microsoft.AppConfiguration | [Azure App Configuration](../../azure-app-configuration/index.yml) | +| Microsoft.Databricks | [Azure Databricks](/azure/azure-databricks/) | +| Microsoft.DataCatalog | [Data Catalog](../../data-catalog/index.yml) | +| Microsoft.DataFactory | [Data Factory](../../data-factory/index.yml) | +| Microsoft.DataLakeAnalytics | [Data Lake Analytics](../../data-lake-analytics/index.yml) | +| Microsoft.DataLakeStore | [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md) | +| Microsoft.DataShare | [Azure Data Share](../../data-share/index.yml) | +| Microsoft.HDInsight | [HDInsight](../../hdinsight/index.yml) | +| Microsoft.Kusto | [Azure Data Explorer](/azure/data-explorer/) | +| Microsoft.PowerBI | [Power BI](/power-bi/power-bi-overview) | +| Microsoft.PowerBIDedicated | [Power BI Embedded](/azure/power-bi-embedded/) | +| Microsoft.ProjectBabylon | [Azure Data Catalog](../../data-catalog/overview.md) | +| Microsoft.Purview | [Microsoft 
Purview](/purview/purview) | +| Microsoft.StreamAnalytics | [Azure Stream Analytics](../../stream-analytics/index.yml) | +| Microsoft.Synapse | [Azure Synapse Analytics](/azure/sql-data-warehouse/) | ++## Blockchain resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.Blockchain | [Azure Blockchain Service](../../blockchain/workbench/index.yml) | +| Microsoft.BlockchainTokens | [Azure Blockchain Tokens](https://azure.microsoft.com/services/blockchain-tokens/) | ++## Compute resource providers ++| Resource provider namespace | Azure service | +| | - | | Microsoft.AppPlatform | [Azure Spring Apps](../../spring-apps/overview.md) |-| Microsoft.Attestation | Azure Attestation Service | -| Microsoft.Authorization - [registered](#registration) | [Azure Resource Manager](../index.yml) | -| Microsoft.Automation | [Automation](../../automation/index.yml) | -| Microsoft.AutonomousSystems | [Autonomous Systems](https://www.microsoft.com/ai/autonomous-systems) | | Microsoft.AVS | [Azure VMware Solution](../../azure-vmware/index.yml) |-| Microsoft.AzureActiveDirectory | [Microsoft Entra ID B2C](../../active-directory-b2c/index.yml) | -| Microsoft.AzureArcData | Azure Arc-enabled data services | -| Microsoft.AzureData | SQL Server registry | -| Microsoft.AzureStack | core | -| Microsoft.AzureStackHCI | [Azure Stack HCI](/azure-stack/hci/overview) | | Microsoft.Batch | [Batch](../../batch/index.yml) |-| Microsoft.Billing - [registered](#registration) | [Cost Management and Billing](/azure/billing/) | -| Microsoft.BingMaps | [Bing Maps](/BingMaps/#pivot=main&panel=BingMapsAPI) | -| Microsoft.Blockchain | [Azure Blockchain Service](../../blockchain/workbench/index.yml) | -| Microsoft.BlockchainTokens | [Azure Blockchain Tokens](https://azure.microsoft.com/services/blockchain-tokens/) | -| Microsoft.Blueprint | [Azure Blueprints](../../governance/blueprints/index.yml) | -| Microsoft.BotService | [Azure Bot Service](/azure/bot-service/) | -| Microsoft.Cache | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | -| Microsoft.Capacity | core | -| Microsoft.Cdn | [Content Delivery Network](../../cdn/index.yml) | -| Microsoft.CertificateRegistration | [App Service Certificates](../../app-service/configure-ssl-app-service-certificate.md) | -| Microsoft.ChangeAnalysis | [Azure Monitor](../../azure-monitor/index.yml) | | Microsoft.ClassicCompute | Classic deployment model virtual machine |-| Microsoft.ClassicInfrastructureMigrate | Classic deployment model migration | -| Microsoft.ClassicNetwork | Classic deployment model virtual network | -| Microsoft.ClassicStorage | Classic deployment model storage | -| Microsoft.ClassicSubscription - [registered](#registration) | Classic deployment model | -| Microsoft.CognitiveServices | [Cognitive Services](../../ai-services/index.yml) | -| Microsoft.Commerce - [registered](#registration) | core | -| Microsoft.Communication | [Azure Communication Services](../../communication-services/overview.md) | | Microsoft.Compute | [Virtual Machines](../../virtual-machines/index.yml)<br />[Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) |-| Microsoft.Consumption - [registered](#registration) | [Cost Management](/azure/cost-management/) | +| Microsoft.DesktopVirtualization | [Azure Virtual Desktop](../../virtual-desktop/index.yml) | +| Microsoft.DevTestLab | [Azure Lab Services](../../lab-services/index.yml) | +| Microsoft.HanaOnAzure | [SAP HANA on Azure Large 
Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md) | +| Microsoft.LabServices | [Azure Lab Services](../../lab-services/index.yml) | +| Microsoft.Maintenance | [Azure Maintenance](../../virtual-machines/maintenance-configurations.md) | +| Microsoft.Microservices4Spring | [Azure Spring Apps](../../spring-apps/overview.md) | +| Microsoft.Quantum | [Azure Quantum](https://azure.microsoft.com/services/quantum/) | +| Microsoft.SerialConsole - [registered by default](#registration) | [Azure Serial Console for Windows](/troubleshoot/azure/virtual-machines/serial-console-windows) | +| Microsoft.ServiceFabric | [Service Fabric](../../service-fabric/index.yml) | +| Microsoft.VirtualMachineImages | [Azure Image Builder](../../virtual-machines/image-builder-overview.md) | +| Microsoft.VMware | [Azure VMware Solution](../../azure-vmware/index.yml) | +| Microsoft.VMwareCloudSimple | [Azure VMware Solution by CloudSimple](../../vmware-cloudsimple/index.md) | ++## Container resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.App | [Azure Container Apps](../../container-apps/index.yml) | | Microsoft.ContainerInstance | [Container Instances](../../container-instances/index.yml) | | Microsoft.ContainerRegistry | [Container Registry](../../container-registry/index.yml) | | Microsoft.ContainerService | [Azure Kubernetes Service (AKS)](../../aks/index.yml) |-| Microsoft.CostManagement - [registered](#registration) | [Cost Management](/azure/cost-management/) | -| Microsoft.CostManagementExports | [Cost Management](/azure/cost-management/) | -| Microsoft.CustomerLockbox | [Customer Lockbox for Microsoft Azure](../../security/fundamentals/customer-lockbox-overview.md) | -| Microsoft.CustomProviders | [Azure Custom Providers](../custom-providers/overview.md) | -| Microsoft.DataBox | [Azure Data Box](../../databox/index.yml) | -| Microsoft.DataBoxEdge | [Azure Stack Edge](../../databox-online/azure-stack-edge-overview.md) | -| Microsoft.Databricks | [Azure Databricks](/azure/azure-databricks/) | -| Microsoft.DataCatalog | [Data Catalog](../../data-catalog/index.yml) | -| Microsoft.DataFactory | [Data Factory](../../data-factory/index.yml) | -| Microsoft.DataLakeAnalytics | [Data Lake Analytics](../../data-lake-analytics/index.yml) | -| Microsoft.DataLakeStore | [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md) | -| Microsoft.DataMigration | [Azure Database Migration Service](../../dms/index.yml) | -| Microsoft.DataProtection | Data Protection | -| Microsoft.DataShare | [Azure Data Share](../../data-share/index.yml) | +| Microsoft.RedHatOpenShift | [Azure Red Hat OpenShift](../../virtual-machines/linux/openshift-get-started.md) | ++## Core resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.Addons | core | +| Microsoft.AzureStack | core | +| Microsoft.Capacity | core | +| Microsoft.Commerce - [registered by default](#registration) | core | +| Microsoft.Marketplace | core | +| Microsoft.MarketplaceApps | core | +| Microsoft.MarketplaceOrdering - [registered by default](#registration) | core | +| Microsoft.SaaS | core | +| Microsoft.Services | core | +| Microsoft.Subscription | core | +| microsoft.support - [registered by default](#registration) | core | ++## Database resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.AzureData | SQL Server registry | +| Microsoft.Cache | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | | 
Microsoft.DBforMariaDB | [Azure Database for MariaDB](../../mariadb/index.yml) | | Microsoft.DBforMySQL | [Azure Database for MySQL](../../mysql/index.yml) | | Microsoft.DBforPostgreSQL | [Azure Database for PostgreSQL](../../postgresql/index.yml) |-| Microsoft.DesktopVirtualization | [Azure Virtual Desktop](../../virtual-desktop/index.yml) | -| Microsoft.Devices | [Azure IoT Hub](../../iot-hub/index.yml)<br />[Azure IoT Hub Device Provisioning Service](../../iot-dps/index.yml) | -| Microsoft.DeviceUpdate | [Device Update for IoT Hub](../../iot-hub-device-update/index.yml) -| Microsoft.DevSpaces | [Azure Dev Spaces](/previous-versions/azure/dev-spaces/) | -| Microsoft.DevTestLab | [Azure Lab Services](../../lab-services/index.yml) | -| Microsoft.DigitalTwins | [Azure Digital Twins](../../digital-twins/overview.md) | | Microsoft.DocumentDB | [Azure Cosmos DB](../../cosmos-db/index.yml) |-| Microsoft.DomainRegistration | [App Service](../../app-service/index.yml) | -| Microsoft.DynamicsLcs | [Lifecycle Services](https://lcs.dynamics.com/Logon/Index) | -| Microsoft.ElasticSan | [Elastic SAN Preview](../../storage/elastic-san/index.yml) | -| Microsoft.EnterpriseKnowledgeGraph | Enterprise Knowledge Graph | +| Microsoft.Sql | [Azure SQL Database](/azure/azure-sql/database/index)<br /> [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/index) <br />[Azure Synapse Analytics](/azure/sql-data-warehouse/) | +| Microsoft.SqlVirtualMachine | [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) | ++## Developer tools resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.AppConfiguration | [Azure App Configuration](../../azure-app-configuration/index.yml) | +| Microsoft.DevSpaces | [Azure Dev Spaces](/previous-versions/azure/dev-spaces/) | +| Microsoft.MixedReality | [Azure Spatial Anchors](../../spatial-anchors/index.yml) | +| Microsoft.Notebooks | [Azure Notebooks](https://notebooks.azure.com/help/introduction) | ++## DevOps resource providers ++| Resource provider namespace | Azure service | +| | - | +| microsoft.visualstudio | [Azure DevOps](/azure/devops/) | +| Microsoft.VSOnline | [Azure DevOps](/azure/devops/) | ++## Hybrid resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.AzureArcData | Azure Arc-enabled data services | +| Microsoft.AzureStackHCI | [Azure Stack HCI](/azure-stack/hci/overview) | +| Microsoft.HybridCompute | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | +| Microsoft.Kubernetes | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | +| Microsoft.KubernetesConfiguration | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | ++## Identity resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.AAD | [Microsoft Entra Domain Services](../../active-directory-domain-services/index.yml) | +| Microsoft.ADHybridHealthService - [registered by default](#registration) | [Microsoft Entra ID](../../active-directory/index.yml) | +| Microsoft.AzureActiveDirectory | [Microsoft Entra ID B2C](../../active-directory-b2c/index.yml) | +| Microsoft.ManagedIdentity | [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/index.yml) | +| Microsoft.Token | Token | ++## Integration resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.ApiManagement | [API 
Management](../../api-management/index.yml) | +| Microsoft.Communication | [Azure Communication Services](../../communication-services/overview.md) | | Microsoft.EventGrid | [Event Grid](../../event-grid/index.yml) | | Microsoft.EventHub | [Event Hubs](../../event-hubs/index.yml) |-| Microsoft.Features - [registered](#registration) | [Azure Resource Manager](../index.yml) | -| Microsoft.GuestConfiguration | [Azure Policy](../../governance/policy/index.yml) | -| Microsoft.HanaOnAzure | [SAP HANA on Azure Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md) | -| Microsoft.HardwareSecurityModules | [Azure Dedicated HSM](../../dedicated-hsm/index.yml) | -| Microsoft.HDInsight | [HDInsight](../../hdinsight/index.yml) | | Microsoft.HealthcareApis (Azure API for FHIR) | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | | Microsoft.HealthcareApis (Healthcare APIs) | [Healthcare APIs](../../healthcare-apis/index.yml) |-| Microsoft.HybridCompute | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | -| Microsoft.HybridData | [StorSimple](../../storsimple/index.yml) | -| Microsoft.HybridNetwork | [Network Function Manager](../../network-function-manager/index.yml) | -| Microsoft.ImportExport | [Azure Import/Export](../../import-export/storage-import-export-service.md) | -| Microsoft.Insights | [Azure Monitor](../../azure-monitor/index.yml) | +| Microsoft.Logic | [Logic Apps](../../logic-apps/index.yml) | +| Microsoft.NotificationHubs | [Notification Hubs](../../notification-hubs/index.yml) | +| Microsoft.PowerPlatform | [Power Platform](/power-platform/) | +| Microsoft.Relay | [Azure Relay](../../azure-relay/relay-what-is-it.md) | +| Microsoft.ServiceBus | [Service Bus](/azure/service-bus/) | ++## IoT resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.Devices | [Azure IoT Hub](../../iot-hub/index.yml)<br />[Azure IoT Hub Device Provisioning Service](../../iot-dps/index.yml) | +| Microsoft.DeviceUpdate | [Device Update for IoT Hub](../../iot-hub-device-update/index.yml) | +| Microsoft.DigitalTwins | [Azure Digital Twins](../../digital-twins/overview.md) | | Microsoft.IoTCentral | [Azure IoT Central](../../iot-central/index.yml) | | Microsoft.IoTSpaces | [Azure Digital Twins](../../digital-twins/index.yml) |-| Microsoft.Intune | [Azure Monitor](../../azure-monitor/index.yml) | -| Microsoft.KeyVault | [Key Vault](../../key-vault/index.yml) | -| Microsoft.Kubernetes | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | -| Microsoft.KubernetesConfiguration | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | -| Microsoft.Kusto | [Azure Data Explorer](/azure/data-explorer/) | -| Microsoft.LabServices | [Azure Lab Services](../../lab-services/index.yml) | -| Microsoft.Logic | [Logic Apps](../../logic-apps/index.yml) | -| Microsoft.MachineLearning | [Machine Learning Studio](../../machine-learning/classic/index.yml) | -| Microsoft.MachineLearningServices | [Azure Machine Learning](../../machine-learning/index.yml) | -| Microsoft.Maintenance | [Azure Maintenance](../../virtual-machines/maintenance-configurations.md) | -| Microsoft.ManagedIdentity | [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/index.yml) | -| Microsoft.ManagedNetwork | Virtual networks managed by PaaS services | +| Microsoft.TimeSeriesInsights | [Azure Time Series Insights](../../time-series-insights/index.yml) | +| Microsoft.WindowsIoT | 
[Windows 10 IoT Core Services](/windows-hardware/manufacture/iot/iotcoreservicesoverview) | ++## Management resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.Advisor | [Azure Advisor](../../advisor/index.yml) | +| Microsoft.Authorization - [registered by default](#registration) | [Azure Resource Manager](../index.yml) | +| Microsoft.Automation | [Automation](../../automation/index.yml) | +| Microsoft.Billing - [registered by default](#registration) | [Cost Management and Billing](/azure/billing/) | +| Microsoft.Blueprint | [Azure Blueprints](../../governance/blueprints/index.yml) | +| Microsoft.ClassicSubscription - [registered by default](#registration) | Classic deployment model | +| Microsoft.Consumption - [registered by default](#registration) | [Cost Management](/azure/cost-management/) | +| Microsoft.CostManagement - [registered by default](#registration) | [Cost Management](/azure/cost-management/) | +| Microsoft.CostManagementExports | [Cost Management](/azure/cost-management/) | +| Microsoft.CustomProviders | [Azure Custom Providers](../custom-providers/overview.md) | +| Microsoft.DynamicsLcs | [Lifecycle Services](https://lcs.dynamics.com/Logon/Index) | +| Microsoft.Features - [registered by default](#registration) | [Azure Resource Manager](../index.yml) | +| Microsoft.GuestConfiguration | [Azure Policy](../../governance/policy/index.yml) | | Microsoft.ManagedServices | [Azure Lighthouse](../../lighthouse/index.yml) | | Microsoft.Management | [Management Groups](../../governance/management-groups/index.yml) |-| Microsoft.Maps | [Azure Maps](../../azure-maps/index.yml) | -| Microsoft.Marketplace | core | -| Microsoft.MarketplaceApps | core | -| Microsoft.MarketplaceOrdering - [registered](#registration) | core | +| Microsoft.PolicyInsights | [Azure Policy](../../governance/policy/index.yml) | +| Microsoft.Portal - [registered by default](#registration) | [Azure portal](../../azure-portal/index.yml) | +| Microsoft.RecoveryServices | [Azure Site Recovery](../../site-recovery/index.yml) | +| Microsoft.ResourceGraph - [registered by default](#registration) | [Azure Resource Graph](../../governance/resource-graph/index.yml) | +| Microsoft.ResourceHealth | [Azure Service Health](../../service-health/index.yml) | +| Microsoft.Resources - [registered by default](#registration) | [Azure Resource Manager](../index.yml) | +| Microsoft.Scheduler | [Scheduler](../../scheduler/index.yml) | +| Microsoft.SoftwarePlan | License | +| Microsoft.Solutions | [Azure Managed Applications](../managed-applications/index.yml) | ++## Media resource providers ++| Resource provider namespace | Azure service | +| | - | | Microsoft.Media | [Media Services](/azure/media-services/) |-| Microsoft.Microservices4Spring | [Azure Spring Apps](../../spring-apps/overview.md) | -| Microsoft.Migrate | [Azure Migrate](../../migrate/migrate-services-overview.md) | -| Microsoft.MixedReality | [Azure Spatial Anchors](../../spatial-anchors/index.yml) | -| Microsoft.MobileNetwork | [Azure Private 5G Core](../../private-5g-core/index.yml) | -| Microsoft.NetApp | [Azure NetApp Files](../../azure-netapp-files/index.yml) | -| Microsoft.Network | [Application Gateway](../../application-gateway/index.yml)<br />[Azure Bastion](../../bastion/index.yml)<br />[Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md)<br />[Azure DNS](../../dns/index.yml)<br />[Azure ExpressRoute](../../expressroute/index.yml)<br />[Azure Firewall](../../firewall/index.yml)<br />[Azure Front Door 
Service](../../frontdoor/index.yml)<br />[Azure Private Link](../../private-link/index.yml)<br />[Azure Route Server](../../route-server/index.yml)<br />[Load Balancer](../../load-balancer/index.yml)<br />[Network Watcher](../../network-watcher/index.yml)<br />[Traffic Manager](../../traffic-manager/index.yml)<br />[Virtual Network](../../virtual-network/index.yml)<br />[Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md)<br />[Virtual WAN](../../virtual-wan/index.yml)<br />[VPN Gateway](../../vpn-gateway/index.yml)<br /> | -| Microsoft.Notebooks | [Azure Notebooks](https://notebooks.azure.com/help/introduction) | -| Microsoft.NotificationHubs | [Notification Hubs](../../notification-hubs/index.yml) | -| Microsoft.ObjectStore | Object Store | ++## Migration resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.ClassicInfrastructureMigrate | Classic deployment model migration | +| Microsoft.DataBox | [Azure Data Box](../../databox/index.yml) | +| Microsoft.DataBoxEdge | [Azure Stack Edge](../../databox-online/azure-stack-edge-overview.md) | +| Microsoft.DataMigration | [Azure Database Migration Service](../../dms/index.yml) | | Microsoft.OffAzure | [Azure Migrate](../../migrate/migrate-services-overview.md) |+| Microsoft.Migrate | [Azure Migrate](../../migrate/migrate-services-overview.md) | ++## Monitoring resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.AlertsManagement | [Azure Monitor](../../azure-monitor/index.yml) | +| Microsoft.ChangeAnalysis | [Azure Monitor](../../azure-monitor/index.yml) | +| Microsoft.Insights | [Azure Monitor](../../azure-monitor/index.yml) | +| Microsoft.Intune | [Azure Monitor](../../azure-monitor/index.yml) | | Microsoft.OperationalInsights | [Azure Monitor](../../azure-monitor/index.yml) | | Microsoft.OperationsManagement | [Azure Monitor](../../azure-monitor/index.yml) |+| Microsoft.WorkloadMonitor | [Azure Monitor](../../azure-monitor/index.yml) | ++## Network resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.Cdn | [Content Delivery Network](../../cdn/index.yml) | +| Microsoft.ClassicNetwork | Classic deployment model virtual network | +| Microsoft.ManagedNetwork | Virtual networks managed by PaaS services | +| Microsoft.Network | [Application Gateway](../../application-gateway/index.yml)<br />[Azure Bastion](../../bastion/index.yml)<br />[Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md)<br />[Azure DNS](../../dns/index.yml)<br />[Azure ExpressRoute](../../expressroute/index.yml)<br />[Azure Firewall](../../firewall/index.yml)<br />[Azure Front Door Service](../../frontdoor/index.yml)<br />[Azure Private Link](../../private-link/index.yml)<br />[Azure Route Server](../../route-server/index.yml)<br />[Load Balancer](../../load-balancer/index.yml)<br />[Network Watcher](../../network-watcher/index.yml)<br />[Traffic Manager](../../traffic-manager/index.yml)<br />[Virtual Network](../../virtual-network/index.yml)<br />[Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md)<br />[Virtual WAN](../../virtual-wan/index.yml)<br />[VPN Gateway](../../vpn-gateway/index.yml)<br /> | | Microsoft.Peering | [Azure Peering Service](../../peering-service/index.yml) |-| Microsoft.PolicyInsights | [Azure Policy](../../governance/policy/index.yml) | -| Microsoft.Portal - [registered](#registration) | [Azure portal](../../azure-portal/index.yml) | -| Microsoft.PowerBI | [Power 
BI](/power-bi/power-bi-overview) | -| Microsoft.PowerBIDedicated | [Power BI Embedded](/azure/power-bi-embedded/) | -| Microsoft.PowerPlatform | [Power Platform](/power-platform/) | -| Microsoft.ProjectBabylon | [Azure Data Catalog](../../data-catalog/overview.md) | -| Microsoft.Quantum | [Azure Quantum](https://azure.microsoft.com/services/quantum/) | -| Microsoft.RecoveryServices | [Azure Site Recovery](../../site-recovery/index.yml) | -| Microsoft.RedHatOpenShift | [Azure Red Hat OpenShift](../../virtual-machines/linux/openshift-get-started.md) | -| Microsoft.Relay | [Azure Relay](../../azure-relay/relay-what-is-it.md) | -| Microsoft.ResourceGraph - [registered](#registration) | [Azure Resource Graph](../../governance/resource-graph/index.yml) | -| Microsoft.ResourceHealth | [Azure Service Health](../../service-health/index.yml) | -| Microsoft.Resources - [registered](#registration) | [Azure Resource Manager](../index.yml) | -| Microsoft.SaaS | core | -| Microsoft.Scheduler | [Scheduler](../../scheduler/index.yml) | -| Microsoft.Search | [Azure Cognitive Search](../../search/index.yml) | ++## Security resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.Attestation | [Azure Attestation Service](../../attestation/overview.md) | +| Microsoft.CustomerLockbox | [Customer Lockbox for Microsoft Azure](../../security/fundamentals/customer-lockbox-overview.md) | +| Microsoft.DataProtection | Data Protection | +| Microsoft.HardwareSecurityModules | [Azure Dedicated HSM](../../dedicated-hsm/index.yml) | +| Microsoft.KeyVault | [Key Vault](../../key-vault/index.yml) | | Microsoft.Security | [Security Center](../../security-center/index.yml) | | Microsoft.SecurityInsights | [Microsoft Sentinel](../../sentinel/index.yml) |-| Microsoft.SerialConsole - [registered](#registration) | [Azure Serial Console for Windows](/troubleshoot/azure/virtual-machines/serial-console-windows) | -| Microsoft.ServiceBus | [Service Bus](/azure/service-bus/) | -| Microsoft.ServiceFabric | [Service Fabric](../../service-fabric/index.yml) | -| Microsoft.Services | core | -| Microsoft.SignalRService | [Azure SignalR Service](../../azure-signalr/index.yml) | -| Microsoft.SoftwarePlan | License | -| Microsoft.Solutions | [Azure Managed Applications](../managed-applications/index.yml) | -| Microsoft.Sql | [Azure SQL Database](/azure/azure-sql/database/index)<br /> [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/index) <br />[Azure Synapse Analytics](/azure/sql-data-warehouse/) | -| Microsoft.SqlVirtualMachine | [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) | +| Microsoft.WindowsDefenderATP | [Microsoft Defender Advanced Threat Protection](../../security-center/security-center-wdatp.md) | +| Microsoft.WindowsESU | Extended Security Updates | ++## Storage resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.ClassicStorage | Classic deployment model storage | +| Microsoft.ElasticSan | [Elastic SAN Preview](../../storage/elastic-san/index.yml) | +| Microsoft.HybridData | [StorSimple](../../storsimple/index.yml) | +| Microsoft.ImportExport | [Azure Import/Export](../../import-export/storage-import-export-service.md) | +| Microsoft.NetApp | [Azure NetApp Files](../../azure-netapp-files/index.yml) | +| Microsoft.ObjectStore | Object Store | | Microsoft.Storage | [Storage](../../storage/index.yml) | | Microsoft.StorageCache | [Azure HPC 
Cache](../../hpc-cache/index.yml) | | Microsoft.StorageSync | [Storage](../../storage/index.yml) | | Microsoft.StorSimple | [StorSimple](../../storsimple/index.yml) |-| Microsoft.StreamAnalytics | [Azure Stream Analytics](../../stream-analytics/index.yml) | -| Microsoft.Subscription | core | -| microsoft.support - [registered](#registration) | core | -| Microsoft.Synapse | [Azure Synapse Analytics](/azure/sql-data-warehouse/) | -| Microsoft.TimeSeriesInsights | [Azure Time Series Insights](../../time-series-insights/index.yml) | -| Microsoft.Token | Token | -| Microsoft.VirtualMachineImages | [Azure Image Builder](../../virtual-machines/image-builder-overview.md) | -| microsoft.visualstudio | [Azure DevOps](/azure/devops/) | -| Microsoft.VMware | [Azure VMware Solution](../../azure-vmware/index.yml) | -| Microsoft.VMwareCloudSimple | [Azure VMware Solution by CloudSimple](../../vmware-cloudsimple/index.md) | -| Microsoft.VSOnline | [Azure DevOps](/azure/devops/) | ++## Web resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.BingMaps | [Bing Maps](/BingMaps/#pivot=main&panel=BingMapsAPI) | +| Microsoft.CertificateRegistration | [App Service Certificates](../../app-service/configure-ssl-app-service-certificate.md) | +| Microsoft.DomainRegistration | [App Service](../../app-service/index.yml) | +| Microsoft.Maps | [Azure Maps](../../azure-maps/index.yml) | +| Microsoft.SignalRService | [Azure SignalR Service](../../azure-signalr/index.yml) | | Microsoft.Web | [App Service](../../app-service/index.yml)<br />[Azure Functions](../../azure-functions/index.yml) |-| Microsoft.WindowsDefenderATP | [Microsoft Defender Advanced Threat Protection](../../security-center/security-center-wdatp.md) | -| Microsoft.WindowsESU | Extended Security Updates | -| Microsoft.WindowsIoT | [Windows 10 IoT Core Services](/windows-hardware/manufacture/iot/iotcoreservicesoverview) | -| Microsoft.WorkloadMonitor | [Azure Monitor](../../azure-monitor/index.yml) | ++## 5G & Space resource providers ++| Resource provider namespace | Azure service | +| | - | +| Microsoft.HybridNetwork | [Network Function Manager](../../network-function-manager/index.yml) | +| Microsoft.MobileNetwork | [Azure Private 5G Core](../../private-5g-core/index.yml) | +| Microsoft.Orbital | [Azure Orbital Ground Station](../../orbital/overview.md) | ## Registration -Resource providers marked with **- registered** in the previous section are automatically registered for your subscription. For other resource providers, you need to [register them](resource-providers-and-types.md). However, many resource providers are registered automatically when you perform specific actions. For example, when you create resources through the portal or by deploying an [Azure Resource Manager template](../templates/overview.md), Azure Resource Manager automatically registers any required unregistered resource providers. +Resource providers marked with **- registered by default** in the previous section are automatically registered for your subscription. For other resource providers, you need to [register them](resource-providers-and-types.md). However, many resource providers are registered automatically when you perform specific actions. For example, when you create resources through the portal or by deploying an [Azure Resource Manager template](../templates/overview.md), Azure Resource Manager automatically registers any required unregistered resource providers. 
> [!IMPORTANT] > Register a resource provider only when you're ready to use it. This registration step helps maintain least privileges within your subscription. A malicious user can't use unregistered resource providers. |
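To make the registration workflow concrete, here's a minimal Azure CLI sketch; `Microsoft.ContainerService` is just an example namespace taken from the tables above, so substitute whichever provider you actually need.

```azurecli
# Check the current registration state of a resource provider in the active subscription.
az provider show --namespace Microsoft.ContainerService --query registrationState --output tsv

# Register the provider if the state is NotRegistered; registration can take several minutes.
az provider register --namespace Microsoft.ContainerService

# Re-run the show command until the state reports Registered.
az provider show --namespace Microsoft.ContainerService --query registrationState --output tsv
```

The same pair of commands works for any namespace in the tables above, which is why portal and template deployments that reference a new resource type can pause briefly while Azure Resource Manager registers the provider on your behalf.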
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md | Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
azure-signalr | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md | Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
azure-vmware | Azure Vmware Solution Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md | Refer to the table to find details about resolution dates or possible workarounds |Issue | Date discovered | Workaround | Date resolved | | :- | : | :- | :- |+| The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) |Nov 2023 |N/A|N/A| | [VMSA-2021-002 ESXiArgs](https://www.vmware.com/security/advisories/VMSA-2021-0002.html) OpenSLP vulnerability publicized in February 2023 | 2021 | [Disable OpenSLP service](https://kb.vmware.com/s/article/76372) | February 2021 - Resolved in [ESXi 7.0 U3c](concepts-private-clouds-clusters.md#vmware-software-versions) | | After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/index.html), the NSX-T Manager **DNS - Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](concepts-design-public-internet-access.md), alarm is raised because NSX-T Manager cannot access the configured CloudFlare DNS server. Otherwise, [change the default DNS zone to point to a valid and reachable DNS server.](configure-dns-azure-vmware-solution.md) | February 2023 | | When first logging into the vSphere Client, the **Cluster-n: vSAN health alarms are suppressed** alert is active in the vSphere Client | 2021 | This alert should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 | |
azure-vmware | Deploy Arc For Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md | Title: Deploy Arc for Azure VMware Solution (Preview) + Title: Deploy Arc-enabled Azure VMware Solution description: Learn how to set up and enable Arc for your Azure VMware Solution private cloud. Previously updated : 08/28/2023 Last updated : 11/03/2023 +# Deploy Arc-enabled Azure VMware Solution -# Deploy Arc for Azure VMware Solution (Preview) +In this article, learn how to deploy Arc for Azure VMware Solution. Once you set up the components needed for this public preview, you're ready to execute operations in Azure VMware Solution vCenter Server from the Azure portal. Arc-enabled Azure VMware Solution allows you to perform the following actions: -In this article, you'll learn how to deploy Arc for Azure VMware Solution. Once you've set up the components needed for this public preview, you'll be ready to execute operations in Azure VMware Solution vCenter Server from the Azure portal. Operations are related to Create, Read, Update, and Delete (CRUD) virtual machines (VMs) in an Arc-enabled Azure VMware Solution private cloud. Users can also enable guest management and install Azure extensions once the private cloud is Arc-enabled. +- Identify your VMware vSphere resources (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register them with Arc at scale. +- Perform different virtual machine (VM) operations directly from Azure, such as create, resize, delete, and power cycle (start/stop/restart) operations, on VMware VMs consistently with Azure. +- Permit developers and application teams to use VM operations on demand with [role-based access control (RBAC)](https://learn.microsoft.com/azure/role-based-access-control/overview). +- Install the Arc-connected machine agent to [govern, protect, configure, and monitor](https://learn.microsoft.com/azure/azure-arc/servers/overview#supported-cloud-operations) your VMs. +- Browse your VMware vSphere resources (VMs, templates, networks, and storage) in Azure. -Before you begin checking off the prerequisites, verify the following actions have been done: - -- You deployed an Azure VMware Solution private cluster. -- You have a connection to the Azure VMware Solution private cloud through your on-premises environment or your native Azure Virtual Network. -- There should be an isolated NSX-T Data Center network segment for deploying the Arc for Azure VMware Solution Open Virtualization Appliance (OVA). If an isolated NSX-T Data Center network segment doesn't exist, one will be created. -## Prerequisites +## How Arc-enabled VMware vSphere differs from Arc-enabled servers -The following items are needed to ensure you're set up to begin the onboarding process to deploy Arc for Azure VMware Solution (Preview). +You have the flexibility to start with either option, Arc-enabled servers or Arc-enabled VMware vSphere. With both options, you receive the same consistent experience. Regardless of the initial option chosen, you can incorporate the other one later without disruption. The following information helps you understand the difference between both options: -- A jump box virtual machine (VM) with network access to the Azure VMware Solution vCenter. - - From the jump-box VM, verify you have access to [vCenter Server and NSX-T Manager portals](./tutorial-configure-networking.md).
-- Verify that your Azure subscription has been enabled or you have connectivity to Azure end points, mentioned in the [Appendices](#appendices).-- Resource group in the subscription where you have owner or contributor role. -- A minimum of three free non-overlapping IPs addresses. -- Verify that your vCenter Server version is 6.7 or higher. -- A resource pool with minimum-free capacity of 16 GB of RAM, 4 vCPUs. -- A datastore with minimum 100 GB of free disk space that is available through the resource pool. -- On the vCenter Server, allow inbound connections on TCP port 443, so that the Arc resource bridge and VMware vSphere cluster extension can communicate with the vCenter Server.-- Please validate the regional support before starting the onboarding. Arc for Azure VMware Solution is supported in all regions where Arc for VMware vSphere on-premises is supported. For more details, see [Azure Arc-enabled VMware vSphere](/azure/azure-arc/vmware-vsphere/overview).-- The firewall and proxy URLs below must be allowlisted in order to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs.-[Azure Arc resource bridge (preview) network requirements](../azure-arc/resource-bridge/network-requirements.md) +**Arc-enabled servers** +Azure Arc-enabled servers interact on the guest operating system level. They do that with no awareness of the underlying infrastructure or the virtualization platform they're running on. Since Arc-enabled servers support bare-metal machines, there might not be a host hypervisor in some cases. -> [!NOTE] -> Only the default port of 443 is supported. If you use a different port, Appliance VM creation will fail. +**Arc-enabled VMware vSphere** +Arc-enabled VMware vSphere is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself, providing lifecycle management and CRUD (Create, Read, Update, Delete) operations on a VMware vSphere VM. These lifecycle management capabilities are exposed in the Azure portal with a look and feel just like a regular Azure VM. Azure Arc-enabled VMware vSphere provides guest operating system management that uses the same components as Azure Arc-enabled servers. ++## Deploy Arc -At this point, you should have already deployed an Azure VMware Solution private cloud. You need to have a connection from your on-premises environment or your native Azure Virtual Network to the Azure VMware Solution private cloud. +There should be an isolated NSX-T Data Center network segment for deploying the Arc for Azure VMware Solution Open Virtualization Appliance (OVA). If an isolated NSX-T Data Center network segment doesn't exist, one is created. ++### Prerequisites ++> [!IMPORTANT] +> You can't create the resources in a separate resource group. Ensure you use the same resource group in which the Azure VMware Solution private cloud was created to create your resources. ++You need the following items to ensure you're set up to begin the onboarding process to deploy Arc for Azure VMware Solution. ++- Validate the regional support before you start the onboarding process. Arc for Azure VMware Solution is supported in all regions where Arc for VMware vSphere on-premises is supported. For details, see [Azure Arc-enabled VMware vSphere](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/overview#supported-regions).
+- A jump box virtual machine (VM) or a [management VM](https://learn.microsoft.com/azure/azure-arc/resource-bridge/system-requirements#management-machine-requirements) with internet access that has a direct line of sight to the vCenter. + - From the jump box VM, verify you have access to [vCenter Server and NSX-T Manager portals](https://learn.microsoft.com/azure/azure-vmware/tutorial-access-private-cloud#connect-to-the-vcenter-server-of-your-private-cloud). +- A resource group in the subscription where you have an owner or contributor role. +- An unused, isolated [NSX Data Center network segment](https://learn.microsoft.com/azure/azure-vmware/tutorial-nsx-t-network-segment) that is a static network segment with static IP assignment of size /28 CIDR for deploying the Arc for Azure VMware Solution OVA. If an isolated NSX-T Data Center network segment doesn't exist, one gets created. +- Verify your Azure subscription is enabled and has connectivity to Azure endpoints. +- The firewall and proxy URLs must be allowlisted in order to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. See the [Azure Arc resource bridge (Preview) network requirements](https://learn.microsoft.com/azure/azure-arc/resource-bridge/network-requirements). +- Verify your vCenter Server version is 6.7 or higher. +- A resource pool or a cluster with a minimum capacity of 16 GB of RAM and four vCPUs. +- A datastore with a minimum of 100 GB of free disk space that is available through the resource pool or cluster. +- On the vCenter Server, allow inbound connections on TCP port 443. This action ensures that the Arc resource bridge and VMware vSphere cluster extension can communicate with the vCenter Server. +> [!NOTE] +> - Private endpoint is currently not supported. +> - DHCP support isn't available to customers at this time; only static IP addresses are currently supported. -For Network planning and setup, use the [Network planning checklist - Azure VMware Solution | Microsoft Docs](./tutorial-network-checklist.md) -### Registration to Arc for Azure VMware Solution feature set +## Registration to Arc for Azure VMware Solution feature set The following **Register features** are for provider registration using Azure CLI. az provider register --namespace Microsoft.AVS Alternatively, users can sign into their Subscription, navigate to the **Resource providers** tab, and register themselves on the resource providers mentioned previously. -For feature registration, users will need to sign into their **Subscription**, navigate to the **Preview features** tab, and search for 'Azure Arc for Azure VMware Solution'. Once registered, no other permissions are required for users to access Arc. +For feature registration, users need to sign into their **Subscription**, navigate to the **Preview features** tab, and search for 'Azure Arc for Azure VMware Solution'. Once registered, no other permissions are required for users to access Arc. -Users need to ensure they've registered themselves to **Microsoft.AVS/earlyAccess**. After registering, use the following feature to verify registration. ```azurecli az feature show --name AzureArcForAVS --namespace Microsoft.AVS az feature show --name AzureArcForAVS --namespace Microsoft.AVS ## Onboard process to deploy Azure Arc -Use the following steps to guide you through the process to onboard Azure Arc for Azure VMware Solution (Preview).
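As a sketch of that registration flow in one place (the article itself shows only `az provider register` and `az feature show`; the `az feature register` call here is an assumption for registering the preview feature from the CLI rather than the portal's **Preview features** tab):

```azurecli
# Register the Azure VMware Solution resource provider for the subscription.
az provider register --namespace Microsoft.AVS

# Assumption: CLI-based alternative to registering 'Azure Arc for Azure VMware Solution'
# from the portal's Preview features tab.
az feature register --name AzureArcForAVS --namespace Microsoft.AVS

# Verify the feature registration state, as shown in the article.
az feature show --name AzureArcForAVS --namespace Microsoft.AVS --query properties.state
```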
+Use the following steps to guide you through the process to onboard Azure Arc for Azure VMware Solution. 1. Sign into the jumpbox VM and extract the contents from the compressed file from the following [location](https://github.com/Azure/ArcOnAVS/releases/latest). The extracted file contains the scripts to install the preview software.-1. Open the 'config_avs.json' file and populate all the variables. +2. Open the 'config_avs.json' file and populate all the variables. **Config JSON** ```json Use the following steps to guide you through the process to onboard Azure Arc fo } ``` - Populate the `subscriptionId`, `resourceGroup`, and `privateCloud` names respectively. - `isStatic` is always true. - - `networkForApplianceVM` is the name for the segment for Arc appliance VM. One will be created if it doesn't already exist. + - `networkForApplianceVM` is the name for the segment for Arc appliance VM. One gets created if it doesn't already exist. - `networkCIDRForApplianceVM` is the IP CIDR of the segment for Arc appliance VM. It should be unique and not affect other networks of Azure VMware Solution management IP CIDR. - `GatewayIPAddress` is the gateway for the segment for Arc appliance VM. - - `applianceControlPlaneIpAddress` is the IP address for the Kubernetes API server that should be part of the segment IP CIDR provided. It shouldn't be part of the k8s node pool IP range. + - `applianceControlPlaneIpAddress` is the IP address for the Kubernetes API server that should be part of the segment IP CIDR provided. It shouldn't be part of the K8s node pool IP range. - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd` are the starting and ending IP of the pool of IPs to assign to the appliance VM. Both need to be within the `networkCIDRForApplianceVM`. - - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd`, `gatewayIPAddress` ,`applianceControlPlaneIpAddress` are optional. You may choose to skip all the optional fields or provide values for all. If you choose not to provide the optional fields, then you must use /28 address space for `networkCIDRForApplianceVM` + - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd`, `gatewayIPAddress`, `applianceControlPlaneIpAddress` are optional. You can choose to skip all the optional fields or provide values for all. If you choose not to provide the optional fields, then you must use /28 address space for `networkCIDRForApplianceVM` **Json example** ```json Use the following steps to guide you through the process to onboard Azure Arc fo } ``` -1. Run the installation scripts. We've provided you with the option to set up this preview from a Windows or Linux-based jump box/VM. +3. Run the installation scripts. You can optionally set up this preview from a Windows or Linux-based jump box/VM. Run the following commands to execute the installation script. Use the following steps to guide you through the process to onboard Azure Arc fo ``` -4. You'll notice more Azure Resources have been created in your resource group. +4. More Azure resources are created in your resource group. - Resource bridge - Custom location - VMware vCenter > [!IMPORTANT]-> You can't create the resources in a separate resource group. Make sure you use the same resource group from where the Azure VMware Solution private cloud was created to create the resources. - -## Discover and project your VMware vSphere infrastructure resources to Azure --When Arc appliance is successfully deployed on your private cloud, you can do the following actions.
-- View the status from within the private cloud under **Operations > Azure Arc**, located in the left navigation. -- View the VMware vSphere infrastructure resources from the private cloud left navigation under **Private cloud** then select **Azure Arc vCenter resources**.-- Discover your VMware vSphere infrastructure resources and project them to Azure using the same browser experience, **Private cloud > Arc vCenter resources > Virtual Machines**.-- Similar to VMs, customers can enable networks, templates, resource pools, and data-stores in Azure.+> After the successful installation of Azure Arc resource bridge, it's recommended to retain a copy of the resource bridge config.yaml files and the kubeconfig file, and to secure them in a place that facilitates easy retrieval. These files could be needed later to run commands to perform management operations on the resource bridge. You can find the 3 .yaml files (config files) and the kubeconfig file in the same folder where you ran the script. -After you've enabled VMs to be managed from Azure, you can install guest management and do the following actions. +When the script is run successfully, check the status to see if Azure Arc is now configured. To verify if your private cloud is Arc-enabled, do the following actions: -- Enable customers to install and use extensions.- - To enable guest management, customers will be required to use admin credentials - - VMtools should already be running on the VM -> [!NOTE] -> Azure VMware Solution vCenter Server will be available in global search but will NOT be available in the list of vCenter Servers for Arc for VMware. --- Customers can view the list of VM extensions available in public preview.- - Change tracking - - Log analytics - - Azure policy guest configuration -- **Azure VMware Solution private cloud with Azure Arc** --When the script has run successfully, you can check the status to see if Azure Arc has been configured. To verify if your private cloud is Arc-enabled, do the following action: - In the left navigation, locate **Operations**.-- Choose **Azure Arc (preview)**. Azure Arc state will show as **Configured**.-- :::image type="content" source="media/deploy-arc-for-azure-vmware-solution/arc-private-cloud-configured.png" alt-text="Image showing navigation to Azure Arc state to verify it's configured."lightbox="media/deploy-arc-for-azure-vmware-solution/arc-private-cloud-configured.png"::: --**Arc enabled VMware vSphere resources** --After the private cloud is Arc-enabled, vCenter resources should appear under **Virtual machines**. -- From the left navigation, under **Azure Arc VMware resources (preview)**, locate **Virtual machines**.-- Choose **Virtual machines** to view the vCenter Server resources.--### Manage access to VMware resources through Azure Role-Based Access Control --After your Azure VMware Solution vCenter Server resources have been enabled for access through Azure, there's one final step in setting up a self-service experience for your teams. You'll need to provide your teams with access to: compute, storage, networking, and other vCenter Server resources used to configure VMs. --This section will demonstrate how to use custom roles to manage granular access to VMware vSphere resources through Azure. --#### Arc-enabled VMware vSphere built-in roles --There are three built-in roles to meet your Role-based access control (RBAC) requirements. You can apply these roles to a whole subscription, resource group, or a single resource.
--**Azure Arc VMware Administrator role** - is used by administrators --**Azure Arc VMware Private Cloud User role** - is used by anyone who needs to deploy and manage VMs +- Choose **Azure Arc**. +- Azure Arc state shows as **Configured**. -**Azure Arc VMware VM Contributor role** - is used by anyone who needs to deploy and manage VMs +Recover from failed deployments -**Azure Arc Azure VMware Solution Administrator role** +If the Azure Arc resource bridge deployment fails, consult the [Azure Arc resource bridge troubleshooting](https://learn.microsoft.com/azure/azure-arc/resource-bridge/troubleshoot-resource-bridge) guide. While there can be many reasons why the Azure Arc resource bridge deployment fails, one of them is KVA timeout error. Learn more about the [KVA timeout error](https://learn.microsoft.com/azure/azure-arc/resource-bridge/troubleshoot-resource-bridge#kva-timeout-error) and how to troubleshoot. -This role provides permissions to perform all possible operations for the Microsoft.ConnectedVMwarevSphere resource provider. Assign this role to users or groups that are administrators managing Azure Arc enabled VMware vSphere deployment. +## Discover and project your VMware vSphere infrastructure resources to Azure -**Azure Arc Azure VMware Solution Private Cloud User role** +When the Arc appliance is successfully deployed on your private cloud, you can do the following actions. -This role gives the user permission to use the Arc-enabled Azure VMware Solutions vSphere resources that have been made accessible through Azure. This role should be assigned to any users or groups that need to deploy, update, or delete VMs. +- View the status from within the private cloud left navigation under **Operations > Azure Arc**. +- View the VMware vSphere infrastructure resources from the private cloud left navigation under **Private cloud**, then select **Azure Arc vCenter resources**. +- Discover your VMware vSphere infrastructure resources and project them to Azure by navigating to **Private cloud > Arc vCenter resources > Virtual Machines**. +- Similar to VMs, customers can enable networks, templates, resource pools, and datastores in Azure. -We recommend assigning this role at the individual resource pool (host or cluster), virtual network, or template that you want the user to deploy VMs with. +## Enable resource pools, clusters, hosts, datastores, networks, and VM templates in Azure -**Azure Arc Azure VMware Solution VM Contributor role** +Once you've connected your Azure VMware Solution private cloud to Azure, you can browse your vCenter inventory from the Azure portal. This section shows you how to enable resource pools, networks, and other non-VM resources in Azure. -This role gives the user permission to perform all VMware VM operations. This role should be assigned to any users or groups that need to deploy, update, or delete VMs. +> [!NOTE] +> Enabling Azure Arc on a VMware vSphere resource is a read-only operation on vCenter. It doesn't make changes to your resource in vCenter. -We recommend assigning this role at the subscription level or resource group you want the user to deploy VMs with. +1. On your Azure VMware Solution private cloud, in the left navigation, locate **vCenter Inventory**. +2. Select the resource(s) you want to enable, then select **Enable in Azure**. +3. Select your Azure **Subscription** and **Resource Group**, then select **Enable**.
-**Assign custom roles to users or groups** + The enable action starts a deployment and creates a resource in Azure, creating representations for your VMware vSphere resources. It allows you to granularly manage who can access those resources through role-based access control (RBAC). -1. Navigate to the Azure portal. -1. Locate the subscription, resource group, or the resource at the scope you want to provide for the custom role. -1. Find the Arc-enabled Azure VMware Solution vCenter Server resources. - 1. Navigate to the resource group and select the **Show hidden types** checkbox. - 1. Search for "Azure VMware Solution". -1. Select **Access control (IAM)** in the table of contents located on the left navigation. -1. Select **Add role assignment** from the **Grant access to this resource**. - :::image type="content" source="media/deploy-arc-for-azure-vmware-solution/assign-custom-role-user-groups.png" alt-text="Image showing navigation to access control IAM and add role assignment."lightbox="media/deploy-arc-for-azure-vmware-solution/assign-custom-role-user-groups.png"::: -1. Select the custom role you want to assign, Azure Arc VMware Solution: **Administrator**, **Private Cloud User**, or **VM Contributor**. -1. Search for **AAD user** or **group name** that you want to assign this role to. -1. Select the **AAD user** or **group name**. Repeat this step for each user or group you want to give permission to. -1. Repeat the above steps for each scope and role. +4. Repeat the previous steps for one or more network, resource pool, and VM template resources. +## Enable guest management and extension installation -## Create Arc-enabled Azure VMware Solution virtual machine +Before you install an extension, you need to enable guest management on the VMware VM. -This section shows users how to create a virtual machine (VM) on VMware vCenter Server using Azure Arc. Before you begin, check the following prerequisite list to ensure you're set up and ready to create an Arc-enabled Azure VMware Solution VM. +### Prerequisite -### Prerequisites +Before you can install an extension, ensure your target machine meets the following conditions: -- An Azure subscription and resource group where you have an Arc VMware VM **Contributor role**.-- A resource pool resource that you have an Arc VMware private cloud resource **User role**.-- A virtual machine template resource that you have an Arc private cloud resource **User role**.-- (Optional) a virtual network resource on which you have Arc private cloud resource **User role**.--### Create VM flow --- Open the [Azure portal](https://portal.azure.com/)-- On the **Home** page, search for **virtual machines**. Once you've navigated to **Virtual machines**, select the **+ Create** drop down and select **Azure VMware Solution virtual machine**.- :::image type="content" source="media/deploy-arc-for-azure-vmware-solution/deploy-vm-arc-1.2.png" alt-text="Image showing the location of the plus Create drop down menu and Azure VMware Solution virtual machine selection option."lightbox="media/deploy-arc-for-azure-vmware-solution/deploy-vm-arc-1.2.png"::: --Near the top of the **Virtual machines** page, you'll find five tabs labeled: **Basics**, **Disks**, **Networking**, **Tags**, and **Review + create**. Follow the steps or options provided in each tab to create your Azure VMware Solution virtual machine. --**Basics** -1. In **Project details**, select the **Subscription** and **Resource group** where you want to deploy your VM. -1. 
In **Instance details**, provide the **virtual machine name**. -1. Select a **Custom location** that your administrator has shared with you. -1. Select the **Resource pool/cluster/host** where the VM should be deployed. -1. For **Template details**, pick a **Template** based on the VM you plan to create. - - Alternately, you can check the **Override template defaults** box that allows you to override the CPU and memory specifications set in the template. - - If you chose a Windows template, you can provide a **Username** and **Password** for the **Administrator account**. -1. For **Extension setup**, the box is checked by default to **Enable guest management**. If you don't want guest management enabled, uncheck the box. -1. The connectivity method defaults to **Public endpoint**. Create a **Username**, **Password**, and **Confirm password**. - -**Disks** - - You can opt to change the disks configured in the template, add more disks, or update existing disks. These disks will be created on the default datastore per the VMware vCenter Server storage policies. - - You can change the network interfaces configured in the template, add Network interface cards (NICs), or update existing NICs. You can also change the network that the NIC will be attached to provided you have permissions to the network resource. - -**Networking** - - A network configuration is automatically created for you. You can choose to keep it or override it and add a new network interface instead. - - To override the network configuration, find and select **+ Add network interface** and add a new network interface. - -**Tags** - - In this section, you can add tags to the VM resource. - -**Review + create** - - Review the data and properties you've set up for your VM. When everything is set up how you want it, select **Create**. The VM should be created in a few minutes. -## Enable guest management and extension installation +- Is running a [supported operating system](https://learn.microsoft.com/azure/azure-arc/servers/prerequisites#supported-operating-systems). +- Is able to connect through the firewall to communicate over the internet and these [URLs](https://learn.microsoft.com/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls) aren't blocked. +- Has VMware tools installed and running. +- Is powered on and the resource bridge has network connectivity to the host running the VM. -The guest management must be enabled on the VMware vSphere virtual machine (VM) before you can install an extension. Use the following prerequisite steps to enable guest management. +### Enable guest management -**Prerequisite** +You need to enable guest management on the VMware VM before you can install an extension. Use the following steps to enable guest management. 1. Navigate to [Azure portal](https://portal.azure.com/).+1. From the left navigation, locate **vCenter Server Inventory** and choose **Virtual Machines** to view the list of VMs. +1. Select the VM you want to install the guest management agent on. +1. Select **Enable guest management** and provide the administrator username and password to enable guest management, then select **Apply**. 1. Locate the VMware vSphere VM you want to check for guest management and install extensions on, and select the name of the VM. 1. Select **Configuration** from the left navigation for a VMware VM.-1. Verify **Enable guest management** has been checked. -->[!NOTE] -> The following conditions are necessary to enable guest management on a VM.
--- The machine must be running a [Supported operating system](../azure-arc/servers/agent-overview.md).-- The machine needs to connect through the firewall to communicate over the internet. Make sure the [URLs](../azure-arc/servers/agent-overview.md) listed aren't blocked.-- The machine can't be behind a proxy, it's not supported yet.-- If you're using Linux VM, the account must not prompt to sign in on pseudo commands.- - Avoid pseudo commands by following these steps: - - 1. Sign into Linux VM. - 1. Open terminal and run the following command: `sudo visudo`. - 1. Add the line `username` `ALL=(ALL) NOPASSWD:ALL` at the end of the file. - 1. Replace `username` with the appropriate user-name. --If your VM template already has these changes incorporated, you won't need to do the steps for the VM created from that template. +1. Verify **Enable guest management** is now checked. -**Extension installation steps** +### Install the LogAnalytics extension 1. Go to Azure portal. 1. Find the Arc-enabled Azure VMware Solution VM that you want to install an extension on and select the VM name. -1. Navigate to **Extensions** in the left navigation, select **Add**. +1. Locate **Extensions** from the left navigation and select **Add**. 1. Select the extension you want to install. - 1. Based on the extension, you'll need to provide details. For example, `workspace Id` and `key` for LogAnalytics extension. + 1. Based on the extension, you need to provide details. For example, `workspace Id` and `key` for LogAnalytics extension. 1. When you're done, select **Review + create**. When the extension installation steps are completed, they trigger deployment and install the selected extension on the VM. -## Change Arc appliance credential --When **cloudadmin** credentials are updated, use the following steps to update the credentials in the appliance store. --1. Log in to the jumpbox VM from where onboarding was performed. Change the directory to **onboarding directory**. -1. Run the following command for Windows-based jumpbox VM. - - `./.temp/.env/Scripts/activate` -1. Run the following command. -- `az arcappliance update-infracredentials vmware --kubeconfig <kubeconfig file>` --1. Run the following command --`az connectedvmware vcenter connect --debug --resource-group {resource-group} --name {vcenter-name-in-azure} --location {vcenter-location-in-azure} --custom-location {custom-location-name} --fqdn {vcenter-ip} --port {vcenter-port} --username cloudadmin@vsphere.local --password {vcenter-password}` - -> [!NOTE] -> Customers need to ensure kubeconfig and SSH keys remain available as they will be required for log collection, appliance Upgrade, and credential rotation. These parameters will be required at the time of upgrade, log collection, and credential update scenarios. --**Parameters** --Required parameters --`-kubeconfig # kubeconfig of Appliance resource` --**Examples** --The following command invokes the set credential for the specified appliance resource. --` az arcappliance setcredential <provider> --kubeconfig <kubeconfig>` --## Manual appliance upgrade --Use the following steps to perform a manual upgrade for Arc appliance virtual machine (VM). --1. Log into vCenter Server. -1. Locate the Arc appliance VM, which should be in the resource pool that was configured during onboarding. - 1. Power off the VM. - 1. Delete the VM. -1. Delete the download template corresponding to the VM. -1. Delete the resource bridge Azure Resource Manager resource. -1. 
Get the previous script `Config_avs` file and add the following configuration item: - 1. `"register":false` -1. Download the latest version of the Azure VMware Solution onboarding script. -1. Run the new onboarding script with the previous `config_avs.json` from the jump box VM, without changing other config items. --## Off board from Azure Arc-enabled Azure VMware Solution --This section demonstrates how to remove your VMware vSphere virtual machines (VMs) from Azure management services. --If you've enabled guest management on your Arc-enabled Azure VMware Solution VMs and onboarded them to Azure management services by installing VM extensions on them, you'll need to uninstall the extensions to prevent continued billing. For example, if you installed an MMA extension to collect and send logs to an Azure Log Analytics workspace, you'll need to uninstall that extension. You'll also need to uninstall the Azure Connected Machine agent to avoid any problems installing the agent in future. --Use the following steps to uninstall extensions from the portal. -->[!NOTE] ->**Steps 2-5** must be performed for all the VMs that have VM extensions installed. --1. Log in to your Azure VMware Solution private cloud. -1. Select **Virtual machines** in **Private cloud**, found in the left navigation under "vCenter Server Inventory Page" -1. Search and select the virtual machine where you have **Guest management** enabled. -1. Select **Extensions**. -1. Select the extensions and select **Uninstall**. --To avoid problems onboarding the same VM to **Guest management**, we recommend you do the following steps to cleanly disable guest management capabilities. -->[!NOTE] ->**Steps 2-3** must be performed for **all VMs** that have **Guest management** enabled. --1. Sign into the virtual machine using administrator or root credentials and run the following command in the shell. - 1. `azcmagent disconnect --force-local-only`. -1. Uninstall the `ConnectedMachine agent` from the machine. -1. Set the **identity** on the VM resource to **none**. --## Remove Arc-enabled Azure VMware Solution vSphere resources from Azure --When you activate Arc-enabled Azure VMware Solution resources in Azure, a representation is created for them in Azure. Before you can delete the vCenter Server resource in Azure, you'll need to delete all of the Azure resource representations you created for your vSphere resources. To delete the Azure resource representations you created, do the following steps: --1. Go to the Azure portal. -1. Choose **Virtual machines** from Arc-enabled VMware vSphere resources in the private cloud. -1. Select all the VMs that have an Azure Enabled value as **Yes**. -1. Select **Remove from Azure**. This step will start deployment and remove these resources from Azure. The resources will remain in your vCenter Server. - 1. Repeat steps 2, 3 and 4 for **Resourcespools/clusters/hosts**, **Templates**, **Networks**, and **Datastores**. -1. When the deletion completes, select **Overview**. - 1. Note the Custom location and the Azure Arc Resource bridge resources in the Essentials section. -1. Select **Remove from Azure** to remove the vCenter Server resource from Azure. -1. Go to vCenter Server resource in Azure and delete it. -1. Go to the Custom location resource and select **Delete**. -1. Go to the Azure Arc Resource bridge resources and select **Delete**. --At this point, all of your Arc-enabled VMware vSphere resources have been removed from Azure.
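The off-boarding walkthrough above uses the portal. Assuming the `az connectedmachine` CLI extension is installed, a roughly equivalent sketch for removing extensions from each VM could look like the following; the machine, resource group, and extension names are hypothetical placeholders.

```azurecli
# List the extensions installed on one Arc-enabled machine (hypothetical names).
az connectedmachine extension list --machine-name contoso-vm-01 --resource-group contoso-rg --output table

# Uninstall an extension to stop the billing associated with it.
az connectedmachine extension delete --machine-name contoso-vm-01 --resource-group contoso-rg --name MicrosoftMonitoringAgent
```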
--## Delete Arc resources from vCenter Server --For the final step, you'll need to delete the resource bridge VM and the VM template that were created during the onboarding process. Login to vCenter Server and delete resource bridge VM and the VM template from inside the arc-folder. Once that step is done, Arc won't work on the Azure VMware Solution private cloud. When you delete Arc resources from vCenter Server, it won't affect the Azure VMware Solution private cloud for the customer. --## Preview FAQ --**Region support for Azure VMware Solution** - -Arc for Azure VMware Solution is supported in all regions where Arc for VMware vSphere on-premises is supported. For more details, see [Azure Arc-enabled VMware vSphere](/azure/azure-arc/vmware-vsphere/overview). --**How does support work?** --Standard support process for Azure VMware Solution has been enabled to support customers. --**Does Arc for Azure VMware Solution support private endpoint?** --Private endpoint is currently not supported. --**Is enabling internet the only option to enable Arc for Azure VMware Solution?** --Yes, the Azure VMware Solution private cloud and jumpbox VM must have internet access for Arc to function. --**Is DHCP support available?** --DHCP support isn't available to customers at this time, we only support static IP addresses. --## Debugging tips for known issues --Use the following tips as a self-help guide. --**What happens if I face an error related to Azure CLI?** --- For windows jumpbox, if you have 32-bit Azure CLI installed, verify that your current version of Azure CLI has been uninstalled. Verification can be done from the Control Panel. -- To ensure it's uninstalled, try the `az` version to check if it's still installed. -- If you already installed Azure CLI using MSI, `az` installed by MSI and pip will conflict on PATH. In this case, it's recommended that you uninstall the current Azure CLI version.--**My script stopped because it timed-out, what should I do?** --- Retry the script for `create`. A prompt will ask you to select **Y** and rerun it.-- It could be a cluster extension issue that would result in adding the extension in the pending state.-- Verify you have the correct script version.-- Verify the VMware pod is running correctly on the system in running state.--**Basic trouble-shooting steps if the script run was unsuccessful.** --- Follow the directions provided in the [Prerequisites](#prerequisites) section of this article to verify that the feature and resource providers are registered.--**What happens if the Arc for VMware section shows no data?** --- If the Azure Arc VMware resources in the Azure UI show no data, verify your subscription was added in the global default subscription filter.--**I see the error:** "`ApplianceClusterNotRunning` Appliance Cluster: `<resource-bridge-id>` expected states to be Succeeded found: Succeeded and expected status to be Running and found: Connected". --- Run the script again.--**I'm unable to install extensions on my virtual machine.** --- Check that **guest management** has been successfully installed.-- **VMware Tools** should be installed on the VM.--**I'm facing Network related issues during on-boarding.** --- Look for an IP conflict. 
You need IP addresses with no conflicts, or addresses from a free pool.-- Verify that internet access is enabled for the network segment.--**Where can I find more information related to Azure Arc resource bridge?** --- For more information, go to [Azure Arc resource bridge (preview) overview](../azure-arc/resource-bridge/overview.md)--## Appendices --Appendix 1 shows the proxy URLs required by the Azure Arc-enabled private cloud. The script prefixes these URLs when it runs; you can ping them from the jumpbox VM to verify connectivity. The following firewall and proxy URLs must be allowlisted to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. -[Azure Arc resource bridge (preview) network requirements](../azure-arc/resource-bridge/network-requirements.md) --**Additional URL resources** +## Supported extensions and management services -- [Google Container Registry](http://gcr.io/)-- [Red Hat Quay.io](http://quay.io/)-- [Docker](https://hub.docker.com/)-- [Harbor](https://goharbor.io/)-- [Container Registry](https://container-registry.com/)+Perform VM operations on VMware VMs through Azure using [supported extensions and management services](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/perform-vm-ops-through-azure#supported-extensions-and-management-services) |
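As a quick self-check before onboarding, the allowlisted endpoints can be probed from the jumpbox VM. A minimal sketch (the three URLs are examples taken from the list above, not the full required set):

```bash
# Probe a few of the allowlisted registry endpoints from the jumpbox VM.
for url in https://gcr.io https://quay.io https://hub.docker.com; do
  if curl -sSIL --max-time 10 "$url" -o /dev/null; then
    echo "OK   $url"
  else
    echo "FAIL $url"
  fi
done
```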
azure-vmware | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md | Title: Introduction description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure. Previously updated : 6/20/2023 Last updated : 10/16/2023 Azure VMware Solution is a VMware validated solution with ongoing validation and The diagram shows the adjacency between private clouds and VNets in Azure, Azure services, and on-premises environments. Network access from private clouds to Azure services or VNets provides SLA-driven integration of Azure service endpoints. ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud. ## AV36P and AV52 node sizes available in Azure VMware Solution For pricing and region availability, see the [Azure VMware Solution pricing page You can deploy new or scale existing private clouds through the Azure portal or Azure CLI. +## Azure VMware Solution private cloud extension with AV64 node size ++The AV64 is a new Azure VMware Solution host SKU, which is available to expand (not to create) an Azure VMware Solution private cloud built with the existing AV36, AV36P, or AV52 SKU. Use the [Microsoft documentation](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-vmware) to check for availability of the AV64 SKU in the region. ++### Prerequisite for AV64 usage ++See the following prerequisites for AV64 cluster deployment. ++- An Azure VMware Solution private cloud is created using AV36, AV36P, or AV52 in an AV64-supported [region/AZ](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-vmware). ++- You need one /23 or three (contiguous or noncontiguous) /25 address blocks for AV64 cluster management. +++### Supportability for customer scenarios ++**Customer with existing Azure VMware Solution private cloud**: +When a customer has a deployed Azure VMware Solution private cloud, they can scale the private cloud by adding a separate AV64 vCenter node cluster to that private cloud. In this scenario, customers should use the following steps: ++1. Get an AV64 [quota approval from Microsoft](/azure/azure-vmware/request-host-quota-azure-vmware-solution) with a minimum of three nodes. Add other details on the Azure VMware Solution private cloud that you plan to extend using AV64. +2. Use the existing Azure VMware Solution add-cluster workflow with AV64 hosts to expand. ++**Customer plans to create a new Azure VMware Solution private cloud**: When a customer wants a new Azure VMware Solution private cloud that can use the AV64 SKU, but only for expansion, the customer must first meet the prerequisite of having an Azure VMware Solution private cloud built with the AV36, AV36P, or AV52 SKU. The customer needs to buy a minimum of three nodes of the AV36, AV36P, or AV52 SKU before expanding using AV64. For this scenario, use the following steps: ++1. Get AV36, AV36P, AV52, and AV64 [quota approval from Microsoft](/azure/azure-vmware/request-host-quota-azure-vmware-solution) with a minimum of three nodes each. +2. Create an Azure VMware Solution private cloud using the AV36, AV36P, or AV52 SKU. +3. Use the existing Azure VMware Solution add-cluster workflow with AV64 hosts to expand. ++**Azure VMware Solution stretched cluster private cloud**: The AV64 SKU isn't supported with an Azure VMware Solution stretched cluster private cloud.
This means that an AV64-based expansion isn't possible for an Azure VMware Solution stretched cluster private cloud. ++### AV64 Cluster vSAN fault domain (FD) design and recommendations ++The traditional Azure VMware Solution host clusters don't have an explicit vSAN FD configuration. The reasoning is that the host allocation logic ensures, within clusters, that no two hosts reside in the same physical fault domain within an Azure region. This feature inherently brings resilience and high availability for storage, which the vSAN FD configuration is supposed to bring. More information on vSAN FD can be found in the [VMware documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan.doc/GUID-8491C4B0-6F94-4023-8C7A-FD7B40D0368D.html). ++The Azure VMware Solution AV64 host clusters have an explicit vSAN fault domain (FD) configuration. The Azure VMware Solution control plane configures five vSAN fault domains for AV64 clusters, and hosts are balanced evenly across these five FDs as users scale up the hosts in a cluster from three nodes to 16 nodes. ++### Cluster size recommendation ++The minimum vSphere node cluster size supported by Azure VMware Solution is three. vSAN data redundancy is handled by ensuring that the minimum cluster size of three hosts spans different vSAN FDs. In a vSAN cluster with three hosts, each in a different FD, should an FD fail (for example, a top-of-rack switch failure), the vSAN data would remain protected, but operations such as object creation (new VM, VMDK, and others) would fail. The same is true of any maintenance activities where an ESXi host is placed into maintenance mode and/or rebooted. To avoid scenarios such as these, it's recommended to deploy vSAN clusters with a minimum of four ESXi hosts. ++### AV64 host removal workflow and best practices ++Because of the AV64 cluster vSAN fault domain (FD) configuration and the need for hosts to be balanced across all FDs, host removal from an AV64 cluster differs from traditional Azure VMware Solution host clusters with other SKUs. ++Currently, a user can select one or more hosts to be removed from the cluster using the portal or API. One condition is that a cluster should have a minimum of three hosts. However, an AV64 cluster behaves differently in certain scenarios because AV64 uses vSAN FDs. Any host removal request is checked against a potential vSAN FD imbalance. If a host removal request creates an imbalance, the request is rejected with an HTTP 409 Conflict response. The HTTP 409 Conflict status code indicates a request conflict with the current state of the target resource (hosts). ++The following three scenarios show examples of instances that would normally error out and demonstrate different methods that can be used to remove hosts without creating a vSAN fault domain (FD) imbalance. ++- When removing a host creates a vSAN FD imbalance, with a difference of more than one host between the most and least populated FDs. + In the following example, users need to remove one of the hosts from FD 1 before removing hosts from other FDs. ++ :::image type="content" source="media/introduction/remove-host-scenario-1.png" alt-text="Diagram showing how users need to remove one of the hosts from FD 1 before removing hosts from other FDs." border="false"::: ++- When multiple host removal requests are made at the same time and certain host removals create an imbalance. In this scenario, the Azure VMware Solution control plane removes only the hosts that don't create an imbalance.
+ In the following example, users can't take both of the hosts from the same FD unless they're reducing the cluster size to four or lower. ++ :::image type="content" source="media/introduction/remove-host-scenario-2.png" alt-text="Diagram showing how users can't take both of the hosts from the same FDs unless they're reducing the cluster size to four or lower." border="false"::: ++- When a selected host removal causes fewer than three active vSAN FDs. This scenario isn't expected to occur given that all AV64 regions have five FDs and, while adding hosts, the Azure VMware Solution control plane takes care of adding hosts from all five FDs evenly. + In the following example, users can remove one of the hosts from FD 1, but not from FD 2 or 3. ++ :::image type="content" source="media/introduction/remove-host-scenario-3.png" alt-text="Diagram showing how users can remove one of the hosts from FD 1, but not from FD 2 or 3." border="false"::: ++**How to identify the host that can be removed without causing a vSAN FD imbalance**: A user can go to the vSphere user interface to get the current state of vSAN FDs and the hosts associated with each of them. This helps to identify hosts (based on the previous examples) that can be removed without affecting the vSAN FD balance and avoid errors in the removal operation. A CLI sketch of a fault-domain-safe cluster resize follows the responsibility matrix later in this article. ++### AV64 supported RAID configuration ++This table lists the supported RAID configurations and host requirements in an AV64 cluster. The RAID-6/FTT2 and RAID-1/FTT3 policies will be supported on the AV64 SKU in the future. Microsoft allows customers to use the RAID-5 FTT1 vSAN storage policy for AV64 clusters with six or more nodes to meet the service level agreement. ++|RAID configuration |Failures to tolerate (FTT) | Minimum hosts required | +|-|--|| +|RAID-1 (Mirroring) Default setting.| 1 | 3 | +|RAID-5 (Erasure Coding) | 1 | 4 | +|RAID-1 (Mirroring) | 2 | 5 | + ## Networking [!INCLUDE [avs-networking-description](includes/azure-vmware-solution-networking-description.md)] Azure VMware Solution implements a shared responsibility model that defines dist The shared responsibility matrix table outlines the main tasks that customers and Microsoft each handle in deploying and managing both the private cloud and customer application workloads. The following table provides a detailed list of roles and responsibilities between the customer and Microsoft, which encompasses the most frequent tasks and definitions. For further questions, contact Microsoft.
| **Role** | **Task/details** | | -- | - |-| Microsoft - Azure VMware Solution | Physical infrastructure<ul><li>Azure regions</li><li>Azure availability zones</li><li>Express Route/Global Reach</ul></li>Compute/Network/Storage<ul><li>Rack and power Bare Metal hosts</li><li>Rack and power network equipment</ul></li>Software defined Data Center (SDDC) deploy/lifecycle<ul><li>VMware ESXi deploy, patch, and upgrade</li><li>VMware vCenter Servers deploy, patch, and upgrade</li><li>VMware NSX-T Data Centers deploy, patch, and upgrade</li><li>VMware vSAN deploy, patch, and upgrade</ul></li>SDDC Networking - VMware NSX-T Data Center provider config<ul><li>Microsoft Edge node/cluster, VMware NSX-T Data Center host preparation</li><li>Provider Tier-0 and Tenant Tier-1 Gateway</li><li>Connectivity from Tier-0 (using BGP) to Azure Network via Express Route</ul></li>SDDC Compute - VMware vCenter Server provider config<ul><li>Create default cluster</li><li>Configure virtual networking for vMotion, Management, vSAN, and others</ul></li>SDDC backup/restore<ul><li>Backup and restore VMware vCenter Server</li><li>Backup and restore VMware NSX-T Data Center NSX-T Manager</ul></li>SDDC health monitoring and corrective actions, for example: replace failed hosts</br><br>(optional) VMware HCX deploys with fully configured compute profile on cloud side as add-on</br><br>(optional) SRM deploys, upgrade, and scale up/down</br><br>Support - SDDC platforms and VMware HCX | -| Customer | Request Azure VMware Solution host quote with Microsoft<br>Plan and create a request for SDDCs on Azure portal with:<ul><li>Host count</li><li>Management network range</li><li>Other information</ul></li>Configure SDDC network and security (VMware NSX-T Data Center)<ul><li>Network segments to host applications</li><li>Additional Tier -1 routers</li><li>Firewall</li><li>VMware NSX-T Data Center LB</li><li>IPsec VPN</li><li>NAT</li><li>Public IP addresses</li><li>Distributed firewall/gateway firewall</li><li>Network extension using VMware HCX or VMware NSX-T Data Center</li><li>AD/LDAP config for RBAC</ul></li>Configure SDDC - VMware vCenter Server<ul><li>AD/LDAP config for RBAC</li><li>Deploy and lifecycle management of Virtual Machines (VMs) and application<ul><li>Install operating systems</li><li>Patch operating systems</li><li>Install antivirus software</li><li>Install backup software</li><li>Install configuration management software</li><li>Install application components</li><li>VM networking using VMware NSX-T Data Center segments</ul></li><li>Migrate Virtual Machines (VMs)<ul><li>VMware HCX configuration</li><li>Live vMotion</li><li>Cold migration</li><li>Content library sync</ul></li></ul></li>Configure SDDC - vSAN<ul><li>Define and maintain vSAN VM policies</li><li>Add hosts to maintain adequate 'slack space'</ul></li>Configure VMware HCX<ul><li>Download and deploy HCA connector OVA in on-premises</li><li>Pairing on-premises VMware HCX connector</li><li>Configure the network profile, compute profile, and service mesh</li><li>Configure VMware HCX network extension/MON</li><li>Upgrade/updates</ul></li>Network configuration to connect to on-premises, VNET, or internet</br><br>Add or delete hosts requests to cluster from Portal</br><br>Deploy/lifecycle management of partner (third party) solutions | +| Microsoft - Azure VMware Solution | Physical infrastructure<ul><li>Azure regions</li><li>Azure availability zones</li><li>Express Route/Global Reach</ul></li>Compute/Network/Storage<ul><li>Rack and power Bare Metal hosts</li><li>Rack 
and power network equipment</ul></li>Software defined Data Center (SDDC) deploy/lifecycle<ul><li>VMware ESXi deploy, patch, and upgrade</li><li>VMware vCenter Servers deploy, patch, and upgrade</li><li>VMware NSX-T Data Centers deploy, patch, and upgrade</li><li>VMware vSAN deploy, patch, and upgrade</ul></li>SDDC Networking - VMware NSX-T Data Center provider config<ul><li>Microsoft Edge node/cluster, VMware NSX-T Data Center host preparation</li><li>Provider Tier-0 and Tenant Tier-1 Gateway</li><li>Connectivity from Tier-0 (using BGP) to Azure Network via Express Route</ul></li>SDDC Compute - VMware vCenter Server provider config<ul><li>Create default cluster</li><li>Configure virtual networking for vMotion, Management, vSAN, and others</ul></li>SDDC backup/restore<ul><li>Back up and restore VMware vCenter Server</li><li>Back up and restore VMware NSX-T Data Center NSX-T Manager</ul></li>SDDC health monitoring and corrective actions, for example: replace failed hosts</br><br>(optional) VMware HCX deploys with fully configured compute profile on cloud side as add-on</br><br>(optional) SRM deploys, upgrade, and scale up/down</br><br>Support - SDDC platforms and VMware HCX | +| Customer | Request Azure VMware Solution host quote with Microsoft<br>Plan and create a request for SDDCs on Azure portal with:<ul><li>Host count</li><li>Management network range</li><li>Other information</ul></li>Configure SDDC network and security (VMware NSX-T Data Center)<ul><li>Network segments to host applications</li><li>More Tier-1 routers</li><li>Firewall</li><li>VMware NSX-T Data Center LB</li><li>IPsec VPN</li><li>NAT</li><li>Public IP addresses</li><li>Distributed firewall/gateway firewall</li><li>Network extension using VMware HCX or VMware NSX-T Data Center</li><li>AD/LDAP config for RBAC</ul></li>Configure SDDC - VMware vCenter Server<ul><li>AD/LDAP config for RBAC</li><li>Deploy and lifecycle management of Virtual Machines (VMs) and application<ul><li>Install operating systems</li><li>Patch operating systems</li><li>Install antivirus software</li><li>Install backup software</li><li>Install configuration management software</li><li>Install application components</li><li>VM networking using VMware NSX-T Data Center segments</ul></li><li>Migrate Virtual Machines (VMs)<ul><li>VMware HCX configuration</li><li>Live vMotion</li><li>Cold migration</li><li>Content library sync</ul></li></ul></li>Configure SDDC - vSAN<ul><li>Define and maintain vSAN VM policies</li><li>Add hosts to maintain adequate 'slack space'</ul></li>Configure VMware HCX<ul><li>Download and deploy the VMware HCX Connector OVA on-premises</li><li>Pair the on-premises VMware HCX Connector</li><li>Configure the network profile, compute profile, and service mesh</li><li>Configure VMware HCX network extension/MON</li><li>Upgrade/updates</ul></li>Network configuration to connect to on-premises, virtual network, or internet</br><br>Add or delete host requests to the cluster from the portal</br><br>Deploy/lifecycle management of partner (third party) solutions | | Partner ecosystem | Support for their product/solution.
For reference, the following are some of the supported Azure VMware Solution partner solutions/products:<ul><li>BCDR - SRM, JetStream, Zerto, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Multitenancy - VMware Cloud Director service (CDS), VMware Cloud Director Availability (VCDA)</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, vROps, AVI | The next step is to learn key [private cloud and cluster concepts](concepts-priv <!-- LINKS - external --> [concepts-private-clouds-clusters]: ./concepts-private-clouds-clusters.md+ |
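As referenced in the AV64 host removal section above, scale-down goes through the same cluster-size operations as other Azure VMware Solution clusters. A hedged sketch using the `az vmware` CLI extension (resource names are placeholders; the service still rejects a resize that would unbalance vSAN fault domains):

```azurecli
# Requires the 'vmware' Azure CLI extension: az extension add --name vmware
az vmware cluster update \
  --resource-group myResourceGroup \
  --private-cloud myPrivateCloud \
  --name Cluster-1 \
  --cluster-size 4
# A removal that would leave vSAN fault domains unbalanced fails with HTTP 409 Conflict.
```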
azure-vmware | Manage Arc Enabled Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/manage-arc-enabled-azure-vmware-solution.md | + + Title: Manage Arc-enabled Azure VMware private cloud +description: Learn how to manage your Arc-enabled Azure VMware private cloud. ++ Last updated : 11/01/2023+++++# Manage Arc-enabled Azure VMware private cloud ++In this article, learn how to update the Arc appliance credentials, upgrade the Arc resource bridge, and collect logs from the Arc resource bridge. ++## Update Arc appliance credential ++When **cloud admin** credentials are updated, use the following steps to update the credentials in the appliance store. ++1. Sign in to the jumpbox VM from which the [onboard process](https://learn.microsoft.com/azure/azure-vmware/arc-enabled-azure-vmware-solution?tabs=windows#onboard-process-to-deploy-azure-arc) was performed. Change the directory to the **onboarding directory**. +1. Run the following command: + For a Windows-based jumpbox VM: + + `./.temp/.env/Scripts/activate` ++ For a Linux-based jumpbox VM: ++ `./.temp/.env/bin/activate` ++1. Run the following command: ++ `az arcappliance update-infracredentials vmware --kubeconfig <kubeconfig file>` ++1. Run the following command: ++`az connectedvmware vcenter connect --debug --resource-group {resource-group} --name {vcenter-name-in-azure} --location {vcenter-location-in-azure} --custom-location {custom-location-name} --fqdn {vcenter-ip} --port {vcenter-port} --username cloudadmin@vsphere.local --password {vcenter-password}` + +> [!NOTE] +> Customers need to ensure that the kubeconfig and SSH keys remain available, because they're required for upgrade, log collection, and credential update scenarios. ++**Parameters** ++Required parameters ++`--kubeconfig # kubeconfig of Appliance resource` ++**Examples** ++The following command sets the credential for the specified appliance resource. ++`az arcappliance setcredential <provider> --kubeconfig <kubeconfig>` ++## Upgrade the Arc resource bridge ++Azure Arc-enabled Azure VMware Private Cloud requires the Arc resource bridge to connect your VMware vSphere environment with Azure. Periodically, new images of Arc resource bridge are released to include security and feature updates. ++> [!NOTE] +> To upgrade the Arc resource bridge VM to the latest version, you'll need to perform the onboarding again with the **same resource IDs**. This will cause some downtime, as operations that are performed through Arc during this time might fail. ++Use the following steps to perform a manual upgrade of the Arc appliance virtual machine (VM). ++1. Sign in to vCenter Server. +1. Locate the Arc appliance VM, which should be in the resource pool that was configured during onboarding. +1. Power off the VM. +1. Delete the VM. +1. Delete the downloaded template corresponding to the VM. +1. Delete the resource bridge **Azure Resource Manager** resource. +1. Get the previous script `Config_avs` file and add the following configuration item: ++ `"register":false` ++1. Download the latest version of the Azure VMware Solution [onboarding script](https://learn.microsoft.com/azure/azure-vmware/deploy-arc-for-azure-vmware-solution?tabs=windows#onboard-process-to-deploy-azure-arc). +1. Run the new onboarding script with the previous `config_avs.json` from the jump box VM, without changing other config items.
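For the configuration edit in step 7, a small sketch that adds the `register` item without touching the other configuration items (this assumes `jq` is available on the jumpbox):

```bash
# Add "register": false to the existing onboarding config, leaving other items unchanged.
jq '. + {"register": false}' config_avs.json > config_avs.tmp && mv config_avs.tmp config_avs.json
```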
++## Collect logs from the Arc resource bridge ++Perform ongoing administration for Arc-enabled VMware vSphere by [collecting logs from the Arc resource bridge](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/administer-arc-vmware#collecting-logs-from-the-arc-resource-bridge). |
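A hedged sketch of that log collection (this assumes the `arcappliance` Azure CLI extension; the exact parameters can vary by extension version, so confirm against the linked article):

```azurecli
# Collect Arc resource bridge logs, using the kubeconfig saved during onboarding.
az arcappliance logs vmware --kubeconfig ./kubeconfig
```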
azure-vmware | Remove Arc Enabled Azure Vmware Solution Vsphere Resources From Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure.md | + + Title: Remove Arc-enabled Azure VMware Solution vSphere resources from Azure +description: Learn how to remove Arc-enabled Azure VMware Solution vSphere resources from Azure. ++ Last updated : 11/01/2023++++# Remove Arc-enabled Azure VMware Solution vSphere resources from Azure ++In this article, learn how to cleanly remove your VMware vCenter environment from Azure Arc-enabled VMware vSphere. For VMware vSphere environments that you no longer want to manage with Azure Arc-enabled VMware vSphere, use the information in this article to perform the following actions: ++- Remove guest management from VMware virtual machines (VMs). +- Remove VMware vSphere resource from Azure Arc. +- Remove Arc resource bridge related items in your vCenter. ++## Remove guest management from VMware VMs ++To prevent continued billing for Azure management services after you remove the vSphere environment from Azure Arc, you must first remove guest management from all Arc-enabled Azure VMware Solution VMs where it was enabled. ++When you enable guest management on Arc-enabled Azure VMware Solution VMs, the Arc connected machine agent is installed on them. Once guest management is enabled, you can install VM extensions on them and use Azure management services like Log Analytics on them. ++To completely remove guest management, use the following steps to remove any VM extensions from the virtual machine, disconnect the agent, and uninstall the software from your virtual machine. It's important to complete each of the three steps to fully remove all related software components from your virtual machines. ++### Remove VM extensions ++Use the following steps to uninstall extensions from the portal. ++> [!NOTE] +> **Steps 2-5** must be performed for all the VMs that have VM extensions installed. ++1. Sign in to your Azure VMware Solution private cloud. +1. Select **Virtual machines** in **Private cloud**, found in the left navigation under "vCenter Server Inventory Page". +1. Search for and select the virtual machine where you have **Guest management** enabled. +1. Select **Extensions**. +1. Select the extensions and select **Uninstall**. ++### Disable guest management from Azure Arc ++To avoid problems onboarding the same VM to **Guest management**, we recommend you do the following steps to cleanly disable guest management capabilities. ++> [!NOTE] +> **Steps 2-3** must be performed for **all VMs** that have **Guest management** enabled. ++1. Sign in to the virtual machine using administrator or root credentials and run the following command in the shell. + 1. `azcmagent disconnect --force-local-only`. +1. Uninstall the `ConnectedMachine agent` from the machine. +1. Set the **identity** on the VM resource to **none**. ++## Uninstall agents from Virtual Machines (VMs) ++### Windows VM uninstall ++To uninstall the Windows agent from the machine, use the following steps: ++1. Sign in to the computer with an account that has administrator permissions. +2. In **Control Panel**, select **Programs and Features**. +3. In **Programs and Features**, select **Azure Connected Machine Agent**, select **Uninstall**, then select **Yes**. +4. Delete the `C:\Program Files\AzureConnectedMachineAgent` folder.
++### Linux VM uninstall ++To uninstall the Linux agent, the command to use depends on the Linux operating system. You must have `root` access permissions, or your account must have elevated rights using sudo. ++- For Ubuntu, run the following command: ++ ```bash + sudo apt purge azcmagent + ``` ++- For RHEL, CentOS, and Oracle Linux, run the following command: ++ ```bash + sudo yum remove azcmagent + ``` ++- For SLES, run the following command: ++ ```bash + sudo zypper remove azcmagent + ``` ++## Remove VMware vSphere resources from Azure ++When you activate Arc-enabled Azure VMware Solution resources in Azure, a representation is created for them in Azure. Before you can delete the vCenter Server resource in Azure, you need to delete all of the Azure resource representations you created for your vSphere resources. To delete the Azure resource representations you created, do the following steps: ++1. Go to the Azure portal. +1. Choose **Virtual machines** from Arc-enabled VMware vSphere resources in the private cloud. +1. Select all the VMs that have an Azure Enabled value of **Yes**. +1. Select **Remove from Azure**. This step starts deployment and removes these resources from Azure. The resources remain in your vCenter Server. + 1. Repeat steps 2, 3, and 4 for **Resourcespools/clusters/hosts**, **Templates**, **Networks**, and **Datastores**. +1. When the deletion completes, select **Overview**. + 1. Note the Custom location and the Azure Arc Resource bridge resources in the Essentials section. +1. Select **Remove from Azure** to remove the vCenter Server resource from Azure. +1. Go to the vCenter Server resource in Azure and delete it. +1. Go to the Custom location resource and select **Delete**. +1. Go to the Azure Arc Resource bridge resources and select **Delete**. ++At this point, all of your Arc-enabled VMware vSphere resources are removed from Azure. ++## Remove Arc resource bridge related items in your vCenter ++During onboarding, to create a connection between your VMware vCenter and Azure, an Azure Arc resource bridge is deployed into your VMware vSphere environment. As the last step, you must delete the resource bridge VM as well as the VM template created during the onboarding. ++To delete the resource bridge's Azure resource, run the following command (the full resource URL is truncated here): ++`az rest --method delete --url https://management.azure.com/subscriptions/{subId}/resourcegroups/{rg}...` ++Once that step is done, Arc no longer works on the Azure VMware Solution private cloud. When you delete Arc resources from vCenter Server, it doesn't affect the Azure VMware Solution private cloud for the customer. |
backup | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md | Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
batch | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md | Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
communication-services | Room Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md | Here are the main scenarios where rooms are useful: - **Rooms enable scheduled communication experience.** Rooms help service platforms deliver meeting-style experiences while still being suitably generic for a wide variety of industry applications. Services can schedule and manage rooms for patients seeking medical advice, financial planners working with clients, and lawyers providing legal services. - **Rooms enable an invite-only experience.** Rooms allow your services to control which users can join the room for a virtual appointment with doctors or financial consultants. This will allow only a subset of users with assigned Communication Services identities to join a room call. - **Rooms enable structured communications through roles and permissions.** Rooms allow developers to assign predefined roles to users to exercise a higher degree of control and structure in communication. Ensure only presenters can speak and share content in a large meeting or in a virtual conference.-- **Add PSTN participants.** Invite public switched telephone network (PSTN) participants to a call using a number purchased through your subscription or via Azure direct routing to your Session Border Controller (SBC).+- **Add PSTN participants. (Currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/))** Invite public switched telephone network (PSTN) participants to a call using a number purchased through your subscription or via Azure direct routing to your Session Border Controller (SBC). ## When to use rooms The tables below provide detailed capabilities mapped to the roles. At a high le | - Render a video in multiple places (local camera or remote stream) | ✔️ | ✔️ | ✔️ <br>(Only Remote)</br> | | - Set/Update video scaling mode | ✔️ | ✔️ | ✔️ <br>(Only Remote)</br> | | - Render remote video stream | ✔️ | ✔️ | ✔️ |-| **Add PSTN participants** | | | -| - Call participants using phone calls | ✔️ | ❌ | ❌ | +| **Add PSTN participants** **| | | +| - Call participants using phone calls | ✔️** | ❌ | ❌ | -*) Only available on the web calling SDK. Not available on iOS and Android calling SDKs +\* Only available on the web calling SDK. Not available on iOS and Android calling SDKs ++** Currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Event handling |
communication-services | Enable User Engagement Tracking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/enable-user-engagement-tracking.md | In this quick start, you'll learn about how to enable user engagement tracking f **Your email domain is now ready to send emails with user engagement tracking. Please be aware that user engagement tracking is applicable to HTML content and will not function if you submit the payload in plaintext.** You can now subscribe to Email User Engagement operational logs, which provide information related to 'open' and 'click' user engagement metrics for messages sent from the Email service.-+> [!IMPORTANT] +> If you plan to enable open/click tracking for your email links, ensure that you are formatting the email content in HTML correctly. Specifically, make sure your tracking content is properly encapsulated within the payload, as demonstrated below: +```html + <a href="https://www.contoso.com">Contoso Inc.</a> +``` + ## Next steps - Access logs for [Email Communication Service](../../concepts/analytics/logs/email-logs.md). -The following documents may be interesting to you: +The following documents might be interesting to you: - Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md) - [Get started by connecting Email Communication Service with an Azure Communication Services resource](../../quickstarts/email/connect-email-communication-resource.md) |
communication-services | Get Started Rooms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/get-started-rooms.md | The table below lists the main properties of `room` objects: | `roomId` | Unique `room` identifier. | | `validFrom` | Earliest time a `room` can be used. | | `validUntil` | Latest time a `room` can be used. |-| `pstnDialOutEnabled` | Enable or disable dialing out to a PSTN number in a room.| +| `pstnDialOutEnabled`* | Enable or disable dialing out to a PSTN number in a room.| | `participants` | List of participants to a `room`. Specified as a `CommunicationIdentifier`. | | `roleType` | The role of a room participant. Can be either `Presenter`, `Attendee`, or `Consumer`. | +*pstnDialOutEnabled is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) ::: zone pivot="platform-azcli" [!INCLUDE[Use rooms with Azure CLI](./includes/rooms-quickstart-az-cli.md)] |
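For the Azure CLI pivot included above, a rough sketch of creating a room with PSTN dial-out enabled (this assumes the Azure Communication Services CLI extension's `rooms` commands; the flag names, including `--pstn-dial-out-enabled`, are assumptions to verify against the quickstart include):

```azurecli
# Assumes AZURE_COMMUNICATION_CONNECTION_STRING is set for your resource.
az communication rooms create \
  --valid-from "2023-11-07T10:00:00Z" \
  --valid-until "2023-11-07T11:00:00Z" \
  --pstn-dial-out-enabled true
```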
communications-gateway | Interoperability Teams Direct Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-teams-direct-routing.md | For each customer, you must: As part of arranging updates to customer tenants, you must create DNS records containing a verification code (provided by Microsoft 365 when the customer updates their tenant with the domain name) on a DNS server that you control. These records allow Microsoft 365 to verify that the customer tenant is authorized to use the domain name. Azure Communications Gateway provides the DNS server that you must use. You must obtain the verification code from the customer and upload it with Azure Communications Gateway's Provisioning API to generate the DNS TXT records that verify the domain. > [!TIP]-> For a walkthrough of setting up a customer tenant and subdomain for your testing, see [Configure a test customer for Microsoft Teams Direct Routing with Azure Communications Gateway](configure-test-customer-teams-direct-routing.md). When you onboard a real customer, you'll need to follow a similar process, but you'll typically need to ask them to carry out the steps that need access to their tenant. +> For a walkthrough of setting up a customer tenant and numbers for your testing, see [Configure a test customer for Microsoft Teams Direct Routing with Azure Communications Gateway](configure-test-customer-teams-direct-routing.md) and [Configure test numbers for Microsoft Teams Direct Routing with Azure Communications Gateway](configure-test-numbers-teams-direct-routing.md). When you onboard a real customer, you'll need to follow a similar process, but you'll typically need to ask your customer to carry out the steps that need access to their tenant. ## Support for caller ID screening |
communications-gateway | Interoperability Zoom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-zoom.md | Azure Communications Gateway can manipulate signaling and media to meet the requ ## Role and position in the network -Azure Communications Gateway sits at the edge of your fixed line and mobile networks. It connects these networks to Zoom servers, allowing you to support the Zoom Phone Cloud Peering program. The following diagram shows where Azure Communications Gateway sits in your network. +Azure Communications Gateway sits at the edge of your fixed networks. It connects these networks to Zoom servers, allowing you to support the Zoom Phone Cloud Peering program. The following diagram shows where Azure Communications Gateway sits in your network. :::image type="complex" source="media/azure-communications-gateway-architecture-zoom.svg" alt-text="Architecture diagram for Azure Communications Gateway for Zoom Phone Cloud Peering." lightbox="media/azure-communications-gateway-architecture-zoom.svg"::: Architecture diagram showing Azure Communications Gateway connecting to Zoom servers and a fixed operator network over SIP and RTP. Azure Communications Gateway and Zoom Phone Cloud Peering connect multiple customers to the operator network. Azure Communications Gateway also has a provisioning API to which a BSS client in the operator's management network must connect. Azure Communications Gateway contains certified SBC function. :::image-end::: +You provide a trunk towards Zoom (via Azure Communications Gateway) for your customers. Calls flow from Zoom clients through the Zoom servers and Azure Communications Gateway into your network. [!INCLUDE [communications-gateway-multitenant](includes/communications-gateway-multitenant.md)]. -You provide a trunk towards Zoom (via Azure Communications Gateway) for your customers. Calls flow from Zoom clients through the Zoom servers and Azure Communications Gateway into your network. +You must provision Azure Communications Gateway with the details of the numbers that you upload to Zoom. This provisioning allows Azure Communications Gateway to route calls correctly. For more information, see [Identifying Zoom calls](#identifying-zoom-calls). --Azure Communications Gateway does not support Premises Peering (where each customer has an eSBC) for Zoom Phone. +Azure Communications Gateway doesn't support Premises Peering (where each customer has an eSBC) for Zoom Phone. ## SIP signaling The Zoom Phone Cloud Peering program requires SRTP for media. Azure Communicatio ### Media handling for calls -Azure Communications Gateway can use Opus, G.722 and G.711 towards Zoom servers, with a packetization time of 20ms. You must select the codecs that you want to support when you deploy Azure Communications Gateway. +Azure Communications Gateway can use Opus, G.722, and G.711 towards Zoom servers, with a packetization time of 20 ms. You must select the codecs that you want to support when you deploy Azure Communications Gateway. -If your network cannot support a packetization time of 20ms, you must contact your onboarding team or raise a support request to discuss your requirements for transrating (changing packetization time). +If your network can't support a packetization time of 20 ms, you must contact your onboarding team or raise a support request to discuss your requirements for transrating (changing packetization time).
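To illustrate the codec and packetization requirements, an SDP audio description offered toward Zoom might look roughly like the following (a hand-written sketch, not captured traffic; `RTP/SAVP` reflects the SRTP requirement):

```
m=audio 49170 RTP/SAVP 111 9 0 8
a=rtpmap:111 opus/48000/2
a=rtpmap:9 G722/8000
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=ptime:20
```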
### Media interworking options Azure Communications Gateway offers multiple media interworking options. For exa For full details of the media interworking features available in Azure Communications Gateway, raise a support request. +## Identifying Zoom calls ++You must provision Azure Communications Gateway with all the numbers that you upload to Zoom and indicate that these numbers are enabled for Zoom service. This provisioning allows Azure Communications Gateway to route calls to and from Zoom. It requires [Azure Communications Gateway's Provisioning API](integrate-with-provisioning-api.md). ++> [!IMPORTANT] +> If numbers that you upload to Zoom aren't configured on Azure Communications Gateway, calls involving those numbers fail. +> +> [Configure test numbers for Zoom Phone Cloud Peering with Azure Communications Gateway](configure-test-numbers-zoom.md) explains how to set up test numbers for integration testing. You will need to follow a similar process for real customer numbers. ++Optionally, you can indicate to your network that calls are from Zoom by: ++- Using the Provisioning API to add a header to calls associated with Zoom numbers. +- Configuring Zoom to add a header with custom contents to SIP INVITEs (as part of uploading numbers to Zoom). For more information on this header, see Zoom's _Zoom Phone Provider Exchange Solution Reference Guide_. + ## Next steps - Learn about [monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md). |
container-apps | Deploy Visual Studio Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio-code.md | In this tutorial, you'll deploy a containerized application to Azure Container A ## Clone the project -1. Begin by cloning the [sample repository](https://github.com/azure-samples/containerapps-albumapi-javascript) to your machine using the following command. +1. Open a new Visual Studio Code window. ++1. Select <kbd>F1</kbd> to open the command palette. ++1. Enter **Git: Clone** and press enter. ++1. Enter the following URL to clone the sample project: ```git- git clone https://github.com/Azure-Samples/containerapps-albumapi-javascript.git + https://github.com/Azure-Samples/containerapps-albumapi-javascript.git ``` > [!NOTE] > This tutorial uses a JavaScript project, but the steps are language agnostic. -1. Open Visual Studio Code. --1. Select **F1** to open the command palette. +1. Select a folder to clone the project into. -1. Select **File > Open Folder...** and select the folder where you cloned the sample project. +1. Select **Open** to open the project in Visual Studio Code. ## Sign in to Azure -1. Select **F1** to open the command palette. +1. Select <kbd>F1</kbd> to open the command palette. 1. Select **Azure: Sign In** and follow the prompts to authenticate. 1. Once signed in, return to Visual Studio Code. -## Create the container registry and Docker image --Docker images contain the source code and dependencies necessary to run an application. This sample project includes a Dockerfile used to build the application's container. Since you can build and publish the image for your app directly in Azure, a local Docker installation isn't required. --Container images are stored inside container registries. You can create a container registry and upload an image of your app in a single workflow using Visual Studio Code. --1. In the _Explorer_ window, expand the _src_ folder to reveal the Dockerfile. --1. Right select on the Dockerfile, and select **Build Image in Azure**. -- This action opens the command palette and prompts you to define a container tag. --1. Enter a tag for the container. Accept the default, which is the project name with a run ID suffix. --1. Select the Azure subscription that you want to use. --1. Select **+ Create new registry**, or if you already have a registry you'd like to use, select that item and skip to creating and deploying to the container app. --1. Enter a unique name for the new registry such as `msdocscapps123`, where `123` are unique numbers of your own choosing, and then select enter. -- Container registry names must be globally unique across all over Azure. --1. Select **Basic** as the SKU. --1. Choose **+ Create new resource group**, or select an existing resource group you'd like to use. -- For a new resource group, enter a name such as `msdocscontainerapps`, and press enter. --1. Select the location that is nearest to you. Select **Enter** to finalize the workflow, and Azure begins creating the container registry and building the image. -- This process may take a few moments to complete. --1. Select **Linux** as the image base operating system (OS). --Once the registry is created and the image is built successfully, you're ready to create the container app to host the published image. 
--## Create and deploy to the container app +## Create and deploy to Azure Container Apps The Azure Container Apps extension for Visual Studio Code enables you to choose existing Container Apps resources, or create new ones to deploy your applications to. In this scenario, you create a new Container App environment and container app to host your application. After installing the Container Apps extension, you can access its features under the Azure control panel in Visual Studio Code. -### Create the Container Apps environment +1. Select <kbd>F1</kbd> to open the command palette and run the **Azure Container Apps: Deploy Project from Workspace** command. -Every container app must be part of a Container Apps environment. An environment provides an isolated network for one or more container apps, making it possible for them to easily invoke each other. You'll need to create an environment before you can create the container app itself. --1. Select <kbd>F1</kbd> to open the command palette. --1. Enter **Azure Container Apps: Create Container Apps Environment...** and enter the following values as prompted by the extension. +1. Enter the following values as prompted by the extension. | Prompt | Value | |--|--|- | Name | Enter **my-aca-environment** | - | Region | Select the region closest to you | --Once you issue this command, Azure begins to create the environment for you. This process may take a few moments to complete. Creating a container app environment also creates a log analytics workspace for you in Azure. --### Create the container app and deploy the Docker image --Now that you have a container app environment in Azure you can create a container app inside of it. You can also publish the Docker image you created earlier as part of this workflow. --1. Select <kbd>F1</kbd> to open the command palette. --1. Enter **Azure Container Apps: Create Container App...** and enter the following values as prompted by the extension. -- | Prompt | Value | Remarks | - |--|--|--| - | Environment | Select **my-aca-environment** | | - | Name | Enter **my-container-app** | | - | Container registry | Select **Azure Container Registries**, then select the registry you created as you published the container image. | | - | Repository | Select the container registry repository where you published the container image. | | - | Tag | Select **latest** | | - | Environment variables | Select **Skip for now** | | - | Ingress | Select **Enable** | | - | HTTP traffic type | Select **External** | | - | Port | Enter **3500** | You set this value to the port number that your container uses. | + | Select subscription | Select the Azure subscription you want to use. | + | Select a container apps environment | Select **Create new container apps environment**. You're only asked this question if you have existing Container Apps environments. | + | Enter a name for the new container app resource(s) | Enter **my-container-app**. | + | Select a location | Select an Azure region close to you. | + | Would you like to save your deployment configuration? | Select **Save**. | -During this process, Visual Studio Code and Azure create the container app for you. The published Docker image you created earlier is also be deployed to the app. Once this process finishes, Visual Studio Code displays a notification with a link to browse to the site. Select this link, and to view your app in the browser. + The Azure activity log panel opens and displays the deployment progress. This process might take a few minutes to complete. +1. 
Once this process finishes, Visual Studio Code displays a notification. Select **Browse** to open the deployed app in a browser. -You can also append the `/albums` path at the end of the app URL to view data from a sample API request. + In the browser's location bar, append the `/albums` path at the end of the app URL to view data from a sample API request. Congratulations! You successfully created and deployed your first container app using Visual Studio Code. If you're not going to continue to use this application, you can delete the Azur Follow these steps in the Azure portal to remove the resources you created: -1. Select the **msdocscontainerapps** resource group from the *Overview* section. +1. Select the **my-container-app** resource group from the *Overview* section. 1. Select the **Delete resource group** button at the top of the resource group *Overview*.-1. Enter the resource group name **my-container-app** in the *Are you sure you want to delete "my-container-apps"* confirmation dialog. +1. Select **Delete**. - The process to delete the resource group may take a few minutes to complete. + The process to delete the resource group might take a few minutes to complete. > [!TIP] > Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps). |
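If you prefer a terminal to the Visual Studio Code workflow above, a rough CLI equivalent is sketched below (this assumes the `containerapp` Azure CLI extension; `az containerapp up` builds the sample from source and provisions an environment if one doesn't exist):

```azurecli
git clone https://github.com/Azure-Samples/containerapps-albumapi-javascript.git
cd containerapps-albumapi-javascript/src
az containerapp up \
  --name my-container-app \
  --resource-group my-container-app \
  --ingress external \
  --target-port 3500 \
  --source .
```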
container-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md | Title: Built-in policy definitions for Azure Container Apps description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
container-instances | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md | |
container-registry | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md | Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
cosmos-db | How To Container Copy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-container-copy.md | This article describes how to create, monitor, and manage intra-account containe ## Prerequisites -* You may use the portal [Cloud Shell](/azure/cloud-shell/quickstart?tabs=powershell) to run container copy commands. Alternatively, you may run the commands locally; make sure you have [Azure CLI](/cli/azure/install-azure-cli) downloaded and installed on your machine. +* You may use the portal [Cloud Shell](/azure/cloud-shell/get-started?tabs=powershell) to run container copy commands. Alternatively, you may run the commands locally; make sure you have [Azure CLI](/cli/azure/install-azure-cli) downloaded and installed on your machine. * Currently, container copy is only supported in [these regions](intra-account-container-copy.md#supported-regions). Make sure your account's write region belongs to this list. |
cosmos-db | How To Develop Emulator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-develop-emulator.md | The certificate for the emulator is available in the `_explorer/emulator.pem` pa > [!NOTE] > You may need to change the host (or IP address) and port number if you have previously modified those values. -1. Install the certificate according to the process typically used for your operating system. For example, in Linux you would copy the certificate to the `/usr/local/share/ca-certificats/` path. +1. Install the certificate according to the process typically used for your operating system. For example, in Linux you would copy the certificate to the `/usr/local/share/ca-certificates/` path. ```bash cp ~/emulatorcert.crt /usr/local/share/ca-certificates/ |
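Putting the certificate steps above together for Linux, a short sketch (this assumes the emulator is listening on the default `localhost:8081` endpoint and a Debian/Ubuntu trust store; `curl -k` is needed because the certificate isn't trusted yet):

```bash
# Download the emulator's certificate, then add it to the system trust store.
curl -k https://localhost:8081/_explorer/emulator.pem -o ~/emulatorcert.crt
sudo cp ~/emulatorcert.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
```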
cosmos-db | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md | Title: Azure Cosmos DB – Unified AI Database-+ description: Azure Cosmos DB is a global multi-model database and ideal database for AI applications requiring speed, elasticity and availability with native support for NoSQL and relational data. |
cosmos-db | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md | Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
cosmos-db | How To Configure Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/how-to-configure-authentication.md | Last updated 09/19/2023 > [!IMPORTANT] > Microsoft Entra authentication in Azure Cosmos DB for PostgreSQL is currently in preview. > This preview version is provided without a service level agreement, and it's not recommended-> for production workloads. Certain features might not be supported or might have constrained +> for production workloads. Certain features might not be supported or might have constrained > capabilities. > > You can see a complete list of other new features in [preview features](product-updates.md#features-in-preview). In this article, you configure authentication methods for Azure Cosmos DB for PostgreSQL. You manage Microsoft Entra admin users and native PostgreSQL roles for authentication with Azure Cosmos DB for PostgreSQL. You also learn how to use a Microsoft Entra token with Azure Cosmos DB for PostgreSQL. -An Azure Cosmos DB for PostgreSQL cluster is created with one built-in native PostgreSQL role named 'citus'. You can add more native PostgreSQL roles after cluster provisioning is completed. +An Azure Cosmos DB for PostgreSQL cluster is created with one built-in native PostgreSQL role named 'citus'. You can add more native PostgreSQL roles after cluster provisioning is completed. You can also configure Microsoft Entra authentication for Azure Cosmos DB for PostgreSQL. You can enable Microsoft Entra authentication in addition or instead of the native PostgreSQL authentication on your cluster. You can change authentication methods enabled on cluster at any point after the cluster is provisioned. When Microsoft Entra authentication is enabled, you can add multiple Microsoft Entra users to an Azure Cosmos DB for PostgreSQL cluster and make any of them administrators. Microsoft Entra user can be a user or a service principal. Once done proceed with [configuring Microsoft Entra authentication](#configure-a To add or remove Microsoft Entra roles on cluster, follow these steps on **Authentication** page: -1. In **Microsoft Entra authentication (preview)** section, select **Add Microsoft Entra admins**. +1. In **Microsoft Entra authentication (preview)** section, select **Add Microsoft Entra admins**. 1. In **Select Microsoft Entra Admins** panel, select one or more valid Microsoft Entra user or enterprise application in the current AD tenant to be a Microsoft Entra administrator on your Azure Cosmos DB for PostgreSQL cluster. 1. Use **Select** to confirm your choice. 1. In the **Authentication** page, select **Save** in the toolbar to save changes or proceed with adding native PostgreSQL roles.- + ## Configure native PostgreSQL authentication To add Postgres roles on cluster, follow these steps on **Authentication** page: We've tested the following clients: - **Other libpq-based clients**: Examples include common application frameworks and object-relational mappers (ORMs). - **pgAdmin**: Clear **Connect now** at server creation. -Use the following procedures to authenticate with Microsoft Entra ID as an Azure Cosmos DB for PostgreSQL user. You can follow along in [Azure Cloud Shell](./../../cloud-shell/quickstart.md), on an Azure virtual machine, or on your local machine. +Use the following procedures to authenticate with Microsoft Entra ID as an Azure Cosmos DB for PostgreSQL user. 
You can follow along in [Azure Cloud Shell](./../../cloud-shell/get-started.md), on an Azure virtual machine, or on your local machine. ### Sign in to the user's Azure subscription export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --quer > [!NOTE]-> Make sure PGPASSWORD variable is set to the Microsoft Entra access token for your -> subscription for Microsoft Entra authentication. If you need to do Postgres role authentication -> from the same session you can set PGPASSWORD to the Postgres role password -> or clear the PGPASSWORD variable value to enter the password interactively. +> Make sure PGPASSWORD variable is set to the Microsoft Entra access token for your +> subscription for Microsoft Entra authentication. If you need to do Postgres role authentication +> from the same session you can set PGPASSWORD to the Postgres role password +> or clear the PGPASSWORD variable value to enter the password interactively. > Authentication would fail with the wrong value in PGPASSWORD. Now you can initiate a connection with Azure Cosmos DB for PostgreSQL as you usually would (without 'password' parameter in the command line): For example, to allow PostgreSQL `db_user` to read `mytable`, grant the permissi GRANT SELECT ON mytable TO db_user; ``` -To grant the same permissions to Microsoft Entra role `user@tenant.onmicrosoft.com` use the following command: +To grant the same permissions to Microsoft Entra role `user@tenant.onmicrosoft.com` use the following command: ```sql GRANT SELECT ON mytable TO "user@tenant.onmicrosoft.com"; |
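Putting the token flow above together, a minimal connection sketch (the cluster host name and user are placeholders; because `PGPASSWORD` carries the token, no password parameter is needed):

```bash
# Fetch a Microsoft Entra access token for the oss-rdbms resource type...
export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)

# ...then connect as the Microsoft Entra role (placeholder host and user).
psql "host=c-mycluster.0123456789.postgres.cosmos.azure.com port=5432 dbname=citus user=user@tenant.onmicrosoft.com sslmode=require"
```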
cosmos-db | Autoscale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/autoscale.md | The script in this article creates an Azure Cosmos DB for Apache Cassandra accou - This script requires Azure CLI version 2.12.1 or later. - - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI. + - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI. [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com) |
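For orientation, the core autoscale pattern the linked script automates looks roughly like the following sketch; the resource names and table schema here are placeholders rather than the script's own generated values:

```azurecli
resourceGroup="myResourceGroup"
account="mycassandraaccount"   # must be globally unique

# Create an account with the API for Cassandra enabled.
az cosmosdb create --name $account --resource-group $resourceGroup \
    --capabilities EnableCassandra

az cosmosdb cassandra keyspace create --account-name $account \
    --resource-group $resourceGroup --name mykeyspace

# --max-throughput sets the autoscale ceiling in RU/s; the service scales
# the table between 10% of this value and the value itself.
az cosmosdb cassandra table create --account-name $account \
    --resource-group $resourceGroup --keyspace-name mykeyspace --name mytable \
    --schema '{"columns":[{"name":"id","type":"int"}],"partitionKeys":[{"name":"id"}]}' \
    --max-throughput 4000
```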
cosmos-db | Autoscale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/autoscale.md | The script in this article creates an Azure Cosmos DB for Gremlin account, datab - This script requires Azure CLI version 2.30 or later. - - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI. + - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI. [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com) |
cosmos-db | Serverless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/serverless.md | The script in this article creates an Azure Cosmos DB for Gremlin serverless acc - This script requires Azure CLI version 2.30 or later. - - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI. + - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI. [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com) |
cosmos-db | Autoscale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/autoscale.md | The script in this article creates an Azure Cosmos DB for NoSQL account, databas - This script requires Azure CLI version 2.0.73 or later. - - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI. + - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI. [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com) The script in this article creates an Azure Cosmos DB for NoSQL account, databas ```azurecli subscription="<subscriptionId>" # add subscription here- + az account set -s $subscription # ...or use 'az login' ``` |
cosmos-db | Autoscale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/autoscale.md | The script in this article creates an Azure Cosmos DB for Table account and tabl - This script requires Azure CLI version 2.12.1 or later. - - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI. + - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI. [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com) The script in this article creates an Azure Cosmos DB for Table account and tabl ```azurecli subscription="<subscriptionId>" # add subscription here- + az account set -s $subscription # ...or use 'az login' ``` |
cosmos-db | Lock | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md | -The script in this article demonstrates performing resource lock operations for a API for Table table. +The script in this article demonstrates performing resource lock operations for an API for Table table. > [!IMPORTANT] > To enable resource locking, the Azure Cosmos DB account must have the `disableKeyBasedMetadataWriteAccess` property enabled. This property prevents any changes to resources from clients that connect via account keys, such as the Azure Cosmos DB Table SDK, Azure Storage Table SDK, or Azure portal. For more information, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes). The script in this article demonstrates performing resource lock operations for - This script requires Azure CLI version 2.12.1 or later. - - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI. + - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI. [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com) The script in this article demonstrates performing resource lock operations for ```azurecli subscription="<subscriptionId>" # add subscription here- + az account set -s $subscription # ...or use 'az login' ``` |
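As a sketch of the two halves involved here, enabling the prerequisite property and then locking a table might look like the following; the account and table names are placeholders, and the exact resource path is worth verifying against your deployment:

```azurecli
resourceGroup="myResourceGroup"
account="mycosmosaccount"

# Prerequisite: block metadata changes from key-based clients (SDKs, portal).
az cosmosdb update --name $account --resource-group $resourceGroup \
    --disable-key-based-metadata-write-access true

# Put a delete lock on an API for Table table, then remove it.
az lock create --name cannot-delete-table --lock-type CanNotDelete \
    --resource-group $resourceGroup --namespace Microsoft.DocumentDB \
    --parent databaseAccounts/$account --resource-type tables --resource mytable

az lock delete --name cannot-delete-table \
    --resource-group $resourceGroup --namespace Microsoft.DocumentDB \
    --parent databaseAccounts/$account --resource-type tables --resource mytable
```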
cosmos-db | Serverless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/serverless.md | The script in this article creates an Azure Cosmos DB for Table serverless accou - This script requires Azure CLI version 2.12.1 or later. - - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI. + - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI. [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com) The script in this article creates an Azure Cosmos DB for Table serverless accou ```azurecli subscription="<subscriptionId>" # add subscription here- + az account set -s $subscription # ...or use 'az login' ``` |
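For contrast with the autoscale scripts above, the serverless pattern differs mainly at account creation; a rough sketch with placeholder names:

```azurecli
resourceGroup="myResourceGroup"
account="mytableaccount"

# EnableServerless switches the account to serverless capacity mode;
# EnableTable turns on the API for Table.
az cosmosdb create --name $account --resource-group $resourceGroup \
    --capabilities EnableServerless EnableTable

# No throughput settings: a serverless table bills per consumed request unit.
az cosmosdb table create --account-name $account \
    --resource-group $resourceGroup --name mytable
```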
cost-management-billing | Direct Ea Administration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md | Title: EA Billing administration on the Azure portal description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal. Previously updated : 07/21/2023 Last updated : 11/07/2023 -> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md). -> -> As of February 20, 2023 indirect EA customers no longer manage their billing account in the EA portal. Instead, they use the Azure portal. +> On November 15, 2023, the Azure Enterprise portal is retiring for EA enrollments in the Commercial cloud and is becoming read-only for EA enrollments in the Azure Government cloud. +> Customers and Partners should use Cost Management + Billing in the Azure portal to manage their enrollments. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md). > -> Until August 14, 2023, this change doesn't affect customers with Azure Government EA enrollments. They continue using the EA portal to manage their enrollment until then. However, after August 14, 2023, EA customers won't be able to manage their Azure Government EA enrollments from the [Azure portal](https://portal.azure.com). Instead, they can manage it from the Azure Government portal at [https://portal.azure.us](https://portal.azure.us). The functionality mentioned in this article the same as the Azure Government portal. +> Since August 14, 2023, EA customers haven't been able to manage their Azure Government EA enrollments from the [Azure portal](https://portal.azure.com). Instead, they can manage them from the Azure Government portal at [https://portal.azure.us](https://portal.azure.us). The functionality mentioned in this article is the same as in the Azure Government portal. ## Manage your enrollment |
cost-management-billing | Direct Ea Azure Usage Charges Invoices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md | Title: View your Azure usage summary details and download reports for EA enrollm description: This article explains how enterprise administrators of direct and indirect Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Previously updated : 07/14/2023 Last updated : 11/06/2023 To review and verify the charges on your invoice, you must be an Enterprise Admi ## Review usage charges -To view detailed usage for specific accounts, download the usage detail report. Usage files may be large. If you prefer, you can use the exports feature to get the same data exported to an Azure Storage account. For more information, see [Export usage details to a storage account](../costs/tutorial-export-acm-data.md). +To view detailed usage for specific accounts, download the usage detail report. Usage files can be large. If you prefer, you can use the exports feature to get the same data exported to an Azure Storage account. For more information, see [Export usage details to a storage account](../costs/tutorial-export-acm-data.md). As an enterprise administrator: Enterprise administrators and partner administrators can also view an overall su ## Download or view your Azure billing invoice -An EA administrator can download the invoice from the [Azure portal](https://portal.azure.com) or have it sent in email. Invoices are sent to whoever is set up to receive invoices for the enrollment. If someone other than an EA administrator needs an email copy of the invoice, an EA administrator can send them a copy. +An EA administrator can download the invoice from the [Azure portal](https://portal.azure.com) or send it by email. Invoices are sent to whoever is set up to receive invoices for the enrollment. If someone other than an EA administrator needs an email copy of the invoice, an EA administrator can send them a copy. Only an Enterprise Administrator has permission to view and download the billing invoice. To learn more about getting access to billing information, see [Manage access to Azure billing using roles](manage-billing-access.md). You receive an Azure invoice when any of the following events occur during your - Visual Studio Professional (Annual) - **Marketplace charges** - Azure Marketplace purchases and usage aren't covered by your organization's credit. So, you're invoiced for Marketplace charges despite your credit balance. In the Azure portal, an Enterprise Administrator can enable and disable Marketplace purchases. -Your invoice displays Azure usage charges with costs associated to them first, followed by any Marketplace charges. If you have a credit balance, it's applied to Azure usage. Your invoice shows Azure usage and Marketplace usage without any cost last. +Your invoice displays Azure usage charges with costs associated with them first, followed by any Marketplace charges. If you have a credit balance, it gets applied to Azure usage. Your invoice shows Azure usage and Marketplace usage without any cost last. ++### Advanced report download ++You can use **Download Advanced report** to get reports that cover specific date ranges for the selected accounts. The output file is in CSV format to accommodate large record sets. ++1. Sign in to the [Azure portal](https://portal.azure.com). 
+1. Search for **Cost Management + Billing** and select it. +1. Select **Billing scopes** from the navigation menu and then select the billing account that you want to work with. +1. In the left navigation menu, select **Billing profiles** and select the billing profile that you want to work with. +1. In the navigation menu, select **Usage + Charges**. +1. At the top of the Usage + charges page, select **Download Advanced report**. +1. Select a date range and the accounts to include in the report. +1. Select **Download**. +1. You can also download files from the **Report History**. It shows the latest reports that you downloaded. + ### Download your Azure invoices (.pdf) However, you *should* see: The formatting issue occurs because of default settings in Excel's import functionality. Excel imports all fields as *General* text and assumes that a number is separated in the mathematical standard. For example: *1,000.00*. -If your currency uses a period (**.**) for the thousandth place separator and a comma (**,**) for the decimal place separator, it's displayed incorrectly. For example: *1.000,00*. The import results may vary depending on your regional language setting. +If your currency uses a period (**.**) for the thousands separator and a comma (**,**) for the decimal separator, it gets displayed incorrectly. For example: *1.000,00*. The import results might vary depending on your regional language setting. To import the CSV file without formatting issues: 1. In Microsoft Excel, go to **File** > **Open**. The Text Import Wizard appears. 1. Under **Original Data Type**, choose **Delimited**. The default is **Fixed Width**. 1. Select **Next**.-1. Under **Delimiters**, select the box for **Comma**. Clear **Tab** if it's selected. +1. Under **Delimiters**, select the box for **Comma**. Clear **Tab** if selected. 1. Select **Next**. 1. Scroll over to the **ResourceRate** and **ExtendedCost** columns. 1. Select the **ResourceRate** column. It appears highlighted in black. |
cost-management-billing | Ea Portal Administration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-administration.md | Title: Azure EA portal administration description: This article explains the common tasks that an administrator accomplishes in the Azure EA portal. Previously updated : 07/28/2023 Last updated : 11/07/2023 -> [!IMPORTANT] -> The Azure EA portal is getting deprecated. Direct and indirect EA Azure customers now use Cost Management + Billing features in the Azure portal to manage their enrollment and billing *instead of using the EA portal*. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md). +> [!NOTE] +> On November 15, 2023, the Azure Enterprise portal is retiring for EA enrollments in the Commercial cloud and is becoming read-only for EA enrollments in the Azure Government cloud. +> Customers and Partners should use Cost Management + Billing in the Azure portal to manage their enrollments. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md). ## Activate your enrollment |
data-factory | Connector Microsoft Fabric Lakehouse Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse-table.md | - Title: Copy and Transform data in Microsoft Fabric Lakehouse Table (Preview) - -description: Learn how to copy and transform data to and from Microsoft Fabric Lakehouse Table (Preview) using Azure Data Factory or Azure Synapse Analytics pipelines. ------ Previously updated : 11/01/2023---# Copy and Transform data in Microsoft Fabric Lakehouse Table (Preview) using Azure Data Factory or Azure Synapse Analytics ---The Microsoft Fabric Lakehouse serves as a data architecture platform designed to store, manage, and analyse both structured and unstructured data within a single location. This article outlines how to use Copy Activity to copy data from and to Microsoft Fabric Lakehouse Table (Preview) and use Data Flow to transform data in Microsoft Fabric Lakehouse Files (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md). --> [!IMPORTANT] -> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/). --## Supported capabilities --This Microsoft Fabric Lakehouse Table connector is supported for the following capabilities: --| Supported capabilities|IR | Managed private endpoint| -|| --| --| -|[Copy activity](copy-activity-overview.md) (source/sink)|① ②|✓ | -|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|① |✓ | --<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> --## Get started ---## Create a Microsoft Fabric Lakehouse linked service using UI --Use the following steps to create a Microsoft Fabric Lakehouse linked service in the Azure portal UI. --1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New: -- # [Azure Data Factory](#tab/data-factory) -- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI."::: -- # [Azure Synapse](#tab/synapse-analytics) -- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI."::: --2. Search for Microsoft Fabric Lakehouse and select the connector. -- :::image type="content" source="media/connector-microsoft-fabric-lakehouse/microsoft-fabric-lakehouse-connector.png" alt-text="Screenshot showing select Microsoft Fabric Lakehouse connector."::: --1. Configure the service details, test the connection, and create the new linked service. -- :::image type="content" source="media/connector-microsoft-fabric-lakehouse/configure-microsoft-fabric-lakehouse-linked-service.png" alt-text="Screenshot of configuration for Microsoft Fabric Lakehouse linked service."::: ---## Connector configuration details --The following sections provide details about properties that are used to define Data Factory entities specific to Microsoft Fabric Lakehouse. --## Linked service properties --The Microsoft Fabric Lakehouse connector supports the following authentication types. 
See the corresponding sections for details: --- [Service principal authentication](#service-principal-authentication)--### Service principal authentication --To use service principal authentication, follow these steps. --1. Register an application with the Microsoft Identity platform. To learn how, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). Make note of these values, which you use to define the linked service: -- - Application ID - - Application key - - Tenant ID --2. Grant the service principal at least the **Contributor** role in Microsoft Fabric workspace. Follow these steps: - 1. Go to your Microsoft Fabric workspace, select **Manage access** on the top bar. Then select **Add people or groups**. - - :::image type="content" source="media/connector-microsoft-fabric-lakehouse/fabric-workspace-manage-access.png" alt-text="Screenshot shows selecting Fabric workspace Manage access."::: -- :::image type="content" source="media/connector-microsoft-fabric-lakehouse/manage-access-pane.png" alt-text=" Screenshot shows Fabric workspace Manage access pane."::: - - 1. In **Add people** pane, enter your service principal name, and select your service principal from the drop-down list. - - 1. Specify the role as **Contributor** or higher (Admin, Member), then select **Add**. - - :::image type="content" source="media/connector-microsoft-fabric-lakehouse/select-workspace-role.png" alt-text="Screenshot shows adding Fabric workspace role."::: -- 1. Your service principal is displayed on **Manage access** pane. --These properties are supported for the linked service: --| Property | Description | Required | -|: |: |: | -| type | The type property must be set to **Lakehouse**. |Yes | -| workspaceId | The Microsoft Fabric workspace ID. | Yes | -| artifactId | The Microsoft Fabric Lakehouse object ID. | Yes | -| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes | -| servicePrincipalId | Specify the application's client ID. | Yes | -| servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes | -| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes | -| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory or Synapse pipeline's cloud environment is used. | No | -| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. 
|No | --**Example: using service principal key authentication** --You can also store service principal key in Azure Key Vault. --```json -{ - "name": "MicrosoftFabricLakehouseLinkedService", - "properties": { - "type": "Lakehouse", - "typeProperties": { - "workspaceId": "<Microsoft Fabric workspace ID>", - "artifactId": "<Microsoft Fabric Lakehouse object ID>", - "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>", - "servicePrincipalId": "<service principal id>", - "servicePrincipalCredentialType": "ServicePrincipalKey", - "servicePrincipalCredential": { - "type": "SecureString", - "value": "<service principal key>" - } - }, - "connectVia": { - "referenceName": "<name of Integration Runtime>", - "type": "IntegrationRuntimeReference" - } - } -} -``` --## Dataset properties --For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article. --The following properties are supported for Microsoft Fabric Lakehouse Table dataset: --| Property | Description | Required | -| :-- | :-- | :-- | -| type | The **type** property of the dataset must be set to **LakehouseTable**. | Yes | -| schema | Name of the schema. |No for source. Yes for sink | -| table | Name of the table/view. |No for source. Yes for sink | --### Dataset properties example --```json -{ -    "name": "LakehouseTableDataset", -    "properties": { -        "type": "LakehouseTable", -        "linkedServiceName": { -            "referenceName": "<Microsoft Fabric Lakehouse linked service name>", -            "type": "LinkedServiceReference" -        }, -        "typeProperties": { -            "table": "<table_name>" -        }, -        "schema": [< physical schema, optional, retrievable during authoring >] -    } -} -``` --## Copy activity properties --For a full list of sections and properties available for defining activities, see [Copy activity configurations](copy-activity-overview.md#configuration) and [Pipelines and activities](concepts-pipelines-activities.md). This section provides a list of properties supported by the Microsoft Fabric Lakehouse Table source and sink. --### Microsoft Fabric Lakehouse Table as a source type --To copy data from Microsoft Fabric Lakehouse Table, set the **type** property in the Copy Activity source to **LakehouseTableSource**. The following properties are supported in the Copy Activity **source** section: --| Property | Description | Required | -| : | :-- | :- | -| type | The **type** property of the Copy Activity source must be set to **LakehouseTableSource**. | Yes | -| timestampAsOf | The timestamp to query an older snapshot. | No | -| versionAsOf | The version to query an older snapshot. 
| No | --**Example: Microsoft Fabric Lakehouse Table source** --```json -"activities":[ - { - "name": "CopyFromLakehouseTable", - "type": "Copy", - "inputs": [ - { - "referenceName": "<Microsoft Fabric Lakehouse Table input dataset name>", - "type": "DatasetReference" - } - ], - "outputs": [ - { - "referenceName": "<output dataset name>", - "type": "DatasetReference" - } - ], - "typeProperties": { - "source": { - "type": "LakehouseTableSource", - "timestampAsOf": "2023-09-23T00:00:00.000Z", - "versionAsOf": 2 - }, - "sink": { - "type": "<sink type>" - } - } - } -] -``` --### Microsoft Fabric Lakehouse Table as a sink type --To copy data from Microsoft Fabric Lakehouse Table, set the **type** property in the Copy Activity source to **LakehouseTableSink**. The following properties are supported in the Copy activity **sink** section: --| Property | Description | Required | -| : | :-- | :- | -| type | The **type** property of the Copy Activity source must be set to **LakehouseTableSink**. | Yes | -| tableActionOption | The way to write data to the sink table. Allowed values are `Append` and `Overwrite`. | No | -| partitionOption | Allowed values are `None` and `PartitionByKey`. Create partitions in folder structure based on one or multiple columns when the value is `PartitionByKey`. Each distinct column value (pair) will be a new partition (e.g. year=2000/month=01/file). It supports insert-only mode and requires an empty directory in sink. | No | -| partitionNameList | The destination columns in schemas mapping. Supported data types are string, integer, boolean and datetime. Format respects type conversion settings under "Mapping" tab. | No | --**Example: Microsoft Fabric Lakehouse Table sink** --```json -"activities":[ - { - "name": "CopyToLakehouseTable", - "type": "Copy", - "inputs": [ - { - "referenceName": "<input dataset name>", - "type": "DatasetReference" - } - ], - "outputs": [ - { - "referenceName": "<Microsoft Fabric Lakehouse Table output dataset name>", - "type": "DatasetReference" - } - ], - "typeProperties": { - "source": { - "type": "<source type>" - }, - "sink": { - "type": "LakehouseTableSink", - "tableActionOption ": "Append" - } - } - } -] -``` -## Mapping data flow properties --When transforming data in mapping data flow, you can read and write to tables in Microsoft Fabric Lakehouse. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows. --### Microsoft Fabric Lakehouse Table as a source type --There are no configurable properties under source options. --### Microsoft Fabric Lakehouse Table as a sink type --The following properties are supported in the Mapping Data Flows **sink** section: --| Name | Description | Required | Allowed values | Data flow script property | -| - | -- | -- | -- | - | -| Update method | When you select "Allow insert" alone or when you write to a new delta table, the target receives all incoming rows regardless of the Row policies set. If your data contains rows of other Row policies, they need to be excluded using a preceding Filter transform. <br><br> When all Update methods are selected a Merge is performed, where rows are inserted/deleted/upserted/updated as per the Row Policies set using a preceding Alter Row transform. | yes | `true` or `false` | insertable <br> deletable <br> upsertable <br> updateable | -| Optimized Write | Achieve higher throughput for write operation via optimizing internal shuffle in Spark executors. 
As a result, you may notice fewer partitions and files that are of a larger size | no | `true` or `false` | optimizedWrite: true | -| Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to re-organize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true | -| Merge Schema | Merge schema option allows schema evolution, i.e. any columns that are present in the current incoming stream but not in the target Delta table is automatically added to its schema. This option is supported across all update methods. | no | `true` or `false` | mergeSchema: true | --**Example: Microsoft Fabric Lakehouse Table sink** --``` -sink(allowSchemaDrift: true, -    validateSchema: false, -    input( -        CustomerID as string, -        NameStyle as string, -        Title as string, -        FirstName as string, -        MiddleName as string, -        LastName as string, -        Suffix as string, -        CompanyName as string, -        SalesPerson as string, -        EmailAddress as string, -        Phone as string, -        PasswordHash as string, -        PasswordSalt as string, -        rowguid as string, -        ModifiedDate as string -    ), -    deletable:false, -    insertable:true, -    updateable:false, -    upsertable:false, -    optimizedWrite: true, -    mergeSchema: true, -    autoCompact: true, -    skipDuplicateMapInputs: true, -    skipDuplicateMapOutputs: true) ~> CustomerTable --``` ---## Next steps --For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats). |
data-factory | Connector Microsoft Fabric Lakehouse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse.md | + + Title: Copy and transform data in Microsoft Fabric Lakehouse (Preview) ++description: Learn how to copy and transform data to and from Microsoft Fabric Lakehouse (Preview) using Azure Data Factory or Azure Synapse Analytics pipelines. ++++++ Last updated : 11/03/2023+++# Copy and transform data in Microsoft Fabric Lakehouse (Preview) using Azure Data Factory or Azure Synapse Analytics +++Microsoft Fabric Lakehouse is a data architecture platform for storing, managing, and analyzing structured and unstructured data in a single location. To learn how it provides seamless data access across all compute engines in Microsoft Fabric, see [Lakehouse and Delta Tables](/fabric/data-engineering/lakehouse-and-delta-tables). ++This article outlines how to use Copy activity to copy data from and to Microsoft Fabric Lakehouse (Preview) and use Data Flow to transform data in Microsoft Fabric Lakehouse (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md). ++> [!IMPORTANT] +> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/). ++## Supported capabilities ++This Microsoft Fabric Lakehouse connector is supported for the following capabilities: ++| Supported capabilities|IR | Managed private endpoint| +|| --| --| +|[Copy activity](copy-activity-overview.md) (source/sink)|① ②|✓ | +|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|① |✓ | ++<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> ++## Get started +++## Create a Microsoft Fabric Lakehouse linked service using UI ++Use the following steps to create a Microsoft Fabric Lakehouse linked service in the Azure portal UI. ++1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New: ++ # [Azure Data Factory](#tab/data-factory) ++ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI."::: ++ # [Azure Synapse](#tab/synapse-analytics) ++ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI."::: ++1. Search for Microsoft Fabric Lakehouse and select the connector. ++ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/microsoft-fabric-lakehouse-connector.png" alt-text="Screenshot showing select Microsoft Fabric Lakehouse connector."::: ++1. Configure the service details, test the connection, and create the new linked service. ++ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/configure-microsoft-fabric-lakehouse-linked-service.png" alt-text="Screenshot of configuration for Microsoft Fabric Lakehouse linked service."::: ++## Connector configuration details ++The following sections provide details about properties that are used to define Data Factory entities specific to Microsoft Fabric Lakehouse. ++## Linked service properties ++The Microsoft Fabric Lakehouse connector supports the following authentication types. 
See the corresponding sections for details: ++- [Service principal authentication](#service-principal-authentication) ++### Service principal authentication ++To use service principal authentication, follow these steps. ++1. Register an application with the Microsoft Identity platform. To learn how, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). Make note of these values, which you use to define the linked service: ++ - Application ID + - Application key + - Tenant ID ++2. Grant the service principal at least the **Contributor** role in Microsoft Fabric workspace. Follow these steps: + 1. Go to your Microsoft Fabric workspace, select **Manage access** on the top bar. Then select **Add people or groups**. + + :::image type="content" source="media/connector-microsoft-fabric-lakehouse/fabric-workspace-manage-access.png" alt-text="Screenshot shows selecting Fabric workspace Manage access."::: ++ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/manage-access-pane.png" alt-text=" Screenshot shows Fabric workspace Manage access pane."::: + + 1. In **Add people** pane, enter your service principal name, and select your service principal from the drop-down list. + + 1. Specify the role as **Contributor** or higher (Admin, Member), then select **Add**. + + :::image type="content" source="media/connector-microsoft-fabric-lakehouse/select-workspace-role.png" alt-text="Screenshot shows adding Fabric workspace role."::: ++ 1. Your service principal is displayed on **Manage access** pane. + +These properties are supported for the linked service: ++| Property | Description | Required | +|: |: |: | +| type | The type property must be set to **Lakehouse**. |Yes | +| workspaceId | The Microsoft Fabric workspace ID. | Yes | +| artifactId | The Microsoft Fabric Lakehouse object ID. | Yes | +| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes | +| servicePrincipalId | Specify the application's client ID. | Yes | +| servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes | +| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes | +| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No | ++**Example: using service principal key authentication** ++You can also store service principal key in Azure Key Vault. ++```json +{ + "name": "MicrosoftFabricLakehouseLinkedService", + "properties": { + "type": "Lakehouse", + "typeProperties": { + "workspaceId": "<Microsoft Fabric workspace ID>", + "artifactId": "<Microsoft Fabric Lakehouse object ID>", + "tenant": "<tenant info, e.g. 
microsoft.onmicrosoft.com>", + "servicePrincipalId": "<service principal id>", + "servicePrincipalCredentialType": "ServicePrincipalKey", + "servicePrincipalCredential": { + "type": "SecureString", + "value": "<service principal key>" + } + }, + "connectVia": { + "referenceName": "<name of Integration Runtime>", + "type": "IntegrationRuntimeReference" + } + } +} +``` ++## Dataset properties ++Microsoft Fabric Lakehouse connector supports two types of datasets, which are Microsoft Fabric Lakehouse Files dataset +and Microsoft Fabric Lakehouse Table dataset. See the corresponding sections for details. ++- [Microsoft Fabric Lakehouse Files dataset](#microsoft-fabric-lakehouse-files-dataset) +- [Microsoft Fabric Lakehouse Table dataset](#microsoft-fabric-lakehouse-table-dataset) ++For a full list of sections and properties available for defining datasets, see [Datasets](concepts-datasets-linked-services.md). ++### Microsoft Fabric Lakehouse Files dataset ++Microsoft Fabric Lakehouse connector supports the following file formats. Refer to each article for format-based settings. ++- [Avro format](format-avro.md) +- [Binary format](format-binary.md) +- [Delimited text format](format-delimited-text.md) +- [JSON format](format-json.md) +- [ORC format](format-orc.md) +- [Parquet format](format-parquet.md) ++The following properties are supported under `location` settings in the format-based Microsoft Fabric Lakehouse Files dataset: ++| Property | Description | Required | +| - | | -- | +| type | The type property under `location` in the dataset must be set to **LakehouseLocation**. | Yes | +| folderPath | The path to a folder. If you want to use a wildcard to filter folders, skip this setting and specify it in activity source settings. | No | +| fileName | The file name under the given folderPath. If you want to use a wildcard to filter files, skip this setting and specify it in activity source settings. | No | ++**Example:** ++```json +{ + "name": "DelimitedTextDataset", + "properties": { + "type": "DelimitedText", + "linkedServiceName": { + "referenceName": "<Microsoft Fabric Lakehouse linked service name>", + "type": "LinkedServiceReference" + }, + "typeProperties": { + "location": { + "type": "LakehouseLocation", + "fileName": "<file name>", + "folderPath": "<folder name>" + }, + "columnDelimiter": ",", + "compressionCodec": "gzip", + "escapeChar": "\\", + "firstRowAsHeader": true, + "quoteChar": "\"" + }, + "schema": [ < physical schema, optional, auto retrieved during authoring > ] + } +} +``` ++### Microsoft Fabric Lakehouse Table dataset ++The following properties are supported for Microsoft Fabric Lakehouse Table dataset: ++| Property | Description | Required | +| :-- | :-- | :-- | +| type | The **type** property of the dataset must be set to **LakehouseTable**. | Yes | +| table | The name of your table. 
| Yes | ++**Example:** ++```json +{ +    "name": "LakehouseTableDataset", +    "properties": { +        "type": "LakehouseTable", +        "linkedServiceName": { +            "referenceName": "<Microsoft Fabric Lakehouse linked service name>", +            "type": "LinkedServiceReference" +        }, +        "typeProperties": { +            "table": "<table_name>" +        }, +        "schema": [< physical schema, optional, retrievable during authoring >] +    } +} +``` ++## Copy activity properties ++The copy activity properties for Microsoft Fabric Lakehouse Files dataset and Microsoft Fabric Lakehouse Table dataset are different. See the corresponding sections for details. ++- [Microsoft Fabric Lakehouse Files in Copy activity](#microsoft-fabric-lakehouse-files-in-copy-activity) +- [Microsoft Fabric Lakehouse Table in Copy activity](#microsoft-fabric-lakehouse-table-in-copy-activity) ++For a full list of sections and properties available for defining activities, see [Copy activity configurations](copy-activity-overview.md#configuration) and [Pipelines and activities](concepts-pipelines-activities.md). ++### Microsoft Fabric Lakehouse Files in Copy activity ++To use Microsoft Fabric Lakehouse Files dataset type as a source or sink in Copy activity, go to the following sections for the detailed configurations. ++#### Microsoft Fabric Lakehouse Files as a source type ++Microsoft Fabric Lakehouse connector supports the following file formats. Refer to each article for format-based settings. ++- [Avro format](format-avro.md) +- [Binary format](format-binary.md) +- [Delimited text format](format-delimited-text.md) +- [JSON format](format-json.md) +- [ORC format](format-orc.md) +- [Parquet format](format-parquet.md) ++You have several options to copy data from Microsoft Fabric Lakehouse using the Microsoft Fabric Lakehouse Files dataset: ++- Copy from the given path specified in the dataset. +- Wildcard filter against folder path or file name, see `wildcardFolderPath` and `wildcardFileName`. +- Copy the files defined in a given text file as file set, see `fileListPath`. ++The following properties are under `storeSettings` settings in format-based copy source when using Microsoft Fabric Lakehouse Files dataset: ++| Property | Description | Required | +| | | | +| type | The type property under `storeSettings` must be set to **LakehouseReadSettings**. | Yes | +| ***Locate the files to copy:*** | | | +| OPTION 1: static path<br> | Copy from the folder/file path specified in the dataset. If you want to copy all files from a folder, additionally specify `wildcardFileName` as `*`. | | +| OPTION 2: wildcard<br>- wildcardFolderPath | The folder path with wildcard characters to filter source folders. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual folder name has wildcard or this escape char inside. <br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | No | +| OPTION 2: wildcard<br>- wildcardFileName | The file name with wildcard characters under the given folderPath/wildcardFolderPath to filter source files. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual file name has wildcard or this escape char inside. 
See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes | +| OPTION 3: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset.<br/>When using this option, don't specify file name in dataset. See more examples in [File list examples](#file-list-examples). |No | +| ***Additional settings:*** | | | +| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. <br>Allowed values are **true** (default) and **false**.<br>This property doesn't apply when you configure `fileListPath`. |No | +| deleteFilesAfterCompletion | Indicates whether the binary files will be deleted from source store after successfully moving to the destination store. The file deletion is per file, so when copy activity fails, you'll see some files have already been copied to the destination and deleted from the source, while others remain on the source store. <br/>This property is only valid in binary files copy scenario. The default value: false. |No | +| modifiedDatetimeStart | Files filter based on the attribute: Last Modified. <br>The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to UTC time zone in the format of "2018-12-01T05:00:00Z". <br> The properties can be NULL, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has datetime value but `modifiedDatetimeEnd` is NULL, it means the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has datetime value but `modifiedDatetimeStart` is NULL, it means the files whose last modified attribute is less than the datetime value will be selected.<br/>This property doesn't apply when you configure `fileListPath`. | No | +| modifiedDatetimeEnd | Same as above. | No | +| enablePartitionDiscovery | For files that are partitioned, specify whether to parse the partitions from the file path and add them as additional source columns.<br/>Allowed values are **false** (default) and **true**. | No | +| partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.<br/><br/>If it isn't specified, by default,<br/>- When you use file path in dataset or list of files on source, partition root path is the path configured in dataset.<br/>- When you use wildcard folder filter, partition root path is the sub-path before the first wildcard.<br/><br/>For example, assuming you configure the path in dataset as "root/folder/year=2020/month=08/day=27":<br/>- If you specify partition root path as "root/folder/year=2020", copy activity will generate two more columns `month` and `day` with value "08" and "27" respectively, in addition to the columns inside the files.<br/>- If partition root path isn't specified, no extra column will be generated. | No | +| maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. 
Specify a value only when you want to limit concurrent connections.| No | ++**Example:** ++```json +"activities": [ + { + "name": "CopyFromLakehouseFiles", + "type": "Copy", + "inputs": [ + { + "referenceName": "<Delimited text input dataset name>", + "type": "DatasetReference" + } + ], + "outputs": [ + { + "referenceName": "<output dataset name>", + "type": "DatasetReference" + } + ], + "typeProperties": { + "source": { + "type": "DelimitedTextSource", + "storeSettings": { + "type": "LakehouseReadSettings", + "recursive": true, + "enablePartitionDiscovery": false + }, + "formatSettings": { + "type": "DelimitedTextReadSettings" + } + }, + "sink": { + "type": "<sink type>" + } + } + } +] +``` +++#### Microsoft Fabric Lakehouse Files as a sink type ++Microsoft Fabric Lakehouse connector supports the following file formats. Refer to each article for format-based settings. ++- [Avro format](format-avro.md) +- [Binary format](format-binary.md) +- [Delimited text format](format-delimited-text.md) +- [JSON format](format-json.md) +- [ORC format](format-orc.md) +- [Parquet format](format-parquet.md) ++The following properties are under `storeSettings` settings in format-based copy sink when using Microsoft Fabric Lakehouse Files dataset: ++| Property | Description | Required | +| | | -- | +| type | The type property under `storeSettings` must be set to **LakehouseWriteSettings**. | Yes | +| copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. <br/><b>- MergeFiles</b>: Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No | +| blockSizeInMB | Specify the block size in MB used to write data to Microsoft Fabric Lakehouse. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is **between 4 MB and 100 MB**. <br/>By default, ADF automatically determines the block size based on your source store type and data. For non-binary copy into Microsoft Fabric Lakehouse, the default block size is 100 MB so as to fit in at most approximately 4.75-TB data. It might not be optimal when your data isn't large, especially when you use Self-hosted Integration Runtime with a poor network, resulting in operation timeouts or performance issues. You can explicitly specify a block size, while ensuring blockSizeInMB*50000 is big enough to store the data; otherwise, the copy activity run will fail. | No | +| maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | +| metadata |Set custom metadata when copy to sink. Each object under the `metadata` array represents an extra column. The `name` defines the metadata key name, and the `value` indicates the data value of that key. 
If [preserve attributes feature](./copy-activity-preserve-metadata.md#preserve-metadata) is used, the specified metadata will union/overwrite with the source file metadata.<br/><br/>Allowed data values are:<br/>- `$$LASTMODIFIED`: a reserved variable that indicates to store the source files' last modified time. Applies to file-based sources with binary format only.<br/>- Expression<br/>- Static value| No | ++**Example:** ++```json +"activities": [ + { + "name": "CopyToLakehouseFiles", + "type": "Copy", + "inputs": [ + { + "referenceName": "<input dataset name>", + "type": "DatasetReference" + } + ], + "outputs": [ + { + "referenceName": "<Parquet output dataset name>", + "type": "DatasetReference" + } + ], + "typeProperties": { + "source": { + "type": "<source type>" + }, + "sink": { + "type": "ParquetSink", + "storeSettings": { + "type": "LakehouseWriteSettings", + "copyBehavior": "PreserveHierarchy", + "metadata": [ + { + "name": "testKey1", + "value": "value1" + }, + { + "name": "testKey2", + "value": "value2" + } + ] + }, + "formatSettings": { + "type": "ParquetWriteSettings" + } + } + } + } +] +``` ++#### Folder and file filter examples ++This section describes the resulting behavior of the folder path and file name with wildcard filters. ++| folderPath | fileName | recursive | Source folder structure and filter result (files in **bold** are retrieved)| +|: |: |: |: | +| `Folder*` | (Empty, use default) | false | FolderA<br/> **File1.csv**<br/> **File2.json**<br/> Subfolder1<br/> File3.csv<br/> File4.json<br/> File5.csv<br/>AnotherFolderB<br/> File6.csv | +| `Folder*` | (Empty, use default) | true | FolderA<br/> **File1.csv**<br/> **File2.json**<br/> Subfolder1<br/> **File3.csv**<br/> **File4.json**<br/> **File5.csv**<br/>AnotherFolderB<br/> File6.csv | +| `Folder*` | `*.csv` | false | FolderA<br/> **File1.csv**<br/> File2.json<br/> Subfolder1<br/> File3.csv<br/> File4.json<br/> File5.csv<br/>AnotherFolderB<br/> File6.csv | +| `Folder*` | `*.csv` | true | FolderA<br/> **File1.csv**<br/> File2.json<br/> Subfolder1<br/> **File3.csv**<br/> File4.json<br/> **File5.csv**<br/>AnotherFolderB<br/> File6.csv | ++#### File list examples ++This section describes the resulting behavior of using file list path in copy activity source. ++Assuming you have the following source folder structure and want to copy the files in bold: ++| Sample source structure | Content in FileListToCopy.txt | ADF configuration | +| | | | +| filesystem<br/> FolderA<br/> **File1.csv**<br/> File2.json<br/> Subfolder1<br/> **File3.csv**<br/> File4.json<br/> **File5.csv**<br/> Metadata<br/> FileListToCopy.txt | File1.csv<br>Subfolder1/File3.csv<br>Subfolder1/File5.csv | **In dataset:**<br>- File system: `filesystem`<br>- Folder path: `FolderA`<br><br>**In copy activity source:**<br>- File list path: `filesystem/Metadata/FileListToCopy.txt` <br><br>The file list path points to a text file in the same data store that includes a list of files you want to copy, one file per line with the relative path to the path configured in the dataset. | +++#### Some recursive and copyBehavior examples ++This section describes the resulting behavior of the copy operation for different combinations of recursive and copyBehavior values. 
++| recursive | copyBehavior | Source folder structure | Resulting target | +|: |: |: |: | +| true |preserveHierarchy | Folder1<br/> File1<br/> File2<br/> Subfolder1<br/> File3<br/> File4<br/> File5 | The target Folder1 is created with the same structure as the source:<br/><br/>Folder1<br/> File1<br/> File2<br/> Subfolder1<br/> File3<br/> File4<br/> File5 | +| true |flattenHierarchy | Folder1<br/> File1<br/> File2<br/> Subfolder1<br/> File3<br/> File4<br/> File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/> autogenerated name for File1<br/> autogenerated name for File2<br/> autogenerated name for File3<br/> autogenerated name for File4<br/> autogenerated name for File5 | +| true |mergeFiles | Folder1<br/> File1<br/> File2<br/> Subfolder1<br/> File3<br/> File4<br/> File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/> File1 + File2 + File3 + File4 + File5 contents are merged into one file with an autogenerated file name. | +| false |preserveHierarchy | Folder1<br/> File1<br/> File2<br/> Subfolder1<br/> File3<br/> File4<br/> File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/> File1<br/> File2<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. | +| false |flattenHierarchy | Folder1<br/> File1<br/> File2<br/> Subfolder1<br/> File3<br/> File4<br/> File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/> autogenerated name for File1<br/> autogenerated name for File2<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. | +| false |mergeFiles | Folder1<br/> File1<br/> File2<br/> Subfolder1<br/> File3<br/> File4<br/> File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/> File1 + File2 contents are merged into one file with an autogenerated file name.<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. | +++### Microsoft Fabric Lakehouse Table in Copy activity ++To use Microsoft Fabric Lakehouse Table dataset as a source or sink dataset in Copy activity, go to the following sections for the detailed configurations. ++#### Microsoft Fabric Lakehouse Table as a source type ++To copy data from Microsoft Fabric Lakehouse using Microsoft Fabric Lakehouse Table dataset, set the **type** property in the Copy activity source to **LakehouseTableSource**. The following properties are supported in the Copy activity **source** section: ++| Property | Description | Required | +| : | :-- | :- | +| type | The **type** property of the Copy Activity source must be set to **LakehouseTableSource**. | Yes | +| timestampAsOf | The timestamp to query an older snapshot. | No | +| versionAsOf | The version to query an older snapshot. 
| No | ++**Example:** ++```json +"activities":[ + { + "name": "CopyFromLakehouseTable", + "type": "Copy", + "inputs": [ + { + "referenceName": "<Microsoft Fabric Lakehouse Table input dataset name>", + "type": "DatasetReference" + } + ], + "outputs": [ + { + "referenceName": "<output dataset name>", + "type": "DatasetReference" + } + ], + "typeProperties": { + "source": { + "type": "LakehouseTableSource", + "timestampAsOf": "2023-09-23T00:00:00.000Z", + "versionAsOf": 2 + }, + "sink": { + "type": "<sink type>" + } + } + } +] +``` ++#### Microsoft Fabric Lakehouse Table as a sink type ++To copy data to Microsoft Fabric Lakehouse using the Microsoft Fabric Lakehouse Table dataset, set the **type** property in the Copy activity sink to **LakehouseTableSink**. The following properties are supported in the Copy activity **sink** section: ++| Property | Description | Required | +| : | :-- | :- | +| type | The **type** property of the Copy activity sink must be set to **LakehouseTableSink**. | Yes | ++**Example:** ++```json +"activities":[ + { + "name": "CopyToLakehouseTable", + "type": "Copy", + "inputs": [ + { + "referenceName": "<input dataset name>", + "type": "DatasetReference" + } + ], + "outputs": [ + { + "referenceName": "<Microsoft Fabric Lakehouse Table output dataset name>", + "type": "DatasetReference" + } + ], + "typeProperties": { + "source": { + "type": "<source type>" + }, + "sink": { + "type": "LakehouseTableSink", + "tableActionOption": "Append" + } + } + } +] +``` ++## Mapping data flow properties ++When transforming data in mapping data flow, you can read from and write to files or tables in Microsoft Fabric Lakehouse. See the corresponding sections for details. ++- [Microsoft Fabric Lakehouse Files in mapping data flow](#microsoft-fabric-lakehouse-files-in-mapping-data-flow) +- [Microsoft Fabric Lakehouse Table in mapping data flow](#microsoft-fabric-lakehouse-table-in-mapping-data-flow) ++For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows. ++### Microsoft Fabric Lakehouse Files in mapping data flow ++To use the Microsoft Fabric Lakehouse Files dataset as a source or sink dataset in mapping data flow, go to the following sections for the detailed configurations. ++#### Microsoft Fabric Lakehouse Files as a source type ++The Microsoft Fabric Lakehouse connector supports the following file formats. Refer to each article for format-based settings. ++- [Avro format](format-avro.md) +- [Delimited text format](format-delimited-text.md) +- [JSON format](format-json.md) +- [ORC format](format-orc.md) +- [Parquet format](format-parquet.md) ++#### Microsoft Fabric Lakehouse Files as a sink type ++The Microsoft Fabric Lakehouse connector supports the following file formats. Refer to each article for format-based settings. ++- [Avro format](format-avro.md) +- [Delimited text format](format-delimited-text.md) +- [JSON format](format-json.md) +- [ORC format](format-orc.md) +- [Parquet format](format-parquet.md) ++### Microsoft Fabric Lakehouse Table in mapping data flow ++To use the Microsoft Fabric Lakehouse Table dataset as a source or sink dataset in mapping data flow, go to the following sections for the detailed configurations. ++#### Microsoft Fabric Lakehouse Table as a source type ++There are no configurable properties under source options.
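++Because the source exposes no options, the data flow script for a Lakehouse table source stays minimal. The following is a sketch for illustration only, assuming the default schema-drift settings used in the sink example later in this article and a hypothetical output stream name `LakehouseCustomerSource`; the script that the designer generates can differ: ++``` +source(allowSchemaDrift: true, + validateSchema: false) ~> LakehouseCustomerSource +```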
++#### Microsoft Fabric Lakehouse Table as a sink type ++The following properties are supported in the mapping data flow **sink** section: ++| Name | Description | Required | Allowed values | Data flow script property | +| - | -- | -- | -- | - | +| Update method | When you select "Allow insert" alone, or when you write to a new delta table, the target receives all incoming rows regardless of the row policies set. If your data contains rows with other row policies, they need to be excluded using a preceding Filter transform. <br><br> When all update methods are selected, a merge is performed, where rows are inserted/deleted/upserted/updated according to the row policies set using a preceding Alter Row transform. | yes | `true` or `false` | insertable <br> deletable <br> upsertable <br> updateable | +| Optimized Write | Achieve higher throughput for the write operation by optimizing the internal shuffle in Spark executors. As a result, you might notice fewer partitions and larger files. | no | `true` or `false` | optimizedWrite: true | +| Auto Compact | After any write operation completes, Spark automatically executes the `OPTIMIZE` command to reorganize the data, resulting in more partitions if necessary, for better read performance in the future. | no | `true` or `false` | autoCompact: true | +| Merge Schema | The merge schema option allows schema evolution, that is, any columns that are present in the current incoming stream but not in the target Delta table are automatically added to its schema. This option is supported across all update methods. | no | `true` or `false` | mergeSchema: true | ++**Example: Microsoft Fabric Lakehouse Table sink** ++``` +sink(allowSchemaDrift: true, + validateSchema: false, + input( + CustomerID as string, + NameStyle as string, + Title as string, + FirstName as string, + MiddleName as string, + LastName as string, + Suffix as string, + CompanyName as string, + SalesPerson as string, + EmailAddress as string, + Phone as string, + PasswordHash as string, + PasswordSalt as string, + rowguid as string, + ModifiedDate as string + ), + deletable:false, + insertable:true, + updateable:false, + upsertable:false, + optimizedWrite: true, + mergeSchema: true, + autoCompact: true, + skipDuplicateMapInputs: true, + skipDuplicateMapOutputs: true) ~> CustomerTable ++``` ++## Next steps ++For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats). |
data-factory | Connector Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md | |
data-factory | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md | |
data-lake-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
data-lake-store | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
databox-online | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md | Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
databox | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md | Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
ddos-protection | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md | |
ddos-protection | Test Through Simulations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/test-through-simulations.md | Simulations help you: ## Azure DDoS simulation testing policy -You may only simulate attacks using our approved testing partners: +You can only simulate attacks using our approved testing partners: - [BreakingPoint Cloud](https://www.ixiacom.com/products/breakingpoint-cloud): a self-service traffic generator where your customers can generate traffic against DDoS Protection-enabled public endpoints for simulations. - [Red Button](https://www.red-button.net/): work with a dedicated team of experts to simulate real-world DDoS attack scenarios in a controlled environment. - [RedWolf](https://www.redwolfsecurity.com/services/#cloud-ddos) a self-service or guided DDoS testing provider with real-time control. For this tutorial, you'll create a test environment that includes: - A virtual network - An Azure Bastion host - A load balancer -- Two virtual machines. +- Two virtual machines You'll then configure diagnostic logs and alerts to monitor for attacks and traffic patterns. Finally, you'll configure a DDoS attack simulation using one of our approved testing partners. You'll then configure diagnostic logs and alerts to monitor for attacks and traf - An Azure account with an active subscription. - In order to use diagnostic logging, you must first create a [Log Analytics workspace with diagnostic settings enabled](ddos-configure-log-analytics-workspace.md).+- For this tutorial you'll need to deploy a Load Balancer, a public IP address, Bastion, and two virtual machines. For more information, see [Deploy Load Balancer with DDoS Protection](../load-balancer/tutorial-protect-load-balancer-ddos.md). You can skip the NAT Gateway step in the Deploy Load Balancer with DDoS Protection tutorial. -## Prepare test environment -### Create a DDoS protection plan --1. Select **Create a resource** in the upper left corner of the Azure portal. -1. Search the term *DDoS*. When **DDoS protection plan** appears in the search results, select it. -1. Select **Create**. -1. Enter or select the following values. -- :::image type="content" source="./media/ddos-attack-simulation/create-ddos-plan.png" alt-text="Screenshot of creating a DDoS protection plan."::: -- |Setting |Value | - | | | - |Subscription | Select your subscription. | - |Resource group | Select **Create new** and enter **MyResourceGroup**.| - |Name | Enter **MyDDoSProtectionPlan**. | - |Region | Enter **East US**. | --1. Select **Review + create** then **Create** --### Create the virtual network --In this section, you'll create a virtual network, subnet, Azure Bastion host, and associate the DDoS Protection plan. The virtual network and subnet contains the load balancer and virtual machines. The bastion host is used to securely manage the virtual machines and install IIS to test the load balancer. The DDoS Protection plan will protect all public IP resources in the virtual network. --> [!IMPORTANT] -> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] -> --1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results. --1. In **Virtual networks**, select **+ Create**. --1. In **Create virtual network**, enter or select the following information in the **Basics** tab: -- | **Setting** | **Value** | - ||| - | **Project Details** | | - | Subscription | Select your Azure subscription. 
| - | Resource Group | Select **MyResourceGroup** | - | **Instance details** | | - | Name | Enter **myVNet** | - | Region | Select **East US** | --1. Select the **Security** tab. --1. Under **BastionHost**, select **Enable**. Enter this information: -- | Setting | Value | - |--|-| - | Bastion name | Enter **myBastionHost** | - | Azure Bastion Public IP Address | Select **myvent-bastion-publicIpAddress**. Select **OK**. | --1. Under **DDoS Network Protection**, select **Enable**. Then from the drop-down menu, select **MyDDoSProtectionPlan**. -- :::image type="content" source="./media/ddos-attack-simulation/enable-ddos.png" alt-text="Screenshot of enabling DDoS during virtual network creation."::: --1. Select the **IP Addresses** tab or select **Next: IP Addresses** at the bottom of the page. --1. In the **IP Addresses** tab, enter this information: -- | Setting | Value | - |--|-| - | IPv4 address space | Enter **10.1.0.0/16** | --1. Under **Subnets**, select the word **default**. If a subnet isn't present, select **+ Add subnet**. --1. In **Edit subnet**, enter this information, then select **Save**: -- | Setting | Value | - |--|-| - | Name | Enter **myBackendSubnet** | - | Starting Address | Enter **10.1.0.0/24** | --1. Under **Subnets**, select **AzureBastionSubnet**. In **Edit subnet**, enter this information,then select **Save**: -- | Setting | Value | - |--|-| - | Starting Address | Enter **10.1.1.0/26** | --1. Select the **Review + create** tab or select the **Review + create** button, then select **Create**. - - > [!NOTE] - > The virtual network and subnet are created immediately. The Bastion host creation is submitted as a job and will complete within 10 minutes. You can proceed to the next steps while the Bastion host is created. --### Create load balancer --In this section, you'll create a zone redundant load balancer that load balances virtual machines. With zone-redundancy, one or more availability zones can fail and the data path survives as long as one zone in the region remains healthy. --During the creation of the load balancer, you'll configure: --* Frontend IP address -* Backend pool -* Inbound load-balancing rules -* Health probe --1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results. In the **Load balancer** page, select **+ Create**. --1. In the **Basics** tab of the **Create load balancer** page, enter or select the following information: -- | Setting | Value | - | | | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **MyResourceGroup**. | - | **Instance details** | | - | Name | Enter **myLoadBalancer** | - | Region | Select **East US**. | - | SKU | Leave the default **Standard**. | - | Type | Select **Public**. | - | Tier | Leave the default **Regional**. | -- :::image type="content" source="./media/ddos-attack-simulation/create-standard-load-balancer.png" alt-text="Screenshot of create standard load balancer basics tab." border="true"::: --1. Select **Next: Frontend IP configuration** at the bottom of the page. --1. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**, then enter the following information. Leave the rest of the defaults and select **Add**. -- | Setting | Value | - | --| -- | - | **Name** | Enter **myFrontend**. | - | **IP Type** | Select *Create new*. In *Add a public IP address*, enter **myPublicIP** for Name | - | **Availability zone** | Select **Zone-redundant**. 
| -- > [!NOTE] - > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md). ---1. Select **Next: Backend pools** at the bottom of the page. --1. In the **Backend pools** tab, select **+ Add a backend pool**, then enter the following information. Leave the rest of the defaults and select **Save**. -- | Setting | Value | - | --| -- | - | **Name** | Enter **myBackendPool**. | - | **Backend Pool Configuration** | Select **IP Address**. | - --1. Select **Save**, then select **Next: Inbound rules** at the bottom of the page. --1. Under **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**. --1. In **Add load balancing rule**, enter or select the following information: -- | Setting | Value | - | - | -- | - | Name | Enter **myHTTPRule** | - | IP Version | Select **IPv4** or **IPv6** depending on your requirements. | - | Frontend IP address | Select **myFrontend (To be created)**. | - | Backend pool | Select **myBackendPool**. | - | Protocol | Select **TCP**. | - | Port | Enter **80**. | - | Backend port | Enter **80**. | - | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. | - | Session persistence | Select **None**. | - | Idle timeout (minutes) | Enter or select **15**. | - | TCP reset | Select the *Enabled* radio. | - | Floating IP | Select the *Disabled* radio. | - | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** | --1. Select **Save**. --1. Select the blue **Review + create** button at the bottom of the page. --1. Select **Create**. --### Create virtual machines --In this section, you'll create two virtual machines that will be load balanced by the load balancer. You'll also install IIS on the virtual machines to test the load balancer. --1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. In the **Virtual machines** page, select **+ Create**. --1. In **Create a virtual machine**, enter or select the following values in the **Basics** tab: -- | Setting | Value | - |--|-| - | **Project Details** | | - | Subscription | Select your Azure subscription | - | Resource Group | Select **MyResourceGroup** | - | **Instance details** | | - | Virtual machine name | Enter **myVM1** | - | Region | Select **((US) East US)** | - | Availability Options | Select **Availability zones** | - | Availability zone | Select **Zone 1** | - | Security type | Select **Standard**. | - | Image | Select **Windows Server 2022 Datacenter: Azure Edition - Gen2** | - | Azure Spot instance | Leave the default of unchecked. | - | Size | Choose VM size or take default setting | - | **Administrator account** | | - | Username | Enter a username | - | Password | Enter a password | - | Confirm password | Reenter password | - | **Inbound port rules** | | - | Public inbound ports | Select **None** | --1. 
Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**. - -1. In the Networking tab, select or enter the following information: -- | Setting | Value | - | - | -- | - | **Network interface** | | - | Virtual network | Select **myVNet** | - | Subnet | Select **myBackendSubnet** | - | Public IP | Select **None**. | - | NIC network security group | Select **Advanced** | - | Configure network security group | Skip this setting until the rest of the settings are completed. Complete after **Select a backend pool**.| - | Delete NIC when VM is deleted | Leave the default of **unselected**. | - | Accelerated networking | Leave the default of **selected**. | - | **Load balancing** | - | **Load balancing options** | - | Load-balancing options | Select **Azure load balancer** | - | Select a load balancer | Select **myLoadBalancer** | - | Select a backend pool | Select **myBackendPool** | - | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** | - -1. Select **Review + create**. - -1. Review the settings, and then select **Create**. --1. Follow the steps 1 through 7 to create another VM with the following values and all the other settings the same as **myVM1**: -- | Setting | VM 2 - | - | -- | - | Name | **myVM2** | - | Availability zone | **Zone 2** | - | Network security group | Select the existing **myNSG** | ---### Install IIS --1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. --1. Select **myVM1**. --1. On the **Overview** page, select **Connect**, then **Bastion**. --1. Enter the username and password entered during VM creation. --1. Select **Connect**. --1. On the server desktop, navigate to **Start** > **Windows PowerShell** > **Windows PowerShell**. --1. In the PowerShell Window, run the following commands to: -- * Install the IIS server - * Remove the default iisstart.htm file - * Add a new iisstart.htm file that displays the name of the VM: -- ```powershell - # Install IIS server role - Install-WindowsFeature -name Web-Server -IncludeManagementTools - - # Remove default htm file - Remove-Item C:\inetpub\wwwroot\iisstart.htm - - # Add a new htm file that displays server name - Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername) - - ``` --1. Close the Bastion session with **myVM1**. --1. Repeat steps 1 to 8 to install IIS and the updated iisstart.htm file on **myVM2**. - ## Configure DDoS Protection metrics and alerts -Now we'll configure metrics and alerts to monitor for attacks and traffic patterns. +In this tutorial, we'll configure DDoS Protection metrics and alerts to monitor for attacks and traffic patterns. ### Configure diagnostic logs BreakingPoint Cloud offers: - Predefined DDoS test sizing and test duration profiles enable safer validations by eliminating the potential of configuration errors. > [!NOTE]-> For BreakingPoint Cloud, you must first [create a BreakingPoint Cloud account](https://www.ixiacom.com/products/breakingpoint-cloud). +> For BreakingPoint Cloud, you must first [create a BreakingPoint Cloud account](https://www.ixiacom.com/products/breakingpoint-cloud). 
Example attack values: > - For a video demonstration of utilizing BreakingPoint Cloud, see [DDoS Attack Simulation](https://www.youtube.com/watch?v=xFJS7RnX-Sw). - ### Red Button Red Button's [DDoS Testing](https://www.red-button.net/ddos-testing/) service suite includes three stages: |
defender-for-cloud | Alerts Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md | Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in [Further details and notes](defender-for-servers-introduction.md) -| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | -||--|:-:|| -| **A logon from a malicious IP has been detected. [seen multiple times]** | A successful remote authentication for the account [account] and process [process] occurred, however the logon IP address (x.x.x.x) has previously been reported as malicious or highly unusual. A successful attack has probably occurred. Files with the .scr extensions are screen saver files and are normally reside and execute from the Windows system directory. | - | High | -| **Addition of Guest account to Local Administrators group** | Analysis of host data has detected the addition of the built-in Guest account to the Local Administrators group on %{Compromised Host}, which is strongly associated with attacker activity. | - | Medium | -| **An event log was cleared** | Machine logs indicate a suspicious event log clearing operation by user: '%{user name}' in Machine: '%{CompromisedEntity}'. The %{log channel} log was cleared. | - | Informational | -| **Antimalware Action Failed** | Microsoft Antimalware has encountered an error when taking an action on malware or other potentially unwanted software. | - | Medium | -| **Antimalware Action Taken** | Microsoft Antimalware for Azure has taken an action to protect this machine from malware or other potentially unwanted software. | - | Medium | -| **Antimalware broad files exclusion in your virtual machine**<br>(VM_AmBroadFilesExclusion) | Files exclusion from antimalware extension with broad exclusion rule was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Such exclusion practically disabling the Antimalware protection.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | - | Medium | -| **Antimalware disabled and code execution in your virtual machine**<br>(VM_AmDisablementAndCodeExecution) | Antimalware disabled at the same time as code execution on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers disable antimalware scanners to prevent detection while running unauthorized tools or infecting the machine with malware. | - | High | -| **Antimalware disabled in your virtual machine**<br>(VM_AmDisablement) | Antimalware disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | Defense Evasion | Medium | -| **Antimalware file exclusion and code execution in your virtual machine**<br>(VM_AmFileExclusionAndCodeExecution) | File excluded from your antimalware scanner at the same time as code was executed via a custom script extension on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. 
| Defense Evasion, Execution | High | -| **Antimalware file exclusion and code execution in your virtual machine**<br>(VM_AmTempFileExclusionAndCodeExecution) | Temporary file exclusion from antimalware extension in parallel to execution of code via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion, Execution | High | -| **Antimalware file exclusion in your virtual machine**<br>(VM_AmTempFileExclusion) | File excluded from your antimalware scanner on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | Defense Evasion | Medium | -| **Antimalware real-time protection was disabled in your virtual machine**<br>(VM_AmRealtimeProtectionDisabled) | Real-time protection disablement of the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | -| **Antimalware real-time protection was disabled temporarily in your virtual machine**<br>(VM_AmTempRealtimeProtectionDisablement) | Real-time protection temporary disablement of the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | -| **Antimalware real-time protection was disabled temporarily while code was executed in your virtual machine**<br>(VM_AmRealtimeProtectionDisablementAndCodeExec) | Real-time protection temporary disablement of the antimalware extension in parallel to code execution via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | - | High | -| **Antimalware scans blocked for files potentially related to malware campaigns on your virtual machine (Preview)**<br>(VM_AmMalwareCampaignRelatedExclusion) | An exclusion rule was detected in your virtual machine to prevent your antimalware extension scanning certain files that are suspected of being related to a malware campaign. The rule was detected by analyzing the Azure Resource Manager operations in your subscription. Attackers might exclude files from antimalware scans to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | -| **Antimalware temporarily disabled in your virtual machine**<br>(VM_AmTemporarilyDisablement) | Antimalware temporarily disabled in your virtual machine. 
This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | - | Medium | -| **Antimalware unusual file exclusion in your virtual machine**<br>(VM_UnusualAmFileExclusion) | Unusual file exclusion from antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | -| **Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with suspicious domain was detected by analyzing DNS transactions from your resource and comparing against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access, Persistence, Execution, Command And Control, Exploitation | Medium | -| **Detected actions indicative of disabling and deleting IIS log files** | Analysis of host data detected actions that show IIS log files being disabled and/or deleted. | - | Medium | -| **Detected anomalous mix of upper and lower case characters in command-line** | Analysis of host data on %{Compromised Host} detected a command line with anomalous mix of upper and lower case characters. This kind of pattern, while possibly benign, is also typical of attackers trying to hide from case-sensitive or hash-based rule matching when performing administrative tasks on a compromised host. | - | Medium | -| **Detected change to a registry key that can be abused to bypass UAC** | Analysis of host data on %{Compromised Host} detected that a registry key that can be abused to bypass UAC (User Account Control) was changed. This kind of configuration, while possibly benign, is also typical of attacker activity when trying to move from unprivileged (standard user) to privileged (for example administrator) access on a compromised host. | - | Medium | -| **Detected decoding of an executable using built-in certutil.exe tool** | Analysis of host data on %{Compromised Host} detected that certutil.exe, a built-in administrator utility, was being used to decode an executable instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using a tool such as certutil.exe to decode a malicious executable that will then be subsequently executed. | - | High | -| **Detected enabling of the WDigest UseLogonCredential registry key** | Analysis of host data detected a change in the registry key HKLM\SYSTEM\ CurrentControlSet\Control\SecurityProviders\WDigest\ "UseLogonCredential". Specifically this key has been updated to allow logon credentials to be stored in clear text in LSA memory. Once enabled, an attacker can dump clear text passwords from LSA memory with credential harvesting tools such as Mimikatz. | - | Medium | -| **Detected encoded executable in command line data** | Analysis of host data on %{Compromised Host} detected a base-64 encoded executable. 
This has previously been associated with attackers attempting to construct executables on-the-fly through a sequence of commands, and attempting to evade intrusion detection systems by ensuring that no individual command would trigger an alert. This could be legitimate activity, or an indication of a compromised host. | - | High | -| **Detected obfuscated command line** | Attackers use increasingly complex obfuscation techniques to evade detections that run against the underlying data. Analysis of host data on %{Compromised Host} detected suspicious indicators of obfuscation on the commandline. | - | Informational | -| **Detected Petya ransomware indicators** | Analysis of host data on %{Compromised Host} detected indicators associated with Petya ransomware. See <https://aka.ms/petya-blog> for more information. Review the command line associated in this alert and escalate this alert to your security team. | - | High | -| **Detected possible execution of keygen executable** | Analysis of host data on %{Compromised Host} detected execution of a process whose name is indicative of a keygen tool; such tools are typically used to defeat software licensing mechanisms but their download is often bundled with other malicious software. Activity group GOLD has been known to make use of such keygens to covertly gain back door access to hosts that they compromise. | - | Medium | -| **Detected possible execution of malware dropper** | Analysis of host data on %{Compromised Host} detected a filename that has previously been associated with one of activity group GOLD's methods of installing malware on a victim host. | - | High | -| **Detected possible local reconnaissance activity** | Analysis of host data on %{Compromised Host} detected a combination of systeminfo commands that has previously been associated with one of activity group GOLD's methods of performing reconnaissance activity. While 'systeminfo.exe' is a legitimate Windows tool, executing it twice in succession in the way that has occurred here is rare. | - | | -| **Detected potentially suspicious use of Telegram tool** | Analysis of host data shows installation of Telegram, a free cloud-based instant messaging service that exists both for mobile and desktop system. Attackers are known to abuse this service to transfer malicious binaries to any other computer, phone, or tablet. | - | Medium | -| **Detected suppression of legal notice displayed to users at logon** | Analysis of host data on %{Compromised Host} detected changes to the registry key that controls whether a legal notice is displayed to users when they log on. Microsoft security analysis has determined that this is a common activity undertaken by attackers after having compromised a host. | - | Low | -| **Detected suspicious combination of HTA and PowerShell** | mshta.exe (Microsoft HTML Application Host) which is a signed Microsoft binary is being used by the attackers to launch malicious PowerShell commands. Attackers often resort to having an HTA file with inline VBScript. When a victim browses to the HTA file and chooses to run it, the PowerShell commands and scripts that it contains are executed. Analysis of host data on %{Compromised Host} detected mshta.exe launching PowerShell commands. | - | Medium | -| **Detected suspicious commandline arguments** | Analysis of host data on %{Compromised Host} detected suspicious commandline arguments that have been used in conjunction with a reverse shell used by activity group HYDROGEN. 
| - | High | -| **Detected suspicious commandline used to start all executables in a directory** | Analysis of host data has detected a suspicious process running on %{Compromised Host}. The commandline indicates an attempt to start all executables (*.exe) that may reside in a directory. This could be an indication of a compromised host. | - | Medium | -| **Detected suspicious credentials in commandline** | Analysis of host data on %{Compromised Host} detected a suspicious password being used to execute a file by activity group BORON. This activity group has been known to use this password to execute Pirpi malware on a victim host. | - | High | -| **Detected suspicious document credentials** | Analysis of host data on %{Compromised Host} detected a suspicious, common precomputed password hash used by malware being used to execute a file. Activity group HYDROGEN has been known to use this password to execute malware on a victim host. | - | High | -| **Detected suspicious execution of VBScript.Encode command** | Analysis of host data on %{Compromised Host} detected the execution of VBScript.Encode command. This encodes the scripts into unreadable text, making it more difficult for users to examine the code. Microsoft threat research shows that attackers often use encoded VBscript files as part of their attack to evade detection systems. This could be legitimate activity, or an indication of a compromised host. | - | Medium | -| **Detected suspicious execution via rundll32.exe** | Analysis of host data on %{Compromised Host} detected rundll32.exe being used to execute a process with an uncommon name, consistent with the process naming scheme previously seen used by activity group GOLD when installing their first stage implant on a compromised host. | - | High | -| **Detected suspicious file cleanup commands** | Analysis of host data on %{Compromised Host} detected a combination of systeminfo commands that has previously been associated with one of activity group GOLD's methods of performing post-compromise self-cleanup activity. While 'systeminfo.exe' is a legitimate Windows tool, executing it twice in succession, followed by a delete command in the way that has occurred here is rare. | - | High | -| **Detected suspicious file creation** | Analysis of host data on %{Compromised Host} detected creation or execution of a process that has previously indicated post-compromise action taken on a victim host by activity group BARIUM. This activity group has been known to use this technique to download more malware to a compromised host after an attachment in a phishing doc has been opened. | - | High | -| **Detected suspicious named pipe communications** | Analysis of host data on %{Compromised Host} detected data being written to a local named pipe from a Windows console command. Named pipes are known to be a channel used by attackers to task and communicate with a malicious implant. This could be legitimate activity, or an indication of a compromised host. | - | High | -| **Detected suspicious network activity** | Analysis of network traffic from %{Compromised Host} detected suspicious network activity. Such traffic, while possibly benign, is typically used by an attacker to communicate with malicious servers for downloading of tools, command-and-control and exfiltration of data. Typical related attacker activity includes copying remote administration tools to a compromised host and exfiltrating user data from it. 
| - | Low | -| **Detected suspicious new firewall rule** | Analysis of host data detected that a new firewall rule was added via netsh.exe to allow traffic from an executable in a suspicious location. | - | Medium | -| **Detected suspicious use of Cacls to lower the security state of the system** | Attackers use myriad ways, like brute force and spear phishing, to achieve initial compromise and get a foothold on the network. Once initial compromise is achieved, they often take steps to lower the security settings of a system. Cacls, short for change access control list, is a native Microsoft Windows command-line utility often used for modifying the security permissions on folders and files. Attackers often use the binary to lower the security settings of a system. This is done by giving Everyone full access to some of the system binaries, like ftp.exe, net.exe, and wscript.exe. Analysis of host data on %{Compromised Host} detected suspicious use of Cacls to lower the security of a system. | - | Medium | -| **Detected suspicious use of FTP -s Switch** | Analysis of process creation data from the %{Compromised Host} detected the use of the FTP "-s:filename" switch. This switch is used to specify an FTP script file for the client to run. Malware or malicious processes are known to use this FTP switch (-s:filename) to point to a script file, which is configured to connect to a remote FTP server and download more malicious binaries. | - | Medium | -| **Detected suspicious use of Pcalua.exe to launch executable code** | Analysis of host data on %{Compromised Host} detected the use of pcalua.exe to launch executable code. Pcalua.exe is a component of the Microsoft Windows "Program Compatibility Assistant", which detects compatibility issues during the installation or execution of a program. Attackers are known to abuse functionality of legitimate Windows system tools to perform malicious actions, for example using pcalua.exe with the -a switch to launch malicious executables either locally or from remote shares. | - | Medium | -| **Detected the disabling of critical services** | The analysis of host data on %{Compromised Host} detected execution of a "net.exe stop" command being used to stop critical services like SharedAccess or the Windows Security app. The stopping of either of these services can be an indication of malicious behavior. | - | Medium | -| **Digital currency mining related behavior detected** | Analysis of host data on %{Compromised Host} detected the execution of a process or command normally associated with digital currency mining. | - | High | -| **Dynamic PS script construction** | Analysis of host data on %{Compromised Host} detected a PowerShell script being constructed dynamically. Attackers sometimes use this approach of progressively building up a script in order to evade IDS systems. This could be legitimate activity, or an indication that one of your machines has been compromised. | - | Medium | -| **Executable found running from a suspicious location** | Analysis of host data detected an executable file on %{Compromised Host} that is running from a location in common with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host. | - | High | -| **Fileless attack behavior detected**<br>(VM_FilelessAttackBehavior.Windows) | The memory of the process specified contains behaviors commonly used by fileless attacks.
Specific behaviors include:<br>1) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>2) Active network connections. See NetworkConnections below for details.<br>3) Function calls to security sensitive operating system interfaces. See Capabilities below for referenced OS capabilities.<br>4) Contains a thread that was started in a dynamically allocated code segment. This is a common pattern for process injection attacks. | Defense Evasion | Low | -| **Fileless attack technique detected**<br>(VM_FilelessAttackTechnique.Windows) | The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software. Specific behaviors include:<br>1) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>2) Executable image injected into the process, such as in a code injection attack.<br>3) Active network connections. See NetworkConnections below for details.<br>4) Function calls to security sensitive operating system interfaces. See Capabilities below for referenced OS capabilities.<br>5) Process hollowing, which is a technique used by malware in which a legitimate process is loaded on the system to act as a container for hostile code.<br>6) Contains a thread that was started in a dynamically allocated code segment. This is a common pattern for process injection attacks. | Defense Evasion, Execution | High | -| **Fileless attack toolkit detected**<br>(VM_FilelessAttackToolkit.Windows) | The memory of the process specified contains a fileless attack toolkit: [toolkit name]. Fileless attack toolkits use techniques that minimize or eliminate traces of malware on disk, and greatly reduce the chances of detection by disk-based malware scanning solutions. Specific behaviors include:<br>1) Well-known toolkits and crypto mining software.<br>2) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>3) Injected malicious executable in process memory. | Defense Evasion, Execution | Medium | -| **High risk software detected** | Analysis of host data from %{Compromised Host} detected the usage of software that has been associated with the installation of malware in the past. A common technique utilized in the distribution of malicious software is to package it within otherwise benign tools such as the one seen in this alert. When you use these tools, the malware can be silently installed in the background. | - | Medium | -| **Local Administrators group members were enumerated** | Machine logs indicate a successful enumeration on group %{Enumerated Group Domain Name}\%{Enumerated Group Name}. Specifically, %{Enumerating User Domain Name}\%{Enumerating User Name} remotely enumerated the members of the %{Enumerated Group Domain Name}\%{Enumerated Group Name} group. This activity could either be legitimate activity, or an indication that a machine in your organization has been compromised and used for reconnaissance of %{vmname}. | - | Informational | -| **Malicious firewall rule created by ZINC server implant [seen multiple times]** | A firewall rule was created using techniques that match a known actor, ZINC. The rule was possibly used to open a port on %{Compromised Host} to allow for Command & Control communications.
This behavior was seen [x] times today on the following machines: [Machine names] | - | High | -| **Malicious SQL activity** | Machine logs indicate that '%{process name}' was executed by account: %{user name}. This activity is considered malicious. | - | High | -| **Multiple Domain Accounts Queried** | Analysis of host data has determined that an unusual number of distinct domain accounts are being queried within a short time period from %{Compromised Host}. This kind of activity could be legitimate, but can also be an indication of compromise. | - | Medium | -| **Possible credential dumping detected [seen multiple times]** | Analysis of host data has detected the use of a native Windows tool (for example, sqldumper.exe) in a way that allows credentials to be extracted from memory. Attackers often use these techniques to extract credentials that they then further use for lateral movement and privilege escalation. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium | -| **Potential attempt to bypass AppLocker detected** | Analysis of host data on %{Compromised Host} detected a potential attempt to bypass AppLocker restrictions. AppLocker can be configured to implement a policy that limits what executables are allowed to run on a Windows system. The command-line pattern similar to that identified in this alert has been previously associated with attacker attempts to circumvent AppLocker policy by using trusted executables (allowed by AppLocker policy) to execute untrusted code. This could be legitimate activity, or an indication of a compromised host. | - | High | -| **PsExec execution detected**<br>(VM_RunByPsExec) | Analysis of host data indicates that the process %{Process Name} was executed by the PsExec utility. PsExec can be used for running processes remotely. This technique might be used for malicious purposes. | Lateral Movement, Execution | Informational | -| **Ransomware indicators detected [seen multiple times]** | Analysis of host data indicates suspicious activity traditionally associated with lock-screen and encryption ransomware. Lock screen ransomware displays a full-screen message preventing interactive use of the host and access to its files. Encryption ransomware prevents access by encrypting data files. In both cases a ransom message is typically displayed, requesting payment in order to restore file access. This behavior was seen [x] times today on the following machines: [Machine names] | - | High | -| **Ransomware indicators detected** | Analysis of host data indicates suspicious activity traditionally associated with lock-screen and encryption ransomware. Lock screen ransomware displays a full-screen message preventing interactive use of the host and access to its files. Encryption ransomware prevents access by encrypting data files. In both cases a ransom message is typically displayed, requesting payment in order to restore file access. | - | High | -| **Rare SVCHOST service group executed**<br>(VM_SvcHostRunInRareServiceGroup) | The system process SVCHOST was observed running a rare service group. Malware often uses SVCHOST to masquerade its malicious activity. | Defense Evasion, Execution | Informational | -| **Sticky keys attack detected** | Analysis of host data indicates that an attacker may be subverting an accessibility binary (for example sticky keys, onscreen keyboard, narrator) in order to provide backdoor access to the host %{Compromised Host}.
| - | Medium | -| **Successful brute force attack**<br>(VM_LoginBruteForceSuccess) | Several sign-in attempts were detected from the same source. Some successfully authenticated to the host.<br>This resembles a burst attack, in which an attacker performs numerous authentication attempts to find valid account credentials. | Exploitation | Medium/High | -| **Suspect integrity level indicative of RDP hijacking** | Analysis of host data has detected tscon.exe running with SYSTEM privileges - this can be indicative of an attacker abusing this binary in order to switch context to any other logged on user on this host; it's a known attacker technique to compromise more user accounts and move laterally across a network. | - | Medium | -| **Suspect service installation** | Analysis of host data has detected the installation of tscon.exe as a service: this binary being started as a service potentially allows an attacker to trivially switch to any other logged on user on this host by hijacking RDP connections; it's a known attacker technique to compromise more user accounts and move laterally across a network. | - | Medium | -| **Suspected Kerberos Golden Ticket attack parameters observed** | Analysis of host data detected commandline parameters consistent with a Kerberos Golden Ticket attack. | - | Medium | -| **Suspicious Account Creation Detected** | Analysis of host data on %{Compromised Host} detected creation or use of a local account %{Suspicious account name} : this account name closely resembles a standard Windows account or group name '%{Similar To Account Name}'. This is potentially a rogue account created by an attacker, so named in order to avoid being noticed by a human administrator. | - | Medium | -| **Suspicious Activity Detected**<br>(VM_SuspiciousActivity) | Analysis of host data has detected a sequence of one or more processes running on %{machine name} that have historically been associated with malicious activity. While individual commands may appear benign, the alert is scored based on an aggregation of these commands. This could either be legitimate activity, or an indication of a compromised host. | Execution | Medium | -| **Suspicious authentication activity**<br>(VM_LoginBruteForceValidUserFailed) | Although none of the sign-in attempts succeeded, some of the accounts used were recognized by the host. This resembles a dictionary attack, in which an attacker performs numerous authentication attempts using a dictionary of predefined account names and passwords in order to find valid credentials to access the host. This indicates that some of your host account names might exist in a well-known account name dictionary. | Probing | Medium | -| **Suspicious code segment detected** | Indicates that a code segment has been allocated by using non-standard methods, such as reflective injection and process hollowing. The alert provides more characteristics of the code segment that have been processed to provide context for the capabilities and behaviors of the reported code segment. | - | Medium | -| **Suspicious double extension file executed** | Analysis of host data indicates execution of a process with a suspicious double extension. This extension may trick users into thinking files are safe to be opened and might indicate the presence of malware on the system.
| - | High | -| **Suspicious download using Certutil detected [seen multiple times]** | Analysis of host data on %{Compromised Host} detected the use of certutil.exe, a built-in administrator utility, for the download of a binary instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using certutil.exe to download and decode a malicious executable that will then be subsequently executed. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium | -| **Suspicious download using Certutil detected** | Analysis of host data on %{Compromised Host} detected the use of certutil.exe, a built-in administrator utility, for the download of a binary instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using certutil.exe to download and decode a malicious executable that will then be subsequently executed. | - | Medium | -| **Suspicious PowerShell Activity Detected** | Analysis of host data detected a PowerShell script running on %{Compromised Host} that has features in common with known suspicious scripts. This script could either be legitimate activity, or an indication of a compromised host. | - | High | -| **Suspicious PowerShell cmdlets executed** | Analysis of host data indicates execution of known malicious PowerShell PowerSploit cmdlets. | - | Medium | -| **Suspicious process executed [seen multiple times]** | Machine logs indicate that the suspicious process: '%{Suspicious Process}' was running on the machine, often associated with attacker attempts to access credentials. This behavior was seen [x] times today on the following machines: [Machine names] | - | High | -| **Suspicious process executed** | Machine logs indicate that the suspicious process: '%{Suspicious Process}' was running on the machine, often associated with attacker attempts to access credentials. | - | High | -| **Suspicious process name detected [seen multiple times]** | Analysis of host data on %{Compromised Host} detected a process whose name is suspicious, for example corresponding to a known attacker tool or named in a way that is suggestive of attacker tools that try to hide in plain sight. This process could be legitimate activity, or an indication that one of your machines has been compromised. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium | -| **Suspicious process name detected** | Analysis of host data on %{Compromised Host} detected a process whose name is suspicious, for example corresponding to a known attacker tool or named in a way that is suggestive of attacker tools that try to hide in plain sight. This process could be legitimate activity, or an indication that one of your machines has been compromised. | - | Medium | -| **Suspicious process termination burst**<br>(VM_TaskkillBurst) | Analysis of host data indicates a suspicious process termination burst in %{Machine Name}. Specifically, %{NumberOfCommands} processes were killed between %{Begin} and %{Ending}. | Defense Evasion | Low | -| **Suspicious SQL activity** | Machine logs indicate that '%{process name}' was executed by account: %{user name}. This activity is uncommon with this account. 
-| **Suspicious SVCHOST process executed** | The system process SVCHOST was observed running in an abnormal context. Malware often uses SVCHOST to masquerade its malicious activity. | - | High | -| **Suspicious system process executed**<br>(VM_SystemProcessInAbnormalContext) | The system process %{process name} was observed running in an abnormal context. Malware often uses this process name to masquerade its malicious activity. | Defense Evasion, Execution | High | -| **Suspicious Volume Shadow Copy Activity** | Analysis of host data has detected shadow copy deletion activity on the resource. Volume Shadow Copy (VSC) is an important artifact that stores data snapshots. Some malware, specifically ransomware, targets VSC to sabotage backup strategies. | - | High | -| **Suspicious WindowPosition registry value detected** | Analysis of host data on %{Compromised Host} detected an attempted WindowPosition registry configuration change that could be indicative of hiding application windows in nonvisible sections of the desktop. This could be legitimate activity, or an indication of a compromised machine: this type of activity has been previously associated with known adware (or unwanted software) such as Win32/OneSystemCare and Win32/SystemHealer and malware such as Win32/Creprote. When the WindowPosition value is set to 201329664 (hex: 0x0c000c00, corresponding to X-axis=0c00 and Y-axis=0c00), this places the console app's window in a non-visible section of the user's screen, in an area that is hidden from view below the visible start menu/taskbar. Known suspect hex values include, but are not limited to, c000c000. | - | Low | -| **Suspiciously named process detected** | Analysis of host data on %{Compromised Host} detected a process whose name is very similar to, but different from, a very commonly run process (%{Similar To Process Name}). While this process could be benign, attackers are known to sometimes hide in plain sight by naming their malicious tools to resemble legitimate process names. | - | Medium | -| **Unusual config reset in your virtual machine**<br>(VM_VMAccessUnusualConfigReset) | An unusual config reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the configuration in your virtual machine and compromise it. | Credential Access | Medium | -| **Unusual process execution detected** | Analysis of host data on %{Compromised Host} detected the execution of a process by %{User Name} that was unusual. Accounts such as %{User Name} tend to perform a limited set of operations; this execution was determined to be out of character and may be suspicious. | - | High | -| **Unusual user password reset in your virtual machine**<br>(VM_VMAccessUnusualPasswordReset) | An unusual user password reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the credentials of a local user in your virtual machine and compromise it. | Credential Access | Medium |
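
The SVCHOST alerts above hinge on "abnormal context": a genuine svchost.exe runs from System32 and is spawned by services.exe. A rough, Windows-only sketch of that idea using the third-party `psutil` package (real detections rely on much richer telemetry than these two properties):

```python
import psutil  # third-party: pip install psutil

def svchost_in_abnormal_context(proc: psutil.Process) -> bool:
    # A real svchost.exe lives in System32 with services.exe as its parent.
    try:
        exe_ok = proc.exe().lower().startswith(r"c:\windows\system32")
        parent = proc.parent()
        parent_ok = parent is not None and parent.name().lower() == "services.exe"
    except psutil.Error:  # access denied, process exited, etc.
        return False
    return not (exe_ok and parent_ok)

suspects = [p.pid for p in psutil.process_iter(["name"])
            if (p.info["name"] or "").lower() == "svchost.exe"
            and svchost_in_abnormal_context(p)]
print(suspects)
```
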
-| **Unusual user SSH key reset in your virtual machine**<br>(VM_VMAccessUnusualSSHReset) | An unusual user SSH key reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the SSH key of a user account in your virtual machine and compromise it. | Credential Access | Medium | -| **VBScript HTTP object allocation detected** | Creation of a VBScript file using Command Prompt has been detected. The script contains an HTTP object allocation command. This action can be used to download malicious files. | | | -| **Suspicious installation of GPU extension in your virtual machine (Preview)** <br> (VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. | Impact | Low | +| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | +| | | :-: | - | +| **A logon from a malicious IP has been detected. [seen multiple times]** | A successful remote authentication for the account [account] and process [process] occurred; however, the logon IP address (x.x.x.x) has previously been reported as malicious or highly unusual. A successful attack has probably occurred. Files with the .scr extension are screen saver files that normally reside in, and execute from, the Windows system directory. | - | High | +| **Addition of Guest account to Local Administrators group** | Analysis of host data has detected the addition of the built-in Guest account to the Local Administrators group on %{Compromised Host}, which is strongly associated with attacker activity. | - | Medium | +| **An event log was cleared** | Machine logs indicate a suspicious event log clearing operation by user: '%{user name}' in Machine: '%{CompromisedEntity}'. The %{log channel} log was cleared. | - | Informational | +| **Antimalware Action Failed** | Microsoft Antimalware has encountered an error when taking an action on malware or other potentially unwanted software. | - | Medium | +| **Antimalware Action Taken** | Microsoft Antimalware for Azure has taken an action to protect this machine from malware or other potentially unwanted software. | - | Medium | +| **Antimalware broad files exclusion in your virtual machine**<br>(VM_AmBroadFilesExclusion) | A broad files exclusion rule for the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Such an exclusion practically disables the antimalware protection.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | - | Medium | +| **Antimalware disabled and code execution in your virtual machine**<br>(VM_AmDisablementAndCodeExecution) | Antimalware was disabled at the same time as code execution on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers disable antimalware scanners to prevent detection while running unauthorized tools or infecting the machine with malware.
| - | High | +| **Antimalware disabled in your virtual machine**<br>(VM_AmDisablement) | Antimalware was disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | Defense Evasion | Medium | +| **Antimalware file exclusion and code execution in your virtual machine**<br>(VM_AmFileExclusionAndCodeExecution) | A file was excluded from your antimalware scanner at the same time as code was executed via a custom script extension on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | Defense Evasion, Execution | High | +| **Antimalware file exclusion and code execution in your virtual machine**<br>(VM_AmTempFileExclusionAndCodeExecution) | A temporary file exclusion from the antimalware extension, in parallel to execution of code via a custom script extension, was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion, Execution | High | +| **Antimalware file exclusion in your virtual machine**<br>(VM_AmTempFileExclusion) | A file was excluded from your antimalware scanner on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | Defense Evasion | Medium | +| **Antimalware real-time protection was disabled in your virtual machine**<br>(VM_AmRealtimeProtectionDisabled) | Disablement of the antimalware extension's real-time protection was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | +| **Antimalware real-time protection was disabled temporarily in your virtual machine**<br>(VM_AmTempRealtimeProtectionDisablement) | Temporary disablement of the antimalware extension's real-time protection was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | +| **Antimalware real-time protection was disabled temporarily while code was executed in your virtual machine**<br>(VM_AmRealtimeProtectionDisablementAndCodeExec) | Temporary disablement of the antimalware extension's real-time protection, in parallel to code execution via a custom script extension, was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware.
| - | High | +| **Antimalware scans blocked for files potentially related to malware campaigns on your virtual machine (Preview)**<br>(VM_AmMalwareCampaignRelatedExclusion) | An exclusion rule was detected in your virtual machine to prevent your antimalware extension from scanning certain files that are suspected of being related to a malware campaign. The rule was detected by analyzing the Azure Resource Manager operations in your subscription. Attackers might exclude files from antimalware scans to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | +| **Antimalware temporarily disabled in your virtual machine**<br>(VM_AmTemporarilyDisablement) | Antimalware was temporarily disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | - | Medium | +| **Antimalware unusual file exclusion in your virtual machine**<br>(VM_UnusualAmFileExclusion) | An unusual file exclusion from the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | +| **Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with a suspicious domain was detected by analyzing DNS transactions from your resource and comparing them against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access, Persistence, Execution, Command And Control, Exploitation | Medium | +| **Detected actions indicative of disabling and deleting IIS log files** | Analysis of host data detected actions that show IIS log files being disabled and/or deleted. | - | Medium | +| **Detected anomalous mix of upper and lower case characters in command-line** | Analysis of host data on %{Compromised Host} detected a command line with an anomalous mix of uppercase and lowercase characters. This kind of pattern, while possibly benign, is also typical of attackers trying to hide from case-sensitive or hash-based rule matching when performing administrative tasks on a compromised host. | - | Medium | +| **Detected change to a registry key that can be abused to bypass UAC** | Analysis of host data on %{Compromised Host} detected that a registry key that can be abused to bypass UAC (User Account Control) was changed. This kind of configuration, while possibly benign, is also typical of attacker activity when trying to move from unprivileged (standard user) to privileged (for example, administrator) access on a compromised host. | - | Medium | +| **Detected decoding of an executable using built-in certutil.exe tool** | Analysis of host data on %{Compromised Host} detected that certutil.exe, a built-in administrator utility, was being used to decode an executable, rather than for its mainstream purpose of manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using a tool such as certutil.exe to decode a malicious executable that is then executed. | - | High |
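
The case-mixing alert above flags command lines such as `pOwErShElL` whose case has been randomized to dodge case-sensitive or hash-based rules. One illustrative way to score that, assuming a simple case-transition metric (neither the metric nor any threshold is documented for the product):

```python
# Fraction of adjacent letter pairs that flip case; near 1.0 looks randomized.
def case_transition_ratio(cmdline: str) -> float:
    letters = [c for c in cmdline if c.isalpha()]
    if len(letters) < 2:
        return 0.0
    transitions = sum(1 for a, b in zip(letters, letters[1:])
                      if a.islower() != b.islower())
    return transitions / (len(letters) - 1)

print(case_transition_ratio("pOwErShElL -eNc SQBFAFgA"))     # high, suspicious
print(case_transition_ratio("powershell -File deploy.ps1"))  # low, normal
```
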
| - | High | +| **Detected enabling of the WDigest UseLogonCredential registry key** | Analysis of host data detected a change in the registry key HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest "UseLogonCredential". Specifically, this key has been updated to allow logon credentials to be stored in clear text in LSA memory. Once enabled, an attacker can dump clear text passwords from LSA memory with credential harvesting tools such as Mimikatz. | - | Medium | +| **Detected encoded executable in command line data** | Analysis of host data on %{Compromised Host} detected a base-64 encoded executable. This has previously been associated with attackers attempting to construct executables on the fly through a sequence of commands, and attempting to evade intrusion detection systems by ensuring that no individual command would trigger an alert. This could be legitimate activity, or an indication of a compromised host. | - | High | +| **Detected obfuscated command line** | Attackers use increasingly complex obfuscation techniques to evade detections that run against the underlying data. Analysis of host data on %{Compromised Host} detected suspicious indicators of obfuscation on the command line. | - | Informational | +| **Detected possible execution of keygen executable** | Analysis of host data on %{Compromised Host} detected execution of a process whose name is indicative of a keygen tool; such tools are typically used to defeat software licensing mechanisms, but their download is often bundled with other malicious software. Activity group GOLD has been known to make use of such keygens to covertly gain back-door access to hosts that they compromise. | - | Medium | +| **Detected possible execution of malware dropper** | Analysis of host data on %{Compromised Host} detected a filename that has previously been associated with one of activity group GOLD's methods of installing malware on a victim host. | - | High | +| **Detected possible local reconnaissance activity** | Analysis of host data on %{Compromised Host} detected a combination of systeminfo commands that has previously been associated with one of activity group GOLD's methods of performing reconnaissance activity. While 'systeminfo.exe' is a legitimate Windows tool, executing it twice in succession in the way that has occurred here is rare. | - | | +| **Detected potentially suspicious use of Telegram tool** | Analysis of host data shows installation of Telegram, a free cloud-based instant messaging service that exists for both mobile and desktop systems. Attackers are known to abuse this service to transfer malicious binaries to any other computer, phone, or tablet. | - | Medium | +| **Detected suppression of legal notice displayed to users at logon** | Analysis of host data on %{Compromised Host} detected changes to the registry key that controls whether a legal notice is displayed to users when they log on. Microsoft security analysis has determined that this is a common activity undertaken by attackers after having compromised a host. | - | Low | +| **Detected suspicious combination of HTA and PowerShell** | mshta.exe (Microsoft HTML Application Host), a signed Microsoft binary, is being used by attackers to launch malicious PowerShell commands. Attackers often resort to having an HTA file with inline VBScript. When a victim browses to the HTA file and chooses to run it, the PowerShell commands and scripts that it contains are executed. Analysis of host data on %{Compromised Host} detected mshta.exe launching PowerShell commands. | - | Medium |
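
The encoded-executable alert above looks for base-64 text in command-line data that decodes to a Windows PE image, which always begins with the two bytes `MZ`. A minimal sketch of that check (the 40-character minimum run length and the naive padding fix-up are assumptions):

```python
import base64
import re

# Long base64-looking runs that decode to bytes starting with the PE magic.
B64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

def contains_encoded_executable(cmdline: str) -> bool:
    for token in B64_RUN.findall(cmdline):
        try:
            decoded = base64.b64decode(token + "=" * (-len(token) % 4))
        except ValueError:
            continue
        if decoded.startswith(b"MZ"):
            return True
    return False

payload = base64.b64encode(b"MZ" + b"\x00" * 28).decode()
print(contains_encoded_executable(f"cmd /c echo {payload}"))       # True
print(contains_encoded_executable("powershell -File deploy.ps1"))  # False
```
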
+| **Detected suspicious commandline arguments** | Analysis of host data on %{Compromised Host} detected suspicious command-line arguments that have been used in conjunction with a reverse shell used by activity group HYDROGEN. | - | High | +| **Detected suspicious commandline used to start all executables in a directory** | Analysis of host data has detected a suspicious process running on %{Compromised Host}. The command line indicates an attempt to start all executables (*.exe) that may reside in a directory. This could be an indication of a compromised host. | - | Medium | +| **Detected suspicious credentials in commandline** | Analysis of host data on %{Compromised Host} detected a suspicious password being used to execute a file by activity group BORON. This activity group has been known to use this password to execute Pirpi malware on a victim host. | - | High | +| **Detected suspicious document credentials** | Analysis of host data on %{Compromised Host} detected a suspicious, common precomputed password hash used by malware being used to execute a file. Activity group HYDROGEN has been known to use this password to execute malware on a victim host. | - | High | +| **Detected suspicious execution of VBScript.Encode command** | Analysis of host data on %{Compromised Host} detected the execution of the VBScript.Encode command. This encodes scripts into unreadable text, making it more difficult for users to examine the code. Microsoft threat research shows that attackers often use encoded VBScript files as part of their attack to evade detection systems. This could be legitimate activity, or an indication of a compromised host. | - | Medium | +| **Detected suspicious execution via rundll32.exe** | Analysis of host data on %{Compromised Host} detected rundll32.exe being used to execute a process with an uncommon name, consistent with the process naming scheme previously seen used by activity group GOLD when installing their first stage implant on a compromised host. | - | High | +| **Detected suspicious file cleanup commands** | Analysis of host data on %{Compromised Host} detected a combination of systeminfo commands that has previously been associated with one of activity group GOLD's methods of performing post-compromise self-cleanup activity. While 'systeminfo.exe' is a legitimate Windows tool, executing it twice in succession, followed by a delete command in the way that has occurred here, is rare. | - | High | +| **Detected suspicious file creation** | Analysis of host data on %{Compromised Host} detected creation or execution of a process that has previously indicated post-compromise action taken on a victim host by activity group BARIUM. This activity group has been known to use this technique to download more malware to a compromised host after an attachment in a phishing doc has been opened. | - | High | +| **Detected suspicious named pipe communications** | Analysis of host data on %{Compromised Host} detected data being written to a local named pipe from a Windows console command. Named pipes are known to be a channel used by attackers to task and communicate with a malicious implant. This could be legitimate activity, or an indication of a compromised host. | - | High | +| **Detected suspicious network activity** | Analysis of network traffic from %{Compromised Host} detected suspicious network activity.
Such traffic, while possibly benign, is typically used by an attacker to communicate with malicious servers for downloading tools, command and control, and exfiltration of data. Typical related attacker activity includes copying remote administration tools to a compromised host and exfiltrating user data from it. | - | Low | +| **Detected suspicious new firewall rule** | Analysis of host data detected that a new firewall rule was added via netsh.exe to allow traffic from an executable in a suspicious location. | - | Medium | +| **Detected suspicious use of Cacls to lower the security state of the system** | Attackers use myriad ways, like brute force and spear phishing, to achieve initial compromise and get a foothold on the network. Once initial compromise is achieved, they often take steps to lower the security settings of a system. Cacls, short for change access control list, is a native Microsoft Windows command-line utility often used for modifying the security permissions on folders and files. Attackers often use the binary to lower the security settings of a system, by giving Everyone full access to some of the system binaries like ftp.exe, net.exe, and wscript.exe. Analysis of host data on %{Compromised Host} detected suspicious use of Cacls to lower the security of a system. | - | Medium | +| **Detected suspicious use of FTP -s Switch** | Analysis of process creation data from %{Compromised Host} detected the use of the FTP "-s:filename" switch. This switch is used to specify an FTP script file for the client to run. Malware or malicious processes are known to use this FTP switch (-s:filename) to point to a script file that is configured to connect to a remote FTP server and download more malicious binaries. | - | Medium | +| **Detected suspicious use of Pcalua.exe to launch executable code** | Analysis of host data on %{Compromised Host} detected the use of pcalua.exe to launch executable code. Pcalua.exe is a component of the Microsoft Windows "Program Compatibility Assistant", which detects compatibility issues during the installation or execution of a program. Attackers are known to abuse functionality of legitimate Windows system tools to perform malicious actions, for example using pcalua.exe with the -a switch to launch malicious executables either locally or from remote shares. | - | Medium | +| **Detected the disabling of critical services** | Analysis of host data on %{Compromised Host} detected execution of the "net.exe stop" command to stop critical services like SharedAccess or the Windows Security app. Stopping either of these services can be an indication of malicious behavior. | - | Medium | +| **Digital currency mining related behavior detected** | Analysis of host data on %{Compromised Host} detected the execution of a process or command normally associated with digital currency mining. | - | High | +| **Dynamic PS script construction** | Analysis of host data on %{Compromised Host} detected a PowerShell script being constructed dynamically. Attackers sometimes use this approach of progressively building up a script in order to evade IDS systems. This could be legitimate activity, or an indication that one of your machines has been compromised. | - | Medium | +| **Executable found running from a suspicious location** | Analysis of host data detected an executable file on %{Compromised Host} that is running from a location in common with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host. | - | High |
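
The critical-services alert above names `net.exe stop` against services such as SharedAccess. A hedged sketch of that command-line check (SharedAccess comes from the alert text itself; the other service names are illustrative additions):

```python
import re

# Illustrative watch list; only SharedAccess is named by the alert.
CRITICAL_SERVICES = {"sharedaccess", "windefend", "wscsvc", "mpssvc"}
NET_STOP = re.compile(r'\bnet(\.exe)?\s+stop\s+"?([\w.-]+)', re.IGNORECASE)

def stops_critical_service(cmdline: str) -> bool:
    m = NET_STOP.search(cmdline)
    return bool(m) and m.group(2).lower() in CRITICAL_SERVICES

print(stops_critical_service("net.exe stop SharedAccess"))  # True
print(stops_critical_service("net stop Spooler"))            # False
```
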
+| **Fileless attack behavior detected**<br>(VM_FilelessAttackBehavior.Windows) | The memory of the process specified contains behaviors commonly used by fileless attacks. Specific behaviors include:<br>1) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>2) Active network connections. See NetworkConnections below for details.<br>3) Function calls to security-sensitive operating system interfaces. See Capabilities below for referenced OS capabilities.<br>4) Contains a thread that was started in a dynamically allocated code segment. This is a common pattern for process injection attacks. | Defense Evasion | Low | +| **Fileless attack technique detected**<br>(VM_FilelessAttackTechnique.Windows) | The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software. Specific behaviors include:<br>1) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>2) Executable image injected into the process, such as in a code injection attack.<br>3) Active network connections. See NetworkConnections below for details.<br>4) Function calls to security-sensitive operating system interfaces. See Capabilities below for referenced OS capabilities.<br>5) Process hollowing, which is a technique used by malware in which a legitimate process is loaded on the system to act as a container for hostile code.<br>6) Contains a thread that was started in a dynamically allocated code segment. This is a common pattern for process injection attacks. | Defense Evasion, Execution | High | +| **Fileless attack toolkit detected**<br>(VM_FilelessAttackToolkit.Windows) | The memory of the process specified contains a fileless attack toolkit: [toolkit name]. Fileless attack toolkits use techniques that minimize or eliminate traces of malware on disk, and greatly reduce the chances of detection by disk-based malware scanning solutions. Specific behaviors include:<br>1) Well-known toolkits and crypto mining software.<br>2) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>3) Injected malicious executable in process memory. | Defense Evasion, Execution | Medium | +| **High risk software detected** | Analysis of host data from %{Compromised Host} detected the usage of software that has been associated with the installation of malware in the past. A common technique utilized in the distribution of malicious software is to package it within otherwise benign tools such as the one seen in this alert. When you use these tools, the malware can be silently installed in the background. | - | Medium | +| **Local Administrators group members were enumerated** | Machine logs indicate a successful enumeration on group %{Enumerated Group Domain Name}\%{Enumerated Group Name}. Specifically, %{Enumerating User Domain Name}\%{Enumerating User Name} remotely enumerated the members of the %{Enumerated Group Domain Name}\%{Enumerated Group Name} group. This activity could either be legitimate activity, or an indication that a machine in your organization has been compromised and used for reconnaissance of %{vmname}.
| - | Informational | +| **Malicious firewall rule created by ZINC server implant [seen multiple times]** | A firewall rule was created using techniques that match a known actor, ZINC. The rule was possibly used to open a port on %{Compromised Host} to allow for Command & Control communications. This behavior was seen [x] times today on the following machines: [Machine names] | - | High | +| **Malicious SQL activity** | Machine logs indicate that '%{process name}' was executed by account: %{user name}. This activity is considered malicious. | - | High | +| **Multiple Domain Accounts Queried** | Analysis of host data has determined that an unusual number of distinct domain accounts are being queried within a short time period from %{Compromised Host}. This kind of activity could be legitimate, but can also be an indication of compromise. | - | Medium | +| **Possible credential dumping detected [seen multiple times]** | Analysis of host data has detected use of a native Windows tool (for example, sqldumper.exe) in a way that allows credentials to be extracted from memory. Attackers often use these techniques to extract credentials that they then further use for lateral movement and privilege escalation. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium | +| **Potential attempt to bypass AppLocker detected** | Analysis of host data on %{Compromised Host} detected a potential attempt to bypass AppLocker restrictions. AppLocker can be configured to implement a policy that limits what executables are allowed to run on a Windows system. A command-line pattern similar to that identified in this alert has been previously associated with attacker attempts to circumvent AppLocker policy by using trusted executables (allowed by AppLocker policy) to execute untrusted code. This could be legitimate activity, or an indication of a compromised host. | - | High | +| **PsExec execution detected**<br>(VM_RunByPsExec) | Analysis of host data indicates that the process %{Process Name} was executed by the PsExec utility. PsExec can be used for running processes remotely. This technique might be used for malicious purposes. | Lateral Movement, Execution | Informational | +| **Rare SVCHOST service group executed**<br>(VM_SvcHostRunInRareServiceGroup) | The system process SVCHOST was observed running a rare service group. Malware often uses SVCHOST to masquerade its malicious activity. | Defense Evasion, Execution | Informational | +| **Sticky keys attack detected** | Analysis of host data indicates that an attacker may be subverting an accessibility binary (for example, sticky keys, on-screen keyboard, or Narrator) in order to provide backdoor access to the host %{Compromised Host}. | - | Medium | +| **Successful brute force attack**<br>(VM_LoginBruteForceSuccess) | Several sign-in attempts were detected from the same source. Some successfully authenticated to the host.<br>This resembles a burst attack, in which an attacker performs numerous authentication attempts to find valid account credentials. | Exploitation | Medium/High | +| **Suspect integrity level indicative of RDP hijacking** | Analysis of host data has detected tscon.exe running with SYSTEM privileges - this can be indicative of an attacker abusing this binary in order to switch context to any other logged on user on this host; it's a known attacker technique to compromise more user accounts and move laterally across a network. | - | Medium |
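
The brute-force alert above describes a burst of failed sign-ins from one source followed by a success. A toy aggregation over (source IP, outcome) events showing that shape (the failure threshold is an assumption, not a documented value):

```python
from collections import defaultdict

# Alert a source IP when a success follows at least min_failures failures.
def brute_force_sources(events, min_failures=10):
    failures = defaultdict(int)
    alerts = []
    for source_ip, succeeded in events:
        if succeeded:
            if failures[source_ip] >= min_failures:
                alerts.append(source_ip)
            failures[source_ip] = 0
        else:
            failures[source_ip] += 1
    return alerts

events = [("198.51.100.7", False)] * 12 + [("198.51.100.7", True)]
print(brute_force_sources(events))  # ['198.51.100.7']
```
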
| - | Medium | +| **Suspect service installation** | Analysis of host data has detected the installation of tscon.exe as a service: this binary being started as a service potentially allows an attacker to trivially switch to any other logged on user on this host by hijacking RDP connections; it's a known attacker technique to compromise more user accounts and move laterally across a network. | - | Medium | +| **Suspected Kerberos Golden Ticket attack parameters observed** | Analysis of host data detected command-line parameters consistent with a Kerberos Golden Ticket attack. | - | Medium | +| **Suspicious Account Creation Detected** | Analysis of host data on %{Compromised Host} detected creation or use of a local account %{Suspicious account name}: this account name closely resembles a standard Windows account or group name '%{Similar To Account Name}'. This is potentially a rogue account created by an attacker, so named in order to avoid being noticed by a human administrator. | - | Medium | +| **Suspicious Activity Detected**<br>(VM_SuspiciousActivity) | Analysis of host data has detected a sequence of one or more processes running on %{machine name} that have historically been associated with malicious activity. While individual commands may appear benign, the alert is scored based on an aggregation of these commands. This could either be legitimate activity, or an indication of a compromised host. | Execution | Medium | +| **Suspicious authentication activity**<br>(VM_LoginBruteForceValidUserFailed) | Although none of them succeeded, some of the accounts used were recognized by the host. This resembles a dictionary attack, in which an attacker performs numerous authentication attempts using a dictionary of predefined account names and passwords in order to find valid credentials to access the host. This indicates that some of your host account names might exist in a well-known account name dictionary. | Probing | Medium | +| **Suspicious code segment detected** | Indicates that a code segment has been allocated by using non-standard methods, such as reflective injection and process hollowing. The alert provides more characteristics of the code segment that have been processed to provide context for the capabilities and behaviors of the reported code segment. | - | Medium | +| **Suspicious double extension file executed** | Analysis of host data indicates the execution of a process with a suspicious double extension. This extension may trick users into thinking files are safe to open and might indicate the presence of malware on the system. | - | High | +| **Suspicious download using Certutil detected [seen multiple times]** | Analysis of host data on %{Compromised Host} detected the use of certutil.exe, a built-in administrator utility, for the download of a binary, rather than for its mainstream purpose of manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using certutil.exe to download and decode a malicious executable that is then executed. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium | +| **Suspicious download using Certutil detected** | Analysis of host data on %{Compromised Host} detected the use of certutil.exe, a built-in administrator utility, for the download of a binary, rather than for its mainstream purpose of manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using certutil.exe to download and decode a malicious executable that is then executed. | - | Medium |
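
The account-creation alert above compares a new local account name against standard Windows names. A small sketch of such a look-alike test using `difflib` from the standard library (the name list and the 0.8 similarity cutoff are assumptions):

```python
import difflib

# Illustrative set of standard Windows account and group names.
KNOWN_NAMES = ["Administrator", "Guest", "SYSTEM", "Administrators", "Users"]

def resembles_known_account(candidate: str, cutoff: float = 0.8) -> list[str]:
    return [name for name in KNOWN_NAMES
            if name.lower() != candidate.lower()
            and difflib.SequenceMatcher(
                None, candidate.lower(), name.lower()).ratio() >= cutoff]

print(resembles_known_account("Administrat0r"))  # ['Administrator', 'Administrators']
print(resembles_known_account("jdoe"))           # []
```
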
+| **Suspicious PowerShell Activity Detected** | Analysis of host data detected a PowerShell script running on %{Compromised Host} that has features in common with known suspicious scripts. This script could either be legitimate activity, or an indication of a compromised host. | - | High | +| **Suspicious PowerShell cmdlets executed** | Analysis of host data indicates execution of known malicious PowerShell PowerSploit cmdlets. | - | Medium | +| **Suspicious process executed [seen multiple times]** | Machine logs indicate that the suspicious process '%{Suspicious Process}' was running on the machine; it is often associated with attacker attempts to access credentials. This behavior was seen [x] times today on the following machines: [Machine names] | - | High | +| **Suspicious process executed** | Machine logs indicate that the suspicious process '%{Suspicious Process}' was running on the machine; it is often associated with attacker attempts to access credentials. | - | High | +| **Suspicious process name detected [seen multiple times]** | Analysis of host data on %{Compromised Host} detected a process whose name is suspicious, for example, corresponding to a known attacker tool or named in a way that is suggestive of attacker tools that try to hide in plain sight. This process could be legitimate activity, or an indication that one of your machines has been compromised. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium | +| **Suspicious process name detected** | Analysis of host data on %{Compromised Host} detected a process whose name is suspicious, for example, corresponding to a known attacker tool or named in a way that is suggestive of attacker tools that try to hide in plain sight. This process could be legitimate activity, or an indication that one of your machines has been compromised. | - | Medium | +| **Suspicious process termination burst**<br>(VM_TaskkillBurst) | Analysis of host data indicates a suspicious process termination burst in %{Machine Name}. Specifically, %{NumberOfCommands} processes were killed between %{Begin} and %{Ending}. | Defense Evasion | Low | +| **Suspicious SQL activity** | Machine logs indicate that '%{process name}' was executed by account: %{user name}. This activity is uncommon with this account. | - | Medium | +| **Suspicious SVCHOST process executed** | The system process SVCHOST was observed running in an abnormal context. Malware often uses SVCHOST to masquerade its malicious activity. | - | High | +| **Suspicious system process executed**<br>(VM_SystemProcessInAbnormalContext) | The system process %{process name} was observed running in an abnormal context. Malware often uses this process name to masquerade its malicious activity. | Defense Evasion, Execution | High | +| **Suspicious Volume Shadow Copy Activity** | Analysis of host data has detected shadow copy deletion activity on the resource. Volume Shadow Copy (VSC) is an important artifact that stores data snapshots. Some malware, specifically ransomware, targets VSC to sabotage backup strategies. | - | High |
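
The Volume Shadow Copy alert above reflects a classic ransomware precursor: deleting restore points before encrypting. The `vssadmin delete shadows` and `wmic shadowcopy delete` command lines matched below are well-known real-world examples; the detection itself is only a sketch:

```python
import re

# Match the two most common shadow-copy deletion command lines.
VSC_DELETE = re.compile(
    r"(vssadmin(\.exe)?\s+delete\s+shadows|wmic(\.exe)?\s+shadowcopy\s+delete)",
    re.IGNORECASE)

def deletes_shadow_copies(cmdline: str) -> bool:
    return bool(VSC_DELETE.search(cmdline))

print(deletes_shadow_copies("vssadmin.exe Delete Shadows /All /Quiet"))  # True
print(deletes_shadow_copies("vssadmin list shadows"))                    # False
```
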
| - | High | +| **Suspicious WindowPosition registry value detected** | Analysis of host data on %{Compromised Host} detected an attempted WindowPosition registry configuration change that could be indicative of hiding application windows in nonvisible sections of the desktop. This could be legitimate activity, or an indication of a compromised machine: this type of activity has been previously associated with known adware (or unwanted software) such as Win32/OneSystemCare and Win32/SystemHealer and malware such as Win32/Creprote. When the WindowPosition value is set to 201329664 (hex: 0x0c000c00, corresponding to X-axis=0c00 and Y-axis=0c00), this places the console app's window in a non-visible section of the user's screen, in an area that is hidden from view below the visible start menu/taskbar. Known suspect hex values include, but are not limited to, c000c000. | - | Low | +| **Suspiciously named process detected** | Analysis of host data on %{Compromised Host} detected a process whose name is very similar to, but different from, a very commonly run process (%{Similar To Process Name}). While this process could be benign, attackers are known to sometimes hide in plain sight by naming their malicious tools to resemble legitimate process names. | - | Medium | +| **Unusual config reset in your virtual machine**<br>(VM_VMAccessUnusualConfigReset) | An unusual config reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the configuration in your virtual machine and compromise it. | Credential Access | Medium | +| **Unusual process execution detected** | Analysis of host data on %{Compromised Host} detected the execution of a process by %{User Name} that was unusual. Accounts such as %{User Name} tend to perform a limited set of operations; this execution was determined to be out of character and may be suspicious. | - | High | +| **Unusual user password reset in your virtual machine**<br>(VM_VMAccessUnusualPasswordReset) | An unusual user password reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the credentials of a local user in your virtual machine and compromise it. | Credential Access | Medium | +| **Unusual user SSH key reset in your virtual machine**<br>(VM_VMAccessUnusualSSHReset) | An unusual user SSH key reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the SSH key of a user account in your virtual machine and compromise it. | Credential Access | Medium | +| **VBScript HTTP object allocation detected** | Creation of a VBScript file using Command Prompt has been detected. The script contains an HTTP object allocation command. This action can be used to download malicious files. | | | +| **Suspicious installation of GPU extension in your virtual machine (Preview)** <br> (VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. | Impact | Low |
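
The WindowPosition alert above cites the value 201329664 (0x0C000C00). The DWORD packs two 16-bit screen coordinates, so a quick decode shows why that value parks a console window off-screen (which 16-bit word carries X versus Y is an assumption here; both halves are 0x0C00 in this case, so the conclusion holds either way):

```python
# Worked decode of the suspect WindowPosition value from the alert text.
value = 201329664
assert value == 0x0C000C00          # decimal and hex agree, per the alert
low, high = value & 0xFFFF, (value >> 16) & 0xFFFF
print(hex(low), hex(high))          # 0xc00 0xc00
print(low, high)                    # 3072 3072, beyond most visible desktops
```
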
## <a name="alerts-linux"></a>Alerts for Linux machines Microsoft Defender for Containers provides security alerts on the cluster level. [Further details and notes](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters) -| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | -|-||:-:|| -| **Exposed Postgres service with trust authentication configuration in Kubernetes detected (Preview)**<br>(K8S_ExposedPostgresTrustAuth) | Kubernetes cluster configuration analysis detected exposure of a Postgres service by a load balancer. The service is configured with the trust authentication method, which doesn't require credentials. | InitialAccess | Medium | -| **Exposed Postgres service with risky configuration in Kubernetes detected (Preview)**<br>(K8S_ExposedPostgresBroadIPRange) | Kubernetes cluster configuration analysis detected exposure of a Postgres service by a load balancer with a risky configuration. Exposing the service to a wide range of IP addresses poses a security risk. | InitialAccess | Medium | -| **Attempt to create a new Linux namespace from a container detected**<br>(K8S.NODE_NamespaceCreation) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container in a Kubernetes cluster detected an attempt to create a new Linux namespace. While this behavior might be legitimate, it might indicate that an attacker is trying to escape from the container to the node. Some CVE-2022-0185 exploitations use this technique. | PrivilegeEscalation | Medium | -| **A history file has been cleared**<br>(K8S.NODE_HistoryFileCleared) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium | -| **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiActivity) | Analysis of Azure Resource Manager operations detected an abnormal behavior of a managed identity used by an AKS addon. The detected activity isn't consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was obtained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium | -| **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation that isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium | -| **An uncommon connection attempt detected**<br>(K8S.NODE_SuspectConnection) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected an uncommon connection attempt utilizing the SOCKS protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections.
| Execution, Exfiltration, Exploitation | Medium | -| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a pod deployment that is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation relate to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace in which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium | -| **Anomalous secret access (Preview)**<br>(K8S_AnomalousSecretAccess) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a secret access request that is anomalous based on previous secret access activity. This activity is considered an anomaly when taking into account how the different features seen in the secret access operation relate to one another. The features monitored by this analysis include the user name used, the name of the secret, the name of the namespace, the user agent used in the operation, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | CredentialAccess | Medium | -| **Attempt to stop apt-daily-upgrade.timer service detected**<br>(K8S.NODE_TimerServiceDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected an attempt to stop the apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational | -| **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container or directly on a Kubernetes node has detected the execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium | -| **Behavior similar to Fairware ransomware detected**<br>(K8S.NODE_FairwareMalware) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Execution | Medium | -| **Command within a container running with high privileges**<br>(K8S.NODE_PrivilegedExecutionInContainer) <sup>[1](#footnote1)</sup> | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low | -| **Container running in privileged mode**<br>(K8S.NODE_PrivilegedContainerArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected the execution of a Docker command that is running a privileged container. The privileged container has full access to the hosting pod or host resource. If compromised, an attacker may use the privileged container to gain access to the hosting pod or host. | PrivilegeEscalation, Execution | Low |
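
The Fairware alert above distinguishes `rm -rf` on a discrete folder from `rm -rf` aimed at broad, data-bearing paths. A hedged sketch of that distinction (the risky-path list is illustrative, not the product's):

```python
import re

# Illustrative set of paths whose recursive deletion would be destructive.
RISKY_PATHS = {"/", "/home", "/root", "/etc", "/var", "/var/www"}
RM_RF = re.compile(r"\brm\s+-(?:[a-z]*r[a-z]*f|[a-z]*f[a-z]*r)[a-z]*\s+(\S+)")

def risky_recursive_delete(cmdline: str) -> bool:
    m = RM_RF.search(cmdline)
    if not m:
        return False
    target = m.group(1)
    return target == "/" or target.rstrip("/") in RISKY_PATHS

print(risky_recursive_delete("rm -rf /var/www"))  # True
print(risky_recursive_delete("rm -rf ./build"))   # False
```
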
-| **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type, which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount to gain access to the node. | Privilege Escalation | Medium | -| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low | -| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate; however, attackers can use such webhooks for modifying the requests (in the case of MutatingAdmissionWebhook) or inspecting the requests and gaining sensitive information (in the case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low | -| **Detected file download from a known malicious source**<br>(K8S.NODE_SuspectDownload) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium | -| **Detected suspicious file download**<br>(K8S.NODE_SuspectDownloadArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious download of a remote file. | Persistence | Low | -| **Detected suspicious use of the nohup command**<br>(K8S.NODE_SuspectNohup) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious use of the nohup command. Attackers have been seen using nohup to run hidden files from a temporary directory to allow their executables to run in the background. It's rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium | -| **Detected suspicious use of the useradd command**<br>(K8S.NODE_SuspectUserAddition) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious use of the useradd command. | Persistence | Medium | -| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High |
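
The sensitive-mount alert above concerns hostPath volumes that expose the node to the container. A minimal sketch of such a check over a pod spec expressed as a plain dict (the sensitive-path list is illustrative):

```python
# Illustrative sensitive node paths; real policies would be more complete.
SENSITIVE_HOST_PATHS = ("/etc", "/proc", "/root", "/var/lib/kubelet",
                        "/var/run/docker.sock")

def sensitive_host_mounts(pod_spec: dict) -> list[str]:
    hits = []
    for vol in pod_spec.get("volumes", []):
        path = vol.get("hostPath", {}).get("path", "")
        if path == "/" or any(path == p or path.startswith(p + "/")
                              for p in SENSITIVE_HOST_PATHS):
            hits.append(path)
    return hits

pod = {"volumes": [{"name": "sock",
                    "hostPath": {"path": "/var/run/docker.sock"}}]}
print(sensitive_host_mounts(pod))  # ['/var/run/docker.sock']
```
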
| Execution | High | -| **Digital currency mining related behavior detected**<br>(K8S.NODE_DigitalCurrencyMining) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected the execution of a process or command normally associated with digital currency mining. | Execution | High | -| **Docker build operation detected on a Kubernetes node**<br>(K8S.NODE_ImageBuildOnNode) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low | -| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) <sup>[3](#footnote3)</sup> | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon for the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low | -| **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of processes running within a container or directly on a Kubernetes node has detected an executable file that is running from a location associated with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised system. | Execution | Medium | -| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: <https://aka.ms/exposedkubeflow-blog> | Initial Access | Medium | -| **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) | Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. An exposed dashboard allows unauthenticated access to cluster management and poses a security threat. | Initial Access | High | -| **Exposed Kubernetes service detected**<br>(K8S_ExposedService) | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high-impact operations in the cluster, such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium | -| **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Low |
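
The exposure alerts above share one shape: a Service of type LoadBalancer fronting a sensitive component such as the dashboard, Redis, or Postgres. A sketch of that shape over a Service manifest expressed as a dict (the sensitive-app list is an assumption):

```python
# Illustrative label values that indicate a sensitive workload.
SENSITIVE_APPS = {"kubernetes-dashboard", "redis", "postgres", "kubeflow"}

def exposes_sensitive_app(service: dict) -> bool:
    spec = service.get("spec", {})
    if spec.get("type") != "LoadBalancer":
        return False
    labels = {str(v).lower() for v in spec.get("selector", {}).values()}
    return bool(labels & SENSITIVE_APPS)

svc = {"spec": {"type": "LoadBalancer",
                "selector": {"k8s-app": "kubernetes-dashboard"}}}
print(exposes_sensitive_app(svc))  # True
```
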
| Initial Access | Low | -| **Indicators associated with DDOS toolkit detected**<br>(K8S.NODE_KnownLinuxDDoSToolkit) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium | -| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low | -| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes that contain information about changes in the cluster. Attackers might delete those events to hide their operations in the cluster. | Defense Evasion | Low | -| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of a Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low | -| **Manipulation of host firewall detected**<br>(K8S.NODE_FirewallDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium | -| **Microsoft Defender for Cloud test alert (not a threat).**<br>(K8S.NODE_EICAR) <sup>[1](#footnote1)</sup> | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High | -| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespace shouldn't contain user resources. Attackers can use this namespace to hide malicious components. | Persistence | Low | -| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low | -| **Possible attack tool detected**<br>(K8S.NODE_KnownLinuxAttackTool) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious tool invocation. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium |
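
The kube-system alert above compares new containers against what normally runs in that namespace. A toy baseline check over container image names (the registry allowlist is a stand-in for illustration, not an official list):

```python
# Registries a given cluster's kube-system images are assumed to come from.
EXPECTED_PREFIXES = ("mcr.microsoft.com/", "registry.k8s.io/")

def unexpected_kube_system_images(images: list[str]) -> list[str]:
    return [img for img in images if not img.startswith(EXPECTED_PREFIXES)]

images = ["registry.k8s.io/coredns/coredns:v1.11.1",
          "docker.io/example/miner:latest"]
print(unexpected_kube_system_images(images))  # ['docker.io/example/miner:latest']
```
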
| Execution, Collection, Command And Control, Probing | Medium | -| **Possible backdoor detected**<br>(K8S.NODE_LinuxBackdoorArtifact) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium | -| **Possible command line exploitation attempt**<br>(K8S.NODE_ExploitAttempt) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium | -| **Possible credential access tool detected**<br>(K8S.NODE_KnownLinuxCredentialAccessTool) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible known credential access tool was running on the container, as identified by the specified process and commandline history item. This tool is often associated with attacker attempts to access credentials. | CredentialAccess | Medium | -| **Possible Cryptocoinminer download detected**<br>(K8S.NODE_CryptoCoinMinerDownload) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected download of a file normally associated with digital currency mining. | DefenseEvasion, Command And Control, Exploitation | Medium | -| **Possible data exfiltration detected**<br>(K8S.NODE_DataEgressArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible data egress condition. Attackers will often egress data from machines they have compromised. | Collection, Exfiltration | Medium | -| **Possible Log Tampering Activity Detected**<br>(K8S.NODE_SystemLogRemoval) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible removal of files that tracks user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. | DefenseEvasion | Medium | -| **Possible password change using crypt-method detected**<br>(K8S.NODE_SuspectPasswordChange) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium | -| **Potential port forwarding to external IP address**<br>(K8S.NODE_SuspectPortForwarding) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium | -| **Potential reverse shell detected**<br>(K8S.NODE_ReverseShell) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium | -| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. 
A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low | -| **Process associated with digital currency mining detected**<br>(K8S.NODE_CryptoCoinMinerArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium | -| **Process seen accessing the SSH authorized keys file in an unusual way**<br>(K8S.NODE_SshKeyAccess) <sup>[1](#footnote1)</sup> | An SSH authorized_keys file was accessed in a method similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low | -| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low | -| **Security-related process termination detected**<br>(K8S.NODE_SuspectProcessTermination) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low | -| **SSH server is running inside a container**<br>(K8S.NODE_ContainerSSH) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected an SSH server running inside the container. | Execution | Medium | -| **Suspicious file timestamp modification**<br>(K8S.NODE_TimestampTampering) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low | -| **Suspicious request to Kubernetes API**<br>(K8S.NODE_KubernetesAPI) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes API. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium | -| **Suspicious request to the Kubernetes Dashboard**<br>(K8S.NODE_KubernetesDashboard) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium | -| **Potential crypto coin miner started**<br>(K8S.NODE_CryptoCoinMinerExecution) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a process being started in a way normally associated with digital currency mining. 
| Execution | Medium | -| **Suspicious password access**<br>(K8S.NODE_SuspectPasswordFileAccess) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected suspicious attempt to access encrypted user passwords. | Persistence | Informational | -| **Suspicious use of DNS over HTTPS**<br>(K8S.NODE_SuspiciousDNSOverHttps) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium | -| **A possible connection to malicious location has been detected.**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred. | InitialAccess | Medium | -| **Possible malicious web shell detected.**<br>(K8S.NODE_Webshell) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected a possible web shell. Attackers will often upload a web shell to a compute resource they have compromised to gain persistence or for further exploitation. | Persistence, Exploitation | Medium | -| **Burst of multiple reconnaissance commands could indicate initial activity after compromise**<br>(K8S.NODE_ReconnaissanceArtifactsBurst) <sup>[1](#footnote1)</sup> | Analysis of host/device data detected execution of multiple reconnaissance commands related to gathering system or host details performed by attackers after initial compromise. | Discovery, Collection | Low | -| **Suspicious Download Then Run Activity**<br>(K8S.NODE_DownloadAndRunCombo) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a file being downloaded then run in the same command. While this isn't always malicious, this is a very common technique attackers use to get malicious files onto victim machines. | Execution, CommandAndControl, Exploitation | Medium | -| **Digital currency mining activity**<br>(K8S.NODE_CurrencyMining) <sup>[1](#footnote1)</sup> | Analysis of DNS transactions detected digital currency mining activity. Such activity, while possibly legitimate user behavior, is frequently performed by attackers following compromise of resources. Typical related attacker activity is likely to include the download and execution of common mining tools. | Exfiltration | Low | -| **Access to kubelet kubeconfig file detected**<br>(K8S.NODE_KubeConfigAccess) <sup>[1](#footnote1)</sup> | Analysis of processes running on a Kubernetes cluster node detected access to kubeconfig file on the host. The kubeconfig file, normally used by the Kubelet process, contains credentials to the Kubernetes cluster API server. Access to this file is often associated with attackers attempting to access those credentials, or with security scanning tools which check if the file is accessible. | CredentialAccess | Medium | -| **Access to cloud metadata service detected**<br>(K8S.NODE_ImdsCall) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected access to the cloud metadata service for acquiring identity token. The container doesn't normally perform such operation. 
While this behavior might be legitimate, attackers might use this technique to access cloud resources after gaining initial access to a running container. | CredentialAccess | Medium | -| **MITRE Caldera agent detected**<br>(K8S.NODE_MitreCalderaTools) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious process. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium | +| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | +| | | :-: | - | +| **Exposed Postgres service with trust authentication configuration in Kubernetes detected (Preview)**<br>(K8S_ExposedPostgresTrustAuth) | Kubernetes cluster configuration analysis detected exposure of a Postgres service by a load balancer. The service is configured with the trust authentication method, which doesn't require credentials. | InitialAccess | Medium | +| **Exposed Postgres service with risky configuration in Kubernetes detected (Preview)**<br>(K8S_ExposedPostgresBroadIPRange) | Kubernetes cluster configuration analysis detected exposure of a Postgres service by a load balancer with a risky configuration. Exposing the service to a wide range of IP addresses poses a security risk. | InitialAccess | Medium | +| **Attempt to create a new Linux namespace from a container detected**<br>(K8S.NODE_NamespaceCreation) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container in a Kubernetes cluster detected an attempt to create a new Linux namespace. While this behavior might be legitimate, it might indicate that an attacker is trying to escape from the container to the node. Some CVE-2022-0185 exploitations use this technique. | PrivilegeEscalation | Medium | +| **A history file has been cleared**<br>(K8S.NODE_HistoryFileCleared) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected that the command history log file has been cleared. Attackers might do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium | +| **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiActivity) | Analysis of Azure Resource Manager operations detected abnormal behavior of a managed identity used by an AKS addon. The detected activity isn't consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was obtained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium | +| **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation that isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes.
| Lateral Movement, Credential Access | Medium | +| **An uncommon connection attempt detected**<br>(K8S.NODE_SuspectConnection) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected an uncommon connection attempt using the SOCKS protocol. This is very rare in normal operations, but is a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium | +| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a pod deployment that is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation relate to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace in which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium | +| **Anomalous secret access (Preview)**<br>(K8S_AnomalousSecretAccess) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a secret access request that is anomalous based on previous secret access activity. This activity is considered an anomaly when taking into account how the different features seen in the secret access operation relate to one another. The features monitored by this analysis include the user name used, the name of the secret, the name of the namespace, user agent used in the operation, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | CredentialAccess | Medium | +| **Attempt to stop apt-daily-upgrade.timer service detected**<br>(K8S.NODE_TimerServiceDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected an attempt to stop the apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational | +| **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container or directly on a Kubernetes node has detected the execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium | +| **Command within a container running with high privileges**<br>(K8S.NODE_PrivilegedExecutionInContainer) <sup>[1](#footnote1)</sup> | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low | +| **Container running in privileged mode**<br>(K8S.NODE_PrivilegedContainerArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected the execution of a Docker command that is running a privileged container. The privileged container has full access to the hosting pod or host resource.
If compromised, an attacker might use the privileged container to gain access to the hosting pod or host. | PrivilegeEscalation, Execution | Low | +| **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type, which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount to gain access to the node. | Privilege Escalation | Medium | +| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low | +| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate; however, attackers can use such webhooks to modify requests (in the case of MutatingAdmissionWebhook) or to inspect requests and gain sensitive information (in the case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low | +| **Detected file download from a known malicious source**<br>(K8S.NODE_SuspectDownload) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium | +| **Detected suspicious file download**<br>(K8S.NODE_SuspectDownloadArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious download of a remote file. | Persistence | Low | +| **Detected suspicious use of the nohup command**<br>(K8S.NODE_SuspectNohup) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious use of the nohup command. Attackers have been seen using the nohup command to run hidden files from a temporary directory to allow their executables to run in the background. It's rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium | +| **Detected suspicious use of the useradd command**<br>(K8S.NODE_SuspectUserAddition) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious use of the useradd command. | Persistence | Medium | +| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool.
| Execution | High | +| **Digital currency mining related behavior detected**<br>(K8S.NODE_DigitalCurrencyMining) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected the execution of a process or command normally associated with digital currency mining. | Execution | High | +| **Docker build operation detected on a Kubernetes node**<br>(K8S.NODE_ImageBuildOnNode) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low | +| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) <sup>[3](#footnote3)</sup> | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon for the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low | +| **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of processes running within a container or directly on a Kubernetes node has detected an executable file that is running from a location associated with known suspicious files. This executable could either be legitimate activity or an indication of a compromised system. | Execution | Medium | +| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: <https://aka.ms/exposedkubeflow-blog> | Initial Access | Medium | +| **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) | Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. An exposed dashboard allows unauthenticated access to cluster management and poses a security threat. | Initial Access | High | +| **Exposed Kubernetes service detected**<br>(K8S_ExposedService) | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster, such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium | +| **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk.
| Initial Access | Low | +| **Indicators associated with DDOS toolkit detected**<br>(K8S.NODE_KnownLinuxDDoSToolkit) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium | +| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low | +| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes that contain information about changes in the cluster. Attackers might delete those events to hide their operations in the cluster. | Defense Evasion | Low | +| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of a Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low | +| **Manipulation of host firewall detected**<br>(K8S.NODE_FirewallDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium | +| **Microsoft Defender for Cloud test alert (not a threat).**<br>(K8S.NODE_EICAR) <sup>[1](#footnote1)</sup> | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High | +| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespace shouldn't contain user resources. Attackers can use this namespace to hide malicious components. | Persistence | Low | +| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user/group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low | +| **Possible attack tool detected**<br>(K8S.NODE_KnownLinuxAttackTool) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious tool invocation. This tool is often associated with malicious users attacking others.
| Execution, Collection, Command And Control, Probing | Medium | +| **Possible backdoor detected**<br>(K8S.NODE_LinuxBackdoorArtifact) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium | +| **Possible command line exploitation attempt**<br>(K8S.NODE_ExploitAttempt) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium | +| **Possible credential access tool detected**<br>(K8S.NODE_KnownLinuxCredentialAccessTool) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected that a possible known credential access tool was running on the container, as identified by the specified process and command-line history item. This tool is often associated with attacker attempts to access credentials. | CredentialAccess | Medium | +| **Possible Cryptocoinminer download detected**<br>(K8S.NODE_CryptoCoinMinerDownload) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected the download of a file normally associated with digital currency mining. | DefenseEvasion, Command And Control, Exploitation | Medium | +| **Possible data exfiltration detected**<br>(K8S.NODE_DataEgressArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a possible data egress condition. Attackers will often egress data from machines they have compromised. | Collection, Exfiltration | Medium | +| **Possible Log Tampering Activity Detected**<br>(K8S.NODE_SystemLogRemoval) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a possible removal of files that track user activity. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. | DefenseEvasion | Medium | +| **Possible password change using crypt-method detected**<br>(K8S.NODE_SuspectPasswordChange) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium | +| **Potential port forwarding to external IP address**<br>(K8S.NODE_SuspectPortForwarding) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected an initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium | +| **Potential reverse shell detected**<br>(K8S.NODE_ReverseShell) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium | +| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container.
A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low | +| **Process associated with digital currency mining detected**<br>(K8S.NODE_CryptoCoinMinerArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium | +| **Process seen accessing the SSH authorized keys file in an unusual way**<br>(K8S.NODE_SshKeyAccess) <sup>[1](#footnote1)</sup> | An SSH authorized_keys file was accessed in a manner similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low | +| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role, which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low | +| **Security-related process termination detected**<br>(K8S.NODE_SuspectProcessTermination) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected an attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low | +| **SSH server is running inside a container**<br>(K8S.NODE_ContainerSSH) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected an SSH server running inside the container. | Execution | Medium | +| **Suspicious file timestamp modification**<br>(K8S.NODE_TimestampTampering) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low | +| **Suspicious request to Kubernetes API**<br>(K8S.NODE_KubernetesAPI) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes API. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium | +| **Suspicious request to the Kubernetes Dashboard**<br>(K8S.NODE_KubernetesDashboard) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium | +| **Potential crypto coin miner started**<br>(K8S.NODE_CryptoCoinMinerExecution) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a process being started in a way normally associated with digital currency mining.
| Execution | Medium | +| **Suspicious password access**<br>(K8S.NODE_SuspectPasswordFileAccess) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious attempt to access encrypted user passwords. | Persistence | Informational | +| **Suspicious use of DNS over HTTPS**<br>(K8S.NODE_SuspiciousDNSOverHttps) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium | +| **A possible connection to malicious location has been detected.**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise might have occurred. | InitialAccess | Medium | +| **Possible malicious web shell detected.**<br>(K8S.NODE_Webshell) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected a possible web shell. Attackers will often upload a web shell to a compute resource they have compromised to gain persistence or for further exploitation. | Persistence, Exploitation | Medium | +| **Burst of multiple reconnaissance commands could indicate initial activity after compromise**<br>(K8S.NODE_ReconnaissanceArtifactsBurst) <sup>[1](#footnote1)</sup> | Analysis of host/device data detected the execution of multiple reconnaissance commands related to gathering system or host details, performed by attackers after initial compromise. | Discovery, Collection | Low | +| **Suspicious Download Then Run Activity**<br>(K8S.NODE_DownloadAndRunCombo) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a file being downloaded and then run in the same command. While this isn't always malicious, it is a very common technique attackers use to get malicious files onto victim machines. | Execution, CommandAndControl, Exploitation | Medium | +| **Digital currency mining activity**<br>(K8S.NODE_CurrencyMining) <sup>[1](#footnote1)</sup> | Analysis of DNS transactions detected digital currency mining activity. Such activity, while possibly legitimate user behavior, is frequently performed by attackers following compromise of resources. Typical related attacker activity is likely to include the download and execution of common mining tools. | Exfiltration | Low | +| **Access to kubelet kubeconfig file detected**<br>(K8S.NODE_KubeConfigAccess) <sup>[1](#footnote1)</sup> | Analysis of processes running on a Kubernetes cluster node detected access to the kubeconfig file on the host. The kubeconfig file, normally used by the Kubelet process, contains credentials to the Kubernetes cluster API server. Access to this file is often associated with attackers attempting to access those credentials, or with security scanning tools that check if the file is accessible. | CredentialAccess | Medium | +| **Access to cloud metadata service detected**<br>(K8S.NODE_ImdsCall) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected access to the cloud metadata service to acquire an identity token. The container doesn't normally perform such an operation.
While this behavior might be legitimate, attackers might use this technique to access cloud resources after gaining initial access to a running container. | CredentialAccess | Medium | +| **MITRE Caldera agent detected**<br>(K8S.NODE_MitreCalderaTools) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious process. This is often associated with the MITRE 54ndc47 agent, which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium | <sup><a name="footnote1"></a>1</sup>: **Preview for non-AKS clusters**: This alert is generally available for AKS clusters, but it is in preview for other environments, such as Azure Arc, EKS, and GKE. VM_VbScriptHttpObjectAllocation| VBScript HTTP object allocation detected | High - [Security alerts in Microsoft Defender for Cloud](alerts-overview.md) - [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md) - [Continuously export Defender for Cloud data](continuous-export.md) |
defender-for-cloud | Data Aware Security Dashboard Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-aware-security-dashboard-overview.md | description: Learn about the capabilities and functions of the data-aware security dashboard Previously updated : 10/17/2023 Last updated : 11/06/2023 # Data security dashboard You can select any element on the page to get more detailed information. ||| |Release state: | Public Preview | | Prerequisites: | Defender for CSPM fully enabled, including sensitive data discovery <br/> Workload protection for database and storage to explore active risks |-| Required roles and permissions: | No other roles needed on top of what is required for the security explorer. | +| Required roles and permissions: | No other roles needed aside from what is required for the security explorer. <br><br> To access the dashboard with more than 1000 subscriptions, you must have tenant-level permissions, which include one of the following roles: **Global Reader**, **Global Administrator**, **Security Administrator**, or **Security Reader**. | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds <br/> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government <br/> :::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet | ## Prerequisites -In order to view the dashboard, you must enable Defender CSPM and also enable the sensitive data discovery extensions button underneath. In addition, to receive the alerts for data sensitivity, you must also enable the Defender for Storage plan. +In order to view the dashboard, you must enable Defender CSPM and also enable the sensitive data discovery extension button underneath. In addition, to receive the alerts for data sensitivity, you must also enable the Defender for Storage plan for storage-related alerts, or the Defender for Databases plan for database-related alerts. :::image type="content" source="media/data-aware-security-dashboard/select-sensitive-data-discovery.png" alt-text="Screenshot that shows where to turn on the sensitive data discovery extension." lightbox="media/data-aware-security-dashboard/select-sensitive-data-discovery.png"::: The feature is turned on at the subscription level. ## Required permissions and roles -- To view the dashboard you must have either one of the following scenarios:+- To view the dashboard, you must have one of the following scenarios: - **all of the following permissions**: You can select the **Manage data sensitivity settings** to get to the **Data sen ### Data resources security status -**Sensitive resources status over time** - displays how data security evolves over time with a graph that shows the number of sensitive resources affected by alerts, attack paths, and recommendations within a defined period (last 30, 14, or 7 days). +**Sensitive resources status over time** - displays how data security evolves over time with a graph that shows the number of sensitive resources affected by alerts, attack paths, and recommendations within a defined period (last 30, 14, or 7 days). :::image type="content" source="media/data-aware-security-dashboard/data-resources-security-status.png" alt-text="Screenshot that shows the data resources security status section of the data security view." lightbox="media/data-aware-security-dashboard/data-resources-security-status.png"::: |
defender-for-cloud | Defender For Apis Posture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-posture.md | This article describes how to investigate API security findings, alerts, and sec :::image type="content" source="media/defender-for-apis-posture/resource-health.png" alt-text="Screenshot that shows the health of an endpoint." lightbox="media/defender-for-apis-posture/resource-health.png"::: +## Remediate recommendations using Workflow Automation +You can remediate recommendations generated by Defender for APIs using workflow automation: +1. In an eligible recommendation, select one or more unhealthy resources. +2. Select **Trigger logic app**. +3. Confirm the **Selected subscription**. +4. Select a relevant logic app from the list. +5. Select **Trigger**. ++You can browse the [Microsoft Defender for Cloud GitHub](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workflow%20automation/Defender%20for%20API) repository for available workflow automations. + ## Create sample security alerts In Defender for Cloud you can use sample alerts to evaluate your Defender for Cloud plans, and validate your security configuration. [Follow these instructions](alert-validation.md#generate-sample-security-alerts) to set up sample alerts, and select the relevant APIs within your subscriptions. |
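The **Trigger logic app** steps in the entry above run a logic app on demand. If you'd rather codify the automation so it runs whenever a matching recommendation is generated, workflow automation can also be declared as an Azure Resource Manager resource. The following is a minimal, hypothetical sketch only, not the exact template used by Defender for APIs; the subscription ID, resource group, logic app name, location, and trigger endpoint are placeholders to replace with your own values, and the API version may differ in your environment:

```json
{
  "type": "Microsoft.Security/automations",
  "apiVersion": "2019-01-01-preview",
  "name": "remediate-api-recommendations",
  "location": "<location>",
  "properties": {
    "isEnabled": true,
    "scopes": [
      {
        "description": "Scope the automation to one subscription",
        "scopePath": "/subscriptions/<subscription-id>"
      }
    ],
    "sources": [
      {
        "eventSource": "Assessments"
      }
    ],
    "actions": [
      {
        "actionType": "LogicApp",
        "logicAppResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/<logic-app-name>",
        "uri": "<logic-app-trigger-endpoint>"
      }
    ]
  }
}
```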
defender-for-cloud | Defender For Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md | With a simple agentless setup at scale, you can [enable Defender for Storage](tu |-|:-| |Release state:|General Availability (GA)| |Feature availability:|- Activity monitoring (security alerts) - General Availability (GA)<br>- Malware Scanning - General Availability (GA)<br>- Sensitive data threat detection (Sensitive Data Discovery) - Preview|-|Pricing:|- Defender for Storage: $10/storage accounts/month\*<br>- Malware Scanning (add-on): $0.15/GB (USD) of data ingested\*\* <br><br>Above pricing applies to commercial clouds. Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) to learn more.<br><br>\* Storage accounts that exceed 73 million monthly transactions will be charged $0.1492 for every 1 million transactions that exceed the threshold.<br>\*\* Billing begins on September 3, 2023. To limit expenses, use the `Monthly capping` feature to set a cap on the amount of GB scanned per month, per storage account to help you control your costs. | -| Supported storage types:|[Blob Storage](https://azure.microsoft.com/products/storage/blobs/) (Standard/Premium StorageV2, including Data Lake Gen2): Activity monitoring, Malware Scanning, Sensitive Data Discovery<br>Azure Files (over REST API and SMB): Activity monitoring | +|Pricing:|**Microsoft Defender for Storage** pricing applies to commercial clouds. Learn more about [pricing and availability per region](https://azure.microsoft.com/pricing/details/defender-for-cloud/).<br>| +| Supported storage types:|[Blob Storage](https://azure.microsoft.com/products/storage/blobs/) (Standard/Premium StorageV2, including Data Lake Gen2): Activity monitoring, Malware Scanning, Sensitive Data Discovery<br>Azure Files (over REST API and SMB): Activity monitoring | |Required roles and permissions:|For Malware Scanning and sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions. To enable Activity Monitoring, you need 'Security Admin' permissions. Read more about the required permissions.| |Clouds:|:::image type="icon" source="../defender-for-cloud/media/icons/yes-icon.png"::: Commercial clouds\*<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Azure Government (only activity monitoring support on the [classic plan](/azure/defender-for-cloud/defender-for-storage-classic))<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Connected AWS accounts| In this article, you learned about Microsoft Defender for Storage. |
defender-for-cloud | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md | Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 |
defender-for-cloud | Quickstart Onboard Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md | AWS Systems Manager manages auto-provisioning by using the SSM Agent. Some Amazo - [Install SSM Agent for a hybrid and multicloud environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html) - [Install SSM Agent for a hybrid and multicloud environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html) -Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html), which enables core functionality for the AWS Systems Manager service. +Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html), which enables core functionality for the AWS Systems Manager service. ++**You must have the SSM Agent to auto-provision the Arc agent on EC2 machines. If the SSM Agent doesn't exist, or is removed from the EC2 instance, the Arc provisioning won't be able to proceed.** ++> [!NOTE] +> As part of the CloudFormation template that is run during the onboarding process, an automation process is created and triggered every 30 days across all the EC2 instances that existed during the initial run of the CloudFormation template. The goal of this scheduled scan is to ensure that all the relevant EC2 instances have an IAM profile with the required IAM policy that allows Defender for Cloud to access, manage, and provide the relevant security features (including the Arc agent provisioning). The scan doesn't apply to EC2 instances created after the CloudFormation template was run. If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed. Connecting your AWS account is part of the multicloud experience available in Mi - Set up your [on-premises machines](quickstart-onboard-machines.md) and [GCP projects](quickstart-onboard-gcp.md). - Get answers to [common questions](faq-general.yml) about onboarding your AWS account. - [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector). |
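For context on the IAM profile mentioned in the note above: an EC2 instance profile wraps a standard IAM role that EC2 can assume, to which the AmazonSSMManagedInstanceCore managed policy is attached. A minimal sketch of such a role's trust policy follows; this is the generic EC2 trust relationship, not necessarily the exact policy document created by the onboarding template:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```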
defender-for-cloud | Upcoming Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md | Today, there are four Service Level 2 names: Azure Defender, Advanced Threat Pro The change will simplify the process of reviewing Defender for Cloud charges and provide better clarity in cost analysis. -To ensure a smooth transition, we've taken measures to maintain the consistency of the Product/Service name, SKU, and Meter IDs. Impacted customers will receive an informational Azure Service Notification to communicate the changes. No action is necessary from customers. +To ensure a smooth transition, we've taken measures to maintain the consistency of the Product/Service name, SKU, and Meter IDs. Impacted customers will receive an informational Azure Service Notification to communicate the changes. ++Organizations that retrieve cost data by calling our APIs will need to update the values in their calls to accommodate the change. For example, after the change, the values in this filter will no longer return any information: ++```json
+"filter": {
+    "dimensions": {
+        "name": "MeterCategory",
+        "operator": "In",
+        "values": [
+            "Advanced Threat Protection",
+            "Advanced Data Security",
+            "Azure Defender",
+            "Security Center"
+        ]
+    }
+  }
+``` The change is planned to go into effect on December 1, 2023. |
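As a sketch of the kind of update API callers will need to make: the entry above doesn't state the consolidated name, but assuming the four Service Level 2 names collapse into the single value **Microsoft Defender for Cloud**, the filter would reduce to one entry:

```json
"filter": {
  "dimensions": {
    "name": "MeterCategory",
    "operator": "In",
    "values": [
      "Microsoft Defender for Cloud"
    ]
  }
}
```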
defender-for-cloud | Update Regulatory Compliance Packages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md | If a subscription, account, or project has *any* Defender plan enabled, more sta | -| | | | PCI-DSS v3.2.1 **(deprecated)** | CIS AWS Foundations v1.2.0 | CIS GCP Foundations v1.1.0 | | PCI DSS v4 | CIS AWS Foundations v1.5.0 | CIS GCP Foundations v1.2.0 |-| SOC TSP | PCI DSS v3.2.1 | PCI DSS v3.2.1 | +| SOC TSP **(deprecated)** | PCI DSS v3.2.1 | PCI DSS v3.2.1 | | SOC 2 Type 2 | AWS Foundational Security Best Practices | NIST 800-53 | | ISO 27001:2013 | | ISO 27001 | | CIS Azure Foundations v1.1.0 ||| If a subscription, account, or project has *any* Defender plan enabled, more sta | FedRAMP M ||| | HIPAA/HITRUST ||| | SWIFT CSP CSCF v2020 |||+| SWIFT CSP CSCF v2022 ||| | UK OFFICIAL and UK NHS ||| | Canada Federal PBMM ||| | New Zealand ISM Restricted ||| |
defender-for-iot | Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alerts.md | Alert options also differ depending on your location and user role. For more inf ### Enterprise IoT alerts and Microsoft Defender for Endpoint -Alerts triggered by Enterprise IoT sensors are shown in the Azure portal only. +If you're using [Enterprise IoT security](eiot-defender-for-endpoint.md) in Microsoft 365 Defender, alerts for Enterprise IoT devices detected by Microsoft Defender for Endpoint are available in Microsoft 365 Defender only. -If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) with Microsoft Defender for Endpoint, alerts for Enterprise IoT devices detected by Microsoft Defender for Endpoint are available in Microsoft 365 Defender only. +Alerts triggered by [Enterprise IoT sensors](eiot-sensor.md) are shown in the Azure portal only. For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md) and the [Alerts queue in Microsoft 365 Defender](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response). ## Managing OT alerts in a hybrid environment -Users working in hybrid environments may be managing OT alerts in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, the OT sensor, and an on-premises management console. +Users working in hybrid environments might be managing OT alerts in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, the OT sensor, and an on-premises management console. Alert statuses are fully synchronized between the Azure portal and the OT sensor, and between the sensor and the on-premises management console. This means that regardless of where you manage the alert in Defender for IoT, the alert is updated in other locations as well. Setting an alert status to **Closed** or **Muted** on a sensor or on-premises ma New alerts are automatically closed if no identical traffic is detected 90 days after the initial detection. If identical traffic is detected within those first 90 days, the 90-day count is reset. -In addition to the default behavior, you may want to help your SOC and OT management teams triage and remediate alerts faster. Sign into an OT sensor or an on-premises management console as an **Admin** user to use the following options: +In addition to the default behavior, you might want to help your SOC and OT management teams triage and remediate alerts faster. Sign in to an OT sensor or an on-premises management console as an **Admin** user to use the following options: - **Create custom alert rules**. OT sensors only. Use the following table to learn more about each alert status and triage option. |**Active** | - Azure portal only | Set an alert to *Active* to indicate that an investigation is underway, but that the alert can't yet be closed or otherwise triaged. <br><br>This status has no effect elsewhere in Defender for IoT. | |**Closed** | - Azure portal <br><br>- OT network sensors <br><br>- On-premises management console | Close an alert to indicate that it's fully investigated, and you want to be alerted again the next time the same traffic is detected.<br><br>Closing an alert adds it to the sensor event timeline.<br><br>On the on-premises management console, *New* alerts are called *Acknowledged*.
| |**Learn** | - Azure portal <br><br>- OT network sensors <br><br>- On-premises management console <br><br>*Unlearning* an alert is available only on the OT sensor. | Learn an alert when you want to close it and add it as allowed traffic, so that you aren't alerted again the next time the same traffic is detected. <br><br>For example, when the sensor detects firmware version changes following standard maintenance procedures, or when a new, expected device is added to the network. <br><br>Learning an alert closes the alert and adds an item to the sensor event timeline. Detected traffic is included in data mining reports, but not when calculating other OT sensor reports. <br><br>Learning alerts is available for selected alerts only, mostly those triggered by *Policy* and *Anomaly* engine alerts. |-|**Mute** | - OT network sensors <br><br>- On-premises management console <br><br>*Unmuting* an alert is available only on the OT sensor. | Mute an alert when you want to close it and not see again for the same traffic, but without adding the alert allowed traffic. <br><br>For example, when the Operational engine triggers an alert indicating that the PLC Mode was changed on a device. The new mode may indicate that the PLC isn't secure, but after investigation, it's determined that the new mode is acceptable. <br><br>Muting an alert closes it, but doesn't add an item to the sensor event timeline. Detected traffic is included in data mining reports, but not when calculating data for other sensor reports. <br><br>Muting an alert is available for selected alerts only, mostly those triggered by the *Anomaly*, *Protocol Violation*, or *Operational* engines. | +|**Mute** | - OT network sensors <br><br>- On-premises management console <br><br>*Unmuting* an alert is available only on the OT sensor. | Mute an alert when you want to close it and not see it again for the same traffic, but without adding the alert to allowed traffic. <br><br>For example, when the Operational engine triggers an alert indicating that the PLC Mode was changed on a device. The new mode might indicate that the PLC isn't secure, but after investigation, it's determined that the new mode is acceptable. <br><br>Muting an alert closes it, but doesn't add an item to the sensor event timeline. Detected traffic is included in data mining reports, but not when calculating data for other sensor reports. <br><br>Muting an alert is available for selected alerts only, mostly those triggered by the *Anomaly*, *Protocol Violation*, or *Operational* engines. |
defender-for-iot | Billing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/billing.md | Title: Subscription billing -description: Learn how you're billed for the Microsoft Defender for IoT service on your Azure subscription. + Title: Microsoft Defender for IoT billing +description: Learn how you're billed for the Microsoft Defender for IoT service. Previously updated : 05/17/2023 Last updated : 09/13/2023 +#CustomerIntent: As a Defender for IoT customer, I want to understand how I'm billed for Defender for IoT services so that I can best plan my deployment. -# Defender for IoT subscription billing +# Defender for IoT billing As you plan your Microsoft Defender for IoT deployment, you typically want to understand the Defender for IoT pricing plans and billing models so you can optimize your costs. -OT monitoring is billed using site-based licenses, where each license applies to an individual site, based on the site size. A site is a physical location, such as a facility, campus, office building, hospital, rig, and so on. Each site can contain any number of network sensors, all which monitor devices detected in connected networks. +**OT monitoring** is billed using site-based licenses, where each license applies to an individual site, based on the site size. A site is a physical location, such as a facility, campus, office building, hospital, rig, and so on. Each site can contain any number of network sensors, all of which monitor devices detected in connected networks. -Enterprise IoT monitoring is billed based on the number of devices covered by your plan. +**Enterprise IoT** monitoring supports 5 devices per Microsoft 365 E5 (ME5) or E5 Security license, or is available through standalone, per-device licenses for Microsoft Defender for Endpoint P2 customers. ## Free trial -If you would like to evaluate Defender for IoT, you can use a trial license: +To evaluate Defender for IoT, start a free trial as follows: -- **For OT networks**, use a trial to deploy one or more Defender for IoT sensors on your network to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more. An OT trial supports a **Large** site license for 60 days. For more information, see [Start a Microsoft Defender for IoT trial](getting-started.md).+- **For OT networks**, use a trial license for 60 days. Deploy one or more Defender for IoT sensors on your network to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more. An OT trial supports a **Large** site license for 60 days. For more information, see [Start a Microsoft Defender for IoT trial](getting-started.md). -- **For Enterprise IoT networks**, use a 30-day trial to view alerts, recommendations, and vulnerabilities in Microsoft 365. An Enterprise IoT trial is not limited to a specific number of devices. For more information, see [Enable Enterprise IoT security with Defender for Endpoint](eiot-defender-for-endpoint.md).+- **For Enterprise IoT networks**, use a trial, standalone license for 90 days as an add-on to Microsoft Defender for Endpoint. Trial licenses support 100 devices. For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md) and [Enable Enterprise IoT security with Defender for Endpoint](eiot-defender-for-endpoint.md). 
## Defender for IoT devices -When purchasing a Defender for IoT license for an OT plan, or when onboarding or editing a monthly Enterprise IoT plan, we recommend that you have a sense of how many devices you'll want to cover. +We recommend that you have a sense of how many devices you want to monitor so that you know how many OT sites you need to license, or if you need any standalone licenses for enterprise IoT security. - **OT monitoring**: Purchase a license for each site that you're planning to monitor. License fees differ based on the site size, each of which covers a different number of devices. -- **Enterprise IoT monitoring**: Purchase a price plan based on the number of devices you want to monitor.+- **Enterprise IoT monitoring**: Five devices are supported for each ME5/E5 Security user license. If you have more devices to monitor, and are a Defender for Endpoint P2 customer, purchase extra, standalone licenses for each device you want to monitor. [!INCLUDE [devices-inventoried](includes/devices-inventoried.md)] |
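To make the licensing arithmetic concrete, here's a minimal sketch (Python) of the calculation implied above, assuming the five-devices-per-ME5/E5 Security user license ratio described in this article; the input numbers are examples only.

```python
# Sketch: estimate standalone Enterprise IoT licenses needed, assuming
# 5 monitored devices are covered per ME5/E5 Security user license
# (as described above). All inputs are examples.
DEVICES_PER_ME5_LICENSE = 5

def standalone_licenses_needed(devices_to_monitor: int, me5_user_licenses: int) -> int:
    covered = me5_user_licenses * DEVICES_PER_ME5_LICENSE
    return max(0, devices_to_monitor - covered)

# Example: 1,200 IoT devices and 200 ME5 user licenses -> 1,000 devices
# covered, so 200 standalone per-device licenses would still be needed.
print(standalone_licenses_needed(1200, 200))  # 200
```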
defender-for-iot | Concept Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-enterprise.md | Title: Securing IoT devices in the enterprise with Microsoft Defender for Endpoint -description: Learn how integrating Microsoft Defender for Endpoint and Microsoft Defender for IoT's security content and network sensors enhances your IoT network security. + Title: Securing IoT devices | Microsoft Defender for IoT +description: Learn how integrating Microsoft Defender for Endpoint and Microsoft Defender for IoT's security content enhances your IoT network security. Previously updated : 05/31/2023 Last updated : 09/13/2023 +#CustomerIntent: As a Defender for IoT customer, I want to understand how I can secure my enterprise IoT devices with Microsoft Defender for IoT so that I can protect my organization from IoT threats. # Securing IoT devices in the enterprise -The number of IoT devices continues to grow exponentially across enterprise networks, such as the printers, Voice over Internet Protocol (VoIP) devices, smart TVs, and conferencing systems scattered around many office buildings. +The number of IoT devices continues to grow exponentially across enterprise networks, such as the printers, Voice over Internet Protocol (VoIP) devices, smart TVs, and conferencing systems scattered around many office buildings. While the number of IoT devices continues to grow, they often lack the security safeguards that are common on managed endpoints like laptops and mobile phones. Bad actors can use these unmanaged devices as a point of entry for lateral movement or evasion, and too often, such tactics lead to the exfiltration of sensitive information. -[Microsoft Defender for IoT](./index.yml) seamlessly integrates with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to provide both IoT device discovery and security value for IoT devices, including purpose-built alerts, recommendations, and vulnerability data. +[Microsoft Defender for IoT](./index.yml) seamlessly integrates with [Microsoft 365 Defender](/microsoft-365/security/defender) and [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to provide both IoT device discovery and security value for IoT devices, including purpose-built alerts, recommendations, and vulnerability data. -## IoT security across Microsoft 365 Defender and Azure +## Enterprise IoT security in Microsoft 365 Defender -Defender for IoT provides IoT security functionality across both the Microsoft 365 Defender and [Azure portals](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started). +Enterprise IoT security in Microsoft 365 Defender provides IoT-specific security value, including alerts, risk and exposure levels, vulnerabilities, and recommendations. -[Add an Enterprise IoT plan](eiot-defender-for-endpoint.md) in the **Settings** \> **Device discovery** \> **Enterprise IoT** page in Microsoft 365 Defender to view IoT-specific alerts, recommendations, and vulnerability data in Microsoft 365 Defender. The extra security value is provided for IoT devices detected by Defender for Endpoint. +- If you're a Microsoft 365 E5 (ME5)/E5 Security and Defender for Endpoint P2 customer, [toggle on support](eiot-defender-for-endpoint.md) for **Enterprise IoT Security** in the Microsoft 365 Defender portal. 
-Integrating your Enterprise IoT plan with Microsoft 365 Defender requires the following: +- If you don't have ME5/E5 Security licenses, but you're a Microsoft Defender for Endpoint customer, start with a [free trial](billing.md#free-trial) or purchase standalone, per-device licenses to gain the same IoT-specific security value. -- A Microsoft Defender for Endpoint P2 license-- Microsoft 365 Defender access as a [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator)-- Azure access as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner)--## Security value in Microsoft 365 Defender --Defender for IoT's Enterprise IoT plan adds purpose-built alerts, recommendations, and vulnerability data for the IoT devices discovered by Defender for Endpoint agents. The added security value is available in Microsoft 365 Defender, which is Microsoft's central portal for combined enterprise IT and IoT device security. --For example, use the added security recommendations in Microsoft 365 Defender to open a single IT ticket to patch vulnerable applications on both servers and printers. Or, use a recommendation to request that the network team adds firewall rules that apply for both workstations and cameras communicating with a suspicious IP address. --The following image shows the architecture and extra features added with an Enterprise IoT plan in Microsoft 365 Defender: +The following image shows the architecture and extra features added with **Enterprise IoT security** in Microsoft 365 Defender: :::image type="content" source="media/enterprise-iot/architecture-endpoint-only.png" alt-text="Diagram of the service architecture when you have an Enterprise IoT plan added to Defender for Endpoint." border="false"::: -> [!NOTE] -> Defender for Endpoint doesn't issue IoT-specific alerts, recommendations, and vulnerability data without an Enterprise IoT plan in Microsoft 365 Defender. Use our [quickstart](eiot-defender-for-endpoint.md) to start seeing this extra security value across your network. -> For more information, see: -- [Enable Enterprise IoT security in Defender for Endpoint](eiot-defender-for-endpoint.md)+- [Get started with enterprise IoT monitoring in Microsoft 365 Defender](eiot-defender-for-endpoint.md) +- [Defender for IoT subscription billing](billing.md) +- [Device discovery overview](/microsoft-365/security/defender-endpoint/device-discovery) - [Alerts queue in Microsoft 365 Defender](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response) - [Security recommendations](/microsoft-365/security/defender-vulnerability-management/tvm-security-recommendation) - [Vulnerabilities in my organization](/microsoft-365/security/defender-vulnerability-management/tvm-weaknesses) - [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md) - [Proactively hunt with advanced hunting in Microsoft 365 Defender](/microsoft-365/security/defender/advanced-hunting-overview) +## Frequently asked questions ++This section provides a list of frequently asked questions about securing Enterprise IoT networks with Microsoft Defender for IoT. ++### What is the difference between OT and Enterprise IoT? 
++- **Operational Technology (OT)**: OT network sensors use agentless, patented technology to discover, learn, and continuously monitor network devices for a deep visibility into Operational Technology (OT) / Industrial Control System (ICS) risks. Sensors carry out data collection, analysis, and alerting on-site, making them ideal for locations with low bandwidth or high latency. ++- **Enterprise IoT**: Enterprise IoT provides visibility and security for IoT devices in the corporate environment. ++ Enterprise IoT network protection extends agentless features beyond operational environments, providing coverage for all IoT devices in your environment. For example, an enterprise IoT environment might include printers, cameras, and purpose-built, proprietary devices. ++### What extra security value can Enterprise IoT provide Microsoft Defender for Endpoint customers? ++Enterprise IoT is designed to help customers secure unmanaged devices throughout the organization and extend IT security to also cover IoT devices. ++While Defender for Endpoint P2 customers already have visibility for discovered IoT devices in the **Device inventory** page in Defender for Endpoint, they can use enterprise IoT security to gain security value with extra alerts, recommendations, and vulnerabilities for their discovered IoT devices. ++### How can I start using Enterprise IoT? ++Microsoft 365 E5 (ME5) and E5 Security customers already have devices supported for enterprise IoT security. If you only have a Defender for Endpoint P2 license, you can purchase standalone, per-device licenses for enterprise IoT monitoring, or use a trial. ++For more information, see: ++- [Get started with enterprise IoT monitoring in Microsoft 365 Defender](eiot-defender-for-endpoint.md) +- [Manage enterprise IoT monitoring support with Microsoft Defender for IoT](manage-subscriptions-enterprise.md) ++### What permissions do I need to use Enterprise IoT security with Defender for IoT? ++For information on required permissions, see [Prerequisites](eiot-defender-for-endpoint.md#prerequisites). ++### Which devices are billable? ++For more information, see [Devices monitored by Defender for IoT](architecture.md#devices-monitored-by-defender-for-iot). ++### How should I estimate the number of devices I want to monitor? ++For more information, see [Calculate monitored devices for Enterprise IoT monitoring](manage-subscriptions-enterprise.md#calculate-monitored-devices-for-enterprise-iot-monitoring). ++### How can I cancel Enterprise IoT? ++For more information, see [Turn off enterprise IoT security](manage-subscriptions-enterprise.md#turn-off-enterprise-iot-security). ++### What happens when the trial ends? ++If you haven't added a standalone license by the time your trial ends, your trial is automatically canceled, and you lose access to Enterprise IoT security features. ++For more information, see [Defender for IoT subscription billing](billing.md). ++### How can I resolve billing issues associated with my Defender for IoT plan? ++For any billing or technical issues, open a support ticket for Microsoft 365 Defender. + ## Next steps Start securing your Enterprise IoT network resources by [onboarding to Defender for IoT from Microsoft 365 Defender](eiot-defender-for-endpoint.md). |
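The links above include advanced hunting in Microsoft 365 Defender. As a hedged sketch of scripting that check, the snippet below (Python) runs a query through the Microsoft Graph `security/runHuntingQuery` endpoint; the `DeviceCategory == "IoT"` filter and the token acquisition step are assumptions to verify against your tenant's `DeviceInfo` schema.

```python
# Sketch: list discovered IoT devices via Microsoft Graph advanced hunting.
# Assumes an access token with the ThreatHunting.Read.All permission already
# exists; verify the DeviceCategory == "IoT" filter against the DeviceInfo
# schema in your own tenant.
import requests

GRAPH_HUNT_URL = "https://graph.microsoft.com/v1.0/security/runHuntingQuery"
KQL_IOT_DEVICES = """
DeviceInfo
| summarize arg_max(Timestamp, *) by DeviceId
| where DeviceCategory == "IoT"
| project DeviceId, DeviceName, OSPlatform
"""

def list_iot_devices(access_token: str) -> list[dict]:
    response = requests.post(
        GRAPH_HUNT_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        json={"Query": KQL_IOT_DEVICES},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("results", [])
```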
defender-for-iot | Device Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/device-inventory.md | A *transient* device type indicates a device that was detected for only a short ## Device management options -The Defender for IoT device inventory is available in the Azure portal, OT network sensor consoles, and the on-premises management console. --While you can view device details from any of these locations, each location also offers extra device inventory support. The following table describes the device inventory support for each location and the extra actions available from that location only: +Defender for IoT device inventory is available in the following locations: |Location |Description | Extra inventory support | ||||-|**Azure portal** | Devices detected from all cloud-connected OT sensors and Enterprise IoT sensors. <br><br> | - If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) on your Azure subscription, the device inventory also includes devices detected by Microsoft Defender for Endpoint agents. <br><br>- If you also use [Microsoft Sentinel](iot-solution.md), incidents in Microsoft Sentinel are linked to related devices in Defender for IoT. <br><br>- Use Defender for IoT [workbooks](workbooks.md) for visibility into all cloud-connected device inventory, including related alerts and vulnerabilities. | +|**Azure portal** | OT devices detected from all cloud-connected OT sensors. | - If you also use [Microsoft Sentinel](iot-solution.md), incidents in Microsoft Sentinel are linked to related devices in Defender for IoT. <br><br>- Use Defender for IoT [workbooks](workbooks.md) for visibility into all cloud-connected device inventory, including related alerts and vulnerabilities. <br><br>- If you have a [legacy Enterprise IoT plan](whats-new.md#enterprise-iot-protection-now-included-in-microsoft-365-e5-and-e5-security-licenses) on your Azure subscription, the Azure portal also includes devices detected by Microsoft Defender for Endpoint agents. If you have an [Enterprise IoT sensor](eiot-sensor.md), the Azure portal also includes devices detected by the Enterprise IoT sensor. | +| **Microsoft 365 Defender** | Enterprise IoT devices detected by Microsoft Defender for Endpoint agents | Correlate devices across Microsoft 365 Defender in purpose-built alerts, vulnerabilities, and recommendations. | |**OT network sensor consoles** | Devices detected by that OT sensor | - View all detected devices across a network device map<br><br>- View related events on the **Event timeline** | |**An on-premises management console** | Devices detected across all connected OT sensors | Enhance device data by importing data manually or via script |+| For more information, see: - [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md)+- [Defender for Endpoint device discovery](/microsoft-365/security/defender-endpoint/device-discovery) - [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md) - [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md) -> [!NOTE] -> If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) to [integrate with Microsoft Defender for Endpoint](concept-enterprise.md), devices detected by an Enterprise IoT sensor are also listed in Defender for Endpoint. 
For more information, see: -> -> - [Defender for Endpoint device inventory](/microsoft-365/security/defender-endpoint/machines-view-overview) -> - [Defender for Endpoint device discovery](/microsoft-365/security/defender-endpoint/device-discovery) -> - ## Automatically consolidated devices When you've deployed Defender for IoT at scale, with several OT sensors, each sensor might detect different aspects of the same device. To prevent duplicated devices in your device inventory, Defender for IoT assumes that any devices found in the same zone with a logical combination of similar characteristics are the same device. Defender for IoT automatically consolidates these devices and lists them only once in the device inventory. -For example, any devices with the same IP and MAC address detected in the same zone are consolidated and identified as a single device in the device inventory. If you have separate devices from recurring IP addresses that are detected by multiple sensors, you'll want each of these devices to be identified separately. In such cases, [onboard your OT sensors](onboard-sensors.md) to different zones so that each device is identified as a separate and unique device, even if they have the same IP address. Devices that have the same MAC addresses, but different IP addresses are not merged, and continue to be listed as unique devices. +For example, any devices with the same IP and MAC address detected in the same zone are consolidated and identified as a single device in the device inventory. If you have separate devices from recurring IP addresses that are detected by multiple sensors, you want each of these devices to be identified separately. In such cases, [onboard your OT sensors](onboard-sensors.md) to different zones so that each device is identified as a separate and unique device, even if they have the same IP address. Devices that have the same MAC addresses, but different IP addresses aren't merged, and continue to be listed as unique devices. A *transient* device type indicates a device that was detected for only a short time. We recommend investigating these devices carefully to understand their impact on your network. The following table lists the columns available in the Defender for IoT device i |Name |Description |||-|**Authorization** * |Editable. Determines whether or not the device is marked as *authorized*. This value may need to change as the device security changes. | +|**Authorization** * |Editable. Determines whether or not the device is marked as *authorized*. This value might need to change as the device security changes. | |**Business Function** | Editable. Describes the device's business function. | | **Class** | Editable. The device's class. <br>Default: `IoT` | |**Data source** | The source of the data, such as a micro agent, OT sensor, or Microsoft Defender for Endpoint. <br>Default: `MicroAgent`| |
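The consolidation rule described above amounts to a zone-scoped identity key of (zone, IP, MAC). Here's a minimal illustration (Python) of that matching logic as described in this section; it's a sketch of the rule, not Defender for IoT's actual implementation.

```python
# Illustrative model of the consolidation rule described above: detections
# in the same zone with the same IP and MAC are treated as one device, while
# the same MAC with a different IP (or a different zone) stays separate.
from collections import defaultdict

def consolidate(detections: list[dict]) -> dict:
    """Group raw sensor detections into unique inventory devices."""
    inventory = defaultdict(list)
    for d in detections:
        key = (d["zone"], d["ip"], d["mac"])  # zone-scoped identity key
        inventory[key].append(d["sensor"])
    return inventory

detections = [
    {"zone": "Zone A", "ip": "10.0.0.5", "mac": "AA:BB", "sensor": "sensor1"},
    {"zone": "Zone A", "ip": "10.0.0.5", "mac": "AA:BB", "sensor": "sensor2"},  # merged
    {"zone": "Zone B", "ip": "10.0.0.5", "mac": "AA:BB", "sensor": "sensor3"},  # separate zone
]
print(len(consolidate(detections)))  # 2 unique devices
```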
defender-for-iot | Eiot Defender For Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-defender-for-endpoint.md | Title: Enable Enterprise IoT security in Microsoft 365 with Defender for Endpoint - Microsoft Defender for IoT -description: Learn how to start integrating between Microsoft Defender for IoT and Microsoft Defender for Endpoint in Microsoft 365 Defender. + Title: Get started with enterprise IoT monitoring in Microsoft 365 Defender | Microsoft Defender for IoT +description: Learn how to get added value for enterprise IoT devices in Microsoft 365 Defender. Previously updated : 10/19/2022 Last updated : 09/13/2023 +#CustomerIntent: As a Microsoft 365 administrator, I want to understand how to turn on support for enterprise IoT monitoring in Microsoft 365 Defender and where I can find the added security value so that I can keep my EIoT devices safe. -# Enable Enterprise IoT security with Defender for Endpoint +# Get started with enterprise IoT monitoring in Microsoft 365 Defender -This article describes how [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) customers can add an Enterprise IoT plan in Microsoft 365 Defender, providing extra security value for IoT devices. +This article describes how [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) customers can monitor enterprise IoT devices in their environment, using added security value in Microsoft 365 Defender. -While IoT device inventory is already available for Defender for Endpoint P2 customers, adding an Enterprise IoT plan adds alerts, recommendations, and vulnerability data, purpose-built for IoT devices in your enterprise network. +While IoT device inventory is already available for Defender for Endpoint P2 customers, turning on enterprise IoT security adds alerts, recommendations, and vulnerability data, purpose-built for IoT devices in your enterprise network. -IoT devices include printers, cameras, VOIP phones, smart TVs, and more. Adding an Enterprise IoT plan means, for example, that you can use a recommendation in Microsoft 365 Defender to open a single IT ticket for patching vulnerable applications across both servers and printers. +IoT devices include printers, cameras, VOIP phones, smart TVs, and more. Turning on enterprise IoT security means, for example, that you can use a recommendation in Microsoft 365 Defender to open a single IT ticket for patching vulnerable applications across both servers and printers. ## Prerequisites Before you start the procedures in this article, read through [Secure IoT device Make sure that you have: -- A Microsoft Defender for Endpoint P2 license- - IoT devices in your network, visible in the Microsoft 365 Defender **Device inventory** -- An Azure subscription. If you need to, [sign up for a free account](https://azure.microsoft.com/free/).+- Access to the Microsoft 365 Defender portal as a [Security administrator](../../active-directory/roles/permissions-reference.md#security-administrator) ++- One of the following licenses: ++ - A Microsoft 365 E5 (ME5) or E5 Security license ++ - Microsoft Defender for Endpoint P2, with an extra, standalone **Microsoft Defender for IoT - EIoT Device License - add-on** license, available for purchase or trial from the Microsoft 365 admin center. 
++ > [!TIP] + > If you have a standalone license, you don't need to toggle on **Enterprise IoT Security** and can skip directly to [View added security value in Microsoft 365 Defender](#view-added-security-value-in-microsoft-365-defender). + > -- The following user roles:+ For more information, see [Enterprise IoT security in Microsoft 365 Defender](concept-enterprise.md#enterprise-iot-security-in-microsoft-365-defender). - |Identity management |Roles required | - ||| - |**In Microsoft Entra ID** | [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) for your Microsoft 365 tenant | - |**In Azure RBAC** | [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) for the Azure subscription that you'll be using for the integration | -## Onboard a Defender for IoT plan +## Turn on enterprise IoT monitoring -1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Settings** \> **Device discovery** \> **Enterprise IoT**. +This procedure describes how to turn on enterprise IoT monitoring in Microsoft 365 Defender, and is relevant only for ME5/E5 Security customers. -1. Select the following options for your plan: +Skip this procedure if you have one of the following types of licensing plans: - - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the subscription. +- A legacy Enterprise IoT pricing plan and an ME5/E5 Security license. +- Standalone, per-device licenses added on to Microsoft Defender for Endpoint P2. In such cases, the Enterprise IoT security setting is turned on as read-only. - - **Price plan**: For the sake of this tutorial, select a **Trial** pricing plan. Microsoft Defender for IoT provides a [30-day free trial](billing.md#free-trial) for evaluation purposes. +**To turn on enterprise IoT monitoring**: -1. Select the **I accept the terms and conditions** option and then select **Save**. +1. In [Microsoft 365 Defender](https://security.microsoft.com/), select **Settings** \> **Device discovery** \> **Enterprise IoT**. -For example: +1. Toggle the Enterprise IoT security option to **On**. For example: + :::image type="content" source="media/enterprise-iot/eiot-toggle-on.png" alt-text="Screenshot of Enterprise IoT toggled on in Microsoft 365 Defender."::: ## View added security value in Microsoft 365 Defender -This procedure describes how to view related alerts, recommendations, and vulnerabilities for a specific device in Microsoft 365 Defender. Alerts, recommendations, and vulnerabilities are shown for IoT devices only after you've added an Enterprise IoT plan. +This procedure describes how to view related alerts, recommendations, and vulnerabilities for a specific device in Microsoft 365 Defender, when the **Enterprise IoT security** option is turned on. **To view added security value**: -1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Assets** \> **Devices** to open the **Device inventory** page. +1. 
In [Microsoft 365 Defender](https://security.microsoft.com/), select **Assets** \> **Devices** to open the **Device inventory** page. 1. Select the **IoT devices** tab and select a specific device **IP** to drill down for more details. For example: :::image type="content" source="media/enterprise-iot/select-a-device.png" alt-text="Screenshot of the IoT devices tab in Microsoft 365 Defender." lightbox="media/enterprise-iot/select-a-device.png"::: -1. On the device details page, explore the following tabs to view data added by the Enterprise IoT plan for your device: +1. On the device details page, explore the following tabs to view data added by enterprise IoT security for your device: - On the **Alerts** tab, check for any alerts triggered by the device. |
defender-for-iot | Eiot Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-sensor.md | -**If you're a Defender for Endpoint customer** with an Enterprise IoT plan for Defender for IoT, adding an Enterprise IoT network sensor extends your network visibility to IoT segments in your corporate network not otherwise covered by Microsoft Defender for Endpoint. For example, if you have a VLAN dedicated to VoIP devices with no other endpoints, Defender for Endpoint may not be able to discover devices on that VLAN. +Microsoft 365 Defender customers with an Enterprise IoT network sensor can see all discovered devices in the **Device inventory** in either Microsoft 365 Defender or Defender for IoT. You'll also get extra security value from more alerts, vulnerabilities, and recommendations in Microsoft 365 Defender for the newly discovered devices. -Customers that have set up an Enterprise IoT network sensor can see all discovered devices in the **Device inventory** in either Microsoft 365 Defender or Defender for IoT. You'll also get extra security value from more alerts, vulnerabilities, and recommendations in Microsoft 365 Defender for the newly discovered devices. --**If you're a Defender for IoT customer** working solely in the Azure portal, an Enterprise IoT network sensor provides extra device visibility to Enterprise IoT devices, such as Voice over Internet Protocol (VoIP) devices, printers, and cameras, which may not be covered by your OT network sensors. +If you're a Defender for IoT customer working solely in the Azure portal, an Enterprise IoT network sensor provides extra device visibility to Enterprise IoT devices, such as Voice over Internet Protocol (VoIP) devices, printers, and cameras, which might not be covered by your OT network sensors. Defender for IoT [alerts](how-to-manage-cloud-alerts.md) and [recommendations](recommendations.md) for devices discovered only by the Enterprise IoT sensor are available only in the Azure portal. This section describes the prerequisites required before deploying an Enterprise ### Azure requirements -- To view Defender for IoT data in Microsoft 365 Defender, including devices, alerts, recommendations, and vulnerabilities, you must have an Enterprise IoT plan, [onboarded from Microsoft 365 Defender](eiot-defender-for-endpoint.md). +- To view Defender for IoT data in Microsoft 365 Defender, including devices, alerts, recommendations, and vulnerabilities, you must have **Enterprise IoT security** turned on in [Microsoft 365 Defender](eiot-defender-for-endpoint.md). - If you only want to view data in the Azure portal, an Enterprise IoT plan isn't required. You can also onboard your Enterprise IoT plan from Microsoft 365 Defender after registering your network sensor to bring [extra device visibility and security value](concept-enterprise.md#security-value-in-microsoft-365-defender) to your organization. + If you only want to view data in the Azure portal, you don't need Microsoft 365 Defender. You can also turn on **Enterprise IoT security** in Microsoft 365 Defender after registering your network sensor to bring [extra device visibility and security value](concept-enterprise.md#enterprise-iot-security-in-microsoft-365-defender) to your organization. 
- Make sure you can access the Azure portal as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/). ### Network requirements -- Identify the devices and subnets you want to monitor so that you understand where to place an Enterprise IoT sensor in your network. You may want to deploy multiple Enterprise IoT sensors.+- Identify the devices and subnets you want to monitor so that you understand where to place an Enterprise IoT sensor in your network. You might want to deploy multiple Enterprise IoT sensors. - Configure traffic mirroring in your network so that the traffic you want to monitor is mirrored to your Enterprise IoT sensor. Supported traffic mirroring methods are the same as for OT monitoring. For more information, see [Choose a traffic mirroring method for traffic monitoring](best-practices/traffic-mirroring-methods.md). This procedure describes how to prepare your physical appliance or VM to install The system displays a list of all monitored interfaces. - Identify the interfaces that you want to monitor, which are usually the interfaces with no IP address listed. Interfaces with incoming traffic will show an increasing number of RX packets. + Identify the interfaces that you want to monitor, which are usually the interfaces with no IP address listed. Interfaces with incoming traffic show an increasing number of RX packets. 1. For each interface you want to monitor, run the following command to enable *Promiscuous mode* in the network adapter: This procedure describes how to prepare your physical appliance or VM to install ## Register an Enterprise IoT sensor in Defender for IoT -This section describes how to register an Enterprise IoT sensor in Defender for IoT. When you're done registering your sensor, you'll continue on with installing the Enterprise IoT monitoring software on your sensor machine. +This section describes how to register an Enterprise IoT sensor in Defender for IoT. When you're done registering your sensor, you can continue with installing the Enterprise IoT monitoring software on your sensor machine. **To register a sensor in the Azure portal**: This section describes how to register an Enterprise IoT sensor in Defender for :::image type="content" source="media/tutorial-get-started-eiot/successful-registration.png" alt-text="Screenshot of the successful registration of an Enterprise IoT sensor."::: -1. Copy the command to a safe location, where you'll be able to copy it to your physical appliance or VM in order to [install sensor software](#install-enterprise-iot-sensor-software). +1. Copy the command to a safe location, where you can copy it to your physical appliance or VM in order to [install sensor software](#install-enterprise-iot-sensor-software). ## Install Enterprise IoT sensor software This procedure describes how to install Enterprise IoT monitoring software on [y 1. In the **Set up proxy server?** screen, select whether to set up a proxy server for your sensor. For example: - :::image type="content" source="media/tutorial-get-started-eiot/proxy.png" alt-text="Screenshot of the Set up a proxy server? 
screen."::: + :::image type="content" source="media/tutorial-get-started-eiot/proxy.png" alt-text="Screenshot of the Set up a proxy server screen."::: If you're setting up a proxy server, select **Yes**, and then define the proxy server host, port, username, and password, selecting **Ok** after each option. In the **Sites and sensors** page, Enterprise IoT sensors are all automatically Once you've validated your setup, the Defender for IoT **Device inventory** page will start to populate with new devices detected by your sensor after 15 minutes. -If you're a Defender for Endpoint customer with an Enterprise IoT plan, you'll be able to view all detected devices in the **Device inventory** pages, in both Defender for IoT and Microsoft 365 Defender. Detected devices include both devices detected by Defender for Endpoint and devices detected by the Enterprise IoT sensor. +If you're a Defender for Endpoint customer with a [legacy Enterprise IoT plan](whats-new.md#enterprise-iot-protection-now-included-in-microsoft-365-e5-and-e5-security-licenses), you're able to view all detected devices in the **Device inventory** pages, in both Defender for IoT and Microsoft 365 Defender. Detected devices include both devices detected by Defender for Endpoint and devices detected by the Enterprise IoT sensor. For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md) and [Microsoft 365 Defender device discovery](/microsoft-365/security/defender-endpoint/machines-view-overview). -If you're on a monthly commitment, you may want to edit the number of devices covered by your Enterprise IoT plan. For more information, see: --- [Calculate monitored devices for Enterprise IoT monitoring](manage-subscriptions-enterprise.md#calculate-monitored-devices-for-enterprise-iot-monitoring)-- [Defender for IoT subscription billing](billing.md) ## Delete an Enterprise IoT network sensor For more information, see [Manage sensors with Defender for IoT in the Azure por > [!TIP] > You can also remove your sensor manually from the CLI. For more information, see [Extra steps and samples for Enterprise IoT deployment](extra-deploy-enterprise-iot.md#remove-an-enterprise-iot-network-sensor-optional). -If you want to cancel your Enterprise IoT plan and stop the integration with Defender for Endpoint, do so from [Microsoft 365 Defender](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan). --## Move existing sensors to a different subscription --If you've registered an Enterprise IoT network sensor, you may need to apply it to a different subscription than the one youΓÇÖre currently using. --**To apply an existing sensor to a different subscription**: --1. Onboard a new plan to the new subscription -1. Register the sensors under the new subscription -1. Remove the sensors from the previous subscription --Billing changes will take effect one hour after cancellation of the previous subscription, and will be reflected on the next month's bill. Devices will be synchronized from the sensor to the new subscription automatically. --**To switch to a new subscription**: --1. In Defender for Endpoint, onboard a new Enterprise IoT plan to the new subscription you want to use. For more information, see [Onboard a Defender for IoT plan](eiot-defender-for-endpoint.md#onboard-a-defender-for-iot-plan). --1. In the Azure portal, register your Enterprise IoT sensor under the new subscription and run the activation command. 
For more information, see [Enhance IoT security monitoring with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md). --1. Delete the legacy sensor from the previous subscription. In Defender for IoT, go to the **Sites and sensors** page and locate the legacy sensor on the previous subscription. --1. In the row for your sensor, from the options (**...**) menu, select **Delete** to delete the sensor from the previous subscription. --1. If relevant, cancel the Defender for IoT plan from the previous subscription. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan). +If you want to cancel enterprise IoT security with Microsoft 365 Defender, do so from the Microsoft 365 Defender portal. For more information, see [Turn off enterprise IoT security](manage-subscriptions-enterprise.md#turn-off-enterprise-iot-security). ## Next steps |
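When preparing a sensor appliance as described earlier in this article, it helps to confirm which interfaces are actually receiving mirrored traffic. Below is a hedged helper (Python), assuming a Linux host with the standard sysfs network counters; it only checks for rising RX packet counts, so confirm separately that a candidate interface has no IP address assigned, as the guidance above describes.

```python
# Sketch: spot candidate monitoring interfaces on a Linux sensor host, per
# the guidance above (interfaces receiving mirrored traffic show rising RX
# packet counts). Assumes standard /sys/class/net sysfs paths; run this on
# the sensor machine itself.
import os
import time

def rx_packets(iface: str) -> int:
    with open(f"/sys/class/net/{iface}/statistics/rx_packets") as f:
        return int(f.read())

def candidate_interfaces(interval: float = 5.0) -> list[str]:
    ifaces = os.listdir("/sys/class/net")
    before = {i: rx_packets(i) for i in ifaces}
    time.sleep(interval)
    # Interfaces whose RX counters are rising are receiving traffic.
    return [i for i in ifaces if i != "lo" and rx_packets(i) > before[i]]

print(candidate_interfaces())
```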
defender-for-iot | Faqs Eiot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-eiot.md | - Title: FAQs for Enterprise IoT networks - Microsoft Defender for IoT -description: Find answers to the most frequently asked questions about Microsoft Defender for IoT Enterprise IoT networks. - Previously updated : 06/05/2023----# Enterprise IoT network security frequently asked questions --This article provides a list of frequently asked questions about securing Enterprise IoT networks with Microsoft Defender for IoT. --## What is the difference between OT and Enterprise IoT? --### Operational Technology (OT) --OT network sensors use agentless, patented technology to discover, learn, and continuously monitor network devices for a deep visibility into Operational Technology (OT) / Industrial Control System (ICS) risks. Sensors carry out data collection, analysis, and alerting on-site, making them ideal for locations with low bandwidth or high latency. --### Enterprise IoT --Enterprise IoT provides visibility and security for IoT devices in the corporate environment. --Enterprise IoT network protection extends agentless features beyond operational environments, providing coverage for all IoT devices in your environment. For example, an enterprise IoT environment may include printers, cameras, and purpose-built, proprietary, devices. --## What additional security value can Enterprise IoT provide Microsoft Defender for Endpoint customers? --Enterprise IoT is designed to help customers secure un-managed devices throughout the organization and extend IT security to also cover IoT devices. The solution leverages multiple means in order to ensure optimal coverage. --- **In the Microsoft Defender for Endpoint portal**: This is the GA offering for Enterprise IoT. Microsoft 365 P2 customers already have visibility for discovered IoT devices in the **Device inventory** page in Defender for Endpoint. Customers can onboard an Enterprise IoT plan in the same portal and gain security value by viewing alerts, recommendations and vulnerabilities for their discovered IoT devices.-- For more information, see [Onboard with Microsoft Defender for IoT](eiot-defender-for-endpoint.md). --- **In the Azure portal**: Defender for IoT customers can view their discovered IoT devices in the **Device inventory** page in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal.-- For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md). --## How can I start using Enterprise IoT? --To get started, Microsoft 365 P2 customers need to [add a Defender for IoT plan with Enterprise IoT](eiot-defender-for-endpoint.md) to an Azure subscription from the Microsoft Defender for Endpoint portal. --If you're a Defender for Endpoint customer, when adding your Defender for IoT plan, take care to exclude any devices already [managed by Defender for Endpoint](/microsoft-365/security/defender-endpoint/device-discovery) from your count of devices you want to monitor. --## What permissions do I need to add a Defender for IoT plan? Can I use any Azure subscription? --For information on required permissions, see [Prerequisites](eiot-defender-for-endpoint.md#prerequisites). --## Which devices are billable? 
--For more information about billable devices, see [Devices monitored by Defender for IoT](architecture.md#devices-monitored-by-defender-for-iot). --## How should I estimate the number of devices I want to monitor? --In the **Device inventory** in Defender for Endpoint: --Add the total number of discovered network devices with the total number of discovered IoT devices. Round that up to a multiple of 100, and that is the number of devices to enter. --For more information, see [Devices monitored by Defender for IoT](architecture.md#devices-monitored-by-defender-for-iot). --## How does the integration between Microsoft Defender for Endpoint and Microsoft Defender for IoT work? --Once you've [added a Defender for IoT plan with Enterprise IoT to an Azure subscription in Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration#onboard-a-defender-for-iot-plan), integration between the two products takes place seamlessly. --Discovered IoT devices can be viewed in both Defender for IoT and Defender for Endpoint. For more information, see [Defender for IoT integration](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration). --## Can I change the subscription I'm using for Defender for IoT? --To change the subscription you're using for your Defender for IoT plan, you'll need to cancel your plan on the existing subscription, and then onboard a new plan to a new subscription. Your existing data won't be migrated to the new subscription. For more information, see [Manage Defender for IoT plans for Enterprise IoT security monitoring](manage-subscriptions-enterprise.md). --## How can I edit my plan in Defender for Endpoint? --To make any changes to an existing plan, you'll need to cancel your existing plan and onboard a new plan with the new details. Changes might include moving billing charges from one subscription to another, changing the number of devices you want to cover, or changing the plan commitment from a trial to a monthly commitment. --## How can I cancel Enterprise IoT? --To remove only Enterprise IoT from your plan, cancel your plan from Microsoft Defender for Endpoint. For more information, see [Cancel your Defender for IoT plan](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration#cancel-your-defender-for-iot-plan). --To cancel the plan and remove all Defender for IoT services from the associated subscription, cancel the plan in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan). --## What happens when the 30-day trial ends? --If you haven't changed your plan from a trial to a monthly commitment by the time your trial ends, your plan is automatically canceled, and you'll lose access to Defender for IoT security features. --To change your plan from a trial to a monthly commitment before the end of the trial, you'll need to cancel your trial plan and onboard a new plan in Defender for Endpoint. For more information, see [Defender for IoT integration](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration). --## How can I resolve billing issues associated with my Defender for IoT plan? --For any billing or technical issues, create a support request in the Azure portal. 
--## Next steps --For more information on getting started with Enterprise IoT, see: --- [Securing IoT devices in the enterprise](concept-enterprise.md)-- [Enable Enterprise IoT security in Defender for Endpoint](eiot-defender-for-endpoint.md)-- [Manage Defender for IoT plans for Enterprise IoT security monitoring](manage-subscriptions-enterprise.md) |
defender-for-iot | Faqs General | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-general.md | To learn more about how to get started with Defender for IoT, see the following - Read the Defender for IoT [overview](overview.md) - [Get started with Defender for IoT](getting-started.md) - [OT Networks frequently asked questions](faqs-ot.md)-- [Enterprise IoT networks frequently asked questions](faqs-eiot.md) |
defender-for-iot | How To Manage Cloud Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md | Microsoft Defender for IoT alerts enhance your network security and operations w - [Integrate with Microsoft Sentinel](iot-solution.md) to view Defender for IoT alerts in Microsoft Sentinel and manage them together with security incidents. -- If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) with Microsoft Defender for Endpoint, alerts for Enterprise IoT devices detected by Microsoft Defender for Endpoint are available in Defender for Endpoint only.+- If you have **Enterprise IoT security** [turned on in Microsoft 365 Defender](eiot-defender-for-endpoint.md), alerts for Enterprise IoT devices detected by Microsoft Defender for Endpoint are available in Defender for Endpoint only. For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md) and the [Alerts queue in Microsoft 365 Defender](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response). Microsoft Defender for IoT alerts enhance your network security and operations w ## Prerequisites -- **To have alerts in Defender for IoT**, you must have an [OT](onboard-sensors.md) or [Enterprise IoT sensor](eiot-sensor.md) on-boarded, and network data streaming into Defender for IoT.+- **To have alerts in Defender for IoT**, you must have an [OT sensor](onboard-sensors.md) onboarded, and network data streaming into Defender for IoT. - **To view alerts on the Azure portal**, you must have access as a [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) For more information, see [Azure user roles and permissions for Defender for IoT | **Destination device** | The destination IP or MAC address, or the destination device name.| | **First detection** | The first time the alert was detected in the network. | | **ID** |The unique alert ID.|- | **Last activity** | The last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication | + | **Last activity** | The last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert deduplication | | **Protocol** | The protocol detected in the network traffic for the alert.| | **Sensor** | The sensor that detected the alert.| | **Zone** | The zone assigned to the sensor that detected the alert.| For example, filter alerts by **Category**: Use the **Group by** menu at the top-right to collapse the grid into subsections according to specific parameters. -For example, while the total number of alerts appears above the grid, you may want more specific information about alert count breakdown, such as the number of alerts with a specific severity, protocol, or site. +For example, while the total number of alerts appears above the grid, you might want more specific information about alert count breakdown, such as the number of alerts with a specific severity, protocol, or site. Supported grouping options include *Engine*, *Name*, *Sensor*, *Severity*, and *Site*. 
Downloading the PCAP file can take several minutes, depending on the quality of ## Export alerts to a CSV file -You may want to export a selection of alerts to a CSV file for offline sharing and reporting. +You might want to export a selection of alerts to a CSV file for offline sharing and reporting. 1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select the **Alerts** page on the left. |
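Once exported, the CSV file is easy to summarize offline, much like the portal's **Group by** breakdowns. Here's a minimal sketch (Python), assuming hypothetical column headers such as `Severity` and `Site`; match them to the headers in your actual export.

```python
# Sketch: summarize an exported Defender for IoT alerts CSV offline.
# Column names ("Severity", "Site") are assumptions; check them against
# the headers in your actual export file.
import csv
from collections import Counter

def alert_counts(csv_path: str, column: str = "Severity") -> Counter:
    with open(csv_path, newline="", encoding="utf-8") as f:
        return Counter(row[column] for row in csv.DictReader(f))

print(alert_counts("alerts-export.csv"))          # e.g. Counter({'Medium': 12, ...})
print(alert_counts("alerts-export.csv", "Site"))  # per-site breakdown
```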
defender-for-iot | How To Manage Subscriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md | -If you're looking to manage Enterprise IoT plans, see [Manage Defender for IoT plans for Enterprise IoT security monitoring](manage-subscriptions-enterprise.md). +If you're looking to manage support for enterprise IoT security, see [Manage enterprise IoT monitoring support with Microsoft Defender for IoT](manage-subscriptions-enterprise.md). -This article is relevant for commercial Defender for IoT customers. If you're a government cusetomer, contact your Microsoft sales representative for more information. +This article is relevant for commercial Defender for IoT customers. If you're a government customer, contact your Microsoft sales representative for more information. ## Prerequisites Before performing the procedures in this article, make sure that you have: - An Azure subscription. If you need to, [sign up for a free account](https://azure.microsoft.com/free/). -- A [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user role for the Azure subscription that you'll be using for the integration+- A [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user role for the Azure subscription that you're using for the integration - An understanding of your site size. For more information, see [Calculate devices in your network](best-practices/plan-prepare-deploy.md#calculate-devices-in-your-network). This procedure describes how to add an OT plan for Defender for IoT in the Azure 1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started), select **Plans and pricing** > **Add plan**. -1. In the **Plan settings** pane, select the Azure subscription where you want to add a plan. You can only add a single subscription, and you'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the selected subscription. +1. In the **Plan settings** pane, select the Azure subscription where you want to add a plan. You can only add a single subscription, and you need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the selected subscription. > [!NOTE] > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner. Also make sure that you have the right subscriptions selected in your Azure settings > **Directories + subscriptions** page. This procedure describes how to add an OT plan for Defender for IoT in the Azure - Select the terms and conditions. - If you're working with an on-premises management console, select **Download OT activation file (Optional)**. - When you're finished, select **Save**. 
If you've selected to download the on-premises management console activation file, the file is downloaded and you're prompted to save it locally. You'll use it later, when [activating your on-premises management console](ot-deploy/activate-deploy-management.md#activate-the-on-premises-management-console). + When you're finished, select **Save**. If you've selected to download the on-premises management console activation file, the file is downloaded and you're prompted to save it locally. You use it later, when [activating your on-premises management console](ot-deploy/activate-deploy-management.md#activate-the-on-premises-management-console). Your new plan is listed under the relevant subscription on the **Plans and pricing** > **Plans** page. -## Cancel a Defender for IoT plan +## Cancel a Defender for IoT plan for OT networks -You may need to cancel a Defender for IoT plan from your Azure subscription, for example, if you need to work with a different subscription, or if you no longer need the service. --> [!IMPORTANT] -> Canceling a plan removes all Defender for IoT services from the subscription, including both OT and Enterprise IoT services. If you have an Enterprise IoT plan on your subscription, do this with care. -> -> To cancel only an Enterprise IoT plan, do so from Microsoft 365. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan). -> +You might need to cancel a Defender for IoT plan from your Azure subscription, for example, if you need to work with a different subscription, or if you no longer need the service. **Prerequisites**: Before canceling your plan, make sure to delete any sensors that are associated with the subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal). -**To cancel a Defender for IoT plan for OT networks**: +**To cancel an OT network plan**: 1. In the Azure portal, go to **Defender for IoT** > **Plans and pricing**. Existing customers can continue to use any legacy OT plan, with no changes in fu ### Warnings for exceeding committed devices -If the number of actual devices detected by Defender for IoT exceeds the number of committed devices currently listed on your subscription, you may see a warning message in the Azure portal and on your OT sensor that you have exceeded the maximum number of devices for your subscription. +If the number of actual devices detected by Defender for IoT exceeds the number of committed devices currently listed on your subscription, you might see a warning message in the Azure portal and on your OT sensor that you have exceeded the maximum number of devices for your subscription. -This warning indicates you need to update the number of committed devices on the relevant subscription to the actual number of devices being monitored. Click the link in the warning message to take you to the **Plans and pricing** page, with the **Edit plan** pane already open. +This warning indicates you need to update the number of committed devices on the relevant subscription to the actual number of devices being monitored. Select the link in the warning message to take you to the **Plans and pricing** page, with the **Edit plan** pane already open. 
### Move existing sensors to a different subscription -If you have multiple legacy subscriptions and are migrating to a Microsoft 365 plan, you'll first need to consolidate your sensors to a single subscription. To do this, you'll need to register the sensors under the new subscription and remove them from the original subscription. +If you have multiple legacy subscriptions and are migrating to a Microsoft 365 plan, you'll first need to consolidate your sensors to a single subscription. To do this, you need to register the sensors under the new subscription and remove them from the original subscription. - Devices are synchronized from the sensor to the new subscription automatically. If you have multiple legacy subscriptions and are migrating to a Microsoft 365 p - Replicate site and sensor hierarchy as is. - - For sensors monitoring overlapping network segments, create the activation file under the same zone. Identical devices that are detected in more than one sensor in a zone, will be merged into one device. + - For sensors monitoring overlapping network segments, create the activation file under the same zone. Identical devices that are detected in more than one sensor in a zone are merged into one device. 1. On your sensor, upload the new activation file. 1. Delete the sensor identities from the previous subscription. For more information, see [Site management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal). -1. If relevant, cancel the Defender for IoT plan from the previous subscription. For more information, see [Cancel a Defender for IoT plan](#cancel-a-defender-for-iot-plan). +1. If relevant, cancel the Defender for IoT plan from the previous subscription. For more information, see [Cancel a Defender for IoT plan for OT networks](#cancel-a-defender-for-iot-plan-for-ot-networks). ### Edit a legacy plan on the Azure portal If you have multiple legacy subscriptions and are migrating to a Microsoft 365 p 1. If you have an on-premises management console, make sure to upload a new activation file, which reflects the changes made. For more information, see [Upload a new activation file](how-to-manage-the-on-premises-management-console.md#upload-a-new-activation-file). -Changes to your plan will take effect one hour after confirming the change. This change will appear on your next monthly statement, and you'll be charged based on the length of time each plan was in effect. +Changes to your plan will take effect one hour after confirming the change. This change appears on your next monthly statement, and you're charged based on the length of time each plan was in effect. ## Next steps For more information, see: |
defender-for-iot | Manage Subscriptions Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-subscriptions-enterprise.md | Title: Manage Enterprise IoT plans on Azure subscriptions -description: Manage Defender for IoT plans for Enterprise IoT monitoring on your Azure subscriptions. Previously updated : 05/17/2023+ Title: Manage EIoT monitoring support | Microsoft Defender for IoT +description: Learn how to manage your EIoT monitoring support with Microsoft Defender for IoT. Last updated : 09/13/2023 +#CustomerIntent: As a Defender for IoT customer, I want to understand how to manage my EIoT monitoring support with Microsoft Defender for IoT so that I can best plan my deployment. -# Manage Defender for IoT plans for Enterprise IoT security monitoring +# Manage enterprise IoT monitoring support with Microsoft Defender for IoT -Enterprise IoT security monitoring with Defender for IoT is managed by an Enterprise IoT plan on your Azure subscription. While you can view your plan in Microsoft Defender for IoT, onboarding and canceling a plan is done with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) in Microsoft 365 Defender. +Enterprise IoT security monitoring with Defender for IoT is supported by a Microsoft 365 E5 (ME5) or E5 Security license, or extra standalone, per-device licenses purchased as add-ons to Microsoft Defender for Endpoint. -For each monthly price plan, you'll be asked to define an approximate number of [devices](billing.md#defender-for-iot-devices) that you want to monitor and cover by your plan. +This article describes how to: ++- Calculate the devices detected in your environment so that you can determine whether you need extra, standalone licenses. +- Cancel support for enterprise IoT monitoring with Microsoft Defender for IoT. If you're looking to manage OT plans, see [Manage Defender for IoT plans for OT security monitoring](how-to-manage-subscriptions.md). If you're looking to manage OT plans, see [Manage Defender for IoT plans for OT Before performing the procedures in this article, make sure that you have: -- A Microsoft Defender for Endpoint P2 license+- One of the following sets of licenses: ++ - A Microsoft 365 E5 (ME5) or E5 Security license and a Microsoft Defender for Endpoint P2 license + - A Microsoft Defender for Endpoint P2 license alone ++ For more information, see [Enterprise IoT security in Microsoft 365 Defender](concept-enterprise.md#enterprise-iot-security-in-microsoft-365-defender). ++- Access to the Microsoft 365 Defender portal as a [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) ++## Obtain a standalone, Enterprise IoT trial license ++This procedure describes how to start using a trial, standalone license for enterprise IoT monitoring, for customers who have a Microsoft Defender for Endpoint P2 license only. ++Customers with ME5/E5 Security plans have support for enterprise IoT monitoring turned on by default, and don't need to start a trial. For more information, see [Get started with enterprise IoT monitoring in Microsoft 365 Defender](eiot-defender-for-endpoint.md). -- An Azure subscription. 
If you need to, [sign up for a free account](https://azure.microsoft.com/free/).+Start your enterprise IoT trial using the [Microsoft Defender for IoT - EIoT Device License - add-on wizard](https://signup.microsoft.com/get-started/signup?products=b2f91841-252f-4765-94c3-75802d7c0ddb&ali=1&bac=1) or via the Microsoft 365 admin center. -- The following user roles: - - **In Microsoft Entra ID**: [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) for your Microsoft 365 tenant +**To start an Enterprise IoT trial**: - - **In Azure RBAC**: [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) for the Azure subscription that you'll be using for the integration +1. Go to the [Microsoft 365 admin center](https://portal.office.com/AdminPortal/Home#/catalog) > **Marketplace**. -### Calculate monitored devices for Enterprise IoT monitoring +1. Search for the **Microsoft Defender for IoT - EIoT Device License - add-on** and filter the results by **Other services**. For example: ++ :::image type="content" source="media/enterprise-iot/eiot-standalone.png" alt-text="Screenshot of the Marketplace search results for the EIoT Device License."::: ++ > [!IMPORTANT] + > The prices shown in this image are for example purposes only and are not intended to reflect actual prices. + > ++1. Under **Microsoft Defender for IoT - EIoT Device License - add-on**, select **Details**. ++1. On the **Microsoft Defender for IoT - EIoT Device License - add-on** page, select **Start free trial**. On the **Check out** page, select **Try now**. ++> [!TIP] +> Make sure to [assign your licenses to specific users](/microsoft-365/admin/manage/assign-licenses-to-users) to start using them. +> -If you're working with a monthly commitment, you'll need to periodically update the number of devices covered by your plan as your network grows. +For more information, see [Free trial](billing.md#free-trial). ++## Calculate monitored devices for Enterprise IoT monitoring ++Use the following procedure to calculate how many devices you need to monitor if: ++- You're an ME5/E5 Security customer and think you need to monitor more devices than the number allocated per ME5/E5 Security license +- You're a Defender for Endpoint P2 customer who's purchasing standalone enterprise IoT licenses **To calculate the number of devices you're monitoring**: -1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Assets** \> **Devices** to open the **Device inventory** page. +1. In [Microsoft 365 Defender](https://security.microsoft.com/), select **Assets** \> **Devices** to open the **Device inventory** page. 1. Add the total number of devices listed on both the **Network devices** and **IoT devices** tabs. If you're working with a monthly commitment, you'll need to periodically update :::image type="content" source="media/how-to-manage-subscriptions/eiot-calculate-devices.png" alt-text="Screenshot of network device and IoT devices in the device inventory in Microsoft Defender for Endpoint." lightbox="media/how-to-manage-subscriptions/eiot-calculate-devices.png"::: -1. Round up your total to a multiple of 100. +1. Round up your total to a multiple of 100 and compare it against the number of licenses you have. 
For example: - In the Microsoft 365 Defender **Device inventory**, you have *473* network devices and *1206* IoT devices. - Added together, the total is *1679* devices.-- Rounded up to a multiple of 100 is **1700**.+- You have 320 ME5 licenses, which cover **1600** devices. -Use **1700** as the estimated number of devices in your plan +You need **79** standalone, per-device licenses to cover the gap (a short calculation sketch follows this change entry). For more information, see the [Defender for Endpoint Device discovery overview](/microsoft-365/security/defender-endpoint/device-discovery). > [!NOTE] > Devices listed on the **Computers & Mobile** tab, including those managed by Defender for Endpoint or otherwise, are not included in the number of [devices](billing.md#defender-for-iot-devices) monitored by Defender for IoT. -## Onboard an Enterprise IoT plan --This procedure describes how to add an Enterprise IoT plan to your Azure subscription from Microsoft 365 Defender. --**To add an Enterprise IoT plan**: --1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Settings** \> **Device discovery** \> **Enterprise IoT**. +## Purchase standalone licenses -1. Select the following options for your plan: +Purchase standalone, per-device licenses if you're an ME5/E5 Security customer who needs more than the five devices allocated per license, or if you're a Defender for Endpoint customer who wants to add enterprise IoT security to your organization. - - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the subscription. +**To purchase standalone licenses**: - > [!TIP] - > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner. +1. Go to the [Microsoft 365 admin center](https://portal.office.com/AdminPortal/Home#/catalog) **Billing > Purchase services**. If you don't have this option, select **Marketplace** instead. - - **Price plan**: Select a trial or monthly commitment. +1. Search for the **Microsoft Defender for IoT - EIoT Device License - add-on** and filter the results by **Other services**. For example: - Microsoft Defender for IoT provides a [30-day free trial](billing.md#free-trial) for evaluation purposes, with an unlimited number of devices. :::image type="content" source="media/enterprise-iot/eiot-standalone.png" alt-text="Screenshot of the Marketplace search results for the EIoT Device License."::: - Monthly commitments require that you enter the number of [devices](#calculate-monitored-devices-for-enterprise-iot-monitoring) that you'd calculated earlier. + > [!IMPORTANT] + > The prices shown in this image are for example purposes only and are not intended to reflect actual prices. + > -1. Select the **I accept the terms and conditions** option and then select **Save**. +1. On the **Microsoft Defender for IoT - EIoT Device License - add-on** page, enter your selected license quantity, select a billing frequency, and then select **Buy**. - For example: -- :::image type="content" source="media/enterprise-iot/defender-for-endpoint-onboard.png" alt-text="Screenshot of the Enterprise IoT tab in Defender for Endpoint." 
lightbox="media/enterprise-iot/defender-for-endpoint-onboard.png"::: +For more information, see the [Microsoft 365 admin center help](/microsoft-365/admin/). -After you've onboarded your plan, you'll see it listed in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. Go to the Defender for IoT **Plans and pricing** page and find your subscription with the new **Enterprise IoT** plan listed. For example: +## Turn off enterprise IoT security +This procedure describes how to turn off enterprise IoT monitoring in Microsoft 365 Defender, and is supported only for customers who don't have any standalone, per-device licenses added on to Microsoft 365 Defender. -## Edit your Enterprise IoT plan +Turn off the **Enterprise IoT security** option if you're no longer using the service. -To edit your plan, such as to edit your commitment level or the number of devices covered by your plan, first [cancel the plan](#cancel-your-enterprise-iot-plan) and then [onboard a new plan](#onboard-an-enterprise-iot-plan). +**To turn off enterprise IoT monitoring**: -## Cancel your Enterprise IoT plan +1. In [Microsoft 365 Defender](https://security.microsoft.com/), select **Settings** \> **Device discovery** \> **Enterprise IoT**. -You'll need to cancel your plan if you want to edit the details of your plan, such as the price plan or the number of devices covered by your plan, or if you no longer need the service. +1. Toggle the option to **Off**. -You'd also need to cancel your plan and onboard again if you need to work with a new payment entity or Azure subscription. +You stop getting security value in Microsoft 365 Defender, including purpose-built alerts, vulnerabilities, and recommendations. -**To cancel your Enterprise IoT plan**: +### Cancel a legacy Enterprise IoT plan -1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Settings** \> **Device discovery** \> **Enterprise IoT**. +If you have a legacy Enterprise IoT plan, are *not* an ME5/E5 Security customer, and no longer to use the service, cancel your plan as follows: -1. Select **Cancel plan**. For example: +1. In [Microsoft 365 Defender](https://security.microsoft.com/) portal, select **Settings** \> **Device discovery** \> **Enterprise IoT**. - :::image type="content" source="media/enterprise-iot/defender-for-endpoint-cancel-plan.png" alt-text="Screenshot of the Cancel plan option on the Microsoft 365 Defender page."::: +1. Select **Cancel plan**. This page is available only for legacy Enterprise IoT plan customers. After you cancel your plan, the integration stops and you'll no longer get added security value in Microsoft 365 Defender, or detect new Enterprise IoT devices in Defender for IoT. -The cancellation takes effect one hour after confirming the change. This change will appear on your next monthly statement, and you will be charged based on the length of time the plan was in effect. --If you're canceling your plan as part of an [editing procedure](#edit-your-enterprise-iot-plan), make sure to [onboard a new plan](#onboard-an-enterprise-iot-plan) back with the new details. +The cancellation takes effect one hour after confirming the change. This change appears on your next monthly statement, and you're charged based on the length of time the plan was in effect. 
> [!IMPORTANT] > If you're canceling your plan as part of an [editing procedure](#edit-your-enter For more information, see: +- [Securing IoT devices in the enterprise](concept-enterprise.md) - [Defender for IoT subscription billing](billing.md)- - [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)- - [Create an additional Azure subscription](../../cost-management-billing/manage/create-subscription.md)- - [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)-- |
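To make the device-count example in the entry above concrete, here's a small arithmetic sketch. It assumes, per the example's numbers (320 ME5 licenses covering 1,600 devices), that each ME5/E5 Security license covers five devices; the function names are illustrative only:

```python
import math

def standalone_licenses_needed(network_devices: int, iot_devices: int,
                               me5_licenses: int, devices_per_license: int = 5) -> int:
    """Standalone, per-device licenses needed beyond ME5/E5 Security coverage."""
    total = network_devices + iot_devices          # 473 + 1206 = 1679
    covered = me5_licenses * devices_per_license   # 320 * 5 = 1600
    return max(total - covered, 0)

def rounded_device_count(network_devices: int, iot_devices: int) -> int:
    """Total device count rounded up to a multiple of 100, as the article suggests."""
    return math.ceil((network_devices + iot_devices) / 100) * 100

print(standalone_licenses_needed(473, 1206, 320))  # 79
print(rounded_device_count(473, 1206))             # 1700
```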
defender-for-iot | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md | -The Internet of Things (IoT) supports billions of connected devices that use both operational technology (OT) and IoT networks. IoT/OT devices and networks are often built using specialized protocols, and may prioritize operational challenges over security. +The Internet of Things (IoT) supports billions of connected devices that use both operational technology (OT) and IoT networks. IoT/OT devices and networks are often built using specialized protocols, and might prioritize operational challenges over security. When IoT/OT devices can't be protected by traditional security monitoring systems, each new wave of innovation increases the risk and possible attack surfaces across those IoT devices and OT networks. -Microsoft Defender for IoT is a unified security solution built specifically to identify IoT and OT devices, vulnerabilities, and threats. Use Defender for IoT to secure your entire IoT/OT environment, including existing devices that may not have built-in security agents. +Microsoft Defender for IoT is a unified security solution built specifically to identify IoT and OT devices, vulnerabilities, and threats. Use Defender for IoT to secure your entire IoT/OT environment, including existing devices that might not have built-in security agents. Defender for IoT provides agentless, network layer monitoring, and integrates with both industrial equipment and security operation center (SOC) tools. Defender for IoT provides agentless, network layer monitoring, and integrates wi ## Agentless device monitoring -If your IoT and OT devices don't have embedded security agents, they may remain unpatched, misconfigured, and invisible to IT and security teams. Un-monitored devices can be soft targets for threat actors looking to pivot deeper into corporate networks. +If your IoT and OT devices don't have embedded security agents, they might remain unpatched, misconfigured, and invisible to IT and security teams. Unmonitored devices can be soft targets for threat actors looking to pivot deeper into corporate networks. Defender for IoT uses agentless monitoring to provide visibility and security across your network, and identifies specialized protocols, devices, or machine-to-machine (M2M) behaviors. Defender for IoT uses agentless monitoring to provide visibility and security ac - Run searches in historical traffic across all relevant dimensions and protocols. Access full-fidelity PCAPs to drill down further. - - Detect advanced threats that you may have missed by static indicators of compromise (IOCs), such as zero-day malware, fileless malware, and living-off-the-land tactics. + - Detect advanced threats that you might have missed by static indicators of compromise (IOCs), such as zero-day malware, fileless malware, and living-off-the-land tactics. - **Respond to threats** by integrating with Microsoft services such as Microsoft Sentinel, other partner systems, and APIs. Integrate with security information and event management (SIEM) services, security operations and response (SOAR) services, extended detection and response (XDR) services, and more. Install OT network sensors on-premises, at strategic locations in your network t - **Hybrid services**: - You may have hybrid network requirements where you can deliver some data to the cloud and other data must remain on-premises. 
+ You might have hybrid network requirements where you can deliver some data to the cloud and other data must remain on-premises. In this case, set up your system in a flexible and scalable configuration to fit your needs. Connect some of your OT sensors to the cloud and view data on the Azure portal, and keep other sensors managed on-premises only. For more information, see [System architecture for OT system monitoring](archite ## Extend support to proprietary OT protocols -IoT and industrial control system (ICS) devices can be secured using both embedded protocols and proprietary, custom, or non-standard protocols. If you have devices that run on protocols that aren't supported by Defender for IoT out-of-the-box, use the Horizon Open Development Environment (ODE) SDK to develop dissector plug-ins to decode network traffic for your protocols. +IoT and industrial control system (ICS) devices can be secured using both embedded protocols and proprietary, custom, or nonstandard protocols. If you have devices that run on protocols that aren't supported by Defender for IoT out-of-the-box, use the Horizon Open Development Environment (ODE) SDK to develop dissector plug-ins to decode network traffic for your protocols. Create custom alerts for your plugin to pinpoint specific network activity and effectively update your security, IT, and operational teams. For example, have alerts triggered when: For more information, see [Manage proprietary protocols with Horizon plugins](re ## Protect enterprise IoT networks -Extend Defender for IoT's agentless security features beyond OT environments to enterprise IoT devices. Add an Enterprise IoT plan in Microsoft Defender for Endpoint for added alerts, vulnerabilities, and recommendations for IoT devices in Defender for Endpoint. An Enterprise IoT plan also provides a shared device inventory across the Azure portal and Microsoft 365 Defender. +Extend Defender for IoT's agentless security features beyond OT environments to enterprise IoT devices by using enterprise IoT security with Microsoft Defender for Endpoint, and view related alerts, vulnerabilities, and recommendations for IoT devices in Microsoft 365 Defender. Enterprise IoT devices can include devices such as printers, smart TVs, and conferencing systems and purpose-built, proprietary devices. |
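The Horizon ODE SDK's actual plug-in API isn't shown in the entry above, so the following is only a conceptual sketch of what a dissector plug-in does: decode a custom header from raw traffic and flag activity worth a custom alert. The frame layout, magic number, and function codes are invented for illustration:

```python
import struct

def dissect_frame(payload: bytes) -> dict:
    """Toy dissector for a fictional 8-byte header:
    2-byte magic, 1-byte function code, 1-byte unit ID, 4-byte register."""
    if len(payload) < 8:
        raise ValueError("frame too short")
    magic, function, unit, register = struct.unpack(">HBBI", payload[:8])
    if magic != 0xA55A:
        raise ValueError("not this protocol")
    return {"function": function, "unit": unit, "register": register}

# Flag a 'write' function code (0x10), analogous to triggering a custom
# alert on specific network activity.
frame = struct.pack(">HBBI", 0xA55A, 0x10, 7, 40001)
info = dissect_frame(frame)
if info["function"] == 0x10:
    print(f"ALERT: write to register {info['register']} on unit {info['unit']}")
```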
defender-for-iot | Roles Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-azure.md | Title: Azure user roles and permissions for Microsoft Defender for IoT description: Learn about the Azure user roles and permissions available for OT and Enterprise IoT monitoring with Microsoft Defender for IoT on the Azure portal. Previously updated : 09/19/2022 Last updated : 10/22/2023 -Microsoft Defender for IoT uses [Azure Role-Based Access Control (RBAC)](../../role-based-access-control/index.yml) to provide access to Enterprise IoT monitoring services and data on the Azure portal. +Microsoft Defender for IoT uses [Azure Role-Based Access Control (RBAC)](../../role-based-access-control/index.yml) to provide access to Defender for IoT monitoring services and data on the Azure portal. The built-in Azure [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), and [Owner](../../role-based-access-control/built-in-roles.md#owner) roles are relevant for use in Defender for IoT. Permissions are applied to user roles across an entire Azure subscription, or in | Action and scope|[Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) |[Security Admin](../../role-based-access-control/built-in-roles.md#security-admin) |[Contributor](../../role-based-access-control/built-in-roles.md#contributor) | [Owner](../../role-based-access-control/built-in-roles.md#owner) | |||||| | **[Grant permissions to others](manage-users-portal.md)**<br>Apply per subscription or site | - | - | - | ✔ |-| **Onboard [OT](onboard-sensors.md) or [Enterprise IoT sensors](eiot-sensor.md)** [*](#enterprise-iot-security) <br>Apply per subscription only | - | ✔ | ✔ | ✔ | +| **Onboard [OT](onboard-sensors.md) or [Enterprise IoT sensors](eiot-sensor.md)** <br>Apply per subscription only | - | ✔ | ✔ | ✔ | | **[Download OT sensor and on-premises management console software](update-ot-software.md#download-the-update-package-from-the-azure-portal)**<br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ | | **[Download sensor endpoint details](how-to-manage-sensors-on-the-cloud.md#endpoint)** <br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ | | **[Download sensor activation files](how-to-manage-sensors-on-the-cloud.md#reactivate-an-ot-sensor)** <br>Apply per subscription only| - | ✔ | ✔ | ✔ |-| **[View values on the Plans and pricing page](how-to-manage-subscriptions.md)** [*](#enterprise-iot-security) <br>Apply per subscription only| ✔ | ✔ | ✔ | ✔ | -| **[Modify values on the Plans and pricing page](how-to-manage-subscriptions.md)** [*](#enterprise-iot-security) <br>Apply per subscription only| - | ✔ | ✔ | ✔ | -| **[View values on the Sites and sensors page](how-to-manage-sensors-on-the-cloud.md)** [*](#enterprise-iot-security)<br>Apply per subscription only | ✔ | ✔ | ✔ | ✔| -| **[Modify values on the Sites and sensors page](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal)** [*](#enterprise-iot-security), including remote OT sensor updates<br>Apply per subscription only | - | ✔ | ✔ | ✔| +| **[View values on the Plans and pricing page](how-to-manage-subscriptions.md)** <br>Apply per subscription only| ✔ | ✔ | ✔ | ✔ | +| **[Modify values on the Plans and pricing page](how-to-manage-subscriptions.md)** <br>Apply per subscription only| - | ✔ | ✔ | ✔ | +| **[View values on the Sites and sensors page](how-to-manage-sensors-on-the-cloud.md)** <br>Apply per subscription only | ✔ | ✔ | ✔ | ✔| +| **[Modify values on the Sites and sensors page](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal)**, including remote OT sensor updates<br>Apply per subscription only | - | ✔ | ✔ | ✔| | **[Recover on-premises management console passwords](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal)** <br>Apply per subscription only | - | ✔ | ✔ | ✔ | | **[Download OT threat intelligence packages](how-to-work-with-threat-intelligence-packages.md#manually-update-locally-managed-sensors)** <br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ | | **[Push OT threat intelligence updates](how-to-work-with-threat-intelligence-packages.md#manually-push-updates-to-cloud-connected-sensors)** <br>Apply per subscription only | - | ✔ | ✔ | ✔ |-| **[Onboard an Enterprise IoT plan from Microsoft 365 Defender](eiot-defender-for-endpoint.md)** [*](#enterprise-iot-security)<br>Apply per subscription only | - | ✔ | - | - | | **[View Azure alerts](how-to-manage-cloud-alerts.md)** <br>Apply per subscription or site | ✔ | ✔ |✔ | ✔| | **[Modify Azure alerts](how-to-manage-cloud-alerts.md) (write access - change status, learn, download PCAP)** <br>Apply per subscription or site| - | ✔ |✔ | ✔ | | **[View Azure device inventory](how-to-manage-device-inventory-for-organizations.md)** <br>Apply per subscription or site | ✔ | ✔ |✔ | ✔| Permissions are applied to user roles across an entire Azure subscription, or in | **[View Defender for IoT settings](configure-sensor-settings-portal.md)** <br>Apply per subscription | ✔ | ✔ |✔ | ✔ | | **[Configure Defender for IoT settings](configure-sensor-settings-portal.md)** <br>Apply per subscription | - | ✔ |✔ | ✔ | --## Enterprise IoT security --Add, edit, or cancel an Enterprise IoT plan with [Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) from Microsoft 365 Defender. Alerts, vulnerabilities, and recommendations for Enterprise IoT networks are also only available from Microsoft 365 Defender. --In addition to the permissions listed above, Enterprise IoT security with Defender for IoT has the following requirements: --- **To add an Enterprise IoT plan**, you'll need an E5 license and specific permissions in your Microsoft 365 Defender tenant.-- **To view Enterprise IoT devices in your Azure device inventory**, you'll need an Enterprise IoT network sensor registered.--For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md). - ## Next steps For more information, see: For more information, see: - [On-premises user roles for OT monitoring with Defender for IoT](roles-on-premises.md) - [Create and manage users on an OT network sensor](manage-users-sensor.md) - [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)-- |
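As a quick sanity check of the role matrix in the entry above, here's a toy lookup over a few of its rows (action names are shortened for the sketch; this is not an Azure RBAC API):

```python
# Simplified excerpt of the permissions table above (not exhaustive).
PERMISSIONS = {
    "view_plans_and_pricing":   {"Security Reader", "Security Admin", "Contributor", "Owner"},
    "modify_plans_and_pricing": {"Security Admin", "Contributor", "Owner"},
    "push_threat_intelligence": {"Security Admin", "Contributor", "Owner"},
    "grant_permissions":        {"Owner"},
}

def can(role: str, action: str) -> bool:
    """True if the built-in Azure role is checked for the action in the table."""
    return role in PERMISSIONS.get(action, set())

print(can("Security Reader", "view_plans_and_pricing"))    # True
print(can("Security Reader", "modify_plans_and_pricing"))  # False
print(can("Owner", "grant_permissions"))                   # True
```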
defender-for-iot | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md | Title: What's new in Microsoft Defender for IoT description: This article describes new features available in Microsoft Defender for IoT, including both OT and Enterprise IoT networks, and both on-premises and in the Azure portal. Previously updated : 10/23/2023 Last updated : 11/01/2023 Features released earlier than nine months ago are described in the [What's new |Service area |Updates | |||+| **Enterprise IoT networks** | [Enterprise IoT protection now included in Microsoft 365 E5 and E5 Security licenses](#enterprise-iot-protection-now-included-in-microsoft-365-e5-and-e5-security-licenses) | | **OT networks** | [Updated security stack integration guidance](#updated-security-stack-integration-guidance)| +### Enterprise IoT protection now included in Microsoft 365 E5 and E5 Security licenses ++Enterprise IoT (EIoT) security with Defender for IoT discovers unmanaged IoT devices and also provides added security value, including continuous monitoring, vulnerability assessments, and tailored recommendations specifically designed for Enterprise IoT devices. Seamlessly integrated with Microsoft 365 Defender, Microsoft Defender Vulnerability Management, and Microsoft Defender for Endpoint on the Microsoft 365 Defender portal, it ensures a holistic approach to safeguarding an organization's network. ++De |